Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Aad Sspr Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/aad-sspr-technical-profile.md | Title: Microsoft Entra ID SSPR technical profiles in custom policies. Description: Custom policy reference for Microsoft Entra ID SSPR technical profiles in Azure AD B2C. Last updated 11/08/2022 |
active-directory-b2c | Access Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/access-tokens.md | Title: Request an access token in Azure Active Directory B2C. Description: Learn how to request an access token from Azure Active Directory B2C. Last updated 03/09/2023 |
active-directory-b2c | Active Directory Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/active-directory-technical-profile.md | Title: Define a Microsoft Entra technical profile in a custom policy. Description: Define a Microsoft Entra technical profile in a custom policy in Azure Active Directory B2C. Last updated 11/06/2023 |
active-directory-b2c | Add Api Connector Token Enrichment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md | Title: Token enrichment - Azure Active Directory B2C. Description: Enrich tokens with claims from external identity data sources using APIs or outbound webhooks. Last updated 01/17/2023 |
active-directory-b2c | Add Api Connector | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md | Title: Add API connectors to sign up user flows. Description: Configure an API connector to be used in a sign-up user flow. Last updated 12/20/2022. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Add Identity Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md | Title: Add an identity provider - Azure Active Directory B2C. Description: Learn how to add an identity provider to your Active Directory B2C tenant. Last updated 02/08/2023 |
active-directory-b2c | Add Native Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-native-application.md | Title: Add a native client application - Azure Active Directory B2C. Description: Learn how to add a native client application to your Active Directory B2C tenant. Last updated 02/04/2019 |
active-directory-b2c | Add Password Change Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-change-policy.md | Title: Set up password change by using custom policies. Description: Learn how to set up a custom policy so users can change their password in Azure Active Directory B2C. Last updated 08/24/2021 |
active-directory-b2c | Add Password Reset Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md | Title: Set up a password reset flow. Description: Learn how to set up a password reset flow in Azure Active Directory B2C (Azure AD B2C). Last updated 10/25/2022. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Add Profile Editing Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-profile-editing-policy.md | Title: Set up a profile editing flow. Description: Learn how to set up a profile editing flow in Azure Active Directory B2C. Last updated 06/07/2021. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Add Ropc Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-ropc-policy.md | Title: Set up a resource owner password credentials flow. Description: Learn how to set up the resource owner password credentials (ROPC) flow in Azure Active Directory B2C. Last updated 12/16/2022. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Add Sign In Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-in-policy.md | Title: Set up a sign-in flow. Description: Learn how to set up a sign-in flow in Azure Active Directory B2C. Last updated 08/24/2021. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Add Sign Up And Sign In Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md | Title: Set up a sign-up and sign-in flow. Description: Learn how to set up a sign-up and sign-in flow in Azure Active Directory B2C. Last updated 02/09/2023 |
active-directory-b2c | Add Web Api Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-web-api-application.md | Title: Add a web API application - Azure Active Directory B2C. Description: Learn how to add a web API application to your Active Directory B2C tenant. |
active-directory-b2c | Age Gating | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/age-gating.md | Title: Enable age gating in Azure Active Directory B2C. Description: Learn how to identify minors using your application. Last updated 04/07/2022 |
active-directory-b2c | Analytics With Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/analytics-with-application-insights.md | Title: Track user behavior by using Application Insights. Description: Learn how to enable event logs in Application Insights from Azure AD B2C user journeys. Last updated 08/24/2021 |
active-directory-b2c | Api Connector Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/api-connector-samples.md | Title: Samples of APIs for modifying your Azure AD B2C user flows. Description: Code samples for modifying user flows with API connectors. |
active-directory-b2c | Api Connectors Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/api-connectors-overview.md | Title: About API connectors in Azure AD B2C. Description: Use Microsoft Entra API connectors to customize and extend your user flows and custom policies by using REST APIs or outbound webhooks to external identity data sources. |
active-directory-b2c | App Registrations Training Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/app-registrations-training-guide.md | Title: New App registrations experience in Azure AD B2C. Description: An introduction to the new App registration experience in Azure AD B2C. Last updated 05/25/2020 |
active-directory-b2c | Application Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md | Title: Application types supported by Azure AD B2C. Description: Learn about the types of applications you can use with Azure Active Directory B2C. Last updated 10/11/2022 |
active-directory-b2c | Authorization Code Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md | Title: Authorization code flow - Azure Active Directory B2C. Description: Learn how to build web apps by using Azure AD B2C and the OpenID Connect authentication protocol. Last updated 11/06/2023. Customer intent: As a developer who is building a web app, I want to learn more about the OAuth 2.0 authorization code flow in Azure AD B2C, so that I can add sign-up, sign-in, and other identity management tasks to my app. Updated article content: |

# OAuth 2.0 authorization code flow in Azure Active Directory B2C

You can use the OAuth 2.0 authorization code grant in apps installed on a device to gain access to protected resources, such as web APIs. By using the Azure Active Directory B2C (Azure AD B2C) implementation of OAuth 2.0, you can add sign-up, sign-in, and other identity management tasks to your single-page, mobile, and desktop apps. In this article, we describe how to send and receive HTTP messages without using any open-source libraries. This article is language-independent. When possible, we recommend you use the supported Microsoft Authentication Libraries (MSAL). Take a look at the [sample apps that use MSAL](integrate-with-app-code-samples.md).

The OAuth 2.0 authorization code flow is described in [section 4.1 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). You can use it for authentication and authorization in most [application types](application-types.md), including web applications, single-page applications, and natively installed applications. You can use the OAuth 2.0 authorization code flow to securely acquire access tokens and refresh tokens for your applications, which can be used to access resources that are secured by an [authorization server](protocols-overview.md). The refresh token allows the client to acquire new access (and refresh) tokens once the access token expires, typically after one hour.

The authorization code flow for single-page applications requires some additional configuration. The `spa` redirect type is backwards compatible with the implicit flow. Apps currently using the implicit flow to get tokens can move to the `spa` redirect URI type without issues and continue using the implicit flow.

## 1. Get an authorization code

The authorization code flow begins with the client directing the user to the `/authorize` endpoint. This is the interactive part of the flow, where the user takes action. In this request, the client indicates in the `scope` parameter the permissions that it needs to acquire from the user. The following example (with line breaks for readability) shows how to acquire an authorization code. If you're testing this GET HTTP request, use your browser.

```http
client_id=90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6
```

| Parameter | Required | Description |
|---|---|---|
| client_id | Required | The application ID assigned to your app in the [Azure portal](https://portal.azure.com). |
| response_type | Required | The response type, which must include `code` for the authorization code flow. You can receive an ID token if you include it in the response type, such as `code+id_token`, and in this case, the scope needs to include `openid`. |
| redirect_uri | Required | The redirect URI of your app, where authentication responses are sent and received by your app. It must exactly match one of the redirect URIs that you registered in the portal, except that it must be URL-encoded. |
| scope | Required | A space-separated list of scopes. The `openid` scope indicates a permission to sign in the user and get data about the user in the form of ID tokens. The `offline_access` scope is optional for web applications. It indicates that your application needs a *refresh token* for extended access to resources. The client ID as a scope indicates that the tokens issued are intended for use by the Azure AD B2C registered client. The `https://{tenant-name}/{app-id-uri}/{scope}` scope indicates a permission to protected resources, such as a web API. For more information, see [Request an access token](access-tokens.md#scopes). |
| response_mode | Recommended | The method that you use to send the resulting authorization code back to your app. It can be `query`, `form_post`, or `fragment`. |
| state | Recommended | A value included in the request that can be a string of any content that you want to use. Usually, a randomly generated unique value is used, to prevent cross-site request forgery attacks. The state can also be used to encode information about the user's state in the app before the authentication request occurred, such as the page the user was on or the user flow that was being executed. |
| prompt | Optional | The type of user interaction that's required. Currently, the only valid value is `login`, which forces the user to enter their credentials on that request. Single sign-on doesn't take effect. |

To redeem the authorization code, you send a POST request to the `/token` endpoint with `grant_type=authorization_code` and the following parameters.

| Parameter | Required | Description |
|---|---|---|
| client_id | Required | The application ID assigned to your app in the [Azure portal](https://portal.azure.com). |
| client_secret | Yes, in web apps | The application secret that was generated in the [Azure portal](https://portal.azure.com/). Client secrets are used in this flow for web app scenarios, where the client can securely store a client secret. For native app (public client) scenarios, client secrets can't be securely stored, and therefore aren't used in this call. If you use a client secret, change it on a periodic basis. |
| grant_type | Required | The type of grant. For the authorization code flow, the grant type must be `authorization_code`. |
| scope | Recommended | A space-separated list of scopes. A single scope value indicates to Azure AD B2C both of the permissions that are being requested. Using the client ID as the scope indicates that your app needs an access token that can be used against your own service or web API, represented by the same client ID. The `offline_access` scope indicates that your app needs a refresh token for long-lived access to resources. You can also use the `openid` scope to request an ID token from Azure AD B2C. |
| code | Required | The authorization code that you acquired from the `/authorize` endpoint. |
| redirect_uri | Required | The redirect URI of the application where you received the authorization code. |
| code_verifier | Recommended | The same `code_verifier` used to obtain the authorization code. Required if PKCE was used in the authorization code grant request. For more information, see the [PKCE RFC](https://tools.ietf.org/html/rfc7636). |
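The two requests in this flow can be sketched in code: first the client builds the `/authorize` URL that the user's browser is sent to, then it builds the form body that's POSTed to the `/token` endpoint to redeem the returned code. This is a minimal sketch only; the tenant name `contoso`, policy name `b2c_1_signupsignin1`, and redirect URI are hypothetical placeholder values, not values from this changelog.

```python
from urllib.parse import urlencode

# Hypothetical tenant, policy, and app values for illustration only.
TENANT = "contoso"
POLICY = "b2c_1_signupsignin1"
CLIENT_ID = "90c0fe63-bcf2-44d5-8fb7-b8bbc0b29dc6"
REDIRECT_URI = "https://jwt.ms"


def authorize_url(state: str) -> str:
    """Step 1: build the /authorize URL the client directs the user to."""
    base = (f"https://{TENANT}.b2clogin.com/{TENANT}.onmicrosoft.com/"
            f"{POLICY}/oauth2/v2.0/authorize")
    params = {
        "client_id": CLIENT_ID,
        "response_type": "code",           # must include `code` for this flow
        "redirect_uri": REDIRECT_URI,      # must match a registered redirect URI
        "scope": "openid offline_access",  # `offline_access` requests a refresh token
        "response_mode": "query",
        "state": state,                    # echoed back; mitigates CSRF
    }
    return f"{base}?{urlencode(params)}"


def token_request_body(code: str, code_verifier: str) -> str:
    """Step 2: build the form body POSTed to the /token endpoint."""
    return urlencode({
        "grant_type": "authorization_code",
        "client_id": CLIENT_ID,
        "code": code,                      # the code returned to the redirect URI
        "redirect_uri": REDIRECT_URI,
        "code_verifier": code_verifier,    # required when PKCE was used
    })
```

In a real app you would send the token request with an HTTP client (or, preferably, let MSAL handle both steps); the sketch only shows how the documented parameters fit together.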
active-directory-b2c | Azure Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md | Title: Monitor Azure AD B2C with Azure Monitor. Description: Learn how to log Azure AD B2C events with Azure Monitor by using delegated resource management. |
active-directory-b2c | B2c Global Identity Funnel Based Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-funnel-based-design.md | Title: Build a global identity solution with funnel-based approach. Description: Learn the funnel-based design consideration for Azure AD B2C to provide customer identity management for global customers. Last updated 12/15/2022 |
active-directory-b2c | B2c Global Identity Proof Of Concept Funnel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-funnel.md | Title: Azure Active Directory B2C global identity framework proof of concept for funnel-based configuration. Description: Learn how to create a proof of concept for funnel-based approach for Azure AD B2C to provide customer identity and access management for global customers. Last updated 12/15/2022 |
active-directory-b2c | B2c Global Identity Proof Of Concept Regional | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-regional.md | Title: Azure Active Directory B2C global identity framework proof of concept for region-based configuration. Description: Learn how to create a proof of concept for region-based approach for Azure AD B2C to provide customer identity and access management for global customers. Last updated 12/15/2022 |
active-directory-b2c | B2c Global Identity Region Based Design | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-region-based-design.md | Title: Build a global identity solution with region-based approach. Description: Learn the region-based design consideration for Azure AD B2C to provide customer identity management for global customers. Last updated 12/15/2022 |
active-directory-b2c | B2c Global Identity Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-solutions.md | Title: Azure Active Directory B2C global identity framework. Description: Learn how to configure Azure AD B2C to provide customer identity and access management for global customers. Last updated 12/15/2022 |
active-directory-b2c | B2clogin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2clogin.md | Title: Migrate applications and APIs to b2clogin.com. Description: Learn about using b2clogin.com in your redirect URLs for Azure Active Directory B2C. Last updated 11/21/2023 |
active-directory-b2c | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md | Title: Best practices for Azure AD B2C. Description: Recommendations and best practices to consider when working with Azure Active Directory B2C (Azure AD B2C). Last updated 07/13/2023 |
active-directory-b2c | Billing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md | Title: Billing model for Azure Active Directory B2C. Description: Learn about Azure AD B2C's monthly active users (MAU) billing model, how to link an Azure AD B2C tenant to an Azure subscription, and how to select the appropriate premium tier pricing. Last updated 06/06/2023 |
active-directory-b2c | Boolean Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/boolean-transformations.md | Title: Boolean claims transformation examples for custom policies. Description: Boolean claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C. Last updated 02/16/2022 |
active-directory-b2c | Buildingblocks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/buildingblocks.md | Title: BuildingBlocks. Description: Specify the BuildingBlocks element of a custom policy in Azure Active Directory B2C. Last updated 12/10/2019 |
active-directory-b2c | Claim Resolver Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claim-resolver-overview.md | Title: Claim resolvers in custom policies. Description: Learn how to use claims resolvers in a custom policy in Azure Active Directory B2C. Last updated 02/16/2022 |
active-directory-b2c | Claims Transformation Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claims-transformation-technical-profile.md | Title: Define a claims transformation technical profile. Description: Define a claims transformation technical profile in a custom policy in Azure Active Directory B2C. Last updated 01/17/2022 |
active-directory-b2c | Claimsproviders | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimsproviders.md | Title: ClaimsProviders - Azure Active Directory B2C. Description: Specify the ClaimsProvider element of a custom policy in Azure Active Directory B2C. Last updated 03/08/2021 |
active-directory-b2c | Claimsschema | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimsschema.md | Title: "ClaimsSchema: Azure Active Directory B2C". Description: Specify the ClaimsSchema element of a custom policy in Azure Active Directory B2C. Last updated 03/06/2022 |
active-directory-b2c | Claimstransformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimstransformations.md | Title: ClaimsTransformations - Azure Active Directory B2C. Description: Definition of the ClaimsTransformations element in the Identity Experience Framework Schema of Azure Active Directory B2C. Last updated 09/10/2018 |
active-directory-b2c | Client Credentials Grant Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/client-credentials-grant-flow.md | Title: Set up OAuth 2.0 client credentials flow. Description: Learn how to set up the OAuth 2.0 client credentials flow in Azure Active Directory B2C. Last updated 11/21/2023. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Conditional Access Identity Protection Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-identity-protection-overview.md | Title: Identity Protection and Conditional Access in Azure AD B2C. Description: Learn how Identity Protection gives you visibility into risky sign-ins and risk detections, and how Conditional Access lets you enforce organizational policies based on risk events in your Azure AD B2C tenants. |
active-directory-b2c | Conditional Access Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-technical-profile.md | Title: Conditional Access technical profiles in custom policies. Description: Custom policy reference for Conditional Access technical profiles in Azure AD B2C. Last updated 06/18/2021 |
active-directory-b2c | Conditional Access User Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md | Title: Add Conditional Access to a user flow in Azure AD B2C. Description: Learn how to add Conditional Access to your Azure AD B2C user flows. Configure multifactor authentication (MFA) settings and Conditional Access policies in your user flows to enforce policies and remediate risky sign-ins. Last updated 04/10/2022 |
active-directory-b2c | Configure A Sample Node Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md | Title: Configure authentication in a sample Node.js web application by using Azure Active Directory B2C (Azure AD B2C). Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a Node.js web application. Last updated 07/07/2022 |
active-directory-b2c | Configure Authentication In Azure Static App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md | Title: Configure authentication in an Azure Static Web App by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Static Web App. Last updated 08/22/2022 |
active-directory-b2c | Configure Authentication In Azure Web App File Based | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app-file-based.md | Title: Configure authentication in an Azure Web App configuration file by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Web App using a configuration file. Last updated 06/28/2022 |
active-directory-b2c | Configure Authentication In Azure Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app.md | Title: Configure authentication in an Azure Web App by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Azure Web App. Last updated 06/28/2022 |
active-directory-b2c | Configure Authentication In Sample Node Web App With Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md | Title: Configure authentication in a sample Node.js web API by using Azure Active Directory B2C. Description: Follow the steps in this article to learn how to configure authentication in a sample Node.js web API by using Azure AD B2C. Last updated 03/24/2023 |
active-directory-b2c | Configure Authentication Sample Android App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-android-app.md | Title: Configure authentication in a sample Android application by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an Android application. Last updated 07/05/2021 |
active-directory-b2c | Configure Authentication Sample Angular Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-angular-spa-app.md | Title: Configure authentication in a sample Angular SPA by using Azure Active Directory B2C. Description: Learn how to use Azure Active Directory B2C to sign in and sign up users in an Angular SPA. Last updated 03/09/2023 |
active-directory-b2c | Configure Authentication Sample Ios App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-ios-app.md | Title: Configure authentication in a sample iOS Swift application by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an iOS Swift application. Last updated 01/06/2023 |
active-directory-b2c | Configure Authentication Sample Python Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md | Title: Configure authentication in a sample Python web application by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a Python web application. Last updated 02/28/2023 |
active-directory-b2c | Configure Authentication Sample React Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md | Title: Configure authentication in a sample React SPA by using Azure Active Directory B2C. Description: Learn how to use Azure Active Directory B2C to sign in and sign up users in a React SPA. Last updated 04/24/2023 |
active-directory-b2c | Configure Authentication Sample Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-spa-app.md | Title: Configure authentication in a sample single-page application by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a single-page application. Last updated 04/30/2022 |
active-directory-b2c | Configure Authentication Sample Web App With Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md | Title: Configure authentication in a sample web application that calls a web API by using Azure Active Directory B2C. Description: This article discusses using Azure Active Directory B2C to sign in and sign up users in an ASP.NET web application that calls a web API. Last updated 07/05/2021 |
active-directory-b2c | Configure Authentication Sample Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md | Title: Configure authentication in a sample web application by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in an ASP.NET web application. Last updated 03/11/2022 |
active-directory-b2c | Configure Authentication Sample Wpf Desktop App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-wpf-desktop-app.md | Title: Configure authentication in a sample WPF desktop application by using Azure Active Directory B2C. Description: This article discusses how to use Azure Active Directory B2C to sign in and sign up users in a WPF desktop application. Last updated 08/04/2021 |
active-directory-b2c | Configure Security Analytics Sentinel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-security-analytics-sentinel.md | Title: Configure security analytics for Azure Active Directory B2C data with Microsoft Sentinel. Description: Use Microsoft Sentinel to perform security analytics for Azure Active Directory B2C data. Last updated 03/06/2023 |
active-directory-b2c | Configure Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-tokens.md | Title: Configure tokens - Azure Active Directory B2C. Description: Learn how to configure the token lifetime and compatibility settings in Azure Active Directory B2C. Last updated 11/20/2023. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Configure User Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-user-input.md | Title: Add user attributes and customize user input. Description: Learn how to customize user input and add user attributes to the sign-up or sign-in journey in Azure Active Directory B2C. Last updated 12/28/2022. zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Contentdefinitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/contentdefinitions.md | Title: ContentDefinitions. Description: Specify the ContentDefinitions element of a custom policy in Azure Active Directory B2C. Last updated 09/12/2021 |
active-directory-b2c | Cookie Definitions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/cookie-definitions.md | Title: Cookie definitions. Description: Provides definitions for the cookies used in Azure Active Directory B2C. Last updated 03/20/2022 |
active-directory-b2c | Custom Domain | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md | Title: Enable Azure AD B2C custom domains description: Learn how to enable custom domains in your redirect URLs for Azure Active Directory B2C.- - Previously updated : 11/3/2022 Last updated : 11/13/2023 zone_pivot_groups: b2c-policy-type+ +#Customer intent: As a developer, I want to use my own domain name for the sign-in and sign-up experience, so that my users have a seamless experience. # Enable custom domains for Azure Active Directory B2C [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] -This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). By using a verified custom domain, you've benefits such as: +This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a verified custom domain has a number of benefits such as: - It provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign in process rather than redirecting to the Azure AD B2C default domain *<tenant-name>.b2clogin.com*.-- By staying in the same domain for your application during sign-in, you mitigate the impact of [third-party cookie blocking](/azure/active-directory/develop/reference-third-party-cookies-spas). -+- By staying in the same domain for your application during sign-in, you mitigate the impact of [third-party cookie blocking](/entra/identity-platform/reference-third-party-cookies-spas). - You increase the number of objects (user accounts and applications) you can create in your Azure AD B2C tenant from the default 1.25 million to 5.25 million. 
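The "seamless user experience" benefit above comes down to one thing: only the host portion of the authorization request changes when a custom domain is used, while the tenant and policy path stay the same. A minimal sketch (the tenant name, policy name, and client ID below are hypothetical, and the `p`-parameter URL form is one of the shapes B2C accepts):

```python
from urllib.parse import urlencode

def authorize_url(host: str, tenant: str, policy: str, client_id: str) -> str:
    """Build a B2C authorization request URL for a given host.

    `host` is either the default `<tenant-name>.b2clogin.com` domain or a
    verified custom domain; everything after the host is unchanged.
    """
    query = urlencode({
        "p": policy,                       # user flow / custom policy name
        "client_id": client_id,
        "response_type": "code",
        "redirect_uri": "https://jwt.ms",  # test reply URL used in this article
        "scope": "openid",
    })
    return f"https://{host}/{tenant}/oauth2/v2.0/authorize?{query}"

# Hypothetical tenant: only the host differs between the two URLs.
default_domain = authorize_url("contoso.b2clogin.com", "contoso.onmicrosoft.com",
                               "B2C_1_susi", "11111111-1111-1111-1111-111111111111")
custom_domain = authorize_url("login.contoso.com", "contoso.onmicrosoft.com",
                              "B2C_1_susi", "11111111-1111-1111-1111-111111111111")
```

From the user's point of view the browser address bar shows `login.contoso.com` throughout, which is also what keeps cookies first-party during sign-in.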
-![Screenshot demonstrates an Azure AD B2C custom domain user experience.](./media/custom-domain/custom-domain-user-experience.png) + :::image type="content" source="./media/custom-domain/custom-domain-user-experience.png" alt-text="Screenshot of a browser window with the domain name highlighted in the address bar to show the custom domain experience."::: ## Custom domain overview The following diagram illustrates Azure Front Door integration: 1. Azure Front Door invokes Azure AD B2C content using the Azure AD B2C `<tenant-name>.b2clogin.com` default domain. The request to the Azure AD B2C endpoint includes the original custom domain name. 1. Azure AD B2C responds to the request by displaying the relevant content and the original custom domain. -![Diagram shows the custom domain networking flow.](./media/custom-domain/custom-domain-network-flow.png) > [!IMPORTANT] > The connection from the browser to Azure Front Door should always use IPv4 instead of IPv6. When using custom domains, consider the following: -- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Microsoft Entra service limits and restrictions](../active-directory/enterprise-users/directory-service-limits-restrictions.md) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-front-door-classic-limits) for Azure Front Door.+- You can set up multiple custom domains. For the maximum number of supported custom domains, see [Microsoft Entra service limits and restrictions](/entra/identity/users/directory-service-limits-restrictions) for Azure AD B2C and [Azure subscription and service limits, quotas, and constraints](/azure/azure-resource-manager/management/azure-subscription-service-limits#azure-front-door-classic-limits) for Azure Front Door. - Azure Front Door is a separate Azure service, so extra charges will be incurred. 
For more information, see [Front Door pricing](https://azure.microsoft.com/pricing/details/frontdoor). - After you configure custom domains, users will still be able to access the Azure AD B2C default domain name *<tenant-name>.b2clogin.com* (unless you're using a custom policy and you [block access](#optional-block-access-to-the-default-domain-name)). - If you have multiple applications, migrate them all to the custom domain because the browser stores the Azure AD B2C session under the domain name currently being used. When using custom domains, consider the following: ## Step 1: Add a custom domain name to your Azure AD B2C tenant -Every new Azure AD B2C tenant comes with an initial domain name, <domainname>.onmicrosoft.com. You can't change or delete the initial domain name, but you can add a custom domain. +When you create an Azure AD B2C tenant, it comes with an initial domain name, <domainname>.onmicrosoft.com. You can't change or delete the initial domain name, but you can add your own custom domain. Follow these steps to add a custom domain to your Azure AD B2C tenant: -1. [Add your custom domain name to Microsoft Entra ID](../active-directory/fundamentals/add-custom-domain.md#add-your-custom-domain-name). +1. [Add your custom domain name to Microsoft Entra ID](/entra/fundamentals/add-custom-domain#add-your-custom-domain-name). > [!IMPORTANT] > For these steps, be sure to sign in to your **Azure AD B2C** tenant and select the **Microsoft Entra ID** service. -1. [Add your DNS information to the domain registrar](../active-directory/fundamentals/add-custom-domain.md#add-your-dns-information-to-the-domain-registrar). After you add your custom domain name to Microsoft Entra ID, create a DNS `TXT`, or `MX` record for your domain. Creating this DNS record for your domain verifies ownership of your domain name. +1. [Add your DNS information to the domain registrar](/entra/fundamentals/add-custom-domain#add-your-dns-information-to-the-domain-registrar). 
After you add your custom domain name to Microsoft Entra ID, create a DNS `TXT`, or `MX` record for your domain. Creating this DNS record for your domain verifies ownership of your domain name. The following examples demonstrate TXT records for *login.contoso.com* and *account.contoso.com*: Follow these steps to add a custom domain to your Azure AD B2C tenant: > [!TIP] > You can manage your custom domain with any publicly available DNS service, such as GoDaddy. If you don't have a DNS server, you can use [Azure DNS zone](../dns/dns-getstarted-portal.md), or [App Service domains](../app-service/manage-custom-dns-buy-domain.md). -1. [Verify your custom domain name](../active-directory/fundamentals/add-custom-domain.md#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*. +1. [Verify your custom domain name](/entra/fundamentals/add-custom-domain#verify-your-custom-domain-name). Verify each subdomain, or hostname you plan to use. For example, to be able to sign in with *login.contoso.com* and *account.contoso.com*, you need to verify both subdomains and not just the top-level domain *contoso.com*. > [!IMPORTANT] > After the domain is verified, **delete** the DNS TXT record you created. Follow these steps to create an Azure Front Door: |Subscription|Select your Azure subscription.| |Resource group| Select an existing resource group, or create a new one.| |Name| Give your profile a name such as `b2cazurefrontdoor`.|- |Tier| Select either Standard or Premium tier. Standard tier is content delivery optimized. Premium tier builds on Standard tier and is focused on security. See [Tier Comparison](../frontdoor/standard-premium/tier-comparison.md).| + |Tier| Select either Standard or Premium tier. Standard tier is content delivery optimized. 
Premium tier builds on Standard tier and is focused on security. See [Tier Comparison](../frontdoor/front-door-cdn-comparison.md).| |Endpoint name| Enter a globally unique name for your endpoint, such as `b2cazurefrontdoor`. The **Endpoint hostname** is generated automatically. | |Origin type| Select `Custom`.|- |Origin host name| Enter `<tenant-name>.b2clogin.com`. Replace `<tenant-name>` with the [name of your Azure AD B2C tenant]( tenant-management-read-tenant-name.md#get-your-tenant-name) such as `contoso.b2clogin.com`.| + |Origin host name| Enter `<tenant-name>.b2clogin.com`. Replace `<tenant-name>` with the [name of your Azure AD B2C tenant](tenant-management-read-tenant-name.md#get-your-tenant-name) such as `contoso.b2clogin.com`.| Leave the **Caching** and **WAF policy** empty. -1. Once the Azure Front Door resource is created, select **Overview**, and copy the **Endpoint hostname**. It looks something like `b2cazurefrontdoor-ab123e.z01.azurefd.net`. +1. Once the Azure Front Door resource is created, select **Overview**, and copy the **Endpoint hostname**. You will need this later on. It will look something like `b2cazurefrontdoor-ab123e.z01.azurefd.net`. 1. Make sure the **Host name** and **Origin host header** of your origin have the same value: 1. Under **Settings**, select **Origin groups**. Follow these steps to create an Azure Front Door: 1. On the right pane, select your **Origin host name** such as `contoso.b2clogin.com`. 1. On the **Update origin** pane, update the **Host name** and **Origin host header** to have the same value. 
- :::image type="content" source="./media/custom-domain/azure-front-door-custom-domain-origins.png" alt-text="Screenshot of how to update custom domain origins."::: -+ :::image type="content" source="./media/custom-domain/azure-front-door-custom-domain-origins.png" alt-text="Screenshot of the Origin groups menu from the Azure portal with Host name and Origin host header text boxes highlighted."::: ## Step 3: Set up your custom domain on Azure Front Door The **default-route** routes the traffic from the client to Azure Front Door. Th The following screenshot shows how to select the default-route. - ![Screenshot of selecting the default route.](./media/custom-domain/enable-the-route.png) + :::image type="content" source="./media/custom-domain/enable-the-route.png" alt-text="Screenshot of the Front Door manager page from the Azure portal with the default route highlighted."::: 1. Select the **Enable route** checkbox. 1. Select **Update** to save the changes. ## Step 4: Configure CORS -If you [customize the Azure AD B2C user interface](customize-ui-with-html.md) with an HTML template, you need to [Configure CORS](customize-ui-with-html.md?pivots=b2c-user-flow.md#3-configure-cors) with your custom domain. +If you are using a custom HTML template to [customize the Azure AD B2C user interface](customize-ui-with-html.md), you need to [Configure CORS](customize-ui-with-html.md?pivots=b2c-user-flow.md#3-configure-cors) with your custom domain. Configure Azure Blob storage for Cross-Origin Resource Sharing with the following steps: Configure Azure Blob storage for Cross-Origin Resource Sharing with the followin 1. For **Application**, select the web application named *webapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`. 1. Copy the URL under **Run user flow endpoint**. 
- ![Screenshot of how to copy the authorization request U R I.](./media/custom-domain/user-flow-run-now.png) + :::image type="content" source="./media/custom-domain/user-flow-run-now.png" alt-text="Screenshot of the Run user flow page from the Azure portal with the copy button for the Run userflow endpoint text box highlighted."::: -1. To simulate a sign in with your custom domain, open a web browser and use the URL you copied. Replace the Azure AD B2C domain (_<tenant-name>_.b2clogin.com) with your custom domain. +1. To simulate a sign in with your custom domain, open a web browser and use the URL you just copied. Replace the Azure AD B2C domain (_<tenant-name>_.b2clogin.com) with your custom domain. For example, instead of: The following example shows a valid OAuth redirect URI: https://login.contoso.com/contoso.onmicrosoft.com/oauth2/authresp ``` -If you choose to use the [tenant ID](#optional-use-tenant-id), a valid OAuth redirect URI would look like the following sample: --```http -https://login.contoso.com/11111111-1111-1111-1111-111111111111/oauth2/authresp -``` - The [SAML identity providers](saml-identity-provider-technical-profile.md) metadata would look like the following sample: ```http The custom domain integration applies to authentication endpoints that use Azure Replace: - **custom-domain** with your custom domain - **tenant-name** with your tenant name or tenant ID-- **policy-name** with your policy name. [Learn more about Azure AD B2C policies](technical-overview.md#identity-experiences-user-flows-or-custom-policies). -+- **policy-name** with your policy name. The [SAML service provider](./saml-service-provider.md) metadata may look like the following sample: https://custom-domain-name/tenant-name/policy-name/Samlp/metadata You can replace your B2C tenant name in the URL with your tenant ID GUID so as to remove all references to "b2c" in the URL. You can find your tenant ID GUID in the B2C Overview page in Azure portal. 
For example, change `https://account.contosobank.co.uk/contosobank.onmicrosoft.com/` -to -`https://account.contosobank.co.uk/<tenant ID GUID>/` +to `https://account.contosobank.co.uk/<tenant ID GUID>/` ++If you choose to use tenant ID instead of tenant name, be sure to update the identity provider **OAuth redirect URIs** accordingly. When using your tenant ID instead of tenant name, a valid OAuth redirect URI would look like the following sample: -If you choose to use tenant ID instead of tenant name, be sure to update the identity provider **OAuth redirect URIs** accordingly. For more information, see [Configure your identity provider](#configure-your-identity-provider). +```http +https://login.contoso.com/11111111-1111-1111-1111-111111111111/oauth2/authresp +``` +For more information, see [Configure your identity provider](#configure-your-identity-provider). ### Token issuance The token issuer name (iss) claim changes based on the custom domain being used. ```http https://<domain-name>/11111111-1111-1111-1111-111111111111/v2.0/ ```- ::: zone pivot="b2c-custom-policy" ## (Optional) Block access to the default domain name -After you add the custom domain and configure your application, users will still be able to access the <tenant-name>.b2clogin.com domain. To prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. The host name is available through `{Context:HostName}` [claim resolvers](claim-resolver-overview.md). Then you can present a custom error message. +After you add the custom domain and configure your application, users will still be able to access the <tenant-name>.b2clogin.com domain. If you want to prevent access, you can configure the policy to check the authorization request "host name" against an allowed list of domains. The host name is the domain name that appears in the URL. 
The host name is available through `{Context:HostName}` [claim resolvers](claim-resolver-overview.md). Then you can present a custom error message. 1. Get the example of a conditional access policy that checks the host name from [GitHub](https://github.com/azure-ad-b2c/samples/tree/master/policies/check-host-name). 1. In each file, replace the string `yourtenant` with the name of your Azure AD B2C tenant. For example, if the name of your B2C tenant is *contosob2c*, all instances of `yourtenant.onmicrosoft.com` become `contosob2c.onmicrosoft.com`. When using custom domains, consider the following points: - **Possible causes** - This issue could be related to the Azure Front Door route configuration. - **Resolution**: Check the status of the **default-route**. If it's disabled, [Enable the route](#33-enable-the-route). The following screenshot shows how the default-route should look like: - ![Screenshot of the status of the default-route.](./media/custom-domain/azure-front-door-route-status.png) + :::image type="content" source="./media/custom-domain/azure-front-door-route-status.png" alt-text="Screenshot of the Front Door manager page from the Azure portal with the default route, Status and Provisioning state items highlighted."::: ### Azure AD B2C returns the resource you're looking for has been removed, had its name changed, or is temporarily unavailable. When using custom domains, consider the following points: - **Possible causes** - This issue could be related to the Microsoft Entra custom domain verification. - **Resolution**: Make sure the custom domain is [registered and **successfully verified**](#step-1-add-a-custom-domain-name-to-your-azure-ad-b2c-tenant) in your Azure AD B2C tenant. -### Identify provider returns an error +### Identity provider returns an error - **Symptom** - After you configure a custom domain, you're able to sign in with local accounts. 
But when you sign in with credentials from external [social or enterprise identity providers](add-identity-provider.md), the identity provider presents an error message. - **Possible causes** - When Azure AD B2C takes the user to sign in with a federated identity provider, it specifies the redirect URI. The redirect URI is the endpoint to where the identity provider returns the token. The redirect URI is the same domain your application uses with the authorization request. If the redirect URI isn't yet registered in the identity provider, it may not trust the new redirect URI, which results in an error message. Yes, Azure AD B2C supports BYO-WAF (Bring Your Own Web Application Firewall). Ho Yes, Azure Front Door can be in a different subscription. -## Next steps +## See also ++* Learn about [OAuth authorization requests](protocols-overview.md). +* Learn about [OpenID Connect authorization requests](openid-connect.md). +* Learn about [authorization code flow](authorization-code-flow.md). + -Learn about [OAuth authorization requests](protocols-overview.md). |
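The "block access to the default domain name" section of the custom-domain article above describes comparing the `{Context:HostName}` claim resolver against an allowed list of domains. The policy itself expresses that check in XML, so the following Python sketch, with hypothetical domain names, is only an illustration of the logic:

```python
from urllib.parse import urlsplit

# Hypothetical allowlist of verified custom domains; the default
# <tenant-name>.b2clogin.com host is deliberately absent.
ALLOWED_HOSTS = {"login.contoso.com", "account.contoso.com"}

def is_host_allowed(authorization_request_url: str) -> bool:
    """Approximate the policy's {Context:HostName} allowlist check."""
    host = urlsplit(authorization_request_url).hostname
    return host in ALLOWED_HOSTS

# A request on the custom domain passes; one on the default
# *.b2clogin.com domain would instead get a custom error message.
print(is_host_allowed("https://login.contoso.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize"))   # True
print(is_host_allowed("https://contoso.b2clogin.com/contoso.onmicrosoft.com/oauth2/v2.0/authorize"))  # False
```

Note that this check only gates the policy's behavior; actually migrating all applications to the custom domain remains necessary, since the browser stores the B2C session under whichever domain is used.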
active-directory-b2c | Custom Email Mailjet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-mailjet.md | Title: Custom email verification with Mailjet description: Learn how to integrate with Mailjet to customize the verification email sent to your customers when they sign up to use your Azure AD B2C-enabled applications.-+ -+ Last updated 10/06/2022 |
active-directory-b2c | Custom Email Sendgrid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md | Title: Custom email verification with SendGrid description: Learn how to integrate with SendGrid to customize the verification email sent to your customers when they sign up to use your Azure AD B2C-enabled applications.-+ -+ Last updated 11/20/2023 |
active-directory-b2c | Custom Policies Series Branch User Journey | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-branch-user-journey.md | Title: Create branching in user journey by using Azure AD B2C custom policy description: Learn how to enable or disable Technical Profiles based on claims values. Learn how to branch in user journeys by enabling and disabling Azure AD B2C custom policy technical profiles. -+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policies Series Call Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md | Title: Call a REST API by using Azure Active Directory B2C custom policy description: Learn how to make an HTTP call to external API by using Azure Active Directory B2C custom policy.-+ -+ -+ Last updated 11/20/2023 Next, learn: - About [RESTful technical profile](restful-technical-profile.md). -- How to [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md)+- How to [Create and read a user account by using Azure Active Directory B2C custom policy](custom-policies-series-store-user.md) |
active-directory-b2c | Custom Policies Series Collect User Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-collect-user-input.md | Title: Collect and manipulate user inputs by using Azure AD B2C custom policy description: Learn how to collect user inputs from a user and manipulate them by using Azure Active Directory B2C custom policy -+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policies Series Hello World | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-hello-world.md | Title: Write your first Azure AD B2C custom policy - Hello World! description: Learn how to write your first custom policy. A custom policy that returns a Hello World message. -+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policies Series Install Xml Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-install-xml-extensions.md | Title: Validate custom policy files by using TrustFrameworkPolicy schema description: Learn how to validate custom policy files by using TrustFrameworkPolicy schema and other XML extensions for Visual Studio Code. You also learn to navigate a custom policy file by using the Azure AD B2C extension. -+ -+ Last updated 11/20/2023 |
active-directory-b2c | Custom Policies Series Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-overview.md | Title: Create and run your own custom policies in Azure Active Directory B2C description: Learn how to create and run your own custom policies in Azure Active Directory B2C. Learn how to create Azure Active Directory B2C custom policies from scratch in a how-to guide series.-+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policies Series Sign Up Or Sign In Federation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in-federation.md | Title: Set up a sign-up and sign-in flow with a social account by using Azure Active Directory B2C custom policy description: Learn how to configure a sign-up and sign-in flow for a social account, Facebook, by using Azure Active Directory B2C custom policy. -+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policies Series Sign Up Or Sign In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in.md | Title: Set up a sign-up and sign-in flow for a local account by using Azure Active Directory B2C custom policy description: Learn how to configure a sign-up and sign-in flow for a local account, using email and password, by using Azure Active Directory B2C custom policy. -+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policies Series Store User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-store-user.md | Title: Create a user account by using Azure Active Directory B2C custom policy description: Learn how to create a user account in Azure AD B2C storage by using a custom policy. -+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policies Series Validate User Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md | Title: Validate user inputs by using Azure AD B2C custom policy description: Learn how to validate user inputs by using Azure Active Directory B2C custom policy. Learn how to validate user input by limiting user input options. Learn how to validate user input by using Predicates. Learn how to validate user input by using Regular Expressions. Learn how to validate user input by using validation technical profiles -+ -+ Last updated 11/06/2023 |
active-directory-b2c | Custom Policy Developer Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md | Title: Developer notes for user flows and custom policies description: Notes for developers on configuring and maintaining Azure AD B2C with user flows and custom policies.-+ -+ Last updated 10/05/2023-+ |
active-directory-b2c | Custom Policy Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-overview.md | Title: Azure Active Directory B2C custom policy overview description: A topic about Azure Active Directory B2C custom policies and the Identity Experience Framework.-+ -+ Last updated 11/20/2023 |
active-directory-b2c | Custom Policy Reference Sso | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-reference-sso.md | Title: Single sign-on session providers using custom policies description: Learn how to manage single sign-on sessions using custom policies in Azure AD B2C.-+ -+ Last updated 02/03/2022 |
active-directory-b2c | Customize Ui With Html | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui-with-html.md | Title: Customize the user interface with HTML templates description: Learn how to customize the user interface with HTML templates for your applications that use Azure Active Directory B2C.-+ -+ Last updated 11/06/2023-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Customize Ui | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui.md | Title: Customize the user interface description: Learn how to customize the user interface for your applications that use Azure Active Directory B2C.-+ -+ Last updated 12/16/2022-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md | Title: "Azure AD B2C: Region availability & data residency" description: Region availability, data residency, high availability, SLA, and information about Azure Active Directory B2C preview tenants.-+ -+ Last updated 06/24/2023 |
active-directory-b2c | Date Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/date-transformations.md | Title: Date claims transformation examples for custom policies description: Date claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C.-+ -+ Last updated 02/16/2022 |
active-directory-b2c | Deploy Custom Policies Devops | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/deploy-custom-policies-devops.md | Title: Deploy custom policies with Azure Pipelines description: Learn how to deploy Azure AD B2C custom policies in a CI/CD pipeline by using Azure Pipelines.-+ -+ Last updated 03/25/2022 |
active-directory-b2c | Deploy Custom Policies Github Action | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/deploy-custom-policies-github-action.md | Title: Deploy custom policies with GitHub Actions description: Learn how to deploy Azure AD B2C custom policies in a CI/CD pipeline by using GitHub Actions.-+ -+ Last updated 08/25/2021 |
active-directory-b2c | Direct Signin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/direct-signin.md | Title: Set up direct sign-in using Azure Active Directory B2C description: Learn how to prepopulate the sign-in name or redirect straight to a social identity provider.-+ -+ Last updated 06/21/2022-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Disable Email Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/disable-email-verification.md | Title: Disable email verification during customer sign-up description: Learn how to disable email verification during customer sign-up in Azure Active Directory B2C.-+ -+ Last updated 09/15/2021-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Display Control Time Based One Time Password | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-time-based-one-time-password.md | Title: TOTP display controls description: Learn how to use Azure AD B2C TOTP display controls in the user journeys provided by your custom policies.-+ -+ Last updated 07/20/2022 |
active-directory-b2c | Display Control Verification | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-verification.md | Title: Verify claims with display controls description: Learn how to use Azure AD B2C display controls to verify the claims in the user journeys provided by your custom policies.-+ -+ Last updated 12/10/2019 |
active-directory-b2c | Display Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-controls.md | Title: Display control reference description: Reference for Azure AD B2C display controls. Use display controls for customizing user journeys defined in your custom policies.-+ -+ Last updated 12/09/2021 |
active-directory-b2c | Embedded Login | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/embedded-login.md | Title: Embed Azure Active Directory B2C user interface into your app with a custom policy description: Learn how to embed Azure Active Directory B2C user interface into your app with a custom policy- - - Last updated 11/20/2023-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Enable Authentication Android App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-android-app-options.md | Title: Enable Android mobile application options by using Azure Active Directory B2C description: This article discusses several ways to enable Android mobile application options by using Azure Active Directory B2C. Last updated 10/06/2022 |
active-directory-b2c | Enable Authentication Android App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-android-app.md | Title: Enable authentication in an Android app - Azure AD B2C description: Enable authentication in an Android application using Azure Active Directory B2C building blocks. Learn how to use Azure AD B2C to sign in and sign up users in an Android application. Last updated 09/16/2021 |
active-directory-b2c | Enable Authentication Angular Spa App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app-options.md | Title: Configure authentication options in an Angular application by using Azure Active Directory B2C description: Enable the use of Angular application options in several ways. Last updated 03/23/2023 |
active-directory-b2c | Enable Authentication Angular Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app.md | Title: Enable authentication in an Angular application by using Azure Active Directory B2C building blocks description: Use the building blocks of Azure Active Directory B2C to sign in and sign up users in an Angular application. Last updated 03/23/2023 |
active-directory-b2c | Enable Authentication Azure Static App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-azure-static-app-options.md | Title: Enable Azure Static Web App authentication options using Azure Active Directory B2C description: This article discusses several ways to enable Azure Static Web App authentication options. Last updated 06/28/2022 |
active-directory-b2c | Enable Authentication In Node Web App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-options.md | Title: Enable Node.js web app authentication options using Azure Active Directory B2C description: This article discusses several ways to enable Node.js web app authentication options. Last updated 02/02/2022 |
active-directory-b2c | Enable Authentication In Node Web App With Api Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-with-api-options.md | Title: Enable Node.js web API authentication options using Azure Active Directory B2C description: This article discusses several ways to enable Node.js web API authentication options. Last updated 02/10/2022 |
active-directory-b2c | Enable Authentication In Node Web App With Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-with-api.md | Title: Enable authentication in your own Node.js web API by using Azure Active Directory B2C description: Follow this article to learn how to call your own web API protected by Azure AD B2C from your own Node.js web app. The web app acquires an access token and uses it to call a protected endpoint in the web API. The web app adds the access token as a bearer token in the Authorization header, and the web API needs to validate it. Last updated 02/09/2022 |
active-directory-b2c | Enable Authentication In Node Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app.md | Title: Enable authentication in your own Node web application using Azure Active Directory B2C description: This article explains how to enable authentication in your own Node.js web application using Azure AD B2C. Last updated 02/02/2022 |
active-directory-b2c | Enable Authentication Ios App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app-options.md | Title: Enable iOS Swift mobile application options by using Azure Active Directory B2C description: This article discusses several ways to enable iOS Swift mobile application options by using Azure Active Directory B2C. Last updated 07/29/2021 |
active-directory-b2c | Enable Authentication Ios App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app.md | Title: Enable authentication in an iOS Swift app by using Azure AD B2C description: This article discusses how to enable authentication in an iOS Swift application by using Azure Active Directory B2C building blocks. Learn how to use Azure AD B2C to sign in and sign up users in an iOS Swift application. Last updated 03/24/2023 |
active-directory-b2c | Enable Authentication Python Web App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app-options.md | Title: Enable Python web application options by using Azure Active Directory B2C description: This article shows you how to enable the use of Python web application options. Last updated 07/05/2021 |
active-directory-b2c | Enable Authentication Python Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md | Title: Enable authentication in your own Python web application using Azure Active Directory B2C description: This article explains how to enable authentication in your own Python web application using Azure AD B2C. Last updated 06/28/2022 |
active-directory-b2c | Enable Authentication React Spa App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app-options.md | Title: Enable React application options by using Azure Active Directory B2C description: Enable the use of React application options in several ways. Last updated 07/07/2022 |
active-directory-b2c | Enable Authentication React Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app.md | Title: Enable authentication in a React application by using Azure Active Directory B2C building blocks description: Use the building blocks of Azure Active Directory B2C to sign in and sign up users in a React application. Last updated 11/20/2023 |
active-directory-b2c | Enable Authentication Spa App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app-options.md | Title: Enable SPA application options by using Azure Active Directory B2C description: This article discusses several ways to enable the use of SPA applications. Last updated 07/05/2021 |
active-directory-b2c | Enable Authentication Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md | Title: Enable authentication in a SPA application by using Azure Active Directory B2C building blocks description: This article discusses the building blocks of Azure Active Directory B2C for signing in and signing up users in a SPA application. Last updated 03/24/2023 |
active-directory-b2c | Enable Authentication Web Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-api.md | Title: Enable authentication in a web API by using Azure Active Directory B2C description: This article discusses how to use Azure Active Directory B2C to protect a web API. Last updated 11/20/2023 |
active-directory-b2c | Enable Authentication Web App With Api Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-app-with-api-options.md | Title: Enable a web application that calls web API options by using Azure Active Directory B2C description: This article discusses how to enable the use of a web application that calls web API options in several ways. Last updated 07/05/2021 |
active-directory-b2c | Enable Authentication Web App With Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-app-with-api.md | Title: Enable authentication in web apps that call a web API by using Azure Active Directory B2C building blocks description: This article discusses the building blocks of an ASP.NET web app that calls a web API by using Azure Active Directory B2C. Last updated 11/10/2021 |
active-directory-b2c | Enable Authentication Web Application Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-application-options.md | Title: Enable web app authentication options using Azure Active Directory B2C description: This article discusses several ways to enable web app authentication options. Last updated 08/12/2021 |
active-directory-b2c | Enable Authentication Web Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-application.md | Title: Enable authentication in a web app by using Azure Active Directory B2C building blocks description: This article discusses how to use the building blocks of Azure Active Directory B2C to sign in and sign up users in an ASP.NET web app. Last updated 06/11/2021 |
active-directory-b2c | Enable Authentication Wpf Desktop App Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-wpf-desktop-app-options.md | Title: Enable WPF desktop application options using Azure Active Directory B2C description: Enable the use of WPF desktop application options in several ways. Last updated 08/04/2021 |
active-directory-b2c | Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md | Title: Error code reference description: A list of the error codes that can be returned by the Azure Active Directory B2C service. Last updated 11/08/2023 |
active-directory-b2c | Extensions App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/extensions-app.md | Title: Extensions app in Azure Active Directory B2C description: Restoring the b2c-extensions-app. Last updated 11/02/2021 |
active-directory-b2c | External Identities Videos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/external-identities-videos.md | Title: Microsoft Azure Active Directory B2C external identity video series description: Learn about external identities in Azure AD B2C in the Microsoft identity platform. Last updated 06/08/2023 |
active-directory-b2c | Find Help Open Support Ticket | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/find-help-open-support-ticket.md | Title: Find help and open a support ticket for Azure Active Directory B2C description: Learn how to find technical, pre-sales, billing, and subscription help and open a support ticket for Azure Active Directory B2C. Last updated 03/13/2023 |
active-directory-b2c | Force Password Reset | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md | Title: Configure a force password reset flow in Azure AD B2C description: Learn how to set up a forced password reset flow in Azure Active Directory B2C. Last updated 10/31/2023 |
active-directory-b2c | General Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/general-transformations.md | Title: General claims transformation examples for custom policies description: General claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C. Last updated 02/16/2022 |
active-directory-b2c | Https Cipher Tls Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/https-cipher-tls-requirements.md | Title: TLS and cipher suite requirements - Azure AD B2C description: Notes for developers on HTTPS cipher suite and TLS requirements when interacting with web API endpoints. Last updated 04/30/2021 |
active-directory-b2c | Id Token Hint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/id-token-hint.md | Title: Define an ID token hint technical profile in a custom policy description: Define an ID token hint technical profile in a custom policy in Azure Active Directory B2C. Last updated 09/16/2021 |
active-directory-b2c | Identity Protection Investigate Risk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-protection-investigate-risk.md | Title: Investigate risk with Azure Active Directory B2C Identity Protection description: Learn how to investigate risky users and detections in Azure AD B2C Identity Protection. Last updated 09/16/2021 |
active-directory-b2c | Identity Provider Adfs Saml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs-saml.md | Title: Add AD FS as a SAML identity provider by using custom policies description: Set up AD FS 2016 using the SAML protocol and custom policies in Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Adfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md | Title: Add AD FS as an OpenID Connect identity provider by using custom policies description: Set up AD FS 2016 using the OpenID Connect protocol and custom policies in Azure Active Directory B2C. Last updated 06/08/2022 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Amazon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-amazon.md | Title: Set up sign-up and sign-in with an Amazon account description: Provide sign-up and sign-in to customers with Amazon accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 |
active-directory-b2c | Identity Provider Apple Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-apple-id.md | Title: Set up sign-up and sign-in with an Apple ID description: Provide sign-up and sign-in to customers with Apple ID in your applications using Azure Active Directory B2C. Last updated 11/02/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Azure Ad B2c | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md | Title: Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant description: Provide sign-up and sign-in to customers with Azure AD B2C accounts from another tenant in your applications using Azure Active Directory B2C. Last updated 10/11/2023 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Azure Ad Multi Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md | Title: Set up sign-in for multitenant Microsoft Entra ID using custom policies description: Add a multitenant Microsoft Entra identity provider using custom policies in Azure Active Directory B2C. Last updated 11/16/2023 zone_pivot_groups: b2c-policy-type (revision: "multi-tenant" standardized to "multitenant" throughout; app registration, client secret, and policy key steps clarified; identity-platform links updated to /entra/identity-platform; "Next steps" renamed to "See also") |
active-directory-b2c | Identity Provider Azure Ad Single Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md | Title: Set up sign-in for a Microsoft Entra organization description: Set up sign-in for a specific Microsoft Entra organization in Azure Active Directory B2C. Last updated 02/07/2023 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Ebay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ebay.md | Title: Set up sign-up and sign-in with an eBay account description: Provide sign-up and sign-in to customers with eBay accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 |
active-directory-b2c | Identity Provider Facebook | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md | Title: Set up sign-up and sign-in with a Facebook account description: Provide sign-up and sign-in to customers with Facebook accounts in your applications using Azure Active Directory B2C. Last updated 03/10/2022 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Generic Openid Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md | Title: Set up sign-up and sign-in with OpenID Connect description: Set up sign-up and sign-in with any OpenID Connect identity provider (IdP) in Azure Active Directory B2C. Last updated 12/28/2022 |
active-directory-b2c | Identity Provider Generic Saml Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md | Title: Set sign-in with SAML identity provider options description: Configure sign-in SAML identity provider (IdP) options in Azure Active Directory B2C. Last updated 03/20/2023 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Generic Saml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md | Title: Set up sign-up and sign-in with SAML identity provider description: Set up sign-up and sign-in with any SAML identity provider (IdP) in Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Github | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md | Title: Set up sign-up and sign-in with a GitHub account description: Provide sign-up and sign-in to customers with GitHub accounts in your applications using Azure Active Directory B2C. Last updated 03/10/2022 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Google | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md | Title: Set up sign-up and sign-in with a Google account description: Provide sign-up and sign-in to customers with Google accounts in your applications using Azure Active Directory B2C. Last updated 03/10/2022 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Id Me | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-id-me.md | Title: Set up sign-up and sign-in with an ID.me account description: Provide sign-up and sign-in to customers with ID.me accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 |
active-directory-b2c | Identity Provider Linkedin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md | Title: Set up sign-up and sign-in with a LinkedIn account description: Provide sign-up and sign-in to customers with LinkedIn accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Local | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md | Title: Set up Azure AD B2C local account identity provider description: Define the identity types users can use to sign up or sign in (email, username, phone number) in your Azure Active Directory B2C tenant. Last updated 09/02/2022 |
active-directory-b2c | Identity Provider Microsoft Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md | Title: Set up sign-up and sign-in with a Microsoft Account description: Provide sign-up and sign-in to customers with Microsoft Accounts in your applications using Azure Active Directory B2C. Last updated 05/01/2023 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Mobile Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md | Title: Set up sign-up and sign-in with Mobile ID description: Provide sign-up and sign-in to customers with Mobile ID in your applications using Azure Active Directory B2C. Last updated 04/08/2022 |
active-directory-b2c | Identity Provider Ping One | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md | Title: Set up sign-up and sign-in with a PingOne account description: Provide sign-up and sign-in to customers with PingOne accounts in your applications using Azure Active Directory B2C. Last updated 12/2/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Qq | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-qq.md | Title: Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C description: Provide sign-up and sign-in to customers with QQ accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Salesforce Saml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce-saml.md | Title: Set up sign-in with a Salesforce SAML provider by using SAML protocol description: Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Salesforce | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce.md | Title: Set up sign-up and sign-in with a Salesforce account description: Provide sign-up and sign-in to customers with Salesforce accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Swissid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md | Title: Set up sign-up and sign-in with a SwissID account description: Provide sign-up and sign-in to customers with SwissID accounts in your applications using Azure Active Directory B2C. Last updated 12/07/2021 |
active-directory-b2c | Identity Provider Twitter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md | Title: Set up sign-up and sign-in with a Twitter account description: Provide sign-up and sign-in to customers with Twitter accounts in your applications using Azure Active Directory B2C. Last updated 07/20/2022 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Wechat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-wechat.md | Title: Set up sign-up and sign-in with a WeChat account description: Provide sign-up and sign-in to customers with WeChat accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Provider Weibo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-weibo.md | Title: Set up sign-up and sign-in with a Weibo account description: Provide sign-up and sign-in to customers with Weibo accounts in your applications using Azure Active Directory B2C. Last updated 09/16/2021 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Identity Verification Proofing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md | Title: Identity proofing and verification for Azure AD B2C description: Learn about our partners who integrate with Azure AD B2C to provide identity proofing and verification solutions. Last updated 01/18/2023 |
active-directory-b2c | Idp Pass Through User Flow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/idp-pass-through-user-flow.md | Title: Pass an identity provider access token to your app description: Learn how to pass an access token for OAuth 2.0 identity providers as a claim in a user flow in Azure Active Directory B2C. Last updated 03/16/2023 zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Implicit Flow Single Page Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md | Title: Single-page application sign-in using the OAuth 2.0 implicit flow in Azure Active Directory B2C description: Learn how to add single-page sign-in using the OAuth 2.0 implicit flow with Azure Active Directory B2C. Last updated 06/21/2022 |
active-directory-b2c | Integer Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integer-transformations.md | Title: Integer claims transformation examples for custom policies description: Integer claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C. Last updated 02/16/2022 |
active-directory-b2c | Integrate With App Code Samples | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md | Title: Azure Active Directory B2C integrates with app samples description: Code samples for integrating Azure AD B2C to mobile, desktop, web, and single-page applications.-+ |
active-directory-b2c | Javascript And Page Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md | Title: JavaScript and page layout versions description: Learn how to enable JavaScript and use page layout versions in Azure Active Directory B2C.-+ -+ Last updated 10/17/2023-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Json Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md | Title: JSON claims transformation examples for custom policies description: JSON claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C.-+ -+ Last updated 02/14/2023 |
active-directory-b2c | Jwt Issuer Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/jwt-issuer-technical-profile.md | Title: Define a technical profile for a JWT issuer in a custom policy description: Define a technical profile for a JSON web token (JWT) issuer in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 03/04/2021 |
active-directory-b2c | Language Customization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md | Title: Language customization in Azure Active Directory B2C description: Learn about customizing the language experience in your user flows in Azure Active Directory B2C.-+ -+ Last updated 12/28/2022-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Localization String Ids | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md | Title: Localization string IDs - Azure Active Directory B2C description: Specify the IDs for a content definition with an ID of api.signuporsignin in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 04/19/2022 |
active-directory-b2c | Localization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization.md | Title: Localization - Azure Active Directory B2C description: Specify the Localization element of a custom policy in Azure Active Directory B2C.-+ -+ Last updated 03/06/2022 |
active-directory-b2c | Manage Custom Policies Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md | |
active-directory-b2c | Manage User Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-user-access.md | Title: Manage user access in Azure Active Directory B2C description: Learn how to identify minors, collect date of birth and country/region data, and get acceptance of terms of use in your application by using Azure AD B2C.-+ -+ Last updated 01/13/2022 |
active-directory-b2c | Manage User Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-user-data.md | Title: Manage user data in Azure Active Directory B2C description: Learn how to delete or export user data in Azure AD B2C.-+ -+ Last updated 05/06/2018 |
active-directory-b2c | Manage Users Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-users-portal.md | Title: Create & delete Azure AD B2C consumer user accounts in the Azure portal description: Learn how to use the Azure portal to create and delete consumer users in your Azure AD B2C directory.-+ -+ Last updated 05/26/2023 |
active-directory-b2c | Microsoft Graph Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-get-started.md | Title: Register a Microsoft Graph application description: Prepare for managing Azure AD B2C resources with Microsoft Graph by registering an application that's granted the required Graph API permissions.-+ -+ Last updated 06/24/2022 |
active-directory-b2c | Microsoft Graph Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md | Title: Manage resources with Microsoft Graph description: How to manage resources in an Azure AD B2C tenant by calling the Microsoft Graph API and using an application identity to automate the process.- - Previously updated : 11/20/2023- Last updated : 11/13/2023 + +#Customer intent: As a developer, I want to manage resources in my Azure AD B2C tenant by calling the Microsoft Graph API and using an application identity to automate the process. + # Manage Azure AD B2C with Microsoft Graph Microsoft Graph allows you to manage resources in your Azure AD B2C directory. T > [!NOTE] > You can also programmatically create an Azure AD B2C directory itself, along with the corresponding Azure resource linked to an Azure subscription. This functionality isn't exposed through the Microsoft Graph API, but through the Azure REST API. For more information, see [B2C Tenants - Create](/rest/api/activedirectory/b2c-tenants/create). -Watch this video to learn about Azure AD B2C user migration using Microsoft Graph API. -->[!Video https://www.youtube.com/embed/9BRXBtkBzL4] - ## Prerequisites - To use MS Graph API, and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so. Follow the steps in the [Register a Microsoft Graph application](microsoft-graph-get-started.md) article to create an application registration that your management application can use. Watch this video to learn about Azure AD B2C user migration using Microsoft Grap - [Update a user](/graph/api/user-update) - [Delete a user](/graph/api/user-delete) +### User migration ++Watch this video to learn how user migration to Azure AD B2C can be managed using Microsoft Graph API. 
++ >[!Video https://www.youtube.com/embed/9BRXBtkBzL4] + ## User phone number management A phone number can be used by a user to sign in using [SMS or voice calls](sign-in-options.md#phone-sign-in), or for [multifactor authentication](multi-factor-authentication.md). For more information, see [Microsoft Entra authentication methods API](/graph/api/resources/phoneauthenticationmethod). A phone number that can be used by a user to sign-in using [SMS or voice calls]( Note that the [list](/graph/api/authentication-list-phonemethods) operation returns only enabled phone numbers. A phone number must be enabled for it to be returned by the list operation. -![Enable phone sign-in](./media/microsoft-graph-operations/enable-phone-sign-in.png) - > [!NOTE] > A correctly represented phone number is stored with a space between the country code and the phone number. The Azure AD B2C service doesn't currently add this space by default. + ## Self-service password reset email address An email address that can be used by a [username sign-in account](sign-in-options.md#username-sign-in) to reset the password. For more information, see [Microsoft Entra authentication methods API](/graph/api/resources/emailauthenticationmethod). Configure pre-built policies for sign-up, sign-in, combined sign-up and sign-in, ## User flow authentication methods (beta) -Choose a mechanism for letting users register via local accounts. Local accounts are the accounts where Azure AD B2C does the identity assertion. For more information, see [b2cAuthenticationMethodsPolicy resource type](/graph/api/resources/b2cauthenticationmethodspolicy). +Choose a mechanism for letting users register via local accounts. A local account is one where Azure AD B2C completes the identity assertion. For more information, see [b2cAuthenticationMethodsPolicy resource type](/graph/api/resources/b2cauthenticationmethodspolicy).
- [Get](/graph/api/b2cauthenticationmethodspolicy-get) - [Update](/graph/api/b2cauthenticationmethodspolicy-update) Deleted users and apps can only be restored if they were deleted within the last ## How to programmatically manage Microsoft Graph -When you want to manage Microsoft Graph, you can either do it as the application using the application permissions, or you can use delegated permissions. For delegated permissions, either the user or an administrator consents to the permissions that the app requests. The app is delegated with the permission to act as a signed-in user when it makes calls to the target resource. Application permissions are used by apps that do not require a signed in user present and thus require application permissions. Because of this, only administrators can consent to application permissions. +You can manage Microsoft Graph in two ways: ++* **Delegated permissions**: either the user or an administrator consents to the permissions that the app requests. The app is delegated with the permission to act as a signed-in user when it makes calls to the target resource. +* **Application permissions** are used by apps that don't require a signed-in user to be present. Because of this, only administrators can consent to application permissions. > [!NOTE] > Delegated permissions for users signing in through user flows or custom policies cannot be used against delegated permissions for Microsoft Graph API.+ ## Code sample: How to programmatically manage user accounts This code sample is a .NET Core console application that uses the [Microsoft Graph SDK](/graph/sdks/sdks-overview) to interact with Microsoft Graph API. Its code demonstrates how to call the API to programmatically manage users in an Azure AD B2C tenant.
The initialized _GraphServiceClient_ is then used in _UserService.cs_ to perform [Make API calls using the Microsoft Graph SDKs](/graph/sdks/create-requests) includes information on how to read and write information from Microsoft Graph, use `$select` to control the properties returned, provide custom query parameters, and use the `$filter` and `$orderBy` query parameters. -## Next steps +## See also - For code samples in JavaScript and Node.js, please see: [Manage B2C user accounts with MSAL.js and Microsoft Graph SDK](https://github.com/Azure-Samples/ms-identity-b2c-javascript-nodejs-management) - Explore [Graph Explorer](https://aka.ms/ge) that lets you try Microsoft Graph APIs and learn about them. |
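The Microsoft Graph management row above distinguishes delegated from application permissions and notes the `$select`, `$filter`, and `$orderBy` query parameters. As a minimal, hypothetical sketch of how a management app with application permissions typically assembles its client-credentials token request and a `$select`-trimmed "list users" call (the tenant name, client ID, and secret below are placeholders; nothing here touches the network):

```python
from urllib.parse import urlencode

GRAPH = "https://graph.microsoft.com/v1.0"

def token_request(tenant: str, client_id: str, client_secret: str):
    """Build the client-credentials token request an app identity uses
    (application permissions; these require admin consent)."""
    url = f"https://login.microsoftonline.com/{tenant}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://graph.microsoft.com/.default",
    })
    return url, body

def list_users_url(select=("id", "displayName"), top=10) -> str:
    """Build a 'list users' URL, using $select to trim the response payload."""
    return f"{GRAPH}/users?" + urlencode(
        {"$select": ",".join(select), "$top": top}, safe="$,")

# Placeholder credentials only -- no network call is made.
url, body = token_request("contoso.onmicrosoft.com", "app-id", "app-secret")
print(url)
print(list_users_url())
```

The token endpoint receives `body` as an `application/x-www-form-urlencoded` POST; the bearer token in the response then authorizes Graph requests such as the one `list_users_url()` builds.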
active-directory-b2c | Multi Factor Auth Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-auth-technical-profile.md | Title: Microsoft Entra ID multifactor authentication technical profiles in custom policies description: Custom policy reference for Microsoft Entra ID multifactor authentication technical profiles in Azure AD B2C.-+ -+ Last updated 11/08/2022 |
active-directory-b2c | Multi Factor Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-authentication.md | Title: Multifactor authentication in Azure Active Directory B2C description: How to enable multifactor authentication in consumer-facing applications secured by Azure Active Directory B2C.--+ - - Previously updated : 07/20/2022 Last updated : 11/15/2023 -+ zone_pivot_groups: b2c-policy-type+ +#Customer intent: As a developer, I want to learn how to enable multifactor authentication in consumer-facing applications secured by Azure Active Directory B2C. + # Enable multifactor authentication in Azure Active Directory B2C [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] -Azure Active Directory B2C (Azure AD B2C) integrates directly with [Microsoft Entra multifactor authentication](../active-directory/authentication/concept-mfa-howitworks.md) so that you can add a second layer of security to sign-up and sign-in experiences in your applications. You enable multifactor authentication without writing a single line of code. If you already created sign up and sign-in user flows, you can still enable multifactor authentication. +Azure Active Directory B2C (Azure AD B2C) integrates directly with [Microsoft Entra multifactor authentication](/entra/identity/authentication/concept-mfa-howitworks) so that you can add a second layer of security to sign-up and sign-in experiences in your applications. If you already created sign-up and sign-in user flows, you can still enable multifactor authentication. -This feature helps applications handle scenarios such as: +Using this feature, applications can handle multiple scenarios such as: -- You don't require multifactor authentication to access one application, but you do require it to access another.
For example, the customer can sign into an auto insurance application with a social or local account, but must verify the phone number before accessing the home insurance application registered in the same directory.-- You don't require multifactor authentication to access an application in general, but you do require it to access the sensitive portions within it. For example, the customer can sign in to a banking application with a social or local account and check the account balance, but must verify the phone number before attempting a wire transfer.+- Requiring multifactor authentication to access one application, but not requiring it to access another. For example, a customer can sign into an auto insurance application with a social or local account, but must verify the phone number before accessing the home insurance application registered in the same directory. +- Requiring multifactor authentication to access an application in general, but not requiring it to access the sensitive portions within it. For example, a customer can sign in to a banking application with a social or local account and check the account balance, but must verify the phone number before attempting a wire transfer. ## Prerequisites This feature helps applications handle scenarios such as: With [Conditional Access](conditional-access-identity-protection-overview.md) users may or may not be challenged for MFA based on configuration decisions that you can make as an administrator. The methods of the multifactor authentication are: -- **Email** - During sign-in, a verification email containing a one-time password (OTP) is sent to the user. The user provides the OTP code that was sent in the email. -- **SMS or phone call** - During the first sign-up or sign-in, the user is asked to provide and verify a phone number. During subsequent sign-ins, the user is prompted to select either the **Send Code** or **Call Me** phone MFA option. 
Depending on the user's choice, a text message is sent or a phone call is made to the verified phone number to identify the user. The user either provides the OTP code sent via text message or approves the phone call.+- **Email** - During sign-in, a verification email containing a one-time password (OTP) is sent to the user. The user provides the OTP code that was sent in the email to the application. +- **SMS or phone call** - During the first sign-up or sign-in, the user is asked to provide and verify a phone number. During subsequent sign-ins, the user is prompted to select either the **Send Code** or **Call Me** option. Depending on the user's choice, a text message is sent or a phone call is made to the verified phone number to identify the user. The user either provides the OTP code sent via text message or approves the phone call. - **Phone call only** - Works in the same way as the SMS or phone call option, but only a phone call is made. - **SMS only** - Works in the same way as the SMS or phone call option, but only a text message is sent. - **Authenticator app - TOTP** - The user must install an authenticator app that supports time-based one-time password (TOTP) verification, such as the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app), on a device that they own. During the first sign-up or sign-in, the user scans a QR code or enters a code manually using the authenticator app. During subsequent sign-ins, the user types the TOTP code that appears on the authenticator app. See [how to set up the Microsoft Authenticator app](#enroll-a-user-in-totp-with-an-authenticator-app-for-end-users). To enable multifactor authentication, get the custom policy starter pack from Gi ## Enroll a user in TOTP with an authenticator app (for end users) -When an Azure AD B2C application enables MFA using the TOTP option, end users need to use an authenticator app to generate TOTP codes. 
Users can use the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app) or any other authenticator app that supports TOTP verification. An Azure AD B2C system admin needs to advise end users to set up the Microsoft Authenticator app using the following steps: +When an Azure AD B2C application uses the TOTP option for MFA, end users need to use an authenticator app to generate TOTP codes. Users can use the [Microsoft Authenticator app](https://www.microsoft.com/security/mobile-authenticator-app) or any other authenticator app that supports TOTP verification. If using the Microsoft Authenticator app, an Azure AD B2C system admin needs to advise end users to set up the Microsoft Authenticator app using the following steps: 1. [Download and install the Microsoft Authenticator app](https://www.microsoft.com/en-us/security/mobile-authenticator-app) on your Android or iOS mobile device.-1. Open the application requiring you to use TOTP for MFA, for example *Contoso webapp*, and then sign in or sign up by entering the required information. -1. If you're asked to enroll your account by scanning a QR code using an authenticator app, open the Microsoft Authenticator app in your phone, and in the upper right corner, select the **3-dotted** menu icon (for Android) or **+** menu icon (for IOS). +1. Open the Azure AD B2C application requiring you to use TOTP for MFA, for example *Contoso webapp*, and then sign in or sign up by entering the required information. +1. If you're asked to enroll your account by scanning a QR code using an authenticator app, open the Microsoft Authenticator app on your phone, and in the upper right corner, select the **3-dotted** menu icon (for Android) or **+** menu icon (for iOS). 1. Select **+ Add account**.-1. Select **Other account (Google, Facebook, etc.)**, and then scan the QR code shown in the application (for example, *Contoso webapp*) to enroll your account.
If you're unable to scan the QR code, you can add the account manually: +1. Select **Other account (Google, Facebook, etc.)**, and then scan the QR code shown in the Azure AD B2C application to enroll your account. If you're unable to scan the QR code, you can add the account manually: 1. In the Microsoft Authenticator app on your phone, select **OR ENTER CODE MANUALLY**.- 1. In the application (for example, *Contoso webapp*), select **Still having trouble?**. This displays **Account Name** and **Secret**. + 1. In the Azure AD B2C application, select **Still having trouble?**. This displays **Account Name** and **Secret**. 1. Enter the **Account Name** and **Secret** in your Microsoft Authenticator app, and then select **FINISH**.-1. In the application (for example, *Contoso webapp*), select **Continue**. +1. In the Azure AD B2C application, select **Continue**. 1. In **Enter your code**, enter the code that appears in your Microsoft Authenticator app. 1. Select **Verify**. 1. During subsequent sign-in to the application, type the code that appears in the Microsoft Authenticator app. -Learn about [OATH software tokens](../active-directory/authentication/concept-authentication-oath-tokens.md) +Learn about [OATH software tokens](/entra/identity/authentication/concept-authentication-oath-tokens) ## Delete a user's TOTP authenticator enrollment (for system admins) -In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then the user would be required to re-enroll their account to use TOTP authentication again. To delete a user's TOTP enrollment, you can use either the [Azure portal](https://portal.azure.com) or the [Microsoft Graph API](/graph/api/softwareoathauthenticationmethod-delete). +In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. The user will then be forced to re-enroll their account to use TOTP authentication again. 
To delete a user's TOTP enrollment, you can use either the [Azure portal](https://portal.azure.com) or the [Microsoft Graph API](/graph/api/softwareoathauthenticationmethod-delete). > [!NOTE]-> - Deleting a user's TOTP authenticator app enrollment from Azure AD B2C doesn't remove the user's account in the TOTP authenticator app. The system admin needs to direct the user to manually delete their account from the TOTP authenticator app before trying to enroll again. +> - Deleting a user's TOTP authenticator app enrollment from Azure AD B2C doesn't remove the user's account in the TOTP authenticator app on their device. The system admin needs to direct the user to manually delete their account from the TOTP authenticator app on their device before trying to enroll again. > - If the user accidentally deletes their account from the TOTP authenticator app, they need to notify a system admin or app owner who can delete the user's TOTP authenticator enrollment from Azure AD B2C so the user can re-enroll. ### Delete TOTP authenticator app enrollment using the Azure portal In Azure AD B2C, you can delete a user's TOTP authenticator app enrollment. Then 1. Under **Usable authentication methods**, find **Software OATH token**, and then select the ellipsis menu next to it. If you don't see this interface, select the option to **"Switch to the new user authentication methods experience! Click here to use it now"** to switch to the new authentication methods experience. 1. Select **Delete**, and then select **Yes** to confirm. ### Delete TOTP authenticator app enrollment using the Microsoft Graph API |
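The multifactor authentication row above describes the **Authenticator app - TOTP** method. The codes such authenticator apps display follow RFC 6238: an HMAC-SHA1 over a 30-second time counter, dynamically truncated (per RFC 4226) to six or eight decimal digits. A small illustrative sketch of that algorithm, not of any Azure AD B2C API:

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, at: int, digits: int = 6, step: int = 30) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, then
    dynamic truncation (RFC 4226) down to `digits` decimal digits."""
    counter = struct.pack(">Q", at // step)          # 8-byte big-endian counter
    mac = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # low nibble picks the window
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 Appendix B test vector (SHA-1, 8 digits, T = 59 seconds)
print(totp(b"12345678901234567890", 59, digits=8))   # -> 94287082
```

The printed value matches the RFC 6238 Appendix B test vector; in practice users should rely on the authenticator app itself, with Azure AD B2C verifying the submitted code server-side.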
active-directory-b2c | Multiple Token Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multiple-token-endpoints.md | Title: Migrate OWIN-based web APIs to b2clogin.com or a custom domain description: Learn how to enable a .NET web API to support tokens issued by multiple token issuers while you migrate your applications to b2clogin.com.-+ -+ Last updated 03/15/2021 |
active-directory-b2c | Oauth1 Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth1-technical-profile.md | Title: Define an OAuth1 technical profile in a custom policy description: Define an OAuth 1.0 technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 09/10/2018 |
active-directory-b2c | Oauth2 Error Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth2-error-technical-profile.md | Title: Define an OAuth2 custom error technical profile in a custom policy description: Define an OAuth2 custom error technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 02/25/2022 |
active-directory-b2c | Oauth2 Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth2-technical-profile.md | Title: Define an OAuth2 technical profile in a custom policy description: Define an OAuth2 technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 11/30/2021 |
active-directory-b2c | One Time Password Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/one-time-password-technical-profile.md | Title: Enable one-time password (OTP) verification description: Learn how to set up a one-time password (OTP) scenario by using Azure AD B2C custom policies.-+ -+ Last updated 10/19/2020 |
active-directory-b2c | Openid Connect Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect-technical-profile.md | Title: Define an OpenID Connect technical profile in a custom policy description: Define an OpenID Connect technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 09/12/2023 |
active-directory-b2c | Openid Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md | Title: Web sign in with OpenID Connect - Azure Active Directory B2C description: Build web applications using the OpenID Connect authentication protocol in Azure Active Directory B2C.-+ -+ Last updated 11/22/2023 |
active-directory-b2c | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/overview.md | Title: What is Azure Active Directory B2C? description: Learn how you can use Azure Active Directory B2C to support external identities in your applications, including social sign-up with Facebook, Google, and other identity providers.- - -+ Previously updated : 10/26/2022- Last updated : 11/08/2023 + +# Customer intent: As a technical or non-technical customer, I need to understand at a high level what Azure AD B2C is and how it can help me build a customer-facing application. + # What is Azure Active Directory B2C? -Azure Active Directory B2C provides business-to-customer identity as a service. Your customers use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs. +Azure Active Directory B2C provides business-to-customer identity as a service. Your customers can use their preferred social, enterprise, or local account identities to get single sign-on access to your applications and APIs. ![Infographic of Azure AD B2C identity providers and downstream applications](./media/overview/azureadb2c-overview.png) Azure AD B2C is a customer identity access management (CIAM) solution capable of supporting millions of users and billions of authentications per day. It takes care of the scaling and safety of the authentication platform, monitoring, and automatically handling threats like denial-of-service, password spray, or brute force attacks. -Azure AD B2C is a separate service from [Microsoft Entra ID](../active-directory/fundamentals/whatis.md). It is built on the same technology as Microsoft Entra ID but for a different purpose. It allows businesses to build customer facing applications, and then allow anyone to sign-up and into those applications with no restrictions on user account. 
+Azure AD B2C is built on the same technology as [Microsoft Entra ID](../active-directory/fundamentals/whatis.md) but for a different purpose and is a separate service. It allows businesses to build customer-facing applications, and then allow anyone to sign up and sign in to those applications with no restrictions on user accounts. ## Who uses Azure AD B2C?-Any business or individual who wishes to authenticate end users to their web/mobile applications using a white-label authentication solution. Apart from authentication, Azure AD B2C service is used for authorization such as access to API resources by authenticated users. Azure AD B2C is designed to be used by **IT administrators** and **developers**. +Any business or individual who wishes to authenticate end users to their web or mobile applications using a white-label authentication solution. Apart from authentication, the Azure AD B2C service is used for authorization such as access to API resources by authenticated users. Azure AD B2C is designed to be used by **IT administrators** and **developers**. ## Custom-branded identity solution -Azure AD B2C is a white-label authentication solution. You can customize the entire user experience with your brand so that it blends seamlessly with your web and mobile applications. +Azure AD B2C is a white-label authentication solution, which means you can customize the entire user experience with your brand so that it blends seamlessly with your web and mobile applications. -Customize every page displayed by Azure AD B2C when your users sign-up, sign in, and modify their profile information. Customize the HTML, CSS, and JavaScript in your user journeys so that the Azure AD B2C experience looks and feels like it's a native part of your application. +Customize every page displayed by Azure AD B2C when your users sign up, sign in, and modify their profile information.
Customize the HTML, CSS, and JavaScript in your user journeys so that the Azure AD B2C experience looks and feels like it's a native part of your application. -![Customized sign-up and sign-in pages and background image](./media/overview/sign-in-small.png) ## Single sign-on access with a user-provided identity Another external user store scenario is to have Azure AD B2C handle the authenti :::image type="content" source="./media/overview/scenario-remoteprofile.png" alt-text="A logical diagram of Azure AD B2C communicating with an external user store."::: -Azure AD B2C can facilitate collecting the information from the user during registration or profile editing, then hand that data off to the external system via API. Then, during future authentications, Azure AD B2C can retrieve the data from the external system and, if needed, include it as a part of the authentication token response it sends to your application. +Azure AD B2C can facilitate collecting information from a user during registration or profile editing, then hand that data off to an external system via API. Then, during future authentications, Azure AD B2C can retrieve that data from the external system and, if needed, include it as a part of the authentication token response it sends to your application. ## Progressive profiling Another user journey option includes progressive profiling. Progressive profilin Use Azure AD B2C to facilitate identity verification and proofing by collecting user data, then passing it to a third-party system to perform validation, trust scoring, and approval for user account creation. - :::image type="content" source="./media/overview/scenario-idproofing.png" alt-text="A diagram showing the user flow for third-party identity proofing."::: -You have learned some of the things you can do with Azure AD B2C as your business-to-customer identity platform. You may now move on directly to a more in-depth [technical overview of Azure AD B2C](technical-overview.md). 
- ## Next steps Now that you have an idea of what Azure AD B2C is and some of the scenarios it can help with, dig a little deeper into its features and technical aspects. |
active-directory-b2c | Page Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md | Title: Page layout versions description: Page layout version history for UI customization in custom policies.-+ -+ Last updated 10/16/2023 |
active-directory-b2c | Partner Akamai Secure Hybrid Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai-secure-hybrid-access.md | Title: Configure Azure Active Directory B2C with Akamai for secure hybrid access description: Learn how to integrate Azure AD B2C authentication with Akamai for secure hybrid access -+ -+ Last updated 11/23/2022 |
active-directory-b2c | Partner Akamai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai.md | Title: Configure Azure Active Directory B2C with Akamai Web Application Protector description: Configure Akamai Web Application Protector with Azure AD B2C-+ -+ Last updated 05/04/2023 |
active-directory-b2c | Partner Arkose Labs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md | Title: Tutorial to configure Azure Active Directory B2C with the Arkose Labs platform description: Learn to configure Azure Active Directory B2C with the Arkose Labs platform to identify risky and fraudulent users-+ -+ Last updated 01/18/2023 |
active-directory-b2c | Partner Asignio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md | Title: Configure Asignio with Azure Active Directory B2C for multifactor authentication description: Configure Azure Active Directory B2C with Asignio for multifactor authentication-+ -+ Last updated 05/04/2023 |
active-directory-b2c | Partner Bindid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md | Title: Configure Transmit Security with Azure Active Directory B2C for passwordless authentication description: Configure Azure AD B2C with Transmit Security BindID for passwordless customer authentication-+ -+ Last updated 04/27/2023 |
active-directory-b2c | Partner Biocatch | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md | Title: Tutorial to configure BioCatch with Azure Active Directory B2C description: Tutorial to configure Azure Active Directory B2C with BioCatch to identify risky and fraudulent users-+ -+ Last updated 03/13/2023 |
active-directory-b2c | Partner Bloksec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md | Title: Tutorial to configure Azure Active Directory B2C with BlokSec for passwordless authentication description: Learn how to integrate Azure AD B2C authentication with BlokSec for Passwordless authentication-+ -+ Last updated 03/09/2023 |
active-directory-b2c | Partner Cloudflare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-cloudflare.md | Title: Tutorial to configure Azure Active Directory B2C with Cloudflare Web Application Firewall description: Tutorial to configure Azure Active Directory B2C with Cloudflare Web application firewall and protect applications from malicious attacks -+ -+ Last updated 12/6/2022 |
active-directory-b2c | Partner Datawiza | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md | Title: Tutorial to configure Azure Active Directory B2C with Datawiza description: Learn how to integrate Azure AD B2C authentication with Datawiza for secure hybrid access -+ -+ Last updated 01/23/2023 |
active-directory-b2c | Partner Deduce | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md | Title: Configure Azure Active Directory B2C with Deduce description: Learn how to integrate Azure AD B2C authentication with Deduce for identity verification -+ -+ Last updated 8/22/2022 |
active-directory-b2c | Partner Dynamics 365 Fraud Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md | Title: Tutorial to configure Azure Active Directory B2C with Microsoft Dynamics 365 Fraud Protection description: Tutorial to configure Azure AD B2C with Microsoft Dynamics 365 Fraud Protection to identify risky and fraudulent accounts-+ -+ Last updated 02/27/2023 |
active-directory-b2c | Partner Eid Me | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md | Title: Configure Azure Active Directory B2C with Bluink eID-Me for identity verification description: Learn how to integrate Azure AD B2C authentication with eID-Me for identity verification -+ -+ Last updated 03/10/2023 |
active-directory-b2c | Partner Experian | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md | Title: Tutorial to configure Azure Active Directory B2C with Experian description: Learn how to integrate Azure AD B2C authentication with Experian for Identification verification and proofing based on user attributes to prevent fraud.-+ -+ Last updated 12/6/2022 |
active-directory-b2c | Partner F5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md | |
active-directory-b2c | Partner Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md | Title: ISV Partner gallery for Azure AD B2C description: Learn how to integrate with our ISV partners to tailor your end-user experience to your needs. Our partner network extends our solution capabilities; enable MFA, Secure Customer Authentication, role-based access control; combat fraud through Identity Verification Proofing.-+ -+ Last updated 1/25/2023 |
active-directory-b2c | Partner Grit App Proxy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-app-proxy.md | Title: Migrate applications to Azure AD B2C with Grit's app proxy description: Learn how Grit's app proxy can migrate your applications to Azure AD B2C with no code change-+ -+ Last updated 1/25/2023 |
active-directory-b2c | Partner Grit Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-authentication.md | Title: Configure Grit's biometric authentication with Azure Active Directory B2C description: Learn how Grit's biometric authentication with Azure AD B2C secures your account-+ -+ Last updated 1/25/2023 |
active-directory-b2c | Partner Grit Editor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-editor.md | Title: Edit identity experience framework XML with Grit Visual Identity Experience Framework (IEF) Editor description: Learn how Grit Visual IEF Editor enables fast authentication deployments in Azure AD B2C-+ -+ Last updated 10/10/2022 |
active-directory-b2c | Partner Grit Iam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-iam.md | Title: Configure the Grit IAM B2B2C solution with Azure Active Directory B2C description: Learn how to integrate Azure AD B2C authentication with the Grit IAM B2B2C solution-+ -+ Last updated 9/15/2022 |
active-directory-b2c | Partner Haventec | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-haventec.md | |
active-directory-b2c | Partner Hypr | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md | Title: Tutorial to configure Azure Active Directory B2C with HYPR description: Tutorial to configure Azure Active Directory B2C with Hypr for true passwordless strong customer authentication-+ -+ Last updated 12/7/2022 |
active-directory-b2c | Partner Idemia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idemia.md | Title: Configure IDEMIA Mobile ID with Azure Active Directory B2C description: Learn to integrate Azure AD B2C authentication with IDEMIA Mobile ID for a relying party to consume Mobile ID, or US state-issued mobile IDs-+ -+ Last updated 03/10/2023 |
active-directory-b2c | Partner Idology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idology.md | Title: IDology integration with Azure Active Directory B2C description: Learn how to integrate a sample online payment app in Azure AD B2C with IDology. IDology is an identity verification and proofing provider with multiple solutions.-+ -+ Last updated 06/08/2020 |
active-directory-b2c | Partner Itsme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-itsme.md | Title: itsme OpenID Connect with Azure Active Directory B2C description: Learn how to integrate Azure AD B2C authentication with itsme OIDC using client_secret user flow policy. itsme is a digital ID app. It allows you to log in securely without card-readers, passwords, two-factor authentication, and multiple PIN codes.-+ -+ Last updated 09/20/2021 |
active-directory-b2c | Partner Jumio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md | Title: Tutorial to configure Azure Active Directory B2C with Jumio description: Configure Azure Active Directory B2C with Jumio for automated ID verification, safeguarding customer data.-+ -+ Last updated 12/7/2022 |
active-directory-b2c | Partner Keyless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md | Title: Tutorial to configure Keyless with Azure Active Directory B2C description: Tutorial to configure Sift Keyless with Azure Active Directory B2C for passwordless authentication -+ -+ Last updated 03/06/2023 |
active-directory-b2c | Partner Lexisnexis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md | |
active-directory-b2c | Partner N8identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md | Title: Configure TheAccessHub Admin Tool by using Azure Active Directory B2C description: Configure TheAccessHub Admin Tool with Azure Active Directory B2C for customer account migration and customer service request (CSR) administration-+ -+ Last updated 12/6/2022 |
active-directory-b2c | Partner Nevis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md | Title: Tutorial to configure Azure Active Directory B2C with Nevis description: Learn how to integrate Azure AD B2C authentication with Nevis for passwordless authentication -+ -+ Last updated 12/8/2022 |
active-directory-b2c | Partner Nok Nok | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nok-nok.md | Title: Tutorial to configure Nok Nok Passport with Azure Active Directory B2C for passwordless FIDO2 authentication description: Configure Nok Nok Passport with Azure AD B2C to enable passwordless FIDO2 authentication-+ -+ Last updated 03/13/2023 |
active-directory-b2c | Partner Onfido | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md | Title: Tutorial to configure Azure Active Directory B2C with Onfido description: Learn how to integrate Azure AD B2C authentication with Onfido for document ID and facial biometrics verification -+ -+ Last updated 12/8/2022 |
active-directory-b2c | Partner Ping Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md | Title: Tutorial to configure Azure Active Directory B2C with Ping Identity description: Learn how to integrate Azure AD B2C authentication with Ping Identity-+ -+ Last updated 01/20/2023 |
active-directory-b2c | Partner Saviynt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md | Title: Tutorial to configure Saviynt with Azure Active Directory B2C description: Learn to configure Azure AD B2C with Saviynt for cross-application integration for better security, governance, and compliance.-+ -+ Last updated 05/23/2023 |
active-directory-b2c | Partner Strata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md | Title: Tutorial to configure Azure Active Directory B2C with Strata description: Learn how to integrate Azure AD B2C authentication with whoIam for user verification -+ -+ Last updated 12/16/2022 |
active-directory-b2c | Partner Trusona | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md | Title: Trusona Authentication Cloud with Azure AD B2C description: Learn how to add Trusona Authentication Cloud as an identity provider on Azure AD B2C to enable a "tap-and-go" passwordless authentication-+ -+ Last updated 03/10/2023 |
active-directory-b2c | Partner Twilio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-twilio.md | Title: Twilio Verify App with Azure Active Directory B2C description: Learn how to integrate a sample online payment app in Azure AD B2C with the Twilio Verify API. Comply with PSD2 (Payment Services Directive 2) transaction requirements through dynamic linking and strong customer authentication.-+ -+ Last updated 09/20/2021 |
active-directory-b2c | Partner Typingdna | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-typingdna.md | |
active-directory-b2c | Partner Web Application Firewall | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-web-application-firewall.md | Title: Tutorial to configure Azure Active Directory B2C with Azure Web Application Firewall description: Learn to configure Azure AD B2C with Azure Web Application Firewall to protect applications from malicious attacks -+ -+ Last updated 03/08/2023 |
active-directory-b2c | Partner Whoiam Rampart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md | Title: Configure WhoIAM Rampart with Azure Active Directory B2C description: Learn how to integrate Azure AD B2C authentication with WhoIAM Rampart-+ -+ Last updated 05/02/2023 |
active-directory-b2c | Partner Whoiam | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md | Title: Tutorial to configure Azure Active Directory B2C with WhoIAM description: In this tutorial, learn how to integrate Azure AD B2C authentication with WhoIAM for user verification. -+ -+ Last updated 01/18/2023 |
active-directory-b2c | Partner Xid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md | Title: Configure xID with Azure Active Directory B2C for passwordless authentication description: Configure Azure Active Directory B2C with xID for passwordless authentication-+ -+ Last updated 05/04/2023 |
active-directory-b2c | Partner Zscaler | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md | Title: Tutorial - Configure Zscaler Private Access with Azure Active Directory B2C description: Learn how to integrate Azure AD B2C authentication with Zscaler.-+ -+ Last updated 01/18/2023 |
active-directory-b2c | Password Complexity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/password-complexity.md | Title: Configure password complexity requirements description: How to configure complexity requirements for passwords supplied by consumers in Azure Active Directory B2C.-+ -+ Last updated 01/10/2023-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Phone Authentication User Flows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-authentication-user-flows.md | Title: Set up phone sign-up and sign-in for user flows description: Define the identity types you can use (email, username, phone number) for local account authentication when you set up user flows in your Azure Active Directory B2C tenant.-+ -+ Last updated 09/20/2021 |
active-directory-b2c | Phone Based Mfa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md | Title: Securing phone-based MFA in Azure AD B2C description: Learn tips for securing phone-based multifactor authentication in your Azure AD B2C tenant by using Azure Monitor Log Analytics reports and alerts. Use our workbook to identify fraudulent phone authentications and mitigate fraudulent sign-ups.-+ -+ Last updated 09/20/2021 |
active-directory-b2c | Phone Factor Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-factor-technical-profile.md | Title: Define a phone factor technical profile in a custom policy description: Define a phone factor technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 10/12/2020 |
active-directory-b2c | Phone Number Claims Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-number-claims-transformations.md | Title: Phone number claims transformations in custom policies description: Custom policy reference for phone number claims transformations in Azure AD B2C.-+ -+ Last updated 02/16/2022 |
active-directory-b2c | Policy Keys Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/policy-keys-overview.md | Title: Policy keys overview - Azure Active Directory B2C description: Learn about the types of encryption policy keys that can be used in Azure Active Directory B2C for signing and validating tokens, client secrets, certificates, and passwords.-+ -+ Last updated 09/20/2021 |
active-directory-b2c | Predicates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/predicates.md | Title: Predicates and PredicateValidations description: Prevent malformed data from being added to your Azure AD B2C tenant by using custom policies in Azure Active Directory B2C.-+ -+ Last updated 03/13/2022 |
active-directory-b2c | Protocols Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/protocols-overview.md | Title: Authentication protocols in Azure Active Directory B2C description: How to build apps directly by using the protocols that are supported by Azure Active Directory B2C.-+ -+ Last updated 06/21/2022 |
active-directory-b2c | Publish App To Azure Ad App Gallery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md | Title: Publish your Azure Active Directory B2C app to the Microsoft Entra app gallery description: Learn how to list an Azure AD B2C app that supports single sign-on in the Microsoft Entra app gallery. -+ -+ Last updated 09/30/2022 |
active-directory-b2c | Quickstart Native App Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-native-app-desktop.md | Title: "Quickstart: Set up sign in for a desktop app using Azure Active Directory B2C" description: In this Quickstart, run a sample WPF desktop application that uses Azure Active Directory B2C to provide account sign in.-+ -+ Last updated 01/13/2022 |
active-directory-b2c | Quickstart Single Page App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-single-page-app.md | Title: "Quickstart: Set up sign in for a single-page app (SPA)" description: In this Quickstart, run a sample single-page application that uses Azure Active Directory B2C to provide account sign-in.-+ -+ Last updated 02/23/2023 |
active-directory-b2c | Quickstart Web App Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md | Title: "Quickstart: Set up sign-in for an ASP.NET web app" description: In this Quickstart, run a sample ASP.NET web app that uses Azure Active Directory B2C to provide account sign-in.-+ |
active-directory-b2c | Register Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/register-apps.md | Title: Register apps in Azure Active Directory B2C description: Learn how to register different apps types such as web app, web API, single-page apps, mobile and desktop apps, daemon apps, Microsoft Graph apps and SAML app in Azure Active Directory B2C -+ -+ Last updated 09/30/2022 |
active-directory-b2c | Relyingparty | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md | Title: RelyingParty - Azure Active Directory B2C description: Specify the RelyingParty element of a custom policy in Azure Active Directory B2C.-+ -+ Last updated 03/13/2023-+ |
active-directory-b2c | Restful Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/restful-technical-profile.md | Title: Define a RESTful technical profile in a custom policy description: Define a RESTful technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 06/08/2022 |
active-directory-b2c | Roles Resource Access Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/roles-resource-access-control.md | Title: Roles and resource access control description: Learn how to use roles to control resource access.-+ -+ Last updated 02/24/2023 |
active-directory-b2c | Saml Identity Provider Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md | Title: Define a SAML technical profile in a custom policy description: Define a SAML technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 01/05/2023 |
active-directory-b2c | Saml Issuer Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-issuer-technical-profile.md | Title: Define a technical profile for a SAML issuer in a custom policy description: Define a technical profile for a Security Assertion Markup Language token (SAML) issuer in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 04/08/2022 |
active-directory-b2c | Saml Service Provider Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider-options.md | Title: Configure SAML service provider options title-suffix: Azure Active Directory B2C description: Learn how to configure Azure Active Directory B2C SAML service provider options.-+ -+ Last updated 10/16/2023 |
active-directory-b2c | Saml Service Provider | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider.md | Title: Configure Azure Active Directory B2C as a SAML IdP to your applications title-suffix: Azure Active Directory B2C description: Learn how to configure Azure Active Directory B2C to provide SAML protocol assertions to your applications (service providers).-+ -+ Last updated 06/24/2023 |
active-directory-b2c | Secure Api Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-api-management.md | Title: Secure an Azure API Management API by using Azure Active Directory B2C description: Learn how to use access tokens issued by Azure Active Directory B2C to secure an Azure API Management API endpoint.-+ -+ Last updated 09/20/2021 |
active-directory-b2c | Secure Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md | Title: Secure APIs used for API connectors in Azure AD B2C description: Secure your custom RESTful APIs used for API connectors in Azure AD B2C.- - - Last updated 11/20/2023 |
active-directory-b2c | Security Architecture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/security-architecture.md | Title: Security architecture in Azure AD B2C description: End to end guidance on how to secure your Azure AD B2C solution.-+ -+ Last updated 05/09/2023 |
active-directory-b2c | Self Asserted Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md | Title: Define a self-asserted technical profile in a custom policy description: Define a self-asserted technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 11/07/2022 |
active-directory-b2c | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md | Title: Azure Active Directory B2C service limits and restrictions description: Reference for service limits and restrictions for Azure Active Directory B2C service.-+ -+ Last updated 12/29/2022 |
active-directory-b2c | Session Behavior | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/session-behavior.md | Title: Configure session behavior - Azure Active Directory B2C description: Learn how to configure session behavior in Azure Active Directory B2C.-+ -+ Last updated 11/20/2023-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Sign In Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/sign-in-options.md | Title: Sign-in options supported by Azure AD B2C description: Learn about the sign-up and sign-in options you can use with Azure Active Directory B2C, including username and password, email, phone, or federation with social or external identity providers.-+ -+ Last updated 02/08/2023 |
active-directory-b2c | Social Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/social-transformations.md | Title: Social account claims transformation examples for custom policies description: Social account claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C.-+ -+ Last updated 02/16/2022 |
active-directory-b2c | Solution Articles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/solution-articles.md | Title: Solutions and Training for Azure Active Directory B2C description: This article gives you links to solution and training information that can help you understand and use Azure Active Directory B2C for end-to-end-business solutions.-+ |
active-directory-b2c | String Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md | Title: String claims transformation examples for custom policies description: String claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C.-+ -+ Last updated 02/16/2022 |
active-directory-b2c | Stringcollection Transformations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/stringcollection-transformations.md | Title: StringCollection claims transformation examples for custom policies description: StringCollection claims transformation examples for the Identity Experience Framework (IEF) schema of Azure Active Directory B2C.-+ -+ Last updated 02/16/2022 |
active-directory-b2c | Subjourneys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/subjourneys.md | Title: Sub journeys in Azure Active Directory B2C description: Specify the sub journeys element of a custom policy in Azure Active Directory B2C.-+ -+ Last updated 02/09/2022 |
active-directory-b2c | Supported Azure Ad Features | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/supported-azure-ad-features.md | Title: Supported Microsoft Entra ID features description: Learn about Microsoft Entra ID features, which are still supported in Azure AD B2C.-+ -+ Last updated 11/06/2023 |
active-directory-b2c | Technical Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technical-overview.md | Title: Technical and feature overview - Azure Active Directory B2C description: An in-depth introduction to the features and technologies in Azure Active Directory B2C. Azure Active Directory B2C has high availability globally. - - - Previously updated : 10/26/2022- Last updated : 11/08/2023 + +#Customer intent: As an IT admin or developer, I need to understand in more detail the technical aspects and features of Azure AD B2C and how it can help me build a customer-facing application. + # Technical and feature overview of Azure Active Directory B2C -A companion to [About Azure Active Directory B2C](overview.md), this article provides a more in-depth introduction to the service. Discussed here are the primary resources you work with in the service, its features. Learn how these features enable you to provide a fully custom identity experience for your customers in your applications. +This article is a companion to [About Azure Active Directory B2C](overview.md) and provides a more in-depth introduction to the service. It covers the primary resources you work with in the service and its features, and explains how they enable you to provide a fully custom identity experience for customers in your applications. ## Azure AD B2C tenant -In Azure Active Directory B2C (Azure AD B2C), a *tenant* represents your organization and is a directory of users. Each Azure AD B2C tenant is distinct and separate from other Azure AD B2C tenants. An Azure AD B2C tenant is different than a Microsoft Entra tenant, which you may already have. +In Azure Active Directory B2C (Azure AD B2C), a *tenant* represents your organization and is a directory of users. Each Azure AD B2C tenant is distinct and separate from other Azure AD B2C tenants. 
An Azure AD B2C tenant is also different from a Microsoft Entra tenant, which you may already have. The primary resources you work with in an Azure AD B2C tenant are: -* **Directory** - The *directory* is where Azure AD B2C stores your users' credentials, profile data, and your application registrations. -* **Application registrations** - Register your web, mobile, and native applications with Azure AD B2C to enable identity management. You can also register any APIs you want to protect with Azure AD B2C. -* **User flows** and **custom policies** - Create identity experiences for your applications with built-in user flows and fully configurable custom policies: +* **Directory** - This is where Azure AD B2C stores your users' credentials, profile data, and your application registrations. +* **Application registrations** - You can register your web, mobile, and native applications with Azure AD B2C to enable identity management. You can also register any APIs you want to protect with Azure AD B2C. +* **User flows** and **custom policies** - These are used to create identity experiences for your applications with built-in user flows and fully configurable custom policies: * **User flows** help you quickly enable common identity tasks like sign-up, sign-in, and profile editing. * **Custom policies** let you build complex identity workflows unique to your organization, customers, employees, partners, and citizens. * **Sign-in options** - Azure AD B2C offers various [sign-up and sign-in options](sign-in-options.md) for users of your applications:- * **Username, email, and phone sign-in** - Configure your Azure AD B2C local accounts to allow sign-up and sign-in with a username, email address, phone number, or a combination of methods. - * **Social identity providers** - Federate with social providers like Facebook, LinkedIn, or Twitter. - * **External identity providers** - Federate with standard identity protocols like OAuth 2.0, OpenID Connect, and more. 
+ * **Username, email, and phone sign-in** - You can configure your Azure AD B2C local accounts to allow sign up and sign in with a username, email address, phone number, or a combination of methods. + * **Social identity providers** - You can federate with social providers like Facebook, LinkedIn, or Twitter. + * **External identity providers** - You can also federate with standard identity protocols like OAuth 2.0, OpenID Connect, and more. * **Keys** - Add and manage encryption keys for signing and validating tokens, client secrets, certificates, and passwords. An Azure AD B2C tenant is the first resource you need to create to get started with Azure AD B2C. Learn how to: An Azure AD B2C tenant is the first resource you need to create to get started w Azure AD B2C defines several types of user accounts. Microsoft Entra ID, Microsoft Entra B2B, and Azure Active Directory B2C share these account types. * **Work account** - Users with work accounts can manage resources in a tenant, and with an administrator role, can also manage tenants. Users with work accounts can create new consumer accounts, reset passwords, block/unblock accounts, and set permissions or assign an account to a security group.-* **Guest account** - External users you invite to your tenant as guests. A typical scenario for inviting a guest user to your Azure AD B2C tenant is to share administration responsibilities. -* **Consumer account** - Accounts that are managed by Azure AD B2C user flows and custom policies. +* **Guest account** - These are external users you invite to your tenant as guests. A typical scenario for inviting a guest user to your Azure AD B2C tenant is to share administration responsibilities. +* **Consumer account** - These are accounts that are managed by Azure AD B2C user flows and custom policies. 
:::image type="content" source="media/technical-overview/portal-01-users.png" alt-text="Screenshot of the Azure AD B2C user management page in the Azure portal.":::<br/>*Figure: User directory within an Azure AD B2C tenant in the Azure portal.* For more information, see [Overview of user accounts in Azure Active Directory B ## Local account sign-in options -Azure AD B2C provides various ways in which you can authenticate a user. Users can sign-in to a local account, by using username and password, phone verification (also known as password-less authentication). Email sign-up is enabled by default in your local account identity provider settings. +Azure AD B2C provides various ways in which you can authenticate a user. Users can sign-in to a local account, by using username and password, phone verification (also known as passwordless authentication). Email sign-up is enabled by default in your local account identity provider settings. Learn more about [sign-in options](sign-in-options.md) or how to [set up the local account identity provider](identity-provider-local.md). Learn more about [sign-in options](sign-in-options.md) or how to [set up the loc Azure AD B2C lets you manage common attributes of consumer account profiles. For example display name, surname, given name, city, and others. -You can also extend the Microsoft Entra schema to store additional information about your users. For example, their country/region of residency, preferred language, and preferences like whether they want to subscribe to a newsletter or enable multifactor authentication. For more information, see: +You can also extend the underlying Microsoft Entra ID schema to store additional information about your users. For example, their country/region of residency, preferred language, and preferences like whether they want to subscribe to a newsletter or enable multifactor authentication. 
For more information, see: * [User profile attributes](user-profile-attributes.md) * [Add user attributes and customize user input in Azure Active Directory B2C](configure-user-input.md) ## Sign-in with external identity providers -You can configure Azure AD B2C to allow users to sign in to your application with credentials from social and enterprise identity providers. Azure AD B2C can federate with identity providers that support OAuth 1.0, OAuth 2.0, OpenID Connect, and SAML protocols. For example, Facebook, Microsoft account, Google, Twitter, and AD-FS. +You can configure Azure AD B2C to allow users to sign in to your application with credentials from social and enterprise identity providers. Azure AD B2C can federate with identity providers that support OAuth 1.0, OAuth 2.0, OpenID Connect, and SAML protocols. For example, Facebook, Microsoft account, Google, Twitter, and Active Directory Federation Services (AD FS). :::image type="content" source="media/technical-overview/external-idps.png" alt-text="Diagram showing company logos for a sample of external identity providers."::: On the sign-up or sign-in page, Azure AD B2C presents a list of external identit :::image type="content" source="media/technical-overview/external-idp.png" alt-text="Diagram showing a mobile sign-in example with a social account (Facebook)."::: -To see how to add identity providers in Azure AD B2C, see [Add identity providers to your applications in Azure Active Directory B2C](add-identity-provider.md). +To learn more about identity providers, see [Add identity providers to your applications in Azure Active Directory B2C](add-identity-provider.md). ## Identity experiences: user flows or custom policies -In Azure AD B2C, you can define the business logic that users follow to gain access to your application. For example, you can determine the sequence of steps users follow when they sign in, sign up, edit a profile, or reset a password.
After completing the sequence, the user acquires a token and gains access to your application. +In Azure AD B2C, you can define the business logic that users follow to gain access to your application. For example, you can determine the sequence of steps users follow when they sign in, sign up, edit their profile, or reset a password. After completing the sequence, the user acquires a token and gains access to your application. In Azure AD B2C, there are two ways to provide identity user experiences: -* **User flows** are predefined, built-in, configurable policies that we provide so you can create sign-up, sign-in, and policy editing experiences in minutes. +* **User flows** - These are predefined, built-in, configurable policies that we provide so you can create sign-up, sign-in, and policy editing experiences in minutes. -* **Custom policies** enable you to create your own user journeys for complex identity experience scenarios. +* **Custom policies** - These enable you to create your own user journeys for complex identity experience scenarios. The following screenshot shows the user flow settings UI, versus custom policy configuration files. :::image type="content" source="media/technical-overview/user-flow-vs-custom-policy.png" alt-text="Screenshot showing the user flow settings UI versus a custom policy configuration file."::: -Read the [User flows and custom policies overview](user-flow-overview.md) article. It gives an overview of user flows and custom policies, and helps you decide which method will work best for your business needs. +To learn more about user flows and custom policies, and help you decide which method will work best for your business needs, see [User flows and custom policies overview](user-flow-overview.md). ## User interface For information on UI customization, see: ## Custom domain -You can customize your Azure AD B2C domain in the redirect URIs for your application. 
Custom domain allows you to create a seamless experience so that the pages that are shown blend seamlessly with the domain name of your application. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain .b2clogin.com. +You can customize your Azure AD B2C domain in the redirect URIs for your application. A custom domain allows you to create a seamless experience, where the pages that are shown blend in with the domain name of your application. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *.b2clogin.com*. For more information, see [Enable custom domains](custom-domain.md). ## Localization -Language customization in Azure AD B2C allows you to accommodate different languages to suit your customer needs. Microsoft provides the translations for 36 languages, but you can also provide your own translations for any language. +Language customization in Azure AD B2C allows you to accommodate different languages to suit your customer needs. Microsoft provides localizations for 36 languages, but you can also provide your own localizations for any language. :::image type="content" source="media/technical-overview/localization.png" alt-text="Screenshot of three sign in pages showing UI text in different languages."::: See how localization works in [Language customization in Azure Active Directory ## Email verification -Azure AD B2C ensures valid email addresses by requiring customers to verify them during the sign-up, and password reset flows. It also prevents malicious actors from using automated processes to generate fraudulent accounts in your applications. +Azure AD B2C ensures valid email addresses by requiring customers to verify them during the sign-up and password reset flows. 
This also prevents malicious actors from using automated processes to generate fraudulent accounts in your applications. :::image type="content" source="media/technical-overview/email-verification.png" alt-text="Screenshots showing the process for email verification."::: -You can customize the email to users that sign up to use your applications. By using the third-party email provider, you can use your own email template and From: address and subject, as well as support localization and custom one-time password (OTP) settings. For more information, see: +You can customize the email sent to users that sign up to use your applications. By using a third-party email provider, you can use your own email template and From: address and subject, as well as support localization and custom one-time password (OTP) settings. For more information, see: * [Custom email verification with Mailjet](custom-email-mailjet.md) * [Custom email verification with SendGrid](custom-email-sendgrid.md) You can add a REST API call at any step in a user journey defined by a custom po * After Azure AD B2C creates a new account in the directory * Before Azure AD B2C issues an access token -For more information, see [Integrate REST API claims exchanges in your Azure AD B2C custom policy](api-connectors-overview.md). +For more information, see [About API connectors in Azure AD B2C](api-connectors-overview.md). ## Protocols and tokens For more information, see [Integrate REST API claims exchanges in your Azure AD The following diagram shows how Azure AD B2C can communicate using various protocols within the same authentication flow: -![Diagram of OIDC-based client app federating with a SAML-based IdP](media/technical-overview/protocols.png) :::image type="content" source="media/technical-overview/protocols.png" alt-text="Diagram of OIDC-based client app federating with a SAML-based IdP."::: - 1. The relying party application starts an authorization request to Azure AD B2C using OpenID Connect. 1. 
When a user of the application chooses to sign in using an external identity provider that uses the SAML protocol, Azure AD B2C invokes the SAML protocol to communicate with that identity provider. 1. After the user completes the sign-in operation with the external identity provider, Azure AD B2C then returns the token to the relying party application using OpenID Connect. For example, to sign in to an application, the application uses the *sign up or ## Multifactor authentication (MFA) -Azure AD B2C Multi-Factor Authentication (MFA) helps safeguard access to data and applications while maintaining simplicity for your users. It provides extra security by requiring a second form of authentication, and delivers strong authentication by offering a range of easy-to-use authentication methods. +Azure AD B2C Multifactor Authentication (MFA) helps safeguard access to data and applications while maintaining simplicity for your users. It provides extra security by requiring a second form of authentication, and delivers strong authentication by offering a range of easy-to-use authentication methods. Your users may or may not be challenged for MFA based on configuration decisions that you can make as an administrator. Microsoft Entra ID Protection risk-detection features, including risky users and :::image type="content" source="media/technical-overview/conditional-access-flow.png" alt-text="Diagram showing conditional access flow."::: --Azure AD B2C evaluates each sign-in event and ensures that all policy requirements are met before granting the user access. Risky users or sign-ins may be blocked, or challenged with a specific remediation like multifactor authentication (MFA). For more information, see [Identity Protection and Conditional Access](conditional-access-identity-protection-overview.md). +Azure AD B2C evaluates each sign-in event and ensures that all policy requirements are met before granting the user access. 
Risky users or risky sign-ins may be blocked, or challenged with a specific remediation like multifactor authentication (MFA). For more information, see [Identity Protection and Conditional Access](conditional-access-identity-protection-overview.md). ## Password complexity For more information, see [Configure complexity requirements for passwords in Az ## Force password reset -As an Azure AD B2C tenant administrator, you can [reset a user's password](manage-users-portal.md#reset-a-users-password) if the user forgets their password. Or you would like to force them to reset the password periodically. For more information, see [Set up a force password reset flow](force-password-reset.md). --+As an Azure AD B2C tenant administrator, you can [reset a user's password](manage-users-portal.md#reset-a-users-password) if the user forgets their password. Or you can set a policy to force users to reset their password periodically. For more information, see [Set up a force password reset flow](force-password-reset.md). :::image type="content" source="media/technical-overview/force-password-reset-flow.png" alt-text="Force password reset flow."::: As an Azure AD B2C tenant administrator, you can [reset a user's password](manag To prevent brute-force password guessing attempts, Azure AD B2C uses a sophisticated strategy to lock accounts based on the IP of the request, the passwords entered, and several other factors. The duration of the lockout is automatically increased based on risk and the number of attempts. -![Account smart lockout](media/technical-overview/smart-lockout1.png) :::image type="content" source="media/technical-overview/smart-lockout1.png" alt-text="Screenshot of UI for account lockout with arrows highlighting the lockout notification."::: For more information about managing password protection settings, see [Mitigate credential attacks in Azure AD B2C](threat-management.md). 
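The OpenID Connect authorization request that begins the protocol flow shown earlier (the relying party's first call to Azure AD B2C) takes roughly the following shape. Every angle-bracketed value is a placeholder for your own tenant, user flow or policy, and app registration; `https://jwt.ms` is used here only as a test redirect URI:

```
https://<tenant-name>.b2clogin.com/<tenant-name>.onmicrosoft.com/<policy-name>/oauth2/v2.0/authorize?
  client_id=<application-id>
  &response_type=id_token
  &redirect_uri=https%3A%2F%2Fjwt.ms
  &response_mode=fragment
  &scope=openid
  &nonce=defaultNonce
```

Azure AD B2C resolves `<policy-name>` to a user flow or custom policy, runs the corresponding user journey, and returns the token to the redirect URI.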
## Protect resources and customer identities -Azure AD B2C complies with the security, privacy, and other commitments described in the [Microsoft Azure Trust Center](https://www.microsoft.com/trustcenter/cloudservices/azure). +Azure AD B2C complies with the security, privacy, and other commitments described in the [Microsoft Azure Trust Center](https://www.microsoft.com/trust-center). -Sessions are modeled as encrypted data, with the decryption key known only to the Azure AD B2C Security Token Service. A strong encryption algorithm, AES-192, is used. All communication paths are protected with TLS for confidentiality and integrity. Our Security Token Service uses an Extended Validation (EV) certificate for TLS. In general, the Security Token Service mitigates cross-site scripting (XSS) attacks by not rendering untrusted input. +Sessions are modeled as encrypted data, with the decryption key known only to the Azure AD B2C Security Token Service (STS). A strong encryption algorithm, AES-192, is used. All communication paths are protected with TLS for confidentiality and integrity. Our Security Token Service uses an Extended Validation (EV) certificate for TLS. In general, the Security Token Service mitigates cross-site scripting (XSS) attacks by not rendering untrusted input. :::image type="content" source="media/technical-overview/user-data.png" alt-text="Diagram of secure data in transit and at rest."::: You can assign roles to control who can perform certain administrative actions i * Create and manage trust framework policies in the Identity Experience Framework (custom policies) * Manage secrets for federation and encryption in the Identity Experience Framework (custom policies) -For more information about Microsoft Entra roles, including Azure AD B2C administration role support, see [Administrator role permissions in Microsoft Entra ID](../active-directory/roles/permissions-reference.md). 
+For more information about Microsoft Entra roles, including Azure AD B2C administration role support, see [Administrator role permissions in Microsoft Entra ID](/entra/identity/role-based-access-control/permissions-reference). ## Auditing and logs -Azure AD B2C emits audit logs containing activity information about its resources, issued tokens, and administrator access. You can use the audit logs to understand platform activity and diagnose issues. Audit log entries are available soon after the activity that generated the event occurs. +Azure AD B2C creates audit logs containing activity information about its resources, issued tokens, and administrator access. You can use the audit logs to understand platform activity and diagnose issues. Audit log entries are available soon after the activity that generated the event occurs. In an audit log, which is available for your Azure AD B2C tenant or for a particular user, you can find information including: Learn more about [Azure Active Directory B2C service Region availability & data ## Automation using Microsoft Graph API -Use MS graph API to manage your Azure AD B2C directory. You can also create the Azure AD B2C directory itself. You can manage users, identity providers, user flows, custom policies and many more. +Use the Microsoft Graph API to manage your Azure AD B2C directory. You can also create the Azure AD B2C directory itself. You can manage users, identity providers, user flows, custom policies, and more. Learn more about how to [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-operations.md). |
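As a minimal sketch of the Microsoft Graph automation mentioned above: the endpoint is the documented Graph v1.0 `/users` endpoint, while acquiring an access token (for example with MSAL's client credentials flow) is out of scope here, so the token value below is a placeholder.

```python
import urllib.request

GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def build_list_users_request(access_token: str, select: list[str]) -> urllib.request.Request:
    """Build a Microsoft Graph request that lists users in the directory.

    The caller must supply a valid access token that carries sufficient
    application permission (such as User.Read.All) for the B2C tenant.
    """
    url = f"{GRAPH_BASE}/users?$select={','.join(select)}"
    return urllib.request.Request(url, headers={"Authorization": f"Bearer {access_token}"})

# Build (but do not send) a request for each user's display name and identities.
req = build_list_users_request("<access-token>", ["displayName", "identities"])
print(req.full_url)  # https://graph.microsoft.com/v1.0/users?$select=displayName,identities
```

In a real script you would send the request with `urllib.request.urlopen(req)` (or use the Microsoft Graph SDK) and page through results via the `@odata.nextLink` property of each response.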
active-directory-b2c | Technicalprofiles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technicalprofiles.md | Title: Technical profiles description: Specify the TechnicalProfiles element of a custom policy in Azure Active Directory B2C.-+ -+ Last updated 06/22/2023 |
active-directory-b2c | Tenant Management Check Tenant Creation Permission | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-check-tenant-creation-permission.md | Title: Review tenant creation permission in Azure Active Directory B2C description: Learn how to check tenant creation permission in Azure Active Directory B2C before you create tenant-+ -+ -+ Last updated 01/30/2023 |
active-directory-b2c | Tenant Management Directory Quota | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md | Title: Manage directory size quota in Azure Active Directory B2C description: Learn how to manage directory size quota in your Azure AD B2C tenant-+ -+ Last updated 06/15/2023-+ |
active-directory-b2c | Tenant Management Emergency Access Account | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-emergency-access-account.md | Title: Manage emergency access accounts in Azure Active Directory B2C description: Learn how to manage emergency access accounts in Azure AD B2C tenants -+ -+ Last updated 11/20/2023-+ |
active-directory-b2c | Tenant Management Manage Administrator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-manage-administrator.md | Title: Manage administrator accounts in Azure Active Directory B2C description: Learn how to add an administrator account to your Azure Active Directory B2C tenant. Learn how to invite a guest account as an administrator into your Azure AD B2C tenant. -+ -+ -+ Last updated 01/30/2023 |
active-directory-b2c | Tenant Management Read Tenant Name | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-read-tenant-name.md | Title: Find tenant name and tenant ID description: Learn how to find tenant name and tenant ID -+ -+ Last updated 01/30/2023-+ |
active-directory-b2c | Threat Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md | Title: Mitigate credential attacks - Azure AD B2C description: Learn about detection and mitigation techniques for credential attacks (password attacks) in Azure Active Directory B2C, including smart account lockout features.-+ -+ Last updated 09/20/2021 |
active-directory-b2c | Tokens Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tokens-overview.md | Title: Overview of tokens - Azure Active Directory B2C description: Learn about the tokens used in Azure Active Directory B2C.-+ -+ Last updated 04/24/2023 |
active-directory-b2c | Troubleshoot With Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md | Title: Troubleshoot custom policies with Application Insights description: How to set up Application Insights to trace the execution of your custom policies.-+ -+ Last updated 11/20/2023-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot.md | Title: Troubleshoot custom policies and user flows in Azure Active Directory B2C description: Learn about approaches to solving errors when working with custom policies in Azure Active Directory B2C.-+ -+ Last updated 11/20/2023 |
active-directory-b2c | Trustframeworkpolicy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/trustframeworkpolicy.md | Title: TrustFrameworkPolicy - Azure Active Directory B2C description: Specify the TrustFrameworkPolicy element of a custom policy in Azure Active Directory B2C.-+ -+ Last updated 11/09/2021 |
active-directory-b2c | Tutorial Create Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md | Title: Tutorial - Create an Azure Active Directory B2C tenant description: Follow this tutorial to learn how to prepare for registering your applications by creating an Azure Active Directory B2C tenant using the Azure portal.-+ -+ Last updated 11/08/2023 |
active-directory-b2c | Tutorial Create User Flows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md | Title: Tutorial - Create user flows and custom policies - Azure Active Directory B2C description: Follow this tutorial to learn how to create user flows and custom policies in the Azure portal to enable sign up, sign in, and user profile editing for your applications in Azure Active Directory B2C.- - - Previously updated : 10/26/2022 Last updated : 11/10/2023 -+ zone_pivot_groups: b2c-policy-type+ +#Customer intent: As a developer, I want to learn how to create user flows and custom policies in the Azure portal to enable sign up, sign in, and user profile editing for my applications in Azure Active Directory B2C. + # Tutorial: Create user flows and custom policies in Azure Active Directory B2C A user flow lets you determine how users interact with your application when the ::: zone-end ::: zone pivot="b2c-custom-policy"- - If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. - [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant).-- ::: zone-end ::: zone pivot="b2c-user-flow" ## Create a sign-up and sign-in user flow -The sign-up and sign-in user flow handles both sign-up and sign-in experiences with a single configuration. Users of your application are led down the right path depending on the context. +The sign-up and sign-in user flow handles both experiences with a single configuration. Users of your application are led down the right path depending on the context. To create a sign-up and sign-in user flow: 1. Sign in to the [Azure portal](https://portal.azure.com). 1. 
If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu. 1. In the Azure portal, search for and select **Azure AD B2C**. 1. Under **Policies**, select **User flows**, and then select **New user flow**. - ![User flows page in portal with New user flow button highlighted](./media/tutorial-create-user-flows/sign-up-sign-in-user-flow.png) + ![Screenshot of the User flows page from the Azure portal with New user flow button highlighted.](./media/tutorial-create-user-flows/sign-up-sign-in-user-flow.png) 1. On the **Create a user flow** page, select the **Sign up and sign in** user flow. - ![Select a user flow page with Sign-up and sign-in flow highlighted](./media/tutorial-create-user-flows/select-user-flow-type.png) + ![Screenshot of the Select a user flow page from the Azure portal with the Sign-up and sign-in flow highlighted.](./media/tutorial-create-user-flows/select-user-flow-type.png) 1. Under **Select a version**, select **Recommended**, and then select **Create**. ([Learn more](user-flow-versions.md) about user flow versions.) The sign-up and sign-in user flow handles both sign-up and sign-in experiences w 1. For **Identity providers**, select **Email signup**. 1. For **User attributes and token claims**, choose the claims and attributes that you want to collect and send from the user during sign-up. For example, select **Show more**, and then choose attributes and claims for **Country/Region**, **Display Name**, and **Postal Code**. Select **OK**. - ![Attributes and claims selection page with three claims selected](./media/tutorial-create-user-flows/signup-signin-attributes.png) + ![Screenshot of the attributes and claims selection page from the Azure portal with three claims selected and highlighted.](./media/tutorial-create-user-flows/signup-signin-attributes.png) -1. Select **Create** to add the user flow. 
A prefix of *B2C_1_* is automatically prepended to the name. +1. Select **Create** to add the user flow. A prefix of *B2C_1_* is automatically prepended to the name you entered earlier. For example, *B2C_1_signupsignin1*. ### Test the user flow -1. Select the user flow you created to open its overview page. -1. At the top of the user flow overview page, select **Run user flow**. A pane opens at the right side of the page. +1. From the **User flows** page, select the user flow you just created to open its overview page. +1. At the top of the user flow overview page, select **Run user flow**. A pane will open at the right side of the page. 1. For **Application**, select the web application you wish to test, such as the one named *webapp1*. The **Reply URL** should show `https://jwt.ms`. 1. Select **Run user flow**, and then select **Sign up now**. - ![Run user flow page in portal with Run user flow button highlighted](./media/tutorial-create-user-flows/signup-signin-run-now.PNG) + ![A screenshot of the Run user flow page from the Azure portal with Run user flow button, Application text box and Reply URL text box highlighted.](./media/tutorial-create-user-flows/signup-signin-run-now.PNG) 1. Enter a valid email address, select **Send verification code**, enter the verification code that you receive, then select **Verify code**. 1. Enter a new password and confirm the password.-1. Select your country and region, enter the name that you want displayed, enter a postal code, and then select **Create**. The token is returned to `https://jwt.ms` and should be displayed to you. -1. You can now run the user flow again and you should be able to sign in with the account that you created. The returned token includes the claims that you selected of country/region, name, and postal code. +1. Select your country and region, enter the name that you want displayed, enter a postal code, and then select **Create**. 
The token is returned to `https://jwt.ms` and should be displayed in your browser. +1. You can now run the user flow again and you should be able to sign in with the account that you just created. The returned token includes the country/region, name, and postal code claims that you selected. > [!NOTE]-> The "Run user flow" experience is not currently compatible with the SPA reply URL type using authorization code flow. To use the "Run user flow" experience with these kinds of apps, register a reply URL of type "Web" and enable the implicit flow as described [here](tutorial-register-spa.md). +> The "Run user flow" experience is not currently compatible with the SPA reply URL type using authorization code flow. To use the "Run user flow" experience with these kinds of apps, [register a reply URL of type "Web" and enable the implicit flow](tutorial-register-spa.md). ## Enable self-service password reset To enable [self-service password reset](add-password-reset-policy.md) for the sign-up or sign-in user flow: -1. Select the sign-up or sign-in user flow you created. +1. From the **User flows** page, select the sign-up or sign-in user flow you just created. 1. Under **Settings** in the left menu, select **Properties**. 1. Under **Password configuration**, select **Self-service password reset**. 1. Select **Save**. ### Test the user flow -1. Select the user flow you created to open its overview page, then select **Run user flow**. +1. From the **User flows** page, select the user flow you just created to open its overview page, then select **Run user flow**. 1. For **Application**, select the web application you wish to test, such as the one named *webapp1*. The **Reply URL** should show `https://jwt.ms`. 1. Select **Run user flow**. 1. From the sign-up or sign-in page, select **Forgot your password?**. 1. Verify the email address of the account that you previously created, and then select **Continue**.-1. 
You now have the opportunity to change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you. +1. You now have the opportunity to change the password for the user. Change the password and select **Continue**. The token is returned to `https://jwt.ms` and should be displayed in your browser. ## Create a profile editing user flow If you want to enable users to edit their profile in your application, you use a ### Test the user flow 1. Select the user flow you created to open its overview page.-1. At the top of the user flow overview page, select **Run user flow**. A pane opens at the right side of the page. +1. At the top of the user flow overview page, select **Run user flow**. A pane will open at the right side of the page. 1. For **Application**, select the web application you wish to test, such as the one named *webapp1*. The **Reply URL** should show `https://jwt.ms`. 1. Select **Run user flow**, and then sign in with the account that you previously created.-1. You now have the opportunity to change the display name and job title for the user. Select **Continue**. The token is returned to `https://jwt.ms` and should be displayed to you. +1. You now have the opportunity to change the display name and job title for the user. Select **Continue**. The token is returned to `https://jwt.ms` and should be displayed in your browser. ::: zone-end ::: zone pivot="b2c-custom-policy" > [!TIP] > This article explains how to set up your tenant manually. You can automate the entire process from this article. Automating will deploy the Azure AD B2C [SocialAndLocalAccountsWithMFA starter pack](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack), which will provide Sign Up and Sign In, Password Reset and Profile Edit journeys. To automate the walkthrough below, visit the [IEF Setup App](https://aka.ms/iefsetup) and follow the instructions. 
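The tokens returned to `https://jwt.ms` during the tests above can also be inspected programmatically. A minimal sketch with illustrative claim names and values; it only decodes the payload segment, whereas production code must validate the token's signature and issuer with a proper JWT library:

```python
import base64
import json

def decode_jwt_claims(token: str) -> dict:
    """Return the claims (payload) segment of a JWT without verifying it."""
    payload = token.split(".")[1]
    # Base64url segments drop their '=' padding; restore it before decoding.
    payload += "=" * (-len(payload) % 4)
    return json.loads(base64.urlsafe_b64decode(payload))

# Build a sample, unsigned token carrying claims like those collected in this tutorial.
claims = {"name": "Casey Jensen", "country": "US", "postalCode": "98052"}
header = base64.urlsafe_b64encode(b'{"alg":"none"}').rstrip(b"=").decode()
body = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
sample_token = f"{header}.{body}."

decoded = decode_jwt_claims(sample_token)
print(decoded["postalCode"])  # -> 98052
```

The same helper works on a real ID token copied from `https://jwt.ms`, since it reads only the base64url-encoded payload.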
- ## Add signing and encryption keys for Identity Experience Framework applications 1. Sign in to the [Azure portal](https://portal.azure.com). If you want to enable users to edit their profile in your application, you use a ## Register Identity Experience Framework applications -Azure AD B2C requires you to register two applications that it uses to sign up and sign in users with local accounts: *IdentityExperienceFramework*, a web API, and *ProxyIdentityExperienceFramework*, a native app with delegated permission to the IdentityExperienceFramework app. Your users can sign up with an email address or username and a password to access your tenant-registered applications, which creates a "local account." Local accounts exist only in your Azure AD B2C tenant. +Azure AD B2C requires you to register two applications that it uses to sign up and sign in users with local accounts: *IdentityExperienceFramework*, a web API, and *ProxyIdentityExperienceFramework*, a native app with delegated permission to the IdentityExperienceFramework app. Your users can sign up with an email address or username and a password to access applications registered to your tenant, which creates a "local account." Local accounts exist only in your Azure AD B2C tenant. -You need to register these two applications in your Azure AD B2C tenant only once. +You will need to register these two applications in your Azure AD B2C tenant only once. ### Register the IdentityExperienceFramework application Now, grant permissions to the API scope you exposed earlier in the *IdentityExpe 1. In the left menu, under **Manage**, select **API permissions**. 1. Under **Configured permissions**, select **Add a permission**.-1. Select the **My APIs** tab, then select the **IdentityExperienceFramework** application. +1. Select the **APIs my organization uses** tab, then select the **IdentityExperienceFramework** application. 1. Under **Permission**, select the **user_impersonation** scope that you defined earlier. 1. 
Select **Add permissions**. As directed, wait a few minutes before proceeding to the next step. 1. Select **Grant admin consent for *<your tenant name>***. Now, grant permissions to the API scope you exposed earlier in the *IdentityExpe ## Custom policy starter pack -Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define technical profiles and user journeys. We provide starter packs with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described: +Custom policies are a set of XML files you upload to your Azure AD B2C tenant to define technical profiles and user journeys. We provide starter packs with several pre-built policies to get you going quickly. Each of these starter packs contains the smallest number of technical profiles and user journeys needed to achieve the scenarios described. For a more in-depth guide to Azure AD B2C custom policies, follow our [custom policies how-to guide series](custom-policies-series-overview.md). - **LocalAccounts** - Enables the use of local accounts only. - **SocialAccounts** - Enables the use of social (or federated) accounts only. - **SocialAndLocalAccounts** - Enables the use of both local and social accounts.-- **SocialAndLocalAccountsWithMFA** - Enables social, local, and multi-factor authentication options.+- **SocialAndLocalAccountsWithMFA** - Enables social, local, and multifactor authentication options. Each starter pack contains: In this article, you edit the XML custom policy files in the **SocialAndLocalAcc ### Get the starter pack -Get the custom policy starter packs from GitHub, then update the XML files in the SocialAndLocalAccounts starter pack with your Azure AD B2C tenant name. +Get the custom policy starter packs from GitHub, then update the XML files in the **SocialAndLocalAccounts** starter pack with your Azure AD B2C tenant name. 1. 
[Download the .zip file](https://github.com/Azure-Samples/active-directory-b2c-custom-policy-starterpack/archive/master.zip) or clone the repository: Add the application IDs to the extensions file *TrustFrameworkExtensions.xml*. ## Add Facebook as an identity provider -The **SocialAndLocalAccounts** starter pack includes Facebook social sign in. Facebook isn't required for using custom policies, but we use it here to demonstrate how you can enable federated social login in a custom policy. If you don't need to enable federated social login, use the **LocalAccounts** starter pack instead, and skip [Add Facebook as an identity provider](tutorial-create-user-flows.md?pivots=b2c-custom-policy#add-facebook-as-an-identity-provider) section. +The **SocialAndLocalAccounts** starter pack includes Facebook social sign in. Facebook isn't required for using custom policies, but we use it here to demonstrate how you can enable federated social login in a custom policy. If you don't need to enable federated social login, use the **LocalAccounts** starter pack instead, and skip to the [Upload the policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy#upload-the-policies) section. ### Create Facebook application |
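Updating the starter-pack XML files with your tenant name can be scripted after you download or clone the repository. A minimal sketch, assuming the starter-pack files use the `yourtenant.onmicrosoft.com` placeholder and using a hypothetical tenant name `contoso` (run from inside the starter-pack folder):

```shell
# Hypothetical tenant name -- replace with your own Azure AD B2C tenant.
TENANT="contoso"

# Rewrite the tenant placeholder in every policy XML file in this folder.
for f in *.xml; do
  [ -e "$f" ] || continue  # skip cleanly if no XML files are present
  sed -i.bak "s/yourtenant\.onmicrosoft\.com/${TENANT}.onmicrosoft.com/g" "$f"
done
```

The `.bak` backups let you diff the edit before uploading the policies.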
active-directory-b2c | Tutorial Delete Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-delete-tenant.md | Title: Clean up resources and delete a tenant - Azure Active Directory B2C description: Steps describing how to delete an Azure AD B2C tenant. Learn how to delete all tenant resources, and then delete the tenant.-+ -+ Last updated 03/06/2023 |
active-directory-b2c | Tutorial Register Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md | Title: "Tutorial: Register a web application in Azure Active Directory B2C" + Title: "Tutorial - Register a web application in Azure Active Directory B2C" description: Follow this tutorial to learn how to register a web application in Azure Active Directory B2C using the Azure portal.- - - Last updated 10/26/2022-+ ++#Customer intent: As a developer or IT admin, I want to register my web application in Azure AD B2C so that I can enable my users to sign up, sign in, and manage their profiles. + # Tutorial: Register a web application in Azure Active Directory B2C Before your [applications](application-types.md) can interact with Azure Active Directory B2C (Azure AD B2C), they must be registered in a tenant that you manage. This tutorial shows you how to register a web application using the Azure portal. -A "web application" refers to a traditional web application that performs most of the application logic on the server. They may be built using frameworks like ASP.NET Core, Spring (Java), Flask (Python), and Express (Node.js). +A "web application" refers to a traditional web application that performs most of the application logic on the server. They may be built using frameworks like ASP.NET Core, Spring (Java), Flask (Python), or Express (Node.js). > [!IMPORTANT] > If you're using a single-page application ("SPA") instead (e.g. using Angular, Vue, or React), learn [how to register a single-page application](tutorial-register-spa.md). If you haven't already created your own [Azure AD B2C Tenant](tutorial-create-te ## Register a web application -To register a web application in your Azure AD B2C tenant, you can use our new unified **App registrations** experience or our legacy **Applications (Legacy)** experience. [Learn more about the new experience](./app-registrations-training-guide.md). 
+To register a web application in your Azure AD B2C tenant, you can use our new unified **App registrations** experience. [Learn more about the new experience](./app-registrations-training-guide.md). -#### [App registrations](#tab/app-reg-ga/) +#### App registrations 1. Sign in to the [Azure portal](https://portal.azure.com). 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu. To register a web application in your Azure AD B2C tenant, you can use our new u 1. Under **Permissions**, select the *Grant admin consent to openid and offline_access permissions* check box. 1. Select **Register**. -#### [Applications (Legacy)](#tab/applications-legacy/) --1. Sign in to the [Azure portal](https://portal.azure.com). -1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu. -1. In the Azure portal, search for and select **Azure AD B2C**. -1. Select **Applications (Legacy)**, and then select **Add**. -1. Enter a name for the application. For example, *webapp1*. -1. For **Include web app/ web API**, select **Yes**. -1. For **Reply URL**, enter an endpoint where Azure AD B2C should return any tokens that your application requests. For example, you could set it to listen locally at `http://localhost:5000`. You can add and modify redirect URIs in your registered applications at any time. -- The following restrictions apply to redirect URIs: -- * The reply URL must begin with the scheme `https`, unless using `localhost`. - * The reply URL is case-sensitive. Its case must match the case of the URL path of your running application. For example, if your application includes as part of its path `.../abc/response-oidc`, do not specify `.../ABC/response-oidc` in the reply URL. 
Because the web browser treats paths as case-sensitive, cookies associated with `.../abc/response-oidc` may be excluded if redirected to the case-mismatched `.../ABC/response-oidc` URL. - * The reply URL should include or exclude the trailing forward slash as your application expects it. For example, `https://contoso.com/auth-response` and `https://contoso.com/auth-response/` might be treated as nonmatching URLs in your application. --1. Select **Create** to complete the application registration. --* * * - > [!TIP] > If you don't see the app(s) you created under **App registrations**, refresh the portal. To register a web application in your Azure AD B2C tenant, you can use our new u For a web application, you need to create an application secret. The client secret is also known as an *application password*. The secret will be used by your application to exchange an authorization code for an access token. -#### [App registrations](#tab/app-reg-ga/) +#### App registrations 1. In the **Azure AD B2C - App registrations** page, select the application you created, for example *webapp1*. 1. In the left menu, under **Manage**, select **Certificates & secrets**. For a web application, you need to create an application secret. The client secr 1. Under **Expires**, select a duration for which the secret is valid, and then select **Add**. 1. Record the secret's **Value** for use in your client application code. This secret value is never displayed again after you leave this page. You use this value as the application secret in your application's code. -#### [Applications (Legacy)](#tab/applications-legacy/) --1. In the **Azure AD B2C - Applications** page, select the application you created, for example *webapp1*. -1. Select **Keys** and then select **Generate key**. -1. Select **Save** to view the key. Make note of the **App key** value. You use this value as the application secret in your application's code. 
--* * * - > [!NOTE] > For security purposes, you can roll over the application secret periodically, or immediately in case of emergency. Any application that integrates with Azure AD B2C should be prepared to handle a secret rollover event, no matter how frequently it may occur. You can set two application secrets, allowing your application to keep using the old secret during an application secret rotation event. To add another client secret, repeat steps in this section. |
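The secret created above is what a confidential web app presents when exchanging an authorization code for an access token. As a non-authoritative sketch of that token request, with a hypothetical tenant name `contoso`, a hypothetical user flow `B2C_1_signupsignin1`, and placeholder credential values:

```python
from urllib.parse import urlencode

# Hypothetical values -- substitute your own tenant, user flow, client ID,
# secret value (from "Certificates & secrets"), and redirect URI.
tenant = "contoso"
user_flow = "B2C_1_signupsignin1"

token_url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
    f"{user_flow}/oauth2/v2.0/token"
)
body = urlencode({
    "grant_type": "authorization_code",
    "client_id": "11111111-aaaa-bbbb-cccc-222222222222",
    "client_secret": "<application-secret-value>",
    "code": "<authorization-code-from-sign-in>",
    "redirect_uri": "https://jwt.ms",
    "scope": "openid offline_access",
})
print(token_url)
```

An HTTP `POST` of `body` (form-encoded) to `token_url` completes the exchange; keep the secret server-side only.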
active-directory-b2c | Tutorial Register Spa | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-spa.md | Title: Register a single-page application in Azure Active Directory B2C description: Follow this guide to learn how to register a single-page application (SPA) in Azure Active Directory B2C using the Azure portal.-+ -+ Last updated 11/20/2023-+ |
active-directory-b2c | User Flow Custom Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md | Title: Define custom attributes in Azure Active Directory B2C description: Define custom attributes for your application in Azure Active Directory B2C to collect information about your customers.-+ -+ Last updated 03/09/2023-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | User Flow Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-overview.md | Title: User flows and custom policies in Azure Active Directory B2C description: Learn more about built-in user flows and the custom policy extensible policy framework of Azure Active Directory B2C.- - - Previously updated : 10/24/2022- Last updated : 11/09/2023 + +# Customer intent: As a developer, I want to understand the difference between user flows and custom policies, so that I can choose the best method for my business needs. I want to understand the scenarios that can be enabled with each method, and how to integrate them with my applications. + # User flows and custom policies overview In Azure AD B2C, there are two ways to provide identity user experiences: The following screenshot shows the user flow settings UI, versus custom policy configuration files. This article gives a brief overview of user flows and custom policies, and helps you decide which method will work best for your business needs. 
To set up the most common identity tasks, the Azure portal includes several pred You can configure user flow settings like these to control identity experience behaviors in your applications: * Account types used for sign-in, such as social accounts like Facebook, or local accounts that use an email address and password for sign-in-* Attributes to be collected from the consumer, such as first name, postal code, or country/region of residency -* Microsoft Entra multifactor authentication +* Attributes to be collected from the consumer, such as first name, last name, postal code, or country/region of residency +* Multifactor authentication * Customization of the user interface * Set of claims in a token that your application receives after the user completes the user flow * Session management Most of the common identity scenarios for apps can be defined and implemented ef Custom policies are configuration files that define the behavior of your Azure AD B2C tenant user experience. While user flows are predefined in the Azure AD B2C portal for the most common identity tasks, custom policies can be fully edited by an identity developer to complete many different tasks. -A custom policy is fully configurable and policy-driven. It orchestrates trust between entities in standard protocols. For example, OpenID Connect, OAuth, SAML, and a few non-standard ones, for example REST API-based system-to-system claims exchanges. The framework creates user-friendly, white-labeled experiences. +A custom policy is fully configurable and policy-driven. It orchestrates trust between entities in standard protocols such as OpenID Connect, OAuth, and SAML, as well as a few non-standard ones, for example REST API-based system-to-system claims exchanges. The framework creates user-friendly, white-labeled experiences. The custom policy gives you the ability to construct user journeys with any combination of steps. 
For example: The custom policy gives you the ability to construct user journeys with any comb Each user journey is defined by a policy. You can build as many or as few policies as you need to enable the best user experience for your organization. -![Diagram showing an example of a complex user journey enabled by IEF](media/user-flow-overview/custom-policy-diagram.png) A custom policy is defined by multiple XML files that refer to each other in a hierarchical chain. The XML elements define the claims schema, claims transformations, content definitions, claims providers, technical profiles, user journey orchestration steps, and other aspects of the identity experience. You can create many user flows, or custom policies of different types in your te When a user wants to sign in to your application, the application initiates an authorization request to a user flow- or custom policy-provided endpoint. The user flow or custom policy defines and controls the user's experience. When they complete a user flow, Azure AD B2C generates a token, then redirects the user back to your application. -![Mobile app with arrows showing flow between Azure AD B2C sign-in page](media/user-flow-overview/app-integration.png) Multiple applications can use the same user flow or custom policy. A single application can use multiple user flows or custom policies. For example, to sign in to an application, the application uses the *sign up or Your application triggers a user flow by using a standard HTTP authentication request that includes the user flow or custom policy name. A customized [token](tokens-overview.md) is received as a response. - ## Next steps - To create the recommended user flows, follow the instructions in [Tutorial: Create a user flow](tutorial-create-user-flows.md). |
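The standard HTTP authentication request described above can be pictured as the URL an app redirects the user to. A sketch only, assuming a hypothetical `contoso` tenant and a user flow named `B2C_1_signupsignin1`:

```python
from urllib.parse import urlencode

# Hypothetical names -- substitute your tenant, your user flow or custom
# policy name, and your registered client ID / redirect URI.
tenant = "contoso"
policy = "B2C_1_signupsignin1"

authorize_url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
    f"{policy}/oauth2/v2.0/authorize?"
    + urlencode({
        "client_id": "11111111-aaaa-bbbb-cccc-222222222222",
        "response_type": "code",
        "redirect_uri": "https://jwt.ms",
        "scope": "openid",
        "state": "app-defined-state",
    })
)
print(authorize_url)
```

Swapping the `policy` segment is all it takes for the same application to trigger a different user flow or custom policy.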
active-directory-b2c | User Flow Versions Legacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-versions-legacy.md | Title: Legacy user flow versions in Azure Active Directory B2C description: Learn about legacy versions of user flows available in Azure Active Directory B2C.-+ -+ Last updated 07/30/2020 |
active-directory-b2c | User Flow Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-versions.md | Title: User flow versions in Azure Active Directory B2C description: Learn about the versions of user flows available in Azure Active Directory B2C.-+ -+ Last updated 08/17/2021 |
active-directory-b2c | User Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-migration.md | Title: User migration approaches description: Migrate user accounts from another identity provider to Azure AD B2C by using the pre migration or seamless migration methods.-+ -+ Last updated 12/29/2022 -+ # Migrate users to Azure AD B2C |
active-directory-b2c | User Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-overview.md | Title: Overview of user accounts in Azure Active Directory B2C description: Learn about the types of user accounts that can be used in Azure Active Directory B2C.-+ -+ Last updated 12/28/2022 |
active-directory-b2c | User Profile Attributes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md | Title: User profile attributes in Azure Active Directory B2C description: Learn about the user resource type attributes that Azure AD B2C directory user profile supports. Find out about built-in attributes, extensions, and how attributes map to Microsoft Graph.- - - Last updated 11/20/2023 |
active-directory-b2c | Userinfo Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userinfo-endpoint.md | Title: UserInfo endpoint description: Define a UserInfo endpoint in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 09/20/2021-+ zone_pivot_groups: b2c-policy-type |
active-directory-b2c | Userjourneys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md | Title: UserJourneys description: Specify the UserJourneys element of a custom policy in Azure Active Directory B2C.-+ -+ Last updated 01/27/2023 |
active-directory-b2c | Validation Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/validation-technical-profile.md | Title: Define a validation technical profile in a custom policy description: Validate claims by using a validation technical profile in a custom policy in Azure Active Directory B2C.-+ -+ Last updated 03/16/2020 |
active-directory-b2c | View Audit Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md | Title: Access and review audit logs description: How to access Azure AD B2C audit logs programmatically and in the Azure portal.-+ -+ Last updated 06/08/2022 |
active-directory-b2c | Whats New Docs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md | Last updated 11/01/2023 -+ |
ai-services | Concept Add On Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md | monikerRange: '>=doc-intel-3.1.0' Document Intelligence supports more sophisticated and modular analysis capabilities. Use the add-on features to extend the results to include more features extracted from your documents. Some add-on features incur an extra cost. These optional features can be enabled and disabled depending on the scenario of the document extraction. The following add-on capabilities are available for `2023-07-31 (GA)` and later releases: -* [`ocr.highResolution`](#high-resolution-extraction) +* [`ocrHighResolution`](#high-resolution-extraction) -* [`ocr.formula`](#formula-extraction) +* [`formulas`](#formula-extraction) -* [`ocr.font`](#font-property-extraction) +* [`styleFont`](#font-property-extraction) ++* [`barcodes`](#barcode-property-extraction) ++* [`languages`](#language-detection) -* [`ocr.barcode`](#barcode-property-extraction) :::moniker-end :::moniker range="doc-intel-4.0.0" > [!NOTE] >-> Add-on capabilities are available within all models except for the [Read model](concept-read.md). +> Not all add-on capabilities are supported by all models. For more information, *see* [model data extraction](concept-model-overview.md#model-data-extraction). 
The following add-on capability is available for `2023-10-31-preview` and later releases: +* [`keyValuePairs`](#key-value-pairs) * [`queryFields`](#query-fields) > [!NOTE] The `ocr.barcode` capability extracts all identified barcodes in the `barcodes` | `ITF` |:::image type="content" source="media/barcodes/interleaved-two-five.png" alt-text="Screenshot of the interleaved-two-of-five barcode (ITF).":::| | `Data Matrix` |:::image type="content" source="media/barcodes/datamatrix.gif" alt-text="Screenshot of the Data Matrix.":::| +## Language detection ++The language detection add-on predicts the primary language detected for each text line, along with its `confidence`, in the `languages` collection under `analyzeResult`. ++```json +"languages": [ + { + "spans": [ + { + "offset": 0, + "length": 131 + } + ], + "locale": "en", + "confidence": 0.7 + } +] +``` + :::moniker range="doc-intel-4.0.0" +## Key-value pairs ++Key-value pairs are specific spans within the document that identify a label or key and its associated response or value. In a structured form, these pairs could be the label and the value the user entered for that field. In an unstructured document, they could be the date a contract was executed on based on the text in a paragraph. The AI model is trained to extract identifiable keys and values based on a wide variety of document types, formats, and structures. ++Keys can also exist in isolation when the model detects that a key exists, with no associated value or when processing optional fields. For example, a middle name field can be left blank on a form in some instances. Key-value pairs are spans of text contained in the document. For documents where the same value is described in different ways, for example, customer/user, the associated key is either customer or user (based on context). + ## Query fields * Document Intelligence now supports query field extractions. 
With query field extraction, you can add fields to the extraction process using a query request without the need for added training. For query field extraction, specify the fields you want to extract and Document :::image type="content" source="media/studio/query-fields.png" alt-text="Screenshot of the query fields button in Document Intelligence Studio."::: -* You can pass a list of field labels like `Party1`, `Party2`, `TermsOfUse`, `PaymentTerms`, `PaymentDate`, and `TermEndDate`" as part of the analyze document request. +* You can pass a list of field labels like `Party1`, `Party2`, `TermsOfUse`, `PaymentTerms`, `PaymentDate`, and `TermEndDate` as part of the `analyze document` request. :::image type="content" source="media/studio/query-field-select.png" alt-text="Screenshot of query fields selection window in Document Intelligence Studio."::: |
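The `languages` collection shown earlier is plain JSON in the analyze result. A small sketch of reading it — the payload below copies the shape from the language-detection example, and the 0.5 threshold is an arbitrary choice for illustration:

```python
import json

# JSON shape taken from the language-detection example above.
analyze_result = json.loads("""
{
  "languages": [
    {
      "spans": [{"offset": 0, "length": 131}],
      "locale": "en",
      "confidence": 0.7
    }
  ]
}
""")

# Keep only detections above an arbitrary confidence threshold.
confident = [
    lang["locale"]
    for lang in analyze_result["languages"]
    if lang["confidence"] >= 0.5
]
print(confident)  # ['en']
```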
ai-services | Concept Business Card | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-business-card.md | Document Intelligence **v3.1:2023-07-31 (GA)** supports the following tools, app | Feature | Resources | Model ID | |-|-|--|-|**Business card model**| • [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)<br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)<br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)<br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)<br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)<br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-businessCard**| +|**Business card model**| • [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=businessCard)<br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)<br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)<br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)<br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)<br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-businessCard**| :::moniker-end ::: moniker range=">=doc-intel-3.0.0" |
ai-services | Concept Composed Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md | Document Intelligence **v4.0:2023-10-31-preview** supports the following tools, | Feature | Resources | |-|-|-|_**Custom model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/BuildDocumentModel)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)| -| _**Composed model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/ComposeDocumentModel)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| +|_**Custom model**_| • [Document Intelligence 
Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)| +| _**Composed model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| :::moniker-end Document Intelligence **v3.1:2023-07-31 (GA)** supports the following tools, app | Feature | Resources | |-|-|-|_**Custom model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [C# 
SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| -| _**Composed model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/ComposeDocumentModel)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| +|_**Custom model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| +| _**Composed model**_| • 
[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| :::moniker-end |
ai-services | Concept Contract | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-contract.md | The Document Intelligence contract model uses powerful Optical Character Recogni ## Automated contract processing -Automated contract processing is the process of extracting key contract fields from documents. Historically, the contract analysis process has been done manually and, hence, very time consuming. Accurate extraction of key data from contracts is typically the first and one of the most critical steps in the contract automation process. +Automated contract processing is the process of extracting key contract fields from documents. Historically, the contract analysis process is performed manually and is, hence, very time consuming. Accurate extraction of key data from contracts is typically the first and one of the most critical steps in the contract automation process. ## Development options Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**Contract model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-contract**| +|**Contract model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST 
API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-contract**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**Contract model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-contract**| +|**Contract model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript 
SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-contract**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" |
ai-services | Concept Custom Classifier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-classifier.md | |
ai-services | Concept Custom Label Tips | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-label-tips.md | |
ai-services | Concept Custom Label | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-label.md | Tabular fields are also useful when extracting repeating information within a do * View the REST APIs: > [!div class="nextstepaction"]- > [Document Intelligence API v4.0:2023-10-31-preview](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument) + > [Document Intelligence API v4.0:2023-10-31-preview](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP) > [!div class="nextstepaction"]- > [Document Intelligence API v3.1:2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) + > [Document Intelligence API v3.1:2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) |
ai-services | Concept Custom Lifecycle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-lifecycle.md | |
ai-services | Concept Custom Neural | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md | As of October 18, 2022, Document Intelligence custom neural model training will > [!TIP] > You can [copy a model](disaster-recovery.md#copy-api-overview) trained in one of the select regions listed to **any other region** and use it accordingly. >-> Use the [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/CopyDocumentModelTo) or [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region. +> Use the [**REST API**](/rest/api/aiservices/document-models/copy-model-to?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP) or [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region. :::moniker-end As of October 18, 2022, Document Intelligence custom neural model training will > [!TIP] > You can [copy a model](disaster-recovery.md#copy-api-overview) trained in one of the select regions listed to **any other region** and use it accordingly. >-> Use the [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/CopyDocumentModelTo) or [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region. +> Use the [**REST API**](/rest/api/aiservices/document-models/copy-model-to?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) or [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) to copy a model to another region. 
:::moniker-end Custom neural models are available in the [v3.0 and later models](v3-1-migration | Document Type | REST API | SDK | Label and Test Models| |--|--|--|--|-| Custom document | [Document Intelligence 3.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) +| Custom document | [Document Intelligence 3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) The build operation to train a model supports a new ```buildMode``` property. To train a custom neural model, set the ```buildMode``` to ```neural```. |
ai-services | Concept Custom Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-template.md | Template models rely on a defined visual template; changes to the template resul ::: moniker range="doc-intel-4.0.0" -Custom template models are generally available with the [v4.0 API](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/BuildDocumentModel). If you're starting with a new project or have an existing labeled dataset, use the v3.1 or v3.0 API with Document Intelligence Studio to train a custom template model. +Custom template models are generally available with the [v4.0 API](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP). If you're starting with a new project or have an existing labeled dataset, use the v3.1 or v3.0 API with Document Intelligence Studio to train a custom template model. | Model | REST API | SDK | Label and Test Models| |--|--|--|--|-| Custom template | [v3.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/BuildDocumentModel)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| +| Custom template | [v3.1 API](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| With the v3.0 and later APIs, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set the ```buildMode``` to ```template```. 
https://{endpoint}/documentintelligence/documentModels:build?api-version=2023-10 ::: moniker range="doc-intel-3.1.0" -Custom template models are generally available with the [v3.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel). If you're starting with a new project or have an existing labeled dataset, use the v3.1 or v3.0 API with Document Intelligence Studio to train a custom template model. +Custom template models are generally available with the [v3.1 API](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP). If you're starting with a new project or have an existing labeled dataset, use the v3.1 or v3.0 API with Document Intelligence Studio to train a custom template model. | Model | REST API | SDK | Label and Test Models| |--|--|--|--|-| Custom template | [v3.1 API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| +| Custom template | [v3.1 API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| With the v3.0 and later APIs, the build operation to train a model supports a new ```buildMode``` property. To train a custom template model, set the ```buildMode``` to ```template```. |
ai-services | Concept Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom.md | The custom template or custom form model relies on a consistent visual template Your training set consists of structured documents where the formatting and layout are static and constant from one document instance to the next. Custom template models support key-value pairs, selection marks, tables, signature fields, and regions. Template models can be trained on documents in any of the [supported languages](language-support.md). For more information, *see* [custom template models](concept-custom-template.md). -If the language of your documents and extraction scenarios supports custom neural models, it's recommended that you use custom neural models over template models for higher accuracy. +If the language of your documents and extraction scenarios supports custom neural models, we recommend that you use custom neural models over template models for higher accuracy. > [!TIP] > If the language of your documents and extraction scenarios supports custom neura ### Build mode -The build custom model operation has added support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode. +The build custom model operation adds support for the *template* and *neural* custom models. Previous versions of the REST API and SDKs only supported a single build mode that is now known as the *template* mode. * Template models only accept documents that have the same basic page structure (a uniform visual appearance) or the same relative positioning of elements within the document. 
Document Intelligence v3.1 and later models support the following tools, applica | Feature | Resources | Model ID| |||:|-|Custom model| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</br>• [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|***custom-model-id***| +|Custom model| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/customform/projects)</br>• [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|***custom-model-id***| :::moniker-end The following table describes the features available with the associated tools a | Document type | REST API | SDK | Label and Test Models| |--|--|--|--|-| Custom template v 4.0 v3.1 v3.0 | [Document Intelligence 3.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| -| Custom neural v4.0 v3.1 v3.0 | [Document Intelligence 3.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) +| Custom 
template v 4.0 v3.1 v3.0 | [Document Intelligence 3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)| +| Custom neural v4.0 v3.1 v3.0 | [Document Intelligence 3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) | Custom form v2.1 | [Document Intelligence 2.1 GA API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeWithCustomForm) | [Document Intelligence SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-2.1.0&preserve-view=true?pivots=programming-language-python)| [Sample labeling tool](https://fott-2-1.azurewebsites.net/)| > [!NOTE] > Custom template models trained with the 3.0 API will have a few improvements over the 2.1 API stemming from improvements to the OCR engine. Datasets used to train a custom template model using the 2.1 API can still be used to train a new model using the 3.0 API. The following table describes the features available with the associated tools a * **Custom model v4.0, v3.1 and v3.0 APIs** supports signature detection for custom forms. When you train custom models, you can specify certain fields as signatures. When a document is analyzed with your custom model, it indicates whether a signature was detected or not. 
* [Document Intelligence v3.1 migration guide](v3-1-migration-guide.md): This guide shows you how to use the v3.0 version in your applications and workflows.-* [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument): This API shows you more about the v3.0 version and new capabilities. +* [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP): This API shows you more about the v3.0 version and new capabilities. 1. Build your training dataset. |
ai-services | Concept Document Intelligence Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-document-intelligence-studio.md | |
ai-services | Concept General Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-general-document.md | -|Layout model with the optional query string parameter **`features=keyValuePairs`** enabled.|• v4:2023-10-31-preview</br>• v3.1:2023-07-31 (GA) |**`prebuilt-layout`**| +|`Layout` model with the optional query string parameter **`features=keyValuePairs`** enabled.|• v4:2023-10-31-preview</br>• v3.1:2023-07-31 (GA) |**`prebuilt-layout`**| |General document model|• v3.1:2023-07-31 (GA)</br>• v3.0:2022-08-31 (GA)</br>• v2.1 (GA)|**`prebuilt-document`**| :::moniker-end Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**General document model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-document**| +|**General document model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java 
SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-document**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" Keys can also exist in isolation when the model detects that a key exists, with * Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows. -* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument). +* Explore our [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP). > [!div class="nextstepaction"] > [Try the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) |
ai-services | Concept Health Insurance Card | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-health-insurance-card.md | Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**Health insurance card model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-healthInsuranceCard.us**| +|**Health insurance card model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-healthInsuranceCard.us**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**Health insurance card model**|• [**Document Intelligence 
Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-healthInsuranceCard.us**| +|**Health insurance card model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-healthInsuranceCard.us**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" See how data is extracted from health insurance cards using the Document Intelli * Follow our [**Document Intelligence v3.1 migration guide**](v3-1-migration-guide.md) to learn how to use the v3.1 version in your applications and workflows. -* Explore our [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) to learn more about the v3.1 version and new capabilities. 
+* Explore our [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) to learn more about the v3.1 version and new capabilities. ## Next steps |
ai-services | Concept Id Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-id-document.md | Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**ID document model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-idDocument**| +|**ID document model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-idDocument**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**ID document model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST 
API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-idDocument**| +|**ID document model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-idDocument**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" |
ai-services | Concept Invoice | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-invoice.md | Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**Invoice model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-invoice**| +|**Invoice model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-invoice**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**Invoice model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST 
API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-invoice**| +|**Invoice model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-invoice**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" |
ai-services | Concept Layout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-layout.md | Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**Layout model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-layout**| +|**Layout model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-layout**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**Layout model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST 
API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-layout**| +|**Layout model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-layout**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" See how data, including text, tables, table headers, selection marks, and struct * Select the **Fetch** button. -1. Select **Run Layout**. The Document Intelligence Sample Labeling tool calls the Analyze Layout API and analyze the document. +1. Select **Run Layout**. The Document Intelligence Sample Labeling tool calls the `Analyze Layout` API and analyzes the document. 
:::image type="content" source="media/fott-layout.png" alt-text="Screenshot of `Layout` dropdown window."::: For large multi-page documents, use the `pages` query parameter to indicate spec ## The Get Analyze Layout Result operation -The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID the Analyze Layout operation created. It returns a JSON response that contains a **status** field with the following possible values. +The second step is to call the [Get Analyze Layout Result](https://westcentralus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/GetAnalyzeLayoutResult) operation. This operation takes as input the Result ID the `Analyze Layout` operation created. It returns a JSON response that contains a **status** field with the following possible values. |Field| Type | Possible values | |:--|:-:|:-| |
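The two-step flow this entry quotes — submit a document to `Analyze Layout`, then poll `Get Analyze Layout Result` with the returned Result ID until the **status** field is terminal — can be sketched as follows. The helper names, endpoint, and key are illustrative placeholders, not names from the source article:

```python
# Sketch of the v2.1 two-step Layout flow: POST Analyze Layout, read the
# Operation-Location header, then poll Get Analyze Layout Result.
from urllib.parse import urlparse

TERMINAL_STATUSES = {"succeeded", "failed"}  # terminal values of the "status" field

def result_id_from_operation_location(operation_location: str) -> str:
    """The Operation-Location header returned by Analyze Layout ends in the Result ID."""
    return urlparse(operation_location).path.rstrip("/").rsplit("/", 1)[-1]

def is_terminal(status: str) -> bool:
    """True once the Get Analyze Layout Result response will not change further."""
    return status in TERMINAL_STATUSES

# At runtime (requires the `requests` package and a real resource):
#   resp = requests.post(f"{ENDPOINT}/formrecognizer/v2.1/layout/analyze",
#                        headers={"Ocp-Apim-Subscription-Key": KEY,
#                                 "Content-Type": "application/pdf"}, data=pdf_bytes)
#   rid = result_id_from_operation_location(resp.headers["Operation-Location"])
#   ...then GET f"{ENDPOINT}/formrecognizer/v2.1/layout/analyzeResults/{rid}"
#   in a loop until is_terminal(body["status"]).
```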
ai-services | Concept Model Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md | -|Model|[2023-10-31-preview](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)|[2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)|[2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)| +|Model|[2023-10-31-preview](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)|[2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|[2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2022-08-31/operations/AnalyzeDocument)|[v2.1 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)| |-|--||--|| |[Add-on capabilities](concept-add-on-capabilities.md) | ✔️| ✔️| n/a| n/a| |[Business Card](concept-business-card.md) | deprecated|✔️|✔️|✔️ | The following table shows the available models for each current preview and stab | [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the document(s) within and can also identify multiple documents or multiple instances of a single document within an input file. | [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model. 
-For all models, except Business card model, Document Intelligence now supports add-on capabilities to allow for more sophisticated analysis. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are four add-on capabilities available for the `2023-07-31` (GA) and later API version: +For all models, except Business card model, Document Intelligence now supports add-on capabilities to allow for more sophisticated analysis. These optional capabilities can be enabled and disabled depending on the scenario of the document extraction. There are seven add-on capabilities available for the `2023-07-31` (GA) and later API version: -* [`ocr.highResolution`](concept-add-on-capabilities.md#high-resolution-extraction) -* [`ocr.formula`](concept-add-on-capabilities.md#formula-extraction) -* [`ocr.font`](concept-add-on-capabilities.md#font-property-extraction) -* [`ocr.barcode`](concept-add-on-capabilities.md#barcode-property-extraction) +* [`ocrHighResolution`](concept-add-on-capabilities.md#high-resolution-extraction) +* [`formulas`](concept-add-on-capabilities.md#formula-extraction) +* [`styleFont`](concept-add-on-capabilities.md#font-property-extraction) +* [`barcodes`](concept-add-on-capabilities.md#barcode-property-extraction) +* [`languages`](concept-add-on-capabilities.md#language-detection) +* [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) (2023-10-31-preview) +* [`queryFields`](concept-add-on-capabilities.md#query-fields) (2023-10-31-preview) ## Analysis features The Layout analysis model analyzes and extracts text, tables, selection marks, a > > [Learn more: layout model](concept-layout.md) - ### Health insurance card :::image type="icon" source="media/studio/health-insurance-logo.png"::: The US tax document models analyze and extract key fields and line items from a |US Tax 1098-T|Extract qualified tuition details.|**prebuilt-tax.us.1098T**| |US Tax 1099|Extract Information from 1099 
forms.|**prebuilt-tax.us.1099(variations)**| - ***Sample W-2 document processed using [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.w2)***: :::image type="content" source="./media/studio/w-2.png" alt-text="Screenshot of a sample W-2."::: |
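The renamed add-on capabilities listed in this entry are opted into per request. A minimal sketch of composing such a request URL, assuming the `features` query parameter carries a comma-separated list of capability names (the endpoint and model ID below are placeholders):

```python
# Sketch: an Analyze request URL that enables add-on capabilities via the
# `features` query parameter. All concrete values here are placeholders.
def analyze_url(endpoint: str, model_id: str, api_version: str, features: list[str]) -> str:
    """Build the documentModels :analyze URL with selected add-on features."""
    base = f"{endpoint}/formrecognizer/documentModels/{model_id}:analyze"
    return f"{base}?api-version={api_version}&features={','.join(features)}"
```

For example, enabling formula and font extraction on the layout model would pass `["formulas", "styleFont"]` as `features`.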
ai-services | Concept Query Fields | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-query-fields.md | |
ai-services | Concept Read | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md | Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**Read OCR model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-read**| +|**Read OCR model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-read**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**Read OCR model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST 
API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-read**| +|**Read OCR model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-read**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" Complete a Document Intelligence quickstart: Explore our REST API: > [!div class="nextstepaction"]-> [Document Intelligence API v3.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +> [Document Intelligence API v3.1](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) |
ai-services | Concept Receipt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-receipt.md | Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**Receipt model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-receipt**| +|**Receipt model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**prebuilt-receipt**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**Receipt model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST 
API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-receipt**| +|**Receipt model**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**prebuilt-receipt**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" |
ai-services | Concept Tax Document | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-tax-document.md | Document Intelligence v4.0 (2023-10-31-preview) supports the following tools, ap | Feature | Resources | Model ID | |-|-|--|-|**US tax form models**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/document-intelligence-api-2023-10-31-preview/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**• prebuilt-tax.us.W-2</br>• prebuilt-tax.us.1098</br>• prebuilt-tax.us.1098E</br>• prebuilt-tax.us.1098T</br>• prebuilt-tax.us.1099(Variations)**| +|**US tax form models**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-10-31-preview&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|**• prebuilt-tax.us.W-2</br>• prebuilt-tax.us.1098</br>• prebuilt-tax.us.1098E</br>• prebuilt-tax.us.1098T</br>• prebuilt-tax.us.1099(Variations)**| ::: moniker-end ::: moniker range="doc-intel-3.1.0" Document 
Intelligence v3.1 supports the following tools, applications, and libra | Feature | Resources | Model ID | |-|-|--|-|**US tax form models**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**• prebuilt-tax.us.W-2</br>• prebuilt-tax.us.1098</br>• prebuilt-tax.us.1098E</br>• prebuilt-tax.us.1098T**| +|**US tax form models**|• [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com)</br>• [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|**• prebuilt-tax.us.W-2</br>• prebuilt-tax.us.1098</br>• prebuilt-tax.us.1098E</br>• prebuilt-tax.us.1098T**| ::: moniker-end ::: moniker range="doc-intel-3.0.0" The following are the fields extracted from a 1099-nec tax form in the JSON outp |Name| Type | Description | Example output | |:--|:-|:-|::|-| TaxYear | String | Tax Year extracted from Form 1099-NEC.| 2021 | -| Payer | Object | An object that contains the payers's TIN, Name, Address, and 
PhoneNumber | | -| Recipient | Object | An object that contains the recipient's TIN, Name, Address, and AccountNumber| | -| Box1 |number|Box 1 extracted from Form 1099-NEC.| 123456 | -| Box2 |boolean|Box 2 extracted from Form 1099-NEC.| true | -| Box4 |number|Box 4 extracted from Form 1099-NEC.| 123456 | -| StateTaxesWithheld |array| State Taxes Withheld extracted from Form 1099-NEC (boxes 5,6, and 7)| | +| `TaxYear` | String | Tax Year extracted from Form 1099-NEC.| 2021 | +| `Payer` | Object | An object that contains the payer's TIN, Name, Address, and PhoneNumber | | +| `Recipient` | Object | An object that contains the recipient's TIN, Name, Address, and AccountNumber| | +| `Box1` |number|Box 1 extracted from Form 1099-NEC.| 123456 | +| `Box2` |boolean|Box 2 extracted from Form 1099-NEC.| true | +| `Box4` |number|Box 4 extracted from Form 1099-NEC.| 123456 | +| `StateTaxesWithheld` |array| State Taxes Withheld extracted from Form 1099-NEC (boxes 5, 6, and 7)| | The tax documents key-value pairs and line items extracted are in the `documentResults` section of the JSON output. |
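The 1099-NEC fields tabulated in this entry live in the `documentResults` section of the JSON output. A sketch of reading one of them back out; the exact nesting shown is an assumption inferred from that description, and the payload below is synthetic:

```python
# Sketch: look up a named field (e.g. Box1) in an analyze-result payload.
# The analyzeResult/documentResults/fields nesting is assumed, not quoted
# from the source article.
def field_value(result: dict, name: str):
    """Return the extracted value for `name`, or None if absent."""
    for doc in result.get("analyzeResult", {}).get("documentResults", []):
        fields = doc.get("fields", {})
        if name in fields:
            return fields[name].get("value")
    return None
```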
ai-services | Create Sas Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/create-sas-tokens.md | The SAS URL includes a special set of [query parameters](/rest/api/storageservic ### REST API -To use your SAS URL with the [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel), add the SAS URL to the request body: +To use your SAS URL with the [REST API](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP), add the SAS URL to the request body: ```json { |
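The entry above notes that the SAS URL goes in the Build Document Model request body. A minimal sketch of assembling that body, assuming the `azureBlobSource.containerUrl` shape used by the v3 build operation; the model ID and container URL are placeholders:

```python
# Sketch of a Build Document Model request body: the container SAS URL is
# supplied under azureBlobSource.containerUrl. Values are placeholders.
import json

def build_model_body(model_id: str, container_sas_url: str, build_mode: str = "template") -> str:
    """Serialize the request body for the build-model operation."""
    body = {
        "modelId": model_id,
        "buildMode": build_mode,  # "template" or "neural"
        "azureBlobSource": {"containerUrl": container_sas_url},
    }
    return json.dumps(body)
```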
ai-services | Disaster Recovery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/disaster-recovery.md | Ocp-Apim-Subscription-Key: {SOURCE_FORM_RECOGNIZER_RESOURCE_KEY} ### Track the target model ID -You can also use the **[Get model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/GetModel)** API to track the status of the operation by querying the target model. Call the API using the target model ID that you copied down from the [Generate Copy authorization request](#generate-copy-authorization-request) response. +You can also use the **[Get model](/rest/api/aiservices/document-models/get-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)** API to track the status of the operation by querying the target model. Call the API using the target model ID that you copied down from the [Generate Copy authorization request](#generate-copy-authorization-request) response. ```http GET https://{YOUR-ENDPOINT}/formrecognizer/documentModels/{modelId}?api-version=2023-07-31" -H "Ocp-Apim-Subscription-Key: {YOUR-KEY} Operation-Location: https://{source-resource}.cognitiveservices.azure.com/formre ### Track copy operation progress -You can use the [**Get operation**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/GetOperation) API to list all document model operations (succeeded, in-progress, or failed) associated with your Document Intelligence resource. Operation information only persists for 24 hours. Here's a list of the operations (operationId) that can be returned: +You can use the [**Get operation**](/rest/api/aiservices/miscellaneous/get-operation?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) API to list all document model operations (succeeded, in-progress, or failed) associated with your Document Intelligence resource. Operation information only persists for 24 hours. 
Here's a list of the operations (operationId) that can be returned: * documentModelBuild * documentModelCompose You can use the [**Get operation**](https://westus.dev.cognitive.microsoft.com/d ### Track the target model ID -If the operation was successful, the document model can be accessed using the [**getModel**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/GetModel) (get a single model), or [**GetModels**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/GetModels) (get a list of models) APIs. +If the operation was successful, the document model can be accessed using the [**getModel**](/rest/api/aiservices/document-models/get-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) (get a single model), or [**GetModels**](/rest/api/aiservices/document-models/get-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTPs) (get a list of models) APIs. ::: moniker-end curl -i GET "https://<SOURCE_FORM_RECOGNIZER_RESOURCE_ENDPOINT>/formrecognizer/v In this guide, you learned how to use the Copy API to back up your custom models to a secondary Document Intelligence resource. Next, explore the API reference docs to see what else you can do with Document Intelligence. -* [REST API reference documentation](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +* [REST API reference documentation](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) ::: moniker-end |
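Tracking the copy by querying the target model, as this entry describes, uses the `GET .../documentModels/{modelId}` request quoted above. A small sketch of building that URL (the endpoint and model ID are placeholders):

```python
# Sketch: the Get model URL used to track a copied model's status.
# Endpoint and model ID are placeholders; sending the request also needs
# the Ocp-Apim-Subscription-Key header, as shown in the quoted snippet.
def get_model_url(endpoint: str, model_id: str, api_version: str = "2023-07-31") -> str:
    return f"{endpoint}/formrecognizer/documentModels/{model_id}?api-version={api_version}"
```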
ai-services | Compose Custom Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/compose-custom-models.md | If you want to use manually labeled data, you have to upload the *.labels.json* When you [train your model](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects) with labeled data, the model uses supervised learning to extract values of interest, using the labeled forms you provide. Labeled data results in better-performing models and can produce models that work with complex forms or forms containing values without keys. -Document Intelligence uses the [prebuilt-layout model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. Document Intelligence enables training a model to extract key-value pairs and tables using supervised learning capabilities. +Document Intelligence uses the [prebuilt-layout model](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) API to learn the expected sizes and positions of typeface and handwritten text elements and extract tables. Then it uses user-specified labels to learn the key/value associations and tables in the documents. We recommend that you use five manually labeled forms of the same type (same structure) to get started with training a new model. Then, add more labeled data, as needed, to improve the model accuracy. 
Document Intelligence enables training a model to extract key-value pairs and tables using supervised learning capabilities. ### [Document Intelligence Studio](#tab/studio) Training with labels leads to better performance in some scenarios. To train wit > [!NOTE] > **the `create compose model` operation is only available for custom models trained _with_ labels.** Attempting to compose unlabeled models will produce an error. -With the [**create compose model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/ComposeDocumentModel) operation, you can assign up to 100 trained custom models to a single model ID. When analyze documents with a composed model, Document Intelligence first classifies the form you submitted, then chooses the best matching assigned model, and returns results for that model. This operation is useful when incoming forms may belong to one of several templates. +With the [**create compose model**](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) operation, you can assign up to 100 trained custom models to a single model ID. When you analyze documents with a composed model, Document Intelligence first classifies the form you submitted, then chooses the best matching assigned model, and returns results for that model. This operation is useful when incoming forms may belong to one of several templates. ### [Document Intelligence Studio](#tab/studio) Once the training process has successfully completed, you can begin to build you #### Compose your custom models -The [compose model API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/ComposeDocumentModel) accepts a list of model IDs to be composed. +The [compose model API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) accepts a list of model IDs to be composed. 
:::image type="content" source="../media/compose-model-request-body.png" alt-text="Screenshot of compose model request."::: #### Analyze documents -To make an [**Analyze document**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) request, use a unique model name in the request parameters. +To make an [**Analyze document**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) request, use a unique model name in the request parameters. :::image type="content" source="../media/custom-model-analyze-request.png" alt-text="Screenshot of a custom model request URL."::: #### Manage your composed models -You can manage custom models throughout your development needs including [**copying**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/CopyDocumentModelTo), [**listing**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/GetModels), and [**deleting**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/DeleteModel) your models. +You can manage custom models throughout your development needs including [**copying**](/rest/api/aiservices/document-models/copy-model-to?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP), [**listing**](/rest/api/aiservices/document-models/get-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTPs), and [**deleting**](/rest/api/aiservices/document-models/delete-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) your models. ### [Client libraries](#tab/sdks) When the operation completes, your newly composed model appears in the list. 
### [**REST API**](#tab/rest) -Using the **REST API**, you can make a [**Compose Custom Model**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/ComposeDocumentModel) request to create a single composed model from existing models. The request body requires a string array of your `modelIds` to compose and you can optionally define the `modelName`. +Using the **REST API**, you can make a [**Compose Custom Model**](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) request to create a single composed model from existing models. The request body requires a string array of your `modelIds` to compose and you can optionally define the `modelName`. ### [**Client-library SDKs**](#tab/sdks) Use the programming language code of your choice to create a composed model that ### [**REST API**](#tab/rest) -Using the REST API, you can make an [Analyze Document](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) request to analyze a document and extract key-value pairs and table data. +Using the REST API, you can make an [Analyze Document](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) request to analyze a document and extract key-value pairs and table data. 
### [**Client-library SDKs**](#tab/sdks) Test your newly trained models by [analyzing forms](build-a-custom-model.md?view ## Manage your custom models -You can [manage your custom models](../how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) throughout their lifecycle by viewing a [list of all custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/GetModels) under your subscription, retrieving information about [a specific custom model](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/GetModel), and [deleting custom models](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/DeleteModel) from your account. +You can [manage your custom models](../how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true) throughout their lifecycle by viewing a [list of all custom models](/rest/api/aiservices/document-models/get-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) under your subscription, retrieving information about [a specific custom model](/rest/api/aiservices/document-models/get-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP), and [deleting custom models](/rest/api/aiservices/document-models/delete-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) from your account. Great! You've learned the steps to create custom and composed models and use them in your Document Intelligence projects and applications. Learn more about the Document Intelligence client library by exploring our API reference documentation. 
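The three management operations named above (list all models, get a specific model, delete a model) all address the same resource path with different HTTP verbs. The sketch below illustrates the URL shapes under the assumption that models live under a `formrecognizer/documentModels` path, as in the v3 REST reference; the helper itself is hypothetical:

```python
def model_management_url(endpoint, model_id=None, api_version="2023-07-31"):
    """URL for custom-model management calls.

    Without a model_id this is the list-models URL (GET); with one it is
    the get-model (GET) or delete-model (DELETE) URL. The path shape is
    an assumption; confirm against the REST reference for your version.
    """
    base = f"{endpoint.rstrip('/')}/formrecognizer/documentModels"
    if model_id is None:
        return f"{base}?api-version={api_version}"          # GET: list models
    return f"{base}/{model_id}?api-version={api_version}"   # GET or DELETE

print(model_management_url("https://my-resource.cognitiveservices.azure.com"))
print(model_management_url("https://my-resource.cognitiveservices.azure.com",
                           "my-custom-model"))
```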
> [!div class="nextstepaction"]-> [Document Intelligence API reference](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +> [Document Intelligence API reference](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) ::: moniker-end |
ai-services | Use Sdk Rest Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md | Congratulations! You've learned to use Document Intelligence models to analyze v > [Try the Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) > [!div class="nextstepaction"]-> [Explore the Document Intelligence REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +> [Explore the Document Intelligence REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) ::: moniker-end ::: moniker range="doc-intel-2.1.0" |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md | You can use Document Intelligence to automate document processing in application | Model ID | Description| Development options | |-|--|-|-|**prebuilt-contract**|Extract contract agreement and party details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=contract)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +|**prebuilt-contract**|Extract contract agreement and party details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=contract)</br>● [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) You can use Document Intelligence to automate document processing in application | Model ID | Description| Development options | |-|--|-|-|**prebuilt-tax.us.1098**|Extract mortgage interest information and details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +|**prebuilt-tax.us.1098**|Extract mortgage interest information and details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>● [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) You can use Document Intelligence to automate document processing in application | Model ID | Description |Development 
options | |-|--|-|-|**prebuilt-tax.us.1098E**|Extract student loan information and details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098E)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +|**prebuilt-tax.us.1098E**|Extract student loan information and details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098E)</br>● [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) You can use Document Intelligence to automate document processing in application | Model ID |Description|Development options | |-|--|--|-|**prebuilt-tax.us.1098T**|Extract tuition information and details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098T)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +|**prebuilt-tax.us.1098T**|Extract tuition information and details.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098T)</br>● [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) You can use Document Intelligence to automate document processing in application | About | Description |Automation use cases |Development options | |-|--|--|--|-|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training document sets.|Extract distinct data from forms and 
documents specific to your business and use cases.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| +|[**Custom model**](concept-custom.md) | Extracts information from forms and documents into structured data based on a model created from a set of representative training document sets.|Extract distinct data from forms and documents specific to your business and use cases.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| > [!div class="nextstepaction"] > [Return to custom model types](#custom-models) You can use Document Intelligence to automate document processing in application | About | Description |Automation use cases | Development options | |-|--|-|--|-|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled 
values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, forms.| ● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) +|[**Custom Template model**](concept-custom-template.md) | The custom template model extracts labeled values and fields from structured and semi-structured documents.</br> | Extract key data from highly structured documents with defined visual templates or common visual layouts, such as forms.| ● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) > [!div class="nextstepaction"] > [Return to custom model types](#custom-models) You can use Document Intelligence to automate document processing in application | About | Description |Automation 
use cases | Development options | |-|--|-|--|- |[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentModel)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) + |[**Custom Neural model**](concept-custom-neural.md)| The custom neural model is used to extract labeled data from structured (surveys, questionnaires), semi-structured (invoices, purchase orders), and unstructured documents (contracts, letters).|Extract text data, checkboxes, and tabular fields from structured and unstructured documents.|[**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](/rest/api/aiservices/document-models/build-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**JavaScript SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>● [**Python 
SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) > [!div class="nextstepaction"] > [Return to custom model types](#custom-models) You can use Document Intelligence to automate document processing in application | About | Description |Automation use cases | Development options | |-|--|-|--|-|[**Composed custom models**](concept-composed-models.md)| A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types.| Useful when you train several models and want to group them to analyze similar form types like purchase orders.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/ComposeDocumentModel)</br>● [**C# SDK**](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>● [**Java SDK**](/jav?view=doc-intel-3.0.0&preserve-view=true) +|[**Composed custom models**](concept-composed-models.md)| A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types.| Useful when you train several models and want to group them to analyze similar form types like purchase orders.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [**REST API**](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>● [**C# SDK**](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>● [**Java SDK**](/jav?view=doc-intel-3.0.0&preserve-view=true) > [!div class="nextstepaction"] > [Return to custom model types](#custom-models) You can use Document Intelligence to automate document processing in application | About | Description |Automation use cases | 
Development options | |-|--|-|--|-|[**Composed classification model**](concept-custom-classifier.md)| Custom classification models combine layout and language features to detect, identify, and classify documents within an input file.|● A loan application packaged containing application form, payslip, and, bank statement.</br>● A collection of scanned invoices. |● [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/BuildDocumentClassifier)</br> +|[**Composed classification model**](concept-custom-classifier.md)| Custom classification models combine layout and language features to detect, identify, and classify documents within an input file.|● A loan application package containing an application form, payslip, and bank statement.</br>● A collection of scanned invoices. |● [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>● [REST API](/rest/api/aiservices/document-classifiers/build-classifier?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> > [!div class="nextstepaction"] > [Return to custom model types](#custom-models) |
ai-services | Sdk Overview V3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-0.md | Document Intelligence SDK supports the following languages and platforms: | Language → Document Intelligence SDK version | Package| Supported API version| Platform support | |:-:|:-|:-| :-|-| [.NET/C# → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| -|[Java → 4.0.6 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| -|[JavaScript → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| 
[v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | -|[Python → 3.2.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) +| [.NET/C# → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/dotnet/Azure.AI.FormRecognizer/4.0.0/index.html)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer)|[v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| +|[Java → 4.0.6 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.0.0/index.html) |[MVN 
repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.0.0-beta.6) |[v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| +|[JavaScript → 4.0.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/4.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | +|[Python → 3.2.0 (GA)](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.2.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.2.0/)| [v3.0](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) ## Supported Clients The [Microsoft 
Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overf ## Next steps >[!div class="nextstepaction"]-> [**Explore Document Intelligence REST API v3.0**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +> [**Explore Document Intelligence REST API v3.0**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) > [!div class="nextstepaction"] > [**Try a Document Intelligence quickstart**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true) |
ai-services | Sdk Overview V3 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/sdk-overview-v3-1.md | Document Intelligence SDK supports the following languages and platforms: | Language → Document Intelligence SDK version           | Package| Supported API version          | Platform support | |:-:|:-|:-| :-:|-| [**.NET/C# → latest (GA)**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| -|[**Java → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) |[• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• 2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, 
Linux](/java/openjdk/install)| -|[**JavaScript → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> • [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | -|[**Python → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [• 2023-07-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> • [2022-08-31 (GA)](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) +| [**.NET/C# → latest (GA)**](/dotnet/api/overview/azure/ai.formrecognizer-readme?view=azure-dotnet&preserve-view=true)|[NuGet](https://www.nuget.org/packages/Azure.AI.FormRecognizer/4.1.0)|[• 2023-07-31 
(GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [• 2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux, Docker](https://dotnet.microsoft.com/download)| +|[**Java → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/java/azure-ai-formrecognizer/4.1.0/index.html) |[MVN repository](https://mvnrepository.com/artifact/com.azure/azure-ai-formrecognizer/4.1.0) |[• 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [• 2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/java/openjdk/install)| +|[**JavaScript → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/javascript/azure-ai-form-recognizer/5.0.0/index.html)| [npm](https://www.npmjs.com/package/@azure/ai-form-recognizer)| [• 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> • [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [• 
v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) | [Browser, Windows, macOS, Linux](https://nodejs.org/en/download/) | +|[**Python → latest (GA)**](https://azuresdkdocs.blob.core.windows.net/$web/python/azure-ai-formrecognizer/3.3.0/index.html) | [PyPI](https://pypi.org/project/azure-ai-formrecognizer/3.3.0/)| [• 2023-07-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> • [2022-08-31 (GA)](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br> [• v2.1](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2-1/operations/AnalyzeBusinessCardAsync)</br>[• v2.0](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-v2/operations/AnalyzeLayoutAsync) |[Windows, macOS, Linux](/azure/developer/python/configure-local-development-environment?tabs=windows%2Capt%2Ccmd#use-the-azure-cli) ## Supported Clients The [Microsoft Q&A](/answers/topics/azure-form-recognizer.html) and [Stack Overf ## Next steps > [!div class="nextstepaction"]->Explore [**Document Intelligence REST API 2023-07-31**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) operations. |
ai-services | V3 1 Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/v3-1-migration-guide.md | GET https://{your-form-recognizer-endpoint}/formrecognizer/info? api-version=202 ## Next steps -* [Review the new REST API](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument) +* [Review the new REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP) * [What is Document Intelligence?](overview.md) * [Document Intelligence quickstart](quickstarts/get-started-sdks-rest-api.md) |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/whats-new.md | The v3.1 API introduces new and updated capabilities: * US Military ID > [!TIP]-> All January 2023 updates are available with [REST API version **2022-08-31 (GA)**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument). +> All January 2023 updates are available with [REST API version **2022-08-31 (GA)**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP). * **[Prebuilt receipt model](concept-receipt.md#supported-languages-and-locales)—additional language support**: The v3.1 API introduces new and updated capabilities: * Document Intelligence v3.0 generally available - * **Document Intelligence REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](https://westus.dev.cognitive.microsoft.com/docs/services/form-recognizer-api-2023-07-31/operations/AnalyzeDocument). + * **Document Intelligence REST API v3.0 is now generally available and ready for use in production applications!** Update your applications with [**REST API version 2022-08-31**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP). * Document Intelligence Studio updates > [!div class="checklist"] |
ai-services | Document Summarization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/how-to/document-summarization.md | You can also use the `sortby` parameter to specify in what order the extracted s ### Try document abstractive summarization -<!-- [Reference documentation](https://go.microsoft.com/fwlink/?linkid=2211684) --> - The following example gets you started with document abstractive summarization: 1. Copy the command below into a text editor. The BASH example uses the `\` line continuation character. If your console or terminal uses a different line continuation character, use that character instead. |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/summarization/overview.md | As you use document summarization in your applications, see the following refere |JavaScript | [JavaScript documentation](/javascript/api/overview/azure/ai-text-analytics-readme?view=azure-node-preview&preserve-view=true) | [JavaScript samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/textanalytics/ai-text-analytics/samples/v5) | |Python | [Python documentation](/python/api/overview/azure/ai-textanalytics-readme?view=azure-python-preview&preserve-view=true) | [Python samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/textanalytics/azure-ai-textanalytics/samples) | -<!-- |REST API | [REST API documentation](https://go.microsoft.com/fwlink/?linkid=2211684) | | --> - ## Responsible AI An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it's deployed. Read the [transparency note for summarization](/legal/cognitive-services/language-service/transparency-note-extractive-summarization?context=/azure/ai-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information: |
ai-services | Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/models.md | See [model versions](../concepts/model-versions.md) to learn about how Azure Ope | `gpt-4-32k`(0314) | 32,768 | Sep 2021 | | `gpt-4` (0613) | 8,192 | Sep 2021 | | `gpt-4-32k` (0613) | 32,768 | Sep 2021 |-| `gpt-4` (1106-preview)**<sup>1</sup>** | Input: 128,000 <br> Output: 4096 | Apr 2023 | +| `gpt-4` (1106-preview)**<sup>1</sup>**<br>**GPT-4 Turbo Preview** | Input: 128,000 <br> Output: 4096 | Apr 2023 | -**<sup>1</sup>** We don't recommend using this model in production. We will upgrade all deployments of this model to a future stable version. Models designated preview do not follow the standard Azure OpenAI model lifecycle. +**<sup>1</sup>** GPT-4 Turbo Preview = `gpt-4` (1106-preview). To deploy this model, under **Deployments** select model **gpt-4**. For **Model version** select **1106-preview**. We don't recommend using this model in production. We will upgrade all deployments of this model to a future stable version. Models designated preview do not follow the standard Azure OpenAI model lifecycle. > [!NOTE] > Regions where GPT-4 (0314) & (0613) are listed as available have access to both the 8K and 32K versions of the model |
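The token limits in the row above lend themselves to a quick client-side sanity check before sending a request. A minimal sketch (the `fits` helper is illustrative, not part of any Azure OpenAI SDK), using the `gpt-4` (1106-preview) limits of 128,000 input tokens and 4,096 output tokens:

```python
# Context-window check for `gpt-4` (1106-preview): 128,000 input tokens, 4,096 output tokens.
INPUT_LIMIT = 128_000
OUTPUT_LIMIT = 4_096

def fits(prompt_tokens: int, max_tokens: int) -> bool:
    """True if a request stays within both the input and output token limits."""
    return prompt_tokens <= INPUT_LIMIT and max_tokens <= OUTPUT_LIMIT

print(fits(100_000, 4_096))   # True
print(fits(130_000, 1_000))   # False: prompt exceeds the input limit
print(fits(1_000, 8_000))     # False: max_tokens exceeds the output limit
```

A check like this avoids a round trip that would fail server-side; in practice you would count prompt tokens with a tokenizer such as `tiktoken`.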
ai-services | Content Filters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/content-filters.md | keywords: # How to configure content filters with Azure OpenAI Service > [!NOTE]-> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for full content filtering control, including (i) configuring content filters at severity level high only (ii) or turning the content filters off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). +> All customers have the ability to modify the content filters to be stricter (for example, to filter content at lower severity levels than the default). Approval is required for turning the content filters partially or fully off. Managed customers only may apply for full content filtering control via this form: [Azure OpenAI Limited Access Review: Modified Content Filters and Abuse Monitoring (microsoft.com)](https://customervoice.microsoft.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbR7en2Ais5pxKtso_Pz4b1_xURE01NDY1OUhBRzQ3MkQxMUhZSE1ZUlJKTiQlQCN0PWcu). -The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high), and optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. 
The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md). Jailbreak risk detection and protected text and code models are optional and off by default. For jailbreak and protected material text and code models, the configurability feature allows all customers to turn the models on and off. The models are by default off and can be turned on per your scenario. Note that some models are required to be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](https://www.microsoft.com/licensing/news/Microsoft-Copilot-Copyright-Commitment). +The content filtering system integrated into Azure OpenAI Service runs alongside the core models and uses an ensemble of multi-class classification models to detect four categories of harmful content (violence, hate, sexual, and self-harm) at four severity levels respectively (safe, low, medium, and high), and optional binary classifiers for detecting jailbreak risk, existing text, and code in public repositories. The default content filtering configuration is set to filter at the medium severity threshold for all four content harms categories for both prompts and completions. That means that content that is detected at severity level medium or high is filtered, while content detected at severity level low or safe is not filtered by the content filters. Learn more about content categories, severity levels, and the behavior of the content filtering system [here](../concepts/content-filter.md). Jailbreak risk detection and protected text and code models are optional and off by default. 
For jailbreak and protected material text and code models, the configurability feature allows all customers to turn the models on and off. The models are by default off and can be turned on per your scenario. Note that some models are required to be on for certain scenarios to retain coverage under the [Customer Copyright Commitment](/legal/cognitive-services/openai/customer-copyright-commitment?context=%2Fazure%2Fai-services%2Fopenai%2Fcontext%2Fcontext). Content filters can be configured at resource level. Once a new configuration is created, it can be associated with one or more deployments. For more information about model deployment, see the [resource deployment guide](create-resource.md). |
ai-services | Embeddings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/embeddings.md | AzureKeyCredential credentials = new (oaiKey); OpenAIClient openAIClient = new (oaiEndpoint, credentials); -EmbeddingsOptions embeddingOptions = new ("Your text string goes here"); +EmbeddingsOptions embeddingOptions = new() +{ + DeploymentName = "text-embedding-ada-002", + Input = { "Your text string goes here" }, +}; -var returnValue = openAIClient.GetEmbeddings("YOUR_DEPLOYMENT_NAME", embeddingOptions); +var returnValue = openAIClient.GetEmbeddings(embeddingOptions); -foreach (float item in returnValue.Value.Data[0].Embedding) +foreach (float item in returnValue.Value.Data[0].Embedding.ToArray()) { Console.WriteLine(item); } |
ai-services | Latency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/latency.md | + + Title: Azure OpenAI Service performance & latency ++description: Learn about performance and latency with Azure OpenAI +++ Last updated : 11/21/2023+++recommendations: false ++++# Performance and latency ++This article provides background on how latency works with Azure OpenAI and how to optimize your environment to improve performance. ++## What is latency? ++The high-level definition of latency in this context is the amount of time it takes to get a response back from the model. For completion and chat completion requests, latency is largely dependent on model type as well as the number of tokens generated and returned. The number of tokens sent to the model as part of the input has a much smaller overall impact on latency. ++## Improve performance ++### Model selection ++Latency varies based on what model you are using. For an identical request, it is expected that different models will have a different latency. If your use case requires the lowest latency models with the fastest response times, we recommend the latest models in the [GPT-3.5 Turbo model series](../concepts/models.md#gpt-35-models). ++### Max tokens ++When you send a completion request to the Azure OpenAI endpoint, your input text is converted to tokens, which are then sent to your deployed model. The model receives the input tokens and then begins generating a response. It's an iterative sequential process, one token at a time. Another way to think of it is like a for loop with `n tokens = n iterations`. ++So another important factor when evaluating latency is how many tokens are being generated. This is controlled largely via the `max_tokens` parameter. Reducing the number of tokens generated per request reduces the latency of each request. 
++### Streaming ++**Examples of when to use streaming**: ++Chat bots and conversational interfaces. ++Streaming impacts perceived latency. If you have streaming enabled, you'll receive tokens back in chunks as soon as they're available. From a user perspective, this often feels like the model is responding faster even though the overall time to complete the request remains the same. ++**Examples of when streaming is less important**: ++Sentiment analysis, language translation, content generation. ++There are many use cases where you're performing a bulk task and only care about the finished result, not the real-time response. If streaming is disabled, you won't receive any tokens until the model has finished the entire response. ++### Content filtering ++Azure OpenAI includes a [content filtering system](./content-filters.md) that works alongside the core models. This system works by running both the prompt and completion through an ensemble of classification models aimed at detecting and preventing the output of harmful content. ++The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions. ++The addition of content filtering comes with an increase in safety, but also latency. There are many applications where this tradeoff in performance is necessary; however, there are certain lower-risk use cases where disabling the content filters to improve performance might be worth exploring. ++Learn more about requesting modifications to the default [content filtering policies](./content-filters.md). ++## Summary ++* **Model latency**: If model latency is important to you, we recommend trying out our latest models in the [GPT-3.5 Turbo model series](../concepts/models.md). 
++* **Lower max tokens**: OpenAI has found that even in cases where the total number of tokens generated is similar, the request with the higher value set for the max token parameter will have more latency. ++* **Lower total tokens generated**: The fewer tokens generated, the faster the overall response will be. Remember this is like having a for loop with `n tokens = n iterations`. Lower the number of tokens generated and overall response time will improve accordingly. ++* **Streaming**: Enabling streaming can be useful in managing user expectations in certain situations by allowing the user to see the model response as it is being generated rather than having to wait until the last token is ready. ++* **Content Filtering** improves safety, but it also impacts latency. Evaluate if any of your workloads would benefit from [modified content filtering policies](./content-filters.md). |
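The streaming point in the latency article above can be demonstrated without calling the service. A sketch simulating the one-token-per-iteration loop (the generator and the per-token delay are hypothetical stand-ins for a model response, not Azure OpenAI API calls):

```python
import time

def generate_tokens(n, delay=0.005):
    # Stand-in for a model emitting one token per loop iteration.
    for i in range(n):
        time.sleep(delay)
        yield f"tok{i}"

# Streaming: the first chunk is usable almost immediately.
start = time.perf_counter()
stream = generate_tokens(100)
first = next(stream)
t_first = time.perf_counter() - start

# Non-streaming: nothing is usable until the last token is generated.
start = time.perf_counter()
full = " ".join(generate_tokens(100))
t_full = time.perf_counter() - start

# Perceived latency (time to first token) is a fraction of total latency,
# even though total generation time is the same either way.
print(t_first < t_full)
```

With the real client, the equivalent switch is `stream=True` on the completion call; total time is unchanged, but the user starts reading sooner.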
ai-services | Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/migration.md | description: Learn about migrating to the latest release of the OpenAI Python li -+ Last updated 11/15/2023 asyncio.run(dall_e()) - `openai.aiosession` (OpenAI now uses `httpx`) - `openai.Deployment` (Previously used for Azure OpenAI) - `openai.Engine`-- `openai.File.find_matching_files()`+- `openai.File.find_matching_files()` |
ai-services | Switching Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/switching-endpoints.md | Title: How to switch between OpenAI and Azure OpenAI Service endpoints with Python description: Learn about the changes you need to make to your code to swap back and forth between OpenAI and Azure OpenAI endpoints.--++ Previously updated : 07/20/2023 Last updated : 11/22/2023 -> [!NOTE] -> This library is maintained by OpenAI and is currently in preview. Refer to the [release history](https://github.com/openai/openai-python/releases) or the [version.py commit history](https://github.com/openai/openai-python/commits/main/openai/version.py) to track the latest updates to the library. +This article only shows examples with the new OpenAI Python 1.x API library. For information on migrating from `0.28.1` to `1.x`, refer to our [migration guide](./migration.md). ## Authentication We recommend using environment variables. If you haven't done this before our [P <td> ```python-import openai +import os +from openai import OpenAI ++client = OpenAI( + api_key=os.environ['OPENAI_API_KEY'] +) -openai.api_key = "sk-..." -openai.organization = "..." ``` openai.organization = "..." <td> ```python-import openai --openai.api_type = "azure" -openai.api_key = "..." -openai.api_base = "https://example-endpoint.openai.azure.com" -openai.api_version = "2023-05-15" # subject to change +import os +from openai import AzureOpenAI + +client = AzureOpenAI( + api_key=os.getenv("AZURE_OPENAI_KEY"), + api_version="2023-10-01-preview", + azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT") + ) ``` </td> openai.api_version = "2023-05-15" # subject to change <td> ```python-import openai +import os +from openai import OpenAI ++client = OpenAI( + api_key=os.environ['OPENAI_API_KEY'] +) + -openai.api_key = "sk-..." -openai.organization = "..." openai.organization = "..." 
<td> ```python-import openai -from azure.identity import DefaultAzureCredential +from azure.identity import DefaultAzureCredential, get_bearer_token_provider +from openai import AzureOpenAI -credential = DefaultAzureCredential() -token = credential.get_token("https://cognitiveservices.azure.com/.default") +token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default") -openai.api_type = "azure_ad" -openai.api_key = token.token -openai.api_base = "https://example-endpoint.openai.azure.com" -openai.api_version = "2023-05-15" # subject to change +api_version = "2023-12-01-preview" +endpoint = "https://my-resource.openai.azure.com" ++client = AzureOpenAI( + api_version=api_version, + azure_endpoint=endpoint, + azure_ad_token_provider=token_provider, +) ``` </td> For OpenAI `engine` still works in most instances, but it's deprecated and `mode <td> ```python-completion = openai.Completion.create( - prompt="<prompt>", - model="text-davinci-003" -) - -chat_completion = openai.ChatCompletion.create( - messages="<messages>", - model="gpt-4" +completion = client.completions.create( + model='gpt-3.5-turbo-instruct', + prompt="<prompt>" ) -embedding = openai.Embedding.create( - input="<input>", - model="text-embedding-ada-002" +chat_completion = client.chat.completions.create( + model="gpt-4", + messages="<messages>" ) ---+embedding = client.embeddings.create( + input="<input>", + model="text-embedding-ada-002" +) ``` </td> <td> ```python-completion = openai.Completion.create( - prompt="<prompt>", - deployment_id="text-davinci-003" # This must match the custom deployment name you chose for your model. - #engine="text-davinci-003" +completion = client.completions.create( + model="gpt-35-turbo-instruct", # This must match the custom deployment name you chose for your model. 
+ prompt="<prompt>" )- -chat_completion = openai.ChatCompletion.create( - messages="<messages>", - deployment_id="gpt-4" # This must match the custom deployment name you chose for your model. - #engine="gpt-4" +chat_completion = client.chat.completions.create( + model="gpt-35-turbo", # model = "deployment_name". + messages="<messages>" ) -embedding = openai.Embedding.create( - input="<input>", - deployment_id="text-embedding-ada-002" # This must match the custom deployment name you chose for your model. - #engine="text-embedding-ada-002" +embedding = client.embeddings.create( + input="<input>", + model="text-embedding-ada-002" # model = "deployment_name". ) ``` OpenAI currently allows a larger number of array inputs with text-embedding-ada- ```python inputs = ["A", "B", "C"] -embedding = openai.Embedding.create( +embedding = client.embeddings.create( input=inputs, model="text-embedding-ada-002" ) embedding = openai.Embedding.create( ```python inputs = ["A", "B", "C"] #max array size=16 -embedding = openai.Embedding.create( +embedding = client.embeddings.create( input=inputs,- deployment_id="text-embedding-ada-002" # This must match the custom deployment name you chose for your model. + model="text-embedding-ada-002" # This must match the custom deployment name you chose for your model. #engine="text-embedding-ada-002" )+ ``` </td> |
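As a compact recap of the two authentication setups in the row above, here's a sketch of a helper that builds the 1.x client keyword arguments for either endpoint (the helper itself and the fallback placeholder values are hypothetical; the environment variable names and API version mirror the examples above):

```python
import os

def client_kwargs(use_azure: bool) -> dict:
    """Return constructor kwargs for openai.OpenAI or openai.AzureOpenAI (1.x)."""
    if use_azure:
        return {
            "api_key": os.getenv("AZURE_OPENAI_KEY", "<key>"),
            "api_version": "2023-10-01-preview",
            "azure_endpoint": os.getenv(
                "AZURE_OPENAI_ENDPOINT", "https://my-resource.openai.azure.com"
            ),
        }
    return {"api_key": os.getenv("OPENAI_API_KEY", "sk-...")}

print(sorted(client_kwargs(True)))   # ['api_key', 'api_version', 'azure_endpoint']
print(sorted(client_kwargs(False)))  # ['api_key']
```

You would then construct the matching client with `OpenAI(**client_kwargs(False))` or `AzureOpenAI(**client_kwargs(True))`; remember that against Azure the `model` argument must name your deployment rather than the base model.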
ai-services | Working With Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/working-with-models.md | description: Learn about managing model deployment life cycle, updates, & retire Last updated 10/04/2023-+ |
ai-services | Quotas Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/quotas-limits.md | The following sections provide you with a quick guide to the default quotas and |--|--| | OpenAI resources per region per Azure subscription | 30 | | Default DALL-E 2 quota limits | 2 concurrent requests |-| Default DALL-E 3 quota limits| 2 capacity units (12 requests per minute)| +| Default DALL-E 3 quota limits| 2 capacity units (6 requests per minute)| | Maximum prompt tokens per request | Varies per model. For more information, see [Azure OpenAI Service models](./concepts/models.md)| | Max fine-tuned model deployments | 5 | | Total number of training jobs per resource | 100 | |
ai-services | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md | keywords: ## November 2023 ++ ### GPT-4 Turbo Preview & GPT-3.5-Turbo-1106 released Both models are the latest release from OpenAI with improved instruction following, [JSON mode](./how-to/json-mode.md), [reproducible output](./how-to/reproducible-output.md), and parallel function calling. DALL-E 3 includes built-in prompt rewriting to enhance images, reduce bias, and Try out DALL-E 3 by following a [quickstart](./dall-e-quickstart.md). +### Responsible AI ++- **Expanded customer configurability**: All Azure OpenAI customers can now configure all severity levels (low, medium, high) for the categories hate, violence, sexual and self-harm, including filtering only high severity content. [Configure content filters](./how-to/content-filters.md) ++- **Content Credentials in all DALL-E models**: AI-generated images from all DALL-E models now include a digital credential that discloses the content as AI-generated. Applications that display image assets can leverage the open source [Content Authenticity Initiative SDK](https://opensource.contentauthenticity.org/docs/js-sdk/getting-started/quick-start/) to display credentials in their AI generated images. [Content Credentials in Azure OpenAI](/azure/ai-services/openai/concepts/content-credentials) +++- **New RAI models** + + - **Jailbreak risk detection**: Jailbreak attacks are user prompts designed to provoke the Generative AI model into exhibiting behaviors it was trained to avoid or to break the rules set in the System Message. The jailbreak risk detection model is optional (default off), and available in annotate and filter mode. It runs on user prompts. + - **Protected material text**: Protected material text describes known text content (for example, song lyrics, articles, recipes, and selected web content) that can be outputted by large language models. 
The protected material text model is optional (default off), and available in annotate and filter mode. It runs on LLM completions. + - **Protected material code**: Protected material code describes source code that matches a set of source code from public repositories, which can be outputted by large language models without proper citation of source repositories. The protected material code model is optional (default off), and available in annotate and filter mode. It runs on LLM completions. ++ [Configure content filters](./how-to/content-filters.md) ++- **Blocklists**: Customers can now quickly customize content filter behavior for prompts and completions further by creating a custom blocklist in their filters. The custom blocklist allows the filter to take action on a customized list of patterns, such as specific terms or regex patterns. In addition to custom blocklists, we provide a Microsoft profanity blocklist (English). [Use blocklists](./how-to/use-blocklists.md) ## October 2023 ### New fine-tuning models (preview) |
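The blocklist entry above describes matching on specific terms or regex patterns. A minimal local sketch of that idea (the pattern list and `is_blocked` function are illustrative only; the service's actual blocklist matching is configured through the content filter, not client code):

```python
import re

# Hypothetical blocklist: one literal term and one regex pattern.
BLOCKLIST = [r"\bforbidden-term\b", r"\b\d{3}-\d{2}-\d{4}\b"]

def is_blocked(text: str) -> bool:
    # A match in either a prompt or a completion would trigger filter action.
    return any(re.search(p, text, re.IGNORECASE) for p in BLOCKLIST)

print(is_blocked("this contains forbidden-term somewhere"))  # True
print(is_blocked("id 123-45-6789 appears here"))             # True
print(is_blocked("ordinary safe text"))                      # False
```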
ai-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/policy-reference.md | Title: Built-in policy definitions for Azure AI services description: Lists Azure Policy built-in policy definitions for Azure AI services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
ai-services | Recover Purge Resources | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/recover-purge-resources.md | + + Title: Recover or purge deleted Azure AI services resources ++description: This article provides instructions on how to recover or purge an already-deleted Azure AI services resource. ++++ Last updated : 11/15/2023++++# Recover or purge deleted Azure AI services resources ++This article provides instructions on how to recover or purge an Azure AI services resource that is already deleted. ++Once you delete a resource, you won't be able to create another one with the same name for 48 hours. To create a resource with the same name, you need to purge the deleted resource. ++> [!NOTE] +> The instructions in this article are applicable to both a multi-service resource and a single-service resource. A multi-service resource enables access to multiple Azure AI services using a single key and endpoint. On the other hand, a single-service resource enables access to just that specific Azure AI service for which the resource was created. ++## Recover a deleted resource ++The following prerequisites must be met before you can recover a deleted resource: ++* The resource to be recovered must have been deleted within the past 48 hours. +* The resource to be recovered must not have been purged already. A purged resource can't be recovered. +* Before you attempt to recover a deleted resource, make sure that the resource group for that account exists. If the resource group was deleted, you must recreate it. Recovering a resource group isn't possible. For more information, see [Manage resource groups](../azure-resource-manager/management/manage-resource-groups-portal.md). +* If the deleted resource used customer-managed keys with Azure Key Vault and the key vault has also been deleted, then you must restore the key vault before you restore the Azure AI services resource. 
For more information, see [Azure Key Vault recovery management](../key-vault/general/key-vault-recovery.md). +* If the deleted resource used a customer-managed storage account and the storage account has also been deleted, you must restore the storage account before you restore the Azure AI services resource. For instructions, see [Recover a deleted storage account](../storage/common/storage-account-recover.md). ++To recover a deleted Azure AI services resource, use the following commands. Where applicable, replace: ++* `{subscriptionID}` with your Azure subscription ID +* `{resourceGroup}` with your resource group +* `{resourceName}` with your resource name +* `{location}` with the location of your resource +++# [Azure portal](#tab/azure-portal) ++If you need to recover a deleted resource, navigate to the hub of the Azure AI services API type and select "Manage deleted resources" from the menu. For example, if you would like to recover an "Anomaly detector" resource, search for "Anomaly detector" in the search bar and select the service. Then select **Manage deleted resources**. ++Select the subscription in the dropdown list to locate the deleted resource you would like to recover. Select one or more of the deleted resources and select **Recover**. +++> [!NOTE] +> It can take a couple of minutes for your deleted resource(s) to recover and show up in the list of the resources. Select the **Refresh** button in the menu to update the list of resources. 
++# [Rest API](#tab/rest-api) ++Use the following `PUT` command: ++```rest-api +https://management.azure.com/subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName}?Api-Version=2021-04-30 +``` ++In the request body, use the following JSON format: ++```json +{ + "location": "{location}", + "properties": { + "restore": true + } +} +``` ++# [PowerShell](#tab/powershell) ++Use the following command to restore the resource: ++```powershell +New-AzResource -Location {location} -Properties @{restore=$true} -ResourceId /subscriptions/{subscriptionID}/resourceGroups/{resourceGroup}/providers/Microsoft.CognitiveServices/accounts/{resourceName} -ApiVersion 2021-04-30 +``` ++If you need to find the name of your deleted resources, you can get a list of deleted resource names with the following command: ++```powershell +Get-AzResource -ResourceId /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/deletedAccounts -ApiVersion 2021-04-30 +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az resource create --subscription {subscriptionID} -g {resourceGroup} -n {resourceName} --location {location} --namespace Microsoft.CognitiveServices --resource-type accounts --properties "{\"restore\": true}" +``` ++++## Purge a deleted resource ++Your subscription must have `Microsoft.CognitiveServices/locations/resourceGroups/deletedAccounts/delete` permissions to purge resources, such as [Cognitive Services Contributor](../role-based-access-control/built-in-roles.md#cognitive-services-contributor) or [Contributor](../role-based-access-control/built-in-roles.md#contributor). ++When using `Contributor` to purge a resource the role must be assigned at the subscription level. If the role assignment is only present at the resource or resource group level, you can't access the purge functionality. ++To purge a deleted Azure AI services resource, use the following commands. 
Where applicable, replace: ++* `{subscriptionID}` with your Azure subscription ID +* `{resourceGroup}` with your resource group +* `{resourceName}` with your resource name +* `{location}` with the location of your resource ++> [!NOTE] +> Once a resource is purged, it is permanently deleted and cannot be restored. You will lose all data and keys associated with the resource. +++# [Azure portal](#tab/azure-portal) ++If you need to purge a deleted resource, the steps are similar to recovering a deleted resource. ++1. Navigate to the hub of the Azure AI services API type of your deleted resource. For example, if you would like to purge an "Anomaly detector" resource, search for "Anomaly detector" in the search bar and select the service. Then select **Manage deleted resources** from the menu. ++1. Select the subscription in the dropdown list to locate the deleted resource you would like to purge. ++1. Select one or more deleted resources and select **Purge**. Purging permanently deletes an Azure AI services resource. ++ :::image type="content" source="media/managing-deleted-resource.png" alt-text="A screenshot showing a list of resources that can be purged." 
lightbox="media/managing-deleted-resource.png"::: +++# [Rest API](#tab/rest-api) ++Use the following `DELETE` command: ++```rest-api +https://management.azure.com/subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName}?Api-Version=2021-04-30 +``` ++# [PowerShell](#tab/powershell) ++```powershell +Remove-AzResource -ResourceId /subscriptions/{subscriptionID}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} -ApiVersion 2021-04-30 +``` ++# [Azure CLI](#tab/azure-cli) ++```azurecli-interactive +az resource delete --ids /subscriptions/{subscriptionId}/providers/Microsoft.CognitiveServices/locations/{location}/resourceGroups/{resourceGroup}/deletedAccounts/{resourceName} +``` +++++## See also +* [Create a multi-service resource](multi-service-resource.md) +* [Create a new resource using an ARM template](create-account-resource-manager-template.md) |
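The recover and purge tabs above use two differently shaped management URLs (the `locations/{location}` and `deletedAccounts` segments appear only in the purge path). A sketch that assembles both from the documented placeholders, with dummy IDs (the helper functions are illustrative, not part of any Azure SDK):

```python
MGMT = "https://management.azure.com"
API_VERSION = "2021-04-30"

def restore_url(subscription_id, resource_group, resource_name):
    # PUT this URL with body {"location": ..., "properties": {"restore": true}} to recover.
    return (f"{MGMT}/subscriptions/{subscription_id}/resourceGroups/{resource_group}"
            f"/providers/Microsoft.CognitiveServices/accounts/{resource_name}"
            f"?Api-Version={API_VERSION}")

def purge_url(subscription_id, location, resource_group, resource_name):
    # DELETE this URL to permanently purge a deleted account.
    return (f"{MGMT}/subscriptions/{subscription_id}/providers/Microsoft.CognitiveServices"
            f"/locations/{location}/resourceGroups/{resource_group}"
            f"/deletedAccounts/{resource_name}?Api-Version={API_VERSION}")

print(restore_url("0000", "my-rg", "my-resource"))
print(purge_url("0000", "eastus", "my-rg", "my-resource"))
```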
ai-services | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure AI services description: Lists Azure Policy Regulatory Compliance controls available for Azure AI services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
ai-services | How To Pronunciation Assessment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/how-to-pronunciation-assessment.md | var pronunciationAssessmentConfig = new sdk.PronunciationAssessmentConfig( gradingSystem: sdk.PronunciationAssessmentGradingSystem.HundredMark, granularity: sdk.PronunciationAssessmentGranularity.Phoneme, enableMiscue: false); -pronunciationAssessmentConfig.EnableProsodyAssessment(); -pronunciationAssessmentConfig.EnableContentAssessmentWithTopic("greeting"); +pronunciationAssessmentConfig.enableProsodyAssessment(); +pronunciationAssessmentConfig.enableContentAssessmentWithTopic("greeting"); ``` ::: zone-end |
ai-services | Speech Services Quotas And Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/speech-service/speech-services-quotas-and-limits.md | These limits aren't adjustable. For more information on batch synthesis latency, | Quota | Free (F0) | Standard (S0) | |--|--|--| | File size (plain text in SSML)<sup>1</sup> | 3,000 characters per file | 20,000 characters per file |-| File size (lexicon file)<sup>2</sup> | 3,000 characters per file | 20,000 characters per file | +| File size (lexicon file)<sup>2</sup> | 30 KB per file | 100 KB per file | | Billable characters in SSML | 15,000 characters per file | 100,000 characters per file | | Export to audio library | 1 concurrent task | N/A | <sup>1</sup> The limit only applies to plain text in SSML and doesn't include tags. -<sup>2</sup> The limit includes all text including tags. The characters of lexicon file aren't charged. Only the lexicon elements in SSML are counted as billable characters. Refer to [billable characters](text-to-speech.md#billable-characters) to learn more. +<sup>2</sup> The characters of the lexicon file aren't charged. Only the lexicon elements in SSML are counted as billable characters. Refer to [billable characters](text-to-speech.md#billable-characters) to learn more. ### Speaker recognition quotas and limits per resource |
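Note that the two limits in that diff are measured differently: the lexicon cap is a file *size* in bytes, while the SSML plain-text cap is a *character* count that excludes markup. A hedged Python sketch of how you might pre-check an input against both (not Speech service code; the tag-stripping regex is a deliberate simplification of real SSML parsing):

```python
import re

# Limits from the table above: lexicon cap is bytes, SSML cap is characters.
LEXICON_LIMIT_BYTES = {"F0": 30 * 1024, "S0": 100 * 1024}
SSML_PLAIN_TEXT_LIMIT = {"F0": 3_000, "S0": 20_000}

def lexicon_within_limit(lexicon_bytes: bytes, tier: str) -> bool:
    """The lexicon limit is on file size, not character count."""
    return len(lexicon_bytes) <= LEXICON_LIMIT_BYTES[tier]

def ssml_plain_text_chars(ssml: str) -> int:
    """Count characters outside tags -- a rough stand-in for how the
    plain-text limit excludes SSML markup."""
    return len(re.sub(r"<[^>]*>", "", ssml))

ssml = "<speak><voice name='en-US-JennyNeural'>Hello world</voice></speak>"
print(ssml_plain_text_chars(ssml))  # → 11, counting only "Hello world"
```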
ai-studio | Configure Managed Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-managed-network.md | __Outbound__ service tag rules: __Inbound__ service tag rules: * `AzureMachineLearning` +> [!NOTE] +> For an Azure AI resource using a managed virtual network, a private endpoint is automatically created for a connection if the target resource is an Azure Private Link supported resource (Key Vault, Storage Account, Container Registry, Azure AI, Azure OpenAI, Azure Cognitive Search). For more on connections, see [How to add a new connection in Azure AI Studio](connections-add.md). + ## List of scenario specific outbound rules ### Scenario: Access public machine learning packages |
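The note above boils down to a simple rule: a connection's target gets an automatic private endpoint only if the target type supports Azure Private Link. A minimal illustrative sketch of that decision (the type names below are shorthand for this example, not the service's actual ARM type identifiers):

```python
# Target types the note lists as Private Link-supported (shorthand names,
# not real ARM resource type strings).
PRIVATE_LINK_SUPPORTED = {
    "key_vault", "storage_account", "container_registry",
    "azure_ai", "azure_openai", "azure_ai_search",
}

def needs_private_endpoint(connection_target_type: str) -> bool:
    """Mirror the rule: auto-create a private endpoint only when the
    connection's target is a Private Link-supported resource."""
    return connection_target_type in PRIVATE_LINK_SUPPORTED

print(needs_private_endpoint("azure_openai"))   # → True
print(needs_private_endpoint("public_website")) # → False
```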
ai-studio | Configure Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/configure-private-link.md | description: Learn how to configure a private link for Azure AI -- - ignite-2023 + Last updated 11/15/2023 |
ai-studio | Create Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/create-projects.md | You can create a project in Azure AI Studio in more than one way. The most direc 1. Enter a name for the project. 1. Select an Azure AI resource from the dropdown to host your project. If you don't have access to an Azure AI resource yet, select **Create a new resource**. - > [!TIP] - > It's recommended to share an Azure AI resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. + :::image type="content" source="../media/how-to/projects-create-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/how-to/projects-create-details.png"::: > [!NOTE]- > To create an Azure AI resource, you must have **Owner** or **Contributor** permissions on the selected resource group. -- :::image type="content" source="../media/how-to/projects-create-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/how-to/projects-create-details.png"::: + > To create an Azure AI resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. 1. If you're creating a new Azure AI resource, enter a name. You can create a project in Azure AI Studio in more than one way. The most direc > [!TIP] > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI resource, a container registry, and a storage account. --1. 
Enter the **Location** for the Azure AI resource and then select **Next**. The location is the region where the Azure AI resource is hosted. The location of the Azure AI resource is also the location of the project. -1. Review the project details and then select **Create a project**. Azure AI services availability differs per region. For example, certain models might not be available in certain regions. +1. Enter the **Location** for the Azure AI resource and then select **Next**. The location is the region where the Azure AI resource is hosted. The location of the Azure AI resource is also the location of the project. Azure AI services availability differs per region. For example, certain models might not be available in certain regions. +1. Review the project details and then select **Create a project**. :::image type="content" source="../media/how-to/projects-create-review-finish.png" alt-text="Screenshot of the review and finish page within the create project dialog." lightbox="../media/how-to/projects-create-review-finish.png"::: |
ai-studio | Evaluate Generative Ai App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/evaluate-generative-ai-app.md | description: Evaluate your generative AI application with Azure AI Studio UI and -- - ignite-2023 + Last updated 11/15/2023 |
ai-studio | Index Add | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/index-add.md | You must have: > [!NOTE] > If you see a **DeploymentNotFound** error, you need to assign more permissions. See [mitigate DeploymentNotFound error](#mitigate-deploymentnotfound-error) for more details. -1. You're taken to the index details page where you can see the status of your index creation +1. You're taken to the index details page where you can see the status of your index creation. ### Mitigate DeploymentNotFound error |
ai-studio | Python Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/prompt-flow-tools/python-tool.md | description: This article introduces the Python tool for flows in Azure AI Studi -- - ignite-2023 + Last updated 11/15/2023 |
ai-studio | Troubleshoot Deploy And Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/troubleshoot-deploy-and-monitor.md | Option 2: Find the build log within Azure Machine Learning studio, which is a se **Answer:** We're working on improving the user experience of web app deployment at this time. For the time being, here's a tip: if your web app launch button doesn't become active after a while, try deploying again using the 'update an existing app' option. If the web app was properly deployed, it should show up on the dropdown list of your existing web apps. +**Question:** I deployed a model, but I don't see it in the playground. +**Answer:** The playground only supports a few select models, such as Azure OpenAI models and Llama-2. If playground support is available, you see the **Open in playground** button on the model deployment's **Details** page. + ## Next steps - [Azure AI Studio overview](../what-is-ai-studio.md) |
ai-studio | Deploy Chat Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md | Follow these steps to deploy a chat model and test it without your data. :::image type="content" source="../media/tutorials/chat-web-app/deploy-gpt-35-turbo-16k.png" alt-text="Screenshot of the model selection page." lightbox="../media/tutorials/chat-web-app/deploy-gpt-35-turbo-16k.png"::: 1. On the **Deploy model** page, enter a name for your deployment, and then select **Deploy**. After the deployment is created, you see the deployment details page. Details include the date you created the deployment and the created date and version of the model you deployed.-1. On the deployment details page from the previous step, select **Test in playground**. +1. On the deployment details page from the previous step, select **Open in playground**. :::image type="content" source="../media/tutorials/chat-web-app/deploy-gpt-35-turbo-16k-details.png" alt-text="Screenshot of the GPT chat deployment details." lightbox="../media/tutorials/chat-web-app/deploy-gpt-35-turbo-16k-details.png"::: |
ai-studio | Deploy Copilot Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md | + + Title: Build and deploy a question and answer copilot with prompt flow in Azure AI Studio ++description: Use this article to build and deploy a question and answer copilot with prompt flow in Azure AI Studio ++++ Last updated : 11/15/2023++++# Tutorial: Build and deploy a question and answer copilot with prompt flow in Azure AI Studio +++In this [Azure AI Studio](https://ai.azure.com) tutorial, you use generative AI and prompt flow to build, configure, and deploy a copilot for your retail company called Contoso. Your retail company specializes in outdoor camping gear and clothing. ++The copilot should answer questions about your products and services. It should also answer questions about your customers. For example, the copilot can answer questions such as "How much do the TrailWalker hiking shoes cost?" and "How many TrailWalker hiking shoes did Daniel Wilson buy?". ++The steps in this tutorial are: ++1. Create an Azure AI Studio project. +1. Deploy an Azure OpenAI model and chat with your data. +1. Create a prompt flow from the playground. +1. Customize prompt flow with multiple data sources. +1. Evaluate the flow using a question and answer evaluation dataset. +1. Deploy the flow for consumption. ++## Prerequisites ++- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. +- Access granted to Azure OpenAI in the desired Azure subscription. ++ Currently, access to this service is granted only by application. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. If you run into a problem, open an issue on this repo to contact us. 
++- You need an Azure AI resource and your user role must be **Azure AI Developer**, **Contributor**, or **Owner** on the Azure AI resource. For more information, see [Azure AI resources](../concepts/ai-resources.md) and [Azure AI roles](../concepts/rbac-ai-studio.md). + - If your role is **Contributor** or **Owner**, you can [create an Azure AI resource in this tutorial](#create-an-azure-ai-project-in-azure-ai-studio). + - If your role is **Azure AI Developer**, the Azure AI resource must already be created. ++- Your subscription needs to be below your [quota limit](../how-to/quota.md) to [deploy a new model in this tutorial](#deploy-a-chat-model). Otherwise you already need to have a [deployed chat model](../how-to/deploy-models.md). ++- You need a local copy of product and customer data. The [Azure/aistudio-copilot-sample repository on GitHub](https://github.com/Azure/aistudio-copilot-sample/tree/main/data) contains sample retail customer and product information that's relevant for this tutorial scenario. Clone the repository or copy the files from [1-customer-info](https://github.com/Azure/aistudio-copilot-sample/tree/main/data/1-customer-info) and [3-product-info](https://github.com/Azure/aistudio-copilot-sample/tree/main/data/3-product-info). ++## Create an Azure AI project in Azure AI Studio ++Your Azure AI project is used to organize your work and save state while building your copilot. During this tutorial, your project contains your data, prompt flow runtime, evaluations, and other resources. For more information about the Azure AI projects and resources model, see [Azure AI resources](../concepts/ai-resources.md). ++To create an Azure AI project in Azure AI Studio, follow these steps: ++1. Sign in to [Azure AI Studio](https://ai.azure.com) and go to the **Build** page from the top menu. +1. Select **+ New project**. +1. Enter a name for the project. +1. Select an Azure AI resource from the dropdown to host your project. 
If you don't have access to an Azure AI resource yet, select **Create a new resource**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/create-project-details.png" alt-text="Screenshot of the project details page within the create project dialog." lightbox="../media/tutorials/copilot-deploy-flow/create-project-details.png"::: ++ > [!NOTE] + > To create an Azure AI resource, you must have **Owner** or **Contributor** permissions on the selected resource group. It's recommended to share an Azure AI resource with your team. This lets you share configurations like data connections with all projects, and centrally manage security settings and spend. ++1. If you're creating a new Azure AI resource, enter a name. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/create-project-resource.png" alt-text="Screenshot of the create resource page within the create project dialog." lightbox="../media/tutorials/copilot-deploy-flow/create-project-resource.png"::: ++1. Select your **Azure subscription** from the dropdown. Choose a specific Azure subscription for your project for billing, access, or administrative reasons. For example, this grants users and service principals with subscription-level access to your project. ++1. Leave the **Resource group** as the default to create a new resource group. Alternatively, you can select an existing resource group from the dropdown. ++ > [!TIP] + > Especially for getting started it's recommended to create a new resource group for your project. This allows you to easily manage the project and all of its resources together. When you create a project, several resources are created in the resource group, including an Azure AI resource, a container registry, and a storage account. ++1. Enter the **Location** for the Azure AI resource and then select **Next**. The location is the region where the Azure AI resource is hosted. 
The location of the Azure AI resource is also the location of the project. ++ > [!NOTE] + > Azure AI resources and services availability differ per region. For example, certain models might not be available in certain regions. The resources in this tutorial are created in the **East US 2** region. ++1. Review the project details and then select **Create a project**. ++Once a project is created, you can access the **Tools**, **Components**, and **Settings** assets in the left navigation panel. ++## Deploy a chat model ++Follow these steps to deploy an Azure OpenAI chat model for your copilot. ++1. Sign in to [Azure AI Studio](https://ai.azure.com) with credentials that have access to your Azure OpenAI resource. During or after the sign-in workflow, select the appropriate directory, Azure subscription, and Azure OpenAI resource. You should be on the Azure AI Studio **Home** page. +1. Select **Build** from the top menu and then select **Deployments** > **Create**. + + :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-create.png" alt-text="Screenshot of the deployments page with a button to create a new project." lightbox="../media/tutorials/copilot-deploy-flow/deploy-create.png"::: ++1. On the **Select a model** page, select the model you want to deploy from the **Model** dropdown. For example, select **gpt-35-turbo-16k**. Then select **Confirm**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-gpt-35-turbo-16k.png" alt-text="Screenshot of the model selection page." lightbox="../media/tutorials/copilot-deploy-flow/deploy-gpt-35-turbo-16k.png"::: ++1. On the **Deploy model** page, enter a name for your deployment, and then select **Deploy**. After the deployment is created, you see the deployment details page. Details include the date you created the deployment and the created date and version of the model you deployed. +1. On the deployment details page from the previous step, select **Open in playground**. 
++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-gpt-35-turbo-16k-details.png" alt-text="Screenshot of the GPT chat deployment details." lightbox="../media/tutorials/copilot-deploy-flow/deploy-gpt-35-turbo-16k-details.png"::: ++For more information about deploying models, see [how to deploy models](../how-to/deploy-models.md). ++## Chat in the playground without your data ++In the [Azure AI Studio](https://ai.azure.com) playground you can observe how your model responds with and without your data. In this section, you test your model without your data. In the next section, you add your data to the model to help it better answer questions about your products. ++1. In the playground, make sure that **Chat** is selected from the **Mode** dropdown. Select your deployed GPT chat model from the **Deployment** dropdown. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/playground-chat.png" alt-text="Screenshot of the chat playground with the chat mode and model selected." lightbox="../media/tutorials/copilot-deploy-flow/playground-chat.png"::: ++1. In the **System message** text box on the **Assistant setup** pane, provide this prompt to guide the assistant: "You're an AI assistant that helps people find information." You can tailor the prompt for your scenario. For more information, see [prompt samples](../how-to/models-foundation-azure-ai.md#prompt-samples). +1. Select **Apply changes** to save your changes, and when prompted to see if you want to update the system message, select **Continue**. +1. In the chat session pane, enter the following question: "How much do the TrailWalker hiking shoes cost", and then select the right arrow icon to send. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/chat-without-data.png" alt-text="Screenshot of the first chat question without grounding data." lightbox="../media/tutorials/copilot-deploy-flow/chat-without-data.png"::: ++1. 
The assistant replies that it doesn't know the answer. The model doesn't have access to product information about the TrailWalker hiking shoes. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/assistant-reply-not-grounded.png" alt-text="Screenshot of the assistant's reply without grounding data." lightbox="../media/tutorials/copilot-deploy-flow/assistant-reply-not-grounded.png"::: ++In the next section, you'll add your data to the model to help it answer questions about your products. ++## Add your data and try the chat model again ++You need a local copy of example product information. For more information and links to example data, see the [prerequisites](#prerequisites). ++You upload your local data files to Azure Blob storage and create an Azure AI Search index. Your data source is used to help ground the model with specific data. Grounding means that the model uses your data to help it understand the context of your question. You're not changing the deployed model itself. Your data is stored separately and securely in your Azure subscription. For more information, see [Azure OpenAI on your data](/azure/ai-services/openai/concepts/use-your-data). ++Follow these steps to add your data to the playground to help the assistant answer questions about your products. ++1. If you aren't already in the [Azure AI Studio](https://ai.azure.com) playground, select **Build** from the top menu and then select **Playground** from the collapsible left menu. +1. On the **Assistant setup** pane, select **Add your data (preview)** > **+ Add a data source**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-your-data.png" alt-text="Screenshot of the chat playground with the option to add a data source visible." lightbox="../media/tutorials/copilot-deploy-flow/add-your-data.png"::: ++1. In the **Data source** page that appears, select **Upload files** from the **Select data source** dropdown. 
++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-your-data-source.png" alt-text="Screenshot of the product data source selection options." lightbox="../media/tutorials/copilot-deploy-flow/add-your-data-source.png"::: ++ > [!TIP] + > For data source options and supported file types and formats, see [Azure OpenAI on your data](/azure/ai-services/openai/concepts/use-your-data). ++1. Enter *product-info* as the name of your product information index. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-your-data-source-details.png" alt-text="Screenshot of the resources and information required to upload files." lightbox="../media/tutorials/copilot-deploy-flow/add-your-data-source-details.png"::: ++1. Select or create an Azure AI Search resource named *contoso-outdoor-search* and select the acknowledgment that connecting it incurs usage on your account. ++ > [!NOTE] + > You use the *product-info* index and the *contoso-outdoor-search* Azure AI Search resource in prompt flow later in this tutorial. If the names you enter differ from what's specified here, make sure to use the names you entered in the rest of the tutorial. ++1. Select the Azure subscription that contains the Azure OpenAI resource you want to use. Then select **Next**. ++1. On the **Upload files** page, select **Browse for a file** and select the files you want to upload. Select the product info files that you downloaded or created earlier. See the [prerequisites](#prerequisites). If you want to upload more than one file, do so now. You can't add more files later in the same playground session. +1. Select **Upload** to upload the file to your Azure Blob storage account. Then select **Next** from the bottom of the page. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-your-data-uploaded-product-info.png" alt-text="Screenshot of the dialog to select and upload files." 
lightbox="../media/tutorials/copilot-deploy-flow/add-your-data-uploaded-product-info.png"::: ++1. On the **Data management** page under **Search type**, select **Keyword**. This setting helps determine how the model responds to requests. Then select **Next**. + + > [!NOTE] + > If you had added vector search on the **Select or add data source** page, then more options would be available here for an additional cost. For more information, see [Azure OpenAI on your data](/azure/ai-services/openai/concepts/use-your-data). + +1. Review the details you entered, and select **Save and close**. You can now chat with the model and it uses information from your data to construct the response. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-your-data-review-finish.png" alt-text="Screenshot of the review and finish page for adding data." lightbox="../media/tutorials/copilot-deploy-flow/add-your-data-review-finish.png"::: ++1. Now on the **Assistant setup** pane, you can see that your data ingestion is in progress. Before proceeding, wait until you see the data source and index name in place of the status. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-your-data-ingestion-in-progress.png" alt-text="Screenshot of the chat playground with the status of data ingestion in view." lightbox="../media/tutorials/copilot-deploy-flow/add-your-data-ingestion-in-progress.png"::: ++1. You can now chat with the model asking the same question as before ("How much do the TrailWalker hiking shoes cost"), and this time it uses information from your data to construct the response. You can expand the **references** button to see the data that was used. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/chat-with-data.png" alt-text="Screenshot of the assistant's reply with grounding data." 
lightbox="../media/tutorials/copilot-deploy-flow/chat-with-data.png"::: +++## Create compute and runtime that are needed for prompt flow ++You use prompt flow to optimize the messages that are sent to the copilot's chat model. Prompt flow requires a compute instance and a runtime. If you already have a compute instance and a runtime, you can skip this section and remain in the playground. ++To create a compute instance and a runtime, follow these steps: +1. If you don't have a compute instance, you can [create one in Azure AI Studio](../how-to/create-manage-compute.md). +1. Then create a runtime by following the steps in [how to create a runtime](../how-to/create-manage-runtime.md). ++To complete the rest of the tutorial, make sure that your runtime is in the **Running** status. You might need to select **Refresh** to see the updated status. ++> [!IMPORTANT] +> You're charged for compute instances while they are running. To avoid incurring unnecessary Azure costs, pause the compute instance when you're not actively working in prompt flow. For more information, see [how to start and stop compute](../how-to/create-manage-compute.md#start-or-stop-a-compute-instance). +++## Create a prompt flow from the playground ++Now that your [deployed chat model](#deploy-a-chat-model) is working in the playground [with your data](#add-your-data-and-try-the-chat-model-again), you could [deploy your copilot as a web app](deploy-chat-web-app.md#deploy-your-web-app) from the playground. ++But you might ask "How can I further customize this copilot?" You might want to add multiple data sources, compare different prompts or the performance of multiple models. A [prompt flow](../how-to/prompt-flow.md) serves as an executable workflow that streamlines the development of your LLM-based AI application. It provides a comprehensive framework for managing data flow and processing within your application. ++In this section, you learn how to transition to prompt flow from the playground. 
You export the playground chat environment including connections to the data that you added. Later in this tutorial, you [evaluate the flow](#evaluate-the-flow-using-a-question-and-answer-evaluation-dataset) and then [deploy the flow](#deploy-the-flow) for [consumption](#use-the-deployed-flow). ++> [!NOTE] +> The changes made in prompt flow aren't applied backwards to update the playground environment. ++You can create a prompt flow from the playground by following these steps: +1. If you aren't already in the [Azure AI Studio](https://ai.azure.com) playground, select **Build** from the top menu and then select **Playground** from the collapsible left menu. +1. Select **Open in prompt flow** from the menu above the **Chat session** pane. +1. Enter a folder name for your prompt flow. Then select **Open**. Azure AI Studio exports the playground chat environment including connections to your data to prompt flow. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/prompt-flow-from-playground.png" alt-text="Screenshot of the open in prompt flow dialog." lightbox="../media/tutorials/copilot-deploy-flow/prompt-flow-from-playground.png"::: ++Within a flow, nodes take center stage, representing specific tools with unique capabilities. These nodes handle data processing, task execution, and algorithmic operations, with inputs and outputs. By connecting nodes, you establish a seamless chain of operations that guides the flow of data through your application. For more information, see [prompt flow tools](../how-to/prompt-flow.md#prompt-flow-tools). ++To facilitate node configuration and fine-tuning, a visual representation of the workflow structure is provided through a directed acyclic graph (DAG). This graph showcases the connectivity and dependencies between nodes, providing a clear overview of the entire workflow. The nodes in the graph shown here are representative of the playground chat experience that you exported to prompt flow. 
++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/prompt-flow-overview-graph.png" alt-text="Screenshot of the default graph exported from the playground to prompt flow." lightbox="../media/tutorials/copilot-deploy-flow/prompt-flow-overview-graph.png"::: ++Nodes can be added, updated, rearranged, or removed. The nodes in your flow at this point include: +- **DetermineIntent**: This node determines the intent of the user's query. It uses the system prompt to determine the intent. You can edit the system prompt to provide scenario-specific few-shot examples. +- **ExtractIntent**: This node formats the output of the **DetermineIntent** node and sends it to the **RetrieveDocuments** node. +- **RetrieveDocuments**: This node searches for top documents related to the query. This node uses the search type and any parameters you pre-configured in playground. +- **FormatRetrievedDocuments**: This node formats the output of the **RetrieveDocuments** node and sends it to the **DetermineReply** node. +- **DetermineReply**: This node contains an extensive system prompt, which asks the LLM to respond using the retrieved documents only. There are two inputs: + - The **RetrieveDocuments** node provides the top retrieved documents. + - The **FormatConversation** node provides the formatted conversation history including the latest query. ++The **FormatReply** node formats the output of the **DetermineReply** node. ++In prompt flow, you should also see: +- **Save**: You can save your prompt flow at any time by selecting **Save** from the top menu. Be sure to save your prompt flow periodically as you make changes in this tutorial. +- **Runtime**: The runtime that you created [earlier in this tutorial](#create-compute-and-runtime-that-are-needed-for-prompt-flow). You can start and stop runtimes and compute instances via **Settings** in the left menu. To work in prompt flow, make sure that your runtime is in the **Running** status. 
++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/prompt-flow-overview.png" alt-text="Screenshot of the prompt flow editor and surrounding menus." lightbox="../media/tutorials/copilot-deploy-flow/prompt-flow-overview.png"::: ++- **Tools**: You can return to the prompt flow anytime by selecting **Prompt flow** from **Tools** in the left menu. Then select the prompt flow folder that you created earlier (not the sample flow). ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/prompt-flow-return.png" alt-text="Screenshot of the list of your prompt flows." lightbox="../media/tutorials/copilot-deploy-flow/prompt-flow-return.png"::: +++## Customize prompt flow with multiple data sources ++Earlier in the [Azure AI Studio](https://ai.azure.com) playground, you [added your data](#add-your-data-and-try-the-chat-model-again) to create one search index that contained product data for the Contoso copilot. So far, users can only inquire about products with questions such as "How much do the TrailWalker hiking shoes cost?". But they can't get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" To enable this scenario, we add another index with customer information to the flow. ++### Create the customer info index ++You need a local copy of example customer information. For more information and links to example data, see the [prerequisites](#prerequisites). ++Follow these instructions on how to create a new index: ++1. Select **Index** from the left menu. Then select **+ New index**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-index-new.png" alt-text="Screenshot of the indexes page with the button to create a new index." lightbox="../media/tutorials/copilot-deploy-flow/add-index-new.png"::: ++ You're taken to the **Create an index** wizard. ++1. On the Source data page, select **Upload folder** from the **Upload** dropdown. 
Select the customer info files that you downloaded or created earlier. See the [prerequisites](#prerequisites). ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-index-dataset-upload-folder.png" alt-text="Screenshot of the customer data source selection options." lightbox="../media/tutorials/copilot-deploy-flow/add-index-dataset-upload-folder.png"::: ++1. Select **Next** at the bottom of the page. +1. Select the same Azure AI Search resource (*contoso-outdoor-search*) that you used for your product info index (*product-info*). Then select **Next**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-index-storage.png" alt-text="Screenshot of the selected Azure AI Search resource." lightbox="../media/tutorials/copilot-deploy-flow/add-index-storage.png"::: ++1. Select **Hybrid + Semantic (Recommended)** for the **Search type**. This type should be selected by default. +1. Select *Default_AzureOpenAI* from the **Azure OpenAI resource** dropdown. Select the checkbox to acknowledge that an Azure OpenAI embedding model will be deployed if it's not already. Then select **Next**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-index-search-settings.png" alt-text="Screenshot of index search type options." lightbox="../media/tutorials/copilot-deploy-flow/add-index-search-settings.png"::: ++ > [!NOTE] + > The embedding model is listed with other model deployments in the **Deployments** page. ++1. Enter **customer-info** for the index name. Then select **Next**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-index-settings.png" alt-text="Screenshot of the index name and virtual machine options." lightbox="../media/tutorials/copilot-deploy-flow/add-index-settings.png"::: ++1. Review the details you entered, and select **Create**. 
++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-index-review.png" alt-text="Screenshot of the review and finish index creation page." lightbox="../media/tutorials/copilot-deploy-flow/add-index-review.png"::: ++ > [!NOTE] + > You use the *customer-info* index and the *contoso-outdoor-search* Azure AI Search resource in prompt flow later in this tutorial. If the names you enter differ from these, make sure to use your names throughout the rest of the tutorial. ++1. You're taken to the index details page where you can see the status of your index creation. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/add-index-created-details.png" alt-text="Screenshot of the customer info index details." lightbox="../media/tutorials/copilot-deploy-flow/add-index-created-details.png"::: ++For more information on how to create an index, see [Create an index](../how-to/index-add.md). ++### Add customer information to the flow ++After you're done creating your index, return to your prompt flow and follow these steps to add the customer info to the flow: ++1. Select the **RetrieveDocuments** node from the graph and rename it **RetrieveProductInfo**. Now the retrieve product info node can be distinguished from the retrieve customer info node that you add to the flow. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/node-rename-retrieve-product-info.png" alt-text="Screenshot of the prompt flow node for retrieving product info." lightbox="../media/tutorials/copilot-deploy-flow/node-rename-retrieve-product-info.png"::: ++1. Select **+ Python** from the top menu to create a new [Python node](../how-to/prompt-flow-tools/python-tool.md) that's used to retrieve customer information. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/node-new-retrieve-customer-info.png" alt-text="Screenshot of the prompt flow node for retrieving customer info." 
lightbox="../media/tutorials/copilot-deploy-flow/node-new-retrieve-customer-info.png"::: ++1. Name the node **RetrieveCustomerInfo** and select **Add**. +1. Copy and paste the Python code from the **RetrieveProductInfo** node into the **RetrieveCustomerInfo** node to replace all of the default code. +1. Select the **Validate and parse input** button to validate the inputs for the **RetrieveCustomerInfo** node. If the inputs are valid, prompt flow parses the inputs and creates the necessary variables for you to use in your code. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/customer-info-validate-parse.png" alt-text="Screenshot of the validate and parse input button." lightbox="../media/tutorials/copilot-deploy-flow/customer-info-validate-parse.png"::: ++1. Edit the **RetrieveCustomerInfo** inputs that prompt flow parsed for you so that it can connect to your *customer-info* index. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/customer-info-edit-inputs.png" alt-text="Screenshot of inputs to edit in the retrieve customer info node." lightbox="../media/tutorials/copilot-deploy-flow/customer-info-edit-inputs.png"::: ++ > [!NOTE] + > The graph is updated immediately after you set the **queries** input value to **ExtractIntent.output.search_intents**. In the graph you can see that **RetrieveCustomerInfo** gets inputs from **ExtractIntent**. ++ The inputs are case sensitive, so be sure they match these values exactly: + + | Name | Type | Value | + |-|-|--| + | **embeddingModelConnection** | Azure OpenAI | *Default_AzureOpenAI* | + | **embeddingModelName** | string | *None* | + | **indexName** | string | *customer-info* | + | **queries** | string | *${ExtractIntent.output.search_intents}* | + | **queryType** | string | *simple* | + | **searchConnection** | Cognitive search | *contoso-outdoor-search* | + | **semanticConfiguration** | string | *None* | + | **topK** | int | *5* | ++1. 
Select **Save** from the top menu to save your changes. ++### Format the retrieved documents to output ++Now that you have both the product and customer info in your prompt flow, you format the retrieved documents so that the large language model can use them. ++1. Select the **FormatRetrievedDocuments** node from the graph. +1. Copy and paste the following Python code to replace all contents in the **FormatRetrievedDocuments** code block. ++ ```python + from promptflow import tool + + @tool + def format_retrieved_documents(docs1: object, docs2: object, maxTokens: int) -> dict: + formattedDocs = [] + strResult = "" + docs = [val for pair in zip(docs1, docs2) for val in pair] + for index, doc in enumerate(docs): + formattedDocs.append({ + f"[doc{index}]": { + "title": doc['title'], + "content": doc['content'] + } + }) + formattedResult = { "retrieved_documents": formattedDocs } + nextStrResult = str(formattedResult) + if (estimate_tokens(nextStrResult) > maxTokens): + break + strResult = nextStrResult + + return { + "combined_docs": docs, + "strResult": strResult + } + + def estimate_tokens(text: str) -> int: + return (len(text) + 2) // 3 + ``` ++1. Select the **Validate and parse input** button to validate the inputs for the **FormatRetrievedDocuments** node. If the inputs are valid, prompt flow parses the inputs and creates the necessary variables for you to use in your code. ++1. Edit the **FormatRetrievedDocuments** inputs that prompt flow parsed for you so that it can extract product and customer info from the **RetrieveProductInfo** and **RetrieveCustomerInfo** nodes. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/format-retrieved-documents-edit-inputs.png" alt-text="Screenshot of inputs to edit in the format retrieved documents node." 
lightbox="../media/tutorials/copilot-deploy-flow/format-retrieved-documents-edit-inputs.png"::: ++ The inputs are case sensitive, so be sure they match these values exactly: + + | Name | Type | Value | + |-|-|--| + | **docs1** | object | *${RetrieveProductInfo.output}* | + | **docs2** | object | *${RetrieveCustomerInfo.output}* | + | **maxTokens** | int | *5000* | ++1. Select the **DetermineReply** node from the graph. +1. Set the **documentation** input to *${FormatRetrievedDocuments.output.strResult}*. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/determine-reply-edit-inputs.png" alt-text="Screenshot of editing the documentation input value in the determine reply node." lightbox="../media/tutorials/copilot-deploy-flow/determine-reply-edit-inputs.png"::: ++1. Select the **outputs** node from the graph. +1. Set the **fetched_docs** input to *${FormatRetrievedDocuments.output.combined_docs}*. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/outputs-edit.png" alt-text="Screenshot of editing the fetched_docs input value in the outputs node." lightbox="../media/tutorials/copilot-deploy-flow/outputs-edit.png"::: ++1. Select **Save** from the top menu to save your changes. ++### Chat in prompt flow with product and customer info ++By now you have both the product and customer info in prompt flow. You can chat with the model in prompt flow and get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" Before proceeding to a more formal evaluation, you can optionally chat with the model to see how it responds to your questions. ++1. Select **Chat** from the top menu in prompt flow to try chat. +1. Enter "How many TrailWalker hiking shoes did Daniel Wilson buy?" and then select the right arrow icon to send. +1. The response is what you expect. The model uses the customer info to answer the question. 
++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/chat-with-data-customer.png" alt-text="Screenshot of the assistant's reply with product and customer grounding data." lightbox="../media/tutorials/copilot-deploy-flow/chat-with-data-customer.png"::: ++## Evaluate the flow using a question and answer evaluation dataset ++In [Azure AI Studio](https://ai.azure.com), you want to evaluate the flow before you [deploy the flow](#deploy-the-flow) for [consumption](#use-the-deployed-flow). ++In this section, you use the built-in evaluation to evaluate your flow with a question and answer evaluation dataset. The built-in evaluation uses AI-assisted metrics to evaluate your flow: groundedness, relevance, and retrieval score. For more information, see [built-in evaluation metrics](../concepts/evaluation-metrics-built-in.md). ++### Create an evaluation ++You need a question and answer evaluation dataset that contains questions and answers that are relevant to your scenario. Create a new file locally named **qa-evaluation.jsonl**. Copy and paste the following questions and answers (`"truth"`) into the file. ++```json +{"question": "What color is the CozyNights Sleeping Bag?", "truth": "Red"} +{"question": "When did Daniel Wilson order the BaseCamp Folding Table?", "truth": "May 7th, 2023"} +{"question": "How much do TrailWalker Hiking Shoes cost?", "truth": "$110"}
+{"question": "What kind of tent did Sarah Lee buy?", "truth": "SkyView 2 person tent"} +{"question": "What is Melissa Davis's phone number?", "truth": "555-333-4444"} +{"question": "What is the proper care for trailwalker hiking shoes?", "truth": "After each use, remove any dirt or debris by brushing or wiping the shoes with a damp cloth."} +{"question": "Does TrailMaster Tent come with a warranty?", "truth": "2 years"} +{"question": "How much did David Kim spend on the TrailLite Daypack?", "truth": "$240"} +{"question": "What items did Amanda Perez purchase?", "truth": "TrailMaster X4 Tent, TrekReady Hiking Boots (quantity 3), CozyNights Sleeping Bag, TrailBlaze Hiking Pants, RainGuard Hiking Jacket, and CompactCook Camping Stove"} +{"question": "What is the Brand for TrekReady Hiking Boots", "truth": "TrekReady"} +{"question": "How many items did Karen Williams buy?", "truth": "three items of the Summit Breeze Jacket"} +{"question": "France is in Europe", "truth": "Sorry, I can only answer questions related to outdoor/camping gear and equipment"} +``` ++Now that you have your evaluation dataset, you can evaluate your flow by following these steps: ++1. Select **Evaluate** > **Built-in evaluation** from the top menu in prompt flow. + + :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-built-in-evaluation.png" alt-text="Screenshot of the option to create a built-in evaluation from prompt flow." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-built-in-evaluation.png"::: ++ You're taken to the **Create a new evaluation** wizard. ++1. Enter a name for your evaluation and select a runtime. +1. Select **Question and answer pairs with retrieval-augmented generation** from the scenario options. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-basic-scenario.png" alt-text="Screenshot of selecting an evaluation scenario." 
lightbox="../media/tutorials/copilot-deploy-flow/evaluate-basic-scenario.png"::: ++1. Select the flow to evaluate. In this example, select *Contoso outdoor flow* or whatever you named your flow. Then select **Next**. ++1. Select the metrics you want to use to evaluate your flow. In this example, select **Groundedness**, **Relevance**, and **Retrieval score**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-metrics.png" alt-text="Screenshot of selecting evaluation metrics." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-metrics.png"::: ++1. Select a model to use for evaluation. In this example, select **gpt-35-turbo-16k**. Then select **Next**. ++ > [!NOTE] + > Evaluation with AI-assisted metrics needs to call another GPT model to do the calculation. For best performance, use a GPT-4 or gpt-35-turbo-16k model. If you didn't previously deploy a GPT-4 or gpt-35-turbo-16k model, you can deploy another model by following the steps in [Deploy a chat model](#deploy-a-chat-model). Then return to this step and select the model you deployed. ++1. Select **Add new dataset**. Then select **Next**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-add-dataset.png" alt-text="Screenshot of the option to use a new or existing dataset." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-add-dataset.png"::: ++1. Select **Upload files**, browse files, and select the **qa-evaluation.jsonl** file that you created earlier. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-upload-files.png" alt-text="Screenshot of the dataset upload files button." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-upload-files.png"::: ++1. After the file is uploaded, you need to map the properties from the file (data source) to the evaluation properties. 
Enter the following values for each data source property: ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-map-data-source.png" alt-text="Screenshot of the evaluation dataset mapping." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-map-data-source.png"::: ++ | Name | Description | Type | Data source | + |-|-|--|--| + | **chat_history** | The chat history | list | *${data.chat_history}* | + | **query** | The query | string | *${data.question}* | + | **question** | A query seeking specific information | string | *${data.question}* | + | **answer** | The model-generated response to the question | string | *${run.outputs.reply}* | + | **documents** | String with context from retrieved documents | string | *${run.outputs.fetched_docs}* | ++1. Select **Next**. +1. Review the evaluation details and then select **Submit**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-review-finish.png" alt-text="Screenshot of the review and finish page within the create evaluation dialog." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-review-finish.png"::: ++ You're taken to the **Metric evaluations** page. ++### View the evaluation status and results ++Now you can view the evaluation status and results by following these steps: ++1. After you [create an evaluation](#create-an-evaluation), if you aren't already there, go to **Build** > **Evaluation**. On the **Metric evaluations** page, you can see the evaluation status and the metrics that you selected. You might need to select **Refresh** after a couple of minutes to see the **Completed** status. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-status-completed.png" alt-text="Screenshot of the metric evaluations page." 
lightbox="../media/tutorials/copilot-deploy-flow/evaluate-status-completed.png"::: ++ > [!TIP] + > Once the evaluation is in **Completed** status, you don't need runtime or compute to complete the rest of this tutorial. You can stop your compute instance to avoid incurring unnecessary Azure costs. For more information, see [how to start and stop compute](../how-to/create-manage-compute.md#start-or-stop-a-compute-instance). ++1. Select the name of the evaluation that completed first (*contoso-evaluate-from-flow_variant_0*) to see the evaluation details with the columns that you mapped earlier. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-view-results-detailed.png" alt-text="Screenshot of the detailed metrics results page." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-view-results-detailed.png"::: ++1. Select the name of the evaluation that completed second (*evaluation_contoso-evaluate-from-flow_variant_0*) to see the evaluation metrics: **Groundedness**, **Relevance**, and **Retrieval score**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/evaluate-view-results-metrics.png" alt-text="Screenshot of the average metrics scores." lightbox="../media/tutorials/copilot-deploy-flow/evaluate-view-results-metrics.png"::: ++For more information, see [view evaluation results](../how-to/evaluate-flow-results.md). ++## Deploy the flow ++Now that you [built a flow](#create-a-prompt-flow-from-the-playground) and completed a metrics-based [evaluation](#evaluate-the-flow-using-a-question-and-answer-evaluation-dataset), it's time to create your online endpoint for real-time inference. That means you can use the deployed flow to answer questions in real time. ++Follow these steps to deploy a prompt flow as an online endpoint from [Azure AI Studio](https://ai.azure.com). ++1. Have a prompt flow ready for deployment. If you don't have one, see [how to build a prompt flow](../how-to/flow-develop.md). +1. 
Optional: Select **Chat** to test if the flow is working correctly. Testing your flow before deployment is a recommended best practice. ++1. Select **Deploy** on the flow editor. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-from-flow.png" alt-text="Screenshot of the deploy button from a prompt flow editor." lightbox = "../media/tutorials/copilot-deploy-flow/deploy-from-flow.png"::: ++1. Provide the requested information on the **Basic Settings** page in the deployment wizard. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-basic-settings.png" alt-text="Screenshot of the basic settings page in the deployment wizard." lightbox = "../media/tutorials/copilot-deploy-flow/deploy-basic-settings.png"::: ++1. Select **Next** to proceed to the advanced settings pages. +1. On the **Advanced settings - Endpoint** page, leave the default settings and select **Next**. +1. On the **Advanced settings - Deployment** page, leave the default settings and select **Next**. +1. On the **Advanced settings - Outputs & connections** page, make sure all outputs are selected under **Included in endpoint response**. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-advanced-outputs-connections.png" alt-text="Screenshot of the advanced settings page in the deployment wizard." lightbox = "../media/tutorials/copilot-deploy-flow/deploy-advanced-outputs-connections.png"::: ++1. Select **Review + Create** to review the settings and create the deployment. +1. Select **Create** to deploy the prompt flow. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-review-create.png" alt-text="Screenshot of the review prompt flow deployment settings page." lightbox = "../media/tutorials/copilot-deploy-flow/deploy-review-create.png"::: ++For more information, see [how to deploy a flow](../how-to/flow-deploy.md). 
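Once the deployment is created, an application can call the flow over its REST scoring endpoint. The following sketch builds such a request in Python; the endpoint URL and key are placeholders you would copy from your own deployment, and the `query` and `chat_history` input names assume the flow inputs used in this tutorial.

```python
import json

# Placeholder values: copy the real scoring URL and key from your deployment.
ENDPOINT_URL = "https://<your-endpoint>.inference.ml.azure.com/score"
API_KEY = "<your-endpoint-key>"

def build_score_request(query, chat_history=None):
    """Build the JSON body and headers for a scoring call to the deployed flow."""
    body = {"query": query, "chat_history": chat_history or []}
    headers = {
        "Content-Type": "application/json",
        "Authorization": f"Bearer {API_KEY}",
    }
    return json.dumps(body), headers

payload, headers = build_score_request(
    "How many TrailWalker hiking shoes did Daniel Wilson buy?")
print(payload)

# To actually send the request (requires a real endpoint URL and key):
# import urllib.request
# req = urllib.request.Request(ENDPOINT_URL, payload.encode("utf-8"), headers)
# print(urllib.request.urlopen(req).read().decode("utf-8"))
```

The exact request schema for your deployment is shown with the code samples in the studio, so treat the body above as a starting point rather than a fixed contract.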
++ ++## Use the deployed flow ++Your copilot application can use the deployed prompt flow to answer questions in real time. You can call the deployed flow through its REST endpoint or with the SDK. ++1. To view the status of your deployment in [Azure AI Studio](https://ai.azure.com), select **Deployments** from the left navigation. Once the deployment is created successfully, you can select the deployment to view the details. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deployments-state-updating.png" alt-text="Screenshot of the prompt flow deployment state in progress." lightbox = "../media/tutorials/copilot-deploy-flow/deployments-state-updating.png"::: ++ > [!NOTE] + > If you see a message that says "Currently this endpoint has no deployments" or the **State** is still *Updating*, you might need to select **Refresh** after a couple of minutes to see the deployment. ++1. Optionally, on the details page you can change the authentication type or enable monitoring. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deploy-authentication-monitoring.png" alt-text="Screenshot of the prompt flow deployment details page." lightbox = "../media/tutorials/copilot-deploy-flow/deploy-authentication-monitoring.png"::: ++1. Select the **Consume** tab. You can see code samples and the REST endpoint that your copilot application can use to call the deployed flow. ++ :::image type="content" source="../media/tutorials/copilot-deploy-flow/deployments-score-url-samples.png" alt-text="Screenshot of the prompt flow deployment endpoint and code samples." lightbox = "../media/tutorials/copilot-deploy-flow/deployments-score-url-samples.png"::: +++## Clean up resources ++To avoid incurring unnecessary Azure costs, you should delete the resources you created in this tutorial if they're no longer needed. To manage resources, you can use the [Azure portal](https://portal.azure.com?azure-portal=true). 
++You can also [stop or delete your compute instance](../how-to/create-manage-compute.md#start-or-stop-a-compute-instance) in [Azure AI Studio](https://ai.azure.com). ++## Next steps ++* Learn more about [prompt flow](../how-to/prompt-flow.md). +* [Deploy a web app for chat on your data](./deploy-chat-web-app.md). +* [Get started building a sample copilot application with the SDK](https://github.com/azure/aistudio-copilot-sample) |
aks | Ai Toolchain Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/ai-toolchain-operator.md | Title: Deploy an AI model on Azure Kubernetes Service (AKS) with the AI toolchain operator (Preview) description: Learn how to enable the AI toolchain operator add-on on Azure Kubernetes Service (AKS) to simplify OSS AI model management and deployment. - - - azure-kubernetes-service - - ignite-2023 + Last updated 11/03/2023 |
aks | Aks Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md | Title: Migrate to Azure Kubernetes Service (AKS) description: This article shows you how to migrate to Azure Kubernetes Service (AKS). Previously updated : 05/30/2023 Last updated : 11/21/2023 In this article, we summarize migration details for: * Ensure your target Kubernetes version is within the supported window for AKS. Older versions may not be within the supported range and require a version upgrade for AKS support. For more information, see [AKS supported Kubernetes versions](./supported-kubernetes-versions.md). * If you're migrating to a newer version of Kubernetes, review the [Kubernetes version and version skew support policy](https://kubernetes.io/docs/setup/release/version-skew-policy/#supported-versions). +As part of your migration process, follow commonly used deployment and testing patterns. Testing your application before deployment helps ensure its quality, functionality, and compatibility with the target environment, and helps you identify and fix errors, bugs, or issues that might affect the performance, security, or usability of the application or underlying infrastructure. + ## Use Azure Migrate to migrate your applications to AKS Azure Migrate offers a unified platform to assess and migrate to Azure on-premises servers, infrastructure, applications, and data. For AKS, you can use Azure Migrate for the following tasks: |
aks | App Routing Dns Ssl | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing-dns-ssl.md | Title: Use an Azure DNS zone with SSL/TLS certificates from Azure Key Vault -description: Understand what Azure DNS zone and Azure Key Vault configuration options are supported with the application routing add-on for Azure Kubernetes Service. + Title: Set up advanced Ingress configurations on Azure Kubernetes Service +description: Understand the advanced configuration options that are supported with the application routing add-on for Azure Kubernetes Service. Previously updated : 11/03/2023 Last updated : 11/21/2023 -# Use an Azure DNS zone with SSL/TLS certificates from Azure Key Vault with the application routing add-on +# Set up advanced Ingress configurations with the application routing add-on An Ingress is an API object that defines rules, which allow external access to services in an Azure Kubernetes Service (AKS) cluster. When you create an Ingress object that uses the application routing add-on nginx Ingress classes, the add-on creates, configures, and manages one or more Ingress controllers in your AKS cluster. -This article shows you how to set up an advanced Ingress configuration to encrypt the traffic and use Azure DNS to manage DNS zones. +This article shows you how to set up an advanced Ingress configuration to encrypt the traffic with SSL/TLS certificates stored in an Azure Key Vault, and use Azure DNS to manage DNS zones. ## Application routing add-on with nginx features az keyvault certificate import --vault-name <KeyVaultName> -n <KeyVaultCertifica > [!IMPORTANT] > To enable the add-on to reload certificates from Azure Key Vault when they change, you should enable the [secret autorotation feature][csi-secrets-store-autorotation] of the Secret Store CSI driver with the `--enable-secret-rotation` argument. 
When autorotation is enabled, the driver updates the pod mount and the Kubernetes secret by polling for changes periodically, based on the rotation poll interval you define. The default rotation poll interval is two minutes. - ### Enable Azure Key Vault integration On a cluster with the application routing add-on enabled, use the [`az aks approuting update`][az-aks-approuting-update] command using the `--enable-kv` and `--attach-kv` arguments to enable the Azure Key Vault provider for Secrets Store CSI Driver and apply the required role assignments. To enable support for DNS zones, see the following prerequisites: > [!NOTE] > If you already have an Azure DNS Zone, you can skip this step.-> + 1. Create an Azure DNS zone using the [`az network dns zone create`][az-network-dns-zone-create] command. ```azurecli-interactive Learn about monitoring the Ingress-nginx controller metrics included with the ap [rbac-owner]: ../role-based-access-control/built-in-roles.md#owner [rbac-classic]: ../role-based-access-control/rbac-and-directory-admin-roles.md#classic-subscription-administrator-roles [app-routing-add-on-basic-configuration]: app-routing.md-[secret-store-csi-provider]: csi-secrets-store-driver.md [csi-secrets-store-autorotation]: csi-secrets-store-configuration-options.md#enable-and-disable-auto-rotation-[az-keyvault-set-policy]: /cli/azure/keyvault#az-keyvault-set-policy [azure-key-vault-overview]: ../key-vault/general/overview.md-[az-aks-addon-update]: /cli/azure/aks/addon#az-aks-addon-update [az-aks-approuting-update]: /cli/azure/aks/approuting#az-aks-approuting-update [az-aks-approuting-zone]: /cli/azure/aks/approuting/zone [az-network-dns-zone-show]: /cli/azure/network/dns/zone#az-network-dns-zone-show-[az-role-assignment-create]: /cli/azure/role/assignment#az-role-assignment-create [az-network-dns-zone-create]: /cli/azure/network/dns/zone#az-network-dns-zone-create [az-keyvault-certificate-import]: /cli/azure/keyvault/certificate#az-keyvault-certificate-import 
[az-keyvault-create]: /cli/azure/keyvault#az-keyvault-create Learn about monitoring the Ingress-nginx controller metrics included with the ap [create-an-azure-dns-zone]: #create-a-global-azure-dns-zone [azure-dns-overview]: ../dns/dns-overview.md [az-keyvault-certificate-show]: /cli/azure/keyvault/certificate#az-keyvault-certificate-show-[az-aks-enable-addons]: /cli/azure/aks/addon#az-aks-enable-addon -[az-aks-show]: /cli/azure/aks/addon#az-aks-show [prometheus-in-grafana]: app-routing-nginx-prometheus.md |
aks | App Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/app-routing.md | When the application routing add-on is disabled, some Kubernetes resources might ## Next steps -* [Configure custom ingress configurations][custom-ingress-configurations] shows how to create Ingresses with a private load balancer, configure SSL certificate integration with Azure Key Vault, and DNS management with Azure DNS. +* [Configure custom ingress configurations][custom-ingress-configurations] shows how to create an advanced Ingress configuration to encrypt the traffic and use Azure DNS to manage DNS zones. * Learn about monitoring the ingress-nginx controller metrics included with the application routing add-on with [with Prometheus in Grafana][prometheus-in-grafana] (preview) as part of analyzing the performance and usage of your application. When the application routing add-on is disabled, some Kubernetes resources might [az-aks-approuting-enable]: /cli/azure/aks/approuting#az-aks-approuting-enable [az-aks-approuting-disable]: /cli/azure/aks/approuting#az-aks-approuting-disable [az-aks-enable-addons]: /cli/azure/aks#az-aks-enable-addons-[az-aks-disable-addons]: /cli/azure/aks#az-aks-disable-addons [az-aks-install-cli]: /cli/azure/aks#az-aks-install-cli [az-aks-get-credentials]: /cli/azure/aks#az-aks-get-credentials [install-azure-cli]: /cli/azure/install-azure-cli |
aks | Auto Upgrade Node Os Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/auto-upgrade-node-os-image.md | + + Title: Auto-upgrade Node OS Images +description: Learn how to choose an upgrade channel that best supports your needs for your cluster's node OS security and maintenance. ++++ Last updated : 11/22/2023+++# Auto-upgrade node OS images ++AKS provides multiple auto-upgrade channels dedicated to timely node-level OS security updates. These channels are separate from cluster-level Kubernetes version upgrades and supersede them. ++## Interactions between node OS auto-upgrade and cluster auto-upgrade ++Node-level OS security updates are released at a faster rate than Kubernetes patch or minor version updates. The node OS auto-upgrade channel grants you flexibility and enables a customized strategy for node-level OS security updates. Then, you can choose a separate plan for cluster-level Kubernetes version [auto-upgrades][Autoupgrade]. +It's best to use both cluster-level [auto-upgrades][Autoupgrade] and the node OS auto-upgrade channel together. Scheduling can be fine-tuned by applying two separate sets of [maintenance windows][planned-maintenance] - `aksManagedAutoUpgradeSchedule` for the cluster [auto-upgrade][Autoupgrade] channel and `aksManagedNodeOSUpgradeSchedule` for the node OS auto-upgrade channel. ++## Channels for node OS image upgrades ++The selected channel determines the timing of upgrades. When making changes to node OS auto-upgrade channels, allow up to 24 hours for the changes to take effect. ++> [!NOTE] +> Node OS image auto-upgrade won't affect the cluster's Kubernetes version. It only works for a cluster in a [supported version][supported]. ++The following upgrade channels are available. You can choose one of these options: ++|Channel|Description|OS-specific behavior| +|-|-|-| +| `None`| Your nodes don't have security updates applied automatically. 
This means you're solely responsible for your security updates.|N/A| +| `Unmanaged`|OS updates are applied automatically through the OS built-in patching infrastructure. Newly allocated machines are unpatched initially. The OS's infrastructure patches them at some point.|Ubuntu and Azure Linux (CPU node pools) apply security patches through unattended upgrade/dnf-automatic roughly once per day around 06:00 UTC. Windows doesn't automatically apply security patches, so this option behaves equivalently to `None`.| +| `SecurityPatch`|This channel is in preview and requires enabling the feature flag `NodeOsUpgradeChannelPreview`. Refer to the prerequisites section for details. AKS regularly updates the node's virtual hard disk (VHD) with patches from the image maintainer labeled "security only." There might be disruptions when the security patches are applied to the nodes. When the patches are applied, the VHD is updated and existing machines are upgraded to that VHD, honoring maintenance windows and surge settings. This option incurs the extra cost of hosting the VHDs in your node resource group. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default.|Azure Linux doesn't support this channel on GPU-enabled VMs. `SecurityPatch` works on patch versions that are deprecated, so long as the minor Kubernetes version is still supported.| +| `NodeImage`|AKS updates the nodes with a newly patched VHD containing security fixes and bug fixes on a weekly cadence. The update to the new VHD is disruptive, following maintenance windows and surge settings. No extra VHD cost is incurred when choosing this option. If you use this channel, Linux [unattended upgrades][unattended-upgrades] are disabled by default. 
Node image upgrades support patch versions that are deprecated, so long as the minor Kubernetes version is still supported.| ++To set the node OS auto-upgrade channel when creating a cluster, use the *node-os-upgrade-channel* parameter, similar to the following example. ++```azurecli-interactive +az aks create --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch +``` ++To set the node OS auto-upgrade channel on an existing cluster, update the *node-os-upgrade-channel* parameter, similar to the following example. ++```azurecli-interactive +az aks update --resource-group myResourceGroup --name myAKSCluster --node-os-upgrade-channel SecurityPatch +``` ++## Update ownership and schedule ++The default cadence is the schedule used when no planned maintenance window is applied. ++|Channel|Updates ownership|Default cadence| +|---|---|---| +| `Unmanaged`|OS-driven security updates. AKS has no control over these updates.|Nightly around 06:00 UTC for Ubuntu and Azure Linux. Monthly for Windows.| +| `SecurityPatch`|AKS|Weekly.| +| `NodeImage`|AKS|Weekly.| ++## SecurityPatch channel requirements ++To use the `SecurityPatch` channel, your cluster must meet these requirements. +- Must be using API version `11-02-preview` or later ++- If using Azure CLI, the `aks-preview` CLI extension version `0.5.127` or later must be installed ++- The `NodeOsUpgradeChannelPreview` feature flag must be enabled on your subscription ++### Register NodeOsUpgradeChannelPreview ++Register the `NodeOsUpgradeChannelPreview` feature flag by using the [az feature register][az-feature-register] command, as shown in the following example: ++```azurecli-interactive +az feature register --namespace "Microsoft.ContainerService" --name "NodeOsUpgradeChannelPreview" +``` ++It takes a few minutes for the status to show *Registered*. 
Verify the registration status by using the [az feature show][az-feature-show] command: ++```azurecli-interactive +az feature show --namespace "Microsoft.ContainerService" --name "NodeOsUpgradeChannelPreview" +``` ++When the status reflects *Registered*, refresh the registration of the *Microsoft.ContainerService* resource provider by using the [az provider register][az-provider-register] command: ++```azurecli-interactive +az provider register --namespace Microsoft.ContainerService +``` ++## Node channel known bugs ++- Currently, when you set the [cluster auto-upgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS auto-upgrade channel to `NodeImage`. You can't change node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to set the node OS auto-upgrade channel value, check the [cluster auto-upgrade channel][Autoupgrade] value isn't `node-image`. ++- The `SecurityPatch` channel isn't supported on Windows OS node pools. + + > [!NOTE] + > By default, any new cluster created with an API version of `06-01-2022` or later will set the node OS auto-upgrade channel value to `NodeImage`. Any existing clusters created with an API version earlier than `06-01-2022` will have the node OS auto-upgrade channel value set to `None` by default. +++## Node OS planned maintenance windows ++Planned maintenance for the node OS auto-upgrade starts at your specified maintenance window. ++> [!NOTE] +> To ensure proper functionality, use a maintenance window of four hours or more. ++For more information on Planned Maintenance, see [Use Planned Maintenance to schedule maintenance windows for your Azure Kubernetes Service (AKS) cluster][planned-maintenance]. ++## Node OS auto-upgrades FAQ ++* How can I check the current nodeOsUpgradeChannel value on a cluster? 
++Run the `az aks show` command and check the "autoUpgradeProfile" to determine what value the `nodeOsUpgradeChannel` is set to: ++```azurecli-interactive +az aks show --resource-group myResourceGroup --name myAKSCluster --query "autoUpgradeProfile" +``` ++* How can I monitor the status of node OS auto-upgrades? ++To view the status of your node OS auto upgrades, look up [activity logs][monitor-aks] on your cluster. You can also look up specific upgrade-related events as mentioned in [Upgrade an AKS cluster][aks-upgrade]. AKS also emits upgrade-related Event Grid events. To learn more, see [AKS as an Event Grid source][aks-eventgrid]. ++* Can I change the node OS auto-upgrade channel value if my cluster auto-upgrade channel is set to `node-image` ? ++ No. Currently, when you set the [cluster auto-upgrade channel][Autoupgrade] to `node-image`, it also automatically sets the node OS auto-upgrade channel to `NodeImage`. You can't change the node OS auto-upgrade channel value if your cluster auto-upgrade channel is `node-image`. In order to be able to change the node OS auto-upgrade channel values, make sure the [cluster auto-upgrade channel][Autoupgrade] isn't `node-image`. ++<!-- LINKS --> +[planned-maintenance]: planned-maintenance.md +[release-tracker]: release-tracker.md +[az-provider-register]: /cli/azure/provider#az-provider-register +[az-feature-register]: /cli/azure/feature#az-feature-register +[az-feature-show]: /cli/azure/feature#az-feature-show +[upgrade-aks-cluster]: upgrade-cluster.md +[unattended-upgrades]: https://help.ubuntu.com/community/AutomaticSecurityUpdates +[Autoupgrade]: auto-upgrade-cluster.md +[kured]: node-updates-kured.md +[supported]: ./support-policies.md +[monitor-aks]: ./monitor-aks-reference.md +[aks-eventgrid]: ./quickstart-event-grid.md +[aks-upgrade]: ./upgrade-cluster.md |
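The ownership and cadence semantics of the channels described above can be summarized in a short sketch. This is illustrative Python only, not part of AKS or the Azure CLI: the channel names are the documented values, while the mapping structure and helper function are hypothetical conveniences for reasoning about the table.

```python
# Illustrative summary of the node OS auto-upgrade channels (not AKS code).
# Channel names are the real AKS values; ownership and default cadence mirror
# the "Update ownership and schedule" table in the article.
NODE_OS_CHANNELS = {
    "None": {"owner": "you", "cadence": None},  # you patch nodes yourself
    "Unmanaged": {"owner": "OS", "cadence": "nightly (Linux) / monthly (Windows)"},
    "SecurityPatch": {"owner": "AKS", "cadence": "weekly"},
    "NodeImage": {"owner": "AKS", "cadence": "weekly"},
}

def updates_managed_by_aks(channel):
    """Return True when AKS itself owns the update rollout for the channel."""
    return NODE_OS_CHANNELS[channel]["owner"] == "AKS"

print(updates_managed_by_aks("SecurityPatch"))  # True
print(updates_managed_by_aks("Unmanaged"))      # False
```

For the real value on a cluster, the article's `az aks show --query "autoUpgradeProfile"` command remains the source of truth.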
aks | Azure Cni Overlay | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-cni-overlay.md | You can provide outbound (egress) connectivity to the internet for Overlay pods You can configure ingress connectivity to the cluster using an ingress controller, such as Nginx or [HTTP application routing](./http-application-routing.md). You cannot configure ingress connectivity using Azure App Gateway. For details see [Limitations with Azure CNI Overlay](#limitations-with-azure-cni-overlay). +## Regional availability for ARM64 node pools ++Azure CNI Overlay is currently unavailable for ARM64 node pools in the following regions: ++- East US 2 +- France Central +- Southeast Asia +- South Central US +- West Europe +- West US 3 + ## Differences between Kubenet and Azure CNI Overlay Like Azure CNI Overlay, Kubenet assigns IP addresses to pods from an address space logically different from the VNet, but it has scaling and other limitations. The below table provides a detailed comparison between Kubenet and Azure CNI Overlay. If you don't want to assign VNet IP addresses to pods due to IP shortage, we recommend using Azure CNI Overlay. az aks create -n $clusterName -g $resourceGroup \ > - Doesn't use the dynamic pod IP allocation feature. > - Doesn't have network policies enabled. > - Doesn't use any Windows node pools with docker as the container runtime.++> [!NOTE] +> Because Routing domain is not yet supported for ARM, CNI Overlay is not yet supported on ARM-based (ARM64) processor nodes. +> > [!WARNING] > Prior to Windows OS Build 20348.1668, there was a limitation around Windows Overlay pods incorrectly SNATing packets from host network pods, which had a more detrimental effect for clusters upgrading to Overlay. To avoid this issue, **use Windows OS Build greater than or equal to 20348.1668**. |
aks | Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/best-practices.md | Title: Best practices for Azure Kubernetes Service (AKS) description: Collection of the cluster operator and developer best practices to build and manage applications in Azure Kubernetes Service (AKS) Previously updated : 03/07/2023 Last updated : 11/21/2023 Building and running applications successfully in Azure Kubernetes Service (AKS) * Cluster and pod security. * Business continuity and disaster recovery. -The AKS product group, engineering teams, and field teams (including global black belts [GBBs]) contributed to, wrote, and grouped the following best practices and conceptual articles. Their purpose is to help cluster operators and developers better understand the concepts above and implement the appropriate features. +The AKS product group, engineering teams, and field teams (including global black belts (GBBs)) contributed to, wrote, and grouped the following best practices and conceptual articles. Their purpose is to help cluster operators and developers better understand the concepts above and implement the appropriate features. ## Cluster operator best practices If you're a cluster operator, work with application owners and developers to understand their needs. Then, you can use the following best practices to configure your AKS clusters to fit your needs. +An important practice that you should include as part of your application development and deployment process is remembering to follow commonly used deployment and testing patterns. Testing your application before deployment is an important step to ensure its quality, functionality, and compatibility with the target environment. It can help you identify and fix any errors, bugs, or issues that might affect the performance, security, or usability of the application or underlying infrastructure. 
+ ### Multi-tenancy * [Best practices for cluster isolation](operator-best-practices-cluster-isolation.md) |
aks | Confidential Containers Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/confidential-containers-overview.md | The following are considerations with this preview of Confidential Containers: * Pulling container images from a private container registry or container images that originate from a private container registry in a Confidential Containers pod manifest isn't supported in this release. * Version 1 container images aren't supported. * Updates to secrets and ConfigMaps aren't reflected in the guest.-* Ephemeral containers and other troubleshooting methods require a policy modification and redeployment. It includes `exec` in container -log output from containers. `stdio` (ReadStreamRequest and WriteStreamRequest) is enabled. +* Ephemeral containers and other troubleshooting methods like `exec` into a container, +log outputs from containers, and `stdio` (ReadStreamRequest and WriteStreamRequest) require a policy modification and redeployment. * The policy generator tool doesn't support cronjob deployment types.-* Due to container image layer measurements being encoded in the security policy, we don't recommend using the `latest` tag when specifying containers. It's also a restriction with the policy generator tool. +* Due to container image layer measurements being encoded in the security policy, we don't recommend using the `latest` tag when specifying containers. * Services, Load Balancers, and EndpointSlices only support the TCP protocol. * All containers in all pods on the clusters must be configured to `imagePullPolicy: Always`. * The policy generator only supports pods that use IPv4 addresses. log output from containers. `stdio` (ReadStreamRequest and WriteStreamRequest) i It's important you understand the memory and processor resource allocation behavior in this release. * CPU: The shim assigns one vCPU to the base OS inside the pod. 
If no resource `limits` are specified, the workloads don't have separate CPU shares assigned, and the vCPU is shared with that workload. If CPU limits are specified, CPU shares are explicitly allocated for workloads.-* Memory: The Kata-CC handler uses 2 GB memory for the UVM OS and X MB memory for containers based on resource `limits` if specified (resulting in a 2-GB VM when no limit is given, without implicit memory for containers). The [Kata][kata-technical-documentation] handler uses 256 MB base memory for the UVM OS and X MB memory when resource `limits` are specified. If limits are unspecified, an implicit limit of 1,792 MB is added resulting in a 2 GB VM and 1,792 MB implicit memory for containers. +* Memory: The Kata-CC handler uses 2 GB memory for the UVM OS and X MB additional memory where X is the resource `limits` if specified in the YAML manifest (resulting in a 2-GB VM when no limit is given, without implicit memory for containers). The [Kata][kata-technical-documentation] handler uses 256 MB base memory for the UVM OS and X MB additional memory when resource `limits` are specified in the YAML manifest. If limits are unspecified, an implicit limit of 1,792 MB is added, resulting in a 2-GB VM and 1,792 MB implicit memory for containers. In this release, specifying resource requests in the pod manifests isn't supported. The Kata container ignores resource requests from the pod YAML manifest, and as a result, containerd doesn't pass the requests to the shim. Use resource `limits` instead of resource `requests` to allocate memory or CPU resources for workloads or containers. 
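The memory sizing rules above can be sketched as a small calculation. This is an illustrative Python sketch of the documented behavior, not AKS code; the function name is hypothetical and values are in MB.

```python
# Illustrative sketch of the UVM memory sizing described above (not AKS code).
# limit_mb is the container's resource limit from the pod manifest, or None
# when no limit is set.
def uvm_memory_mb(handler, limit_mb=None):
    if handler == "kata-cc":
        # 2 GB for the UVM OS plus the container limit if one is specified.
        # With no limit, the result is a 2-GB VM with no implicit container memory.
        return 2048 + (limit_mb or 0)
    if handler == "kata":
        # 256 MB base for the UVM OS; an implicit 1,792 MB container limit is
        # added when none is specified, also yielding a 2-GB VM.
        return 256 + (limit_mb if limit_mb is not None else 1792)
    raise ValueError(f"unknown handler: {handler}")

print(uvm_memory_mb("kata-cc", None))  # 2048
print(uvm_memory_mb("kata", None))     # 2048
print(uvm_memory_mb("kata", 512))      # 768
```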
With the local container filesystem backed by VM memory, writing to the containe [pod-sandboxing-overview]: use-pod-sandboxing.md [azure-dedicated-hosts]: ../virtual-machines/dedicated-hosts.md [deploy-confidential-containers-default-aks]: deploy-confidential-containers-default-policy.md-[confidential-containers-security-policy]: ../confidential-computing/confidential-containers-aks-security-policy.md +[confidential-containers-security-policy]: ../confidential-computing/confidential-containers-aks-security-policy.md |
aks | Coredns Custom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/coredns-custom.md | Sudden spikes in DNS traffic within AKS clusters are a common occurrence due to CoreDNS uses [horizontal cluster proportional autoscaler][cluster-proportional-autoscaler] for pod auto scaling. The `coredns-autoscaler` ConfigMap can be edited to configure the scaling logic for the number of CoreDNS pods. The `coredns-autoscaler` ConfigMap currently supports two different ConfigMap key values: `linear` and `ladder` which correspond to two supported control modes. The `linear` controller yields a number of replicas in [min,max] range equivalent to `max( ceil( cores * 1/coresPerReplica ) , ceil( nodes * 1/nodesPerReplica ) )`. The `ladder` controller calculates the number of replicas by consulting two different step functions, one for core scaling and another for node scaling, yielding the max of the two replica values. For more information on the control modes and ConfigMap format, please consult the [upstream documentation][cluster-proportional-autoscaler-control-patterns]. +> [!IMPORTANT] +> A minimum of 2 CoreDNS pod replicas per cluster is recommended. Configuring a minimum of 1 CoreDNS pod replica may result in failures during operations which require node draining, such as cluster upgrade operations. + To retrieve the `coredns-autoscaler` ConfigMap, you can run the `kubectl get configmap coredns-autoscaler -n kube-system -o yaml` command which will return the following: ```yaml |
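The `linear` control mode's formula described above can be checked with a short sketch. This is illustrative Python, not part of the cluster-proportional-autoscaler; the parameter names mirror the ConfigMap keys, and the `min_replicas`/`max_replicas` defaults here are assumptions for the example (in practice they come from your ConfigMap, with a minimum of 2 recommended per the note above).

```python
import math

def linear_replicas(cores, nodes, cores_per_replica, nodes_per_replica,
                    min_replicas=2, max_replicas=100):
    """Replica count for the cluster-proportional-autoscaler `linear` mode:
    max(ceil(cores / coresPerReplica), ceil(nodes / nodesPerReplica)),
    clamped to the [min, max] range."""
    wanted = max(math.ceil(cores / cores_per_replica),
                 math.ceil(nodes / nodes_per_replica))
    return min(max(wanted, min_replicas), max_replicas)

# 48 cores / 12 nodes with coresPerReplica=256, nodesPerReplica=16:
print(linear_replicas(48, 12, 256, 16))    # 2 (the min of 2 applies)
print(linear_replicas(1024, 80, 256, 16))  # 5 (node scaling dominates)
```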
aks | Cost Analysis | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/cost-analysis.md | description: Learn how to use cost analysis to surface granular cost allocation -- - ignite-2023 + Last updated 11/06/2023 |
aks | Enable Authentication Microsoft Entra Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/enable-authentication-microsoft-entra-id.md | Title: Enable Managed Identity Authentication -description: Learn how to enable Microsoft Entra ID on AKS with kubelogin. Connect your clusters to authenticate Azure users with credentials or managed roles. + Title: Enable managed identity authentication on Azure Kubernetes Service +description: Learn how to enable Microsoft Entra ID on Azure Kubernetes Service with kubelogin and authenticate Azure users with credentials or managed roles. Previously updated : 11/13/2023 Last updated : 11/22/2023 -# Enable Azure Managed Identity authentication for Kubernetes clusters with kubelogin +# Enable Azure managed identity authentication for Kubernetes clusters with kubelogin The AKS-managed Microsoft Entra integration simplifies the Microsoft Entra integration process. Previously, you were required to create a client and server app, and the Microsoft Entra tenant had to grant Directory Read permissions. Now, the AKS resource provider manages the client and server apps for you. Cluster administrators can configure Kubernetes role-based access control (Kuber Learn more about the Microsoft Entra integration flow in the [Microsoft Entra documentation](concepts-identity.md#azure-ad-integration). -## Limitations of integration ## Limitations -Azure Managed ID on AKS has certain limits to account for before you make a decision. -* The integration can't be disabled once added. +The following constraints apply when integrating Azure managed identity authentication on AKS. ++* Integration can't be disabled once added. * Downgrades from an integrated cluster to the legacy Microsoft Entra ID clusters aren't supported. * Clusters without Kubernetes RBAC support are unable to add the integration. ## Before you begin -There are a few requirements to properly install the aks addon for managed identity. 
+The following requirements must be met to properly install the AKS addon for managed identity. + * You have Azure CLI version 2.29.0 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI](/cli/azure/install-azure-cli). * You need `kubectl` with a minimum version of [1.18.1](https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.18.md#v1181) or [`kubelogin`][kubelogin]. With the Azure CLI and the Azure PowerShell module, these two commands are included and automatically managed. This means they're upgraded by default, and running `az aks install-cli` isn't required or recommended. If you're using an automated pipeline, you need to manage upgrades for the correct or latest version. The difference between the minor versions of Kubernetes and `kubectl` shouldn't be more than *one* version. Otherwise, authentication issues can occur on mismatched versions. * If you're using [helm](https://github.com/helm/helm), you need a minimum version of helm 3.3. There are some non-interactive scenarios that don't support `kubectl`. In these ## Troubleshoot access issues > [!IMPORTANT]-> The steps described in this section bypass the normal Microsoft Entra group authentication. Use them only in an emergency. +> The steps described in this section use an alternative authentication method in place of the normal Microsoft Entra group authentication. Use this option only in an emergency. -If you lack admin access to a valid Microsoft Entra group, you can follow this workaround. 
Sign in with an account that is a member of the [Azure Kubernetes Service Cluster Admin](../role-based-access-control/built-in-roles.md#azure-kubernetes-service-cluster-admin-role) role and grant your group or tenant admin credentials to access your cluster. ## Next steps |
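The kubectl/Kubernetes version-skew requirement mentioned above (no more than one minor version apart) can be expressed as a quick check. This is an illustrative Python helper, not part of any Azure or Kubernetes tooling; the function name is hypothetical.

```python
# Illustrative check for the one-minor-version skew guidance above (not part
# of kubectl or the Azure CLI). Versions look like "1.27.3".
def minor_skew_ok(cluster_version, client_version):
    """True when kubectl's minor version is within one of the cluster's."""
    cluster_minor = int(cluster_version.split(".")[1])
    client_minor = int(client_version.split(".")[1])
    return abs(cluster_minor - client_minor) <= 1

print(minor_skew_ok("1.27.3", "1.28.0"))  # True
print(minor_skew_ok("1.25.6", "1.28.0"))  # False
```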
aks | Planned Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/planned-maintenance.md | az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl AKS auto-upgrade needs a certain amount of time to take the maintenance window into consideration. We recommend at least 24 hours between the creation/update of the maintenance configuration, and when it's scheduled to start. + Also, ensure your cluster is running when the planned maintenance window starts. If the cluster is stopped, its control plane is deallocated and no operations can be performed. + * AKS auto-upgrade didn't upgrade all my agent pools - or one of the pools was upgraded outside of the maintenance window? If an agent pool fails to upgrade (for example, because of Pod Disruption Budgets preventing it from upgrading) or is in a Failed state, it might be upgraded later outside of the maintenance window. This scenario is called "catch-up upgrade" and avoids leaving agent pools on a different version than the AKS control plane. az aks maintenanceconfiguration delete -g myResourceGroup --cluster-name myAKSCl We recommend setting the [Node OS security updates][node-image-auto-upgrade] schedule to a weekly cadence if you're using the `NodeImage` channel, since a new node image ships every week, and to a daily cadence if you opt in to the `SecurityPatch` channel to receive daily security updates. Set the [auto-upgrade][auto-upgrade] schedule to a monthly cadence to stay on top of the Kubernetes N-2 [support policy][aks-support-policy]. + ## Next steps - To get started with upgrading your AKS cluster, see [Upgrade an AKS cluster][aks-upgrade] |
aks | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/policy-reference.md | Title: Built-in policy definitions for Azure Kubernetes Service description: Lists Azure Policy built-in policy definitions for Azure Kubernetes Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
aks | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS) description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
aks | Stop Cluster Upgrade Api Breaking Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/stop-cluster-upgrade-api-breaking-changes.md | You can also check past API usage by enabling [Container Insights][container-ins > [!NOTE] > `Z` is the zone designator for the zero UTC/GMT offset, also known as 'Zulu' time. This example sets the end of the window to `13:00:00` GMT. For more information, see [Combined date and time representations](https://wikipedia.org/wiki/ISO_8601#Combined_date_and_time_representations). +* Once the previous command has succeeded, you can retry the upgrade operation. ++ ```azurecli-interactive + az aks upgrade --name myAKSCluster --resource-group myResourceGroup --kubernetes-version <KUBERNETES_VERSION> + ``` ++ ## Next steps This article showed you how to stop AKS cluster upgrades automatically on API breaking changes. To learn more about more upgrade options for AKS clusters, see [Upgrade options for Azure Kubernetes Service (AKS) clusters](./upgrade-cluster.md). |
aks | Upgrade Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade-cluster.md | To configure automatic upgrades, see the following articles: * [Automatically upgrade an AKS cluster](./auto-upgrade-cluster.md) * [Use Planned Maintenance to schedule and control upgrades for your AKS cluster](./planned-maintenance.md)-* [Stop AKS cluster upgrades automatically on API breaking changes (Preview)](./stop-cluster-upgrade-api-breaking-changes.md) +* [Stop AKS cluster upgrades automatically on API breaking changes](./stop-cluster-upgrade-api-breaking-changes.md) * [Automatically upgrade AKS cluster node operating system images](./auto-upgrade-node-image.md) * [Apply security updates to AKS nodes automatically using GitHub Actions](./node-upgrade-github-actions.md) |
aks | Upgrade | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md | description: Learn about the various upgradeable components of an Azure Kubernet Previously updated : 11/11/2022 Last updated : 11/21/2023 # Upgrading Azure Kubernetes Service clusters and node pools The following table summarizes the details of updating each component: |Node image version upgrade|**Linux**: weekly<br>**Windows**: monthly|Yes|Automatic, Manual|[AKS node image upgrade][node-image-upgrade]| |Security patches and hot fixes for node images|As-necessary|||[AKS node security patches][node-security-patches]| +An important practice that you should include as part of your upgrade process is remembering to follow commonly used deployment and testing patterns. Testing an upgrade in a development or test environment before deployment in production is an important step to ensure application functionality and compatibility with the target environment. It can help you identify and fix any errors, bugs, or issues that might affect the performance, security, or usability of the application or underlying infrastructure. + ## Automatic upgrades Automatic upgrades can be performed through [auto upgrade channels][auto-upgrade] or via [GitHub Actions][gh-actions-upgrade]. |
api-management | Credentials How To User Delegated | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/credentials-how-to-user-delegated.md | description: Learn how to configure a connection with user-delegated permissions + Last updated 11/14/2023 |
api-management | Grpc Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/grpc-api.md | |
api-management | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/policy-reference.md | Title: Built-in policy definitions for Azure API Management description: Lists Azure Policy built-in policy definitions for Azure API Management. These built-in policy definitions provide approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
api-management | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure API Management description: Lists Azure Policy Regulatory Compliance controls available for Azure API Management. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
app-service | Configure Language Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java.md | More configuration may be necessary for encrypting your JDBC connection with cer - [PostgreSQL](https://jdbc.postgresql.org/documentation/ssl/) - [SQL Server](/sql/connect/jdbc/connecting-with-ssl-encryption)-- [MySQL](https://dev.mysql.com/doc/connector-j/5.1/en/connector-j-reference-using-ssl.html) - [MongoDB](https://mongodb.github.io/mongo-java-driver/3.4/driver/tutorials/ssl/) - [Cassandra](https://docs.datastax.com/en/developer/java-driver/4.3/) |
app-service | Deploy Azure Pipelines | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-azure-pipelines.md | Title: Configure CI/CD with Azure Pipelines description: Learn how to deploy your code to Azure App Service from a CI/CD pipeline with Azure Pipelines. Previously updated : 09/13/2022 Last updated : 12/13/2023 ms. Use [Azure Pipelines](/azure/devops/pipelines/) to automatically deploy your web YAML pipelines are defined using a YAML file in your repository. A step is the smallest building block of a pipeline and can be a script or task (prepackaged script). [Learn about the key concepts and components that make up a pipeline](/azure/devops/pipelines/get-started/key-pipelines-concepts). -You'll use the [Azure Web App task](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app) to deploy to Azure App Service in your pipeline. For more complicated scenarios such as needing to use XML parameters in your deploy, you can use the [Azure App Service Deploy task](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment). +You'll use the [Azure Web App task (`AzureWebApp`)](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app) to deploy to Azure App Service in your pipeline. For more complicated scenarios such as needing to use XML parameters in your deploy, you can use the [Azure App Service deploy task (AzureRmWebAppDeployment)](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment). ## Prerequisites You'll use the [Azure Web App task](/azure/devops/pipelines/tasks/deploy/azure-r - Java: [Create a Java app on Azure App Service](quickstart-java.md) - Python: [Create a Python app in Azure App Service](quickstart-python.md) --### Create your pipeline +## 1. Create a pipeline for your stack The code examples in this section assume you're deploying an ASP.NET web app. You can adapt the instructions for other frameworks. 
-Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/ecosystems/ecosystems). +Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/ecosystems/ecosystems). # [YAML](#tab/yaml/) Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/eco 1. When your new pipeline appears, take a look at the YAML to see what it does. When you're ready, select **Save and run**. -### Add the Azure Web App task +# [Classic](#tab/classic/) ++To get started: ++1. Create a pipeline and select the **ASP.NET Core** template. This selection automatically adds the tasks required to build the code in the sample repository. ++2. Save the pipeline and queue a build to see it in action. ++ The **ASP.NET Core** pipeline template publishes the deployment ZIP file as an Azure artifact for the deployment task later. ++-- ++## 2. Add the deployment task ++# [YAML](#tab/yaml/) ++1. Select the end of the YAML file, then select **Show assistant**. 1. Use the Task assistant to add the [Azure Web App](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app) task. :::image type="content" source="media/deploy-azure-pipelines/azure-web-app-task.png" alt-text="Screenshot of Azure web app task."::: -1. Select **Azure Resource Manager** for the **Connection type** and choose your **Azure subscription**. Make sure to **Authorize** your connection. + Alternatively, you can add the [Azure App Service deploy (AzureRmWebAppDeployment)](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment) task. -1. Select **Web App on Linux** and enter your `azureSubscription`, `appName`, and `package`. Your complete YAML should look like this. +1. Choose your **Azure subscription**. Make sure to **Authorize** your connection. The authorization creates the required service connection. ++1. Select the **App type**, **App name**, and **Runtime stack** based on your App Service app. Your complete YAML should look similar to the following code. 
```yaml variables: Learn more about [Azure Pipelines ecosystem support](/azure/devops/pipelines/eco publishWebProjects: true - task: AzureWebApp@1 inputs:- azureSubscription: '<Azure service connection>' + azureSubscription: '<service-connection-name>' appType: 'webAppLinux'- appName: '<Name of web app>' + appName: '<app-name>' package: '$(System.DefaultWorkingDirectory)/**/*.zip' ``` - * **azureSubscription**: your Azure subscription. - * **appName**: the name of your existing app service. - * **package**: the file path to the package or a folder containing your app service contents. Wildcards are supported. + * **azureSubscription**: Name of the authorized service connection to your Azure subscription. + * **appName**: Name of your existing app. + * **package**: File path to the package or a folder containing your app service contents. Wildcards are supported. # [Classic](#tab/classic/) To get started: -1. Create a pipeline and select the **ASP.NET Core** template. This selection automatically adds the tasks required to build the code in the sample repository. --2. Save the pipeline and queue a build to see it in action. --3. Create a release pipeline and select the **Azure App Service Deployment** template for your stage. - This automatically adds the necessary tasks. --4. Link the build pipeline as an artifact for this release pipeline. Save the release pipeline and create a release to see it in action. ----Now you're ready to read through the rest of this article to learn some of the more common changes that people make to customize an Azure Web App deployment. --## Use the Azure Web App task +1. Create a [release pipeline](/azure/devops/pipelines/release/) by selecting **Releases** from the left menu, and then select **New pipeline**. -# [YAML](#tab/yaml/) -The Azure Web App Deploy task is the simplest way to deploy to an Azure Web App. 
By default, your deployment happens to the root application in the Azure Web App. + > [!NOTE] + > If you're deploying a Node.js app to App Service on Windows, select the **Deploy Node.js App to Azure App Service** template. The only difference between these templates is that the Node.js template configures the task to generate a **web.config** file containing a parameter that starts the **iisnode** service. -The [Azure App Service Deploy task](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment) allows you to modify configuration settings inside web packages and XML parameters files. +1. To link this release pipeline to the Azure artifact from the previous step, select **Add an artifact** > **Build**. -### Deploy a Web Deploy package +1. In **Source (build pipeline)**, select the build pipeline you created in the previous section, then select **Add**. -To deploy a .zip Web Deploy package (for example, from an ASP.NET web app) to an Azure Web App, -add the following snippet to your *azure-pipelines.yml* file: +1. Save the release pipeline and create a release to see it in action. -```yaml -- task: AzureWebApp@1- inputs: - azureSubscription: '<Azure service connection>' - appName: '<Name of web app>' - package: $(System.DefaultWorkingDirectory)/**/*.zip -``` --* **azureSubscription**: your Azure subscription. -* **appName**: the name of your existing app service. -* **package**: the file path to the package or a folder containing your app service contents. Wildcards are supported. + -The snippet assumes that the build steps in your YAML file produce the zip archive in the `$(System.DefaultWorkingDirectory)` folder on your agent. +### Example: Deploy a .NET app -For information on Azure service connections, see the [following section](#endpoint). # [YAML](#tab/yaml/) -### Deploy a .NET app +To deploy a .zip web package (for example, from an ASP.NET web app) to an Azure Web App, use the following snippet to deploy the build to an app. 
-if you're building a [.NET Core app](/azure/devops/pipelines/ecosystems/dotnet-core), use the following snippet to deploy the build to an app. ```yaml variables: steps: publishWebProjects: true - task: AzureWebApp@1 inputs:- azureSubscription: '<Azure service connection>' + azureSubscription: '<service-connection-name>' appType: 'webAppLinux'- appName: '<Name of web app>' + appName: '<app-name>' package: '$(System.DefaultWorkingDirectory)/**/*.zip' ``` steps: * **appName**: the name of your existing app service. * **package**: the file path to the package or a folder containing your app service contents. Wildcards are supported. - # [Classic](#tab/classic/) -The simplest way to deploy to an Azure Web App is to use the **Azure Web App** task. -To deploy to any Azure App service (Web app for Windows, Linux, container, Function app or web jobs), use the **Azure App Service Deploy** task. -This task is automatically added to the release pipeline when you select one of the prebuilt deployment templates for Azure App Service deployment. -Templates exist for apps developed in various programming languages. If you can't find a template for your language, select the generic **Azure App Service Deployment** template. --When you link the artifact in your release pipeline to a build that compiles and publishes the web package, -it's automatically downloaded and placed into the `$(System.DefaultWorkingDirectory)` folder on the agent as part of the release. -This is where the task picks up the web package for deployment. ----<a name="endpoint"></a> --## Use a service connection +For classic pipelines, it's easiest to define build and release stages in separate pages (**Pipelines** and **Releases**, respectively). In general, you: -To deploy to Azure App Service, you'll need to use an Azure Resource Manager [service connection](/azure/devops/pipelines/library/service-endpoints). 
The Azure service connection stores the credentials to connect from Azure Pipelines or Azure DevOps Server to Azure. +- In the **Pipelines** page, build and test your app by using the template of your choice, such as **ASP.NET Core**, **Node.js with Grunt**, **Maven**, or others, and publish an artifact. +- In the **Release** page, use the generic **Azure App Service deployment** template to deploy the artifact. -Learn more about [Azure Resource Manager service connections](/azure/devops/pipelines/library/connect-to-azure). If your service connection isn't working as expected, see [Troubleshooting service connections](/azure/devops/pipelines/release/azure-rm-endpoint). --# [YAML](#tab/yaml/) --You'll need an Azure service connection for the `AzureWebApp` task. The Azure service connection stores the credentials to connect from Azure Pipelines to Azure. See [Create an Azure service connection](/azure/devops/pipelines/library/connect-to-azure). --# [Classic](#tab/classic/) --For Azure DevOps Services, the easiest way to get started with this task is to be signed in as a user who owns both the Azure DevOps Services organization and the Azure subscription. In this case, you won't have to manually create the service connection. --Otherwise, to learn how to create an Azure service connection, see [Create an Azure service connection](/azure/devops/pipelines/library/connect-to-azure). +There may be templates for specific programming languages to choose from. -## Deploy to a virtual application +## Example: Deploy to a virtual application # [YAML](#tab/yaml/) -By default, your deployment happens to the root application in the Azure Web App. You can deploy to a specific virtual application by using the `VirtualApplication` property of the `AzureRmWebAppDeployment` task: +By default, your deployment happens to the root application in the Azure Web App. 
You can deploy to a specific virtual application by using the `VirtualApplication` property of the Azure App Service deploy (`AzureRmWebAppDeployment`) task: ```yaml - task: AzureRmWebAppDeployment@4 By default, your deployment happens to the root application in the Azure Web App VirtualApplication: '<name of virtual application>' ``` -* **VirtualApplication**: the name of the Virtual Application that has been configured in the Azure portal. For more information, see [Configure an App Service app in the Azure portal +* **VirtualApplication**: the name of the Virtual Application that's configured in the Azure portal. For more information, see [Configure an App Service app in the Azure portal ](./configure-common.md). # [Classic](#tab/classic/) -By default, your deployment happens to the root application in the Azure Web App. If you want to deploy to a specific virtual application, -enter its name in the **Virtual Application** property of the **Azure App Service Deploy** task. +By default, your deployment happens to the root application in the Azure Web App. If you want to deploy to a specific virtual application, enter its name in the **Virtual Application** property of the **Azure App Service deploy** task. -## Deploy to a slot +## Example: Deploy to a slot # [YAML](#tab/yaml/) -You can configure the Azure Web App to have multiple slots. Slots allow you to safely deploy your app and test it before making it available to your customers. 
- The following example shows how to deploy to a staging slot, and then swap to a production slot: ```yaml - task: AzureWebApp@1 inputs:- azureSubscription: '<Azure service connection>' + azureSubscription: '<service-connection-name>' appType: webAppLinux- appName: '<name of web app>' + appName: '<app-name>' deployToSlotOrASE: true resourceGroupName: '<name of resource group>' slotName: staging The following example shows how to deploy to a staging slot, and then swap to a - task: AzureAppServiceManage@0 inputs:- azureSubscription: '<Azure service connection>' + azureSubscription: '<service-connection-name>' appType: webAppLinux- WebAppName: '<name of web app>' + WebAppName: '<app-name>' ResourceGroupName: '<name of resource group>' SourceSlot: staging SwapWithProduction: true The following example shows how to deploy to a staging slot, and then swap to a # [Classic](#tab/classic/) -You can configure the Azure Web App to have multiple slots. Slots allow you to safely deploy your app and test it before making it available to your customers. --Use the option **Deploy to Slot or App Service Environment** in the **Azure Web App** task to specify the slot to deploy to. +Use the option **Deploy to Slot or App Service Environment** in the **Azure Web App** task to specify the slot to deploy to. To swap the slots, use the **Azure App Service manage** task. -## Deploy to multiple web apps +## Example: Deploy to multiple web apps # [YAML](#tab/yaml/) -You can use [jobs](/azure/devops/pipelines/process/phases) in your YAML file to set up a pipeline of deployments. -By using jobs, you can control the order of deployment to multiple web apps. +You can use [jobs](/azure/devops/pipelines/process/phases) in your YAML file to set up a pipeline of deployments. By using jobs, you can control the order of deployment to multiple web apps. 
```yaml jobs: jobs: # deploy to Azure Web App staging - task: AzureWebApp@1 inputs:- azureSubscription: '<Azure service connection>' + azureSubscription: '<service-connection-name>' appType: <app type>- appName: '<name of test stage web app>' + appName: '<staging-app-name>' deployToSlotOrASE: true- resourceGroupName: <resource group name> + resourceGroupName: <group-name> slotName: 'staging' package: '$(Build.ArtifactStagingDirectory)/**/*.zip' jobs: - task: AzureWebApp@1 inputs:- azureSubscription: '<Azure service connection>' + azureSubscription: '<service-connection-name>' appType: <app type>- appName: '<name of test stage web app>' - resourceGroupName: <resource group name> + appName: '<production-app-name>' + resourceGroupName: <group-name> package: '$(Pipeline.Workspace)/**/*.zip' ``` # [Classic](#tab/classic/) -If you want to deploy to multiple web apps, add stages to your release pipeline. -You can control the order of deployment. To learn more, see [Stages](/azure/devops/pipelines/process/stages). +If you want to deploy to multiple web apps, add stages to your release pipeline. You can control the order of deployment. To learn more, see [Stages](/azure/devops/pipelines/process/stages). -## Make configuration changes --For most language stacks, [app settings](./configure-common.md?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json#configure-app-settings) and [connection strings](./configure-common.md?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json#configure-connection-strings) can be set as environment variables at runtime. --App settings can also be resolved from Key Vault using [Key Vault references](./app-service-key-vault-references.md). +## Example: Make variable substitutions -For ASP.NET and ASP.NET Core developers, setting app settings in App Service are like setting them in `<appSettings>` in Web.config. -You might want to apply a specific configuration for your web app target before deploying to it. 
-This is useful when you deploy the same build to multiple web apps in a pipeline. -For example, if your Web.config file contains a connection string named `connectionString`, -you can change its value before deploying to each web app. You can do this either by applying -a Web.config transformation or by substituting variables in your Web.config file. +For most language stacks, [app settings](./configure-common.md?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json#configure-app-settings) and [connection strings](./configure-common.md?toc=%2fazure%2fapp-service%2fcontainers%2ftoc.json#configure-connection-strings) can be set as environment variables at runtime. -**Azure App Service Deploy task** allows users to modify configuration settings in configuration files (*.config files) inside web packages and XML parameters files (parameters.xml), based on the stage name specified. --> [!NOTE] -> File transforms and variable substitution are also supported by the separate [File Transform task](/azure/devops/pipelines/tasks/utility/file-transform) for use in Azure Pipelines. -You can use the File Transform task to apply file transformations and variable substitutions on any configuration and parameters files. ---### Variable substitution +But there are other reasons you might want to make variable substitutions in your *Web.config* file. In this example, your Web.config file contains a connection string named `connectionString`. You can change its value before deploying to each web app. You can do this either by applying a Web.config transformation or by substituting variables in your Web.config file. 
# [YAML](#tab/yaml/) -The following snippet shows an example of variable substitution: +The following snippet shows an example of variable substitution by using the Azure App Service deploy (`AzureRmWebAppDeployment`) task: ```yaml jobs: To change `connectionString` by using variable substitution: -## Deploying conditionally +## Example: Deploy conditionally # [YAML](#tab/yaml/) -To do this in YAML, you can use one of these techniques: +To do this in YAML, you can use one of the following techniques: * Isolate the deployment steps into a separate job, and add a condition to that job. * Add a condition to the step. The following example shows how to use step conditions to deploy only builds tha - task: AzureWebApp@1 condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main')) inputs:- azureSubscription: '<Azure service connection>' - appName: '<name of web app>' + azureSubscription: '<service-connection-name>' + appName: '<app-name>' ``` To learn more about conditions, see [Specify conditions](/azure/devops/pipelines/process/conditions). To learn more, see [Release, branch, and stage triggers](/azure/devops/pipelines -## (Classic) Deploy a release pipeline +## Example: Deploy using Web Deploy ++The Azure App Service deploy (`AzureRmWebAppDeployment`) task can deploy to App Service using Web Deploy. ++# [YAML](#tab/yaml/) ++```yml +trigger: +- main ++pool: + vmImage: windows-latest ++variables: + buildConfiguration: 'Release' -You can use a release pipeline to pick up the artifacts published by your build and then deploy them to your Azure web site. 
+steps: +- script: dotnet build --configuration $(buildConfiguration) + displayName: 'dotnet build $(buildConfiguration)' +- task: DotNetCoreCLI@2 + inputs: + command: 'publish' + publishWebProjects: true + arguments: '--configuration $(buildConfiguration)' + zipAfterPublish: true +- task: AzureRmWebAppDeployment@4 + inputs: + ConnectionType: 'AzureRM' + azureSubscription: '<service-connection-name>' + appType: 'webApp' + WebAppName: '<app-name>' + packageForLinux: '$(System.DefaultWorkingDirectory)/**/*.zip' + enableCustomDeployment: true + DeploymentType: 'webDeploy' +``` -1. Do one of the following to start creating a release pipeline: +# [Classic](#tab/classic/) - * If you've just completed a CI build, choose the link (for example, _Build 20170815.1_) - to open the build summary. Then choose **Release** to start a new release pipeline that's automatically linked to the build pipeline. +In the release pipeline, assuming you're using the **Azure App Service deployment** template: - * Open the **Releases** tab in **Azure Pipelines**, open the **+** dropdown - in the list of release pipelines, and choose **Create release pipeline**. +1. Select the **Tasks** tab, then select **Deploy Azure App Service**. This is the `AzureRmWebAppDeployment` task. -1. The easiest way to create a release pipeline is to use a template. If you're deploying a Node.js app, select the **Deploy Node.js App to Azure App Service** template. - Otherwise, select the **Azure App Service Deployment** template. Then choose **Apply**. +1. In the dialog, make sure that **Connection type** is set to **Azure Resource Manager**. - > [!NOTE] - > The only difference between these templates is that Node.js template configures the task to generate a **web.config** file containing a parameter that starts the **iisnode** service. +1. In the dialog, expand **Additional Deployment Options** and select **Select deployment method**. Make sure that **Web Deploy** is selected as the deployment method. -1. 
If you created your new release pipeline from a build summary, check that the build pipeline and artifact - is shown in the **Artifacts** section on the **Pipeline** tab. If you created a new release pipeline from - the **Releases** tab, choose the **+ Add** link and select your build artifact. +1. Save the release pipeline. -1. Choose the **Continuous deployment** icon in the **Artifacts** section, check that the - continuous deployment trigger is enabled, and add a filter to include the **main** branch. +> [!NOTE] +> With the [`AzureRmWebAppDeployment@3`](/azure/devops/pipelines/tasks/reference/azure-rm-web-app-deployment-v3) and [`AzureRmWebAppDeployment@4`](/azure/devops/pipelines/tasks/reference/azure-rm-web-app-deployment-v4) tasks, you should use the **Azure Resource Manager** connection type, or `AzureRM`, when deploying with Web Deploy. It uses publishing profiles for deployment when basic authentication is enabled for your app, but it uses the more secure Microsoft Entra ID authentication when [basic authentication is disabled](configure-basic-auth-disable.md). - > [!NOTE] - > Continuous deployment isn't enabled by default when you create a new release pipeline from the **Releases** tab. + -1. Open the **Tasks** tab and, with **Stage 1** selected, configure the task property variables as follows: +## Frequently asked questions - * **Azure Subscription:** Select a connection from the list under **Available Azure Service Connections** or create a more restricted permissions connection to your Azure subscription. - If you're using Azure Pipelines and if you see an **Authorize** button next to the input, select it to authorize Azure Pipelines to connect to your Azure subscription. If you're using TFS or if you don't see the desired Azure subscription in the list of subscriptions, see [Azure Resource Manager service connection](/azure/devops/pipelines/library/connect-to-azure) to manually set up the connection. 
+#### What's the difference between the `AzureWebApp` and `AzureRmWebAppDeployment` tasks? - * **App Service Name**: Select the name of the web app from your subscription. +The Azure Web App task (`AzureWebApp`) is the simplest way to deploy to an Azure Web App. By default, your deployment happens to the root application in the Azure Web App. - > [!NOTE] - > Some settings for the tasks may have been automatically defined as - > [stage variables](/azure/devops/pipelines/release/variables#custom-variables) - > when you created a release pipeline from a template. - > These settings cannot be modified in the task settings; instead you must - > select the parent stage item in order to edit these settings. - +The [Azure App Service Deploy task (`AzureRmWebAppDeployment`)](/azure/devops/pipelines/tasks/deploy/azure-rm-web-app-deployment) can handle more custom scenarios, such as: -1. Save the release pipeline. +- [Modify configuration settings](#example-make-variable-substitutions) inside web packages and XML parameters files. +- [Deploy with Web Deploy](#example-deploy-using-web-deploy), if you're used to the IIS deployment process. +- [Deploy to virtual applications](#example-deploy-to-a-virtual-application). +- Deploy to other app types, like Container apps, Function apps, WebJobs, or API and Mobile apps. -### Create a release to deploy your app +> [!NOTE] +> File transforms and variable substitution are also supported by the separate [File Transform task](/azure/devops/pipelines/tasks/utility/file-transform) for use in Azure Pipelines. You can use the File Transform task to apply file transformations and variable substitutions on any configuration and parameters files. -You're now ready to create a release, which means to run the release pipeline with the artifacts produced by a specific build. This will result in deploying the build: +#### I get the message "Invalid App Service package or folder path provided." -1. 
Choose **+ Release** and select **Create a release**. +In YAML pipelines, there may be a mismatch between where your built web package is saved and where the deploy task is looking for it. For example, the `AzureWebApp` task looks for the web package in `$(System.DefaultWorkingDirectory)/**/*.zip`. If the web package is deposited elsewhere, modify the value of `package`. -1. In the **Create a new release** panel, check that the artifact version you want to use is selected and choose **Create**. +#### I get the message "Publish using webdeploy options are supported only when using Windows agent." -1. Choose the release link in the information bar message. For example: "Release **Release-1** has been created". +This error occurs in the **AzureRmWebAppDeployment** task when you configure the task to deploy using Web Deploy, but your agent isn't running Windows. Verify that your YAML has something similar to the following code: ++```yml +pool: + vmImage: windows-latest +``` -1. In the pipeline view, choose the status link in the stages of the pipeline to see the logs and agent output. +#### Web Deploy doesn't work when I disable basic authentication -1. After the release is complete, navigate to your site running in Azure using the Web App URL `http://{web_app_name}.azurewebsites.net`, and verify its contents. +For troubleshooting information on getting Microsoft Entra ID authentication to work with the `AzureRmWebAppDeployment` task, see [I can't Web Deploy to my Azure App Service using Microsoft Entra ID authentication from my Windows agent](/azure/devops/pipelines/tasks/reference/azure-rm-web-app-deployment-v4#i-cant-web-deploy-to-my-azure-app-service-using-microsoft-entra-id-authentication-from-my-windows-agent). ## Next steps |
app-service | Deploy Configure Credentials | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-configure-credentials.md | description: Learn what types of deployment credentials are in Azure App Service Last updated 02/11/2021 -+ |
app-service | Deploy Continuous Deployment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-continuous-deployment.md | Title: Configure continuous deployment description: Learn how to enable CI/CD to Azure App Service from GitHub, Bitbucket, Azure Repos, or other repos. Select the build pipeline that fits your needs. ms.assetid: 6adb5c84-6cf3-424e-a336-c554f23b4000 Previously updated : 03/12/2021 Last updated : 12/12/2023 Select the tab that corresponds to your build provider to continue. # [GitHub](#tab/github) -4. [GitHub Actions](#how-the-github-actions-build-provider-works) is the default build provider. To change the provider, select **Change provider** > **App Service Build Service** (Kudu) > **OK**. +4. [GitHub Actions](#how-does-the-github-actions-build-provider-work) is the default build provider. To change the provider, select **Change provider** > **App Service Build Service** (Kudu) > **OK**. > [!NOTE] > To use Azure Pipelines as the build provider for your App Service app, configure CI/CD directly from Azure Pipelines. Don't configure it in App Service. The **Azure Pipelines** option just points you in the right direction. Select the tab that corresponds to your build provider to continue. 1. If you're deploying from GitHub for the first time, select **Authorize** and follow the authorization prompts. If you want to deploy from a different user's repository, select **Change Account**. 1. After you authorize your Azure account with GitHub, select the **Organization**, **Repository**, and **Branch** to configure CI/CD for. -If you can't find an organization or repository, you might need to enable more permissions on GitHub. For more information, see [Managing access to your organization's repositories](https://docs.github.com/organizations/managing-access-to-your-organizations-repositories). -1. 
When GitHub Actions is selected as the build provider, you can select the workflow file you want by using the **Runtime stack** and **Version** dropdown lists. Azure commits this workflow file into your selected GitHub repository to handle build and deploy tasks. To see the file before saving your changes, select **Preview file**. + If you can't find an organization or repository, you might need to enable more permissions on GitHub. For more information, see [Managing access to your organization's repositories](https://docs.github.com/organizations/managing-access-to-your-organizations-repositories). ++1. (Preview) Under **Authentication type**, select **User-assigned identity** for better security. For more information, see [frequently asked questions](). ++1. When **GitHub Actions** is selected as the build provider, you can select the workflow file you want by using the **Runtime stack** and **Version** dropdown lists. Azure commits this workflow file into your selected GitHub repository to handle build and deploy tasks. To see the file before saving your changes, select **Preview file**. > [!NOTE]- > App Service detects the [language stack setting](configure-common.md#configure-language-stack-settings) of your app and selects the most appropriate workflow template. If you choose a different template, it might deploy an app that doesn't run properly. For more information, see [How the GitHub Actions build provider works](#how-the-github-actions-build-provider-works). + > App Service detects the [language stack setting](configure-common.md#configure-language-stack-settings) of your app and selects the most appropriate workflow template. If you choose a different template, it might deploy an app that doesn't run properly. For more information, see [How the GitHub Actions build provider works](#how-does-the-github-actions-build-provider-work). 1. Select **Save**. See [Local Git deployment to Azure App Service](deploy-local-git.md). 
![Screenshot that shows how to disconnect your cloud folder sync with your App Service app in the Azure portal.](media/app-service-continuous-deployment/disable.png) -1. By default, the GitHub Actions workflow file is preserved in your repository, but it will continue to trigger deployment to your app. To delete the file from your repository, select **Delete workflow file**. +1. By default, the GitHub Actions workflow file is preserved in your repository, but it continues to trigger deployment to your app. To delete the file from your repository, select **Delete workflow file**. 1. Select **OK**. [!INCLUDE [What happens to my app during deployment?](../../includes/app-service-deploy-atomicity.md)] -## How the GitHub Actions build provider works +## Frequently asked questions ++- [How does the GitHub Actions build provider work?](#how-does-the-github-actions-build-provider-work) +- [How do I configure continuous deployment without basic authentication?](#how-do-i-configure-continuous-deployment-without-basic-authentication) +- [What does the user-assigned identity option do for GitHub Actions?](#what-does-the-user-assigned-identity-option-do-for-github-actions) +- [I see "You do not have sufficient permissions on this app to assign role-based access to a managed identity and configure federated credentials." when I select the user-assigned identity option with GitHub Actions.](#i-see-you-do-not-have-sufficient-permissions-on-this-app-to-assign-role-based-access-to-a-managed-identity-and-configure-federated-credentials-when-i-select-the-user-assigned-identity-option-with-github-actions) +- [How do I deploy from other repositories?](#how-do-i-deploy-from-other-repositories) ++#### How does the GitHub Actions build provider work? The GitHub Actions build provider is an option for [CI/CD from GitHub](#configure-the-deployment-source). 
It completes these actions to set up CI/CD: You can customize the GitHub Actions build provider in these ways: - Customize the workflow file after it's generated in your GitHub repository. For more information, see [Workflow syntax for GitHub Actions](https://docs.github.com/actions/reference/workflow-syntax-for-github-actions). Just make sure that the workflow deploys to App Service with the [azure/webapps-deploy](https://github.com/Azure/webapps-deploy) action. - If the selected branch is protected, you can still preview the workflow file without saving the configuration and then manually add it into your repository. This method doesn't give you log integration with the Azure portal.-- Instead of using a publishing profile, deploy by using a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) in Microsoft Entra ID.--#### Authenticate by using a service principal --This optional configuration replaces the default authentication with publishing profiles in the generated workflow file. --1. Generate a service principal by using the [az ad sp create-for-rbac](/cli/azure/ad/sp#az-ad-sp-create-for-rbac) command in the [Azure CLI](/cli/azure/). In the following example, replace \<subscription-id>, \<group-name>, and \<app-name> with your own values: -- ```azurecli-interactive - az ad sp create-for-rbac --name "myAppDeployAuth" --role contributor \ - --scopes /subscriptions/<subscription-id>/resourceGroups/<group-name>/providers/Microsoft.Web/sites/<app-name> \ - --sdk-auth - ``` - - > [!IMPORTANT] - > For security, grant the minimum required access to the service principal. The scope in the previous example is limited to the specific App Service app and not the entire resource group. - -1. Save the entire JSON output for the next step, including the top-level `{}`. --1. In [GitHub](https://github.com/), in your repository, select **Settings** > **Secrets** > **Add a new secret**. --1. 
Paste the entire JSON output from the Azure CLI command into the secret's value field. Give the secret a name like `AZURE_CREDENTIALS`. --1. In the workflow file generated by the Deployment Center, revise the `azure/webapps-deploy` step to look like the following example (which is modified from a Node.js workflow file): -- ```yaml - - name: Sign in to Azure - # Use the GitHub secret you added. - uses: azure/login@v1 - with: - creds: ${{ secrets.AZURE_CREDENTIALS }} - - name: Deploy to Azure Web App - # Remove publish-profile. - uses: azure/webapps-deploy@v2 - with: - app-name: '<app-name>' - slot-name: 'production' - package: . - - name: Sign out of Azure. - run: | - az logout - ``` - -## Deploy from other repositories +- Instead of using a user-assigned managed identity or the publishing profile, you can also deploy by using a [service principal](deploy-github-actions.md?tabs=userlevel) in Microsoft Entra ID. ++#### How do I configure continuous deployment without basic authentication? ++To configure continuous deployment [without basic authentication](configure-basic-auth-disable.md), try using GitHub Actions with the **user-assigned identity** option. ++#### What does the user-assigned identity option do for GitHub Actions? ++When you select **user-assigned identity** under the **GitHub Actions** source, Azure creates a [user-assigned managed identity](/entra/identity/managed-identities-azure-resources/overview#managed-identity-types) for you and [federates it with GitHub as an authorized client](/entra/workload-id/workload-identity-federation-create-trust-user-assigned-managed-identity?pivots=identity-wif-mi-methods-azp). This user-assigned managed identity isn't shown in the **Identities** page for your app. ++This automatically created user-assigned managed identity should be used only for the GitHub Actions deployment. Using it for other configurations isn't supported. 
++#### I see "You do not have sufficient permissions on this app to assign role-based access to a managed identity and configure federated credentials." when I select the user-assigned identity option with GitHub Actions. ++To use the **user-assigned identity** option for your GitHub Actions deployment, you need the `Microsoft.Authorization/roleAssignments/write` permission on your app. By default, the **User Access Administrator** role and **Owner** role have this permission already, but the **Contributor** role doesn't. ++#### How do I deploy from other repositories? For Windows apps, you can manually configure continuous deployment from a cloud Git or Mercurial repository that the portal doesn't directly support, like [GitLab](https://gitlab.com/). You do that by selecting **External Git** in the **Source** dropdown list. For more information, see [Set up continuous deployment using manual steps](https://github.com/projectkudu/kudu/wiki/Continuous-deployment#setting-up-continuous-deployment-using-manual-steps). |
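As a rough sketch of what the **External Git** flow relies on, the repository URL you provide is just a Git remote you push to. The snippet below is a runnable illustration only: a local bare repository stands in for a real external URL such as `https://gitlab.com/my-group/my-app.git` (a placeholder, not from the article).

```shell
# Sketch: wire up an external Git remote and push to it.
# A local bare repo stands in for the real GitLab URL.
set -e
tmp=$(mktemp -d)
git init -q "$tmp/app"
git -C "$tmp/app" -c user.email=demo@example.com -c user.name=demo \
    commit -q --allow-empty -m "initial commit"
# The bare repo plays the role of the external repository.
git init -q --bare "$tmp/external.git"
git -C "$tmp/app" remote add external "$tmp/external.git"
git -C "$tmp/app" push -q external HEAD:main
git -C "$tmp/app" remote get-url external
```

With a real external repository, the remote URL is the one you would paste into the **External Git** source configuration.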
app-service | Deploy Github Actions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-github-actions.md | Title: Configure CI/CD with GitHub Actions description: Learn how to deploy your code to Azure App Service from a CI/CD pipeline with GitHub Actions. Customize the build tasks and execute complex deployments. Previously updated : 12/14/2021 Last updated : 12/14/2023 The file has three sections: ## Use the Deployment Center -You can quickly get started with GitHub Actions by using the App Service Deployment Center. This will automatically generate a workflow file based on your application stack and commit it to your GitHub repository in the correct directory. --1. Navigate to your webapp in the Azure portal -1. On the left side, click **Deployment Center** -1. Under **Continuous Deployment (CI / CD)**, select **GitHub** -1. Next, select **GitHub Actions** -1. Use the dropdowns to select your GitHub repository, branch, and application stack - - If the selected branch is protected, you can still continue to add the workflow file. Be sure to review your branch protections before continuing. -1. On the final screen, you can review your selections and preview the workflow file that will be committed to the repository. If the selections are correct, click **Finish** --This will commit the workflow file to the repository. The workflow to build and deploy your app will start immediately. +You can quickly get started with GitHub Actions by using the App Service Deployment Center. This turn-key method automatically generates a workflow file based on your application stack and commits it to your GitHub repository in the correct directory. For more information, see [Continuous deployment to Azure App Service](deploy-continuous-deployment.md). ## Set up a workflow manually -You can also deploy a workflow without using the Deployment Center. To do so, you will need to first generate deployment credentials. 
+You can also deploy a workflow without using the Deployment Center. To do so, you need to first generate deployment credentials. ## Generate deployment credentials -The recommended way to authenticate with Azure App Services for GitHub Actions is with a publish profile. You can also authenticate with a service principal or Open ID Connect but the process requires more steps. +The recommended way to authenticate with Azure App Service for GitHub Actions is with a user-assigned managed identity, and the easiest way to do that is by [configuring GitHub Actions deployment directly in the portal](deploy-continuous-deployment.md) instead and selecting **User-assigned managed identity**. -Save your publish profile credential or service principal as a [GitHub secret](https://docs.github.com/en/actions/reference/encrypted-secrets) to authenticate with Azure. You'll access the secret within your workflow. +> [!NOTE] +> Authentication using a user-assigned managed identity is currently in preview. ++Alternatively, you can authenticate with a service principal, OpenID Connect, or a publish profile. # [Publish profile](#tab/applevel) +> [!NOTE] +> Publish profile requires [basic authentication](configure-basic-auth-disable.md) to be enabled. + A publish profile is an app-level credential. Set up your publish profile as a GitHub secret. 1. Go to your app service in the Azure portal. A publish profile is an app-level credential. Set up your publish profile as a G 1. Save the downloaded file. You'll use the contents of the file to create a GitHub secret. > [!NOTE]-> As of October 2020, Linux web apps will need the app setting `WEBSITE_WEBDEPLOY_USE_SCM` set to `true` **before downloading the publish profile**. This requirement will be removed in the future. +> As of October 2020, Linux web apps need the app setting `WEBSITE_WEBDEPLOY_USE_SCM` set to `true` **before downloading the publish profile**. This requirement will be removed in the future. 
# [Service principal](#tab/userlevel) az ad sp create-for-rbac --name "myApp" --role contributor \ --sdk-auth ``` -In the example above, replace the placeholders with your subscription ID, resource group name, and app name. The output is a JSON object with the role assignment credentials that provide access to your App Service app similar to below. Copy this JSON object for later. +In the previous example, replace the placeholders with your subscription ID, resource group name, and app name. The output is a JSON object with the role assignment credentials that provide access to your App Service app, similar to the following JSON snippet. Copy this JSON object for later. ```output { In the example above, replace the placeholders with your subscription ID, resour OpenID Connect is an authentication method that uses short-lived tokens. Setting up [OpenID Connect with GitHub Actions](https://docs.github.com/en/actions/deployment/security-hardening-your-deployments/about-security-hardening-with-openid-connect) is a more complex process that offers hardened security. -1. If you do not have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application. +1. If you don't have an existing application, register a [new Active Directory application and service principal that can access resources](../active-directory/develop/howto-create-service-principal-portal.md). Create the Active Directory application. ```azurecli-interactive az ad app create --display-name myApp ``` - This command will output JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later. + This command outputs JSON with an `appId` that is your `client-id`. Save the value to use as the `AZURE_CLIENT_ID` GitHub secret later. 
You'll use the `objectId` value when creating federated credentials with Graph API and reference it as the `APPLICATION-OBJECT-ID`. OpenID Connect is an authentication method that uses short-lived tokens. Setting az ad sp create --id $appId ``` -1. Create a new role assignment by subscription and object. By default, the role assignment will be tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli). +1. Create a new role assignment by subscription and object. By default, the role assignment is tied to your default subscription. Replace `$subscriptionId` with your subscription ID, `$resourceGroupName` with your resource group name, and `$assigneeObjectId` with the generated `assignee-object-id`. Learn [how to manage Azure subscriptions with the Azure CLI](/cli/azure/manage-azure-subscriptions-azure-cli). ```azurecli-interactive az role assignment create --role contributor --subscription $subscriptionId --assignee-object-id $assigneeObjectId --scope /subscriptions/$subscriptionId/resourceGroups/$resourceGroupName/providers/Microsoft.Web/sites/ --assignee-principal-type ServicePrincipal OpenID Connect is an authentication method that uses short-lived tokens. Setting * Replace `APPLICATION-OBJECT-ID` with the **objectId (generated while creating app)** for your Active Directory application. * Set a value for `CREDENTIAL-NAME` to reference later.- * Set the `subject`. The value of this is defined by GitHub depending on your workflow: + * Set the `subject`. 
Its value is defined by GitHub depending on your workflow: * Jobs in your GitHub Actions environment: `repo:< Organization/Repository >:environment:< Name >` * For Jobs not tied to an environment, include the ref path for branch/tag based on the ref path used for triggering the workflow: `repo:< Organization/Repository >:ref:< ref path>`. For example, `repo:n-username/ node_express:ref:refs/heads/my-branch` or `repo:n-username/ node_express:ref:refs/tags/my-tag`. * For workflows triggered by a pull request event: `repo:< Organization/Repository >:pull_request`. jobs: ## Next steps -You can find our set of Actions grouped into different repositories on GitHub, each one containing documentation and examples to help you use GitHub for CI/CD and deploy your apps to Azure. --- [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples)+Check out references on Azure GitHub Actions and workflows: - [Azure login](https://github.com/Azure/login)- - [Azure WebApp](https://github.com/Azure/webapps-deploy)- - [Azure WebApp for containers](https://github.com/Azure/webapps-container-deploy)- - [Docker login/logout](https://github.com/Azure/docker-login)--- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows)- - [K8s deploy](https://github.com/Azure/k8s-deploy)-+- [Actions workflows to deploy to Azure](https://github.com/Azure/actions-workflow-samples) - [Starter Workflows](https://github.com/actions/starter-workflows)+- [Events that trigger workflows](https://docs.github.com/en/actions/reference/events-that-trigger-workflows) |
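The `subject` formats described for federated credentials end up in the JSON body of the Graph API call. The sketch below assembles such a body; the organization/repository (`my-org/my-repo`) and credential name (`my-github-deploy`) are hypothetical placeholders, while the `issuer` and `audiences` values are the ones GitHub's OIDC integration with Azure uses.

```shell
# Assemble the federated-credential JSON body for the Graph API call.
# my-org/my-repo and my-github-deploy are placeholder values.
subject="repo:my-org/my-repo:ref:refs/heads/main"
body=$(python3 - "$subject" <<'EOF'
import json, sys
print(json.dumps({
    "name": "my-github-deploy",
    "issuer": "https://token.actions.githubusercontent.com",
    "subject": sys.argv[1],
    "audiences": ["api://AzureADTokenExchange"],
}))
EOF
)
echo "$body"
```

Swapping the `subject` string for the environment, tag, or pull-request form changes which workflow runs are allowed to exchange tokens.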
app-service | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/policy-reference.md | Title: Built-in policy definitions for Azure App Service description: Lists Azure Policy built-in policy definitions for Azure App Service. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
app-service | Quickstart Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/quickstart-dotnetcore.md | In this step, you fork a demo project to deploy. This quickstart uses the [Azure Developer CLI](/azure/developer/azure-developer-cli/overview) (`azd`) both to create Azure resources and deploy code to it. For more information about Azure Developer CLI, visit the [documentation](/azure/developer/azure-developer-cli/install-azd?tabs=winget-windows%2Cbrew-mac%2Cscript-linux&pivots=os-windows) or [training path](/training/paths/azure-developer-cli/). -Retrieve and initialize [the ASP.NET Core web app template](https://github.com/Azure-Samples/quickstart-deploy-aspnet-core-app-service.git) for this quickstart using the following steps: +Retrieve and initialize [the ASP.NET Core web app template](https://github.com/Azure-Samples/quickstart-deploy-aspnet-core-app-service) for this quickstart using the following steps: 1. Open a terminal window on your machine to an empty working directory. Initialize the `azd` template using the `azd init` command. |
app-service | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Service description: Lists Azure Policy Regulatory Compliance controls available for Azure App Service. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
application-gateway | Troubleshooting Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/for-containers/troubleshooting-guide.md | Scenarios in which you would notice a 500-error code on Application Gateway for - It refers to a resource that doesn't exist. In this case, the HTTPRoute's status has a condition with reason set to `BackendNotFound` and the message explains that the resource doesn't exist. - It refers to a resource in another namespace when the reference isn't explicitly allowed by a ReferenceGrant (or equivalent concept). In this case, the HTTPRoute's status has a condition with reason set to `RefNotPermitted` and the message explains which cross-namespace reference isn't allowed. - For instance, if an HTTPRoute has two backends specified with equal weights, and one is invalid 50 percent of the traffic must receive a 500. This is based on the specifications provided by Gateway API [here](https://gateway-api.sigs.k8s.io/v1alpha2/references/spec/#gateway.networking.k8s.io%2fv1beta1.HTTPRouteRule) + For instance, if an HTTPRoute has two backends specified with equal weights, and one is invalid, 50 percent of the traffic must receive a 500. 2. No endpoints found for all backends: when there are no endpoints found for all the backends referenced in an HTTPRoute, a 500 error code is obtained. |
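The two-backend scenario above can be sketched as an HTTPRoute manifest. The names and namespace here are hypothetical; per Gateway API weighting semantics, if `store-v2` below referenced a nonexistent Service, roughly half of the requests would receive a 500.

```yaml
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
  name: store-route        # placeholder name
  namespace: demo          # placeholder namespace
spec:
  parentRefs:
    - name: demo-gateway   # placeholder Gateway reference
  rules:
    - backendRefs:
        - name: store-v1   # valid backend Service
          port: 8080
          weight: 50
        - name: store-v2   # if invalid, ~50% of traffic gets a 500
          port: 8080
          weight: 50
```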
attestation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/attestation/policy-reference.md | Title: Built-in policy definitions for Azure Attestation description: Lists Azure Policy built-in policy definitions for Azure Attestation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
automation | Automation Child Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-child-runbooks.md | Title: Create modular runbooks in Azure Automation description: This article explains how to create a runbook that another runbook calls. Previously updated : 10/16/2022 Last updated : 11/21/2022 #Customer intent: As a developer, I want to create modular runbooks so that I can be more efficient. Currently, PowerShell 5.1 is supported and only certain runbook types can call e * The PowerShell types and the PowerShell Workflow types can't call each other inline. They must use `Start-AzAutomationRunbook`. > [!IMPORTANT]-> Executing child scripts using `.\child-runbook.ps1` is not supported in PowerShell 7.1 and PowerShell 7.2 (preview). +> Executing child scripts using `.\child-runbook.ps1` is not supported in PowerShell 7.1 and PowerShell 7.2. **Workaround**: Use `Start-AutomationRunbook` (internal cmdlet) or `Start-AzAutomationRunbook` (from *Az.Automation* module) to start another runbook from the parent runbook. The publish order of runbooks matters only for PowerShell Workflow and graphical PowerShell Workflow runbooks. |
automation | Automation Hrw Run Runbooks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-hrw-run-runbooks.md | Title: Run Azure Automation runbooks on a Hybrid Runbook Worker description: This article describes how to run runbooks on machines in your local datacenter or other cloud provider with the Hybrid Runbook Worker. Previously updated : 09/17/2023 Last updated : 11/21/2023 Azure Automation handles jobs on Hybrid Runbook Workers differently from jobs ru Jobs for Hybrid Runbook Workers run under the local **System** account. > [!NOTE]->- PowerShell 5.1, PowerShell 7.1(preview), Python 2.7, and Python 3.8(preview) runbooks are supported on both extension-based and agent-based Windows Hybrid Runbook Workers. For agent based workers, ensure the Windows Hybrid worker version is 7.3.12960 or above. ->- PowerShell 7.2 (preview) and Python 3.10 (preview) runbooks are supported on extension-based Windows Hybrid Workers only. Ensure the Windows Hybrid worker extension version is 1.1.11 or above. +>- PowerShell 5.1, PowerShell 7.1(preview), Python 2.7, and Python 3.8 runbooks are supported on both extension-based and agent-based Windows Hybrid Runbook Workers. For agent based workers, ensure the Windows Hybrid worker version is 7.3.12960 or above. +>- PowerShell 7.2 and Python 3.10 (preview) runbooks are supported on extension-based Windows Hybrid Workers only. Ensure the Windows Hybrid worker extension version is 1.1.11 or above. #### [Extension-based Hybrid Workers](#tab/win-extn-hrw) If the *Python* executable file is at the default location *C:\Python27\python.e ### Linux Hybrid Worker > [!NOTE]->- PowerShell 5.1, PowerShell 7.1(preview), Python 2.7, Python 3.8 (preview) runbooks are supported on both extension-based and agent-based Linux Hybrid Runbook Workers. For agent-based workers, ensure the Linux Hybrid Runbook worker version is 1.7.5.0 or above. 
->- PowerShell 7.2 (preview) and Python 3.10 (preview) runbooks are supported on extension-based Linux Hybrid Workers only. Ensure the Linux Hybrid worker extension version is 1.1.11 or above. +>- PowerShell 5.1, PowerShell 7.1 (preview), Python 2.7, Python 3.8 runbooks are supported on both extension-based and agent-based Linux Hybrid Runbook Workers. For agent-based workers, ensure the Linux Hybrid Runbook worker version is 1.7.5.0 or above. +>- PowerShell 7.2 and Python 3.10 (preview) runbooks are supported on extension-based Linux Hybrid Workers only. Ensure the Linux Hybrid worker extension version is 1.1.11 or above. #### [Extension-based Hybrid Workers](#tab/Lin-extn-hrw) |
automation | Automation Runbook Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-runbook-types.md | Title: Azure Automation runbook types description: This article describes the types of runbooks that you can use in Azure Automation and considerations for determining which type to use. Previously updated : 11/07/2023 Last updated : 11/21/2023 The Azure Automation Process Automation feature supports several types of runboo | Type | Description | |: |: |-| [PowerShell](#powershell-runbooks) |Textual runbook based on Windows PowerShell scripting. The currently supported versions are: PowerShell 5.1 (GA), PowerShell 7.1 (preview), and PowerShell 7.2 (preview).| +| [PowerShell](#powershell-runbooks) |Textual runbook based on Windows PowerShell scripting. The currently supported versions are: PowerShell 5.1 (GA), PowerShell 7.1 (preview), and PowerShell 7.2 (GA).| | [PowerShell Workflow](#powershell-workflow-runbooks)|Textual runbook based on Windows PowerShell Workflow scripting. | | [Python](#python-runbooks) |Textual runbook based on Python scripting. The currently supported versions are: Python 2.7 (GA), Python 3.8 (GA), and Python 3.10 (preview). | | [Graphical](#graphical-runbooks)|Graphical runbook based on Windows PowerShell and created and edited completely in the graphical editor in Azure portal. | Take into account the following considerations when determining which type to us PowerShell runbooks are based on Windows PowerShell. You directly edit the code of the runbook using the text editor in the Azure portal. You can also use any offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation. -The PowerShell version is determined by the **Runtime version** specified (that is version 7.2 (preview), 7.1 (preview) or 5.1). The Azure Automation service supports the latest PowerShell runtime. 
+The PowerShell version is determined by the **Runtime version** specified (that is version 7.2, 7.1 (preview) or 5.1). The Azure Automation service supports the latest PowerShell runtime. -The same Azure sandbox and Hybrid Runbook Worker can execute **PowerShell 5.1** and **PowerShell 7.1 (preview)** runbooks side by side. +The same Azure sandbox and Hybrid Runbook Worker can execute multiple **PowerShell** runbooks targeting different runtime versions side by side. > [!NOTE]-> - Currently, PowerShell 7.2 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds. -> - At the time of runbook execution, if you select **Runtime Version** as **7.1 (preview)**, PowerShell modules targeting 7.1 (preview) runtime version are used and if you select **Runtime Version** as **5.1**, PowerShell modules targeting 5.1 runtime version are used. This applies for PowerShell 7.2 (preview) modules and runbooks. +> - Currently, PowerShell 7.2 runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Central India, UAE Central, Israel Central, Italy North, Germany North and Gov clouds. +> - At the time of runbook execution, if you select **Runtime Version** as **7.2**, PowerShell modules targeting 7.2 runtime version are used and if you select **Runtime Version** as **5.1**, PowerShell modules targeting 5.1 runtime version are used. This applies for PowerShell 7.1 (preview) modules and runbooks. Ensure that you select the right Runtime Version for modules. For example: if you're executing a runbook for a SharePoint automation scenario :::image type="content" source="./media/automation-runbook-types/runbook-types.png" alt-text="runbook Types."::: > [!NOTE]-> Currently, PowerShell 5.1, PowerShell 7.1 (preview) and PowerShell 7.2 (preview) are supported. 
+> Currently, PowerShell 5.1, PowerShell 7.1 (preview) and PowerShell 7.2 are supported. ### Advantages The following are the current limitations and known issues with PowerShell runbo **Limitations** -- You must be familiar with PowerShell scripting. - Runbooks can't use [parallel processing](automation-powershell-workflow.md#use-parallel-processing) to execute multiple actions in parallel. - Runbooks can't use [checkpoints](automation-powershell-workflow.md#use-checkpoints-in-a-workflow) to resume runbook if there's an error. - You can include only PowerShell, PowerShell Workflow runbooks, and graphical runbooks as child runbooks by using the [Start-AzAutomationRunbook](/powershell/module/az.automation/start-azautomationrunbook) cmdlet, which creates a new job. The following are the current limitations and known issues with PowerShell runbo **Limitations** -- You must be familiar with PowerShell scripting. - The Azure Automation internal PowerShell cmdlets aren't supported on a Linux Hybrid Runbook Worker. You must import the `automationassets` module at the beginning of your PowerShell runbook to access the Automation account shared resources (assets) functions. - For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules. - *PSCredential* runbook parameter type isn't supported in PowerShell 7 runtime version. The following are the current limitations and known issues with PowerShell runbo - If you import module Az.Accounts with version 2.12.3 or newer, ensure that you import the **Newtonsoft.Json** v10 module explicitly if PowerShell 7.1 runbooks have a dependency on this version of the module. The workaround for this issue is to use PowerShell 7.2 runbooks. 
-# [PowerShell 7.2 (preview)](#tab/lps72) +# [PowerShell 7.2](#tab/lps72) **Limitations** > [!NOTE]-> Currently, PowerShell 7.2 (preview) runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Australia Central2, Korea South, Sweden South, Jio India Central, Brazil Southeast, Central India, West India, UAE Central, and Gov clouds. +> Currently, PowerShell 7.2 runtime version is supported for both Cloud and Hybrid jobs in all Public regions except Central India, UAE Central, Israel Central, Italy North, Germany North and Gov clouds. -- For the PowerShell 7 runtime version, the module activities aren't extracted for the imported modules.+- For the PowerShell 7.2 runtime version, the module activities aren't extracted for the imported modules. - PowerShell 7.x doesn't support workflows. For more information, see [PowerShell workflow](/powershell/scripting/whats-new/differences-from-windows-powershell#powershell-workflow) for more details. - PowerShell 7.x currently doesn't support signed runbooks.-- Source control integration doesn't support PowerShell 7.2 (preview). Also, PowerShell 7.2 (preview) runbooks in source control get created in Automation account as Runtime 5.1.-- Currently, PowerShell 7.2 (preview) runbooks are only supported from Azure portal. Rest API and PowerShell aren't supported.-- Az module 8.3.0 is installed by default and can't be managed at the automation account level for PowerShell 7.2 (preview). Use custom modules to override the Az module to the desired version.-- The imported PowerShell 7.2 (preview) module would be validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution.-- PowerShell 7.2 module management is not supported through `Get-AzAutomationModule` cmdlets.+- Source control integration doesn't support PowerShell 7.2. Also, PowerShell 7.2 runbooks in source control get created in Automation account as Runtime 5.1. 
+- Az module 8.3.0 is installed by default. The complete list of component modules of the selected Az module version is shown once the Az version is configured again using the Azure portal or API. +- The imported PowerShell 7.2 module is validated during job execution. Ensure that all dependencies for the selected module are also imported for successful job execution. - Azure runbook doesn't support `Start-Job` with `-credential`. - Azure doesn't support all PowerShell input parameters. [Learn more](runbook-input-parameters.md). The following are the current limitations and known issues with PowerShell runbo PowerShell Workflow runbooks are text runbooks based on [Windows PowerShell Workflow](automation-powershell-workflow.md). You directly edit the code of the runbook using the text editor in the Azure portal. You can also use any offline text editor and [import the runbook](manage-runbooks.md) into Azure Automation. > [!NOTE]-> PowerShell 7.1 (preview) and PowerShell 7.2 (preview) do not support Workflow runbooks. +> PowerShell 7.1 (preview) and PowerShell 7.2 do not support Workflow runbooks. ### Advantages |
automation | Extension Based Hybrid Runbook Worker Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/extension-based-hybrid-runbook-worker-install.md | Follow the steps mentioned below as an example: ```powershell-interactive New-AzAutomationHybridRunbookWorkerGroup -AutomationAccountName "Contoso17" -Name "RunbookWorkerGroupName" -ResourceGroupName "ResourceGroup01" ```-1. Create an Azure VM or Arc-enabled server and add it to the above created Hybrid Worker Group. Use the below command to add an existing Azure VM or Arc-enabled Server to the Hybrid Worker Group. Generate a new GUID and pass it as `hybridRunbookWorkerGroupName`. To fetch `vmResourceId`, go to the **Properties** tab of the VM on Azure portal. +1. Create an Azure VM or Arc-enabled server and add it to the above created Hybrid Worker Group. Use the below command to add an existing Azure VM or Arc-enabled Server to the Hybrid Worker Group. Generate a new GUID and pass it as the name of the Hybrid Worker. To fetch `vmResourceId`, go to the **Properties** tab of the VM on Azure portal. ```azurepowershell New-AzAutomationHybridRunbookWorker -AutomationAccountName "Contoso17" -Name "RunbookWorkerName" -HybridRunbookWorkerGroupName "RunbookWorkerGroupName" -VmResourceId "VmResourceId" -ResourceGroupName "ResourceGroup01" |
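The instruction to "generate a new GUID" for the hybrid worker name can be satisfied with any GUID generator; a minimal cross-platform sketch using Python's `uuid` module (PowerShell's `New-Guid` works equally well in the `azurepowershell` flow shown above):

```shell
# Generate a GUID to use as the hybrid worker name, then pass it
# as -Name to New-AzAutomationHybridRunbookWorker.
workerName=$(python3 -c 'import uuid; print(uuid.uuid4())')
echo "$workerName"
```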
automation | Automation Tutorial Runbook Textual | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/learn/automation-tutorial-runbook-textual.md | Title: Tutorial - Create a PowerShell Workflow runbook in Azure Automation description: This tutorial teaches you to create, test, and publish a PowerShell Workflow runbook. Previously updated : 10/16/2022 Last updated : 11/21/2022 #Customer intent: As a developer, I want use workflow runbooks so that I can automate the parallel starting of VMs.-> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) and PowerShell 7.2 (preview) don't support workflows. +> This article is applicable for PowerShell 5.1; PowerShell 7.1 (preview) and PowerShell 7.2 don't support workflows. In this tutorial, you learn how to: If you're not going to continue to use this runbook, delete it with the followin In this tutorial, you created a PowerShell workflow runbook. For a look at Python 3 runbooks, see: > [!div class="nextstepaction"]-> [Tutorial: Create a Python 3 runbook (preview)](automation-tutorial-runbook-textual-python-3.md) +> [Tutorial: Create a Python 3 runbook](automation-tutorial-runbook-textual-python-3.md) |
automation | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/policy-reference.md | Title: Built-in policy definitions for Azure Automation description: Lists Azure Policy built-in policy definitions for Azure Automation. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
automation | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Automation description: Lists Azure Policy Regulatory Compliance controls available for Azure Automation. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
azure-app-configuration | Howto Create Snapshots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/howto-create-snapshots.md | configurationBuilder.AddAzureAppConfiguration(options => ``` > [!NOTE]-> Snapshot support is available if you use version **7.0.0-preview** or later of any of the following packages. +> Snapshot support is available if you use version **7.0.0** or later of any of the following packages. > - `Microsoft.Extensions.Configuration.AzureAppConfiguration` > - `Microsoft.Azure.AppConfiguration.AspNetCore` > - `Microsoft.Azure.AppConfiguration.Functions.Worker` |
azure-app-configuration | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/policy-reference.md | Title: Built-in policy definitions for Azure App Configuration description: Lists Azure Policy built-in policy definitions for Azure App Configuration. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-app-configuration | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure App Configuration description: Lists Azure Policy Regulatory Compliance controls available for Azure App Configuration. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
azure-arc | Validation Program | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md | To see how all Azure Arc-enabled components are validated, see [Validation progr | [PowerStore X](https://www.dell.com/en-us/dt/storage/powerstore-storage-appliance/powerstore-x-series.htm)|1.20.6|1.0.0_2021-07-30|15.0.2148.140 | 12.3 (Ubuntu 12.3-1) | ### Hitachi-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version +|Solution and version |Kubernetes version |Azure Arc-enabled data services version |SQL engine version |PostgreSQL server version| |--|--|--|--|--|-|Hitachi Virtual Storage Software Block software-defined storage (VSSB) | 1.24.12 | 1.20.0_2023-06-13 | 16.0.5100.7242 | 14.5 (Ubuntu 20.04)| -|Hitachi Virtual Storage Platform (VSP) | 1.24.12 | 1.19.0_2023-05-09 | 16.0.937.6221 | 14.5 (Ubuntu 20.04)| -|[Hitachi UCP with RedHat OpenShift](https://www.hitachivantara.com/en-us/solutions/modernize-digital-core/infrastructure-modernization/hybrid-cloud-infrastructure.html) | 1.23.12 | 1.16.0_2023-02-14 | 16.0.937.6221 | 14.5 (Ubuntu 20.04)| -|[Hitachi UCP with VMware Tanzu](https://www.hitachivantara.com/en-us/solutions/modernize-digital-core/infrastructure-modernization/hybrid-cloud-infrastructure.html) | 1.23.8 | 1.16.0_2023-02-14 | 16.0.937.6221 | 14.5 (Ubuntu 20.04)| --+|Red Hat OCP 4.12.30|1.25.11|1.25.0_2023-11-14|16.0.5100.7246|Not validated| +|Hitachi Virtual Storage Software Block software-defined storage (VSSB)|1.24.12 |1.20.0_2023-06-13 |16.0.5100.7242 |14.5 (Ubuntu 20.04)| +|Hitachi Virtual Storage Platform (VSP) |1.24.12 |1.19.0_2023-05-09 |16.0.937.6221 |14.5 (Ubuntu 20.04)| +|[Hitachi UCP with RedHat OpenShift](https://www.hitachivantara.com/en-us/solutions/modernize-digital-core/infrastructure-modernization/hybrid-cloud-infrastructure.html) |1.23.12 |1.16.0_2023-02-14 |16.0.937.6221 |14.5 (Ubuntu 20.04)| ### HPE |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled Kubernetes description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled Kubernetes. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 # |
azure-arc | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/policy-reference.md | Title: Built-in policy definitions for Azure Arc-enabled servers description: Lists Azure Policy built-in policy definitions for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-arc | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Arc-enabled servers (preview) description: Lists Azure Policy Regulatory Compliance controls available for Azure Arc-enabled servers (preview). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
azure-arc | Troubleshoot Extended Security Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/troubleshoot-extended-security-updates.md | Title: How to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc description: Learn how to troubleshoot delivery of Extended Security Updates for Windows Server 2012 through Azure Arc. Previously updated : 10/24/2023 Last updated : 11/21/2023 If you're unable to successfully link your Azure Arc-enabled server to an activa ## ESU patches issues -If you have issues receiving ESUs after successfully enrolling the server through Arc-enabled servers, or you need additional information related to issues affecting ESU deployment, see [Troubleshoot issues in ESU](/troubleshoot/windows-client/windows-7-eos-faq/troubleshoot-extended-security-updates-issues). +Ensure that both the licensing package and SSU are downloaded for the Azure Arc-enabled server as documented at [KB5031043: Procedure to continue receiving security updates after extended support has ended on October 10, 2023](https://support.microsoft.com/topic/kb5031043-procedure-to-continue-receiving-security-updates-after-extended-support-has-ended-on-october-10-2023-c1a20132-e34c-402d-96ca-1e785ed51d45). Ensure you are following all of the networking prerequisites as recorded at [Prepare to deliver Extended Security Updates for Windows Server 2012](prepare-extended-security-updates.md?tabs=azure-cloud#networking). ++If installing the Extended Security Update enabled by Azure Arc fails with errors such as "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12029)" or "ESU: Trying to Check IMDS Again LastError=HRESULT_FROM_WIN32(12002)", there is a known remediation approach: ++1. Download this [intermediate CA published by Microsoft](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2001%20-%20xsign.crt). +1. 
Install the downloaded certificate as Local Computer under `Intermediate Certificate Authorities\Certificates`. Use the following command to install the certificate correctly: ++ `certutil -addstore CA 'Microsoft Azure TLS Issuing CA 01 - xsign.crt'` ++1. Install security updates. If it fails, reboot the machine and install security updates again. ++If you're working with Azure Government Cloud, use the following instructions instead of those above: ++1. Download this [intermediate CA published by Microsoft](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2002%20-%20xsign.crt). ++1. Install the downloaded certificate as Local Computer under `Intermediate Certificate Authorities\Certificates`. Use the following command to install the certificate correctly: ++ `certutil -addstore CA 'Microsoft Azure TLS Issuing CA 02 - xsign.crt'` ++1. Install security updates. If it fails, reboot the machine and install security updates again. ++If you encounter the error "ESU: not eligible HRESULT_FROM_WIN32(1633)", follow these steps: ++`Remove-Item "$env:ProgramData\AzureConnectedMachineAgent\Certs\license.json" -Force` ++`Restart-Service himds` ++If you have other issues receiving ESUs after successfully enrolling the server through Arc-enabled servers, or you need additional information related to issues affecting ESU deployment, see [Troubleshoot issues in ESU](/troubleshoot/windows-client/windows-7-eos-faq/troubleshoot-extended-security-updates-issues). |
azure-cache-for-redis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/policy-reference.md | Title: Built-in policy definitions for Azure Cache for Redis description: Lists Azure Policy built-in policy definitions for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-cache-for-redis | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cache for Redis description: Lists Azure Policy Regulatory Compliance controls available for Azure Cache for Redis. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
azure-functions | Deployment Zip Push | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/deployment-zip-push.md | Title: Zip push deployment for Azure Functions description: Use the .zip file deployment facilities of the Kudu deployment service to publish your Azure Functions. -+ Last updated 08/12/2018 |
azure-functions | Dotnet Isolated Process Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md | You'll find these extension packages under [Microsoft.Azure.Functions.Worker.Ext ## Start-up and configuration -When using .NET isolated functions, you have access to the start-up of your function app, which is usually in Program.cs. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. With .NET Functions isolated worker process, you can much more easily add configurations, inject dependencies, and run your own middleware. +When using .NET isolated functions, you have access to the start-up of your function app, which is usually in `Program.cs`. You're responsible for creating and starting your own host instance. As such, you also have direct access to the configuration pipeline for your app. With .NET Functions isolated worker process, you can much more easily add configurations, inject dependencies, and run your own middleware. The following code shows an example of a [HostBuilder] pipeline: :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/FunctionApp/Program.cs" id="docsnippet_startup"::: -This code requires `using Microsoft.Extensions.DependencyInjection;`. +This code requires `using Microsoft.Extensions.DependencyInjection;`. -A [HostBuilder] is used to build and return a fully initialized [`IHost`][IHost] instance, which you run asynchronously to start your function app. +Before calling `Build()` on the `HostBuilder`, you should: ++- Call either `ConfigureFunctionsWebApplication()` if using [ASP.NET Core integration](#aspnet-core-integration) or `ConfigureFunctionsWorkerDefaults()` otherwise. See [HTTP trigger](#http-trigger) for details on these options. + - If you're writing your application using F#, some trigger and binding extensions require extra configuration here. 
See the setup documentation for the [Blobs extension][fsharp-blobs], the [Tables extension][fsharp-tables], and the [Cosmos DB extension][fsharp-cosmos] if you plan to use this in your app. +- Configure any services or app configuration your project requires. See [Configuration] for details. + - If you are planning to use Application Insights, you need to call `AddApplicationInsightsTelemetryWorkerService()` and `ConfigureFunctionsApplicationInsights()` in the `ConfigureServices()` delegate. See [Application Insights](#application-insights) for details. ++If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. For more information, see [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework). ++The [HostBuilder] is used to build and return a fully initialized [`IHost`][IHost] instance, which you run asynchronously to start your function app. :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/FunctionApp/Program.cs" id="docsnippet_host_run"::: -> [!IMPORTANT] -> If your project targets .NET Framework 4.8, you also need to add `FunctionsDebugger.Enable();` before creating the HostBuilder. It should be the first line of your `Main()` method. For more information, see [Debugging when targeting .NET Framework](#debugging-when-targeting-net-framework). +[fsharp-blobs]: ./functions-bindings-storage-blob.md#install-extension +[fsharp-tables]: ./functions-bindings-storage-table.md#install-extension +[fsharp-cosmos]: ./functions-bindings-cosmosdb-v2.md#install-extension ### Configuration The following extension methods on [FunctionContext] make it easier to work with | **`GetHttpRequestDataAsync`** | Gets the `HttpRequestData` instance when called by an HTTP trigger. 
This method returns an instance of `ValueTask<HttpRequestData?>`, which is useful when you want to read message data, such as request headers and cookies. | | **`GetHttpResponseData`** | Gets the `HttpResponseData` instance when called by an HTTP trigger. | | **`GetInvocationResult`** | Gets an instance of `InvocationResult`, which represents the result of the current function execution. Use the `Value` property to get or set the value as needed. |-| **` GetOutputBindings`** | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type `OutputBindingData`. You can use the `Value` property to get or set the value as needed. | -| **` BindInputAsync`** | Binds an input binding item for the requested `BindingMetadata` instance. For example, you can use this method when you have a function with a `BlobInput` input binding that needs to be accessed or updated by your middleware. | +| **`GetOutputBindings`** | Gets the output binding entries for the current function execution. Each entry in the result of this method is of type `OutputBindingData`. You can use the `Value` property to get or set the value as needed. | +| **`BindInputAsync`** | Binds an input binding item for the requested `BindingMetadata` instance. For example, you can use this method when you have a function with a `BlobInput` input binding that needs to be accessed or updated by your middleware. | The following is an example of a middleware implementation that reads the `HttpRequestData` instance and updates the `HttpResponseData` instance during function execution. This middleware checks for the presence of a specific request header (`x-correlationId`), and when present uses the header value to stamp a response header. Otherwise, it generates a new GUID value and uses that for stamping the response header. 
A function can have zero or more input bindings that can pass data to a function ### Output bindings -To write to an output binding, you must apply an output binding attribute to the function method, which defined how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named `output-queue` by using an output binding: +To write to an output binding, you must apply an output binding attribute to the function method, which defines how to write to the bound service. The value returned by the method is written to the output binding. For example, the following example writes a string value to a message queue named `output-queue` by using an output binding: :::code language="csharp" source="~/azure-functions-dotnet-worker/samples/Extensions/Queue/QueueFunction.cs" id="docsnippet_queue_output_binding" ::: |
azure-functions | Durable Functions Entities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-entities.md | module.exports = df.entity(function(context) { ::: zone-end ::: zone pivot="python"+> [!NOTE] +> Refer to the [Azure Functions Python developer guide](../functions-reference-python.md) for more details about how the V2 model works. + The following code is the `Counter` entity implemented as a durable function written in Python. # [v2](#tab/python-v2) |
azure-functions | Durable Functions Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-overview.md | Durable Functions is designed to work with all Azure Functions programming langu [!INCLUDE [functions-nodejs-model-tabs-description](../../../includes/functions-nodejs-model-tabs-description.md)] ::: zone-end + Like Azure Functions, there are templates to help you develop Durable Functions using [Visual Studio](durable-functions-create-first-csharp.md), [Visual Studio Code](quickstart-js-vscode.md), and the [Azure portal](durable-functions-create-portal.md). ## Application patterns You can use the `context.df` object to invoke other functions by name, pass para ::: zone-end ::: zone pivot="python"- # [Python](#tab/v1-model) ```python |
azure-functions | Functions Bindings Cosmosdb V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-cosmosdb-v2.md | This version of the Azure Cosmos DB bindings extension introduces the ability to Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 4.x. +If you're writing your application using F#, you must also configure this extension as part of the app's [startup configuration](./dotnet-isolated-process-guide.md#start-up-and-configuration). In the call to `ConfigureFunctionsWorkerDefaults()` or `ConfigureFunctionsWebApplication()`, add a delegate that takes an `IFunctionsWorkerApplicationBuilder` parameter. Then within the body of that delegate, call `ConfigureCosmosDBExtension()` on the object: ++```fsharp +let hostBuilder = new HostBuilder() +hostBuilder.ConfigureFunctionsWorkerDefaults(fun (context: HostBuilderContext) (appBuilder: IFunctionsWorkerApplicationBuilder) -> + appBuilder.ConfigureCosmosDBExtension() |> ignore +) |> ignore +``` + # [Functions 2.x+](#tab/functionsv2/isolated-process) Add the extension to your project by installing the [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.CosmosDB/), version 3.x. |
azure-functions | Functions Bindings Storage Blob | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-blob.md | Functions 1.x apps automatically have a reference to the [Microsoft.Azure.WebJobs]( This version allows you to bind to types from [Azure.Storage.Blobs](/dotnet/api/azure.storage.blobs). Learn more about how these new types are different from `WindowsAzure.Storage` and `Microsoft.Azure.Storage` and how to migrate to them from the [Azure.Storage.Blobs Migration Guide](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/storage/Azure.Storage.Blobs/AzureStorageNetMigrationV12.md). -Add the extension to your project by installing the [Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs NuGet package], version 5.x. +Add the extension to your project by installing the [Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs NuGet package], version 5.x or later. Using the .NET CLI: ```dotnetcli-dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs --version 5.0.0 +dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage.Blobs ``` [!INCLUDE [functions-bindings-storage-extension-v5-isolated-worker-tables-note](../../includes/functions-bindings-storage-extension-v5-isolated-worker-tables-note.md)] +If you're writing your application using F#, you must also configure this extension as part of the app's [startup configuration](./dotnet-isolated-process-guide.md#start-up-and-configuration). In the call to `ConfigureFunctionsWorkerDefaults()` or `ConfigureFunctionsWebApplication()`, add a delegate that takes an `IFunctionsWorkerApplicationBuilder` parameter. 
Then within the body of that delegate, call `ConfigureBlobStorageExtension()` on the object: ++```fsharp +let hostBuilder = new HostBuilder() +hostBuilder.ConfigureFunctionsWorkerDefaults(fun (context: HostBuilderContext) (appBuilder: IFunctionsWorkerApplicationBuilder) -> + appBuilder.ConfigureBlobStorageExtension() |> ignore +) |> ignore +``` + # [Functions 2.x and higher](#tab/functionsv2/isolated-process) Add the extension to your project by installing the [Microsoft.Azure.Functions.Worker.Extensions.Storage NuGet package, version 4.x]. |
azure-functions | Functions Bindings Storage Table | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-storage-table.md | dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Storage --version [!INCLUDE [functions-bindings-storage-extension-v5-isolated-worker-tables-note](../../includes/functions-bindings-storage-extension-v5-isolated-worker-tables-note.md)] +If you're writing your application using F#, you must also configure this extension as part of the app's [startup configuration](./dotnet-isolated-process-guide.md#start-up-and-configuration). In the call to `ConfigureFunctionsWorkerDefaults()` or `ConfigureFunctionsWebApplication()`, add a delegate that takes an `IFunctionsWorkerApplicationBuilder` parameter. Then within the body of that delegate, call `ConfigureTablesExtension()` on the object: ++```fsharp +let hostBuilder = new HostBuilder() +hostBuilder.ConfigureFunctionsWorkerDefaults(fun (context: HostBuilderContext) (appBuilder: IFunctionsWorkerApplicationBuilder) -> + appBuilder.ConfigureTablesExtension() |> ignore +) |> ignore +``` + # [Combined Azure Storage extension](#tab/storage-extension/isolated-process) Tables are included in a combined package for Azure Storage. Install the [Microsoft.Azure.Functions.Worker.Extensions.Storage NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Storage/4.0.4), version 4.x. |
azure-functions | Functions Container Apps Hosting | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-container-apps-hosting.md | This integration also means that you can use existing Functions client tools and ## Deploying Azure Functions to Container Apps -In the current preview, you must deploy your functions code in a Linux container that you create. Functions maintains a set of [lanuage-specific base images](https://mcr.microsoft.com/catalog?search=functions) that you can use to generate your containerized function apps. When you create a Functions project using [Azure Functions Core Tools](./functions-run-local.md) and include the [`--docker` option](./functions-core-tools-reference.md#func-init), Core Tools also generates a Dockerfile that you can use to create your container from the correct base image. +In the current preview, you must deploy your functions code in a Linux container that you create. Functions maintains a set of [language-specific base images](https://mcr.microsoft.com/catalog?search=functions) that you can use to generate your containerized function apps. When you create a Functions project using [Azure Functions Core Tools](./functions-run-local.md) and include the [`--docker` option](./functions-core-tools-reference.md#func-init), Core Tools also generates a Dockerfile that you can use to create your container from the correct base image. Azure Functions currently supports the following methods of deployment to Azure Container Apps: |
azure-functions | Functions Scenarios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-scenarios.md | public static async Task Run( ::: zone pivot="programming-language-java" + [Azure Functions Kafka trigger Java Sample](https://github.com/azure/azure-functions-kafka-extension/tree/main/samples/WalletProcessing_KafkademoSample)-+ [Event Hubs trigger examples](https://github.com/azure-samples/azure-functions-samples-java/blob/master/src/main/java/com/functions/EventHubTriggerFunction.java) -+ [Kafka triggered function examples](https://github.com/azure-samples/azure-functions-samples-java/blob/master/src/main/java/com/functions/KafkaTriggerFunction.java) + [Azure Event Hubs trigger for Azure Functions](functions-bindings-event-hubs-trigger.md?pivots=programming-language-java) + [Apache Kafka trigger for Azure Functions](functions-bindings-kafka-trigger.md?pivots=programming-language-java) ::: zone-end |
azure-government | Documentation Government Csp List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md | Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[CACI Inc - Federal](https://www.caci.com/)| |[Caloudi Corporation](https://www.caloudi.com/)| |[Cambria Solutions, Inc.](https://www.cambriasolutions.com/)|-|[Capgemini Government Solutions LLC](https://www.capgemini.com/us-en/service/capgemini-government-solutions/)| |[CAPSYS Technologies, LLC](https://www.capsystech.com/)| |[Casserly Consulting](https://www.casserlyconsulting.com)| |[Carahsoft Technology Corporation](https://www.carahsoft.com/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Cyber Advisors](https://cyberadvisors.com)| |[Cyber Cloud Technologies](https://www.cyber-cloud.com)| |[Cyber Korp Inc.](https://cyberkorp.com/)|-|[Cybercore Solutions LLC](https://cybercoresolutions.com/)| |[Dalecheck Technology Group](https://www.dalechek.com/)| |[Dasher Technologies, Inc.](https://www.dasher.com)| |[Data Center Services Inc](https://www.d8acenter.com)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[MetroStar Systems Inc.](https://www.metrostarsystems.com)| |[Mibura Inc.](https://www.mibura.com/)| |[Microtechnologies, LLC](https://www.microtech.net/)|-|[Miken Technologies](https://www.miken.net)| |[mindSHIFT Technologies, Inc.](https://www.mindshift.com/)| |[MIS Sciences Corp](https://www.mis-sciences.com/)| |[Mission Cyber LLC](https://missioncyber.com/b/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Redhorse Corporation](https://www.redhorsecorp.com)| |[Regan Technologies Corporation](http://www.regantech.com/)| |Remote Support Solutions Corp DBA RemoteWorks|-|[Resource Metrix](https://www.rmtrx.com)| |[Revenue Solutions, Inc](https://www.revenuesolutionsinc.com)| |[Ridge 
IT](https://www.ridgeit.com/)| |[RMON Networks Inc.](https://rmonnetworks.com/)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[TSAChoice Inc.](https://www.tsachoice.com)| |[Turnkey Technologies, Inc.](https://www.turnkeytec.com)| |[Tyto Athene LLC](https://gotyto.com/)|-|[U2Cloud LLC](https://www.u2cloud.com)| |[UDRI - SSG](https://udayton.edu/)| |[Unisys Corp / Blue Bell](https://www.unisys.com)| |[United Data Technologies, Inc.](https://udtonline.com)| Below you can find a list of all the authorized Cloud Solution Providers (CSPs), |[Vology Inc.](https://www.vology.com/)| |[vSolvIT](https://www.vsolvit.com/)| |[Warren Averett Technology Group](https://warrenaverett.com/warren-averett-technology-group/)|-|[Wintellect, LLC](https://www.wintellect.com)| |[Wintellisys, Inc.](https://wintellisys.com)| |[Withum](https://www.withum.com/service/cyber-information-security-services/)| |[Workspot, Inc.](https://workspot.com)| |[WorkMagic LLC](https://www.workmagic.com)| |[Wovenware US, Inc.](https://www.wovenware.com)| |[WCC Global](https://wwcglobal.com)|-|[WWT](https://www2.wwt.com)| |[Xantrion Incorporated](https://www.xantrion.com)| |[X-Centric IT Solutions, LLC](https://www.x-centric.com/)| |[XentIT, llc](https://xentit.com)| |
azure-linux | Quickstart Azure Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-linux/quickstart-azure-powershell.md | + + Title: 'Quickstart: Deploy an Azure Linux Container Host for an AKS cluster using Azure PowerShell' +description: Learn how to quickly create an Azure Linux Container Host for an AKS cluster using Azure PowerShell. ++++ Last updated : 11/20/2023+++# Quickstart: Deploy an Azure Linux Container Host for an AKS cluster using Azure PowerShell ++Get started with the Azure Linux Container Host by using Azure PowerShell to deploy an Azure Linux Container Host for an AKS cluster. After installing the prerequisites, you create a resource group, create an AKS cluster, connect to the cluster, and run a sample multi-container application in the cluster. ++## Prerequisites ++- [!INCLUDE [quickstarts-free-trial-note](../../includes/quickstarts-free-trial-note.md)] +- Use the PowerShell environment in [Azure Cloud Shell](/azure/cloud-shell/overview). For more information, see [Azure Cloud Shell Quickstart](/azure/cloud-shell/quickstart). + [![Screenshot of Launch Cloud Shell in a new window button.](./media/hdi-launch-cloud-shell.png)](https://shell.azure.com) +- If you're running PowerShell locally, install the `Az PowerShell` module and connect to your Azure account using the [`Connect-AzAccount`](/powershell/module/az.accounts/Connect-AzAccount) cmdlet. For more information about installing the Az PowerShell module, see [Install Azure PowerShell][install-azure-powershell]. +- The identity you use to create your cluster has the appropriate minimum permissions. For more details on access and identity for AKS, see [Access and identity options for Azure Kubernetes Service (AKS)](../aks/concepts-identity.md). ++## Create a resource group ++An [Azure resource group][azure-resource-group] is a logical group in which Azure resources are deployed and managed. When creating a resource group, you need to specify a location. 
This location is the storage location of your resource group metadata and where your resources run in Azure if you don't specify another region during resource creation. ++The following example creates a resource group named *testAzureLinuxResourceGroup* in the *eastus* region. ++- Create a resource group using the [`New-AzResourceGroup`][new-azresourcegroup] cmdlet. ++ ```azurepowershell-interactive + New-AzResourceGroup -Name testAzureLinuxResourceGroup -Location eastus + ``` ++ The following example output shows successful creation of the resource group: ++ ```output + ResourceGroupName : testAzureLinuxResourceGroup + Location : eastus + ProvisioningState : Succeeded + Tags : + ResourceId : /subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/testAzureLinuxResourceGroup + ``` ++ > [!NOTE] + > The above example uses *eastus*, but Azure Linux Container Host clusters are available in all regions. ++## Create an Azure Linux Container Host cluster ++The following example creates a cluster named *testAzureLinuxCluster* with one node. ++- Create an AKS cluster using the [`New-AzAksCluster`][new-azakscluster] cmdlet with the `-NodeOsSKU` flag set to *AzureLinux*. ++ ```azurepowershell-interactive + New-AzAksCluster -ResourceGroupName testAzureLinuxResourceGroup -Name testAzureLinuxCluster -NodeOsSKU AzureLinux + ``` ++ After a few minutes, the command completes and returns JSON-formatted information about the cluster. ++## Connect to the cluster ++To manage a Kubernetes cluster, use the Kubernetes command-line client, [kubectl](https://kubernetes.io/docs/reference/kubectl/kubectl/). `kubectl` is already installed if you use Azure Cloud Shell. ++1. Install `kubectl` locally using the `Install-AzAksCliTool` cmdlet. ++ ```azurepowershell-interactive + Install-AzAksCliTool + ``` ++2. Configure `kubectl` to connect to your Kubernetes cluster using the [`Import-AzAksCredential`][import-azakscredential] cmdlet. 
This command downloads credentials and configures the Kubernetes CLI to use them. ++ ```azurepowershell-interactive + Import-AzAksCredential -ResourceGroupName testAzureLinuxResourceGroup -Name testAzureLinuxCluster + ``` ++3. Verify the connection to your cluster using the [`kubectl get`][kubectl-get] command. This command returns a list of the cluster pods. ++ ```azurepowershell-interactive + kubectl get pods --all-namespaces + ``` ++## Deploy the application ++A [Kubernetes manifest file](../../articles/aks/concepts-clusters-workloads.md#deployments-and-yaml-manifests) defines a cluster's desired state, such as which container images to run. ++In this quickstart, you use a manifest to create all objects needed to run the [Azure Vote application](https://github.com/Azure-Samples/azure-voting-app-redis). This manifest includes two Kubernetes deployments: ++- The sample Azure Vote Python applications. +- A Redis instance. ++This manifest also creates two [Kubernetes Services](../../articles/aks/concepts-network.md#services): ++- An internal service for the Redis instance. +- An external service to access the Azure Vote application from the internet. ++1. Create a file named `azure-vote.yaml` and copy in the following manifest. ++ - If you use the Azure Cloud Shell, you can create the file using `code`, `vi`, or `nano`. 
++ ```yaml + apiVersion: apps/v1 + kind: Deployment + metadata: + name: azure-vote-back + spec: + replicas: 1 + selector: + matchLabels: + app: azure-vote-back + template: + metadata: + labels: + app: azure-vote-back + spec: + nodeSelector: + "kubernetes.io/os": linux + containers: + - name: azure-vote-back + image: mcr.microsoft.com/oss/bitnami/redis:6.0.8 + env: + - name: ALLOW_EMPTY_PASSWORD + value: "yes" + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + ports: + - containerPort: 6379 + name: redis +--- + apiVersion: v1 + kind: Service + metadata: + name: azure-vote-back + spec: + ports: + - port: 6379 + selector: + app: azure-vote-back +--- + apiVersion: apps/v1 + kind: Deployment + metadata: + name: azure-vote-front + spec: + replicas: 1 + selector: + matchLabels: + app: azure-vote-front + template: + metadata: + labels: + app: azure-vote-front + spec: + nodeSelector: + "kubernetes.io/os": linux + containers: + - name: azure-vote-front + image: mcr.microsoft.com/azuredocs/azure-vote-front:v1 + resources: + requests: + cpu: 100m + memory: 128Mi + limits: + cpu: 250m + memory: 256Mi + ports: + - containerPort: 80 + env: + - name: REDIS + value: "azure-vote-back" +--- + apiVersion: v1 + kind: Service + metadata: + name: azure-vote-front + spec: + type: LoadBalancer + ports: + - port: 80 + selector: + app: azure-vote-front + ``` ++ For a breakdown of YAML manifest files, see [Deployments and YAML manifests](../../articles/aks/concepts-clusters-workloads.md#deployments-and-yaml-manifests). ++2. 
Deploy the application using the [`kubectl apply`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply) command and specify the name of your YAML manifest: ++ ```azurepowershell-interactive + kubectl apply -f azure-vote.yaml + ``` ++ The following example output shows the successfully created deployments and services: ++ ```output + deployment "azure-vote-back" created + service "azure-vote-back" created + deployment "azure-vote-front" created + service "azure-vote-front" created + ``` ++## Test the application ++When the application runs, a Kubernetes service exposes the application frontend to the internet. This process can take a few minutes to complete. ++1. Monitor progress using the [`kubectl get service`](https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get) command with the `--watch` argument. ++ ```azurepowershell-interactive + kubectl get service azure-vote-front --watch + ``` ++ The **EXTERNAL-IP** output for the `azure-vote-front` service initially shows as *pending*. ++ ```output + NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE + azure-vote-front LoadBalancer 10.0.37.27 <pending> 80:30572/TCP 6s + ``` ++2. Once the **EXTERNAL-IP** address changes from *pending* to an actual public IP address, use `CTRL-C` to stop the `kubectl` watch process. The following example output shows a valid public IP address assigned to the service: ++ ```output + azure-vote-front LoadBalancer 10.0.37.27 52.179.23.131 80:30572/TCP 2m + ``` ++3. Open a web browser to the external IP address of your service to see the application in action. ++ :::image type="content" source="./media/azure-voting-application.png" alt-text="Screenshot of browsing to Azure Vote sample application."::: ++## Delete the cluster ++If you don't plan on continuing through the following tutorials, remove the created resources to avoid incurring Azure charges. 
++- Remove the resource group and all related resources using the [`Remove-AzResourceGroup`][remove-azresourcegroup] cmdlet. ++ ```azurepowershell-interactive + Remove-AzResourceGroup -Name testAzureLinuxResourceGroup + ``` ++## Next steps ++In this quickstart, you deployed an Azure Linux Container Host AKS cluster. To learn more about the Azure Linux Container Host and walk through a complete cluster deployment and management example, continue to the Azure Linux Container Host tutorial. ++> [!div class="nextstepaction"] +> [Azure Linux Container Host tutorial](./tutorial-azure-linux-create-cluster.md) ++<!-- LINKS - internal --> +[install-azure-powershell]: /powershell/azure/install-az-ps +[azure-resource-group]: ../azure-resource-manager/management/overview.md +[new-azresourcegroup]: /powershell/module/az.resources/new-azresourcegroup +[new-azakscluster]: /powershell/module/az.aks/new-azakscluster +[import-azakscredential]: /powershell/module/az.aks/import-azakscredential +[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get +[remove-azresourcegroup]: /powershell/module/az.resources/remove-azresourcegroup |
azure-maps | Understanding Azure Maps Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md | When you use [Azure Maps Services], the API requests you make generate transactions. The following table summarizes the Azure Maps services that generate transactions, billable and nonbillable, along with any notable aspects that are helpful in understanding how the number of transactions is calculated. +> [!NOTE] +> +> For Azure Maps pricing information and free offering details, see [Azure Maps Pricing]. + | Azure Maps Service | Billable | Transaction Calculation | Meter | |--|-|-|-| | Data service (Deprecated<sup>1</sup>) | Yes, except for `MapDataStorageService.GetDataStatus` and `MapDataStorageService.GetUserData`, which are nonbillable| One request = 1 transaction| <ul><li>Location Insights Data (Gen2 pricing)</li></ul>| |
azure-monitor | Azure Monitor Agent Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration.md | The following features and services now have an Azure Monitor Agent version (som | [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Migrate to new service called Connection Monitor with Azure Monitor Agent | Generally available | [Monitor network connectivity using Azure Monitor agent with connection monitor](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) | | Azure Stack HCI Insights | Migrate to Azure Monitor Agent | Generally available| [Monitor Azure Stack HCI with Insights](/azure-stack/hci/manage/monitor-hci-single) | | [Azure Virtual Desktop (AVD) Insights](../../virtual-desktop/insights.md) | Migrate to Azure Monitor Agent |Generally available | [Use Azure Virtual Desktop Insights to monitor your deployment](../../virtual-desktop/insights.md#session-host-data-settings) |+| [Container Monitoring Solution](../containers/containers.md) | Migrate to new service called Container Insights with Azure Monitor Agent | Generally Available | [Enable Container Insights](../containers/container-insights-transition-solution.md) | > [!NOTE] > Features and services listed above in preview **may not be available in Azure Government and China clouds**. They will be available typically within a month *after* the features/services become generally available. |
azure-monitor | Alerts Create New Alert Rule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-create-new-alert-rule.md | To edit an existing alert rule: 1. On the **Actions** tab, select or create the required [action groups](./action-groups.md). + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule."::: + ### Set the alert rule details 1. On the **Details** tab, define the **Project details**. To edit an existing alert rule: |Field |Description | ||| |Enable upon creation| Select for the alert rule to start running as soon as you're done creating it.|- |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met.<br> If you don't select this checkbox, metric alerts are stateless. Stateless alerts fire each time the condition is met, even if alert already fired.<br> The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:<br>**Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes.<br>**Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the value of the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 to 30 minutes.| + |Automatically resolve alerts (preview) |Select to make the alert stateful. When an alert is stateful, the alert is resolved when the condition is no longer met.<br> If you don't select this checkbox, metric alerts are stateless. 
Stateless alerts fire each time the condition is met, even if the alert already fired.<br> The frequency of notifications for stateless metric alerts differs based on the alert rule's configured frequency:<br>**Alert frequency of less than 5 minutes**: While the condition continues to be met, a notification is sent somewhere between one and six minutes.<br>**Alert frequency of more than 5 minutes**: While the condition continues to be met, a notification is sent between the configured frequency and double the value of the frequency. For example, for an alert rule with a frequency of 15 minutes, a notification is sent somewhere between 15 to 30 minutes.| #### [Log alert](#tab/log) To edit an existing alert rule: The identity associated with the rule must have these roles: - If the query is accessing a Log Analytics workspace, the identity must be assigned a **Reader role** for all workspaces accessed by the query. If you're creating resource-centric log alerts, the alert rule may access multiple workspaces, and the identity must have a reader role on all of them.- - If the you are querying an ADX or ARG cluster you must add **Reader role** for all data sources accessed by the query. For example, if the query is resource centric, it needs a reader role on that resources. + - If you are querying an ADX or ARG cluster, you must add a **Reader role** for all data sources accessed by the query. For example, if the query is resource centric, it needs a reader role on those resources. - If the query is [accessing a remote Azure Data Explorer cluster](../logs/azure-monitor-data-explorer-proxy.md), the identity must be assigned: - **Reader role** for all data sources accessed by the query. For example, if the query is calling a remote Azure Data Explorer cluster using the adx() function, it needs a reader role on that ADX cluster. - **Database viewer** for all databases the query is accessing. To edit an existing alert rule: 1. 
Select **Enable upon creation** for the alert rule to start running as soon as you're done creating it. -1. <a name="custom-props"></a>(Optional) In the **Custom properties**, if you've configured action groups for this alert rule, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function or logic app actions. +1. <a name="custom-props"></a>(Optional) In the **Custom properties** section, if you've configured action groups for this alert rule, you can add your own properties to include in the alert notification payload. You can use these properties in the actions called by the action group, such as webhook, Azure function or logic app actions. The custom properties are specified as key:value pairs, using either static text, a dynamic value extracted from the alert payload, or a combination of both. To edit an existing alert rule: Use the [common alert schema](alerts-common-schema.md) format to specify the field in the payload, whether or not the action groups configured for the alert rule use the common schema. - :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-actions-tab.png" alt-text="Screenshot that shows the Actions tab when creating a new alert rule."::: + :::image type="content" source="media/alerts-create-new-alert-rule/alerts-rule-custom-props.png" alt-text="Screenshot that shows the custom properties section of creating a new alert rule."::: In the following examples, values in the **custom properties** are used to utilize data from a payload that uses the common alert schema: |
azure-monitor | Asp Net Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-core.md | |
azure-monitor | Asp Net Dependencies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-dependencies.md | |
azure-monitor | Asp Net Trace Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net-trace-logs.md | For each source, you can set the following parameters: ## Use DiagnosticSource events -You can configure [System.Diagnostics.DiagnosticSource](https://github.com/dotnet/corefx/blob/master/src/System.Diagnostics.DiagnosticSource/src/DiagnosticSourceUsersGuide.md) events to be sent to Application Insights as traces. First, install the [`Microsoft.ApplicationInsights.DiagnosticSourceListener`](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener) NuGet package. Then edit the "TelemetryModules" section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file. +You can configure [System.Diagnostics.DiagnosticSource](https://github.com/dotnet/runtime/blob/main/src/libraries/System.Diagnostics.DiagnosticSource/src/DiagnosticSourceUsersGuide.md) events to be sent to Application Insights as traces. First, install the [`Microsoft.ApplicationInsights.DiagnosticSourceListener`](https://www.nuget.org/packages/Microsoft.ApplicationInsights.DiagnosticSourceListener) NuGet package. Then edit the "TelemetryModules" section of the [ApplicationInsights.config](./configuration-with-applicationinsights-config.md) file. ```xml <Add Type="Microsoft.ApplicationInsights.DiagnosticSourceListener.DiagnosticSourceTelemetryModule, Microsoft.ApplicationInsights.DiagnosticSourceListener"> |
azure-monitor | Asp Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/asp-net.md | -> [!NOTE] -> An [OpenTelemetry-based .NET offering](opentelemetry-enable.md?tabs=net) is available. [Learn more](opentelemetry-overview.md). [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] |
azure-monitor | Availability Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-overview.md | You can create up to 100 availability tests per Application Insights resource. ## Troubleshooting +> [!WARNING] +> We have recently enabled TLS 1.3 in Availability Tests. If you are seeing new error messages as a result, please ensure that clients running on Windows Server 2022 with TLS 1.3 enabled can connect to your endpoint. If you are unable to do this, you may consider temporarily disabling TLS 1.3 on your endpoint so that Availability Tests will fall back to older TLS versions. +> For additional information, please check the [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability). See the dedicated [troubleshooting article](/troubleshoot/azure/azure-monitor/app-insights/troubleshoot-availability). ## Frequently asked questions |
azure-monitor | Configuration With Applicationinsights Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/configuration-with-applicationinsights-config.md | |
azure-monitor | Custom Operations Tracking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/custom-operations-tracking.md | This article provides guidance on how to track custom operations with the Applic - Application Insights for web applications (running ASP.NET) version 2.4+. - Application Insights for ASP.NET Core version 2.1+. + ## Overview An operation is a logical piece of work run by an application. It has a name, start time, duration, result, and a context of execution like user name, properties, and result. If operation A was initiated by operation B, then operation B is set as a parent for A. An operation can have only one parent, but it can have many child operations. For more information on operations and telemetry correlation, see [Application Insights telemetry correlation](distributed-tracing-telemetry-correlation.md). |
azure-monitor | Eventcounters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/eventcounters.md | |
azure-monitor | Get Metric | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/get-metric.md | |
azure-monitor | Ilogger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/ilogger.md | |
azure-monitor | Java Standalone Config | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md | This article shows you how to configure Azure Monitor Application Insights for J ## Connection string and role name +Connection string and role name are the most common settings you need to get started: ++```json +{ + "connectionString": "...", + "role": { + "name": "my cloud role name" + } +} +``` +Connection string is required. Role name is important anytime you're sending data from different applications to the same Application Insights resource. More information and configuration options are provided in the following sections. You can specify your own configuration file path by using one of these two optio * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property -If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.18.jar` is located. +If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.4.18.jar` is located. Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`. Or you can set the connection string by using the Java system property `applicat You can also set the connection string by specifying a file to load the connection string from. -If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.18.jar` is located. +If you specify a relative path, it resolves relative to the directory where `applicationinsights-agent-3.4.18.jar` is located. ```json { Sampling is based on request, which means that if a request is captured (sampled Sampling is also based on trace ID to help ensure consistent sampling decisions across different services. 
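The trace-ID-based decision can be illustrated with a small sketch. This is a simplified model of the idea, not the Java agent's actual hash function; the function name and percentage semantics here are illustrative:

```python
import hashlib

def should_sample(trace_id: str, percentage: float) -> bool:
    # Derive a stable score in [0, 1) from the trace ID so every service
    # participating in the same trace makes the same sampling decision.
    digest = hashlib.sha256(trace_id.encode("utf-8")).digest()
    score = int.from_bytes(digest[:8], "big") / 2**64
    return score < percentage / 100.0
```

Because the score depends only on the trace ID, a trace that is sampled in one service is sampled in all the services it passes through.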
-Sampling only applies to logs inside of a request. Logs which are not inside of a request (e.g. startup logs) are always collected by default. +Sampling only applies to logs inside of a request. Logs that aren't inside of a request (for example, startup logs) are always collected by default. If you want to sample those logs, you can use [Sampling overrides](./java-standalone-sampling-overrides.md). ### Rate-limited sampling Starting from 3.4.0, rate-limited sampling is available and is now the default. -If no sampling has been configured, the default is now rate-limited sampling configured to capture at most +If no sampling is configured, the default is now rate-limited sampling configured to capture at most (approximately) five requests per second, along with all the dependencies and logs on those requests. This configuration replaces the prior default, which was to capture all requests. If you still want to capture all requests, use [fixed-percentage sampling](#fixed-percentage-sampling) and set the sampling percentage to 100. If you want to collect some other JMX metrics: In the preceding configuration example: * `name` is the metric name that is assigned to this JMX metric (can be anything).-* `objectName` is the [Object Name](https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html) of the JMX MBean that you want to collect. -* `attribute` is the attribute name inside of the JMX MBean that you want to collect. +* `objectName` is the [Object Name](https://docs.oracle.com/javase/8/docs/api/javax/management/ObjectName.html) of the `JMX MBean` that you want to collect. +* `attribute` is the attribute name inside of the `JMX MBean` that you want to collect. Numeric and Boolean JMX metric values are supported. Boolean JMX metrics are mapped to `0` for false and `1` for true. 
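As an illustration of the `name`/`objectName`/`attribute` fields described above, a custom JMX metric entry might look like the following sketch. It assumes the `jmxMetrics` configuration key, and uses a standard JVM MBean (`java.lang:type=ClassLoading`) as the example target:

```json
{
  "jmxMetrics": [
    {
      "name": "Loaded Class Count",
      "objectName": "java.lang:type=ClassLoading",
      "attribute": "LoadedClassCount"
    }
  ]
}
```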
You can use `${...}` to read the value from the specified environment variable. ## Inherited attribute (preview) -Starting from version 3.2.0, if you want to set a custom dimension programmatically on your request telemetry -and have it inherited by dependency and log telemetry, which are captured in the context of that request: +Starting with version 3.2.0, you can set a custom dimension programmatically on your request telemetry and have it inherited by the dependency and log telemetry that are captured in the context of that request: ```json { For example, when your java application returns a response like: </html> ``` -Then it will be automatically modified to return: +It's automatically modified to return: + ```html <!DOCTYPE html> <html lang="en"> Log4j, Logback, JBoss Logging, and java.util.logging are autoinstrumented. Logging is only captured if it: -* Meets the level that's configured for the logging framework. -* Also meets the level that's configured for Application Insights. +* Meets the configured level for the logging framework. +* Also meets the configured level for Application Insights. For example, if your logging framework is configured to log `WARN` (and aforementioned) from the package `com.example`, and Application Insights is configured to capture `INFO` (and aforementioned), Application Insights only captures `WARN` (and more severe) from the package `com.example`. |
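The two-threshold capture rule for logging can be sketched as follows. This is an illustrative model only (level names and function are hypothetical, not part of the agent); the point is that the more severe of the two configured thresholds wins:

```python
SEVERITY = {"TRACE": 0, "DEBUG": 1, "INFO": 2, "WARN": 3, "ERROR": 4}

def is_captured(record_level: str, framework_threshold: str, ai_threshold: str) -> bool:
    # A log record reaches Application Insights only if it meets BOTH the
    # logging framework's threshold and the Application Insights threshold,
    # so the effective threshold is the more severe of the two.
    needed = max(SEVERITY[framework_threshold], SEVERITY[ai_threshold])
    return SEVERITY[record_level] >= needed
```

With the framework at `WARN` and Application Insights at `INFO`, only `WARN` and more severe records are captured, matching the example above.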
azure-monitor | Migrate From Instrumentation Keys To Connection Strings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/migrate-from-instrumentation-keys-to-connection-strings.md | This article walks through migrating from instrumentation keys to [connection st 1. Configure the Application Insights SDK by following [How to set connection strings](sdk-connection-string.md#set-a-connection-string). > [!IMPORTANT]-> Don't use both a connection string and an instrumentation key. The latter one set supersedes the other, and could result in telemetry not appearing on the portal. [missing data](#missing-data). +> Don't use both a connection string and an instrumentation key. Whichever is set last supersedes the other, and could result in telemetry not appearing on the portal. See [missing data](#missing-data). ## Migration at scale |
azure-monitor | Nodejs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/nodejs.md | All events related to an incoming HTTP request are correlated for faster trouble You can use the TelemetryClient API to manually instrument and monitor more aspects of your app and system. We describe the TelemetryClient API in more detail later in this article. -> [!NOTE] -> An [OpenTelemetry-based Node.js offering](opentelemetry-enable.md?tabs=nodejs) is available. [Learn more](opentelemetry-overview.md). ## Get started |
azure-monitor | Opentelemetry Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-configuration.md | Use one of the following two ways to configure the connection string: ### [Java](#tab/java) +To set the connection string, see [Connection string](java-standalone-config.md#connection-string). ### [Node.js](#tab/nodejs) |
azure-monitor | Performance Counters | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/performance-counters.md | Windows provides a variety of [performance counters](/windows/desktop/perfctrs/a Performance counters collection is supported if your application is running under IIS on an on-premises host or is a virtual machine to which you have administrative access. Although applications running as Azure Web Apps don't have direct access to performance counters, a subset of available counters is collected by Application Insights. + ## Prerequisites Grant the app pool service account permission to monitor performance counters by adding it to the [Performance Monitor Users](/windows/security/identity-protection/access-control/active-directory-security-groups#bkmk-perfmonitorusers) group. |
azure-monitor | Pre Aggregated Metrics Log Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/pre-aggregated-metrics-log-metrics.md | |
azure-monitor | Sdk Connection String | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/sdk-connection-string.md | Key-value pairs provide an easy way for users to define a prefix suffix combinat [!INCLUDE [azure-monitor-log-analytics-rebrand](../../../includes/azure-monitor-instrumentation-key-deprecation.md)] - - ## Scenario overview Scenarios most affected by this change: For more information, see [Regions that require endpoint modification](./create- #### Is the connection string a secret? -The connection string contains an ikey, which is a unique identifier used by the ingestion service to associate telemetry to a specific Application Insights resource. It's not considered a security token or key. If you want to protect your AI resource from misuse, the ingestion endpoint provides authenticated telemetry ingestion options based on Microsoft Entra ID. +The connection string contains an ikey, which is a unique identifier used by the ingestion service to associate telemetry with a specific Application Insights resource. The ikey isn't a security token or a security key. If you want to protect your AI resource from misuse, the ingestion endpoint provides authenticated telemetry ingestion options based on [Microsoft Entra ID](azure-ad-authentication.md#microsoft-entra-authentication-for-application-insights). > [!NOTE]-> The Application Insights JavaScript SDK requires the connection string to be passed in during initialization and configuration. It's viewable in plain text in client browsers. 
There's no easy way to use the [Microsoft Entra ID-based authentication](azure-ad-authentication.md#microsoft-entra-authentication-for-application-insights) for browser telemetry. We recommend that you consider creating a separate Application Insights resource for browser telemetry if you need to secure the service telemetry. ## Connection string examples |
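As a sketch of the key-value structure described above, the following parses a connection string into its pairs. The sample values are illustrative placeholders, not real credentials:

```python
def parse_connection_string(connection_string: str) -> dict:
    # Each pair is 'Key=Value'; pairs are separated by semicolons.
    pairs = {}
    for part in connection_string.split(";"):
        if part:
            key, _, value = part.partition("=")
            pairs[key.strip()] = value.strip()
    return pairs

example = ("InstrumentationKey=00000000-0000-0000-0000-000000000000;"
           "IngestionEndpoint=https://eastus-0.in.applicationinsights.azure.com/")
fields = parse_connection_string(example)
```

Note that `partition("=")` splits only on the first `=`, so values that themselves contain `=` (or `//` in URLs) are preserved intact.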
azure-monitor | Standard Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/standard-metrics.md | |
azure-monitor | Telemetry Channels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/telemetry-channels.md | |
azure-monitor | Worker Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/worker-service.md | |
azure-monitor | Container Insights Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-authentication.md | Title: Configure agent authentication for the Container Insights agent description: This article describes how to configure authentication for the containerized agent used by Container insights. + Last updated 10/18/2023 |
azure-monitor | Container Insights Enable Aks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-enable-aks.md | Title: Enable Container insights for Azure Kubernetes Service (AKS) cluster description: Learn how to enable Container insights on an Azure Kubernetes Service (AKS) cluster. Last updated 11/14/2023-+ The command will return JSON-formatted information about the solution. The `addo * If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md). * With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.- |
azure-monitor | Prometheus Remote Write | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/prometheus-remote-write.md | Use the following command to view your container log. Remote write data is flowi ```azurecli kubectl logs <Prometheus-Pod-Name> <Azure-Monitor-Side-Car-Container-Name>-# example: kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0 prom-remotewrite +# example: kubectl logs prometheus-prometheus-kube-prometheus-prometheus-0 prom-remotewrite --namespace <namespace> ``` The output from this command should look similar to the following: |
azure-monitor | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/policy-reference.md | Title: Built-in policy definitions for Azure Monitor description: Lists Azure Policy built-in policy definitions for Azure Monitor. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-monitor | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Monitor description: Lists Azure Policy Regulatory Compliance controls available for Azure Monitor. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
azure-monitor | Workbooks Graph Visualizations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-graph-visualizations.md | The following graph shows data flowing in and out of a computer via various port <!-- convertborder later --> :::image type="content" source="./media/workbooks-graph-visualizations/graph.png" lightbox="./media/workbooks-graph-visualizations/graph.png" alt-text="Screenshot that shows a tile summary view." border="false"::: -Watch this video to learn how to create graphs and use links in Azure Workbooks. -> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5ah5O] - ## Add a graph 1. Switch the workbook to edit mode by selecting **Edit**. |
azure-monitor | Workbooks Honey Comb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-honey-comb.md | The following image shows the CPU utilization of virtual machines across two sub <!-- convertborder later --> :::image type="content" source=".\media\workbooks-honey-comb\cpu-example.png" lightbox=".\media\workbooks-honey-comb\cpu-example.png" alt-text="Screenshot that shows the CPU utilization of virtual machines across two subscriptions." border="false"::: +Watch this video to learn how to build a hive cluster. ++> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RE5ah5O] + ## Add a honeycomb 1. Switch the workbook to edit mode by selecting **Edit**. |
azure-monitor | Workbooks Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/visualize/workbooks-overview.md | Watch this video to see how you can use Azure Workbooks to get insights and visu ## The gallery -The gallery lists all the saved workbooks and templates for your workspace. You can easily organize, sort, and manage workbooks of all types. +The gallery lists all the saved workbooks and templates in your current environment. Select **Browse across galleries** to see the workbooks for all your resources. :::image type="content" source="media/workbooks-overview/workbooks-gallery.png" alt-text="Screenshot that shows the Workbooks gallery."::: For custom roles, you must add `microsoft.insights/workbooks/write` to the user' ## Next steps -[Get started with Azure Workbooks](workbooks-getting-started.md) +[Get started with Azure Workbooks](workbooks-getting-started.md) |
azure-netapp-files | Azure Government | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-government.md | All [Azure NetApp Files features](whats-new.md) available on Azure public cloud | Azure NetApp Files backup | Public preview | No | | Azure NetApp Files large volumes | Public preview | No | | Edit network features for existing volumes | Public preview | No |-| Standard network features | Generally available (GA) | Public preview [(in select regions)](azure-netapp-files-network-topologies.md#supported-regions) | | Standard storage with cool access in Azure NetApp Files | Public preview | No | ## Portal access |
azure-netapp-files | Azure Netapp Files Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-metrics.md | Azure NetApp Files metrics are natively integrated into Azure Monitor. From with This size includes logical space used by active file systems and snapshots. - *Volume Snapshot Size* The size of all snapshots in a volume. +- *Throughput limit reached* + + Throughput limit reached is a Boolean metric that denotes that the volume is hitting its QoS limits. The value 1 means that the volume has reached its maximum throughput, and throughput for this volume will be throttled. The value 0 means this limit has not yet been reached. + + If the volume is hitting the throughput limit, it's not sized appropriately for the application's demands. To resolve throughput issues: ++ - Resize the volume: ++ Increase the volume size to allocate more throughput to the volume so it's not throttled. + - Modify the service level: + + The Premium and Ultra service levels in Azure NetApp Files cater to workloads with higher throughput requirements. [Moving the volume to a capacity pool in a higher service level](dynamic-change-volume-service-level.md) automatically increases these limits for the volume. + - Change the workloads/application: ++ Consider repurposing the volume and delegating a different volume with a larger size and/or in a higher service level to meet your application requirements. If it's an NFS volume, consider changing mount options to reduce data flow if your application supports those changes. ++ :::image type="content" source="../media/azure-netapp-files/throughput-limit-reached.png" alt-text="Screenshot that shows Azure NetApp Files metrics with a line graph demonstrating throughput limit reached." lightbox="../media/azure-netapp-files/throughput-limit-reached.png"::: + ## Performance metrics for volumes |
azure-netapp-files | Azure Netapp Files Network Topologies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-network-topologies.md | Azure NetApp Files volumes are designed to be contained in a special purpose sub * UAE North * UK South * UK West-* US Gov Texas (public preview) -* US Gov Virginia (public preview) +* US Gov Arizona +* US Gov Texas +* US Gov Virginia * West Europe * West US * West US 2 |
azure-netapp-files | Cool Access Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/cool-access-introduction.md | Using Azure NetApp Files standard storage with cool access, you can configure in Most cold data is associated with unstructured data. It can account for more than 50% of the total storage capacity in many storage environments. Infrequently accessed data associated with productivity software, completed projects, and old datasets is an inefficient use of high-performance storage. -Azure NetApp Files supports three [service levels](azure-netapp-files-service-levels.md) that can be configured at capacity pool level (Standard, Premium and Ultra). Cool access is an additional service only on the Standard service level. +Azure NetApp Files supports three [service levels](azure-netapp-files-service-levels.md) that can be configured at capacity pool level (Standard, Premium and Ultra). Cool access is an additional service only on the Standard service level. Standard storage with cool access is supported only on capacity pools of the **auto** QoS type. You can configure the standard storage with cool access on a volume by specifying the number of days (the coolness period, ranging from 7 to 183 days) for inactive data to be considered "cool". When the data has remained inactive for the specified coolness period, the tiering process begins, and the data is moved to the cool tier (the Azure storage account). This move to the cool tier can take a few days. For example, if you specify 31 days as the coolness period, then 31 days after a data block is last accessed (read or write), it's qualified for movement to the cool tier. When you create volumes in the capacity pool and start tiering data to the cool * Assume that you create four volumes with 1 TiB each. Each volume has 0.25 TiB of the volume capacity on the hot tier, and 0.75 TiB of the volume capacity in the cool tier. 
The billing calculation is as follows: - * 1 TiB capacity at the hot tier rate - * 3 TiB capacity at the cool tier rate + * 1-TiB capacity at the hot tier rate + * 3-TiB capacity at the cool tier rate * Network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers. * Assume that you create two volumes with 1 TiB each. Each volume has 0.25 TiB of the volume capacity on the hot tier, and 0.75 TiB of the volume capacity in the cool tier. The billing calculation is as follows: - * 0.5 TiB capacity at the hot tier rate + * 0.5-TiB capacity at the hot tier rate * 2 TiB of unallocated capacity at the hot tier rate - * 1.5 TiB capacity at the cool tier rate + * 1.5-TiB capacity at the cool tier rate * Network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers. * Assume that you create one volume with 1 TiB. The volume has 0.25 TiB of the volume capacity on the hot tier, 0.75 of the volume capacity in the cool tier. The billing calculation is as follows: - * 0.25 TiB capacity at the hot tier rate - * 0.75 TiB capacity at the cool tier rate + * 0.25-TiB capacity at the hot tier rate + * 0.75-TiB capacity at the cool tier rate * Network transfer between the hot tier and the cool tier at the rate determined by the markup on top of the transaction cost (`GET`, `PUT`) on blob storage and private link transfer in either direction between the hot tiers. ### Examples of cost calculations with varying coolness periods Your storage cost for the *first month* would be: | Cost | Description | Calculation | |||| | Unallocated storage cost for Day 1~30 (30 days) | 1 TiB of unallocated storage | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. 
= $151.00` |-| Storage cost for Day 1~7 (7 days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 7 days x 730/30 hrs. x $0.000202/GiB/hr. = $140.93` | +| Storage cost for Day 1~7 (seven days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 7 days x 730/30 hrs. x $0.000202/GiB/hr. = $140.93` | | Storage cost for Day 8~30 (23 days) | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 23 days x 730/30 hrs. x $0.000202/GiB/hr. = $115.77` <br><br> `3 TiB x 1024 x 23 days x 730/30 hrs. x $0.000082/GiB/hr. = $140.98` | | Network transfer cost | Moving inactive data to cool tier <br><br> 20% of data read/write from cool tier | `3 TiB x 1024 x $0.020000/GiB = $61.44` <br><br> `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` | | **First month total** || **`$622.41`** | Your storage cost for the *second month* would be: | Cost | Description | Calculation | |||| | Unallocated storage cost for Day 1~30 (30 days) | 1 TiB of unallocated storage | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` |-| Storage cost for Day 1~5 (5 days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 5 days x 730/30 hrs. x $0.000202/GiB/hr. = $100.67` | +| Storage cost for Day 1~5 (five days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 5 days x 730/30 hrs. x $0.000202/GiB/hr. = $100.67` | | Storage cost for Day 6~30 (25 days) | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 25 days x 730/30 hrs. x $0.000202/GiB/hr. = $125.83` <br><br> `3 TiB x 1024 x 25 days x 730/30 hrs. x $0.000082/GiB/hr. 
= $153.24` | | Network transfer cost | Moving inactive data to cool tier <br><br> 20% of data read/write from cool tier | `3 TiB x 1024 x $0.020000 /GiB = $61.44` <br><br> `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` | | **Second month total** || **`$604.47`** | Your storage cost for the *third month* would be: | Cost | Description | Calculation | |||| | Unallocated storage cost for Day 1~30 (30 days) | 1 TiB of unallocated storage | `1 TiB x 1024 x 30 days x 730/30 hrs. x $0.000202/GiB/hr. = $151.00` |-| Storage cost for Day 1~3 (3 days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 3 days x 730/30 hrs. x $0.000202/GiB/hr. = $60.40` | +| Storage cost for Day 1~3 (three days) | 4 TiB of active data (hot tier) | `4 TiB x 1024 x 3 days x 730/30 hrs. x $0.000202/GiB/hr. = $60.40` | | Storage cost for Day 4~30 (27 days) | 1 TiB of active data (hot tier) <br><br> 3 TiB of inactive data (cool tier) | `1 TiB x 1024 x 27 days x 730/30 hrs. x $0.000202/GiB/hr. = $135.90` <br><br> `3 TiB x 1024 x 27 days x 730/30 hrs. x $0.000082/GiB/hr. = $165.50` | | Network transfer cost | Moving inactive data to cool tier <br><br> 20% of data read/write from cool tier | `3 TiB x 1024 x $0.020000/GiB = $61.44` <br><br> `3 TiB x 1024 x 20% x $0.020000/GiB = $12.29` | | **Third month total** || **`$586.52`** | |
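The monthly figures in the cost tables above all follow the same formula: capacity in GiB × days × (730/30) hours per day × the per-GiB hourly rate. A small sketch that reproduces the first-month numbers (the rates are the example values used in the tables, not current pricing):

```python
# Example rates from the article's cost tables ($/GiB/hour);
# 730/30 converts days into the billing model's hours-per-day factor.
HOT_RATE = 0.000202
COOL_RATE = 0.000082
HOURS_PER_DAY = 730 / 30

def tier_cost(tib, days, rate_per_gib_hour):
    """Capacity cost for `tib` TiB held for `days` days at the given rate."""
    return round(tib * 1024 * days * HOURS_PER_DAY * rate_per_gib_hour, 2)

# First-month rows from the table:
print(tier_cost(1, 30, HOT_RATE))   # unallocated storage, Day 1~30
print(tier_cost(4, 7, HOT_RATE))    # active data (hot tier), Day 1~7
print(tier_cost(3, 23, COOL_RATE))  # inactive data (cool tier), Day 8~30
```

The three printed values match the `$151.00`, `$140.93`, and `$140.98` capacity entries in the first-month table; network transfer is billed separately per GiB moved.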
azure-netapp-files | Manage Cool Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/manage-cool-access.md | The standard storage with cool access feature provides options for the "coolne * A cool-access capacity pool can contain both volumes with cool access enabled and volumes with cool access disabled. * After the capacity pool is configured with the option to support cool access volumes, the setting can't be disabled at the _capacity pool_ level. However, you can turn on or turn off the cool access setting at the volume level anytime. Turning off the cool access setting at the _volume_ level stops further tiering of data. * Standard storage with cool access is supported only on capacity pools of the **auto** QoS type. + * An auto QoS capacity pool enabled for standard storage with cool access cannot be converted to a capacity pool using manual QoS. * You can't use large volumes with Standard storage with cool access. * See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md#resource-limits) for maximum number of volumes supported for cool access per subscription per region.-* Considerations for using cool access with [cross-region replication](cross-region-replication-requirements-considerations.md) (CRR): +* Considerations for using cool access with [cross-region replication](cross-region-replication-requirements-considerations.md) (CRR) and [cross-zone replication](cross-zone-replication-introduction.md): * If the volume is in a CRR relationship as a source volume, you can enable cool access on it only if the [mirror state](cross-region-replication-display-health-status.md#display-replication-status) is `Mirrored`. Enabling cool access on the source volume automatically enables cool access on the destination volume. * If the volume is in a CRR relationship as a destination volume (data protection volume), enabling cool access isn't supported for the volume. 
* The cool access setting is updated automatically on the destination volume to be the same as the source volume. When you update the cool access setting on the source volume, the same setting is applied at the destination volume. |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | Azure NetApp Files is updated regularly. This article provides a summary about t ## November 2023 +* [Standard network features in US Gov regions](azure-netapp-files-network-topologies.md#supported-regions) is now generally available (GA) + + Azure NetApp Files now supports Standard network features for new volumes in US Gov Arizona, US Gov Texas, and US Gov Virginia. Standard network features provide an enhanced virtual networking experience and a consistent security posture for all workloads, including Azure NetApp Files. + * [Volume user and group quotas](default-individual-user-group-quotas-introduction.md) is now generally available (GA). User and group quotas enable you to define how much storage capacity individual users or groups can use within a specific Azure NetApp Files volume. You can set default (same for all users) or individual user quotas on all NFS, SMB, and dual protocol-enabled volumes. On all NFS-enabled volumes, you can define a default (that is, same for all users) or individual group quotas. |
azure-portal | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/policy-reference.md | Title: Built-in policy definitions for Azure portal description: Lists Azure Policy built-in policy definitions for Azure portal. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-relay | Relay Hybrid Connections Java Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-hybrid-connections-java-get-started.md | + + Title: Azure Relay Hybrid Connections - HTTP requests in Java +description: Write a Java console application for Azure Relay Hybrid Connections HTTP requests. + Last updated : 06/21/2022++++# Get started with Relay Hybrid Connections HTTP requests in Java +++In this quickstart, you create Java sender and receiver applications that send and receive messages by using the HTTP protocol. The applications use the Hybrid Connections feature of Azure Relay. To learn about Azure Relay in general, see [Azure Relay](relay-what-is-it.md). ++In this quickstart, you take the following steps: ++1. Create a Relay namespace by using the Azure portal. +2. Create a hybrid connection in that namespace by using the Azure portal. +3. Write a server (listener) console application to receive messages. +4. Write a client (sender) console application to send messages. +5. Run the applications. ++## Prerequisites +- [Java](https://www.java.com/en/). Ensure that you're running JDK 1.8 or later. +- [Maven](https://maven.apache.org/install.html). Ensure that Maven is installed. +- [Azure Relay SDK](https://github.com/Azure/azure-relay-java). Review the Java SDK. +- An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin. ++## Create a namespace using the Azure portal ++## Create a hybrid connection using the Azure portal ++## Create a server application (listener) +To listen and receive messages from the Relay, write a Java console application. +++## Create a client application (sender) ++To send messages to the Relay, you can use any HTTP client, or write a Java console application. ++## Run the applications ++1. 
Run the server application: from a command prompt, run `java -cp <jar_dependency_path> com.example.listener.Listener`. +2. Run the client application: from a command prompt, run `java -cp <jar_dependency_path> com.example.sender.Sender`, and enter some text. +3. Ensure that the server application console outputs the text that was entered in the client application. ++Congratulations, you have created an end-to-end Hybrid Connections application using Java! ++## Next steps +In this quickstart, you created Java client and server applications that used HTTP to send and receive messages. The Hybrid Connections feature of Azure Relay also supports using WebSockets to send and receive messages. To learn how to use WebSockets with Azure Relay Hybrid Connections, see the [WebSockets quickstart](relay-hybrid-connections-node-get-started.md). ++In this quickstart, you used Java to create client and server applications. To learn how to write client and server applications using .NET Framework, see the [.NET WebSockets quickstart](relay-hybrid-connections-dotnet-get-started.md) or the [.NET HTTP quickstart](relay-hybrid-connections-http-requests-dotnet-get-started.md). |
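HTTP requests to a hybrid connection carry a Shared Access Signature (SAS) token derived from one of the namespace's shared access keys — something the Relay SDK normally handles for you. For illustration only, here is the standard Relay/Service Bus SAS construction sketched in Python (the namespace, hybrid connection name, key name, and key are placeholders):

```python
import base64
import hashlib
import hmac
import time
import urllib.parse

def create_sas_token(resource_uri, key_name, key, ttl_seconds=3600):
    """Build a Relay/Service Bus SAS token for the given resource URI."""
    expiry = str(int(time.time()) + ttl_seconds)
    encoded_uri = urllib.parse.quote_plus(resource_uri)
    # The string to sign is the URL-encoded resource URI, a newline, and the expiry.
    string_to_sign = (encoded_uri + "\n" + expiry).encode("utf-8")
    signature = base64.b64encode(
        hmac.new(key.encode("utf-8"), string_to_sign, hashlib.sha256).digest()
    )
    return "SharedAccessSignature sr={}&sig={}&se={}&skn={}".format(
        encoded_uri, urllib.parse.quote_plus(signature), expiry, key_name
    )

# Placeholder namespace and key values:
token = create_sas_token(
    "https://mynamespace.servicebus.windows.net/myhybridconnection",
    "RootManageSharedAccessKey",
    "<your-key>",
)
print(token)
```

The resulting token is passed with the request when you call the hybrid connection endpoint directly instead of through the SDK.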
azure-relay | Relay What Is It | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-relay/relay-what-is-it.md | To get started with using Hybrid Connections in Azure Relay, see the following q - [Hybrid Connections - Node WebSockets](relay-hybrid-connections-node-get-started.md) - [Hybrid Connections - .NET HTTP](relay-hybrid-connections-http-requests-dotnet-get-started.md) - [Hybrid Connections - Node HTTP](relay-hybrid-connections-http-requests-node-get-started.md)+- [Hybrid Connections - Java HTTP](relay-hybrid-connections-java-get-started.md) For more samples, see [Azure Relay - Hybrid Connections samples on GitHub](https://github.com/Azure/azure-relay/tree/master/samples/hybrid-connections). |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/custom-providers/policy-reference.md | Title: Built-in policy definitions for Azure Custom Resource Providers description: Lists Azure Policy built-in policy definitions for Azure Custom Resource Providers. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/managed-applications/policy-reference.md | Title: Built-in policy definitions for Azure Managed Applications description: Lists Azure Policy built-in policy definitions for Azure Managed Applications. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-resource-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/overview.md | Title: Azure Resource Manager overview description: Describes how to use Azure Resource Manager for deployment, management, and access control of resources on Azure. Previously updated : 09/27/2023 Last updated : 11/13/2023 # What is Azure Resource Manager? There are some important factors to consider when defining your resource group: To ensure state consistency for the resource group, all [control plane operations](./control-plane-and-data-plane.md) are routed through the resource group's location. When selecting a resource group location, we recommend that you select a location close to where your control operations originate. Typically, this location is the one closest to your current location. This routing requirement only applies to control plane operations for the resource group. It doesn't affect requests that are sent to your applications. - If a resource group's region is temporarily unavailable, you can't update resources in the resource group because the metadata is unavailable. The resources in other regions still function as expected, but you can't update them. + If a resource group's region is temporarily unavailable, you may not be able to update resources in the resource group because the metadata is unavailable. The resources in other regions will still function as expected, but you may not be able to update them. This condition may also apply to global resources like Azure DNS, Azure DNS Private Zones, Azure Traffic Manager, and Azure Front Door. You can view which types have their metadata managed by Azure Resource Manager via the [list of types for the Azure Resource Graph resources table](../../governance/resource-graph/reference/supported-tables-resources.md#resources). 
For more information about building reliable applications, see [Designing reliable Azure applications](/azure/architecture/checklist/resiliency-per-service). The Azure Resource Manager service is designed for resiliency and continuous ava This resiliency applies to services that receive requests through Resource Manager. For example, Key Vault benefits from this resiliency. +### Resource group location alignment +To reduce the likelihood of being affected by region outages, we recommend that you co-locate your resources and their resource group in the same region. +The resource group location is used to determine the location where Azure Resource Manager will store metadata related to all the resources within the resource group, which is then used for routing and caching. For instance, when you list your resources at the subscription or resource group scopes, Azure Resource Manager responds based on this cache. +When the resource group's region is unavailable, Azure Resource Manager may be unable to update your resource's metadata and may block your write calls. By co-locating your resource and resource group region, you can reduce your chance of being affected by region unavailability since your resource and resource management metadata are all stored in one region instead of multiple regions. + ## Next steps * To learn about limits that are applied across Azure services, see [Azure subscription and service limits, quotas, and constraints](azure-subscription-service-limits.md). |
azure-resource-manager | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/policy-reference.md | Title: Built-in policy definitions for Azure Resource Manager description: Lists Azure Policy built-in policy definitions for Azure Resource Manager. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-resource-manager | Resource Name Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md | description: Shows the rules and restrictions for naming Azure resources. Previously updated : 08/02/2023+ Last updated : 11/20/2023 # Naming rules and restrictions for Azure resources In the following tables, the term alphanumeric refers to: > [!div class="mx-tableFixed"] > | Entity | Scope | Length | Valid Characters | > | | | | |-> | workspaces | resource group | 3-33 | Alphanumerics and hyphens. | +> | workspaces | resource group | 3-33 | Alphanumerics and hyphens | > | workspaces / computes | workspace | 3-24 for compute instance<br>3-32 for AML compute<br>2-16 for other compute types | Alphanumerics and hyphens. |+> | workspaces / datastores | workspace | Maximum 255 characters for datastore name| Datastore name consists only of lowercase letters, digits, and underscores | ## Microsoft.ManagedIdentity |
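The datastore naming rule added in that row (lowercase letters, digits, and underscores only; at most 255 characters) is easy to check client-side before a deployment. A minimal sketch — the regex is my reading of the stated rule, not an official validator:

```python
import re

# Lowercase letters, digits, and underscores only; 1-255 characters,
# per the workspaces/datastores row in the naming table.
DATASTORE_NAME = re.compile(r"^[a-z0-9_]{1,255}$")

def is_valid_datastore_name(name):
    """Return True if `name` satisfies the documented datastore naming rule."""
    return bool(DATASTORE_NAME.match(name))

print(is_valid_datastore_name("training_data_01"))  # valid
print(is_valid_datastore_name("Training-Data"))     # invalid: uppercase, hyphen
```

A pre-flight check like this surfaces naming errors before Azure rejects the deployment.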
azure-resource-manager | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Resource Manager description: Lists Azure Policy Regulatory Compliance controls available for Azure Resource Manager. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
azure-resource-manager | Template Functions Resource | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-resource.md | The possible uses of `list*` are shown in the following table. | Microsoft.DocumentDB/databaseAccounts | [listKeys](/rest/api/cosmos-db-resource-provider/2021-11-15-preview/database-accounts/list-keys?tabs=HTTP) | | Microsoft.DocumentDB/databaseAccounts/notebookWorkspaces | [listConnectionInfo](/rest/api/cosmos-db-resource-provider/2023-03-15-preview/notebook-workspaces/list-connection-info?tabs=HTTP) | | Microsoft.DomainRegistration/topLevelDomains | [listAgreements](/rest/api/appservice/topleveldomains/listagreements) |-| Microsoft.EventGrid/domains | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/domains/list-shared-access-keys) | -| Microsoft.EventGrid/topics | [listKeys](/rest/api/eventgrid/controlplane-version2022-06-15/topics/list-shared-access-keys) | | Microsoft.EventHub/namespaces/authorizationRules | [listKeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/disasterRecoveryConfigs/authorizationRules | [listKeys](/rest/api/eventhub) | | Microsoft.EventHub/namespaces/eventhubs/authorizationRules | [listKeys](/rest/api/eventhub) | The possible uses of `list*` are shown in the following table. 
| Microsoft.Logic/workflows/versions/triggers | [listCallbackUrl](/rest/api/logic/workflowversions/listcallbackurl) | | Microsoft.MachineLearning/webServices | [listkeys](/rest/api/machinelearning/webservices/listkeys) | | Microsoft.MachineLearning/Workspaces | listworkspacekeys |-| Microsoft.MachineLearningServices/workspaces/computes | [listKeys](/rest/api/azureml/2023-04-01/compute/list-keys) | -| Microsoft.MachineLearningServices/workspaces/computes | [listNodes](/rest/api/azureml/2023-04-01/compute/list-nodes) | -| Microsoft.MachineLearningServices/workspaces | [listKeys](/rest/api/azureml/2023-04-01/workspaces/list-keys) | | Microsoft.Maps/accounts | [listKeys](/rest/api/maps-management/accounts/listkeys) | | Microsoft.Media/mediaservices/assets | [listContainerSas](/rest/api/media/assets/listcontainersas) | | Microsoft.Media/mediaservices/assets | [listStreamingLocators](/rest/api/media/assets/liststreaminglocators) | The possible uses of `list*` are shown in the following table. | Microsoft.ServiceBus/namespaces/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/namespaces-authorization-rules/list-keys) | | Microsoft.ServiceBus/namespaces/disasterRecoveryConfigs/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/disaster-recovery-configs/list-keys) | | Microsoft.ServiceBus/namespaces/queues/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/queues-authorization-rules/list-keys) |-| Microsoft.ServiceBus/namespaces/topics/authorizationRules | [listKeys](/rest/api/servicebus/controlplane-stable/topics%20ΓÇô%20authorization%20rules/list-keys) | | Microsoft.SignalRService/SignalR | [listKeys](/rest/api/signalr/signalr/listkeys) | | Microsoft.Storage/storageAccounts | [listAccountSas](/rest/api/storagerp/storageaccounts/listaccountsas) | | Microsoft.Storage/storageAccounts | [listKeys](/rest/api/storagerp/storageaccounts/listkeys) | |
azure-signalr | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/policy-reference.md | Title: Built-in policy definitions for Azure SignalR description: Lists Azure Policy built-in policy definitions for Azure SignalR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
azure-signalr | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure SignalR description: Lists Azure Policy Regulatory Compliance controls available for Azure SignalR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
azure-signalr | Signalr Concept Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-signalr/signalr-concept-performance.md | You can easily monitor your service in the Azure portal. From the **Metrics** pa The chart shows the computing pressure of your SignalR service. You can test your scenario and check this metric to decide whether to scale up. The latency inside SignalR service remains low if the Server Load is below 70%. > [!NOTE]-> If you are using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <100) or single connection, you need to check [sending to small group](#small-group) or [sending to connection](#send-to-connection) for reference. In those scenarios there is large routing cost which is not included in the Server Load. +> If you are using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <20) or single connection, you need to check [sending to small group](#small-group) or [sending to connection](#send-to-connection) for reference. In those scenarios there is a large routing cost which is not included in the Server Load. ## Term definitions Many client connections are calling the hub, so the app server number is also cr > [!NOTE] > The client connection number, message size, message sending rate, routing cost, SKU tier, and CPU/memory of the app server affect the overall performance of **send to small group**.+> +> The group count and group member count listed in the table are **not hard limits**. These parameter values are selected to establish a stable benchmark scenario. For example, it is OK to assign each connection to a distinct group. Under this configuration, the performance is close to [send to connection](#send-to-connection). ##### Big group |
azure-sql-edge | Deploy Onnx | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-sql-edge/deploy-onnx.md | This quickstart is based on **scikit-learn** and uses the [Boston Housing datase - Install Python packages needed for this quickstart: - 1. Open [New Notebook](/azure-data-studio/notebooks/sql-kernel) connected to the Python 3 Kernel. + 1. Open [New Notebook](/azure-data-studio/notebooks/notebooks-python-kernel) connected to the Python 3 Kernel. 1. Select **Manage Packages** 1. In the **Installed** tab, look for the following Python packages in the list of installed packages. If any of these packages aren't installed, select the **Add New** tab, search for the package, and select **Install**. - **scikit-learn** |
azure-web-pubsub | Concept Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/concept-performance.md | In this guide, we'll introduce the factors that affect Web PubSub upstream appli It shows the computing pressure of your Azure Web PubSub service. You can test your own scenario and check this metric to decide whether to scale up. The latency inside Azure Web PubSub service remains low if the Server Load is below 70%. > [!NOTE]-> If you are using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <100), you need to check [sending to small group](#small-group) for reference. In those scenarios there is large routing cost which is not included in the Server Load. +> If you are using unit 50 or unit 100 **and** your scenario is mainly sending to small groups (group size <20), you need to check [sending to small group](#small-group) for reference. In those scenarios there is a large routing cost which is not included in the Server Load. Below are detailed concepts for evaluating performance. ## Term definitions The routing cost is significant for sending message to many small groups. Curren | Outbound messages per second | 4,000 | 8,000 | 20,000 | 40,000 | 80,000 | 150,000 | 150,000 | | Outbound bandwidth | **8 MBps** | **16 MBps** | **40 MBps** | **80 MBps** | **160 MBps** | **300 MBps** | **300 MBps** | +> [!NOTE] +> The group count and group member count listed in the table are **not hard limits**. These parameter values are selected to establish a stable benchmark scenario. + ### Triggering Cloud Event Service delivers client events to the upstream webhook using the [CloudEvents HTTP protocol](./reference-cloud-events.md). |
azure-web-pubsub | Howto Create Serviceclient With Net And Azure Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/howto-create-serviceclient-with-net-and-azure-identity.md | This how-to guide shows you how to create a `WebPubSubServiceClient` using Micro - Install [Azure.Identity](https://www.nuget.org/packages/Azure.Identity) from nuget.org. ```bash- Install-Package Azure.Identity + dotnet add package Azure.Identity ``` - Install [Azure.Messaging.WebPubSub](https://www.nuget.org/packages/Azure.Messaging.WebPubSub) from nuget.org ```bash- Install-Package Azure.Messaging.WebPubSub + dotnet add package Azure.Messaging.WebPubSub ``` +- If you're using dependency injection, install [Microsoft.Extensions.Azure](https://www.nuget.org/packages/Microsoft.Extensions.Azure) from nuget.org ++ ```bash + dotnet add package Microsoft.Extensions.Azure + ``` + ## Sample codes 1. Create a `TokenCredential` with Azure Identity SDK. |
backup | Backup Azure Immutable Vault Concept | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-immutable-vault-concept.md | Immutable vault can help you protect your backup data by blocking any operations ## Before you start -- Immutable vault is available in all Azure public regions.+- Immutable vault is available in all Azure public and US Government regions. - Immutable vault is supported for Recovery Services vaults and Backup vaults. - Enabling Immutable vault blocks you from performing specific operations on the vault and its protected items. See the [restricted operations](#restricted-operations). - Enabling immutability for the vault is a reversible operation. However, you can choose to make it irreversible to prevent any malicious actors from disabling it (after disabling it, they can perform destructive operations). Learn about [making Immutable vault irreversible](#making-immutability-irreversible). |
backup | Backup Azure Troubleshoot Blob Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-troubleshoot-blob-backup.md | This article provides troubleshooting information to address issues you encounte **Recommendation**: Ensure that the restore point ID is correct and the restore point didn't get deleted based on the backup retention settings. For a recent recovery point, ensure that the corresponding backup job is complete. We recommend you triggering the operation again using a valid restore point. If the issue persists, contact Microsoft support. +### UserErrorContainerNotFoundForPointInTimeRestore ++**Error code**: `UserErrorContainerNotFoundForPointInTimeRestore` ++**Error message**: A container selected for the restore was not found in the storage account for the selected point in time. ++**Recommendation**: Use specific container restore or prefix match restore for containers that are present in the account. We also recommend enabling vaulted backup for your storage account to get comprehensive protection against deletion of containers. If you already have it configured, you can use a recovery point for performing recovery of deleted containers. + ### UserErrorTargetContainersExistOnAccount **Error code**: `UserErrorTargetContainersExistOnAccount` |
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | Back up managed disks after enabling a resource group lock | Not supported.<br/> Modify backup policy for a VM | Supported.<br/><br/> The VM will be backed up according to the schedule and retention settings in the new policy. If retention settings are extended, existing recovery points are marked and kept. If they're reduced, existing recovery points will be pruned in the next cleanup job and eventually deleted. Cancel a backup job| Supported during the snapshot process.<br/><br/> Not supported when the snapshot is being transferred to the vault. Back up the VM to a different region or subscription |Not supported.<br><br>For successful backup, virtual machines must be in the same subscription as the vault for backup.-Back up daily via the Azure VM extension | Four backups per day: one scheduled backup as set up in the backup policy, and three on-demand backups. <br><br> To allow user retries in case of failed attempts, the hard limit for on-demand backups is set to nine attempts. +Back up daily via the Azure VM extension | Four backups per day: one scheduled backup as set up in the backup policy, and three on-demand backups. <br><br> To allow user retries in case of failed attempts, the hard limit for on-demand backups is set to nine attempts in a 24 hour UTC period. Back up daily via the MARS agent | Three scheduled backups per day. Back up daily via DPM or MABS | Two scheduled backups per day. Back up monthly or yearly| Not supported when you're backing up with the Azure VM extension. Only daily and weekly are supported.<br/><br/> You can set up the policy to retain daily or weekly backups for a monthly or yearly retention period. Back up Azure VMs with locks | Supported for managed VMs. <br><br> Not supported Configure standalone Azure VMs in Windows Storage Spaces | Not supported. 
[Restore Virtual Machine Scale Sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-orchestration-modes.md#scale-sets-with-flexible-orchestration) | Supported for the flexible orchestration model to back up and restore a single Azure VM. Restore with managed identities | Supported for managed Azure VMs. <br><br> Not supported for classic and unmanaged Azure VMs. <br><br> Cross-region restore isn't supported with managed identities. <br><br> Currently, this is available in all Azure public and national cloud regions. <br><br> [Learn more](backup-azure-arm-restore-vms.md#restore-vms-with-managed-identities).-<a name="tvm-backup">Back up trusted launch VMs</a> | Backup is supported. <br><br> Backup of trusted launch VMs is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can enable backup through a [Recovery Services vault](./backup-azure-arm-vms-prepare.md), the [pane for managing a VM](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [pane for creating a VM](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where trusted launch VMs are available. <br><br> - Configuration of backups, alerts, and monitoring for trusted launch VMs is currently not supported through the backup center. <br><br> - Migration of an existing [Gen2 VM](../virtual-machines/generation-2.md) (protected with Azure Backup) to a trusted launch VM is currently not supported. [Learn how to create a trusted launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is not supported. +<a name="tvm-backup">Back up trusted launch VMs</a> | Backup is supported. <br><br> Backup of trusted launch VMs is supported through [Enhanced policy](backup-azure-vms-enhanced-policy.md). 
You can enable backup through a [Recovery Services vault](./backup-azure-arm-vms-prepare.md), the [pane for managing a VM](./backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [pane for creating a VM](backup-during-vm-creation.md#create-a-vm-with-backup-configured). <br><br> **Feature details** <br><br> - Backup is supported in all regions where trusted launch VMs are available. <br><br> - Configuration of backups, alerts, and monitoring for trusted launch VMs is currently not supported through the backup center. <br><br> - Migration of an existing [Gen2 VM](../virtual-machines/generation-2.md) (protected with Azure Backup) to a trusted launch VM is currently not supported. [Learn how to create a trusted launch VM](../virtual-machines/trusted-launch-portal.md?tabs=portal#deploy-a-trusted-launch-vm). <br><br> - Item-level restore is supported for the scenarios mentioned [here](backup-support-matrix-iaas.md#support-for-file-level-restore). [Back up confidential VMs](../confidential-computing/confidential-vm-overview.md) | The backup support is in limited preview. <br><br> Backup is supported only for confidential VMs that have no confidential disk encryption and for confidential VMs that have confidential OS disk encryption through a platform-managed key (PMK). <br><br> Backup is currently not supported for confidential VMs that have confidential OS disk encryption through a customer-managed key (CMK). <br><br> **Feature details** <br><br> - Backup is supported in [all regions where confidential VMs are available](../confidential-computing/confidential-vm-overview.md#regions). <br><br> - Backup is supported only if you're using [Enhanced policy](backup-azure-vms-enhanced-policy.md). You can configure backup through the [pane for creating a VM](backup-azure-arm-vms-prepare.md), the [pane for managing a VM](backup-during-vm-creation.md#start-a-backup-after-creating-the-vm), and the [Recovery Services vault](backup-azure-arm-vms-prepare.md). 
<br><br> - [Cross-region restore](backup-azure-arm-restore-vms.md#cross-region-restore) and file recovery (item-level restore) for confidential VMs are currently not supported. ## VM storage support Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. [Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.-<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - Sweden Central, Central US, North Central US, South Central US, East US, East US 2, West US 2, West Europe and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup. -<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> Supported region(s) - East US, West Europe, Central US, South Central US, East US 2, West US 2 and North Europe. <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks. 
<br><br> - GRS type vaults cannot be used for enabling backup. +<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> [Supported regions](../virtual-machines/disks-types.md#ultra-disk-limitations). <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/1GLRnNCntU). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup. +<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> [Supported regions](../virtual-machines/disks-types.md#regional-availability). <br><br> To enroll your subscription for this feature, [fill this form](https://forms.office.com/r/h56TpTc773). <br><br> - Configuration of Premium v2 disk protection is supported via Recovery Services vault only. This configuration is currently not supported via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks. <br><br> - GRS type vaults cannot be used for enabling backup. [Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS. |
backup | Backup Support Matrix Mars Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-mars-agent.md | Windows Server 2008 SP2| 1,700 GB Windows 8 or later| 54,400 GB Windows 7| 1,700 GB -### Minimum retention limits +### Retention limits -The following are the minimum retention durations that can be set for the different recovery points: +The following are the retention durations that can be set for the different recovery points: -|Recovery point |Duration | -||| -|Daily recovery point | 7 days | -|Weekly recovery point | 4 weeks | -|Monthly recovery point | 3 months | -|Yearly recovery point | 1 year | +|Recovery point |Minimum |Maximum | +|||| +|Daily recovery point | 7 days | 9999 days | +|Weekly recovery point | 4 weeks | 5163 weeks | +|Monthly recovery point | 3 months | 1188 months | +|Yearly recovery point | 1 year | 99 years | ### Other limitations |
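The retention limits in the change above can be sketched as a small validation helper. This is a hypothetical illustration only (the function and constant names are not part of the MARS agent or any Azure SDK), assuming the documented minimum/maximum values per recovery-point type:

```python
# Hypothetical sketch: validate a retention setting against the documented
# minimum/maximum for each MARS recovery-point type. Not an Azure API.

RETENTION_LIMITS = {
    "daily": (7, 9999),     # days
    "weekly": (4, 5163),    # weeks
    "monthly": (3, 1188),   # months
    "yearly": (1, 99),      # years
}

def retention_is_valid(recovery_point: str, duration: int) -> bool:
    """Return True if `duration` falls within the documented limits."""
    low, high = RETENTION_LIMITS[recovery_point]
    return low <= duration <= high
```

For example, a daily retention of 7 days is the allowed minimum, while a weekly retention of 3 weeks falls below the 4-week floor and would be rejected.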
backup | Encryption At Rest With Cmk For Backup Vault | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/encryption-at-rest-with-cmk-for-backup-vault.md | Title: Encryption of backup data in the Backup vault using customer-managed keys description: Learn how Azure Backup allows you to encrypt your backup data using customer-managed keys (CMK) in a Backup vault. Last updated 11/20/2023-+ |
backup | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/policy-reference.md | Title: Built-in policy definitions for Azure Backup description: Lists Azure Policy built-in policy definitions for Azure Backup. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
backup | Quick Sap Hana Database Instance Restore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/quick-sap-hana-database-instance-restore.md | description: In this quickstart, learn how to restore the entire SAP HANA system ms.devlang: azurecli Last updated 11/02/2023-+ |
backup | Restore Sql Database Azure Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/restore-sql-database-azure-vm.md | The secondary region restore user experience will be similar to the primary region >[!NOTE] >- After the restore is triggered and in the data transfer phase, the restore job can't be cancelled.->- The role/access level required to perform restore operation in cross-regions are _Backup Operator_ role in the subscription and _Contributor(write)_ access on the source and target virtual machines. To view backup jobs, _ Backup reader_ is the minimum premission required in the subscription. +>- The roles/access levels required to perform a cross-region restore operation are the _Backup Operator_ role in the subscription and _Contributor (write)_ access on the source and target virtual machines. To view backup jobs, _Backup reader_ is the minimum permission required in the subscription. +>- The RPO for the backup data to be available in secondary region is 12 hours. Therefore, when you turn on CRR, the RPO for the secondary region is 12 hours + log frequency duration (that can be set to a minimum of 15 minutes). ### Monitoring secondary region restore jobs |
backup | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Backup description: Lists Azure Policy Regulatory Compliance controls available for Azure Backup. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
backup | Tutorial Configure Sap Hana Database Instance Snapshot Backup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/tutorial-configure-sap-hana-database-instance-snapshot-backup.md | Title: Tutorial - Configure SAP HANA database instance snapshot backup description: In this tutorial, learn how to configure the SAP HANA database instance snapshot backup and run an on-demand backup. Last updated 11/02/2023-+ For more information on the supported scenarios, see the [support matrix](./sap- ## Next steps - [Learn how to restore an SAP HANA database instance snapshot in Azure VM](sap-hana-database-instances-restore.md).-- [Troubleshoot common issues with SAP HANA database backups](backup-azure-sap-hana-database-troubleshoot.md).+- [Troubleshoot common issues with SAP HANA database backups](backup-azure-sap-hana-database-troubleshoot.md). |
batch | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/policy-reference.md | Title: Built-in policy definitions for Azure Batch description: Lists Azure Policy built-in policy definitions for Azure Batch. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
batch | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Batch description: Lists Azure Policy Regulatory Compliance controls available for Azure Batch. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
communication-services | Notifications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/notifications.md | You can connect an Azure Notification Hub to your Communication Services resourc Communication Services uses Azure Notification Hub as a pass-through service to communicate with the various platform-specific push notification services using the [Direct Send](/rest/api/notificationhubs/direct-send) API. This allows you to reuse your existing Azure Notification Hub resources and configurations to deliver low latency, reliable notifications to your applications. > [!NOTE]-> Currently calling push notifications are supported for both Android and iOS. Chat push notifications are only supported for Android SDK in version 1.1.0-beta.4. +> Currently calling and chat push notifications are supported for both Android and iOS. ### Notification Hub provisioning |
communication-services | Number Lookup Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/numbers/number-lookup-sdk.md | The following list presents the set of features which are currently available in | -- | - | | - | - | | | Core Capabilities | Get Number Type | ✔️ | ✔️ | ✔️ | ✔️ | | | Get Carrier registered name | ✔️ | ✔️ | ✔️ | ✔️ |-| | Get associated Mobile Network Code, if available(two or three decimal digits used to identify network operator within a country) | ✔️ | ✔️ | ✔️ | ✔️ | -| | Get associated Mobile Country Code, if available(three decimal digits used to identify the country of a mobile operator) | ✔️ | ✔️ | ✔️ | ✔️ | +| | Get associated Mobile Network Code, if available (two or three decimal digits used to identify network operator within a country) | ✔️ | ✔️ | ✔️ | ✔️ | +| | Get associated Mobile Country Code, if available (three decimal digits used to identify the country of a mobile operator) | ✔️ | ✔️ | ✔️ | ✔️ | | | Get associated ISO Country Code | ✔️ | ✔️ | ✔️ | ✔️ | | Phone Number | All number types in E164 format | ✔️ | ✔️ | ✔️ | ✔️ | |
communication-services | Privacy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/privacy.md | Use [Chat APIs](/rest/api/communication/chat/chatthread) to get, list, update, a - `Delete Thread` - `Delete Message` +For customers that use Virtual appointments, refer to our Teams Interoperability [user privacy](interop/guest/privacy.md#chat-storage) for storage of chat messages in Teams meetings. + ### SMS Sent and received SMS messages are ephemerally processed by the service and not retained. |
communication-services | Matching Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/router/matching-concepts.md | var worker = await client.CreateWorkerAsync(new CreateWorkerOptions(workerId: "w ::: zone pivot="programming-language-javascript" ```typescript-const worker = await client.path("/routing/workers/{workerId}", "worker-1").patch({ +let worker = await client.path("/routing/workers/{workerId}", "worker-1").patch({ body: { availableForOffers: true, capacity: 2, If a worker would like to stop receiving offers, it can be deregistered by setti ```csharp worker.AvailableForOffers = false;-await client.UpdateWorkerAsync(worker); +worker = await client.UpdateWorkerAsync(worker); ``` ::: zone-end await client.UpdateWorkerAsync(worker); ::: zone pivot="programming-language-javascript" ```typescript-await client.path("/routing/workers/{workerId}", "worker-1").patch({ +worker = await client.path("/routing/workers/{workerId}", worker.body.id).patch({ body: { availableForOffers: false }, contentType: "application/merge-patch+json" }); await client.path("/routing/workers/{workerId}", "worker-1").patch({ ::: zone pivot="programming-language-python" ```python-client.upsert_worker(worker_id = "worker-1", available_for_offers = False) +worker = client.upsert_worker(worker_id = worker.id, available_for_offers = False) ``` ::: zone-end client.upsert_worker(worker_id = "worker-1", available_for_offers = False) ::: zone pivot="programming-language-java" ```java-client.updateWorkerWithResponse("worker-1", worker.setAvailableForOffers(false)); +worker = client.updateWorkerWithResponse(worker.getId(), worker.setAvailableForOffers(false)); ``` ::: zone-end |
communication-services | Teams Interop Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/teams-interop-call-automation.md | call_connection_client.transfer_call_to_participant(target_participant = Microso -- -### How to tell if your Tenant isn't enabled for this preview? -![Screenshot showing the error during Step 1.](./media/teams-federation-error.png) - ## Clean up resources If you want to clean up and remove a Communication Services subscription, you can delete the resource or resource group. Deleting the resource group also deletes any other resources associated with it. Learn more about [cleaning up resources](../../quickstarts/create-communication-resource.md#clean-up-resources). |
communication-services | Send Email Smtp | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/email/send-email-smtp/send-email-smtp.md | In this quickstart, you learn how to send email using SMTP. ::: zone pivot="smtp-method-powershell" [!INCLUDE [Send a message with SMTP and Windows PowerShell](./includes/send-email-smtp-powershell.md)] |
communication-services | Contact Center | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/tutorials/contact-center.md | This overview describes concepts for **contact center** applications. After comp Contact center applications are focused on unscheduled communication between **consumers** and **agents**. The **organizational boundary** between consumers and agents, and the **unscheduled** nature of the interaction, are key attributes of contact center applications. -This article focuses on *inbound* engagement, where the consumer initiates communication. Developers interested in scheduled business-to-consumer interactions should read our [Virtual Visits](/azure/communication-services/tutorials/virtual-visits) tutorial. Many businesses also have *outbound* communication needs, for which we recommend the outbound [customer engagement](/learn.microsoft.com/dynamics365/customer-insights/journeys/portal-optional) tutorial. +This article focuses on *inbound* engagement, where the consumer initiates communication. Developers interested in scheduled business-to-consumer interactions should read our [Virtual Visits](/azure/communication-services/tutorials/virtual-visits) tutorial. 
The term “contact center” captures a large family of applications diverse across scale, channels, and organizational approach: The following list presents the set of features that are currently available for - [Quickstart: Join your calling app to a Teams call queue](/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-call-queue) - [Quickstart - Teams Auto Attendant on Azure Communication Services](/azure/communication-services/quickstarts/voice-video-calling/get-started-teams-auto-attendant)-- [Get started with a click to call experience using Azure Communication Services - An Azure Communication Services tutorial](/azure/communication-services/tutorials/calling-widget/calling-widget-overview) ## Extend your contact center voice solution to Teams users |
communications-gateway | Connect Operator Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md | -After you have deployed Azure Communications Gateway and connected it to your core network, you need to connect it to Microsoft Phone System. You also need to onboard to the Operator Connect or Teams Phone Mobile environments. +After you deploy Azure Communications Gateway and connect it to your core network, you need to connect it to Microsoft Phone System. You also need to onboard to the Operator Connect or Teams Phone Mobile environments. -This article describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile. When you have finished the steps in this article, you will be ready to [Prepare for live traffic](prepare-for-live-traffic-operator-connect.md) with Operator Connect, Teams Phone Mobile and Azure Communications Gateway. +This article describes how to set up Azure Communications Gateway for Operator Connect and Teams Phone Mobile. After you finish the steps in this article, you can [prepare for live traffic](prepare-for-live-traffic-operator-connect.md) with Operator Connect, Teams Phone Mobile and Azure Communications Gateway. > [!TIP] > This article assumes that your Azure Communications Gateway onboarding team from Microsoft is also onboarding you to Operator Connect and/or Teams Phone Mobile. If you've chosen a different onboarding partner for Operator Connect or Teams Phone Mobile, you need to ask them to arrange changes to the Operator Connect and/or Teams Phone Mobile environments. ## Prerequisites -You must have carried out all the steps in [Deploy Azure Communications Gateway](deploy.md). +You must [deploy Azure Communications Gateway](deploy.md). You must have access to a user account with the Microsoft Entra Global Administrator role. 
+You must allocate six "service verification" test numbers for each of Operator Connect and Teams Phone Mobile. These numbers are used by the Operator Connect and Teams Phone Mobile programs for continuous call testing. +- If you selected the service you're setting up as part of deploying Azure Communications Gateway, you've allocated numbers for the service already. +- Otherwise, choose the phone numbers now (in E.164 format and including the country code) and names to identify them. We recommend names of the form OC1 and OC2 (for Operator Connect) and TPM1 and TPM2 (for Teams Phone Mobile). ++You must also allocate at least one test number for each service for integration testing. ++If you want to set up Teams Phone Mobile and you didn't select it when you deployed Azure Communications Gateway, choose: +- The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers. +- How you plan to route Teams Phone Mobile calls to Microsoft Phone System. Choose from: + - Integrated MCP (MCP in Azure Communications Gateway). + - On-premises MCP. + - Another method to route calls. ++## Enable Operator Connect or Teams Phone Mobile support ++> [!NOTE] +> If you selected Operator Connect or Teams Phone Mobile when you [deployed Azure Communications Gateway](deploy.md), skip this step and go to [Add the Project Synergy application to your Azure tenancy](#add-the-project-synergy-application-to-your-azure-tenancy). ++1. Sign in to the [Azure portal](https://azure.microsoft.com/). +1. In the search bar at the top of the page, search for your Communications Gateway resource and select it. +1. In the side menu bar, find **Communications services** and select **Operator Connect** or **Teams Phone Mobile** (as appropriate) to open a page for the service. +1. On the service's page, select **Operator Connect settings** or **Teams Phone Mobile settings**. +1. Fill in the fields, selecting **Review + create** and **Create**. +1. 
Select the **Overview** page for your resource. +1. Select **Add test lines** and add the service verification lines you chose in [Prerequisites](#prerequisites). Set the **Testing purpose** to **Automated**. + > [!IMPORTANT] + > Do not add the numbers for integration testing. You will configure numbers for integration testing when you [carry out integration testing and prepare for live traffic](prepare-for-live-traffic-operator-connect.md). +1. Wait for your resource to be updated. When your resource is ready, the **Provisioning Status** field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field is "Complete." This step might take up to two weeks. + ## Add the Project Synergy application to your Azure tenancy +Before starting this step, check that the **Provisioning Status** field for your resource is "Complete". + > [!NOTE] >This step and the next step ([Assign an Admin user to the Project Synergy application](#assign-an-admin-user-to-the-project-synergy-application)) set you up as an Operator in the Teams Phone Mobile (TPM) and Operator Connect (OC) environments. If you've already gone through onboarding, go to [Find the Object ID and Application ID for your Azure Communication Gateway resource](#find-the-object-id-and-application-id-for-your-azure-communication-gateway-resource). |
communications-gateway | Connect Teams Direct Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-teams-direct-routing.md | -After you have deployed Azure Communications Gateway and connected it to your core network, you need to connect it to Microsoft Phone System. +After you deploy Azure Communications Gateway and connect it to your core network, you need to connect it to Microsoft Phone System. -This article describes how to start setting up Azure Communications Gateway for Microsoft Teams Direct Routing. When you have finished the steps in this article, you can set up test users for test calls and prepare for live traffic. +This article describes how to start connecting Azure Communications Gateway to Microsoft Teams Direct Routing. After you finish the steps in this article, you can set up test users for test calls and prepare for live traffic. ## Prerequisites -You must have carried out all the steps in [Deploy Azure Communications Gateway](deploy.md). +You must [deploy Azure Communications Gateway](deploy.md). -Your organization must have integrated with Azure Communications Gateway's Provisioning API. +Your organization must [integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md). If you didn't configure the Provisioning API in the Azure portal as part of deploying, you also need to know: +- The IP addresses or address ranges (in CIDR format) in your network that should be allowed to connect to the Provisioning API, as a comma-separated list. +- (Optional) The name of any custom SIP header that Azure Communications Gateway should add to messages entering your network. You must have **Reader** access to the subscription into which Azure Communications Gateway is deployed. You must be able to sign in to the Microsoft 365 admin center for your tenant as a Global Administrator. 
+## Enable Microsoft Teams Direct Routing support ++> [!NOTE] +> If you selected Microsoft Teams Direct Routing when you [deployed Azure Communications Gateway](deploy.md), skip this step and go to [Find your Azure Communication Gateway's domain names](#find-your-azure-communication-gateways-domain-names). ++1. Sign in to the [Azure portal](https://azure.microsoft.com/). +1. In the search bar at the top of the page, search for your Communications Gateway resource and select it. +1. In the side menu bar, find **Communications services** and select **Teams Direct Routing** to open a page for the service. +1. On the service's page, select **Teams Direct Routing settings**. +1. Fill in the fields, selecting **Review + create** and **Create**. +1. Select the **Overview** page for your resource. +1. Wait for your resource to be updated. When your resource is ready, the **Provisioning Status** field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field is "Complete." This step might take up to two weeks. + ## Find your Azure Communication Gateway's domain names +Before starting this step, check that the **Provisioning Status** field for your resource is "Complete". + Microsoft Teams only sends traffic to domains that you've confirmed that you own. Your Azure Communications Gateway deployment automatically receives an autogenerated fully qualified domain name (FQDN) and regional subdomains of this domain. 1. Sign in to the [Azure portal](https://azure.microsoft.com/). |
communications-gateway | Connect Zoom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-zoom.md | -After you have deployed Azure Communications Gateway and connected it to your core network, you need to connect it to Zoom. +After you deploy Azure Communications Gateway and connect it to your core network, you need to connect it to Zoom. -This article describes how to start setting up Azure Communications Gateway for Zoom Phone Cloud Peering. When you have finished the steps in this article, you can set up test users for test calls and prepare for live traffic. +This article describes how to start connecting Azure Communications Gateway to Zoom Phone Cloud Peering. After you finish the steps in this article, you can set up test users for test calls and prepare for live traffic. ## Prerequisites -You must have started the onboarding process with Zoom to become a Zoom Phone Cloud Peering provider. For more information on Cloud Peering, see [Zoom's Cloud Peering information](https://partner.zoom.us/partner-type/cloud-peering/). +You must start the onboarding process with Zoom to become a Zoom Phone Cloud Peering provider. For more information on Cloud Peering, see [Zoom's Cloud Peering information](https://partner.zoom.us/partner-type/cloud-peering/). -You must have carried out all the steps in [Deploy Azure Communications Gateway](deploy.md). +You must [deploy Azure Communications Gateway](deploy.md). -Your organization must have integrated with Azure Communications Gateway's Provisioning API. +Your organization must [integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md). If you didn't configure the Provisioning API in the Azure portal as part of deploying, you also need to know: +- The IP addresses or address ranges (in CIDR format) in your network that should be allowed to connect to the Provisioning API, as a comma-separated list. 
+- (Optional) The name of any custom SIP header that Azure Communications Gateway should add to messages entering your network. ++You must allocate "service verification" test numbers for Zoom. Zoom uses these numbers for continuous call testing. +- If you selected the service you're setting up as part of deploying Azure Communications Gateway, you've allocated numbers for the service already. +- Otherwise, choose the phone numbers now (in E.164 format and including the country code). You need six numbers for the US and Canada or two numbers for the rest of the world. ++You must also allocate at least one test number for each service for your own integration testing. ++You must know which Zoom Phone Cloud Peering region you need to connect to. You must have **Reader** access to the subscription into which Azure Communications Gateway is deployed. You must be able to contact your Zoom representative. +## Enable Zoom Phone Cloud Peering support ++> [!NOTE] +> If you selected Zoom Phone Cloud Peering when you [deployed Azure Communications Gateway](deploy.md), skip this step and go to [Ask your onboarding team for the FQDNs and IP addresses for Azure Communications Gateway](#ask-your-onboarding-team-for-the-fqdns-and-ip-addresses-for-azure-communications-gateway). ++1. Sign in to the [Azure portal](https://azure.microsoft.com/). +1. In the search bar at the top of the page, search for your Communications Gateway resource and select it. +1. In the side menu bar, find **Communications services** and select **Zoom Phone Cloud Peering** to open a page for the service. +1. On the service's page, select **Zoom Phone Cloud Peering settings**. +1. Fill in the fields, then select **Review + create** and **Create**. + > [!IMPORTANT] + > Do not add the numbers for your own integration testing when you configure test numbers. You will configure numbers for integration testing when you [configure test numbers](configure-test-numbers-zoom.md). +1. 
Select the **Overview** page for your resource. +1. Wait for your resource to be updated. When your resource is ready, the **Provisioning Status** field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field is "Complete." This step might take up to two weeks. + ## Ask your onboarding team for the FQDNs and IP addresses for Azure Communications Gateway +Before starting this step, check that the **Provisioning Status** field for your resource is "Complete". + Ask your onboarding team for: - All the IP addresses that Azure Communications Gateway could use to send signaling and media to Zoom. |
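The Provisioning API prerequisites above ask for a comma-separated list of IP addresses or address ranges in CIDR format. A minimal sketch for sanity-checking such a list before submitting it — the regex checks shape only (dotted quad plus optional /0–32 prefix), not octet ranges, and the addresses are illustrative:

```shell
#!/bin/sh
# Sketch: shape-check a comma-separated allowlist of IPv4 addresses/CIDR ranges.
allowlist="192.0.2.0/24,198.51.100.7,203.0.113.0/27"

valid=true
# Split on commas and test each entry against a rough CIDR pattern.
for entry in $(printf '%s' "$allowlist" | tr ',' ' '); do
  if ! printf '%s' "$entry" | grep -Eq '^([0-9]{1,3}\.){3}[0-9]{1,3}(/([0-9]|[12][0-9]|3[0-2]))?$'; then
    echo "Invalid entry: $entry"
    valid=false
  fi
done
if [ "$valid" = "true" ]; then echo "Allowlist looks well-formed"; fi
```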
communications-gateway | Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/deploy.md | For Operator Connect and Teams Phone Mobile: |**Value**|**Field name(s) in Azure portal**| |||-|A name for the test line. |**Name**| +|A name for the test line. We recommend names of the form OC1 and OC2 (for Operator Connect) and TPM1 and TPM2 (for Teams Phone Mobile). |**Name**| |The phone number for the test line, in E.164 format and including the country code. |**Phone Number**| |The purpose of the test line (always **Automated**).|**Testing purpose**| Once your resource has been provisioned, a message appears saying **Your deploym ## Wait for provisioning to complete -Wait for your resource to be provisioned and connected. When your resource is ready, your onboarding team contacts you and the Provisioning Status field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field has changed. This step might take up to two weeks. +Wait for your resource to be provisioned. When your resource is ready, the **Provisioning Status** field on the resource overview changes to "Complete." We recommend that you check in periodically to see if the Provisioning Status field is "Complete." This step might take up to two weeks. ## Connect Azure Communications Gateway to your networks |
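The test-line and test-number fields above require E.164 format including the country code (a leading `+`, a country code starting 1–9, at most 15 digits total). A quick sketch for checking candidate numbers before entering them — the numbers shown are illustrative placeholders:

```shell
#!/bin/sh
# Sketch: check that candidate test numbers are in E.164 format.
is_e164() {
  printf '%s' "$1" | grep -Eq '^\+[1-9][0-9]{1,14}$'
}

for number in "+12025550123" "+442079460000"; do
  if is_e164 "$number"; then
    echo "$number: ok"
  else
    echo "$number: not E.164"
  fi
done
```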
communications-gateway | Prepare For Live Traffic Operator Connect | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-operator-connect.md | In this article, you learn about the steps that you and your onboarding team mus ## Prerequisites -- You must have [deployed Azure Communications Gateway](deploy.md) using the Microsoft Azure portal and [connected it to Operator Connect or Teams Phone Mobile](connect-operator-connect.md).-- You must have [chosen some test numbers](deploy.md#prerequisites).-- You must have a tenant you can use for testing (representing an enterprise customer), and some users in that tenant to whom you can assign the test numbers.- - If you do not already have a suitable test tenant, you can use the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program), which provides E5 licenses. +- You must [deploy Azure Communications Gateway](deploy.md) using the Microsoft Azure portal and [connect it to Operator Connect or Teams Phone Mobile](connect-operator-connect.md). +- You must know the test numbers to use for integration testing and for service verification (continuous call testing). These numbers can't be the same. You chose them as part of [deploying Azure Communications Gateway](deploy.md#prerequisites) or [connecting it to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#prerequisites). + - Integration testing allows you to confirm that Azure Communications Gateway and Microsoft Phone System are interoperating correctly with your network. + - Service verification is set up by the Operator Connect and Teams Phone Mobile programs and ensures that your deployment is able to handle calls from Microsoft Phone System throughout the lifetime of your deployment. 
+- You must have a tenant you can use for integration testing (representing an enterprise customer), and some users in that tenant to whom you can assign the numbers for integration testing. + - If you don't already have a suitable test tenant, you can use the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program), which provides E5 licenses. - The test users must be licensed for Teams Phone System and in Teams Only mode. - You must have access to the following configuration portals. In this article, you learn about the steps that you and your onboarding team mus |[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [you connected to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#add-the-project-synergy-application-to-your-azure-tenancy))| |[Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant |User management| - ## Methods In some parts of this article, the steps you must take depend on whether your deployment includes the Number Management Portal. This article provides instructions for both types of deployment. Choose the appropriate instructions. Your onboarding team must register the test enterprise tenant that you chose in - The ID of the tenant to use for testing. 1. Wait for your onboarding team to confirm that your test tenant has been registered. -## Assign numbers to test users in your tenant +## Set up your test tenant ++Integration testing requires setting up your test tenant for Operator Connect or Teams Phone Mobile and configuring users in this tenant with the numbers you chose for integration testing. ++> [!IMPORTANT] +> Do not assign the service verification numbers to test users. Your onboarding team arranges configuration of your service verification numbers. 1. 
Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `commsgw`. This Calling Profile has been created for you during the Azure Communications Gateway deployment process. 1. In your test tenant, request service from your company. Your onboarding team must register the test enterprise tenant that you chose in 1. Assign the number to a user. 1. Repeat for all your test users. +## Update your network's routing configuration ++Your network must route calls for service verification testing and for integration testing to Azure Communications Gateway. ++1. Route all calls from any service verification number to any other service verification number back to Microsoft Phone System through Azure Communications Gateway. +2. Route calls involving the test numbers for integration testing in the same way that you expect to route customer calls. + ## Carry out integration testing and request changes Network integration includes identifying SIP interoperability requirements and configuring devices to meet these requirements. For example, this process often includes interworking header formats and/or the signaling & media flows used for call hold and session refresh. |
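The routing update above distinguishes two classes of numbers: service verification numbers, which always route back to Microsoft Phone System through Azure Communications Gateway, and integration-test numbers, which route like ordinary customer traffic. A toy sketch of that classification — the number sets and route names are placeholders, not real configuration:

```shell
#!/bin/sh
# Sketch: pick a route for a call based on the class of the dialed number.
SERVICE_VERIFICATION="+12025550100 +12025550101"
INTEGRATION_TEST="+12025550199"

route_for() {
  dialed=$1
  case " $SERVICE_VERIFICATION " in
    *" $dialed "*) echo "route-to-acg-microsoft-phone-system"; return ;;
  esac
  case " $INTEGRATION_TEST " in
    *" $dialed "*) echo "route-like-customer-traffic"; return ;;
  esac
  echo "normal-routing"
}

route_for "+12025550100"   # a service verification number
route_for "+12025550199"   # an integration-test number
```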
container-apps | Add Ons Qdrant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/add-ons-qdrant.md | description: Learn to use the Container Apps Qdrant vector database add-on. -- - ignite-2023 + Last updated 11/02/2023 |
container-apps | Dapr Component Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/dapr-component-resiliency.md | - - ignite-fall-2023 - - ignite-2023 + # Customer Intent: As a developer, I'd like to learn how to make my container apps resilient using Azure Container Apps. |
container-apps | Deploy Artifact | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/deploy-artifact.md | |
container-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md | Title: Built-in policy definitions for Azure Container Apps description: Lists Azure Policy built-in policy definitions for Azure Container Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
container-apps | Service Discovery Resiliency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/service-discovery-resiliency.md | - - ignite-fall-2023 - - ignite-2023 + # Customer Intent: As a developer, I'd like to learn how to make my container apps resilient using Azure Container Apps. |
container-instances | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/policy-reference.md | |
container-registry | Container Registry Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-concepts.md | Repository names can also include [namespaces](container-registry-best-practices Repository names can only include lowercase alphanumeric characters, periods, dashes, underscores, and forward slashes. -For complete repository naming rules, see the [Open Container Initiative Distribution Specification](https://github.com/docker/distribution/blob/master/docs/spec/api.md#overview). - ## Artifact A container image or other artifact within a registry is associated with one or more tags, has one or more layers, and is identified by a manifest. Understanding how these components relate to each other can help you manage your registry effectively. |
container-registry | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/policy-reference.md | Title: Built-in policy definitions for Azure Container Registry description: Lists Azure Policy built-in policy definitions for Azure Container Registry. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
container-registry | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/security-controls-policy.md | description: Lists Azure Policy Regulatory Compliance controls available for Azu Previously updated : 11/06/2023 Last updated : 11/21/2023 |
container-registry | Tutorial Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-cache.md | Artifact Cache currently supports the following upstream registries: | Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI | | Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | | registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |+|Google Container Registry|Supports both authenticated pulls and unauthenticated pulls.|Azure CLI| ## Wildcards The addition of the new cache rule is allowed because `contoso.azurecr.io/librar <!-- LINKS - External --> -[docker-rate-limit]:https://aka.ms/docker-rate-limit +[docker-rate-limit]:https://aka.ms/docker-rate-limit + |
container-registry | Tutorial Artifact Streaming Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-artifact-streaming-cli.md | Title: "Enable Artifact Streaming- Azure CLI" description: "Enable Artifact Streaming in Azure Container Registry using Azure CLI commands to enhance and supercharge managing, scaling, and deploying artifacts through containerized platforms." + Last updated 10/31/2023- # Artifact Streaming - Azure CLI |
container-registry | Tutorial Troubleshoot Artifact Cache | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/tutorial-troubleshoot-artifact-cache.md | We recommend deleting any unwanted cache rules to avoid hitting the limit. Learn more about the [Cache Terminology](tutorial-artifact-cache.md#terminology) + ## Unable to create cache rule using a wildcard If you're trying to create a cache rule, but there's a conflict with an existing rule. The error message suggests that there's already a cache rule with a wildcard for the specified target repository. To resolve this issue, you need to follow these steps: 1. Double-check your cache configuration to ensure that the new rule is correctly applied and there are no other conflicting rules. - ## Upstream support Artifact Cache currently supports the following upstream registries: Artifact Cache currently supports the following upstream registries: | Nvidia | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI | | Quay | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI, Azure portal | | registry.k8s.io | Supports both authenticated pulls and unauthenticated pulls. | Azure CLI |+|Google Container Registry|Supports both authenticated pulls and unauthenticated pulls.|Azure CLI| <!-- LINKS - External --> [create-and-store-keyvault-credentials]:../key-vault/secrets/quick-create-portal.md-[az-keyvault-set-policy]: ../key-vault/general/assign-access-policy.md#assign-an-access-policy ++[az-keyvault-set-policy]: ../key-vault/general/assign-access-policy.md#assign-an-access-policy + |
copilot | Analyze Cost Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/analyze-cost-management.md | When you ask Microsoft Copilot for Azure (preview) questions about your costs, i [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use for Cost Management. Modify these prompts based on your real-life scenarios, or try additional prompts to meet your needs. |
copilot | Author Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/author-api-management-policies.md | When you're working with API Management policies, you can also select a portion [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use to get help authoring API Management policies. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of policies. |
copilot | Build Infrastructure Deploy Workloads | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/build-infrastructure-deploy-workloads.md | Once you're there, start the conversation by letting Microsoft Copilot for Azure [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts The prompts you use can vary depending on the type of workload you want to deploy, and the stage of the conversation that you're in. Here are a few examples of the kinds of prompts you might use. Modify these prompts based on your real-life scenarios, or try additional prompts as the conversation continues. |
copilot | Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/capabilities.md | |
copilot | Deploy Vms Effectively | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/deploy-vms-effectively.md | While it can be helpful to have some familiarity with different VM configuration [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Create cost-efficient VMs Microsoft Copilot for Azure (preview) can guide you in suggesting different options to save costs as you deploy a virtual machine. If you're new to creating VMs, Microsoft Copilot for Azure (preview) can help you understand the best ways to reduce costs More experienced users can confirm the best ways to make sure VMs align with both use cases and budget needs, or find ways to make a specific VM size more cost-effective by enabling certain features that might help lower overall cost. |
copilot | Generate Cli Scripts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-cli-scripts.md | description: Learn about scenarios where Microsoft Copilot for Azure (preview) c Last updated 11/15/2023 -- - ignite-2023 - - ignite-2023-copilotinAzure + When you tell Microsoft Copilot for Azure (preview) about a task you want to per [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use to generate Azure CLI scripts. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries. |
copilot | Generate Kubernetes Yaml | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/generate-kubernetes-yaml.md | You provide your application specifications, such as container images, resource [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use to generate Kubernetes YAML files. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of Kubernetes YAML files. |
copilot | Get Information Resource Graph | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-information-resource-graph.md | While a high level of accuracy is typical, we strongly advise you to review the [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use to generate Azure Resource Graph queries. Modify these prompts based on your real-life scenarios, or try additional prompts to create different kinds of queries. |
copilot | Get Monitoring Information | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-monitoring-information.md | When you ask Microsoft Copilot for Azure (preview) about logs, it automatically [!INCLUDE [scenario-note](includes/scenario-note.md)] + ### Sample prompts Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor logs. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. |
copilot | Improve Storage Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/improve-storage-accounts.md | When you ask Microsoft Copilot for Azure (preview) about improving security acco [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use to improve and protect your storage accounts. Modify these prompts based on your real-life scenarios, or try additional prompts to get advice on specific areas. |
copilot | Limited Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/limited-access.md | As part of Microsoft's commitment to responsible AI, we are currently limiting t ## Registration process -Microsoft Copilot for Azure requires registration (preview) and is currently only available to approved enterprise customers and partners. Customers who wish to use Microsoft Copilot for Azure (preview) are required to submit a [registration form](https://aka.ms/MSCopilotforAzurePreview). +Microsoft Copilot for Azure (preview) requires registration and is currently only available to approved enterprise customers and partners. Customers who wish to use Microsoft Copilot for Azure (preview) are required to submit a [registration form](https://aka.ms/MSCopilotforAzurePreview). Access to Microsoft Copilot for Azure (preview) is subject to Microsoft's sole discretion based on eligibility criteria and a vetting process, and customers must acknowledge that they have read and understand the Azure terms of service for Microsoft Copilot for Azure (preview). |
copilot | Optimize Code Application Insights | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/optimize-code-application-insights.md | Title: Discover performance recommendations with Code Optimizations using Microsoft Copilot for Azure (preview) description: Learn about scenarios where Microsoft Copilot for Azure (preview) can use Application Insight Code Optimizations to help optimize your apps. Previously updated : 11/15/2023 Last updated : 11/20/2023 When you ask Microsoft Copilot for Azure (preview) to provide these recommendati [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use with Code Optimizations. Modify these prompts based on your real-life scenarios, or try additional prompts about specific areas for optimization. Here are a few examples of the kinds of prompts you can use with Code Optimizati ## Examples -In this example, Microsoft Copilot for Azure (preview) responds to the prompt, "Show my code performance recommendations." The response notes that there are 18 recommendations, providing the option to view either the top recommendation or all recommendations at once. +In this example, Microsoft Copilot for Azure (preview) responds to the prompt, "Any code performance optimizations?" The response notes that there are 6 recommendations, providing the option to view either the top recommendation or all recommendations at once. :::image type="content" source="media/optimize-code-application-insights/code-optimizations-show-recommendations.png" lightbox="media/optimize-code-application-insights/code-optimizations-show-recommendations.png" alt-text="Screenshot of Microsoft Copilot for Azure responding to a question about code optimizations."::: |
copilot | Understand Service Health | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/understand-service-health.md | You can ask Microsoft Copilot for Azure (preview) questions to get information f [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use to get service health information. Modify these prompts based on your real-life scenarios, or try additional prompts about specific service health events. |
copilot | Work Smarter Edge | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/work-smarter-edge.md | When you ask Microsoft Copilot for Azure (preview) for information about the sta [!INCLUDE [scenario-note](includes/scenario-note.md)] + ## Sample prompts Here are a few examples of the kinds of prompts you can use to work with your Azure Stack HCI clusters. Modify these prompts based on your real-life scenarios, or try additional prompts to get different types of information. |
cosmos-db | Container Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/container-copy.md | You might need to copy data within your Azure Cosmos DB account if you want to a * Update the [unique keys](unique-keys.md) for a container. * Rename a container or database. * Change capacity mode of an account from serverless to provisioned or vice-versa.-* Adopt new features that are supported only for new containers. +* Adopt new features that are supported only for new containers, for example [Hierarchical partition keys](hierarchical-partition-keys.md). Container copy jobs can be [created and managed by using Azure CLI commands](how-to-container-copy.md). Currently, container copy is supported in the following regions: | East US 2 | Norway West | Southeast Asia | | East US 2 EUAP | Switzerland North | UAE Central | | North Central US | Switzerland West | West India |-| South Central US | UK South | Not supported | -| West Central US | UK West | Not supported | -| West US | West Europe | Not supported | -| West US 2 | Not supported | Not supported | +| South Central US | UK South | East Asia | +| West Central US | UK West | Malaysia South | +| West US | West Europe | Japan West | +| West US 2 | Israel Central | Australia Southeast | +| Not supported | South Africa North | Not supported | + ## Known and common issues |
cosmos-db | How To Container Copy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-container-copy.md | Create a job to copy a container within an Azure Cosmos DB API for NoSQL account az cosmosdb copy create ` --resource-group $destinationRG ` --job-name $jobName `- --dest-account $destAccount ` - --src-account $srcAccount ` + --dest-account $destinationAccount ` + --src-account $sourceAccount ` --dest-nosql database=$destinationDatabase container=$destinationContainer ` --src-nosql database=$sourceDatabase container=$sourceContainer ``` Create a job to copy a container within an Azure Cosmos DB API for Cassandra acc az cosmosdb copy create ` --resource-group $destinationRG ` --job-name $jobName `- --dest-account $destAccount ` - --src-account $srcAccount ` + --dest-account $destinationAccount ` + --src-account $sourceAccount ` --dest-cassandra keyspace=$destinationKeySpace table=$destinationTable ` --src-cassandra keyspace=$sourceKeySpace table=$sourceTable ``` Create a job to copy a container within an Azure Cosmos DB API for MongoDB accou az cosmosdb copy create ` --resource-group $destinationRG ` --job-name $jobName `- --dest-account $destAccount ` - --src-account $srcAccount ` + --dest-account $destinationAccount ` + --src-account $sourceAccount ` --dest-mongo database=$destinationDatabase collection=$destinationCollection ` --src-mongo database=$sourceDatabase collection=$sourceCollection ``` While copying data from one account's container to another account's container. 
az cosmosdb copy create ` --resource-group $destinationAccountRG ` --job-name $jobName `- --dest-account $destAccount ` - --src-account $srcAccount ` + --dest-account $destinationAccount ` + --src-account $sourceAccount ` --dest-nosql database=$destinationDatabase container=$destinationContainer ` --src-nosql database=$sourceDatabase container=$sourceContainer ``` View the progress and status of a copy job: ```azurecli-interactive az cosmosdb copy show ` --resource-group $destinationAccountRG `- --account-name $destAccount ` + --account-name $destinationAccount ` --job-name $jobName ``` To list all the container copy jobs created in an account: ```azurecli-interactive az cosmosdb copy list ` --resource-group $destinationAccountRG `- --account-name $destAccount + --account-name $destinationAccount ``` ### Pause a container copy job In order to pause an ongoing container copy job, you can use the command: ```azurecli-interactive az cosmosdb copy pause ` --resource-group $destinationAccountRG `- --account-name $destAccount ` + --account-name $destinationAccount ` --job-name $jobName ``` In order to resume an ongoing container copy job, you can use the command: ```azurecli-interactive az cosmosdb copy resume ` --resource-group $destinationAccountRG `- --account-name $destAccount ` + --account-name $destinationAccount ` --job-name $jobName ``` In order to cancel an ongoing container copy job, you can use the command: ```azurecli-interactive az cosmosdb copy cancel ` --resource-group $destinationAccountRG `- --account-name $destAccount ` + --account-name $destinationAccount ` --job-name $jobName ``` |
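The `az cosmosdb copy` commands above assume several shell variables are already set. A sketch of that setup with placeholder values — the command is only printed here so the sketch runs without an Azure login; drop the `echo` to run it for real (and note the copy commands come from a preview surface of the CLI, an assumption worth verifying against the `az cosmosdb copy` reference):

```shell
#!/bin/sh
# Sketch: variable setup assumed by the article's `az cosmosdb copy create`
# command. All values are illustrative placeholders for your own accounts.
destinationRG="my-dest-rg"
jobName="copy-job-001"
destinationAccount="dest-cosmos-account"
sourceAccount="src-cosmos-account"
destinationDatabase="destDb"
destinationContainer="destContainer"
sourceDatabase="srcDb"
sourceContainer="srcContainer"

# Print the fully substituted command for review before running it.
echo az cosmosdb copy create \
  --resource-group "$destinationRG" \
  --job-name "$jobName" \
  --dest-account "$destinationAccount" \
  --src-account "$sourceAccount" \
  --dest-nosql database="$destinationDatabase" container="$destinationContainer" \
  --src-nosql database="$sourceDatabase" container="$sourceContainer"
```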
cosmos-db | How To Setup Customer Managed Keys Existing Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md | To enable CMK on an existing account, update the account with an ARM template se ``` {- "properties": { + "properties": { "keyVaultKeyUri": "<key-vault-key-uri>"- } + } } ``` The output of this CLI command for enabling CMK waits for the completion of encr az cosmosdb update --name "testaccount" --resource-group "testrg" --key-uri "https://keyvaultname.vault.azure.net/keys/key1" ``` -### Steps to enable CMK on your existing Azure Cosmos DB account with PITR or Analytical store account +### Steps to enable CMK on your existing Azure Cosmos DB account with Continuous backup or Analytical store account For enabling CMK on an existing account that has continuous backup and point in time restore enabled, we need to follow some extra steps. Follow step 1 to step 5 and then follow the instructions to enable CMK on the existing account. For enabling CMK on existing account that has continuous backup and point in tim **For System managed identity :** ```- az cosmosdb update --resource-group $resourceGroupName --name $accountName --default- identity "SystemAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/ systemAssignedIdentities/MyID" + az cosmosdb update --resource-group $resourceGroupName --name $accountName --default-identity "SystemAssignedIdentity=subscriptions/00000000-0000-0000-0000-00000000/resourcegroups/MyRG/providers/Microsoft.ManagedIdentity/systemAssignedIdentities/MyID" ``` **For User managed identity:** As you would expect, by enabling CMK there's a slight increase in data size and **Should you back up the data before enabling CMK?** -Enabling CMK doesn't pose any threat of data loss. In general, we suggest you back up the data regularly. +Enabling CMK doesn't pose any threat of data loss. 
**Are old backups taken as a part of periodic backup encrypted?** No. Old periodic backups aren't encrypted. Backups generated after CMK is enabled are encrypted. -**What is the behavior on existing accounts that are enabled for Continuous backup (PITR)** +**What is the behavior on existing accounts that are enabled for Continuous backup?** -When CMK is turned on, the encryption is turned on for continuous backups as well. All restores going forward is encrypted. +When CMK is turned on, the encryption is turned on for continuous backups as well. Once CMK is turned on, all restored accounts going forward will be CMK enabled. **What is the behavior if CMK is enabled on a PITR-enabled account and we restore the account to the time CMK was disabled?** In this case, CMK is explicitly enabled on the restored target account for the following reasons: - Once CMK is enabled on the account, there's no option to disable CMK. -- This behavior is in line with the current design of restore of CMK enabled account if periodic backup+- This behavior is in line with the current design of restore of CMK enabled account with periodic backup **What happens when the user revokes the key while CMK migration is in-progress?** |
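The `keyVaultKeyUri` value shown earlier must be a Key Vault key URI of the form `https://<vault-name>.vault.azure.net/keys/<key-name>[/<version>]`. A quick shape-check sketch — the pattern is an assumption that fits the public-cloud DNS suffix, not a full validator:

```shell
#!/bin/sh
# Sketch: shape-check a Key Vault key URI before passing it to --key-uri.
is_key_uri() {
  printf '%s' "$1" | grep -Eq '^https://[a-zA-Z0-9-]+\.vault\.azure\.net/keys/[a-zA-Z0-9-]+(/[a-fA-F0-9]+)?$'
}

if is_key_uri "https://keyvaultname.vault.azure.net/keys/key1"; then
  echo "looks like a Key Vault key URI"
fi
```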
cosmos-db | Index Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/index-policy.md | The following considerations apply when creating composite indexes to optimize a ## <a id=index-transformation></a>Modifying the indexing policy -A container's indexing policy can be updated at any time [by using the Azure portal or one of the supported SDKs](how-to-manage-indexing-policy.md). An update to the indexing policy triggers a transformation from the old index to the new one, which is performed online and in-place (so no additional storage space is consumed during the operation). The old indexing policy is efficiently transformed to the new policy without affecting the write availability, read availability, or the throughput provisioned on the container. Index transformation is an asynchronous operation, and the time it takes to complete depends on the provisioned throughput, the number of items and their size. +A container's indexing policy can be updated at any time [by using the Azure portal or one of the supported SDKs](how-to-manage-indexing-policy.md). An update to the indexing policy triggers a transformation from the old index to the new one, which is performed online and in-place (so no additional storage space is consumed during the operation). The old indexing policy is efficiently transformed to the new policy without affecting the write availability, read availability, or the throughput provisioned on the container. Index transformation is an asynchronous operation, and the time it takes to complete depends on the provisioned throughput, the number of items and their size. If multiple indexing policy updates have to be made, it is recommended to do all the changes as a single operation in order to have the index transformation complete as quickly as possible. > [!IMPORTANT] > Index transformation is an operation that consumes [Request Units](request-units.md). 
Request Units consumed by an index transformation aren't currently billed if you are using [serverless](serverless.md) containers. These Request Units will get billed once serverless becomes generally available. |
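The guidance added in this diff — batch all indexing-policy edits into one update so only a single index transformation runs — can be sketched as follows. The policy keys mirror the Azure Cosmos DB indexing-policy JSON, but the merge helper and sample paths are my own illustration:

```python
def apply_policy_changes(policy: dict, changes: list) -> dict:
    """Fold several desired indexing-policy tweaks into one new policy dict,
    so the container is updated once and one transformation covers them all."""
    merged = dict(policy)  # leave the original policy untouched
    for change in changes:
        merged.update(change)
    return merged

current = {
    "indexingMode": "consistent",
    "includedPaths": [{"path": "/*"}],
    "excludedPaths": [],
}
# Two separate edits, submitted as a single replace operation:
changes = [
    {"excludedPaths": [{"path": "/largeBlob/?"}]},
    {"compositeIndexes": [[{"path": "/name", "order": "ascending"},
                           {"path": "/age", "order": "descending"}]]},
]
new_policy = apply_policy_changes(current, changes)
```

Submitting `new_policy` in one container replace triggers a single index transformation, instead of one per edit.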
cosmos-db | Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/compatibility.md | Azure Cosmos DB for MongoDB vCore supports the following indexes and index prope <tr><td>Text Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr> <tr><td>Geospatial Index</td><td><img src="media/compatibility/no-icon.svg" alt="No">No</td></tr> <tr><td>Hashed Index</td><td><img src="media/compatibility/yes-icon.svg" alt="Yes">Yes</td></tr>-<tr><td>Vector Index (only available in Cosmos DB)</td><td><img src="medi)</td></tr> +<tr><td>Vector Index (only available in Cosmos DB)</td><td><img src="medi>vector search</a></td></tr> </table> |
cosmos-db | Free Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/free-tier.md | Azure Cosmos DB for MongoDB vCore now introduces a new SKU, the "Free Tier," ena boasting command and feature parity with a regular Azure Cosmos DB for MongoDB vCore account. It makes it easy for you to get started, develop, test your applications, or even run small production workloads for free. With Free Tier, you get a dedicated MongoDB cluster with 32-GB storage, perfect -for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available for our users in the East US, West Europe, and Southeast Asia regions. +for all of your learning & evaluation needs. Users can provision a single free DB server per supported Azure region for a given subscription. This feature is currently available for our users in the East US, and Southeast Asia regions. ## Get started specify your storage requirements, and you're all set. Rest assured, your data, ## Restrictions * For a given subscription, only one free tier account is permissible in a region.-* Free tier is currently available in East US, West Europe, and Southeast Asia regions only. +* Free tier is currently available in East US, and Southeast Asia regions only. * High availability, Azure Active Directory (Azure AD) and Diagnostic Logging are not supported. |
cosmos-db | How To Monitor Diagnostics Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-monitor-diagnostics-logs.md | - - ignite-2023 + Last updated 10/31/2023 # CustomerIntent: As a operations engineer, I want to review diagnostic logs so that I troubleshoot issues as they occur. |
cosmos-db | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/policy-reference.md | Title: Built-in policy definitions for Azure Cosmos DB description: Lists Azure Policy built-in policy definitions for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
cosmos-db | Product Updates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/product-updates.md | Title: Product updates for Azure Cosmos DB for PostgreSQL -description: Release notes, new features and features in preview +description: Release notes, new features, and features in preview Previously updated : 11/14/2023 Last updated : 11/20/2023 # Product updates for Azure Cosmos DB for PostgreSQL Updates that don't directly affect the internals of a cluster are rolled out g Updates that change cluster internals, such as installing a [new minor PostgreSQL version](https://www.postgresql.org/developer/roadmap/), are delivered to existing clusters as part of the next [scheduled maintenance](concepts-maintenance.md) event. Such updates are available immediately to newly created clusters. ### November 2023+* General availability: [The latest minor PostgreSQL version updates](reference-versions.md#postgresql-versions) (11.22, 12.17, 13.13, 14.10, 15.5, and 16.1) are now available in all supported regions. * PostgreSQL 16 is now the default Postgres version for Azure Cosmos DB for PostgreSQL in Azure portal. * Learn how to do [in-place upgrade of major PostgreSQL versions](./howto-upgrade.md) in Azure Cosmos DB for PostgreSQL. * Retirement: As of November 9, 2023, PostgreSQL 11 is unsupported by PostgreSQL community. |
cosmos-db | Reference Extensions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md | The versions of each extension installed in a cluster sometimes differ based on > | [lo](https://www.postgresql.org/docs/current/lo.html) | Large Object maintenance. | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | > | [ltree](https://www.postgresql.org/docs/current/static/ltree.html) | Provides a data type for hierarchical tree-like structures. | 1.1 | 1.1 | 1.2 | 1.2 | 1.2 | 1.2 | > | [seg](https://www.postgresql.org/docs/current/seg.html) | Data type for representing line segments or floating-point intervals. | 1.3 | 1.3 | 1.3 | 1.4 | 1.4 | 1.4 |-> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | 1.4.0 | +> | [tdigest](https://github.com/tvondra/tdigest) | Data type for on-line accumulation of rank-based statistics such as quantiles and trimmed means. | 1.4.1 | 1.4.1 | 1.4.1 | 1.4.1 | 1.4.1 | 1.4.1 | > | [topn](https://github.com/citusdata/postgresql-topn/) | Type for top-n JSONB. | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | 2.6.0 | ### Full-text search extensions The versions of each extension installed in a cluster sometimes differ based on > | [intagg](https://www.postgresql.org/docs/current/intagg.html) | Integer aggregator and enumerator (obsolete). | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | 1.1 | > | [intarray](https://www.postgresql.org/docs/current/static/intarray.html) | Provides functions and operators for manipulating null-free arrays of integers. | 1.2 | 1.2 | 1.3 | 1.5 | 1.5 | 1.5 | > | [moddatetime](https://www.postgresql.org/docs/current/contrib-spi.html#id-1.11.7.45.9) | Functions for tracking last modification time. | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 | 1.0 |-> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. 
| 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 | 4.7.4 | +> | [pg\_partman](https://pgxn.org/dist/pg_partman/doc/pg_partman.html) | Manages partitioned tables by time or ID. | 4.7.4 | 4.7.4 | 4.7.4 | 5.0.0 | 5.0.0 | 5.0.0 | > | [pg\_surgery](https://www.postgresql.org/docs/current/pgsurgery.html) | Functions to perform surgery on a damaged relation. | | | | 1.0 | 1.0 | 1.0 | > | [pg\_trgm](https://www.postgresql.org/docs/current/static/pgtrgm.html) | Provides functions and operators for determining the similarity of alphanumeric text based on trigram matching. | 1.4 | 1.4 | 1.5 | 1.6 | 1.6 | 1.6 | > | [pgcrypto](https://www.postgresql.org/docs/current/static/pgcrypto.html) | Provides cryptographic functions. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | The versions of each extension installed in a cluster sometimes differ based on > ||||||| > | [amcheck](https://www.postgresql.org/docs/current/amcheck.html) | Functions for verifying relation integrity. | 1.1 | 1.2 | 1.2 | 1.3 | 1.3 | 1.3 | > | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |-> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 | 1.0 | +> | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 | | > | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.11 | 1.12 | > | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. 
| | | 1.3 | 1.3 | 1.3 | 1.3 | > | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | 1.4 | The versions of each extension installed in a cluster sometimes differ based on > [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | **PG 16** | > |||||||-> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 | 0.5.0 | +> [pgvector](https://github.com/pgvector/pgvector#installation-notes) | Open-source vector similarity search for Postgres | 0.5.1 | 0.5.1 | 0.5.1 | 0.5.1 | 0.5.1 | 0.5.1 | ### PostGIS extensions |
cosmos-db | Reference Versions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md | customizable during creation and can be upgraded in-place once the cluster is cr ### PostgreSQL version 16 -The current minor release is 16.0. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/16.0/) to +The current minor release is 16.1. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/16.1/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 15 -The current minor release is 15.4. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/15.4/) to +The current minor release is 15.5. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/15.5/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 14 -The current minor release is 14.9. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/14.9/) to +The current minor release is 14.10. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/14.10/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 13 -The current minor release is 13.12. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/13.12/) to +The current minor release is 13.13. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/13.13/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 12 -The current minor release is 12.16. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/12.16/) to +The current minor release is 12.17. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/12.17/) to learn more about improvements and fixes in this minor release. 
### PostgreSQL version 11 learn more about improvements and fixes in this minor release. > [!CAUTION] > PostgreSQL community ended support for PostgreSQL 11 on November 9, 2023. See [restrictions](./reference-versions.md#retired-postgresql-engine-versions-not-supported-in-azure-cosmos-db-for-postgresql) that apply to the retired PostgreSQL major versions in Azure Cosmos DB for PostgreSQL. Learn about [in-place upgrades for major PostgreSQL versions](./concepts-upgrade.md) in Azure Cosmos DB for PostgreSQL. -The current minor release is 11.21. Refer to the [PostgreSQL -documentation](https://www.postgresql.org/docs/release/11.21/) to +The *final* minor release is 11.22. Refer to the [PostgreSQL +documentation](https://www.postgresql.org/docs/release/11.22/) to learn more about improvements and fixes in this minor release. ### PostgreSQL version 10 and older |
cosmos-db | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Cosmos DB description: Lists Azure Policy Regulatory Compliance controls available for Azure Cosmos DB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
cosmos-db | Vector Database | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/vector-database.md | Title: Vector database -description: Vector database functionalities in Azure Cosmos DB for retrieval augmented generation (RAG) and vector search. +description: Vector database extension and retrieval augmented generation (RAG) implementation. -# Vector database functionality implementation using Azure Cosmos DB +# Vector database [!INCLUDE[NoSQL, MongoDB vCore, PostgreSQL](includes/appliesto-nosql-mongodbvcore-postgresql.md)] -You likely considered augmenting your applications with Large Language Models (LLMs) that can access your own data store through Retrieval Augmented Generation (RAG). This approach allows you to +You have likely considered augmenting your applications with large language models (LLMs) and vector databases that can access your own data through retrieval-augmented generation (RAG). This approach allows you to - Generate contextually relevant and accurate responses to user prompts from AI models - Overcome ChatGPT, GPT-3.5, or GPT-4's token limits - Reduce the costs from frequent fine-tuning on updated data -Some RAG implementation tutorials demonstrate integrating vector databases. Instead of adding a separate vector database to your existing tech stack, you can achieve the same outcome using Azure Cosmos DB with Azure OpenAI Service and optionally Azure Cognitive Search when working with multi-modal data. +Some RAG implementation tutorials demonstrate integrating vector databases that are distinct from traditional relational and non-relational databases. Instead of adding a separate vector database to your existing tech stack, you can achieve the same outcome using the vector database extensions for Azure Cosmos DB when working with multi-modal data. 
By doing so, you can keep your vector embeddings and original data together to achieve data consistency, scale, and performance while avoiding the extra cost of moving data to a separate vector database. -Here are some solutions: +Here is how: | | Description | | | |-| **[Azure Cosmos DB for NoSQL with Azure Cognitive Search](#implement-vector-database-functionalities-using-azure-cosmos-db-for-nosql-and-azure-cognitive-search)**. | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure Cognitive Search. | -| **[Azure Cosmos DB for Mongo DB vCore](#implement-vector-database-functionalities-using-azure-cosmos-db-for-mongodb-vcore)**. | Featuring native support for vector search, store your application data and vector embeddings together in a single MongoDB-compatible service. | -| **[Azure Cosmos DB for PostgreSQL](#implement-vector-database-functionalities-using-azure-cosmos-db-for-postgresql)**. | Offering native support vector search, you can store your data and vectors together in a scalable PostgreSQL offering. | +| **[Azure Cosmos DB for Mongo DB vCore](#implement-vector-database-functionalities-using-our-api-for-mongodb-vcore)** | Store your application data and vector embeddings together in a single MongoDB-compatible service featuring native support for vector search. | +| **[Azure Cosmos DB for PostgreSQL](#implement-vector-database-functionalities-using-our-api-for-postgresql)** | Store your data and vectors together in a scalable PostgreSQL offering with native support for vector search. | +| **[Azure Cosmos DB for NoSQL with Azure AI Search](#implement-vector-database-functionalities-using-our-nosql-api-and-ai-search)** | Augment your Azure Cosmos DB data with semantic and vector search capabilities of Azure AI Search. | -## Vector database related concepts +## What does a vector database do? 
-You might first want to ensure that you understand the following concepts: +The vector search feature in a vector database enables retrieval-augmented generation to harness LLMs and custom data or domain-specific information. This process involves extracting pertinent information from a custom data source and integrating it into the model request through prompt engineering. -- Grounding LLMs-- Retrieval Augmented Generation (RAG)-- Embeddings-- Vector search-- Prompt engineering--RAG harnesses LLMs and external knowledge to effectively handle custom data or domain-specific knowledge. It involves extracting pertinent information from a custom data source and integrating it into the model request through prompt engineering. --A robust mechanism is necessary to identify the most relevant data from the custom source that can be passed to the LLM. This mechanism allows you to optimize for the LLM's limit on the number of tokens per request. This limitation is where embeddings play a crucial role. By converting the data in your database into embeddings and storing them as vectors for future use, we apply the advantage of capturing the semantic meaning of the text, going beyond mere keywords to comprehend the context. +A robust mechanism is necessary to identify the most relevant data from the custom source that can be passed to the LLM. Our vector search features convert the data in your database into embeddings and store them as vectors for future use, thus capturing the semantic meaning of the text and going beyond mere keywords to comprehend the context. Moreover, this mechanism allows you to optimize for the LLM's limit on the number of tokens per request. Prior to sending a request to the LLM, the user input/query/request is also transformed into an embedding, and vector search techniques are employed to locate the most similar embeddings within the database. This technique enables the identification of the most relevant data records in the database. 
These retrieved records are then supplied as input to the LLM request using prompt engineering. -Here are multiple ways to implement RAG on your data stored in Azure Cosmos DB, thus achieving the same outcome as using a vector database. --## Implement vector database functionalities using Azure Cosmos DB for NoSQL and Azure Cognitive Search --Implement RAG patterns with Azure Cosmos DB for NoSQL and Azure Cognitive Search. This approach enables powerful integration of your data residing in Azure Cosmos DB for NoSQL into your AI-oriented applications. Azure Cognitive Search empowers you to efficiently index, and query high-dimensional vector data, allowing you to use Azure Cosmos DB for NoSQL for the same purpose as a vector database. --### Azure Cosmos DB-based vector database functionality code samples --- [.NET RAG Pattern retail reference solution for NoSQL](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore)-- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch)-- [.NET tutorial - recipe chatbot w/ Semantic Kernel](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch_SemanticKernel)-- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-NoSQL_CognitiveSearch)+Here are multiple ways to implement RAG on your data by using our vector database functionalities. -## Implement vector database functionalities using Azure Cosmos DB for MongoDB vCore +## Implement vector database functionalities using our API for MongoDB vCore Use the native vector search feature in Azure Cosmos DB for MongoDB vCore, which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. 
This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications. -### Azure Cosmos DB-based vector database functionality code samples +### Vector database implementation code samples - [.NET RAG Pattern retail reference solution](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore) - [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-MongoDBvCore) - [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-MongoDB-vCore) -## Implement vector database functionalities using Azure Cosmos DB for PostgreSQL +## Implement vector database functionalities using our API for PostgreSQL -Use the native vector search feature in Azure Cosmos DB for PostgreSQL, offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications. +Use the native vector search feature in Azure Cosmos DB for PostgreSQL, which offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications. 
-### Azure Cosmos DB-based vector database functionality code samples +### Vector database implementation code samples - Python: [Python notebook tutorial - food review chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-PostgreSQL_CognitiveSearch) +## Implement vector database functionalities using our NoSQL API and AI Search ++Implement RAG patterns with Azure Cosmos DB for NoSQL and Azure AI Search. This approach enables powerful integration of your data residing in the NoSQL API into your AI-oriented applications. Azure AI Search empowers you to efficiently index and query high-dimensional vector data, thereby meeting your vector database needs. ++### Vector database implementation code samples ++- [.NET RAG Pattern retail reference solution for NoSQL](https://github.com/Azure/Vector-Search-AI-Assistant-MongoDBvCore) +- [.NET tutorial - recipe chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch) +- [.NET tutorial - recipe chatbot w/ Semantic Kernel](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/C%23/CosmosDB-NoSQL_CognitiveSearch_SemanticKernel) +- [Python notebook tutorial - Azure product chatbot](https://github.com/microsoft/AzureDataRetrievalAugmentedGenerationSamples/tree/main/Python/CosmosDB-NoSQL_CognitiveSearch) + ## Related content - [Vector search with Azure Cognitive Search](../search/vector-search-overview.md) |
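The retrieval mechanism this row describes — convert stored data and the incoming query into embeddings, then locate the most similar vectors — can be sketched with plain cosine similarity. The toy vectors and document IDs below are illustrative, not the output of a real embedding model or of any Cosmos DB API:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k(query_vec, store, k=2):
    """Rank stored (id, vector) pairs by similarity to the query embedding
    and return the ids of the k best matches."""
    ranked = sorted(store, key=lambda item: cosine(query_vec, item[1]), reverse=True)
    return [item[0] for item in ranked[:k]]

store = [
    ("doc-recipes", [0.9, 0.1, 0.0]),
    ("doc-pricing", [0.1, 0.9, 0.1]),
    ("doc-history", [0.0, 0.2, 0.9]),
]
hits = top_k([0.8, 0.2, 0.0], store, k=1)  # → ["doc-recipes"]
```

In the RAG flow described above, the records behind `hits` are what gets spliced into the LLM prompt; the native vector search features in the MongoDB vCore and PostgreSQL APIs perform this ranking with proper indexes rather than a linear scan.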
cost-management-billing | Ea Portal Enrollment Invoices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/ea-portal-enrollment-invoices.md | To view credits: To apply your Azure Prepayment to overages, you must meet the following criteria: -- You've incurred overage charges that haven't been paid and are within one year of the billed service's end date.+- You've incurred overage charges that haven't been paid and are within 3 months of the invoice bill date. - Your available Azure Prepayment amount covers the full number of incurred charges, including all past unpaid Azure invoices. - The billing term that you want to complete must be fully closed. Billing fully closes after the fifth day of each month. - The billing period that you want to offset must be fully closed. |
data-factory | Airflow Get Ip Airflow Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/airflow-get-ip-airflow-cluster.md | This document explains how to enhance security of your data stores and resources > Importing DAGs is currently not supported using blob storage with IP allow listing or using private endpoints. We suggest using Git-sync instead. ### Step 1: Retrieve the bearer token for the Airflow API.-- Similar to the authentication process used in the standard Azure REST API, acquiring an access token from Azure AD is required before making a call to the Airflow REST API. A guide on how to obtain the token from Azure AD can be found at https://learn.microsoft.com/rest/api/azure.-- It should be noted that to obtain an access token for Data Factory, the resource to be used is **https://datafactory.azure.com**. +- Similar to the authentication process used in the standard Azure REST API, acquiring an access token from Azure AD is required before making a call to the Airflow REST API. A guide on how to obtain the token from Azure AD can be found at [https://learn.microsoft.com/rest/api/azure](/rest/api/azure). - Additionally, the service principal used to obtain the access token needs to have at least a **contributor role** on the Data Factory where the Airflow Integration Runtime is located. For more information, see the screenshots below. For more information, see the screenshots below. - [Run an existing pipeline with Managed Airflow](tutorial-run-existing-pipeline-with-airflow.md) - [Managed Airflow pricing](airflow-pricing.md)-- [How to change the password for Managed Airflow environments](password-change-airflow.md)+- [How to change the password for Managed Airflow environments](password-change-airflow.md) |
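Step 1 in this row (acquire an Azure AD token before calling the Airflow REST API) uses the standard client-credentials flow. The sketch below only *builds* the token request without sending it; the tenant/client values are placeholders, and the `scope` value is an assumption on my part — confirm the exact endpoint and resource against the Azure REST API auth guide the article links to:

```python
from urllib.parse import urlencode

def build_token_request(tenant_id: str, client_id: str, client_secret: str):
    """Construct the POST URL and form body for an Azure AD client-credentials
    token request. The Data Factory scope below is assumed, not quoted."""
    url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
    body = urlencode({
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "scope": "https://datafactory.azure.com/.default",
    })
    return url, body

url, body = build_token_request("<tenant-id>", "<client-id>", "<client-secret>")
```

POSTing `body` (as `application/x-www-form-urlencoded`) to `url` returns a JSON payload whose `access_token` field is then sent as the `Authorization: Bearer` header on Airflow REST API calls; per the article, the service principal behind `client_id` needs at least a contributor role on the Data Factory.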
data-factory | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/policy-reference.md | |
data-factory | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md | Check out our [What's New video archive](https://www.youtube.com/playlist?list=P General Availability of Time to Live (TTL) for Managed Virtual Network [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/general-availability-of-time-to-live-ttl-for-managed-virtual/ba-p/3922218) -### Region expanstion +### Region expansion Azure Data Factory is generally available in Poland Central [Learn more](https://techcommunity.microsoft.com/t5/azure-data-factory-blog/continued-region-expansion-azure-data-factory-is-generally/ba-p/3965769) |
data-lake-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Analytics description: Lists Azure Policy built-in policy definitions for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
data-lake-analytics | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-analytics/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
data-lake-store | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/policy-reference.md | Title: Built-in policy definitions for Azure Data Lake Storage Gen1 description: Lists Azure Policy built-in policy definitions for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
data-lake-store | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-lake-store/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Lake Storage Gen1 description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Lake Storage Gen1. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
databox-online | Azure Stack Edge Gpu Deploy Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-checklist.md | Use the following checklist to ensure you have this information after you've p |--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). 
|-| | <ul><li>At least one 1-GbE RJ-45 network cable for Port 1 </li><li>At least one 25/10-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).| +| | <ul><li>At least one 1-GbE RJ-45 network cable for Port 1 </li><li>At least one 25/10-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).| | Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. |[Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. | | First-time device connection | Laptop whose IPv4 settings can be changed. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| If connecting Port 1 directly to a laptop (without a switch), use an Ethernet crossover cable or a USB to Ethernet adaptor. | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. 
| Default password is *Password1*, which expires at first sign-in. | Use the following checklist to ensure you have this information after you've p |--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Four power cables for the two device nodes in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). |-| | <ul><li>At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes </li><li>You would need two 1-GbE RJ-45 network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you also need SFP+ copper cables to connect Port 3 and Port 4 across the device nodes and also from device nodes to the switches. 
See the [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li></ul> | Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).<br>For a full list of supported cables and modules for 25 GbE and 10 GbE from Mellanox, see [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).| +| | <ul><li>At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes </li><li>You would need two 1-GbE RJ-45 network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you also need SFP+ copper cables to connect Port 3 and Port 4 across the device nodes and also from device nodes to the switches. See the [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li></ul> | Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards from Cavium, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).| | First-time device connection | Laptop whose IPv4 settings can be changed.<!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->|This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. 
| Default password is *Password1*, which expires at first sign-in. | | Network settings | Each device node has 2 x 1-GbE, 4 x 25-GbE network ports. <ul><li>Port 1 is used for initial configuration only.</li><li>Port 2 must be connected to the Internet (with connectivity to Azure). Port 3 and Port 4 must be configured and connected across the two device nodes in accordance with the network topology you intend to deploy. You can choose from one of the three [Supported network topologies](azure-stack-edge-gpu-clustering-overview.md#supported-networking-topologies).</li><li>DHCP and static IPv4 configuration supported.</li></ul> | Static IPv4 configuration requires IP, DNS server, and default gateway. | |
databox-online | Azure Stack Edge Gpu Deploy Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-deploy-install.md | The backplane of Azure Stack Edge device: For a full list of supported cables, switches, and transceivers for these network adapter cards, see: - [`Qlogic` Cavium 25G NDC adapter interoperability matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).-- 25 GbE and 10 GbE cables and modules in [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products). > [!NOTE] > Using USB ports to connect any external device, including keyboards and monitors, is not supported for Azure Stack Edge devices. |
databox-online | Azure Stack Edge Gpu Quickstart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-quickstart.md | Before you deploy, make sure that the following prerequisites are in place: 1. **Install**: Connect PORT 1 to a client computer via an Ethernet crossover cable or USB Ethernet adapter. Connect at least one other device port for data, preferably 25 GbE (from PORT 3 to PORT 6), to the Internet via SFP+ copper cables, or use PORT 2 with an RJ45 patch cable. Connect the provided power cords to the Power Supply Units and to separate power distribution outlets. Press the power button on the front panel to turn on the device. - See [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products) to get compatible network cables and switches. + See [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) to get compatible network cables and switches. Here is the minimum cabling configuration needed to deploy your device: ![Back plane of a cabled device](./media/azure-stack-edge-gpu-quickstart/backplane-min-cabling-1.png) |
databox-online | Azure Stack Edge Gpu Technical Specifications Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-gpu-technical-specifications-compliance.md | Here are the details for the Mellanox card: For a full list of supported cables, switches, and transceivers for these network cards, go to: - [`Qlogic` Cavium 25G NDC adapter interoperability matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).-- [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products). ## Storage specifications |
databox-online | Azure Stack Edge Mini R Deploy Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-mini-r-deploy-checklist.md | Use the following checklist to ensure you have this information after you have p |--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge Mini R/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). 
|-| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li>At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).| +| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li>At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4, Port 5, or Port 6</li></ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).| | Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. | [Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. | | First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor.<!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. | |
databox-online | Azure Stack Edge Pro 2 Deploy Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-checklist.md | Use the following checklist to ensure you have this information after you've p |--|-|--| | Device management | - Azure subscription. <br> - Resource providers registered. <br> - Azure Storage account.|- Enabled for Azure Stack Edge, owner or contributor access. <br> - In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads. <br> - Need access credentials. | | Device installation | One power cable in the package. <!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |-| | - At least one X 1-GbE RJ-45 network cable for Port 1. <br> - At least 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. <br> - At least one 100-GbE network switch to connect a 1 GbE or a 100-GbE network interface to the Internet for data.| Customer needs to procure these cables.<br><br>For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).| +| | - At least one X 1-GbE RJ-45 network cable for Port 1. <br> - At least 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) for each data network interface Port 3 and Port 4 to be configured. 
<br> - At least one 100-GbE network switch to connect a 1 GbE or a 100-GbE network interface to the Internet for data.| Customer needs to procure these cables.| | First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. </li><!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. | | Network settings | Device comes with 2 x 10/1-GbE, 2 x 100-GbE network ports. <br> - Port 1 is used to configure management settings only. One or more data ports can be connected and configured. <br> - At least one data network interface from among Port 2 to Port 4 needs to be connected to the Internet (with connectivity to Azure). <br> - DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. | Use the following checklist to ensure you have this information after you've p |--|-|--| | Device management | - Azure subscription <br> - Resource providers registered <br> - Azure Storage account|Enabled for Azure Stack Edge, owner or contributor access. <br> - In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.EdgeOrder` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads. <br> - Need access credentials</li> | | Device installation | One power cable in the package per device node. 
<!--<br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped.--> | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md) |-| | <br> - At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes <br> - You would need two 1-GbE network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you may also need at least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) to connect Port 3 and Port 4 across the device nodes. <br> - You would also need at least one 10/1-GbE network switch to connect Port 1 and Port 2. You would need a 100/10-GbE switch to connect Port 3 or Port 4 network interface to the Internet for data.| Customer needs to procure these cables and switches. Exact number of cables and switches would depend on the network topology that you deploy. <br><br> For a full list of supported cables, modules, and switches, see [Connect-X6 DX adapter card compatible firmware](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware%20Compatible%20Products).| +| | <br> - At least two 1-GbE RJ-45 network cables for Port 1 on the two device nodes <br> - You would need two 1-GbE network cables to connect Port 2 on each device node to the internet. Depending on the network topology you wish to deploy, you may also need at least one 100-GbE QSFP28 Passive Direct Attached Cable (tested in-house) to connect Port 3 and Port 4 across the device nodes. <br> - You would also need at least one 10/1-GbE network switch to connect Port 1 and Port 2. You would need a 100/10-GbE switch to connect Port 3 or Port 4 network interface to the Internet for data.| Customer needs to procure these cables and switches. 
Exact number of cables and switches would depend on the network topology that you deploy.| | First-time device connection | Via a laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adapter. | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. | | Network settings | Device comes with 2 x 10/1-GbE network ports, Port 1 and Port 2. Device also has 2 x 100-GbE network ports, Port 3 and Port 4. <br> - Port 1 is used for initial configuration. Port 2, Port 3, and Port 4 are also connected and configured. <br> - At least one data network interface from among Port 2 - Port 4 needs to be connected to the Internet (with connectivity to Azure). <br> - DHCP and static IPv4 configuration supported. | Static IPv4 configuration requires IP, DNS server, and default gateway. | |
databox-online | Azure Stack Edge Pro 2 Deploy Install | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-2-deploy-install.md | On your device: - One network card corresponding to two high-speed ports and two built-in 10/1-GbE ports: - **Intel Ethernet X722 network adapter** - Port 1, Port 2.- - **Mellanox dual port 100 GbE ConnectX-6 Dx network adapter** - Port 3, Port 4. See a full list of [Supported cables, switches, and transceivers for ConnectX-6 Dx network adapters](https://docs.nvidia.com/networking/display/ConnectX6DxFirmwarev22271016/Firmware+Compatible+Products). + - **Mellanox dual port 100 GbE ConnectX-6 Dx network adapter** - Port 3, Port 4. - Two Wi-Fi Sub miniature version A (SMA) connectors located on the faceplate of PCIe card slot located below Port 3 and Port 4. The Wi-Fi antennas are installed on these connectors. |
databox-online | Azure Stack Edge Pro R Deploy Checklist | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-deploy-checklist.md | Use the following checklist to ensure you have this information after you have p |--|-|-| | Device management | <ul><li>Azure subscription</li><li>Resource providers registered</li><li>Azure Storage account</li></ul>|<ul><li>Enabled for Azure Stack Edge Pro/Data Box Gateway, owner or contributor access.</li><li>In Azure portal, go to **Home > Subscriptions > Your-subscription > Resource providers**. Search for `Microsoft.DataBoxEdge` and register. Repeat for `Microsoft.Devices` if deploying IoT workloads.</li><li>Need access credentials.</li></ul> | | Device installation | Power cables in the package. <br>For US, an SVE 18/3 cable rated for 125 V and 15 Amps with a NEMA 5-15P to C13 (input to output) connector is shipped. | For more information, see the list of [Supported power cords by country](azure-stack-edge-technical-specifications-power-cords-regional.md). 
|-| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4</li><ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/) and [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products).| +| | <ul><li>At least 1 X 1-GbE RJ-45 network cable for Port 1 </li><li> At least 1 X 25-GbE SFP+ copper cable for Port 3, Port 4</li><ul>| Customer needs to procure these cables.<br>For a full list of supported network cables, switches, and transceivers for device network cards, see [Cavium FastlinQ 41000 Series Interoperability Matrix](https://www.marvell.com/documents/xalflardzafh32cfvi0z/).| | Network readiness | Check to see how ready your network is for the deployment of an Azure Stack Edge device. | [Use the Azure Stack Network Readiness Checker](azure-stack-edge-deploy-check-network-readiness.md) to test all needed connections. | | First-time device connection | Laptop whose IPv4 settings can be changed. This laptop connects to Port 1 via a switch or a USB to Ethernet adaptor. <!--<li> A minimum of 1 GbE switch must be used for the device once the initial setup is complete. The local web UI will not be accessible if the connected switch is not at least 1 Gbe.</li>-->| | | Device sign-in | Device administrator password, between 8 and 16 characters, including three of the following character types: uppercase, lowercase, numeric, and special characters. | Default password is *Password1*, which expires at first sign-in. | |
databox-online | Azure Stack Edge Pro R Technical Specifications Compliance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/azure-stack-edge-pro-r-technical-specifications-compliance.md | Your Azure Stack Edge Pro R device has the following network hardware: | PSID (R640) | MT_2420110034 |--> <!-- confirm w/ Ravi what is this--> -For a full list of supported cables, switches, and transceivers for these network cards, go to [Mellanox dual port 25G ConnectX-4 channel network adapter compatible products](https://docs.mellanox.com/display/ConnectX4LxFirmwarev14271016/Firmware+Compatible+Products). - ## Storage specifications Azure Stack Edge Pro R devices have eight data disks and two M.2 SATA disks that serve as operating system disks. For more information, go to [M.2 SATA disks](https://en.wikipedia.org/wiki/M.2). |
databox-online | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox-online/policy-reference.md | Title: Built-in policy definitions for Azure Stack Edge description: Lists Azure Policy built-in policy definitions for Azure Stack Edge. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
databox | Data Box Disk Deploy Copy Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-copy-data.md | +<!-- # Customer intent: As an IT admin, I need to be able to order Data Box Disk to upload on-premises data from my server onto Azure.-++# Doc scores: +# 11/18/22: 75 (2456/62) +# 09/01/23: 100 (2159/0) ::: zone target="docs"+--> # Tutorial: Copy data to Azure Data Box Disk and verify +<!-- ::: zone-end ::: zone target="chromeless" After the disks are connected and unlocked, you can copy data from your source d ::: zone-end ::: zone target="docs"+--> -This tutorial describes how to copy data from your host computer and then generate checksums to verify data integrity. +This tutorial describes how to copy data from your host computer and generate checksums to verify data integrity. In this tutorial, you learn how to: In this tutorial, you learn how to: ## Prerequisites Before you begin, make sure that:+ - You have completed the [Tutorial: Install and configure your Azure Data Box Disk](data-box-disk-deploy-set-up.md). - Your disks are unlocked and connected to a client computer.-- Your client computer that is used to copy data to the disks is running a [Supported operating system](data-box-disk-system-requirements.md#supported-operating-systems-for-clients).-- Make sure that the intended storage type for your data matches [Supported storage types](data-box-disk-system-requirements.md#supported-storage-types-for-upload).-- Review [Managed disk limits in Azure object size limits](data-box-disk-limits.md#azure-object-size-limits).-+- The client computer used to copy data to the disks is running a [Supported operating system](data-box-disk-system-requirements.md#supported-operating-systems-for-clients). +- The intended storage type for your data matches [Supported storage types](data-box-disk-system-requirements.md#supported-storage-types-for-upload). 
+- You've reviewed [Managed disk limits in Azure object size limits](data-box-disk-limits.md#azure-object-size-limits). ## Copy data to disks Review the following considerations before you copy the data to the disks: -- It is your responsibility to ensure that you copy the data to folders that correspond to the appropriate data format. For instance, copy the block blob data to the folder for block blobs. If the data format does not match the appropriate folder (storage type), then at a later step, the data upload to Azure fails.-- While copying data, ensure that the data size conforms to the size limits described in the [Azure storage and Data Box Disk limits](data-box-disk-limits.md).-- If you want to preserve metadata (ACLs, timestamps, and file attributes) when transferring data to Azure Files, follow the guidance in [Preserving file ACLs, attributes, and timestamps with Azure Data Box Disk](data-box-disk-file-acls-preservation.md).-- If data that is being uploaded by Data Box Disk is concurrently uploaded by other applications outside of Data Box Disk, this could result in upload job failures and data corruption.+- It is your responsibility to ensure that you copy your local data to the folders that correspond to the appropriate data format. For instance, copy block blob data to the *BlockBlob* folder. Block blobs being archived should be copied to the *BlockBlob_Archive* folder. If the local data format doesn't match the appropriate folder for the chosen storage type, the data upload to Azure fails in a later step. +- While copying data, ensure that the data size conforms to the size limits described in the [Azure storage and Data Box Disk limits](data-box-disk-limits.md) article. +- To preserve metadata such as ACLs, timestamps, and file attributes when transferring data to Azure Files, follow the guidance within the [Preserving file ACLs, attributes, and timestamps with Azure Data Box Disk](data-box-disk-file-acls-preservation.md) article. 
+- If you use both Data Box Disk and other applications to upload data simultaneously, you may experience upload job failures and data corruption. ++ > [!IMPORTANT] + > Data uploaded to the archive tier remains offline and needs to be rehydrated before reading or modifying. Data copied to the archive tier must remain for at least 180 days or be subject to an early deletion charge. Archive tier is not supported for ZRS, GZRS, or RA-GZRS accounts. > [!IMPORTANT] > If you specified managed disks as one of the storage destinations during order creation, the following section is applicable. -- You can only have one managed disk with a given name in a resource group across all the precreated folders and across all the Data Box Disk. This implies that the VHDs uploaded to the precreated folders should have unique names. Make sure that the given name does not match an already existing managed disk in a resource group. If VHDs have same names, then only one VHD is converted to managed disk with that name. The other VHDs are uploaded as page blobs into the staging storage account.-- Always copy the VHDs to one of the precreated folders. If you copy the VHDs outside of these folders or in a folder that you created, the VHDs are uploaded to Azure Storage account as page blobs and not managed disks.-- Only the fixed VHDs can be uploaded to create managed disks. Dynamic VHDs, differencing VHDs or VHDX files are not supported.-- If you don't have long paths enabled on the client, and any path and file name in your data copy exceeds 256 characters, the Data Box Split Copy Tool (DataBoxDiskSplitCopy.exe) or the Data Box Disk Validation tool (DataBoxDiskValidation.cmd) will report failures. 
To avoid this kind of failure, [enable long paths on your Windows client](/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd#enable-long-paths-in-windows-10-version-1607-and-later).+- Ensure that virtual hard disks (VHDs) uploaded to the precreated folders have unique names within resource groups. Managed disks must have unique names within a resource group across all the precreated folders on the Data Box Disk. If you're using multiple Data Box Disks, managed disk names must be unique across all folder and disks. When VHDs with duplicate names are found, only one is converted to a managed disk with that name. The remaining VHDs are uploaded as page blobs into the staging storage account. +- Always copy the VHDs to one of the precreated folders. VHDs placed outside of these folders or in a folder that you created are uploaded to Azure Storage accounts as page blobs instead of managed disks. +- Only fixed VHDs can be uploaded to create managed disks. Dynamic VHDs, differencing VHDs and VHDX files aren't supported. +- The Data Box Disk Split Copy and Validation tools, `DataBoxDiskSplitCopy.exe` and `DataBoxDiskValidation.cmd`, report failures when long paths are processed. These failures are common when long paths aren't enabled on the client, and your data copy's paths and file names exceed 256 characters. To avoid these failures, follow the guidance within the [enable long paths on your Windows client](/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd#enable-long-paths-in-windows-10-version-1607-and-later) article. Perform the following steps to connect and copy data from your computer to the Data Box Disk. -1. View the contents of the unlocked drive. The list of the precreated folders and subfolders in the drive is different depending upon the options selected when placing the Data Box Disk order. If a precreated folder does not exist, do not create it as copying to a user created folder will fail to upload on Azure. +1. 
View the contents of the unlocked drive. The list of the precreated folders and subfolders in the drive varies according to the options you select when placing the Data Box Disk order. The creation of extra folders isn't permitted, as copying data to a user-created folder causes upload failures. - |Selected storage destination |Storage account type|Staging storage account type |Folders and sub-folders | - ||||| - |Storage account |GPv1 or GPv2 | NA | BlockBlob <br> PageBlob <br> AzureFile | - |Storage account |Blob storage account | NA | BlockBlob | - |Managed disks |NA | GPv1 or GPv2 | ManagedDisk<ul> <li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul> | - |Storage account <br> Managed disks |GPv1 or GPv2 | GPv1 or GPv2 |BlockBlob <br> PageBlob <br> AzureFile <br> ManagedDisk<ul> <li> PremiumSSD </li><li>StandardSSD</li><li>StandardHDD</li></ul> | - |Storage account <br> Managed disks |Blob storage account | GPv1 or GPv2 |BlockBlob <br> ManagedDisk<ul> <li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul> | + |Selected storage destination |Storage account type|Staging storage account type |Folders and subfolders | + ||--|--|| + |Storage account |GPv1 or GPv2 | NA | BlockBlob<br>BlockBlob_Archive<br>PageBlob<br>AzureFile | + |Storage account |Blob storage account| NA | BlockBlob<br>BlockBlob_Archive | + |Managed disks |NA | GPv1 or GPv2 | ManagedDisk<ul><li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul> | + |Storage account<br>Managed disks |GPv1 or GPv2 | GPv1 or GPv2 | BlockBlob<br/>BlockBlob_Archive<br/>PageBlob<br/>AzureFile<br/>ManagedDisk<ul><li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul>| + |Storage account <br> Managed disks |Blob storage account | GPv1 or GPv2 |BlockBlob<br>BlockBlob_Archive<br>ManagedDisk<ul> <li>PremiumSSD</li><li>StandardSSD</li><li>StandardHDD</li></ul> | - An example screenshot of an order where a GPv2 storage account was specified is shown below: + The following screenshot shows 
an order where a GPv2 storage account and archive tier were specified: - ![Contents of the disk drive](media/data-box-disk-deploy-copy-data/data-box-disk-content.png) - -2. Copy the data that needs to be imported as block blobs in to *BlockBlob* folder. Similarly, copy data such as VHD/VHDX to *PageBlob* folder and data in to *AzureFile* folder. + :::image type="content" source="media/data-box-disk-deploy-copy-data/content-sml.png" alt-text="Screenshot of the contents of the disk drive." lightbox="media/data-box-disk-deploy-copy-data/content.png"::: - A container is created in the Azure storage account for each subfolder under BlockBlob and PageBlob folders. All files under BlockBlob and PageBlob folders are copied into a default container `$root` under the Azure Storage account. Any files in the `$root` container are always uploaded as block blobs. +1. Copy data to be imported as block blobs into the *BlockBlob* folder. Copy data to be stored as block blobs with the archive tier into the *BlockBlob_Archive* folder. Similarly, copy VHD or VHDX data to the *PageBlob* folder, and file share data into *AzureFile* folder. - Copy files to a folder within *AzureFile* folder. All files under *AzureFile* folder will be uploaded as files to a default container of type "databox-format-Guid" (ex: databox-azurefile-7ee19cfb3304122d940461783e97bf7b4290a1d7). + A container is created in the Azure storage account for each subfolder within the *BlockBlob* and *PageBlob* folders. All files copied to the *BlockBlob* and *PageBlob* folders are copied into a default `$root` container within the Azure Storage account. Any files in the `$root` container are always uploaded as block blobs. - If files and folders exist in the root directory, then you must move those to a different folder before you begin data copy. + Copy data to be placed in Azure file shares to a subfolder within the *AzureFile* folder. 
All files copied to the *AzureFile* folder are copied as files to a default container of type `databox-format-[GUID]`, for example, `databox-azurefile-7ee19cfb3304122d940461783e97bf7b4290a1d7`. ++ Before you begin to copy data, you need to move any files and folders that exist in the root directory to a different folder. > [!IMPORTANT] > All the containers, blobs, and filenames should conform to [Azure naming conventions](data-box-disk-limits.md#azure-block-blob-page-blob-and-file-naming-conventions). If these rules are not followed, the data upload to Azure will fail. -3. When copying files, ensure that files do not exceed ~4.7 TiB for block blobs, ~8 TiB for page blobs, and ~1 TiB for Azure Files. -4. You can use drag and drop with File Explorer to copy the data. You can also use any SMB compatible file copy tool such as Robocopy to copy your data. Multiple copy jobs can be initiated using the following Robocopy command: +1. When copying files, ensure that files don't exceed 4.7 TiB for block blobs, 8 TiB for page blobs, and 1 TiB for Azure Files. +1. You can use File Explorer's drag and drop functionality to copy the data. You can also use any SMB compatible file copy tool such as Robocopy to copy your data. - `Robocopy <source> <destination> * /MT:64 /E /R:1 /W:1 /NFL /NDL /FFT /Log:c:\RobocopyLog.txt` - - The parameters and options for the command are tabulated as follows: - - |Parameters/Options |Description | - |--|| - |Source | Specifies the path to the source directory. | - |Destination | Specifies the path to the destination directory. | - |/E | Copies subdirectories including empty directories. | - |/MT[:N] | Creates multi-threaded copies with N threads where N is an integer between 1 and 128. <br>The default value for N is 8. | - |/R: \<N> | Specifies the number of retries on failed copies. The default value of N is 1,000,000 (one million retries). | - |/W: \<N> | Specifies the wait time between retries, in seconds. 
The default value of N is 30 (wait time 30 seconds). | - |/NFL | Specifies that file names are not to be logged. | - |/NDL | Specifies that directory names are not to be logged. | - |/FFT | Assumes FAT file times (two-second precision). | - |/Log:\<Log File> | Writes the status output to the log file (overwrites the existing log file). | -- Multiple disks can be used in parallel with multiple jobs running on each disk. --6. Check the copy status when the job is in progress. The following sample shows the output of the robocopy command to copy files to the Data Box Disk. + One benefit of using a file copy tool is the ability to initiate multiple copy jobs, as in the following example using the Robocopy tool: - ``` - C:\Users>robocopy - - - ROBOCOPY :: Robust File Copy for Windows - - - + `Robocopy <source> <destination> * /MT:64 /E /R:1 /W:1 /NFL /NDL /FFT /Log:c:\RobocopyLog.txt` ++ >[!NOTE] + > The parameters used in this example are based on the environment used during in-house testing. Your parameters and values are likely different. ++ The parameters and options for the command are used as follows: ++ |Parameters/Options |Description | + |-|| + |Source | Specifies the path to the source directory. | + |Destination | Specifies the path to the destination directory. | + |/E | Copies subdirectories including empty directories. | + |/MT[:n] | Creates multi-threaded copies with *n* threads where *n* is an integer between 1 and 128.<br>The default value for *n* is 8. | + |/R: \<n> | Specifies the number of retries on failed copies.<br>The default value of *n* is 1,000,000 retries. | + |/W: \<n> | Specifies the wait time between retries, in seconds.<br>The default value of *n* is 30 and is equivalent to a wait time of 30 seconds. | + |/NFL | Specifies that file names aren't logged. | + |/NDL | Specifies that directory names aren't logged. | + |/FFT | Assumes FAT file times with a resolution precision of two seconds.
| + |/Log:\<Log File> | Writes the status output to the log file.<br>Any existing log file is overwritten. | ++ Multiple disks can be used in parallel with multiple jobs running on each disk. Keep in mind that duplicate filenames are either overwritten or result in a copy error. ++1. Check the copy status when the job is in progress. The following sample shows the output of the robocopy command to copy files to the Data Box Disk. ++ ```Sample output + + C:\Users>robocopy + - + ROBOCOPY :: Robust File Copy for Windows + - + Started : Thursday, March 8, 2018 2:34:53 PM- Simple Usage :: ROBOCOPY source destination /MIR - - source :: Source Directory (drive:\path or \\server\share\path). - destination :: Destination Dir (drive:\path or \\server\share\path). - /MIR :: Mirror a complete directory tree. - - For more usage information run ROBOCOPY /? - - **** /MIR can DELETE files as well as copy them ! - - C:\Users>Robocopy C:\Git\azure-docs-pr\contributor-guide \\10.126.76.172\devicemanagertest1_AzFile\templates /MT:64 /E /R:1 /W:1 /FFT - - - ROBOCOPY :: Robust File Copy for Windows - - + Simple Usage :: ROBOCOPY source destination /MIR ++ source :: Source Directory (drive:\path or \\server\share\path). + destination :: Destination Dir (drive:\path or \\server\share\path). + /MIR :: Mirror a complete directory tree. ++ For more usage information run ROBOCOPY /? ++ **** /MIR can DELETE files as well as copy them ! 
+ C:\Users>Robocopy C:\Repository\guides \\10.126.76.172\AzFileUL\templates /MT:64 /E /R:1 /W:1 /FFT + - + ROBOCOPY :: Robust File Copy for Windows + - + Started : Thursday, March 8, 2018 2:34:58 PM- Source : C:\Git\azure-docs-pr\contributor-guide\ + Source : C:\Repository\guides\ Dest : \\10.126.76.172\devicemanagertest1_AzFile\templates\ Files : *.* Perform the following steps to connect and copy data from your computer to the D - 100% New File 206 C:\Git\azure-docs-pr\contributor-guide\article-metadata.md - 100% New File 209 C:\Git\azure-docs-pr\contributor-guide\content-channel-guidance.md - 100% New File 732 C:\Git\azure-docs-pr\contributor-guide\contributor-guide-index.md - 100% New File 199 C:\Git\azure-docs-pr\contributor-guide\contributor-guide-pr-criteria.md - New File 178 C:\Git\azure-docs-pr\contributor-guide\contributor-guide-pull-request-co100% .md - New File 250 C:\Git\azure-docs-pr\contributor-guide\contributor-guide-pull-request-et100% e.md - 100% New File 174 C:\Git\azure-docs-pr\contributor-guide\create-images-markdown.md - 100% New File 197 C:\Git\azure-docs-pr\contributor-guide\create-links-markdown.md - 100% New File 184 C:\Git\azure-docs-pr\contributor-guide\create-tables-markdown.md - 100% New File 208 C:\Git\azure-docs-pr\contributor-guide\custom-markdown-extensions.md - 100% New File 210 C:\Git\azure-docs-pr\contributor-guide\file-names-and-locations.md - 100% New File 234 C:\Git\azure-docs-pr\contributor-guide\git-commands-for-master.md - 100% New File 186 C:\Git\azure-docs-pr\contributor-guide\release-branches.md - 100% New File 240 C:\Git\azure-docs-pr\contributor-guide\retire-or-rename-an-article.md - 100% New File 215 C:\Git\azure-docs-pr\contributor-guide\style-and-voice.md - 100% New File 212 C:\Git\azure-docs-pr\contributor-guide\syntax-highlighting-markdown.md - 100% New File 207 C:\Git\azure-docs-pr\contributor-guide\tools-and-setup.md + 100% New File 206 C:\Repository\guides\article-metadata.md + 100% New File 209 
C:\Repository\guides\content-channel-guidance.md + 100% New File 732 C:\Repository\guides\index.md + 100% New File 199 C:\Repository\guides\pr-criteria.md + 100% New File 178 C:\Repository\guides\pull-request-co.md + 100% New File 250 C:\Repository\guides\pull-request-ete.md + 100% New File 174 C:\Repository\guides\create-images-markdown.md + 100% New File 197 C:\Repository\guides\create-links-markdown.md + 100% New File 184 C:\Repository\guides\create-tables-markdown.md + 100% New File 208 C:\Repository\guides\custom-markdown-extensions.md + 100% New File 210 C:\Repository\guides\file-names-and-locations.md + 100% New File 234 C:\Repository\guides\git-commands-for-master.md + 100% New File 186 C:\Repository\guides\release-branches.md + 100% New File 240 C:\Repository\guides\retire-or-rename-an-article.md + 100% New File 215 C:\Repository\guides\style-and-voice.md + 100% New File 212 C:\Repository\guides\syntax-highlighting-markdown.md + 100% New File 207 C:\Repository\guides\tools-and-setup.md Total Copied Skipped Mismatch FAILED Extras Perform the following steps to connect and copy data from your computer to the D Speed : 5620 Bytes/sec. Speed : 0.321 MegaBytes/min.- Ended : Thursday, March 8, 2018 2:34:59 PM - - C:\Users> + Ended : Thursday, August 31, 2023 2:34:59 PM + ```- + To optimize the performance, use the following robocopy parameters when copying the data. 
- | Platform | Mostly small files < 512 KB | Mostly medium files 512 KB-1 MB | Mostly large files > 1 MB | - |-|--|--|--| - | Data Box Disk | 4 Robocopy sessions* <br> 16 threads per sessions | 2 Robocopy sessions* <br> 16 threads per sessions | 2 Robocopy sessions* <br> 16 threads per sessions | - + | Platform | Mostly small files < 512 KB | Mostly medium files 512 KB-1 MB | Mostly large files > 1 MB | + ||--|-|| + | Data Box Disk | 4 Robocopy sessions*<br>16 threads per session | 2 Robocopy sessions*<br>16 threads per session | 2 Robocopy sessions*<br>16 threads per session | + **Each Robocopy session can have a maximum of 7,000 directories and 150 million files.*- - >[!NOTE] - > The parameters suggested above are based on the environment used in inhouse testing. - - For more information on Robocopy command, go to [Robocopy and a few examples](https://social.technet.microsoft.com/wiki/contents/articles/1073.robocopy-and-a-few-examples.aspx). -6. Open the target folder to view and verify the copied files. If you have any errors during the copy process, download the log files for troubleshooting. The log files are located as specified in the robocopy command. - + For more information on the Robocopy command, read the [Robocopy and a few examples](https://social.technet.microsoft.com/wiki/contents/articles/1073.robocopy-and-a-few-examples.aspx) article. ++1. Open the target folder, then view and verify the copied files. If you have any errors during the copy process, download the log files for troubleshooting. The robocopy command's output specifies the location of the log files. + ### Split and copy data to disks -This optional procedure may be used when you are using multiple disks and have a large dataset that needs to be split and copied across all the disks. The Data Box Split Copy tool helps split and copy the data on a Windows computer. +The Data Box Split Copy tool helps split and copy data across two or more Azure Data Box Disks.
The tool is only available for use on a Windows computer. This optional procedure is helpful when you have a large dataset that needs to be split and copied across several disks. >[!IMPORTANT]-> Data Box Split Copy tool also validates your data. If you use Data Box Split Copy tool to copy data, you can skip the [validation step](#validate-data). -> Split Copy tool is not supported with managed disks. +> The Data Box Split Copy tool can also validate your data. If you use Data Box Split Copy tool to copy data, you can skip the [validation step](#validate-data). +> The Split Copy tool is not supported with managed disks. -1. On your Windows computer, ensure that you have the Data Box Split Copy tool downloaded and extracted in a local folder. This tool was downloaded when you downloaded the Data Box Disk toolset for Windows. -2. Open File Explorer. Make a note of the data source drive and drive letters assigned to Data Box Disk. +1. On your Windows computer, ensure that you have the Data Box Split Copy tool downloaded and extracted in a local folder. This tool is included within the Data Box Disk toolset for Windows. +1. Open File Explorer. Make a note of the data source drive and drive letters assigned to Data Box Disk. - ![Split copy data](media/data-box-disk-deploy-copy-data/split-copy-1.png) - -3. Identify the source data to copy. For instance, in this case: - - Following block blob data was identified. + :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-1-sml.png" alt-text="Screenshot of the data source drive and drive letters assigned to Data Box Disk." lightbox="media/data-box-disk-deploy-copy-data/split-copy-1.png"::: - ![Split copy data 2](media/data-box-disk-deploy-copy-data/split-copy-2.png) +1. Identify the source data to copy. For instance, in this case: + - The following block blob data was identified. - - Following page blob data was identified. 
+ :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-2-sml.png" alt-text="Screenshot of block blob data identified for the copy process." lightbox="media/data-box-disk-deploy-copy-data/split-copy-2.png"::: - ![Split copy data 3](media/data-box-disk-deploy-copy-data/split-copy-3.png) - -4. Go to the folder where the software is extracted. Locate the `SampleConfig.json` file in that folder. This is a read-only file that you can modify and save. + - The following page blob data was identified. - ![Split copy data 4](media/data-box-disk-deploy-copy-data/split-copy-4.png) - -5. Modify the `SampleConfig.json` file. - - - Provide a job name. This creates a folder in the Data Box Disk and eventually becomes the container in the Azure storage account associated with these disks. The job name must follow the Azure container naming conventions. - - Supply a source path making note of the path format in the `SampleConfigFile.json`. - - Enter the drive letters corresponding to the target disks. The data is taken from the source path and copied across multiple disks. - - Provide a path for the log files. By default, it is sent to the current directory where the `.exe` is located. + :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-3-sml.png" alt-text="Screenshot of page blob data identified for the copy process." lightbox="media/data-box-disk-deploy-copy-data/split-copy-3.png"::: - ![Split copy data 5](media/data-box-disk-deploy-copy-data/split-copy-5.png) +1. Navigate to the folder where the software is extracted and locate the `SampleConfig.json` file. This file is a read-only file that you can modify and save. -6. To validate the file format, go to `JSONlint`. Save the file as `ConfigFile.json`. + :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-4-sml.png" alt-text="Screenshot showing the location of the sample configuration file." 
lightbox="media/data-box-disk-deploy-copy-data/split-copy-4.png"::: - ![Split copy data 6](media/data-box-disk-deploy-copy-data/split-copy-6.png) - -7. Open a Command Prompt window. +1. Modify the `SampleConfig.json` file. -8. Run the `DataBoxDiskSplitCopy.exe`. Type + - Provide a job name. A folder with this name is created on the Data Box Disk. It's also used to create a container in the Azure storage account associated with these disks. The job name must follow the [Azure container naming conventions](/rest/api/storageservices/naming-and-referencing-containers--blobs--and-metadata). + - Supply a source path, making note of the path format in the `SampleConfigFile.json`. + - Enter the drive letters corresponding to the target disks. Data is taken from the source path and copied across multiple disks. + - Provide a path for the log files. By default, log files are sent to the directory where the `.exe` file is located. + - To validate the file format, go to `JSONlint`. - `DataBoxDiskSplitCopy.exe PrepImport /config:<Your-config-file-name.json>` + :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-5.png" alt-text="Screenshot showing the contents of the sample configuration file."::: - ![Split copy data 7](media/data-box-disk-deploy-copy-data/split-copy-7.png) - -9. Enter to continue the script. + - Save the file as `ConfigFile.json`. ++ :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-6-sml.png" alt-text="Screenshot showing the location of the replacement configuration file." lightbox="media/data-box-disk-deploy-copy-data/split-copy-6.png"::: - ![Split copy data 8](media/data-box-disk-deploy-copy-data/split-copy-8.png) +1. Open a Command Prompt window with elevated privileges and run the `DataBoxDiskSplitCopy.exe` using the following command. ++ ```Command prompt + DataBoxDiskSplitCopy.exe PrepImport /config:ConfigFile.json + ``` ++1. When prompted, press any key to continue running the tool. 
++ :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-8-sml.png" alt-text="Screenshot showing the command prompt window executing the Split Copy tool." lightbox="media/data-box-disk-deploy-copy-data/split-copy-8.png"::: -10. When the dataset is split and copied, the summary of the Split Copy tool for the copy session is presented. A sample output is shown below. +1. After the dataset is split and copied, the summary of the Split Copy tool for the copy session is presented as shown in the following sample output. - ![Split copy data 9](media/data-box-disk-deploy-copy-data/split-copy-9.png) - -11. Verify that the data is split across the target disks. - - ![Split copy data 10](media/data-box-disk-deploy-copy-data/split-copy-10.png) - ![Split copy data 11](media/data-box-disk-deploy-copy-data/split-copy-11.png) - - If you examine the contents of `n:` drive further, you will see that two sub-folders are created corresponding to block blob and page blob format data. - - ![Split copy data 12](media/data-box-disk-deploy-copy-data/split-copy-12.png) + :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-9-sml.png" alt-text="Screenshot showing the summary presented after successful execution of the Split Copy tool." lightbox="media/data-box-disk-deploy-copy-data/split-copy-9.png"::: -12. If the copy session fails, then to recover and resume, use the following command: +1. Verify that the data is split properly across the target disks. - `DataBoxDiskSplitCopy.exe PrepImport /config:<configFile.json> /ResumeSession` + :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-10-sml.png" alt-text="Screenshot indicating resulting data split properly across the first of two target disks." lightbox="media/data-box-disk-deploy-copy-data/split-copy-10.png"::: -If you see errors using the Split Copy tool, go to how to [troubleshoot Split Copy tool errors](data-box-disk-troubleshoot-data-copy.md). 
+ :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-11-sml.png" alt-text="Screenshot indicating resulting data split properly across the second of two target disks." lightbox="media/data-box-disk-deploy-copy-data/split-copy-11.png"::: -After the data copy is complete, you can proceed to validate your data. If you used the Split Copy tool, skip the validation (Split Copy tool validates as well) and advance to the next tutorial. + Examine the `H:` drive contents and ensure that two subfolders are created that correspond to block blob and page blob format data. + :::image type="content" source="media/data-box-disk-deploy-copy-data/split-copy-12-sml.png" alt-text="Screenshot showing two subfolders created which correspond to block blob and page blob format data." lightbox="media/data-box-disk-deploy-copy-data/split-copy-12.png"::: -## Validate data +1. If the copy session fails, use the following command to recover and resume: -If you did not use the Data Box Split Copy tool to copy data, you will need to validate your data. To verify the data, perform the following steps. + `DataBoxDiskSplitCopy.exe PrepImport /config:ConfigFile.json /ResumeSession` -1. Run the `DataBoxDiskValidation.cmd` for checksum validation in the *DataBoxDiskImport* folder of your drive. This is available for Windows environment only. Linux users need to validate that the source data that is copied to the disk meets the [prerequisites](./data-box-disk-limits.md). - - ![Data Box Disk validation tool output](media/data-box-disk-deploy-copy-data/data-box-disk-validation-tool-output.png) +If you encounter errors while using the Split Copy tool, follow the steps within the [troubleshoot Split Copy tool errors](data-box-disk-troubleshoot-data-copy.md) article. ++>[!IMPORTANT] +> The Data Box Split Copy tool also validates your data. If you use Data Box Split Copy tool to copy data, you can skip the [validation step](#validate-data). 
+> The Split Copy tool is not supported with managed disks. -2. Choose the appropriate option. **We recommend that you always validate the files and generate checksums by selecting option 2**. Depending upon your data size, this step may take a while. Once the script has completed, exit out of the command window. If there are any errors during validation and checksum generation, you are notified and a link to the error logs is also provided. +## Validate data - ![Checksum output](media/data-box-disk-deploy-copy-data/data-box-disk-checksum-output.png) +If you didn't use the Data Box Split Copy tool to copy data, you need to validate your data. Perform the following steps on each of your Data Box Disks to verify the data. If you encounter errors during validation, follow the steps within the [troubleshoot validation errors](data-box-disk-troubleshoot.md) article. - > [!TIP] - > - Reset the tool between two runs. - > - The checksum process may take more time if you have a large data set containing small files (~KBs). If you use option 1 and skip checksum creation, then you need to independently verify the data integrity of the uploaded data in Azure preferably via checksums before you delete any copies of the data in your possession. +1. Run `DataBoxDiskValidation.cmd` for checksum validation in the *DataBoxDiskImport* folder of your drive. This tool is only available for the Windows environment. Linux users need to validate that the source data copied to the disk meets [Azure Data Box prerequisites](./data-box-disk-limits.md). -3. If using multiple disks, run the command for each disk. + :::image type="content" source="media/data-box-disk-deploy-copy-data/validation-tool-output-sml.png" alt-text="Screenshot showing Data Box Disk validation tool output." lightbox="media/data-box-disk-deploy-copy-data/validation-tool-output.png"::: -If you see errors during validation, see [troubleshoot validation errors](data-box-disk-troubleshoot.md). +1. 
Choose the appropriate validation option when prompted. **We recommend that you always validate the files and generate checksums by selecting option 2**. After the script has completed, exit out of the command window. The time required for validation to complete depends upon the size of your data. The tool notifies you of any errors encountered during validation and checksum generation, and provides you with a link to the error logs. ++ :::image type="content" source="media/data-box-disk-deploy-copy-data/checksum-output-sml.png" alt-text="Screenshot showing a failed execution attempt and indicating the location of the corresponding log file." lightbox="media/data-box-disk-deploy-copy-data/checksum-output.png"::: ++ > [!TIP] + > - Reset the tool between two runs. + > - The checksum process may take more time if you have a large data set containing many files that take up relatively little storage capacity. If you validate files and skip checksum creation, you should independently verify data integrity on the Data Box Disk prior to deleting any copies. This verification ideally includes generating checksums. ## Next steps -In this tutorial, you learned about Azure Data Box Disk topics such as: +In this tutorial, you learned how to complete the following tasks with Azure Data Box Disk: > [!div class="checklist"] > * Copy data to Data Box Disk Advance to the next tutorial to learn how to return the Data Box Disk and verify > [!div class="nextstepaction"] > [Ship your Azure Data Box back to Microsoft](./data-box-disk-deploy-picked-up.md) +<!-- ::: zone-end-+--> +<!-- ::: zone target="chromeless" ### Copy data to disks Take the following steps to verify your data. For more information on data validation, see [Validate data](#validate-data). If you experience errors during validation, see [troubleshoot validation errors](data-box-disk-troubleshoot.md). ::: zone-end+--> |
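The validation step in the copy-data article above has the tool generate per-file checksums (option 2) so the data on the disk can later be verified against what reaches Azure. The internals of `DataBoxDiskValidation.cmd` aren't published; the following Python sketch is only a conceptual illustration of that idea, with invented helper names and MD5 chosen arbitrarily: build a checksum manifest of a copy destination, then re-verify it later to catch any changed or missing file.

```python
import hashlib
from pathlib import Path


def build_checksum_manifest(root: str) -> dict:
    """Walk a copy destination and record an MD5 digest for every file,
    keyed by path relative to the root."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.md5()
            with open(path, "rb") as f:
                # Read in 1 MiB chunks so large files don't exhaust memory.
                for chunk in iter(lambda: f.read(1024 * 1024), b""):
                    digest.update(chunk)
            manifest[str(path.relative_to(root))] = digest.hexdigest()
    return manifest


def verify_against_manifest(root: str, manifest: dict) -> list:
    """Return the relative paths whose current digest no longer matches
    the manifest (covers both tampered and missing files)."""
    current = build_checksum_manifest(root)
    return sorted(k for k, v in manifest.items() if current.get(k) != v)
```

Verifying an unchanged tree returns an empty list; any modified or deleted file is reported by its relative path, which mirrors why keeping checksums until the Azure upload is confirmed is worthwhile.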
databox | Data Box Disk Deploy Ordered | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-deploy-ordered.md | +# Doc scores: +# 10/21/22: 75 (1921/15) +# 09/24/23: 100 (1996/0) + # Tutorial: Order an Azure Data Box Disk -Azure Data Box Disk is a hybrid cloud solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to solid-state disks (SSDs) supplied by Microsoft and ship the disks back. This data is then uploaded to Azure. +Azure Data Box Disk is a hybrid cloud solution that allows you to import your on-premises data into Azure in a quick, easy, and reliable way. You transfer your data to solid-state disks (SSDs) supplied by Microsoft and ship the disks back. This data is then uploaded to Azure. This tutorial describes how you can order an Azure Data Box Disk. In this tutorial, you learn about: This tutorial describes how you can order an Azure Data Box Disk. In this tutori > > * Order a Data Box Disk > * Track the order-> * Cancel the order +> * Cancel the order ## Prerequisites Before you begin, make sure that: * You have a client computer available from which you can copy the data. Your client computer must: * Run a [Supported operating system](data-box-disk-system-requirements.md#supported-operating-systems-for-clients).- * Have other [required software](data-box-disk-system-requirements.md#other-required-software-for-windows-clients) installed if it is a Windows client. + * Have other [required software](data-box-disk-system-requirements.md#other-required-software-for-windows-clients) installed if it's a Windows client. ## Order Data Box Disk Take the following steps to order Data Box Disk. 1. In the upper left corner of the portal, click **+ Create a resource**, and search for *Azure Data Box*. Click **Azure Data Box**. 
- ![Search Azure Data Box 1](media/data-box-disk-deploy-ordered/search-data-box11.png) + :::image type="content" source="media/data-box-disk-deploy-ordered/search-data-box11-sml.png" alt-text="Search Azure Data Box 1" lightbox="media/data-box-disk-deploy-ordered/search-data-box11.png"::: -2. Click **Create**. +1. Click **Create**. -3. Check if Data Box service is available in your region. Enter or select the following information and click **Apply**. +1. Check if Data Box service is available in your region. Enter or select the following information and click **Apply**. - ![Select Data Box Disk option](media/data-box-disk-deploy-ordered/select-data-box-sku-1.png) + :::image type="content" source="media/data-box-disk-deploy-ordered/select-data-box-sku-1-sml.png" alt-text="Select Data Box Disk option" lightbox="media/data-box-disk-deploy-ordered/select-data-box-sku-1.png"::: |Setting|Value| ||| |Transfer type| Import to Azure|- |Subscription|Select the subscription for which Data Box service is enabled.<br> The subscription is linked to your billing account. | - |Resource group| Select the resource group you want to use to order a Data Box. <br> A resource group is a logical container for the resources that can be managed or deployed together.| + |Subscription|Select the subscription for which Data Box service is enabled.<br /> The subscription is linked to your billing account. | + |Resource group| Select the resource group you want to use to order a Data Box. <br /> A resource group is a logical container for the resources that can be managed or deployed together.| |Source country/region | Select the country/region where your data currently resides.| |Destination Azure region|Select the Azure region where you want to transfer data.| -4. Select **Data Box Disk**. The maximum capacity of the solution for a single order of 5 disks is 35 TB. You could create multiple orders for larger data sizes. 
-- ![Select Data Box Disk option 2](media/data-box-disk-deploy-ordered/select-data-box-sku-zoom.png) +1. Select **Data Box Disk**. The maximum capacity of the solution for a single order of five disks is 35 TB. You could create multiple orders for larger data sizes. -5. In **Order**, specify the **Order details** in the **Basics** tab. Enter or select the following information. + :::image type="content" alt-text="Select Data Box Disk option 2" source="media/data-box-disk-deploy-ordered/select-data-box-sku-zoom.png"::: +1. In **Order**, specify the **Order details** in the **Basics** tab. Enter or select the following information. |Setting|Value| ||| |Subscription| The subscription is automatically populated based on your earlier selection. | |Resource group| The resource group you selected previously. |- |Import order name|Provide a friendly name to track the order.<br> The name can have between 3 and 24 characters that can be letters, numbers, and hyphens. <br> The name must start and end with a letter or a number. | - |Number of disks per order| Enter the number of disks you would like to order. <br> There can be a maximum of 5 disks per order (1 disk = 7TB). | - |Disk passkey| Supply the disk passkey if you check **Use custom key instead of Azure generated passkey**. <br> Provide a 12 to 32-character alphanumeric key that has at least one numeric and one special character. The allowed special characters are `@?_+`. <br> You can choose to skip this option and use the Azure generated passkey to unlock your disks.| + |Import order name|Provide a friendly name to track the order.<br /> The name can have between 3 and 24 characters that can be letters, numbers, and hyphens. <br /> The name must start and end with a letter or a number. | + |Number of disks per order| Enter the number of disks you would like to order. <br /> There can be a maximum of five disks per order (1 disk = 7TB). 
| + |Disk passkey| Supply the disk passkey if you check **Use custom key instead of Azure generated passkey**. <br /> Provide a 12-character to 32-character alphanumeric key that has at least one numeric and one special character. The allowed special characters are `@?_+`. <br /> You can choose to skip this option and use the Azure generated passkey to unlock your disks.| - ![Screenshot of order details](media/data-box-disk-deploy-ordered/data-box-disk-order.png) + :::image type="content" alt-text="Screenshot of order details" source="media/data-box-disk-deploy-ordered/data-box-disk-order-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-order.png"::: ++1. On the **Data destination** screen, select the **Data destination** - either storage accounts or managed disks (or both). ++ > [!CAUTION] + > Blob data can be uploaded to the archive tier, but will need to be rehydrated before reading or modifying. Data copied to the archive tier must remain for at least 180 days or be subject to an early deletion charge. Archive tier is not supported for ZRS, GZRS, or RA-GZRS accounts. -6. On the **Data destination** screen, select the **Data destination** - either storage accounts or managed disks (or both). - |Setting|Value| |||- |Data destination |Choose from storage account or managed disks or both.<br> Based on the specified Azure region, select a storage account from the filtered list of an existing storage account. Data Box Disk can be linked with only 1 storage account.<br> You can also create a new General-purpose v1, General-purpose v2, or Blob storage account.<br> Storage accounts with virtual networks are supported. To allow Data Box service to work with secured storage accounts, enable the trusted services within the storage account network firewall settings. For more information, see how to Add Azure Data Box as a trusted service.| - |Destination Azure region| Select a region for your storage account. 
<br> Currently, storage accounts in all regions in US, West and North Europe, Canada, and Australia are supported. | - |Resource group| If using Data Box Disk to create managed disks from the on-premises VHDs, you need to provide the resource group.<br> Create a new resource group if you intend to create managed disks from on-premises VHDs. Use an existing resource group only if it was created for Data Box Disk order for managed disk by Data Box service.<br> Only one resource group is supported.| + |Data destination |Choose from storage account or managed disks or both.<br /> Based on the specified Azure region, select a storage account from the filtered list of an existing storage account. Data Box Disk can be linked with only one storage account.<br /> You can also create a new General-purpose v1, General-purpose v2, or Blob storage account.<br /> Storage accounts with virtual networks are supported. To allow Data Box service to work with secured storage accounts, enable the trusted services within the storage account network firewall settings. For more information, see how to Add Azure Data Box as a trusted service. <br /> To enable support for large file shares, select **Enable large file shares**. To enable the ability to move blob data to the archive tier, select **Enable copy to archive**. | + |Destination Azure region| Select a region for your storage account. <br /> Currently, storage accounts in all regions in US, West and North Europe, Canada, and Australia are supported. | + |Resource group| If using Data Box Disk to create managed disks from the on-premises VHDs, you need to provide the resource group.<br /> Create a new resource group if you intend to create managed disks from on-premises VHDs. 
Use an existing resource group only if it was created for Data Box Disk order for managed disk by Data Box service.<br /> Only one resource group is supported.| - ![Screenshot of Data Box Disk data destination.](media/data-box-disk-deploy-ordered/data-box-disk-order-destination.png) + :::image type="content" alt-text="Screenshot of Data Box Disk data destination." source="media/data-box-disk-deploy-ordered/data-box-disk-order-destination-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-order-destination.png"::: The storage account specified for managed disks is used as a staging storage account. The Data Box service uploads the VHDs to the staging storage account and then converts those into managed disks and moves to the resource groups. For more information, see Verify data upload to Azure. -7. Select **Next: Security>** to continue. +1. Select **Next: Security>** to continue. The **Security** screen lets you use your own encryption key.- + All settings on the **Security** screen are optional. If you don't change any settings, the default settings will apply. -8. If you want to use your own customer-managed key to protect the unlock passkey for your new resource, expand **Encryption type**. - - ![Screenshot of Data Box Disk encryption type.](media/data-box-disk-deploy-ordered/data-box-disk-encryption.png) +1. If you want to use your own customer-managed key to protect the unlock passkey for your new resource, expand **Encryption type**. ++ :::image type="content" alt-text="Screenshot of Data Box Disk encryption type." source="media/data-box-disk-deploy-ordered/data-box-disk-encryption-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-encryption.png"::: Configuring a customer-managed key for your Azure Data Box Disk is optional. By default, Data Box uses a Microsoft managed key to protect the unlock passkey. Take the following steps to order Data Box Disk. 1. 
To use a customer-managed key, select **Customer managed key** as the key type. Then choose **Select a key vault and key**. - ![Screenshot of Customer managed key selection.](media/data-box-disk-deploy-ordered/data-box-disk-customer-key.png) + :::image type="content" alt-text="Screenshot of Customer managed key selection." source="media/data-box-disk-deploy-ordered/data-box-disk-customer-key-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-customer-key.png"::: 1. In the **Select key from Azure Key Vault** blade: - - The **Subscription** is automatically populated. -- - For **Key vault**, you can select an existing key vault from the dropdown list. + * The **Subscription** is automatically populated. + * For **Key vault**, you can select an existing key vault from the dropdown list. - ![Screenshot of existing key vault.](media/data-box-disk-deploy-ordered/data-box-disk-select-key-vault.png) + :::image type="content" alt-text="Screenshot of existing key vault." source="media/data-box-disk-deploy-ordered/data-box-disk-select-key-vault.png"::: Or select **Create new key vault** if you want to create a new key vault. - ![Screenshot of new key vault.](media/data-box-disk-deploy-ordered/data-box-disk-create-new-key-vault.png) + :::image type="content" alt-text="Screenshot of new key vault." source="media/data-box-disk-deploy-ordered/data-box-disk-create-new-key-vault-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-create-new-key-vault.png"::: Then, on the **Create key vault** screen, enter the resource group and a key vault name. Ensure that **Soft delete** and **Purge protection** are enabled. Accept all other defaults, and select **Review + Create**. - ![Screenshot of Create key vault blade.](media/data-box-disk-deploy-ordered/data-box-disk-key-vault-blade.png) + :::image type="content" alt-text="Screenshot of Create key vault blade." 
source="media/data-box-disk-deploy-ordered/data-box-disk-key-vault-blade-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-key-vault-blade.png"::: Review the information for your key vault, and select **Create**. Wait for a couple minutes for key vault creation to complete. - ![Screenshot of Review + create.](media/data-box-disk-deploy-ordered/data-box-disk-create-key-vault.png) + :::image type="content" alt-text="Screenshot of Review + create." source="media/data-box-disk-deploy-ordered/data-box-disk-create-key-vault-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-create-key-vault.png"::: 1. The **Select a key** blade will display your selected key vault.- - ![Screenshot of new key vault 2.](media/data-box-disk-deploy-ordered/data-box-disk-new-key-vault.png) - ++ :::image type="content" alt-text="Screenshot of new key vault 2." source="media/data-box-disk-deploy-ordered/data-box-disk-new-key-vault-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-new-key-vault.png"::: + If you want to create a new key, select **Create new key**. You must use an **RSA key**. The size can be 2048 or greater. Enter a name for your new key, accept the other defaults, and select **Create**. - ![Screenshot of Create new key.](media/data-box-disk-deploy-ordered/data-box-disk-new-key.png) + :::image type="content" alt-text="Screenshot of Create new key." source="media/data-box-disk-deploy-ordered/data-box-disk-new-key-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-new-key.png"::: - You'll be notified when the key has been created in your key vault. Your new key will be selected and displayed on the **Select a key** blade. + You're notified when the key has been created in your key vault. Your new key is selected on the **Select a key** blade. 1. 
Select the **Version** of the key to use, and then choose **Select**.- - ![Screenshot of key version.](media/data-box-disk-deploy-ordered/data-box-disk-key-version.png) - ++ :::image type="content" alt-text="Screenshot of key version." source="media/data-box-disk-deploy-ordered/data-box-disk-key-version-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-key-version.png"::: + If you want to create a new key version, select **Create new version**. - ![Screenshot of new key version.](media/data-box-disk-deploy-ordered/data-box-disk-new-key-version.png) + :::image type="content" alt-text="Screenshot of new key version." source="media/data-box-disk-deploy-ordered/data-box-disk-new-key-version-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-new-key-version.png"::: Choose settings for the new key version, and select **Create**. - ![Screenshot of new key version settings.](media/data-box-disk-deploy-ordered/data-box-disk-new-key-settings.png) + :::image type="content" alt-text="Screenshot of new key version settings." source="media/data-box-disk-deploy-ordered/data-box-disk-new-key-settings-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-new-key-settings.png"::: The **Encryption type** settings on the **Security** screen show your key vault and key. - ![Screenshot of encryption type settings.](media/data-box-disk-deploy-ordered/data-box-disk-encryption-settings.png) + :::image type="content" alt-text="Screenshot of encryption type settings." source="media/data-box-disk-deploy-ordered/data-box-disk-encryption-settings-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-encryption-settings.png"::: -1. Select a user identity that you'll use to manage access to this resource. Choose **Select a user identity**. In the panel on the right, select the subscription and the managed identity to use. Then choose **Select**. +1. Select a user identity that you use to manage access to this resource. 
Choose **Select a user identity**. In the panel on the right, select the subscription and the managed identity to use. Then choose **Select**. A user-assigned managed identity is a stand-alone Azure resource that can be used to manage multiple resources. For more information, see Managed identity types. If you need to create a new managed identity, follow the guidance in Create, list, delete, or assign a role to a user-assigned managed identity using the Azure portal. - ![Screenshot of user identity.](media/data-box-disk-deploy-ordered/data-box-disk-user-identity.png) + :::image type="content" alt-text="Screenshot of user identity." source="media/data-box-disk-deploy-ordered/data-box-disk-user-identity-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-user-identity.png"::: The user identity is shown in Encryption type settings. - ![Screenshot of user identity 2.](media/data-box-disk-deploy-ordered/data-box-disk-user-identity-2.png) -+ :::image type="content" alt-text="Screenshot of user identity 2." source="media/data-box-disk-deploy-ordered/data-box-disk-user-identity-2-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-user-identity-2.png"::: -8. In the **Contact details** tab, select **Add address** and enter the address details. Click Validate address. The service validates the shipping address for service availability. If the service is available for the specified shipping address, you receive a notification to that effect. +1. In the **Contact details** tab, select **Add address** and enter the address details. Select **Validate address**. The service validates the shipping address for service availability. If the service is available for the specified shipping address, you receive a notification to that effect. If you have chosen self-managed shipping, see [Use self-managed shipping](data-box-disk-portal-customer-managed-shipping.md).
- ![Screenshot of Data Box Disk contact details.](media/data-box-disk-deploy-ordered/data-box-disk-contact-details.png) + :::image type="content" alt-text="Screenshot of Data Box Disk contact details." source="media/data-box-disk-deploy-ordered/data-box-disk-contact-details-sml.png" lightbox="media/data-box-disk-deploy-ordered/data-box-disk-contact-details.png"::: Specify valid email addresses as the service sends email notifications regarding any updates to the order status to the specified email addresses. We recommend that you use a group email so that you continue to receive notifications if an admin in the group leaves. -9. Review the information in the **Review + Order** tab related to the order, contact, notification, and privacy terms. Check the box corresponding to the agreement to privacy terms. +1. Review the information in the **Review + Order** tab related to the order, contact, notification, and privacy terms. Check the box corresponding to the agreement to privacy terms. -10. Click **Order**. The order takes a few minutes to be created. +1. Click **Order**. The order takes a few minutes to be created. ## Track the order After you have placed the order, you can track the status of the order from Azure portal. Go to your order and then go to **Overview** to view the status. The portal shows the job in **Ordered** state. -![Data Box Disk status ordered.](media/data-box-disk-deploy-ordered/data-box-portal-ordered.png) -If the disks are not available, you receive a notification. If the disks are available, Microsoft identifies the disks for shipment and prepares the disk package. During disk preparation, following actions occur: +If the disks aren't available, you receive a notification. If the disks are available, Microsoft identifies the disks for shipment and prepares the disk package. During disk preparation, following actions occur: * Disks are encrypted using AES-128 BitLocker encryption. 
* Disks are locked to prevent unauthorized access to the disks. To cancel this order, in the Azure portal, go to **Overview** and click **Cancel**. You can only cancel when the disks are ordered, and the order is being processed for shipment. Once the order is processed, you can no longer cancel the order. -![Cancel order.](media/data-box-disk-deploy-ordered/cancel-order1.png) To delete a canceled order, go to **Overview** and click **Delete** from the command bar. In this tutorial, you learned about Azure Data Box topics such as: Advance to the next tutorial to learn how to set up your Data Box Disk. > [!div class="nextstepaction"]-> [Set up your Azure Data Box Disk](./data-box-disk-deploy-set-up.md) +> [Set up your Azure Data Box Disk](./data-box-disk-deploy-set-up.md) |
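The Disk passkey rules in the order details above (12 to 32 alphanumeric characters, at least one digit, and at least one of the special characters `@?_+`) can be checked locally before placing the order. A minimal sketch; the helper name `is_valid_passkey` is my own and not part of any Azure tooling:

```python
import re

# Rules from the Disk passkey field: 12-32 characters, letters and digits plus
# the special characters @ ? _ +, with at least one digit and one special char.
PASSKEY_RE = re.compile(r"^(?=.*\d)(?=.*[@?_+])[A-Za-z0-9@?_+]{12,32}$")

def is_valid_passkey(passkey: str) -> bool:
    """Return True if the passkey satisfies the documented constraints."""
    return PASSKEY_RE.fullmatch(passkey) is not None
```

If the check fails, you can fall back to the Azure generated passkey instead of supplying a custom one.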
databox | Data Box Disk Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-limits.md | For the latest information on Azure storage service limits and best practices fo - [Block blobs and page blob conventions](/rest/api/storageservices/understanding-block-blobs--append-blobs--and-page-blobs) > [!IMPORTANT]-> If there are any files or directories that exceed the Azure Storage service limits, or do not conform to Azure Files/Blob naming conventions, then these files or directories are not ingested into the Azure Storage via the Data Box service. +> If there are any files or directories that exceed the Azure Storage service limits, or don't conform to Azure Files/Blob naming conventions, then these files or directories are not ingested into the Azure Storage via the Data Box service. ## Data copy and upload caveats -- Do not copy data directly into the disks. Copy data to pre-created *BlockBlob*, *PageBlob*, and *AzureFile* folders.+- Don't copy data directly into the disks. Copy data to pre-created *BlockBlob*, *PageBlob*, and *AzureFile* folders. - A folder under the *BlockBlob* and *PageBlob* is a container. For instance, containers are created as *BlockBlob/container* and *PageBlob/container*. - If a folder has the same name as an existing container, the folder's contents are merged with the container's contents. Files or blobs that aren't already in the cloud are added to the container. If a file or blob has the same name as a file or blob that's already in the container, the existing file or blob is overwritten. - Every file written into *BlockBlob* and *PageBlob* shares is uploaded as a block blob and page blob respectively. - The hierarchy of files is maintained while uploading to the cloud for both blobs and Azure Files. For example, you copied a file at this path: `<container folder>\A\B\C.txt`. 
This file is uploaded to the same path in cloud.-- Any empty directory hierarchy (without any files) created under *BlockBlob* and *PageBlob* folders is not uploaded.+- Any empty directory hierarchy (without any files) created under *BlockBlob* and *PageBlob* folders isn't uploaded. - If you don't have long paths enabled on the client, and any path and file name in your data copy exceeds 256 characters, the Data Box Split Copy Tool (DataBoxDiskSplitCopy.exe) or the Data Box Disk Validation tool (DataBoxDiskValidation.cmd) will report failures. To avoid this kind of failure, [enable long paths on your Windows client](/windows/win32/fileio/maximum-file-path-limitation?tabs=cmd#enable-long-paths-in-windows-10-version-1607-and-later). - To improve performance during data uploads, we recommend that you [enable large file shares on the storage account and increase share capacity to 100 TiB](../../articles/storage/files/storage-how-to-create-file-share.md#enable-large-file-shares-on-an-existing-account). Large file shares are only supported for storage accounts with locally redundant storage (LRS).-- If there are any errors when uploading data to Azure, an error log is created in the target storage account. The path to this error log is available in the portal when the upload is complete and you can review the log to take corrective action. Do not delete data from the source without verifying the uploaded data.-- File metadata and NTFS permissions are not preserved when the data is uploaded to Azure Files. For example, the *Last modified* attribute of the files will not be kept when the data is copied.+- If there are any errors when uploading data to Azure, an error log is created in the target storage account. The path to this error log is available in the portal when the upload is complete and you can review the log to take corrective action. Don't delete data from the source without verifying the uploaded data. 
+- File metadata and NTFS permissions aren't preserved when the data is uploaded to Azure Files. For example, the *Last modified* attribute of the files won't be kept when the data is copied. - If you specified managed disks in the order, review the following additional considerations: - - You can only have one managed disk with a given name in a resource group across all the precreated folders and across all the Data Box Disk. This implies that the VHDs uploaded to the precreated folders should have unique names. Make sure that the given name does not match an already existing managed disk in a resource group. If VHDs have same names, then only one VHD is converted to managed disk with that name. The other VHDs are uploaded as page blobs into the staging storage account. + - You can only have one managed disk with a given name in a resource group across all the precreated folders and across all the Data Box Disk. This implies that the VHDs uploaded to the precreated folders should have unique names. Make sure that the given name doesn't match an already existing managed disk in a resource group. If VHDs have same names, then only one VHD is converted to managed disk with that name. The other VHDs are uploaded as page blobs into the staging storage account. - Always copy the VHDs to one of the precreated folders. If you copy the VHDs outside of these folders or in a folder that you created, the VHDs are uploaded to Azure Storage account as page blobs and not managed disks.- - Only the fixed VHDs can be uploaded to create managed disks. Dynamic VHDs, differencing VHDs or VHDX files are not supported. - - Non VHD files copied to the precreated managed disk folders will not be converted to a managed disk. + - Only the fixed VHDs can be uploaded to create managed disks. Dynamic VHDs, differencing VHDs or VHDX files aren't supported. + - Non VHD files copied to the precreated managed disk folders won't be converted to a managed disk. 
## Azure storage account size limits Here are the limits on the size of data that can be copied into a storage accoun |--|| | block blob, page blob | For current information about these limits, see [Azure Blob storage scale targets](../storage/blobs/scalability-targets.md#scale-targets-for-blob-storage), [Azure standard storage scale targets](../storage/common/scalability-targets-standard-account.md#scale-targets-for-standard-storage-accounts), and [Azure Files scale targets](../storage/files/storage-files-scale-targets.md). <br /><br /> The limits include data from all the sources, including Data Box Disk.| - ## Azure object size limits Here are the sizes of the Azure objects that can be written. Make sure that all the files that are uploaded conform to these limits. | Azure object type | Default limit | |-|--|-| Block Blob | ~ 4.75 TiB | -| Page Blob | 8 TiB <br> (Every file uploaded in Page Blob format must be 512 bytes aligned, else the upload fails. <br> Both the VHD and VHDX are 512 bytes aligned.) | -|Azure Files | 1 TiB <br> Max. size of share is 5 TiB | -| Managed disks |4 TiB <br> For more information on size and limits, see: <li>[Scalability targets for managed disks](../virtual-machines/disks-scalability-targets.md#managed-virtual-machine-disks)</li>| -+| Block blob | 7 TiB | +| Page blob | 4 TiB <br> Every file uploaded in page blob format must be 512 bytes aligned (an integral multiple), else the upload fails. <br> VHD and VHDX are 512 bytes aligned. 
| +| Azure Files | 1 TiB | +| Managed disks | 4 TiB <br> For more information on size and limits, see: <li>[Scalability targets of Standard SSDs](../virtual-machines/disks-types.md#standard-ssds)</li><li>[Scalability targets of Premium SSDs](../virtual-machines/disks-types.md#premium-ssds)</li><li>[Scalability targets of Standard HDDs](../virtual-machines/disks-types.md#standard-hdds)</li><li>[Pricing and billing of managed disks](../virtual-machines/disks-types.md#billing)</li> ## Azure block blob, page blob, and file naming conventions -| Entity | Conventions | ++<!--| Entity | Conventions | |-|| | Container names for block blob and page blob <br> Fileshare names for Azure Files | Must be a valid DNS name that is 3 to 63 characters long. <br> Must start with a letter or number. <br> Can contain only lowercase letters, numbers, and the hyphen (-). <br> Every hyphen (-) must be immediately preceded and followed by a letter or number. <br> Consecutive hyphens are not permitted in names. | | Directory and file names for Azure files |<li> Case-preserving, case-insensitive and must not exceed 255 characters in length. </li><li> Cannot end with the forward slash (/). </li><li>If provided, it will be automatically removed. </li><li> Following characters are not allowed: <code>" \\ / : \| < > * ?</code></li><li> Reserved URL characters must be properly escaped. </li><li> Illegal URL path characters are not allowed. Code points like \\uE000 are not valid Unicode characters. Some ASCII or Unicode characters, like control characters (0x00 to 0x1F, \\u0081, etc.), are also not allowed. For rules governing Unicode strings in HTTP/1.1 see RFC 2616, Section 2.2: Basic Rules and RFC 3987.
</li><li> Following file names are not allowed: LPT1, LPT2, LPT3, LPT4, LPT5, LPT6, LPT7, LPT8, LPT9, COM1, COM2, COM3, COM4, COM5, COM6, COM7, COM8, COM9, PRN, AUX, NUL, CON, CLOCK$, dot character (.), and two dot characters (..).</li>|-| Blob names for block blob and page blob | Blob names are case-sensitive and can contain any combination of characters. <br> A blob name must be between 1 to 1,024 characters long. <br> Reserved URL characters must be properly escaped. <br>The number of path segments comprising the blob name cannot exceed 254. A path segment is the string between consecutive delimiter characters (for example, the forward slash '/') that correspond to the name of a virtual directory. | +| Blob names for block blob and page blob | Blob names are case-sensitive and can contain any combination of characters. <br> A blob name must be between 1 to 1,024 characters long. <br> Reserved URL characters must be properly escaped. <br>The number of path segments comprising the blob name cannot exceed 254. A path segment is the string between consecutive delimiter characters (for example, the forward slash '/') that correspond to the name of a virtual directory. | --> ## Managed disk naming conventions | Entity | Conventions | |-|--|-| Managed disk names | <li> The name must be 1 to 80 characters long. </li><li> The name must begin with a letter or number, end with a letter, number or underscore. </li><li> The name may contain only letters, numbers, underscores, periods, or hyphens. </li><li> The name should not have spaces or `/`. | +| Managed disk names | <li> The name must be 1 to 80 characters long. </li><li> The name must begin with a letter or number, end with a letter, number or underscore. </li><li> The name may contain only letters, numbers, underscores, periods, or hyphens. </li><li> The name shouldn't have spaces or `/`. | ## Next steps |
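The managed disk naming conventions in the table above translate directly into a regular expression. A short sketch (the helper name is illustrative, not Azure SDK code):

```python
import re

# 1 to 80 characters; begins with a letter or number; ends with a letter,
# number, or underscore; only letters, numbers, underscores, periods, hyphens.
DISK_NAME_RE = re.compile(r"^[A-Za-z0-9](?:[A-Za-z0-9_.-]{0,78}[A-Za-z0-9_])?$")

def is_valid_managed_disk_name(name: str) -> bool:
    """Return True if the name satisfies the managed disk naming conventions."""
    return DISK_NAME_RE.fullmatch(name) is not None
```

Validating VHD file names before copying them to the precreated managed disk folders avoids uploads that end up as page blobs instead of managed disks.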
databox | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/policy-reference.md | Title: Built-in policy definitions for Azure Data Box description: Lists Azure Policy built-in policy definitions for Azure Data Box. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
databox | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Data Box description: Lists Azure Policy Regulatory Compliance controls available for Azure Data Box. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
ddos-protection | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/policy-reference.md | |
defender-for-cloud | Concept Data Security Posture Prepare | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/concept-data-security-posture-prepare.md | The table summarizes support for data-aware posture management. |**Support** | **Details**| | | | |What Azure data resources can I discover? | **Object storage:**<br /><br />[Block blob](../storage/blobs/storage-blobs-introduction.md) storage accounts in Azure Storage v1/v2<br/><br/> Azure Data Lake Storage Gen2<br/><br/>Storage accounts behind private networks are supported.<br/><br/> Storage accounts encrypted with a customer-managed server-side key are supported.<br/><br/> Accounts aren't supported if any of these settings are enabled: [Public network access is disabled](../storage/common/storage-network-security.md#change-the-default-network-access-rule); Storage account is defined as [Azure DNS Zone](https://techcommunity.microsoft.com/t5/azure-storage-blog/public-preview-create-additional-5000-azure-storage-accounts/ba-p/3465466); The storage account endpoint has a [custom domain mapped to it](../storage/blobs/storage-custom-domain-name.md).<br /><br /><br />**Databases**<br /><br />Azure SQL Databases |-|What AWS data resources can I discover? | **Object storage:**<br /><br />AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.<br /><br />**Databases**<br /><br />Any flavor of RDS instances | +|What AWS data resources can I discover? | **Object storage:**<br /><br />AWS S3 buckets<br/><br/> Defender for Cloud can discover KMS-encrypted data, but not data encrypted with a customer-managed key.<br /><br />**Databases**<br /><br />Any flavor of RDS instances <br /><br />Unsupported scenarios: <br />- You can't share a DB snapshot that uses an option group with permanent or persistent options, except for Oracle DB instances that have the **Timezone** or **OLS** option (or both). 
[Learn more](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_ShareSnapshot.html) | |What GCP data resources can I discover? | GCP storage buckets<br/> Standard Class<br/> Geo: region, dual region, multi region | |What permissions do I need for discovery? | Storage account: Subscription Owner<br/> **or**<br/> `Microsoft.Authorization/roleAssignments/*` (read, write, delete) **and** `Microsoft.Security/pricings/*` (read, write, delete) **and** `Microsoft.Security/pricings/SecurityOperators` (read, write)<br/><br/> Amazon S3 buckets and RDS instances: AWS account permission to run Cloud Formation (to create a role). <br/><br/>GCP storage buckets: Google account permission to run script (to create a role). | |What file types are supported for sensitive data discovery? | Supported file types (you can't select a subset) - .doc, .docm, .docx, .dot, .gz, .odp, .ods, .odt, .pdf, .pot, .pps, .ppsx, .ppt, .pptm, .pptx, .xlc, .xls, .xlsb, .xlsm, .xlsx, .xlt, .csv, .json, .psv, .ssv, .tsv, .txt, .xml, .parquet, .avro, .orc.|
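When estimating how much of a dataset sensitive data discovery will actually scan, it can help to filter file names against the supported extension list above. A small sketch; the extension set is copied from the table, and the helper itself is illustrative:

```python
from pathlib import Path

# File types supported for sensitive data discovery (from the table above).
SUPPORTED_EXTENSIONS = {
    ".doc", ".docm", ".docx", ".dot", ".gz", ".odp", ".ods", ".odt", ".pdf",
    ".pot", ".pps", ".ppsx", ".ppt", ".pptm", ".pptx", ".xlc", ".xls",
    ".xlsb", ".xlsm", ".xlsx", ".xlt", ".csv", ".json", ".psv", ".ssv",
    ".tsv", ".txt", ".xml", ".parquet", ".avro", ".orc",
}

def is_scannable(filename: str) -> bool:
    """Return True if the file's extension is on the supported list."""
    return Path(filename).suffix.lower() in SUPPORTED_EXTENSIONS
```

Files with other extensions are simply skipped by discovery, so an inventory pass like this shows what falls outside coverage.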
defender-for-cloud | Integration Servicenow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-servicenow.md | Last updated 11/13/2023 ServiceNow is a cloud-based workflow automation and enterprise-oriented solution that enables organizations to manage and track digital workflows within a unified, robust platform. ServiceNow helps to improve operational efficiencies by streamlining and automating routine work tasks and delivers resilient services that help increase your productivity. -ServiceNow is now integrated with Microsoft Defender for Cloud, which enables customers to connect ServiceNow to their Defender for Cloud environment to prioritize remediation of recommendations that impact your business. Microsoft Defender for Cloud integrates with the ITSM module (incident management). As part of this connection, customers will be able to create/view ServiceNow tickets (linked to recommendations) from Microsoft Defender for Cloud. +ServiceNow is now integrated with Microsoft Defender for Cloud, which enables customers to connect ServiceNow to their Defender for Cloud environment to prioritize remediation of recommendations that impact your business. Microsoft Defender for Cloud integrates with the ITSM module (incident management). As part of this connection, customers can create/view ServiceNow tickets (linked to recommendations) from Microsoft Defender for Cloud. ## Common use cases and scenarios -As part of the integration, you can create and monitor tickets in ServiceNow directly from Microsoft Defender for Cloud:   +As part of the integration, you can create and monitor tickets in ServiceNow directly from Microsoft Defender for Cloud: -- **Incident**: An incident is an unplanned interruption of reduction in the quality of an IT service. It can be reported by a user or monitoring system. ServiceNow’s incident management module helps IT teams track and manage incidents, from initial reporting to resolution. 
-- **Problem**: A problem is the underlying cause of one or more incidents. It’s often a recurring or persistent issue that needs to be addressed to prevent future incidents.   -- **Change**: A change is a planned alternation or addition to an IT service or its supporting infrastructure. A change management module helps IT teams plan, approve, and execute changes in a controlled and systematic manner. It minimizes the risk of service disruptions and maintains service quality.   +- **Incident**: An incident is an unplanned interruption or reduction in the quality of an IT service. It can be reported by a user or monitoring system. ServiceNow’s incident management module helps IT teams track and manage incidents, from initial reporting to resolution. +- **Problem**: A problem is the underlying cause of one or more incidents. It’s often a recurring or persistent issue that needs to be addressed to prevent future incidents. +- **Change**: A change is a planned alteration or addition to an IT service or its supporting infrastructure. A change management module helps IT teams plan, approve, and execute changes in a controlled and systematic manner. It minimizes the risk of service disruptions and maintains service quality. ## Preview prerequisites As part of the integration, you can create and monitor tickets in ServiceNow dir ## Create an application registry in ServiceNow -To onboard ServiceNow to Defender for Cloud, you need a Client ID and Client Secret for the ServiceNow instance. If you don't have a Client ID and Client Secret, follow these steps to create them: +To onboard ServiceNow to Defender for Cloud, you need a Client ID and Client Secret for the ServiceNow instance. If you don't have a Client ID and Client Secret, follow these steps to create them: 1. Sign in to ServiceNow with an account that has permission to modify the Application Registry.-1.
Browse to **System OAuth**, and select **Application Registry**. :::image type="content" border="true" source="./media/integration-servicenow/app-registry.png" alt-text="Screenshot of application registry."::: -1. In the upper right corner, click **New**. +1. In the upper right corner, select **New**. :::image type="content" border="true" source="./media/integration-servicenow/new.png" alt-text="Screenshot of where to start a new instance."::: To onboard ServiceNow to Defender for Cloud, you need a Client ID and Client Sec :::image type="content" border="true" source="./media/integration-servicenow/endpoint.png" alt-text="Screenshot of where to create an OAUTH API endpoint."::: -1. Complete the OAuth Client application details to create a Client ID and Client +1. Complete the OAuth Client application details to create a Client ID and Client Secret: - **Name**: A descriptive name (for example, MDCIntegrationSNOW) - **Client ID**: Client ID is automatically generated by the ServiceNow OAuth server. - **Client Secret**: Enter a secret, or leave it blank to automatically generate the Client Secret for the OAuth application.- - **Refresh Token Lifespan**: Time in seconds that the refresh token is valid. + - **Refresh Token Lifespan**: Time in seconds that the refresh token is valid. - **Access Token Lifespan**: Time in seconds that the access token is valid. >[!NOTE] Secret: :::image type="content" border="true" source="./media/integration-servicenow/app-details.png" alt-text="Screenshot of application details."::: -1. Click **Submit** to save the API Client ID and Client Secret. +1. Select **Submit** to save the API Client ID and Client Secret. After you complete these steps, you can use this integration name (MDCIntegrationSNOW in our example) to connect ServiceNow to Microsoft Defender for Cloud. ## Create ServiceNow Integration with Microsoft Defender for Cloud -1. 
Sign in to [the Azure portal](https://aka.ms/integrations) as at least a [Security Administrator](/entra/identity/role-based-access-control/permissions-reference#security-administrator) and navigate to **Microsoft Defender for Cloud** > **Environment settings**. -1. Click **Integrations** to connect your environment to a third-party ticketing system, which is ServiceNow in this scenario. +1. Sign in to [the Azure portal](https://aka.ms/integrations) as at least a Security Admin and navigate to **Microsoft Defender for Cloud** > **Environment settings**. +1. Select **Integrations** to connect your environment to a third-party ticketing system, which is ServiceNow in this scenario. :::image type="content" border="true" source="./media/integration-servicenow/integrations.png" alt-text="Screenshot of integrations."::: After you complete these steps, you can use this integration name (MDCIntegratio :::image type="content" border="true" source="./media/integration-servicenow/add-servicenow.png" alt-text="Screenshot of how to add ServiceNow."::: Use the instance URL, name, password, Client ID, and Client Secret that you previously created for the application registry to help complete the ServiceNow general information.- - Based on your permissions, you can create an **Integration** by using: - ++ Based on your permissions, you can create an **Integration** by using: + - Management group - Subscription (API only, to reduce subscription level onboardings) - Master connector- - Connector + - Connector - For simplicity, We recommend creating the integration on the higher scope based on the user permissions. For example, if you have permission for a management group, you could create a single integration of a management group rather than create integrations in each one of the subscriptions. + For simplicity, We recommend creating the integration on the higher scope based on the user permissions. 
For example, if you have permission for a management group, you could create a single integration for the management group rather than creating integrations in each of its subscriptions. 1. Choose **Default** or **Customized** based on your requirement.- + The default option creates a Title, Description, and Short description in the backend. The customized option lets you choose other fields such as **Incident data**, **Problems data**, and **Changes data**. :::image type="content" border="true" source="./media/integration-servicenow/customize-fields.png" alt-text="Screenshot of how to customize fields."::: - If you click the drop-down menu, you see **Assigned to**, **Caller**, and **Short description** are grayed out because those are necessary fields. You can choose other fields such as **Assignment group**, **Description**, **Impact**, or **Urgency**. + If you select the drop-down menu, you see **Assigned to**, **Caller**, and **Short description** are grayed out because those are required fields. You can choose other fields such as **Assignment group**, **Description**, **Impact**, or **Urgency**. :::image type="content" border="true" source="./media/integration-servicenow/customize-fields.png" alt-text="Screenshot of how to customize fields."::: -1. A notice appears after successful creation of integration. +1. A notice appears after the integration is created successfully. :::image type="content" border="true" source="./media/integration-servicenow/notice.png" alt-text="Screenshot of notice after successful creation of integration."::: -You can review the integrations in ARG both on the individual integration or on all integrations. +You can review the integrations in ARG, either individually or all together.
:::image type="content" border="true" source="./media/integration-servicenow/all-integrations.png" alt-text="Screenshot of all integrations."::: -You can review an integration, or all integrations, in [Azure Resource Graph (ARG)](/azure/governance/resource-graph), an Azure service that gives you the ability to query across multiple subscriptions. On the Integrations page, click **Open in ARG** to explore the details in ARG. +You can review an integration, or all integrations, in [Azure Resource Graph (ARG)](/azure/governance/resource-graph), an Azure service that gives you the ability to query across multiple subscriptions. On the Integrations page, select **Open in ARG** to explore the details in ARG. :::image type="content" border="true" source="./media/integration-servicenow/open.png" alt-text="Screenshot of how to open in ARG."::: You can review an integration, or all integrations, in [Azure Resource Graph (AR Security admins can now create and assign tickets directly from the Microsoft Defender for Cloud portal. -1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** and select any recommendation with unhealthy resources that you want to create a ServiceNow ticket for and assign an owner to. -1. Click the resource from the unhealthy resources and click **Create assignment**. +1. Navigate to **Microsoft Defender for Cloud** > **Recommendations** and select any recommendation with unhealthy resources that you want to create a ServiceNow ticket for and assign an owner to. +1. Select the resource from the unhealthy resources and select **Create assignment**. :::image type="content" border="true" source="./media/integration-servicenow/create-assignment.png" alt-text="Screenshot of how to create an assignment."::: Security admins can now create and assign tickets directly from the Microsoft De - ServiceNow ticket type – Choose **incident**, **change request**, or **problem**. 
>[!NOTE]- >In ServiceNow, there are several types of tickets that can be used to manage and track different types of incidents, requests, and tasks. Only incident, change request, and problem are supported with this integration. :::image type="content" border="true" source="./media/integration-servicenow/assignment-type.png" alt-text="Screenshot of how to complete the assignment type."::: To assign an affected recommendation to an owner who resides in ServiceNow, we provide a new unified experience for all platforms. Under **Assignment details**, complete the following fields:- - - **Assigned to**: Choose the owner whom you would like to assign the affected recommendation to. - - **Caller**: Represents the user defining the assignment. - - **Description and Short Description**: If you chose a default integration earlier, description, and short description are automatically completed. - - **Remediation timeframe**: Choose the remediation timeframe to desired deadline for the recommendation to be remediated. ++ - **Assigned to**: Choose the owner whom you would like to assign the affected recommendation to. + - **Caller**: Represents the user defining the assignment. + - **Description and Short Description**: If you chose a default integration earlier, the description and short description are automatically completed. + - **Remediation timeframe**: Choose the deadline by which you want the recommendation to be remediated. - **Apply Grace Period**: You can apply a grace period so that the resources that are given a due date don’t affect your Secure Score until they’re overdue. - **Set Email Notifications**: You can send a reminder to the owners or the owner’s direct manager.
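The **Apply Grace Period** option described above follows a simple rule: a resource that has a due date doesn't count against your secure score until that due date has passed. The sketch below illustrates the rule only; the function name and dates are hypothetical, not a Defender for Cloud API:

```python
from datetime import date, timedelta

def affects_secure_score(assigned_on, remediation_days, today, grace_period=True):
    """A resource with a grace period only counts against the secure
    score once its remediation due date has passed."""
    due_date = assigned_on + timedelta(days=remediation_days)
    if not grace_period:
        return True           # no grace period: counts immediately
    return today > due_date   # grace period: counts only when overdue

# Assigned November 1 with a 30-day remediation timeframe:
print(affects_secure_score(date(2023, 11, 1), 30, date(2023, 11, 15)))  # False (in grace period)
print(affects_secure_score(date(2023, 11, 1), 30, date(2023, 12, 5)))   # True (overdue)
```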
Security admins can now create and assign tickets directly from the Microsoft De :::image type="content" border="true" source="./media/integration-servicenow/ticket.png" alt-text="Screenshot of a ticket ID."::: - Click the Ticket ID to go to the newly created incident in the ServiceNow portal. + Select the Ticket ID to go to the newly created incident in the ServiceNow portal. :::image type="content" border="true" source="./media/integration-servicenow/incident.png" alt-text="Screenshot of an incident."::: >[!NOTE]- >When integration is deleted, all the assignments will be deleted. It could take up to 24 hrs. + >When the integration is deleted, all the assignments are deleted. This process can take up to 24 hours. ## Bidirectional synchronization ServiceNow and Microsoft Defender for Cloud automatically synchronize the status of the tickets between the platforms, which includes: -- A verification that a ticket state is still **In progress**. If the ticket state is changed to **Resolved**, **Cancelled**, or **Closed** in ServiceNow, the change is synchronized to Microsoft Defender for Cloud and delete the assignment.-- When the ticket owner is changed in ServiceNow, the assignment owner is updated in Microsoft Defender for Cloud. +- A verification that a ticket state is still **In progress**. If the ticket state is changed to **Resolved**, **Canceled**, or **Closed** in ServiceNow, the change is synchronized to Microsoft Defender for Cloud and the assignment is deleted. +- When the ticket owner is changed in ServiceNow, the assignment owner is updated in Microsoft Defender for Cloud. >[!NOTE] >Synchronization occurs every 24 hours. |
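The bidirectional synchronization rules above amount to a small state mapping. The sketch below is illustrative only (the function and owner names are hypothetical); the ticket states are the ones listed in this section:

```python
CLOSING_STATES = {"Resolved", "Canceled", "Closed"}

def sync_assignment(ticket_state, ticket_owner, assignment_owner):
    """Return the actions Defender for Cloud takes for one synced ticket."""
    if ticket_state in CLOSING_STATES:
        return ["delete assignment"]   # closed tickets remove the assignment
    actions = []
    if ticket_owner != assignment_owner:
        actions.append(f"reassign to {ticket_owner}")  # owner change syncs back
    return actions

print(sync_assignment("Resolved", "alice", "alice"))   # ['delete assignment']
print(sync_assignment("In progress", "bob", "alice"))  # ['reassign to bob']
```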
defender-for-cloud | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Microsoft Defender for Cloud. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
defender-for-cloud | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md | Title: Release notes description: This page is updated frequently with the latest updates in Defender for Cloud. Previously updated : 11/20/2023 Last updated : 11/22/2023 # What's new in Microsoft Defender for Cloud? If you're looking for items older than six months, you can find them in the [Arc | Date | Update | |--|--|-| November 20|GA release: New autoprovisioning process for SQL Servers on machines plan| +| November 22 | [Enable permissions management with Defender for Cloud (Preview)](#enable-permissions-management-with-defender-for-cloud-preview) | +| November 22 | [Defender for Cloud integration with ServiceNow](#defender-for-cloud-integration-with-servicenow) | +| November 20| [General Availability of the autoprovisioning process for SQL Servers on machines plan](#general-availability-of-the-autoprovisioning-process-for-sql-servers-on-machines-plan)| | November 15 | [Defender for Cloud is now integrated with Microsoft 365 Defender](#defender-for-cloud-is-now-integrated-with-microsoft-365-defender) | | November 15 | [General availability of Containers Vulnerability Assessment powered by Microsoft Defender Vulnerability Management (MDVM) in Defender for Containers and Defender for Container Registries](#general-availability-of-containers-vulnerability-assessment-powered-by-microsoft-defender-vulnerability-management-mdvm-in-defender-for-containers-and-defender-for-container-registries) | | November 15 | [Change to Container Vulnerability Assessments recommendation names](#change-to-container-vulnerability-assessments-recommendation-names) | If you're looking for items older than six months, you can find them in the [Arc | November 15 | [General Availability release of sensitive data discovery for databases](#general-availability-release-of-sensitive-data-discovery-for-databases) | | November 6 | [New version of the 
recommendation to find missing system updates is now GA](#new-version-of-the-recommendation-to-find-missing-system-updates-is-now-ga) | -### GA release: New autoprovisioning process for SQL Servers on machines plan +### Enable permissions management with Defender for Cloud (Preview) ++November 22, 2023 ++Microsoft now offers both Cloud-Native Application Protection Platforms (CNAPP) and Cloud Infrastructure Entitlement Management (CIEM) solutions with [Microsoft Defender for Cloud (CNAPP)](defender-for-cloud-introduction.md) and [Microsoft Entra Permissions Management](/entra/permissions-management/) (CIEM). ++Security administrators can get a centralized view of their unused or excessive access permissions within Defender for Cloud. ++Security teams can drive the least privilege access controls for cloud resources and receive actionable recommendations for resolving permissions risks across Azure, AWS, and GCP cloud environments as part of their Defender Cloud Security Posture Management (CSPM), without any extra licensing requirements. ++Learn how to [Enable Permissions Management in Microsoft Defender for Cloud (Preview)](enable-permissions-management.md). ++### Defender for Cloud integration with ServiceNow ++November 22, 2023 ++ServiceNow is now integrated with Microsoft Defender for Cloud, which enables customers to connect ServiceNow to their Defender for Cloud environment to prioritize remediation of recommendations that affect your business. Microsoft Defender for Cloud integrates with the ITSM module (incident management). As part of this connection, customers are able to create/view ServiceNow tickets (linked to recommendations) from Microsoft Defender for Cloud. ++You can learn more about [Defender for Cloud's integration with ServiceNow](integration-servicenow.md). ++### General Availability of the autoprovisioning process for SQL Servers on machines plan November 20, 2023 |
defender-for-iot | Tutorial Configure Your Solution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/device-builders/tutorial-configure-your-solution.md | -This article explains how to add a resource group to your Microsoft Defender for IoT solution. To learn more about resource groups, see [Manage Azure Resource Manager resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md). +This article explains how to add a resource group to your Microsoft Defender for IoT solution. To learn more about resource groups, see [Manage Azure resource groups by using the Azure portal](../../azure-resource-manager/management/manage-resource-groups-portal.md). With Defender for IoT, you can monitor your entire IoT solution in one dashboard. From that dashboard, you can surface all of your IoT devices, IoT platforms, and back-end resources in Azure. |
defender-for-iot | Configure Sensor Settings Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/configure-sensor-settings-portal.md | -After [onboarding](onboard-sensors.md) a new OT network sensor to Microsoft Defender for IoT, you may want to define several settings directly on the OT sensor console, such as [adding local users](manage-users-sensor.md) or [connecting to an on-premises management console](ot-deploy/connect-sensors-to-management.md). +After [onboarding](onboard-sensors.md) a new OT network sensor to Microsoft Defender for IoT, you might want to define several settings directly on the OT sensor console, such as [adding local users](manage-users-sensor.md) or [connecting to an on-premises management console](ot-deploy/connect-sensors-to-management.md). -Selected OT sensor settings, listed below, are also available directly from the Azure portal, and can be applied in bulk across multiple cloud-connected OT sensors at a time, or across all OT sensors in a specific site or zone. This article describes how to view and configure view OT network sensor settings from the Azure portal. +The OT sensor settings listed in this article are also available directly from the Azure portal. Use the Azure portal to apply these settings in bulk across multiple cloud-connected OT sensors at a time, or across all cloud-connected OT sensors in a specific site or zone. This article describes how to view and configure view OT network sensor settings from the Azure portal. > [!NOTE] > The **Sensor settings** page in Defender for IoT is in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. 
To define OT sensor settings, make sure that you have the following: ## Define a new sensor setting -Define a new setting whenever you want to define a specific configuration for one or more OT network sensors. For example, you might want to define bandwidth caps for all OT sensors in a specific site or zone, or for a single OT sensor at a specific location in your network. +Define a new setting whenever you want to define a specific configuration for one or more OT network sensors. For example, you might define bandwidth caps for all OT sensors in a specific site or zone, or for a single OT sensor at a specific location in your network. **To define a new setting**:
<br><br>If your new setting replaces an existing setting, a :::image type="icon" source="media/how-to-manage-individual-sensors/warning-icon.png" border="false"::: warning is shown to indicate the existing setting.<br><br>When you're satisfied with the setting's configuration, select **Create**. | + |**Review and create** | Check the selections made for your setting. <br><br>If your new setting replaces an existing setting, a :::image type="icon" source="media/how-to-manage-individual-sensors/warning-icon.png" border="false"::: warning is shown to indicate the existing setting.<br><br>When you're satisfied with the setting's configuration, select **Create**. | Your new setting is now listed on the **Sensor settings (Preview)** page under its setting type, and on the sensor details page for any related OT sensor. Sensor settings are shown as read-only on the sensor details page. For example: For example: This procedure describes how to edit OT sensor settings if your OT sensor is currently disconnected from Azure, such as during an ongoing security incident. -By default, if you've configured any settings from the Azure portal, all settings that are configurable from both the Azure portal and the OT sensor are set to read-only on the OT sensor itself. For example, if you've configured a VLAN from the Azure portal, then bandwidth cap, subnet, and VLAN settings are *all* set to read-only, and blocked from modifications on the OT sensor. +By default, if you configure any settings from the Azure portal, all settings that are configurable from both the Azure portal and the OT sensor are set to read-only on the OT sensor itself. For example, if you configure a VLAN from the Azure portal, then bandwidth cap, subnet, and VLAN settings are *all* set to read-only, and blocked from modifications on the OT sensor. 
-If you're in a situation where the OT sensor is disconnected from Azure, and you need to modify one of these settings, you'll first need to gain write access to those settings. +If you're in a situation where the OT sensor is disconnected from Azure, and you need to modify one of these settings, you must first gain write access to those settings. **To gain write access to blocked OT sensor settings**: To configure Active Directory settings from the Azure portal, define values for |**Domain Controller FQDN** | The fully qualified domain name (FQDN), exactly as it appears on your LDAP server. For example, enter `host1.subdomain.contoso.com`. <br><br> If you encounter an issue with the integration using the FQDN, check your DNS configuration. You can also enter the explicit IP of the LDAP server instead of the FQDN when setting up the integration. | |**Domain Controller Port** | The port where your LDAP is configured. For example, use port 636 for LDAPS (SSL) connections. | |**Primary Domain** | The domain name, such as `subdomain.contoso.com`, and then select the connection type for your LDAP configuration. <br><br>Supported connection types include: **LDAPS/NTLMv3** (recommended), **LDAP/NTLMv3**, or **LDAP/SASL-MD5** |-|**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. <br><br> When you enter a group name, make sure that you enter the group name exactly as it's defined in your Active Directory configuration on the LDAP server. You'll use these group names when adding new sensor users with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. | +|**Active Directory Groups** | Select **+ Add** to add an Active Directory group to each permission level listed, as needed. 
<br><br> When you enter a group name, make sure that you enter the group name exactly as defined in your Active Directory configuration on the LDAP server. You use these group names when adding new sensor users with Active Directory.<br><br> Supported permission levels include **Read-only**, **Security Analyst**, **Admin**, and **Trusted Domains**. | > [!IMPORTANT] > When entering LDAP parameters: For a bandwidth cap, define the maximum bandwidth you want the sensor to use for **Default**: 1500 Kbps -**Minimum required for a stable connection to Azure**: 350 Kbps. At this minimum setting, connections to the sensor console may be slower than usual. +**Minimum required for a stable connection to Azure**: 350 Kbps. At this minimum setting, connections to the sensor console might be slower than usual. ### NTP To configure an NTP server for your sensor from the Azure portal, define an IP/D ### Subnet -To focus the Azure device inventory on devices that are in your IoT/OT scope, you will need to manually edit the subnet list to include only the locally monitored subnets that are in your IoT/OT scope. Once the subnets have been configured, the network location of the devices is shown in the *Network location* (Public preview) column in the Azure device inventory. All of the devices associated with the listed subnets will be displayed as *local*, while devices associated with detected subnets not included in the list will be displayed as *routed*. +To focus the Azure device inventory on devices that are in your IoT/OT scope, you need to manually edit the subnet list to include only the locally monitored subnets that are in your IoT/OT scope. Once the subnets are configured, the network location of the devices is shown in the *Network location* (Public preview) column in the Azure device inventory. All of the devices associated with the listed subnets are displayed as *local*, while devices associated with detected subnets not included in the list are displayed as *routed*. 
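The *local* versus *routed* classification described above can be illustrated with Python's standard `ipaddress` module. This is a sketch of the rule only (the subnet and IP values are invented examples), not how the sensor itself is implemented:

```python
import ipaddress

def network_location(device_ip, monitored_subnets):
    """'local' if the device IP falls inside any configured subnet,
    otherwise 'routed'."""
    ip = ipaddress.ip_address(device_ip)
    if any(ip in ipaddress.ip_network(s) for s in monitored_subnets):
        return "local"
    return "routed"

subnets = ["10.1.0.0/16", "192.168.100.0/24"]
print(network_location("10.1.4.7", subnets))    # local
print(network_location("172.16.0.9", subnets))  # routed
```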
**To configure your subnets in the Azure portal**: 1. In the Azure portal, go to **Sites and sensors** > **Sensor settings**. -1. Under **Subnets**, review the detected subnets. To focus the device inventory and view local devices in the inventory, delete any subnets that are not in your IoT/OT scope by selecting the options menu (...) on any subnet you want to delete. +1. Under **Subnets**, review the configured subnets. To focus the device inventory and view local devices in the inventory, delete any subnets that are not in your IoT/OT scope by selecting the options menu (...) on any subnet you want to delete. 1. To modify additional settings, select any subnet and then select **Edit** for the following options: |
defender-for-iot | Manage Subscriptions Enterprise | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/manage-subscriptions-enterprise.md | Start your enterprise IoT trial using the [Microsoft Defender for IoT - EIoT Dev 1. On the **Microsoft Defender for IoT - EIoT Device License - add-on** page, select **Start free trial**. On the **Check out** page, select **Try now**. > [!TIP]-> Make sure to [assign your licenses to specific users]/microsoft-365/admin/manage/assign-licenses-to-users to start using them. +> Make sure to [assign your licenses to specific users](/microsoft-365/admin/manage/assign-licenses-to-users) to start using them. > For more information, see [Free trial](billing.md#free-trial). |
dev-box | How To Hibernate Your Dev Box | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-hibernate-your-dev-box.md | Some settings aren't compatible with hibernation and prevent your dev box from h ## Related content - [Manage a dev box by using the developer portal](how-to-create-dev-boxes-developer-portal.md)-- [How to configure Dev Box Hibernation (preview)](how-to-configure-dev-box-hibernation.md)+- [How to configure Dev Box Hibernation (preview)](how-to-configure-dev-box-hibernation.md) |
digital-twins | Quickstart 3D Scenes Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/quickstart-3d-scenes-studio.md | -# Mandatory fields. Title: Quickstart - Get started with 3D Scenes Studio (preview) description: Learn how to use 3D Scenes Studio (preview) for Azure Digital Twins by following this demo, where you'll create a sample scene with elements and behaviors.-# # |
dms | Known Issues Azure Sql Migration Azure Data Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dms/known-issues-azure-sql-migration-azure-data-studio.md | exec sp_addRoleMember 'loginmanager', 'testuser' Note: To view error detail, Open Microsoft Integration runtime configurtion manager > Diagnostics > logging > view logs. It will open the Event viewer > Application and Service logs > Connectors - Integration runtime and now filter for errors. +- **Message**: Deployed failure: Index cannot be created on computed column '{0}' of table '{1}' because the underlying object '{2}' has a different owner. Object element: {3}. + + ` Sample Generated Script:: IF NOT EXISTS (SELECT * FROM sys.indexes WHERE object_id = OBJECT_ID(N'[Sales].[Customer]') AND name = N'AK_Customer_AccountNumber') CREATE UNIQUE NONCLUSTERED INDEX [AK_Customer_AccountNumber] ON [Sales].[Customer] ( [AccountNumber] ASC )WITH (PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, SORT_IN_TEMPDB = OFF, IGNORE_DUP_KEY = OFF, DROP_EXISTING = OFF, ONLINE = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON) ` ++- **Cause**: All function references in the computed column must have the same owner as the table. ++- **Recommendation**: Check the doc [Ownership Requirement](https://learn.microsoft.com/sql/relational-databases/indexes/indexes-on-computed-columns?view=sql-server-ver16#ownership-requirements). ++ ## Error code: Ext_RestoreSettingsError - **Message**: Unable to read blobs in storage container, exception: The remote server returned an error: (403) Forbidden.; The remote server returned an error: (403) Forbidden |
dns | Dns Getstarted Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-getstarted-portal.md | Title: 'Quickstart: Create a DNS zone and record - Azure portal' + Title: 'Quickstart: Create a public DNS zone and record - Azure portal' -description: Use this step-by-step quickstart guide to learn how to create an Azure DNS zone and record using the Azure portal. +description: Use this step-by-step quickstart guide to learn how to create an Azure public DNS zone and record using the Azure portal. Previously updated : 09/27/2022 Last updated : 11/20/2023 -#Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using the Azure portal so I can use Azure DNS for my name resolution. +#Customer intent: As an administrator or developer, I want to learn how to configure Azure DNS using the Azure portal so I can use Azure DNS for my public zone. # Quickstart: Create an Azure DNS zone and record using the Azure portal You can configure Azure DNS to resolve host names in your public domain. For example, if you purchased the *contoso.xyz* domain name from a domain name registrar, you can configure Azure DNS to host the *contoso.xyz* domain and resolve *`www.contoso.xyz`* to the IP address of your web server or web app. -In this quickstart, you'll create a test domain, and then create an address record to resolve *www* to the IP address *10.10.10.10*. +In this quickstart, you create a test domain, and then create an address record to resolve *www* to the IP address *10.10.10.10*. :::image type="content" source="media/dns-getstarted-portal/environment-diagram.png" alt-text="Diagram of DNS deployment environment using the Azure portal." border="false"::: ->[!IMPORTANT] ->All the names and IP addresses in this quickstart are examples that do not represent real-world scenarios. +> [!IMPORTANT] +> The names and IP addresses in this quickstart are examples that do not represent real-world scenarios. 
The private IP address 10.10.10.10 is used here with a public DNS zone for testing purposes. -<! You can also perform these steps using [Azure PowerShell](dns-getstarted-powershell.md) or the cross-platform [Azure CLI](dns-getstarted-cli.md).-> If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin. For all portal steps, sign in to the [Azure portal](https://portal.azure.com). ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).+An Azure account with an active subscription is required. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). ## Sign in to the Azure portal A DNS zone contains the DNS entries for a domain. To start hosting your domain i **To create the DNS zone:** -1. At upper left, select **Create a resource**, then **Networking**, and then **DNS zone**. +1. At the upper left, select **Create a resource**, enter **DNS zone** into **Search services and marketplace** and then select **DNS zone**. +2. On the **DNS zone** page, select **Create**. -1. On the **Create DNS zone** page, type or select the following values: + ![A screenshot of the DNS zone marketplace.](./media/dns-getstarted-portal/dns-new-zone.png) ++3. On the **Create DNS zone** page, type or select the following values: - - **Name**: Type *contoso.xyz* for this quickstart example. The DNS zone name can be any value that is not already configured on the Azure DNS servers. A real-world value would be a domain that you bought from a domain name registrar. - **Resource group**: Select **Create new**, enter *MyResourceGroup*, and select **OK**. The resource group name must be unique within the Azure subscription. + - **Name**: Type *contoso.xyz* for this quickstart example. The DNS zone name can be any value that isn't already configured on the Azure DNS servers. 
A real-world value would be a domain that you bought from a domain name registrar. + - **Resource group location**: Select a location for the new resource group. In this example, the location selected is **West US**. -1. Select **Create**. +4. Select **Review + create** and then select **Create**. - ![DNS zone](./media/dns-getstarted-portal/openzone650.png) + ![A screenshot showing how to create a DNS zone.](./media/dns-getstarted-portal/dns-create-zone.png) It may take a few minutes to create the zone. ## Create a DNS record -You create DNS entries or records for your domain inside the DNS zone. Create a new address record or 'A' record to resolve a host name to an IPv4 address. +Next, DNS records are created for your domain inside the DNS zone. A new address record, known as an '**A**' record, is created to resolve a host name to an IPv4 address. **To create an 'A' record:** 1. In the Azure portal, under **All resources**, open the **contoso.xyz** DNS zone in the **MyResourceGroup** resource group. You can enter *contoso.xyz* in the **Filter by name** box to find it more easily.+2. At the top of the **contoso.xyz** DNS zone page, select **+ Record set**. +3. In the **Add a record set** window, enter or select the following values: -1. At the top of the **DNS zone** page, select **+ Record set**. --1. On the **Add record set** page, type or select the following values: -- - **Name**: Type *www*. The record name is the host name that you want to resolve to the specified IP address. + - **Name**: Type *www*. This record name is the host name that you want to resolve to the specified IP address. - **Type**: Select **A**. 'A' records are the most common, but there are other record types for mail servers ('MX'), IPv6 addresses ('AAAA'), and so on. - **TTL**: Type *1*. *Time-to-live* of the DNS request specifies how long DNS servers and clients can cache a response.- - **TTL unit**: Select **Hours**. This is the time unit for the **TTL** value. 
- - **IP address**: For this quickstart example, type *10.10.10.10*. This value is the IP address the record name resolves to. In a real-world scenario, you would enter the public IP address for your web server. + - **TTL unit**: Select **Hours**. The time unit for the **TTL** entry is specified here. + - **IP address**: For this quickstart example, type *10.10.10.10*. This value is the IP address that the record name resolves to. In a real-world scenario, you would enter the public IP address for your web server. +4. Select **OK** to create the A record. -Since this quickstart is just for quick testing purposes, there's no need to configure the Azure DNS name servers at a domain name registrar. With a real production domain, you'll want anyone on the Internet to resolve the host name to connect to your web server or app. You'll visit your domain name registrar to replace the name server records with the Azure DNS name servers. For more information, see [Tutorial: Host your domain in Azure DNS](dns-delegate-domain-azure-dns.md#delegate-the-domain). +Since this quickstart is just for quick testing purposes, there's no need to configure the Azure DNS name servers at a domain name registrar. In a real production domain, you must enable users on the Internet to resolve the host name and connect to your web server or app. To accomplish this task, visit your domain name registrar and replace the name server records with the Azure DNS name servers. For more information, see [Tutorial: Host your domain in Azure DNS](dns-delegate-domain-azure-dns.md#delegate-the-domain). ## Test the name resolution Now that you have a test DNS zone with a test 'A' record, you can test the name **To test DNS name resolution:** -1. In the Azure portal, under **All resources**, open the **contoso.xyz** DNS zone in the **MyResourceGroup** resource group. You can enter *contoso.xyz* in the **Filter by name** box to find it more easily. --1. 
Copy one of the name server names from the name server list on the **Overview** page. +1. On the **contoso.xyz** DNS zone page, copy one of the name server names from the name server list. For example: ns1-32.azure-dns.com. - ![zone](./media/dns-getstarted-portal/viewzonens500.png) + [ ![A screenshot of the DNS zone contents.](./media/dns-getstarted-portal/view-zone.png) ](./media/dns-getstarted-portal/view-zone.png#lightbox) 1. Open a command prompt, and run the following command: Now that you have a test DNS zone with a test 'A' record, you can test the name For example: ```- nslookup www.contoso.xyz ns1-08.azure-dns.com. + nslookup www.contoso.xyz ns1-32.azure-dns.com. ``` You should see something like the following screen: - ![Screenshot shows a command prompt window with an n s lookup command and values for Server, Address, Name, and Address.](media/dns-getstarted-portal/nslookup.PNG) + ![A screenshot of a command prompt window with an nslookup command and values for Server, Address, Name, and Address.](media/dns-getstarted-portal/nslookup.png) The host name **www\.contoso.xyz** resolves to **10.10.10.10**, just as you configured it. This result verifies that name resolution is working correctly. |
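The record set built in the quickstart above can be sketched as a small data structure; the field names below are illustrative placeholders, not the portal's actual schema:

```python
# Sketch of the quickstart's A record set; field names are hypothetical,
# only the values come from the portal steps described above.
record_set = {
    "zone": "contoso.xyz",
    "name": "www",            # host name to resolve
    "type": "A",              # IPv4 address record
    "ttl": 1,                 # value entered in the TTL box
    "ttl_unit": "Hours",
    "ip_address": "10.10.10.10",
}

def ttl_seconds(value: int, unit: str) -> int:
    """Convert the portal's TTL value + unit pair into seconds."""
    factors = {"Seconds": 1, "Minutes": 60, "Hours": 3600, "Days": 86400}
    return value * factors[unit]

fqdn = f"{record_set['name']}.{record_set['zone']}"
print(fqdn, "->", record_set["ip_address"],
      "TTL:", ttl_seconds(record_set["ttl"], record_set["ttl_unit"]), "s")
```

A TTL of 1 hour means resolvers may cache the `www.contoso.xyz -> 10.10.10.10` answer for up to 3600 seconds before querying the name servers again.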
dns | Dns Zones Records | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dns/dns-zones-records.md | You can modify all properties of the SOA record except for the `host` property. The zone serial number in the SOA record isn't updated automatically when changes are made to the records in the zone. It can be updated manually by editing the SOA record, if necessary. +> [!NOTE] +> Azure DNS doesn't currently support the use of a dot (**.**) before the '**@**' in the SOA hostmaster mailbox entry. For example: `john.smith@contoso.xyz` (converted to john.smith.contoso.xyz) and `john\.smith@contoso.xyz` are not allowed. + ### SPF records [!INCLUDE [dns-spf-include](../../includes/dns-spf-include.md)] |
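The SOA hostmaster restriction noted above follows from how RFC 1035 encodes the mailbox in the RNAME field: the '@' becomes a dot, so a dot in the local part becomes ambiguous. A minimal sketch of the conversion (the helper name is hypothetical):

```python
def soa_hostmaster(email: str) -> str:
    """Convert an email address to SOA RNAME form (user@domain -> user.domain).

    Per the note above, Azure DNS rejects a dot before the '@': since the '@'
    is replaced by a dot, a local part like 'john.smith' would be ambiguous.
    """
    local, _, domain = email.partition("@")
    if "." in local:
        raise ValueError("dot before '@' is not supported by Azure DNS")
    return f"{local}.{domain}"

print(soa_hostmaster("hostmaster@contoso.xyz"))
# soa_hostmaster("john.smith@contoso.xyz") raises ValueError
```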
energy-data-services | Concepts Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-authentication.md | + + Title: Authentication concepts in Microsoft Azure Data Manager for Energy +description: This article describes the various concepts regarding authentication in Azure Data Manager for Energy. ++++ Last updated : 02/10/2023++++# Authentication concepts in Azure Data Manager for Energy ++## Service Principals +In the Azure Data Manager for Energy instance, +1. No Service Principals are created. +2. The app-id is used for API access. The same app-id is used to provision the ADME instance. +3. The app-id doesn't have access to infrastructure resources. +4. The app-id also gets added as OWNER to all OSDU groups by default. +5. For service-to-service (S2S) communication, ADME uses MSI (Managed Service Identity). ++In the OSDU instance, Terraform scripts create two Service Principals: +1. The first Service Principal is used for API access. It can also manage infrastructure resources. +2. The second Service Principal is used for service-to-service (S2S) communications. ++ |
energy-data-services | Concepts Entitlements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/concepts-entitlements.md | -Access management is a critical function for any service or resource. Entitlement service helps you manage who has access to your Azure Data Manager for Energy instance, what they can view or edit, and what services or data they have access to. +Access management is a critical function for any service or resource. The entitlement service lets you control who can use your Azure Data Manager for Energy, what they can see or change, and which services or data they can use. ## Groups Some user, data, and service groups are created by default when a data partition ## Group naming -All group identifiers (emails) will be of form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention has been adopted such that the group's name should start with the word "data." for data groups; "service." for service groups; and "users." for user groups. There is one exception for "users" group which is created when a new data partition is provisioned. For example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created. +All group identifiers (emails) are of form {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}.com. A group naming convention is adopted by OSDU such that the group's name starts with +1. the word "data." for data groups; +2. the word "service." for service groups; +3. the word "users." for user groups. There's one exception for "users" group created when a new data partition is provisioned. For example, for data partition `opendes`, the group `users@opendes.dataservices.energy` is created. ## Users -For each OSDU group, you can either add a user as an OWNER or a MEMBER. If you're an OWNER of an OSDU group, then you can add or remove the members of that group or delete the group. 
If you are a MEMBER of an OSDU group, you can view, edit, or delete the service or data depending on the scope of the OSDU group. For example, if you are a MEMBER of service.legal.editor OSDU group, you can call the APIs to change the legal service. +For each OSDU group, you can either add a user as an OWNER or a MEMBER. +1. If you're an OWNER of an OSDU group, then you can add or remove the members of that group or delete the group. +2. If you're a MEMBER of an OSDU group, you can view, edit, or delete the service or data depending on the scope of the OSDU group. For example, if you're a MEMBER of service.legal.editor OSDU group, you can call the APIs to change the legal service. > [!NOTE] > Do not delete the OWNER of a group unless there is another OWNER to manage the users. ## Entitlement APIs -A full list of entitlements API endpoints can be found in [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api). A few illustrations of how to use Entitlement APIs are available in the [How to manage users](how-to-manage-users.md). Depending on the resources you have, you need to use the entitlements service in different ways than what is shown below. -+A full list of entitlements API endpoints can be found in [OSDU entitlement service](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/blob/release/0.15/docs/tutorial/Entitlements-Service.md#entitlement-service-api). A few illustrations of how to use Entitlement APIs are available in the [How to manage users](how-to-manage-users.md). > [!NOTE] > The OSDU documentation refers to V1 endpoints, but the scripts noted in this documentation refer to V2 endpoints, which work and have been successfully validated. |
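The group naming convention above can be sketched as a string template. Note the article's example `users@opendes.dataservices.energy` does not carry the literal `.com` suffix shown in the stated pattern, so this sketch takes the full domain as a parameter:

```python
def group_email(group_type: str, name: str, permission: str,
                partition: str, domain: str) -> str:
    """Build an OSDU group identifier of the form
    {groupType}.{serviceName|resourceName}.{permission}@{partition}.{domain}."""
    return f"{group_type}.{name}.{permission}@{partition}.{domain}"

# e.g. the service group whose MEMBERs can call the legal service APIs
print(group_email("service", "legal", "editor", "opendes", "dataservices.energy"))
```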
energy-data-services | How To Manage Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-manage-users.md | -In this article, you'll know how to manage users in Azure Data Manager for Energy. It uses the [entitlements API](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) and acts as a group-based authorization system for data partitions within Azure Data Manager for Energy instance. For more information about Azure Data Manager for Energy entitlements, see [entitlement services](concepts-entitlements.md). +In this article, you'll learn how to manage users and their memberships in OSDU groups in Azure Data Manager for Energy. [Entitlements APIs](https://community.opengroup.org/osdu/platform/security-and-compliance/entitlements/-/tree/master/) are used to add or remove users to OSDU groups and to check the entitlements when the user tries to access the OSDU services or data. For more information about OSDU groups, see [entitlement services](concepts-entitlements.md). ## Prerequisites+1. Create an Azure Data Manager for Energy instance using the tutorial at [How to create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). +2. Generate the access token needed to call the Entitlements APIs. +3. Get various parameters of your instance such as client-id, client-secret, etc. +4. Keep all these parameter values handy as they will be needed for executing different user management requests via the Entitlements API. -Create an Azure Data Manager for Energy instance using the tutorial at [How to create Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). --You will need to pass parameters for generating the access token, which you'll need to make valid calls to the Entitlements API of your Azure Data Manager for Energy instance. 
You will also need these parameters for different user management requests to the Entitlements API. Hence Keep the following values handy for these actions. -+## Fetch Parameters #### Find `tenant-id`-Navigate to the Microsoft Entra account for your organization. One way to do so is by searching for "Microsoft Entra ID" in the Azure portal's search bar. Once there, locate `tenant-id` under the basic information section in the *Overview* tab. Copy the `tenant-id` and paste in an editor to be used later. +1. Navigate to the Microsoft Entra account for your organization. You can search for "Microsoft Entra ID" in the Azure portal's search bar. +2. Locate `tenant-id` under the basic information section in the *Overview* tab. +3. Copy the `tenant-id` and paste it into an editor to be used later. :::image type="content" source="media/how-to-manage-users/azure-active-directory.png" alt-text="Screenshot of search for Microsoft Entra I D."::: :::image type="content" source="media/how-to-manage-users/tenant-id.png" alt-text="Screenshot of finding the tenant-id."::: #### Find `client-id`-Often called `app-id`, it's the same value that you used to register your application during the provisioning of your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). You'll find the `client-id` in the *Essentials* pane of Azure Data Manager for Energy *Overview* page. Copy the `client-id` and paste in an editor to be used later. +It's the same value that you used to register your application during the provisioning of your [Azure Data Manager for Energy instance](quickstart-create-microsoft-energy-data-services-instance.md). It is often referred to as `app-id`. ++1. Find the `client-id` in the *Essentials* pane of Azure Data Manager for Energy *Overview* page. +2. Copy the `client-id` and paste it into an editor to be used later. +3. 
Currently, one Azure Data Manager for Energy instance allows one app-id to be associated with one instance. > [!IMPORTANT]-> The 'client-id' that is passed as values in the entitlement API calls needs to be the same which was used for provisioning of your Azure Data Manager for Energy instance. +> The 'client-id' that is passed as a value in the entitlement API calls needs to be the same one that was used for provisioning your Azure Data Manager for Energy instance. :::image type="content" source="media/how-to-manage-users/client-id-or-app-id.png" alt-text="Screenshot of finding the client-id for your registered App."::: #### Find `client-secret`-Sometimes called an application password, a `client-secret` is a string value your app can use in place of a certificate to identity itself. Navigate to *App Registrations*. Once there, open 'Certificates & secrets' under the *Manage* section. Create a `client-secret` for the `client-id` that you used to create your Azure Data Manager for Energy instance, you can add one now by clicking on *New Client Secret*. Record the secret's `value` for use in your client application code. +A `client-secret` is a string value your app can use in place of a certificate to identify itself. It is sometimes referred to as an application password. ++1. Navigate to *App Registrations*. +2. Open 'Certificates & secrets' under the *Manage* section. +3. Create a `client-secret` for the `client-id` that you used to create your Azure Data Manager for Energy instance. +4. Add one now by clicking on *New Client Secret*. +5. Record the secret's `value` for later use in your client application code. +6. The Service Principal (SPN) of the app-id and client secret has Infra Admin access to the instance. > [!CAUTION]-> Don't forget to record the secret's value for use in your client application code. This secret value is never displayed again after you leave this page at the time of creation of 'client secret'. 
+> Don't forget to record the secret's value. This secret value is never displayed again after you leave the 'client secret' creation page. :::image type="content" source="media/how-to-manage-users/client-secret.png" alt-text="Screenshot of finding the client secret."::: -#### Find the `url`for your Azure Data Manager for Energy instance -Navigate to your Azure Data Manager for Energy *Overview* page on Azure portal. Copy the URI from the essentials pane. +#### Find the `URL` for your Azure Data Manager for Energy instance +1. Navigate to your Azure Data Manager for Energy *Overview* page on the Azure portal. +2. Copy the URI from the essentials pane. -#### Find the `data-partition-id` for your group -You have two ways to get the list of data-partitions in your Azure Data Manager for Energy instance. -- One option is to navigate *Data Partitions* menu item under the Advanced section of your Azure Data Manager for Energy UI.+#### Find the `data-partition-id` +You have two ways to get the list of data partitions in your Azure Data Manager for Energy instance. +1. One option is to navigate to the *Data Partitions* menu item under the Advanced section of your Azure Data Manager for Energy UI. :::image type="content" source="media/how-to-manage-users/data-partition-id.png" alt-text="Screenshot of finding the data-partition-id from the Azure Data Manager for Energy instance."::: -- Another option is by clicking on the *view* below the *data partitions* field in the essentials pane of your Azure Data Manager for Energy *Overview* page. +2. Another option is to click on the *view* below the *data partitions* field in the essentials pane of your Azure Data Manager for Energy *Overview* page. 
:::image type="content" source="media/how-to-manage-users/data-partition-id-second-option.png" alt-text="Screenshot of finding the data-partition-id from the Azure Data Manager for Energy instance overview page."::: :::image type="content" source="media/how-to-manage-users/data-partition-id-second-option-step-2.png" alt-text="Screenshot of finding the data-partition-id from the Azure Data Manager for Energy instance overview page with the data partitions."::: ## Generate access token -You need to generate access token to use entitlements API. Run the below curl command in Azure Cloud Bash after replacing the placeholder values with the corresponding values found earlier in the pre-requisites step. +1. Run the below curl command in Azure Cloud Bash after replacing the placeholder values with the corresponding values found earlier in the above steps. **Request format** curl --location --request POST 'https://login.microsoftonline.com/<tenant-id>/oa "access_token": "abcdefgh123456............." } ```-Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements API of your Azure Data Manager for Energy instance. --## User management activities +2. Copy the `access_token` value from the response. You'll need it to pass as one of the headers in all calls to the Entitlements APIs. -You can manage users' access to your Azure Data Manager for Energy instance or data partitions. As a prerequisite for this step, you need to find the 'object-id' (OID) of the user(s) first. If you are managing an application's access to your instance or data partition, then you must find and use the application ID (or client ID) instead of the OID. +## Fetch OID +`object-id` (OID) is the Microsoft Entra user Object ID. 
-You'll need to input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy Instance. `object-id` (OID) is the Microsoft Entra user Object ID. +1. Find the 'object-id' (OID) of the user(s) first. If you are managing an application's access, you must find and use the application ID (or client ID) instead of the OID. +2. Input the `object-id` (OID) of the users (or the application or client ID if managing access for an application) as parameters in the calls to the Entitlements API of your Azure Data Manager for Energy instance. :::image type="content" source="media/how-to-manage-users/azure-active-directory-object-id.png" alt-text="Screenshot of finding the object-id from Microsoft Entra I D."::: :::image type="content" source="media/how-to-manage-users/profile-object-id.png" alt-text="Screenshot of finding the object-id from the profile."::: -### Get the list of all available groups +## Get the list of all available groups -Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Azure Data Manager for Energy instance and its data partitions. +Run the below curl command in Azure Cloud Bash to get all the groups that are available for your Azure Data Manager for Energy instance and its data partitions. ```bash curl --location --request GET "https://<URI>/api/entitlements/v2/groups/" \ Run the below curl command in Azure Cloud Bash to get all the groups that are av --header 'Authorization: Bearer <access_token>' ``` -### Add user(s) to a users group +## Add user(s) to an OSDU group -Run the below curl command in Azure Cloud Bash to add user(s) to the "Users" group using Entitlement service. +1. Run the below curl command in Azure Cloud Bash to add the user(s) to the "Users" group using the Entitlement service. +2. 
The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email. ```bash curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/users@<data-partition-id>.dataservices.energy/members' \ Run the below curl command in Azure Cloud Bash to add user(s) to the "Users" gro }' ``` -The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email - **Sample request** Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" Consider an Azure Data Manager for Energy instance named "medstest" with a data "role": "MEMBER" } ```+> [!IMPORTANT] +> The app-id is the default OWNER of all the groups. -### Add user(s) to an entitlements group +## Add user(s) to an entitlements group -Run the below curl command in Azure Cloud Bash to add user(s) to an entitlement group using Entitlement service. +1. Run the below curl command in Azure Cloud Bash to add the user(s) to an entitlement group using the Entitlement service. +2. The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email. ```bash curl --location --request POST 'https://<URI>/api/entitlements/v2/groups/service.search.user@<data-partition-id>.dataservices.energy/members' \ Run the below curl command in Azure Cloud Bash to add user(s) to an entitlement "role": "MEMBER" }' ```-The value to be sent for the param **"email"** is the **Object_ID (OID)** of the user and not the user's email + **Sample request** -Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1" +Consider an Azure Data Manager for Energy instance named "medstest" with a data partition named "dp1". 
```bash curl --location --request POST 'https://medstest.energy.azure.com/api/entitlements/v2/groups/service.search.user@medstest-dp1.dataservices.energy/members' \ Consider an Azure Data Manager for Energy instance named "medstest" with a data } ``` -### Get entitlements groups for a given user +## Get entitlements groups for a given user -Run the below curl command in Azure Cloud Bash to get all the groups associated with the user. +1. Run the below curl command in Azure Cloud Bash to get all the groups associated with the user. ```bash curl --location --request GET 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>/groups?type=none' \ Consider an Azure Data Manager for Energy instance named "medstest" with a data } ``` -### Delete entitlement groups of a given user --Run the below curl command in Azure Cloud Bash to delete a given user to your Azure Data Manager for Energy instance data partition. +## Delete entitlement groups of a given user -As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER that can manage users in that group. +1. Run the below curl command in Azure Cloud Bash to delete a given user from a given data partition. +2. As stated above, **DO NOT** delete the OWNER of a group unless you have another OWNER who can manage users in that group. ```bash curl --location --request DELETE 'https://<URI>/api/entitlements/v2/members/<OBJECT_ID>' \ No output for a successful response ## Next steps <!-- Add a context sentence for the following links -->-Create a legal tag for your Azure Data Manager for Energy instance's data partition. +Create a legal tag for your data partition. > [!div class="nextstepaction"] > [How to manage legal tags](how-to-manage-legal-tags.md) |
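The token request and add-member call walked through in this article can also be sketched by constructing the requests in Python. Nothing is sent here; the `scope` value and the placeholder names are assumptions for illustration, not taken from the article:

```python
from urllib.parse import urlencode

# Hypothetical placeholders; substitute the values collected in the steps above.
tenant_id, client_id, client_secret = "<tenant-id>", "<client-id>", "<client-secret>"
instance, partition, oid = "medstest", "medstest-dp1", "<OBJECT_ID>"

# Client-credentials request that yields the access_token passed as the
# Bearer header on every Entitlements call (mirrors the curl command above).
# The scope value here is an assumption about the expected audience.
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
token_body = urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    "scope": f"{client_id}/.default",
})

# Add-member call: note the "email" field carries the user's OID, not an email.
add_member_url = (f"https://{instance}.energy.azure.com/api/entitlements/v2/groups/"
                  f"users@{partition}.dataservices.energy/members")
add_member_body = {"email": oid, "role": "MEMBER"}
```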
event-grid | Authenticate With Access Keys Shared Access Signatures | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-access-keys-shared-access-signatures.md | Last updated 08/10/2021 This article provides information on authenticating clients that publish events to Azure Event Grid topics, domains, partner namespaces using **access key** or **Shared Access Signature (SAS)** token. > [!IMPORTANT]-> - Authenticating and authorizing users or applications using Microsoft Entra identities provides superior security and ease of use over key-based and shared access signatures (SAS) authentication. With Microsoft Entra ID, there is no need to store secrets used for authentication in your code and risk potential security vulnerabilities. We strongly recommend you use Microsoft Entra ID with your Azure Event Grid event publishing applications. For more information, see [Authenticate publishing clients using Microsoft Entra ID](authenticate-with-active-directory.md). +> - Authenticating and authorizing users or applications using Microsoft Entra identities provides superior security and ease of use over key-based and shared access signatures (SAS) authentication. With Microsoft Entra ID, there is no need to store secrets used for authentication in your code and risk potential security vulnerabilities. We strongly recommend you use Microsoft Entra ID with your Azure Event Grid event publishing applications. For more information, see [Authenticate publishing clients using Microsoft Entra ID](authenticate-with-microsoft-entra-id.md). > - Microsoft Entra authentication isn't supported for namespace topics. |
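The SAS alternative mentioned above signs a token locally from the access key. This is a sketch under the assumption that the token has the commonly documented shape `r={resource}&e={expiry}&s={signature}`, where the signature is an HMAC-SHA256 over the unsigned string keyed by the base64-decoded access key; verify against the official samples before relying on it:

```python
import base64
import hashlib
import hmac
from datetime import datetime, timedelta, timezone
from urllib.parse import quote_plus

def build_sas_token(resource_uri: str, key: str, minutes: int = 60) -> str:
    """Sketch of an Event Grid SAS token: r={resource}&e={expiry}&s={signature}."""
    expiry = (datetime.now(timezone.utc) + timedelta(minutes=minutes)
              ).strftime("%m/%d/%Y %I:%M:%S %p")
    unsigned = f"r={quote_plus(resource_uri)}&e={quote_plus(expiry)}"
    signature = base64.b64encode(
        hmac.new(base64.b64decode(key), unsigned.encode("utf-8"),
                 hashlib.sha256).digest())
    return f"{unsigned}&s={quote_plus(signature)}"

token = build_sas_token(
    "https://mytopic.westus2-1.eventgrid.azure.net/api/events",  # hypothetical topic
    base64.b64encode(b"not-a-real-key").decode(),                # hypothetical key
)
```

The resulting string is sent in the `aeg-sas-token` request header; as the article stresses, Microsoft Entra ID authentication is preferred over both keys and SAS.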
event-grid | Authenticate With Entra Id Namespaces | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-entra-id-namespaces.md | Last updated 11/15/2023 This article describes how to authenticate clients publishing events to Azure Event Grid namespaces using Microsoft Entra ID. ## Overview-The [Microsoft Identity](../active-directory/develop/v2-overview.md) platform provides an integrated authentication and access control management for resources and applications that use Microsoft Entra ID as their identity provider. Use the Microsoft Identity platform to provide authentication and authorization support in your applications. It's based on open standards such as OAuth 2.0 and OpenID Connect and offers tools and open-source libraries that support many authentication scenarios. It provides advanced features such as [Conditional Access](../active-directory/conditional-access/overview.md) that allows you to set policies that require multifactor authentication or allow access from specific locations, for example. +The [Microsoft Identity](/entra/identity-platform/v2-overview) platform provides an integrated authentication and access control management for resources and applications that use Microsoft Entra ID as their identity provider. Use the Microsoft Identity platform to provide authentication and authorization support in your applications. It's based on open standards such as OAuth 2.0 and OpenID Connect and offers tools and open-source libraries that support many authentication scenarios. It provides advanced features such as [Conditional Access](/entra/identity/conditional-access/overview) that allows you to set policies that require multifactor authentication or allow access from specific locations, for example. -An advantage that improves your security stance when using Microsoft Entra ID is that you don't need to store credentials, such as authentication keys, in the code or repositories. 
Instead, you rely on the acquisition of OAuth 2.0 access tokens from the Microsoft Identity platform that your application presents when authenticating to a protected resource. You can register your event publishing application with Microsoft Entra ID and obtain a service principal associated with your app that you manage and use. Instead, you can use [Managed Identities](../active-directory/managed-identities-azure-resources/overview.md), either system assigned or user assigned, for an even simpler identity management model as some aspects of the identity lifecycle are managed for you. +An advantage that improves your security stance when using Microsoft Entra ID is that you don't need to store credentials, such as authentication keys, in the code or repositories. Instead, you rely on the acquisition of OAuth 2.0 access tokens from the Microsoft Identity platform that your application presents when authenticating to a protected resource. You can register your event publishing application with Microsoft Entra ID and obtain a service principal associated with your app that you manage and use. Instead, you can use [Managed Identities](/entra/identity/managed-identities-azure-resources/overview), either system assigned or user assigned, for an even simpler identity management model as some aspects of the identity lifecycle are managed for you. -[Role-based access control (RBAC)](../active-directory/develop/custom-rbac-for-developers.md) allows you to configure authorization in a way that certain security principals (identities for users, groups, or apps) have specific permissions to execute operations over Azure resources. This way, the security principal used by a client application that sends events to Event Grid must have the RBAC role **EventGrid Data Sender** associated with it. 
+[Role-based access control (RBAC)](/entra/identity-platform/custom-rbac-for-developers) allows you to configure authorization in a way that certain security principals (identities for users, groups, or apps) have specific permissions to execute operations over Azure resources. This way, the security principal used by a client application that sends events to Event Grid must have the RBAC role **EventGrid Data Sender** associated with it. ### Security principals There are two broad categories of security principals that are applicable when discussing authentication of an Event Grid publishing client: There are two broad categories of security principals that are applicable when d - **Managed identities**. A managed identity can be system assigned, which you enable on an Azure resource and is associated to only that resource, or user assigned, which you explicitly create and name. User assigned managed identities can be associated to more than one resource. - **Application security principal**. It's a type of security principal that represents an application, which accesses resources protected by Microsoft Entra ID. -Regardless of the security principal used, a managed identity or an application security principal, your client uses that identity to authenticate before Microsoft Entra ID and obtain an [OAuth 2.0 access token](../active-directory/develop/access-tokens.md) that's sent with requests when sending events to Event Grid. That token is cryptographically signed and once Event Grid receives it, the token is validated. For example, the audience (the intended recipient of the token) is confirmed to be Event Grid (`https://eventgrid.azure.net`), among other things. The token contains information about the client identity. Event Grid takes that identity and validates that the client has the role **EventGrid Data Sender** assigned to it. 
More precisely, Event Grid validates that the identity has the ``Microsoft.EventGrid/events/send/action`` permission in an RBAC role associated to the identity before allowing the event publishing request to complete. +Regardless of the security principal used, a managed identity or an application security principal, your client uses that identity to authenticate before Microsoft Entra ID and obtain an [OAuth 2.0 access token](/entra/identity-platform/access-tokens) that's sent with requests when sending events to Event Grid. That token is cryptographically signed and once Event Grid receives it, the token is validated. For example, the audience (the intended recipient of the token) is confirmed to be Event Grid (`https://eventgrid.azure.net`), among other things. The token contains information about the client identity. Event Grid takes that identity and validates that the client has the role **EventGrid Data Sender** assigned to it. More precisely, Event Grid validates that the identity has the ``Microsoft.EventGrid/events/send/action`` permission in an RBAC role associated to the identity before allowing the event publishing request to complete. If you're using the Event Grid SDK, you don't need to worry about the details on how to implement the acquisition of access tokens and how to include it with every request to Event Grid because the [Event Grid data plane SDKs](#publish-events-using-event-grids-client-sdks) do that for you. Managed identities are identities associated with Azure resources. Managed ident Managed identity provides Azure services with an automatically managed identity in Microsoft Entra ID. Contrasting to other authentication methods, you don't need to store and protect access keys or Shared Access Signatures (SAS) in your application code or configuration, either for the identity itself or for the resources you need to access. 
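The audience check described above can be made concrete with a short Python sketch (Python, since the article links the Python SDKs). The token below is fabricated and unsigned, purely for illustration; a real validator such as Event Grid also verifies the signature, issuer, and expiry before trusting any claim.

```python
import base64
import json

def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def decode_claims(jwt: str) -> dict:
    """Decode the (unverified) claims segment of a JWT."""
    payload = jwt.split(".")[1]
    payload += "=" * (-len(payload) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload))

# Fabricated token carrying only the claim relevant here; real tokens
# are signed and include many more claims (iss, exp, oid, roles, ...).
header = b64url(json.dumps({"alg": "RS256", "typ": "JWT"}).encode())
claims = b64url(json.dumps({"aud": "https://eventgrid.azure.net"}).encode())
token = f"{header}.{claims}.signature-goes-here"

# The audience (intended recipient) must be Event Grid.
assert decode_claims(token)["aud"] == "https://eventgrid.azure.net"
```

The SDK and the service handle all of this for you; the sketch only shows what the audience claim in the paragraph above refers to.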
-To authenticate your event publishing client using managed identities, first decide on the hosting Azure service for your client application and then enable system assigned or user assigned managed identities on that Azure service instance. For example, you can enable managed identities on a [VM](../active-directory/managed-identities-azure-resources/qs-configure-portal-windows-vm.md), an [Azure App Service or Azure Functions](../app-service/overview-managed-identity.md?tabs=dotnet). +To authenticate your event publishing client using managed identities, first decide on the hosting Azure service for your client application and then enable system assigned or user assigned managed identities on that Azure service instance. For example, you can enable managed identities on a [VM](/entr?tabs=dotnet). Once you have a managed identity configured in a hosting service, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events). ## Authenticate using a security principal of a client application -Besides managed identities, another identity option is to create a security principal for your client application. To that end, you need to register your application with Microsoft Entra ID. Registering your application is a gesture through which you delegate identity and access management control to Microsoft Entra ID. Follow the steps in section [Register an application](../active-directory/develop/quickstart-register-app.md#register-an-application) and in section [Add a client secret](../active-directory/develop/quickstart-register-app.md#add-a-client-secret). Make sure to review the [prerequisites](../active-directory/develop/quickstart-register-app.md#prerequisites) before starting. +Besides managed identities, another identity option is to create a security principal for your client application. To that end, you need to register your application with Microsoft Entra ID. 
Registering your application is a gesture through which you delegate identity and access management control to Microsoft Entra ID. Follow the steps in section [Register an application](/entra/identity-platform/quickstart-register-app#register-an-application) and in section [Add a client secret](/entra/identity-platform/quickstart-register-app#add-a-client-secret). Make sure to review the [prerequisites](/entra/identity-platform/quickstart-register-app#prerequisites) before starting. Once you have an application security principal and followed the above steps, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events). > [!NOTE]-> When you register an application in the portal, an [application object](../active-directory/develop/app-objects-and-service-principals.md#application-object) and a [service principal](../active-directory/develop/app-objects-and-service-principals.md#service-principal-object) are created automatically in your home tenant. Alternatively, you can use Microsot Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step. +> When you register an application in the portal, an [application object](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#application-object) and a [service principal](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) are created automatically in your home tenant. Alternatively, you can use Microsoft Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step. ## Assign permission to a security principal to publish events Following are the prerequisites to authenticate to Event Grid.
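The SDK samples in this article build an `EventGridEvent` from a subject, event type, data version, and data. As a sketch of the JSON the service expects, here is that shape in Python; field names follow the Event Grid event schema, and the `id`/`eventTime` handling mirrors what the SDK does when you omit them (treat this as an illustration, not the SDK's implementation):

```python
import datetime
import json
import uuid

def make_event(subject: str, event_type: str, data_version: str, data) -> dict:
    """Build one event in the Event Grid event schema."""
    return {
        "id": str(uuid.uuid4()),  # the SDK generates this when you omit it
        "subject": subject,
        "eventType": event_type,
        "eventTime": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "dataVersion": data_version,
        "data": data,
    }

event = make_event("ExampleEventSubject", "Example.EventType", "1.0",
                   {"message": "This is the event data"})
body = json.dumps([event])  # Event Grid accepts a JSON array of events

assert {"id", "subject", "eventType", "eventTime", "dataVersion"} <= set(event)
```

When you use the SDK's `EventGridEvent` type, it serializes an equivalent payload and attaches the access token for you.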
### Publish events using Microsoft Entra Authentication -To send events to a topic, domain, or partner namespace, you can build the client in the following way. The api version that first provided support for Microsoft Entra authentication is ``2018-01-01``. Use that API version or a more recent version in your application. +To send events to a topic, domain, or partner namespace, you can build the client in the following way. The API version that first provided support for Microsoft Entra authentication is ``2018-01-01``. Use that API version or a more recent version in your application. Sample: For more information, see the following articles: ## Disable key and shared access signature authentication -Microsoft Entra authentication provides a superior authentication support than that's offered by access key or Shared Access Signature (SAS) token authentication. With Microsoft Entra authentication, the identity is validated against Microsoft Entra identity provider. As a developer, you won't have to handle keys in your code if you use Microsoft Entra authentication. You'll also benefit from all security features built into the Microsoft Identity platform, such as [Conditional Access](../active-directory/conditional-access/overview.md) that can help you improve your application's security stance. +Microsoft Entra authentication provides superior authentication support compared to that offered by access key or Shared Access Signature (SAS) token authentication. With Microsoft Entra authentication, the identity is validated against the Microsoft Entra identity provider. As a developer, you won't have to handle keys in your code if you use Microsoft Entra authentication. You'll also benefit from all security features built into the Microsoft Identity platform, such as [Conditional Access](/entra/identity/conditional-access/overview) that can help you improve your application's security stance.
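For contrast, the key-based mechanism this section recommends moving away from produces self-contained signed tokens. A rough Python sketch of the SAS token shape (`r=<resource>&e=<expiry>&s=<signature>`, signed with HMAC-SHA256 using the base64-decoded access key) is below; the exact format is covered in the linked keys/SAS article, which should be treated as authoritative, and all values here are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def build_sas_token(resource_uri: str, access_key_b64: str, expiry: str) -> str:
    """Sketch of the Event Grid SAS token shape: r=<resource>&e=<expiry>&s=<sig>."""
    unsigned = (
        f"r={urllib.parse.quote_plus(resource_uri)}"
        f"&e={urllib.parse.quote_plus(expiry)}"
    )
    signature = base64.b64encode(
        hmac.new(base64.b64decode(access_key_b64), unsigned.encode(),
                 hashlib.sha256).digest()
    ).decode()
    return f"{unsigned}&s={urllib.parse.quote_plus(signature)}"

# Hypothetical endpoint and key, for illustration only.
token = build_sas_token(
    "https://contoso.westus2-1.eventgrid.azure.net/api/events",
    base64.b64encode(b"not-a-real-key").decode(),
    "6/15/2025 12:00:00 AM",
)
```

Note the contrast with Microsoft Entra ID: here the key itself must be stored and protected by your application, which is exactly the burden token-based authentication removes.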
Once you decide to use Microsoft Entra authentication, you can disable authentication based on access keys or SAS tokens. Once you decide to use Microsoft Entra authentication, you can disable authentic When creating a new topic, you can disable local authentication on the **Advanced** tab of the **Create Topic** page. For an existing topic, follow these steps to disable local authentication: 1. Navigate to the **Event Grid Topic** page for the topic, and select **Enabled** under **Local Authentication**. - :::image type="content" source="./media/authenticate-with-active-directory/existing-topic-local-auth.png" alt-text="Screenshot showing the Overview page of an existing topic."::: + :::image type="content" source="./media/authenticate-with-microsoft-entra-id/existing-topic-local-auth.png" alt-text="Screenshot showing the Overview page of an existing topic."::: 2. In the **Local Authentication** popup window, select **Disabled**, and select **OK**. - :::image type="content" source="./media/authenticate-with-active-directory/local-auth-popup.png" alt-text="Screenshot showing the Local Authentication window."::: + :::image type="content" source="./media/authenticate-with-microsoft-entra-id/local-auth-popup.png" alt-text="Screenshot showing the Local Authentication window."::: ### Azure CLI New-AzResource -ResourceGroupName <ResourceGroupName> -ResourceType Microsoft.Ev - [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md) - [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/identity/azure-identity/README.md) - [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md)-- Learn about [managed identities](../active-directory/managed-identities-azure-resources/overview.md)+- Learn about [managed identities](/entra/identity/managed-identities-azure-resources/overview) - Learn about [how to use managed identities for App Service and Azure
Functions](../app-service/overview-managed-identity.md?tabs=dotnet)-- Learn about [applications and service principals](../active-directory/develop/app-objects-and-service-principals.md)-- Learn about [registering an application with the Microsoft Identity platform](../active-directory/develop/quickstart-register-app.md).+- Learn about [applications and service principals](/entra/identity-platform/app-objects-and-service-principals) +- Learn about [registering an application with the Microsoft Identity platform](/entra/identity-platform/quickstart-register-app). - Learn about how [authorization](../role-based-access-control/overview.md) (RBAC access control) works. - Learn about Event Grid built-in RBAC roles including its [Event Grid Data Sender](../role-based-access-control/built-in-roles.md#eventgrid-data-sender) role. [Event Grid's roles list](security-authorization.md#built-in-roles). - Learn about [assigning RBAC roles](../role-based-access-control/role-assignments-portal.md?tabs=current) to identities. - Learn about how to define [custom RBAC roles](../role-based-access-control/custom-roles.md).-- Learn about [application and service principal objects in Microsoft Entra ID](../active-directory/develop/app-objects-and-service-principals.md).-- Learn about [Microsoft Identity Platform access tokens](../active-directory/develop/access-tokens.md).-- Learn about [OAuth 2.0 authentication code flow and Microsoft Identity Platform](../active-directory/develop/v2-oauth2-auth-code-flow.md)+- Learn about [application and service principal objects in Microsoft Entra ID](/entra/identity-platform/app-objects-and-service-principals). +- Learn about [Microsoft Identity Platform access tokens](/entra/identity-platform/access-tokens). +- Learn about [OAuth 2.0 authentication code flow and Microsoft Identity Platform](/entra/identity-platform/v2-oauth2-auth-code-flow) |
event-grid | Authenticate With Microsoft Entra Id | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authenticate-with-microsoft-entra-id.md | + + Title: Authenticate Event Grid publishing clients using Microsoft Entra ID +description: This article describes how to authenticate Azure Event Grid publishing clients using Microsoft Entra ID. +++ - build-2023 + - ignite-2023 Last updated : 11/15/2023+++# Authentication and authorization with Microsoft Entra ID +This article describes how to authenticate Azure Event Grid publishing clients using Microsoft Entra ID. ++## Overview +The [Microsoft Identity](/entra/identity-platform/v2-overview) platform provides integrated authentication and access management for resources and applications that use Microsoft Entra ID as their identity provider. Use the Microsoft identity platform to provide authentication and authorization support in your applications. It's based on open standards such as OAuth 2.0 and OpenID Connect and offers tools and open-source libraries that support many authentication scenarios. It provides advanced features such as [Conditional Access](/entra/identity/conditional-access/overview) that allows you to set policies that require multifactor authentication or allow access from specific locations, for example. ++An advantage that improves your security stance when using Microsoft Entra ID is that you don't need to store credentials, such as authentication keys, in the code or repositories. Instead, you rely on the acquisition of OAuth 2.0 access tokens from the Microsoft identity platform that your application presents when authenticating to a protected resource. You can register your event publishing application with Microsoft Entra ID and obtain a service principal associated with your app that you manage and use.
Alternatively, you can use [Managed Identities](/entra/identity/managed-identities-azure-resources/overview), either system assigned or user assigned, for an even simpler identity management model as some aspects of the identity lifecycle are managed for you. ++[Role-based access control (RBAC)](/entra/identity-platform/custom-rbac-for-developers) allows you to configure authorization in a way that certain security principals (identities for users, groups, or apps) have specific permissions to execute operations over Azure resources. This way, the security principal used by a client application that sends events to Event Grid must have the RBAC role **EventGrid Data Sender** associated with it. ++### Security principals +There are two broad categories of security principals that are applicable when discussing authentication of an Event Grid publishing client: ++- **Managed identities**. A managed identity can be system assigned, which you enable on an Azure resource and is associated to only that resource, or user assigned, which you explicitly create and name. User assigned managed identities can be associated to more than one resource. +- **Application security principal**. It's a type of security principal that represents an application, which accesses resources protected by Microsoft Entra ID. ++Regardless of the security principal used, a managed identity or an application security principal, your client uses that identity to authenticate before Microsoft Entra ID and obtain an [OAuth 2.0 access token](/entra/identity-platform/access-tokens) that's sent with requests when sending events to Event Grid. That token is cryptographically signed and once Event Grid receives it, the token is validated. For example, the audience (the intended recipient of the token) is confirmed to be Event Grid (`https://eventgrid.azure.net`), among other things. The token contains information about the client identity.
Event Grid takes that identity and validates that the client has the role **EventGrid Data Sender** assigned to it. More precisely, Event Grid validates that the identity has the ``Microsoft.EventGrid/events/send/action`` permission in an RBAC role associated to the identity before allowing the event publishing request to complete. + +If you're using the Event Grid SDK, you don't need to worry about the details on how to implement the acquisition of access tokens and how to include it with every request to Event Grid because the [Event Grid data plane SDKs](#publish-events-using-event-grids-client-sdks) do that for you. ++<a name='client-configuration-steps-to-use-azure-ad-authentication'></a> ++### Client configuration steps to use Microsoft Entra authentication +Perform the following steps to configure your client to use Microsoft Entra authentication when sending events to a topic, domain, or partner namespace. ++1. Create or use a security principal you want to use to authenticate. You can use a [managed identity](#authenticate-using-a-managed-identity) or an [application security principal](#authenticate-using-a-security-principal-of-a-client-application). +2. [Grant permission to a security principal to publish events](#assign-permission-to-a-security-principal-to-publish-events) by assigning the **EventGrid Data Sender** role to the security principal. +3. Use the Event Grid SDK to publish events to Event Grid. ++## Authenticate using a managed identity ++Managed identities are identities associated with Azure resources. Managed identities provide an identity that applications use when using Azure resources that support Microsoft Entra authentication. Applications may use the managed identity of the hosting resource like a virtual machine or Azure App Service to obtain Microsoft Entra tokens that are presented with the request when publishing events to Event Grid. When the application connects, Event Grid binds the managed identity's context to the client.
Once it's associated with a managed identity, your Event Grid publishing client can do all authorized operations. Authorization is granted by associating a managed identity with an Event Grid RBAC role. ++Managed identity provides Azure services with an automatically managed identity in Microsoft Entra ID. In contrast to other authentication methods, you don't need to store and protect access keys or Shared Access Signatures (SAS) in your application code or configuration, either for the identity itself or for the resources you need to access. ++To authenticate your event publishing client using managed identities, first decide on the hosting Azure service for your client application and then enable system assigned or user assigned managed identities on that Azure service instance. For example, you can enable managed identities on a [VM](/entr?tabs=dotnet). ++Once you have a managed identity configured in a hosting service, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events). ++## Authenticate using a security principal of a client application ++Besides managed identities, another identity option is to create a security principal for your client application. To that end, you need to register your application with Microsoft Entra ID. Registering your application is a gesture through which you delegate identity and access management control to Microsoft Entra ID. Follow the steps in section [Register an application](/entra/identity-platform/quickstart-register-app#register-an-application) and in section [Add a client secret](/entra/identity-platform/quickstart-register-app#add-a-client-secret). Make sure to review the [prerequisites](/entra/identity-platform/quickstart-register-app#prerequisites) before starting.
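Under the hood, the registered application's client ID and secret are exchanged for an access token through the OAuth 2.0 client credentials flow. The sketch below only constructs (and does not send) the form body that the Azure Identity library submits to the Microsoft identity platform v2.0 token endpoint; all identifier values are placeholders:

```python
import urllib.parse

# Placeholder tenant/app values; ClientSecretCredential (or DefaultAzureCredential
# reading AZURE_CLIENT_ID / AZURE_TENANT_ID / AZURE_CLIENT_SECRET) performs an
# equivalent request for you.
tenant_id = "<tenant id>"
token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"

form_body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": "<application (client) id>",
    "client_secret": "<client secret>",
    # /.default requests a token whose audience is the Event Grid resource
    "scope": "https://eventgrid.azure.net/.default",
})
```

In practice you never build this request yourself; the SDK's credential types handle acquisition, caching, and refresh of the token.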
++Once you have an application security principal and followed the above steps, [assign the permission to publish events to that identity](#assign-permission-to-a-security-principal-to-publish-events). ++> [!NOTE] +> When you register an application in the portal, an [application object](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#application-object) and a [service principal](/entra/identity-platform/app-objects-and-service-principals?tabs=browser#service-principal-object) are created automatically in your home tenant. Alternatively, you can use Microsoft Graph to register your application. However, if you register or create an application using the Microsoft Graph APIs, creating the service principal object is a separate step. +++## Assign permission to a security principal to publish events ++The identity used to publish events to Event Grid must have the permission ``Microsoft.EventGrid/events/send/action`` that allows it to send events to Event Grid. That permission is included in the built-in RBAC role [Event Grid Data Sender](../role-based-access-control/built-in-roles.md#eventgrid-data-sender). This role can be assigned to a [security principal](../role-based-access-control/overview.md#security-principal), for a given [scope](../role-based-access-control/overview.md#scope), which can be a management group, an Azure subscription, a resource group, or a specific Event Grid topic, domain, or partner namespace. Follow the steps in [Assign Azure roles](../role-based-access-control/role-assignments-portal.md?tabs=current) to assign a security principal the **EventGrid Data Sender** role and in that way grant an application using that security principal access to send events. Alternatively, you can define a [custom role](../role-based-access-control/custom-roles.md) that includes the ``Microsoft.EventGrid/events/send/action`` permission and assign that custom role to your security principal.
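Conceptually, the authorization check walks the actions granted by the caller's role assignments and tests whether any of them covers the requested action. The following is a deliberately simplified sketch of that matching (role definitions may use `*` wildcards); real Azure RBAC evaluation also considers NotActions/NotDataActions, case-insensitivity, and scope inheritance, which this sketch ignores:

```python
import fnmatch

def allows(granted_actions: list[str], requested_action: str) -> bool:
    """Simplified RBAC action check: patterns from a role definition may
    contain '*' wildcards, which match any remaining path segments."""
    return any(fnmatch.fnmatchcase(requested_action, pattern)
               for pattern in granted_actions)

SEND = "Microsoft.EventGrid/events/send/action"

# The Event Grid Data Sender role grants exactly the send permission.
assert allows([SEND], SEND)
# A broad wildcard grant also covers it.
assert allows(["Microsoft.EventGrid/*"], SEND)
# A read-only grant does not.
assert not allows(["Microsoft.EventGrid/topics/read"], SEND)
```

This is why either the built-in **Event Grid Data Sender** role or a custom role containing the same action satisfies the check.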
++With RBAC privileges taken care of, you can now [build your client application to send events](#publish-events-using-event-grids-client-sdks) to Event Grid. ++> [!NOTE] +> Event Grid supports more RBAC roles for purposes beyond sending events. For more information, see [Event Grid built-in roles](security-authorization.md#built-in-roles). +++## Publish events using Event Grid's client SDKs ++Use [Event Grid's data plane SDK](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) to publish events to Event Grid. Event Grid's SDKs support all authentication methods, including Microsoft Entra authentication. ++Here's the sample code that publishes events to Event Grid using the .NET SDK. You can get the topic endpoint on the **Overview** page for your Event Grid topic in the Azure portal. It's in the format: `https://<TOPIC-NAME>.<REGION>-1.eventgrid.azure.net/api/events`. ++```csharp +ManagedIdentityCredential managedIdentityCredential = new ManagedIdentityCredential(); +EventGridPublisherClient client = new EventGridPublisherClient( new Uri("<TOPIC ENDPOINT>"), managedIdentityCredential); +++EventGridEvent egEvent = new EventGridEvent( + "ExampleEventSubject", + "Example.EventType", + "1.0", + "This is the event data"); ++// Send the event +await client.SendEventAsync(egEvent); +``` ++### Prerequisites ++Following are the prerequisites to authenticate to Event Grid. ++- Install the SDK on your application. + - [Java](/java/api/overview/azure/messaging-eventgrid-readme#include-the-package) + - [.NET](/dotnet/api/overview/azure/messaging.eventgrid-readme#install-the-package) + - [JavaScript](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/eventgrid/eventgrid#install-the-azureeventgrid-package) + - [Python](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/eventgrid/azure-eventgrid#install-the-package) +- Install the Azure Identity client library. The Event Grid SDK depends on the Azure Identity client library for authentication.
+ - [Azure Identity client library for Java](/java/api/overview/azure/identity-readme) + - [Azure Identity client library for .NET](/dotnet/api/overview/azure/identity-readme) + - [Azure Identity client library for JavaScript](/javascript/api/overview/azure/identity-readme) + - [Azure Identity client library for Python](/python/api/overview/azure/identity-readme) +- A topic, domain, or partner namespace to which your application sends events. ++<a name='publish-events-using-azure-ad-authentication'></a> ++### Publish events using Microsoft Entra authentication ++To send events to a topic, domain, or partner namespace, you can build the client in the following way. The API version that first provided support for Microsoft Entra authentication is ``2018-01-01``. Use that API version or a more recent version in your application. ++Sample: ++This C# snippet creates an Event Grid publisher client using an Application (Service Principal) with a client secret. To enable the DefaultAzureCredential method, you need to add the [Azure.Identity library](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md). If you're using the official SDK, the SDK handles the version for you.
++```csharp +Environment.SetEnvironmentVariable("AZURE_CLIENT_ID", ""); +Environment.SetEnvironmentVariable("AZURE_TENANT_ID", ""); +Environment.SetEnvironmentVariable("AZURE_CLIENT_SECRET", ""); ++EventGridPublisherClient client = new EventGridPublisherClient(new Uri("your-event-grid-topic-domain-or-partner-namespace-endpoint"), new DefaultAzureCredential()); +``` ++For more information, see the following articles: ++- [Azure Event Grid client library for Java](/java/api/overview/azure/messaging-eventgrid-readme) +- [Azure Event Grid client library for .NET](https://github.com/Azure/azure-sdk-for-net/tree/main/sdk/eventgrid/Azure.Messaging.EventGrid#authenticate-using-azure-active-directory) +- [Azure Event Grid client library for JavaScript](/javascript/api/overview/azure/eventgrid-readme) +- [Azure Event Grid client library for Python](/python/api/overview/azure/eventgrid-readme) ++## Disable key and shared access signature authentication ++Microsoft Entra authentication provides superior authentication support compared to that offered by access key or Shared Access Signature (SAS) token authentication. With Microsoft Entra authentication, the identity is validated against the Microsoft Entra identity provider. As a developer, you won't have to handle keys in your code if you use Microsoft Entra authentication. You'll also benefit from all security features built into the Microsoft Identity platform, such as [Conditional Access](/entra/identity/conditional-access/overview) that can help you improve your application's security stance. ++Once you decide to use Microsoft Entra authentication, you can disable authentication based on access keys or SAS tokens. ++> [!NOTE] +> Access key or SAS token authentication is a form of **local authentication**. You'll sometimes hear "local auth" used when discussing this category of authentication mechanisms that don't rely on Microsoft Entra ID.
The API parameter used to disable local authentication is called, appropriately so, ``disableLocalAuth``. ++### Azure portal ++When creating a new topic, you can disable local authentication on the **Advanced** tab of the **Create Topic** page. +++For an existing topic, follow these steps to disable local authentication: ++1. Navigate to the **Event Grid Topic** page for the topic, and select **Enabled** under **Local Authentication**. ++ :::image type="content" source="./media/authenticate-with-microsoft-entra-id/existing-topic-local-auth.png" alt-text="Screenshot showing the Overview page of an existing topic."::: +2. In the **Local Authentication** popup window, select **Disabled**, and select **OK**. ++ :::image type="content" source="./media/authenticate-with-microsoft-entra-id/local-auth-popup.png" alt-text="Screenshot showing the Local Authentication window."::: +++### Azure CLI +The following CLI command shows the way to create a custom topic with local authentication disabled. The disable local auth feature is currently available as a preview and you need to use API version ``2021-06-01-preview``. ++```cli +az resource create --subscription <subscriptionId> --resource-group <resourceGroup> --resource-type Microsoft.EventGrid/topics --api-version 2021-06-01-preview --name <topicName> --location <location> --properties "{ \"disableLocalAuth\": true}" +``` ++For your reference, the following are the resource type values that you can use according to the topic you're creating or updating. ++| Topic type | Resource type | +| | :| +| Domains | Microsoft.EventGrid/domains | +| Partner Namespace | Microsoft.EventGrid/partnerNamespaces| +| Custom Topic | Microsoft.EventGrid/topics | ++### Azure PowerShell ++If you're using PowerShell, use the following cmdlets to create a custom topic with local authentication disabled.
++```PowerShell ++Set-AzContext -SubscriptionId <SubscriptionId> ++New-AzResource -ResourceGroupName <ResourceGroupName> -ResourceType Microsoft.EventGrid/topics -ApiVersion 2021-06-01-preview -ResourceName <TopicName> -Location <Location> -Properties @{disableLocalAuth=$true} +``` ++> [!NOTE] +> - To learn about using the access key or shared access signature authentication, see [Authenticate publishing clients with keys or SAS tokens](security-authenticate-publishing-clients.md) +> - This article deals with authentication when publishing events to Event Grid (event ingress). Authenticating Event Grid when delivering events (event egress) is the subject of article [Authenticate event delivery to event handlers](security-authentication.md). ++## Resources +- Data plane SDKs + - Java SDK: [GitHub](https://github.com/Azure/azure-sdk-for-jav) + - .NET SDK: [GitHub](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventgrid/Azure.Messaging.EventGrid) | [samples](https://github.com/Azure/azure-sdk-for-net/tree/master/sdk/eventgrid/Azure.Messaging.EventGrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-net/blob/master/sdk/eventgrid/Azure.Messaging.EventGrid/MigrationGuide.md) + - Python SDK: [GitHub](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventgrid/azure-eventgrid) | [samples](https://github.com/Azure/azure-sdk-for-python/tree/master/sdk/eventgrid/azure-eventgrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-python/blob/master/sdk/eventgrid/azure-eventgrid/migration_guide.md) + - JavaScript SDK: [GitHub](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/) | [samples](https://github.com/Azure/azure-sdk-for-js/blob/master/sdk/eventgrid/eventgrid/samples) | [migration guide from previous SDK version](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/eventgrid/eventgrid/MIGRATION.md) +- [Event Grid SDK 
blog](https://devblogs.microsoft.com/azure-sdk/event-grid-ga/) +- Azure Identity client library + - [Java](https://github.com/Azure/azure-sdk-for-jav) + - [.NET](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/identity/Azure.Identity/README.md) + - [Python](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/identity/azure-identity/README.md) + - [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/identity/identity/README.md) +- Learn about [managed identities](/entra/identity/managed-identities-azure-resources/overview) +- Learn about [how to use managed identities for App Service and Azure Functions](../app-service/overview-managed-identity.md?tabs=dotnet) +- Learn about [applications and service principals](/entra/identity-platform/app-objects-and-service-principals) +- Learn about [registering an application with the Microsoft Identity platform](/entra/identity-platform/quickstart-register-app). +- Learn about how [authorization](../role-based-access-control/overview.md) (RBAC access control) works. +- Learn about Event Grid built-in RBAC roles including its [Event Grid Data Sender](../role-based-access-control/built-in-roles.md#eventgrid-data-sender) role. [Event Grid's roles list](security-authorization.md#built-in-roles). +- Learn about [assigning RBAC roles](../role-based-access-control/role-assignments-portal.md?tabs=current) to identities. +- Learn about how to define [custom RBAC roles](../role-based-access-control/custom-roles.md). +- Learn about [application and service principal objects in Microsoft Entra ID](/entra/identity-platform/app-objects-and-service-principals). +- Learn about [Microsoft Identity Platform access tokens](/entra/identity-platform/access-tokens). +- Learn about [OAuth 2.0 authentication code flow and Microsoft Identity Platform](/entra/identity-platform/v2-oauth2-auth-code-flow) |
event-grid | Authentication Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/authentication-overview.md | Authentication for clients publishing events to Event Grid is supported using th > [!IMPORTANT] > Microsoft Entra authentication isn't supported for namespace topics. -<a name='authenticate-using-azure-active-directory'></a> - ## Authenticate using Microsoft Entra ID-Microsoft Entra integration for Event Grid resources provides Azure role-based access control (RBAC) for fine-grained control over a clientΓÇÖs access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. Microsoft Entra authenticates the security principal and returns an OAuth 2.0 token. The token can be used to authorize a request to access Event Grid resources (topics, domains, or partner namespaces). For detailed information, see [Authenticate and authorize with the Microsoft identity platform](authenticate-with-active-directory.md). +Microsoft Entra integration for Event Grid resources provides Azure role-based access control (RBAC) for fine-grained control over a clientΓÇÖs access to resources. You can use Azure RBAC to grant permissions to a security principal, which may be a user, a group, or an application service principal. Microsoft Entra authenticates the security principal and returns an OAuth 2.0 token. The token can be used to authorize a request to access Event Grid resources (topics, domains, or partner namespaces). For detailed information, see [Authenticate and authorize with the Microsoft identity platform](authenticate-with-microsoft-entra-id.md). > [!IMPORTANT] |
event-grid | Configure Custom Topic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/configure-custom-topic.md | When you use Azure portal, you can assign one system assigned identity and up to :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Screenshot showing the Identity page for a custom topic."::: ### To assign a user-assigned identity to a topic-1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article. +1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) article. 1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar. :::image type="content" source="./media/managed-service-identity/user-assigned-identity-add-button.png" alt-text="Screenshot showing the User Assigned Identity tab of the Identity page."::: |
event-grid | Create Custom Topic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/create-custom-topic.md | On the **Security** page of the **Create Topic** or **Create Event Grid Domain* :::image type="content" source="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png" alt-text="Screenshot of the Identity page with user assigned identity option selected." lightbox="./media/managed-service-identity/create-page-add-user-assigned-identity-link.png"::: 1. To disable local authentication, select **Disabled**. When you do, the topic or domain can't be accessed using access-key and SAS authentication, but only via Microsoft Entra authentication. - :::image type="content" source="./media/authenticate-with-active-directory/create-topic-disable-local-auth.png" alt-text="Screenshot showing the Advanced tab of Create Topic page when you can disable local authentication."::: + :::image type="content" source="./media/authenticate-with-microsoft-entra-id/create-topic-disable-local-auth.png" alt-text="Screenshot showing the Advanced tab of Create Topic page when you can disable local authentication."::: 1. Select **Advanced** at the bottom of the page to switch to the **Advanced** page. ## Advanced page |
event-grid | Enable Identity Custom Topics Domains | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/enable-identity-custom-topics-domains.md | Last updated 07/21/2022 # Assign a managed identity to an Event Grid custom topic or domain -This article shows you how to use the Azure portal and CLI to assign a system-assigned or a user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to an Event Grid custom topic or a domain. +This article shows you how to use the Azure portal and CLI to assign a system-assigned or a user-assigned [managed identity](/entra/identity/managed-identities-azure-resources/overview) to an Event Grid custom topic or a domain. ## Enable identity when creating a topic or domain When you use Azure portal, you can assign one system assigned identity and up to The following procedures show you how to enable an identity for a custom topic. The steps for enabling an identity for a domain are similar. 1. Go to the [Azure portal](https://portal.azure.com).-2. Search for **event grid topics** in the search bar at the top. +2. Search for **Event Grid topics** in the search bar at the top. 3. Select the **custom topic** for which you want to enable the managed identity. 4. Select **Identity** on the left menu. The following procedures show you how to enable an identity for a custom topic. :::image type="content" source="./media/managed-service-identity/identity-existing-topic.png" alt-text="Identity page for a custom topic"::: ### To assign a user-assigned identity to a topic-1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article. +1. 
Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) article. 1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar. :::image type="content" source="./media/managed-service-identity/user-assigned-identity-add-button.png" alt-text="Screenshot showing the User Assigned Identity tab"::: The following procedures show you how to enable an identity for a custom topic. 1. Select **Add**. 1. Refresh the list in the **User assigned** tab to see the added user-assigned identity. -You can use similar steps to enable an identity for an event grid domain. +You can use similar steps to enable an identity for an Event Grid domain. # [Azure CLI](#tab/cli) You can also use Azure CLI to assign a system-assigned identity to an existing custom topic or domain. Currently, Azure CLI doesn't support assigning a user-assigned identity to a topic or a domain. |
event-grid | Enable Identity Partner Topic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/enable-identity-partner-topic.md | Last updated 07/21/2022 # Assign a managed identity to an Azure Event Grid partner topic -This article shows you how to use the Azure portal to assign a system-assigned or a user-assigned [managed identity](../active-directory/managed-identities-azure-resources/overview.md) to an Event Grid partner topic. When you use the Azure portal, you can assign one system assigned identity and up to two user assigned identities to an existing partner topic. +This article shows you how to use the Azure portal to assign a system-assigned or a user-assigned [managed identity](/entra/identity/managed-identities-azure-resources/overview) to an Event Grid partner topic. When you use the Azure portal, you can assign one system assigned identity and up to two user assigned identities to an existing partner topic. ## Navigate to your partner topic 1. Go to the [Azure portal](https://portal.azure.com). This article shows you how to use the Azure portal to assign a system-assigned o :::image type="content" source="./media/enable-identity-partner-topic/identity-existing-topic.png" alt-text="Screenshot showing the Identity page for a partner topic."::: ## Assign a user-assigned identity-1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article. +1. Create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) article. 1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar. 
:::image type="content" source="./media/enable-identity-partner-topic/user-assigned-identity-add-button.png" alt-text="Screenshot showing the User Assigned Identity tab"::: |
event-grid | Enable Identity System Topics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/enable-identity-system-topics.md | Last updated 11/02/2021 # Assign a system-managed identity to an Event Grid system topic-In this article, you learn how to assign a system-assigned or a user-assigned identity to an Event Grid system topic. To learn about managed identities in general, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). +In this article, you learn how to assign a system-assigned or a user-assigned identity to an Event Grid system topic. To learn about managed identities in general, see [What are managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview). > [!NOTE] > - You can assign one system-assigned identity and up to two user-assigned identities to a system topic. This section shows you how to enable a managed identity for an existing system t ### Enable user-assigned identity -1. First, create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article. +1. First, create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) article. 1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar. :::image type="content" source="./media/managed-service-identity/system-topic-user-identity-add-button.png" alt-text="Image showing the Add button selected in the User assigned tab of the Identity page."::: |
event-grid | Event Grid Namespace Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-grid-namespace-managed-identity.md | -In this article, you learn how to assign a system-assigned or a user-assigned identity to an Event Grid namespace. To learn about managed identities in general, see [What are managed identities for Azure resources](../active-directory/managed-identities-azure-resources/overview.md). +In this article, you learn how to assign a system-assigned or a user-assigned identity to an Event Grid namespace. To learn about managed identities in general, see [What are managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview). > [!NOTE] > - You can assign one system-assigned identity and up to two user-assigned identities to a namespace. This section shows you how to enable a managed identity for an existing system t ### Enable user-assigned identity -1. First, create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md) article. +1. First, create a user-assigned identity by following instructions in the [Manage user-assigned managed identities](/entra/identity/managed-identities-azure-resources/how-manage-user-assigned-managed-identities) article. 1. On the **Identity** page, switch to the **User assigned** tab in the right pane, and then select **+ Add** on the toolbar. |
event-grid | Managed Service Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/managed-service-identity.md | Last updated 03/25/2021 # Event delivery with a managed identity-This article describes how to use a [managed service identity](/entra/identity/managed-identities-azure-resources/overview) for an Azure Event Grid system topic, custom topic, or domain. Use it to forward events to supported destinations such as Service Bus queues and topics, event hubs, and storage accounts. az eventgrid event-subscription create ## Private endpoints Currently, it's not possible to deliver events using [private endpoints](../private-link/private-endpoint-overview.md). That is, there is no support if you have strict network isolation requirements where your delivered event traffic must not leave the private IP space. -However, if your requirements call for a secure way to send events using an encrypted channel and a known identity of the sender (in this case, Event Grid) using public IP space, you could deliver events to Event Hubs, Service Bus, or Azure Storage service using an Azure event grid custom topic or a domain with system-managed identity configured as shown in this article. Then, you can use a private link configured in Azure Functions or your webhook deployed on your virtual network to pull events. See the sample: [Connect to private endpoints with Azure Functions](/samples/azure-samples/azure-functions-private-endpoints/connect-to-private-endpoints-with-azure-functions/). 
+However, if your requirements call for a secure way to send events using an encrypted channel and a known identity of the sender (in this case, Event Grid) using public IP space, you could deliver events to Event Hubs, Service Bus, or Azure Storage service using an Azure Event Grid custom topic or a domain with system-managed identity configured as shown in this article. Then, you can use a private link configured in Azure Functions or your webhook deployed on your virtual network to pull events. See the tutorial: [Connect to private endpoints with Azure Functions](../azure-functions/functions-create-vnet.md). Under this configuration, the traffic goes over the public IP/internet from Event Grid to Event Hubs, Service Bus, or Azure Storage, but the channel can be encrypted and a managed identity of Event Grid is used. If you configure your Azure Functions or webhook deployed to your virtual network to use Event Hubs, Service Bus, or Azure Storage via private link, that section of the traffic stays within Azure. ## Next steps-To learn about managed identities, see [What are managed identities for Azure resources](/entra/identity/managed-identities-azure-resources/overview). |
event-grid | Microsoft Entra Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/microsoft-entra-events.md | + + Title: Microsoft Entra events +description: This article describes Microsoft Entra event types and provides event samples. + Last updated : 09/19/2023+++# Microsoft Entra events ++This article provides the properties and schema for Microsoft Entra events, which are published by Microsoft Graph API. For an introduction to event schemas, see [CloudEvents schema](cloud-event-schema.md). ++## Available event types +These events are triggered when a [User](/graph/api/resources/user) or [Group](/graph/api/resources/group) is created, updated, or deleted in Microsoft Entra ID, or when those resources are changed through the Microsoft Graph API. ++ | Event name | Description | + | - | -- | + | **Microsoft.Graph.UserUpdated** | Triggered when a user in Microsoft Entra ID is created or updated. | + | **Microsoft.Graph.UserDeleted** | Triggered when a user in Microsoft Entra ID is permanently deleted. | + | **Microsoft.Graph.GroupUpdated** | Triggered when a group in Microsoft Entra ID is created or updated. | + | **Microsoft.Graph.GroupDeleted** | Triggered when a group in Microsoft Entra ID is permanently deleted. | ++> [!NOTE] +> By default, deleting a user or a group is only a soft delete operation, which means that the user or group is marked as deleted but the user or group object still exists. Microsoft Graph sends an updated event when users are soft deleted. To permanently delete a user, navigate to the **Delete users** page in the Azure portal and select **Delete permanently**. Steps to permanently delete a group are similar. ++## Example event +When an event is triggered, the Event Grid service sends data about that event to subscribing destinations. This section contains an example of what that data would look like for each Microsoft Entra event. 
++### Microsoft.Graph.UserUpdated event ++```json +{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "updated",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>"
+ }
+}
+```
+### Microsoft.Graph.UserDeleted event ++```json +{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.UserDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Users/<user-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "deleted",
+ "clientState": "<guid>",
+ "resource": "Users/<user-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.User",
+ "@odata.id": "Users/<user-id>",
+ "id": "<user-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>"
+ }
+}
+```
++### Microsoft.Graph.GroupUpdated event ++```json +{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupUpdated",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z", 
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "updated",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>"
+ }
+}
+```
++### Microsoft.Graph.GroupDeleted event ++```json +{
+ "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
+ "type": "Microsoft.Graph.GroupDeleted",
+ "source": "/tenants/<tenant-id>/applications/<application-id>",
+ "subject": "Groups/<group-id>",
+ "time": "2022-05-24T22:24:31.3062901Z",
+ "datacontenttype": "application/json",
+ "specversion": "1.0",
+ "data": {
+ "changeType": "deleted",
+ "clientState": "<guid>",
+ "resource": "Groups/<group-id>",
+ "resourceData": {
+ "@odata.type": "#Microsoft.Graph.Group",
+ "@odata.id": "Groups/<group-id>",
+ "id": "<group-id>",
+ "organizationId": "<tenant-id>",
+ "eventTime": "2022-05-24T22:24:31.3062901Z",
+ "sequenceNumber": <sequence-number>
+ },
+ "subscriptionExpirationDateTime": "2022-05-24T23:21:19.3554403+00:00",
+ "subscriptionId": "<microsoft-graph-subscription-id>",
+ "tenantId": "<tenant-id>"
+ }
+}
+```
+++## Event properties ++An event has the following top-level data: ++| Property | Type | Description | +| -- | - | -- | +| `source` | string | The tenant event source. This field isn't writeable. Microsoft Graph API provides this value. | +| `subject` | string | Publisher-defined path to the event subject. | +| `type` | string | One of the event types for this event source. | +| `time` | string | The time the event is generated based on the provider's UTC time. | +| `id` | string | Unique identifier for the event. 
| +| `data` | object | Event payload that provides the data about the resource state change. | +| `specversion` | string | CloudEvents schema specification version. | ++++The data object has the following properties: ++| Property | Type | Description | +| -- | - | -- | +| `changeType` | string | The type of resource state change. | +| `resource` | string | The resource identifier for which the event was raised. | +| `tenantId` | string | The organization ID where the user or group is kept. | +| `clientState` | string | A secret provided by the user at the time of the Graph API subscription creation. | +| `@odata.type` | string | The Graph API change type. | +| `@odata.id` | string | The Graph API resource identifier for which the event was raised. | +| `id` | string | The resource identifier for which the event was raised. | +| `organizationId` | string | The Microsoft Entra tenant identifier. | +| `eventTime` | string | The time when the resource state changed. | +| `sequenceNumber` | string | A sequence number. | +| `subscriptionExpirationDateTime` | string | The time in [RFC 3339](https://tools.ietf.org/html/rfc3339) format at which the Graph API subscription expires. | +| `subscriptionId` | string | The Graph API subscription identifier. | +| `tenantId` | string | The Microsoft Entra tenant identifier. | +++## Next steps ++* For an introduction to Azure Event Grid's Partner Events, see [Partner Events overview](partner-events-overview.md). +* For information on how to subscribe to Microsoft Graph API to receive Microsoft Entra events, see [subscribe to Microsoft Graph API events](subscribe-to-graph-api-events.md). +* For information about Azure Event Grid event handlers, see [event handlers](event-handlers.md). +* For more information about creating an Azure Event Grid subscription, see [create event subscription](subscribe-through-portal.md#create-event-subscriptions) and [Event Grid subscription schema](subscription-creation-schema.md). 
+* For information about how to configure an event subscription to select specific events to be delivered, see [event filtering](event-filtering.md). You may also want to refer to [filter events](how-to-filter-events.md). |
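The sample payloads and property tables in the row above can be consumed with ordinary JSON tooling. A minimal sketch, using a payload shaped like the `Microsoft.Graph.UserUpdated` example (all identifiers are placeholders, and some optional fields are omitted for brevity):

```python
import json

# Payload shaped like the Microsoft.Graph.UserUpdated sample above;
# identifiers are placeholders, not real values.
raw = """
{
  "id": "00d8a100-2e92-4bfa-86e1-0056dacd0fce",
  "type": "Microsoft.Graph.UserUpdated",
  "source": "/tenants/<tenant-id>/applications/<application-id>",
  "subject": "Users/<user-id>",
  "time": "2022-05-24T22:24:31.3062901Z",
  "specversion": "1.0",
  "data": {
    "changeType": "updated",
    "resource": "Users/<user-id>",
    "resourceData": {
      "@odata.type": "#Microsoft.Graph.User",
      "id": "<user-id>",
      "organizationId": "<tenant-id>"
    },
    "tenantId": "<tenant-id>"
  }
}
"""

event = json.loads(raw)
# Route on the CloudEvents `type` field, then read the Graph payload
# fields described in the property tables.
assert event["specversion"] == "1.0"
is_user_event = event["type"].startswith("Microsoft.Graph.User")
change_type = event["data"]["changeType"]
resource_id = event["data"]["resourceData"]["id"]
print(is_user_event, change_type, resource_id)
```

A handler for the group events would follow the same pattern, switching on `Microsoft.Graph.Group*` types instead.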
event-grid | Monitor Virtual Machine Changes Logic App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/monitor-virtual-machine-changes-logic-app.md | For example, here are some events that publishers can send to subscribers throug * A new message appears in a queue. -This tutorial creates a Consumption logic app resource that runs in [*multi-tenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through Azure Event Grid to the workflow. +This tutorial creates a Consumption logic app resource that runs in [*multitenant* Azure Logic Apps](../logic-apps/logic-apps-overview.md) and is based on the [Consumption pricing model](../logic-apps/logic-apps-pricing.md#consumption-pricing). Using this logic app resource, you create a workflow that monitors changes to a virtual machine, and sends emails about those changes. When you create a workflow that has an event subscription to an Azure resource, events flow from that resource through Azure Event Grid to the workflow. ![Screenshot showing the workflow designer with a workflow that monitors a virtual machine using Azure Event Grid.](./media/monitor-virtual-machine-changes-logic-app/monitor-virtual-machine-logic-app-overview.png) In this tutorial, you learn how to: > make sure that you create a *stateful* workflow, not a stateless workflow. This tutorial applies only > to Consumption logic apps, which follow a different user experience. To add Azure Event Grid operations > to your workflow in the designer, on the operations picker pane, make sure that you select the **Azure** tab. 
- > For more information about multi-tenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multi-tenant and integration service environment](../logic-apps/single-tenant-overview-compare.md). + > For more information about multitenant versus single-tenant Azure Logic Apps, review [Single-tenant versus multitenant and integration service environment](../logic-apps/single-tenant-overview-compare.md). 1. When you're done, select **Review + create**. On the next pane, confirm the provided information, and select **Create**. Now add the Azure Event Grid trigger, which you use to monitor the resource grou > > If you're signed in with a personal Microsoft account, such as @outlook.com or @hotmail.com, > the Azure Event Grid trigger might not appear correctly. As a workaround, select - > [Connect with Service Principal](../active-directory/develop/howto-create-service-principal-portal.md), + > [Connect with Service Principal](/entra/identity-platform/howto-create-service-principal-portal), > or authenticate as a member of the Microsoft Entra tenant that's associated with > your Azure subscription, for example, *user-name*@emailoutlook.onmicrosoft.com. |
event-grid | Mqtt Client Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-authentication.md | We support authentication of clients using X.509 certificates. X.509 certificat ## Supported authentication modes - Certificates issued by a Certificate Authority (CA)-- Self-signed client certificate - thumbprint based authentication+- Self-signed client certificate - thumbprint +- Microsoft Entra ID token -**Certificate Authority (CA) signed certificates:** +### Certificate Authority (CA) signed certificates In this method, a root or intermediate X.509 certificate is registered with the service. Essentially, the root or intermediate certificate that is used to sign the client certificate must be registered with the service first. While registering clients, you need to identify the certificate field used to ho :::image type="content" source="./media/mqtt-client-authentication/mqtt-client-certificate-chain-authentication-options.png" alt-text="Screenshot showing the client metadata with the five certificate chain based validation schemes."::: -**Self-signed client certificate - thumbprint based authentication:** +### Self-signed client certificate - thumbprint -Clients are onboarded to the service using the certificate thumbprint alongside the identity record. In this method of authentication, the client registry stores the exact thumbprint of the certificate that the client is going to use to authenticate. 
When a client tries to connect to the service, the service validates the client by comparing the thumbprint presented in the client certificate with the thumbprint stored in client metadata. :::image type="content" source="./media/mqtt-client-authentication/mqtt-client-metadata-thumbprint.png" alt-text="Screenshot showing the client metadata with thumbprint authentication scheme."::: While authenticating the client connection, in both modes of client authentication, we expect the client authentication name to be provided either in the username field of the connect packet or in one of the client certificate fields. -### Supported client certificate fields for alternative source of client authentication name +**Supported client certificate fields for alternative source of client authentication name** + You can use one of the following fields to provide client authentication name in the client certificate. | Authentication name source option | Certificate field | Description | You can use one of the following fields to provide client authentication name in | Certificate Ip | tls_client_auth_san_ip | The IPv4 or IPv6 address present in the iPAddress SAN entry in the certificate. | | Certificate Email | tls_client_auth_san_email | The rfc822Name SAN entry in the certificate. | +++### Microsoft Entra ID token ++You can authenticate MQTT clients with a Microsoft Entra JWT to connect to an Event Grid namespace. You can use Azure role-based access control (Azure RBAC) to grant MQTT clients with a Microsoft Entra identity publish or subscribe access to specific topic spaces. ++ ## High level flow of how mutual transport layer security (mTLS) connection is established To establish a secure connection with MQTT broker, you can use either MQTTS over port 8883 or MQTT over web sockets on port 443. It's important to note that only secure connections are supported. The following steps establish a secure connection prior to client authentication. 
To establish a secure connection with MQTT broker, you can use ## Next steps - Learn how to [authenticate clients using certificate chain](mqtt-certificate-chain-client-authentication.md)+- Learn how to [authenticate clients using a Microsoft Entra ID token](mqtt-client-azure-ad-token-and-rbac.md) |
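The thumbprint comparison described in the row above can be sketched as follows. This is an illustrative sketch, assuming the common Azure convention that a certificate thumbprint is the hex SHA-1 digest of the DER-encoded certificate; the in-memory registry and the helper names are hypothetical stand-ins for the service's client registry.

```python
import hashlib

def thumbprint(der_bytes: bytes) -> str:
    """Hex SHA-1 digest of the DER-encoded certificate (the usual
    Azure thumbprint convention)."""
    return hashlib.sha1(der_bytes).hexdigest().upper()

# Hypothetical client registry: authentication name -> stored thumbprint.
client_registry: dict[str, str] = {}

def register_client(auth_name: str, der_bytes: bytes) -> None:
    client_registry[auth_name] = thumbprint(der_bytes)

def validate_client(auth_name: str, presented_der: bytes) -> bool:
    """Compare the presented certificate's thumbprint with the one
    stored in the client's metadata."""
    stored = client_registry.get(auth_name)
    return stored is not None and stored == thumbprint(presented_der)

# Illustrative only: real DER bytes would come from the TLS handshake.
fake_cert = b"-----fake DER bytes-----"
register_client("device-01", fake_cert)
print(validate_client("device-01", fake_cert))       # matching certificate
print(validate_client("device-01", b"other bytes"))  # mismatch is rejected
```

The point of the exact-match comparison is that, unlike CA-chain validation, no issuer needs to be trusted: the registry entry itself is the trust anchor.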
event-grid | Mqtt Client Microsoft Entra Token And Rbac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-client-microsoft-entra-token-and-rbac.md | + + Title: Microsoft Entra JWT authentication and RBAC authorization for clients with Microsoft Entra identity +description: Describes JWT authentication and RBAC roles to authorize clients with Microsoft Entra identity to publish or subscribe to MQTT messages +++ - ignite-2023 Last updated : 11/15/2023+++++# Microsoft Entra JWT authentication and Azure RBAC authorization to publish or subscribe to MQTT messages ++You can authenticate MQTT clients with a Microsoft Entra JWT to connect to an Event Grid namespace. You can use Azure role-based access control (Azure RBAC) to grant MQTT clients with a Microsoft Entra identity publish or subscribe access to specific topic spaces. ++> [!IMPORTANT] +> - This feature is supported only when using MQTT v5 protocol version +> - JWT authentication is supported for Managed Identities and Service principals only ++## Prerequisites +- You need an Event Grid namespace with MQTT enabled. Learn about [creating an Event Grid namespace](/azure/event-grid/create-view-manage-namespaces#create-a-namespace) ++<a name='authentication-using-azure-ad-jwt'></a> ++## Authentication using Microsoft Entra JWT +You can use the MQTT v5 CONNECT packet to provide the Microsoft Entra JWT token to authenticate your client, and you can use the MQTT v5 AUTH packet to refresh the token. ++In the CONNECT packet, you can provide required values in the following fields: ++|Field | Value | +||| +|Authentication Method | OAUTH2-JWT | +|Authentication Data | JWT token | ++In the AUTH packet, you can provide required values in the following fields: ++|Field | Value | +||| +| Authentication Method | OAUTH2-JWT | +| Authentication Data | JWT token | +| Authentication Reason Code | 25 | + +An Authentication Reason Code with value 25 signifies reauthentication. 
++> [!NOTE] +> - Audience: "aud" claim must be set to "https://eventgrid.azure.net/". ++## Authorization to grant access permissions +A client using Microsoft Entra ID based JWT authentication needs to be authorized to communicate with the Event Grid namespace. You can assign the following two built-in roles to provide either publish or subscribe permissions to clients with Microsoft Entra identities. ++- Use the **EventGrid TopicSpaces Publisher** role to provide MQTT message publisher access +- Use the **EventGrid TopicSpaces Subscriber** role to provide MQTT message subscriber access ++You can use these roles to provide permissions at subscription, resource group, Event Grid namespace, or Event Grid topicspace scope. ++## Assigning the publisher role to your Microsoft Entra identity at topicspace scope ++1. In the Azure portal, navigate to your Event Grid namespace. +1. Navigate to the topicspace to which you want to authorize access. +1. Go to the Access control (IAM) page of the topicspace. +1. Select the **Role assignments** tab to view the role assignments at this scope. +1. Select **+ Add**, and then select **Add role assignment**. +1. On the Role tab, select the "EventGrid TopicSpaces Publisher" role. +1. On the Members tab, for **Assign access to**, select the User, group, or service principal option to assign the selected role to one or more service principals (applications). +1. Select **+ Select members**. +1. Find and select the service principals. +1. Select **Next**. +1. Select **Review + assign** on the Review + assign tab. ++> [!NOTE] +> You can follow similar steps to assign the built-in EventGrid TopicSpaces Subscriber role at topicspace scope. 
++## Next steps +- See [Publish and subscribe to MQTT messages using Event Grid](mqtt-publish-and-subscribe-portal.md) +- To learn how managed identities work, see [How managed identities for Azure resources work with Azure virtual machines - Microsoft Entra](/entra/identity/managed-identities-azure-resources/how-managed-identities-work-vm) +- To learn how to obtain tokens from Microsoft Entra ID, see [obtaining Microsoft Entra tokens](/entra/identity-platform/v2-oauth2-client-creds-grant-flow#get-a-token) +- To learn about the Azure Identity client library, see [using the Azure Identity client library](/entra/identity/managed-identities-azure-resources/how-to-use-vm-token#get-a-token-using-the-azure-identity-client-library) +- To learn about implementing an interface for credentials that can provide a token, see the [TokenCredential interface](/java/api/com.azure.core.credential.tokencredential) +- To learn how to authenticate using Azure Identity, see these [examples](https://github.com/Azure/azure-sdk-for-java/wiki/Azure-Identity-Examples) +- If you prefer to use custom roles, review the process to [create a custom role](/azure/role-based-access-control/custom-roles-portal) |
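The client-credentials flow linked above boils down to one POST against the tenant's token endpoint, with the scope chosen so the resulting token carries the required `https://eventgrid.azure.net/` audience. A hedged, standard-library-only sketch that builds (but does not send) that request; the tenant ID, client ID, and secret below are placeholders:

```python
import urllib.parse
import urllib.request

# Placeholder values -- replace with your tenant and app registration details.
tenant_id = "00000000-0000-0000-0000-000000000000"
client_id = "11111111-1111-1111-1111-111111111111"
client_secret = "<secret>"

token_url = f"https://login.microsoftonline.com/{tenant_id}/oauth2/v2.0/token"
body = urllib.parse.urlencode({
    "grant_type": "client_credentials",
    "client_id": client_id,
    "client_secret": client_secret,
    # This scope yields a token whose "aud" claim is
    # https://eventgrid.azure.net/, as required by the MQTT broker.
    "scope": "https://eventgrid.azure.net/.default",
}).encode()

req = urllib.request.Request(token_url, data=body, method="POST")
# Sending the request requires network access and valid credentials:
# token = json.load(urllib.request.urlopen(req))["access_token"]
assert "scope=https%3A%2F%2Feventgrid.azure.net%2F.default" in body.decode()
```

In production code, prefer the Azure Identity client library (for example `DefaultAzureCredential`), which handles token caching and refresh for you.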
event-grid | Mqtt Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/mqtt-overview.md | IoT applications are software designed to interact with and process data from Io ### Client authentication -Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to MQTT broker, it needs to authenticate with MQTT broker based on credentials stored in the identity registry. MQTT broker supports X.509 certificate authentication that is the industry authentication standard in IoT devices and [Microsoft Entra ID (formerly Azure Active Directory)](mqtt-client-azure-ad-token-and-rbac.md) that is Azure's authentication standard for applications.[Learn more about MQTT client authentication.](mqtt-client-authentication.md) +Event Grid has a client registry that stores information about the clients permitted to connect to it. Before a client can connect, there must be an entry for that client in the client registry. As a client connects to the MQTT broker, it needs to authenticate with the MQTT broker based on credentials stored in the identity registry. The MQTT broker supports X.509 certificate authentication, which is the industry authentication standard for IoT devices, and [Microsoft Entra ID (formerly Azure Active Directory)](mqtt-client-microsoft-entra-token-and-rbac.md), which is Azure's authentication standard for applications. [Learn more about MQTT client authentication.](mqtt-client-authentication.md) ### Access control |
event-grid | Onboard Partner | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/onboard-partner.md | Contact the Event Grid team at [askgrid@microsoft.com](mailto:askgrid@microsoft. To complete the remaining steps, make sure you have: - An Azure subscription. If you don't have one, [create a free account](https://azure.microsoft.com/free/) before you begin.-- An Azure [tenant](../active-directory/develop/quickstart-create-new-tenant.md).+- An Azure [tenant](/entra/identity-platform/quickstart-create-new-tenant). [!INCLUDE [register-provider](./includes/register-provider.md)] |
event-grid | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/overview.md | Event Grid offers a rich mixture of features. These features include: - **[Built-in cloud integration](mqtt-routing.md)** - route your MQTT messages to Azure services or custom webhooks for further processing. - **Flexible and fine-grained [access control model](mqtt-access-control.md)** - group clients and topics to simplify access control management, and use the variable support in topic templates for fine-grained access control. - **X.509 certificate [authentication](mqtt-client-authentication.md)** - authenticate your devices using the IoT industry's standard mechanism for authentication.-- **[Microsoft Entra ID (formerly Azure Active Directory) authentication](mqtt-client-azure-ad-token-and-rbac.md)** - authenticate your applications using the Azure's standard mechanism for authentication.+- **[Microsoft Entra ID (formerly Azure Active Directory) authentication](mqtt-client-microsoft-entra-token-and-rbac.md)** - authenticate your applications using Azure's standard mechanism for authentication. - **TLS 1.2 and TLS 1.3 support** - secure your client communication using robust encryption protocols. - **Multi-session support** - connect your applications with multiple active sessions to ensure reliability and scalability. - **MQTT over WebSockets** - enable connectivity for clients in firewall-restricted environments. Your own service or application publishes events to Event Grid that subscriber a A multitenant SaaS provider or platform can publish their events to Event Grid through a feature called [Partner Events](partner-events-overview.md). You can [subscribe to those events](subscribe-to-partner-events.md) and automate tasks, for example. Events from the following partners are currently available: - [Auth0](auth0-overview.md)-- [Microsoft Graph API](subscribe-to-graph-api-events.md). 
Through Microsoft Graph API you can get events from [Azure AD](azure-active-directory-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), Conversations, security alerts, and Universal Print.+- [Microsoft Graph API](subscribe-to-graph-api-events.md). Through Microsoft Graph API you can get events from [Microsoft Entra ID](microsoft-entra-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), Conversations, security alerts, and Universal Print. - [Tribal Group](subscribe-to-tribal-group-events.md) - [SAP](subscribe-to-sap-events.md) |
event-grid | Partner Events Graph Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-graph-api.md | Microsoft Graph API provides a unified programmable model that you can use to re |Microsoft event source |Resource(s) | Available event types | |: | : | :-|-|Microsoft Entra ID| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Microsoft Entra event types](azure-active-directory-events.md) | +|Microsoft Entra ID| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Microsoft Entra event types](microsoft-entra-events.md) | |Microsoft Outlook|[Event](/graph/api/resources/event) (calendar meeting), [Message](/graph/api/resources/message) (email), [Contact](/graph/api/resources/contact) | [Microsoft Outlook event types](outlook-events.md) | |Microsoft Teams|[ChatMessage](/graph/api/resources/callrecords-callrecord), [CallRecord](/graph/api/resources/callrecords-callrecord) (meeting) | [Microsoft Teams event types](teams-events.md) | |Microsoft SharePoint and OneDrive| [DriveItem](/graph/api/resources/driveitem)| | |
event-grid | Partner Events Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/partner-events-overview.md | You may want to use the Partner Events feature if you've one or more of the foll A partner must go through an [onboarding process](onboard-partner.md) before a customer can start receiving events from partners. Following is the list of available partners from which you can receive events via Event Grid. ### Microsoft Graph API-Through Microsoft Graph API, you can get events from a diverse set of Microsoft services such as [Microsoft Entra ID](azure-active-directory-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), **SharePoint**, and so on. For a complete list of event sources, see [Microsoft Graph API's change notifications documentation](/graph/webhooks#supported-resources). +Through Microsoft Graph API, you can get events from a diverse set of Microsoft services such as [Microsoft Entra ID](microsoft-entra-events.md), [Microsoft Outlook](outlook-events.md), [Teams](teams-events.md), **SharePoint**, and so on. For a complete list of event sources, see [Microsoft Graph API's change notifications documentation](/graph/webhooks#supported-resources). ### Auth0 [Auth0](https://auth0.com) is a managed authentication platform for businesses to authenticate, authorize, and secure access for applications, devices, and users. You can create an [Auth0 partner topic](auth0-overview.md) to connect your Auth0 and Azure accounts. This integration allows you to react to, log, and monitor Auth0 events in real time. To try it out, see [Integrate Azure Event Grid with Auth0](auth0-how-to.md). |
event-grid | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/policy-reference.md | Title: Built-in policy definitions for Azure Event Grid description: Lists Azure Policy built-in policy definitions for Azure Event Grid. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
event-grid | Post To Custom Topic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/post-to-custom-topic.md | This article describes how to post an event to a custom topic using an access ke > [!NOTE]-> Microsoft Entra authentication provides a superior authentication support than that's offered by access key or Shared Access Signature (SAS) token authentication. With Microsoft Entra authentication, the identity is validated against Microsoft Entra identity provider. As a developer, you won't have to handle keys in your code if you use Microsoft Entra authentication. you'll also benefit from all security features built into the Microsoft identity platform, such as Conditional Access, that can help you improve your application's security stance. For more information, see [Authenticate publishing clients using Microsoft Entra ID](authenticate-with-active-directory.md). +> Microsoft Entra authentication provides superior authentication support compared to access key or Shared Access Signature (SAS) token authentication. With Microsoft Entra authentication, the identity is validated against the Microsoft Entra identity provider. As a developer, you won't have to handle keys in your code if you use Microsoft Entra authentication. You'll also benefit from all the security features built into the Microsoft identity platform, such as Conditional Access, that can help you improve your application's security stance. For more information, see [Authenticate publishing clients using Microsoft Entra ID](authenticate-with-microsoft-entra-id.md). ## Endpoint |
event-grid | Publish Deliver Events With Namespace Topics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-deliver-events-with-namespace-topics.md | Title: Publish and deliver events using namespace topics description: This article provides step-by-step instructions to publish to Azure Event Grid in the CloudEvents JSON format and deliver those events by using the push delivery model. - - - ignite-2023 + Last updated 11/15/2023 |
event-grid | Publish Events To Namespace Topics Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-to-namespace-topics-java.md | Title: Publish events using namespace topics with Java description: This article provides step-by-step instructions to publish events to an Event Grid namespace topic using pull delivery. - - - ignite-2023 + Last updated 11/15/2023 |
event-grid | Publish Events Using Namespace Topics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/publish-events-using-namespace-topics.md | Title: Publish and consume events using namespace topics description: This article provides step-by-step instructions to publish events to Azure Event Grid in the CloudEvents JSON format and consume those events by using the pull delivery model. - - - ignite-2023 + Last updated 11/15/2023 |
event-grid | Receive Events From Namespace Topics Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/receive-events-from-namespace-topics-java.md | Title: Receive events using namespace topics with Java description: This article provides step-by-step instructions to consume events from Event Grid namespace topics using pull delivery. - - - ignite-2023 + Last updated 11/15/2023 |
event-grid | Powershell Webhook Secure Delivery Microsoft Entra App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md | + + Title: Azure PowerShell - Secure WebHook delivery with Microsoft Entra Application in Azure Event Grid +description: Describes how to deliver events to HTTPS endpoints protected by a Microsoft Entra application using Azure Event Grid +ms.devlang: powershell ++ Last updated : 10/14/2021+++# Secure WebHook delivery with Microsoft Entra Application in Azure Event Grid ++This script provides the configuration to deliver events to HTTPS endpoints protected by a Microsoft Entra application using Azure Event Grid. ++Here are the high-level steps from the script: ++1. Create a service principal for **Microsoft.EventGrid** if it doesn't already exist. +1. Create a role named **AzureEventGridSecureWebhookSubscriber** in the **Microsoft Entra app for your Webhook**. +1. Create a service principal for the **event subscription writer app** if it doesn't already exist. +1. Add the service principal of the event subscription writer Microsoft Entra app to the AzureEventGridSecureWebhookSubscriber role. +1. Add the service principal of Microsoft.EventGrid to the AzureEventGridSecureWebhookSubscriber role as well. ++## Sample script - stable ++```azurepowershell +# NOTE: Before running this script, make sure you're signed in to Azure by using the "Connect-AzureAD" command. 
++$webhookAppObjectId = "[REPLACE_WITH_YOUR_ID]" +$eventSubscriptionWriterAppId = "[REPLACE_WITH_YOUR_ID]" ++# Start execution +try { ++ # Creates an application role of given name and description ++ Function CreateAppRole([string] $Name, [string] $Description) + { + $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole + $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string] + $appRole.AllowedMemberTypes.Add("Application"); + $appRole.AllowedMemberTypes.Add("User"); + $appRole.DisplayName = $Name + $appRole.Id = New-Guid + $appRole.IsEnabled = $true + $appRole.Description = $Description + $appRole.Value = $Name; ++ return $appRole + } ++ # Creates Azure Event Grid Azure AD Application if not exists + # You don't need to modify this id + # But Azure Event Grid Azure AD Application Id is different for different clouds ++ $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud + # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud + $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name + $eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") + if ($eventGridSP -match "Microsoft.EventGrid") + { + Write-Host "The Azure AD Application is already defined.`n" + } else { + Write-Host "Creating the Azure Event Grid Azure AD Application" + $eventGridSP = New-AzureADServicePrincipal -AppId $eventGridAppId + } ++ # Creates the Azure app role for the webhook Azure AD application ++ $app = Get-AzureADApplication -ObjectId $webhookAppObjectId + $appRoles = $app.AppRoles ++ Write-Host "Azure AD App roles before addition of the new role..." 
+ Write-Host $appRoles + + if ($appRoles -match $eventGridRoleName) + { + Write-Host "The Azure Event Grid role is already defined.`n" + } else { + Write-Host "Creating the Azure Event Grid role in Azure AD Application: " $webhookAppObjectId + $newRole = CreateAppRole -Name $eventGridRoleName -Description "Azure Event Grid Role" + $appRoles.Add($newRole) + Set-AzureADApplication -ObjectId $app.ObjectId -AppRoles $appRoles + } ++ Write-Host "Azure AD App roles after addition of the new role..." + Write-Host $appRoles ++ # Creates the user role assignment for the app that will create event subscription ++ $servicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") + $eventSubscriptionWriterSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventSubscriptionWriterAppId + "'") ++ if ($null -eq $eventSubscriptionWriterSP) + { + Write-Host "Create new Azure AD Application" + $eventSubscriptionWriterSP = New-AzureADServicePrincipal -AppId $eventSubscriptionWriterAppId + } ++ try + { + Write-Host "Creating the Azure AD Application role assignment: " $eventSubscriptionWriterAppId + $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName + New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventSubscriptionWriterSP.ObjectId -PrincipalId $eventSubscriptionWriterSP.ObjectId + } + catch + { + if( $_.Exception.Message -like '*Permission being assigned already exists on the object*') + { + Write-Host "The Azure AD Application role is already defined.`n" + } + else + { + Write-Error $_.Exception.Message + } + Break + } ++ # Creates the service app role assignment for Event Grid Azure AD Application ++ $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName + New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventGridSP.ObjectId 
-PrincipalId $eventGridSP.ObjectId + + # Print output references for backup ++ Write-Host ">> Webhook's Azure AD Application Id: $($app.AppId)" + Write-Host ">> Webhook's Azure AD Application ObjectId: $($app.ObjectId)" +} +catch { + Write-Host ">> Exception:" + Write-Host $_ + Write-Host ">> StackTrace:" + Write-Host $_.ScriptStackTrace +} +``` ++## Script explanation ++For more details, refer to [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md) |
event-grid | Powershell Webhook Secure Delivery Microsoft Entra User | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/scripts/powershell-webhook-secure-delivery-microsoft-entra-user.md | + + Title: Azure PowerShell - Secure WebHook delivery with Microsoft Entra user in Azure Event Grid +description: Describes how to deliver events to HTTPS endpoints protected by a Microsoft Entra user using Azure Event Grid +ms.devlang: powershell ++ Last updated : 09/29/2021+++# Secure WebHook delivery with Microsoft Entra user in Azure Event Grid ++This script provides the configuration to deliver events to HTTPS endpoints protected by a Microsoft Entra user using Azure Event Grid. ++Here are the high-level steps from the script: ++1. Create a service principal for **Microsoft.EventGrid** if it doesn't already exist. +1. Create a role named **AzureEventGridSecureWebhookSubscriber** in the **Microsoft Entra app for your Webhook**. +1. Add the service principal of the user who will create the subscription to the AzureEventGridSecureWebhookSubscriber role. +1. Add the service principal of Microsoft.EventGrid to the AzureEventGridSecureWebhookSubscriber role. ++## Sample script - stable ++```azurepowershell +# NOTE: Before running this script, make sure you're signed in to Azure by using the "Connect-AzureAD" command. 
++$webhookAppObjectId = "[REPLACE_WITH_YOUR_ID]" +$eventSubscriptionWriterUserPrincipalName = "[REPLACE_WITH_USER_PRINCIPAL_NAME_OF_THE_USER_WHO_WILL_CREATE_THE_SUBSCRIPTION]" ++# Start execution +try { ++ # Creates an application role of given name and description ++ Function CreateAppRole([string] $Name, [string] $Description) + { + $appRole = New-Object Microsoft.Open.AzureAD.Model.AppRole + $appRole.AllowedMemberTypes = New-Object System.Collections.Generic.List[string] + $appRole.AllowedMemberTypes.Add("Application"); + $appRole.AllowedMemberTypes.Add("User"); + $appRole.DisplayName = $Name + $appRole.Id = New-Guid + $appRole.IsEnabled = $true + $appRole.Description = $Description + $appRole.Value = $Name; ++ return $appRole + } ++ # Creates Azure Event Grid Azure AD Application if not exists + # You don't need to modify this id + # But Azure Event Grid Azure AD Application Id is different for different clouds + + $eventGridAppId = "4962773b-9cdb-44cf-a8bf-237846a00ab7" # Azure Public Cloud + # $eventGridAppId = "54316b56-3481-47f9-8f30-0300f5542a7b" # Azure Government Cloud + $eventGridRoleName = "AzureEventGridSecureWebhookSubscriber" # You don't need to modify this role name + $eventGridSP = Get-AzureADServicePrincipal -Filter ("appId eq '" + $eventGridAppId + "'") + if ($eventGridSP -match "Microsoft.EventGrid") + { + Write-Host "The Azure AD Application is already defined.`n" + } else { + Write-Host "Creating the Azure Event Grid Azure AD Application" + $eventGridSP = New-AzureADServicePrincipal -AppId $eventGridAppId + } ++ # Creates the Azure app role for the webhook Azure AD application ++ $app = Get-AzureADApplication -ObjectId $webhookAppObjectId + $appRoles = $app.AppRoles ++ Write-Host "Azure AD App roles before addition of the new role..." 
+ Write-Host $appRoles + + if ($appRoles -match $eventGridRoleName) + { + Write-Host "The Azure Event Grid role is already defined.`n" + } else { + Write-Host "Creating the Azure Event Grid role in Azure AD Application: " $webhookAppObjectId + $newRole = CreateAppRole -Name $eventGridRoleName -Description "Azure Event Grid Role" + $appRoles.Add($newRole) + Set-AzureADApplication -ObjectId $app.ObjectId -AppRoles $appRoles + } ++ Write-Host "Azure AD App roles after addition of the new role..." + Write-Host $appRoles ++ # Creates the user role assignment for the user who will create event subscription ++ $servicePrincipal = Get-AzureADServicePrincipal -Filter ("appId eq '" + $app.AppId + "'") ++ try + { + Write-Host "Creating the Azure Ad App Role assignment for user: " $eventSubscriptionWriterUserPrincipalName + $eventSubscriptionWriterUser = Get-AzureAdUser -ObjectId $eventSubscriptionWriterUserPrincipalName + $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName + New-AzureADUserAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventSubscriptionWriterUser.ObjectId -PrincipalId $eventSubscriptionWriterUser.ObjectId + } + catch + { + if( $_.Exception.Message -like '*Permission being assigned already exists on the object*') + { + Write-Host "The Azure AD User Application role is already defined.`n" + } + else + { + Write-Error $_.Exception.Message + } + Break + } ++ # Creates the service app role assignment for Event Grid Azure AD Application ++ $eventGridAppRole = $app.AppRoles | Where-Object -Property "DisplayName" -eq -Value $eventGridRoleName + New-AzureADServiceAppRoleAssignment -Id $eventGridAppRole.Id -ResourceId $servicePrincipal.ObjectId -ObjectId $eventGridSP.ObjectId -PrincipalId $eventGridSP.ObjectId + + # Print output references for backup ++ Write-Host ">> Webhook's Azure AD Application Id: $($app.AppId)" + Write-Host ">> Webhook's Azure AD Application ObjectId 
Id: $($app.ObjectId)" +} +catch { + Write-Host ">> Exception:" + Write-Host $_ + Write-Host ">> StackTrace:" + Write-Host $_.ScriptStackTrace +} +``` ++## Script explanation ++For more details, refer to [Secure WebHook delivery with Microsoft Entra ID in Azure Event Grid](../secure-webhook-delivery.md) |
event-grid | Secure Webhook Delivery | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/secure-webhook-delivery.md | Last updated 10/12/2022 This article describes how to use Microsoft Entra ID to secure the connection between your **event subscription** and your **webhook endpoint**. It uses the Azure portal for demonstration; however, the feature can also be enabled using the CLI, PowerShell, or the SDKs. > [!IMPORTANT]-> Additional access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. Reconfigure your Microsoft Entra Application following the new instructions below.For an overview of Microsoft Entra applications and service principals, see [Microsoft identity platform (v2.0) overview](../active-directory/develop/v2-overview.md). +> An additional access check was introduced as part of creating or updating an event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner of, or have a role assigned on, the destination application service principal. Reconfigure your Microsoft Entra application following the new instructions below. For an overview of Microsoft Entra applications and service principals, see [Microsoft identity platform (v2.0) overview](/entra/identity-platform/v2-overview). ## Scenarios This article explains how to implement the following two scenarios in detail: This section shows how to configure the event subscription by using a Microsoft PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId ``` -4. 
Open the [following script](scripts/powershell-webhook-secure-delivery-azure-ad-user.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterUserPrincipalName** with your identifiers, then continue to run the script. +4. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-user.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterUserPrincipalName** with your identifiers, then continue to run the script. - Variables: - **$webhookAppObjectId**: Microsoft Entra application ID created for the webhook - **$eventSubscriptionWriterUserPrincipalName**: Azure user principal name of the user who creates event subscription > [!NOTE]- > You don't need to modify the value of **$eventGridAppId**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for the **$eventGridRoleName**. Remember, you must be a member of the [Microsoft Entra Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of webhook app in Microsoft Entra ID to execute this script. + > You don't need to modify the value of **$eventGridAppId**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for the **$eventGridRoleName**. Remember, you must be a member of the [Microsoft Entra Application Administrator role](/entra/identity/role-based-access-control/permissions-reference#all-roles) or be an owner of the service principal of webhook app in Microsoft Entra ID to execute this script. If you see the following error message, you need to elevate to the service principal. An extra access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. 
This section shows how to configure the event subscription by using a Microsoft PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId ``` -7. Open the [following script](scripts/powershell-webhook-secure-delivery-azure-ad-app.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. +7. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md) and update the values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. - Variables: - **$webhookAppObjectId**: Microsoft Entra application ID created for the webhook - **$eventSubscriptionWriterAppId**: Microsoft Entra application ID for Event Grid subscription writer app. > [!NOTE]- > You don't need to modify the value of **```$eventGridAppId```**. In this script, **AzureEventGridSecureWebhookSubscriber** as set for the **```$eventGridRoleName```**. Remember, you must be a member of the [Microsoft Entra Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of webhook app in Microsoft Entra ID to execute this script. + > You don't need to modify the value of **```$eventGridAppId```**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for the **```$eventGridRoleName```**. Remember, you must be a member of the [Microsoft Entra Application Administrator role](/entra/identity/role-based-access-control/permissions-reference#all-roles) or be an owner of the service principal of the webhook app in Microsoft Entra ID to execute this script. 8. Sign in as the Event Grid subscription writer Microsoft Entra application by running the command. Based on the diagram, follow the next steps to configure both tenants. Do the following steps in **Tenant A**: -1. 
Create a Microsoft Entra application for the Event Grid subscription writer configured to work with any Microsoft Entra directory (Multi-tenant). +1. Create a Microsoft Entra application for the Event Grid subscription writer configured to work with any Microsoft Entra directory (multitenant). 2. Create a secret for the Microsoft Entra application, and save the value (you need this value later). Do the following steps in **Tenant B**: PS /home/user>$webhookAadTenantId = "[REPLACE_WITH_YOUR_TENANT_ID]" PS /home/user>Connect-AzureAD -TenantId $webhookAadTenantId ```-7. Open the [following script](scripts/powershell-webhook-secure-delivery-azure-ad-app.md), and update values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. +7. Open the [following script](scripts/powershell-webhook-secure-delivery-microsoft-entra-app.md), and update values of **$webhookAppObjectId** and **$eventSubscriptionWriterAppId** with your identifiers, then continue to run the script. - Variables: - **$webhookAppObjectId**: Microsoft Entra application ID created for the webhook - **$eventSubscriptionWriterAppId**: Microsoft Entra application ID for Event Grid subscription writer > [!NOTE]- > You don't need to modify the value of **```$eventGridAppId```**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for **```$eventGridRoleName```**. Remember, you must be a member of the [Microsoft Entra Application Administrator role](../active-directory/roles/permissions-reference.md#all-roles) or be an owner of the service principal of webhook app in Microsoft Entra ID to execute this script. + > You don't need to modify the value of **```$eventGridAppId```**. In this script, **AzureEventGridSecureWebhookSubscriber** is set for **```$eventGridRoleName```**. 
Remember, you must be a member of the [Microsoft Entra Application Administrator role](/entra/identity/role-based-access-control/permissions-reference#all-roles) or be an owner of the service principal of webhook app in Microsoft Entra ID to execute this script. If you see the following error message, you need to elevate to the service principal. An extra access check has been introduced as part of create or update of event subscription on March 30, 2021 to address a security vulnerability. The subscriber client's service principal needs to be either an owner or have a role assigned on the destination application service principal. |
event-grid | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Event Grid description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Grid. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
event-grid | Subscribe To Graph Api Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-graph-api-events.md | This article describes steps to subscribe to events published by Microsoft Graph |Microsoft event source |Resource(s) | Available event types | |: | : | :-|-|Microsoft Entra ID| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Microsoft Entra event types](azure-active-directory-events.md) | +|Microsoft Entra ID| [User](/graph/api/resources/user), [Group](/graph/api/resources/group) | [Microsoft Entra event types](microsoft-entra-events.md) | |Microsoft Outlook|[Event](/graph/api/resources/event) (calendar meeting), [Message](/graph/api/resources/message) (email), [Contact](/graph/api/resources/contact) | [Microsoft Outlook event types](outlook-events.md) | |Microsoft Teams|[ChatMessage](/graph/api/resources/chatmessage), [CallRecord](/graph/api/resources/callrecords-callrecord) (meeting) | [Microsoft Teams event types](teams-events.md) | |Microsoft SharePoint and OneDrive| [DriveItem](/graph/api/resources/driveitem)| | |
event-grid | Subscribe To Resource Notifications Resources Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/subscribe-to-resource-notifications-resources-events.md | Title: Subscribe to Azure Resource Notifications - Resource Management events description: This article explains how to subscribe to Azure Resource Notifications - Azure Resource Management events. + Last updated 10/08/2023 |
event-hubs | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/policy-reference.md | Title: Built-in policy definitions for Azure Event Hubs description: Lists Azure Policy built-in policy definitions for Azure Event Hubs. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
event-hubs | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Event Hubs description: Lists Azure Policy Regulatory Compliance controls available for Azure Event Hubs. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
expressroute | Customer Controlled Gateway Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/customer-controlled-gateway-maintenance.md | |
expressroute | Expressroute Locations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/expressroute-locations.md | The following table shows locations by service provider. If you want to view ava | **[Spark NZ](https://www.sparkdigital.co.nz/solutions/connectivity/cloud-connect/)** | Supported | Supported | Auckland<br/>Sydney | | **[Swisscom](https://www.swisscom.ch/en/business/enterprise/offer/cloud-data-center/microsoft-cloud-services/microsoft-azure-von-swisscom.html)** | Supported | Supported | Geneva<br/>Zurich | | **[Tata Communications](https://www.tatacommunications.com/solutions/network/cloud-ready-networks/)** | Supported | Supported | Amsterdam<br/>Chennai<br/>Chicago<br/>Hong Kong<br/>London<br/>Mumbai<br/>Pune<br/>Sao Paulo<br/>Silicon Valley<br/>Singapore<br/>Washington DC |-| **[Telefonica](https://www.telefonica.com/es/home)** | Supported | Supported | Amsterdam<br/>Dallas<br/>Frankfurt2<br/>Hong Kong<br/>Madrid<br/>Sao Paulo<br/>Singapore<br/>Washington DC | +| **[Telefonica](https://www.telefonica.com/es/)** | Supported | Supported | Amsterdam<br/>Dallas<br/>Frankfurt2<br/>Hong Kong<br/>Madrid<br/>Sao Paulo<br/>Singapore<br/>Washington DC | | **[Telehouse - KDDI](https://www.telehouse.net/solutions/cloud-services/cloud-link)** | Supported | Supported | London<br/>London2<br/>Singapore2 | | **Telenor** |Supported |Supported | Amsterdam<br/>London<br/>Oslo<br/>Stavanger | | **[Telia Carrier](https://www.teliacarrier.com/)** | Supported | Supported | Amsterdam<br/>Chicago<br/>Dallas<br/>Frankfurt<br/>Hong Kong<br/>London<br/>Oslo<br/>Paris<br/>Seattle<br/>Silicon Valley<br/>Stockholm<br/>Washington DC | The following table shows locations by service provider. 
If you want to view ava | **[Telus](https://www.telus.com)** | Supported | Supported | Montreal<br/>Quebec City<br/>Seattle<br/>Toronto<br/>Vancouver | | **[Teraco](https://www.teraco.co.za/services/africa-cloud-exchange/)** | Supported | Supported | Cape Town<br/>Johannesburg | | **[TIME dotCom](https://www.time.com.my/enterprise/connectivity/direct-cloud)** | Supported | Supported | Kuala Lumpur |-| **[Tivit](https://tivit.com/solucoes/public-cloud/)** |Supported |Supported | Sao Paulo2 | +| **[Tivit](https://tivit.com/en/home-ingles/)** |Supported |Supported | Sao Paulo2 | | **[Tokai Communications](https://www.tokai-com.co.jp/en/)** | Supported | Supported | Osaka<br/>Tokyo2 | | **TPG Telecom**| Supported | Supported | Melbourne<br/>Sydney | | **[Transtelco](https://transtelco.net/enterprise-services/)** | Supported | Supported | Dallas<br/>Queretaro (Mexico City) | If you're remote and don't have fiber connectivity, or you want to explore other | **[Spectrum Enterprise](https://enterprise.spectrum.com/services/internet-networking/wan/cloud-connect.html)** | Equinix | Chicago<br/>Dallas<br/>Los Angeles<br/>New York<br/>Silicon Valley | | **[Tamares Telecom](http://www.tamarestelecom.com/our-services/#Connectivity)** | Equinix | London | | **[Tata Teleservices](https://www.tatatelebusiness.com/data-services/ez-cloud-connect/)** | Tata Communications | Chennai<br/>Mumbai |-| **[TDC Erhverv](https://tdc.dk/Produkter/cloudaccessplus)** | Equinix | Amsterdam | +| **[TDC Erhverv](https://tdc.dk/)** | Equinix | Amsterdam | | **[Telecom Italia Sparkle](https://www.tisparkle.com/our-platform/enterprise-platform/sparkle-cloud-connect)**| Equinix | Amsterdam | | **[Telekom Deutschland GmbH](https://cloud.telekom.de/de/infrastruktur/managed-it-services/managed-hybrid-infrastructure-mit-microsoft-azure)** | Interxion | Amsterdam<br/>Frankfurt | | **[Telia](https://www.telia.se/foretag/losningar/produkter-tjanster/datanet)** | Equinix | Amsterdam | |
expressroute | Gateway Migration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/gateway-migration.md | description: This article explains how to seamlessly migrate from Standard/HighP -- - ignite-2023 + Last updated 11/15/2023 |
expressroute | Rate Limit | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/rate-limit.md | You can enable rate limiting for an ExpressRoute Direct circuit, either during t > - Currently, the only way to enable rate limiting is through the Azure portal. > - The rate limiting feature is currently in the preview stage and can only be enabled after the circuit creation process is completed. The feature will be available for enabling during the circuit creation process in the general availability (GA) stage. -To enable rate limiting while creating an ExpressRoute Direct circuit, follow these steps: --1. Sign-in to the [Azure portal](https://portal.azure.com/) and select **+ Create a resource**. --1. Search for *ExpressRoute circuit* and select **Create**. --1. Enter the required information in the **Basics** tab and select **Next** button. --1. In the **Configuration** tab, enter the required information and select the **Enable Rate Limiting** check box. The following diagram shows a screenshot of the **Configuration** tab. -- :::image type="content" source="./media/rate-limit/create-circuit.png" alt-text="Screenshot of the configuration tab for a new ExpressRoute Direct circuit."::: --1. Select **Next: Tags** and provide tagging for the circuit, if necessary. --1. Select **Review + create** and then select **Create** to create the circuit. - ### Existing ExpressRoute Direct circuits To enable rate limiting for an existing ExpressRoute Direct circuit, follow these steps: |
firewall | Protect Azure Virtual Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/protect-azure-virtual-desktop.md | -Azure Virtual Desktop is a desktop and app virtualization service that runs on Azure. When an end user connects to an Azure Virtual Desktop environment, their session is run by a host pool. A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts. These virtual machines run in your virtual network and are subject to the virtual network security controls. They need outbound Internet access to the Azure Virtual Desktop service to operate properly and might also need outbound Internet access for end users. Azure Firewall can help you lock down your environment and filter outbound traffic. +Azure Virtual Desktop is a cloud virtual desktop infrastructure (VDI) service that runs on Azure. When an end user connects to Azure Virtual Desktop, their session comes from a session host in a host pool. A host pool is a collection of Azure virtual machines that register to Azure Virtual Desktop as session hosts. These virtual machines run in your virtual network and are subject to the virtual network security controls. They need outbound internet access to the Azure Virtual Desktop service to operate properly and might also need outbound internet access for end users. Azure Firewall can help you lock down your environment and filter outbound traffic. -[ ![Azure Virtual Desktop architecture](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png) ](media/protect-windows-virtual-desktop/windows-virtual-desktop-architecture-diagram.png#lightbox) Follow the guidelines in this article to provide extra protection for your Azure Virtual Desktop host pool using Azure Firewall. ## Prerequisites + - A deployed Azure Virtual Desktop environment and host pool. 
For more information, see [Deploy Azure Virtual Desktop](../virtual-desktop/deploy-azure-virtual-desktop.md). - An Azure Firewall deployed with at least one Firewall Manager Policy. - DNS and DNS Proxy enabled in the Firewall Policy to use [FQDN in Network Rules](../firewall/fqdn-filtering-network-rules.md). -For more information, see [Tutorial: Create a host pool by using the Azure portal](../virtual-desktop/create-host-pools-azure-marketplace.md) --To learn more about Azure Virtual Desktop environments see [Azure Virtual Desktop environment](../virtual-desktop/environment-setup.md). +To learn more about Azure Virtual Desktop terminology, see [Azure Virtual Desktop terminology](../virtual-desktop/terminology.md). ## Host pool outbound access to Azure Virtual Desktop -The Azure virtual machines you create for Azure Virtual Desktop must have access to several Fully Qualified Domain Names (FQDNs) to function properly. Azure Firewall provides an Azure Virtual Desktop FQDN Tag to simplify this configuration. Use the following steps to allow outbound Azure Virtual Desktop platform traffic: --You'll need to create an Azure Firewall Policy and create Rule Collections for Network Rules and Applications Rules. Give the Rule Collection a priority and an allow or deny action. -In order to identify a specific AVD Host Pool as "Source" in the tables below, [IP Group](../firewall/ip-groups.md) can be created to represent it. --### Create network rules --Based on the Azure Virtual Desktop (AVD) [reference article](../virtual-desktop/safe-url-list.md), these are the ***mandatory*** rules to allow outbound access to the control plane and core dependent --# [Azure cloud](#tab/azure) +The Azure virtual machines you create for Azure Virtual Desktop must have access to several Fully Qualified Domain Names (FQDNs) to function properly. Azure Firewall uses the Azure Virtual Desktop FQDN tag `WindowsVirtualDesktop` to simplify this configuration. 
You need to create an Azure Firewall Policy and create Rule Collections for Network Rules and Application Rules. Give the Rule Collection a priority and an *allow* or *deny* action. -| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination | -| | -- | - | -- | -- | - | | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `login.microsoftonline.com` | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | Service Tag | `WindowsVirtualDesktop`, `AzureFrontDoor.Frontend`, `AzureMonitor` | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 1688 | IP address | `20.118.99.224`, `40.83.235.53` (`azkms.core.windows.net`) | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 1688 | IP address | `23.102.135.246` (`kms.core.windows.net`) | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `mrsglobalsteus2prod.blob.core.windows.net` | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `wvdportalstorageblob.blob.core.windows.net` | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | `oneocsp.microsoft.com` | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | `www.microsoft.com` | --# [Azure for US Government](#tab/azure-for-us-government) --| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination | -| | -- | - | -- | -- | - | | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `login.microsoftonline.us` | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | Service Tag | `WindowsVirtualDesktop`, 
`AzureMonitor` | -|Rule Name|IP Address or Group|IP Group or VNet or Subnet IP Address|TCP|443|FQDN|gcs.monitoring.core.usgovcloudapi.net| -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP, UDP | 53 | IP Address | * | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 1688 | IP address | `kms.core.usgovcloudapi.net`| -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `wvdportalstorageblob.blob.core.usgovcloudapi.net` | -| Rule name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 80 | FQDN | `ocsp.msocsp.com` | ----> [!NOTE] -> Some deployments might not need DNS rules. For example, Microsoft Entra Domain controllers forward DNS queries to Azure DNS at 168.63.129.16. --Azure Virtual Desktop (AVD) official documentation reports the following Network rules as **optional** depending on the usage and scenario: --| Name | Source type | Source | Protocol | Destination ports | Destination type | Destination | -| -| -- | - | -- | -- | - | | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | UDP | 123 | FQDN | `time.windows.com` | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `login.windows.net` | -| Rule Name | IP Address or Group | IP Group or VNet or Subnet IP Address | TCP | 443 | FQDN | `www.msftconnecttest.com` | ---### Create application rules --Azure Virtual Desktop (AVD) official documentation reports the following Application rules as **optional** depending on the usage and scenario: --| Name | Source type | Source | Protocol | Destination type | Destination | -| | -- | --| - | - | - | -| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN Tag | `WindowsUpdate`, `Windows Diagnostics`, 
`MicrosoftActiveProtectionService` | -| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | `*.events.data.microsoft.com`| -| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | `*.sfx.ms` | -| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | `*.digicert.com` | -| Rule Name | IP Address or Group | VNet or Subnet IP Address | Https:443 | FQDN | `*.azure-dns.com`, `*.azure-dns.net` | +You need to create rules for each of the required FQDNs and endpoints. The list is available at [Required FQDNs and endpoints for Azure Virtual Desktop](../virtual-desktop/required-fqdn-endpoint.md). In order to identify a specific host pool as *Source*, you can create an [IP Group](../firewall/ip-groups.md) with each session host to represent it. > [!IMPORTANT] > We recommend that you don't use TLS inspection with Azure Virtual Desktop. For more information, see the [proxy server guidelines](../virtual-desktop/proxy-server-support.md#dont-use-ssl-termination-on-the-proxy-server). -## Azure Firewall Policy Sample -All the mandatory and optional rules mentioned above can be easily deployed a single Azure Firewall Policy using the template published at [this link](https://github.com/Azure/RDS-Templates/tree/master/AzureFirewallPolicyForAVD). -Before deploying into production, it's highly recommended to review all the Network and Application rules defined, ensure alignment with Azure Virtual Desktop official documentation and security requirements. +## Azure Firewall Policy Sample ++All the mandatory and optional rules mentioned can be easily deployed in a single Azure Firewall Policy using the template published at [https://github.com/Azure/RDS-Templates/tree/master/AzureFirewallPolicyForAVD](https://github.com/Azure/RDS-Templates/tree/master/AzureFirewallPolicyForAVD). 
+Before deploying into production, we recommend reviewing all the Network and Application rules defined and ensuring they align with the official Azure Virtual Desktop documentation and your security requirements. ## Host pool outbound access to the Internet |
governance | Control Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-ase-sql-workload/control-mapping.md | -maps to the ISO 27001 controls. For more information about the controls, see [ISO 27001](https://www.iso.org/isoiec-27001-information-security.html). +maps to the ISO 27001 controls. The following mappings are to the **ISO 27001:2013** controls. Use the navigation on the right to jump directly to a specific control mapping. Many of the mapped controls are implemented with an [Azure Policy](../../../policy/overview.md) Additional articles about blueprints and how to use them: - Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md). |
governance | Control Mapping | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/blueprints/samples/iso27001-shared/control-mapping.md | -maps to the ISO 27001 controls. For more information about the controls, see -[ISO 27001](https://www.iso.org/isoiec-27001-information-security.html). +maps to the ISO 27001 controls. The following mappings are to the **ISO 27001:2013** controls. Use the navigation on the right to jump directly to a specific control mapping. Many of the mapped controls are implemented with an Additional articles about blueprints and how to use them: - Understand how to use [static and dynamic parameters](../../concepts/parameters.md). - Learn to customize the [blueprint sequencing order](../../concepts/sequencing-order.md). - Find out how to make use of [blueprint resource locking](../../concepts/resource-locking.md).-- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md).+- Learn how to [update existing assignments](../../how-to/update-existing-assignments.md). |
governance | Definition Structure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/definition-structure.md | A parameter has the following properties that are used in the policy definition: resource or scope. - `defaultValue`: (Optional) Sets the value of the parameter in an assignment if no value is given. Required when updating an existing policy definition that is assigned. For object-type parameters, the value must match the appropriate schema. - `allowedValues`: (Optional) Provides an array of values that the parameter accepts during- assignment. Allowed value comparisons are case-sensitive. For oject-type parameters, the values must match the appropriate schema. + assignment. Allowed value comparisons are case-sensitive. For object-type parameters, the values must match the appropriate schema. - `schema`: (Optional) Provides validation of parameter inputs during assignment using a self-defined JSON schema. This property is only supported for object-type parameters and follows the [Json.NET Schema](https://www.newtonsoft.com/jsonschema) 2019-09 implementation. You can learn more about using schemas at https://json-schema.org/ and test draft schemas at https://www.jsonschemavalidator.net/. ### Sample Parameters |
governance | Australia Ism | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md | Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[\[Preview\]: API endpoints in Azure API Management should be authenticated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8ac833bd-f505-48d5-887e-c993a1d3eea0) |API endpoints published within Azure API Management should enforce authentication to help minimize security risk. Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. Learn More about the OWASP API Threat for Broken User Authentication here: [https://learn.microsoft.com/azure/api-management/mitigate-owasp-api-threats#broken-user-authentication](../../../api-management/mitigate-owasp-api-threats.md#broken-user-authentication) |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMApiEndpointsShouldbeAuthenticated_AuditIfNotExists.json) | +|[API endpoints in Azure API Management should be authenticated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8ac833bd-f505-48d5-887e-c993a1d3eea0) |API endpoints published within Azure API Management should enforce authentication to help minimize security risk. 
Authentication mechanisms are sometimes implemented incorrectly or are missing. This allows attackers to exploit implementation flaws and to access data. Learn More about the OWASP API Threat for Broken User Authentication here: [https://learn.microsoft.com/azure/api-management/mitigate-owasp-api-threats#broken-user-authentication](../../../api-management/mitigate-owasp-api-threats.md#broken-user-authentication) |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMApiEndpointsShouldbeAuthenticated_AuditIfNotExists.json) | |[API Management calls to API backends should be authenticated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc15dcc82-b93c-4dcb-9332-fbf121685b54) |Calls from API Management to backends should use some form of authentication, whether via certificates or credentials. Does not apply to Service Fabric backends. |Audit, Disabled, Deny |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_BackendAuth_AuditDeny.json) | |[API Management calls to API backends should not bypass certificate thumbprint or name validation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92bb331d-ac71-416a-8c91-02f2cb734ce4) |To improve the API security, API Management should validate the backend server certificate for all API calls. Enable SSL certificate thumbprint and name validation. 
|Audit, Disabled, Deny |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20Management/ApiManagement_BackendCertificateChecks_AuditDeny.json) | |[Azure SQL Database should be running TLS version 1.2 or newer](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32e6bbec-16b6-44c2-be37-c5b672d103cf) |Setting TLS version to 1.2 or newer improves security by ensuring your Azure SQL Database can only be accessed from clients using TLS 1.2 or newer. Using versions of TLS less than 1.2 is not recommended since they have well documented security vulnerabilities. |Audit, Disabled, Deny |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_MiniumTLSVersion_Audit.json) | initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[\[Preview\]: Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) | +|[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) | ### Monitor anomalies and threats targeting sensitive data initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | |||||-|[\[Preview\]: Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) | |[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. 
|AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for open-source relational databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a9fbe0d-c5c4-4da8-87d8-f4fd77338835) |Azure Defender for open-source relational databases detects anomalous activities indicating unusual and potentially harmful attempts to access or exploit databases. Learn more about the capabilities of Azure Defender for open-source relational databases at [https://aka.ms/AzDforOpenSourceDBsDocu](https://aka.ms/AzDforOpenSourceDBsDocu). Important: Enabling this plan will result in charges for protecting your open-source relational databases. Learn about the pricing on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center) |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnOpenSourceRelationalDatabases_Audit.json) |
|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
|[Microsoft Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F640d2586-54d2-465f-877f-9ffc1d2109f4) |Microsoft Defender for Storage detects potential threats to your storage accounts. It helps prevent the three major impacts on your data and workload: malicious file uploads, sensitive data exfiltration, and data corruption. The new Defender for Storage plan includes Malware Scanning and Sensitive Data Threat Detection. This plan also provides a predictable pricing structure (per storage account) for control over coverage and costs. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/MDC_Microsoft_Defender_For_Storage_Full_Audit.json) |

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: API endpoints that are unused should be disabled and removed from the Azure API Management service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc8acafaf-3d23-44d1-9624-978ef0f8652c) |As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused and should be removed from the Azure API Management service. Keeping unused API endpoints may pose a security risk to your organization. These may be APIs that should have been deprecated from the Azure API Management service but may have been accidentally left active. Such APIs typically do not receive the most up to date security coverage. |AuditIfNotExists, Disabled |[1.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMUnusedApiEndpointsShouldbeRemoved_AuditIfNotExists.json) |
+|[API endpoints that are unused should be disabled and removed from the Azure API Management service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc8acafaf-3d23-44d1-9624-978ef0f8652c) |As a security best practice, API endpoints that haven't received traffic for 30 days are considered unused and should be removed from the Azure API Management service. Keeping unused API endpoints may pose a security risk to your organization. These may be APIs that should have been deprecated from the Azure API Management service but may have been accidentally left active. Such APIs typically do not receive the most up to date security coverage. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_APIMUnusedApiEndpointsShouldbeRemoved_AuditIfNotExists.json) |

### Use only approved applications in virtual machine

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
|[\[Preview\]: Azure Arc enabled Kubernetes clusters should have Microsoft Defender for Cloud extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8dfab9c4-fe7b-49ad-85e4-1e9be085358f) |Microsoft Defender for Cloud extension for Azure Arc provides threat protection for your Arc enabled Kubernetes clusters. The extension collects data from all nodes in the cluster and sends it to the Azure Defender for Kubernetes backend in the cloud for further analysis. Learn more in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-enable?pivots=defender-for-container-arc](../../../defender-for-cloud/defender-for-containers-enable.md). |AuditIfNotExists, Disabled |[6.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_Arc_Extension_Audit.json) |
-|[\[Preview\]: Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |

initiative definition.
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Azure Kubernetes Service clusters should have Defender profile enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa1840de2-8088-4ea8-b153-b4c723e9cb01) |Microsoft Defender for Containers provides cloud-native Kubernetes security capabilities including environment hardening, workload protection, and run-time protection. When you enable the SecurityProfile.AzureDefender on your Azure Kubernetes Service cluster, an agent is deployed to your cluster to collect security event data. Learn more about Microsoft Defender for Containers in [https://docs.microsoft.com/azure/defender-for-cloud/defender-for-containers-introduction?tabs=defender-for-container-arch-aks](../../../defender-for-cloud/defender-for-containers-introduction.md) |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Kubernetes/ASC_Azure_Defender_Kubernetes_AKS_SecurityProfile_Audit.json) |
|[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) |
+|[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |

initiative definition.
|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |

initiative definition.
|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) |
+|[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |

initiative definition.

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|---|---|---|---|
-|[\[Preview\]: Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) |
|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
|[Azure Defender for DNS should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbdc59948-5574-49b3-bb91-76b7c986428d) |Azure Defender for DNS provides an additional layer of protection for your cloud resources by continuously monitoring all DNS queries from your Azure resources. Azure Defender alerts you about suspicious activity at the DNS layer. Learn more about the capabilities of Azure Defender for DNS at [https://aka.ms/defender-for-dns](https://aka.ms/defender-for-dns). Enabling this Azure Defender plan results in charges. Learn about the pricing details per region on Security Center's pricing page: [https://aka.ms/pricing-security-center](https://aka.ms/pricing-security-center). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAzureDefenderOnDns_Audit.json) |

initiative definition.

|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
|[Microsoft Defender CSPM should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f90fc71-a595-4066-8974-d4d0802e8ef0) |Defender Cloud Security Posture Management (CSPM) provides enhanced posture capabilities and a new intelligent cloud security graph to help identify, prioritize, and reduce risk. Defender CSPM is available in addition to the free foundational security posture capabilities turned on by default in Defender for Cloud. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Azure_Defender_CSPM_Audit.json) |
+|[Microsoft Defender for APIs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7926a6d1-b268-4586-8197-e8ae90c877d7) |Microsoft Defender for APIs brings new discovery, protection, detection, & response coverage to monitor for common API based attacks & security misconfigurations. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDefenderForAPIS_Audit.json) |
|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
|[Microsoft Defender for SQL status should be protected for Arc-enabled SQL Servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F938c4981-c2c9-4168-9cd6-972b8675f906) |Microsoft Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, discovering and classifying sensitive data. Once enabled, the protection status indicates that the resource is actively monitored. Even when Defender is enabled, multiple configuration settings should be validated on the agent, machine, workspace and SQL server to ensure active protection. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ProtectDefenderForSQLOnArc_Audit.json) |
|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
|
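Each row above pairs a built-in policy's allowed effects (for example "AuditIfNotExists, Disabled") with a versioned definition JSON on GitHub. As a rough illustration of how that "Effect(s)" column maps onto a definition's `parameters` block, the sketch below parses a simplified, hypothetical definition fragment: the display name and version echo one of the rows above, but the `policyRule` if-condition is a placeholder, not the contents of any linked file.

```python
import json

# Simplified, hypothetical shape of a built-in Azure Policy definition.
# The if-condition below is a placeholder for illustration only; the real
# Defender for APIs definition linked above has a different rule body.
definition = json.loads("""
{
  "properties": {
    "displayName": "Microsoft Defender for APIs should be enabled",
    "version": "1.0.3",
    "parameters": {
      "effect": {
        "type": "String",
        "allowedValues": ["AuditIfNotExists", "Disabled"],
        "defaultValue": "AuditIfNotExists"
      }
    },
    "policyRule": {
      "if": {"field": "type", "equals": "Microsoft.ApiManagement/service"},
      "then": {"effect": "[parameters('effect')]"}
    }
  }
}
""")

props = definition["properties"]
# The "Effect(s)" column in the tables above corresponds to the effect
# parameter's allowedValues; the rule's then-branch references that parameter.
effects = ", ".join(props["parameters"]["effect"]["allowedValues"])
print(f"{props['displayName']} ({props['version']}): {effects}")
```

Reading the effects out of the definition JSON, rather than hard-coding them, is why the tables can list "AuditIfNotExists, Disabled" per row: each assignment picks one of those allowed values (or takes the default) at assignment time.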
governance | Built In Initiatives | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md | Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Guest Configuration, and more. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
governance | Built In Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md | Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
governance | Canada Federal Pbmm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md | Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Cis Azure 1 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Cis Azure 2 0 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 This built-in initiative is deployed as part of the
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
|[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |
-|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
+|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) |
|[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules.
|Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | This built-in initiative is deployed as part of the |[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. 
This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |-|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | This built-in initiative is deployed as part of the |[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). 
Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |-|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | This built-in initiative is deployed as part of the |[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). 
Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |-|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | This built-in initiative is deployed as part of the |[Only secure connections to your Azure Cache for Redis should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F22bee202-a82f-4305-9a2a-6d7f44d4dedb) |Audit enabling of only connections via SSL to Azure Cache for Redis. 
Use of secure connections ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_AuditSSLPort_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) |-|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | +|[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | This built-in initiative is deployed as part of the |[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). 
Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. 
|Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) (updated from 2.0.0) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | |
governance | Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 initiative definition. |[Authorized IP ranges should be defined on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e246bcf-5f6f-4f87-bc6f-775d4712c7ea) |Restrict access to the Kubernetes Service Management API by granting API access only to IP addresses in specific ranges. It is recommended to limit access to authorized IP ranges to ensure that only applications from allowed networks can access the cluster. |Audit, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableIpRanges_KubernetesService_Audit.json) | |[Azure API for FHIR should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ee56206-5dd1-42ab-b02d-8aae8b1634ce) |Azure API for FHIR should have at least one approved private endpoint connection. Clients in a virtual network can securely access resources that have private endpoint connections through private links. For more information, visit: [https://aka.ms/fhir-privatelink](https://aka.ms/fhir-privatelink). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/API%20for%20FHIR/HealthcareAPIs_PrivateLink_Audit.json) | |[Azure Cache for Redis should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7803067c-7d34-46e3-8c79-0ca68fc4036d) |Private endpoints lets you connect your virtual network to Azure services without a public IP address at the source or destination. By mapping private endpoints to your Azure Cache for Redis instances, data leakage risks are reduced. Learn more at: [https://docs.microsoft.com/azure/azure-cache-for-redis/cache-private-link](../../../azure-cache-for-redis/cache-private-link.md). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cache/RedisCache_PrivateEndpoint_AuditIfNotExists.json) | |[Azure Cognitive Search service should use a SKU that supports private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa049bf77-880b-470f-ba6d-9f21c530cf83) (display name changed from "Azure AI Search service should use a SKU that supports private link") |With supported SKUs of Azure Cognitive Search, Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The private link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your Search service, data leakage risks are reduced. Learn more at: [https://aka.ms/azure-cognitive-search/inbound-private-endpoints](https://aka.ms/azure-cognitive-search/inbound-private-endpoints). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_RequirePrivateLinkSupportedResource_Deny.json) | |[Azure Cognitive Search services should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fee980b6d-0eca-4501-8d54-f6290fd512c3) |Disabling public network access improves security by ensuring that your Azure Cognitive Search service is not exposed on the public internet. Creating private endpoints can limit exposure of your Search service. Learn more at: [https://aka.ms/azure-cognitive-search/inbound-private-endpoints](https://aka.ms/azure-cognitive-search/inbound-private-endpoints). |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_RequirePublicNetworkAccessDisabled_Deny.json) | |[Azure Cognitive Search services should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fda3595-9f2b-4592-8675-4231d6fa82fe) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to Azure Cognitive Search, data leakage risks are reduced. Learn more about private links at: [https://aka.ms/azure-cognitive-search/inbound-private-endpoints](https://aka.ms/azure-cognitive-search/inbound-private-endpoints). |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_PrivateEndpoints_Audit.json) | |[Azure Cosmos DB accounts should have firewall rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F862e97cf-49fc-4a5c-9de4-40d4e2e7c8eb) |Firewall rules should be defined on your Azure Cosmos DB accounts to prevent traffic from unauthorized sources. Accounts that have at least one IP rule defined with the virtual network filter enabled are deemed compliant. Accounts disabling public access are also deemed compliant. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Cosmos%20DB/Cosmos_NetworkRulesExist_Audit.json) | |
governance | Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Azure Security Benchmark | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md | Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Cis Azure 1 1 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Cis Azure 1 3 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md | Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Cmmc L3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md | Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Fedramp High | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md | Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Fedramp Moderate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md | Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Nist Sp 800 171 R2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md | Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Nist Sp 800 53 R4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Gov Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Hipaa Hitrust 9 2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md | Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Index | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/index.md | Title: Index of policy samples description: Index of built-ins for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Guest Configuration, and more. Previously updated : 09/14/2023 Last updated : 11/21/2023 Azure: - [RBI ITF NBFC v2017](./rbi-itf-nbfc-2017.md) - [RMIT Malaysia](./rmit-malaysia.md) - [SWIFT CSP-CSCF v2021](./swift-csp-cscf-2021.md) - [SWIFT CSP-CSCF v2022](./swift-csp-cscf-2022.md) (added) - [UK OFFICIAL and UK NHS](./ukofficial-uknhs.md) The following are the [Regulatory Compliance](../concepts/regulatory-compliance.md) built-ins in |
governance | Irs 1075 Sept2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md | Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 initiative definition. |[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | |[Service Fabric clusters should only use Azure Active Directory for client authentication](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb54ed75b-3e1a-44ac-a333-05ba39b99ff0) |Audit usage of client authentication only via Azure Active Directory in Service Fabric |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Fabric/ServiceFabric_AuditADAuth_Audit.json) | Section heading changed from "### Account Management (AC-2)" (**ID**: IRS 1075 9.3.1.2) to "### Information Flow Enforcement (AC-4)" (**ID**: IRS 1075 9.3.1.4). |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |
governance | Iso 27001 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md | Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | New Zealand Ism | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/new-zealand-ism.md | Title: Regulatory Compliance details for New Zealand ISM Restricted description: Details of the New Zealand ISM Restricted Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Nist Sp 800 171 R2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md | Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Nist Sp 800 53 R4 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Nist Sp 800 53 R5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md | Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Nl Bio Cloud Theme | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nl-bio-cloud-theme.md | Title: Regulatory Compliance details for NL BIO Cloud Theme description: Details of the NL BIO Cloud Theme Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Nz Ism Restricted 3 5 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nz-ism-restricted-3-5.md | Title: Regulatory Compliance details for NZ ISM Restricted v3.5 description: Details of the NZ ISM Restricted v3.5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Pci Dss 3 2 1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md | Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Pci Dss 4 0 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md | Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Rbi Itf Banks 2016 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md | Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 initiative definition. |Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | ||||| |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) (updated from 2.0.0) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | |
governance | Rbi Itf Nbfc 2017 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md | Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Rmit Malaysia | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md | Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 initiative definition. |[PostgreSQL server should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c14b034-bcb6-4905-94e7-5b8e98a47b65) |Virtual network based firewall rules are used to enable traffic from a specific subnet to Azure Database for PostgreSQL while ensuring the traffic stays within the Azure boundary. This policy provides a way to audit if the Azure Database for PostgreSQL has virtual network service endpoint being used. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_VirtualNetworkServiceEndpoint_Audit.json) | |[Public network access on Azure SQL Database should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8ca024-1d5c-4dec-8995-b1a932b41780) |Disabling the public network access property improves security by ensuring your Azure SQL Database can only be accessed from a private endpoint. This configuration denies all logins that match IP or virtual network based firewall rules. |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_PublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MariaDB servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffdccbe47-f3e3-4213-ad5d-ea459b2fa077) |Disable the public network access property to improve security and ensure your Azure Database for MariaDB can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MariaDB_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for MySQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9299215-ae47-4f50-9c54-8a392f68a052) |Disabling the public network access property improves security by ensuring your Azure Database for MySQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) (updated from 2.0.0) | |[Public network access should be disabled for MySQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9844e8a-1437-4aeb-a32c-0c992f056095) |Disable the public network access property to improve security and ensure your Azure Database for MySQL can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/MySQL_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL flexible servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e1de0e3-42cb-4ebc-a86d-61d0c619ca48) |Disabling the public network access property improves security by ensuring your Azure Database for PostgreSQL flexible servers can only be accessed from a private endpoint. This configuration strictly disables access from any public address space outside of Azure IP range and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_FlexibleServers_DisablePublicNetworkAccess_Audit.json) | |[Public network access should be disabled for PostgreSQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb52376f7-9612-48a1-81cd-1ffe4b61032c) |Disable the public network access property to improve security and ensure your Azure Database for PostgreSQL can only be accessed from a private endpoint. This configuration disables access from any public address space outside of Azure IP range, and denies all logins that match IP or virtual network-based firewall rules. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_DisablePublicNetworkAccess_Audit.json) | |
governance | Swift Csp Cscf 2021 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2021.md | Title: Regulatory Compliance details for SWIFT CSP-CSCF v2021 description: Details of the SWIFT CSP-CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
governance | Swift Csp Cscf 2022 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2022.md | + + Title: Regulatory Compliance details for SWIFT CSP-CSCF v2022 +description: Details of the SWIFT CSP-CSCF v2022 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Last updated : 11/21/2023++++# Details of the SWIFT CSP-CSCF v2022 Regulatory Compliance built-in initiative ++The following article details how the Azure Policy Regulatory Compliance built-in initiative +definition maps to **compliance domains** and **controls** in SWIFT CSP-CSCF v2022. +For more information about this compliance standard, see +[SWIFT CSP-CSCF v2022](https://www.swift.com/myswift/customer-security-programme-csp). To understand +_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and +[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md). ++The following mappings are to the **SWIFT CSP-CSCF v2022** controls. Many of the controls +are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete +initiative definition, open **Policy** in the Azure portal and select the **Definitions** page. +Then, find and select the **SWIFT CSP-CSCF v2022** Regulatory Compliance built-in +initiative definition. ++> [!IMPORTANT] +> Each control below is associated with one or more [Azure Policy](../overview.md) definitions. +> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the +> control; however, there often is not a one-to-one or complete match between a control and one or +> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions +> themselves; this doesn't ensure you're fully compliant with all requirements of a control. 
In +> addition, the compliance standard includes controls that aren't addressed by any Azure Policy +> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your +> overall compliance status. The associations between compliance domains, controls, and Azure Policy +> definitions for this compliance standard may change over time. To view the change history, see the +> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/SWIFT_CSP-CSCF_v2022.json). ++## 1. Restrict Internet Access & Protect Critical Systems from General IT Environment ++### Ensure the protection of the user's local SWIFT infrastructure from potentially compromised elements of the general IT environment and external environment. ++**ID**: SWIFT CSCF v2022 1.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. 
Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. 
|AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | +|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) | +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[All network ports should be restricted on network security groups associated to your virtual 
machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified that some of your network security groups' inbound rules are too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) | +|[Allowlist rules in your adaptive application control policy should be updated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F123a3936-f020-408a-ba0c-47873faf1534) |Monitor for changes in behavior on groups of machines configured for auditing by Azure Security Center's adaptive application controls. Security Center uses machine learning to analyze the running processes on your machines and suggest a list of known-safe applications. These are presented as recommended apps to allow in adaptive application control policies. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControlsUpdate_Audit.json) | +|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app to selected subnets of an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) | +|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) | +|[Check for privacy and security compliance before establishing internal connections](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fee4bbbbb-2e52-9adb-4e3a-e641f7ac68ab) |CMA_0053 - Check for privacy and security compliance before establishing internal connections |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0053.json) | +|[Ensure external providers consistently meet interests of the customers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3eabed6d-1912-2d3c-858b-f438d08d0412) |CMA_C1592 - Ensure external providers consistently meet interests of the customers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1592.json) | +|[Implement system boundary 
protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | +|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) | +|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (it is needed, for example, when using the VM as a network virtual appliance), so any machine that enables it should be reviewed by the network security team. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) | +|[Key Vault should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea4d6841-2173-4317-9747-ff522a45120f) |This policy audits any Key Vault not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_KeyVault_Audit.json) | +|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario-level monitoring enables you to diagnose problems with an end-to-end network view. A network watcher resource group must be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) | +|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) | +|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) | +|[Storage Accounts should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60d21c4f-21a3-4d94-85f4-b924e6aeeda4) |This policy audits any Storage Account not configured to use a virtual network service endpoint. 
|Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_StorageAccount_Audit.json) | +|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) | +|[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) | +|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. 
Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) | ++### Restrict and control the allocation and usage of administrator-level operating system accounts. ++**ID**: SWIFT CSCF v2022 1.2 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) | +|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) | +|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) | +|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Define and enforce conditions for shared and group accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff7eb1d0b-6d4f-2d59-1591-7563e11a9313) |CMA_0117 - Define and enforce conditions for shared and group accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0117.json) | +|[Design an access control model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F03b6427e-6072-4226-4bd9-a410ab65317e) |CMA_0129 - Design an access control model |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0129.json) | +|[Develop and establish a system security plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2ea1058-8998-3dd1-84f1-82132ad482fd) |CMA_0151 - Develop and establish a system security plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0151.json) | +|[Develop information security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf227964-5b8b-22a2-9364-06d2cb9d6d7c) |CMA_0158 - Develop information security policies and procedures |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0158.json) | +|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) | +|[Establish a privacy program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F39eb03c1-97cc-11ab-0960-6209ed2869f7) |CMA_0257 - Establish a privacy program |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0257.json) | +|[Establish security requirements for the manufacturing of connected devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fafbecd30-37ee-a27b-8e09-6ac49951a0ee) |CMA_0279 - Establish security requirements for the manufacturing of connected devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0279.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Implement security engineering principles of information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdf2e9507-169b-4114-3a52-877561ee3198) |CMA_0325 - Implement security engineering principles of information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0325.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Monitor account activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7b28ba4f-0a87-46ac-62e1-46b7c09202a8) |CMA_0377 - Monitor account activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0377.json) | +|[Monitor privileged role assignment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fed87d27a-9abf-7c71-714c-61d881889da4) |CMA_0378 - Monitor privileged role assignment |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0378.json) | +|[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) | +|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) | +|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | +|[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) | ++### Secure the virtualisation platform and virtual machines (VMs) that host SWIFT-related components to the same level as physical systems. ++**ID**: SWIFT CSCF v2022 1.3 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) | +|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | ++### Control/Protect Internet access from operator PCs and systems within the secure zone. 
**ID**: SWIFT CSCF v2022 1.4
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
|[Authorize remote access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdad8a2e9-6f27-4fc2-8933-7e99fe700c9c) |CMA_0024 - Authorize remote access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0024.json) |
|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
|[Document and implement wireless access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04b3e7f6-4841-888d-4799-cda19a0084f6) |CMA_0190 - Document and implement wireless access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0190.json) |
|[Document mobility training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83dfb2b8-678b-20a0-4c44-5c75ada023e6) |CMA_0191 - Document mobility training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0191.json) |
|[Document remote access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d492600-27ba-62cc-a1c3-66eb919f6a0d) |CMA_0196 - Document remote access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0196.json) |
|[Implement controls to secure alternate work sites](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcd36eeec-67e7-205a-4b64-dbfe3b4e3e4e) |CMA_0315 - Implement controls to secure alternate work sites |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0315.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[Non-internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbb91dfba-c30d-4263-9add-9c2384e659a6) |Protect your non-internet-facing virtual machines from potential threats by restricting access with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternalVirtualMachines_Audit.json) |
|[Protect wireless access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd42a8f69-a193-6cbc-48b9-04a9e29961f1) |CMA_0411 - Protect wireless access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0411.json) |
|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) |

### Ensure the protection of the customer's connectivity infrastructure from external environment and potentially compromised elements of the general IT environment.
**ID**: SWIFT CSCF v2022 1.5A
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[\[Preview\]: All Internet traffic should be routed via your deployed Azure Firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc5e4038-4584-4632-8c85-c0448d374b2c) |Azure Security Center has identified that some of your subnets aren't protected with a next generation firewall. Protect your subnets from potential threats by restricting access to them with Azure Firewall or a supported next generation firewall |AuditIfNotExists, Disabled |[3.0.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/ASC_All_Internet_traffic_should_be_routed_via_Azure_Firewall.json) |
|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) |
|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) |
|[Adaptive application controls for defining safe applications should be enabled on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F47a6b606-51aa-4496-8bb7-64b11cf66adc) |Enable application controls to define the list of known-safe applications running on your machines, and alert you when other applications run. This helps harden your machines against malware. To simplify the process of configuring and maintaining your rules, Security Center uses machine learning to analyze the applications running on each machine and suggest the list of known-safe applications. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveApplicationControls_Audit.json) |
|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) |
|[All network ports should be restricted on network security groups associated to your virtual machine](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9daedab3-fb2d-461e-b861-71790eead4f6) |Azure Security Center has identified some of your network security groups' inbound rules to be too permissive. Inbound rules should not allow access from 'Any' or 'Internet' ranges. This can potentially enable attackers to target your resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnprotectedEndpoints_Audit.json) |
|[App Service apps should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d21331d-a4c2-4def-a9ad-ee4e1e023beb) |Use virtual network service endpoints to restrict access to your app from selected subnets from an Azure virtual network. To learn more about App Service service endpoints, visit [https://aka.ms/appservice-vnet-service-endpoint](https://aka.ms/appservice-vnet-service-endpoint). |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_AppService_AuditIfNotExists.json) |
|[Azure DDoS Protection Standard should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa7aca53f-2ed4-4466-a25e-0b45ade68efd) |DDoS protection standard should be enabled for all virtual networks with a subnet that is part of an application gateway with a public IP. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableDDoSProtection_Audit.json) |
|[Azure Key Vault should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) |Enable the key vault firewall so that the key vault is not accessible by default to any public IPs. Optionally, you can configure specific IP ranges to limit access to those networks. Learn more at: [https://docs.microsoft.com/azure/key-vault/general/network-security](../../../key-vault/general/network-security.md) |Audit, Deny, Disabled |[3.2.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/AzureKeyVaultFirewallEnabled_Audit.json) |
|[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) |
|[Employ boundary protection to isolate information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F311802f9-098d-0659-245a-94c5d47c0182) |CMA_C1639 - Employ boundary protection to isolate information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1639.json) |
|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) |
|[Employ restrictions on external system interconnections](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F80029bc5-834f-3a9c-a2d8-acbc1aab4e9f) |CMA_C1155 - Employ restrictions on external system interconnections |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1155.json) |
|[Establish firewall and router configuration standards](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F398fdbd8-56fd-274d-35c6-fa2d3b2755a1) |CMA_0272 - Establish firewall and router configuration standards |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0272.json) |
|[Establish network segmentation for card holder data environment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff476f3b0-4152-526e-a209-44e5f8c968d7) |CMA_0273 - Establish network segmentation for card holder data environment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0273.json) |
|[Identify and manage downstream information exchanges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc7fddb0e-3f44-8635-2b35-dc6b8e740b7c) |CMA_0298 - Identify and manage downstream information exchanges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0298.json) |
|[Implement managed interface for each external service](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb262e1dd-08e9-41d4-963a-258909ad794b) |CMA_C1626 - Implement managed interface for each external service |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1626.json) |
|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) |
|[Internet-facing virtual machines should be protected with network security groups](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6de0be7-9a8a-4b8a-b349-43cf02d22f7c) |Protect your virtual machines from potential threats by restricting access to them with network security groups (NSG). Learn more about controlling traffic with NSGs at [https://aka.ms/nsg-doc](https://aka.ms/nsg-doc) |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnInternetFacingVirtualMachines_Audit.json) |
|[IP Forwarding on your virtual machine should be disabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd352bd5-2853-4985-bf0d-73806b4a5744) |Enabling IP forwarding on a virtual machine's NIC allows the machine to receive traffic addressed to other destinations. IP forwarding is rarely required (e.g., when using the VM as a network virtual appliance), and therefore, this should be reviewed by the network security team. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_IPForwardingOnVirtualMachines_Audit.json) |
|[Key Vault should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea4d6841-2173-4317-9747-ff522a45120f) |This policy audits any Key Vault not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_KeyVault_Audit.json) |
|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario level monitoring enables you to diagnose problems at an end to end network level view. It is required to have a network watcher resource group to be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) |
|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
|[Storage Accounts should use a virtual network service endpoint](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60d21c4f-21a3-4d94-85f4-b924e6aeeda4) |This policy audits any Storage Account not configured to use a virtual network service endpoint. |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/VirtualNetworkServiceEndpoint_StorageAccount_Audit.json) |
|[Subnets should be associated with a Network Security Group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe71308d3-144b-4262-b144-efdc3cc90517) |Protect your subnet from potential threats by restricting access to it with a Network Security Group (NSG). NSGs contain a list of Access Control List (ACL) rules that allow or deny network traffic to your subnet. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_NetworkSecurityGroupsOnSubnets_Audit.json) |
|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). |Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) |

## 2. Reduce Attack Surface and Vulnerabilities

### Ensure the confidentiality, integrity, and authenticity of application data flows between local SWIFT-related components.

**ID**: SWIFT CSCF v2022 2.1
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Authentication to Linux machines should require SSH keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) |
|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) |
|[Configure actions for noncompliant devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb53aa659-513e-032c-52e6-1ce0ba46582f) |CMA_0062 - Configure actions for noncompliant devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0062.json) |
|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
|[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) |
|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
|[Define organizational requirements for cryptographic key management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd661e9eb-4e15-5ba1-6f02-cdc467db0d6c) |CMA_0123 - Define organizational requirements for cryptographic key management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0123.json) |
|[Determine assertion requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a0ecd94-3699-5273-76a5-edb8499f655a) |CMA_0136 - Determine assertion requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0136.json) |
|[Develop and maintain baseline configurations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f20840e-7925-221c-725d-757442753e7c) |CMA_0153 - Develop and maintain baseline configurations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0153.json) |
|[Employ boundary protection to isolate information systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F311802f9-098d-0659-245a-94c5d47c0182) |CMA_C1639 - Employ boundary protection to isolate information systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1639.json) |
|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) |
|[Enforce random unique session identifiers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc7d57a6a-7cc2-66c0-299f-83bf90558f5d) |CMA_0247 - Enforce random unique session identifiers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0247.json) |
|[Enforce security configuration settings](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F058e9719-1ff9-3653-4230-23f76b6492e0) |CMA_0249 - Enforce security configuration settings |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0249.json) |
|[Establish a configuration control board](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7380631c-5bf5-0e3a-4509-0873becd8a63) |CMA_0254 - Establish a configuration control board |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0254.json) |
|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
|[Establish and document a configuration management plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F526ed90e-890f-69e7-0386-ba5c0f1f784f) |CMA_0264 - Establish and document a configuration management plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0264.json) |
|[Establish backup policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f23967c-a74b-9a09-9dc2-f566f61a87b9) |CMA_0268 - Establish backup policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0268.json) |
|[Implement an automated configuration management tool](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33832848-42ab-63f3-1a55-c0ad309d44cd) |CMA_0311 - Implement an automated configuration management tool |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0311.json) |
|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) |
|[Information flow control using security policy filters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13ef3484-3a51-785a-9c96-500f21f84edd) |CMA_C1029 - Information flow control using security policy filters |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1029.json) |
|[Isolate SecurID systems, Security Incident Management systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdd6d00a8-701a-5935-a22b-c7b9c0c698b2) |CMA_C1636 - Isolate SecurID systems, Security Incident Management systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1636.json) |
|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) |
|[Maintain availability of information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ad7f0bc-3d03-0585-4d24-529779bb02c2) |CMA_C1644 - Maintain availability of information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1644.json) |
|[Manage symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c276cf3-596f-581a-7fbd-f5e46edaa0f4) |CMA_0367 - Manage symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0367.json) |
|[Notify users of system logon or access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffe2dff43-0a8c-95df-0432-cb1c794b17d0) |CMA_0382 - Notify users of system logon or access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0382.json) |
|[Produce, control and distribute asymmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde077e7e-0cc8-65a6-6e08-9ab46c827b05) |CMA_C1646 - Produce, control and distribute asymmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1646.json) |
|[Produce, control and distribute symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F16c54e01-9e65-7524-7c33-beda48a75779) |CMA_C1645 - Produce, control and distribute symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1645.json) |
|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) | +|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | +|[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) | +|[Secure the interface to external systems](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff1efad2-6b09-54cc-01bf-d386c4d558a8) |CMA_0491 - Secure the interface to external systems |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0491.json) | +|[Windows machines should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information 
communicated over the Internet, your machines should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by encrypting a connection between machines. |AuditIfNotExists, Disabled |[4.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) | ++### Minimise the occurrence of known technical vulnerabilities on operator PCs and within the local SWIFT infrastructure by ensuring vendor support, applying mandatory software updates, and applying timely security updates aligned to the assessed risk. ++**ID**: SWIFT CSCF v2022 2.2 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) | +|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | +|[Audit Windows VMs with a pending reboot](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4221adbc-5c0f-474f-88b7-037a99e6114c) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if the machine is pending reboot for any of the following reasons: component based servicing, Windows Update, pending file rename, pending computer rename, configuration manager pending reboot. Each detection has a unique registry path. 
|auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPendingReboot_AINE.json) | +|[Correlate Vulnerability scan information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3905a3c-97e7-0b4f-15fb-465c0927536f) |CMA_C1558 - Correlate Vulnerability scan information |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1558.json) | +|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | +|[Disseminate security alerts to personnel](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c93ef57-7000-63fb-9b74-88f2e17ca5d2) |CMA_C1705 - Disseminate security alerts to personnel |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1705.json) | +|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) | +|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | +|[System updates on virtual machine scale sets should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc3f317a7-a95c-4547-b7e7-11017ebdf2fe) |Audit whether there are any missing system security updates and critical updates that should be installed to ensure that your Windows and Linux virtual machine scale sets are secure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingSystemUpdates_Audit.json) | +|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) | +|[Use automated mechanisms for security alerts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8689b2e-4308-a58b-a0b4-6f3343a000df) |CMA_C1707 - Use automated mechanisms for security alerts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1707.json) | ++### Reduce the cyber-attack surface of SWIFT-related components by performing system hardening. ++**ID**: SWIFT CSCF v2022 2.3 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. 
A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) | +|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | +|[Audit Linux machines that do not have the passwd file permissions set to 0644](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6955644-301c-44b5-a4c4-528577de6861) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
Machines are non-compliant if Linux machines that do not have the passwd file permissions set to 0644 |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword121_AINE.json) | +|[Audit Windows machines that contain certificates expiring within the specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1417908b-4bff-46ee-a2a6-4acc899320ab) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if certificates in the specified store have an expiration date out of range for the number of days given as parameter. The policy also provides the option to only check for specific certificates or exclude specific certificates, and whether to report on expired certificates. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_CertificateExpiration_AINE.json) | +|[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
Machines are non-compliant if Windows machines that do not store passwords using reversible encryption |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) | +|[Automate proposed documented changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c40f27b-6791-18c5-3f85-7b863bd99c11) |CMA_C1191 - Automate proposed documented changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1191.json) | +|[Conduct a security impact analysis](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F203101f5-99a3-1491-1b56-acccd9b66a9e) |CMA_0057 - Conduct a security impact analysis |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0057.json) | +|[Configure actions for noncompliant devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb53aa659-513e-032c-52e6-1ce0ba46582f) |CMA_0062 - Configure actions for noncompliant devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0062.json) | +|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure 
that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) | +|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | +|[Develop and maintain a vulnerability management standard](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F055da733-55c6-9e10-8194-c40731057ec4) |CMA_0152 - Develop and maintain a vulnerability management standard |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0152.json) | +|[Develop and maintain baseline configurations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f20840e-7925-221c-725d-757442753e7c) |CMA_0153 - Develop and maintain baseline configurations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0153.json) | +|[Enforce security configuration settings](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F058e9719-1ff9-3653-4230-23f76b6492e0) |CMA_0249 - Enforce security configuration settings |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0249.json) | +|[Establish a configuration control board](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7380631c-5bf5-0e3a-4509-0873becd8a63) |CMA_0254 - Establish a configuration control board |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0254.json) | +|[Establish a 
risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd36700f2-2f0d-7c2a-059c-bdadd1d79f70) |CMA_0258 - Establish a risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0258.json) | +|[Establish and document a configuration management plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F526ed90e-890f-69e7-0386-ba5c0f1f784f) |CMA_0264 - Establish and document a configuration management plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0264.json) | +|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) | +|[Establish configuration management requirements for developers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8747b573-8294-86a0-8914-49e9b06a5ace) |CMA_0270 - Establish configuration management requirements for developers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0270.json) | +|[Implement an automated configuration management 
tool](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33832848-42ab-63f3-1a55-c0ad309d44cd) |CMA_0311 - Implement an automated configuration management tool |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0311.json) | +|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) | +|[Perform a privacy impact assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd18af1ac-0086-4762-6dc8-87cdded90e39) |CMA_0387 - Perform a privacy impact assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0387.json) | +|[Perform a risk assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c5d3d8d-5cba-0def-257c-5ab9ea9644dc) |CMA_0388 - Perform a risk assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0388.json) | +|[Perform audit for configuration change 
control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) | +|[Retain previous versions of baseline configs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5e4e9685-3818-5934-0071-2620c4fa2ca5) |CMA_C1181 - Retain previous versions of baseline configs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1181.json) | +|[VM Image Builder templates should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2154edb9-244f-4741-9970-660785bccdaa) |Azure Private Link lets you connect your virtual network to Azure services without a public IP address at the source or destination. The Private Link platform handles the connectivity between the consumer and services over the Azure backbone network. By mapping private endpoints to your VM Image Builder building resources, data leakage risks are reduced. Learn more about private links at: [https://docs.microsoft.com/azure/virtual-machines/linux/image-builder-networking#deploy-using-an-existing-vnet](../../../virtual-machines/linux/image-builder-networking.md#deploy-using-an-existing-vnet). 
|Audit, Disabled, Deny |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/VM%20Image%20Builder/PrivateLinkEnabled_Audit.json) | ++### Ensure the confidentiality, integrity, and mutual authenticity of data flows between local or remote SWIFT infrastructure components and the back-office first hops they connect to. ++**ID**: SWIFT CSCF v2022 2.4 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Conduct backup of information system documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb269a749-705e-8bff-055a-147744675cdf) |CMA_C1289 - Conduct backup of information system documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1289.json) | +|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) | +|[Establish backup policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f23967c-a74b-9a09-9dc2-f566f61a87b9) |CMA_0268 - Establish backup policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0268.json) | +|[Implement controls to secure all 
media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) | +|[Notify users of system logon or access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffe2dff43-0a8c-95df-0432-cb1c794b17d0) |CMA_0382 - Notify users of system logon or access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0382.json) | +|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) | +|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | ++### Back-office Data Flow Security ++**ID**: SWIFT CSCF v2022 2.4A +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Authentication to Linux machines should require SSH 
keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F630c64f9-8b6b-4c64-b511-6544ceff6fd6) |Although SSH itself provides an encrypted connection, using passwords with SSH still leaves the VM vulnerable to brute-force attacks. The most secure option for authenticating to an Azure Linux virtual machine over SSH is with a public-private key pair, also known as SSH keys. Learn more: [https://docs.microsoft.com/azure/virtual-machines/linux/create-ssh-keys-detailed](../../../virtual-machines/linux/create-ssh-keys-detailed.md). |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxNoPasswordForSSH_AINE.json) | +|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | +|[Windows machines should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your machines should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by encrypting a connection between machines. 
|AuditIfNotExists, Disabled |[4.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) | ++### Protect the confidentiality of SWIFT-related data transmitted or stored outside of the secure zone as part of operational processes. ++**ID**: SWIFT CSCF v2022 2.5 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Conduct backup of information system documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb269a749-705e-8bff-055a-147744675cdf) |CMA_C1289 - Conduct backup of information system documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1289.json) | +|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) | +|[Establish backup policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f23967c-a74b-9a09-9dc2-f566f61a87b9) |CMA_0268 - Establish backup policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0268.json) | +|[Implement controls to secure all 
media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) | +|[Manage the transportation of assets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ac81669-00e2-9790-8648-71bc11bc91eb) |CMA_0370 - Manage the transportation of assets |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0370.json) | +|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) | +|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | ++### External Transmission Data Protection ++**ID**: SWIFT CSCF v2022 2.5A +**Ownership**: Customer ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit virtual machines without disaster recovery 
configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | +|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) | +|[Automation account variables should be encrypted](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3657f5a0-770e-44a3-b44e-9431ba1e9735) |It is important to enable encryption of Automation account variable assets when storing sensitive data |Audit, Deny, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Automation/Automation_AuditUnencryptedVars_Audit.json) | +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost-effective data protection solution for Azure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Geo-redundant storage should be enabled for Storage Accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf045164-79ba-4215-8f95-f8048dc1780b) |Use geo-redundancy to create highly available applications |Audit, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/GeoRedundant_StorageAccounts_Audit.json) | +|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audit requirement of Secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session-hijacking |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) | +|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches and data flowing between compute and storage aren't encrypted. Disregard this recommendation if: 1. using encryption-at-host, or 2. 
server-side encryption on Managed Disks meets your security requirements. Learn more in: Server-side encryption of Azure Disk Storage: [https://aka.ms/disksse](https://aka.ms/disksse), Different disk encryption offerings: [https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison) |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) | ++### Protect the confidentiality and integrity of interactive operator sessions that connect to the local or remote (operated by a service provider) SWIFT infrastructure or service provider SWIFT-related applications ++**ID**: SWIFT CSCF v2022 2.6 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) | +|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | +|[Authorize remote access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdad8a2e9-6f27-4fc2-8933-7e99fe700c9c) |CMA_0024 - Authorize remote access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0024.json) | +|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) | +|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | +|[Document and implement wireless access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04b3e7f6-4841-888d-4799-cda19a0084f6) |CMA_0190 - Document and implement wireless access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0190.json) | +|[Document mobility training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83dfb2b8-678b-20a0-4c44-5c75ada023e6) |CMA_0191 - Document mobility training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0191.json) | +|[Document remote access 
guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d492600-27ba-62cc-a1c3-66eb919f6a0d) |CMA_0196 - Document remote access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0196.json) | +|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) | +|[Implement controls to secure alternate work sites](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcd36eeec-67e7-205a-4b64-dbfe3b4e3e4e) |CMA_0315 - Implement controls to secure alternate work sites |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0315.json) | +|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) | +|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption 
|Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) | +|[Protect wireless access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd42a8f69-a193-6cbc-48b9-04a9e29961f1) |CMA_0411 - Protect wireless access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0411.json) | +|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) | +|[Reauthenticate or terminate a user session](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd6653f89-7cb5-24a4-9d71-51581038231b) |CMA_0421 - Reauthenticate or terminate a user session |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0421.json) | +|[Windows machines should be configured to use secure communication protocols](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5752e6d6-1206-46d8-8ab1-ecc2f71a8112) |To protect the privacy of information communicated over the Internet, your machines should use the latest version of the industry-standard cryptographic protocol, Transport Layer Security (TLS). TLS secures communications over a network by encrypting a connection between machines. 
|AuditIfNotExists, Disabled |[4.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecureWebProtocol_AINE.json) | +|[Windows machines should meet requirements for 'Security Options - Interactive Logon'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd472d2c9-d6a3-4500-9f5f-b15f123005aa) |Windows machines should have the specified Group Policy settings in the category 'Security Options - Interactive Logon' for displaying last user name and requiring ctrl-alt-del. This policy requires that the Guest Configuration prerequisites have been deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_SecurityOptionsInteractiveLogon_AINE.json) | ++### Identify known vulnerabilities within the local SWIFT environment by implementing a regular vulnerability scanning process and act upon results. ++**ID**: SWIFT CSCF v2022 2.7 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[A vulnerability assessment solution should be enabled on your virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F501541f7-f7e7-4cd6-868c-4190fdad3ac9) |Audits virtual machines to detect whether they are running a supported vulnerability assessment solution. A core component of every cyber risk and security program is the identification and analysis of vulnerabilities. Azure Security Center's standard pricing tier includes vulnerability scanning for your virtual machines at no extra cost. 
Additionally, Security Center can automatically deploy this tool for you. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ServerVulnerabilityAssessment_Audit.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Correlate Vulnerability scan information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3905a3c-97e7-0b4f-15fb-465c0927536f) |CMA_C1558 - Correlate Vulnerability scan information |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1558.json) | +|[Implement privileged access for executing vulnerability scanning activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b802722-71dd-a13d-2e7e-231e09589efb) |CMA_C1555 - Implement privileged access for executing vulnerability scanning activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1555.json) | +|[Incorporate flaw remediation into configuration management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34aac8b2-488a-2b96-7280-5b9b481a317a) |CMA_C1671 - Incorporate flaw remediation into configuration management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1671.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage 
accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Observe and report security weaknesses](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff136354-1c92-76dc-2dab-80fb7c6a9f1a) |CMA_0384 - Observe and report security weaknesses |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0384.json) | +|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | +|[Perform threat modeling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf883b14-9c19-0f37-8825-5e39a8b66d5b) |CMA_0392 - Perform threat modeling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0392.json) | +|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) | +|[Remediate information system 
flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | +|[Vulnerabilities in container security configurations should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8cbc669-f12d-49eb-93e7-9273119e9933) |Audit vulnerabilities in security configuration on machines with Docker installed and display as recommendations in Azure Security Center. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_ContainerBenchmark_Audit.json) | +|[Vulnerabilities in security configuration on your machines should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe1e5fd5d-3e4c-4ce1-8661-7d1873ae6b15) |Servers which do not satisfy the configured baseline will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_OSVulnerabilities_Audit.json) | +|[Vulnerabilities in security configuration on your virtual machine scale sets should be remediated](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c735d8a-a4ba-4a3a-b7cf-db7754cf57f4) |Audit the OS vulnerabilities on your virtual machine scale sets to protect them from attacks. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssOSVulnerabilities_Audit.json) | ++### Ensure a consistent and effective approach for the customers' messaging monitoring. ++**ID**: SWIFT CSCF v2022 2.8.5 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Assess risk in third party relationships](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0d04cb93-a0f1-2f4b-4b1b-a72a1b510d08) |CMA_0014 - Assess risk in third party relationships |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0014.json) | +|[Define and document government oversight](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcbfa1bd0-714d-8d6f-0480-2ad6a53972df) |CMA_C1587 - Define and document government oversight |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1587.json) | +|[Define requirements for supplying goods and services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b2f3a72-9e68-3993-2b69-13dcdecf8958) |CMA_0126 - Define requirements for supplying goods and services |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0126.json) | +|[Determine supplier contract obligations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67ada943-8539-083d-35d0-7af648974125) 
|CMA_0140 - Determine supplier contract obligations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0140.json) |
|[Establish policies for supply chain risk management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9150259b-617b-596d-3bf5-5ca3fce20335) |CMA_0275 - Establish policies for supply chain risk management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0275.json) |
|[Require external service providers to comply with security requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4e45863d-9ea9-32b4-a204-2680bc6007a6) |CMA_C1586 - Require external service providers to comply with security requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1586.json) |
|[Review cloud service provider's compliance with policies and agreements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fffea18d9-13de-6505-37f3-4c1f88070ad7) |CMA_0469 - Review cloud service provider's compliance with policies and agreements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0469.json) |
|[Undergo independent security review](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b55929b-0101-47c0-a16e-d6ac5c7d21f8) |CMA_0515 - Undergo independent security review |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0515.json) |

### Ensure the protection of the local SWIFT infrastructure from risks exposed by the outsourcing of critical activities.

**ID**: SWIFT CSCF v2022 2.8A
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Determine supplier contract obligations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F67ada943-8539-083d-35d0-7af648974125) |CMA_0140 - Determine supplier contract obligations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0140.json) |
|[Document acquisition contract acceptance criteria](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0803eaa7-671c-08a7-52fd-ac419f775e75) |CMA_0187 - Document acquisition contract acceptance criteria |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0187.json) |
|[Document protection of personal data in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9ec3263-9562-1768-65a1-729793635a8d) |CMA_0194 - Document protection of personal data in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0194.json) |
|[Document protection of security information in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd78f95ba-870a-a500-6104-8a5ce2534f19) |CMA_0195 - Document protection of security information in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0195.json) |
|[Document requirements for the use of shared data in contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0ba211ef-0e85-2a45-17fc-401d1b3f8f85) |CMA_0197 - Document requirements for the use of shared data in contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0197.json) |
|[Document security assurance requirements in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13efd2d7-3980-a2a4-39d0-527180c009e8) |CMA_0199 - Document security assurance requirements in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0199.json) |
|[Document security documentation requirements in acquisition contract](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa465e8e9-0095-85cb-a05f-1dd4960d02af) |CMA_0200 - Document security documentation requirements in acquisition contract |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0200.json) |
|[Document security functional requirements in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F57927290-8000-59bf-3776-90c468ac5b4b) |CMA_0201 - Document security functional requirements in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0201.json) |
|[Document security strength requirements in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb0ba89-6d8c-84a7-252b-7393881e43de) |CMA_0203 - Document security strength requirements in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0203.json) |
|[Document the information system environment in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc148208b-1a6f-a4ac-7abc-23b1d41121b1) |CMA_0205 - Document the information system environment in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0205.json) |
|[Document the protection of cardholder data in third party contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F77acc53d-0f67-6e06-7d04-5750653d4629) |CMA_0207 - Document the protection of cardholder data in third party contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0207.json) |

### Ensure outbound transaction activity within the expected bounds of normal business.

**ID**: SWIFT CSCF v2022 2.9
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Authorize, monitor, and control voip](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe4e1f896-8a93-1151-43c7-0ad23b081ee2) |CMA_0025 - Authorize, monitor, and control voip |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0025.json) |
|[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) |
|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) |
|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) |
|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
|[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) |

### Restrict transaction activity to validated and approved business counterparties.

**ID**: SWIFT CSCF v2022 2.11A
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
|[Design an access control model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F03b6427e-6072-4226-4bd9-a410ab65317e) |CMA_0129 - Design an access control model |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0129.json) |
|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) |
|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
|[Reassign or remove user privileges as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7805a343-275c-41be-9d62-7215b96212d8) |CMA_C1040 - Reassign or remove user privileges as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1040.json) |
|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
|[Review user privileges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff96d2186-79df-262d-3f76-f371e3b71798) |CMA_C1039 - Review user privileges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1039.json) |

## 3. Physically Secure the Environment

### Prevent unauthorised physical access to sensitive equipment, workplace environments, hosting sites, and storage.

**ID**: SWIFT CSCF v2022 3.1
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) |
|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
|[Establish and maintain an asset inventory](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27965e62-141f-8cca-426f-d09514ee5216) |CMA_0266 - Establish and maintain an asset inventory |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0266.json) |
|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) |
|[Install an alarm system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa0ddd99-43eb-302d-3f8f-42b499182960) |CMA_0338 - Install an alarm system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0338.json) |
|[Manage a secure surveillance camera system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff2222056-062d-1060-6dc2-0107a68c34b2) |CMA_0354 - Manage a secure surveillance camera system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0354.json) |
|[Review and update physical and environmental policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91cf132e-0c9f-37a8-a523-dc6a92cd2fb2) |CMA_C1446 - Review and update physical and environmental policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1446.json) |

## 4. Prevent Compromise of Credentials

### Ensure passwords are sufficiently resistant against common password attacks by implementing and enforcing an effective password policy.

**ID**: SWIFT CSCF v2022 4.1
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Audit Linux machines that allow remote connections from accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea53dbee-c6c9-4f0e-9f9e-de0039b78023) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they allow remote connections from accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword110_AINE.json) |
|[Audit Linux machines that have accounts without passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6ec09a3-78bf-4f8f-99dc-6c77182d0f99) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they have accounts without passwords |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_LinuxPassword232_AINE.json) |
|[Audit Windows machines that allow re-use of the passwords after the specified number of unique passwords](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b054a0d-39e2-4d53-bea3-9734cad2c69b) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they allow re-use of the passwords after the specified number of unique passwords. The default value for unique passwords is 24 |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEnforce_AINE.json) |
|[Audit Windows machines that do not have the maximum password age set to specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4ceb8dc2-559c-478b-a15b-733fbf1e3738) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they do not have the maximum password age set to the specified number of days. The default value for maximum password age is 70 days |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsMaximumPassword_AINE.json) |
|[Audit Windows machines that do not have the minimum password age set to specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F237b38db-ca4d-4259-9e47-7882441ca2c0) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they do not have the minimum password age set to the specified number of days. The default value for minimum password age is 1 day |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsMinimumPassword_AINE.json) |
|[Audit Windows machines that do not have the password complexity setting enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf16e0bb-31e1-4646-8202-60a235cc7e74) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they do not have the password complexity setting enabled |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordComplexity_AINE.json) |
|[Audit Windows machines that do not restrict the minimum password length to specified number of characters](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2d0e922-65d0-40c4-8f87-ea6da2d307a2) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they do not restrict the minimum password length to the specified number of characters. The default value for minimum password length is 14 characters |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordLength_AINE.json) |
|[Deploy the Linux Guest Configuration extension to enable Guest Configuration assignments on Linux VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F331e8ea8-378a-410f-a2e5-ae22f38bb0da) |This policy deploys the Linux Guest Configuration extension to Linux virtual machines hosted in Azure that are supported by Guest Configuration. The Linux Guest Configuration extension is a prerequisite for all Linux Guest Configuration assignments and must be deployed to machines before using any Linux Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionLinux_Prerequisite.json) |
|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) |
|[Document security strength requirements in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb0ba89-6d8c-84a7-252b-7393881e43de) |CMA_0203 - Document security strength requirements in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0203.json) |
|[Establish a password policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd8bbd80e-3bb1-5983-06c2-428526ec6a63) |CMA_0256 - Establish a password policy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0256.json) |
|[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) |
|[Implement parameters for memorized secret verifiers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b30aa25-0f19-6c04-5ca4-bd3f880a763d) |CMA_0321 - Implement parameters for memorized secret verifiers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0321.json) |
|[Manage authenticator lifetime and reuse](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F29363ae1-68cd-01ca-799d-92c9197c8404) |CMA_0355 - Manage authenticator lifetime and reuse |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0355.json) |
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |

### Prevent that a compromise of a single authentication factor allows access into SWIFT-related systems or applications by implementing multi-factor authentication.

**ID**: SWIFT CSCF v2022 4.2
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Accounts with owner permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3e008c3-56b9-4133-8fd7-d3347377402a) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithOwnerPermissions_Audit.json) |
|[Accounts with read permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b3ccb4-e6e8-4e4a-8d05-5df25cd29fd4) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithReadPermissions_Audit.json) |
|[Accounts with write permissions on Azure resources should be MFA enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F931e118d-50a1-4457-a5e4-78550e086c52) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForAccountsWithWritePermissions_Audit.json) |
|[Adopt biometric authentication mechanisms](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7a8356-5c34-9a95-3118-1424cfaf192a) |CMA_0005 - Adopt biometric authentication mechanisms |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0005.json) |
|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) |

## 5. Manage Identities and Segregate Privileges

### Enforce the security principles of need-to-know access, least privilege, and separation of duties for operator accounts.

**ID**: SWIFT CSCF v2022 5.1
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[A maximum of 3 owners should be designated for your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f11b553-d42e-4e3a-89be-32ca364cad4c) |It is recommended to designate up to 3 subscription owners in order to reduce the potential for breach by a compromised owner. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateLessThanXOwners_Audit.json) |
|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) |
|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) |
|[Assign account managers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4c6df5ff-4ef2-4f17-a516-0da9189c603b) |CMA_0015 - Assign account managers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0015.json) |
|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
|[Audit Windows machines that contain certificates expiring within the specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1417908b-4bff-46ee-a2a6-4acc899320ab) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if certificates in the specified store have an expiration date out of range for the number of days given as parameter. The policy also provides the option to only check for specific certificates or exclude specific certificates, and whether to report on expired certificates. |auditIfNotExists |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_CertificateExpiration_AINE.json) |
|[Automate account management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2cc9c165-46bd-9762-5739-d2aae5ba90a1) |CMA_0026 - Automate account management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0026.json) |
|[Blocked accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0cfea604-3201-4e14-88fc-fae4c427a6c5) |Deprecated accounts with owner permissions should be removed from your subscription. Deprecated accounts are accounts that have been blocked from signing in. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithOwnerPermissions_Audit.json) |
|[Blocked accounts with read and write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d7e1fde-fe26-4b5f-8108-f8e432cbc2be) |Deprecated accounts should be removed from your subscriptions. Deprecated accounts are accounts that have been blocked from signing in.
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveBlockedAccountsWithReadWritePermissions_Audit.json) | +|[Define access authorizations to support separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F341bc9f1-7489-07d9-4ec6-971573e1546a) |CMA_0116 - Define access authorizations to support separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0116.json) | +|[Define information system account types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F623b5f0a-8cbd-03a6-4892-201d27302f0c) |CMA_0121 - Define information system account types |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0121.json) | +|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | +|[Design an access control model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F03b6427e-6072-4226-4bd9-a410ab65317e) |CMA_0129 - Design an access control model |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0129.json) | +|[Disable authenticators upon termination](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9d48ffb-0d8c-0bd5-5f31-5a5826d19f10) |CMA_0169 - Disable authenticators upon termination |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0169.json) | +|[Document access privileges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa08b18c7-9e0a-89f1-3696-d80902196719) |CMA_0186 - Document access privileges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0186.json) | +|[Document separation of duties](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe6f7b584-877a-0d69-77d4-ab8b923a9650) |CMA_0204 - Document separation of duties |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0204.json) | +|[Employ least privilege 
access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) | +|[Establish conditions for role membership](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97cfd944-6f0c-7db2-3796-8e890ef70819) |CMA_0269 - Establish conditions for role membership |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0269.json) | +|[Guest accounts with owner permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F339353f6-2387-4a45-abe4-7f529d121046) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithOwnerPermissions_Audit.json) | +|[Guest accounts with read permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe9ac8f8e-ce22-4355-8f04-99b911d6be52) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithReadPermissions_Audit.json) | +|[Guest accounts with write permissions on Azure resources should be removed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F94e1c2ac-cbbe-4cac-a2b5-389c812dee87) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveGuestAccountsWithWritePermissions_Audit.json) | +|[Manage system and admin accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34d38ea7-6754-1838-7031-d7fd07099821) |CMA_0368 - Manage system and admin accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0368.json) | +|[Monitor access across the organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48c816c5-2190-61fc-8806-25d6f3df162f) |CMA_0376 - Monitor access across the organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0376.json) | +|[Monitor account activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7b28ba4f-0a87-46ac-62e1-46b7c09202a8) |CMA_0377 - Monitor account activity |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0377.json) | +|[Notify when account is not needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8489ff90-8d29-61df-2d84-f9ab0f4c5e84) |CMA_0383 - Notify when account is not needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0383.json) | +|[Protect audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e696f5a-451f-5c15-5532-044136538491) |CMA_0401 - Protect audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0401.json) | +|[Reassign or remove user privileges as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7805a343-275c-41be-9d62-7215b96212d8) |CMA_C1040 - Reassign or remove user privileges as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1040.json) | +|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) | +|[Restrict access to privileged 
accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) | +|[Review account provisioning logs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa830fe9e-08c9-a4fb-420c-6f6bf1702395) |CMA_0460 - Review account provisioning logs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0460.json) | +|[Review user accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79f081c7-1634-01a1-708e-376197999289) |CMA_0480 - Review user accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0480.json) | +|[Review user privileges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff96d2186-79df-262d-3f76-f371e3b71798) |CMA_C1039 - Review user privileges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1039.json) | +|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) | +|[Separate duties of individuals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F60ee1260-97f0-61bb-8155-5d8b75743655) |CMA_0492 - Separate duties of individuals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0492.json) | +|[There should be more than one owner assigned to your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F09024ccc-0c5f-475e-9457-b7c0d9ed487b) |It is recommended to designate more than one subscription owner in order to have administrator access redundancy. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_DesignateMoreThanOneOwner_Audit.json) | ++### Ensure the proper management, tracking, and use of connected and disconnected hardware authentication or personal tokens (when tokens are used). 
**ID**: SWIFT CSCF v2022 5.2
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Distribute authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F098dcde7-016a-06c3-0985-0daaf3301d3a) |CMA_0184 - Distribute authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0184.json) |
|[Establish authenticator types and processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F921ae4c1-507f-5ddb-8a58-cfa9b5fd96f0) |CMA_0267 - Establish authenticator types and processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0267.json) |
|[Establish procedures for initial authenticator distribution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F35963d41-4263-0ef9-98d5-70eb058f9e3c) |CMA_0276 - Establish procedures for initial authenticator distribution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0276.json) |
|[Management ports of virtual machines should be protected with just-in-time network access control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0f33259-77d7-4c9e-aac6-3aabcfae693c) |Possible network Just In Time (JIT) access will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_JITNetworkAccess_Audit.json) |
|[Verify identity before distributing authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F72889284-15d2-90b2-4b39-a1e9541e1152) |CMA_0538 - Verify identity before distributing authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0538.json) |

### To the extent permitted and practicable, ensure the trustworthiness of staff operating the local SWIFT environment by performing regular staff screening.

**ID**: SWIFT CSCF v2022 5.3A
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Clear personnel with access to classified information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc42f19c9-5d88-92da-0742-371a0ea03126) |CMA_0054 - Clear personnel with access to classified information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0054.json) |
|[Ensure access agreements are signed or resigned timely](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe7589f4e-1e8b-72c2-3692-1e14d7f3699f) |CMA_C1528 - Ensure access agreements are signed or resigned timely |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1528.json) |
|[Implement personnel screening](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe0c480bf-0d68-a42d-4cbb-b60f851f8716) |CMA_0322 - Implement personnel screening |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0322.json) |
|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
|[Rescreen individuals at a defined frequency](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6aeb800-0b19-944d-92dc-59b893722329) |CMA_C1512 - Rescreen individuals at a defined frequency |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1512.json) |

### Protect physically and logically the repository of recorded passwords.

**ID**: SWIFT CSCF v2022 5.4
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Audit Windows machines that do not store passwords using reversible encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fda0f98fe-a24b-4ad5-af69-bd0400233661) |Requires that prerequisites are deployed to the policy assignment scope. For details, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). Machines are non-compliant if they store passwords using reversible encryption. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_WindowsPasswordEncryption_AINE.json) |
|[Document security strength requirements in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Febb0ba89-6d8c-84a7-252b-7393881e43de) |CMA_0203 - Document security strength requirements in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0203.json) |
|[Establish a password policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd8bbd80e-3bb1-5983-06c2-428526ec6a63) |CMA_0256 - Establish a password policy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0256.json) |
|[Implement parameters for memorized secret verifiers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b30aa25-0f19-6c04-5ca4-bd3f880a763d) |CMA_0321 - Implement parameters for memorized secret verifiers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0321.json) |
|[Key vaults should have deletion protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. You can prevent permanent data loss by enabling purge protection and soft delete. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. Keep in mind that key vaults created after September 1st 2019 have soft-delete enabled by default. |Audit, Deny, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |

## 6. Detect Anomalous Activity to Systems or Transaction Records

### Ensure that local SWIFT infrastructure is protected against malware and act upon results.
++**ID**: SWIFT CSCF v2022 6.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) | +|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) | +|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) | +|[Correlate audit records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10874318-0bf7-a41f-8463-03e395482080) |CMA_0087 - Correlate audit records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0087.json) | +|[Correlate Vulnerability scan 
information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3905a3c-97e7-0b4f-15fb-465c0927536f) |CMA_C1558 - Correlate Vulnerability scan information |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1558.json) | +|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) | +|[Endpoint protection solution should be installed on virtual machine scale sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26a828e1-e88f-464e-bbb3-c134a282b9de) |Audit the existence and health of an endpoint protection solution on your virtual machines scale sets, to protect them from threats and vulnerabilities. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_VmssMissingEndpointProtection_Audit.json) | +|[Establish requirements for audit review and reporting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb3c8cc83-20d3-3890-8bc8-5568777670f4) |CMA_0277 - Establish requirements for audit review and reporting |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0277.json) | +|[Implement privileged access for executing vulnerability scanning activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5b802722-71dd-a13d-2e7e-231e09589efb) |CMA_C1555 - Implement privileged access for executing vulnerability scanning activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1555.json) | +|[Integrate audit review, analysis, and reporting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff741c4e6-41eb-15a4-25a2-61ac7ca232f0) |CMA_0339 - Integrate audit review, analysis, and reporting |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0339.json) | +|[Integrate cloud app security with a siem](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9fdde4a9-85fa-7850-6df4-ae9c4a2e56f9) |CMA_0340 - Integrate cloud app security with a siem |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0340.json) | +|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) | +|[Microsoft Antimalware for Azure should be configured to automatically update protection signatures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc43e4a30-77cb-48ab-a4dd-93f175c63b57) |This policy audits any Windows virtual machine not configured with automatic update of Microsoft Antimalware protection signatures. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VirtualMachines_AntiMalwareAutoUpdate_AuditIfNotExists.json) | +|[Microsoft IaaSAntimalware extension should be deployed on Windows servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b597639-28e4-48eb-b506-56b05d366257) |This policy audits any Windows server VM without Microsoft IaaSAntimalware extension deployed. 
|AuditIfNotExists, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/WindowsServers_AntiMalware_AuditIfNotExists.json) | +|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) | +|[Observe and report security weaknesses](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff136354-1c92-76dc-2dab-80fb7c6a9f1a) |CMA_0384 - Observe and report security weaknesses |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0384.json) | +|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | +|[Perform threat modeling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbf883b14-9c19-0f37-8825-5e39a8b66d5b) |CMA_0392 - Perform threat modeling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0392.json) | 
|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
|[Review account provisioning logs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa830fe9e-08c9-a4fb-420c-6f6bf1702395) |CMA_0460 - Review account provisioning logs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0460.json) |
|[Review administrator assignments weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff27a298f-9443-014a-0d40-fef12adf0259) |CMA_0461 - Review administrator assignments weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0461.json) |
|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
|[Review cloud identity report overview](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8aec4343-9153-9641-172c-defb201f56b3) |CMA_0468 - Review cloud identity report overview |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0468.json) |
|[Review controlled folder access events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff48b60c6-4b37-332f-7288-b6ea50d300eb) |CMA_0471 - Review controlled folder access events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0471.json) |
|[Review exploit protection events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa30bd8e9-7064-312a-0e1f-e1b485d59f6e) |CMA_0472 - Review exploit protection events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0472.json) |
|[Review file and folder activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef718fe4-7ceb-9ddf-3198-0ee8f6fe9cba) |CMA_0473 - Review file and folder activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0473.json) |
|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
|[Review role group changes weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) |
|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |

### Ensure the software integrity of the SWIFT-related components and act upon results.

**ID**: SWIFT CSCF v2022 6.2
**Ownership**: Shared

|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
|||||
|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
|[Employ automatic shutdown/restart when violations are detected](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b8a7ec3-11cc-a2d3-8cd0-eedf074424a4) |CMA_C1715 - Employ automatic shutdown/restart when violations are detected |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1715.json) |
|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |
|[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) |

### Ensure the integrity of the database records for the SWIFT messaging interface or the customer connector and act upon results.
++**ID**: SWIFT CSCF v2022 6.3 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | +|[View and configure system diagnostic data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0123edae-3567-a05a-9b05-b53ebe9d3e7e) |CMA_0544 - View and configure system diagnostic data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0544.json) | ++### Record security events and detect anomalous actions and operations within the local SWIFT environment. ++**ID**: SWIFT CSCF v2022 6.4 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Log Analytics Extension should be enabled for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32133ab0-ee4b-4b44-98d6-042180979d50) |Reports virtual machines as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. 
|AuditIfNotExists, Disabled |[2.0.1-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_Audit.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. 
|AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | +|[Activity log should be retained for at least one year](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb02aacc0-b073-424e-8298-42b22829ee0a) |This policy audits the activity log if the retention is not set for 365 days or forever (retention days set to 0). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLogRetention_365orGreater.json) | +|[Add system-assigned managed identity to enable Guest Configuration assignments on virtual machines with no identities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3cf2ab00-13f1-4d0c-8971-2ac904541a7e) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration but do not have any managed identities. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenNone_Prerequisite.json) | +|[Add system-assigned managed identity to enable Guest Configuration assignments on VMs with a user-assigned identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F497dff13-db2a-4c0f-8603-28fa3b331ab6) |This policy adds a system-assigned managed identity to virtual machines hosted in Azure that are supported by Guest Configuration and have at least one user-assigned identity but do not have a system-assigned managed identity. A system-assigned managed identity is a prerequisite for all Guest Configuration assignments and must be added to machines before using any Guest Configuration policy definitions. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). |modify |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_AddSystemIdentityWhenUser_Prerequisite.json) | +|[All flow log resources should be in enabled state](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F27960feb-a23c-4577-8d36-ef8b5f35e0be) |Audit for flow log resources to verify if flow log status is enabled. Enabling flow logs allows to log information about IP traffic flowing. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. 
|Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcherFlowLog_Enabled_Audit.json) | +|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) | +|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) | +|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) | +|[Audit virtual machines without disaster recovery configured](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0015ea4d-51ff-4ce3-8d8c-f3f8f0179a56) |Audit virtual machines which do not have disaster recovery configured. 
To learn more about disaster recovery, visit [https://aka.ms/asr-doc](https://aka.ms/asr-doc). |auditIfNotExists |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/RecoveryServices_DisasterRecovery_Audit.json) | +|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) | +|[Azure Backup should be enabled for Virtual Machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F013e242c-8828-4970-87b3-ab247555486d) |Ensure protection of your Azure Virtual Machines by enabling Azure Backup. Azure Backup is a secure and cost effective data protection solution for Azure. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Backup/VirtualMachines_EnableAzureBackup_Audit.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Azure Monitor log profile should collect logs for categories 'write,' 'delete,' and 'action'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1a4e592a-6a6e-44a5-9814-e36264ca96e7) |This policy ensures that a log profile collects logs for categories 'write,' 'delete,' and 'action' |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllCategories.json) | +|[Azure Monitor Logs clusters should be created with infrastructure-encryption enabled (double encryption)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea0dfaed-95fb-448c-934e-d6e713ce393d) |To ensure secure data encryption is enabled at the service level and the infrastructure level with two different encryption algorithms and two different keys, use an Azure Monitor dedicated cluster. This option is enabled by default when supported at the region, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys#customer-managed-key-overview](../../../azure-monitor/platform/customer-managed-keys.md#customer-managed-key-overview). 
|audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKDoubleEncryptionEnabled_Deny.json) | +|[Azure Monitor Logs clusters should be encrypted with customer-managed key](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f68a601-6e6d-4e42-babf-3f643a047ea2) |Create Azure Monitor logs cluster with customer-managed keys encryption. By default, the log data is encrypted with service-managed keys, but customer-managed keys are commonly required to meet regulatory compliance. Customer-managed key in Azure Monitor gives you more control over the access to you data, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](../../../azure-monitor/platform/customer-managed-keys.md). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsClusters_CMKEnabled_Deny.json) | +|[Azure Monitor Logs for Application Insights should be linked to a Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd550e854-df1a-4de9-bf44-cd894b39a95e) |Link the Application Insights component to a Log Analytics workspace for logs encryption. Customer-managed keys are commonly required to meet regulatory compliance and for more control over the access to your data in Azure Monitor. Linking your component to a Log Analytics workspace that's enabled with a customer-managed key, ensures that your Application Insights logs meet this compliance requirement, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys](../../../azure-monitor/platform/customer-managed-keys.md). 
|audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ApplicationInsightsComponent_WorkspaceAssociation_Deny.json) | +|[Azure Monitor should collect activity logs from all regions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F41388f1c-2db0-4c25-95b2-35d7f5ccbfa9) |This policy audits the Azure Monitor log profile which does not export activities from all Azure supported regions including global. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_CaptureAllRegions.json) | +|[Azure Monitor solution 'Security and Audit' must be deployed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3e596b57-105f-48a6-be97-03e9243bad6e) |This policy ensures that Security and Audit is deployed. 
|AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/Security_Audit_MustBeDeployed.json) | +|[Correlate audit records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10874318-0bf7-a41f-8463-03e395482080) |CMA_0087 - Correlate audit records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0087.json) | +|[Deploy the Windows Guest Configuration extension to enable Guest Configuration assignments on Windows VMs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F385f5831-96d4-41db-9a3c-cd3af78aaae6) |This policy deploys the Windows Guest Configuration extension to Windows virtual machines hosted in Azure that are supported by Guest Configuration. The Windows Guest Configuration extension is a prerequisite for all Windows Guest Configuration assignments and must be deployed to machines before using any Windows Guest Configuration policy definition. For more information on Guest Configuration, visit [https://aka.ms/gcpol](https://aka.ms/gcpol). 
|deployIfNotExists |[1.2.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Guest%20Configuration/GuestConfiguration_DeployExtensionWindows_Prerequisite.json) | +|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) | +|[Establish requirements for audit review and reporting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb3c8cc83-20d3-3890-8bc8-5568777670f4) |CMA_0277 - Establish requirements for audit review and reporting |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0277.json) | +|[Flow logs should be configured for every network security group](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc251913d-7d24-4958-af87-478ed3b9ba41) |Audit for network security groups to verify if flow logs are configured. Enabling flow logs allows to log information about IP traffic flowing through network security group. It can be used for optimizing network flows, monitoring throughput, verifying compliance, detecting intrusions and more. 
|Audit, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkSecurityGroup_FlowLog_Audit.json) | +|[Integrate audit review, analysis, and reporting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff741c4e6-41eb-15a4-25a2-61ac7ca232f0) |CMA_0339 - Integrate audit review, analysis, and reporting |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0339.json) | +|[Integrate cloud app security with a siem](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9fdde4a9-85fa-7850-6df4-ae9c4a2e56f9) |CMA_0340 - Integrate cloud app security with a siem |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0340.json) | +|[Log Analytics extension should be enabled in virtual machine scale sets for listed virtual machine images](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c3bc7b8-a64c-4e08-a9cd-7ff0f31e1138) |Reports virtual machine scale sets as non-compliant if the virtual machine image is not in the list defined and the extension is not installed. 
|AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalytics_OSImage_VMSS_Audit.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Network Watcher flow logs should have traffic analytics enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f080164-9f4d-497e-9db6-416dc9f7b48a) |Traffic analytics analyzes flow logs to provide insights into traffic flow in your Azure cloud. It can be used to visualize network activity across your Azure subscriptions and identify hot spots, identify security threats, understand traffic flow patterns, pinpoint network misconfigurations and more. 
|Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_FlowLog_TrafficAnalytics_Audit.json) | +|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) | +|[Provide real-time alerts for audit event failures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0f4fa857-079d-9d3d-5c49-21f616189e03) |CMA_C1114 - Provide real-time alerts for audit event failures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1114.json) | +|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | +|[Resource logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of resource logs. 
This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) | +|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) | +|[Review account provisioning logs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa830fe9e-08c9-a4fb-420c-6f6bf1702395) |CMA_0460 - Review account provisioning logs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0460.json) | +|[Review administrator assignments weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff27a298f-9443-014a-0d40-fef12adf0259) |CMA_0461 - Review administrator assignments weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0461.json) | +|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review 
audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) | +|[Review cloud identity report overview](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8aec4343-9153-9641-172c-defb201f56b3) |CMA_0468 - Review cloud identity report overview |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0468.json) | +|[Review controlled folder access events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff48b60c6-4b37-332f-7288-b6ea50d300eb) |CMA_0471 - Review controlled folder access events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0471.json) | +|[Review exploit protection events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa30bd8e9-7064-312a-0e1f-e1b485d59f6e) |CMA_0472 - Review exploit protection events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0472.json) | +|[Review file and folder activity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef718fe4-7ceb-9ddf-3198-0ee8f6fe9cba) |CMA_0473 - Review file and folder activity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0473.json) | +|[Review role group changes 
weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F70fe686f-1f91-7dab-11bf-bca4201e183b) |CMA_0476 - Review role group changes weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0476.json) | +|[Saved-queries in Azure Monitor should be saved in customer storage account for logs encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffa298e57-9444-42ba-bf04-86e8470e32c7) |Link a storage account to your Log Analytics workspace to protect saved queries with storage account encryption. Customer-managed keys are commonly required to meet regulatory compliance and to gain more control over access to your saved queries in Azure Monitor. For more details, see [https://docs.microsoft.com/azure/azure-monitor/platform/customer-managed-keys?tabs=portal#customer-managed-key-for-saved-queries](../../../azure-monitor/platform/customer-managed-keys.md#customer-managed-key-for-saved-queries). |audit, Audit, deny, Deny, disabled, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/LogAnalyticsWorkspaces_CMKBYOSQueryEnabled_Deny.json) | +|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits whether the storage account containing the container with activity logs is encrypted with BYOK. By design, the policy works only if the storage account is in the same subscription as the activity logs. 
More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) | +|[The Log Analytics extension should be installed on Virtual Machine Scale Sets](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefbde977-ba53-4479-b8e9-10b957924fbf) |This policy audits any Windows/Linux Virtual Machine Scale Sets if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VMSS_LogAnalyticsAgent_AuditIfNotExists.json) | +|[Virtual machines should have the Log Analytics extension installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa70ca396-0a34-413a-88e1-b956c1e683be) |This policy audits any Windows/Linux virtual machines if the Log Analytics extension is not installed. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/VirtualMachines_LogAnalyticsAgent_AuditIfNotExists.json) | ++### Detect and contain anomalous network activity into and within the local or remote SWIFT environment. 
++**ID**: SWIFT CSCF v2022 6.5A +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[\[Preview\]: Network traffic data collection agent should be installed on Linux virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F04c4380f-3fae-46e8-96c9-30193528f602) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. |AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Linux.json) | +|[\[Preview\]: Network traffic data collection agent should be installed on Windows virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f2ee1de-44aa-4762-b6bd-0893fc3f306d) |Security Center uses the Microsoft Dependency agent to collect network traffic data from your Azure virtual machines to enable advanced network protection features such as traffic visualization on the network map, network hardening recommendations and specific network threats. 
|AuditIfNotExists, Disabled |[1.0.2-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ASC_Dependency_Agent_Audit_Windows.json) | +|[Adaptive network hardening recommendations should be applied on internet facing virtual machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F08e6af2d-db70-460a-bfe9-d5bd474ba9d6) |Azure Security Center analyzes the traffic patterns of Internet facing virtual machines and provides Network Security Group rule recommendations that reduce the potential attack surface |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_AdaptiveNetworkHardenings_Audit.json) | +|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) | +|[Authorize, monitor, and control voip](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe4e1f896-8a93-1151-43c7-0ad23b081ee2) |CMA_0025 - Authorize, monitor, and control voip |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0025.json) | +|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of 
the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) | +|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) | +|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. 
|AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) | +|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) | +|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) | +|[Document security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c6bee3a-2180-2430-440d-db3c7a849870) |CMA_0202 - Document security operations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0202.json) | +|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | +|[Manage 
gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) | +|[Microsoft Defender for Storage (Classic) should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Microsoft Defender for Storage (Classic) provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.4](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) | +|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario-level monitoring enables you to diagnose problems with an end-to-end network-level view. A network watcher resource group must be created in every region where a virtual network is present. An alert is enabled if a network watcher resource group is not available in a particular region. 
|AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) | +|[Route traffic through managed network access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) | +|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) | +|[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) | ++## 7. Plan for Incident Response and Information Sharing ++### Ensure a consistent and effective approach for the management of cyber incidents. 
++**ID**: SWIFT CSCF v2022 7.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Address information security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F56fb5173-3865-5a5d-5fad-ae33e53e1577) |CMA_C1742 - Address information security issues |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1742.json) | +|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) | +|[Email notification to subscription owner for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b15565f-aa9e-48ba-8619-45960f2c314d) |To ensure your subscription owners are notified when there is a potential security breach in their subscription, set email notifications to subscription owners for high severity alerts in Security Center. 
|AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification_to_subscription_owner.json) | +|[Identify classes of Incidents and Actions taken](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F23d1a569-2d1e-7f43-9e22-1f94115b7dd5) |CMA_C1365 - Identify classes of Incidents and Actions taken |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1365.json) | +|[Incorporate simulated events into incident response training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fdeb7c4-4c93-8271-a135-17ebe85f1cc7) |CMA_C1356 - Incorporate simulated events into incident response training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1356.json) | +|[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) | +|[Review and update incident response policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb28c8687-4bbd-8614-0b96-cdffa1ac6d9c) |CMA_C1352 - Review and update incident response policies and procedures |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1352.json) | +|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) | ++### Ensure all staff are aware of and fulfil their security responsibilities by performing regular awareness activities, and maintain security knowledge of staff with privileged access. ++**ID**: SWIFT CSCF v2022 7.2 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Document security and privacy training activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F524e7136-9f6a-75ba-9089-501018151346) |CMA_0198 - Document security and privacy training activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0198.json) | +|[Provide periodic role-based security training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ac8621d-9acd-55bf-9f99-ee4212cc3d85) |CMA_C1095 - Provide periodic role-based security training |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1095.json) | +|[Provide periodic security awareness training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F516be556-1353-080d-2c2f-f46f000d5785) |CMA_C1091 - Provide periodic security awareness training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1091.json) | +|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) | +|[Provide role-based practical exercises](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd041726f-00e0-41ca-368c-b1a122066482) |CMA_C1096 - Provide role-based practical exercises |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1096.json) | +|[Provide role-based security training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4c385143-09fd-3a34-790c-a5fd9ec77ddc) |CMA_C1094 - Provide role-based security training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1094.json) | +|[Provide role-based training on suspicious 
activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff6794ab8-9a7d-3b24-76ab-265d3646232b) |CMA_C1097 - Provide role-based training on suspicious activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1097.json) | +|[Provide security awareness training for insider threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9b8b05ec-3d21-215e-5d98-0f7cf0998202) |CMA_0417 - Provide security awareness training for insider threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0417.json) | +|[Provide security training before providing access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b05dca2-25ec-9335-495c-29155f785082) |CMA_0418 - Provide security training before providing access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0418.json) | +|[Provide security training for new users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1cb7bf71-841c-4741-438a-67c65fdd7194) |CMA_0419 - Provide security training for new users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0419.json) | +|[Provide updated security awareness 
training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd136ae80-54dd-321c-98b4-17acf4af2169) |CMA_C1090 - Provide updated security awareness training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1090.json) | ++### Validate the operational security configuration and identify security gaps by performing penetration testing. ++**ID**: SWIFT CSCF v2022 7.3A +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Employ independent team for penetration testing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F611ebc63-8600-50b6-a0e3-fef272457132) |CMA_C1171 - Employ independent team for penetration testing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1171.json) | +|[Require developers to build security architecture](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff131c8c5-a54a-4888-1efc-158928924bc1) |CMA_C1612 - Require developers to build security architecture |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1612.json) | ++### Evaluate the risk and readiness of the organisation based on plausible cyber-attack scenarios. 
++**ID**: SWIFT CSCF v2022 7.4A +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Conduct Risk Assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F677e1da4-00c3-287a-563d-f4a1cf9b99a0) |CMA_C1543 - Conduct Risk Assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1543.json) | +|[Conduct risk assessment and distribute its results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd7c1ecc3-2980-a079-1569-91aec8ac4a77) |CMA_C1544 - Conduct risk assessment and distribute its results |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1544.json) | +|[Conduct risk assessment and document its results](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1dbd51c2-2bd1-5e26-75ba-ed075d8f0d68) |CMA_C1542 - Conduct risk assessment and document its results |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1542.json) | +|[Establish a risk management strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd36700f2-2f0d-7c2a-059c-bdadd1d79f70) |CMA_0258 - Establish a risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0258.json) | +|[Implement the risk management 
strategy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc6fe3856-4635-36b6-983c-070da12a953b) |CMA_C1744 - Implement the risk management strategy |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1744.json) | +|[Perform a risk assessment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c5d3d8d-5cba-0def-257c-5ab9ea9644dc) |CMA_0388 - Perform a risk assessment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0388.json) | +|[Review and update risk assessment policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F20012034-96f0-85c2-4a86-1ae1eb457802) |CMA_C1537 - Review and update risk assessment policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1537.json) | ++## 8. 
Set and Monitor Performance ++### Ensure availability by formally setting and monitoring the objectives to be achieved ++**ID**: SWIFT CSCF v2022 8.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Coordinate contingency plans with related plans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5784049-959f-6067-420c-f4cefae93076) |CMA_0086 - Coordinate contingency plans with related plans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0086.json) | +|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) | +|[Obtain legal opinion for monitoring system activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9af7f88-686a-5a8b-704b-eafdab278977) |CMA_C1688 - Obtain legal opinion for monitoring system activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1688.json) | +|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | +|[Plan for continuance of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9edcea6-6cb8-0266-a48c-2061fbac4310) |CMA_C1255 - Plan for continuance of essential business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1255.json) | +|[Plan for resumption of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ded6497-815d-6506-242b-e043e0273928) |CMA_C1253 - Plan for resumption of essential business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1253.json) | +|[Provide monitoring information as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fc1f0da-0050-19bb-3d75-81ae15940df6) |CMA_C1689 - Provide monitoring information as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1689.json) | +|[Resume all mission and business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a54089-2d69-0f56-62dc-b6371a1671c0) |CMA_C1254 - Resume all mission and business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1254.json) | ++### Ensure availability, capacity, and quality of 
services to customers ++**ID**: SWIFT CSCF v2022 8.4 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Conduct capacity planning](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33602e78-35e3-4f06-17fb-13dd887448e4) |CMA_C1252 - Conduct capacity planning |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1252.json) | +|[Coordinate contingency plans with related plans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5784049-959f-6067-420c-f4cefae93076) |CMA_0086 - Coordinate contingency plans with related plans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0086.json) | +|[Create alternative actions for identified anomalies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcc2f7339-2fac-1ea9-9ca3-cd530fbb0da2) |CMA_C1711 - Create alternative actions for identified anomalies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1711.json) | +|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) | +|[Notify personnel of any failed security verification 
tests](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F18e9d748-73d4-0c96-55ab-b108bfbd5bc3) |CMA_C1710 - Notify personnel of any failed security verification tests |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1710.json) | +|[Perform security function verification at a defined frequency](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff30edfad-4e1d-1eef-27ee-9292d6d89842) |CMA_C1709 - Perform security function verification at a defined frequency |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1709.json) | +|[Plan for continuance of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9edcea6-6cb8-0266-a48c-2061fbac4310) |CMA_C1255 - Plan for continuance of essential business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1255.json) | ++### Ensure early availability of SWIFTNet releases and of the FIN standards for proper testing by the customer before going live. 
++**ID**: SWIFT CSCF v2022 8.5 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Address coding vulnerabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F318b2bd9-9c39-9f8b-46a7-048401f33476) |CMA_0003 - Address coding vulnerabilities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0003.json) | +|[Develop and document application security requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6de65dc4-8b4f-34b7-9290-eb137a2e2929) |CMA_0148 - Develop and document application security requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0148.json) | +|[Document the information system environment in acquisition contracts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc148208b-1a6f-a4ac-7abc-23b1d41121b1) |CMA_0205 - Document the information system environment in acquisition contracts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0205.json) | +|[Establish a secure software development program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe750ca06-1824-464a-2cf3-d0fa754d1cb4) |CMA_0259 - Establish a secure software development program |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0259.json) | +|[Perform 
vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) | +|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) | +|[Require developers to document approved changes and potential impact](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3a868d0c-538f-968b-0191-bddb44da5b75) |CMA_C1597 - Require developers to document approved changes and potential impact |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1597.json) | +|[Require developers to implement only approved changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F085467a6-9679-5c65-584a-f55acefd0d43) |CMA_C1596 - Require developers to implement only approved changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1596.json) | +|[Require developers to manage change 
integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb33d61c1-7463-7025-0ec0-a47585b59147) |CMA_C1595 - Require developers to manage change integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1595.json) | +|[Require developers to produce evidence of security assessment plan execution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8a63511-66f1-503f-196d-d6217ee0823a) |CMA_C1602 - Require developers to produce evidence of security assessment plan execution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1602.json) | +|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) | ++## 9. Ensure Availability through Resilience ++### Providers must ensure that the service remains available for customers in the event of a local disturbance or malfunction. 
++**ID**: SWIFT CSCF v2022 9.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Conduct incident response testing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3545c827-26ee-282d-4629-23952a12008b) |CMA_0060 - Conduct incident response testing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0060.json) | +|[Coordinate contingency plans with related plans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5784049-959f-6067-420c-f4cefae93076) |CMA_0086 - Coordinate contingency plans with related plans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0086.json) | +|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) | +|[Develop contingency planning policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F75b42dcf-7840-1271-260b-852273d7906e) |CMA_0156 - Develop contingency planning policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0156.json) | +|[Distribute policies and 
procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feff6e4a5-3efe-94dd-2ed1-25d56a019a82) |CMA_0185 - Distribute policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0185.json) | +|[Establish an information security program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F84245967-7882-54f6-2d34-85059f725b47) |CMA_0263 - Establish an information security program |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0263.json) | +|[Provide contingency training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde936662-13dc-204c-75ec-1af80f994088) |CMA_0412 - Provide contingency training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0412.json) | +|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) | ++### Providers must ensure that the service remains available for customers in the event of a site disaster. 
++**ID**: SWIFT CSCF v2022 9.2 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Conduct backup of information system documentation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb269a749-705e-8bff-055a-147744675cdf) |CMA_C1289 - Conduct backup of information system documentation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1289.json) | +|[Create separate alternate and primary storage sites](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F81b6267b-97a7-9aa5-51ee-d2584a160424) |CMA_C1269 - Create separate alternate and primary storage sites |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1269.json) | +|[Ensure alternate storage site safeguards are equivalent to primary site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F178c8b7e-1b6e-4289-44dd-2f1526b678a1) |CMA_C1268 - Ensure alternate storage site safeguards are equivalent to primary site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1268.json) | +|[Establish alternate storage site that facilitates recovery operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F245fe58b-96f8-9f1e-48c5-7f49903f66fd) |CMA_C1270 - Establish alternate storage site that facilitates recovery operations |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1270.json) | +|[Establish alternate storage site to store and retrieve backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a412110-3874-9f22-187a-c7a81c8a6704) |CMA_C1267 - Establish alternate storage site to store and retrieve backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1267.json) | +|[Establish an alternate processing site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf5ff768-a34b-720e-1224-e6b3214f3ba6) |CMA_0262 - Establish an alternate processing site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0262.json) | +|[Establish requirements for internet service providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f2e834d-7e40-a4d5-a216-e49b16955ccf) |CMA_0278 - Establish requirements for internet service providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0278.json) | +|[Identify and mitigate potential issues at alternate storage site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F13939f8c-4cd5-a6db-9af4-9dfec35e3722) |CMA_C1271 - Identify and mitigate potential issues at alternate storage site |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1271.json) | +|[Prepare alternate processing site for use as operational site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0f31d98d-5ce2-705b-4aa5-b4f6705110dd) |CMA_C1278 - Prepare alternate processing site for use as operational site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1278.json) | +|[Recover and reconstitute resources after any disruption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff33c3238-11d2-508c-877c-4262ec1132e1) |CMA_C1295 - Recover and reconstitute resources after any disruption |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1295.json) | +|[Restore resources to operational state](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff801d58e-5659-9a4a-6e8d-02c9334732e5) |CMA_C1297 - Restore resources to operational state |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1297.json) | +|[Separately store backup information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffc26e2fd-3149-74b4-5988-d64bb90f8ef7) |CMA_C1293 - Separately store backup information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1293.json) | +|[Transfer backup information to an alternate 
storage site](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7bdb79ea-16b8-453e-4ca4-ad5b16012414) |CMA_C1294 - Transfer backup information to an alternate storage site |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1294.json) | ++### Service bureaux must ensure that the service remains available for their customers in the event of a disturbance, a hazard, or an incident. ++**ID**: SWIFT CSCF v2022 9.3 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Develop and document a business continuity and disaster recovery plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd6cbcba-4a2d-507c-53e3-296b5c238a8e) |CMA_0146 - Develop and document a business continuity and disaster recovery plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0146.json) | +|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) | +|[Employ automatic emergency lighting](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa892c0d-2c40-200c-0dd8-eac8c4748ede) |CMA_0209 - Employ automatic emergency lighting |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0209.json) | +|[Implement a penetration testing methodology](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc2eabc28-1e5c-78a2-a712-7cc176c44c07) |CMA_0306 - Implement a penetration testing methodology |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0306.json) | +|[Implement physical security for offices, working areas, and secure areas](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F05ec66a2-137c-14b8-8e75-3d7a2bef07f8) |CMA_0323 - Implement physical security for offices, working areas, and secure areas |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0323.json) | +|[Review and update physical and environmental policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91cf132e-0c9f-37a8-a523-dc6a92cd2fb2) |CMA_C1446 - Review and update physical and environmental policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1446.json) | +|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) | ++### Providers' 
availability and quality of service is ensured through usage of the recommended SWIFT connectivity packs and the appropriate line bandwidth ++**ID**: SWIFT CSCF v2022 9.4 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Authorize, monitor, and control voip](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe4e1f896-8a93-1151-43c7-0ad23b081ee2) |CMA_0025 - Authorize, monitor, and control voip |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0025.json) | +|[Conduct capacity planning](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33602e78-35e3-4f06-17fb-13dd887448e4) |CMA_C1252 - Conduct capacity planning |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1252.json) | +|[Implement system boundary protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F01ae60e2-38bb-0a32-7b20-d3a091423409) |CMA_0328 - Implement system boundary protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0328.json) | +|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) | +|[Route traffic through managed network 
access points](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbab9ef1d-a16d-421a-822d-3fa94e808156) |CMA_0484 - Route traffic through managed network access points |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0484.json) | ++## 10. Be Ready in case of Major Disaster ++### Business continuity is ensured through a documented plan communicated to the potentially affected parties (service bureau and customers). ++**ID**: SWIFT CSCF v2022 10.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Coordinate contingency plans with related plans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5784049-959f-6067-420c-f4cefae93076) |CMA_0086 - Coordinate contingency plans with related plans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0086.json) | +|[Develop contingency plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa305b4d-8c84-1754-0c74-dec004e66be0) |CMA_C1244 - Develop contingency plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1244.json) | +|[Plan for continuance of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9edcea6-6cb8-0266-a48c-2061fbac4310) |CMA_C1255 - Plan for continuance of essential business functions |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1255.json) | +|[Plan for resumption of essential business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7ded6497-815d-6506-242b-e043e0273928) |CMA_C1253 - Plan for resumption of essential business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1253.json) | +|[Resume all mission and business functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a54089-2d69-0f56-62dc-b6371a1671c0) |CMA_C1254 - Resume all mission and business functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1254.json) | ++## 11. Monitor in case of Major Disaster ++### Ensure a consistent and effective approach for the event monitoring and escalation. 
++**ID**: SWIFT CSCF v2022 11.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Document security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c6bee3a-2180-2430-440d-db3c7a849870) |CMA_0202 - Document security operations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0202.json) | +|[Obtain legal opinion for monitoring system activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9af7f88-686a-5a8b-704b-eafdab278977) |CMA_C1688 - Obtain legal opinion for monitoring system activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1688.json) | +|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | +|[Provide monitoring information as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fc1f0da-0050-19bb-3d75-81ae15940df6) |CMA_C1689 - Provide monitoring information as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1689.json) | +|[Turn on sensors for endpoint security 
solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) | ++### Ensure a consistent and effective approach for the management of incidents (Problem Management). ++**ID**: SWIFT CSCF v2022 11.2 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Assess information security events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37b0045b-3887-367b-8b4d-b9a6fa911bb9) |CMA_0013 - Assess information security events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0013.json) | +|[Conduct incident response testing](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3545c827-26ee-282d-4629-23952a12008b) |CMA_0060 - Conduct incident response testing |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0060.json) | +|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) | +|[Develop security 
safeguards](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F423f6d9c-0c73-9cc6-64f4-b52242490368) |CMA_0161 - Develop security safeguards |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0161.json) | +|[Document security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c6bee3a-2180-2430-440d-db3c7a849870) |CMA_0202 - Document security operations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0202.json) | +|[Enable network protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c255136-994b-9616-79f5-ae87810e0dcf) |CMA_0238 - Enable network protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0238.json) | +|[Eradicate contaminated information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F54a9c072-4a93-2a03-6a43-a060d30383d7) |CMA_0253 - Eradicate contaminated information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0253.json) | +|[Establish an information security program](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F84245967-7882-54f6-2d34-85059f725b47) |CMA_0263 - Establish an information security program |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0263.json) | +|[Execute actions in response to information spills](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba78efc6-795c-64f4-7a02-91effbd34af9) |CMA_0281 - Execute actions in response to information spills |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0281.json) | +|[Identify classes of Incidents and Actions taken](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F23d1a569-2d1e-7f43-9e22-1f94115b7dd5) |CMA_C1365 - Identify classes of Incidents and Actions taken |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1365.json) | +|[Implement incident handling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F433de59e-7a53-a766-02c2-f80f8421469a) |CMA_0318 - Implement incident handling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0318.json) | +|[Incorporate simulated events into incident response training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1fdeb7c4-4c93-8271-a135-17ebe85f1cc7) |CMA_C1356 - Incorporate simulated events into incident response training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1356.json) | +|[Maintain data breach 
records](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0fd1ca29-677b-2f12-1879-639716459160) |CMA_0351 - Maintain data breach records |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0351.json) | +|[Maintain incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F37546841-8ea1-5be0-214d-8ac599588332) |CMA_0352 - Maintain incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0352.json) | +|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | +|[Protect incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2401b496-7f23-79b2-9f80-89bb5abf3d4a) |CMA_0405 - Protect incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0405.json) | +|[Provide information spillage training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2d4d0e90-32d9-4deb-2166-a00d51ed57c0) |CMA_0413 - Provide information spillage training |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0413.json) | +|[Review and update incident response policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb28c8687-4bbd-8614-0b96-cdffa1ac6d9c) |CMA_C1352 - Review and update incident response policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1352.json) | +|[Run simulation attacks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa8f9c283-9a66-3eb3-9e10-bdba95b85884) |CMA_0486 - Run simulation attacks |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0486.json) | +|[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) | ++### Ensure an adequate escalation of operational malfunctions in case of customer impact. 
++**ID**: SWIFT CSCF v2022 11.4 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Automate process to document implemented changes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F43ac3ccb-4ef6-7d63-9a3f-6848485ba4e8) |CMA_C1195 - Automate process to document implemented changes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1195.json) | +|[Automate process to highlight unreviewed change proposals](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F92b49e92-570f-1765-804a-378e6c592e28) |CMA_C1193 - Automate process to highlight unreviewed change proposals |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1193.json) | +|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) | +|[Document security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c6bee3a-2180-2430-440d-db3c7a849870) |CMA_0202 - Document security operations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0202.json) | +|[Enable network 
protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c255136-994b-9616-79f5-ae87810e0dcf) |CMA_0238 - Enable network protection |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0238.json) | +|[Eradicate contaminated information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F54a9c072-4a93-2a03-6a43-a060d30383d7) |CMA_0253 - Eradicate contaminated information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0253.json) | +|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) | +|[Establish configuration management requirements for developers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8747b573-8294-86a0-8914-49e9b06a5ace) |CMA_0270 - Establish configuration management requirements for developers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0270.json) | +|[Establish relationship between incident response capability and external 
providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb470a37a-7a47-3792-34dd-7a793140702e) |CMA_C1376 - Establish relationship between incident response capability and external providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1376.json) | +|[Execute actions in response to information spills](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba78efc6-795c-64f4-7a02-91effbd34af9) |CMA_0281 - Execute actions in response to information spills |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0281.json) | +|[Implement incident handling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F433de59e-7a53-a766-02c2-f80f8421469a) |CMA_0318 - Implement incident handling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0318.json) | +|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | +|[Perform audit for configuration change control](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1282809c-9001-176b-4a81-260a085f4872) |CMA_0390 - Perform 
audit for configuration change control |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0390.json) | +|[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) | ++### Effective support is offered to customers in case they face problems during their business hours. ++**ID**: SWIFT CSCF v2022 11.5 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) | +|[Document security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c6bee3a-2180-2430-440d-db3c7a849870) |CMA_0202 - Document security operations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0202.json) | +|[Enable network protection](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c255136-994b-9616-79f5-ae87810e0dcf) |CMA_0238 - Enable network protection |Manual, Disabled 
|[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0238.json) | +|[Eradicate contaminated information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F54a9c072-4a93-2a03-6a43-a060d30383d7) |CMA_0253 - Eradicate contaminated information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0253.json) | +|[Establish relationship between incident response capability and external providers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb470a37a-7a47-3792-34dd-7a793140702e) |CMA_C1376 - Establish relationship between incident response capability and external providers |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1376.json) | +|[Execute actions in response to information spills](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fba78efc6-795c-64f4-7a02-91effbd34af9) |CMA_0281 - Execute actions in response to information spills |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0281.json) | +|[Identify incident response personnel](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F037c0089-6606-2dab-49ad-437005b5035f) |CMA_0301 - Identify incident response personnel |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0301.json) | +|[Implement incident 
handling](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F433de59e-7a53-a766-02c2-f80f8421469a) |CMA_0318 - Implement incident handling |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0318.json) | +|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) | +|[View and investigate restricted users](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98145a9b-428a-7e81-9d14-ebb154a24f93) |CMA_0545 - View and investigate restricted users |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0545.json) | ++## 12. Ensure Knowledge is Available ++### Ensure quality of service to customers through SWIFT certified employees. 
++**ID**: SWIFT CSCF v2022 12.1 +**Ownership**: Shared ++|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> | +||||| +|[Provide periodic role-based security training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9ac8621d-9acd-55bf-9f99-ee4212cc3d85) |CMA_C1095 - Provide periodic role-based security training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1095.json) | +|[Provide role-based security training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4c385143-09fd-3a34-790c-a5fd9ec77ddc) |CMA_C1094 - Provide role-based security training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1094.json) | +|[Provide security training before providing access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b05dca2-25ec-9335-495c-29155f785082) |CMA_0418 - Provide security training before providing access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0418.json) | ++## Next steps ++Additional articles about Azure Policy: ++- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview. +- See the [initiative definition structure](../concepts/initiative-definition-structure.md). +- Review other examples at [Azure Policy samples](./index.md). +- Review [Understanding policy effects](../concepts/effects.md). +- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md). |
governance | Ukofficial Uknhs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md | Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
hdinsight | Ambari Web Ui Auto Logout | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/ambari-web-ui-auto-logout.md | description: Disable auto logout from Ambari Web UI. Previously updated : 10/30/2022 Last updated : 11/21/2023 # Disable auto logout from Ambari Web UI To disable the auto logout feature, **Next steps** -* [Optimize clusters with Apache Ambari in Azure HDInsight](./hdinsight-changing-configs-via-ambari.md) +* [Optimize clusters with Apache Ambari in Azure HDInsight](./hdinsight-changing-configs-via-ambari.md) |
hdinsight | Apache Hadoop Mahout Linux Mac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hadoop/apache-hadoop-mahout-linux-mac.md | description: Learn how to use the Apache Mahout machine learning library to gene Previously updated : 10/25/2022 Last updated : 11/21/2023 # Generate recommendations using Apache Mahout in Azure HDInsight One of the functions that is provided by Mahout is a recommendation engine. This The following workflow is a simplified example that uses movie data: -* **Co-occurrence**: Joe, Alice, and Bob all liked *Star Wars*, *The Empire Strikes Back*, and *Return of the Jedi*. Mahout determines that users who like any one of these movies also like the other two. +* **Co-occurrence**: Joe, Alice, and Bob all liked *Star Wars*, *The Empire Strikes Back*, and *Return of the `Jedi`*. Mahout determines that users who like any one of these movies also like the other two. * **Co-occurrence**: Bob and Alice also liked *The Phantom Menace*, *Attack of the Clones*, and *Revenge of the Sith*. Mahout determines that users who liked the previous three movies also like these three movies. The data contained in `user-ratings.txt` has a structure of `userID`, `movieID`, ## Run the analysis -1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the command below by replacing CLUSTERNAME with the name of your cluster, and then enter the command: +1. Use [ssh command](../hdinsight-hadoop-linux-use-ssh-unix.md) to connect to your cluster. Edit the following command by replacing CLUSTERNAME with the name of your cluster, and then enter the command: ```cmd ssh sshuser@CLUSTERNAME-ssh.azurehdinsight.net |
hdinsight | Hbase Troubleshoot Rest Not Spending | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hbase/hbase-troubleshoot-rest-not-spending.md | Title: Apache HBase REST not responding to requests in Azure HDInsight description: Resolve issue with Apache HBase REST not responding to requests in Azure HDInsight. Previously updated : 10/10/2022 Last updated : 11/21/2023 # Scenario: Apache HBase REST not responding to requests in Azure HDInsight |
hdinsight | Hdinsight Hadoop Create Linux Clusters Azure Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-azure-cli.md | description: Learn how to create Azure HDInsight clusters using the cross-platfo Previously updated : 10/19/2022 Last updated : 11/21/2023 # Create HDInsight clusters using the Azure CLI The steps in this document walk-through creating a HDInsight 4.0 cluster using t ## Create a cluster -1. Log in to your Azure subscription. If you plan to use Azure Cloud Shell, then select **Try it** in the upper-right corner of the code block. Else, enter the command below: +1. Log in to your Azure subscription. If you plan to use Azure Cloud Shell, then select **Try it** in the upper-right corner of the code block. Else, enter the following command: ```azurecli-interactive az login The steps in this document walk-through creating a HDInsight 4.0 cluster using t # az account set --subscription "SUBSCRIPTIONID" ``` -2. Set environment variables. The use of variables in this article is based on Bash. Slight variations will be needed for other environments. See [az-hdinsight-create](/cli/azure/hdinsight#az-hdinsight-create) for a complete list of possible parameters for cluster creation. +2. Set environment variables. The use of variables in this article is based on Bash. Slight variations are needed for other environments. See [az-hdinsight-create](/cli/azure/hdinsight#az-hdinsight-create) for a complete list of possible parameters for cluster creation. |Parameter | Description | ||| |`--workernode-count`| The number of worker nodes in the cluster. This article uses the variable `clusterSizeInNodes` as the value passed to `--workernode-count`. | |`--version`| The HDInsight cluster version. This article uses the variable `clusterVersion` as the value passed to `--version`. 
See also: [Supported HDInsight versions](./hdinsight-component-versioning.md#supported-hdinsight-versions).|- |`--type`| Type of HDInsight cluster, like: hadoop, interactivehive, hbase, kafka, spark, rserver, mlservices. This article uses the variable `clusterType` as the value passed to `--type`. See also: [Cluster types and configuration](./hdinsight-hadoop-provision-linux-clusters.md#cluster-type).| + |`--type`| Type of HDInsight cluster, like: hadoop, interactive hive, hbase, kafka, spark, `rserver`, `mlservices`. This article uses the variable `clusterType` as the value passed to `--type`. See also: [Cluster types and configuration](./hdinsight-hadoop-provision-linux-clusters.md#cluster-type).| |`--component-version`|The versions of various Hadoop components, in space-separated versions in 'component=version' format. This article uses the variable `componentVersion` as the value passed to `--component-version`. See also: [Hadoop components](./hdinsight-component-versioning.md).| Replace `RESOURCEGROUPNAME`, `LOCATION`, `CLUSTERNAME`, `STORAGEACCOUNTNAME`, and `PASSWORD` with the desired values. Change values for the other variables as desired. Then enter the CLI commands. The steps in this document walk-through creating a HDInsight 4.0 cluster using t export componentVersion=Hadoop=3.1 ``` -3. [Create the resource group](/cli/azure/group#az-group-create) by entering the command below: +3. [Create the resource group](/cli/azure/group#az-group-create) by entering the following command: ```azurecli-interactive az group create \ The steps in this document walk-through creating a HDInsight 4.0 cluster using t For a list of valid locations, use the `az account list-locations` command, and then use one of the locations from the `name` value. -4. [Create an Azure Storage account](/cli/azure/storage/account#az-storage-account-create) by entering the command below: +4. 
[Create an Azure Storage account](/cli/azure/storage/account#az-storage-account-create) by entering the following command: ```azurecli-interactive # Note: kind BlobStorage is not available as the default storage account. The steps in this document walk-through creating a HDInsight 4.0 cluster using t --sku Standard_LRS ``` -5. [Extract the primary key from the Azure Storage account](/cli/azure/storage/account/keys#az-storage-account-keys-list) and store it in a variable by entering the command below: +5. [Extract the primary key from the Azure Storage account](/cli/azure/storage/account/keys#az-storage-account-keys-list) and store it in a variable by entering the following command: ```azurecli-interactive export AZURE_STORAGE_KEY=$(az storage account keys list \ The steps in this document walk-through creating a HDInsight 4.0 cluster using t --query [0].value -o tsv) ``` -6. [Create an Azure Storage container](/cli/azure/storage/container#az-storage-container-create) by entering the command below: +6. [Create an Azure Storage container](/cli/azure/storage/container#az-storage-container-create) by entering the following command: ```azurecli-interactive az storage container create \ |
hdinsight | Hdinsight Hadoop Create Linux Clusters Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-hadoop-create-linux-clusters-portal.md | description: Learn to create Apache Hadoop, Apache HBase, and Apache Spark clust Previously updated : 10/20/2022 Last updated : 11/21/2023 # Create Linux-based clusters in HDInsight by using the Azure portal |
hdinsight | Apache Kafka Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/kafka/apache-kafka-introduction.md | description: 'Learn about Apache Kafka on HDInsight: What it is, what it does, a Previously updated : 10/17/2022 Last updated : 11/21/2023 #Customer intent: As a developer, I want to understand how Kafka on HDInsight is different from Kafka on other platforms. |
hdinsight | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/policy-reference.md | Title: Built-in policy definitions for Azure HDInsight description: Lists Azure Policy built-in policy definitions for Azure HDInsight. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
healthcare-apis | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/policy-reference.md | Title: Built-in policy definitions for Azure API for FHIR description: Lists Azure Policy built-in policy definitions for Azure API for FHIR. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
healthcare-apis | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure API for FHIR description: Lists Azure Policy Regulatory Compliance controls available for Azure API for FHIR. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
healthcare-apis | Deploy Dicom Services In Azure Data Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure-data-lake.md | + + Title: Deploy the DICOM service with Azure Data Lake Storage +description: Learn how to deploy the DICOM service and store all your DICOM data in its native format with a data lake in Azure Health Data Services. ++++ Last updated : 11/21/2023+++++# Deploy the DICOM service with Data Lake Storage (Preview) ++Deploying the [DICOM® service with Azure Data Lake Storage](dicom-data-lake.md) enables organizations to store and process imaging data in a standardized, secure, and scalable way. ++After deployment completes, you can use the Azure portal to see the details about the DICOM service, including the service URL. The service URL to access your DICOM service is ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the API version as part of the URL when you make requests. For more information, see [API versioning for the DICOM service](api-versioning-dicom-service.md). ++## Prerequisites ++- **Deploy an Azure Health Data Services workspace**. For more information, see [Deploy a workspace in the Azure portal](../healthcare-apis-quickstart.md). +- **Create a storage account with a hierarchical namespace**. For more information, see [Create a storage account to use with Azure Data Lake Storage Gen2](/azure/storage/blobs/create-data-lake-storage-account). +- **Create a blob container in the storage account**. The container is used by the DICOM service to store DICOM files. For more information, see [Manage blob containers using the Azure portal](/azure/storage/blobs/blob-containers-portal). ++> [!NOTE] +> The Azure Data Lake Storage option is only available for new instances of the DICOM service. After the option becomes generally available, we plan to offer a migration path for existing DICOM service instances. 
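The versioned service URL pattern described above can be sketched as follows. The workspace and service names (`contoso-ws`, `contoso-dicom`) and the `v1` version segment are illustrative assumptions; check the API versioning article for the versions your instance supports.

```python
# Sketch: building a versioned DICOMweb request URL for the DICOM service.
# Workspace/service names and the "v1" segment are hypothetical examples.

def dicom_service_url(workspace: str, service: str, version: str = "v1") -> str:
    """Return the base URL for DICOMweb requests against a given API version."""
    return f"https://{workspace}-{service}.dicom.azurehealthcareapis.com/{version}"

base = dicom_service_url("contoso-ws", "contoso-dicom")
studies_endpoint = f"{base}/studies"  # studies resource under the chosen version
```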
++## Deploy the DICOM service with Data Lake Storage using the Azure portal ++1. On the **Resource group** page of the Azure portal, select the name of the **Azure Health Data Services workspace**. ++ :::image type="content" source="media/deploy-data-lake/resource-group.png" alt-text="Screenshot showing a Health Data Services Workspace in the resource group view in the Azure portal." lightbox="media/deploy-data-lake/resource-group.png"::: ++1. Select **Deploy DICOM service**. ++ :::image type="content" source="media/deploy-data-lake/workspace-deploy-dicom.png" alt-text="Screenshot showing the Deploy DICOM service button in the workspace view in the Azure portal." lightbox="media/deploy-data-lake/workspace-deploy-dicom.png"::: ++1. Select **Add DICOM service**. ++ :::image type="content" source="media/deploy-data-lake/add-dicom-service.png" alt-text="Screenshot showing the Add DICOM Service button in the Azure portal." lightbox="media/deploy-data-lake/add-dicom-service.png"::: ++1. Enter a name for the DICOM service. ++1. Select **External (preview)** for the Storage Location. ++ :::image type="content" source="media/deploy-data-lake/dicom-deploy-options.png" alt-text="Screenshot showing the options in the Create DICOM service view." lightbox="media/deploy-data-lake/dicom-deploy-options.png"::: ++1. Select the **subscription** and **resource group** that contains the storage account. ++1. Select the **storage account** created in the prerequisites. ++1. Select the **storage container** created in the prerequisites. ++1. Select **Review + create** to deploy the DICOM service. ++1. When the system displays a green validation check mark, select **Create** to deploy the DICOM service. ++1. After the deployment process completes, select **Go to resource**. ++ :::image type="content" source="media/deploy-data-lake/dicom-deploy-complete.png" alt-text="Screenshot showing the completed deployment of the DICOM service." 
lightbox="media/deploy-data-lake/dicom-deploy-complete.png"::: ++ The DICOM service overview screen shows the new service and lists the storage account. ++ :::image type="content" source="media/deploy-data-lake/dicom-service-overview.png" alt-text="Screenshot that shows the DICOM service overview." lightbox="media/deploy-data-lake/dicom-service-overview.png"::: ++## Deploy the DICOM service with Data Lake Storage by using an ARM template ++Use the Azure portal to **Deploy a custom template** and then use the sample ARM template to deploy the DICOM service with Azure Data Lake Storage. For more information, see [Create and deploy ARM templates by using the Azure portal](../../azure-resource-manager/templates/quickstart-create-templates-use-the-portal.md). ++```json +{ + "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#", + "contentVersion": "1.0.0.0", + "parameters": { + "workspaceName": { + "type": "String" + }, + "dicomServiceName": { + "type": "String" + }, + "region": { + "defaultValue": "westus3", + "type": "String" + }, + "storageAccountName": { + "type": "String" + }, + "storageAccountSku": { + "defaultValue": "Standard_LRS", + "type": "String" + }, + "containerName": { + "type": "String" + } + }, + "variables": { + "managedIdentityName": "[concat(parameters('workspacename'), '-', parameters('dicomServiceName'))]", + "StorageBlobDataContributor": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', 'ba92f5b4-2d11-453d-a403-e96b0029c9fe')]" + }, + "resources": [ + { + "type": "Microsoft.Storage/storageAccounts", + "apiVersion": "2022-05-01", + "name": "[parameters('storageAccountName')]", + "location": "[parameters('region')]", + "sku": { + "name": "[parameters('storageAccountSku')]" + }, + "kind": "StorageV2", + "properties": { + "accessTier": "Hot", + "supportsHttpsTrafficOnly": true, + "isHnsEnabled": true, + "minimumTlsVersion": "TLS1_2", + "allowBlobPublicAccess": false, + "allowSharedKeyAccess": 
false, + "encryption": { + "keySource": "Microsoft.Storage", + "requireInfrastructureEncryption": true, + "services": { + "blob": { + "enabled": true + }, + "file": { + "enabled": true + }, + "queue": { + "enabled": true + } + } + } + } + }, + { + "type": "Microsoft.Storage/storageAccounts/blobServices/containers", + "apiVersion": "2022-05-01", + "name": "[format('{0}/default/{1}', parameters('storageAccountName'), parameters('containerName'))]", + "dependsOn": [ + "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]" + ] + }, + { + "type": "Microsoft.ManagedIdentity/userAssignedIdentities", + "apiVersion": "2023-01-31", + "name": "[variables('managedIdentityName')]", + "location": "[parameters('region')]" + }, + { + "type": "Microsoft.Authorization/roleAssignments", + "apiVersion": "2021-04-01-preview", + "name": "[guid(resourceGroup().id, parameters('storageAccountName'), variables('managedIdentityName'))]", + "location": "[parameters('region')]", + "dependsOn": [ + "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]", + "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('managedIdentityName'))]" + ], + "properties": { + "roleDefinitionId": "[variables('StorageBlobDataContributor')]", + "principalId": "[reference(resourceId('Microsoft.ManagedIdentity/userAssignedIdentities',variables('managedIdentityName'))).principalId]", + "principalType": "ServicePrincipal" + }, + "scope": "[concat('Microsoft.Storage/storageAccounts', '/', parameters('storageAccountName'))]" + }, + { + "type": "Microsoft.HealthcareApis/workspaces", + "apiVersion": "2023-02-28", + "name": "[parameters('workspaceName')]", + "location": "[parameters('region')]" + }, + { + "type": "Microsoft.HealthcareApis/workspaces/dicomservices", + "apiVersion": "2023-02-28", + "name": "[concat(parameters('workspaceName'), '/', parameters('dicomServiceName'))]", + "location": "[parameters('region')]", + "dependsOn": [ + 
"[resourceId('Microsoft.HealthcareApis/workspaces', parameters('workspaceName'))]", + "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('managedIdentityName'))]", + "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]" + ], + "identity": { + "type": "UserAssigned", + "userAssignedIdentities": { + "[resourceId('Microsoft.ManagedIdentity/userAssignedIdentities', variables('managedIdentityName'))]": {} + } + }, + "properties": { + "storageConfiguration": { + "accountName": "[parameters('storageAccountName')]", + "containerName": "[parameters('containerName')]" + } + } + } + ], + "outputs": { + "storageAccountResourceId": { + "type": "string", + "value": "[resourceId('Microsoft.Storage/storageAccounts', parameters('storageAccountName'))]" + }, + "containerName": { + "type": "string", + "value": "[parameters('containerName')]" + } + } +} +``` ++1. When prompted, select the values for the workspace name, DICOM service name, region, storage account name, storage account SKU, and container name. ++1. Select **Review + create** to deploy the DICOM service. ++## Next steps ++* [Assign roles for the DICOM service](../configure-azure-rbac.md#assign-roles-for-the-dicom-service) +* [Use DICOMweb Standard APIs with DICOM services](dicomweb-standard-apis-with-dicom-services.md) +* [Enable audit and diagnostic logging in the DICOM service](enable-diagnostic-logging.md) + |
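In the ARM template above, the `StorageBlobDataContributor` variable resolves via `subscriptionResourceId(...)` to a subscription-scoped role definition ID. A sketch of the resulting string, using a placeholder subscription GUID:

```python
# Sketch: the fully qualified role definition ID that the template's
# subscriptionResourceId(...) call expands to. The subscription GUID below is
# a placeholder; the role GUID is the built-in Storage Blob Data Contributor.
ROLE_GUID = "ba92f5b4-2d11-453d-a403-e96b0029c9fe"

def role_definition_id(subscription_id: str, role_guid: str = ROLE_GUID) -> str:
    return (f"/subscriptions/{subscription_id}"
            f"/providers/Microsoft.Authorization/roleDefinitions/{role_guid}")

rid = role_definition_id("00000000-0000-0000-0000-000000000000")
```

The role assignment resource then references this ID in its `roleDefinitionId` property, granting the user-assigned managed identity write access to the storage account.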
healthcare-apis | Deploy Dicom Services In Azure | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/deploy-dicom-services-in-azure.md | -After deployment is finished, you can use the Azure portal to go to the newly created DICOM service to see the details, including your service URL. The service URL to access your DICOM service is ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the version as part of the URL when you make requests. For more information, see [API versioning for the DICOM service](api-versioning-dicom-service.md). +After deployment completes, you can use the Azure portal to see the details about the DICOM service, including the service URL. The service URL to access your DICOM service is ```https://<workspacename-dicomservicename>.dicom.azurehealthcareapis.com```. Make sure to specify the API version as part of the URL when you make requests. For more information, see [API versioning for the DICOM service](api-versioning-dicom-service.md). ++> [!NOTE] +> A public preview of the DICOM service with Data Lake Storage is now available. This capability provides greater flexibility and control over your imaging data. Learn more: [Deploy the DICOM service with Data Lake Storage (Preview)](deploy-dicom-services-in-azure-data-lake.md) ## Prerequisites |
healthcare-apis | Dicom Data Lake | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-data-lake.md | + + Title: Azure Data Lake Storage integration for the DICOM service in Azure Health Data Services +description: Learn how to use Azure Data Lake Storage with the DICOM service to store, access, and analyze medical imaging data in the cloud. Explore the benefits, architecture, and data contracts of this integration. ++++ Last updated : 11/21/2023+++++# Azure Data Lake Storage integration for the DICOM service (Preview) ++The [DICOM® service](overview.md) provides cloud-scale storage for medical imaging data using the DICOMweb standard. With the integration of Azure Data Lake Storage, you gain full control of your imaging data and increased flexibility for accessing and working with that data through the Azure storage ecosystem and APIs. ++By using Azure Data Lake Storage with the DICOM service, organizations are able to: ++- **Directly access medical imaging data** stored by the DICOM service using Azure storage APIs and DICOMweb APIs, providing more flexibility to access and work with the data. +- **Open medical imaging data up to the entire ecosystem of tools** for working with Azure storage, including AzCopy, Azure Storage Explorer, and the Data Movement library. +- **Unlock new analytics and AI/ML scenarios** by using services that natively integrate with Azure Data Lake Storage, including Azure Synapse, Azure Databricks, Azure Machine Learning, and Microsoft Fabric. +- **Grant controls to manage storage permissions, access controls, tiers, and rules**. ++Another benefit of Azure Data Lake Storage is that it connects to [Microsoft Fabric](/fabric/get-started/microsoft-fabric-overview). Microsoft Fabric is an end-to-end, unified analytics platform that brings together all the data and analytics tools that organizations need to unlock the potential of their data and lay the foundation for AI scenarios. 
By using Microsoft Fabric, you can use the rich ecosystem of Azure services to perform advanced analytics and AI/ML with medical imaging data, such as building and deploying machine learning models, creating cohorts for clinical trials, and generating insights for patient care and outcomes. ++To learn more about using Microsoft Fabric with imaging data, see [Get started using DICOM data in analytics workloads](get-started-with-analytics-dicom.md). ++## Service architecture and APIs +++The DICOM service exposes the [DICOMweb APIs](dicomweb-standard-apis-with-dicom-services.md) to store, query for, and retrieve DICOM data. The architecture enables you to specify an Azure Data Lake Storage account and container at the time the DICOM service is deployed. The storage container is used by the DICOM service to store DICOM files received by the DICOMweb APIs. The DICOM service retrieves data from the storage account to fulfill search and retrieve queries, allowing full DICOMweb interoperability with DICOM data. ++With this architecture, the storage container remains in your control and is directly accessible using familiar [Azure storage APIs](/rest/api/storageservices/data-lake-storage-gen2) and tools. ++## Data contracts ++The DICOM service stores data in predictable locations in the data lake, following this convention: ++``` +AHDS/{workspace-name}/dicom/{dicom-service-name}/{partition-name} +``` ++| Parameter | Description | +|-|-| +| `{workspace-name}` | The name of the Health Data Services workspace that contains the DICOM service. | +| `{dicom-service-name}` | The name of the DICOM service instance. | +| `{partition-name}` | The name of the data partition. Note, if no partitions are specified, all DICOM data is stored in the default partition, named `Microsoft.Default`. | ++> [!NOTE] +> During public preview, the DICOM service writes data to the storage container and reads the data, but user-added data isn't read and indexed by the DICOM service. 
Similarly, if DICOM data written by the DICOM service is modified or removed, it may result in errors when accessing data with the DICOMweb APIs. ++## Permissions ++The DICOM service is granted access to the data like any other service or application accessing data in a storage account. Access can be revoked at any time without affecting your organization's ability to access the data. The DICOM service needs to be granted the [Storage Blob Data Contributor](/azure/role-based-access-control/built-in-roles#storage-blob-data-contributor) role by using a system-assigned or user-assigned managed identity. ++## Access tiers ++You can manage costs for imaging data stored by the DICOM service by using Azure Storage access tiers for the data lake storage account. The DICOM service only supports online access tiers (either hot, cool, or cold), and can retrieve imaging data in those tiers immediately. The hot tier is the best choice for data that is in active use. The cool or cold tier is ideal for data that is accessed less frequently but still must be available for reading and writing. ++To learn more about access tiers, including cost tradeoffs and best practices, see [Azure Storage access tiers](/azure/storage/blobs/access-tiers-overview) ++## Limitations ++During public preview, the DICOM service with data lake storage has these limitations: ++- [Bulk Import](import-files.md) isn't supported. +- UPS-RS work items aren't stored in the data lake storage account. +- User data added to the data lake storage account isn't read and indexed by the DICOM service. It's possible that a filename collision could occur, so we recommend that you don't write data to the folder structure used by the DICOM service. +- If DICOM data written by the DICOM service is modified or removed, errors might result when accessing data with the DICOMweb APIs. +- Configuration of customer-managed keys isn't supported during the creation of a DICOM service when you opt to use external storage. 
+- The archive access tier isn't supported. Moving data to the archive tier will result in errors when accessing data with the DICOMweb APIs. ++## Next steps ++[Deploy the DICOM service with Azure Data Lake Storage (Preview)](deploy-dicom-services-in-azure-data-lake.md) ++[Get started using DICOM data in analytics workloads](get-started-with-analytics-dicom.md) ++[Use DICOMweb standard APIs](dicomweb-standard-apis-with-dicom-services.md) + |
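The folder convention from the Data contracts section can be sketched as a small helper. The workspace and service names here are illustrative; the default partition name `Microsoft.Default` is from the convention above.

```python
# Sketch: the data-lake folder convention used by the DICOM service,
# AHDS/{workspace-name}/dicom/{dicom-service-name}/{partition-name}.
# Workspace/service names are hypothetical examples.

def dicom_data_path(workspace: str, service: str,
                    partition: str = "Microsoft.Default") -> str:
    """Build the container-relative path where the DICOM service stores files."""
    return f"AHDS/{workspace}/dicom/{service}/{partition}"

path = dicom_data_path("contoso-ws", "contoso-dicom")
```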
healthcare-apis | Dicom Service V2 Api Changes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/dicom-service-v2-api-changes.md | An attribute can be corrected in the following ways: #### Fewer Study, Series, and Instance attributes are returned by default The set of attributes returned by default has been reduced to improve performance. See the detailed list in the [search response](./dicom-services-conformance-statement-v2.md#search-response) documentation. +Attributes newly added to the default tags: ++|Tag level| Tag | Attribute Name | +|:--| :-- | :- | +|Study | (0008, 1030) | StudyDescription | +|Series | (0008, 1090) | ManufacturerModelName | ++Attributes removed from the default tags: ++|Tag level| Tag | Attribute Name | +|:--| :-- | :- | +|Study | (0008, 0005) | SpecificCharacterSet | +|Study | (0008, 0030) | StudyTime | +|Study | (0008, 0056) | InstanceAvailability | +|Study | (0008, 0201) | TimezoneOffsetFromUTC | +|Study | (0010, 0040) | PatientSex | +|Study | (0020, 0010) | StudyID | +|Series | (0008, 0005) | SpecificCharacterSet | +|Series | (0008, 0201) | TimezoneOffsetFromUTC | +|Series | (0008, 103E) | SeriesDescription | +|Series | (0040, 0245) | PerformedProcedureStepStartTime | +|Series | (0040, 0275) | RequestAttributesSequence | +|Instance | (0008, 0005) | SpecificCharacterSet | +|Instance | (0008, 0016) | SOPClassUID | +|Instance | (0008, 0056) | InstanceAvailability | +|Instance | (0008, 0201) | TimezoneOffsetFromUTC | +|Instance | (0020, 0013) | InstanceNumber | +|Instance | (0028, 0010) | Rows | +|Instance | (0028, 0011) | Columns | +|Instance | (0028, 0100) | BitsAllocated | +|Instance | (0028, 0008) | NumberOfFrames | ++All the removed tags are part of the additional tags, which are returned when queried with `includefield=all`. 
+ #### Null padded attributes can be searched for with or without padding When an attribute was stored using null padding, it can be searched for with or without the null padding in uri encoding. Results retrieved are for attributes stored both with and without null padding. |
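To illustrate the null-padding behavior above, here's a sketch of how a null-padded value is percent-encoded in a query URI. The `CT` Modality value is a hypothetical example; the equivalence of the two search forms is the v2 behavior described in the text.

```python
# Sketch: percent-encoding a null-padded attribute value for a search query.
# DICOM pads even-length values with a trailing NUL (\x00); in a URI that byte
# is written as %00. Per the v2 behavior above, searching with either form
# matches values stored with or without the padding.
from urllib.parse import quote

padded = "CT\x00"               # value as stored, with trailing null padding
encoded = quote(padded)         # percent-encoded form with %00
unpadded_encoded = quote("CT")  # the unpadded form is equally valid to search
```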
healthcare-apis | Get Started With Analytics Dicom | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/dicom/get-started-with-analytics-dicom.md | This article describes how to get started by using DICOM® data in analytics Before you get started, complete these steps: -* Deploy an instance of the [DICOM service](deploy-dicom-services-in-azure.md). * Create a [storage account with Azure Data Lake Storage Gen2 capabilities](../../storage/blobs/create-data-lake-storage-account.md) by enabling a hierarchical namespace: * Create a container to store DICOM metadata, for example, named `dicom`.+* Deploy an instance of the [DICOM service](deploy-dicom-services-in-azure.md). + * (_Optional_) Deploy the [DICOM service with Data Lake Storage (Preview)](deploy-dicom-services-in-azure-data-lake.md) to enable direct access to DICOM files. * Create a [Data Factory](../../data-factory/quickstart-create-data-factory.md) instance: * Enable a [system-assigned managed identity](../../data-factory/data-factory-service-identity.md). * Create a [lakehouse](/fabric/data-engineering/tutorial-build-lakehouse) in Fabric. Data Factory pipelines are a collection of _activities_ that perform a task, lik 1. Select **Use this template** to create the new pipeline. +### Create a pipeline for DICOM data (Preview) ++If you created the DICOM service with Azure Data Lake Storage (Preview), you need to use a custom template to include a new `fileName` parameter in the metadata pipeline. Instead of using the template from the template gallery, follow these steps to configure the pipeline. ++1. Download the [preview template](https://github.com/microsoft/dicom-server/blob/main/samples/templates/Copy%20DICOM%20Metadata%20Changes%20to%20ADLS%20Gen2%20in%20Delta%20Format.zip) from GitHub. The template file is a compressed (zipped) folder. You don't need to extract the files because they're already uploaded in compressed form. ++1. 
In Azure Data Factory, select **Author** from the left menu. On the **Factory Resources** pane, select the plus sign (+) to add a new resource. Select **Pipeline** and then select **Import from pipeline template**. ++1. In the **Open** window, select the preview template that you downloaded. Select **Open**. ++1. In the **Inputs** section, select the linked services created for the DICOM service and Azure Data Lake Storage Gen2 account. ++ :::image type="content" source="media/data-factory-create-pipeline.png" alt-text="Screenshot showing the Inputs section with linked services selected." lightbox="media/data-factory-create-pipeline.png"::: ++1. Select **Use this template** to create the new pipeline. + ## Schedule a pipeline -Pipelines are scheduled by _triggers_. There are different types of triggers. _Schedule triggers_ allow pipelines to be triggered on a wall-clock schedule. _Manual triggers_ trigger pipelines on demand. +Pipelines are scheduled by _triggers_. There are different types of triggers. _Schedule triggers_ allow pipelines to be triggered on a wall-clock schedule, which means they run at specific times of the day, such as every hour or every day at midnight. _Manual triggers_ trigger pipelines on demand, which means they run whenever you want them to. In this example, a _tumbling window trigger_ is used to periodically run the pipeline given a starting point and regular time interval. For more information about triggers, see [Pipeline execution and triggers in Azure Data Factory or Azure Synapse Analytics](../../data-factory/concepts-pipeline-execution-triggers.md). You can monitor trigger runs and their associated pipeline runs on the **Monitor [Fabric](https://www.microsoft.com/microsoft-fabric) is an all-in-one analytics solution that sits on top of [Microsoft OneLake](/fabric/onelake/onelake-overview). 
With the use of a [Fabric lakehouse](/fabric/data-engineering/lakehouse-overview), you can manage, structure, and analyze data in OneLake in a single location. Any data outside of OneLake, written to Data Lake Storage Gen2, can be connected to OneLake as shortcuts to take advantage of Fabric's suite of tools. -### Create shortcuts +### Create shortcuts to metadata tables 1. Go to the lakehouse created in the prerequisites. In the **Explorer** view, select the ellipsis menu (**...**) next to the **Tables** folder. After you create the shortcuts, expand a table to show the names and types of th :::image type="content" source="media/fabric-shortcut-schema.png" alt-text="Screenshot that shows the table columns listed in the Explorer view." lightbox="media/fabric-shortcut-schema.png"::: +### Create shortcuts to files ++If you're using a [DICOM service with Data Lake Storage](dicom-data-lake.md), you can additionally create a shortcut to the DICOM file data stored in the data lake. ++1. Go to the lakehouse created in the prerequisites. In the **Explorer** view, select the ellipsis menu (**...**) next to the **Files** folder. ++1. Select **New shortcut** to create a new shortcut to the storage account that contains the DICOM data. ++ :::image type="content" source="media/fabric-new-shortcut-files.png" alt-text="Screenshot that shows the New shortcut option of the Files menu in the Explorer view." lightbox="media/fabric-new-shortcut-files.png"::: ++1. Select **Azure Data Lake Storage Gen2** as the source for the shortcut. ++ :::image type="content" source="media/fabric-new-shortcut.png" alt-text="Screenshot that shows the New shortcut view with the Azure Data Lake Storage Gen2 tile." lightbox="media/fabric-new-shortcut.png"::: ++1. Under **Connection settings**, enter the **URL** you used in the [Linked services](#create-a-linked-service-for-azure-data-lake-storage-gen2) section. 
++ :::image type="content" source="media/fabric-connection-settings.png" alt-text="Screenshot that shows the connection settings for the Azure Data Lake Storage Gen2 account." lightbox="media/fabric-connection-settings.png"::: ++1. Select an existing connection or create a new connection by selecting the **Authentication kind** you want to use. ++1. Select **Next**. ++1. Enter a **Shortcut Name** that describes the DICOM data. For example, **contoso-dicom-files**. ++1. Enter the **Sub Path** that matches the name of the storage container and folder used by the DICOM service. For example, to link to the root folder, the Sub Path would be **/dicom/AHDS**. Note that the root folder is always `AHDS`, but you can optionally link to a child folder for a specific workspace or DICOM service instance. ++1. Select **Create** to create the shortcut. ++ ### Run notebooks After the tables are created in the lakehouse, you can query them from [Fabric notebooks](/fabric/data-engineering/how-to-use-notebook). You can create notebooks directly from the lakehouse by selecting **Open Notebook** from the menu bar. After a few seconds, the results of the query appear in a table underneath the c :::image type="content" source="media/fabric-notebook-results.png" alt-text="Screenshot that shows a notebook with a sample Spark SQL query and results." lightbox="media/fabric-notebook-results.png"::: +#### Access DICOM file data in notebooks ++If you used the preview template to create the pipeline and created a shortcut to the DICOM file data, you can use the `filePath` column in the `instance` table to correlate instance metadata to file data. ++```sql +SELECT sopInstanceUid, filePath FROM instance +``` ++ ## Summary In this article, you learned how to: |
healthcare-apis | Release Notes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/release-notes.md | This article provides details about the features and enhancements made to Azure Bulk delete operation is currently in public preview. Review disclaimer for details. [!INCLUDE public preview disclaimer] +**$import operation now supports importing soft deleted resources** +The capability to import soft deleted resources is useful during migration from Azure API for FHIR to Azure Health Data Services. For more details, visit [Fix SQL Import for Soft Delete and History](https://github.com/microsoft/fhir-server/pull/3530). ++**Performance improvement** +In this release, we improved the performance of FHIR queries that use the _include parameter. For more information, visit [Change query generator to use INNER JOIN](https://github.com/microsoft/fhir-server/pull/3572). ++**Bug fix: Searching with _include and wildcard resulted in query failure** +The issue is fixed; only the wildcard character "*" is permitted for _include and _revinclude searches. For more information, visit [Fix syntax check for : when wildcard is used](https://github.com/microsoft/fhir-server/pull/3541). ++**Bug fix: Multiple export jobs created, resulting in increased data storage volume** +Due to a bug, the export job created multiple child jobs when used with the typefilter parameter. The fix addresses the issue. For more information, visit [Fix export](https://github.com/microsoft/fhir-server/pull/3567). ++**Bug fix: Retriable exception for import operation when using duplicate files** +When duplicate files were supplied during import, the resulting exception was treated as retriable. This fix addresses the issue, and an import operation with the same file is no longer considered retriable. For more information, visit [Handles exception message for duplicate file in import operation](https://github.com/microsoft/fhir-server/pull/3557). 
++ ## October 2023 ### DICOM Service |
healthcare-apis | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Health Data Services FHIR service description: Lists Azure Policy Regulatory Compliance controls available. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
integration-environments | Create Integration Environment | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/integration-environments/create-integration-environment.md | description: Create an integration environment to centrally organize and manage Previously updated : 11/15/2023 Last updated : 11/22/2023 # CustomerIntent: As an integration developer, I want a way to centrally and logically organize Azure resources related to my organization's integration solutions. To centrally and logically organize and manage Azure resources associated with y > > Your integration environment and the Azure resources that you want to organize must exist in the same Azure subscription. +- Register the **Microsoft.IntegrationSpaces** resource provider for the Azure Integration Environment resource. ++ 1. In the [Azure portal](https://portal.azure.com) search box, enter and select **Subscriptions**. ++ 1. On the **Subscriptions** page, find and select your Azure subscription. ++ 1. On your subscription menu, under **Settings**, select **Resource providers**. ++ 1. In the **Resource providers** filter box, enter **integration**, and select **Microsoft.IntegrationSpaces**. ++ 1. On the **Resource providers** toolbar, select **Register**. ++ After the Azure portal completes the registration, the **Microsoft.IntegrationSpaces** resource provider status changes to **Registered**. + ## Create an integration environment 1. In the [Azure portal](https://portal.azure.com) search box, enter **integration environments**, and then select **Integration Environments**. |
iot-central | Tutorial Define Gateway Device Type | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-define-gateway-device-type.md | Now that you have the simulated devices in your application, you can create the 1. On the **Devices** page, select **RS40 Occupancy Sensor** in the list of device templates, and then select your simulated **RS40 Occupancy Sensor** device. -1. Select **Connect to gateway**. +1. Select **Attach to gateway**. -1. On the **Connect to a gateway** dialog, select the **Smart Building gateway device** template, and then select the simulated instance you created previously. +1. On the **Attach to a gateway** dialog, select the **Smart Building gateway device** template, and then select the simulated instance you created previously. 1. Select **Attach**. |
iot-develop | Concepts Azure Rtos Security Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/concepts-azure-rtos-security-practices.md | Whether you're using Azure RTOS in combination with Azure Sphere or not, the Mic - [PSA Certified 10 security goals explained](https://www.psacertified.org/blog/psa-certified-10-security-goals-explained/) discusses the Arm Platform Security Architecture (PSA). It provides a standardized framework for building secure embedded devices by using Arm TrustZone technology. Microcontroller manufacturers can certify designs with the Arm PSA Certified program, giving a level of confidence about the security of applications built on Arm technologies. - [Common Criteria](https://www.commoncriteriaportal.org/) is an international agreement that provides standardized guidelines and an authorized laboratory program to evaluate products for IT security. Certification provides a level of confidence in the security posture of applications using devices that were evaluated by using the program guidelines. - [Security Evaluation Standard for IoT Platforms (SESIP)](https://globalplatform.org/sesip/) is a standardized methodology for evaluating the security of connected IoT products and components.-- [ISO 27000 family](https://www.iso.org/isoiec-27001-information-security.html) is a collection of standards regarding the management and security of information assets. The standards provide baseline guarantees about the security of digital information in certified products. - [FIPS 140-2/3](https://csrc.nist.gov/publications/detail/fips/140/3/final) is a US government program that standardizes cryptographic algorithms and implementations used in US government and military applications. Along with documented standards, certified laboratories provide FIPS certification to guarantee specific cryptographic implementations adhere to regulations. |
iot-hub | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/policy-reference.md | Title: Built-in policy definitions for Azure IoT Hub description: Lists Azure Policy built-in policy definitions for Azure IoT Hub. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
iot-hub | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure IoT Hub description: Lists Azure Policy Regulatory Compliance controls available for Azure IoT Hub. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
iot-operations | Howto Deploy Iot Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-deploy-iot-operations.md | - - ignite-2023 + Last updated 11/07/2023 #CustomerIntent: As an OT professional, I want to deploy Azure IoT Operations to a Kubernetes cluster. |
iot-operations | Howto Prepare Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/deploy-iot-ops/howto-prepare-cluster.md | - - ignite-2023 + Last updated 11/07/2023 #CustomerIntent: As an IT professional, I want to prepare an Azure Arc-enabled Kubernetes cluster so that I can deploy Azure IoT Operations to it. |
iot-operations | Quickstart Add Assets | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-add-assets.md | Complete [Quickstart: Deploy Azure IoT Operations to an Arc-enabled Kubernetes c To sign in to the Azure IoT Operations portal you need a work or school account in the tenant where you deployed Azure IoT Operations. If you're currently using a Microsoft account (MSA), you need to create a Microsoft Entra ID with at least contributor permissions for the resource group that contains your **Kubernetes - Azure Arc** instance. To learn more, see [Known Issues > Create Entra account](../troubleshoot/known-issues.md#azure-iot-operations-preview-portal). -Install the [mqttui](https://github.com/EdJoPaTo/mqttui) tool on the Ubuntu machine where you're running Kubernetes: --```bash -wget https://github.com/EdJoPaTo/mqttui/releases/download/v0.19.0/mqttui-v0.19.0-x86_64-unknown-linux-gnu.deb -sudo dpkg -i mqttui-v0.19.0-x86_64-unknown-linux-gnu.deb -``` --> [!TIP] -> If you're running the quickstart on another platform, you can use other MQTT tools such as [MQTT Explorer](https://apps.microsoft.com/detail/9PP8SFM082WD). - ## What problem will we solve? The data that OPC UA servers expose can have a complex structure and can be difficult to understand. Azure IoT Operations provides a way to model OPC UA assets as tags, events, and properties. This modeling makes it easier to understand the data and to use it in downstream processes such as the MQ broker and Azure IoT Data Processor (preview) pipelines. To enable the asset endpoint to use an untrusted certificate: > [!CAUTION] > Don't use untrusted certificates in production environments. -1. Run the following command to apply the configuration to use an untrusted certificate: +1. 
Run the following command on the machine where your cluster is running to apply the configuration to use an untrusted certificate: - ```bash + ```console kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/opc-ua-connector-0.yaml ``` -1. Restart the `aio-opc-supervisor` pod by using a command that looks like the following example: +1. Find the name of your `aio-opc-supervisor` pod by using the following command: - ```bash - kubectl delete pod aio-opc-supervisor-956fbb649-k9ppr -n azure-iot-operations + ```console + kubectl get pods -n azure-iot-operations ``` - The name of your `aio-opc-supervisor` pod will be different. To find the name of your pod, run the following command: + The name of your pod looks like `aio-opc-supervisor-956fbb649-k9ppr`. - ```bash - kubectl get pods -n azure-iot-operations +1. Restart the `aio-opc-supervisor` pod by using a command that looks like the following example. Use the `aio-opc-supervisor` pod name from the previous step: ++ ```console + kubectl delete pod aio-opc-supervisor-956fbb649-k9ppr -n azure-iot-operations ``` ## Manage your assets Review your asset and tag details and make any adjustments you need before you s ## Verify data is flowing -To verify data is flowing from your assets by using the **mqttui** tool: +To verify data is flowing from your assets by using the **mqttui** tool. In this quickstart you run the **mqttui** tool inside your Kubernetes cluster: -1. Run the following command to make the MQ broker accessible from your local machine: +1. 
Run the following command to deploy a pod that includes the **mqttui** and **mosquitto** tools that are useful for interacting with the MQ broker in the cluster: - ```bash - kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/az-mqtt-non-tls-listener.yaml + ```console + kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/mqtt-client.yaml ``` > [!CAUTION]- > This configuration exposes the MQ broker without TLS. Don't use this configuration in production environments. + > This configuration isn't secure. Don't use this configuration in a production environment. -1. Run the following command to find the `EXTERNAL-IP` address that the non-TLS listener pod is using: +1. When the **mqtt-client** pod is running, run the following command to create a shell environment in the pod you created: - ```bash - kubectl get svc aio-mq-dmqtt-frontend-nontls -n azure-iot-operations + ```console + kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh ``` -1. In a separate terminal window, run the following command to connect to the MQ broker using the **mqttui** tool. Replace the `<external-ip>` placeholder with the `EXTERNAL-IP` address that you found in the previous step: +1. At the shell in the **mqtt-client** pod, run the following command to connect to the MQ broker using the **mqttui** tool: - ```bash - mqttui -b mqtt://<external-ip>:1883 + ```console + mqttui -b mqtts://aio-mq-dmqtt-frontend:8883 -u '$sat' --password $(cat /var/run/secrets/tokens/mq-sat) --insecure ``` 1. Verify that the thermostat asset you added is publishing data. You can find the telemetry in the `azure-iot-operations/data` topic. 
:::image type="content" source="media/quickstart-add-assets/mqttui-output.png" alt-text="Screenshot of the mqttui topic display showing the temperature telemetry."::: - If there's no data flowing, restart the `aio-opc-opc.tcp-1` pod by using a command that looks like the following example: + If there's no data flowing, restart the `aio-opc-opc.tcp-1` pod. - ```bash - kubectl delete pod aio-opc-opc.tcp-1-849dd78866-vhmz6 -n azure-iot-operations + First, find the name of your `aio-opc-opc.tcp-1` pod by using the following command: ++ ```console + kubectl get pods -n azure-iot-operations ``` - The name of your `aio-opc-opc.tcp-1` pod will be different. To find the name of your pod, run the following command: + The name of your pod looks like `aio-opc-opc.tcp-1-849dd78866-vhmz6`. - ```bash - kubectl get pods -n azure-iot-operations + Then restart the `aio-opc-opc.tcp-1` pod by using a command that looks like the following example. Use the `aio-opc-opc.tcp-1` pod name from the previous step: ++ ```console + kubectl delete pod aio-opc-opc.tcp-1-849dd78866-vhmz6 -n azure-iot-operations ``` The sample tags you added in the previous quickstart generate messages from your asset that look like the following examples: When you deploy Azure IoT Operations, the deployment includes the Akri discovery kubectl get pods -n azure-iot-operations | grep akri ``` +```powershell +kubectl get pods -n azure-iot-operations | Select-String -Pattern "akri" +``` + The output from the previous command looks like the following example: -```text +```console akri-opcua-asset-discovery-daemonset-h47zk 1/1 Running 3 (4h15m ago) 2d23h aio-akri-otel-collector-5c775f745b-g97qv 1/1 Running 3 (4h15m ago) 2d23h aio-akri-agent-daemonset-mp6v7 1/1 Running 3 (4h15m ago) 2d23h ``` -On the machine where your Kubernetes cluster is running, run the following command to apply the configuration for a new configuration for the discovery handler: +On the machine where your Kubernetes cluster is running, run the 
following command to apply a new configuration for the discovery handler: -```bash +```console kubectl apply -f https://raw.githubusercontent.com/Azure-Samples/explore-iot-operations/main/samples/quickstarts/akri-opcua-asset.yaml ``` To verify the configuration, run the following command to view the Akri instances that represent the OPC UA data sources discovered by Akri: -```bash +```console kubectl get akrii -n azure-iot-operations ``` -The output from the previous command looks like the following example: +The output from the previous command looks like the following example. You may need to wait for a few seconds for the Akri instance to be created: -```text +```console NAMESPACE NAME CONFIG SHARED NODES AGE azure-iot-operations akri-opcua-asset-dbdef0 akri-opcua-asset true ["dom-aio-vm"] 35m ``` |
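The bash and PowerShell pod-filtering commands above can be exercised without a cluster; this minimal sketch pipes a captured sample of `kubectl get pods` output (pod names copied from the example output above, so no live `kubectl` is needed) through the same `grep` filter:

```shell
# Sample listing as `kubectl get pods -n azure-iot-operations` might print it
# (names taken from the example output above; this sketch needs no cluster).
pods='akri-opcua-asset-discovery-daemonset-h47zk 1/1 Running
aio-akri-otel-collector-5c775f745b-g97qv 1/1 Running
aio-opc-supervisor-956fbb649-k9ppr 1/1 Running'

# The same filter the quickstart applies to the live listing:
# only the two lines containing "akri" pass through.
printf '%s\n' "$pods" | grep akri
```

Against a live cluster, replace the sample variable with the real `kubectl get pods -n azure-iot-operations` call, exactly as the quickstart shows.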
iot-operations | Quickstart Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-deploy.md | - - ignite-2023 + Last updated 11/15/2023 #CustomerIntent: As a < type of user >, I want < what? > so that < why? >. On Ubuntu Linux, use K3s to create a Kubernetes cluster. ```bash mkdir ~/.kube- cp ~/.kube/config ~/.kube/config.back sudo KUBECONFIG=~/.kube/config:/etc/rancher/k3s/k3s.yaml kubectl config view --flatten > ~/.kube/merged mv ~/.kube/merged ~/.kube/config chmod 0600 ~/.kube/config On Ubuntu Linux, use K3s to create a Kubernetes cluster. Part of the deployment process is to configure your cluster so that it can communicate securely with your Azure IoT Operations components and key vault. The Azure CLI command `az iot ops init` does this for you. Once your cluster is configured, then you can deploy Azure IoT Operations. -Use the Azure portal to create a key vault, build the `az iot ops init` command based on your resources, and then deploy Azure IoT Operations components to your Arc-enabled Kubernetes cluster. +Use the Azure CLI to create a key vault, build the `az iot ops init` command based on your resources, and then deploy Azure IoT Operations components to your Arc-enabled Kubernetes cluster. ### Create a key vault -You can use an existing key vault for your secrets, but verify that the **Permission model** is set to **Vault access policy**. You can check this setting in the **Access configuration** section of an existing key vault. +You can use an existing key vault for your secrets, but verify that the **Permission model** is set to **Vault access policy**. You can check this setting in the Azure portal in the **Access configuration** section of an existing key vault. Or use the [az keyvault show](/cli/azure/keyvault#az-keyvault-show) command to check that `enableRbacAuthorization` is false. -1. Open the [Azure portal](https://portal.azure.com). 
+To create a new key vault, use the following command: -1. In the search bar, search for and select **Key vaults**. --1. Select **Create**. --1. On the **Basics** tab of the **Create a key vault** page, provide the following information: -- | Field | Value | - | -- | -- | - | **Subscription** | Select the subscription that also contains your Arc-enabled Kubernetes cluster. | - | **Resource group** | Select the resource group that also contains your Arc-enabled Kubernetes cluster. | - | **Key vault name** | Provide a globally unique name for your key vault. | - | **Region** | Select a region close to you. | - | **Pricing tier** | The default **Standard** tier is suitable for this quickstart. | --1. Select **Next**. --1. On the **Access configuration** tab, provide the following information: -- | Field | Value | - | -- | -- | - | **Permission model** | Select **Vault access policy**. | -- :::image type="content" source="./media/quickstart-deploy/key-vault-access-policy.png" alt-text="Screenshot of selecting the vault access policy permission model in the Azure portal."::: --1. Select **Review + create**. --1. Select **Create**. +```azurecli +az keyvault create --enable-rbac-authorization false --name "<your unique key vault name>" --resource-group "<the name of the resource group that contains your Kubernetes cluster>" +``` ### Deploy Azure IoT Operations While the deployment is in progress, you can watch the resources being applied t To view the pods on your cluster, run the following command: -```bash +```console kubectl get pods -n azure-iot-operations ``` |
iot-operations | Quickstart Process Telemetry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/get-started/quickstart-process-telemetry.md | To create a service principal that gives your pipeline access to your Microsoft 1. Use the following Azure CLI command to create a service principal. - ```bash + ```azurecli az ad sp create-for-rbac --name <YOUR_SP_NAME> ``` To add the secret reference to your Kubernetes cluster, edit the **aio-default-s 1. Enter the following command on the machine where your cluster is running to edit the **aio-default-spc** `secretproviderclass` resource. The YAML configuration for the resource opens in your default editor: - ```bash + ```console kubectl edit secretproviderclass aio-default-spc -n azure-iot-operations ``` 1. Add a new entry to the array of secrets for your new Azure Key Vault secret. The `spec` section looks like the following example: ```yaml- # Please edit the object below. Lines beginning with a '#' will be ignored, + # Edit the object below. Lines beginning with a '#' will be ignored, # and an empty file will abort the edit. If an error occurs while saving this file will be # reopened with the relevant failures. # To add the secret reference to your Kubernetes cluster, edit the **aio-default-s The CSI driver updates secrets by using a polling interval, therefore the new secret isn't available to the pod until the polling interval is reached. To update the pod immediately, restart the pods for the component. For Data Processor, run the following commands: -```bash +```console kubectl delete pod aio-dp-reader-worker-0 -n azure-iot-operations kubectl delete pod aio-dp-runner-worker-0 -n azure-iot-operations ``` In the following steps, leave all values at their default unless otherwise speci 1. Select the pipeline name, **\<pipeline-name\>**, and change it to _passthrough-data-pipeline_. Select **Apply**. 1. Select **Save** to save and deploy the pipeline. 
It takes a few seconds to deploy this pipeline to your cluster.-1. Connect to the MQ broker using your MQTT client again. This time, specify the topic `dp-output`. +1. Run the following command to create a shell environment in the **mqtt-client** pod you created in the previous quickstart: - ```bash - mqttui -b mqtt://127.0.0.1:1883 "dp-output" + ```console + kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh + ``` ++1. At the shell in the **mqtt-client** pod, connect to the MQ broker using your MQTT client again. This time, specify the topic `dp-output`. ++ ```console + mqttui -b mqtts://aio-mq-dmqtt-frontend:8883 -u '$sat' --password $(cat /var/run/secrets/tokens/mq-sat) --insecure "dp-output" ``` 1. You see the same data flowing as previously. This behavior is expected because the deployed _passthrough data pipeline_ doesn't transform the data. The pipeline routes data from one MQTT topic to another. In the following steps, leave all values at their default unless otherwise speci | Name | `equipment-data` | | Expiration time | `1h` | -1. Select **Create dataset** to save the reference dataset destination details. It takes a few seconds to deploy the dataset to your cluster and become visible in the dataset list view. +1. Select **Create** to save the reference dataset destination details. It takes a few seconds to deploy the dataset to your cluster and become visible in the dataset list view. 1. Use the values in the following table to configure the destination stage. Then select **Apply**: In the following steps, leave all values at their default unless otherwise speci To store the reference data, publish it as an MQTT message to the `reference_data` topic by using the mqttui tool: -```bash -mqttui -b mqtt://127.0.0.1:1883 publish "reference_data" '{ "customer": "Contoso", "batch": 102, "equipment": "Boiler", "location": "Seattle", "isSpare": true }' -``` +1. 
Create a shell environment in the **mqtt-client** pod you created in the previous quickstart: ++ ```console + kubectl exec --stdin --tty mqtt-client -n azure-iot-operations -- sh + ``` ++1. Publish the message: ++ ```console + mqttui -b mqtts://aio-mq-dmqtt-frontend:8883 -u '$sat' --password $(cat /var/run/secrets/tokens/mq-sat) --insecure publish "reference_data" '{ "customer": "Contoso", "batch": 102, "equipment": "Boiler", "location": "Seattle", "isSpare": true }' + ``` After you publish the message, the pipeline receives the message and stores the data in the equipment data reference dataset. Create a Data Processor pipeline to process and enrich your data before it sends Add two properties: - | Parameter | Value | - | -- | -- | - | Input path | `.payload.payload["temperature"]` | - | Output path | `.payload.payload.temperature_lkv` | - | Expiration time | `01h` | -- | Parameter | Value | - | -- | -- | - | Input path | `.payload.payload["Tag 10"]` | - | Output path | `.payload.payload.tag1_lkv` | - | Expiration time | `01h` | + | Input path | Output path | Expiration time | + | -- | -- | | + | `.payload.payload["temperature"]` | `.payload.payload.temperature_lkv` | `01h` | + | `.payload.payload["Tag 10"]` | `.payload.payload.tag1_lkv` | `01h` | This stage enriches the incoming messages with the latest `temperature` and `Tag 10` values if they're missing. The tracked latest values are retained for 1 hour. If the tracked properties appear in the message, the tracked latest value is updated to ensure that the values are always up to date. 
Create a Data Processor pipeline to process and enrich your data before it sends | Parameter | Value | | - | -- |- | Display name | construct full payload | + | Display name | `construct full payload` | The following jq expression formats the payload property to include all telemetry values and all the contextual information as key value pairs: Create a Data Processor pipeline to process and enrich your data before it sends :::image type="content" source="media/quickstart-process-telemetry/lakehouse-preview.png" alt-text="Screenshot that shows data from the pipeline appearing in the lakehouse table."::: +> [!TIP] +> Make sure that no other processes write to the OPCUA table in your lakehouse. If you write to the table from multiple sources, you might see corrupted data in the table. + ## How did we solve the problem? In this quickstart, you used Data Processor pipelines to process your OPC UA data before sending it to a Microsoft Fabric lakehouse. You used the pipelines to: |
iot-operations | Howto Configure Opcua Authentication Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-devices-assets/howto-configure-opcua-authentication-options.md | - - ignite-2023 + Last updated 11/6/2023 # CustomerIntent: As a user in IT, operations, or development, I want to configure my OPC UA industrial edge environment |
iot-operations | Howto Configure L3 Cluster Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l3-cluster-layered-network.md | - - ignite-2023 + Last updated 11/15/2023 #CustomerIntent: As an operator, I want to configure Layered Network Management so that I have securely isolated devices. |
iot-operations | Howto Configure L4 Cluster Layered Network | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-layered-network/howto-configure-l4-cluster-layered-network.md | - - ignite-2023 + Last updated 11/15/2023 #CustomerIntent: As an operator, I want to configure Layered Network Management so that I have securely isolated devices. |
iot-operations | Howto Configure Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/manage-mqtt-connectivity/howto-configure-authorization.md | The specification of a *BrokerAuthorization* resource has the following fields: | Field Name | Required | Description | | | | | | listenerRef | Yes | The names of the BrokerListener resources that this authorization policy applies. This field is required and must match an existing *BrokerListener* resource in the same namespace. |-| authorizationPolicies | Yes | This field defines the settings for the authorization policies. | -| enableCache | | Whether to enable caching for the authorization policies. | -| rules | | A boolean flag that indicates whether to enable caching for the authorization policies. If set to `true`, the broker caches the authorization results for each client and topic combination to improve performance and reduce latency. If set to `false`, the broker evaluates the authorization policies for each client and topic request, to ensure consistency and accuracy. This field is optional and defaults to `false`. | -| principals | | This subfield defines the identities that represent the clients. | -| usernames | | A list of usernames that match the clients. The usernames are case-sensitive and must match the usernames provided by the clients during authentication. | -| attributes | | A list of key-value pairs that match the attributes of the clients. The attributes are case-sensitive and must match the attributes provided by the clients during authentication. | -| brokerResources | Yes | This subfield defines the objects that represent the actions or topics. | -| method | Yes | The type of action that the clients can perform on the broker. This subfield is required and can be one of these values: **Connect**: This value indicates that the clients can connect to the broker. 
- **Publish**: This value indicates that the clients can publish messages to topics on the broker. - **Subscribe**: This value indicates that the clients can subscribe to topics on the broker. | -| topics | No | A list of topics or topic patterns that match the topics that the clients can publish or subscribe to. This subfield is required if the method is Subscribe or Publish. | +| authorizationPolicies | Yes | This field defines the settings for the authorization policies, such as *enableCache* and *rules*.| +| enableCache | No | Whether to enable caching for the authorization policies. If set to `true`, the broker caches the authorization results for each client and topic combination to improve performance and reduce latency. If set to `false`, the broker evaluates the authorization policies for each client and topic request, to ensure consistency and accuracy. This field is optional and defaults to `false`. | +| rules | No | A list of rules that specify the principals and resources for the authorization policies. Each rule has these subfields: *principals* and *brokerResources*. | +| principals | Yes | This subfield defines the identities that represent the clients, such as *usernames*, *clientids*, and *attributes*.| +| usernames | No | A list of usernames that match the clients. The usernames are case-sensitive and must match the usernames provided by the clients during authentication. | +| clientids | No | A list of client IDs that match the clients. The client IDs are case-sensitive and must match the client IDs provided by the clients during connection. | +| attributes | No | A list of key-value pairs that match the attributes of the clients. The attributes are case-sensitive and must match the attributes provided by the clients during authentication. | +| brokerResources | Yes | This subfield defines the objects that represent the actions or topics, such as *method* and *topics*. 
| +| method | Yes | The type of action that the clients can perform on the broker. This subfield is required and can be one of these values: **Connect**: This value indicates that the clients can connect to the broker. **Publish**: This value indicates that the clients can publish messages to topics on the broker. **Subscribe**: This value indicates that the clients can subscribe to topics on the broker. | +| topics | No | A list of topics or topic patterns that match the topics that the clients can publish or subscribe to. This subfield is required if the method is *Subscribe* or *Publish*. | The following example shows how to create a *BrokerAuthorization* resource that defines the authorization policies for a listener named *my-listener*. |
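Putting the fields from the table together, a *BrokerAuthorization* resource for a listener named *my-listener* might be sketched as follows. This is an illustrative assumption built only from the field descriptions above: the `apiVersion`, resource name, username, and topic values are hypothetical, not taken from the article.

```yaml
apiVersion: mq.iotoperations.azure.com/v1beta1  # assumed API version, for illustration only
kind: BrokerAuthorization
metadata:
  name: my-authz-policy            # hypothetical name
  namespace: azure-iot-operations
spec:
  listenerRef:
    - my-listener                  # must match an existing BrokerListener in the same namespace
  authorizationPolicies:
    enableCache: true              # cache per client/topic results to reduce latency
    rules:
      - principals:
          usernames:
            - temperature-sensor   # hypothetical client username
        brokerResources:
          - method: Connect
          - method: Publish
            topics:                # required when method is Publish or Subscribe
              - "telemetry/#"
```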
iot-operations | Howto Configure Transform Stage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/process-data/howto-configure-transform-stage.md | Last updated 10/09/2023 [!INCLUDE [public-preview-note](../includes/public-preview-note.md)] -Use the _transform_ stage to carry out structural transformations on messages in a pipeline such as: +Use the _transform_ stage to carry out structural transformations on messages in a pipeline, such as: -- Renaming tags and properties-- Unbatching data-- Adding new properties-- Adding calculated values+- Rename tags and properties +- Unbatch data +- Add new properties +- Add calculated values The transform stage uses [jq](concept-jq.md) to support data transformation: - Each pipeline partition transforms messages independently of each other.-- The stage outputs a transformed message based on the jq expression](concept-jq-expression.md) you provide.-- Create a [jq expression](concept-jq-expression.md) to transform a message based on how the structure of the incoming message to the stage. +- The stage outputs a transformed message based on the [jq expression](concept-jq-expression.md) you provide. +- Create a [jq expression](concept-jq-expression.md) to transform a message based on the structure of the incoming message to the stage. ## Prerequisites To configure and use a transform pipeline stage, you need: ### Configure the stage -The transform stage JSON configuration defines the details of the stage. To author the stage, you can either interact with the form-based UI, or provide the JSON configuration on the **Advanced** tab: +The transform stage JSON configuration defines the details of the stage. To author the stage, you can either interact with the form-based UI or provide the JSON configuration on the **Advanced** tab: -| Name | Value | Required | Example | +| Name | Value | Required | Example | | | | | | | Name | A name to show in the Data Processor UI. 
| Yes | `Transform1` | | Description | A user-friendly description of what the transform stage does. | No | `Rename Tags` | The following transformation example converts the array of tags in the input mes } ``` -The output from the transform stage looks like the following example +The output from the transform stage looks like the following example: ```json { |
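As a local illustration of the kind of tag renaming the transform stage performs, the following sketch evaluates a jq expression over a made-up message (it assumes the `jq` CLI is installed; the message shape and tag names are invented for this example, not taken from the article):

```shell
# A made-up input message with a tag whose name we want to change.
input='{"payload":{"Tag 10":42,"temperature":21.5}}'

# Rename "Tag 10" to "tag1" inside .payload, leaving other properties intact;
# a transform stage evaluates a jq expression like this against each message.
echo "$input" | jq -c '.payload |= (. + {tag1: .["Tag 10"]} | del(.["Tag 10"]))'
# → {"payload":{"temperature":21.5,"tag1":42}}
```

The `|=` update-assignment rewrites `.payload` in place, which mirrors how a pipeline stage transforms a message without touching the rest of the envelope.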
iot-operations | Tutorial Connect Event Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/send-view-analyze-data/tutorial-connect-event-grid.md | |
iot-operations | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-operations/troubleshoot/troubleshoot.md | Status: Status: Failed Events: <none> ```++## Data is corrupted in the Microsoft Fabric lakehouse table ++If data is corrupted in the Microsoft Fabric lakehouse table that your Data Processor pipeline is writing to, make sure that no other processes are writing to the table. If you write to the Microsoft Fabric lakehouse table from multiple sources, you might see corrupted data in the table. ++## Deployment issues with Data Processor ++If you see deployment errors with Data Processor pods, make sure that when you created your Azure Key Vault you chose **Vault access policy** as the **Permission model**. |
key-vault | Logging | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/logging.md | -After you create one or more Managed HSMs, you'll likely want to monitor how and when your HSMss are accessed, and by who. You can do this by enabling logging, which saves information in an Azure storage account that you provide. A new container named **insights-logs-auditevent** is automatically created for your specified storage account. You can use this same storage account for collecting logs for multiple Managed HSMs. +After you create one or more Managed HSMs, you'll likely want to monitor how and when your HSMs are accessed, and by who. You can do this by enabling logging, which saves information in an Azure storage account that you provide. A new container named **insights-logs-auditevent** is automatically created for your specified storage account. You can use this same storage account for collecting logs for multiple Managed HSMs. You can access your logging information 10 minutes (at most) after the Managed HSM operation. In most cases, it will be quicker than this. It's up to you to manage your logs in your storage account: |
key-vault | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md | Title: Built-in policy definitions for Key Vault description: Lists Azure Policy built-in policy definitions for Key Vault. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
key-vault | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Key Vault description: Lists Azure Policy Regulatory Compliance controls available for Azure Key Vault. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
lab-services | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/policy-reference.md | Title: Built-in policy definitions for Lab Services description: Lists Azure Policy built-in policy definitions for Azure Lab Services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
lighthouse | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lighthouse/samples/policy-reference.md | Title: Built-in policy definitions for Azure Lighthouse description: Lists Azure Policy built-in policy definitions for Azure Lighthouse. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
load-testing | How To Create And Run Load Test With Jmeter Script | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-create-and-run-load-test-with-jmeter-script.md | |
logic-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/policy-reference.md | Title: Built-in policy definitions for Azure Logic Apps description: Lists Azure Policy built-in policy definitions for Azure Logic Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 ms.suite: integration |
logic-apps | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Logic Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Logic Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
machine-learning | Concept Automl Forecasting Methods | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md | Each Series in Own Group (1:1) | All Series in Single Group (N:1) -| -- Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, TCNForecaster -More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-many-models-in-pipeline/automl-forecasting-demand-many-models-in-pipeline.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-hierarchical-timeseries-in-pipeline/automl-forecasting-demand-hierarchical-timeseries-in-pipeline.ipynb). +More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/sdk/python/jobs/pipelines/1k_demand_forecasting_with_pipeline_components/automl-forecasting-demand-many-models-in-pipeline/automl-forecasting-demand-many-models-in-pipeline.ipynb). ## Next steps |
machine-learning | Concept Compute Target | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-target.md | When you select a node size for a managed compute resource in Azure Machine Lear There are a few exceptions and limitations to choosing a VM size: * Some VM series aren't supported in Azure Machine Learning.-* Some VM series, such as GPUs and other special SKUs, might not initially appear in your list of available VMs. But you can still use them, once you request a quota change. For more information about requesting quotas, see [Request quota increases](how-to-manage-quotas.md#request-quota-increases). +* Some VM series, such as GPUs and other special SKUs, might not initially appear in your list of available VMs. But you can still use them, once you request a quota change. For more information about requesting quotas, see [Request quota and limit increases](how-to-manage-quotas.md#request-quota-and-limit-increases). See the following table to learn more about supported series. | **Supported VM series** | **Category** | **Supported by** | |
machine-learning | Concept Endpoints Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints-online.md | Azure Machine Learning allows you to perform real-time inferencing on data by us To define an endpoint, you need to specify: -- **Endpoint name**: This name must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).+- **Endpoint name**: This name must be unique in the Azure region. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). - **Authentication mode**: You can choose between key-based authentication mode and Azure Machine Learning token-based authentication mode for the endpoint. A key doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). Azure Machine Learning provides the convenience of using **managed online endpoints** for deploying your ML models in a turnkey manner. This is the _recommended_ way to use online endpoints in Azure Machine Learning. Managed online endpoints work with powerful CPU and GPU machines in Azure in a scalable, fully managed way. These endpoints also take care of serving, scaling, securing, and monitoring your models, to free you from the overhead of setting up and managing the underlying infrastructure. The following table describes the key attributes of a deployment: | Scoring script | The relative path to the scoring file in the source code directory. This Python code must have an `init()` function and a `run()` function. The `init()` function will be called after the model is created or updated (you can use it to cache the model in memory, for example). 
The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. | | Environment | The environment to host the model and code. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. Note: Microsoft regularly patches the base images for known security vulnerabilities. You'll need to redeploy your endpoint to use the patched image. If you provide your own image, you're responsible for updating it. For more information, see [Image patching](concept-environments.md#image-patching). | | Instance type | The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). |-| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | +| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [virtual machine quota allocation for deployments](how-to-deploy-online-endpoints.md#virtual-machine-quota-allocation-for-deployment). | To learn how to deploy online endpoints using the CLI, SDK, studio, and ARM template, see [Deploy an ML model with an online endpoint](how-to-deploy-online-endpoints.md). 
For more information on monitoring, see [Monitor online endpoints](how-to-monito - [Deploy models with REST](how-to-deploy-with-rest.md) - [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md) - [How to view managed online endpoint costs](how-to-view-online-endpoints-costs.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) |
machine-learning | Concept Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoints.md | You can create and manage batch and online endpoints with multiple developer too - [How to deploy pipelines with batch endpoints](how-to-use-batch-pipeline-deployments.md) - [How to use online endpoints with the studio](how-to-use-managed-online-endpoint-studio.md) - [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) |
machine-learning | How To Access Resources From Endpoints Managed Identities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-resources-from-endpoints-managed-identities.md | This YAML example, `2-sai-deployment.yml`, # [System-assigned (Python)](#tab/system-identity-python) -To deploy an online endpoint with the Python SDK (v2), objects may be used to define the configuration as below. Alternatively, YAML files may be loaded using the `.load` method. +To deploy an online endpoint with the Python SDK (v2), objects can be used to define the configuration as below. Alternatively, YAML files can be loaded using the `.load` method. The following Python endpoint object: This deployment object: # [User-assigned (Python)](#tab/user-identity-python) -To deploy an online endpoint with the Python SDK (v2), objects may be used to define the configuration as below. Alternatively, YAML files may be loaded using the `.load` method. +To deploy an online endpoint with the Python SDK (v2), objects can be used to define the configuration as below. Alternatively, YAML files can be loaded using the `.load` method. For a user-assigned identity, we will define the endpoint configuration below once the User-Assigned Managed Identity has been created. Delete the User-assigned managed identity: * To see which compute resources you can use, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). * For more on costs, see [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md). * For information on monitoring endpoints, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md).-* For limitations for managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning-managed online endpoint](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). 
-* For limitations for Kubernetes endpoints, see [Manage and increase quotas for resources with Azure Machine Learning-kubernetes online endpoint](how-to-manage-quotas.md#azure-machine-learning-kubernetes-online-endpoints). +* For limitations for managed online endpoint and Kubernetes online endpoint, see [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). |
machine-learning | How To Configure Network Isolation With V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-network-isolation-with-v2.md | The Azure Machine Learning CLI v2 uses our new v2 API platform. New features suc As mentioned in the previous section, there are two types of operations: with ARM and with the workspace. With the __legacy v1 API__, most operations used the workspace. With the v1 API, adding a private endpoint to the workspace provided network isolation for everything except CRUD operations on the workspace or compute resources. -With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, the [create or update job](/rest/api/azureml/2023-04-01/jobs/create-or-update) api sends metadata, and [parameters](./reference-yaml-job-command.md). +With the __new v2 API__, most operations use ARM. So enabling a private endpoint on your workspace doesn't provide the same level of network isolation. Operations that use ARM communicate over public networks, and include any metadata (such as your resource IDs) or parameters used by the operation. For example, see the [parameters](./reference-yaml-job-command.md). > [!IMPORTANT] > For most people, using the public ARM communications is OK: |
machine-learning | How To Datastore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-datastore.md | az ml datastore create --file my_onelakesp_datastore.yml - [Access data in a job](how-to-read-write-data-v2.md#access-data-in-a-job) - [Create and manage data assets](how-to-create-data-assets.md#create-and-manage-data-assets) - [Import data assets (preview)](how-to-import-data-assets.md#import-data-assets-preview)-- [Data administration](how-to-administrate-data-authentication.md#data-administration)+- [Data administration](how-to-administrate-data-authentication.md#data-administration) |
machine-learning | How To Deploy Automl Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-automl-endpoint.md | Next, we'll create the managed online endpoints and deployments. 1. Configure online endpoint: > [!TIP]- > * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). + > * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). > * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). |
machine-learning | How To Deploy Custom Container | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-custom-container.md | Learn how to use a custom container for deploying a model to an online endpoint Custom container deployments can use web servers other than the default Python Flask server used by Azure Machine Learning. Users of these deployments can still take advantage of Azure Machine Learning's built-in monitoring, scaling, alerting, and authentication. -The following table lists various [deployment examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container) that use custom containers such as TensorFlow Serving, TorchServe, Triton Inference Server, Plumber R package, and AzureML Inference Minimal image. +The following table lists various [deployment examples](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/custom-container) that use custom containers such as TensorFlow Serving, TorchServe, Triton Inference Server, Plumber R package, and Azure Machine Learning Inference Minimal image. |Example|Script (CLI)|Description| |-||| For more information, see [Deploy machine learning models to managed online endp ### Configure online endpoint > [!TIP]-> * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). +> * `name`: The name of the endpoint. It must be unique in the Azure region. The name for an endpoint must start with an upper- or lowercase letter and only consist of '-'s and alphanumeric characters. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). 
> * `auth_mode` : Use `key` for key-based authentication. Use `aml_token` for Azure Machine Learning token-based authentication. A `key` doesn't expire, but `aml_token` does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). Optionally, you can add a description and tags to your endpoint. Now that you've understood how the YAML was constructed, create your endpoint. az ml online-endpoint create --name tfserving-endpoint -f endpoints/online/custom-container/tfserving-endpoint.yml ``` -Creating a deployment may take few minutes. +Creating a deployment might take a few minutes. ```azurecli az ml online-deployment create --name tfserving-deployment -f endpoints/online/custom-container/tfserving-deployment.yml --all-traffic |
machine-learning | How To Deploy Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-online-endpoints.md | cd azureml-examples To define an endpoint, you need to specify: -* Endpoint name: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). +* Endpoint name: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). * Authentication mode: The authentication method for the endpoint. Choose between key-based authentication and Azure Machine Learning token-based authentication. A key doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). * Optionally, you can add a description and tags to your endpoint. The following snippet shows the *endpoints/online/managed/sample/endpoint.yml* f :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/sample/endpoint.yml"::: -The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). +The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md). 
For information about limits related to managed endpoints, see [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). | Key | Description | | -- | -- | The following table describes the key attributes of a deployment: | Scoring script | The relative path to the scoring file in the source code directory. This Python code must have an `init()` function and a `run()` function. The `init()` function will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. | | Environment | The environment to host the model and code. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. | | Instance type | The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). |-| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | +| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [virtual machine quota allocation for deployments](how-to-deploy-online-endpoints.md#virtual-machine-quota-allocation-for-deployment). 
| > [!NOTE] > - The model and container image (as defined in Environment) can be referenced again at any time by the deployment when the instances behind the deployment go through security patches and/or other recovery operations. If you used a registered model or container image in Azure Container Registry for deployment and removed the model or the container image, the deployments relying on these assets can fail when reimaging happens. If you removed the model or the container image, ensure the dependent deployments are re-created or updated with alternative model or container image. For more information on creating an environment, see [Manage Azure Machine Learn ### Register the model -A model registration is a logical entity in the workspace that may contain a single model file or a directory of multiple files. As a best practice for production, you should register the model and environment. When creating the endpoint and deployment in this article, we'll assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model. +A model registration is a logical entity in the workspace that can contain a single model file or a directory of multiple files. As a best practice for production, you should register the model and environment. When creating the endpoint and deployment in this article, we'll assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model. To register the example model, follow these steps: One way to create a managed online endpoint in the studio is from the **Models** 1. Enter an __Endpoint name__. > [!NOTE]- > * Endpoint name: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). 
+ > * Endpoint name: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). > * Authentication type: The authentication method for the endpoint. Choose between key-based authentication and Azure Machine Learning token-based authentication. A `key` doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). > * Optionally, you can add a description and tags to your endpoint. If you aren't going to use the deployment, you should delete it by running the foll - [Troubleshoot online endpoints deployment](how-to-troubleshoot-online-endpoints.md) - [Enable network isolation with managed online endpoints](how-to-secure-online-endpoint.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) - [Use batch endpoints for batch scoring](batch-inference/how-to-use-batch-endpoint.md) |
machine-learning | How To Deploy With Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-with-rest.md | If you aren't going to use the deployment, you should delete it with the below command * Learn [safe rollout for online endpoints](how-to-safely-rollout-online-endpoints.md). * [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md). * [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md).-* Learn about limits on managed online endpoints in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). +* Learn about [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). |
machine-learning | How To Manage Quotas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-quotas.md | Title: Manage resources and quotas -description: Learn about the quotas and limits on resources for Azure Machine Learning and how to request quota increases. +description: Learn about the quotas and limits on resources for Azure Machine Learning and how to request quota and limit increases. -# Manage and increase quotas for resources with Azure Machine Learning +# Manage and increase quotas and limits for resources with Azure Machine Learning -Azure uses limits and quotas to prevent budget overruns due to fraud, and to honor Azure capacity constraints. Consider these limits as you scale for production workloads. In this article, you learn about: +Azure uses quotas and limits to prevent budget overruns due to fraud, and to honor Azure capacity constraints. Consider these limits as you scale for production workloads. In this article, you learn about: > [!div class="checklist"] > + Default limits on Azure resources related to [Azure Machine Learning](overview-what-is-azure-machine-learning.md). Azure uses limits and quotas to prevent budget overruns due to fraud, and to hon > + Viewing your quotas and limits. > + Requesting quota increases. -Along with managing quotas, you can learn how to [plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md) or learn about the [service limits in Azure Machine Learning](resource-limits-capacity.md). +Along with managing quotas and limits, you can learn how to [plan and manage costs for Azure Machine Learning](concept-plan-manage-cost.md) or learn about the [service limits in Azure Machine Learning](resource-limits-capacity.md). ## Special considerations + Quotas are applied to each subscription in your account. If you have multiple subscriptions, you must request a quota increase for each subscription. 
-+ A quota is a *credit limit* on Azure resources, *not a capacity guarantee*. If you have large-scale capacity needs, [contact Azure support to increase your quota](#request-quota-increases). ++ A quota is a *credit limit* on Azure resources, *not a capacity guarantee*. If you have large-scale capacity needs, [contact Azure support to increase your quota](#request-quota-and-limit-increases). + A quota is shared across all the services in your subscriptions, including Azure Machine Learning. Calculate usage across all services when you're evaluating capacity. Along with managing quotas, you can learn how to [plan and manage costs for Azur + **Default limits vary by offer category type**, such as free trial, pay-as-you-go, and virtual machine (VM) series (such as Dv2, F, and G). -## Default resource quotas +## Default resource quotas and limits -In this section, you learn about the default and maximum quota limits for the following resources: +In this section, you learn about the default and maximum quotas and limits for the following resources: + Azure Machine Learning assets- + Azure Machine Learning computes (including serverless Spark) - + Azure Machine Learning online endpoints (both managed and Kubernetes) - + Azure Machine Learning pipelines ++ Azure Machine Learning computes (including serverless Spark)++ Azure Machine Learning shared quota++ Azure Machine Learning online endpoints (both managed and Kubernetes) and batch endpoints++ Azure Machine Learning pipelines++ Azure Machine Learning integration with Synapse + Virtual machines + Azure Container Instances + Azure Storage In this section, you learn about the default and maximum quota limits for the fo > [!IMPORTANT] > Limits are subject to change. For the latest information, see [Service limits in Azure Machine Learning](resource-limits-capacity.md). -- ### Azure Machine Learning assets The following limits on assets apply on a *per-workspace* basis. 
In addition, the maximum **run time** is 30 days and the maximum number of **met > * The *quota on the number of cores* is split by each VM Family and cumulative total cores. > * The *quota on the number of unique compute resources* per region is separate from the VM core quota, as it applies only to the managed compute resources of Azure Machine Learning. -To raise the limits for the following items, [Request a quota increase](#request-quota-increases): +To raise the limits for the following items, [Request a quota increase](#request-quota-and-limit-increases): * VM family core quotas. To learn more about which VM family to request a quota increase for, see [virtual machine sizes in Azure](../virtual-machines/sizes.md). For example, GPU VM families start with an "N" in their family name (such as the NCv3 series). * Total subscription core quotas Azure Machine Learning provides a pool of shared quota that is available for dif Use of the shared quota pool is available for running Spark jobs and for testing inferencing for Llama models from the Model Catalog. You should use the shared quota only for creating temporary test endpoints, not production endpoints. For endpoints in production, you should request dedicated quota by [filing a support ticket](https://ml.azure.com/quota). Billing for shared quota is usage-based, just like billing for dedicated virtual machine families. -### Azure Machine Learning managed online endpoints --Azure Machine Learning managed online endpoints have limits described in the following table. These limits are _regional_, meaning that you can use up to these limits per each region you're using. Notice that some of the limits are shared with all the types of endpoints in the region (managed online endpoints, Kubernetes online endpoints, and batch endpoints). 
--| **Resource** | **Limit** | **Allows exception** | -| | | | -| Endpoint name| Endpoint names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> | - | -| Deployment name| Deployment names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> | - | -| Number of endpoints per subscription | 100 <sup>2</sup> | Yes | -| Number of deployments per subscription | 200 | Yes | -| Number of deployments per endpoint | 20 | Yes | -| Number of instances per deployment | 20 <sup>3</sup> | Yes | -| Max request time-out at endpoint level | 180 seconds | - | -| Total requests per second at endpoint level for all deployments | 500 <sup>4</sup> | Yes | -| Total connections per second at endpoint level for all deployments | 500 <sup>4</sup> | Yes | -| Total connections active at endpoint level for all deployments | 500 <sup>4</sup> | Yes | -| Total bandwidth at endpoint level for all deployments | 5 MBPS <sup>4</sup> | Yes | --<sup>1</sup> Single hyphens like, `my-endpoint-name`, are accepted in endpoint and deployment names. --<sup>2</sup> Limit shared with other types of endpoints. +### Azure Machine Learning online endpoints and batch endpoints -<sup>3</sup> We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you receive an error. +Azure Machine Learning online endpoints and batch endpoints have resource limits described in the following table. -<sup>4</sup> The default limit for some subscriptions may be different. For example, when you request a limit increase it may show 100 instead. If you request a limit increase, be sure to calculate related limit increases you might need. 
For example, if you request a limit increase for requests per second, you might also want to compute the required connections and bandwidth limits and include that limit increase in the same request. +> [!IMPORTANT] +> These limits are _regional_, meaning that you can use up to these limits in each region you're using. For example, if your current limit for number of endpoints per subscription is 100, you can create 100 endpoints in the East US region, 100 endpoints in the West US region, and 100 endpoints in each of the other supported regions in a single subscription. The same principle applies to all the other limits. To determine the current usage for an endpoint, [view the metrics](how-to-monitor-online-endpoints.md#metrics). -To request an exception from the Azure Machine Learning product team, use the steps in the [Endpoint quota increases](#endpoint-quota-increases). --### Azure Machine Learning Kubernetes online endpoints --Azure Machine Learning Kubernetes online endpoints have limits described in the following table. --| **Resource** | **Limit** | -| | | -| Endpoint name| Same as [managed online endpoint](#azure-machine-learning-managed-online-endpoints) | -| Deployment name| Same as [managed online endpoint](#azure-machine-learning-managed-online-endpoints)| -| Number of endpoints per subscription | 50 | -| Number of deployments per subscription | 200 | -| Number of deployments per endpoint | 20 | -| Max request time-out at endpoint level | 300 seconds | --The sum of Kubernetes online endpoints, managed online endpoints, and managed batch endpoints under each subscription can't exceed 50. Similarly, the sum of Kubernetes online deployments, managed online deployments and managed batch deployments under each subscription can't exceed 200. --### Azure Machine Learning batch endpoints --Azure Machine Learning batch endpoints have limits described in the following table. 
These limits are _regional_, meaning that you can use up to these limits for each region you're using. Notice that some of the limits are shared with all the types of endpoints in the region (managed online endpoints, Kubernetes online endpoints, and batch endpoints). +To request an exception from the Azure Machine Learning product team, use the steps in the [Endpoint limit increases](#endpoint-limit-increases). ++| **Resource** | **Limit <sup>1</sup>** | **Allows exception** | **Applies to** | +| | - | | | +| Endpoint name| Endpoint names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>2</sup> | - | All types of endpoints <sup>3</sup> | +| Deployment name| Deployment names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>2</sup> | - | All types of endpoints <sup>3</sup> | +| Number of endpoints per subscription | 100 | Yes | All types of endpoints <sup>3</sup> | +| Number of deployments per subscription | 500 | Yes | All types of endpoints <sup>3</sup>| +| Number of deployments per endpoint | 20 | Yes | All types of endpoints <sup>3</sup> | +| Number of instances per deployment | 50 <sup>4</sup> | Yes | Managed online endpoint | +| Max request time-out at endpoint level | 180 seconds | - | Managed online endpoint | +| Max request time-out at endpoint level | 300 seconds | - | Kubernetes online endpoint | +| Total requests per second at endpoint level for all deployments | 500 <sup>5</sup> | Yes | Managed online endpoint | +| Total connections per second at endpoint level for all deployments | 500 <sup>5</sup> | Yes | Managed online endpoint | +| Total connections active at endpoint level for all deployments | 500 <sup>5</sup> | Yes | Managed online endpoint | +| Total bandwidth at endpoint level for all deployments | 5 MBPS <sup>5</sup> | Yes | Managed online endpoint | -| **Resource** | **Limit** | **Allows exception** | -| | | | -| Endpoint 
name| Endpoint names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> | - | -| Deployment name| Deployment names must <li> Begin with a letter <li> Be 3-32 characters in length <li> Only consist of letters and numbers <sup>1</sup> | - | -| Number of endpoints per subscription | 100 <sup>2</sup> | Yes | -| Number of deployments per subscription | 500 | Yes | -| Number of deployments per endpoint | 20 | Yes | -| Number of instances per deployment | 50 | Yes | --<sup>1</sup> Single hyphens like, `my-endpoint-name`, are accepted in endpoint and deployment names. --<sup>2</sup> Limit shared with other types of endpoints. +> [!NOTE] +> 1. This is a regional limit. For example, if the current limit on the number of endpoints is 100, you can create 100 endpoints in the East US region, 100 endpoints in the West US region, and 100 endpoints in each of the other supported regions in a single subscription. The same principle applies to all the other limits. +> 2. Single dashes, like `my-endpoint-name`, are accepted in endpoint and deployment names. +> 3. Endpoints and deployments can be of different types, but limits apply to the sum of all types. For example, the sum of managed online endpoints, Kubernetes online endpoints, and batch endpoints under each subscription can't exceed 100 per region by default. Similarly, the sum of managed online deployments, Kubernetes online deployments, and batch deployments under each subscription can't exceed 500 per region by default. +> 4. We reserve 20% extra compute resources for performing upgrades. For example, if you request 10 instances in a deployment, you must have a quota for 12. Otherwise, you receive an error. There are some VM SKUs that are exempt from extra quota. See [virtual machine quota allocation for deployment](how-to-deploy-online-endpoints.md#virtual-machine-quota-allocation-for-deployment) for more information. +> 5. Requests per second, connections, bandwidth, and so on are related. 
If you request an increase for any of these limits, be sure to estimate and calculate the other related limits together. ### Azure Machine Learning pipelines [Azure Machine Learning pipelines](concept-ml-pipelines.md) have the following limits. Azure Machine Learning batch endpoints have limits described in the following ta ### Azure Machine Learning integration with Synapse -Azure Machine Learning serverless Spark provides easy access to distributed computing capability for scaling Apache Spark jobs. Serverless Spark utilizes the same dedicated quota as Azure Machine Learning Compute. Quota limits can be increased by submitting a support ticket and [requesting for quota increase](#request-quota-increases) for ESv3 series under the "Machine Learning Service: Virtual Machine Quota" category. +Azure Machine Learning serverless Spark provides easy access to distributed computing capability for scaling Apache Spark jobs. Serverless Spark utilizes the same dedicated quota as Azure Machine Learning Compute. Quota limits can be increased by submitting a support ticket and [requesting a quota and limit increase](#request-quota-and-limit-increases) for the ESv3 series under the "Machine Learning Service: Virtual Machine Quota" category. To view quota usage, navigate to Machine Learning studio and select the subscription name that you would like to see usage for. Select "Quota" in the left panel. For more information, see [Container Instances limits](../azure-resource-manager ### Storage Azure Storage has a limit of 250 storage accounts per region, per subscription. This limit includes both Standard and Premium storage accounts. + ## Workspace-level quotas Use workspace-level quotas to manage Azure Machine Learning compute target allocation between multiple [workspaces](concept-workspace.md) in the same subscription. 
You can't set a negative value or a value higher than the subscription-level quo > [!NOTE] > You need subscription-level permissions to set a quota at the workspace level. + ## View quotas in the studio 1. When you create a new compute resource, by default you see only VM sizes that you already have quota to use. Switch the view to **Select from all options**. You manage the Azure Machine Learning compute quota on your subscription separat 4. You can switch between a subscription-level view and a workspace-level view. -## Request quota increases ++## Request quota and limit increases ++A VM quota increase raises the number of cores per VM family per region. An endpoint limit increase raises the endpoint-specific limits per subscription per region. Make sure to choose the right category when you submit the increase request, as described in the next sections. ++### VM quota increases To raise the limit for Azure Machine Learning VM quota above the default limit, you can request a quota increase from the **Usage + quotas** view above or submit a quota increase request from Azure Machine Learning studio. To raise the limit for Azure Machine Learning VM quota above the default limit, [![Screenshot of the new VM quota request form.](./media/how-to-manage-quotas/mlstudio-new-quota-limit.png)](./media/how-to-manage-quotas/mlstudio-new-quota-limit.png) -### Endpoint quota increases +### Endpoint limit increases -To raise endpoint quota, [open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/). When requesting for quota increase, provide the following information: +To raise an endpoint limit, [open an online customer support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest/). When requesting an endpoint limit increase, provide the following information: 1. 
When opening the support request, select __Service and subscription limits (quotas)__ as the __Issue type__.-2. Select the subscription of your choice -3. Select __Machine Learning Service: Endpoint Limits__ as the __Quota type__. -1. On the __Additional details__ tab, select __Enter details__ and then provide the quota you'd like to increase and the new value, the reason for the quota increase request, and __location(s)__ where you need the quota increase. Finally, select __Save and continue__ to continue. +1. Select the subscription of your choice. +1. Select __Machine Learning Service: Endpoint Limits__ as the __Quota type__. +1. On the __Additional details__ tab, you need to provide detailed reasons for the limit increase in order for your request to be processed. Select __Enter details__ and then provide the limit you'd like to increase and the new value for each limit, the reason for the limit increase request, and __location(s)__ where you need the limit increase. +Be sure to include the following information in the reason for the limit increase: + 1. Description of your scenario and workload (such as text, image, and so on). + 1. Rationale for the requested increase. + 1. Provide the target throughput and its pattern (average/peak QPS, concurrent users). + 1. Provide the target latency at scale and the current latency you observe with a single instance. + 1. Provide the VM SKU and the total number of instances needed to support the target throughput and latency. Provide how many endpoints/deployments/instances you plan to use in each region. + 1. Confirm whether you have a benchmark test that indicates that the selected VM SKU and number of instances would meet your throughput and latency requirement. + 1. Provide the type of the payload and the size of a single payload. Network bandwidth should align with the payload size and requests per second. + 1. 
Provide your planned timeline (by when you need the increased limits; provide a staged plan if possible) and confirm that (1) the cost of running at that scale is reflected in your budget and (2) the target VM SKUs are approved. +1. Finally, select __Save and continue__. ++[![Screenshot of the endpoint limit details form.](./media/how-to-manage-quotas/quota-details.png)](./media/how-to-manage-quotas/quota-details.png) -[![Screenshot of the endpoint quota details form.](./media/how-to-manage-quotas/quota-details.png)](./media/how-to-manage-quotas/quota-details.png) +> [!NOTE] +> This endpoint limit increase request is different from a VM quota increase request. If your request is related to a VM quota increase, follow the instructions in the [VM quota increases](#vm-quota-increases) section. ## Next steps |
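As a quick check of the upgrade-reserve rule described in the limits table notes (20% extra compute is reserved for upgrades), here is a minimal sketch; `required_instance_quota` is a hypothetical helper for illustration, not part of any Azure SDK:

```python
import math

def required_instance_quota(requested_instances: int, reserve_fraction: float = 0.20) -> int:
    """Quota (in instances) needed for a managed online deployment,
    including the extra share reserved for upgrades."""
    return math.ceil(requested_instances * (1 + reserve_fraction))

# Requesting 10 instances requires quota for 12, matching the example in the notes.
print(required_instance_quota(10))  # 12
```

Multiply by the cores per instance of your chosen VM SKU to estimate the VM family core quota to request.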
machine-learning | How To Manage Rest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-rest.md | providers/Microsoft.MachineLearningServices/workspaces/<YOUR-WORKSPACE-NAME>/com -H "Authorization:Bearer <YOUR-ACCESS-TOKEN>" ``` -To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. As specified in the reference at [Machine Learning Compute - Create Or Update SDK Reference](/rest/api/azureml/2023-04-01/workspaces/create-or-update), the following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes: +To create or overwrite a named compute resource, you'll use a PUT request. In the following, in addition to the now-familiar replacements of `YOUR-SUBSCRIPTION-ID`, `YOUR-RESOURCE-GROUP`, `YOUR-WORKSPACE-NAME`, and `YOUR-ACCESS-TOKEN`, replace `YOUR-COMPUTE-NAME`, and values for `location`, `vmSize`, `vmPriority`, `scaleSettings`, `adminUserName`, and `adminUserPassword`. The following command creates a dedicated, single-node Standard_D1 (a basic CPU compute resource) that will scale down after 30 minutes: ```bash curl -X PUT \ |
machine-learning | How To Manage Synapse Spark Pool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-synapse-spark-pool.md | synapse_comp = SynapseSparkCompute( ml_client.begin_create_or_update(synapse_comp) ``` -A Synapse Spark pool can also use a user-assigned identity. For a user-assigned identity, you can pass a managed identity definition, using the [IdentityConfiguration](/python/api/azure-ai-ml/azure.ai.ml.entities.identityconfiguration) class, as the `identity` parameter of the `SynapseSparkCompute` class. For the managed identity definition used in this way, set the `type` to `UserAssigned`. In addition, pass a `user_assigned_identities` parameter. The parameter `user_assigned_identities` is a list of objects of the [UserAssignedIdentity](/python/api/azure-ai-ml/azure.ai.ml.entities.userassignedidentity) class. The `resource_id`of the user-assigned identity populates each `UserAssignedIdentity` class object. This code snippet attaches a Synapse Spark pool that uses a user-assigned identity: +A Synapse Spark pool can also use a user-assigned identity. For a user-assigned identity, you can pass a managed identity definition, using the [IdentityConfiguration](/python/api/azure-ai-ml/azure.ai.ml.entities.identityconfiguration) class, as the `identity` parameter of the `SynapseSparkCompute` class. For the managed identity definition used in this way, set the `type` to `UserAssigned`. In addition, pass a `user_assigned_identities` parameter. The parameter `user_assigned_identities` is a list of objects of the UserAssignedIdentity class. The `resource_id`of the user-assigned identity populates each `UserAssignedIdentity` class object. 
This code snippet attaches a Synapse Spark pool that uses a user-assigned identity: ```python # import required libraries synapse_identity = IdentityConfiguration(type="SystemAssigned") synapse_comp = SynapseSparkCompute(name=synapse_name, resource_id=synapse_resource, identity=synapse_identity) ml_client.begin_create_or_update(synapse_comp) ``` -A Synapse Spark pool can also use a user-assigned identity. For a user-assigned identity, you can pass a managed identity definition, using the [IdentityConfiguration](/python/api/azure-ai-ml/azure.ai.ml.entities.identityconfiguration) class, as the `identity` parameter of the `SynapseSparkCompute` class. For the managed identity definition used in this way, set the `type` to `UserAssigned`. In addition, pass a `user_assigned_identities` parameter. The parameter `user_assigned_identities` is a list of objects of the [UserAssignedIdentity](/python/api/azure-ai-ml/azure.ai.ml.entities.userassignedidentity) class. The `resource_id`of the user-assigned identity populates each `UserAssignedIdentity` class object. This code snippet updates a Synapse Spark pool to use a user-assigned identity: +A Synapse Spark pool can also use a user-assigned identity. For a user-assigned identity, you can pass a managed identity definition, using the [IdentityConfiguration](/python/api/azure-ai-ml/azure.ai.ml.entities.identityconfiguration) class, as the `identity` parameter of the `SynapseSparkCompute` class. For the managed identity definition used in this way, set the `type` to `UserAssigned`. In addition, pass a `user_assigned_identities` parameter. The parameter `user_assigned_identities` is a list of objects of the UserAssignedIdentity class. The `resource_id` of the user-assigned identity populates each `UserAssignedIdentity` class object. 
This code snippet updates a Synapse Spark pool to use a user-assigned identity: ```python # import required libraries Some user scenarios may require access to a serverless Spark compute, during an - [Interactive Data Wrangling with Apache Spark in Azure Machine Learning](./interactive-data-wrangling-with-apache-spark-azure-ml.md) -- [Submit Spark jobs in Azure Machine Learning](./how-to-submit-spark-jobs.md)+- [Submit Spark jobs in Azure Machine Learning](./how-to-submit-spark-jobs.md) |
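The user-assigned identity configuration described in the paragraphs above can be sketched as follows. This is a minimal, non-runnable sketch of the azure-ai-ml (v2) classes named in the text; the subscription, workspace, pool names, and the identity resource ID are placeholders:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    IdentityConfiguration,
    SynapseSparkCompute,
    UserAssignedIdentity,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<SUBSCRIPTION_ID>",     # placeholder
    resource_group_name="<RESOURCE_GROUP>",  # placeholder
    workspace_name="<AML_WORKSPACE_NAME>",   # placeholder
)

# Placeholder ARM resource ID of an existing user-assigned managed identity.
uai_resource_id = (
    "/subscriptions/<SUBSCRIPTION_ID>/resourceGroups/<RESOURCE_GROUP>"
    "/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<UAI_NAME>"
)

# type="UserAssigned" plus a list of UserAssignedIdentity objects,
# each populated with the identity's resource_id, as the text describes.
synapse_identity = IdentityConfiguration(
    type="UserAssigned",
    user_assigned_identities=[UserAssignedIdentity(resource_id=uai_resource_id)],
)

synapse_comp = SynapseSparkCompute(
    name="<ATTACHED_SPARK_POOL_NAME>",          # placeholder
    resource_id="<SYNAPSE_SPARK_POOL_ARM_ID>",  # placeholder
    identity=synapse_identity,
)
ml_client.begin_create_or_update(synapse_comp)
```

Running this requires an Azure subscription and the `azure-ai-ml` and `azure-identity` packages; treat it as a configuration template rather than a tested command.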
machine-learning | How To Monitor Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-monitor-online-endpoints.md | For example, you can split along the deployment dimension to compare the request **Bandwidth throttling** -Bandwidth will be throttled if the quota limits are exceeded for _managed_ online endpoints. For more information on limits, see the article on [managing and increasing quotas for managed online endpoints](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)). To determine if requests are throttled: +Bandwidth will be throttled if the quota limits are exceeded for _managed_ online endpoints. For more information on limits, see the article on [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). To determine if requests are throttled: - Monitor the "Network bytes" metric - The response trailers will have the fields: `ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`. The values of the fields are the delays, in milliseconds, of the bandwidth throttling. For more information, see [Bandwidth limit issues](how-to-troubleshoot-online-endpoints.md#bandwidth-limit-issues). There are three logs that can be enabled for online endpoints: * **AMLOnlineEndpointConsoleLog**: Contains logs that the containers output to the console. Below are some cases: - * If the container fails to start, the console log may be useful for debugging. + * If the container fails to start, the console log can be useful for debugging. * Monitor container behavior and make sure that all requests are correctly handled. * Write request IDs in the console log. Joining the request ID, the AMLOnlineEndpointConsoleLog, and AMLOnlineEndpointTrafficLog in the Log Analytics workspace, you can trace a request from the network entry point of an online endpoint to the container. 
- * You may also use this log for performance analysis in determining the time required by the model to process each request. + * You can also use this log for performance analysis in determining the time required by the model to process each request. * **AMLOnlineEndpointEventLog**: Contains event information regarding the container's life cycle. Currently, we provide information on the following types of events: |
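The bandwidth-throttling trailer fields mentioned above (`ms-azureml-bandwidth-request-delay-ms` and `ms-azureml-bandwidth-response-delay-ms`) report delays in milliseconds. A minimal sketch of interpreting them, using hypothetical trailer values rather than a live response:

```python
def total_throttle_delay_ms(trailers: dict) -> int:
    """Sum the request- and response-side bandwidth-throttling delays (ms).
    A missing field means no throttling delay on that side."""
    return sum(
        int(trailers.get(name, "0"))
        for name in (
            "ms-azureml-bandwidth-request-delay-ms",
            "ms-azureml-bandwidth-response-delay-ms",
        )
    )

# Hypothetical trailer values from a throttled request.
example = {
    "ms-azureml-bandwidth-request-delay-ms": "120",
    "ms-azureml-bandwidth-response-delay-ms": "45",
}
print(total_throttle_delay_ms(example))  # 165
```

A nonzero total, together with growth in the "Network bytes" metric, indicates the endpoint is hitting its bandwidth quota.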
machine-learning | How To Safely Rollout Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-safely-rollout-online-endpoints.md | The following table lists key attributes to specify when you define an endpoint. | Attribute | Description | |-|--|-| Name | **Required.** Name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | +| Name | **Required.** Name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). | | Authentication mode | The authentication method for the endpoint. Choose between key-based authentication `key` and Azure Machine Learning token-based authentication `aml_token`. A key doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](how-to-authenticate-online-endpoint.md). | | Description | Description of the endpoint. | | Tags | Dictionary of tags for the endpoint. | A *deployment* is a set of resources required for hosting the model that does th | Scoring script | Python code that executes the model on a given input request. This value can be the relative path to the scoring file in the source code directory.<br>The scoring script receives data submitted to a deployed web service and passes it to the model. The script then executes the model and returns its response to the client. The scoring script is specific to your model and must understand the data that the model expects as input and returns as output.<br>In this example, we have a *score.py* file. This Python code must have an `init()` function and a `run()` function. 
The `init()` function will be called after the model is created or updated (you can use it to cache the model in memory, for example). The `run()` function is called at every invocation of the endpoint to do the actual scoring and prediction. | | Environment | **Required.** The environment to host the model and code. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. The environment can be a Docker image with Conda dependencies, a Dockerfile, or a registered environment. | | Instance type | **Required.** The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). |-| Instance count | **Required.** The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | +| Instance count | **Required.** The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). | To see a full list of attributes that you can specify when you create a deployment, see [CLI (v2) managed online deployment YAML schema](/azure/machine-learning/reference-yaml-deployment-managed-online) or [SDK (v2) ManagedOnlineDeployment Class](/python/api/azure-ai-ml/azure.ai.ml.entities.managedonlinedeployment). First set the endpoint's name and then configure it. 
In this article, you'll use :::code language="yaml" source="~/azureml-examples-main/cli/endpoints/online/managed/sample/endpoint.yml"::: -The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed endpoints, see [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). +The reference for the endpoint YAML format is described in the following table. To learn how to specify these attributes, see the [online endpoint YAML reference](reference-yaml-endpoint-online.md). For information about limits related to managed online endpoints, see [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). | Key | Description | | -- | -- | When you create a managed online endpoint in the Azure Machine Learning studio, ### Register your model -A model registration is a logical entity in the workspace. This entity may contain a single model file or a directory of multiple files. As a best practice for production, you should register the model and environment. When creating the endpoint and deployment in this article, we'll assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model. +A model registration is a logical entity in the workspace. This entity can contain a single model file or a directory of multiple files. As a best practice for production, you should register the model and environment. When creating the endpoint and deployment in this article, we'll assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model. 
To register the example model, follow these steps: This action opens up a window for you to specify details about your endpoint and ## Confirm your existing deployment -One way to confirm your existing deployment is to invoke your endpoint so that it can score your model for a given input request. When you invoke your endpoint via the CLI or Python SDK, you may choose to specify the name of the deployment that will receive the incoming traffic. +One way to confirm your existing deployment is to invoke your endpoint so that it can score your model for a given input request. When you invoke your endpoint via the CLI or Python SDK, you can choose to specify the name of the deployment that will receive the incoming traffic. > [!NOTE] > Unlike the CLI or Python SDK, Azure Machine Learning studio requires you to specify a deployment when you invoke an endpoint. Mirroring has the following limitations: * Mirroring is supported for the CLI (v2) (version 2.4.0 or above) and Python SDK (v2) (version 1.0.0 or above). If you use an older version of CLI/SDK to update an endpoint, you'll lose the mirror traffic setting. * Mirroring isn't currently supported for Kubernetes online endpoints. * You can mirror traffic to only one deployment in an endpoint.-* The maximum percentage of traffic you can mirror is 50%. This limit is to reduce the effect on your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) (default 5 MBPS)ΓÇöyour endpoint bandwidth is throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#metrics-at-endpoint-scope). +* The maximum percentage of traffic you can mirror is 50%. 
This limit is to reduce the effect on your [endpoint bandwidth quota](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) (default 5 MBPS); your endpoint bandwidth is throttled if you exceed the allocated quota. For information on monitoring bandwidth throttling, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md#metrics-at-endpoint-scope). Also note the following behaviors: Alternatively, you can delete a managed online endpoint directly by selecting th - [Use network isolation with managed online endpoints](how-to-secure-online-endpoint.md) - [Access Azure resources with an online endpoint and managed identity](how-to-access-resources-from-endpoints-managed-identities.md) - [Monitor managed online endpoints](how-to-monitor-online-endpoints.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md) - [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md) - [Troubleshooting online endpoints deployment and scoring](how-to-troubleshoot-managed-online-endpoints.md) |
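The scoring-script contract described in the deployment attributes above (a *score.py* with an `init()` and a `run()` function) can be sketched as follows. This is an illustrative sketch, not the article's actual *score.py*: the doubling "model" is a stand-in, and a real script would load model files (typically from the `AZUREML_MODEL_DIR` path) in `init()`:

```python
import json

model = None  # populated by init()

def init():
    # Called once after the deployment is created or updated; cache the model here.
    # A real score.py would load model artifacts instead of this stand-in.
    global model
    model = lambda values: [v * 2 for v in values]

def run(raw_data):
    # Called on every invocation of the endpoint to do the actual scoring;
    # parse the request payload, execute the model, and return the response.
    data = json.loads(raw_data)["data"]
    return json.dumps({"result": model(data)})

init()
print(run(json.dumps({"data": [1, 2, 3]})))  # {"result": [2, 4, 6]}
```

The input and output shapes here are assumptions; your script must match what your model expects and what your clients send.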
machine-learning | How To Secure Online Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-online-endpoint.md | Because the workspace is configured to have a managed virtual network, any deplo :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_endpoint_inbound_blocked" ::: + If you disable public network access for the endpoint, the only way to invoke the endpoint is by using a private endpoint, which can access the workspace, in your virtual network. For more information, see [secure inbound scoring requests](concept-secure-online-endpoint.md#secure-inbound-scoring-requests) and [configure a private endpoint for an Azure Machine Learning workspace](how-to-configure-private-link.md). + Alternatively, if you'd like to allow the endpoint to receive scoring requests from the internet, uncomment the following code and run it instead. :::code language="azurecli" source="~/azureml-examples-main/cli/deploy-managed-online-endpoint-workspacevnet.sh" ID="create_endpoint_inbound_allowed" ::: |
machine-learning | How To Train Mlflow Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-train-mlflow-projects.md | |
machine-learning | How To Troubleshoot Online Endpoints | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-online-endpoints.md | To debug conda installation problems, try the following steps: 1. If there are errors locally, try resolving the conda environment and creating a functional one before redeploying. -1. If the container crashes even if it resolves locally, the SKU size used for deployment may be too small. - 1. Conda package installation occurs at runtime, so if the SKU size is too small to accommodate all of the packages detailed in the `conda.yaml` environment file, then the container may crash. - 1. A Standard_F4s_v2 VM is a good starting SKU size, but larger ones may be needed depending on which dependencies are specified in the conda file. +1. If the container crashes even if it resolves locally, the SKU size used for deployment might be too small. + 1. Conda package installation occurs at runtime, so if the SKU size is too small to accommodate all of the packages detailed in the `conda.yaml` environment file, then the container might crash. + 1. A Standard_F4s_v2 VM is a good starting SKU size, but larger ones might be needed depending on which dependencies are specified in the conda file. 1. For a Kubernetes online endpoint, the Kubernetes cluster must have a minimum of 4 vCPU cores and 8-GB memory. ## Get container logs If you're creating or updating a Kubernetes online deployment, you can see [Comm ### ERROR: ImageBuildFailure -This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location may be returned as part of the error. For example, `"the build log under the storage account '[storage-account-name]' in the container '[container-name]' at the path '[path-to-the-log]'"`. 
+This error is returned when the environment (docker image) is being built. You can check the build log for more information on the failure(s). The build log is located in the default storage for your Azure Machine Learning workspace. The exact location might be returned as part of the error. For example, `"the build log under the storage account '[storage-account-name]' in the container '[container-name]' at the path '[path-to-the-log]'"`. The following list contains common image build failure scenarios: We also recommend reviewing the default [probe settings](reference-yaml-deployme If the error message mentions `"container registry authorization failure"` that means you can't access the container registry with the current credentials. The desynchronization of a workspace resource's keys can cause this error and it takes some time to automatically synchronize.-However, you can [manually call for a synchronization of keys](/cli/azure/ml/workspace#az-ml-workspace-sync-keys), which may resolve the authorization failure. +However, you can [manually call for a synchronization of keys](/cli/azure/ml/workspace#az-ml-workspace-sync-keys), which might resolve the authorization failure. -Container registries that are behind a virtual network may also encounter this error if set up incorrectly. You must verify that the virtual network that you have set up properly. +Container registries that are behind a virtual network might also encounter this error if set up incorrectly. You must verify that the virtual network you set up is configured properly. #### Image build compute not set in a private workspace with VNet If the error message mentions `"failed to communicate with the workspace's conta #### Generic image build failure As stated previously, you can check the build log for more information on the failure.-If no obvious error is found in the build log and the last line is `Installing pip dependencies: ...working...`, then a dependency may cause the error. 
Pinning version dependencies in your conda file can fix this problem. +If no obvious error is found in the build log and the last line is `Installing pip dependencies: ...working...`, then a dependency might cause the error. Pinning version dependencies in your conda file can fix this problem. We also recommend [deploying locally](#deploy-locally) to test and debug your models locally before deploying to the cloud. Additionally, the following list is of common resources that might run out of qu Before deploying a model, you need to have enough compute quota. This quota defines how many virtual cores are available per subscription, per workspace, per SKU, and per region. Each deployment subtracts from available quota and adds it back after deletion, based on the type of the SKU. -A possible mitigation is to check if there are unused deployments that you can delete. Or you can submit a [request for a quota increase](how-to-manage-quotas.md#request-quota-increases). +A possible mitigation is to check if there are unused deployments that you can delete. Or you can submit a [request for a quota increase](how-to-manage-quotas.md#request-quota-and-limit-increases). #### Cluster quota -This issue occurs when you don't have enough Azure ML Compute cluster quota. This quota defines the total number of clusters that may be in use at one time per subscription to deploy CPU or GPU nodes in Azure Cloud. +This issue occurs when you don't have enough Azure Machine Learning Compute cluster quota. This quota defines the total number of clusters that might be in use at one time per subscription to deploy CPU or GPU nodes in Azure Cloud. -A possible mitigation is to check if there are unused deployments that you can delete. Or you can submit a [request for a quota increase](how-to-manage-quotas.md#request-quota-increases). Make sure to select `Machine Learning Service: Cluster Quota` as the quota type for this quota increase request. 
+A possible mitigation is to check if there are unused deployments that you can delete. Or you can submit a [request for a quota increase](how-to-manage-quotas.md#request-quota-and-limit-increases). Make sure to select `Machine Learning Service: Cluster Quota` as the quota type for this quota increase request. #### Disk quota When you're creating a managed online endpoint, role assignment is required for #### Endpoint quota -Try to delete some unused endpoints in this subscription. If all of your endpoints are actively in use, you can try [requesting an endpoint quota increase](how-to-manage-quotas.md#endpoint-quota-increases). --For Kubernetes online endpoints, there's the endpoint quota boundary at the cluster level as well, you can check the [Kubernetes online endpoint quota](how-to-manage-quotas.md#azure-machine-learning-kubernetes-online-endpoints) section for more details. +Try to delete some unused endpoints in this subscription. If all of your endpoints are actively in use, you can try [requesting an endpoint limit increase](how-to-manage-quotas.md#endpoint-limit-increases). To learn more about the endpoint limit, see [Endpoint quota with Azure Machine Learning online endpoints and batch endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). #### Kubernetes quota For more information, please see [Container Registry Authorization Error](#conta #### Invalid template function specification -This error occurs when a template function has been specified incorrectly. Either fix the policy or remove the policy assignment to unblock. The error message may include the policy assignment name and the policy definition to help you debug this error, and the [Azure policy definition structure article](https://aka.ms/policy-avoiding-template-failures), which discusses tips to avoid template failures. +This error occurs when a template function has been specified incorrectly. 
Either fix the policy or remove the policy assignment to unblock. The error message might include the policy assignment name and the policy definition to help you debug this error, and the [Azure policy definition structure article](https://aka.ms/policy-avoiding-template-failures), which discusses tips to avoid template failures. #### Unable to download user container image Retrying the operation might allow it to be performed without cancellation. #### Operation canceled waiting for lock confirmation -Azure operations have a brief waiting period after being submitted during which they retrieve a lock to ensure that we don't run into race conditions. This error happens when the operation you submitted is the same as another operation. And the other operation is currently waiting for confirmation that it has received the lock to proceed. It may indicate that you've submitted a similar request too soon after the initial request. +Azure operations have a brief waiting period after being submitted during which they retrieve a lock to ensure that we don't run into race conditions. This error happens when the operation you submitted is the same as another operation, and the other operation is currently waiting for confirmation that it has received the lock to proceed. It might indicate that you've submitted a similar request too soon after the initial request. -Retrying the operation after waiting several seconds up to a minute may allow it to be performed without cancellation. +Retrying the operation after waiting several seconds up to a minute might allow it to be performed without cancellation. ### ERROR: InternalServerError The following list is of reasons you might run into this error when creating/upd * The Azure Arc (for Azure Arc Kubernetes clusters) or Azure Machine Learning extension (for AKS) isn't properly installed or configured. Check the Azure Arc or Azure Machine Learning extension configuration and status. 
* The Kubernetes cluster has improper network configuration. Check the proxy, network policy, or certificate. * If you're using a private AKS cluster, it's necessary to set up private endpoints for ACR, storage account, workspace in the AKS vnet. -* Make sure your Azure machine learning extension version is greater than v1.1.25. +* Make sure your Azure Machine Learning extension version is greater than v1.1.25. ### ERROR: TokenRefreshFailed The following list is of reasons you might run into this error when creating/upd ### ERROR: GetAADTokenFailed -This error is because the Kubernetes cluster request AAD token failed or timed out, check your network accessibility then try again. +This error occurs because the Kubernetes cluster's request for an Azure AD token failed or timed out. Check your network accessibility, and then try again. * You can follow the [Configure required network traffic](../machine-learning/how-to-access-azureml-behind-firewall.md#scenario-use-kubernetes-compute) to check the outbound proxy and make sure the cluster can connect to the workspace. * The workspace endpoint URL can be found in the online endpoint CRD in the cluster. You can follow the troubleshooting steps in [GetAADTokenFailed](#error-getaadtok ### ERROR: ACRTokenExchangeFailed -This error is because the Kubernetes cluster exchange ACR token failed because AAD token is unauthorized yet. Since the role assignment takes some time, so you can wait a moment then try again. +This error occurs because the Kubernetes cluster's ACR token exchange failed: the Azure AD token isn't yet authorized. Because the role assignment takes some time, wait a moment and then try again. -This failure may also be due to too many requests to the ACR service at that time, it should be a transient error, you can try again later. +This failure might also be due to too many requests to the ACR service at that time. It should be a transient error, so you can try again later. 
### ERROR: KubernetesUnaccessible The following list is of common model consumption errors resulting from the endp ### Bandwidth limit issues -Managed online endpoints have bandwidth limits for each endpoint. You find the limit configuration in [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). If your bandwidth usage exceeds the limit, your request is delayed. To monitor the bandwidth delay: +Managed online endpoints have bandwidth limits for each endpoint. You can find the limit configuration in [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). If your bandwidth usage exceeds the limit, your request is delayed. To monitor the bandwidth delay: - Use the metric "Network bytes" to understand the current bandwidth usage. For more information, see [Monitor managed online endpoints](how-to-monitor-online-endpoints.md). - There are two response trailers returned if the bandwidth limit is enforced: The following table contains common error codes when consuming managed online en | 408 | Request timeout | The model execution took longer than the timeout supplied in `request_timeout_ms` under `request_settings` of your model deployment config. | | 424 | Model Error | If your model container returns a non-200 response, Azure returns a 424. Check the `Model Status Code` dimension under the `Requests Per Minute` metric on your endpoint's [Azure Monitor Metric Explorer](../azure-monitor/essentials/metrics-getting-started.md). Or check response headers `ms-azureml-model-error-statuscode` and `ms-azureml-model-error-reason` for more information. If 424 comes with liveness or readiness probe failing, consider adjusting [probe settings](reference-yaml-deployment-managed-online.md#probesettings) to allow longer time to probe liveness or readiness of the container. 
| | 429 | Too many pending requests | Your model is currently getting more requests than it can handle. Azure Machine Learning has implemented a system that permits a maximum of `2 * max_concurrent_requests_per_instance * instance_count` requests to be processed in parallel at any given moment to guarantee smooth operation. Other requests that exceed this maximum are rejected. You can review your model deployment configuration under the `request_settings` and `scale_settings` sections to verify and adjust these settings. Additionally, as outlined in the [YAML definition for RequestSettings](reference-yaml-deployment-managed-online.md#requestsettings), it's important to ensure that the environment variable `WORKER_COUNT` is correctly passed. <br><br> If you're using autoscaling and get this error, it means your model is getting requests quicker than the system can scale up. In this situation, consider resending requests with an [exponential backoff](https://en.wikipedia.org/wiki/Exponential_backoff) to give the system the time it needs to adjust. You could also increase the number of instances by using [code to calculate instance count](#how-to-calculate-instance-count). These steps, combined with setting autoscaling, help ensure that your model is ready to handle the influx of requests. |-| 429 | Rate-limiting | The number of requests per second reached the [limit](./how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) of managed online endpoints. | +| 429 | Rate-limiting | The number of requests per second reached the [limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) of managed online endpoints. | | 500 | Internal server error | Azure Machine Learning-provisioned infrastructure is failing. 
| #### Common error codes for kubernetes online endpoints The following table contains common error codes when consuming Kubernetes online | -- | -- | | | 409 | Conflict error | When an operation is already in progress, any new operation on that same online endpoint responds with 409 conflict error. For example, If create or update online endpoint operation is in progress and if you trigger a new Delete operation it throws an error. | | 502 | Has thrown an exception or crashed in the `run()` method of the score.py file | When there's an error in `score.py`, for example an imported package doesn't exist in the conda environment, a syntax error, or a failure in the `init()` method. You can follow [here](#error-resourcenotready) to debug the file. |-| 503 | Receive large spikes in requests per second | The autoscaler is designed to handle gradual changes in load. If you receive large spikes in requests per second, clients may receive an HTTP status code 503. Even though the autoscaler reacts quickly, it takes AKS a significant amount of time to create more containers. You can follow [here](#how-to-prevent-503-status-codes) to prevent 503 status codes. | -| 504 | Request has timed out | A 504 status code indicates that the request has timed out. The default timeout setting is 5 seconds. You can increase the timeout or try to speed up the endpoint by modifying the score.py to remove unnecessary calls. If these actions don't correct the problem, you can follow [here](#error-resourcenotready) to debug the score.py file. The code may be in a nonresponsive state or an infinite loop. | +| 503 | Receive large spikes in requests per second | The autoscaler is designed to handle gradual changes in load. If you receive large spikes in requests per second, clients might receive an HTTP status code 503. Even though the autoscaler reacts quickly, it takes AKS a significant amount of time to create more containers. 
You can follow [here](#how-to-prevent-503-status-codes) to prevent 503 status codes. | +| 504 | Request has timed out | A 504 status code indicates that the request has timed out. The default timeout setting is 5 seconds. You can increase the timeout or try to speed up the endpoint by modifying the score.py to remove unnecessary calls. If these actions don't correct the problem, you can follow [here](#error-resourcenotready) to debug the score.py file. The code might be in a nonresponsive state or an infinite loop. | | 500 | Internal server error | Azure Machine Learning-provisioned infrastructure is failing. | |
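The 429 (too many pending requests) row in the troubleshooting article above suggests resending requests with an exponential backoff so the autoscaler has time to add instances. As a minimal client-side sketch of that advice: the `invoke_with_backoff` helper below, and its use of `RuntimeError` as a stand-in for an HTTP 429 response, are hypothetical illustrations and not part of any Azure SDK.

```python
import random
import time

def invoke_with_backoff(call, max_retries=5, base_delay=1.0, max_delay=30.0):
    """Invoke `call()`, retrying with exponential backoff plus jitter when it
    signals throttling. Returns the first successful result."""
    for attempt in range(max_retries):
        try:
            return call()
        except RuntimeError:  # hypothetical stand-in for an HTTP 429 response
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            # Double the delay on each retry, capped at max_delay, with jitter
            # so that simultaneous clients don't retry in lockstep.
            delay = min(max_delay, base_delay * (2 ** attempt))
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

Combined with autoscaling, a loop like this gives the endpoint time to scale up instead of rejecting a burst of requests outright.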
machine-learning | How To Use Managed Online Endpoint Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-managed-online-endpoint-studio.md | Use the studio to create a managed online endpoint directly in your browser. Whe ### Register the model -A model registration is a logical entity in the workspace that may contain a single model file, or a directory containing multiple files. The steps in this article assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model. +A model registration is a logical entity in the workspace that can contain a single model file, or a directory containing multiple files. The steps in this article assume that you've registered the [model folder](https://github.com/Azure/azureml-examples/tree/main/cli/endpoints/online/model-1/model) that contains the model. To register the example model using Azure Machine Learning studio, use the following steps: In this article, you learned how to use Azure Machine Learning managed online en - [How to monitor managed online endpoints](how-to-monitor-online-endpoints.md) - [Troubleshooting managed online endpoints deployment and scoring](./how-to-troubleshoot-online-endpoints.md) - [View costs for an Azure Machine Learning managed online endpoint](how-to-view-online-endpoints-costs.md)-- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)+- [Manage and increase quotas for resources with Azure Machine Learning](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) |
machine-learning | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/policy-reference.md | Title: Built-in policy definitions for Azure Machine Learning description: Lists Azure Policy built-in policy definitions for Azure Machine Learning. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
machine-learning | How To Deploy For Real Time Inference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-for-real-time-inference.md | This step allows you to configure the basic settings of the deployment. |Endpoint|You can select whether you want to deploy a new endpoint or update an existing endpoint. <br> If you select **New**, you need to specify the endpoint name.| |Deployment name| - Within the same endpoint, deployment name should be unique. <br> - If you select an existing endpoint, and input an existing deployment name, then that deployment will be overwritten with the new configurations. | |Virtual machine| The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](../reference-managed-online-endpoints-vm-sku-list.md).|-|Instance count| The number of instances to use for the deployment. Specify the value on the workload you expect. For high availability, we recommend that you set the value to at least 3. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoints quotas](../how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints)| +|Instance count| The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least 3. We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoints quotas](../how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints)| |Inference data collection (preview)| If you enable this, the flow inputs and outputs will be auto collected in an Azure Machine Learning data asset, and can be used for later monitoring. 
To learn more, see [how to monitor generative ai applications.](how-to-monitor-generative-ai-applications.md)| |Application Insights diagnostics| If you enable this, system metrics during inference time (such as token count, flow latency, and flow requests) will be collected into workspace default Application Insights. To learn more, see [prompt flow serving metrics](#view-prompt-flow-endpoints-specific-metrics-optional).| Select the **Metrics** tab in the left navigation. Select **promptflow standard metr ### Model response taking too long -Sometimes you might notice that the deployment is taking too long to respond. There are several potential factors for this to occur. +Sometimes, you might notice that the deployment is taking too long to respond. There are several potential causes. - Model is not powerful enough (for example, use GPT rather than text-ada) - Index query is not optimized and takes too long |
machine-learning | How To Deploy To Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/how-to-deploy-to-code.md | To define an endpoint, you need to specify: -- **Endpoint name**: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [managed online endpoint limits](../how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). +- **Endpoint name**: The name of the endpoint. It must be unique in the Azure region. For more information on the naming rules, see [endpoint limits](../how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). - **Authentication mode**: The authentication method for the endpoint. Choose between key-based authentication and Azure Machine Learning token-based authentication. A key doesn't expire, but a token does expire. For more information on authenticating, see [Authenticate to an online endpoint](../how-to-authenticate-online-endpoint.md). Optionally, you can add a description and tags to your endpoint. - Optionally, you can add a description and tags to your endpoint. environment_variables: | Model | The model to use for the deployment. This value can be either a reference to an existing versioned model in the workspace or an inline model specification. | | Environment | The environment to host the model and code. It contains: <br> - `image`<br> - `inference_config`: is used to build a serving container for online deployments, including `liveness route`, `readiness_route`, and `scoring_route` . | | Instance type | The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](../reference-managed-online-endpoints-vm-sku-list.md). |-| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. 
We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](../how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | -| Environment variables | Following environment variables need to be set for endpoints deployed from a flow: <br> - (required) `PROMPTFLOW_RUN_MODE: serving`: specify the mode to serving <br> - (required) `PRT_CONFIG_OVERRIDE`: for pulling connections from workspace <br> - (optional) `PROMPTFLOW_RESPONSE_INCLUDED_FIELDS:`: When there are multiple fields in the response, using this env variable will filter the fields to expose in the response. <br> For example, if there are two flow outputs: "answer", "context", and if you only want to have "answer" in the endpoint response, you can set this env variable to '["answer"]'. <br> - if you want to use user-assigned identity, you need to specify `UAI_CLIENT_ID: "uai_client_id_place_holder"`<br> | +| Instance count | The number of instances to use for the deployment. Base the value on the workload you expect. For high availability, we recommend that you set the value to at least `3`. We reserve an extra 20% for performing upgrades. For more information, see [limits for online endpoints](../how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). | +| Environment variables | Following environment variables need to be set for endpoints deployed from a flow: <br> - (required) `PROMPTFLOW_RUN_MODE: serving`: specify the mode to serving <br> - (required) `PRT_CONFIG_OVERRIDE`: for pulling connections from workspace <br> - (optional) `PROMPTFLOW_RESPONSE_INCLUDED_FIELDS:`: When there are multiple fields in the response, using this env variable will filter the fields to expose in the response. <br> For example, if there are two flow outputs: "answer", "context", and if you only want to have "answer" in the endpoint response, you can set this env variable to '["answer"]'. 
| If you create a Kubernetes online deployment, you need to specify the following additional attributes: |
machine-learning | Open Source Llm Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/tools-reference/open-source-llm-tool.md | description: The prompt flow Open Source LLM tool enables you to utilize various -- - devx-track-python - - ignite-2023 + |
machine-learning | Transparency Note | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/transparency-note.md | Title: Transparency Note for auto-generate prompt variants in prompt flow -description: Transparency Note for auto-generate prompt variants in prompt flow +description: Learn about the feature in prompt flow that automatically generates variations of a base prompt with the help of language models. -An AI system includes not only the technology, but also the people who use it, the people who are affected by it, and the environment in which it's deployed. Creating a system that is fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. Microsoft's Transparency Notes are intended to help you understand how our AI technology works, the choices system owners can make that influence system performance and behavior, and the importance of thinking about the whole system, including the technology, the people, and the environment. You can use Transparency Notes when developing or deploying your own system, or share them with the people who will use or be affected by your system. +An AI system includes not only technology but also the people who use it, the people it affects, and the environment in which it's deployed. Creating a system that's fit for its intended purpose requires an understanding of how the technology works, what its capabilities and limitations are, and how to achieve the best performance. -Microsoft's Transparency Notes are part of a broader effort at Microsoft to put our AI Principles into practice. To find out more, see the [Microsoft's AI principles](https://www.microsoft.com/ai/responsible-ai). +Microsoft Transparency Notes help you understand: +- How our AI technology works. +- The choices that system owners can make that influence system performance and behavior. 
+- The importance of thinking about the whole system, including the technology, the people, and the environment. ++You can use Transparency Notes when you're developing or deploying your own system. Or you can share them with the people who use (or are affected by) your system. ++Transparency Notes are part of a broader effort at Microsoft to put AI principles into practice. To find out more, see the [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai). ++> [!IMPORTANT] +> Auto-generate prompt variants is currently in public preview. This preview is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities. +> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). ## The basics of auto-generate prompt variants in prompt flow -### Introduction +Prompt engineering is at the center of building applications by using language models. Microsoft's prompt flow offers rich capabilities to interactively edit, bulk test, and evaluate prompts with built-in flows to choose the best prompt. -Prompt engineering is at the center of building applications using Large Language Models. Microsoft's prompt flow offers rich capabilities to interactively edit, bulk test, and evaluate prompts with built-in flows to pick the best prompt. With the auto-generate prompt variants feature in prompt flow, we provide the ability to automatically generate variations of a user's base prompt with help of large language models and allow users to test them in prompt Flow to reach the optimal solution for the user's model and use case needs. +The auto-generate prompt variants feature in prompt flow can automatically generate variations of your base prompt with the help of language models. 
You can test those variations in prompt flow to reach the optimal solution for your model and use case. -### Key terms +This Transparency Note uses the following key terms: | **Term** | **Definition** | | | |-| Prompt flow | Prompt flow offers rich capabilities to interactively edit prompts and bulk test them with built-in evaluation flows to pick the best prompt. More information available at [What is prompt flow](./overview-what-is-prompt-flow.md) | -| Prompt engineering | The practice of crafting and refining input prompts to elicit more desirable responses from a large language model, particularly in large language models. | -| Prompt variants | Different versions or modifications of a given input prompt designed to test or achieve varied responses from a large language model. | -| Base prompt | The initial or primary prompt that serves as a starting point for eliciting response from large language models. In this case it is provided by the user and is modified to create prompt variants. | -| System prompt | A predefined prompt generated by a system, typically to initiate a task or seek specific information. This is not visible but is used internally to generate prompt variants. | +| Prompt flow | A development tool that streamlines the development cycle of AI applications that use language models. For more information, see [What is Azure Machine Learning prompt flow](./overview-what-is-prompt-flow.md). | +| Prompt engineering | The practice of crafting and refining input prompts to elicit more desirable responses from a language model. | +| Prompt variants | Different versions or modifications of an input prompt that are designed to test or achieve varied responses from a language model. | +| Base prompt | The initial or primary prompt that serves as a starting point for eliciting responses from language models. In this case, you provide the base prompt and modify it to create prompt variants. 
| +| System prompt | A predefined prompt that a system generates, typically to start a task or seek specific information. A system prompt isn't visible but is used internally to generate prompt variants. | ## Capabilities ### System behavior -The auto-generate prompt variants feature, as part of the prompt flow experience, provides the ability to automatically generate and easily assess prompt variations to quickly find the best prompt for your use case. This feature further empowers prompt flow's rich set of capabilities to interactively edit and evaluate prompts, with the goal of simplifying prompt engineering. --When provided with the user's base prompt the auto-generate prompt variants feature generates several variations using the generative power of Azure OpenAI models and an internal system prompt. While Azure OpenAI provides content management filters, we recommend verifying any prompts generated before using them in production scenarios. --### Use cases +You use the auto-generate prompt variants feature to automatically generate and then assess prompt variations, so you can quickly find the best prompt for your use case. This feature enhances the capabilities in prompt flow to interactively edit and evaluate prompts, with the goal of simplifying prompt engineering. -#### Intended uses +When you provide a base prompt, the auto-generate prompt variants feature generates several variations by using the generative power of Azure OpenAI Service models and an internal system prompt. Although Azure OpenAI Service provides content management filters, we recommend that you verify any generated prompts before you use them in production scenarios. -Auto-generate prompt variants can be used in the following scenarios. 
The system's intended use is: +### Use cases -**Generate new prompts from a provided base prompt**: "Generate Variants" feature will allow the users of prompt flow to automatically generate variants of their provided base prompt with help of LLMs (Large Language Models). +The intended use of auto-generate prompt variants is to *generate new prompts from a provided base prompt with the help of language models*. Don't use auto-generate prompt variants for decisions that might have serious adverse impacts. -#### Considerations when choosing a use case --**Do not use auto-generate prompt variants for decisions that might have serious adverse impacts.** --Auto-generate prompt variants was not designed or tested to recommend items that require additional considerations related to accuracy, governance, policy, legal, or expert knowledge as these often exist outside the scope of the usage patterns carried out by regular (non-expert) users. Examples of such use cases include medical diagnostics, banking, or financial recommendations, hiring or job placement recommendations, or recommendations related to housing. +Auto-generate prompt variants wasn't designed or tested to recommend items that require more considerations related to accuracy, governance, policy, legal, or expert knowledge. These considerations often exist outside the scope of the usage patterns that regular (non-expert) users carry out. Examples of such use cases include medical diagnostics, banking, or financial recommendations, hiring or job placement recommendations, or recommendations related to housing. ## Limitations -Explicitly in the generation of prompt variants, it is important to understand that while AI systems are incredibly valuable tools, they are **non-deterministic**. This means that perfect **accuracy** (the measure of how well the system-generated events correspond to real events that happened in a space) of predictions is not possible. 
A good model will have high accuracy, but it will occasionally output incorrect predictions. Failure to understand this limitation can lead to over-reliance on the system and unmerited decisions that can impact stakeholders. +In the generation of prompt variants, it's important to understand that although AI systems are valuable tools, they're *nondeterministic*. That is, perfect *accuracy* (the measure of how well the system-generated events correspond to real events that happen in a space) of predictions is not possible. A good model has high accuracy, but it occasionally makes incorrect predictions. Failure to understand this limitation can lead to overreliance on the system and unmerited decisions that can affect stakeholders. -Furthermore, the prompt variants that are generated using LLMs, are returned to the user as is. It is encouraged to evaluate and compare these variants to determine the best prompt for a given scenario. There are **additional concerns** here because many of the evaluations offered in the prompt flow ecosystems also depend on LLMs, potentially further decreasing the utility of any given prompt. Manual review is strongly recommended. +The prompt variants that the feature generates by using language models appear to you as is. We encourage you to evaluate and compare these variants to determine the best prompt for a scenario. -### Technical limitations, operational factors, and ranges +Many of the evaluations offered in the prompt flow ecosystems also depend on language models. This dependency can potentially decrease the utility of any prompt. We strongly recommend a manual review. -As mentioned previously, the auto-generate prompt variants feature does not provide a measurement or evaluation of the provided prompt variants. It is strongly recommended that the user of this feature evaluates the suggested prompts in the way which best aligns with their specific use case and requirements. 
+### Technical limitations, operational factors, and ranges -The auto-generate prompt variants feature is limited to generating a maximum of five variations from a given base prompt. If more are required, additional prompt variants can be generated after modifying the original base prompt. +The auto-generate prompt variants feature doesn't provide a measurement or evaluation of the prompt variants that it provides. We strongly recommend that you evaluate the suggested prompts in the way that best aligns with your specific use case and requirements. -Auto-generate prompt variants only supports Azure OpenAI models at this time. In addition to limiting users to only the models which are supported by Azure OpenAI, it also limits content to what is acceptable in terms of the Azure OpenAI's content management policy. Uses outside of this policy are not supported by this feature. +The auto-generate prompt variants feature is limited to generating a maximum of five variations from a base prompt. If you need more variations, modify your base prompt to generate them. -## System performance +Auto-generate prompt variants supports only Azure OpenAI Service models at this time. It also limits content to what's acceptable in terms of the content management policy in Azure OpenAI Service. The feature doesn't support uses outside this policy. -Performance for the auto-generate prompt variants feature is determined by the user's use case in each individual scenario – in this way the feature does not evaluate each prompt or generate metrics. +## System performance -Operating in the prompt flow ecosystem, which focuses on Prompt Engineering, provides a strong story for error handling. Often retrying the operation will resolve an error. 
One error which might arise specific to this feature is response filtering from the Azure OpenAI resource for content or harm detection, this would happen in the case that content in the base prompt is determined to be against Azure OpenAI's content management policy. To resolve these errors please update the base prompt in accordance with the guidance at [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter). +Your use case in each scenario determines the performance of the auto-generate prompt variants feature. The feature doesn't evaluate prompts or generate metrics. -### Best practices for improving system performance +Operating in the prompt flow ecosystem, which focuses on prompt engineering, provides a strong story for error handling. Retrying the operation often resolves an error. -To improve performance there are several parameters which can be modified, depending on the use cases and requirements of the prompt requirements: +One error that might arise specific to this feature is response filtering from the Azure OpenAI Service resource for content or harm detection. This error happens when content in the base prompt is against the content management policy in Azure OpenAI Service. To resolve this error, update the base prompt in accordance with the guidance in [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter). -- **Model**: The choice of models used with this feature will impact the performance. As general guidance, the GPT-4 model is more powerful than the GPT-3.5 and would thus be expected to generate more performant prompt variants. -- **Number of Variants**: This parameter specifies how many variants to generate. A larger number of variants will produce more prompts and therefore the likelihood of finding the best prompt for the use case. 
-- **Base Prompt**: Since this tool generates variants of the provided base prompt, a strong base prompt can set up the tool to provide the maximum value for your case. Please review the guidelines at Prompt engineering techniques with [Azure OpenAI](/azure/ai-services/openai/concepts/advanced-prompt-engineering). +### Best practices for improving system performance -## Evaluation of auto-generate prompt variants +To improve performance, you can modify the following parameters, depending on the use case and the prompt requirements: -### Evaluation methods +- **Model**: The choice of models that you use with this feature affects the performance. As general guidance, the GPT-4 model is more powerful than the GPT-3.5 model, so you can expect it to generate prompt variants that are more performant. +- **Number of Variants**: This parameter specifies how many variants to generate. A larger number of variants produces more prompts and increases the likelihood of finding the best prompt for the use case. +- **Base Prompt**: Because this tool generates variants of the provided base prompt, a strong base prompt can set up the tool to provide the maximum value for your case. Review the guidelines in [Prompt engineering techniques](/azure/ai-services/openai/concepts/advanced-prompt-engineering). -The auto-generate prompt variants feature been testing by the internal development team, targeting fit for purpose and harm mitigation. +## Evaluation of auto-generate prompt variants -### Evaluation results +The Microsoft development team tested the auto-generate prompt variants feature to evaluate harm mitigation and fitness for purpose. -Evaluation of harm management showed staunch support for the combination of system prompt and Azure Open AI content management policies in actively safe-guarding responses. 
Additional opportunities to minimize the chance and risk of harms can be found in the Microsoft documentation: [Azure OpenAI Service abuse monitoring](/azure/ai-services/openai/concepts/abuse-monitoring) and [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter). +The testing for harm mitigation showed support for the combination of system prompts and Azure OpenAI content management policies in actively safeguarding responses. You can find more opportunities to minimize the risk of harms in [Azure OpenAI Service abuse monitoring](/azure/ai-services/openai/concepts/abuse-monitoring) and [Azure OpenAI Service content filtering](/azure/ai-services/openai/concepts/content-filter). -Fit for purpose testing supported the quality of generated prompts from creative purposes (poetry) and chat-bot agents. The reader is cautioned from drawing sweeping conclusions given the breadth of possible base prompt and potential use cases. As previously mentioned, please use evaluations appropriate to the required use cases and ensure a human reviewer is part of the process. +Fitness-for-purpose testing supported the quality of generated prompts for creative purposes (poetry) and chat-bot agents. We caution you against drawing sweeping conclusions, given the breadth of possible base prompts and potential use cases. For your environment, use evaluations that are appropriate to the required use cases, and ensure that a human reviewer is part of the process. -## Evaluating and integrating auto-generate prompt variants for your use +## Evaluating and integrating auto-generate prompt variants for your use -The performance of the auto-generate prompt variants feature will vary depending on the base prompt and use case in it is used. True usage of the generated prompts will depend on a combination of the many elements of the system in which the prompt is used. 
+The performance of the auto-generate prompt variants feature varies, depending on the base prompt and use case. True usage of the generated prompts will depend on a combination of the many elements of the system in which you use the prompts. -To ensure optimal performance in their scenarios, customers should conduct their own evaluations of the solutions they implement using auto-generate prompt variants. Customers should, generally, follow an evaluation process that: +To ensure optimal performance in your scenarios, you should conduct your own evaluations of the solutions that you implement by using auto-generate prompt variants. In general, follow an evaluation process that: -- Uses internal stakeholders to evaluate any generated prompt. -- Uses internal stakeholders to evaluate results of any system which uses a generated prompt. -- Incorporates KPI (Key Performance Indicators) and metrics monitoring when deploying the service using generated prompts meets evaluation targets. +- Uses internal stakeholders to evaluate any generated prompt. +- Uses internal stakeholders to evaluate results of any system that uses a generated prompt. +- Incorporates key performance indicators (KPIs) and metrics monitoring to confirm that deploying the service by using generated prompts meets evaluation targets. ## Learn more about responsible AI - [Microsoft AI principles](https://www.microsoft.com/ai/responsible-ai)-- [Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) -- [Microsoft Azure Learning courses on responsible AI](/training/paths/responsible-ai-business-principles/)+- [Microsoft responsible AI resources](https://www.microsoft.com/ai/responsible-ai-resources) +- [Microsoft Azure training courses on responsible AI](/training/paths/responsible-ai-business-principles/) ## Learn more about auto-generate prompt variants +- [What is Azure Machine Learning prompt flow](./overview-what-is-prompt-flow.md) |
machine-learning | Reference Yaml Deployment Kubernetes Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-kubernetes-online.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |-| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).| | | +| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints).| | | | `description` | string | Description of the deployment. | | | | `tags` | object | Dictionary of tags for the deployment. | | | | `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | | |
machine-learning | Reference Yaml Deployment Managed Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-deployment-managed-online.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |-| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).| | | +| `name` | string | **Required.** Name of the deployment. <br><br> Naming rules are defined [here](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints).| | | | `description` | string | Description of the deployment. | | | | `tags` | object | Dictionary of tags for the deployment. | | | | `endpoint_name` | string | **Required.** Name of the endpoint to create the deployment under. | | | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | `environment_variables` | object | Dictionary of environment variable key-value pairs to set in the deployment container. You can access these environment variables from your scoring scripts. | | | | `environment` | string or object | **Required.** The environment to use for the deployment. This value can be either a reference to an existing versioned environment in the workspace or an inline environment specification. <br><br> To reference an existing environment, use the `azureml:<environment-name>:<environment-version>` syntax. <br><br> To define an environment inline, follow the [Environment schema](reference-yaml-environment.md#yaml-syntax). 
<br><br> As a best practice for production scenarios, you should create the environment separately and reference it here. | | | | `instance_type` | string | **Required.** The VM size to use for the deployment. For the list of supported sizes, see [Managed online endpoints SKU list](reference-managed-online-endpoints-vm-sku-list.md). | | |-| `instance_count` | integer | **Required.** The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, Microsoft recommends you set it to at least `3`. <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. <br><br> We reserve an extra 20% for performing upgrades. For more information, see [managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). | | | +| `instance_count` | integer | **Required.** The number of instances to use for the deployment. Specify the value based on the workload you expect. For high availability, Microsoft recommends you set it to at least `3`. <br><br> `instance_count` can be updated after deployment creation using `az ml online-deployment update` command. <br><br> We reserve an extra 20% for performing upgrades. For more information, see [virtual machine quota allocation for deployment](how-to-deploy-online-endpoints.md#virtual-machine-quota-allocation-for-deployment). | | | | `app_insights_enabled` | boolean | Whether to enable integration with the Azure Application Insights instance associated with your workspace. | | `false` | | `scale_settings` | object | The scale settings for the deployment. Currently only the `default` scale type is supported, so you don't need to specify this property. <br><br> With this `default` scale type, you can either manually scale the instance count up and down after deployment creation by updating the `instance_count` property, or create an [autoscaling policy](how-to-autoscale-endpoints.md). 
| | | | `scale_settings.type` | string | The scale type. | `default` | `default` | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Default value | | | - | -- | - |-| `request_timeout_ms` | integer | The scoring timeout in milliseconds. Note that the maximum value allowed is `180000` milliseconds. See [Managed online endpoint quotas](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints) for more. | `5000` | -| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> **Note:** If you're using [Azure Machine Learning Inference Server](how-to-inference-server-http.md) or [Azure Machine Learning Inference Images](concept-prebuilt-docker-images-inference.md), your model must be configured to handle concurrent requests. To do so, pass `WORKER_COUNT: <int>` as an environment variable. For more information about `WORKER_COUNT`, see [Azure Machine Learning Inference Server Parameters](how-to-inference-server-http.md#server-parameters) <br><br> **Note:** Set to the number of requests that your model can process concurrently on a single node. Setting this value higher than your model's actual concurrency can lead to higher latencies. Setting this value too low may lead to under utilized nodes. Setting too low may also result in requests being rejected with a 429 HTTP status code, as the system will opt to fail fast. For more information, see [Troubleshooting online endpoints: HTTP status codes](how-to-troubleshoot-online-endpoints.md#http-status-codes). | `1` | +| `request_timeout_ms` | integer | The scoring timeout in milliseconds. Note that the maximum value allowed is `180000` milliseconds. See [limits for online endpoints](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints) for more. 
| `5000` | +| `max_concurrent_requests_per_instance` | integer | The maximum number of concurrent requests per instance allowed for the deployment. <br><br> **Note:** If you're using [Azure Machine Learning Inference Server](how-to-inference-server-http.md) or [Azure Machine Learning Inference Images](concept-prebuilt-docker-images-inference.md), your model must be configured to handle concurrent requests. To do so, pass `WORKER_COUNT: <int>` as an environment variable. For more information about `WORKER_COUNT`, see [Azure Machine Learning Inference Server Parameters](how-to-inference-server-http.md#server-parameters). <br><br> **Note:** Set to the number of requests that your model can process concurrently on a single node. Setting this value higher than your model's actual concurrency can lead to higher latencies. Setting this value too low might lead to underutilized nodes. Setting it too low might also result in requests being rejected with a 429 HTTP status code, as the system will opt to fail fast. For more information, see [Troubleshooting online endpoints: HTTP status codes](how-to-troubleshoot-online-endpoints.md#http-status-codes). | `1` | | `max_queue_wait_ms` | integer | The maximum amount of time in milliseconds a request will stay in the queue. | `500` | ### ProbeSettings |
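The `instance_count` row above notes that an extra 20% of capacity is reserved for performing upgrades. As a rough illustration of what that means for quota planning, the following sketch (the helper name and the cores-per-instance figure are assumptions for the example, not part of any Azure SDK) estimates the vCPU quota a managed online deployment consumes:

```python
import math

def required_vcpu_quota(instance_count: int, cores_per_instance: int) -> int:
    """Estimate the vCPU quota a managed online deployment consumes.

    Azure Machine Learning reserves an extra 20% of capacity for
    performing upgrades, so the quota needed is 1.2x the requested
    capacity, rounded up to whole cores.
    """
    return math.ceil(1.2 * instance_count * cores_per_instance)

# 3 instances of a 4-core VM size need quota for ceil(1.2 * 3 * 4) = 15
# cores, not just the 12 cores the deployment itself uses.
print(required_vcpu_quota(3, 4))
```

For example, a deployment with `instance_count: 3` on a 4-core SKU should be planned against a 15-core quota, which is why quota errors can occur even when the raw instance count appears to fit.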
machine-learning | Reference Yaml Endpoint Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-endpoint-online.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | Key | Type | Description | Allowed values | Default value | | | - | -- | -- | - | | `$schema` | string | The YAML schema. If you use the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of your file enables you to invoke schema and resource completions. | | |-| `name` | string | **Required.** Name of the endpoint. Needs to be unique at the Azure region level. <br><br> Naming rules are defined under [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints).| | | +| `name` | string | **Required.** Name of the endpoint. Needs to be unique at the Azure region level. <br><br> Naming rules are defined under [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints).| | | | `description` | string | Description of the endpoint. | | | | `tags` | object | Dictionary of tags for the endpoint. | | | | `auth_mode` | string | The authentication method for the endpoint. Key-based authentication and Azure Machine Learning token-based authentication are supported. Key-based authentication doesn't expire but Azure Machine Learning token-based authentication does. | `key`, `aml_token` | `key` | |
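Putting the keys from the online endpoint table above together, a minimal endpoint definition might look like the following sketch (the name, description, and tag values are placeholders for illustration; `auth_mode: key` is shown explicitly even though it's the default):

```yaml
# Minimal online endpoint definition (illustrative values)
name: my-unique-endpoint        # must be unique at the Azure region level
description: Example online endpoint
tags:
  team: example
auth_mode: key                  # or aml_token for Azure ML token-based auth
```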
machine-learning | Reference Yaml Monitor | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-monitor.md | You can find the schemas for older extension versions at [https://azuremlschemas | | - | -- | -- | | `$schema` | string | The YAML schema. | | | `name` | string | **Required.** Name of the schedule. | |-| `version` | string | Version of the schedule. If omitted, Azure Machine Learning will autogenerate a version. | | | `description` | string | Description of the schedule. | | | `tags` | object | Dictionary of tags for the schedule. | | | `trigger` | object | **Required.** The trigger configuration to define rule when to trigger job. **One of `RecurrenceTrigger` or `CronTrigger` is required.** | | |
machine-learning | Reference Yaml Schedule Data Import | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule-data-import.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | | - | -- | -- | | `$schema` | string | The YAML schema. | | | `name` | string | **Required.** Name of the schedule. | |-| `version` | string | Version of the schedule. If omitted, Azure Machine Learning autogenerates a version. | | | `description` | string | Description of the schedule. | | | `tags` | object | Dictionary of tags for the schedule. | | | `trigger` | object | The trigger configuration to define rule when to trigger job. **One of `RecurrenceTrigger` or `CronTrigger` is required.** | | |
machine-learning | Reference Yaml Schedule | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-schedule.md | The source JSON schema can be found at https://azuremlschemas.azureedge.net/late | | - | -- | -- | | `$schema` | string | The YAML schema. | | | `name` | string | **Required.** Name of the schedule. | |-| `version` | string | Version of the schedule. If omitted, Azure Machine Learning will autogenerate a version. | | | `description` | string | Description of the schedule. | | | `tags` | object | Dictionary of tags for the schedule. | | | `trigger` | object | The trigger configuration to define rule when to trigger job. **One of `RecurrenceTrigger` or `CronTrigger` is required.** | | |
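The schedule tables above require one of `RecurrenceTrigger` or `CronTrigger` under `trigger`. As an illustrative skeleton only — the sub-keys of `trigger` (`type`, `expression`) aren't shown in the excerpted tables and are assumed from the cron-trigger shape documented in the full schedule schema — a cron-based schedule might look like:

```yaml
# Illustrative schedule skeleton; verify trigger sub-keys against the
# full schedule YAML schema before use
name: nightly-schedule
description: Example schedule with a cron trigger
trigger:
  type: cron
  expression: "0 1 * * *"   # every day at 01:00
```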
machine-learning | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Machine Learning description: Lists Azure Policy Regulatory Compliance controls available for Azure Machine Learning. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
machine-learning | Tutorial Deploy Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-deploy-model.md | In this tutorial, we'll walk you through the steps of implementing a _managed on ## Create an online endpoint -Now that you have a registered model, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using a universally unique identifier [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier). For more information on the endpoint naming rules, see [managed online endpoint limits](how-to-manage-quotas.md#azure-machine-learning-managed-online-endpoints). +Now that you have a registered model, it's time to create your online endpoint. The endpoint name needs to be unique in the entire Azure region. For this tutorial, you'll create a unique name using a universally unique identifier [`UUID`](https://en.wikipedia.org/wiki/Universally_unique_identifier). For more information on the endpoint naming rules, see [endpoint limits](how-to-manage-quotas.md#azure-machine-learning-online-endpoints-and-batch-endpoints). ```python ml_client.online_deployments.begin_create_or_update(green_deployment).result() ``` ## Update traffic allocation for deployments-You can split production traffic between deployments. You may first want to test the `green` deployment with sample data, just like you did for the `blue` deployment. Once you've tested your green deployment, allocate a small percentage of traffic to it. +You can split production traffic between deployments. You might first want to test the `green` deployment with sample data, just like you did for the `blue` deployment. Once you've tested your green deployment, allocate a small percentage of traffic to it. ```python |
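The tutorial excerpt above says to build a region-unique endpoint name from a UUID. A minimal sketch of that step, using only the standard library (the `endpoint-` prefix is a placeholder):

```python
import uuid

# Endpoint names must be unique within the Azure region, so append a
# random UUID-derived suffix to a readable prefix. The first 8
# characters of a UUID4 string are hex digits.
online_endpoint_name = "endpoint-" + str(uuid.uuid4())[:8]

print(online_endpoint_name)  # prints something like endpoint-3f9c1a2b
```

The resulting name can then be passed wherever the endpoint name is required, for example when constructing the endpoint object before calling `begin_create_or_update`.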
machine-learning | How To Deploy Model Cognitive Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-deploy-model-cognitive-search.md | This article teaches you how to use Azure Machine Learning to deploy a model for Azure AI Search performs content processing over heterogenous content, to make it queryable by humans or applications. This process can be enhanced by using a model deployed from Azure Machine Learning. -Azure Machine Learning can deploy a trained model as a web service. The web service is then embedded in a Azure AI Search _skill_, which becomes part of the processing pipeline. +Azure Machine Learning can deploy a trained model as a web service. The web service is then embedded in an Azure AI Search _skill_, which becomes part of the processing pipeline. > [!IMPORTANT] > The information in this article is specific to the deployment of the model. It provides information on the supported deployment configurations that allow the model to be used by Azure AI Search. |
managed-ccf | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/get-started.md | |
managed-ccf | How To Activate Members | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-activate-members.md | |
managed-ccf | How To Update Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-update-application.md | |
managed-ccf | How To Update Javascript Runtime Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/how-to-update-javascript-runtime-options.md | |
managed-ccf | Quickstart Deploy Application | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-deploy-application.md | |
managed-ccf | Quickstart Go | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-go.md | |
managed-ccf | Quickstart Java | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-java.md | |
managed-ccf | Quickstart Net | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-net.md | |
managed-ccf | Quickstart Typescript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-ccf/quickstart-typescript.md | |
managed-grafana | How To Data Source Plugins Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-data-source-plugins-managed-identity.md | In this guide, you learn about data sources supported in each Azure Managed Gran ## Prerequisites -[An Azure Managed Grafana instance](./how-to-permissions.md). +[An Azure Managed Grafana instance](./quickstart-managed-grafana-portal.md). ## Supported Grafana data sources |
managed-instance-apache-cassandra | Best Practice Performance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/best-practice-performance.md | If the CPU is only high for a few nodes, but low for the others, it indicates a > - Standard_D8s_v4 > - Standard_D16s_v4 > - Standard_D32s_v4+> - Standard_L8s_v3 +> - Standard_L16s_v3 +> - Standard_L32s_v3 +> - Standard_L8as_v3 +> - Standard_L16as_v3 +> - Standard_L32as_v3 |
managed-instance-apache-cassandra | Configure Hybrid Cluster | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/configure-hybrid-cluster.md | - - ignite-2023 + # Quickstart: Configure a hybrid cluster with Azure Managed Instance for Apache Cassandra using Client Configurator |
mariadb | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/policy-reference.md | |
mariadb | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mariadb/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Database for MariaDB description: Lists Azure Policy Regulatory Compliance controls available for Azure Database for MariaDB. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
migrate | Dotnet | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/appcat/dotnet.md | description: How to assess and replatform any type of .NET applications with the + Last updated 11/15/2023 |
migrate | How To Discover Applications | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/how-to-discover-applications.md | The sign-in used to connect to a source SQL Server instance requires sysadmin ro [!INCLUDE [Minimal Permissions for SQL Assessment](../../includes/database-migration-service-sql-permissions.md)] > -Once connected, the appliance gathers configuration and performance data of SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours and the performance data is captured every 30 seconds. Hence, any change to the properties of the SQL Server instance and databases such as database status, compatibility level etc. can take up to 24 hours to update on the portal. +Once connected, the appliance gathers configuration and performance data of SQL Server instances and databases. The SQL Server configuration data is updated once every 24 hours, and the performance data is captured every 30 seconds. Hence, any change to the properties of the SQL Server instance and databases such as database status, compatibility level etc. can take up to 24 hours to update on the portal. ## Discover ASP.NET web apps Once connected, the appliance gathers configuration and performance data of SQL - Currently, Windows servers aren't supported for Spring Boot app discovery, only Linux servers are supported. - Learn more about appliance requirements on [Azure Migrate appliance requirements](migrate-appliance.md) and [discovery support](migrate-support-matrix-vmware.md#dependency-analysis-requirements-agentless). +## Discover File Server Instances ++- Software inventory identifies File Server role installed on discovered servers running on VMware, Microsoft Hyper-V, and physical/bare-metal environments, along with IaaS services in various public cloud platforms. +- The File Server (FS-FileServer) role service in Windows Server is a part of the File and Storage Services role. Windows Server machines with File Server role enabled are determined to be used as file servers. +- Users can view the discovered file servers in the **Discovered servers** screen. The File server column in **Discovered servers** indicates whether a server is a file server or not. +- Currently, only Windows Server 2008 and later are supported. ## Next steps |
migrate | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/policy-reference.md | Title: Built-in policy definitions for Azure Migrate description: Lists Azure Policy built-in policy definitions for Azure Migrate. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
migrate | Tutorial Discover Vmware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/tutorial-discover-vmware.md | In VMware vSphere Web Client, set up a read-only account to use for vCenter Serv ### Create an account to access servers +> [!NOTE] +> Lightweight Directory Access Protocol (LDAP) accounts are not supported for discovery. + Your user account on your servers must have the required permissions to initiate discovery of installed applications, agentless dependency analysis, and discovery of web apps, and SQL Server instances and databases. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers. * For **Windows servers** and web apps discovery, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles). |
mysql | Concepts Customer Managed Key | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-customer-managed-key.md | Before you attempt to configure Key Vault, be sure to address the following requ Before you attempt to configure the CMK, be sure to address the following requirements. -- The customer-managed key to encrypt the DEK can be only asymmetric, RSA 2048,3072 or 4096.+- The customer-managed key to encrypt the DEK can be only asymmetric, RSA\RSA-HSM(Vaults with Premium SKU) 2048,3072 or 4096. - The key activation date (if set) must be a date and time in the past. The key expiration date must not be set. - The key must be in the **Enabled** state. - The key must have [soft delete](../../key-vault/general/soft-delete-overview.md) with retention period set to 90 days. This implicitly sets the required key attribute recoveryLevel: "Recoverable." As you configure Key Vault to use data encryption using a customer-managed key, - If Key Vault generates the key, create a key backup before using the key for the first time. You can only restore the backup to Key Vault. For more information about the backup command, see [Backup-AzKeyVaultKey](/powershell/module/az.keyVault/backup-azkeyVaultkey). > [!NOTE]-> It is advised to use a key vault from the same region, but if necessary, you can use a key vault from another region by specifying the "enter key identifier" information. -> RSA key stored in **Azure Key Vault Managed HSM**, is currently not supported. +> * It is advised to use a key vault from the same region, but if necessary, you can use a key vault from another region by specifying the "enter key identifier" information. +> * An RSA key stored in **Azure Key Vault Managed HSM** is currently not supported. + ## Inaccessible customer-managed key condition When you configure data encryption with a CMK in Key Vault, continuous access to this key is required for the server to stay online. If the flexible server loses access to the customer-managed key in Key Vault, the server begins denying all connections within 10 minutes. The flexible server issues a corresponding error message and changes the server state to Inaccessible. The server can reach this state for various reasons. |
mysql | Concepts High Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-high-availability.md | You need to be able to mitigate downtime for your application even when you're n Yes, read replicas are supported for HA servers.</br> - **Can I use Data-in Replication for HA servers?**</br>-Yes, support for data-in replication for high availability (HA) enabled server is available only through GTID-based replication. +Support for data-in replication for high availability (HA) enabled servers is available only through GTID-based replication. +The stored procedure for replication using GTID is available on all HA-enabled servers by the name `mysql.az_replication_with_gtid`. + - **To reduce downtime, can I fail over to the standby server during server restarts or while scaling up or down?** </br> Currently, Azure MySQL Flexible Server has utilized Planned Failover to optimize the HA operations, including scaling up/down and planned maintenance, to help reduce the downtime. When such operations start, the service operates on the original standby instance first, followed by triggering a planned failover operation, and then operates on the original primary instance. </br> |
mysql | Concepts Connection Libraries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/concepts-connection-libraries.md | MySQL offers standard database driver connectivity for using MySQL with applicat | PHP | Windows, Linux | [MySQL native driver for PHP - mysqlnd](https://dev.mysql.com/downloads/connector/php-mysqlnd/) | [Download](https://secure.php.net/downloads.php) | | ODBC | Windows, Linux, macOS X, and Unix platforms | [MySQL Connector/ODBC Developer Guide](https://dev.mysql.com/doc/connector-odbc/en/) | [Download](https://dev.mysql.com/downloads/connector/odbc/) | | ADO.NET | Windows | [MySQL Connector/Net Developer Guide](https://dev.mysql.com/doc/connector-net/en/) | [Download](https://dev.mysql.com/downloads/connector/net/) |-| JDBC | Platform independent | [MySQL Connector/J 8.1 Developer Guide](https://dev.mysql.com/doc/connector-j/8.1/en/) | [Download](https://dev.mysql.com/downloads/connector/j/) | +| JDBC | Platform independent | MySQL Connector/J 8.1 Developer Guide | [Download](https://dev.mysql.com/downloads/connector/j/) | | Node.js | Windows, Linux, macOS X | [sidorares/node-mysql2](https://github.com/sidorares/node-mysql2/tree/master/documentation) | [Download](https://github.com/sidorares/node-mysql2) | | Python | Windows, Linux, macOS X | [MySQL Connector/Python Developer Guide](https://dev.mysql.com/doc/connector-python/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) | | C++ | Windows, Linux, macOS X | [MySQL Connector/C++ Developer Guide](https://dev.mysql.com/doc/connector-cpp/en/) | [Download](https://dev.mysql.com/downloads/connector/python/) | |
mysql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/policy-reference.md | |
mysql | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/single-server/security-controls-policy.md | |
networking | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/policy-reference.md | Title: Built-in policy definitions for Azure networking services description: Lists Azure Policy built-in policy definitions for Azure networking services. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
networking | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/networking/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure networking services description: Lists Azure Policy Regulatory Compliance controls available for Azure networking services. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
notification-hubs | Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/private-link.md | description: Learn how to use the Private Link feature in Azure Notification Hub + Last updated 11/06/2023- # Use Private Link |
openshift | Quickstart Openshift Arm Bicep Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/quickstart-openshift-arm-bicep-template.md | More Azure Red Hat OpenShift templates can be found on the [Red Hat OpenShift we Create the following Bicep file containing the definition for the Azure Red Hat OpenShift cluster. The following example shows how your Bicep file should look when configured. -Save the following file as *azuredeploy.json*: +Save the following file as *azuredeploy.bicep*: ```bicep @description('Location') New-AzResourceGroupDeployment -ResourceGroupName $resourceGroup @templateParams ::: zone-end -### Connect to your cluster - PowerShell +### Connect to your cluster To connect to your new cluster, review the steps in [Connect to an Azure Red Hat OpenShift 4 cluster](tutorial-connect-cluster.md). |
operator-insights | Concept Data Quality Monitoring | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-quality-monitoring.md |  Title: Data quality and quality monitoring description: This article helps you understand how data quality and quality monitoring work in Azure Operator Insights.--++ Last updated 10/24/2023 |
operator-insights | Concept Data Types | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-types.md | Title: Data types - Azure Operator Insights description: This article provides an overview of the data types used by Azure Operator Insights Data Products--++ Last updated 10/25/2023 |
operator-insights | Concept Data Visualization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-data-visualization.md | Title: Data visualization in Azure Operator Insights Data Products description: This article outlines how data is stored and visualized in Azure Operator Insights Data Products.--++ Last updated 10/23/2023 |
operator-insights | Concept Mcc Data Product | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/concept-mcc-data-product.md | Title: Mobile Content Cloud (MCC) Data Product - Azure Operator Insights description: This article provides an overview of the MCC Data Product for Azure Operator Insights--+++ Last updated 10/25/2023 |
operator-insights | Dashboards Use | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/dashboards-use.md | Title: Use Azure Operator Insights Data Product dashboards description: This article outlines how to access and use dashboards in the Azure Operator Insights Data Product.--++ Last updated 10/24/2023 |
operator-insights | Data Product Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-product-create.md |  Title: Create an Azure Operator Insights Data Product description: In this article, learn how to create an Azure Operator Insights Data Product resource. --++ Last updated 10/16/2023 |
operator-insights | Data Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/data-query.md | Title: Query data in the Azure Operator Insights Data Product description: This article outlines how to access and query the data in the Azure Operator Insights Data Product.--++ Last updated 10/22/2023 |
operator-insights | How To Install Mcc Edr Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-install-mcc-edr-agent.md | Title: Create and configure MCC EDR Ingestion Agents description: Learn how to create and configure MCC EDR Ingestion Agents for Azure Operator Insights --++ Last updated 10/31/2023 |
operator-insights | How To Manage Mcc Edr Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/how-to-manage-mcc-edr-agent.md | Title: Manage MCC EDR Ingestion Agents for Azure Operator Insights description: Learn how to upgrade, update, roll back and manage MCC EDR Ingestion agents for AOI--++ Last updated 11/02/2023 |
operator-insights | Managed Identity | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/managed-identity.md | Title: Managed identity for Azure Operator Insights description: This article helps you understand managed identity and how it works in Azure Operator Insights.--++ Last updated 10/18/2023 |
operator-insights | Mcc Edr Agent Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/mcc-edr-agent-configuration.md | Title: MCC EDR Ingestion Agents configuration reference for Azure Operator Insights description: This article documents the complete set of configuration for the agent, listing all fields with examples and explanatory comments.--++ Last updated 11/02/2023 |
operator-insights | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/overview.md | Title: What is Azure Operator Insights? description: Azure Operator Insights is an Azure service for monitoring and analyzing data from multiple sources--++ Last updated 10/26/2023 |
operator-insights | Purview Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/purview-setup.md |  Title: Use Microsoft Purview with an Azure Operator Insights Data Product description: In this article, learn how to set up Microsoft Purview to explore an Azure Operator Insights Data Product.--++ Last updated 11/02/2023 |
operator-insights | Troubleshoot Mcc Edr Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-insights/troubleshoot-mcc-edr-agent.md | Title: Monitor and troubleshoot MCC EDR Ingestion Agents for Azure Operator Insights description: Learn how to monitor MCC EDR Ingestion Agents and troubleshoot common issues --++ Last updated 10/30/2023 |
operator-nexus | Howto Kubernetes Cluster Action Restart | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-kubernetes-cluster-action-restart.md | Here's a sample of what the `restart-node` command generates, } ``` -- |
operator-nexus | Howto Run Instance Readiness Testing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-run-instance-readiness-testing.md | -## Get Instance Readiness Testing Framework -For more detailed information, including the latest documentation and release artifacts visit the [nexus-samples](https://github.com/microsoft/nexus-samples/) GitHub repository. If access is required, see "Requesting Access to Nexus-samples GitHub repository." - ### Request Access to Nexus-samples GitHub repository For access to the nexus-samples GitHub repository 1. Link your GitHub account to the Microsoft GitHub Org https://repos.opensource.microsoft.com/link |
operator-nexus | Howto Use Mde Runtime Protection | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/howto-use-mde-runtime-protection.md | |
operator-service-manager | Best Practices Onboard Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md | During installation and upgrade, the atomic and wait options are set to true by default. In the ARM template, add the following section: <pre> "roleOverrideValues": [-"{\"name\":\"<b>chart_name</b>\",\"deployParametersMappingRuleProfile\":{\"helmMappingRuleProfile\":{\"options\":{\"installOptions\":{\"atomic\":\"false\",\"wait\":\"true\",\"timeout\":\"100\"}}}}}" + "{\"name\":\"<b>NF_component_name</b>\",\"deployParametersMappingRuleProfile\":{\"helmMappingRuleProfile\":{\"options\":{\"installOptions\":{\"atomic\":\"false\",\"wait\":\"true\",\"timeout\":\"100\"},\"upgradeOptions\":{\"atomic\":\"true\",\"wait\":\"true\",\"timeout\":\"4\"}}}}}" ] </pre> -The chart name is defined in the NFDV. +The component name is defined in the NFDV: +<pre> + networkFunctionTemplate: { + nfviType: 'AzureArcKubernetes' + networkFunctionApplications: [ + { + artifactType: 'HelmPackage' + <b>name: 'fed-crds'</b> + dependsOnProfile: null + artifactProfile: { + artifactStore: { + id: acrArtifactStore.id + } +</pre> ## Clean up considerations |
operator-service-manager | How To Create Custom Role | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-create-custom-role.md | Sample JSON: ## Next steps -- [Assign a custom role](how-to-assign-custom-role.md)+- [Assign a custom role](how-to-assign-custom-role.md) |
operator-service-manager | How To Use Azure Operator Service Manager Cli Extension | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/how-to-use-azure-operator-service-manager-cli-extension.md | |
operator-service-manager | Publisher Resource Preview Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/publisher-resource-preview-management.md | Then issue the get command to check that the versionState change is complete. ```azurecli az rest --method get --uri {nsdvresourceId}?api-version=2023-09-01-``` +``` |
operator-service-manager | Quickstart Containerized Network Function Operator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-operator.md | Completion of all the tasks outlined in these articles ensure that the Site Netw ## Next steps -- [Quickstart: Create a Containerized Network Functions (CNF) Site with Nginx](quickstart-containerized-network-function-create-site.md)+- [Quickstart: Create a Containerized Network Functions (CNF) Site with Nginx](quickstart-containerized-network-function-create-site.md) |
operator-service-manager | Quickstart Containerized Network Function Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-containerized-network-function-prerequisites.md | description: Use this Quickstart to install and configure the necessary prerequi + Last updated 09/08/2023 |
operator-service-manager | Quickstart Publish Virtualized Network Function Definition | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-publish-virtualized-network-function-definition.md | Here's a sample input-vnf-nfd.json file: }, "vhd": { "file_path": "livecd.ubuntu-cpc.azure.vhd", - "version": "1-0-0" + "version": "1-0-0", + "image_disk_size_GB": 30, + "image_hyper_v_generation": "V1", + "image_api_version": "2023-03-01" } - } ``` Here's a sample input-vnf-nfd.json file: | | *file_path*: Optional. File path of the artifact you wish to upload from your local disk. Delete if not required. Relative paths are relative to the configuration file. On Windows, escape any backslash with another backslash. | | *version*: Version of the artifact. For ARM templates, version must be in format A.B.C. **vhd** |*artifact_name*: Name of the artifact.-| |*file_path*: Optional. File path of the artifact you wish to upload from your local disk. Delete if not required. Relative paths are relative to the configuration file. On Windows, escape any backslash with another backslash. +| |*file_path*: Optional. File path of the artifact you wish to upload from your local disk. Delete if not required. Relative paths are relative to the configuration file. On Windows, escape any backslash with another backslash. | |*blob_sas_url*: Optional. SAS URL of the blob artifact you wish to copy to your Artifact Store. Delete if not required.-| |*version*: Version of the artifact. For VHDs, version must be in format A-B-C. +| |*version*: Version of the artifact. For VHDs, version must be in format A-B-C. +| |*image_disk_size_GB*: Optional. Specifies the size of empty data disks in gigabytes. This value cannot be larger than 1023 GB. Delete if not required. +| |*image_hyper_v_generation*: Optional. Specifies the HyperVGenerationType of the VirtualMachine created from the image. Valid values are V1 and V2. V1 is the default if not specified. Delete if not required. +| |*image_api_version*: Optional. The ARM API version used to create the Microsoft.Compute/images resource. Delete if not required. + +> [!Note] +> When utilizing the file_path option, it's essential to have a reliable internet connection with sufficient bandwidth, as the upload duration may vary depending on the file size. > [!IMPORTANT] > Each variable described in the previous table must be unique. For instance, the resource group name cannot already exist, and publisher and artifact store names must be unique in the region. |
operator-service-manager | Quickstart Virtualized Network Function Prerequisites | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/quickstart-virtualized-network-function-prerequisites.md | description: Use this Quickstart to install and configure the necessary prerequi + Last updated 10/19/2023 |
partner-solutions | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/apache-kafka-confluent-cloud/troubleshoot.md | If the offer isn't displayed, contact [Confluent support](https://support.conflu ## Purchase errors -* Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription. - Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md). --* The EA subscription doesn't allow Marketplace purchases. -- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Confluent support](https://support.confluent.io). +If those options don't solve the problem, contact [Confluent support](https://support.confluent.io). ## Conflict error |
partner-solutions | Astronomer Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/astronomer/astronomer-troubleshoot.md | Here are some troubleshooting options to consider: The Astro resource can only be created by users who have _Owner_ or _Contributor_ access on the Azure subscription. Ensure you have the appropriate access before setting up this integration. -### Purchase errors +### Marketplace purchase errors -#### Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription -Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md). --#### The EA subscription doesn't allow Marketplace purchases --Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Astronomer support](https://support.astronomer.io). +If those options don't solve the problem, contact [Astronomer support](https://support.astronomer.io). ### DeploymentFailed error |
partner-solutions | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/datadog/troubleshoot.md | Title: Troubleshooting for Datadog description: This article provides information about troubleshooting for Datadog on Azure.- - Last updated 01/06/2023 Last updated 01/06/2023 This document contains information about troubleshooting your solutions that use Datadog - An Azure Native ISV Service. ## Marketplace Purchase errors -* Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription. - Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md). --* The EA subscription doesn't allow Marketplace purchases. -- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support). +If those options don't solve the problem, contact [Datadog support](https://www.datadoghq.com/support). ## Unable to create Datadog - An Azure Native ISV Service resource To set up the Azure Datadog integration, you must have **Owner** access on the A The following image shows the correct values. - :::image type="content" source="media/troubleshoot/troubleshooting.png" alt-text="Check SAML settings for the Datadog application in Azure A D." border="true"::: + :::image type="content" source="media/troubleshoot/troubleshooting.png" alt-text="Check SAML settings for the Datadog application in Microsoft Entra ID." border="true"::: - **Guest users invited to the tenant are unable to access Single sign-on** - Some users have two email addresses in Azure portal. Typically, one email is the user principal name (UPN) and the other email is an alternative email. To verify the resource has the correct role assignment, open the Azure portal an ## Datadog agent installation fails -The Azure Datadog integration provides you the ability to install Datadog agent on a virtual machine or app service. The API key selected as **Default Key** in the API Keys screen is used to configure the Datadog agent. If a default key isn't selected, the Datadog agent installation fails. +The Azure Datadog integration provides you with the ability to install Datadog agent on a virtual machine or app service. The API key selected as **Default Key** in the API Keys screen is used to configure the Datadog agent. If a default key isn't selected, the Datadog agent installation fails. -If the Datadog agent has been configured with an incorrect key, navigate to the API keys screen and change the **Default Key**. You'll have to uninstall the Datadog agent and reinstall it to configure the virtual machine with the new API keys. +If the Datadog agent is configured with an incorrect key, navigate to the API keys screen and change the **Default Key**. You must uninstall the Datadog agent and reinstall it to configure the virtual machine with the new API keys. ## Next steps If the Datadog agent has been configured with the > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Microsoft.Datadog%2Fmonitors) > [!div class="nextstepaction"]- > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadog1591740804488.dd_liftr_v2?tab=Overview) + > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/datadog1591740804488.dd_liftr_v2?tab=Overview) |
partner-solutions | Dynatrace Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md | Last updated 02/02/2023 # Troubleshoot Azure Native Dynatrace Service -This article describes how to contact support when working with an Azure Native Dynatrace Service resource. Before contacting support, see [Fix common errors](#fix-common-errors). +In this article, you learn how to contact support when working with an Azure Native Dynatrace Service resource. Before contacting support, see [Fix common errors](#fix-common-errors). ## Contact support To contact support about the Azure Native Dynatrace Service, select **New Suppor This document contains information about troubleshooting your solutions that use Dynatrace. -### Purchase error +### Marketplace purchase errors -- Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.-- - Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md). --- The EA subscription doesn't allow _Marketplace_ purchases.- - Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). If those options don't solve the problem, contact [Dynatrace support](https://support.dynatrace.com/). + +If those options don't solve the problem, contact [Dynatrace support](https://support.dynatrace.com/). ### Unable to create Dynatrace resource This document contains information about troubleshooting your solutions that use - Resource doesn't support sending logs. 
Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](../../azure-monitor/essentials/resource-logs-categories.md). -- Limit of five diagnostic settings reached. This will display the message of Limit reached against the resource. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md?tabs=portal) You can go ahead and remove the other destinations to make sure each resource is sending data to at max five destinations.+- Limit of five diagnostic settings reached. This displays the message "Limit reached" against the resource. Each Azure resource can have a maximum of five diagnostic settings. For more information, see [diagnostic settings](../../azure-monitor/essentials/diagnostic-settings.md?tabs=portal). You can remove the other destinations to make sure each resource is sending data to at most five destinations. - Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings. This document contains information about troubleshooting your solutions that use ### Metrics checkbox disabled -- To collect metrics you must have owner permission on the subscription. If you are a contributor, refer to the contributor guide mentioned in [Configure metrics and logs](dynatrace-create.md#configure-metrics-and-logs).+- To collect metrics, you must have owner permission on the subscription. If you're a contributor, refer to the contributor guide mentioned in [Configure metrics and logs](dynatrace-create.md#configure-metrics-and-logs). ### Free trial errors |
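The five-setting limit called out above can be checked from the command line before adding a partner destination. A minimal sketch using the Azure CLI `az monitor diagnostic-settings` commands; the resource ID and setting name are placeholders you'd substitute:

```shell
# List existing diagnostic settings on a resource (each resource allows at most five).
# RESOURCE_ID is a placeholder; substitute the full ARM ID of your resource.
RESOURCE_ID="/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Web/sites/<app>"

az monitor diagnostic-settings list --resource "$RESOURCE_ID" --output table

# If five settings already exist, remove an unused one before adding a new destination:
# az monitor diagnostic-settings delete --resource "$RESOURCE_ID" --name <setting-name>
```

This requires an authenticated Azure CLI session with read access to the resource.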
partner-solutions | Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/elastic/troubleshoot.md | Only users who have *Owner* or *Contributor* access on the Azure subscription ca - Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings. -## Purchase errors +## Marketplace purchase errors -- Purchase fails because a valid credit card isn't connected to the Azure subscription or a payment method isn't associated with the subscription.-- Use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [updating the credit and payment method](../../cost-management-billing/manage/change-credit-card.md). --- The EA subscription doesn't allow Marketplace purchases.-- Use a different subscription. Or, check if your EA subscription is enabled for Marketplace purchase. For more information, see [Enable Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). ## Get support |
partner-solutions | New Relic Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-troubleshoot.md | Try the troubleshooting information in this article first. If that doesn't work, ## Fix common errors -### Purchase fails +### Marketplace purchase errors -A purchase can fail because a valid credit card isn't connected to the Azure subscription, or because a payment method isn't associated with the subscription. To solve this problem, use a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Add, update, or delete a payment method](../../cost-management-billing/manage/change-credit-card.md). --A purchase can also fail because an Enterprise Agreement (EA) subscription doesn't allow Azure Marketplace purchases. Try to use a different subscription. Or, check if your EA subscription is enabled for Azure Marketplace purchases. For more information, see [Enabling Azure Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). ### You can't create a New Relic resource |
partner-solutions | Nginx Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-troubleshoot.md | You can get support for your NGINXaaS deployment through a **New Support request ## Troubleshooting +### Marketplace purchase errors ++ ### Unable to create an NGINXaaS resource as not a subscription owner The NGINXaaS integration can only be set up by users who have Owner access on the Azure subscription. Ensure you have the appropriate Owner access before starting to set up this integration. |
partner-solutions | Palo Alto Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/palo-alto/palo-alto-troubleshoot.md | Last updated 07/10/2023 # Troubleshooting Cloud Next-Generation Firewall by Palo Alto Networks - an Azure Native ISV Service -You can get support for your Palo Alto deployment through a **New Support request**. The procedure for creating the request is here. In addition, we have included troubleshooting for problems you might experience in creating and using a Palo Alto deployment. +You can get support for your Palo Alto deployment through a **New Support request**. The procedure for creating the request is here. In addition, you can find troubleshooting for problems you might experience in creating and using a Palo Alto deployment. ## Getting support You can get support for your Palo Alto deployment through a **New Support reques ## Troubleshooting +### Marketplace purchase errors ++ ### Unable to create a PCloud NGFW by Palo Alto Networks as not a subscription owner -Only users who have Owner access can setup a Palo Alto resource on the Azure subscription. Ensure you have the appropriate Owner access before starting to create a Palo Alto resource. +Only users who have Owner access can set up a Palo Alto resource on the Azure subscription. Ensure you have the appropriate Owner access before starting to create a Palo Alto resource. ## Next steps |
partner-solutions | Qumulo How To Manage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-how-to-manage.md | description: This article describes how to manage Azure Native Qumulo Scalable F Last updated 11/15/2023++ - ignite-2023 |
partner-solutions | Qumulo Troubleshoot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-troubleshoot.md | description: This article provides information about troubleshooting Azure Nativ Last updated 11/15/2023-++ - ignite-2023 # Troubleshoot Azure Native Qumulo Scalable File Service Try the troubleshooting information in this article first. If that doesn't work, :::image type="content" source="media/qumulo-troubleshooting/qumulo-support-request.png" alt-text="Screenshot that shows a request form for Qumulo support."::: -## You got a purchase error related to a payment method --A purchase can fail because a valid credit card is not connected to the Azure subscription, or because a payment method is not associated with the subscription. --Try using a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Update the credit and payment method](../../cost-management-billing/manage/change-credit-card.md). --## You got a purchase error related to an Enterprise Agreement --Some Microsoft Enterprise Agreement (EA) subscriptions don't allow Azure Marketplace purchases. +## Purchase errors -Try using a different subscription, or [enable your subscription for Azure Marketplace purchases](../../cost-management-billing/manage/ea-azure-marketplace.md#enabling-azure-marketplace-purchases). ## You can't create a resource For successful creation of a Qumulo service, custom role-based access control (R > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Qumulo.Storage%2FfileSystems) > [!div class="nextstepaction"]- > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview) + > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/qumulo1584033880660.qumulo-saas-mpp?tab=Overview) |
partner-solutions | Qumulo Vendor Neutral Archive | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-vendor-neutral-archive.md | description: How to use PACS Vendor Neutral archive with Azure Native Qumulo Sca Last updated 11/15/2023-++ - ignite-2023 # What is Azure Native Qumulo for picture archiving and communication system vendor neutral archive? |
partner-solutions | Qumulo Video Editing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-video-editing.md | description: In this article, learn about the use case for Azure Native Qumulo S Last updated 11/15/2023++ - ignite-2023 |
partner-solutions | Qumulo Virtual Desktop | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-virtual-desktop.md | description: In this article, learn about the use case for Azure Native Qumulo S Last updated 11/15/2023-++ - ignite-2023 # What is Azure Native Qumulo Scalable File Service with a virtual desktop? |
postgresql | Concepts Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-limits.md | When using PostgreSQL for a busy database with a large number of concurrent conn - When the storage usage reaches 95% or if the available capacity is less than 5 GiB, whichever is more, the server is automatically switched to **read-only mode** to avoid errors associated with disk-full situations. In rare cases, if the rate of data growth outpaces the time it takes to switch to read-only mode, your server may still run out of storage. - We recommend setting alert rules for `storage used` or `storage percent` when they exceed certain thresholds so that you can proactively take action, such as increasing the storage size. For example, you can set an alert if the storage percent exceeds 80% usage. - If you're using logical replication, you must drop the logical replication slot in the primary server if the corresponding subscriber no longer exists. Otherwise, the WAL files start to accumulate in the primary, filling up the storage. If the storage usage exceeds a certain threshold and the logical replication slot isn't in use (due to an unavailable subscriber), Flexible Server automatically drops that unused logical replication slot. That action releases accumulated WAL files and avoids your server becoming unavailable due to the storage filling up. +- We don't support the creation of tablespaces, so if you're creating a database, don't provide a tablespace name. PostgreSQL will use the default one that is inherited from the template database. It's unsafe to provide a tablespace like the temporary one because we can't ensure that such objects will remain persistent after server restarts, HA failovers, etc. ### Networking
If you must use one of these versions, you need to use the [Single Server](../overview-single-server.md) option, which supports the older major versions 9.5, 9.6 and 10. - Flexible Server supports all `contrib` extensions and more. Please refer to [PostgreSQL extensions](/azure/postgresql/flexible-server/concepts-extensions). - Built-in PgBouncer connection pooler is currently not available for Burstable servers.-- SCRAM authentication isn't supported with connectivity using built-in PgBouncer. ### Stop/start operation When using PostgreSQL for a busy database with a large number of concurrent conn - When using the Point-in-time-Restore feature, the new server is created with the same compute and storage configurations as the server it is based on. - VNET based database servers are restored into the same VNET when you restore from a backup. - The new server created during a restore doesn't have the firewall rules that existed on the original server. Firewall rules need to be created separately for the new server.-- Restoring a deleted server isn't supported.-- Cross region restore isn't supported. - Restore to a different subscription isn't supported, but as a workaround, you can restore the server within the same subscription and then migrate the restored server to a different subscription. ## Next steps |
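The logical replication caveat above (unused slots retaining WAL) can also be checked manually before the automatic cleanup kicks in. A sketch using `psql`; the connection string and slot name are placeholders, while `pg_replication_slots` and `pg_drop_replication_slot` are standard PostgreSQL catalog objects:

```shell
# Show logical replication slots with no active consumer and the WAL each retains.
# The connection string is a placeholder for your flexible server.
psql "host=<server>.postgres.database.azure.com user=<admin> dbname=postgres sslmode=require" <<'SQL'
SELECT slot_name, active,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots
WHERE NOT active;
-- Drop a slot whose subscriber no longer exists, releasing the retained WAL:
-- SELECT pg_drop_replication_slot('<slot_name>');
SQL
```

Run this as a user with sufficient privileges; dropping a slot that an active subscriber still needs breaks that subscription.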
postgresql | How To Manage Virtual Network Private Endpoint Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-manage-virtual-network-private-endpoint-portal.md | To create an Azure Database for PostgreSQL server, take the following steps: 5. Select **Next:Networking** 6. Select the **"Public access (allowed IP addresses) and Private endpoint"** checkbox as the connectivity method. 7. Select **"Add Private Endpoint"** in the Private Endpoint section+ :::image type="content" source="./media/how-to-manage-virtual-network-private-endpoint-portal/private-endpoint-selection.png" alt-text="Screenshot of Add Private Endpoint button in Private Endpoint Section in Networking blade of Azure Portal" ::: 8. In the **Create Private Endpoint** screen, enter the following: | **Setting** | **Value**| |
postgresql | How To Server Logs Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-server-logs-portal.md | + + Title: 'How to enable and download server logs for Azure Database for PostgreSQL - Flexible Server' +description: This article describes how to download server logs using Azure portal. +++++ Last updated : 11/21/2023+++# Enable, list and download server logs for Azure Database for PostgreSQL - Flexible Server +++You can use server logs to help monitor and troubleshoot an instance of Azure Database for PostgreSQL - Flexible Server, and to gain detailed insights into the activities that have run on your servers. ++By default, the server logs feature in Azure Database for PostgreSQL - Flexible Server is disabled. However, after you enable the feature, a flexible server starts capturing events of the selected log type and writes them to a file. You can then use the Azure portal or the Azure CLI to download the files to assist with your troubleshooting efforts. This article explains how to enable the server logs feature in Azure Database for PostgreSQL - Flexible Server and download server log files. It also provides information about how to disable the feature. ++In this tutorial, you'll learn how to: +- Enable the server logs feature. +- Disable the server logs feature. +- Download server log files. ++## Prerequisites ++To complete this tutorial, you need an existing Azure Database for PostgreSQL - Flexible Server. If you need to create a new server, see [Create an Azure Database for PostgreSQL - Flexible Server](./quickstart-create-server-portal.md). ++## Enable Server logs ++To enable the server logs feature, perform the following steps. ++1. In the [Azure portal](https://portal.azure.com), select your PostgreSQL flexible server. ++2. On the left pane, under **Monitoring**, select **Server logs**. 
++ :::image type="content" source="./media/how-to-server-logs-portal/1-how-to-server-log.png" alt-text="Screenshot showing Azure Database for PostgreSQL - Server Logs."::: ++3. To enable server logs, under **Server logs**, select **Enable**. ++ :::image type="content" source="./media/how-to-server-logs-portal/2-how-to-server-log.png" alt-text="Screenshot showing Enable Server Logs."::: ++4. To configure the retention period (in days), use the slider. The minimum retention is 1 day and the maximum is 7 days. +++## Download Server logs ++To download server logs, perform the following steps. ++> [!Note] +> After enabling logs, the log files are available to download after a few minutes. ++1. Under **Name**, select the log file you want to download, and then, under **Action**, select **Download**. ++ :::image type="content" source="./media/how-to-server-logs-portal/3-how-to-server-log.png" alt-text="Screenshot showing Server Logs - Download."::: ++2. To download multiple log files at one time, under **Name**, select the files you want to download, and then above **Name**, select **Download**. ++ :::image type="content" source="./media/how-to-server-logs-portal/4-how-to-server-log.png" alt-text="Screenshot showing server Logs - Download all."::: +++## Disable Server Logs ++1. In the Azure portal, select **Server logs** from the **Monitoring** pane. ++2. To disable server logs to file, clear the **Enable** checkbox. (This setting disables logging for all the available log_types.) ++ :::image type="content" source="./media/how-to-server-logs-portal/5-how-to-server-log.png" alt-text="Screenshot showing server Logs - Disable."::: ++3. Select **Save**. |
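The portal steps in the article above also have a command-line equivalent. A sketch assuming the `az postgres flexible-server server-logs` command group available in recent Azure CLI versions; resource group, server and file names are placeholders:

```shell
# List the available server log files for a flexible server, then download one by name.
# <rg>, <server> and <logfile-name> are placeholders.
az postgres flexible-server server-logs list \
  --resource-group <rg> --server-name <server> --output table

az postgres flexible-server server-logs download \
  --resource-group <rg> --server-name <server> --name <logfile-name>
```

Both commands require an authenticated Azure CLI session, and logs must already be enabled on the server.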
postgresql | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/policy-reference.md | |
postgresql | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/single-server/security-controls-policy.md | |
private-5g-core | Azure Stack Edge Packet Core Compatibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/azure-stack-edge-packet-core-compatibility.md | The following table provides information on which versions of the ASE device are | Packet core version | ASE Pro GPU compatible versions | ASE Pro 2 compatible versions | |--|--|--|+| 2310 | 2309 | 2309 | | 2308 | 2303, 2309 | 2303, 2309 | | 2307 | 2303 | 2303 | | 2306 | 2303 | 2303 | |
private-5g-core | Data Plane Packet Capture | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/data-plane-packet-capture.md | |
reliability | Reliability Azure Container Apps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/reliability-azure-container-apps.md | |
role-based-access-control | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/policy-reference.md | Title: Built-in policy definitions for Azure RBAC description: Lists Azure Policy built-in policy definitions for Azure RBAC. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
role-based-access-control | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure RBAC description: Lists Azure Policy Regulatory Compliance controls available for Azure role-based access control (Azure RBAC). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
sap | Extensibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/automation/extensibility.md | Last updated 10/29/2023 + # Extending the SAP Deployment Automation Framework |
sap | Dbms Guide Oracle | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-oracle.md | There are two recommended storage deployment patterns for SAP on Oracle on Azure Customers currently running Oracle databases on EXT4 or XFS file systems with LVM are encouraged to move to ASM. There are considerable performance, administration and reliability advantages to running on ASM compared to LVM. ASM reduces complexity, improves supportability and makes administration tasks simpler. This documentation contains links for Oracle DBAs to learn how to install and manage ASM. +Azure provides [multiple storage solutions](../../virtual-machines/disks-types.md). The table below details the support status. ++| Storage type | Oracle support | Sector Size | Oracle Linux 8.x or higher | Windows Server 2019 | +|--|--|--|--|--| +| **Block Storage Type** | | | | | +| Premium SSD | Supported | 512e | ASM Recommended. LVM Supported | No support for ASM on Windows | +| Premium SSD v2 | Supported | 4K Native | ASM Recommended. LVM Supported | No support for ASM on Windows. Change Log File disks from 4K Native to 512e | +| Standard SSD | Not supported | | | | +| Standard HDD | Not supported | | | | +| Ultra disk | Supported | 4K Native | ASM Recommended. LVM Supported | No support for ASM on Windows. Change Log File disks from 4K Native to 512e | +| | | | | | +| **Network Storage Types** | | | | | +| Azure NetApp Service (ANF) | Supported | - | Oracle dNFS Required | Not supported | +| Azure Files NFS | Not supported | | | | +| Azure files SMB | Not supported | | | | ++The following additional considerations apply: +1. No support for DIRECTIO with 4K Native sector size. **Do not set FILESYSTEMIO_OPTIONS for LVM configurations** +2. Oracle 19c and higher fully supports 4K Native sector size with both ASM and LVM +3. Oracle 19c and higher on Linux: when moving from 512e storage to 4K Native storage, log sector sizes must be changed +4. 
To migrate from 512/512e sector size to 4K Native, review (Doc ID 1133713.1), section "Offline Migration to 4Kb Sector Disks" +5. No support for ASM on Windows platforms +6. No support for 4K Native sector size for Log volume on Windows platforms. SSDv2 and Ultra Disk must be changed to 512e via the "Edit Disk" pencil icon in the Azure portal +7. 4K Native sector size is supported on Data volume for Windows platforms only +8. It's recommended to review these MOS articles: + - Oracle Linux: File System's Buffer Cache versus Direct I/O (Doc ID 462072.1) + - Supporting 4K Sector Disks (Doc ID 1133713.1) + - Using 4k Redo Logs on Flash, 4k-Disk and SSD-based Storage (Doc ID 1681266.1) + - Things To Consider For Setting filesystemio_options And disk_asynch_io (Doc ID 1987437.1) ++It's recommended to use Oracle ASM on Linux with ASMLib. Performance, administration, support and configuration are optimized with this deployment pattern. Oracle ASM and Oracle dNFS set the correct parameters or bypass parameters (such as FILESYSTEMIO_OPTIONS), and therefore deliver better performance and reliability. ++ ### Oracle Automatic Storage Management (ASM) Checklist for Oracle Automatic Storage Management: Checklist for Oracle Automatic Storage Management: 1. All SAP on Oracle on Azure systems are running **ASM** including Development, QAS and Production. Small, Medium and Large databases 2. [**ASMLib**](https://docs.oracle.com/en/database/oracle/oracle-database/19/ladbi/about-oracle-asm-with-oracle-asmlib.html) is used and not UDEV. UDEV is required for multiple SANs, a scenario that doesn't exist on Azure-3. ASM should be configured for **External Redundancy**. Azure Premium SSD storage has built in triple redundancy. Azure Premium SSD matches the reliability and integrity of any other storage solution. For optional safety customers can consider **Normal Redundancy** for the Log Disk Group +3. ASM should be configured for **External Redundancy**. 
Azure Premium SSD storage provides triple redundancy. Azure Premium SSD matches the reliability and integrity of any other storage solution. For optional safety customers can consider **Normal Redundancy** for the Log Disk Group 4. No Mirror Log is required for ASM [888626 - Redo log layout for high-end systems](https://launchpad.support.sap.com/#/notes/888626) 5. ASM Disk Groups configured as per Variant 1, 2 or 3 below 6. ASM Allocation Unit size = 4MB (default). VLDB OLAP systems such as BW may benefit from larger ASM Allocation Unit size. Change only after confirming with Oracle support Disk performance can be monitored from inside Oracle Enterprise Manager and via - [Using Views to Display Oracle ASM Information](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/views-asm-info.html#GUID-23E1F0D8-ECF5-4A5A-8C9C-11230D2B4AD4) - [ASMCMD Disk Group Management Commands (oracle.com)](https://docs.oracle.com/en/database/oracle/oracle-database/19/ostmg/asmcmd-diskgroup-commands.html#GUID-55F7A91D-2197-467C-9847-82A3308F0392) -OS level monitoring tools can't monitor ASM disks as there is no recognizable file system. Freespace monitoring must be done from within Oracle. +OS level monitoring tools can't monitor ASM disks as there's no recognizable file system. Freespace monitoring must be done from within Oracle. ### Training Resources on Oracle Automatic Storage Management (ASM) or other backup tools. ## SAP on Oracle on Azure with LVM -ASM is the default recommendation from Oracle for all SAP systems of any size on Azure. Performance, Reliability and Support are better for customers using ASM. Oracle provides documentation and training for DBAs to transition to ASM and every customer who has migrated to ASM has been pleased with the benefits. In cases where the Oracle DBA team doesn't follow the recommendation from Oracle, Microsoft and SAP to use ASM the following LVM configuration should be used. 
+ASM is the default recommendation from Oracle for all SAP systems of any size on Azure. Performance, Reliability and Support are better for customers using ASM. Oracle provides documentation and training for DBAs to transition to ASM and every customer who migrated to ASM has been pleased with the benefits. In cases where the Oracle DBA team doesn't follow the recommendation from Oracle, Microsoft and SAP to use ASM, the following LVM configuration should be used. Note that when creating the LVM, the `-i` option must be used to evenly distribute data across the number of disks in the LVM group. |
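The `-i` striping note above can be sketched as follows, assuming four attached Azure data disks; the device paths, volume group and logical volume names are illustrative placeholders:

```shell
# Stripe an LVM volume for Oracle data files across four Azure data disks.
# /dev/sdc through /dev/sdf, vg_oracle_data and lv_data are placeholders.
pvcreate /dev/sdc /dev/sdd /dev/sde /dev/sdf
vgcreate vg_oracle_data /dev/sdc /dev/sdd /dev/sde /dev/sdf

# -i 4 stripes extents across all four physical volumes, so I/O is
# distributed evenly; -I sets the stripe size in KiB.
lvcreate -i 4 -I 256 -l 100%FREE -n lv_data vg_oracle_data
mkfs.xfs /dev/vg_oracle_data/lv_data
```

These commands require root privileges on the VM and destroy any existing data on the listed disks.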
sap | Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/get-started.md | In the SAP workload documentation space, you can find the following areas: ## Change Log +- November 20, 2023: Add storage configuration for Mv3 medium memory VMs into the documents [SAP HANA Azure virtual machine Premium SSD storage configurations](./hana-vm-premium-ssd-v1.md), [SAP HANA Azure virtual machine Premium SSD v2 storage configurations](./hana-vm-premium-ssd-v2.md), and [SAP HANA Azure virtual machine Ultra Disk storage configurations](./hana-vm-ultra-disk.md) +- November 20, 2023: Add supported storage matrix into the document [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) - November 09, 2023: Change in [SAP HANA infrastructure configurations and operations on Azure](./hana-vm-operations.md) to align multiple vNIC instructions with [planning guide](./planning-guide.md) and add /hana/shared on NFS on Azure Files - September 26, 2023: Change in [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md) to add instructions for deploying /hana/shared (only) on NFS on Azure Files - September 12, 2023: Adding support to handle Azure scheduled events for [Pacemaker clusters running on RHEL](./high-availability-guide-rhel-pacemaker.md). 
In the SAP workload documentation space, you can find the following areas: - April 27, 2021: Added new Msv2, Mdsv2 VMs into HANA storage configuration in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - April 27, 2021: Added requirement for using same storage types in HANA System Replication across all VMs of HSR configuration in [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) - April 27, 2021: Added requirement for using same storage types in DBMS replication scenarios across all VMs of DBMS high availability replication configurations in [Azure Storage types for SAP workload](./planning-guide-storage.md)-- April 23, 2021: Added section to configure private link for Azure database for MySQL and some minor changes in [SAP BusinessObjects BI platform deployment guide for linux on Azure](businessobjects-deployment-guide-linux.md)-- April 22, 2021: Release of SAP BusinessObjects BI Platform for Windows on Azure documentation, [SAP BusinessObjects BI platform deployment guide for Windows on Azure](businessobjects-deployment-guide-windows.md)-- April 21, 2021: Add explanation why HCMT/HWCCT storage tests on M32ts and M32ls might fall short of HANA KPIs when enabling read cache for the Premium storage disks in article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)-- April 20, 2021: Clarify storage block sizes for IBM Db2 with different Azure block storage in article [IBM Db2 Azure Virtual Machines DBMS deployment for SAP workload](./dbms-guide-ibm.md)-- April 12, 2021: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md), [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) and [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) to add configuration instructions for SAP HANA system replication Python hook -- April 12, 2021: Replaced backup 
documentation for SAP HANA by documents of [SAP HANA backup/restore with Azure Backup service](../../backup/sap-hana-db-about.md) -- April 12, 2021: Release of [SAP HANA scale-out HSR with Pacemaker on Azure VMs on SLES](./sap-hana-high-availability-scale-out-hsr-suse.md) configuration guide-- April 07, 2021: Clarified support for SQL Server multi-instance and multi-database support in [SQL Server Azure Virtual Machines DBMS deployment for SAP NetWeaver](./dbms-guide-sqlserver.md)-- April 07, 2021: Added information related to secondary IP addresses in [Azure Virtual Machines planning and implementation for SAP NetWeaver](./planning-guide.md)-- April 07, 2021: added support for Oracle DBMS support on ANF in [Azure Storage types for SAP workload](./planning-guide-storage.md)-- March 17, 2021: Change in [HA for SAP HANA on Azure VMs on SLES](./sap-hana-high-availability.md), [HA for SAP HANA on Azure VMs on RHEL](./sap-hana-high-availability-rhel.md) and [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md) to add instructions for HANA Active/Read-enabled system replication in Pacemaker cluster-- March 15, 2021: Change in [SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-guide-wsfc-file-share.md),[Install SAP ASCS/SCS instance with WSFC and file share](./sap-high-availability-installation-wsfc-file-share.md) and [SAP ASCS/SCS multi-SID with WSFC and file share](./sap-ascs-ha-multi-sid-wsfc-file-share.md) to clarify that the SAP ASCS/SCS instances and the SOFS share must be deployed in separate clusters-- March 03, 2021: Change in [HA guide for SAP ASCS/SCS with WSFC and Azure NetApp Files(SMB)](./high-availability-guide-windows-netapp-files-smb.md) to add a cautionary statement that elevated privileges are required for the user running SWPM, during the installation of the SAP system-- February 11, 2021: Changes in [High availability of IBM Db2 LUW on Azure VMs on Red Hat Enterprise Linux 
Server](./high-availability-guide-rhel-ibm-db2-luw.md) to amend pacemaker cluster commands for RHEL 8.x-- February 03, 2021: Change in [Setting up Pacemaker on RHEL in Azure](./high-availability-guide-rhel-pacemaker.md) to update pcmk_host_map in the `stonith create` command-- February 03, 2021: Change in [Setting up Pacemaker on SLES in Azure](./high-availability-guide-suse-pacemaker.md) to add pcmk_host_map in the `stonith create` command -- February 03, 2021: More details on I/O scheduler settings for SUSE in article [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md)-- February 01, 2021: Change in [HA for SAP HANA scale-up with ANF on RHEL](./sap-hana-high-availability-netapp-files-red-hat.md), [SAP HANA scale-out HSR with Pacemaker on Azure VMs on RHEL](./sap-hana-high-availability-scale-out-hsr-rhel.md), [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md) to add a link to [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)-- January 23, 2021: Introduce the functionality of HANA data volume partitioning as functionality to stripe I/O operations against HANA data files across different Azure disks or NFS shares without using a disk volume manager in articles [SAP HANA Azure virtual machine storage configurations](./hana-vm-operations-storage.md) and [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)-- January 18, 2021: Added support of Azure net Apps Files based NFS for Oracle in [Azure Virtual Machines Oracle DBMS deployment for SAP workload](./dbms-guide-oracle.md) and adjusting decimals in table in document [NFS v4.1 volumes on Azure NetApp Files for SAP HANA](./hana-vm-operations-netapp.md)-- January 11, 2021: Minor changes in [HA for SAP NW on Azure VMs on RHEL 
for SAP applications](./high-availability-guide-rhel.md), [HA for SAP NW on Azure VMs on RHEL with ANF](./high-availability-guide-rhel-netapp-files.md) and [HA for SAP NW on Azure VMs on RHEL multi-SID guide](./high-availability-guide-rhel-multi-sid.md) to adjust commands to work for both RHEL8 and RHEL7, and ENSA1 and ENSA2-- January 05, 2021: Changes in [SAP HANA scale-out with standby node on Azure VMs with ANF on SLES](./sap-hana-scale-out-standby-netapp-files-suse.md) and [SAP HANA scale-out with standby node on Azure VMs with ANF on RHEL](./sap-hana-scale-out-standby-netapp-files-rhel.md), revising the recommended configuration to allow SAP Host Agent to manage the local port range -- January 04, 2021: Added new Azure regions supported by HLI into [What is SAP HANA on Azure (Large Instances)](../large-instances/hana-overview-architecture.md)+ |
sap | Hana Vm Premium Ssd V1 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v1.md | Configuration for SAP **/hana/data** volume: | M32ts | 192 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 | | M32ls | 256 GiB | 500 MBps | 4 x P6 | 200 MBps | 680 MBps | 960 | 14,000 | | M64ls | 512 GiB | 1,000 MBps | 4 x P10 | 400 MBps | 680 MBps | 2,000 | 14,000 |-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 | -| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 | -| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 | -| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 | -| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 | -| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting | -| M192ims, M192idms_v2 | 4,096 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting | +| M32(d)ms_v2 | 875 GiB | 500 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 | +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 | +| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 4 x P15 | 500 MBps | 680 MBps | 4,400 | 14,000 | +| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200 | 14,000 | +| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 | +| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 | +| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 4 x P20 | 600 MBps | 680 MBps | 9,200| 14,000 | +| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting | +| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 4 x P30 | 800 MBps | no 
bursting | 20,000| no bursting | +| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting | +| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000 | no bursting | | M208s_v2 | 2,850 GiB | 1,000 MBps | 4 x P30 | 800 MBps | no bursting | 20,000| no bursting | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting | | M416s_v2 | 5,700 GiB | 2,000 MBps | 4 x P40 | 1,000 MBps | no bursting | 30,000 | no bursting | For the **/hana/log** volume, the configuration would look like: | M32ts | 192 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 | | M32ls | 256 GiB | 500 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 | | M64ls | 512 GiB | 1,000 MBps | 3 x P10 | 300 MBps | 510 MBps | 1,500 | 10,500 | -| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | -| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | -| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | -| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500| -| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500| -| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | -| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | +| M32(d)ms_v2 | 875 GiB | 500 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | +| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | +| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | +| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 3 x P15 | 375 
MBps | 510 MBps | 3,300 | 10,500| +| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500| +| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500| +| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | +| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | +| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | | M208s_v2 | 2,850 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | | M416s_v2 | 5,700 GiB | 2,000 MBps | 3 x P15 | 375 MBps | 510 MBps | 3,300 | 10,500 | For the other volumes, the configuration would look like: | M32ls | 256 GiB | 500 MBps | 1 x P15 | 1 x P6 | 1 x P6 | | M64ls | 512 GiB | 1000 MBps | 1 x P20 | 1 x P6 | 1 x P6 | | M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 1 x P30 | 1 x P6 | 1 x P6 |-| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 | -| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 | -| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | -| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 1 x P30 | 1 x P6 | 1 x P6 | +| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 | +| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 1 x P30 | 1 x P6 | 1 x P6 | +| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 1 x P30 | 1 x P10 | 1 x P6 | +| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | +| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | +| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | +| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 
2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 |-| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | +| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M208s_v2 | 2,850 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | | M416s_v2 | 5,700 GiB | 2,000 MBps | 1 x P30 | 1 x P10 | 1 x P6 | A less costly alternative for such configurations could look like: | E64v3 | 432 GiB | 1,200 MB/s | 6 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> | | E64ds_v4 | 504 GiB | 1200 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | won't achieve less than 1ms storage latency<sup>1</sup> | | M64ls | 512 GiB | 1,000 MB/s | 7 x P10 | 1 x E20 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> |-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MB/s | 6 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> | -| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MB/s | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | -| M64ms, M64dms_v2, M64ms_v2| 1,792 GiB | 1,000 MB/s | 6 x P20 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | -| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | -| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | -| M128ms, M128dms_v2, M128ms_v2 | 3,800 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined 
data and log volume will limit IOPS rate to 20,000<sup>2</sup> | -| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | +| M32(d)ms_v2 | 875 GiB | 500 MB/s | 6 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 5,000<sup>2</sup> | +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> || +| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MB/s | 7 x P15 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | +| M64ms, M64(d)ms_v2| 1,792 GiB | 1,000 MB/s | 6 x P20 | 1 x E30 | 1 x E6 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | +| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | +| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | +| M192i(d)s_v2 | 2,048 GiB | 2,000 MB/s |6 x P20 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | +| M128ms, M128(d)ms_v2 | 3,800 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | +| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | +| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 5 x P30 | 1 x 
E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | +| M192i(d)ms_v2 | 4,096 GiB | 2,000 MB/s | 5 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | | M208s_v2 | 2,850 GiB | 1,000 MB/s | 4 x P30 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | | M208ms_v2 | 5,700 GiB | 1,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 10,000<sup>2</sup> | | M416s_v2 | 5,700 GiB | 2,000 MB/s | 4 x P40 | 1 x E30 | 1 x E10 | 1 x E6 | Using Write Accelerator for combined data and log volume will limit IOPS rate to 20,000<sup>2</sup> | |
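The aggregate throughput and IOPS columns in the premium SSD v1 tables above follow directly from the per-disk base limits multiplied by the number of disks in the stripe set. A quick sanity check of two rows (per-disk base limits, without bursting, as used in this article):

```shell
# Per-disk base limits (no bursting) for two of the premium SSD tiers above
p15_mbps=125; p15_iops=1100
p30_mbps=200; p30_iops=5000

# 4 x P30 for /hana/data matches the table row: 800 MBps, 20000 IOPS
echo "4 x P30: $((4 * p30_mbps)) MBps, $((4 * p30_iops)) IOPS"
# 3 x P15 for /hana/log matches the table row: 375 MBps, 3300 IOPS
echo "3 x P15: $((3 * p15_mbps)) MBps, $((3 * p15_iops)) IOPS"
```

The same multiplication reproduces every striped row in the tables, which is why the stripe count and tier are chosen together to hit the HANA throughput targets.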
sap | Hana Vm Premium Ssd V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-premium-ssd-v2.md | Configuration for SAP **/hana/data** volume: | M32ts | 192 GiB | 500 MBps | 20,000 | 224 GB | 425 MBps | 3,000| | M32ls | 256 GiB | 500 MBps | 20,000 | 304 GB | 425 MBps | 3,000 | | M64ls | 512 GiB | 1,000 MBps | 40,000 | 608 GB | 425 MBps | 3,000 | -| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 30,000 | 1056 GB | 425 MBps | 3,000 | -| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1232 GB | 600 MBps | 5,000 | -| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2144 GB | 600 MBps | 5,000 | -| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2464 GB | 800 MBps | 12,000| -| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 80,000| 2464 GB | 800 MBps | 12,000| -| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4672 GB | 800 MBps | 12,000 | -| M192ims, M192idms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4912 GB | 800 MBps | 12,000 | +| M32(d)ms_v2 | 875 GiB | 500 MBps | 30,000 | 1056 GB | 425 MBps | 3,000 | +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 1232 GB | 600 MBps | 5,000 | +| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 1232 GB | 600 MBps | 5,000 | +| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 50,000 | 2144 GB | 600 MBps | 5,000 | +| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 130,000 | 2464 GB | 800 MBps | 12,000| +| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 2464 GB | 800 MBps | 12,000| +| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000| 2464 GB | 800 MBps | 12,000| +| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 3424 GB | 1,000 MBps| 15,000 | +| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 130,000 | 4672 GB | 800 MBps | 12,000 | +| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 4672 GB | 800 MBps | 12,000 | +| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 4912 GB | 800 MBps | 12,000 | | M208s_v2 | 2,850 GiB | 1,000 
MBps | 40,000 | 3424 GB | 1,000 MBps| 15,000 | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 6,848 GB | 1,000 MBps | 15,000 | | M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 6,848 GB | 1,200 MBps| 17,000 | For the **/hana/log** volume, the configuration would look like: | M32ts | 192 GiB | 500 MBps | 20,000 | 96 GB | 275 MBps | 3,000 | 192 GB | | M32ls | 256 GiB | 500 MBps | 20,000 | 128 GB | 275 MBps | 3,000 | 256 GB | | M64ls | 512 GiB | 1,000 MBps | 40,000 | 256 GB | 275 MBps | 3,000 | 512 GB | -| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 20,000 | 512 GB | 275 MBps | 3,000 | 875 GB | -| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB | -| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB | -| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | -| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | -| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | -| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | +| M32(d)ms_v2 | 875 GiB | 500 MBps | 20,000 | 512 GB | 275 MBps | 3,000 | 875 GB | +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 65,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB | +| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB | +| M64ms, M64(d)ms_v2 | 1,792 GiB | 1,000 MBps | 40,000 | 512 GB | 275 MBps | 3,000 | 1,024 GB | +| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | +| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | +| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | +| M176(d)s_3_v3 | 2,794 GiB | 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | +| M176(d)s_4_v3 | 3,750 GiB 
| 4,000 MBps | 130,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | +| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | +| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 80,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | | M208s_v2 | 2,850 GiB | 1,000 MBps | 40,000 | 512 GB | 300 MBps | 4,000 | 1,024 GB | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 40,000 | 512 GB | 350 MBps | 4,500 | 1,024 GB | | M416s_v2 | 5,700 GiB | 2,000 MBps | 80,000 | 512 GB | 400 MBps | 5,000 | 1,024 GB | |
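Because Premium SSD v2 lets you provision capacity, IOPS, and throughput independently, a single disk per volume can be dialed in to match a table row. A hedged sketch of provisioning the `/hana/data` disk for an M64s-class VM (resource group, disk name, and zone are hypothetical; the size, IOPS, and throughput values follow the M64s row above; the `run` wrapper only prints each command):

```shell
# run() prints each command instead of executing it; drop the wrapper
# to run against a real Azure subscription.
run() { echo "+ $*"; }

RG=my-sap-rg   # hypothetical resource group
# M64s /hana/data from the table: 1232 GB, 5,000 IOPS, 600 MBps
run az disk create --resource-group "$RG" --name hana-data-m64s --zone 1 \
    --sku PremiumV2_LRS --size-gb 1232 \
    --disk-iops-read-write 5000 --disk-mbps-read-write 600
```

Premium SSD v2 is zonal, so the disk and the VM must be placed in the same availability zone.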
sap | Hana Vm Ultra Disk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-ultra-disk.md | The recommendations are often exceeding the SAP minimum requirements as stated e | M32ts | 192 GiB | 500 MBps | 250 GB | 400 MBps | 2,500 | 96 GB | 250 MBps | 1,800 | | M32ls | 256 GiB | 500 MBps | 300 GB | 400 MBps | 2,500 | 256 GB | 250 MBps | 1,800 | | M64ls | 512 GiB | 1,000 MBps | 620 GB | 400 MBps | 3,500 | 256 GB | 250 MBps | 1,800 |-| M32dms_v2, M32ms_v2 | 875 GiB | 500 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 | -| M64s, M64ds_v2, M64s_v2 | 1,024 GiB | 1,000 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 | -| M64ms, M64dms_v2, M64ms_v2 | 1,792 GiB | 1,000 MBps | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 | -| M128s, M128ds_v2, M128s_v2 | 2,048 GiB | 2,000 MBps |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 | -| M192ids_v2, M192is_v2 | 2,048 GiB | 2,000 MBps |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 | -| M128ms, M128dms_v2, M128ms_v2 | 3,892 GiB | 2,000 MBps | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 | -| M192idms_v2, M192ims_v2 | 4,096 GiB | 2,000 MBps | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 | +| M32(d)ms_v2, | 875 GiB | 500 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 | +| M48(d)s_1_v3, M96(d)s_1_v3 | 974 GiB | 1,560 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 | +| M64s, M64(d)s_v2 | 1,024 GiB | 1,000 MBps | 1,200 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 | +| M64ms, M64(d)ms_v2| 1,792 GiB | 1,000 MBps | 2,100 GB | 600 MBps | 5,000 | 512 GB | 250 MBps | 2,500 | +| M96(d)s_2_v3 | 1,946 GiB | 3,120 MBps | 2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 | +| M128s, M128(d)s_v2 | 2,048 GiB | 2,000 MBps |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 | +| M192i(d)s_v2 | 2,048 GiB | 2,000 MBps |2,400 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 | +| M176(d)s_3_v3 
| 2,794 GiB | 4,000 MBps | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 | +| M176(d)s_4_v3 | 3,750 GiB | 4,000 MBps | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 | +| M128ms, M128(d)ms_v2 | 3,892 GiB | 2,000 MBps | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 | +| M192i(d)ms_v2 | 4,096 GiB | 2,000 MBps | 4,800 GB | 750 MBps |9,600 | 512 GB | 250 MBps | 2,500 | | M208s_v2 | 2,850 GiB | 1,000 MBps | 3,500 GB | 750 MBps | 7,000 | 512 GB | 250 MBps | 2,500 | | M208ms_v2 | 5,700 GiB | 1,000 MBps | 7,200 GB | 750 MBps | 14,400 | 512 GB | 250 MBps | 2,500 | | M416s_v2 | 5,700 GiB | 2,000 MBps | 7,200 GB | 1,000 MBps | 14,400 | 512 GB | 400 MBps | 4,000 | |
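Ultra disk capacity, IOPS, and throughput can likewise be provisioned per disk to match a table row. A sketch assuming hypothetical resource and VM names; the VM must have Ultra disk support enabled, and the values follow the M64s `/hana/data` row above (the `run` wrapper only prints each command):

```shell
# run() prints each command instead of executing it; drop the wrapper
# to run against a real Azure subscription.
run() { echo "+ $*"; }

RG=my-sap-rg   # hypothetical resource group and VM name
run az vm update --resource-group "$RG" --name hana-vm --ultra-ssd-enabled true
# M64s /hana/data from the table: 1,200 GB, 600 MBps, 5,000 IOPS
run az disk create --resource-group "$RG" --name hana-data-m64s --zone 1 \
    --sku UltraSSD_LRS --size-gb 1200 \
    --disk-iops-read-write 5000 --disk-mbps-read-write 600
```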
sap | High Availability Guide Rhel Multi Sid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-multi-sid.md | This article assumes that: To prevent the start of the instances by the *sapinit* startup script, the entries for all instances managed by Pacemaker must be commented out in the */usr/sap/sapservices* file. The example shown below is for SAP systems `NW2` and `NW3`. ```cmd- # On the node where ASCS was installed, comment out the line for the ASCS instances - #LD_LIBRARY_PATH=/usr/sap/NW2/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ASCS10/exe/sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs -D -u nw2adm - #LD_LIBRARY_PATH=/usr/sap/NW3/ASCS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ASCS20/exe/sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs -D -u nw3adm + # Depending on whether the SAP Startup framework is integrated with systemd, you may observe the following entries on the node for the ASCS instances. You should comment out the line(s). + # LD_LIBRARY_PATH=/usr/sap/NW2/ASCS10/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ASCS10/exe/sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs -D -u nw2adm + # LD_LIBRARY_PATH=/usr/sap/NW3/ASCS20/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ASCS20/exe/sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs -D -u nw3adm + # systemctl --no-ask-password start SAPNW2_10 # sapstartsrv pf=/usr/sap/NW2/SYS/profile/NW2_ASCS10_msnw2ascs + # systemctl --no-ask-password start SAPNW3_20 # sapstartsrv pf=/usr/sap/NW3/SYS/profile/NW3_ASCS20_msnw3ascs - # On the node where ERS was installed, comment out the line for the ERS instances + # Depending on whether the SAP Startup framework is integrated with systemd, you may observe the following entries on the node for the ERS instances. You should comment out the line(s). 
#LD_LIBRARY_PATH=/usr/sap/NW2/ERS12/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW2/ERS12/exe/sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers -D -u nw2adm #LD_LIBRARY_PATH=/usr/sap/NW3/ERS22/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW3/ERS22/exe/sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers -D -u nw3adm+ # systemctl --no-ask-password start SAPNW2_12 # sapstartsrv pf=/usr/sap/NW2/ERS12/profile/NW2_ERS12_msnw2ers + # systemctl --no-ask-password start SAPNW3_22 # sapstartsrv pf=/usr/sap/NW3/ERS22/profile/NW3_ERS22_msnw3ers ``` + > [!IMPORTANT] + > With the systemd based SAP Startup Framework, SAP instances can now be managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL) version is RHEL 8 for SAP. As described in SAP Note [3115048](https://me.sap.com/notes/3115048), a fresh installation of an SAP kernel with integrated systemd based SAP Startup Framework support will always result in a systemd controlled SAP instance. After an SAP kernel upgrade of an existing SAP installation to a kernel which has systemd based SAP Startup Framework support, however, some manual steps have to be performed as documented in SAP Note [3115048](https://me.sap.com/notes/3115048) to convert the existing SAP startup environment to one which is systemd controlled. + > + > When utilizing Red Hat HA services for SAP (cluster configuration) to manage SAP application server instances such as SAP ASCS and SAP ERS, additional modifications will be necessary to ensure compatibility between the SAPInstance resource agent and the new systemd-based SAP startup framework. So once the SAP application server instances have been installed or switched to a systemd-enabled SAP kernel as per SAP Note [3115048](https://me.sap.com/notes/3115048), the steps mentioned in [Red Hat KBA 6884531](https://access.redhat.com/articles/6884531) must be completed successfully on all cluster nodes. + 7. 
**[1]** Create the SAP cluster resources for the newly installed SAP system. If using enqueue server 1 architecture (ENSA1), define the resources for SAP systems `NW2` and `NW3` as follows: |
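For reference, an ENSA1 ASCS resource for system `NW2` typically takes this shape with `pcs` (a sketch following the article's example SIDs and instance numbers; the profile path and group name are assumptions, and the matching ERS resource plus the usual colocation/order constraints are still required; the `run` wrapper only prints the command):

```shell
# run() prints each command instead of executing it; drop the wrapper
# to run on one cluster node for real.
run() { echo "+ $*"; }

run sudo pcs resource create rsc_sap_NW2_ASCS10 SAPInstance \
    InstanceName=NW2_ASCS10_msnw2ascs \
    START_PROFILE=/sapmnt/NW2/profile/NW2_ASCS10_msnw2ascs \
    AUTOMATIC_RECOVER=false \
    meta resource-stickiness=5000 migration-threshold=1 failure-timeout=60 \
    --group g-NW2_ASCS
```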
sap | High Availability Guide Rhel Netapp Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-netapp-files.md | Follow the steps in [Set up Pacemaker on Red Hat Enterprise Linux in Azure](high The following items are prefixed with either: -- **[A]**: Applicable to all nodes-- **[1]**: Only applicable to node 1-- **[2]**: Only applicable to node 2+* **[A]**: Applicable to all nodes +* **[1]**: Only applicable to node 1 +* **[2]**: Only applicable to node 2 1. **[A]** Set up hostname resolution. The following items are prefixed with either: ```bash sudo vi /usr/sap/sapservices - # On the node where you installed the ASCS, comment out the following line + # Depending on whether the SAP Startup framework is integrated with systemd, you will observe one of the two entries on the ASCS node. You should comment out the line(s). # LD_LIBRARY_PATH=/usr/sap/QAS/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ASCS00/exe/sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh -D -u qasadm+ # systemctl --no-ask-password start SAPQAS_00 # sapstartsrv pf=/usr/sap/QAS/SYS/profile/QAS_ASCS00_anftstsapvh - # On the node where you installed the ERS, comment out the following line + # Depending on whether the SAP Startup framework is integrated with systemd, you will observe one of the two entries on the ERS node. You should comment out the line(s). # LD_LIBRARY_PATH=/usr/sap/QAS/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/QAS/ERS01/exe/sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers -D -u qasadm+ # systemctl --no-ask-password start SAPQAS_01 # sapstartsrv pf=/usr/sap/QAS/ERS01/profile/QAS_ERS01_anftstsapers ``` -1. **[1]** Create the SAP cluster resources. + > [!IMPORTANT] + > With the systemd based SAP Startup Framework, SAP instances can now be managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL) version is RHEL 8 for SAP. 
As described in SAP Note [3115048](https://me.sap.com/notes/3115048), a fresh installation of an SAP kernel with integrated systemd based SAP Startup Framework support will always result in a systemd controlled SAP instance. After an SAP kernel upgrade of an existing SAP installation to a kernel which has systemd based SAP Startup Framework support, however, some manual steps have to be performed as documented in SAP Note [3115048](https://me.sap.com/notes/3115048) to convert the existing SAP startup environment to one which is systemd controlled. + > + > When utilizing Red Hat HA services for SAP (cluster configuration) to manage SAP application server instances such as SAP ASCS and SAP ERS, additional modifications will be necessary to ensure compatibility between the SAPInstance resource agent and the new systemd-based SAP startup framework. So once the SAP application server instances have been installed or switched to a systemd-enabled SAP kernel as per SAP Note [3115048](https://me.sap.com/notes/3115048), the steps mentioned in [Red Hat KBA 6884531](https://access.redhat.com/articles/6884531) must be completed successfully on all cluster nodes. + +2. **[1]** Create the SAP cluster resources. If you use enqueue server 1 architecture (ENSA1), define the resources as shown here: The following items are prefixed with either: # rsc_sap_QAS_ERS01 (ocf::heartbeat:SAPInstance): Started anftstsapcl1 ``` -1. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher). +3. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher). > [!NOTE] > If you have a two-node cluster, you have the option to configure the `priority-fencing-delay` cluster property. This property introduces more delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. 
For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521). The following items are prefixed with either: sudo pcs property set priority-fencing-delay=15s ``` -1. **[A]** Add firewall rules for ASCS and ERS on both nodes. +4. **[A]** Add firewall rules for ASCS and ERS on both nodes. ```bash # Probe Port of ASCS |
sap | High Availability Guide Rhel Nfs Azure Files | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md | The following items are prefixed with: ```bash sudo vi /usr/sap/sapservices - # On the node where you installed the ASCS, comment out the following line + # Depending on whether the SAP Startup framework is integrated with systemd, you will observe one of the two entries on the ASCS node. You should comment out the line(s). # LD_LIBRARY_PATH=/usr/sap/NW1/ASCS00/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ASCS00/exe/sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_sapascs -D -u nw1adm+ # systemctl --no-ask-password start SAPNW1_00 # sapstartsrv pf=/usr/sap/NW1/SYS/profile/NW1_ASCS00_sapascs - # On the node where you installed the ERS, comment out the following line + # Depending on whether the SAP Startup framework is integrated with systemd, you will observe one of the two entries on the ERS node. You should comment out the line(s). # LD_LIBRARY_PATH=/usr/sap/NW1/ERS01/exe:$LD_LIBRARY_PATH; export LD_LIBRARY_PATH; /usr/sap/NW1/ERS01/exe/sapstartsrv pf=/usr/sap/NW1/ERS01/profile/NW1_ERS01_sapers -D -u nw1adm+ # systemctl --no-ask-password start SAPNW1_01 # sapstartsrv pf=/usr/sap/NW1/ERS01/profile/NW1_ERS01_sapers ``` -1. **[1]** Create the SAP cluster resources. + > [!IMPORTANT] + > With the systemd based SAP Startup Framework, SAP instances can now be managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL) version is RHEL 8 for SAP. As described in SAP Note [3115048](https://me.sap.com/notes/3115048), a fresh installation of an SAP kernel with integrated systemd based SAP Startup Framework support will always result in a systemd controlled SAP instance. 
After an SAP kernel upgrade of an existing SAP installation to a kernel which has systemd based SAP Startup Framework support, however, some manual steps have to be performed as documented in SAP Note [3115048](https://me.sap.com/notes/3115048) to convert the existing SAP startup environment to one which is systemd controlled. + > + > When utilizing Red Hat HA services for SAP (cluster configuration) to manage SAP application server instances such as SAP ASCS and SAP ERS, additional modifications will be necessary to ensure compatibility between the SAPInstance resource agent and the new systemd-based SAP startup framework. So once the SAP application server instances have been installed or switched to a systemd-enabled SAP kernel as per SAP Note [3115048](https://me.sap.com/notes/3115048), the steps mentioned in [Red Hat KBA 6884531](https://access.redhat.com/articles/6884531) must be completed successfully on all cluster nodes. ++2. **[1]** Create the SAP cluster resources. If you use enqueue server 1 architecture (ENSA1), define the resources as shown here: The following items are prefixed with: # rsc_sap_NW1_ERS01 (ocf::heartbeat:SAPInstance): Started sap-cl1 ``` -1. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher). +3. **[1]** Run the following step to configure `priority-fencing-delay` (applicable only as of pacemaker-2.0.4-6.el8 or higher). > [!NOTE] > If you have a two-node cluster, you have the option to configure the `priority-fencing-delay` cluster property. This property introduces additional delay in fencing a node that has higher total resource priority when a split-brain scenario occurs. For more information, see [Can Pacemaker fence the cluster node with the fewest running resources?](https://access.redhat.com/solutions/5110521). The following items are prefixed with: sudo pcs property set priority-fencing-delay=15s ``` -1. 
**[A]** Add firewall rules for ASCS and ERS on both nodes. +4. **[A]** Add firewall rules for ASCS and ERS on both nodes. ```bash # Probe Port of ASCS |
sap | Sap Hana High Availability Netapp Files Red Hat | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-netapp-files-red-hat.md | This section describes the steps required for a cluster to operate seamlessly wh Follow the steps in [Set up Pacemaker on Red Hat Enterprise Linux](./high-availability-guide-rhel-pacemaker.md) in Azure to create a basic Pacemaker cluster for this HANA server. +> [!IMPORTANT] +> With the systemd based SAP Startup Framework, SAP HANA instances can now be managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL) version is RHEL 8 for SAP. As outlined in SAP Note [3189534](https://me.sap.com/notes/3189534), for any new installation of SAP HANA SPS07 revision 70 or above, or update of a HANA system to HANA 2.0 SPS07 revision 70 or above, the SAP Startup framework is automatically registered with systemd. +> +> When using HA solutions to manage SAP HANA system replication in combination with systemd-enabled SAP HANA instances (refer to SAP Note [3189534](https://me.sap.com/notes/3189534)), additional steps are necessary to ensure that the HA cluster can manage the SAP instance without systemd interference. So, for SAP HANA systems integrated with systemd, the additional steps outlined in [Red Hat KBA 7029705](https://access.redhat.com/solutions/7029705) must be followed on all cluster nodes. + ### Implement the Python system replication hook SAPHanaSR This step is important to optimize the integration with the cluster and to improve detection when a cluster failover is needed. We highly recommend that you configure the SAPHanaSR Python hook. Follow the steps in [Implement the Python system replication hook SAPHanaSR](sap-hana-high-availability-rhel.md#implement-the-python-system-replication-hook-saphanasr). This section describes how you can test your setup. 1. 
Verify the cluster configuration for a failure scenario when a node loses access to the NFS share (`/hana/shared`). The SAP HANA resource agents depend on binaries stored on `/hana/shared` to perform operations during failover. File system `/hana/shared` is mounted over NFS in the presented scenario. - + It's difficult to simulate a failure where one of the servers loses access to the NFS share. As a test, you can remount the file system as read-only. This approach validates that the cluster can fail over if access to `/hana/shared` is lost on the active node. **Expected result:** When `/hana/shared` is remounted as a read-only file system, the `OCF_CHECK_LEVEL` attribute of the resource `hana_shared1`, which performs read/write operations on file systems, fails. The agent can't write anything to the file system, so it triggers a HANA resource failover. The same result is expected when your HANA node loses access to the NFS shares. |
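The read-only remount test described above can be sketched as follows (an illustrative sequence, assuming `/hana/shared` is NFS-mounted; run it on the node where HANA is active):

```bash
# On the active node, take away write access to /hana/shared
sudo mount -o remount,ro /hana/shared

# Watch the cluster: the Filesystem resource monitor fails its read/write check
# and Pacemaker fails the HANA resources over to the other node
sudo pcs status

# After the failover completes, restore read/write access and clean up
sudo mount -o remount,rw /hana/shared
sudo pcs resource cleanup
```

The remount approach is reversible, unlike actually breaking the NFS connection, which makes it suitable as a repeatable validation test.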
sap | Sap Hana High Availability Rhel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/sap-hana-high-availability-rhel.md | The steps in this section use the following prefixes: sudo vgcreate vg_hana_shared_HN1 /dev/disk/azure/scsi1/lun3 ``` - Create the logical volumes. A linear volume is created when you use `lvcreate` without the `-i` switch. We suggest that you create a striped volume for better I/O performance. Align the stripe sizes to the values documented in [SAP HANA VM storage configurations](./hana-vm-operations-storage.md). The `-i` argument should be the number of the underlying physical volumes, and the `-I` argument is the stripe size. + Create the logical volumes. A linear volume is created when you use `lvcreate` without the `-i` switch. We suggest that you create a striped volume for better I/O performance. Align the stripe sizes to the values documented in [SAP HANA VM storage configurations](./hana-vm-operations-storage.md). The `-i` argument should be the number of the underlying physical volumes, and the `-I` argument is the stripe size. In this document, two physical volumes are used for the data volume, so the `-i` switch argument is set to **2**. The stripe size for the data volume is **256KiB**. One physical volume is used for the log volume, so no `-i` or `-I` switches are explicitly used for the log volume commands. > [!IMPORTANT]- > Use the `-i` switch and set it to the number of the underlying physical volumes when you use more than one physical volume for each of the data, log, or shared volumes. Use the `-I` switch to specify the stripe size when you're creating a striped volume. + > Use the `-i` switch and set it to the number of the underlying physical volumes when you use more than one physical volume for each of the data, log, or shared volumes. Use the `-I` switch to specify the stripe size when you're creating a striped volume. 
> See [SAP HANA VM storage configurations](./hana-vm-operations-storage.md) for recommended storage configurations, including stripe sizes and number of disks. The following layout examples don't necessarily meet the performance guidelines for a particular system size. They're for illustration only. ```bash The steps in this section use the following prefixes: * [2292690 - SAP HANA DB: Recommended OS settings for RHEL 7](https://launchpad.support.sap.com/#/notes/2292690) * [2777782 - SAP HANA DB: Recommended OS Settings for RHEL 8](https://launchpad.support.sap.com/#/notes/2777782) * [2455582 - Linux: Running SAP applications compiled with GCC 6.x](https://launchpad.support.sap.com/#/notes/2455582)- * [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824) + * [2593824 - Linux: Running SAP applications compiled with GCC 7.x](https://launchpad.support.sap.com/#/notes/2593824) * [2886607 - Linux: Running SAP applications compiled with GCC 9.x](https://launchpad.support.sap.com/#/notes/2886607) 1. **[A]** Install SAP HANA. The steps in this section use the following prefixes: Follow the steps in [Setting up Pacemaker on Red Hat Enterprise Linux in Azure](high-availability-guide-rhel-pacemaker.md) to create a basic Pacemaker cluster for this HANA server. +> [!IMPORTANT] +> With the systemd-based SAP Startup Framework, SAP HANA instances can now be managed by systemd. The minimum required Red Hat Enterprise Linux (RHEL) version is RHEL 8 for SAP. As outlined in SAP Note [3189534](https://me.sap.com/notes/3189534), for any new installation of SAP HANA SPS07 revision 70 or above, or any update of a HANA system to HANA 2.0 SPS07 revision 70 or above, the SAP Startup Framework is automatically registered with systemd. 
+> +> When using HA solutions to manage SAP HANA system replication in combination with systemd-enabled SAP HANA instances (refer to SAP Note [3189534](https://me.sap.com/notes/3189534)), additional steps are necessary to ensure that the HA cluster can manage the SAP instance without systemd interference. So, for SAP HANA systems integrated with systemd, the additional steps outlined in [Red Hat KBA 7029705](https://access.redhat.com/solutions/7029705) must be followed on all cluster nodes. + ## Implement the Python system replication hook SAPHanaSR This important step optimizes the integration with the cluster and improves the detection when a cluster failover is needed. We highly recommend that you configure the SAPHanaSR Python hook. Use the command `sudo pcs status` to check the state of the cluster resources cr ## Configure HANA active/read-enabled system replication in Pacemaker cluster -Starting with SAP HANA 2.0 SPS 01, SAP allows active/read-enabled setups for SAP HANA System Replication, where the secondary systems of SAP HANA System Replication can be used actively for read-intense workloads. +Starting with SAP HANA 2.0 SPS 01, SAP allows active/read-enabled setups for SAP HANA System Replication, where the secondary systems of SAP HANA System Replication can be used actively for read-intense workloads. To support such a setup in a cluster, a second virtual IP address is required, which allows clients to access the secondary read-enabled SAP HANA database. To ensure that the secondary replication site can still be accessed after a takeover has occurred, the cluster needs to move the virtual IP address around with the secondary SAPHana resource. Resource Group: g_ip_HN1_03 vip_HN1_03 (ocf::heartbeat:IPaddr2): Started hn1-db-1 ``` -You can test the setup of the Azure fencing agent by disabling the network interface on the node where SAP HANA is running as Master. 
For a description on how to simulate a network failure, see [Red Hat Knowledge Base article 79523](https://access.redhat.com/solutions/79523). +You can test the setup of the Azure fencing agent by disabling the network interface on the node where SAP HANA is running as Master. For a description on how to simulate a network failure, see [Red Hat Knowledge Base article 79523](https://access.redhat.com/solutions/79523). In this example, we use the `net_breaker` script as root to block all access to the network: |
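The striped logical volume guidance earlier in this row can be sketched as follows. This is a minimal sketch under the row's stated assumptions (two physical volumes behind the data volume, one behind the log volume); the volume group names follow the `vg_hana_*_HN1` examples and the sizes are illustrative only:

```bash
# Striped data volume: -i 2 (two underlying PVs), -I 256 (256 KiB stripe size)
sudo lvcreate -i 2 -I 256 -l 100%FREE -n hana_data vg_hana_data_HN1
# Linear log volume: a single PV, so -i and -I are omitted
sudo lvcreate -l 100%FREE -n hana_log vg_hana_log_HN1
```

`-l 100%FREE` allocates all remaining extents in the volume group; use `-L` with an explicit size instead if the volume group is shared with other logical volumes.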
search | Knowledge Store Concept Intro | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/knowledge-store-concept-intro.md | Last updated 01/31/2023 # Knowledge store in Azure AI Search -Knowledge store is a data sink created by a [Azure AI Search enrichment pipeline](cognitive-search-concept-intro.md) that stores AI-enriched content in tables and blob containers in Azure Storage for independent analysis or downstream processing in non-search scenarios like knowledge mining. +Knowledge store is a data sink created by an [Azure AI Search enrichment pipeline](cognitive-search-concept-intro.md) that stores AI-enriched content in tables and blob containers in Azure Storage for independent analysis or downstream processing in non-search scenarios like knowledge mining. If you've used cognitive skills in the past, you already know that enriched content is created by *skillsets*. Skillsets move a document through a sequence of enrichments that invoke atomic transformations, such as recognizing entities or translating text. |
search | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/policy-reference.md | Title: Built-in policy definitions for Azure Cognitive Search description: Lists Azure Policy built-in policy definitions for Azure Cognitive Search. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
search | Search Get Started Portal Import Vectors | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-portal-import-vectors.md | This step creates the following objects: + Indexer with field mappings and output field mappings (if applicable). +If you get errors, review permissions first. You need **Cognitive Services OpenAI User** on Azure OpenAI and **Storage Blob Data Reader** on Azure Storage. Your blobs must be unstructured (chunked data is pulled from the blob's "content" property). + ## Check results Search explorer accepts text strings as input and then vectorizes the text for vector query execution. |
search | Search Howto Index Sharepoint Online | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-sharepoint-online.md | Last updated 11/07/2023 > [!IMPORTANT] > SharePoint indexer support is in public preview. It's offered "as-is", under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Preview features aren't recommended for production workloads and aren't guaranteed to become generally available. >->To use this preview, [request access](https://aka.ms/azure-cognitive-search/indexer-preview). Access will be automatically approved after the form is submitted. After access is enabled, use a [preview REST API (2020-06-30-preview or later)](search-api-preview.md) to index your content. There is currently limited portal support and no .NET SDK support. +> Be sure to visit the [known limitations](#limitations-and-considerations) section before you start. +> +>To use this preview, [request access](https://aka.ms/azure-cognitive-search/indexer-preview). Access is automatically approved after the form is submitted. After access is enabled, use a [preview REST API (2023-10-01-Preview or later)](search-api-preview.md) to index your content. -This article explains how to configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure AI Search. Configuration steps are followed by a deeper exploration of behaviors and scenarios you're likely to encounter. +This article explains how to configure a [search indexer](search-indexer-overview.md) to index documents stored in SharePoint document libraries for full text search in Azure AI Search. Configuration steps come first, followed by behaviors and scenarios you're likely to encounter. ## Functionality -An indexer in Azure AI Search is a crawler that extracts searchable data and metadata from a data source. 
The SharePoint indexer will connect to your SharePoint site and index documents from one or more document libraries. The indexer provides the following functionality: +An indexer in Azure AI Search is a crawler that extracts searchable data and metadata from a data source. The SharePoint indexer connects to your SharePoint site and indexes documents from one or more document libraries. The indexer provides the following functionality: -+ Index content and metadata from one or more document libraries. -+ Incremental indexing, where the indexer identifies which file content or metadata have changed and indexes only the updated data. For example, if five PDFs are originally indexed and one is updated, only the updated PDF is indexed. -+ Deletion detection is built in. If a document is deleted from a document library, the indexer will detect the delete on the next indexer run and remove the document from the index. -+ Text and normalized images will be extracted by default from the documents that are indexed. Optionally a [skillset](cognitive-search-working-with-skillsets.md) can be added to the pipeline for [AI enrichment](cognitive-search-concept-intro.md). ++ Index files and metadata from one or more document libraries.++ Index incrementally, picking up just the new and changed files and metadata. ++ Deletion detection is built in. Deletion in a document library is picked up on the next indexer run, and the document is removed from the index.++ Text and normalized images are extracted by default from the documents that are indexed. Optionally, you can add a [skillset](cognitive-search-working-with-skillsets.md) for deeper [AI enrichment](cognitive-search-concept-intro.md), like OCR or text translation. 
## Prerequisites The SharePoint indexer can extract text from the following document formats: [!INCLUDE [search-document-data-sources](../../includes/search-blob-data-sources.md)] +## Limitations and considerations ++Here are the limitations of this feature: +++ Indexing [SharePoint Lists](https://support.microsoft.com/office/introduction-to-lists-0a1c3ace-def0-44af-b225-cfa8d92c52d7) isn't supported.+++ Indexing SharePoint .ASPX site content isn't supported.+++ OneNote notebook files aren't supported.+++ [Private endpoint](search-indexer-howto-access-private.md) isn't supported.+++ Renaming a SharePoint folder doesn't trigger incremental indexing. A renamed folder is treated as new content.+++ SharePoint supports a granular authorization model that determines per-user access at the document level. The indexer doesn't pull these permissions into the index, and Azure AI Search doesn't support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should consider [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) and automate copying the permissions at a file level to a field in the index.+++ (Known issue) Support for delegated permissions is currently broken. For now, use app-based permissions as a workaround. However, once user-delegated permissions do become operational, a new behavior enforces token expiration every 75 minutes, per the libraries used to implement delegated permissions. An expired token requires manual indexing using [Run Indexer (preview)](/rest/api/searchservice/indexers/run?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true). 
For this reason, you might want app-based permissions as a permanent solution.++Here are the considerations when using this feature: +++ If you need a SharePoint content indexing solution in a production environment, consider creating a custom connector with [SharePoint Webhooks](/sharepoint/dev/apis/webhooks/overview-sharepoint-webhooks), calling [Microsoft Graph API](/graph/use-the-api) to export the data to an Azure Blob container, and then use the [Azure Blob indexer](search-howto-indexing-azure-blob-storage.md) for incremental indexing.++<!-- + There could be Microsoft 365 processes that update SharePoint file system-metadata (based on different configurations in SharePoint) and will cause the SharePoint indexer to trigger. Make sure that you test your setup and understand the document processing count prior to using any AI enrichment. Since this is a third-party connector to Azure (SharePoint is located in Microsoft 365), SharePoint configuration is not checked by the indexer. --> +++ If your SharePoint configuration allows Microsoft 365 processes to update SharePoint file system metadata, be aware that these updates can trigger the SharePoint indexer, causing the indexer to ingest documents multiple times. Because the SharePoint indexer is a third-party connector to Azure, the indexer can't read the configuration or vary its behavior. It responds to changes in new and changed content, regardless of how those updates are made. For this reason, make sure that you test your setup and understand the document processing count prior to using the indexer and any AI enrichment.+ ## Configure the SharePoint indexer -To set up the SharePoint indexer, you'll need to perform some tasks in the Azure portal and others through the preview REST API. +To set up the SharePoint indexer, use both the Azure portal and a preview REST API. -The following video shows you how to set up the SharePoint indexer. +This section provides the steps. You can also watch the following video. 
> [!VIDEO https://www.youtube.com/embed/QmG65Vgl0JI] ### Step 1 (Optional): Enable system assigned managed identity -When a system-assigned managed identity is enabled, Azure creates an identity for your search service that can be used by the indexer. This identity is used to automatically detect the tenant the search service is provisioned in. +Enable a [system-assigned managed identity](search-howto-managed-identities-data-sources.md#create-a-system-managed-identity) to automatically detect the tenant the search service is provisioned in. -If the SharePoint site is in the same tenant as the search service, you'll need to enable the system-assigned managed identity for the search service in the Azure portal. If the SharePoint site is in a different tenant from the search service, skip this step. +Perform this step if the SharePoint site is in the same tenant as the search service. Skip this step if the SharePoint site is in a different tenant. The identity isn't used for indexing, just tenant detection. You can also skip this step if you want to put the tenant ID in the [connection string](#connection-string-format). -After selecting **Save** you'll see an Object ID that has been assigned to your search service. +After selecting **Save**, you get an Object ID that has been assigned to your search service. ### Step 2: Decide which permissions the indexer requires -The SharePoint indexer supports both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Choose which permissions you want to use based on your scenario: +The SharePoint indexer supports both [delegated and application](/graph/auth/auth-concepts#delegated-and-application-permissions) permissions. Choose which permissions you want to use based on your scenario. ++We recommend app-based permissions. See [limitations](#limitations-and-considerations) for known issues related to delegated permissions. 
-+ Delegated permissions, where the indexer runs under the identity of the user or app sending the request. Data access is limited to the sites and files to which the user has access. To support delegated permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to sign in on behalf of the user. ++ Application permissions (recommended), where the indexer runs under the [identity of the SharePoint tenant](/sharepoint/dev/solution-guidance/security-apponly-azureacs) with access to all sites and files. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md). The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content. -+ Application permissions, where the indexer runs under the identity of the SharePoint tenant with access to all sites and files within the SharePoint tenant. The indexer requires a [client secret](../active-directory/develop/v2-oauth2-client-creds-grant-flow.md) to access the SharePoint tenant. The indexer will also require [tenant admin approval](../active-directory/manage-apps/grant-admin-consent.md) before it can index any content. ++ Delegated permissions, where the indexer runs under the identity of the user or app sending the request. Data access is limited to the sites and files to which the caller has access. To support delegated permissions, the indexer requires a [device code prompt](../active-directory/develop/v2-oauth2-device-code.md) to sign in on behalf of the user. -If your Microsoft Entra organization has [Conditional Access enabled](../active-directory/conditional-access/overview.md) and your administrator isn't able to grant any device access for Delegated permissions, you should consider Application permissions instead. 
For more information, see [Microsoft Entra Conditional Access policies](./search-indexer-troubleshooting.md#azure-active-directory-conditional-access-policies). +If your Microsoft Entra organization has [conditional access enabled](../active-directory/conditional-access/overview.md) and your administrator isn't able to grant any device access for delegated permissions, you should consider app-based permissions instead. For more information, see [Microsoft Entra Conditional Access policies](./search-indexer-troubleshooting.md#azure-active-directory-conditional-access-policies). <a name='step-3-create-an-azure-ad-application'></a> -### Step 3: Create a Microsoft Entra application +### Step 3: Create a Microsoft Entra application registration -The SharePoint indexer will use this Microsoft Entra application for authentication. +The SharePoint indexer uses this Microsoft Entra application for authentication. 1. Sign in to the [Azure portal](https://portal.azure.com). The SharePoint indexer will use this Microsoft Entra application for authenticat 1. On the left, select **API permissions**, then **Add a permission**, then **Microsoft Graph**. + + If the indexer is using application API permissions, then select **Application permissions** and add the following: ++ + **Application - Files.Read.All** + + **Application - Sites.Read.All** + + :::image type="content" source="media/search-howto-index-sharepoint-online/application-api-permissions.png" alt-text="Screenshot of application API permissions."::: + + Using application permissions means that the indexer accesses the SharePoint site in a service context. So when you run the indexer it will have access to all content in the SharePoint tenant, which requires tenant admin approval. A client secret is also required for authentication. Setting up the client secret is described later in this article. 
+ + If the indexer is using delegated API permissions, select **Delegated permissions** and add the following: + **Delegated - Files.Read.All** + **Delegated - Sites.Read.All** + **Delegated - User.Read** - :::image type="content" source="media/search-howto-index-sharepoint-online/delegated-api-permissions.png" alt-text="Delegated API permissions"::: + :::image type="content" source="media/search-howto-index-sharepoint-online/delegated-api-permissions.png" alt-text="Screenshot showing delegated API permissions."::: Delegated permissions allow the search client to connect to SharePoint under the security identity of the current user. - + If the indexer is using application API permissions, then select **Application permissions** and add the following: -- + **Application - Files.Read.All** - + **Application - Sites.Read.All** - - :::image type="content" source="media/search-howto-index-sharepoint-online/application-api-permissions.png" alt-text="Application API permissions"::: - - Using application permissions means that the indexer will access the SharePoint site in a service context. So when you run the indexer it will have access to all content in the SharePoint tenant, which requires tenant admin approval. A client secret is also required for authentication. Setting up the client secret is described later in this article. - 1. Give admin consent. Tenant admin consent is required when using application API permissions. Some tenants are locked down in such a way that tenant admin consent is required for delegated API permissions as well. If either of these conditions applies, you'll need to have a tenant admin grant consent for this Microsoft Entra application before creating the indexer. 
- :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-grant-admin-consent.png" alt-text="Microsoft Entra app grant admin consent"::: + :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-grant-admin-consent.png" alt-text="Screenshot showing Microsoft Entra app grant admin consent."::: 1. Select the **Authentication** tab. The SharePoint indexer will use this Microsoft Entra application for authenticat 1. Select **+ Add a platform**, then **Mobile and desktop applications**, then check `https://login.microsoftonline.com/common/oauth2/nativeclient`, then **Configure**. - :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-authentication-configuration.png" alt-text="Microsoft Entra app authentication configuration"::: + :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-authentication-configuration.png" alt-text="Screenshot showing Microsoft Entra app authentication configuration."::: 1. (Application API Permissions only) To authenticate to the Microsoft Entra application using application permissions, the indexer requires a client secret. + Select **Certificates & Secrets** from the menu on the left, then **Client secrets**, then **New client secret**. - :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret.png" alt-text="New client secret"::: + :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret.png" alt-text="Screenshot showing new client secret."::: - + In the menu that pops up, enter a description for the new client secret. Adjust the expiration date if necessary. If the secret expires, it will need to be recreated and the indexer needs to be updated with the new secret. + + In the menu that pops up, enter a description for the new client secret. Adjust the expiration date if necessary. 
If the secret expires, it needs to be recreated and the indexer needs to be updated with the new secret. - :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret-setup.png" alt-text="Setup client secret"::: + :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret-setup.png" alt-text="Screenshot showing how to set up a client secret."::: - + The new client secret will appear in the secret list. Once you navigate away from the page the secret will no longer be visible, so copy it using the copy button and save it in a secure location. + + The new client secret appears in the secret list. Once you navigate away from the page, the secret is no longer visible, so copy it using the copy button and save it in a secure location. - :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret-copy.png" alt-text="Copy client secret"::: + :::image type="content" source="media/search-howto-index-sharepoint-online/application-client-secret-copy.png" alt-text="Screenshot showing where to copy a client secret."::: <a name="create-data-source"></a> ### Step 4: Create data source > [!IMPORTANT] -> Starting in this section you need to use the preview REST API for the remaining steps. If you're not familiar with the Azure AI Search REST API, we suggest taking a look at this [Quickstart](search-get-started-rest.md). +> Starting in this section, use the preview REST API for the remaining steps. We recommend the latest preview API, [2023-10-01-preview](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true). If you're not familiar with the Azure AI Search REST API, we suggest taking a look at this [Quickstart](search-get-started-rest.md). 
-A data source specifies which data to index, credentials needed to access the data, and policies to efficiently identify changes in the data (new, modified, or deleted rows). A data source can be used by multiple indexers in the same search service. +A data source specifies which data to index, credentials, and policies to efficiently identify changes in the data (new, modified, or deleted rows). A data source can be used by multiple indexers in the same search service. For SharePoint indexing, the data source must have the following required properties: + **name** is the unique name of the data source within your search service. + **type** must be "sharepoint". This value is case-sensitive. + **credentials** provide the SharePoint endpoint and the Microsoft Entra application (client) ID. An example SharePoint endpoint is `https://microsoft.sharepoint.com/teams/MySharePointSite`. You can get the endpoint by navigating to the home page of your SharePoint site and copying the URL from the browser.-+ **container** specifies which document library to index. More information on creating the container can be found in the [Controlling which documents are indexed](#controlling-which-documents-are-indexed) section of this document. ++ **container** specifies which document library to index. Properties [control which documents are indexed](#controlling-which-documents-are-indexed). -To create a data source, call [Create Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) using preview API version `2020-06-30-Preview` or later. +To create a data source, call [Create Data Source (preview)](/rest/api/searchservice/data-sources/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true). 
```http-POST https://[service name].search.windows.net/datasources?api-version=2020-06-30-Preview +POST https://[service name].search.windows.net/datasources?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] The format of the connection string changes based on whether the indexer is usin The index specifies the fields in a document, attributes, and other constructs that shape the search experience. -To create an index, call [Create Index](/rest/api/searchservice/create-index): +To create an index, call [Create Index (preview)](/rest/api/searchservice/indexes/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true): ```http-POST https://[service name].search.windows.net/indexes?api-version=2020-06-30 +POST https://[service name].search.windows.net/indexes?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] api-key: [admin key] ### Step 6: Create an indexer -An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source have been created, you're ready to create the indexer. +An indexer connects a data source with a target search index and provides a schedule to automate the data refresh. Once the index and data source are created, you can create the indexer. -During this section you'll be asked to sign in with your organization credentials that have access to the SharePoint site. If possible, we recommend creating a new organizational user account and giving that new user the exact permissions that you want the indexer to have. +During this step, you're asked to sign in with organization credentials that have access to the SharePoint site. If possible, we recommend creating a new organizational user account and giving that new user the exact permissions that you want the indexer to have. There are a few steps to creating the indexer: -1. 
Send a [Create Indexer](/rest/api/searchservice/preview-api/create-or-update-indexer) request: +1. Send a [Create Indexer (preview)](/rest/api/searchservice/indexers/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true) request: ```http- POST https://[service name].search.windows.net/indexers?api-version=2020-06-30-Preview + POST https://[service name].search.windows.net/indexers?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] There are a few steps to creating the indexer: } ``` -1. When creating the indexer for the first time, the [Create Indexer](/rest/api/searchservice/preview-api/create-or-update-indexer) request will remain waiting until you complete the next steps. You must call [Get Indexer Status](/rest/api/searchservice/get-indexer-status) to get the link and enter your new device code. +1. When you create the indexer for the first time, the [Create Indexer (preview)](/rest/api/searchservice/indexers/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true) request waits until you complete the next step. You must call [Get Indexer Status](/rest/api/searchservice/indexers/get-status?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true) to get the link and enter your new device code. ```http- GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2020-06-30-Preview + GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] ``` - Note that if you don't run the [Get Indexer Status](/rest/api/searchservice/get-indexer-status) within 10 minutes the code will expire and you'll need to recreate the [data source](#create-data-source). 
+ If you don’t run the [Get Indexer Status](/rest/api/searchservice/indexers/get-status?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true) within 10 minutes, the code expires and you’ll need to recreate the [data source](#create-data-source). - 1. The link for the device login and the new device code will appear under [Get Indexer Status](/rest/api/searchservice/get-indexer-status) response "errorMessage". + 1. Copy the device login code from the [Get Indexer Status](/rest/api/searchservice/indexers/get-status?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true) response. The device login can be found in the "errorMessage". ```http { There are a few steps to creating the indexer: ``` 1. Provide the code that was included in the error message. - :::image type="content" source="media/search-howto-index-sharepoint-online/enter-device-code.png" alt-text="Enter device code"::: + :::image type="content" source="media/search-howto-index-sharepoint-online/enter-device-code.png" alt-text="Screenshot showing how to enter a device code."::: 1. The SharePoint indexer will access the SharePoint content as the signed-in user. The user that logs in during this step will be that signed-in user. So, if you sign in with a user account that doesn’t have access to a document in the Document Library that you want to index, the indexer won’t have access to that document. There are a few steps to creating the indexer: 1. Approve the permissions that are being requested. - :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-approve-api-permissions.png" alt-text="Approve API permissions"::: 
+ :::image type="content" source="media/search-howto-index-sharepoint-online/aad-app-approve-api-permissions.png" alt-text="Screenshot showing how to approve API permissions."::: +1. The [Create Indexer (preview)](/rest/api/searchservice/indexers/create-or-update?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true) initial request completes if all the permissions provided above are correct and within the 10 minute timeframe. > [!NOTE] > If the Microsoft Entra application requires admin approval and was not approved before logging in, you may see the following screen. [Admin approval](../active-directory/manage-apps/grant-admin-consent.md) is required to continue. ### Step 7: Check the indexer status -After the indexer has been created, you can call [Get Indexer Status](/rest/api/searchservice/get-indexer-status): +After the indexer has been created, you can call [Get Indexer Status](/rest/api/searchservice/indexers/get-status?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true): ```http-GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2020-06-30-Preview +GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] ``` ## Updating the data source -If there are no updates to the data source object, the indexer can run on a schedule without any user interaction. However, every time the Azure AI Search data source object is updated or recreated when the device code expires you'll need to sign in again in order for the indexer to run. For example, if you change the data source query, sign in again using the `https://microsoft.com/devicelogin` and a new code. +If there are no updates to the data source object, the indexer runs on a schedule without any user interaction. 
-Once the data source has been updated or recreated when the device code expires, follow the below steps: +However, if you modify the data source object while the device code is expired, you must sign in again in order for the indexer to run. For example, if you change the data source query, sign in again using the `https://microsoft.com/devicelogin` and get the new device code. -1. Call [Run Indexer](/rest/api/searchservice/run-indexer) to manually kick off [indexer execution](search-howto-run-reset-indexers.md). +Here are the steps for updating a data source, assuming an expired device code: ++1. Call [Run Indexer (preview)](/rest/api/searchservice/indexers/run?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true) to manually start [indexer execution](search-howto-run-reset-indexers.md). ```http- POST https://[service name].search.windows.net/indexers/sharepoint-indexer/run?api-version=2020-06-30-Preview + POST https://[service name].search.windows.net/indexers/sharepoint-indexer/run?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] ``` -1. Check the [indexer status](/rest/api/searchservice/get-indexer-status). If the last indexer run has an error telling you to go to `https://microsoft.com/devicelogin`, go to that page and provide the new code. +1. Check the [indexer status](/rest/api/searchservice/indexers/get-status?view=rest-searchservice-2023-10-01-preview&tabs=HTTP&preserve-view=true). ```http- GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2020-06-30-Preview + GET https://[service name].search.windows.net/indexers/sharepoint-indexer/status?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] ``` -1. Login. +1. If you get an error asking you to visit `https://microsoft.com/devicelogin`, open the page and copy the new code. ++1. Paste the code into the dialog box. -1. Manually run the indexer again and check the indexer status. 
This time the indexer run should successfully start. +1. Manually run the indexer again and check the indexer status. This time, the indexer run should successfully start. <a name="metadata"></a> ## Indexing document metadata -If you have set the indexer to index document metadata (`"dataToExtract": "contentAndMetadata"`), the following metadata will be available to index. +If you're indexing document metadata (`"dataToExtract": "contentAndMetadata"`), the following metadata will be available to index. | Identifier | Type | Description | | - | -- | -- |-| metadata_spo_site_library_item_id | Edm.String | The combination key of site ID, library ID, and item ID which uniquely identifies an item in a document library for a site. | +| metadata_spo_site_library_item_id | Edm.String | The combination key of site ID, library ID, and item ID, which uniquely identifies an item in a document library for a site. | | metadata_spo_site_id | Edm.String | The ID of the SharePoint site. | | metadata_spo_library_id | Edm.String | The ID of document library. | | metadata_spo_item_id | Edm.String | The ID of the (document) item in the library. | The SharePoint indexer also supports metadata specific to each document type. Mo You can control which files are indexed by setting inclusion and exclusion criteria in the "parameters" section of the indexer definition. -Include specific file extensions by setting `"indexedFileNameExtensions"` to a comma-separated list of file extensions (with a leading dot). Exclude specific file extensions by setting `"excludedFileNameExtensions"` to the extensions that should be skipped. If the same extension is in both lists, it will be excluded from indexing. +Include specific file extensions by setting `"indexedFileNameExtensions"` to a comma-separated list of file extensions (with a leading dot). Exclude specific file extensions by setting `"excludedFileNameExtensions"` to the extensions that should be skipped. 
If the same extension is in both lists, it's excluded from indexing. ```http PUT /indexers/[indexer name]?api-version=2020-06-30 PUT /indexers/[indexer name]?api-version=2020-06-30 ## Controlling which documents are indexed A single SharePoint indexer can index content from one or more document libraries. Use the "container" parameter on the data source definition to indicate which sites and document libraries to index from.-T + The [data source "container" section](#create-data-source) has two properties for this task: "name" and "query". ### Name The "query" parameter of the data source is made up of keyword/value pairs. The ## Handling errors -By default, the SharePoint indexer stops as soon as it encounters a document with an unsupported content type (for example, an image). You can use the `excludedFileNameExtensions` parameter to skip certain content types. However, you may need to index documents without knowing all the possible content types in advance. To continue indexing when an unsupported content type is encountered, set the `failOnUnsupportedContentType` configuration parameter to false: +By default, the SharePoint indexer stops as soon as it encounters a document with an unsupported content type (for example, an image). You can use the `excludedFileNameExtensions` parameter to skip certain content types. However, you might need to index documents without knowing all the possible content types in advance. 
To continue indexing when an unsupported content type is encountered, set the `failOnUnsupportedContentType` configuration parameter to false: ```http-PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2020-06-30-Preview +PUT https://[service name].search.windows.net/indexers/[indexer name]?api-version=2023-10-01-Preview Content-Type: application/json api-key: [admin key] You can also continue indexing if errors happen at any point of processing, eith } ``` -## Limitations and considerations --These are the limitations of this feature: --+ Indexing [SharePoint Lists](https://support.microsoft.com/office/introduction-to-lists-0a1c3ace-def0-44af-b225-cfa8d92c52d7) is not supported. --+ If a SharePoint file content and/or metadata has been indexed, renaming a SharePoint folder in its parent hierarchy is not a condition that will re-index the document. --+ Indexing SharePoint .ASPX site content is not supported. --+ OneNote notebook files are not supported. --+ [Private endpoint](search-indexer-howto-access-private.md) is not supported. --+ SharePoint supports a granular authorization model that determines per-user access at the document level. The SharePoint indexer does not pull these permissions into the search index, and Azure AI Search does not support document-level authorization. When a document is indexed from SharePoint into a search service, the content is available to anyone who has read access to the index. If you require document-level permissions, you should consider [security filters to trim results](search-security-trimming-for-azure-search-with-aad.md) and automate copying the permissions at a file level to the index. 
---These are the considerations when using this feature: --+ If there is a requirement to implement a SharePoint content indexing solution with Azure AI Search in a production environment, consider creating a custom connector with [SharePoint Webhooks](/sharepoint/dev/apis/webhooks/overview-sharepoint-webhooks) calling [Microsoft Graph API](/graph/use-the-api) to export the data to an Azure Blob container and use the [Azure Blob indexer](search-howto-indexing-azure-blob-storage.md) for incremental indexing. --+ There could be Microsoft 365 processes that update SharePoint file system-metadata (based on different configurations in SharePoint) and will cause the SharePoint indexer to trigger. Make sure that you test your setup and understand the document processing count prior to using any AI enrichment. Since this is a third-party connector to Azure (since SharePoint is located in Microsoft 365), SharePoint configuration is not checked by the indexer. --- ## See also + [Indexers in Azure AI Search](search-indexer-overview.md) |
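The SharePoint indexer walkthrough above issues a series of preview REST calls (create data source, create index, create indexer, poll status). A minimal Python sketch of how those requests are assembled; the service name, admin key, and the `sharepoint-*` object names are hypothetical placeholders, mirroring the article's `[service name]` / `[admin key]` conventions:

```python
import json

# Preview API version used throughout the walkthrough above.
API_VERSION = "2023-10-01-Preview"

def build_request(service: str, resource: str, api_key: str) -> dict:
    """Assemble the URL and headers for an Azure AI Search REST call.

    `service` and `api_key` stand in for your search service name and
    admin key ([service name] / [admin key] in the article's snippets).
    """
    return {
        "url": f"https://{service}.search.windows.net/{resource}?api-version={API_VERSION}",
        "headers": {"Content-Type": "application/json", "api-key": api_key},
    }

# Hypothetical indexer definition tying a data source to a target index,
# in the shape of the step 6 Create Indexer request body.
indexer = {
    "name": "sharepoint-indexer",
    "dataSourceName": "sharepoint-datasource",
    "targetIndexName": "sharepoint-index",
}

req = build_request("my-service", "indexers", "<admin key>")
print(req["url"])
print(json.dumps(indexer, indent=2))
```

The same helper covers the status poll (`indexers/sharepoint-indexer/status`) and run (`indexers/sharepoint-indexer/run`) calls by changing the `resource` path.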
search | Search Sku Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-sku-tier.md | Most features are available on all tiers, including the free tier. In a few case | [IP firewall access](service-configure-firewall.md) | Not available on the Free tier. | | [Private endpoint (integration with Azure Private Link)](service-create-private-endpoint.md) | For inbound connections to a search service, not available on the Free tier. For outbound connections by indexers to other Azure resources, not available on Free or S3 HD. For indexers that use skillsets, not available on Free, Basic, S1, or S3 HD.| | [Availability Zones](search-reliability.md) | Not available on the Free or Basic tier. |-| [Semantic ranking (preview)](semantic-search-overview.md) | Not available on the Free tier. | +| [Semantic ranking](semantic-search-overview.md) | Not available on the Free tier. | Resource-intensive features might not work well unless you give it sufficient capacity. For example, [AI enrichment](cognitive-search-concept-intro.md) has long-running skills that time out on a Free service unless the dataset is small. |
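The tier restrictions called out in the row above can be encoded as a small lookup. A sketch with hypothetical feature keys, simplifying the per-scenario caveats (inbound vs. outbound private endpoints, skillset indexers) down to the headline Free/Basic exclusions:

```python
# Tier exclusions from the feature table above (simplified): a feature
# maps to the set of tiers on which it is NOT available.
UNAVAILABLE_ON = {
    "ip_firewall": {"free"},
    "private_endpoint_inbound": {"free"},
    "availability_zones": {"free", "basic"},
    "semantic_ranking": {"free"},
}

def is_available(feature: str, tier: str) -> bool:
    """Return True if the feature is offered on the given tier."""
    return tier.lower() not in UNAVAILABLE_ON.get(feature, set())

print(is_available("semantic_ranking", "free"))
print(is_available("availability_zones", "standard"))
```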
search | Search What Is Azure Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-what-is-azure-search.md | Customers often ask how Azure AI Search compares with other search-related solut | Compared to | Key differences | |-|--|-| Microsoft Search | [Microsoft Search](/microsoftsearch/overview-microsoft-search) is for Microsoft 365 authenticated users who need to query over content in SharePoint. It's a ready-to-use search experience, enabled and configured by administrators, with the ability to accept external content through connectors from Microsoft and other sources. <br/><br/>In contrast, Azure AI Search executes queries over an index that you define, populated with data and documents you own, often from diverse sources. Azure AI Search has crawler capabilities for some Azure data sources through [indexers](search-indexer-overview.md), but you can push any JSON document that conforms to your index schema into a single, consolidated searchable resource. You can also customize the indexing pipeline to include machine learning and lexical analyzers. Because Azure AI Search is built to be a plug-in component in larger solutions, you can integrate search into almost any app, on any platform.| -|Bing | [Bing family of search APIs](/bing/search-apis/bing-web-search/bing-api-comparison) search the indexes on Bing.com for matching terms you submit. Indexes are built from HTML, XML, and other web content on public sites. Based on the same foundation, [Bing Custom Search](/bing/search-apis/bing-custom-search/overview) offers the same crawler technology for web content types, scoped to individual web sites.<br/><br/>In Azure AI Search, you define and populate the search index with your content. You control data ingestion using [indexers](search-indexer-overview.md) or by pushing any index-conforming JSON document to your search service. | -|Database search | Many database platforms include a built-in search experience. 
SQL Server has [full text search](/sql/relational-databases/search/full-text-search). Azure Cosmos DB and similar technologies have queryable indexes. When evaluating products that combine search and storage, it can be challenging to determine which way to go. Many solutions use both: DBMS for storage, and Azure AI Search for specialized search features.<br/><br/>Compared to DBMS search, Azure AI Search stores content from heterogeneous sources and offers specialized text processing features such as linguistic-aware text processing (stemming, lemmatization, word forms) in [56 languages](/rest/api/searchservice/language-support). It also supports autocorrection of misspelled words, [synonyms](/rest/api/searchservice/create-synonym-map), [suggestions](/rest/api/searchservice/suggestions), [scoring controls](/rest/api/searchservice/add-scoring-profiles-to-a-search-index), [facets](search-faceted-navigation.md), and [custom tokenization](/rest/api/searchservice/custom-analyzers-in-azure-search). The [full text search engine](search-lucene-query-architecture.md) in Azure AI Search is built on Apache Lucene, an industry standard in information retrieval. However, while Azure AI Search persists data in the form of an inverted index, it isn't a replacement for true data storage and we don't recommend using it in that capacity. For more information, see this [forum post](https://stackoverflow.com/questions/40101159/can-azure-search-be-used-as-a-primary-database-for-some-data). <br/><br/>Resource utilization is another inflection point in this category. Indexing and some query operations are often computationally intensive. Offloading search from the DBMS to a dedicated solution in the cloud preserves system resources for transaction processing. 
Furthermore, by externalizing search, you can easily adjust scale to match query volume.| -|Dedicated search solution | Assuming you've decided on dedicated search with full spectrum functionality, a final categorical comparison is between on premises solutions or a cloud service. Many search technologies offer controls over indexing and query pipelines, access to richer query and filtering syntax, control over rank and relevance, and features for self-directed and intelligent search. <br/><br/>A cloud service is the right choice if you want a turn-key solution with minimal overhead and maintenance, and adjustable scale. <br/><br/>Within the cloud paradigm, several providers offer comparable baseline features, with full-text search, geospatial search, and the ability to handle a certain level of ambiguity in search inputs. Typically, it's a [specialized feature](search-features-list.md), or the ease and overall simplicity of APIs, tools, and management that determines the best fit. | --Among cloud providers, Azure AI Search is strongest for full text search workloads over content stores and databases on Azure, for apps that rely primarily on search for both information retrieval and content navigation. +| Microsoft Search | [Microsoft Search](/microsoftsearch/overview-microsoft-search) is for Microsoft 365 authenticated users who need to query over content in SharePoint. Azure AI Search pulls in content across Azure and any JSON dataset. | +|Bing | [Bing APIs](/bing/search-apis/bing-web-search/bing-api-comparison) query the indexes on Bing.com for matching terms. Azure AI Search searches over indexes populated with your content. You control data ingestion and the schema. | +|Database search | SQL Server has [full text search](/sql/relational-databases/search/full-text-search) and Azure Cosmos DB and similar technologies have queryable indexes. 
Azure AI Search becomes an attractive alternative when you need features like lexical analyzers and relevance tuning, or content from heterogeneous sources. Resource utilization is another inflection point. Indexing and queries are computationally intensive. Offloading search from the DBMS preserves system resources for transaction processing. | +|Dedicated search solution | Assuming you've decided on dedicated search with full spectrum functionality, a final categorical comparison is between search technologies. Among cloud providers, Azure AI Search is strongest for vector, keyword, and hybrid workloads over content on Azure, for apps that rely primarily on search for both information retrieval and content navigation. | Key strengths include: ++ Relevance tuning through semantic ranking and scoring profiles. + Data integration (crawlers) at the indexing layer.-+ AI and machine learning integration with Azure AI services, useful if you need to make unsearchable content full text-searchable. -+ Security integration with Microsoft Entra ID for trusted connections, and with Azure Private Link integration to support private connections to a search index in no-internet scenarios. -+ Linguistic and custom text analysis in 56 languages. -+ [Full search experience](search-features-list.md): rich query language, relevance tuning and semantic ranking, faceting, autocomplete queries and suggested results, and synonyms. -+ Azure scale, reliability, and world-class availability. --Among our customers, those able to apply the widest range of features in Azure AI Search include online catalogs, line-of-business programs, and document discovery applications. ++ Azure AI integration for transformations that make content text and vector searchable.++ Microsoft Entra security for trusted connections, and Azure Private Link for private connections in no-internet scenarios.++ [Full search experience](search-features-list.md): Linguistic and custom text analysis in 56 languages. 
Faceting, autocomplete queries and suggested results, and synonyms.++ Azure scale, reliability, and global reach. <!-- ## Watch this video |
search | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure AI Search description: Lists Azure Policy Regulatory Compliance controls available for Azure AI Search. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
sentinel | Automate Incident Handling With Automation Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/automate-incident-handling-with-automation-rules.md | Microsoft security alerts include the following: - Microsoft Defender for Office 365 - Microsoft Defender for Endpoint - Microsoft Defender for Identity-- Defender for IoT+- Microsoft Defender for IoT ### Multiple sequenced playbooks/actions in a single rule |
sentinel | Cloudwatch Lambda Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/cloudwatch-lambda-function.md | The lambda function uses Python 3.9 runtime and x86_64 architecture. :::image type="content" source="media/cloudwatch-lambda-function/lambda-other-permissions-policies.png" alt-text="Screenshot of the AWS Management Console Add permissions policies screen." lightbox="media/cloudwatch-lambda-function/lambda-other-permissions-policies.png"::: -1. Copy the code link from the [source file](https://github.com/Azure/Azure-Sentinel/blob/master/DataConnectors/AWS-S3/CloudWatchLanbdaFunction.py). 1. Return to the function, select **Code**, and paste the code link under **Code source**. :::image type="content" source="media/cloudwatch-lambda-function/lambda-code-source.png" alt-text="Screenshot of the AWS Management Console Code source screen." lightbox="media/cloudwatch-lambda-function/lambda-code-source.png"::: |
sentinel | Connect Logstash Data Connection Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash-data-connection-rules.md | The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to y ### Prerequisites -- Install a supported version of Logstash. The plugin supports: - - Logstash version 7.0 to 7.17.10. - - Logstash version 8.0 to 8.8.1. - +- Install a supported version of Logstash. The plugin supports the following Logstash versions: + - 7.0 - 7.17.13 + - 8.0 - 8.9 + - 8.11 + > [!NOTE] > If you use Logstash 8, we recommended that you [disable ECS in the pipeline](https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html). |
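The updated supported-version list above (7.0-7.17.13, 8.0-8.9, 8.11) can be checked programmatically before installing the plugin. A sketch, assuming the ranges are inclusive and that any patch level of 8.11 counts; this helper is illustrative, not part of the plugin:

```python
def is_supported(version: str) -> bool:
    """Check a Logstash version string against the supported ranges
    listed above: 7.0-7.17.13, 8.0-8.9, and 8.11 (assumed inclusive)."""
    parts = tuple(int(p) for p in version.split("."))
    parts = parts + (0,) * (3 - len(parts))  # pad e.g. "8.9" -> (8, 9, 0)
    major, minor, patch = parts[:3]
    if major == 7:
        return (7, 0, 0) <= (major, minor, patch) <= (7, 17, 13)
    if major == 8:
        return minor <= 9 or minor == 11
    return False

print(is_supported("7.17.13"))
print(is_supported("8.10.0"))
```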
sentinel | Connect Logstash | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-logstash.md | The Logstash engine is comprised of three components: > > - Microsoft does not support third-party Logstash output plugins for Microsoft Sentinel, or any other Logstash plugin or component of any type. >-> - Microsoft Sentinel's Logstash output plugin supports only **Logstash versions 7.0 to 7.17.10, and versions 8.0 to 8.8.1**. +> - Microsoft Sentinel's Logstash output plugin supports only **Logstash versions 7.0 to 7.17.10, and versions 8.0 to 8.9 and 8.11**. > If you use Logstash 8, we recommended that you [disable ECS in the pipeline](https://www.elastic.co/guide/en/logstash/8.4/ecs-ls.html). The Microsoft Sentinel output plugin for Logstash sends JSON-formatted data to your Log Analytics workspace, using the Log Analytics HTTP Data Collector REST API. The data is ingested into custom logs. |
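The Log Analytics HTTP Data Collector REST API that the output plugin targets authenticates each POST with a SharedKey HMAC-SHA256 signature over a canonical string. A sketch of that documented signing scheme; the workspace ID and key here are placeholders, and the plugin performs this for you:

```python
import base64
import hashlib
import hmac

def build_signature(workspace_id: str, shared_key_b64: str,
                    body: bytes, date_rfc1123: str) -> str:
    """Build the SharedKey Authorization header for the HTTP Data
    Collector API. `shared_key_b64` is the workspace key (base64)."""
    # Canonical string: method, content length, content type, date, path.
    string_to_sign = (
        f"POST\n{len(body)}\napplication/json\n"
        f"x-ms-date:{date_rfc1123}\n/api/logs"
    ).encode("utf-8")
    digest = hmac.new(base64.b64decode(shared_key_b64),
                      string_to_sign, hashlib.sha256).digest()
    return f"SharedKey {workspace_id}:{base64.b64encode(digest).decode()}"

demo_key = base64.b64encode(b"not-a-real-key").decode()  # placeholder key
auth = build_signature("<workspace id>", demo_key,
                       b'[{"message": "hello"}]',
                       "Mon, 01 Jan 2024 00:00:00 GMT")
print(auth)
```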
sentinel | Create Nrt Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/create-nrt-rules.md | -> [!IMPORTANT] -> -> - Near-real-time (NRT) rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - Microsoft SentinelΓÇÖs [near-real-time analytics rules](near-real-time-rules.md) provide up-to-the-minute threat detection out-of-the-box. This type of rule was designed to be highly responsive by running its query at intervals just one minute apart. For the time being, these templates have limited application as outlined below, but the technology is rapidly evolving and growing. You create NRT rules the same way you create regular [scheduled-query analytics 1. From the Microsoft Sentinel navigation menu, select **Analytics**. -1. Select **Create** from the button bar, then **NRT query rule (preview)** from the drop-down list. +1. Select **Create** from the button bar, then **NRT query rule** from the drop-down list. :::image type="content" source="media/create-nrt-rules/create-nrt-rule.png" alt-text="Screenshot shows how to create a new NRT rule." lightbox="media/create-nrt-rules/create-nrt-rule.png"::: |
sentinel | Corelight | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/corelight.md | Install the agent on the Server where the Corelight logs are generated. 2. Configure the logs to be collected Follow the configuration steps below to get Corelight logs into Microsoft Sentinel. This configuration enriches events generated by Corelight module to provide visibility on log source information for Corelight logs. Refer to the [Azure Monitor Documentation](/azure/azure-monitor/agents/data-sources-json) for more details on these steps.-1. Download config file: [corelight.conf](https://aka.ms/sentinel-Corelight-conf/). -2. Login to the server where you have installed Azure Log Analytics agent. -3. Copy corelight.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder. -4. Edit corelight.conf as follows: +1. Log in to the server where you have installed Azure Log Analytics agent. +2. Copy corelight.conf to the /etc/opt/microsoft/omsagent/**workspace_id**/conf/omsagent.d/ folder. +3. Edit corelight.conf as follows: i. configure an alternate port to send data to, if desired (line 3) ii. replace **workspace_id** with real value of your Workspace ID (lines 22,23,24,27)-5. Save changes and restart the Azure Log Analytics agent for Linux service with the following command: +4. Save changes and restart the Azure Log Analytics agent for Linux service with the following command: sudo /opt/microsoft/omsagent/bin/service_control restart |
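The corelight.conf edit described above boils down to substituting your real Workspace ID for the **workspace_id** placeholder on the indicated lines. A sketch of that substitution; the template excerpt below is hypothetical, only the placeholder convention comes from the steps above:

```python
# Hypothetical excerpt of corelight.conf; the shipped file marks the
# values to edit with the **workspace_id** placeholder (lines 22-24, 27).
template = """\
workspace_id **workspace_id**
endpoint https://**workspace_id**.ods.opinsights.azure.com
"""

workspace_id = "11111111-2222-3333-4444-555555555555"  # your real Workspace ID
rendered = template.replace("**workspace_id**", workspace_id)
print(rendered)
```

After writing the rendered file back, restart the agent as shown (`sudo /opt/microsoft/omsagent/bin/service_control restart`).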
sentinel | Deprecated Ai Analyst Darktrace Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-ai-analyst-darktrace-via-legacy-agent.md | Install and configure the Linux agent to collect your Common Event Format (CEF) 1.1 Select or create a Linux machine -Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. +Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-premises environment, Azure or other clouds. 1.2 Install the CEF collector on the Linux machine Make sure to configure the machine's security according to your organization's s [Learn more >](https://aka.ms/SecureCEF)----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/darktrace1655286944672.darktrace_mss?tab=Overview) in the Azure Marketplace. |
sentinel | Deprecated Delinea Secret Server Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-delinea-secret-server-via-legacy-agent.md | To integrate with [Deprecated] Delinea Secret Server via Legacy Agent make sure - **Delinea Secret Server**: must be configured to export logs via Syslog - [Learn more about configure Secret Server](https://thy.center/ss/link/syslog) -- ## Vendor installation instructions 1. Linux Syslog agent configuration Install and configure the Linux agent to collect your Common Event Format (CEF) 1.1 Select or create a Linux machine -Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. +Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-premises environment, Azure or other clouds. 1.2 Install the CEF collector on the Linux machine |
sentinel | Deprecated Extrahop Reveal X Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-extrahop-reveal-x-via-legacy-agent.md | Install and configure the Linux agent to collect your Common Event Format (CEF) 1.1 Select or create a Linux machine -Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. +Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-premises environment, Azure or other clouds. 1.2 Install the CEF collector on the Linux machine Install the Microsoft Monitoring Agent on your Linux machine and configure the m 2. Forward ExtraHop Networks logs to Syslog agent 1. Set your security solution to send Syslog messages in CEF format to the proxy machine. Make sure to send the logs to port 514 TCP on the machine IP address.-2. Follow the directions to install the [ExtraHop Detection SIEM Connector bundle](https://aka.ms/asi-syslog-extrahop-forwarding) on your Reveal(x) system. The SIEM Connector is required for this integration. +2. Follow the directions to install the [ExtraHop Detection SIEM Connector bundle](https://learn.extrahop.com/extrahop-detection-siem-connector-bundle) on your Reveal(x) system. The SIEM Connector is required for this integration. 3. Enable the trigger for **ExtraHop Detection SIEM Connector - CEF** 4. Update the trigger with the ODS syslog targets you created 5. The Reveal(x) system formats syslog messages in Common Event Format (CEF) and then sends data to Microsoft Sentinel. |
sentinel | Deprecated Forcepoint Csg Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-forcepoint-csg-via-legacy-agent.md | This integration requires the Linux Syslog agent to collect your Forcepoint Clou The integration is made available with two implementations options. -2.1 Docker Implementation +2.1 Splunk Implementation -Leverages docker images where the integration component is already installed with all necessary dependencies. +Leverages splunk images where the integration component is already installed with all necessary dependencies. Follow the instructions provided in the Integration Guide linked below. -[Integration Guide >](https://frcpnt.com/csg-sentinel) +[Integration Guide >](https://forcepoint.github.io/docs/csg_and_splunk/) -2.2 Traditional Implementation +2.2 VeloCloud Implementation Requires the manual deployment of the integration component inside a clean Linux machine. Follow the instructions provided in the Integration Guide linked below. -[Integration Guide >](https://frcpnt.com/csg-sentinel) +[Integration Guide >](https://forcepoint.github.io/docs/csg_and_velocloud/) 3. Validate connection |
sentinel | Deprecated Morphisec Utpp Via Legacy Agent | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/deprecated-morphisec-utpp-via-legacy-agent.md | Integrate vital insights from your security products with the Morphisec Data Con | **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser | | **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |-| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) | +| **Supported by** | [Morphisec](https://support.morphisec.com/hc/en-us) | ## Query samples Install and configure the Linux agent to collect your Common Event Format (CEF) 1.1 Select or create a Linux machine -Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel this machine can be on your on-prem environment, Azure or other clouds. +Select or create a Linux machine that Microsoft Sentinel will use as the proxy between your security solution and Microsoft Sentinel. This machine can be in your on-premises environment, Azure, or other clouds. 1.2 Install the CEF collector on the Linux machine |
sentinel | Netskope Using Azure Functions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/netskope-using-azure-functions.md | Netskope To integrate with Netskope (using Azure Functions) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). - **Netskope API Token**: A Netskope API Token is required. [See the documentation to learn more about Netskope API](https://innovatechcloud.goskope.com/docs/Netskope_Help/en/rest-api-v1-overview.html). **Note:** A Netskope account is required ## Vendor installation instructions |
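One common pattern — an assumption here, not a step from this article — is to hand the Netskope API token to the Function App as an app setting rather than hard-coding it. A sketch with placeholder resource names and a placeholder token:

```shell
# Hypothetical names; substitute your own resource group, Function App, and token.
RG="netskope-connector-rg"
FUNC_APP="netskope-ingest-func"
NETSKOPE_TOKEN="00000000-aaaa-bbbb-cccc-000000000000"

# Command that would store the token as a Function App setting (not executed here):
#   az functionapp config appsettings set -g "$RG" -n "$FUNC_APP" \
#     --settings "apikey=$NETSKOPE_TOKEN"

echo "apikey=$NETSKOPE_TOKEN"
```

The function code can then read the token from its environment instead of from source control.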
sentinel | Recommended Ai Analyst Darktrace Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-ai-analyst-darktrace-via-ama.md | Make sure to configure the machine's security according to your organization's s [Learn more >](https://aka.ms/SecureCEF)----## Next steps --For more information, go to the [related solution](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/darktrace1655286944672.darktrace_mss?tab=Overview) in the Azure Marketplace. |
sentinel | Recommended Morphisec Utpp Via Ama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/recommended-morphisec-utpp-via-ama.md | Integrate vital insights from your security products with the Morphisec Data Con | **Kusto function url** | https://aka.ms/sentinel-morphisecutpp-parser | | **Log Analytics table(s)** | CommonSecurityLog (Morphisec)<br/> | | **Data collection rules support** | [Workspace transform DCR](/azure/azure-monitor/logs/tutorial-workspace-transformations-portal) |-| **Supported by** | [Morphisec](https://support.morphisec.com/support/home) | +| **Supported by** | [Morphisec](https://support.morphisec.com/hc/en-us) | ## Query samples |
sentinel | Symantec Vip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/symantec-vip.md | Configure the facilities you want to collect and their severities. 3. Configure and connect the Symantec VIP -[Follow these instructions](https://help.symantec.com/cs/VIP_EG_INSTALL_CONFIG/VIP/v134652108_v128483142/Configuring-syslog) to configure the Symantec VIP Enterprise Gateway to forward syslog. Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. +Use the IP address or hostname for the Linux device with the Linux agent installed as the Destination IP address. |
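Facility and severity selection uses standard syslog selector syntax. As an illustrative sketch only — the facility, file name, and collector address are placeholder assumptions, not values from this article — a forwarding rule sending one facility at `info` and above over TCP port 514 looks like:

```
# /etc/rsyslog.d/95-forward-vip.conf (hypothetical file name)
# Forward local0 events at severity info or higher to the Linux agent machine.
local0.info  @@10.0.0.4:514
```

The `@@` prefix means TCP in rsyslog; a single `@` would send UDP instead.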
sentinel | Tenable Io Vulnerability Management Using Azure Function | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors/tenable-io-vulnerability-management-using-azure-function.md | Tenable_IO_Assets_CL To integrate with Tenable.io Vulnerability Management (using Azure Function) make sure you have: - **Microsoft.Web/sites permissions**: Read and write permissions to Azure Functions to create a Function App are required. [See the documentation to learn more about Azure Functions](/azure/azure-functions/). - **REST API Credentials/permissions**: Both a **TenableAccessKey** and a **TenableSecretKey** are required to access the Tenable REST API. [See the documentation to learn more about API](https://developer.tenable.com/reference#vulnerability-management). Check all [requirements and follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) for obtaining credentials. ## Vendor installation instructions To integrate with Tenable.io Vulnerability Management (using Azure Function) mak **STEP 1 - Configuration steps for Tenable.io** [Follow the instructions](https://docs.tenable.com/tenableio/vulnerabilitymanagement/Content/Settings/GenerateAPIKey.htm) to obtain the required API credentials. |
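Tenable's REST API authenticates with an `X-ApiKeys` header built from the access/secret key pair. A minimal sketch of assembling that header — the key values are placeholders, and the endpoint in the comment is only an example:

```shell
# Placeholder credentials; use the TenableAccessKey / TenableSecretKey you generated.
TENABLE_ACCESS_KEY="0123abcd"
TENABLE_SECRET_KEY="4567efgh"

# Tenable APIs expect both keys in a single X-ApiKeys header.
AUTH_HEADER="X-ApiKeys: accessKey=${TENABLE_ACCESS_KEY};secretKey=${TENABLE_SECRET_KEY}"

# Example request (network call, not executed here):
#   curl -s -H "$AUTH_HEADER" "https://cloud.tenable.com/assets"

echo "$AUTH_HEADER"
```

The Azure Function passes the same pair of values through its **TenableAccessKey** and **TenableSecretKey** settings.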
sentinel | Detect Threats Built In | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-built-in.md | Detections include: | **Threat Intelligence** | Take advantage of threat intelligence produced by Microsoft to generate high fidelity alerts and incidents with the **Microsoft Threat Intelligence Analytics** rule. This unique rule isn't customizable, but when enabled, automatically matches Common Event Format (CEF) logs, Syslog data or Windows DNS events with domain, IP and URL threat indicators from Microsoft Threat Intelligence. Certain indicators contain more context information through MDTI (**Microsoft Defender Threat Intelligence**).<br><br>For more information on how to enable this rule, see [Use matching analytics to detect threats](use-matching-analytics-to-detect-threats.md).<br>For more information on MDTI, see [What is Microsoft Defender Threat Intelligence](/../defender/threat-intelligence/what-is-microsoft-defender-threat-intelligence-defender-ti) | <a name="anomaly"></a>**Anomaly** | Anomaly rule templates use machine learning to detect specific types of anomalous behavior. Each rule has its own unique parameters and thresholds, appropriate to the behavior being analyzed. <br><br>While the configurations of out-of-the-box rules can't be changed or fine-tuned, you can duplicate a rule, and then change and fine-tune the duplicate. In such cases, run the duplicate in **Flighting** mode and the original concurrently in **Production** mode. Then compare results, and switch the duplicate to **Production** if and when its fine-tuning is to your liking. <br><br>For more information, see [Use customizable anomalies to detect threats in Microsoft Sentinel](soc-ml-anomalies.md) and [Work with anomaly detection analytics rules in Microsoft Sentinel](work-with-anomaly-rules.md). | | <a name="scheduled"></a>**Scheduled** | Scheduled analytics rules are based on queries written by Microsoft security experts. 
You can see the query logic and make changes to it. You can use the scheduled rules template and customize the query logic and scheduling settings to create new rules. <br><br>Several new scheduled analytics rule templates produce alerts that are correlated by the Fusion engine with alerts from other systems to produce high-fidelity incidents. For more information, see [Advanced multistage attack detection](configure-fusion-rules.md#configure-scheduled-analytics-rules-for-fusion-detections).<br><br>**Tip**: Rule scheduling options include configuring the rule to run every specified number of minutes, hours, or days, with the clock starting when you enable the rule. <br><br>We recommend being mindful of when you enable a new or edited analytics rule to ensure that the rules get the new stack of incidents in time. For example, you might want to run a rule in sync with when your SOC analysts begin their workday, and enable the rules then.|-| <a name="nrt"></a>**Near-real-time (NRT)**<br>(Preview) | NRT rules are limited set of scheduled rules, designed to run once every minute, in order to supply you with information as up-to-the-minute as possible. <br><br>They function mostly like scheduled rules and are configured similarly, with some limitations. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md). | +| <a name="nrt"></a>**Near-real-time (NRT)** | NRT rules are a limited set of scheduled rules, designed to run once every minute, in order to supply you with information as up-to-the-minute as possible. <br><br>They function mostly like scheduled rules and are configured similarly, with some limitations. For more information, see [Detect threats quickly with near-real-time (NRT) analytics rules in Microsoft Sentinel](near-real-time-rules.md). 
| > [!IMPORTANT]-> The rule templates so indicated above are currently in **PREVIEW**, as are some of the **Fusion** detection templates (see [Advanced multistage attack detection in Microsoft Sentinel](fusion.md) to see which ones). See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. +> Some of the **Fusion** detection templates are currently in **PREVIEW** (see [Advanced multistage attack detection in Microsoft Sentinel](fusion.md) to see which ones). See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. ## Use analytics rule templates |
sentinel | Feature Availability | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/feature-availability.md | This article describes the features available in Microsoft Sentinel across diffe |||||| |[Analytics rules health](monitor-analytics-rule-integrity.md) |Public preview |✅ |❌ |❌ | |[MITRE ATT&CK dashboard](mitre-coverage.md) |Public preview |✅ |❌ |❌ |-|[NRT rules](near-real-time-rules.md) |Public preview |✅ |✅ |✅ | +|[NRT rules](near-real-time-rules.md) |GA |✅ |✅ |✅ | |[Recommendations](detection-tuning.md) |Public preview |✅ |✅ |❌ | |[Scheduled](detect-threats-built-in.md) and [Microsoft rules](create-incidents-from-alerts.md) |GA |✅ |✅ |✅ | |
sentinel | Microsoft 365 Defender Sentinel Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/microsoft-365-defender-sentinel-integration.md | In addition to collecting alerts from these components and other services, Micro ## Connecting to Microsoft 365 Defender Install the Microsoft 365 Defender solution for Microsoft Sentinel and enable the Microsoft 365 Defender data connector to [collect incidents and alerts](connect-microsoft-365-defender.md). Microsoft 365 Defender incidents appear in the Microsoft Sentinel incidents queue, with **Microsoft 365 Defender** in the **Product name** field, shortly after they are generated in Microsoft 365 Defender.+ - It can take up to 10 minutes from the time an incident is generated in Microsoft 365 Defender to the time it appears in Microsoft Sentinel. -- Incidents will be ingested and synchronized at no extra cost.+- Alerts and incidents from Microsoft 365 Defender (those items which populate the *SecurityAlert* and *SecurityIncident* tables) are ingested into and synchronized with Microsoft Sentinel at no charge. For all other data types from individual Defender components (such as DeviceInfo, DeviceFileEvents, EmailEvents, and so on), ingestion will be charged. Once the Microsoft 365 Defender integration is connected, the connectors for all the integrated components and services (Defender for Endpoint, Defender for Identity, Defender for Office 365, Defender for Cloud Apps, Microsoft Entra ID Protection) will be automatically connected in the background if they weren't already. If any component licenses were purchased after Microsoft 365 Defender was connected, the alerts and incidents from the new product will still flow to Microsoft Sentinel with no additional configuration or charge. |
sentinel | Near Real Time Rules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/near-real-time-rules.md | -> [!IMPORTANT] -> -> - Near-real-time (NRT) rules are currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. - ## What are near-real-time (NRT) analytics rules? -When you're faced with security threats, time and speed are of the essence. You need to be aware of threats as they materialize so you can analyze and respond quickly to contain them. Microsoft Sentinel's near-real-time (NRT) analytics rules offer you faster threat detection - closer to that of an on-premises SIEM - and the ability to shorten response times in specific scenarios. +When you're faced with security threats, time and speed are of the essence. You need to be aware of threats as they materialize so you can analyze and respond quickly to contain them. Microsoft Sentinel's near-real-time (NRT) analytics rules offer you faster threat detection—closer to that of an on-premises SIEM—and the ability to shorten response times in specific scenarios. Microsoft Sentinel's [near-real-time analytics rules](detect-threats-built-in.md#nrt) provide up-to-the-minute threat detection out-of-the-box. This type of rule was designed to be highly responsive by running its query at intervals just one minute apart. NRT rules are hard-coded to run once every minute and capture events ingested in Unlike regular scheduled rules that run on a built-in five-minute delay to account for ingestion time lag, NRT rules run on just a two-minute delay, solving the ingestion delay problem by querying on events' ingestion time instead of their generation time at the source (the TimeGenerated field). 
This results in improvements of both frequency and accuracy in your detections. (To understand this issue more completely, see [Query scheduling and alert threshold](detect-threats-custom.md#query-scheduling-and-alert-threshold) and [Handle ingestion delay in scheduled analytics rules](ingestion-delay.md).) -NRT rules have many of the same features and capabilities as scheduled analytics rules. The full set of alert enrichment capabilities is available – you can map entities and surface custom details, and you can configure dynamic content for alert details. You can choose how alerts are grouped into incidents, you can temporarily suppress the running of a query after it generates a result, and you can define automation rules and playbooks to run in response to alerts and incidents generated from the rule. +NRT rules have many of the same features and capabilities as scheduled analytics rules. The full set of alert enrichment capabilities is available—you can map entities and surface custom details, and you can configure dynamic content for alert details. You can choose how alerts are grouped into incidents, you can temporarily suppress the running of a query after it generates a result, and you can define automation rules and playbooks to run in response to alerts and incidents generated from the rule. For the time being, these templates have limited application as outlined below, but the technology is rapidly evolving and growing. |
sentinel | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md | The listed features were released in the last three months. For information abou ## November 2023 +- [Near-real-time rules now generally available](#near-real-time-rules-now-generally-available) - [Elevate your cybersecurity intelligence with enrichment widgets (Preview)](#elevate-your-cybersecurity-intelligence-with-enrichment-widgets-preview) +### Near-real-time rules now generally available ++Microsoft Sentinel's [near-real-time analytics rules](detect-threats-built-in.md#nrt) are now generally available (GA). These highly responsive rules provide up-to-the-minute threat detection by running their queries at intervals just one minute apart. ++- [Learn more about near-real-time rules](near-real-time-rules.md). +- [Create and work with near-real-time rules](create-nrt-rules.md). + <a name="visualize-data-with-enrichment-widgets-preview"></a> ### Elevate your cybersecurity intelligence with enrichment widgets (Preview) -Enrichment Widgets in Microsoft Sentinel are dynamic components designed to provide you with in-depth, actionable intelligence about entities. They integrate external and internal content and data from various sources, offering a comprehensive understanding of potential security threats. These widgets serve as a powerful enhancement to your cybersecurity toolkit, offering both depth and breadth in information analysis. +Enrichment widgets in Microsoft Sentinel are dynamic components designed to provide you with in-depth, actionable intelligence about entities. They integrate external and internal content and data from various sources, offering a comprehensive understanding of potential security threats. These widgets serve as a powerful enhancement to your cybersecurity toolkit, offering both depth and breadth in information analysis. Widgets are already available in Microsoft Sentinel today (in Preview). 
They currently appear for IP entities, both on their full [entity pages](entity-pages.md) and on their [entity info panels](incident-investigation.md) that appear in Incident pages. These widgets show you valuable information about the entities, from both internal and third-party sources. |
service-bus-messaging | Enable Partitions Premium | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/enable-partitions-premium.md | Title: Enable partitioning in Azure Service Bus Premium namespaces description: This article explains how to enable partitioning in Azure Service Bus Premium namespaces by using Azure portal, PowerShell, CLI, and programming languages (C#, Java, Python, and JavaScript) Last updated 10/23/2023 -+ ms.devlang: azurecli |
service-bus-messaging | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/policy-reference.md | Title: Built-in policy definitions for Azure Service Bus Messaging description: Lists Azure Policy built-in policy definitions for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
service-bus-messaging | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Service Bus Messaging description: Lists Azure Policy Regulatory Compliance controls available for Azure Service Bus Messaging. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
service-bus-messaging | Service Bus Messaging Exceptions Latest | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-messaging-exceptions-latest.md | Title: Azure Service Bus - messaging exceptions | Microsoft Docs description: This article provides a list of Azure Service Bus messaging exceptions and suggested actions to take when the exception occurs. + Last updated 02/17/2023 |
service-connector | Tutorial Csharp Webapp Storage Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-csharp-webapp-storage-cli.md | +> > * Set up your initial environment with the Azure CLI > * Create a storage account and an Azure Blob Storage container. > * Deploy code to Azure App Service and connect to storage with managed identity using Service Connector ## Prerequisites -- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free).-- The <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.30.0 or higher. You'll use it to run commands in any shell to provision and configure Azure resources.+* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). + ## Set up your initial environment+ 1. Check that your Azure CLI version is 2.30.0 or higher: ```azurecli az --version ```- If you need to upgrade, try the `az upgrade` command (requires version 2.11+) or see <a href="/cli/azure/install-azure-cli" target="_blank">Install the Azure CLI</a>. ++ If you need to upgrade, run the `az upgrade` command (requires version 2.11+). 1. Sign in to Azure using the CLI: ```azurecli az login ```+ This command opens a browser to gather your credentials. When the command finishes, it shows a JSON output containing information about your subscriptions. Once signed in, you can run Azure commands with the Azure CLI to work with resources in your subscription. Learn how to access Azure Blob Storage for a web app (not a signed-in user) runn ## Clone or download the sample app 1. Clone the sample repository:+ ```Bash git clone https://github.com/Azure-Samples/serviceconnector-webapp-storageblob-dotnet.git ``` 1. Go to the repository's root folder:+ ```Bash cd serviceconnector-webapp-storageblob-dotnet ``` Learn how to access Azure Blob Storage for a web app (not a signed-in user) runn 1. 
In the terminal, make sure you're in the *WebAppStorageMISample* repository folder that contains the app code. -1. Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command below. - +1. Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command below and replace the placeholders with your own data: ++ * For the `--location` argument, use a [region supported by Service Connector](concept-region-support.md). + * Replace `<app-name>` with a unique name across Azure. The server endpoint is `https://<app-name>.azurewebsites.net`. Allowed characters for `<app-name>` are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier. + ```azurecli az webapp up --name <app-name> --sku B1 --location eastus --resource-group ServiceConnector-tutorial-rg ``` - Replace the following placeholder texts with your own data: -- - For the *`--location`* argument, make sure to use a [region supported by Service Connector](concept-region-support.md). - - Replace *`<app-name>`* with a unique name across all Azure (the server endpoint is `https://<app-name>.azurewebsites.net`). Allowed characters for *`<app-name>`* are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier. - ## Create a storage account and a Blob Storage container -In the terminal, run the following command to create a general purpose v2 storage account and a Blob Storage container. +In the terminal, run the following command to create a general purpose v2 storage account and a Blob Storage container. ```azurecli az storage account create --name <storage-name> --resource-group ServiceConnector-tutorial-rg --sku Standard_RAGRS --https-only ```-Replace *`<storage-name>`* with a unique name. 
The name of the container must be in lowercase, start with a letter or a number, and can include only letters, numbers, and the dash (-) character. +Replace `<storage-name>` with a unique name. The name of the container must be in lowercase, start with a letter or a number, and can include only letters, numbers, and the dash (-) character. ## Connect an App Service app to a Blob Storage container with a managed identity -In the terminal, run the following command to connect your web app to blob storage with a managed identity. +In the terminal, run the following command to connect your web app to Blob Storage using a managed identity. ```azurecli az webapp connection create storage-blob -g ServiceConnector-tutorial-rg -n <app-name> --tg ServiceConnector-tutorial-rg --account <storage-name> --system-identity ``` - Replace the following placeholder texts with your own data: -- Replace *`<app-name>`* with your web app name you used in step 3.-- Replace *`<storage-name>`* with your storage app name you used in step 4.+Replace the following placeholders with your own data: ++* Replace `<app-name>` with the web app name you used in step 3. +* Replace `<storage-name>` with the storage app name you used in step 4. > [!NOTE]-> If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", please run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider and run the connection command again. +> If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider and run the connection command again. ## Run sample code -In the terminal, run the following command to open the sample application in your browser. 
Replace `<app-name>` with the web app name you used earlier. ```azurecli az webapp browse --name <app-name> The sample code is a web application. Each time you refresh the index page, the ## Next steps -Follow the tutorials listed below to learn more about Service Connector. +To learn more about Service Connector, read the guide below. > [!div class="nextstepaction"]-> [Service Connector concepts](./concept-service-connector-internals.md) |
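The `<app-name>` character rule quoted above (letters, digits, and `-`, forming the `https://<app-name>.azurewebsites.net` endpoint) can be checked locally before calling `az webapp up`. A small sketch with a hypothetical name:

```shell
# Check a candidate <app-name> against the allowed characters: letters,
# digits, and "-". The name itself is a made-up example.
APP_NAME="contoso-storage-demo-01"
case "$APP_NAME" in
  "")              APP_NAME_OK=no ;;  # empty name
  *[!A-Za-z0-9-]*) APP_NAME_OK=no ;;  # contains a disallowed character
  *)               APP_NAME_OK=yes ;;
esac
echo "$APP_NAME -> https://$APP_NAME.azurewebsites.net ($APP_NAME_OK)"
```

Note this only checks the character set; global uniqueness is still verified by Azure when the app is created.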
service-connector | Tutorial Django Webapp Postgres Cli | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-django-webapp-postgres-cli.md | In this tutorial, you use the Azure CLI to complete the following tasks: ::: zone pivot="postgres-flexible-server" -This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible server](../postgresql/flexible-server/index.yml) database. If you can't use PostgreSQL Flexible server, then select the Single Server option above. +This tutorial shows how to deploy a data-driven Python [Django](https://www.djangoproject.com/) web app to [Azure App Service](overview.md) and connect it to an [Azure Database for PostgreSQL Flexible server](../postgresql/flexible-server/index.yml) database. If you can't use PostgreSQL Flexible server, then select the Single Server option above. In this tutorial, you'll use the Azure CLI to complete the following tasks: In this tutorial, you'll use the Azure CLI to complete the following tasks: > * View diagnostic logs > * Manage the web app in the Azure portal - :::zone-end -## Set up your initial environment --1. Have an Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?ref=microsoft.com&utm_source=microsoft.com&utm_medium=docs&utm_campaign=visualstudio). -1. Install <a href="https://www.python.org/downloads/" target="_blank">Python 3.6 or higher</a>. -1. Install the <a href="/cli/azure/install-azure-cli" target="_blank">Azure CLI</a> 2.30.0 or higher, with which you run commands in any shell to provision and configure Azure resources. 
--Open a terminal window and check your Python version is 3.6 or higher: --# [Bash](#tab/bash) --```bash -python3 --version -``` --# [PowerShell](#tab/powershell) +## Prerequisites -```cmd -py -3 --version -``` +* An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/). -# [Cmd](#tab/cmd) +## Set up your initial environment -```cmd -py -3 --version -``` +1. Install [Python 3.6 or higher](https://www.python.org/downloads/). To check if your Python version is 3.6 or higher, run the following code in a terminal window: -+ ### [Bash](#tab/bash) -Check that your Azure CLI version is 2.30.0 or higher: + ```bash + python3 --version + ``` -```azurecli -az --version -``` + ### [PowerShell](#tab/powershell) -If you need to upgrade, try the `az upgrade` command (requires version 2.30.0+) or see <a href="/cli/azure/install-azure-cli" target="_blank">Install the Azure CLI</a>. + ```cmd + py -3 --version + ``` -Then sign in to Azure through the CLI: + ### [Cmd](#tab/cmd) -```azurecli -az login -``` + ```cmd + py -3 --version + ``` -This command opens a browser to gather your credentials. When the command finishes, it shows JSON output containing information about your subscriptions. + -Once signed in, you can run Azure commands with the Azure CLI to work with resources in your subscription. +1. Install the [Azure CLI](/cli/azure/install-azure-cli) 2.30.0 or higher. To check if your Azure CLI version is 2.30.0 or higher, run the `az --version` command. If you need to upgrade, run `az upgrade` (requires version 2.30.0+). -Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp). +1. Sign in to Azure using the CLI with `az login`. This command opens a browser to gather your credentials. When the command finishes, it shows JSON output containing information about your subscriptions. Once signed in, you can run Azure commands with the Azure CLI to work with resources in your subscription. 
## Clone or download the sample app -# [Git clone](#tab/clone) +### [Git clone](#tab/clone) Clone the sample repository: Clone the sample repository: git clone https://github.com/Azure-Samples/serviceconnector-webapp-postgresql-django.git ``` -Then navigate into that folder: +Navigate into the following folder: ```terminal cd serviceconnector-webapp-postgresql-django cd serviceconnector-webapp-postgresql-django ::: zone pivot="postgres-flexible-server" -For Flexible server, use the flexible-server branch of the sample, which contains a few necessary changes, such as how the database server URL is set and adding `'OPTIONS': {'sslmode': 'require'}` to the Django database configuration as required by Azure PostgreSQL Flexible server. +Use the flexible-server branch of the sample, which contains a few necessary changes, such as how the database server URL is set and adding `'OPTIONS': {'sslmode': 'require'}` to the Django database configuration as required by Azure PostgreSQL Flexible server. ```terminal git checkout flexible-server git checkout flexible-server ::: zone-end -# [Download](#tab/download) +### [Download](#tab/download) Visit [https://github.com/Azure-Samples/djangoapp](https://github.com/Azure-Samples/djangoapp). ::: zone pivot="postgres-flexible-server"+ For Flexible server, select the branches control that says "master" and then select the **flexible-server** branch.+ ::: zone-end -Select **Code**, and then select **Download ZIP**. +Select **Code**, and then select **Download ZIP**. -Unpack the ZIP file into a folder named *djangoapp*. +Unpack the ZIP file into a folder named *djangoapp*. -Then open a terminal window in that *djangoapp* folder. +Open a terminal window in that *djangoapp* folder. The djangoapp sample contains the data-driven Django polls app you get by follow The sample is also modified to run in a production environment like App Service: -- Production settings are in the *azuresite/production.py* file. 
Development settings are in *azuresite/settings.py*.-- The app uses production settings when the `WEBSITE_HOSTNAME` environment variable is set. Azure App Service automatically sets this variable to the URL of the web app, such as `msdocs-django.azurewebsites.net`.+* Production settings are in the *azuresite/production.py* file. Development settings are in *azuresite/settings.py*. +* The app uses production settings when the `WEBSITE_HOSTNAME` environment variable is set. Azure App Service automatically sets this variable to the URL of the web app, such as `msdocs-django.azurewebsites.net`. The production settings are specific to configuring Django to run in any production environment and aren't particular to App Service. For more information, see the [Django deployment checklist](https://docs.djangoproject.com/en/3.1/howto/deployment/checklist/). Also see [Production settings for Django on Azure](../app-service/configure-language-python.md#production-settings-for-django-apps) for details on some of the changes. Having issues? [Let us know](https://aka.ms/DjangoCLITutorialHelp). ## Create Postgres database in Azure ::: zone pivot="postgres-single-server"+ <!-- > [!NOTE] > Before you create an Azure Database for PostgreSQL server, check which [compute generation](../postgresql/concepts-pricing-tiers.md#compute-generations-and-vcores) is available in your region. --> -Enable parameters caching with the Azure CLI so you don't need to provide those parameters with every command. (Cached values are saved in the *.azure* folder.) +1. Enable parameters caching with the Azure CLI so you don't need to provide those parameters with every command. (Cached values are saved in the *.azure* folder.) -```azurecli -az config param-persist on -``` + ```azurecli + az config param-persist on + ``` -Install the `db-up` extension for the Azure CLI: +1. 
Install the `db-up` extension for the Azure CLI: -```azurecli -az extension add --name db-up -``` + ```azurecli + az extension add --name db-up + ``` -If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment). + If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment). -Then create the Postgres database in Azure with the [`az postgres up`](/cli/azure/postgres#az-postgres-up) command: +1. Create the Postgres database in Azure with the [`az postgres up`](/cli/azure/postgres#az-postgres-up) command: -```azurecli -az postgres up --resource-group ServiceConnector-tutorial-rg --location eastus --sku-name B_Gen5_1 --server-name <postgres-server-name> --database-name pollsdb --admin-user <admin-username> --admin-password <admin-password> --ssl-enforcement Enabled -``` + ```azurecli + az postgres up --resource-group ServiceConnector-tutorial-rg --location eastus --sku-name B_Gen5_1 --server-name <postgres-server-name> --database-name pollsdb --admin-user <admin-username> --admin-password <admin-password> --ssl-enforcement Enabled + ``` ++ Replace the following placeholder texts with your own data: -Replace the following placeholder texts with your own data: -- **Replace** *`<postgres-server-name>`* with a name that's **unique across all Azure** (the server endpoint becomes `https://<postgres-server-name>.postgres.database.azure.com`). A good pattern is to use a combination of your company name and another unique value.-- For *`<admin-username>`* and *`<admin-password>`*, specify credentials to create an administrator user for this Postgres server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. 
The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, *!*, *#*, *%*). The password can't contain a username.-- Don't use the `$` character in the username or password. You'll later create environment variables with these values where the `$` character has special meaning within the Linux container used to run Python apps.-- The `*B_Gen5_1*` (Basic, Gen5, 1 core) [pricing tier](../postgresql/concepts-pricing-tiers.md) used here is the least expensive. For production databases, omit the `--sku-name` argument to use the GP_Gen5_2 (General Purpose, Gen 5, 2 cores) tier instead.+ * **Replace** *`<postgres-server-name>`* with a name that's **unique across all Azure** (the server endpoint becomes `https://<postgres-server-name>.postgres.database.azure.com`). A good pattern is to use a combination of your company name and another unique value. -This command performs the following actions, which may take a few minutes: + * For *`<admin-username>`* and *`<admin-password>`*, specify credentials to create an administrator user for this Postgres server. The admin username can't be *azure_superuser*, *azure_pg_admin*, *admin*, *administrator*, *root*, *guest*, or *public*. It can't start with *pg_*. The password must contain **8 to 128 characters** from three of the following categories: English uppercase letters, English lowercase letters, numbers (0 through 9), and non-alphanumeric characters (for example, *!*, *#*, *%*). The password can't contain a username. + * Don't use the `$` character in the username or password. You'll later create environment variables with these values where the `$` character has special meaning within the Linux container used to run Python apps. + * The `*B_Gen5_1*` (Basic, Gen5, 1 core) [pricing tier](../postgresql/concepts-pricing-tiers.md) used here is the least expensive. 
For production databases, omit the `--sku-name` argument to use the GP_Gen5_2 (General Purpose, Gen 5, 2 cores) tier instead. -- Create a [resource group](../azure-resource-manager/management/overview.md#terminology) called `ServiceConnector-tutorial-rg`, if it doesn't already exist.-- Create a Postgres server named by the `--server-name` argument.-- Create an administrator account using the `--admin-user` and `--admin-password` arguments. You can omit these arguments to allow the command to generate unique credentials for you.-- Create a `pollsdb` database as named by the `--database-name` argument.-- Enable access from your local IP address.-- Enable access from Azure services.-- Create a database user with access to the `pollsdb` database.+ This command performs the following actions, which may take a few minutes: -You can do all the steps separately with other `az postgres` and `psql` commands, but `az postgres up` does all the steps together. + * Create a [resource group](../azure-resource-manager/management/overview.md#terminology) called `ServiceConnector-tutorial-rg`, if it doesn't already exist. + * Create a Postgres server named by the `--server-name` argument. + * Create an administrator account using the `--admin-user` and `--admin-password` arguments. You can omit these arguments to allow the command to generate unique credentials for you. + * Create a `pollsdb` database as named by the `--database-name` argument. + * Enable access from your local IP address. + * Enable access from Azure services. + * Create a database user with access to the `pollsdb` database. -When the command completes, it outputs a JSON object that contains different connection strings for the database along with the server URL, a generated user name (such as "joyfulKoala@msdocs-djangodb-12345"), and a GUID password. + You can do all the steps separately with other `az postgres` and `psql` commands, but `az postgres up` does all the steps together. 
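The credential rules quoted above can be sanity-checked locally before running `az postgres up`. The sketch below is illustrative only and covers just the length, character-category, username, and `$` rules (not the reserved-name list); the service performs the authoritative validation:

```python
def valid_admin_credentials(username, password):
    """Illustrative pre-check of the admin credential rules described
    above; the Azure service performs the authoritative validation."""
    # Avoid '$' in either value: it has special meaning in the Linux
    # container used to run Python apps.
    if '$' in username or '$' in password:
        return False
    # The password must contain 8 to 128 characters.
    if not 8 <= len(password) <= 128:
        return False
    # The password can't contain the username.
    if username and username.lower() in password.lower():
        return False
    # Characters from at least three of the four categories are required.
    categories = [
        any(c.isupper() for c in password),      # English uppercase
        any(c.islower() for c in password),      # English lowercase
        any(c.isdigit() for c in password),      # numbers 0 through 9
        any(not c.isalnum() for c in password),  # non-alphanumeric
    ]
    return sum(categories) >= 3
```

For example, `valid_admin_credentials('joyfulKoala', 'Abcdef1!')` passes all four checks, while a password built only from lowercase letters and digits fails the category rule.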
-> [!IMPORTANT] -> Copy the user name and password to a temporary text file as you will need them later in this tutorial. + When the command completes, it outputs a JSON object that contains different connection strings for the database along with the server URL, a generated user name (such as "joyfulKoala@msdocs-djangodb-12345"), and a GUID password. -<!-- not all locations support az postgres up --> -> [!TIP] -> `-l <location-name>` can be set to any [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). You can get the regions available to your subscription with the [`az account list-locations`](/cli/azure/account#az-account-list-locations) command. For production apps, put your database and your app in the same location. + > [!IMPORTANT] + > Copy the user name and password to a temporary text file as you will need them later in this tutorial. ++ <!-- not all locations support az postgres up --> + > [!TIP] + > `-l <location-name>` can be set to any [Azure regions](https://azure.microsoft.com/global-infrastructure/regions/). You can get the regions available to your subscription with the [`az account list-locations`](/cli/azure/account#az-account-list-locations) command. For production apps, put your database and your app in the same location. ::: zone-end When the command completes, it outputs a JSON object that contains different con ```azurecli az postgres flexible-server create --sku-name Standard_B1ms --public-access all ```- + If the `az` command isn't recognized, be sure you have the Azure CLI installed as described in [Set up your initial environment](#set-up-your-initial-environment).- + The [az postgres flexible-server create](/cli/azure/postgres/flexible-server#az-postgres-flexible-server-create) command performs the following actions, which take a few minutes:- - - Create a default resource group if there's not a cached name already. 
- - Create a PostgreSQL Flexible server: - - By default, the command uses a generated name like `server383813186`. You can specify your own name with the `--name` parameter. The name must be unique across all of Azure. - - The command uses the lowest-cost `Standard_B1ms` pricing tier. Omit the `--sku-name` argument to use the default `Standard_D2s_v3` tier. - - The command uses the resource group and location cached from the previous `az group create` command, which in this example is the resource group `ServiceConnector-tutorial-rg` in the `eastus` region. - - Create an administrator account with a username and password. You can specify these values directly with the `--admin-user` and `--admin-password` parameters. - - Create a database named `flexibleserverdb` by default. You can specify a database name with the `--database-name` parameter. - - Enables complete public access, which you can control using the `--public-access` parameter. - ++ * Create a default resource group if there's not a cached name already. + * Create a PostgreSQL Flexible server: + * By default, the command uses a generated name like `server383813186`. You can specify your own name with the `--name` parameter. The name must be unique across all of Azure. + * The command uses the lowest-cost `Standard_B1ms` pricing tier. Omit the `--sku-name` argument to use the default `Standard_D2s_v3` tier. + * The command uses the resource group and location cached from the previous `az group create` command, which in this example is the resource group `ServiceConnector-tutorial-rg` in the `eastus` region. + * Create an administrator account with a username and password. You can specify these values directly with the `--admin-user` and `--admin-password` parameters. + * Create a database named `flexibleserverdb` by default. You can specify a database name with the `--database-name` parameter. + * Enables complete public access, which you can control using the `--public-access` parameter. + 1. 
When the command completes, **copy the command's JSON output to a file** as you need values from the output later in this tutorial, specifically the host, username, and password, along with the database name.

::: zone-end

In this section, you create an app host in App Service, connect this app to the
::: zone pivot="postgres-single-server"

-In the terminal, make sure you're in the *djangoapp* repository folder that contains the app code.
+1. In the terminal, make sure you're in the *djangoapp* repository folder that contains the app code.

-Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command:
+1. Create an App Service app (the host process) with the [`az webapp up`](/cli/azure/webapp#az-webapp-up) command:

-```azurecli
-az webapp up --resource-group ServiceConnector-tutorial-rg --location eastus --plan ServiceConnector-tutorial-plan --sku B1 --name <app-name>
-```
-<!-- without --sku creates PremiumV2 plan -->
+ ```azurecli
+ az webapp up --resource-group ServiceConnector-tutorial-rg --location eastus --plan ServiceConnector-tutorial-plan --sku B1 --name <app-name>
+ ```
+ <!-- without --sku creates PremiumV2 plan -->
++ * For the `--location` argument, make sure you use a location that [Service Connector supports](concept-region-support.md).
+ * **Replace** *\<app-name>* with a unique name across all Azure (the server endpoint is `https://<app-name>.azurewebsites.net`). Allowed characters for *\<app-name>* are `A`-`Z`, `0`-`9`, and `-`. A good pattern is to use a combination of your company name and an app identifier.

-- For the `--location` argument, make sure you use the location that [Service Connector supports](concept-region-support.md).-- **Replace** *\<app-name>* with a unique name across all Azure (the server endpoint is `https://<app-name>.azurewebsites.net`). Allowed characters for *\<app-name>* are `A`-`Z`, `0`-`9`, and `-`.
A good pattern is to use a combination of your company name and an app identifier.+ This command performs the following actions, which may take a few minutes: -This command performs the following actions, which may take a few minutes: + <!- + <!-- No it doesn't. az webapp up doesn't respect --resource-group --> -<!- -<!-- No it doesn't. az webapp up doesn't respect --resource-group --> -- Create the [resource group](../azure-resource-manager/management/overview.md#terminology) if it doesn't already exist. (In this command you use the same resource group in which you created the database earlier.)-- Create the [App Service plan](../app-service/overview-hosting-plans.md) *DjangoPostgres-tutorial-plan* in the Basic pricing tier (B1), if it doesn't exist. `--plan` and `--sku` are optional.-- Create the App Service app if it doesn't exist.-- Enable default logging for the app, if not already enabled.-- Upload the repository using ZIP deployment with build automation enabled.-- Cache common parameters, such as the name of the resource group and App Service plan, into the file *.azure/config*. As a result, you don't need to specify all the same parameter with later commands. For example, to redeploy the app after making changes, you can just run `az webapp up` again without any parameters. Commands that come from CLI extensions, such as `az postgres up`, however, do not at present use the cache, which is why you needed to specify the resource group and location here with the initial use of `az webapp up`.+ * Create the [resource group](../azure-resource-manager/management/overview.md#terminology) if it doesn't already exist. (In this command you use the same resource group in which you created the database earlier.) + * Create the [App Service plan](../app-service/overview-hosting-plans.md) *DjangoPostgres-tutorial-plan* in the Basic pricing tier (B1), if it doesn't exist. `--plan` and `--sku` are optional. + * Create the App Service app if it doesn't exist. 
+ * Enable default logging for the app, if not already enabled. + * Upload the repository using ZIP deployment with build automation enabled. + * Cache common parameters, such as the name of the resource group and App Service plan, into the file *.azure/config*. As a result, you don't need to specify all the same parameter with later commands. For example, to redeploy the app after making changes, you can just run `az webapp up` again without any parameters. Commands that come from CLI extensions, such as `az postgres up`, however, do not at present use the cache, which is why you needed to specify the resource group and location here with the initial use of `az webapp up`. ::: zone-end This command performs the following actions, which may take a few minutes: az webapp up --name <app-name> --sku B1 ``` <!-- without --sku creates PremiumV2 plan -->- + This command performs the following actions, which may take a few minutes, using resource group and location cached from the previous `az group create` command (the group `Python-Django-PGFlex-rg` in the `eastus` region in this example).- + <!- <!-- No it doesn't. az webapp up doesn't respect --resource-group -->- - Create an [App Service plan](../app-service/overview-hosting-plans.md) in the Basic pricing tier (B1). You can omit `--sku` to use default values. - - Create the App Service app. - - Enable default logging for the app. - - Upload the repository using ZIP deployment with build automation enabled. + * Create an [App Service plan](../app-service/overview-hosting-plans.md) in the Basic pricing tier (B1). You can omit `--sku` to use default values. + * Create the App Service app. + * Enable default logging for the app. + * Upload the repository using ZIP deployment with build automation enabled. 
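As described earlier, the deployed sample switches to its production settings when App Service sets the `WEBSITE_HOSTNAME` environment variable. A minimal illustrative sketch of such a switch (module names follow the sample's layout; this isn't the sample's exact code):

```python
import os

def settings_module(environ=None):
    """Pick a Django settings module: production when running on App
    Service (which sets WEBSITE_HOSTNAME automatically), development
    otherwise. Illustrative sketch only."""
    if environ is None:
        environ = os.environ
    if 'WEBSITE_HOSTNAME' in environ:
        return 'azuresite.production'
    return 'azuresite.settings'

print(settings_module({'WEBSITE_HOSTNAME': 'msdocs-django.azurewebsites.net'}))
# azuresite.production
```

Keying the switch off an App Service-provided variable means no code or configuration change is needed between local runs and deployed runs.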
::: zone-end

Upon successful deployment, the command generates JSON output like the following example:

-![Example az webapp up command output](../app-service/media/tutorial-python-postgresql-app/az-webapp-up-output.png)

Having issues? Refer first to the [Troubleshooting guide](../app-service/configure-language-python.md#troubleshooting); otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).

With the code now deployed to App Service, the next step is to connect the app to the database.
The app code expects to find database information in four environment variables named `AZURE_POSTGRESQL_HOST`, `AZURE_POSTGRESQL_NAME`, `AZURE_POSTGRESQL_USER`, and `AZURE_POSTGRESQL_PASS`.

-To set environment variables in App Service, create "app settings" with the following [az connection create]() command.
+To set environment variables in App Service, create "app settings" with the following `az connection create` command.

::: zone pivot="postgres-single-server"

az webapp connection create postgres --client-type django

The resource group, app name, and database name are drawn from the cached values. You need to provide the admin password of your Postgres database during the execution of this command.

-- The command creates settings named "AZURE_POSTGRESQL_HOST", "AZURE_POSTGRESQL_NAME", "AZURE_POSTGRESQL_USER", "AZURE_POSTGRESQL_PASS" as expected by the app code.-- If you forgot your admin credentials, the command would guide you to reset it.-
+* The command creates settings named "AZURE_POSTGRESQL_HOST", "AZURE_POSTGRESQL_NAME", "AZURE_POSTGRESQL_USER", "AZURE_POSTGRESQL_PASS" as expected by the app code.
+* If you forget your admin credentials, the command guides you through resetting them.

::: zone-end

::: zone pivot="postgres-flexible-server"+

```azurecli
az webapp connection create postgres-flexible --client-type django
```

The resource group, app name, and database name are drawn from the cached values. You need to provide the admin password of your Postgres database during the execution of this command.
-
-- The command creates settings named "AZURE_POSTGRESQL_HOST", "AZURE_POSTGRESQL_NAME", "AZURE_POSTGRESQL_USER", "AZURE_POSTGRESQL_PASS" as expected by the app code.
-- If you forgot your admin credentials, the command would guide you to reset it.
+* The command creates settings named "AZURE_POSTGRESQL_HOST", "AZURE_POSTGRESQL_NAME", "AZURE_POSTGRESQL_USER", "AZURE_POSTGRESQL_PASS" as expected by the app code.
+* If you forget your admin credentials, the command guides you through resetting them.

::: zone-end

> [!NOTE]-> If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", please run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider and run the connection command again.
+> If you see the error message "The subscription is not registered to use Microsoft.ServiceLinker", run `az provider register -n Microsoft.ServiceLinker` to register the Service Connector resource provider, and then run the connection command again.

In your Python code, you access these settings as environment variables with statements like `os.environ.get('AZURE_POSTGRESQL_HOST')`. For more information, see [Access environment variables](../app-service/configure-language-python.md#access-environment-variables).

Django database migrations ensure that the schema in the PostgreSQL on Azure dat
az webapp ssh
```

--1. In the SSH session, run the following commands (you can paste commands using **Ctrl**+**Shift**+**V**):
+1. In the SSH session, run the following commands:

 ```bash
 # Run database migrations

Django database migrations ensure that the schema in the PostgreSQL on Azure dat
1. If you see an error that the database is locked, make sure that you ran the `az webapp settings` command in the previous section. Without those settings, the migrate command can't communicate with the database, resulting in the error.

Having issues?
Refer first to the [Troubleshooting guide](../app-service/configure-language-python.md#troubleshooting), otherwise, [let us know](https://aka.ms/DjangoCLITutorialHelp).- + ### Create a poll question in the app 1. Open the app website. The app should display the message "Polls app" and "No polls are available" because there are no specific polls yet in the database. Having issues? Refer first to the [Troubleshooting guide](../app-service/configu > [!NOTE] > App Service detects a Django project by looking for a *wsgi.py* file in each subfolder, which `manage.py startproject` creates by default. When App Service finds that file, it loads the Django web app. For more information, see [Configure built-in Python image](../app-service/configure-language-python.md). - ## Clean up resources -If you'd like to keep the app or continue to additional tutorials, skip ahead to [Next steps](#next-steps). Otherwise, to avoid incurring ongoing charges you can delete the resource group created for this tutorial: +If you'd like to keep the app or continue to additional tutorials, skip ahead to [Next steps](#next-steps). Otherwise, to avoid incurring ongoing charges, delete the resource group created for this tutorial: ```azurecli az group delete --name ServiceConnector-tutorial-rg --no-wait |
service-connector | Tutorial Java Spring Confluent Kafka | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-java-spring-confluent-kafka.md | Title: 'Tutorial: Deploy a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Apps' description: Create a Spring Boot app connected to Apache Kafka on Confluent Cloud with Service Connector in Azure Spring Apps. ms.devlang: java-+ |
service-connector | Tutorial Python Functions Storage Blob As Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-functions-storage-blob-as-input.md | description: Learn how you can connect a Python function to a storage blob as in + Last updated 10/25/2023 |
service-connector | Tutorial Python Functions Storage Queue As Trigger | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-functions-storage-queue-as-trigger.md | description: Learn how you can connect a Python function to a storage queue as t + Last updated 10/25/2023 |
service-connector | Tutorial Python Functions Storage Table As Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-functions-storage-table-as-output.md | description: Learn how you can connect a Python function to a storage table as o + Last updated 11/14/2023 |
service-fabric | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/policy-reference.md | |
service-fabric | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/security-controls-policy.md | |
service-fabric | Service Fabric Reliable Actors Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-actors-diagnostics.md | Title: Actors diagnostics and monitoring + Title: Actors diagnostics and monitoring description: This article describes the diagnostics and performance monitoring features in the Service Fabric Reliable Actors runtime, including the events and performance counters emitted by it. The Reliable Actors runtime emits [EventSource](/dotnet/api/system.diagnostics.t ## EventSource events The EventSource provider name for the Reliable Actors runtime is "Microsoft-ServiceFabric-Actors". Events from this event source appear in the [Diagnostics Events](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md#view-service-fabric-system-events-in-visual-studio) window when the actor application is being [debugged in Visual Studio](service-fabric-debugging-your-application.md). -Examples of tools and technologies that help in collecting and/or viewing EventSource events are [PerfView](https://www.microsoft.com/download/details.aspx?id=28567), [Azure Diagnostics](../cloud-services/cloud-services-dotnet-diagnostics.md), [Semantic Logging](/previous-versions/msp-n-p/dn774980(v=pandp.10)), and the [Microsoft TraceEvent Library](https://www.nuget.org/packages/Microsoft.Diagnostics.Tracing.TraceEvent). +Examples of tools and technologies that help in collecting and/or viewing EventSource events are [PerfView](https://github.com/Microsoft/perfview/releases), [Azure Diagnostics](../cloud-services/cloud-services-dotnet-diagnostics.md), [Semantic Logging](/previous-versions/msp-n-p/dn774980(v=pandp.10)), and the [Microsoft TraceEvent Library](https://www.nuget.org/packages/Microsoft.Diagnostics.Tracing.TraceEvent). ### Keywords All events that belong to the Reliable Actors EventSource are associated with one or more keywords. This enables filtering of events that are collected. 
The following keyword bits are defined. |
service-fabric | Service Fabric Reliable Services Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-reliable-services-diagnostics.md | Title: Azure Service Fabric Stateful Reliable Services diagnostics + Title: Azure Service Fabric Stateful Reliable Services diagnostics description: Diagnostic functionality for Stateful Reliable Services in Azure Service Fabric The Azure Service Fabric Stateful Reliable Services StatefulServiceBase class em The EventSource name for the Stateful Reliable Services StatefulServiceBase class is "Microsoft-ServiceFabric-Services." Events from this event source appear in the [Diagnostics Events](service-fabric-diagnostics-how-to-monitor-and-diagnose-services-locally.md#view-service-fabric-system-events-in-visual-studio) window when the service is being [debugged in Visual Studio](service-fabric-debugging-your-application.md). -Examples of tools and technologies that help in collecting and/or viewing EventSource events are [PerfView](https://www.microsoft.com/download/details.aspx?id=28567), +Examples of tools and technologies that help in collecting and/or viewing EventSource events are [PerfView](https://github.com/Microsoft/perfview/releases), [Azure Diagnostics](../cloud-services/cloud-services-dotnet-diagnostics.md), and the [Microsoft TraceEvent Library](https://www.nuget.org/packages/Microsoft.Diagnostics.Tracing.TraceEvent). |
site-recovery | Azure To Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-support-matrix.md | Title: Support matrix for Azure VM disaster recovery with Azure Site Recovery description: Summarizes support for Azure VMs disaster recovery to a secondary region with Azure Site Recovery. Previously updated : 09/28/2023 Last updated : 11/15/2023 Windows Server 2016 | Supported Server Core, Server with Desktop Experience. Windows Server 2012 R2 | Supported. Windows Server 2012 | Supported. Windows Server 2008 R2 with SP1/SP2 | Supported.<br/><br/> From version [9.30](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery) of the Mobility service extension for Azure VMs, you need to install a Windows [servicing stack update (SSU)](https://support.microsoft.com/help/4490628) and [SHA-2 update](https://support.microsoft.com/help/4474419) on machines running Windows Server 2008 R2 SP1/SP2. SHA-1 isn't supported from September 2019, and if SHA-2 code signing isn't enabled the agent extension won't install/upgrade as expected. Learn more about [SHA-2 upgrade and requirements](https://aka.ms/SHA-2KB).+Windows 11 (x64) | Supported (From Mobility Agent version 9.56 onwards). Windows 10 (x64) | Supported. Windows 8.1 (x64) | Supported. Windows 8 (x64) | Supported. Debian 9 | Includes support for 9.1 to 9.13. Debian 9.0 isn't supported. 
[Suppor Debian 10 | [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) Debian 11 | [Supported kernel versions](#supported-debian-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 12 | SP1, SP2, SP3, SP4, SP5 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-12-kernel-versions-for-azure-virtual-machines)-SUSE Linux Enterprise Server 15 | 15, SP1, SP2, SP3, SP4 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines) +SUSE Linux Enterprise Server 15 | 15, SP1, SP2, SP3, SP4, SP5 [(Supported kernel versions)](#supported-suse-linux-enterprise-server-15-kernel-versions-for-azure-virtual-machines) SUSE Linux Enterprise Server 11 | SP3<br/><br/> Upgrade of replicating machines from SP3 to SP4 isn't supported. If a replicated machine has been upgraded, you need to disable replication and re-enable replication after the upgrade. SUSE Linux Enterprise Server 11 | SP4 Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/update-rollup-55-for-azure-site-recovery-kb5003408-b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8) (running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4, 5, and 6 (UEK3, UEK4, UEK5, UEK6), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7 <br> **Note:** Support for Oracle Linux 9.1 is removed from support 
matrix as issues were observed while using Azure Site Recovery with Oracle Linux 9.1. <br/><br/>8.1 (running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/)). +Rocky Linux | [See supported versions](#supported-rocky-linux-kernel-versions-for-azure-virtual-machines). > [!NOTE] > For Linux versions, Azure Site Recovery doesn't support custom OS kernels. Only the stock kernels that are part of the distribution minor version release/update are supported. Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, **Release** | **Mobility service version** | **Kernel version** | | | |+14.04 LTS | [9.56]() | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new 14.04 LTS kernels supported in this release. | 14.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 14.04 LTS kernels supported in this release. |-14.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 14.04 LTS kernels supported in this release. | |||+16.04 LTS | [9.56]() | No new 16.04 LTS kernels supported in this release. 
| 16.04 LTS | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new 16.04 LTS kernels supported in this release. |-16.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new 14.04 LTS kernels supported in this release. | +16.04 LTS | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new 16.04 LTS kernels supported in this release. | 16.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new 16.04 LTS kernels supported in this release. |-16.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new 16.04 LTS kernels supported in this release. | |||+18.04 LTS | [9.56]() | No new 18.04 LTS kernels supported in this release. 
| 18.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.15.0-1166-azure <br> 4.15.0-1167-azure <br> 4.15.0-212-generic <br> 4.15.0-213-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic | 18.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 5.4.0-1107-azure <br> 5.4.0-147-generic <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 4.15.0-212-generic <br> 4.15.0-1166-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure | 18.04 LTS |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.4.0-137-generic <br> 5.4.0-1101-azure <br> 4.15.0-1161-azure <br> 4.15.0-204-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 4.15.0-206-generic <br> 5.4.0-1104-azure <br> 5.4.0-144-generic <br> 4.15.0-1162-azure | 18.04 LTS |[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)| 4.15.0-196-generic <br> 4.15.0-1157-azure <br> 5.4.0-1098-azure <br> 4.15.0-1158-azure <br> 4.15.0-1159-azure <br> 4.15.0-201-generic <br> 4.15.0-202-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic |-18.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |4.15.0-1151-azure </br> 4.15.0-193-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic</br>4.15.0-1153-azure </br>4.15.0-194-generic </br>5.4.0-1094-azure </br>5.4.0-128-generic </br>5.4.0-131-generic | |||+20.04 LTS | 
[9.56]() | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic | 20.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-152-generic <br> 5.4.0-153-generic <br> 5.4.0-155-generic <br> 5.4.0-1112-azure <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-79-generic <br> 5.4.0-156-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.4.0-1116-azure <br> 5.4.0-163-generic <br> 5.15.0-1043-azure <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic | 20.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-146-generic <br> 5.4.0-147-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.4.0-1107-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | 20.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.4.0-1101-azure <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.4.0-1103-azure <br> 5.4.0-139-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic <br> 5.4.0-1104-azure <br> 
5.4.0-144-generic | 20.04 LTS | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 5.4.0-1095-azure <br> 5.15.0-1023-azure <br> 5.4.0-1098-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.4.0-1100-azure <br> 5.4.0-136-generic <br> 5.4.0-137-generic |-20.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) |5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br> 5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-22-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-40-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-51-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1091-azure </br> 5.4.0-126-generic </br> 5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic | |||+22.04 LTS | [9.56]() | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic | 22.04 LTS 
|[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.15.0-78-generic <br> 5.15.0-1042-azure <br> 5.15.0-1044-azure <br> 5.15.0-79-generic <br> 5.15.0-1047-azure <br> 5.15.0-84-generic <br> 5.15.0-1045-azure <br> 5.15.0-1046-azure <br> 5.15.0-82-generic <br> 5.15.0-83-generic | 22.04 LTS |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-69-generic <br> 5.15.0-70-generic <br> 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-1039-azure | 22.04 LTS | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.15.0-1003-azure <br> 5.15.0-1005-azure <br> 5.15.0-1007-azure <br> 5.15.0-1008-azure <br> 5.15.0-1010-azure <br> 5.15.0-1012-azure <br> 5.15.0-1013-azure <br> 5.15.0-1014-azure <br> 5.15.0-1017-azure <br> 5.15.0-1019-azure <br> 5.15.0-1020-azure <br> 5.15.0-1021-azure <br> 5.15.0-1022-azure <br> 5.15.0-1023-azure <br> 5.15.0-1024-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-25-generic <br> 5.15.0-27-generic <br> 5.15.0-30-generic <br> 5.15.0-33-generic <br> 5.15.0-35-generic <br> 5.15.0-37-generic <br> 5.15.0-39-generic <br> 5.15.0-40-generic <br> 5.15.0-41-generic <br> 5.15.0-43-generic <br> 5.15.0-46-generic <br> 5.15.0-47-generic <br> 5.15.0-48-generic <br> 5.15.0-50-generic <br> 5.15.0-52-generic <br> 5.15.0-53-generic <br> 5.15.0-56-generic <br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.15.0-1033-azure <br> 5.15.0-60-generic <br> 5.15.0-1034-azure <br> 5.15.0-67-generic | Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 
7.2, 7.3, 7.4, 7.5, **Release** | **Mobility service version** | **Kernel version** | | | |+Debian 7 | [9.56]()| No new Debian 7 kernels supported in this release. | Debian 7 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new Debian 7 kernels supported in this release. | Debian 7 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 7 kernels supported in this release. | Debian 7 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new Debian 7 kernels supported in this release. | Debian 7 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 7 kernels supported in this release. |-Debian 7 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 7 kernels supported in this release. | |||+Debian 8 | [9.56]()| No new Debian 8 kernels supported in this release. | Debian 8 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new Debian 8 kernels supported in this release. | Debian 8 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 8 kernels supported in this release. | Debian 8 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new Debian 8 kernels supported in this release. | Debian 8 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 8 kernels supported in this release. 
|-Debian 8 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 8 kernels supported in this release. | |||+Debian 9.1 | [9.56]()| No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| No new Debian 9.1 kernels supported in this release. | Debian 9.1 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 9.1 kernels supported in this release. |-Debian 9.1 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | No new Debian 9.1 kernels supported in this release. 
| |||+Debian 10 | [9.56]()| 5.10.0-0.deb10.26-amd64 <br> 5.10.0-0.deb10.26-cloud-amd64 | Debian 10 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 <br> 4.19.0-25-amd64 <br> 4.19.0-25-cloud-amd64 <br> 5.10.0-0.deb10.24-amd64 <br> 5.10.0-0.deb10.24-cloud-amd64 | Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64 <br> 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 | Debian 10 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.10.0-0.deb10.21-amd64 <br> 5.10.0-0.deb10.21-cloud-amd64 | Debian 10 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.19.0-23-amd64 <br> 4.19.0-23-cloud-amd64 <br> 5.10.0-0.deb10.20-amd64 <br> 5.10.0-0.deb10.20-cloud-amd64 |-Debian 10 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 | |||+Debian 11 | [9.56]()| 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 | Debian 11 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-24-amd64 <br> 5.10.0-24-cloud-amd64 <br> 5.10.0-25-amd64 <br> 5.10.0-25-cloud-amd64 | Debian 11 | 
[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-22-amd64 <br> 5.10.0-22-cloud-amd64 <br> 5.10.0-23-amd64 <br> 5.10.0-23-cloud-amd64 | Debian 11 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.10.0-21-amd64 </br> 5.10.0-21-cloud-amd64 | Debian 11 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azur **Release** | **Mobility service version** | **Kernel version** | | | |+SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.56]() | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.152-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.136-azure:5 <br> 4.12.14-16.139-azure:5 <br> 4.12.14-16.146-azure:5 <br> 4.12.14-16.149-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.130-azure:5 <br> 4.12.14-16.133-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. 
</br></br> 4.12.14-16.124-azure:5 <br> 4.12.14-16.127-azure:5 | SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.115-azure:5 <br> 4.12.14-16.120-azure:5 |-SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1,SP2,SP3,SP4,SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.112-azure | #### Supported SUSE Linux Enterprise Server 15 kernel versions for Azure virtual machines **Release** | **Mobility service version** | **Kernel version** | | | |+SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4, SP5) | [9.56]() | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.69-azure:4 <br> 5.14.21-150500.31-azure:5 <br> 5.14.21-150500.33.11-azure:5 <br> 5.14.21-150500.33.14-azure:5 <br> 5.14.21-150500.33.17-azure:5 <br> 5.14.21-150500.33.20-azure:5 <br> 5.14.21-150500.33.3-azure:5 <br> 5.14.21-150500.33.6-azure:5 | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. 
</br></br> 5.14.21-150400.14.52-azure:4 <br> 4.12.14-16.139-azure:5 <br> 5.14.21-150400.14.55-azure:4 <br> 5.14.21-150400.14.60-azure:4 <br> 5.14.21-150400.14.63-azure:4 <br> 5.14.21-150400.14.66-azure:4 | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.40-azure:4 <br> 5.14.21-150400.14.43-azure:4 <br> 5.14.21-150400.14.46-azure:4 <br> 5.14.21-150400.14.49-azure:4 | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.14.21-150400.14.31-azure:4 <br> 5.14.21-150400.14.34-azure:4 <br> 5.14.21-150400.14.37-azure:4 | SUSE Linux Enterprise Server 15 (SP1, SP2, SP3, SP4) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. 
</br></br> 5.14.21-150400.12-azure:4 <br> 5.14.21-150400.14.10-azure:4 <br> 5.14.21-150400.14.13-azure:4 <br> 5.14.21-150400.14.16-azure:4 <br> 5.14.21-150400.14.7-azure:4 <br> 5.3.18-150300.38.83-azure:3 <br> 5.14.21-150400.14.21-azure:4 <br> 5.14.21-150400.14.28-azure:4 <br> 5.3.18-150300.38.88-azure:3 |-SUSE Linux Enterprise Server 15 (SP1, SP2, SP3) | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br></br> 5.3.18-150300.38.80-azure | +++#### Supported Rocky Linux kernel versions for Azure virtual machines ++**Release** | **Mobility service version** | **Kernel version** | + | | | +Rocky Linux | [9.56]() | Rocky Linux 8.7 <br> Rocky Linux 9.0 <br> Rocky Linux 9.1 | > [!NOTE] > To support latest Linux kernels within 15 days of release, Azure Site Recovery rolls out hot fix patch on top of latest mobility agent version. This fix is rolled out in between two major version releases. To update to latest version of mobility agent (including hot fix patch) follow steps mentioned in [this article](service-updates-how-to.md#azure-vm-disaster-recovery-to-azure). This patch is currently rolled out for mobility agents used in Azure to Azure DR scenario. |
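The matrices above enumerate the exact kernel versions each Mobility service release supports, and the note stresses that only stock distribution kernels qualify. Before enabling replication, it can help to compare the running kernel against the relevant list. A minimal sketch, assuming a hand-copied sample of the Ubuntu 22.04 entries from the 9.56 row above (the list and matching logic are illustrative, not an official tool):

```shell
# Sample of supported kernels copied from the 22.04 LTS / 9.56 row above.
# In practice, paste the full list for your distro and agent version.
supported_kernels="5.15.0-1049-azure
5.15.0-1050-azure
5.15.0-1051-azure
5.15.0-86-generic
5.15.0-87-generic
5.15.0-88-generic"

# Allow the kernel to be passed as an argument for testing; default to uname -r.
running_kernel="${1:-$(uname -r)}"

# grep -x requires a whole-line match, so partial version strings don't pass.
if printf '%s\n' "$supported_kernels" | grep -qx "$running_kernel"; then
  echo "Kernel $running_kernel is in the supported list."
else
  echo "Kernel $running_kernel is NOT listed; check the support matrix before enabling replication."
fi
```

The whole-line match (`grep -x`) matters here: `5.15.0-88` alone should not be treated as supported just because `5.15.0-88-generic` is.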
site-recovery | Replication Appliance Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/replication-appliance-support-matrix.md | Ensure the following URLs are allowed and reachable from the Azure Site Recovery | **URL** | **Details** | | - | -|- | portal.azure.com | Navigate to the Azure portal. | + | `portal.azure.com` | Navigate to the Azure portal. | | `login.windows.net `<br>`graph.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. | |`*.microsoftonline.com `|Create Azure Active Directory (AD) apps for the appliance to communicate with Azure Site Recovery. |- |management.azure.com |Create Microsoft Entra apps for the appliance to communicate with the Azure Site Recovery service. | + |`management.azure.com` |Create Microsoft Entra apps for the appliance to communicate with the Azure Site Recovery service. | |`*.services.visualstudio.com `|Upload app logs used for internal monitoring. | |`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure that the machines that need to be replicated have access to this URL. |- |aka.ms |Allow access to "also known as" links. Used for Azure Site Recovery appliance updates. | - |download.microsoft.com/download |Allow downloads from Microsoft download. | + |`aka.ms` |Allow access to "also known as" links. Used for Azure Site Recovery appliance updates. | + |`download.microsoft.com/download` |Allow downloads from Microsoft download. | |`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. | |`*.discoverysrv.windowsazure.com `<br><br>`*.hypervrecoverymanager.windowsazure.com `<br><br> `*.backup.windowsazure.com ` |Connect to Azure Site Recovery micro-service URLs. |`*.blob.core.windows.net `|Upload data to Azure storage, which is used to create target disks. 
|+ |`*.backup.windowsazure.com `|Protection service URL – a microservice used by Azure Site Recovery for processing & creating replicated disks in Azure. | | `*.prod.migration.windowsazure.com `| To discover your on-premises estate. #### Allow URLs for government clouds |
site-recovery | Vmware Azure Architecture Modernized | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-azure-architecture-modernized.md | If you're using a URL-based firewall proxy to control outbound connectivity, all | **URL** | **Details** | | - | -|-| portal.azure.com | Navigate to the Azure portal. | -| `*.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. | +|`portal.azure.com` | Navigate to the Azure portal. | +|`*.windows.net `<br>`*.msftauth.net`<br>`*.msauth.net`<br>`*.microsoft.com`<br>`*.live.com `<br>`*.office.com ` | To sign-in to your Azure subscription. | |`*.microsoftonline.com `|Create Microsoft Entra apps for the appliance to communicate with Azure Site Recovery. |-|management.azure.com |Create Microsoft Entra apps for the appliance to communicate with the Azure Site Recovery service. | +|`management.azure.com` |Create Microsoft Entra apps for the appliance to communicate with the Azure Site Recovery service. | |`*.services.visualstudio.com `|Upload app logs used for internal monitoring. | |`*.vault.azure.net `|Manage secrets in the Azure Key Vault. Note: Ensure that machines to be replicated have access to this. |-|aka.ms |Allow access to "also known as" links. Used for Azure Site Recovery appliance updates. | -|download.microsoft.com/download |Allow downloads from Microsoft download. | +|`aka.ms` |Allow access to "also known as" links. Used for Azure Site Recovery appliance updates. | +|`download.microsoft.com/download` |Allow downloads from Microsoft download. | |`*.servicebus.windows.net `|Communication between the appliance and the Azure Site Recovery service. | |`*.discoverysrv.windowsazure.com `|Connect to Azure Site Recovery discovery service URL. 
|-|`*.hypervrecoverymanager.windowsazure.com `|Connect to Azure Site Recovery micro-service URLs | -|`*.blob.core.windows.net `|Upload data to Azure storage, which is used to create target disks | -|`*.backup.windowsazure.com `|Protection service URL – a microservice used by Azure Site Recovery for processing & creating replicated disks in Azure | -+|`*.hypervrecoverymanager.windowsazure.com `|Connect to Azure Site Recovery micro-service URLs. | +|`*.blob.core.windows.net `|Upload data to Azure storage, which is used to create target disks. | +|`*.backup.windowsazure.com `|Protection service URL – a microservice used by Azure Site Recovery for processing & creating replicated disks in Azure. | +|`*.prod.migration.windowsazure.com `| To discover your on-premises estate. | ## Replication process |
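The allowlists above mix exact hostnames (`portal.azure.com`, `aka.ms`) with wildcard patterns (`*.blob.core.windows.net`). When auditing URL-based firewall proxy rules, it can be useful to test whether a given hostname falls under the allowlist. A minimal sketch, using a small sample of the patterns above and POSIX shell `case` pattern matching (the helper name and logic are illustrative, not an official tool):

```shell
# Small sample of the allowlist entries from the tables above.
allowlist="portal.azure.com
management.azure.com
aka.ms
*.servicebus.windows.net
*.blob.core.windows.net
*.hypervrecoverymanager.windowsazure.com"

# Return 0 if the hostname matches any allowlist entry, treating entries
# as shell glob patterns (so *.blob.core.windows.net matches subdomains).
is_allowed() {
  host="$1"
  set -f   # disable filename globbing so * patterns aren't expanded against files
  for pattern in $allowlist; do
    case "$host" in
      $pattern) set +f; return 0 ;;
    esac
  done
  set +f
  return 1
}

is_allowed "myaccount.blob.core.windows.net" && echo "allowed" || echo "blocked"
# → allowed
```

Note the glob semantics: `*.blob.core.windows.net` matches any subdomain but not the bare domain, which mirrors how wildcard proxy rules typically behave.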
site-recovery | Vmware Physical Azure Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md | Title: Support matrix for VMware/physical disaster recovery in Azure Site Recove description: Summarizes support for disaster recovery of VMware VMs and physical server to Azure using Azure Site Recovery. Previously updated : 09/28/2023 Last updated : 11/21/2023 Linux Red Hat Enterprise | 5.2 to 5.11</b><br/> 6.1 to 6.10</b> </br> 7.0, 7.1, Linux: CentOS | 5.2 to 5.11</b><br/> 6.1 to 6.10</b><br/> </br> 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4528026/update-rollup-41-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4564347/), [7.9](https://support.microsoft.com/help/4578241/) </br> [8.0](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), 8.1, [8.2](https://support.microsoft.com/help/4570609), [8.3](https://support.microsoft.com/help/4597409/), [8.4](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-305.30.1.el8_4.x86_64 or later), [8.5](https://support.microsoft.com/topic/883a93a7-57df-4b26-a1c4-847efb34a9e8) (4.18.0-348.5.1.el8_5.x86_64 or later), 8.6, 8.7 <br/><br/> Few older kernels on servers running CentOS 5.2-5.11 & 6.1-6.10 don't have [Linux Integration Services (LIS) components](https://www.microsoft.com/download/details.aspx?id=55106) pre-installed. If in-built LIS components are missing, ensure to install the [components](https://www.microsoft.com/download/details.aspx?id=55106) before enabling replication for the machines to boot in Azure. 
Ubuntu | Ubuntu 14.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions)<br/>Ubuntu 16.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 18.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) </br> Ubuntu 20.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> Ubuntu 22.04* LTS server [(review supported kernel versions)](#ubuntu-kernel-versions) <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. </br> (*includes support for all 14.04.*x*, 16.04.*x*, 18.04.*x*, 20.04.*x* versions) Debian | Debian 7/Debian 8 (includes support for all 7. *x*, 8. *x* versions). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 9 (includes support for 9.1 to 9.13. Debian 9.0 isn't supported.). [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). <br/> Debian 10, Debian 11 [(Review supported kernel versions)](#debian-kernel-versions).-SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1, SP2, SP3, SP4 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. 
[Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 isn't supported. To upgrade, disable replication and re-enable after the upgrade. <br/>| -Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7 <br/><br/> **Note:** Support for Oracle Linux `9.0` and `9.1` is removed from support matrix, as issues were observed using Azure Site Recovery with Oracle Linux 9.0 and 9.1. <br><br> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>Running on all UEK kernels and RedHat kernel <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/) Support for rest of the RedHat kernels is available in [9.36](https://support.microsoft.com/help/4578241/). 
+SUSE Linux | SUSE Linux Enterprise Server 12 SP1, SP2, SP3, SP4, [SP5](https://support.microsoft.com/help/4570609) [(review supported kernel versions)](#suse-linux-enterprise-server-12-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 15, 15 SP1, SP2, SP3, SP4, SP5 [(review supported kernel versions)](#suse-linux-enterprise-server-15-supported-kernel-versions) <br/> SUSE Linux Enterprise Server 11 SP3. [Ensure to download latest mobility agent installer on the configuration server](vmware-physical-mobility-service-overview.md#download-latest-mobility-agent-installer-for-suse-11-sp3-suse-11-sp4-rhel-5-cent-os-5-debian-7-debian-8-debian-9-oracle-linux-6-and-ubuntu-1404-server). </br> SUSE Linux Enterprise Server 11 SP4 </br> **Note**: Upgrading replicated machines from SUSE Linux Enterprise Server 11 SP3 to SP4 isn't supported. To upgrade, disable replication and re-enable after the upgrade. <br/> Support for SUSE Linux Enterprise Server 15 SP5 is available for Modernized experience only.| +Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, 7.6, [7.7](https://support.microsoft.com/help/4531426/update-rollup-42-for-azure-site-recovery), [7.8](https://support.microsoft.com/help/4573888/), [7.9](https://support.microsoft.com/help/4597409/), [8.0](https://support.microsoft.com/help/4573888/), [8.1](https://support.microsoft.com/help/4573888/), [8.2](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.3](https://support.microsoft.com/topic/b19c8190-5f88-43ea-85b1-d9e0cc5ca7e8), [8.4](https://support.microsoft.com/topic/update-rollup-59-for-azure-site-recovery-kb5008707-66a65377-862b-4a4c-9882-fd74bdc7a81e), 8.5, 8.6, 8.7 <br/><br/> **Note:** Support for Oracle Linux `9.0` and `9.1` is removed from support matrix, as issues were observed using Azure Site Recovery with Oracle Linux 9.0 and 9.1. 
<br><br> Running the Red Hat compatible kernel or Unbreakable Enterprise Kernel Release 3, 4 & 5 (UEK3, UEK4, UEK5)<br/><br/>8.1<br/>All UEK kernels and Red Hat kernels <= 3.10.0-1062.* are supported in [9.35](https://support.microsoft.com/help/4573888/). Support for the rest of the Red Hat kernels is available in [9.36](https://support.microsoft.com/help/4578241/). +Rocky Linux | [See supported versions](#rocky-linux-server-supported-kernel-versions). > [!NOTE] >- For each of the Windows versions, Azure Site Recovery only supports [Long-Term Servicing Channel (LTSC)](/windows-server/get-started/servicing-channels-comparison#long-term-servicing-channel-ltsc) builds. [Semi-Annual Channel](/windows-server/get-started/servicing-channels-comparison#semi-annual-channel) releases are currently unsupported. Oracle Linux | 6.4, 6.5, 6.6, 6.7, 6.8, 6.9, 6.10, 7.0, 7.1, 7.2, 7.3, 7.4, 7.5, **Supported release** | **Mobility service version** | **Kernel version** | | | |-14.04 LTS | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a), [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure | +14.04 LTS | 
[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), [9.56]() <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.13.0-24-generic to 3.13.0-170-generic,<br/>3.16.0-25-generic to 3.16.0-77-generic,<br/>3.19.0-18-generic to 3.19.0-80-generic,<br/>4.2.0-18-generic to 4.2.0-42-generic,<br/>4.4.0-21-generic to 4.4.0-148-generic,<br/>4.15.0-1023-azure to 4.15.0-1045-azure | |||-16.04 LTS | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a), [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic | +16.04 LTS | 
[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.4.0-21-generic to 4.4.0-210-generic,<br/>4.8.0-34-generic to 4.8.0-58-generic,<br/>4.10.0-14-generic to 4.10.0-42-generic,<br/>4.11.0-13-generic, 4.11.0-14-generic,<br/>4.13.0-16-generic to 4.13.0-45-generic,<br/>4.15.0-13-generic to 4.15.0-142-generic<br/>4.11.0-1009-azure to 4.11.0-1016-azure,<br/>4.13.0-1005-azure to 4.13.0-1018-azure <br/>4.15.0-1012-azure to 4.15.0-1113-azure </br> 4.15.0-101-generic to 4.15.0-107-generic | |||+18.04 LTS | [9.56]() <br> **Note:** Support for 9.56 is only available for Modernized experience.| No new Ubuntu 18.04 kernels supported in this release| 18.04 LTS |[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.15.0-1163-azure <br> 4.15.0-1164-azure <br> 4.15.0-1165-azure <br> 4.15.0-1166-azure <br> 4.15.0-1167-azure <br> 4.15.0-210-generic <br> 4.15.0-211-generic <br> 4.15.0-212-generic <br> 4.15.0-213-generic <br> 5.4.0-1107-azure <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-147-generic <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic | 18.04 LTS|[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 4.15.0-1161-azure <br> 4.15.0-1162-azure <br> 4.15.0-204-generic <br> 4.15.0-206-generic <br> 4.15.0-208-generic <br> 4.15.0-209-generic <br> 5.4.0-1101-azure <br> 5.4.0-1103-azure <br> 5.4.0-1104-azure <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 
5.4.0-139-generic <br> 5.4.0-144-generic <br> 5.4.0-146-generic | 18.04 LTS|[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 4.15.0-1157-azure </br> 4.15.0-1158-azure </br> 4.15.0-1159-azure </br> 4.15.0-197-generic </br> 4.15.0-200-generic </br> 4.15.0-201-generic </br> 4.15.0-202-generic <br> 5.4.0-1095-azure <br> 5.4.0-1098-azure <br> 5.4.0-1100-azure <br> 5.4.0-132-generic <br> 5.4.0-135-generic <br> 5.4.0-136-generic <br> 5.4.0-137-generic | 18.04 LTS | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.15.0-1153-azure </br> 4.15.0-194-generic </br> 4.15.0-196-generic </br>5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic|-18.04 LTS |[9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a)|4.15.0-1149-azure </br> 4.15.0-1150-azure </br> 4.15.0-1151-azure </br>4.15.0-191-generic </br> 4.15.0-192-generic </br> 4.15.0-193-generic </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-1091-azure </br> 5.4.0-124-generic </br> 5.4.0-125-generic </br> 5.4.0-126-generic| |||+20.04 LTS |[9.56]() <br> **Note**: Support for Ubuntu 20.04 is available for Modernized experience only and not available for Classic experience yet. 
| 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic <br> 5.4.0-1117-azure <br> 5.4.0-1118-azure <br> 5.4.0-1119-azure <br> 5.4.0-164-generic <br> 5.4.0-165-generic <br> 5.4.0-166-generic | 20.04 LTS|[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-70-generic <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic <br> 5.4.0-1107-azure <br> 5.4.0-1108-azure <br> 5.4.0-1109-azure <br> 5.4.0-1110-azure <br> 5.4.0-1111-azure <br> 5.4.0-148-generic <br> 5.4.0-149-generic <br> 5.4.0-150-generic <br> 5.4.0-152-generic <br> 5.4.0-153-generic | 20.04 LTS|[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1033-azure <br> 5.15.0-1034-azure <br> 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-60-generic <br> 5.15.0-67-generic <br> 5.15.0-69-generic <br> 5.4.0-1101-azure <br> 5.4.0-1103-azure <br> 5.4.0-1104-azure <br> 5.4.0-1105-azure <br> 5.4.0-1106-azure <br> 5.4.0-139-generic <br> 5.4.0-144-generic <br> 5.4.0-146-generic <br> 5.4.0-147-generic | 20.04 LTS|[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)| 5.15.0-1023-azure </br> 5.15.0-1029-azure </br> 5.15.0-1030-azure </br> 5.15.0-1031-azure </br> 5.15.0-53-generic </br> 5.15.0-56-generic </br> 5.15.0-57-generic <br> 5.15.0-58-generic <br> 5.4.0-1095-azure <br> 5.4.0-1098-azure <br> 5.4.0-1100-azure <br> 5.4.0-132-generic <br> 5.4.0-135-generic <br> 5.4.0-136-generic <br> 5.4.0-137-generic | 20.04 
LTS|[9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0)|5.15.0-1021-azure </br> 5.15.0-1022-azure </br> 5.15.0-50-generic </br> 5.15.0-52-generic </br> 5.4.0-1094-azure </br> 5.4.0-128-generic </br> 5.4.0-131-generic |-20.04 LTS|[9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a)|5.13.0-1009-azure </br> 5.13.0-1012-azure </br> 5.13.0-1013-azure </br> 5.13.0-1014-azure </br> 5.13.0-1017-azure </br> 5.13.0-1021-azure </br> 5.13.0-1022-azure </br> 5.13.0-1023-azure </br>5.13.0-1025-azure </br> 5.13.0-1028-azure </br> 5.13.0-1029-azure </br> 5.13.0-1031-azure </br> 5.13.0-21-generic </br> 5.13.0-23-generic </br> 5.13.0-25-generic </br> 5.13.0-27-generic </br> 5.13.0-28-generic </br> 5.13.0-30-generic </br> 5.13.0-35-generic </br> 5.13.0-37-generic </br> 5.13.0-39-generic </br> 5.13.0-41-generic </br> 5.13.0-44-generic </br> 5.13.0-48-generic </br> 5.13.0-51-generic </br> 5.13.0-52-generic </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1013-azure </br> 5.15.0-1014-azure </br> 5.15.0-1017-azure </br> 5.15.0-1019-azure </br> 5.15.0-1020-azure </br> 5.15.0-33-generic </br> 5.15.0-41-generic </br> 5.15.0-43-generic </br> 5.15.0-46-generic </br> 5.15.0-48-generic </br> 5.4.0-1089-azure </br> 5.4.0-1090-azure </br> 5.4.0-1091-azure </br> 5.4.0-124-generic </br> 5.4.0-125-generic </br> 5.4.0-126-generic | |||+22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.56]() | 5.15.0-1049-azure <br> 5.15.0-1050-azure <br> 5.15.0-1051-azure <br> 5.15.0-86-generic <br> 5.15.0-87-generic <br> 5.15.0-88-generic | 22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. 
|[9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.15.0-1037-azure <br> 5.15.0-1038-azure <br> 5.15.0-1039-azure <br> 5.15.0-1040-azure <br> 5.15.0-1041-azure <br> 5.15.0-71-generic <br> 5.15.0-72-generic <br> 5.15.0-73-generic <br> 5.15.0-75-generic <br> 5.15.0-76-generic | 22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.15.0-1033-azure <br> 5.15.0-1034-azure <br> 5.15.0-1035-azure <br> 5.15.0-1036-azure <br> 5.15.0-60-generic <br> 5.15.0-67-generic <br> 5.15.0-69-generic <br> 5.15.0-70-generic| 22.04 LTS <br> **Note**: Support for Ubuntu 22.04 is available for Modernized experience only and not available for Classic experience yet. |[9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5)|5.15.0-1003-azure </br> 5.15.0-1005-azure </br> 5.15.0-1007-azure </br> 5.15.0-1008-azure </br> 5.15.0-1010-azure </br> 5.15.0-1012-azure </br> 5.15.0-1013-azure <br> 5.15.0-1014-azure <br> 5.15.0-1017-azure <br> 5.15.0-1019-azure <br> 5.15.0-1020-azure <br> 5.15.0-1021-azure <br> 5.15.0-1022-azure <br> 5.15.0-1023-azure <br> 5.15.0-1024-azure <br> 5.15.0-1029-azure <br> 5.15.0-1030-azure <br> 5.15.0-1031-azure <br> 5.15.0-25-generic <br> 5.15.0-27-generic <br> 5.15.0-30-generic <br> 5.15.0-33-generic <br> 5.15.0-35-generic <br> 5.15.0-37-generic <br> 5.15.0-39-generic <br> 5.15.0-40-generic <br> 5.15.0-41-generic <br> 5.15.0-43-generic <br> 5.15.0-46-generic <br> 5.15.0-47-generic <br> 5.15.0-48-generic <br> 5.15.0-50-generic <br> 5.15.0-52-generic <br> 5.15.0-53-generic <br> 5.15.0-56-generic <br> 5.15.0-57-generic <br> 5.15.0-58-generic | ### Debian kernel versions - **Supported release** | **Mobility service 
version** | **Kernel version** | | | |-Debian 7 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a), <br> [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 | +Debian 7 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), 9.56 <br> **Note:** Support for 9.56 is only available for Modernized experience. 
| 3.2.0-4-amd64 to 3.2.0-6-amd64, 3.16.0-0.bpo.4-amd64 | |||-Debian 8 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a), <br> [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 | +Debian 8 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0), [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5), [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f), [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810), 9.56 <br> **Note:** Support for 9.56 is only available for Modernized experience. | 3.16.0-4-amd64 to 3.16.0-11-amd64, 4.9.0-0.bpo.4-amd64 to 4.9.0-0.bpo.12-amd64 | |||+Debian 9.1 | [9.56]() <br> **Note:** Support for 9.56 is only available for Modernized experience. 
| No new Debian 9.1 kernels supported in this release| Debian 9.1 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | No new Debian 9.1 kernels supported in this release| Debian 9.1 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| No new Debian 9.1 kernels supported in this release Debian 9.1 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | No new Debian 9.1 kernels supported in this release| Debian 9.1 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | No new Debian 9.1 kernels supported in this release|-Debian 9.1 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a),| No new Debian 9.1 kernels supported in this release| |||+Debian 10 | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. 
| 5.10.0-0.deb10.26-amd64 <br> 5.10.0-0.deb10.26-cloud-amd64 | Debian 10 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | 4.19.0-24-amd64 <br> 4.19.0-24-cloud-amd64 <br> 5.10.0-0.deb10.22-amd64 <br> 5.10.0-0.deb10.22-cloud-amd64 <br> 5.10.0-0.deb10.23-amd64 <br> 5.10.0-0.deb10.23-cloud-amd64 | Debian 10 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-0.bpo.3-amd64 <br> 5.10.0-0.bpo.3-cloud-amd64 <br> 5.10.0-0.bpo.4-amd64 <br> 5.10.0-0.bpo.4-cloud-amd64 <br> 5.10.0-0.bpo.5-amd64 <br> 5.10.0-0.bpo.5-cloud-amd64 <br> 5.10.0-0.deb10.21-amd64 <br> 5.10.0-0.deb10.21-cloud-amd64 | Debian 10 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 4.19.0-23-amd64 </br> 4.19.0-23-cloud-amd64 </br> 5.10.0-0.deb10.20-amd64 </br> 5.10.0-0.deb10.20-cloud-amd64 | Debian 10 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | 4.19.0-22-amd64 </br> 4.19.0-22-cloud-amd64 </br> 5.10.0-0.deb10.19-amd64 </br> 5.10.0-0.deb10.19-cloud-amd64 |-Debian 10 | [9.51](https://support.microsoft.com/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | 5.10.0-0.deb10.16-amd64 </br> 5.10.0-0.deb10.16-cloud-amd64 </br> 5.10.0-0.deb10.17-amd64 </br> 5.10.0-0.deb10.17-cloud-amd64 | |||+Debian 11 | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. 
| 5.10.0-26-amd64 <br> 5.10.0-26-cloud-amd64 | Debian 11 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810)| 5.10.0-22-amd64 <br> 5.10.0-22-cloud-amd64 <br> 5.10.0-23-amd64 <br> 5.10.0-23-cloud-amd64 | Debian 11 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f)| 5.10.0-21-amd64 <br> 5.10.0-21-cloud-amd64 | Debian 11 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | 5.10.0-20-amd64 </br> 5.10.0-20-cloud-amd64 | Debian 11 | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azur **Release** | **Mobility service version** | **Kernel version** | | | |-SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.130-azure:5 <br> 4.12.14-16.133-azure:5 <br> 4.12.14-16.136-azure:5 | -SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.124-azure:5 <br> 4.12.14-16.127-azure:5 | -SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.115-azure:5 <br> 4.12.14-16.120-azure:5 | -SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | 
[9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.112-azure:5 | -SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | All [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.106-azure:5 </br> 4.12.14-16.109-azure:5 | +SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> No new SUSE 12 kernels supported in this release. 
| +SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.130-azure:5 <br> 4.12.14-16.133-azure:5 <br> 4.12.14-16.136-azure:5 | +SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.124-azure:5 <br> 4.12.14-16.127-azure:5 | +SUSE Linux Enterprise Server 12, SP1, SP2, SP3, SP4 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.115-azure:5 <br> 4.12.14-16.120-azure:5 | +SUSE Linux Enterprise Server 12 (SP1, SP2, SP3, SP4, SP5) | [9.52](https://support.microsoft.com/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | All [stock SUSE 12 SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported. </br> 4.12.14-16.112-azure:5 | ### SUSE Linux Enterprise Server 15 supported kernel versions **Release** | **Mobility service version** | **Kernel version** | | | |+SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4, SP5 <br> **Note:** SUSE 15 SP5 is only supported for Modernized experience. | [9.56]() <br> **Note**: Support for 9.56 agent is available for Modernized experience only. 
| By default, all [stock SUSE 15, SP1, SP2, SP3, SP4, SP5 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 4.12.14-16.152-azure:5 <br> 5.14.21-150400.14.69-azure:4 <br> 5.14.21-150500.31-azure:5 <br> 5.14.21-150500.33.11-azure:5 <br> 5.14.21-150500.33.14-azure:5 <br> 5.14.21-150500.33.17-azure:5 <br> 5.14.21-150500.33.20-azure:5 <br> 5.14.21-150500.33.3-azure:5 <br> 5.14.21-150500.33.6-azure:5 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.55](https://support.microsoft.com/topic/update-rollup-68-for-azure-site-recovery-a81c2d22-792b-4cde-bae5-dc7df93a7810) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.14.21-150400.14.49-azure:4 <br> 5.14.21-150400.14.52-azure:4 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.54](https://support.microsoft.com/topic/update-rollup-67-for-azure-site-recovery-9fa97dbb-4539-4b6c-a0f8-c733875a119f) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.14.21-150400.14.31-azure:4 <br> 5.14.21-150400.14.34-azure:4 <br> 5.14.21-150400.14.37-azure:4 <br> 5.14.21-150400.14.43-azure:4 <br> 5.14.21-150400.14.46-azure:4 <br> 5.14.21-150400.14.40-azure:4 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.53](https://support.microsoft.com/topic/update-rollup-66-for-azure-site-recovery-kb5023601-c306c467-c896-4c9d-b236-73b21ca27ca5) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.14.21-150400.14.21-azure:4 <br> 5.14.21-150400.14.28-azure:4 <br> 5.3.18-150300.38.88-azure:3 | SUSE Linux Enterprise Server 15, SP1, SP2, SP3, SP4 | [9.52](https://support.microsoft.com/en-us/topic/update-rollup-65-for-azure-site-recovery-kb5021964-15db362f-faac-417d-ad71-c22424df43e0) | By default, all [stock SUSE 15, SP1, SP2, SP3, SP4 
kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.14.21-150400.12-azure:4 <br> 5.14.21-150400.14.10-azure:4 <br> 5.14.21-150400.14.13-azure:4 <br> 5.14.21-150400.14.16-azure:4 <br> 5.14.21-150400.14.7-azure:4 <br> 5.3.18-150300.38.80-azure:3 <br> 5.3.18-150300.38.83-azure:3 |-SUSE Linux Enterprise Server 15, SP1, SP2, SP3 | [9.51](https://support.microsoft.com/en-us/topic/update-rollup-64-for-azure-site-recovery-kb5020102-23db9799-102c-4378-9754-2f19f6c7858a) | By default, all [stock SUSE 15, SP1, SP2, SP3 kernels](https://www.suse.com/support/kb/doc/?id=000019587) are supported.</br> 5.3.18-150300.38.75-azure:3 | ++### Rocky Linux Server supported kernel versions ++**Release** | **Mobility service version** | **Kernel version** | + | | | +Rocky Linux <br> **Note**: Support for Rocky Linux is available for Modernized experience only. | [9.56]() | Rocky Linux 8.7 <br> Rocky Linux 9.0 <br> Rocky Linux 9.1 | + ## Linux file systems/guest storage |
spring-apps | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/policy-reference.md | Title: Built-in policy definitions for Azure Spring Apps description: Lists Azure Policy built-in policy definitions for Azure Spring Apps. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
spring-apps | Quickstart Deploy Restful Api App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-deploy-restful-api-app.md | |
spring-apps | Quickstart Fitness Store Azure Openai | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-fitness-store-azure-openai.md | |
spring-apps | Quickstart Sample App Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-introduction.md | |
spring-apps | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Spring Apps description: Lists Azure Policy Regulatory Compliance controls available for Azure Spring Apps. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
storage-mover | Storage Mover Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage-mover/storage-mover-create.md | description: Learn how to create a top-level Azure Storage Mover resource -+ Last updated 09/07/2022 |
storage | Immutable Policy Configure Version Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/immutable-policy-configure-version-scope.md | If the container has an existing container-level legal hold, then it can't be mi To migrate a container to support version-level immutability policies in the Azure portal, follow these steps: 1. Navigate to the desired container.-1. Select the **More** button on the right, then select **Access policy**. +1. In the context menu of the container, select **Access policy**. 1. Under **Immutable blob storage**, select **Add policy**. 1. For the **Policy type** field, choose *Time-based retention*, and specify the retention interval. 1. Select **Enable version-level immutability**. To configure a default version-level immutability policy for a storage account i To configure a default version-level immutability policy for a container in the Azure portal, follow these steps: 1. In the Azure portal, navigate to the **Containers** page, and locate the container to which you want to apply the policy.-2. Select the **More** button to the right of the container name, and choose **Access policy**. +2. In the context menu of the container, choose **Access policy**. 3. In the **Access policy** dialog, under the **Immutable blob storage** section, choose **Add policy**. 4. Select **Time-based retention policy** and specify the retention interval. 5. Choose whether to allow protected append writes. az storage container immutability-policy create \ To determine the scope of a time-based retention policy in the Azure portal, follow these steps: 1. Navigate to the desired container.-1. Select the **More** button on the right, then select **Access policy**. +1. In the context menu of the container, select **Access policy**. 1. Under **Immutable blob storage**, locate the **Scope** field. 
If the container is configured with a default version-level retention policy, then the scope is set to *Version*, as shown in the following image: :::image type="content" source="media/immutable-policy-configure-version-scope/version-scoped-retention-policy.png" alt-text="Screenshot showing default version-level retention policy configured for container"::: For more information on blob versioning, see [Blob versioning](versioning-overvi ### [Portal](#tab/azure-portal) -The Azure portal displays a list of blobs when you navigate to a container. Each blob displayed represents the current version of the blob. You can access a list of previous versions by selecting the **More** button for a blob and choosing **View previous versions**. +The Azure portal displays a list of blobs when you navigate to a container. Each blob displayed represents the current version of the blob. You can access a list of previous versions by opening the context menu of the blob and then choosing **View previous versions**. ### Configure a retention policy on the current version of a blob To configure a time-based retention policy on the current version of a blob, follow these steps: 1. Navigate to the container that contains the target blob.-1. Select the **More** button to the right of the blob name, and choose **Access policy**. If a time-based retention policy has already been configured for the previous version, it appears in the **Access policy** dialog. +1. In the context menu of the blob, choose **Access policy**. If a time-based retention policy has already been configured for the previous version, it appears in the **Access policy** dialog. 1. In the **Access policy** dialog, under the **Immutable blob versions** section, choose **Add policy**. 1. Select **Time-based retention policy** and specify the retention interval. 1. Select **OK** to apply the policy to the current version of the blob. To configure a time-based retention policy on a previous version of a blob, foll 1. 
Navigate to the container that contains the target blob. 1. Select the blob, then navigate to the **Versions** tab.-1. Locate the target version, then select the **More** button and choose **Access policy**. If a time-based retention policy has already been configured for the previous version, it appears in the **Access policy** dialog. +1. Locate the target version, then, in the context menu of the version, choose **Access policy**. If a time-based retention policy has already been configured for the previous version, it appears in the **Access policy** dialog. 1. In the **Access policy** dialog, under the **Immutable blob versions** section, choose **Add policy**. 1. Select **Time-based retention policy** and specify the retention interval. 1. Select **OK** to apply the policy to the current version of the blob. You can modify an unlocked time-based retention policy to shorten or lengthen th To modify an unlocked time-based retention policy in the Azure portal, follow these steps: -1. Locate the target container or version. Select the **More** button and choose **Access policy**. -1. Locate the existing unlocked immutability policy. Select the **More** button, then select **Edit** from the menu. +1. Locate the target container or version. In the context menu of the container or version, choose **Access policy**. +1. Locate the existing unlocked immutability policy. In the context menu, select **Edit**. :::image type="content" source="media/immutable-policy-configure-version-scope/edit-existing-version-policy.png" alt-text="Screenshot showing how to edit an existing version-level time-based retention policy in Azure portal"::: 1. Provide the new date and time for the policy expiration. -To delete the unlocked policy, select **Delete** from the **More** menu. +To delete the unlocked policy, select **Delete** from the context menu. ### [PowerShell](#tab/azure-powershell) After a policy is locked, you can't delete it. 
However, you can delete the blob To lock a policy in the Azure portal, follow these steps: -1. Locate the target container or version. Select the **More** button and choose **Access policy**. -1. Under the **Immutable blob versions** section, locate the existing unlocked policy. Select the **More** button, then select **Lock policy** from the menu. +1. Locate the target container or version. In the context menu of the container or version, choose **Access policy**. +1. Under the **Immutable blob versions** section, locate the existing unlocked policy. Select **Lock policy** from the context menu. 1. Confirm that you want to lock the policy. :::image type="content" source="media/immutable-policy-configure-version-scope/lock-policy-portal.png" alt-text="Screenshot showing how to lock a time-based retention policy in Azure portal"::: To configure a legal hold on a blob version, you must first enable version-level To configure a legal hold on a blob version with the Azure portal, follow these steps: -1. Locate the target version, which may be the current version or a previous version of a blob. Select the **More** button and choose **Access policy**. +1. Locate the target version, which may be the current version or a previous version of a blob. In the context menu of the target version, choose **Access policy**. 2. Under the **Immutable blob versions** section, select **Add policy**. The following image shows a current version of a blob with both a time-based ret :::image type="content" source="media/immutable-policy-configure-version-scope/configure-legal-hold-blob-version.png" alt-text="Screenshot showing legal hold configured for blob version"::: -To clear a legal hold, navigate to the **Access policy** dialog, select the **More** button, and choose **Delete**. +To clear a legal hold, navigate to the **Access policy** dialog, open the context menu, and choose **Delete**. #### [PowerShell](#tab/azure-powershell) |
storage | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/policy-reference.md | Title: Built-in policy definitions for Azure Storage description: Lists Azure Policy built-in policy definitions for Azure Storage. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
storage | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Storage description: Lists Azure Policy Regulatory Compliance controls available for Azure Storage. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
storage | Storage Powershell Independent Clouds | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-powershell-independent-clouds.md | |
storage | Elastic San Networking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-networking.md | After you have enabled the desired endpoints and granted access in your network - [Connect Azure Elastic SAN Preview volumes to an Azure Kubernetes Service cluster](elastic-san-connect-aks.md) - [Connect to Elastic SAN Preview volumes - Linux](elastic-san-connect-linux.md)-- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md)+- [Connect to Elastic SAN Preview volumes - Windows](elastic-san-connect-windows.md) |
storage | Elastic San Snapshots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/elastic-san/elastic-san-snapshots.md | Title: Backup Azure Elastic SAN Preview volumes description: Learn about snapshots for Azure Elastic SAN Preview, including how to create and use them. + Last updated 11/15/2023 Currently, you can only use the Azure portal to create Elastic SAN volumes from 1. Navigate to your SAN and select **volumes**. 1. Select **Create volume**. 1. For **Source type** select **Disk snapshot** and fill out the rest of the values.-1. Select **Create**. +1. Select **Create**. |
storage | Files Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md | Note: Azure File Sync is zone-redundant in all regions that [support zones](../. ### 2022 quarter 4 (October, November, December) #### Azure Active Directory (Azure AD) Kerberos authentication for hybrid identities on Azure Files is generally available-This [feature](storage-files-identity-auth-hybrid-identities-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2022 and expands it to support more use cases (SMB only). Hybrid identities, which are user identities created in Active Directory Domain Services (AD DS) and synced to Azure AD, can mount and access Azure file shares without the need for line-of-sight to an Active Directory domain controller. While the initial support is limited to hybrid identities, it's a significant milestone as we simplify identity-based authentication for Azure Files customers. [Read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/general-availability-azure-active-directory-kerberos-with-azure/ba-p/3612111). +This [feature](storage-files-identity-auth-hybrid-identities-enable.md) builds on top of [FSLogix profile container support](../../virtual-desktop/create-profile-container-azure-ad.md) released in December 2022 and expands it to support more use cases (SMB only). Hybrid identities, which are user identities created in Active Directory Domain Services (AD DS) and synced to Azure AD, can mount and access Azure file shares without the need for network connectivity to an Active Directory domain controller. While the initial support is limited to hybrid identities, it's a significant milestone as we simplify identity-based authentication for Azure Files customers. 
[Read the blog post](https://techcommunity.microsoft.com/t5/azure-storage-blog/general-availability-azure-active-directory-kerberos-with-azure/ba-p/3612111). ### 2022 quarter 2 (April, May, June) #### SUSE Linux support for SAP HANA System Replication (HSR) and Pacemaker |
storage | Storage Files Active Directory Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-active-directory-overview.md | description: Azure Files supports identity-based authentication over SMB (Server Previously updated : 07/18/2023 Last updated : 11/22/2023 It's helpful to understand some key terms relating to identity-based authenticat Azure Files supports identity-based authentication over SMB through the following methods. You can only use one method per storage account. -- **On-premises AD DS authentication:** On-premises AD DS-joined or Microsoft Entra Domain Services-joined Windows machines can access Azure file shares with on-premises Active Directory credentials that are synced to Microsoft Entra ID over SMB. Your client must have line of sight to your AD DS. If you already have AD DS set up on-premises or on a VM in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication.+- **On-premises AD DS authentication:** On-premises AD DS-joined or Microsoft Entra Domain Services-joined Windows machines can access Azure file shares with on-premises Active Directory credentials that are synced to Microsoft Entra ID over SMB. Your client must have unimpeded network connectivity to your AD DS. If you already have AD DS set up on-premises or on a VM in Azure where your devices are domain-joined to your AD, you should use AD DS for Azure file shares authentication. - **Microsoft Entra Domain Services authentication:** Cloud-based, Microsoft Entra Domain Services-joined Windows VMs can access Azure file shares with Microsoft Entra credentials. In this solution, Microsoft Entra ID runs a traditional Windows Server AD domain on behalf of the customer, which is a child of the customer's Microsoft Entra tenant. 
-- **Microsoft Entra Kerberos for hybrid identities:** Using Microsoft Entra ID for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Microsoft Entra users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs. Cloud-only identities aren't currently supported.+- **Microsoft Entra Kerberos for hybrid identities:** Using Microsoft Entra ID for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Microsoft Entra users to access Azure file shares using Kerberos authentication. This means your end users can access Azure file shares over the internet without requiring network connectivity to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs. Cloud-only identities aren't currently supported. - **AD Kerberos authentication for Linux clients:** Linux clients can use Kerberos authentication over SMB for Azure Files using on-premises AD DS or Microsoft Entra Domain Services. ## Restrictions You can enable identity-based authentication on your new and existing storage ac ### AD DS -For on-premises AD DS authentication, you must set up your AD domain controllers and domain-join your machines or VMs. You can host your domain controllers on Azure VMs or on-premises. Either way, your domain-joined clients must have line of sight to the domain controller, so they must be within the corporate network or virtual network (VNET) of your domain service. +For on-premises AD DS authentication, you must set up your AD domain controllers and domain-join your machines or VMs. You can host your domain controllers on Azure VMs or on-premises. 
Either way, your domain-joined clients must have unimpeded network connectivity to the domain controller, so they must be within the corporate network or virtual network (VNET) of your domain service. The following diagram depicts on-premises AD DS authentication to Azure file shares over SMB. The on-premises AD DS must be synced to Microsoft Entra ID using Microsoft Entra Connect Sync or Microsoft Entra Connect cloud sync. Only [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) that exist in both on-premises AD DS and Microsoft Entra ID can be authenticated and authorized for Azure file share access. This is because the share-level permission is configured against the identity represented in Microsoft Entra ID, whereas the directory/file-level permission is enforced with that in AD DS. Make sure that you configure the permissions correctly against the same hybrid user. To learn how to enable Microsoft Entra Domain Services authentication, see [Enab ### Microsoft Entra Kerberos for hybrid identities -Enabling and configuring Microsoft Entra ID for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Microsoft Entra users to access Azure file shares using Kerberos authentication. This configuration uses Microsoft Entra ID to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs. However, configuring directory and file-level permissions for users and groups requires line-of-sight to the on-premises domain controller. +Enabling and configuring Microsoft Entra ID for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md) allows Microsoft Entra users to access Azure file shares using Kerberos authentication. 
This configuration uses Microsoft Entra ID to issue the necessary Kerberos tickets to access the file share with the industry-standard SMB protocol. This means your end users can access Azure file shares over the internet without requiring network connectivity to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs. However, configuring directory and file-level permissions for users and groups requires unimpeded network connectivity to the on-premises domain controller. > [!IMPORTANT] > Microsoft Entra Kerberos authentication only supports hybrid user identities; it doesn't support cloud-only identities. A traditional AD DS deployment is required, and it must be synced to Microsoft Entra ID using Microsoft Entra Connect Sync or Microsoft Entra Connect cloud sync. Clients must be Microsoft Entra joined or [Microsoft Entra hybrid joined](../../active-directory/devices/hybrid-join-plan.md). Microsoft Entra Kerberos isn't supported on clients joined to Microsoft Entra Domain Services or joined to AD only. |
storage | Storage Files Configure P2s Vpn Windows | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-configure-p2s-vpn-windows.md | Title: Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files -description: How to configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files + Title: Configure a point-to-site (P2S) VPN on Windows for use with Azure Files +description: How to configure a point-to-site (P2S) VPN on Windows for use with Azure Files Previously updated : 11/08/2022 Last updated : 11/21/2023 -# Configure a Point-to-Site (P2S) VPN on Windows for use with Azure Files -You can use a Point-to-Site (P2S) VPN connection to mount your Azure file shares over SMB from outside of Azure, without opening up port 445. A Point-to-Site VPN connection is a VPN connection between Azure and an individual client. To use a P2S VPN connection with Azure Files, a P2S VPN connection will need to be configured for each client that wants to connect. If you have many clients that need to connect to your Azure file shares from your on-premises network, you can use a Site-to-Site (S2S) VPN connection instead of a Point-to-Site connection for each client. To learn more, see [Configure a Site-to-Site VPN for use with Azure Files](storage-files-configure-s2s-vpn.md). +# Configure a point-to-site (P2S) VPN on Windows for use with Azure Files -We strongly recommend that you read [Networking considerations for direct Azure file share access](storage-files-networking-overview.md) before continuing with this how to article for a complete discussion of the networking options available for Azure Files. +You can use a point-to-site (P2S) VPN connection to mount your Azure file shares over SMB from outside of Azure, without opening up port 445. A point-to-site VPN connection is a VPN connection between Azure and an individual client. 
To use a P2S VPN connection with Azure Files, you must configure a VPN connection for each client that wants to connect. If you have many clients that need to connect to your Azure file shares from your on-premises network, you can use a site-to-site (S2S) VPN connection instead of a point-to-site connection for each client. To learn more, see [Configure a site-to-site VPN for use with Azure Files](storage-files-configure-s2s-vpn.md). -The article details the steps to configure a Point-to-Site VPN on Windows (Windows client and Windows Server) to mount Azure file shares directly on-premises. If you're looking to route Azure File Sync traffic over a VPN, please see [configuring Azure File Sync proxy and firewall settings](../file-sync/file-sync-firewall-and-proxy.md). +We strongly recommend that you read [Networking considerations for direct Azure file share access](storage-files-networking-overview.md) before continuing with this how-to article for a complete discussion of the networking options available for Azure Files. ++The article details the steps to configure a point-to-site VPN on Windows (Windows client and Windows Server) to mount Azure file shares directly on-premises. If you're looking to route Azure File Sync traffic over a VPN, see [configuring Azure File Sync proxy and firewall settings](../file-sync/file-sync-firewall-and-proxy.md). ## Applies to+ | File share type | SMB | NFS | |-|:-:|:-:| | Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | The article details the steps to configure a Point-to-Site VPN on Windows (Windo | Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) | ## Prerequisites-- The most recent version of the Azure PowerShell module. 
For more information on how to install the Azure PowerShell, see [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell) and select your operating system. If you prefer to use the Azure CLI on Windows, you may, however the instructions below are presented for Azure PowerShell. -- An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage accounts, which are management constructs that represent a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources, such as blob containers or queues. You can learn more about how to deploy Azure file shares and storage accounts in [Create an Azure file share](storage-how-to-create-file-share.md).+- The most recent version of the Azure PowerShell module. See [Install the Azure PowerShell module](/powershell/azure/install-azure-powershell). ++- An Azure file share you would like to mount on-premises. Azure file shares are deployed within storage accounts, which are management constructs that represent a shared pool of storage in which you can deploy multiple file shares, as well as other storage resources. Learn more about how to deploy Azure file shares and storage accounts in [Create an Azure file share](storage-how-to-create-file-share.md). -- A virtual network with a private endpoint for the storage account containing the Azure file share you want to mount on-premises. To learn more about how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell). +- A virtual network with a private endpoint for the storage account that contains the Azure file share you want to mount on-premises. To learn how to create a private endpoint, see [Configuring Azure Files network endpoints](storage-files-networking-endpoints.md?tabs=azure-powershell). 
- A [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) must be created on the virtual network, and you'll need to know the name of the gateway subnet. ## Collect environment information-In order to set up the point-to-site VPN, we first need to collect some information about your environment for use throughout the guide. See the [prerequisites](#prerequisites) section if you have not already created a storage account, virtual network, gateway subnet, and/or private endpoints. -Remember to replace `<resource-group>`, `<vnet-name>`, `<subnet-name>`, and `<storage-account-name>` with the appropriate values for your environment. +Before setting up the point-to-site VPN, you need to collect some information about your environment. Replace `<resource-group>`, `<vnet-name>`, `<subnet-name>`, and `<storage-account-name>` with the appropriate values for your environment. ```PowerShell $resourceGroupName = "<resource-group-name>" $privateEndpoint = Get-AzPrivateEndpoint | ` ``` ## Create root certificate for VPN authentication-In order for VPN connections from your on-premises Windows machines to be authenticated to access your virtual network, you must create two certificates: a root certificate, which will be provided to the virtual machine gateway, and a client certificate, which will be signed with the root certificate. The following PowerShell creates the root certificate; the client certificate will be created after the Azure virtual network gateway is created with information from the gateway. ++In order for VPN connections from your on-premises Windows machines to be authenticated to access your virtual network, you must create two certificates: a root certificate, which will be provided to the virtual machine gateway, and a client certificate, which will be signed with the root certificate. The following PowerShell creates the root certificate; you'll create the client certificate after deploying the Azure virtual network gateway. 
```PowerShell $rootcertname = "CN=P2SRootCert" foreach($line in $rawRootCertificate) { ``` ## Deploy virtual network gateway-The Azure virtual network gateway is the service that your on-premises Windows machines will connect to. Before deploying the virtual network gateway, a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) must be created on the virtual network. -Deploying this service requires two basic components: +The Azure virtual network gateway is the service that your on-premises Windows machines will connect to. Before deploying the virtual network gateway, you must create a [gateway subnet](../../vpn-gateway/vpn-gateway-about-vpn-gateway-settings.md#gwsub) on the virtual network. ++Deploying a virtual network gateway requires two basic components: 1. A public IP address that will identify the gateway to your clients wherever they are in the world 2. The root certificate you created earlier, which will be used to authenticate your clients -Remember to replace `<desired-vpn-name-here>`, `<desired-region-here>`, and `<gateway-subnet-name-here>` in the below script with the proper values for these variables. +Remember to replace `<desired-vpn-name-here>`, `<desired-region-here>`, and `<gateway-subnet-name-here>` in the following script with the proper values for these variables. -> [!Note] -> Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this PowerShell script will block for the deployment to be completed. This is expected. +> [!NOTE] +> Deploying the Azure virtual network gateway can take up to 45 minutes. While this resource is being deployed, this PowerShell script will block until the deployment is completed. This is expected. 
This certificate is signed with the root certificate you created earlier. ++The following script creates the client certificate with the URI of the virtual network gateway. This certificate is signed with the root certificate you created earlier. ```PowerShell $clientcertpassword = "1234" Export-PfxCertificate ` ``` ## Configure the VPN client-The Azure virtual network gateway will create a downloadable package with configuration files required to initialize the VPN connection on your on-premises Windows machine. We will configure the VPN connection using the [Always On VPN](/windows-server/remote/remote-access/vpn/always-on-vpn/) feature introduced in Windows 10/Windows Server 2016. This package also contains executable packages which will configure the legacy Windows VPN client, if so desired. This guide uses Always On VPN rather than the legacy Windows VPN client as the Always On VPN client allows end-users to connect/disconnect from the Azure VPN without having administrator permissions to their machine. -The following script will install the client certificate required for authentication against the virtual network gateway, download, and install the VPN package. Remember to replace `<computer1>` and `<computer2>` with the desired computers. You can run this script on as many machines as you desire by adding more PowerShell sessions to the `$sessions` array. Your use account must be an administrator on each of these machines. If one of these machines is the local machine you are running the script from, you must run the script from an elevated PowerShell session. +The Azure virtual network gateway will create a downloadable package with configuration files required to initialize the VPN connection on your on-premises Windows machine. You'll configure the VPN connection using the [Always On VPN](/windows-server/remote/remote-access/vpn/always-on-vpn/) feature introduced in Windows 10/Windows Server 2016. 
This package also contains executables that will configure the legacy Windows VPN client, if desired. This guide uses Always On VPN rather than the legacy Windows VPN client because the Always On VPN client allows you to connect/disconnect from the Azure VPN without having administrator permissions to the machine. ++The following script will install the client certificate required for authentication against the virtual network gateway, and then download and install the VPN package. Remember to replace `<computer1>` and `<computer2>` with the desired computers. You can run this script on as many machines as you desire by adding more PowerShell sessions to the `$sessions` array. Your user account must be an administrator on each of these machines. If one of these machines is the local machine you're running the script from, you must run the script from an elevated PowerShell session. ```PowerShell $sessions = [System.Management.Automation.Runspaces.PSSession[]]@() Remove-Item -Path $vpnTemp -Recurse ``` ## Mount Azure file share-Now that you have set up your Point-to-Site VPN, you can use it to mount the Azure file share on the computers you setup via PowerShell. The following example will mount the share, list the root directory of the share to prove the share is actually mounted, and the unmount the share. Unfortunately, it is not possible to mount the share persistently over PowerShell remoting. To mount persistently, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md). ++Now that you've set up your point-to-site VPN, you can use it to mount the Azure file share to an on-premises machine. The following example will mount the share, list the root directory of the share to prove the share is actually mounted, and then unmount the share. ++> [!NOTE] +> It isn't possible to mount the share persistently over PowerShell remoting. To mount persistently, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md). 
```PowerShell $myShareToMount = "<file-share>" Invoke-Command ` ``` ## Rotate VPN Root Certificate-If a root certificate needs to be rotated due to expiration or new requirements, you can add a new root certificate to the existing virtual network gateway without the need for redeploying the virtual network gateway. Once the root certificate is added using the following sample script, you will need to re-create [VPN client certificate](#create-client-certificate). ++If a root certificate needs to be rotated due to expiration or new requirements, you can add a new root certificate to the existing virtual network gateway without redeploying the virtual network gateway. After adding the root certificate using the following script, you'll need to re-create the [VPN client certificate](#create-client-certificate). Replace `<resource-group-name>`, `<desired-vpn-name-here>`, and `<new-root-cert-name>` with your own values, then run the script. Add-AzVpnClientRootCertificate ` -VpnClientRootCertificateName $NewRootCertName ```+ ## See also+ - [Networking considerations for direct Azure file share access](storage-files-networking-overview.md)-- [Configure a Point-to-Site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md)-- [Configure a Site-to-Site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md)+- [Configure a point-to-site (P2S) VPN on Linux for use with Azure Files](storage-files-configure-p2s-vpn-linux.md) +- [Configure a site-to-site (S2S) VPN for use with Azure Files](storage-files-configure-s2s-vpn.md) |
storage | Storage Files Identity Ad Ds Configure Permissions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-configure-permissions.md | description: Learn how to configure Windows ACLs for directory and file level pe Previously updated : 12/19/2022 Last updated : 11/21/2023 recommendations: false After you assign share-level permissions, you can configure Windows access contr Both share-level and file/directory-level permissions are enforced when a user attempts to access a file/directory, so if there's a difference between either of them, only the most restrictive one will be applied. For example, if a user has read/write access at the file level, but only read at a share level, then they can only read that file. The same would be true if it was reversed: if a user had read/write access at the share-level, but only read at the file-level, they can still only read the file. > [!IMPORTANT]-> To configure Windows ACLs, you'll need a client machine running Windows that has line-of-sight to the domain controller. If you're authenticating with Azure Files using Active Directory Domain Services (AD DS) or Microsoft Entra Kerberos for hybrid identities, this means you'll need line-of-sight to the on-premises AD. If you're using Microsoft Entra Domain Services, then the client machine must have line-of-sight to the domain controllers for the domain that's managed by Microsoft Entra Domain Services, which are located in Azure. +> To configure Windows ACLs, you'll need a client machine running Windows that has unimpeded network connectivity to the domain controller. If you're authenticating with Azure Files using Active Directory Domain Services (AD DS) or Microsoft Entra Kerberos for hybrid identities, this means you'll need unimpeded network connectivity to the on-premises AD. 
If you're using Microsoft Entra Domain Services, then the client machine must have unimpeded network connectivity to the domain controllers for the domain that's managed by Microsoft Entra Domain Services, which are located in Azure. ## Applies to | File share type | SMB | NFS | For more information on these advanced permissions, see [the command-line refere There are two approaches you can take to configuring and editing Windows ACLs: - **Log in with username and storage account key every time**: Anytime you want to configure ACLs, mount the file share by using your storage account key on a machine that has unimpeded network connectivity to the domain controller. - **One-time username/storage account key setup:** 1. Log in with a username and storage account key on a machine that has unimpeded network connectivity to the domain controller, and give some users (or groups) permission to edit permissions on the root of the file share. 2. Assign those users the **Storage File Data SMB Share Elevated Contributor** Azure RBAC role. 3. In the future, anytime you want to update ACLs, you can use one of those authorized users to log in from a machine that has unimpeded network connectivity to the domain controller and edit ACLs. ## Mount the file share using your storage account key |
storage | Storage Files Identity Ad Ds Mount File Share | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-ad-ds-mount-file-share.md | description: Learn how to mount an Azure file share to your on-premises Active D Previously updated : 07/12/2023 Last updated : 11/21/2023 recommendations: false Sign in to the client using the credentials of the identity that you granted per Before you can mount the Azure file share, make sure you've gone through the following prerequisites: - If you're mounting the file share from a client that has previously connected to the file share using your storage account key, make sure that you've disconnected the share, removed the persistent credentials of the storage account key, and are currently using AD DS credentials for authentication. For instructions on how to remove cached storage account key credentials and delete existing SMB connections before initializing a new connection with AD DS or Microsoft Entra credentials, follow the two-step process on the [FAQ page](./storage-files-faq.md#identity-based-authentication). - Your client must have unimpeded network connectivity to your AD DS. If your machine or VM is outside of the network managed by your AD DS, you'll need to enable VPN to reach AD DS for authentication. > [!NOTE] > Using the canonical name (CNAME) to mount an Azure file share isn't currently supported while using identity-based authentication in single-forest AD environments. |
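The prerequisite above about removing persisted storage account key credentials and existing SMB connections can be sketched as the following client-side commands. This is an illustrative sketch, not the exact FAQ procedure; the storage account and share names are placeholders you must replace.

```
rem Remove the persisted storage account key credential for the storage account
cmdkey /delete:<storage-account-name>.file.core.windows.net

rem Disconnect any existing SMB connection to the share
net use \\<storage-account-name>.file.core.windows.net\<file-share-name> /delete
```

After both commands succeed, the next mount attempt negotiates a fresh session using AD DS or Microsoft Entra credentials.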
storage | Storage Files Identity Auth Active Directory Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-active-directory-enable.md | description: Learn about Active Directory Domain Services (AD DS) authentication Previously updated : 06/12/2023 Last updated : 11/21/2023 recommendations: false Before you enable AD DS authentication for Azure file shares, make sure you've c - Domain-join an on-premises machine or an Azure VM to on-premises AD DS. For information about how to domain-join, refer to [Join a Computer to a Domain](/windows-server/identity/ad-fs/deployment/join-a-computer-to-a-domain). If a machine isn't domain joined, you can still use AD DS for authentication if the machine has unimpeded network connectivity to the on-premises AD domain controller and the user provides explicit credentials. For more information, see [Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain](storage-files-identity-ad-ds-mount-file-share.md#mount-the-file-share-from-a-non-domain-joined-vm-or-a-vm-joined-to-a-different-ad-domain). - Select or create an Azure storage account. For optimal performance, we recommend that you deploy the storage account in the same region as the client from which you plan to access the share. Then, [mount the Azure file share](storage-how-to-use-files-windows.md) with your storage account key. Mounting with the storage account key verifies connectivity. |
storage | Storage Files Identity Auth Domain Services Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-domain-services-enable.md | description: Learn how to enable identity-based authentication over Server Messa Previously updated : 07/17/2023 Last updated : 11/22/2023 recommendations: false Before you enable Microsoft Entra Domain Services over SMB for Azure file shares To access an Azure file share by using Microsoft Entra credentials from a VM, your VM must be domain-joined to Microsoft Entra Domain Services. For more information about how to domain-join a VM, see [Join a Windows Server virtual machine to a managed domain](../../active-directory-domain-services/join-windows-vm.md). Microsoft Entra Domain Services authentication over SMB with Azure file shares is supported only on Azure VMs running OS versions newer than Windows 7 or Windows Server 2008 R2. > [!NOTE] > Non-domain-joined VMs can access Azure file shares using Microsoft Entra Domain Services authentication only if the VM has unimpeded network connectivity to the domain controllers for Microsoft Entra Domain Services. Usually this requires either a site-to-site or point-to-site VPN. 1. **Select or create an Azure file share.** Get-ADUser $userObject -properties KerberosEncryptionType > [!IMPORTANT] > If you were previously using RC4 encryption and update the storage account to use AES-256, you should run `klist purge` on the client and then remount the file share to get new Kerberos tickets with AES-256.
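The RC4-to-AES-256 note above can be sketched as the following client-side commands. The drive letter and the angle-bracket names are illustrative assumptions; `klist purge` and `net use` are the standard Windows commands the article refers to.

```
rem Discard cached Kerberos tickets so new AES-256 tickets are requested
klist purge

rem Remount the file share to establish a new SMB session with the new tickets
net use Z: /delete
net use Z: \\<storage-account-name>.file.core.windows.net\<file-share-name>
```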
## Assign share-level permissions To access Azure Files resources with identity-based authentication, an identity (a user, group, or service principal) must have the necessary permissions at the share level. This process is similar to specifying Windows share permissions, where you specify the type of access that a particular user has to a file share. The guidance in this section demonstrates how to assign read, write, or delete permissions for a file share to an identity. **We highly recommend assigning permissions by declaring actions and data actions explicitly as opposed to using the wildcard (\*) character.** Most users should assign share-level permissions to specific Microsoft Entra users or groups, and then [configure Windows ACLs](#configure-windows-acls) for granular access control at the directory and file level. Alternatively, you can set a [default share-level permission](storage-files-identity-ad-ds-assign-permissions.md#share-level-permissions-for-all-authenticated-identities) to allow contributor, elevated contributor, or reader access to **all authenticated identities**. There are five Azure built-in roles for Azure Files, some of which allow granting share-level permissions to users and groups: - **Storage File Data Share Reader** allows read access in Azure file shares over SMB. - **Storage File Data Privileged Reader** allows read access in Azure file shares over SMB by overriding existing Windows ACLs. - **Storage File Data Share Contributor** allows read, write, and delete access in Azure file shares over SMB. - **Storage File Data Share Elevated Contributor** allows read, write, delete, and modify Windows ACLs in Azure file shares over SMB. - **Storage File Data Privileged Contributor** allows read, write, delete, and modify Windows ACLs in Azure file shares over SMB by overriding existing Windows ACLs.
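As a quick way to inspect the built-in roles listed above before assigning one, you could query the role definitions with Azure PowerShell. A minimal sketch, assuming the Az module is installed and you're signed in to the right subscription:

```PowerShell
# List the Azure Files built-in roles and their descriptions
Get-AzRoleDefinition |
    Where-Object { $_.Name -like "Storage File Data*" } |
    Select-Object Name, Description
```

Comparing the `Actions` and `DataActions` of each definition is also a useful check that a role grants only what you intend, in line with the recommendation against wildcard data actions.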
> [!IMPORTANT] > Full administrative control of a file share, including the ability to take ownership of a file, requires using the storage account key. Administrative control isn't supported with Microsoft Entra credentials. You can use the Azure portal, PowerShell, or Azure CLI to assign the built-in roles to the Microsoft Entra identity of a user for granting share-level permissions. Be aware that the share-level Azure role assignment can take some time to take effect. We recommend assigning share-level permissions to an AD group representing a group of users and identities for high-level access management, and then using Windows ACLs for granular access control at the directory/file level. <a name='assign-an-azure-role-to-an-azure-ad-identity'></a> ### Assign an Azure role to a Microsoft Entra identity > [!IMPORTANT] > **Assign permissions by explicitly declaring actions and data actions as opposed to using a wildcard (\*) character.** If a custom role definition for a data action contains a wildcard character, all identities assigned to that role are granted access for all possible data actions. This means that all such identities will also be granted any new data action added to the platform. The additional access and permissions granted through new actions or data actions may be unwanted behavior for customers using wildcards. # [Portal](#tab/azure-portal) To assign an Azure role to a Microsoft Entra identity using the [Azure portal](https://portal.azure.com), follow these steps: 1. In the Azure portal, go to your file share, or [create a file share](storage-how-to-create-file-share.md). 2. Select **Access Control (IAM)**. 3. Select **Add a role assignment**. 4. In the **Add role assignment** blade, select the appropriate built-in role (for example, Storage File Data SMB Share Reader or Storage File Data SMB Share Contributor) from the **Role** list.
Leave **Assign access to** at the default setting: **Microsoft Entra user, group, or service principal**. Select the target Microsoft Entra identity by name or email address. 5. Select **Review + assign** to complete the role assignment. # [PowerShell](#tab/azure-powershell) The following PowerShell sample shows how to assign an Azure role to a Microsoft Entra identity, based on sign-in name. For more information about assigning Azure roles with PowerShell, see [Manage access using RBAC and Azure PowerShell](../../role-based-access-control/role-assignments-powershell.md). Before you run the following sample script, remember to replace placeholder values, including brackets, with your own values. ```powershell #Get the name of the custom role $FileShareContributorRole = Get-AzRoleDefinition "<role-name>" #Use one of the built-in roles: Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor, Storage File Data SMB Share Elevated Contributor #Constrain the scope to the target file share $scope = "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>" #Assign the custom role to the target identity with the specified scope. New-AzRoleAssignment -SignInName <user-principal-name> -RoleDefinitionName $FileShareContributorRole.Name -Scope $scope ``` # [Azure CLI](#tab/azure-cli) The following command shows how to assign an Azure role to a Microsoft Entra identity based on sign-in name. For more information about assigning Azure roles with Azure CLI, see [Manage access by using RBAC and Azure CLI](../../role-based-access-control/role-assignments-cli.md). Before you run the following sample script, remember to replace placeholder values, including brackets, with your own values.
```azurecli-interactive #Assign the built-in role to the target identity: Storage File Data SMB Share Reader, Storage File Data SMB Share Contributor, Storage File Data SMB Share Elevated Contributor az role assignment create --role "<role-name>" --assignee <user-principal-name> --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>/fileServices/default/fileshares/<share-name>" ``` ## Configure Windows ACLs After you assign share-level permissions with RBAC, you can assign Windows ACLs at the root, directory, or file level. Think of share-level permissions as the high-level gatekeeper that determines whether a user can access the share, whereas Windows ACLs act at a more granular level to determine what operations the user can do at the directory or file level. Azure Files supports the full set of basic and advanced permissions. You can view and configure Windows ACLs on directories and files in an Azure file share by mounting the share and then using Windows File Explorer or running the Windows [icacls](/windows-server/administration/windows-commands/icacls) or [Set-ACL](/powershell/module/microsoft.powershell.security/set-acl) command. The following sets of permissions are supported on the root directory of a file share: - BUILTIN\Administrators:(OI)(CI)(F) - NT AUTHORITY\SYSTEM:(OI)(CI)(F) - BUILTIN\Users:(RX) - BUILTIN\Users:(OI)(CI)(IO)(GR,GE) - NT AUTHORITY\Authenticated Users:(OI)(CI)(M) - NT AUTHORITY\SYSTEM:(F) - CREATOR OWNER:(OI)(CI)(IO)(F) For more information, see [Configure directory and file-level permissions over SMB](storage-files-identity-ad-ds-configure-permissions.md). ### Mount the file share using your storage account key Before you configure Windows ACLs, you must first mount the file share to your domain-joined VM by using your storage account key.
To do this, log into the domain-joined VM as a Microsoft Entra user, open a Windows command prompt, and run the following command. Remember to replace `<YourStorageAccountName>`, `<FileShareName>`, and `<YourStorageAccountKey>` with your own values. If Z: is already in use, replace it with an available drive letter. You can find your storage account key in the Azure portal by navigating to the storage account and selecting **Security + networking** > **Access keys**, or you can use the `Get-AzStorageAccountKey` PowerShell cmdlet. It's important that you use the `net use` Windows command to mount the share at this stage and not PowerShell. If you use PowerShell to mount the share, the share won't be visible to Windows File Explorer or cmd.exe, and you won't be able to configure Windows ACLs. > [!NOTE] > You might see the **Full Control** ACL applied to a role already. This typically already offers the ability to assign permissions. However, because there are access checks at two levels (the share level and the file/directory level), this is restricted. Only users who have the **SMB Elevated Contributor** role and create a new file or directory can assign permissions on those new files or directories without using the storage account key. All other file/directory permission assignment requires connecting to the share using the storage account key first. ``` net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:localhost\<YourStorageAccountName> <YourStorageAccountKey> ``` ### Configure Windows ACLs with Windows File Explorer After you've mounted your Azure file share, you must configure the Windows ACLs. You can do this using either Windows File Explorer or icacls. Follow these steps to use Windows File Explorer to grant full permission to all directories and files under the file share, including the root directory. 1. Open Windows File Explorer, right-click the file/directory, and select **Properties**. 1.
Select the **Security** tab. 1. Select **Edit** to change permissions. 1. You can change the permissions of existing users or select **Add** to grant permissions to new users. 1. In the prompt window for adding new users, enter the target user name you want to grant permission to in the **Enter the object names to select** box, and select **Check Names** to find the full UPN name of the target user. 1. Select **OK**. 1. In the **Security** tab, select all permissions you want to grant your new user. 1. Select **Apply**. ### Configure Windows ACLs with icacls Use the following Windows command to grant full permissions to all directories and files under the file share, including the root directory. Remember to replace the placeholder values in the example with your own values. ``` icacls <mounted-drive-letter>: /grant <user-email>:(f) ``` For more information on how to use icacls to set Windows ACLs and the different types of supported permissions, see [the command-line reference for icacls](/windows-server/administration/windows-commands/icacls). ## Mount the file share from a domain-joined VM The following process verifies that your file share and access permissions were set up correctly and that you can access an Azure file share from a domain-joined VM. Be aware that the share-level Azure role assignment can take some time to take effect. Sign in to the domain-joined VM using the Microsoft Entra identity to which you granted permissions. Be sure to sign in with Microsoft Entra credentials. If the drive is already mounted with the storage account key, you'll need to disconnect the drive or sign in again. Run the following PowerShell script or [use the Azure portal](storage-files-quick-create-use-windows.md#map-the-azure-file-share-to-a-windows-drive) to persistently mount the Azure file share and map it to drive Z: on Windows. If Z: is already in use, replace it with an available drive letter.
Because you've been authenticated, you won't need to provide the storage account key. The script will check to see if this storage account is accessible via TCP port 445, which is the port SMB uses. Remember to replace `<storage-account-name>` and `<file-share-name>` with your own values. For more information, see [Use an Azure file share with Windows](storage-how-to-use-files-windows.md). Always mount Azure file shares using `file.core.windows.net`, even if you set up a private endpoint for your share. ```powershell $connectTestResult = Test-NetConnection -ComputerName <storage-account-name>.file.core.windows.net -Port 445 if ($connectTestResult.TcpTestSucceeded) { cmd.exe /C "cmdkey /add:`"<storage-account-name>.file.core.windows.net`" /user:`"localhost\<storage-account-name>`"" New-PSDrive -Name Z -PSProvider FileSystem -Root "\\<storage-account-name>.file.core.windows.net\<file-share-name>" -Persist } else { Write-Error -Message "Unable to reach the Azure storage account via port 445. Check to make sure your organization or ISP is not blocking port 445, or use Azure P2S VPN, Azure S2S VPN, or Express Route to tunnel SMB traffic over a different port." } ``` You can also use the `net use` command from a Windows prompt to mount the file share. Remember to replace `<YourStorageAccountName>` and `<FileShareName>` with your own values. ``` net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> ``` ## Mount the file share from a non-domain-joined VM or a VM joined to a different AD domain Non-domain-joined VMs or VMs that are joined to a different domain than the storage account can access Azure file shares using Microsoft Entra Domain Services authentication only if the VM has unimpeded network connectivity to the domain controllers for Microsoft Entra Domain Services, which are located in Azure. This usually requires setting up a site-to-site or point-to-site VPN.
The user accessing the file share must have an identity and credentials (a Microsoft Entra identity synced from Microsoft Entra ID to Microsoft Entra Domain Services) in the Microsoft Entra Domain Services managed domain. To mount a file share from a non-domain-joined VM, the user must either: - Provide explicit credentials such as **DOMAINNAME\username**, where **DOMAINNAME** is the Microsoft Entra Domain Services domain and **username** is the identity's user name in Microsoft Entra Domain Services, or - Use the notation **username@domainFQDN**, where **domainFQDN** is the fully qualified domain name. Using one of these approaches allows the client to contact the domain controller in the Microsoft Entra Domain Services domain to request and receive Kerberos tickets. For example: ``` net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<DOMAINNAME\username> ``` or ``` net use Z: \\<YourStorageAccountName>.file.core.windows.net\<FileShareName> /user:<username@domainFQDN> ``` ## Next steps |
storage | Storage Files Identity Auth Hybrid Identities Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-auth-hybrid-identities-enable.md | description: Learn how to enable identity-based Kerberos authentication for hybr Previously updated : 09/25/2023 Last updated : 11/21/2023 recommendations: false This article focuses on enabling and configuring Microsoft Entra ID (formerly Azure AD) for authenticating [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD DS identities that are synced to Microsoft Entra ID. Cloud-only identities aren't currently supported. This configuration allows hybrid users to access Azure file shares using Kerberos authentication, using Microsoft Entra ID to issue the necessary Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring unimpeded network connectivity to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined clients. However, configuring Windows access control lists (ACLs) and directory and file-level permissions for a user or group requires unimpeded network connectivity to the on-premises domain controller.
For more information on supported options and considerations, see [Overview of Azure Files identity-based authentication options for SMB access](storage-files-active-directory-overview.md). For more information, see [this deep dive](https://techcommunity.microsoft.com/t5/itops-talk-blog/deep-dive-how-azure-ad-kerberos-works/ba-p/3070889). To enable Microsoft Entra Kerberos authentication using the [Azure portal](https :::image type="content" source="media/storage-files-identity-auth-hybrid-identities-enable/enable-azure-ad-kerberos.png" alt-text="Screenshot of the Azure portal showing Active Directory configuration settings for a storage account. Microsoft Entra Kerberos is selected." lightbox="media/storage-files-identity-auth-hybrid-identities-enable/enable-azure-ad-kerberos.png" border="true"::: 1. **Optional:** If you want to configure directory and file-level permissions through Windows File Explorer, then you need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or by running the following Active Directory PowerShell cmdlet from an on-premises AD-joined client: `Get-ADDomain`. Your domain name should be listed in the output under `DNSRoot` and your domain GUID should be listed under `ObjectGUID`.
If you'd prefer to configure directory and file-level permissions using icacls, you can skip this step. However, if you want to use icacls, the client will need unimpeded network connectivity to the on-premises AD. 1. Select **Save**. To set share-level permissions, follow the instructions in [Assign share-level p ## Configure directory and file-level permissions Once share-level permissions are in place, you can assign directory/file-level permissions to the user or group. **This requires using a device with unimpeded network connectivity to an on-premises AD**. To use Windows File Explorer, the device also needs to be domain-joined. There are two options for configuring directory and file-level permissions with Microsoft Entra Kerberos authentication: - **Windows File Explorer:** If you choose this option, then the client must be domain-joined to the on-premises AD. - **icacls utility:** If you choose this option, then the client doesn't need to be domain-joined, but needs unimpeded network connectivity to the on-premises AD. To configure directory and file-level permissions through Windows File Explorer, you also need to specify the domain name and domain GUID for your on-premises AD. You can get this information from your domain admin or from an on-premises AD-joined client. If you prefer to configure using icacls, this step isn't required. |
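For the icacls option described above, a directory-level grant might look like the following sketch. The drive letter, directory name, group name, and permission set are illustrative assumptions; the inheritance flags `(OI)(CI)` make the grant apply to child files and subdirectories, and `(M)` grants modify rights.

```
rem Grant modify rights on a directory, inherited by child files and folders
icacls Z:\SharedDocs /grant "<DOMAINNAME>\<group-name>":(OI)(CI)(M)
```

Running `icacls Z:\SharedDocs` afterwards prints the resulting ACL so you can verify the grant took effect.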
storage | Storage Files Identity Multiple Forests | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-identity-multiple-forests.md | description: Configure on-premises Active Directory Domain Services (AD DS) auth Previously updated : 11/15/2023 Last updated : 11/21/2023 To use this method, complete the following steps: 1. Select the node named after your domain (for example, **onpremad1.com**) and right-click **New Alias (CNAME)**. 1. For the alias name, enter your storage account name. 1. For the fully qualified domain name (FQDN), enter **`<storage-account-name>`.`<domain-name>`**, such as **mystorageaccount.onpremad1.com**. 1. If you're using a private endpoint (PrivateLink) for the storage account, add an additional CNAME entry to map to the private endpoint name, for example **mystorageaccount.privatelink.onpremad1.com**. 1. For the target host FQDN, enter **`<storage-account-name>`.file.core.windows.net**. 1. Select **OK**. |
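As an alternative to the DNS Manager GUI steps above, the same CNAME record could be created with the DnsServer PowerShell module. A hedged sketch, run on a DNS server that hosts the zone, reusing the example names from the steps (zone and record names are assumptions to replace with your own):

```PowerShell
# Create the alias mystorageaccount.onpremad1.com -> the Azure Files endpoint
Add-DnsServerResourceRecordCName -ZoneName "onpremad1.com" `
    -Name "mystorageaccount" `
    -HostNameAlias "mystorageaccount.file.core.windows.net"
```

If you use a private endpoint, a second CNAME record pointing at the `privatelink` name can be created the same way.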
storage | Storage Files Migration Nas Cloud Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-cloud-databox.md | description: Learn how to migrate files from an on-premises Network Attached Sto Previously updated : 12/15/2022 Last updated : 11/21/2023 recommendations: false To save time, you should proceed with this phase while you wait for your DataBox ## Phase 6: Copy files onto your DataBox When your DataBox arrives, you need to set up your DataBox with unimpeded network connectivity to your NAS appliance. Follow the setup documentation for the DataBox type you ordered. * [Set up Data Box](../../databox/data-box-quickstart-portal.md) * [Set up Data Box Disk](../../databox/data-box-disk-quickstart-portal.md) |
storage | Storage Files Migration Nas Hybrid Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-nas-hybrid-databox.md | description: Learn how to migrate files from an on-premises Network Attached Sto Previously updated : 03/5/2021 Last updated : 11/21/2023 The resource configuration (compute and RAM) of the Windows Server instance you ## Phase 5: Copy files onto your Data Box When your Data Box arrives, you need to set it up with unimpeded network connectivity to your NAS appliance. Follow the setup documentation for the type of Data Box you ordered: * [Set up Data Box](../../databox/data-box-quickstart-portal.md). * [Set up Data Box Disk](../../databox/data-box-disk-quickstart-portal.md). There's more to discover about Azure file shares and Azure File Sync. The follow * [Migration overview](storage-files-migration-overview.md) * [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md) * [Create a file share](storage-how-to-create-file-share.md) * [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json) |
storage | Storage Files Migration Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-overview.md | description: Learn how to migrate to Azure file shares and find your migration g Previously updated : 10/30/2023 Last updated : 11/21/2023 For an app that currently runs on an on-premises server, storing files in an Azu Some cloud apps don't depend on SMB or on machine-local data access or shared access. For those apps, object storage like [Azure blobs](../blobs/storage-blobs-overview.md) is often the best choice. The key in any migration is to capture all the applicable file fidelity when moving your files from their current storage location to Azure. How much fidelity the Azure storage option supports and how much your scenario requires also helps you pick the right Azure storage. Here are the two basic components of a file: - **Data stream**: The data stream of a file stores the file content. - **File metadata**: Unlike object storage in Azure blobs, an Azure file share can natively store file metadata. General-purpose file data traditionally depends on file metadata. App data might not.
The file metadata has these subcomponents: * File attributes like read-only * File permissions, which can be referred to as *NTFS permissions* or *file and folder ACLs* * Timestamps, most notably the creation and last-modified timestamps- * An alternative data stream, which is a space to store larger amounts of nonstandard properties + * An alternative data stream, which is a space to store larger amounts of nonstandard properties. This alternative data stream can't be stored on a file in an Azure file share. It's preserved on-premises when Azure File Sync is used. File fidelity in a migration can be defined as the ability to: - Store all applicable file information on the source. - Transfer files with the migration tool.-- Store files in the target storage of the migration. </br> Ultimately, the target for migration guides on this page is one or more Azure file shares. Consider this [list of features that SMB Azure file shares don't support](files-smb-protocol.md#limitations).+- Store files in the target storage of the migration. </br> The target for migration guides on this page is one or more Azure file shares. Consider this [list of features that SMB Azure file shares don't support](files-smb-protocol.md#limitations). To ensure your migration proceeds smoothly, identify [the best copy tool for your needs](#migration-toolbox) and match a storage target to your source. -Taking the previous information into account, you can see that the target storage for general-purpose files in Azure is [Azure file shares](storage-files-introduction.md). --Unlike object storage in Azure blobs, an Azure file share can natively store file metadata. Azure file shares also preserve the file and folder hierarchy, attributes, and permissions. NTFS permissions can be stored on files and folders because they're on-premises. 
- > [!IMPORTANT] > If you're migrating on-premises file servers to Azure File Sync, set the ACLs for the root directory of the file share **before** copying a large number of files, as changes to permissions for root ACLs can take up to a day to propagate if done after a large file migration. Users that leverage Active Directory Domain Services (AD DS) as their on-premises domain controller can natively access an Azure file share. So can users of Microsoft Entra Domain Services. Each uses their current identity to get access based on share permissions and on file and folder ACLs. This behavior is similar to a user connecting to an on-premises file share. -The alternative data stream is the primary aspect of file fidelity that currently can't be stored on a file in an Azure file share. It's preserved on-premises when Azure File Sync is used. - Learn more about [on-premises Active Directory authentication](storage-files-identity-auth-active-directory-enable.md) and [Microsoft Entra Domain Services authentication](storage-files-identity-auth-domain-services-enable.md) for Azure file shares. +## Supported metadata ++The following table lists supported metadata for Azure Files. ++> [!IMPORTANT] +> The *LastAccessTime* timestamp isn't currently supported for files or directories on the target share. ++| **Source** | **Target** | +||| +| Directory structure | The original directory structure of the source can be preserved on the target share. | +| Symbolic links | Symbolic links on the source can be preserved and mapped on the target share. | +| Access permissions | Azure Files supports Windows ACLs, and they must be set on the target share even if no AD integration is configured at migration time. The following ACLs must be preserved: owner security identifier (SID), group SID, discretionary access lists (DACLs), system access control lists (SACLs). | +| Create timestamp | The original create timestamp of the source file can be preserved on the target share. 
| +| Change timestamp | The original change timestamp of the source file can be preserved on the target share. | +| Modified timestamp | The original modified timestamp of the source file can be preserved on the target share. | +| File attributes | Common attributes such as read-only, hidden, and archive flags can be preserved on the target share. | + ## Migration guides The following table lists detailed migration guides. A scenario without a link doesn't yet have a published migration guide. Check th | Source | Target: </br>Hybrid deployment | Target: </br>Cloud-only deployment | |:|:--|:--| | | Tool combination:| Tool combination: |-| Windows Server 2012 R2 and later | <ul><li>[Azure File Sync](../file-sync/file-sync-deployment-guide.md)</li><li>[Azure File Sync and Azure DataBox](storage-files-migration-server-hybrid-databox.md)</li></ul> | <ul><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li><li>via Azure File Sync: Follow same steps as [Azure File Sync hybrid deployment](../file-sync/file-sync-deployment-guide.md) and [decommission server endpoint](../file-sync/file-sync-server-endpoint-delete.md) at the end.</li></ul> | -| Windows Server 2012 and earlier | <ul><li>Via DataBox and Azure File Sync to recent server OS</li><li>Via Storage Migration Service to recent server with Azure File Sync, then upload</li></ul> | <ul><li>Via Storage Migration Service to recent server with Azure File Sync</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> | +| Windows Server 2012 R2 and later | <ul><li>[Azure File Sync](../file-sync/file-sync-deployment-guide.md)</li><li>[Azure File Sync and Azure DataBox](storage-files-migration-server-hybrid-databox.md)</li></ul> | <ul><li>Via Azure Storage Mover</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li><li>Via Azure File Sync: Follow same steps as [Azure File Sync hybrid 
deployment](../file-sync/file-sync-deployment-guide.md) and [decommission server endpoint](../file-sync/file-sync-server-endpoint-delete.md) at the end.</li></ul> | +| Windows Server 2012 and earlier | <ul><li>Via DataBox and Azure File Sync to recent server OS</li><li>Via Storage Migration Service to recent server with Azure File Sync, then upload</li></ul> | <ul><li>Via Azure Storage Mover</li><li>Via Storage Migration Service to recent server with Azure File Sync</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> | | Network-attached storage (NAS) | <ul><li>[Via Azure File Sync upload](storage-files-migration-nas-hybrid.md)</li><li>[Via DataBox + Azure File Sync](storage-files-migration-nas-hybrid-databox.md)</li></ul> | <ul><li>[Via DataBox](storage-files-migration-nas-cloud-databox.md)</li><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> | | Linux / Samba | <ul><li>[Azure File Sync and RoboCopy](storage-files-migration-linux-hybrid.md)</li></ul> | <ul><li>[Via RoboCopy to a mounted Azure file share](storage-files-migration-robocopy.md)</li></ul> | The following table classifies Microsoft tools and their current suitability for | Recommended | Tool | Support for Azure file shares | Preservation of file fidelity | | :-: | :-- | :- | :- |+|![Yes, recommended](medi) | Supported. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| RoboCopy | Supported. Azure file shares can be mounted as network drives. | Full fidelity.* | |![Yes, recommended](media/storage-files-migration-overview/circle-green-checkmark.png)| Azure File Sync | Natively integrated into Azure file shares. | Full fidelity.* | |![Yes, recommended](medi) | Supported. | Full fidelity.* | The following table classifies Microsoft tools and their current suitability for This section describes tools that help you plan and run migrations. 
+#### Azure Storage Mover ++Azure Storage Mover is a relatively new, fully managed migration service that enables you to migrate files and folders to SMB Azure file shares with the same level of file fidelity as the underlying Azure file share. Folder structure and metadata values such as file and folder timestamps, ACLs, and file attributes are maintained. + #### RoboCopy Included in Windows, RoboCopy is one of the tools most applicable to file migrations. The main [RoboCopy documentation](/windows-server/administration/windows-commands/robocopy) is a helpful resource for this tool's many options. |
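The migration overview entry above enumerates the components of file fidelity: the data stream plus metadata such as timestamps and attributes. As a minimal, hypothetical sketch of what "capturing fidelity" means in practice (the `capture_fidelity` helper below is illustrative only and isn't part of any Azure tool), a migration script could record these components on the source so they can be checked against the target after the copy:

```python
import os
import stat
import tempfile

def capture_fidelity(path):
    """Record the basic file-fidelity components: data-stream length plus
    metadata (last-modified timestamp and the read-only attribute)."""
    st = os.stat(path)
    return {
        "size": st.st_size,                            # data stream length
        "mtime": int(st.st_mtime),                     # last-modified timestamp
        "read_only": not (st.st_mode & stat.S_IWUSR),  # read-only attribute
    }

# Capture fidelity on a "source" file before migrating it.
src = os.path.join(tempfile.mkdtemp(), "source.txt")
with open(src, "wb") as f:
    f.write(b"hello")
os.chmod(src, 0o444)  # mark the file read-only

info = capture_fidelity(src)
print(info["size"], info["read_only"])  # 5 True
```

A real migration tool such as RoboCopy or Azure File Sync also carries permissions (ACLs), alternate streams where supported, and the full set of timestamps; this sketch only illustrates the capture-and-compare idea.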
storage | Storage Files Migration Server Hybrid Databox | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-migration-server-hybrid-databox.md | description: Migrate bulk data offline that's compatible with Azure File Sync. A Previously updated : 06/01/2021 Last updated : 11/21/2023 For a standard migration, choose one or a combination of these Data Box options: ## Phase 4: Copy files onto your Data Box -When your Data Box arrives, you need to set it up in the line of sight to your NAS appliance. Follow the setup documentation for the type of Data Box you ordered: +When your Data Box arrives, you need to set it up with unimpeded network connectivity to your NAS appliance. Follow the setup documentation for the type of Data Box you ordered: * [Set up Data Box](../../databox/data-box-quickstart-portal.md). * [Set up Data Box Heavy](../../databox/data-box-heavy-quickstart-portal.md). |
storage | Storage Files Planning | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md | To access an Azure file share, the user of the file share must be authenticated - **On-premises Active Directory Domain Services (AD DS, or on-premises AD DS)**: Azure storage accounts can be domain joined to a customer-owned Active Directory Domain Services, just like a Windows Server file server or NAS device. You can deploy a domain controller on-premises, in an Azure VM, or even as a VM in another cloud provider; Azure Files is agnostic to where your domain controller is hosted. Once a storage account is domain-joined, the end user can mount a file share with the user account they signed into their PC with. AD-based authentication uses the Kerberos authentication protocol. - **Microsoft Entra Domain Services**: Microsoft Entra Domain Services provides a Microsoft-managed domain controller that can be used for Azure resources. Domain joining your storage account to Microsoft Entra Domain Services provides similar benefits to domain joining it to a customer-owned AD DS. This deployment option is most useful for application lift-and-shift scenarios that require AD-based permissions. Since Microsoft Entra Domain Services provides AD-based authentication, this option also uses the Kerberos authentication protocol.-- **Microsoft Entra Kerberos for hybrid identities**: Microsoft Entra Kerberos allows you to use Microsoft Entra ID to authenticate [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This configuration uses Microsoft Entra ID to issue Kerberos tickets to access the file share with the SMB protocol. 
This means your end users can access Azure file shares over the internet without requiring a line-of-sight to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs.+- **Microsoft Entra Kerberos for hybrid identities**: Microsoft Entra Kerberos allows you to use Microsoft Entra ID to authenticate [hybrid user identities](../../active-directory/hybrid/whatis-hybrid-identity.md), which are on-premises AD identities that are synced to the cloud. This configuration uses Microsoft Entra ID to issue Kerberos tickets to access the file share with the SMB protocol. This means your end users can access Azure file shares over the internet without requiring network connectivity to domain controllers from Microsoft Entra hybrid joined and Microsoft Entra joined VMs. - **Active Directory authentication over SMB for Linux clients**: Azure Files supports identity-based authentication over SMB for Linux clients using the Kerberos authentication protocol through either AD DS or Microsoft Entra Domain Services. - **Azure storage account key**: Azure file shares may also be mounted with an Azure storage account key. To mount a file share this way, the storage account name is used as the username and the storage account key is used as a password. Using the storage account key to mount the Azure file share is effectively an administrator operation, because the mounted file share will have full permissions to all of the files and folders on the share, even if they have ACLs. When using the storage account key to mount over SMB, the NTLMv2 authentication protocol is used. If you intend to use the storage account key to access your Azure file shares, we recommend using private endpoints or service endpoints as described in the [Networking](#networking) section. |
storage | Azure File Migration Program Solutions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/azure-file-migration-program-solutions.md | The following comparison matrix shows basic functionality, and comparison of mig | | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | | |||| | **Solution name** | [Miria](https://www.atempo.com/solutions/miria-migration-for-hybrid-nas-and-file-storages/)| [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) |-| **Support provided by** | [Atempo](https://www.atempo.com/support-en/contacting-support/) | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | +| **Support provided by** | [Atempo](https://www.atempo.com/support-en/contacting-support/) | [Data Dynamics](https://ddsupport.datadynamicsinc.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | | **Azure Files support (all tiers)** | Yes | Yes | Yes | | **Azure NetApp Files support** | Yes | Yes | Yes | | **Azure Blob Hot / Cool support** | Yes | Yes | Yes | The following comparison matrix shows basic functionality, and comparison of mig ## Supported protocols (source / destination) -| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | +| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://ddsupport.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | | |||| | **Solution name** | [Miria](https://www.atempo.com/solutions/miria-migration-for-hybrid-nas-and-file-storages/)| [Data 
Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | | **SMB 2.1** | Yes | Yes | Yes | The following comparison matrix shows basic functionality, and comparison of mig ## Extended features -| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | +| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://ddsupport.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | | |||| | **Solution name** | [Miria](https://www.atempo.com/solutions/miria-migration-for-hybrid-nas-and-file-storages/)| [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | | **UID / SID remapping** | No | Yes | No | The following comparison matrix shows basic functionality, and comparison of mig ## Assessment and reporting -| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | +| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://ddsupport.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | | |||| | **Solution name** | [Miria](https://www.atempo.com/solutions/miria-migration-for-hybrid-nas-and-file-storages/)| [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | | **Capacity** | Yes | Yes | Yes | The following comparison matrix shows basic 
functionality, and comparison of mig ## Licensing -| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | +| | [Atempo](https://www.atempo.com/) | [Data Dynamics](https://ddsupport.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | | |||| | **Solution name** | [Miria](https://www.atempo.com/solutions/miria-migration-for-hybrid-nas-and-file-storages/)| [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | | **BYOL** | Yes | Yes | Yes | The following comparison matrix shows basic functionality, and comparison of mig > [!IMPORTANT] > Support provided by ISV, not Microsoft- |
storage | Migration Tools Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/migration-tools-comparison.md | The following comparison matrix shows basic functionality of different tools tha | | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | [Cirrus Data](https://www.cirrusdata.com/) | | |--|--|--|||| | **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) | [Migrate Cloud](https://azuremarketplace.microsoft.com/en-us/marketplace/apps/cirrusdatasolutionsinc1618222951068.cirrus-migrate-cloud-sponsored-by-azure?tab=Overview) |-| **Support provided by** | Microsoft | Microsoft | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| [Cirrus Data](https://www.cirrusdata.com/global-support-services/)<sub>1</sub> | +| **Support provided by** | Microsoft | Microsoft | Data Dynamics<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| [Cirrus Data](https://www.cirrusdata.com/global-support-services/)<sub>1</sub> | | **Assessment** | No | No | Yes | Yes | Yes | Yes | | **SAN Migration** | No | No | No | No | 
No | Yes | | **NFS to Azure Blob** | Yes | Yes | Yes | Yes | Yes | No | The following comparison matrix shows basic functionality of different tools tha | | [Microsoft](https://www.microsoft.com/) | [Microsoft](https://www.microsoft.com/) | [Data Dynamics](https://www.datadynamicsinc.com/) | [Komprise](https://www.komprise.com/) | [Atempo](https://www.atempo.com/) | | |--|--|--||| | **Solution name** | [AzCopy](/azure/storage/common/storage-ref-azcopy-copy) | [Azure Storage Mover](/azure/storage-mover/) | [Data Mobility and Migration](https://azuremarketplace.microsoft.com/marketplace/apps/datadynamicsinc1581991927942.vm_4?tab=PlansAndPrice) | [Elastic Data Migration](https://azuremarketplace.microsoft.com/marketplace/apps/komprise_inc.intelligent_data_management?tab=OverviewΓÇï) | [Miria](https://azuremarketplace.microsoft.com/marketplace/apps/atempo1612274992591.miria_saas_prod?tab=Overview) |-| **Support provided by** | Microsoft | Microsoft | [Data Dynamics](https://www.datdynsupport.com/)<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| +| **Support provided by** | Microsoft | Microsoft | Data Dynamics<sub>1</sub> | [Komprise](https://komprise.freshdesk.com/support/home)<sub>1</sub> | [Atempo](https://www.atempo.com/support-en/contacting-support/)<sub>1</sub>| | **Azure Files support (all tiers)** | Yes | Yes | Yes | Yes | Yes | | **Azure NetApp Files support** | No | No | Yes | Yes | Yes | | **Azure Blob Hot / Cool support** | Yes | Yes | Yes | Yes | Yes | |
storage | Storagex Quick Start Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/solution-integration/validated-partners/data-management/storagex-quick-start-guide.md | -This article helps you deploy Data Dynamics StorageX in Microsoft Azure. We introduce key concepts around how StorageX works, deployment prerequisites, installation process, and how-tos for operational guidance. For more in-depth information, visit [Data Dynamics Customer Portal](https://www.datdynsupport.com/). +This article helps you deploy Data Dynamics StorageX in Microsoft Azure. We introduce key concepts around how StorageX works, deployment prerequisites, installation process, and how-tos for operational guidance. For more in-depth information, visit [Data Dynamics Customer Portal](https://ddsupport.datadynamicsinc.com/). Data Dynamics StorageX is a Unified Unstructured Data Management platform that allows analyzing, managing, and moving data across heterogenous storage environments. Basic capabilities are: - Data Movement capabilities If issues occur, Microsoft and Data Dynamics can provide help using regular supp In the [Azure portal](https://portal.azure.com) search for support in the search bar at the top. Select **Help + support** -> **New Support Request**. -### How to open a case with Data Dynamics +### How to open a case with Data Dynamics -Go to the [Data Dynamics Support Portal](https://www.datdynsupport.com/). If you have not registered, provide your email address, and our Support team will create an account for you. Once you have signed in, open a user request. If you have already opened an Azure support case, note support request number when creating the request. +Go to the [Data Dynamics Support Portal](https://ddsupport.datadynamicsinc.com/). If you have not registered, provide your email address, and our Support team will create an account for you. Once you have signed in, open a user request. 
If you have already opened an Azure support case, note support request number when creating the request. ## Next steps Various resources are available to learn more: - [Storage migration overview](../../../common/storage-migration-overview.md) - Features supported by Data Dynamics StorageX in [migration tools comparison matrix](./migration-tools-comparison.md)-- [Data Dynamics](https://www.datadynamicsinc.com/)-- [Data Dynamics Customer Portal](https://www.datdynsupport.com/) contains full documentation for StorageX+- [Data Dynamics](https://ddsupport.datadynamicsinc.com/) +- [Data Dynamics Customer Portal](https://ddsupport.datadynamicsinc.com/) contains full documentation for StorageX |
stream-analytics | Confluent Kafka Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-input.md | description: Learn about how to set up an Azure Stream Analytics job as a consum + Last updated 11/09/2023 |
stream-analytics | Confluent Kafka Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/confluent-kafka-output.md | description: Learn about how to set up an Azure Stream Analytics job as a produc + Last updated 11/09/2023 |
stream-analytics | Kafka Output | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/kafka-output.md | You can use four types of security protocols to connect to your Kafka clusters: > [!NOTE] > For SASL_SSL and SASL_PLAINTEXT, Azure Stream Analytics supports only PLAIN SASL mechanism.+> You must upload certificates as secrets to key vault using Azure CLI. |Property name |Description | |-|--| You can use four types of security protocols to connect to your Kafka clusters: > [!IMPORTANT]-> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics doesn't support authentication using OAuth or SAML single sign-on (SSO). -> You can connect to confluent cloud using an API Key that has topic-level access via the SASL_SSL security protocol. +> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics doesn't support OAuth or SAML single sign-on (SSO) authentication. +> You can connect to the confluent cloud using an API Key with topic-level access via the SASL_SSL security protocol. -### Connect to Confluent Cloud using API key --Azure stream analytics is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server authentication. Confluent cloud uses TLS certificates from Let's Encrypt, an open certificate authority (CA). --Download the ISRG Root X1 certificate in **PEM** format on the site of [LetsEncrypt](https://letsencrypt.org/certificates/). ---> [!IMPORTANT] -> You must use Azure CLI to upload the certificate as a secret to your key vault. You cannot use Azure Portal to upload a certificate that has multiline secrets to key vault. -> The default timestamp type for a topic in a confluent cloud kafka cluster is **CreateTime**, make sure you update it to **LogAppendTime**. -> Azure Stream Analytics supports only numerical decimal format. 
--To authenticate using the API Key confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows: --| Setting | Value | - | | | - | Username | confluent cloud API key | - | Password | confluent cloud API secret | - | Key vault name | name of Azure Key vault with uploaded certificate | - | Truststore certificates | name of the Key Vault Secret that holds the ISRG Root X1 certificate | -- :::image type="content" source="./media/kafka/kafka-input.png" alt-text="Screenshot showing how to configure kafka input for a stream analytics job." lightbox="./media/kafka/kafka-input.png" ::: --> [!NOTE] -> Depending on how your confluent cloud kafka cluster is configured, you may need a certificate different from the standard certificate confluent cloud uses for server authentication. Confirm with the admin of the confluent cloud kafka cluster to verify what certificate to use. --For step-by-step tutorial on connecting to confluent cloud kafka, visit the documentation: +For a step-by-step tutorial on connecting to confluent cloud kafka, visit the documentation: * Confluent cloud kafka input: [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md) * Confluent cloud kafka output: [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md) |
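The configuration table in the entry above (Username = API key, Password = API secret, truststore certificate held in key vault) maps onto standard librdkafka client properties, since the doc notes Azure Stream Analytics is a librdkafka-based client. As an illustrative sketch only, with placeholder values, the equivalent raw client configuration for a Confluent Cloud connection over SASL_SSL would look like:

```properties
# Hypothetical librdkafka-style settings; replace the placeholder values
security.protocol=SASL_SSL
sasl.mechanism=PLAIN
sasl.username=<confluent-cloud-api-key>
sasl.password=<confluent-cloud-api-secret>
ssl.ca.location=/path/to/isrg-root-x1.pem
```

In Azure Stream Analytics itself these values are supplied through the portal and key vault rather than as a client properties file.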
stream-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/policy-reference.md | Title: Built-in policy definitions for Azure Stream Analytics description: Lists Azure Policy built-in policy definitions for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
stream-analytics | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Stream Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Stream Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
stream-analytics | Stream Analytics Define Kafka Input | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-define-kafka-input.md | You can use four types of security protocols to connect to your Kafka clusters: > [!NOTE] > For SASL_SSL and SASL_PLAINTEXT, Azure Stream Analytics supports only PLAIN SASL mechanism.+> > You must upload certificates as secrets to key vault using Azure CLI. |Property name |Description | |-|--| You can use four types of security protocols to connect to your Kafka clusters: > [!IMPORTANT]-> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics doesn't support authentication using OAuth or SAML single sign-on (SSO). -> You can connect to confluent cloud using an API Key that has topic-level access via the SASL_SSL security protocol. +> Confluent Cloud supports authentication using API Keys, OAuth, or SAML single sign-on (SSO). Azure Stream Analytics doesn't support OAuth or SAML single sign-on (SSO) authentication. +> You can connect to the confluent cloud using an API Key with topic-level access via the SASL_SSL security protocol. -### Connect to Confluent Cloud using API key --Azure stream analytics is a librdkafka-based client, and to connect to confluent cloud, you need TLS certificates that confluent cloud uses for server authentication. Confluent cloud uses TLS certificates from Let's Encrypt, an open certificate authority (CA). --Download the ISRG Root X1 certificate in **PEM** format on the site of [LetsEncrypt](https://letsencrypt.org/certificates/). ---> [!IMPORTANT] -> You must use Azure CLI to upload the certificate as a secret to your key vault. You cannot use Azure Portal to upload a certificate that has multiline secrets to key vault. -> The default timestamp type for a topic in a confluent cloud kafka cluster is **CreateTime**, make sure you update it to **LogAppendTime**. 
-> Azure Stream Analytics supports only numerical decimal format. --To authenticate using the API Key confluent offers, you must use the SASL_SSL protocol and complete the configuration as follows: --| Setting | Value | - | | | - | Username | confluent cloud API key | - | Password | confluent cloud API secret | - | Key vault name | name of Azure Key vault with uploaded certificate | - | Truststore certificates | name of the Key Vault Secret that holds the ISRG Root X1 certificate | -- :::image type="content" source="./media/kafka/kafka-input.png" alt-text="Screenshot showing how to configure kafka input for a stream analytics job." lightbox="./media/kafka/kafka-input.png" ::: --> [!NOTE] -> Depending on how your confluent cloud kafka cluster is configured, you may need a certificate different from the standard certificate confluent cloud uses for server authentication. Confirm with the admin of the confluent cloud kafka cluster to verify what certificate to use. --For step-by-step tutorial on connecting to confluent cloud kafka, visit the documentation: +For a step-by-step tutorial on connecting to confluent cloud kafka, visit the documentation: * Confluent cloud kafka input: [Stream data from confluent cloud Kafka with Azure Stream Analytics](confluent-kafka-input.md) * Confluent cloud kafka output: [Stream data from Azure Stream Analytics into confluent cloud](confluent-kafka-output.md) Certificates are stored as secrets in the key vault and must be in PEM format. ### Configure Key vault with permissions You can create a key vault resource by following the documentation [Quickstart: Create a key vault using the Azure portal](../key-vault/general/quick-create-portal.md)-To upload certificates, you must have "**Key Vault Administrator**" access to your Key vault. +You must have "**Key Vault Administrator**" access to your Key vault to upload certificates. Follow these steps to grant admin access: > [!NOTE] |
synapse-analytics | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/policy-reference.md | Title: Built-in policy definitions description: Lists Azure Policy built-in policy definitions for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
synapse-analytics | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Synapse Analytics description: Lists Azure Policy Regulatory Compliance controls available for Azure Synapse Analytics. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
synapse-analytics | Apache Spark Rapids Gpu | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/spark/apache-spark-rapids-gpu.md | resultDF.Show(); ## How to tune your application for GPUs -Most Spark jobs can see improved performance through tuning configuration settings from defaults, and the same holds true for jobs leveraging the RAPIDS accelerator plugin for Apache Spark. [This documentation](https://nvidia.github.io/spark-rapids/docs/tuning-guide.html) provides guidelines on how to tune a Spark job to run on GPUs using the RAPIDS plugin. -+Most Spark jobs can see improved performance through tuning configuration settings from defaults, and the same holds true for jobs leveraging the RAPIDS accelerator plugin for Apache Spark. ## Quotas and resource constraints in Azure Synapse GPU-enabled pools ### Workspace level |
synapse-analytics | Get Started Visual Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/get-started-visual-studio.md | -> Serverless SQL pool is not supported by SSDT. +> Serverless SQL pool support requires at least Visual Studio 2022 version 17.7. See the release notes: [Support for serverless SQL pool in SSDT](/visualstudio/releases/2022/release-notes-v17.7#support-for-serverless-sql-pool-in-ssdt). ## Prerequisites |
update-manager | Dynamic Scope Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/dynamic-scope-overview.md | For Dynamic Scoping and configuration assignment, ensure that you have the follo ## Service limits -The following are the Dynamic scope limits for **each dynamic scope**. +The following are the recommended limits for **each dynamic scope**. | Resource | Limit | |-|-| |
update-manager | Manage Alerts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-alerts.md | + + Title: Create alerts in Azure Update Manager +description: This article describes how to enable alerts (preview) with Azure Update Manager to address events as captured in updates data. +++ Last updated : 11/21/2023++++# Create alerts (preview) ++**Applies to:** :heavy_check_mark: Windows VMs :heavy_check_mark: Linux VMs :heavy_check_mark: On-premises environment :heavy_check_mark: Azure Arc-enabled servers. ++This article provides steps to enable alerts (preview) with [Azure Update Manager](overview.md) to address events as captured in updates data. ++Azure Update Manager is a unified service that allows you to manage and govern updates for all your Windows and Linux virtual machines across your deployments in Azure, on-premises, and on other cloud platforms from a single dashboard. It's designed as a standalone Azure service to provide a SaaS experience for managing hybrid environments in Azure. ++Logs created from patching operations such as update assessments and installations are stored by Azure Update Manager in Azure Resource Graph (ARG). You can view up to the last seven days of assessment data and up to the last 30 days of update installation results. ++## Prerequisite ++An alert rule based on an ARG query requires a managed identity with the Reader role assigned for the targeted resources. ++## Enable alerts (preview) with Azure Update Manager ++To enable alerts (preview) with Azure Update Manager through the Azure portal, follow these steps: ++1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**. +1. Under **Monitoring**, select **New alerts rule (Preview)** to create a new alert rule. + + :::image type="content" source="./media/manage-alerts/access-alerts-inline.png" alt-text="Screenshot that shows how to access the alerts feature." lightbox="./media/manage-alerts/access-alerts-expanded.png"::: + +1. 
On the **Azure Update Manager | New alerts rule (Preview)** page, provide the following details: + 1. Select a **Subscription** from the dropdown to set the scope of the alert rule. + 1. From the **Azure Resource Graph query** dropdown, select a predefined alerting query option. + 1. You can select the **Custom query** option to edit or write a custom query. + + :::image type="content" source="./media/manage-alerts/create-alert-rule-inline.png" alt-text="Screenshot that shows how to create an alert rule." lightbox="./media/manage-alerts/create-alert-rule-expanded.png"::: + + 1. Select **View result and edit query in Logs** to run a selected alerting query option or to edit a query. + + :::image type="content" source="./media/manage-alerts/edit-query-inline.png" alt-text="Screenshot that shows how to edit a query in Logs." lightbox="./media/manage-alerts/edit-query-expanded.png"::: + + 1. Select **Run** to run the query and enable **Continue Editing Alert**. + + :::image type="content" source="./media/manage-alerts/run-query-inline.png" alt-text="Screenshot that shows how to run the query." lightbox="./media/manage-alerts/run-query-expanded.png"::: + +1. If you don't want to run a selected query or edit a query, select **Continue to create a new alert rule** to move to the alert rule creation flow, where you can set up the advanced alert rule configuration. + + :::image type="content" source="./media/manage-alerts/advance-alert-rule-configuration-inline.png" alt-text="Screenshot that shows how to configure an advanced alert rule." lightbox="./media/manage-alerts/advance-alert-rule-configuration-expanded.png"::: ++1. Select **Review + create** to create the alert. For more information, see [Create Azure Monitor alert rules](../azure-monitor/alerts/alerts-create-new-alert-rule.md#set-the-alert-rule-conditions). + - To identify alerts and alert rules created for Azure Update Manager, provide a unique **Alert rule name** on the **Details** tab. 
+ :::image type="content" source="./media/manage-alerts/unique-alert-name-inline.png" alt-text="Screenshot that shows how to create a unique alert name." lightbox="./media/manage-alerts/unique-alert-name-expanded.png"::: ++## View alerts ++To view the alerts, follow these steps: ++1. Sign in to the [Azure portal](https://portal.azure.com) and go to **Azure Update Manager**. +1. Under **Monitoring**, select **New alerts rule (Preview)**. +1. Select **Go to alerts**. ++ :::image type="content" source="./media/manage-alerts/view-alerts-inline.png" alt-text="Screenshot that shows how to view alerts." lightbox="./media/manage-alerts/view-alerts-expanded.png"::: + +1. On the **Monitor | Alerts** page, you can view all the alerts. ++ :::image type="content" source="./media/manage-alerts/display-view-alerts-inline.png" alt-text="Screenshot that displays the list of alerts." lightbox="./media/manage-alerts/display-view-alerts-expanded.png"::: +++> [!NOTE] +> - The Azure Resource Graph query used for alerts can return a maximum of 1,000 rows. +> - By default, the Azure Resource Graph query returns results based on the access granted to the user's managed identity. Filter by subscriptions, resource groups, and other criteria as needed. ++## Next steps ++* [An overview of Azure Update Manager](overview.md) +* [Check update compliance](view-updates.md) +* [Deploy updates now (on-demand) for a single machine](deploy-updates.md) +* [Schedule recurring updates](scheduled-patching.md) |
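Since the alert rules above are built on Azure Resource Graph queries capped at 1,000 rows, it can help to see the shape of one. A minimal sketch that composes such a KQL query string; the `patchassessmentresources` table and property name are assumptions about Update Manager's ARG schema, so verify them in the Logs editor before creating a rule:

```python
# Sketch: build an ARG (KQL) query string for an Update Manager alert rule.
# Table and property names are assumptions; confirm them in the query editor.
ARG_ROW_LIMIT = 1000  # ARG queries used for alerts return at most 1,000 rows

def assessment_query(days: int = 7, row_limit: int = ARG_ROW_LIMIT) -> str:
    """Return a KQL query for recent patch assessment results."""
    if not 1 <= row_limit <= ARG_ROW_LIMIT:
        raise ValueError(f"row_limit must be between 1 and {ARG_ROW_LIMIT}")
    if not 1 <= days <= 7:
        raise ValueError("assessment data is retained for up to 7 days")
    return (
        "patchassessmentresources\n"
        f"| where properties.lastModifiedDateTime > ago({days}d)\n"
        f"| limit {row_limit}"
    )

print(assessment_query())
```

Capping `days` at 7 mirrors the assessment-data retention window noted above; installation results would use a window of up to 30 days instead.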
update-manager | Manage Updates Customized Images | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/manage-updates-customized-images.md | Title: Overview of customized images in Azure Update Manager description: This article describes customized image support, how to register and validate customized images for public preview, and limitations. -+ Last updated 11/20/2023 -> [!NOTE] -> Currently, schedule patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md) and **VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery** are supported in preview. - ## Asynchronous check to validate customized image support -If you're using Azure Compute Gallery (formerly known as Shared Image Gallery) to create customized images, you can use Update Manager operations such as **Check for updates**, **One-time update**, **Schedule updates**, or **Periodic assessment** to validate if the VMs are supported for guest patching. If the VMs are supported, you can begin patching. +If you're using customized images, you can use Update Manager operations such as **Check for updates**, **One-time update**, **Schedule updates**, or **Periodic assessment** to validate whether the VMs are supported for guest patching. If the VMs are supported, you can begin patching. With marketplace images, support is validated even before an Update Manager operation is triggered. For customized images, there are no preexisting validations in place and the Update Manager operations are triggered. Only their success or failure determines support. -For instance, an assessment call attempts to fetch the latest patch that's available from the image's OS family to check support. 
It stores this support-related data in an Azure Resource Graph table, which you can query to see the support status for your VM created from a customized image. ## Check support for customized images We recommend that you run the Assess Patches API after the VM is provisioned and ## Limitations -The Azure Compute Gallery images are of two types: -- [Generalized](../virtual-machines/linux/imaging.md#generalized-images) images -- [Specialized](../virtual-machines/linux/imaging.md#specialized-images) images--Currently, scheduled patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md#specialized-images) and VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery are supported in preview. - The following supported scenarios are for both types. --| Images | Currently supported scenarios | Unsupported scenarios | -| | | | -| Azure Compute Gallery: Generalized images | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment </br> - Scheduled patching | Automatic VM guest patching | -| Azure Compute Gallery: Specialized images | - On-demand assessment </br> - On-demand patching </br> - Periodic assessment (preview) </br> - Scheduled patching (preview) </br> | Automatic VM guest patching | -| Non-Azure Compute Gallery images (non-SIG)| - On-demand assessment </br> - On-demand patching </br> - Periodic assessment (preview) </br> - Scheduled patching (preview) </br> | Automatic VM guest patching | --Automatic VM guest patching doesn't work on Azure Compute Gallery images even if Patch orchestration mode is set to `Azure orchestrated/AutomaticByPlatform`. You can use scheduled patching to patch the machines and define your own schedules. +Automatic VM guest patching doesn't work on customized images even if the Patch orchestration mode is set to `Azure orchestrated/AutomaticByPlatform`. You can use scheduled patching to patch the machines by defining your own schedules or by installing updates on-demand. 
## Next steps |
update-manager | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/overview.md | description: This article tells what Azure Update Manager in Azure is and the sy Previously updated : 09/25/2023 Last updated : 11/13/2023 You can use Update Manager in Azure to: - Oversee update compliance for your entire fleet of machines in Azure, on-premises, and in other cloud environments. - Instantly deploy critical updates to help secure your machines.-- Use flexible patching options such as [automatic virtual machine (VM) guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hot patching](../automanage/automanage-hotpatch.md), and customer-defined maintenance schedules.+- Use flexible patching options such as [automatic virtual machine (VM) guest patching](../virtual-machines/automatic-vm-guest-patching.md) in Azure, [hotpatching](../automanage/automanage-hotpatch.md), and customer-defined maintenance schedules. We also offer other capabilities to help you manage updates for your Azure VMs that you should consider as part of your overall update management strategy. To learn more about the options that are available, see the Azure VM [update options](../virtual-machines/updates-maintenance-overview.md). Actions |Permission |Scope | For more information, see the [list of supported operating systems and VM images](support-matrix.md#supported-operating-systems). -- [Specialized images](../virtual-machines/linux/imaging.md#specialized-images) - Azure Update Manager now supports scheduled patching and periodic assessment for VMs including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery in preview.--Currently, Update Manager has the following limitation regarding operating system support: ---For the preceding limitation, we recommend that you use [Automation Update Management](../automation/update-management/overview.md) until support is available in Update Manager. 
To learn more, see [Supported operating systems](support-matrix.md#supported-operating-systems). + Azure Update Manager supports [specialized images](../virtual-machines/linux/imaging.md#specialized-images) including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery. ## VM extensions |
update-manager | Query Logs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/query-logs.md | Title: Query logs and results from Update Manager description: This article provides details on how you can review logs and search results from Azure Update Manager by using Azure Resource Graph.-+ Previously updated : 09/18/2023 Last updated : 11/21/2023 |
update-manager | Quickstart On Demand | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/quickstart-on-demand.md | Title: 'Quickstart: Deploy updates by using Update Manager in the Azure portal' description: This quickstart helps you to deploy updates immediately and view results for supported machines in Azure Update Manager by using the Azure portal.- Previously updated : 09/18/2023+ Last updated : 11/21/2023 |
update-manager | Support Matrix | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/support-matrix.md | description: This article provides a summary of supported regions and operating Previously updated : 09/18/2023 Last updated : 11/13/2023 United States | Central US </br> East US </br> East US 2</br> North Central US < All operating systems are assumed to be x64. For this reason, x86 isn't supported for any operating system. Update Manager doesn't support CIS-hardened images. -> [!NOTE] -> Currently, schedule patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md) and **VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery** are supported in preview. - # [Azure VMs](#tab/azurevm-os) ### Azure Marketplace/PIR images The Azure Marketplace image has the following attributes: - **SKU**: An instance of an offer, such as a major release of a distribution. Examples are `18.04LTS` and `2019-Datacenter`. - **Version**: The version number of an image SKU. -Update Manager supports the following operating system versions. You might experience failures if there are any configuration changes on the VMs, such as package or repository. +Update Manager supports the following operating system versions on VMs. You might experience failures if there are any configuration changes on the VMs, such as package or repository changes. #### Windows operating systems The following table lists the operating systems for Azure Marketplace images tha ### Custom images -We support [generalized](../virtual-machines/linux/imaging.md#generalized-images) custom images. Currently, scheduled patching and periodic assessment on [specialized images](../virtual-machines/linux/imaging.md#specialized-images) and VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery are supported in preview. +We support VMs created from customized images. The following table lists the operating systems that we support for them. 
For instructions on how to use Update Manager to manage updates on custom images, see [Manage updates for custom images](manage-updates-customized-images.md). -The following table lists the operating systems that we support for customized images. For instructions on how to use Update Manager to manage updates on custom images, see [Custom images (preview)](manage-updates-customized-images.md). +> [!NOTE] +> Automatic VM guest patching doesn't work on customized images even if the Patch orchestration mode is set to `Azure orchestrated/AutomaticByPlatform`. You can use scheduled patching to patch the machines by defining your own schedules or by installing updates on-demand. |**Windows operating system**| || The following table lists the operating systems supported on [Azure Arc-enabled -## Unsupported operating systems +## Unsupported workloads -The following table lists the operating systems that aren't supported. +The following table lists the workloads that aren't supported. - | **Operating system**| **Notes** + | **Workloads**| **Notes** |-|-| | Windows client | For client operating systems such as Windows 10 and Windows 11, we recommend [Microsoft Intune](/mem/intune/) to manage updates.|- | Virtual machine scale sets| We recommend that you use [Automatic upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) to patch the virtual machine scale sets.| + | Virtual Machine Scale Sets| We recommend that you use [Automatic upgrades](../virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-upgrade.md) to patch Virtual Machine Scale Sets.| | Azure Kubernetes Service nodes| We recommend the patching described in [Apply security and kernel updates to Linux nodes in Azure Kubernetes Service (AKS)](/azure/aks/node-updates-kured).| -Because Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager or Windows Update client is enabled and can connect with an update source or 
repository. If you're running a Windows Server OS on your machine, see [Configure Windows Update settings](configure-wu-agent.md). +As Update Manager depends on your machine's OS package manager or update service, ensure that the Linux package manager or Windows Update client is enabled and can connect with an update source or repository. If you're running a Windows Server OS on your machine, see [Configure Windows Update settings](configure-wu-agent.md). ## Next steps |
update-manager | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-new.md | +## November 2023 ++### Alerting (preview) ++Azure Update Manager allows you to enable alerts to address events as captured in updates data. ++### Azure Stack HCI patching (preview) ++Azure Update Manager allows you to patch Azure Stack HCI clusters. [Learn more](/azure-stack/hci/update/azure-update-manager-23h2?toc=/azure/update-manager/toc.json&bc=/azure/update-manager/breadcrumb/toc.json) + ## October 2023 -### Azure Migrate, Azure Backup, Azure Site Recovery VMs support (preview) +### Azure Migrate, Azure Backup, Azure Site Recovery VMs support -Azure Update Manager now supports scheduled patching and periodic assessment for [specialized](../virtual-machines/linux/imaging.md#specialized-images) VMs including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery in preview. +Azure Update Manager now supports [specialized](../virtual-machines/linux/imaging.md#specialized-images) VMs including the VMs created by Azure Migrate, Azure Backup, and Azure Site Recovery. [Learn more](manage-updates-customized-images.md). ## September 2023 You can now enable periodic assessment for your machines at scale using [Policy] ## Next steps -- [Learn more](support-matrix.md) about supported regions.+- [Learn more](support-matrix.md) about supported regions. |
update-manager | Whats Upcoming | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/update-manager/whats-upcoming.md | Last updated 11/07/2023 The article [What's new in Azure Update Manager](whats-new.md) contains updates of feature releases. This article lists all the upcoming features for Azure Update Manager. -## Azure Stack HCI patching (preview) -Azure Update Manager will allow you to patch Azure Stack HCI cluster. --## Alerting -Enable alerts to address events as captured in updates data. ## Prescript and postscript |
virtual-desktop | Check Access Validate Required Fqdn Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/check-access-validate-required-fqdn-endpoint.md | + + Title: Check access to required FQDNs and endpoints for Azure Virtual Desktop +description: The Azure Virtual Desktop Agent URL Tool enables you to check that your session host virtual machines can access the required FQDNs and endpoints to ensure Azure Virtual Desktop works as intended. ++ Last updated : 11/21/2023++++# Check access to required FQDNs and endpoints for Azure Virtual Desktop ++In order to deploy Azure Virtual Desktop, you must allow specific FQDNs and endpoints. You can find the list of FQDNs and endpoints in [Required FQDNs and endpoints](required-fqdn-endpoint.md). ++Available as part of the Azure Virtual Desktop Agent (*RDAgent*) on each session host, the *Azure Virtual Desktop Agent URL Tool* enables you to quickly and easily validate whether your session hosts can access each FQDN and endpoint. If it can't, the tool lists any required FQDNs and endpoints it can't access so you can unblock them and retest, if needed. ++> [!NOTE] +> The Azure Virtual Desktop Agent URL Tool doesn't verify that you've allowed access to wildcard entries we specify for FQDNs, only specific entries within those wildcards that depend on the session host location, so make sure the wildcard entries are allowed before you run the tool. ++## Prerequisites ++You need the following things to use the Azure Virtual Desktop Agent URL Tool: ++- A session host VM. ++- Your session host must have .NET Framework 4.6.2 installed. ++- RDAgent version 1.0.2944.400 or higher on your session host. The executable for the Azure Virtual Desktop Agent URL Tool is `WVDAgentUrlTool.exe` and is included in the same installation folder as the RDAgent, for example `C:\Program Files\Microsoft RDInfra\RDAgent_1.0.2944.1200`. 
++- The `WVDAgentUrlTool.exe` file must be in the same folder as the `WVDAgentUrlTool.config` file. ++## Use the Azure Virtual Desktop Agent URL Tool ++To use the Azure Virtual Desktop Agent URL Tool: ++1. Open PowerShell as an administrator on a session host. ++1. Run the following commands to change the directory to the same folder as the latest RDAgent installed on your session host: ++ ```powershell + $RDAgent = Get-WmiObject -Class Win32_Product | ? Name -eq "Remote Desktop Services Infrastructure Agent" | Sort-Object Version -Descending + $path = ($RDAgent[0]).InstallSource + "RDAgent_" + ($RDAgent[0]).Version + + cd $path + ``` ++1. Run the following command to run the Azure Virtual Desktop Agent URL Tool: ++ ```powershell + .\WVDAgentUrlTool.exe + ``` + +1. Once you run the file, you see a list of accessible and inaccessible FQDNs and endpoints. ++ For example, the following screenshot shows a scenario where you'd need to unblock two required FQDNs: ++ :::image type="content" source="media/check-access-validate-required-fqdn-endpoint/agent-url-tool-inaccessible.png" alt-text="A screenshot of the Azure Virtual Desktop Agent URL Tool showing that some FQDNs are inaccessible."::: ++ Here's what the output should look like when all required FQDNs and endpoints are accessible. The Azure Virtual Desktop Agent URL Tool doesn't verify that you allowed access to wildcard entries we specify for FQDNs. ++ :::image type="content" source="media/check-access-validate-required-fqdn-endpoint/agent-url-tool-accessible.png" alt-text="A screenshot of the Azure Virtual Desktop Agent URL Tool showing that all FQDNs and endpoints are accessible."::: ++1. You can repeat these steps on your other session hosts, particularly if they're in a different Azure region or use a different virtual network. ++## Next steps ++- Review the list of [Required FQDNs and endpoints for Azure Virtual Desktop](required-fqdn-endpoint.md). 
++- To learn how to unblock these FQDNs and endpoints in Azure Firewall, see [Use Azure Firewall to protect Azure Virtual Desktop](../firewall/protect-azure-virtual-desktop.md). ++- For more information about network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md). |
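Conceptually, what the Agent URL Tool reports for each entry boils down to attempting an outbound connection and sorting the results into accessible and inaccessible lists. A minimal stand-in sketch (not the tool itself); the two endpoints are an excerpt from the required-FQDN tables, and the connection function is injectable so the logic can be exercised without network access:

```python
# Sketch: classify required FQDN/port pairs as accessible or inaccessible
# by attempting an outbound TCP connection, loosely mirroring what the
# Azure Virtual Desktop Agent URL Tool reports. Illustrative only.
import socket

REQUIRED_ENDPOINTS = [
    ("login.microsoftonline.com", 443),  # authentication
    ("kms.core.windows.net", 1688),      # Windows activation
]

def check_endpoints(endpoints, connect=socket.create_connection, timeout=3):
    """Return (accessible, inaccessible) lists of (fqdn, port) pairs."""
    accessible, inaccessible = [], []
    for fqdn, port in endpoints:
        try:
            conn = connect((fqdn, port), timeout=timeout)
            conn.close()
            accessible.append((fqdn, port))
        except OSError:
            inaccessible.append((fqdn, port))
    return accessible, inaccessible
```

Any endpoint that lands in the inaccessible list would need to be unblocked in your firewall or proxy and retested.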
virtual-desktop | Install Office On Wvd Master Image | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/install-office-on-wvd-master-image.md | This article also assumes you have elevated access on the VM, whether it's provi Shared computer activation lets you deploy Microsoft 365 Apps for enterprise to a computer in your organization that is accessed by multiple users. For more information about shared computer activation, see [Overview of shared computer activation for Microsoft 365 Apps](/deployoffice/overview-shared-computer-activation). -Use the [Office Deployment Tool](https://www.microsoft.com/download/details.aspx?id=49117) to install Office. Windows 10 Enterprise multi-session only supports the following versions of Office: +Use the [Office Deployment Tool](https://www.microsoft.com/download/details.aspx?id=49117) to install Office. Windows 10 Enterprise multi-session and Windows 11 Enterprise multi-session only support the following versions of Office: - Microsoft 365 Apps for enterprise - Microsoft 365 Apps for business that comes with a Microsoft 365 Business Premium subscription |
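Shared computer activation is turned on through the Office Deployment Tool's XML configuration file. A minimal sketch of such a configuration, with `SharedComputerLicensing` set to `1`; the channel, language, and app exclusions shown are illustrative assumptions, so adjust them to your image:

```xml
<Configuration>
  <Add OfficeClientEdition="64" Channel="MonthlyEnterprise">
    <Product ID="O365ProPlusRetail">
      <Language ID="en-us" />
      <!-- Illustrative: exclude apps you don't want in the image -->
      <ExcludeApp ID="Groove" />
      <ExcludeApp ID="OneDrive" />
    </Product>
  </Add>
  <!-- Enables shared computer activation for multi-user session hosts -->
  <Property Name="SharedComputerLicensing" Value="1" />
  <Display Level="None" AcceptEULA="TRUE" />
</Configuration>
```

You'd then run the Office Deployment Tool with this file, for example `setup.exe /configure configuration.xml`, where `configuration.xml` is a placeholder file name.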
virtual-desktop | Required Fqdn Endpoint | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/required-fqdn-endpoint.md | + + Title: Required FQDNs and endpoints for Azure Virtual Desktop +description: A list of FQDNs and endpoints you must allow, ensuring your Azure Virtual Desktop deployment works as intended. +++ Last updated : 11/21/2023+++# Required FQDNs and endpoints for Azure Virtual Desktop ++In order to deploy Azure Virtual Desktop and for your users to connect, you must allow specific FQDNs and endpoints. Users also need to be able to connect to certain FQDNs and endpoints to access their Azure Virtual Desktop resources. This article lists the required FQDNs and endpoints you need to allow for your session hosts and users. ++These FQDNs and endpoints could be blocked if you're using a firewall, such as [Azure Firewall](../firewall/protect-azure-virtual-desktop.md), or a proxy service. For guidance on using a proxy service with Azure Virtual Desktop, see [Proxy service guidelines for Azure Virtual Desktop](proxy-server-support.md). This article doesn't include FQDNs and endpoints for other services such as Microsoft Entra ID, Office 365, custom DNS providers, or time services. Microsoft Entra FQDNs and endpoints can be found under IDs *56*, *59*, and *125* in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). ++You can check that your session host VMs can connect to these FQDNs and endpoints by following the steps to run the *Azure Virtual Desktop Agent URL Tool* in [Check access to required FQDNs and endpoints for Azure Virtual Desktop](check-access-validate-required-fqdn-endpoint.md). The Azure Virtual Desktop Agent URL Tool validates each FQDN and endpoint and shows whether your session hosts can access them. 
++> [!IMPORTANT] +> Microsoft doesn't support Azure Virtual Desktop deployments where the FQDNs and endpoints listed in this article are blocked. ++## Session host virtual machines ++The following table lists the FQDNs and endpoints your session host VMs need to access for Azure Virtual Desktop. All entries are outbound; you don't need to open inbound ports for Azure Virtual Desktop. Select the relevant tab based on which cloud you're using. ++# [Azure cloud](#tab/azure) ++| Address | Protocol | Outbound port | Purpose | Service tag | +|--|--|--|--|--| +| `login.microsoftonline.com` | TCP | 443 | Authentication to Microsoft Online Services | +| `*.wvd.microsoft.com` | TCP | 443 | Service traffic | WindowsVirtualDesktop | +| `*.prod.warm.ingest.monitor.core.windows.net` | TCP | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor | +| `catalogartifact.azureedge.net` | TCP | 443 | Azure Marketplace | AzureFrontDoor.Frontend | +| `gcs.prod.monitoring.core.windows.net` | TCP | 443 | Agent traffic | AzureCloud | +| `kms.core.windows.net` | TCP | 1688 | Windows activation | Internet | +| `azkms.core.windows.net` | TCP | 1688 | Windows activation | Internet | +| `mrsglobalsteus2prod.blob.core.windows.net` | TCP | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud | +| `wvdportalstorageblob.blob.core.windows.net` | TCP | 443 | Azure portal support | AzureCloud | +| `169.254.169.254` | TCP | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A | +| `168.63.129.16` | TCP | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A | +| `oneocsp.microsoft.com` | TCP | 80 | Certificates | N/A | +| `www.microsoft.com` | TCP | 80 | Certificates | N/A | ++The following table lists optional FQDNs and endpoints that your session host virtual machines might also need to access for other services: ++| Address | 
Protocol | Outbound port | Purpose | +|--|--|--|--| +| `login.windows.net` | TCP | 443 | Sign in to Microsoft Online Services and Microsoft 365 | +| `*.events.data.microsoft.com` | TCP | 443 | Telemetry Service | +| `www.msftconnecttest.com` | TCP | 80 | Detects if the session host is connected to the internet | +| `*.prod.do.dsp.mp.microsoft.com` | TCP | 443 | Windows Update | +| `*.sfx.ms` | TCP | 443 | Updates for OneDrive client software | +| `*.digicert.com` | TCP | 80 | Certificate revocation check | +| `*.azure-dns.com` | TCP | 443 | Azure DNS resolution | +| `*.azure-dns.net` | TCP | 443 | Azure DNS resolution | ++# [Azure for US Government](#tab/azure-for-us-government) ++| Address | Protocol | Outbound port | Purpose | Service tag | +|--|--|--|--|--| +| `login.microsoftonline.us` | TCP | 443 | Authentication to Microsoft Online Services | +| `*.wvd.azure.us` | TCP | 443 | Service traffic | WindowsVirtualDesktop | +| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | TCP | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor | +| `gcs.monitoring.core.usgovcloudapi.net` | TCP | 443 | Agent traffic | AzureCloud | +| `kms.core.usgovcloudapi.net` | TCP | 1688 | Windows activation | Internet | +| `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | TCP | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud | +| `wvdportalstorageblob.blob.core.usgovcloudapi.net` | TCP | 443 | Azure portal support | AzureCloud | +| `169.254.169.254` | TCP | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A | +| `168.63.129.16` | TCP | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A | +| `ocsp.msocsp.com` | TCP | 80 | Certificates | N/A | ++The following table lists optional FQDNs and endpoints that your session host virtual machines might also need to access for other services: ++| Address | Protocol | 
Outbound port | Purpose | +|--|--|--|--| +| `*.events.data.microsoft.com` | TCP | 443 | Telemetry Service | +| `www.msftconnecttest.com` | TCP | 80 | Detects if the session host is connected to the internet | +| `*.prod.do.dsp.mp.microsoft.com` | TCP | 443 | Windows Update | +| `oneclient.sfx.ms` | TCP | 443 | Updates for OneDrive client software | +| `*.digicert.com` | TCP | 80 | Certificate revocation check | +| `*.azure-dns.com` | TCP | 443 | Azure DNS resolution | +| `*.azure-dns.net` | TCP | 443 | Azure DNS resolution | ++++This list doesn't include FQDNs and endpoints for other services such as Microsoft Entra ID, Office 365, custom DNS providers or time services. Microsoft Entra FQDNs and endpoints can be found under ID *56*, *59* and *125* in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). ++> [!TIP] +> You must use the wildcard character (\*) for FQDNs involving *service traffic*. For *agent traffic*, if you prefer not to use a wildcard, here's how to find specific FQDNs to allow: +> +> 1. Ensure your session host virtual machines are registered to a host pool. +> 1. On a session host, open **Event viewer**, then go to **Windows logs** > **Application** > **WVD-Agent** and look for event ID **3701**. +> 1. Unblock the FQDNs that you find under event ID 3701. The FQDNs under event ID 3701 are region-specific. You'll need to repeat this process with the relevant FQDNs for each Azure region you want to deploy your session host virtual machines in. ++### Service tags and FQDN tags ++A [virtual network service tag](../virtual-network/service-tags-overview.md) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. 
Service tags can be used in both Network Security Group ([NSG](../virtual-network/network-security-groups-overview.md)) and [Azure Firewall](../firewall/service-tags.md) rules to restrict outbound network access. Service tags can also be used in a User Defined Route ([UDR](../virtual-network/virtual-networks-udr-overview.md#user-defined)) to customize traffic routing behavior. ++Azure Firewall supports Azure Virtual Desktop as an [FQDN tag](../firewall/fqdn-tags.md). For more information, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md). ++We recommend you use FQDN tags or service tags to simplify configuration. The listed FQDNs, endpoints, and tags only correspond to Azure Virtual Desktop sites and resources. They don't include FQDNs and endpoints for other services such as Microsoft Entra ID. For service tags for other services, see [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags). ++Azure Virtual Desktop doesn't have a list of IP address ranges that you can unblock instead of FQDNs to allow network traffic. If you're using a Next Generation Firewall (NGFW), you need to use a dynamic list made for Azure IP addresses to make sure you can connect. ++## End user devices ++Any device on which you use one of the [Remote Desktop clients](users/connect-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) to connect to Azure Virtual Desktop must have access to the following FQDNs and endpoints. Allowing these FQDNs and endpoints is essential for a reliable client experience. Blocking access to these FQDNs and endpoints is unsupported and affects service functionality. ++Select the relevant tab based on which cloud you're using. 
++# [Azure cloud](#tab/azure) ++| Address | Protocol | Outbound port | Purpose | Client(s) | +|--|--|--|--|--| +| `login.microsoftonline.com` | TCP | 443 | Authentication to Microsoft Online Services | All | +| `*.wvd.microsoft.com` | TCP | 443 | Service traffic | All | +| `*.servicebus.windows.net` | TCP | 443 | Troubleshooting data | All | +| `go.microsoft.com` | TCP | 443 | Microsoft FWLinks | All | +| `aka.ms` | TCP | 443 | Microsoft URL shortener | All | +| `learn.microsoft.com` | TCP | 443 | Documentation | All | +| `privacy.microsoft.com` | TCP | 443 | Privacy statement | All | +| `query.prod.cms.rt.microsoft.com` | TCP | 443 | Download an MSI to update the client. Required for automatic updates. | [Windows Desktop](users/connect-windows.md) | ++# [Azure for US Government](#tab/azure-for-us-government) ++| Address | Protocol | Outbound port | Purpose | Client(s) | +|--|--|--|--|--| +| `login.microsoftonline.us` | TCP | 443 | Authentication to Microsoft Online Services | All | +| `*.wvd.azure.us` | TCP | 443 | Service traffic | All | +| `*.servicebus.usgovcloudapi.net` | TCP | 443 | Troubleshooting data | All | +| `go.microsoft.com` | TCP | 443 | Microsoft FWLinks | All | +| `aka.ms` | TCP | 443 | Microsoft URL shortener | All | +| `learn.microsoft.com` | TCP | 443 | Documentation | All | +| `privacy.microsoft.com` | TCP | 443 | Privacy statement | All | +| `query.prod.cms.rt.microsoft.com` | TCP | 443 | Download an MSI to update the client. Required for automatic updates. | [Windows Desktop](users/connect-windows.md) | ++++These FQDNs and endpoints only correspond to client sites and resources. This list doesn't include FQDNs and endpoints for other services such as Microsoft Entra ID or Office 365. Microsoft Entra FQDNs and endpoints can be found under ID *56*, *59* and *125* in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). 
++## Next steps ++- [Check access to required FQDNs and endpoints for Azure Virtual Desktop](check-access-validate-required-fqdn-endpoint.md). ++- To learn how to unblock these FQDNs and endpoints in Azure Firewall, see [Use Azure Firewall to protect Azure Virtual Desktop](../firewall/protect-azure-virtual-desktop.md). ++- For more information about network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md) |
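Several entries in the allow lists above are wildcard FQDNs such as `*.wvd.microsoft.com`, which cover region-specific endpoints under a common parent domain. As a minimal illustrative sketch (not the matching logic of any particular firewall or proxy product), wildcard allow-list matching can be expressed like this; the pattern list below is a small sample taken from the tables above:

```python
from fnmatch import fnmatchcase

# A small sample of required FQDN patterns from the Azure cloud tables above.
ALLOW_PATTERNS = [
    "login.microsoftonline.com",
    "*.wvd.microsoft.com",
    "*.prod.warm.ingest.monitor.core.windows.net",
    "aka.ms",
]

def is_allowed(fqdn: str) -> bool:
    """Return True if the FQDN matches any allow-list entry.

    A leading "*." wildcard matches any subdomain of the parent domain,
    mirroring how the tables above use wildcards for region-specific
    service endpoints.
    """
    fqdn = fqdn.lower().rstrip(".")
    return any(fnmatchcase(fqdn, pattern) for pattern in ALLOW_PATTERNS)

print(is_allowed("rdweb.wvd.microsoft.com"))  # True: covered by *.wvd.microsoft.com
print(is_allowed("example.com"))              # False: not in the allow list
```

This also shows why the Required URL Check tool can only test specific FQDNs rather than the wildcard entries themselves: a wildcard denotes an open-ended set of hostnames, so only concrete members of that set can be probed.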
virtual-desktop | Required Url Check Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/required-url-check-tool.md | - Title: Use the Required URL Check tool for Azure Virtual Desktop -description: The Required URL Check tool enables you to check whether your session host virtual machines can access the required URLs to ensure Azure Virtual Desktop works as intended. -- Previously updated : 06/20/2023----# Required URL Check tool --In order to deploy and make Azure Virtual Desktop available to your users, you must unblock specific URLs so that your session host virtual machines (VMs) can access them anytime. You can find the list of URLs in [Required URL list](safe-url-list.md). --The Required URL Check tool will validate these URLs and show whether your session host VMs can access them. If not, the tool will list the inaccessible URLs so you can unblock them and then retest, if needed. --> [!NOTE] -> - You can only use the Required URL Check tool for deployments in the Azure public cloud; it doesn't check access for sovereign clouds. -> - The Required URL Check tool can't verify that wildcard entries are unblocked, only the specific URLs within those wildcards, so make sure the wildcard entries are unblocked first. --## Prerequisites --You need the following things to use the Required URL Check tool: --- A session host VM.--- Your session host VM must have .NET Framework 4.6.2 installed.--- RDAgent version 1.0.2944.400 or higher on your session host VM. The Required URL Check tool (`WVDAgentUrlTool.exe`) is included in the same installation folder, for example `C:\Program Files\Microsoft RDInfra\RDAgent_1.0.2944.1200`.--- The `WVDAgentUrlTool.exe` file must be in the same folder as the `WVDAgentUrlTool.config` file.--## Use the Required URL Check tool --To use the Required URL Check tool: --1. Open a command prompt as an administrator on one of your session host VMs. --1. 
Run the following command to change the directory to the same folder as the current build agent (RDAgent_1.0.2944.1200 in this example): -- ```cmd - cd "C:\Program Files\Microsoft RDInfra\RDAgent_1.0.2944.1200" - ``` --1. Run the following command to run the Required URL Check tool: -- ```cmd - WVDAgentUrlTool.exe - ``` - -1. Once you run the file, you'll see a list of accessible and inaccessible URLs. -- For example, the following screenshot shows a scenario where you'd need to unblock two required non-wildcard URLs: -- > [!div class="mx-imgBorder"] - > ![Screenshot of non-accessible URLs output.](media/noaccess.png) - - Here's what the output should look like once you've unblocked all required non-wildcard URLs: -- > [!div class="mx-imgBorder"] - > ![Screenshot of accessible URLs output.](media/access.png) --1. You can repeat these steps on your other session host VMs, particularly if they are in a different Azure region or use a different virtual network. --## Next steps --For more information about network connectivity, see [Understanding Azure Virtual Desktop network connectivity](network-connectivity.md) |
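At its core, a check like the one `WVDAgentUrlTool.exe` performs comes down to attempting an outbound TCP connection to each required endpoint on its expected port. Here's a simplified, hedged sketch of that idea in Python — an illustration of the concept only, not the actual logic of the Required URL Check tool:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Attempt an outbound TCP connection; True if the endpoint accepted it.

    DNS failures and connection timeouts/refusals all surface as OSError,
    so any failure mode is reported as "not reachable".
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example: spot-check a couple of endpoints from the required URL list.
# Results depend on your network; a blocked endpoint prints INACCESSIBLE.
for host, port in [("login.microsoftonline.com", 443), ("kms.core.windows.net", 1688)]:
    status = "accessible" if can_reach(host, port) else "INACCESSIBLE"
    print(f"{host}:{port} -> {status}")
```

Unlike this sketch, the real tool also knows the full required URL list for your agent version and region, which is why running it on the session host itself is the supported validation method.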
virtual-desktop | Safe Url List | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/safe-url-list.md | - Title: Required URLs for Azure Virtual Desktop -description: A list of URLs you must unblock to ensure your Azure Virtual Desktop deployment works as intended. -- Previously updated : 08/30/2022-----# Required URLs for Azure Virtual Desktop --In order to deploy and make Azure Virtual Desktop available to your users, you must unblock specific URLs so that your session host virtual machines (VMs) can access them anytime. Users also need to be able to connect to certain URLs to access their Azure Virtual Desktop resources. This article lists the required URLs you need to allow for your session hosts and users. These URLs could be blocked if you're using [Azure Firewall](../firewall/protect-azure-virtual-desktop.md) or a third-party firewall or [proxy service](proxy-server-support.md). Azure Virtual Desktop doesn't support deployments that block the URLs listed in this article. -->[!IMPORTANT] ->Proxy services that perform the following aren't recommended with Azure Virtual Desktop. See the preceding proxy service link for further details on proxy support guidelines. ->1. SSL Termination (Break and Inspect) ->2. Require Authentication --You can validate that your session host VMs can connect to these URLs by following the steps to run the [Required URL Check tool](required-url-check-tool.md). The Required URL Check tool will validate each URL and show whether your session host VMs can access them. You can only use the tool for deployments in the Azure public cloud; it doesn't check access for sovereign clouds. --## Session host virtual machines --The following table lists the URLs your session host VMs need to access for Azure Virtual Desktop. Select the relevant tab based on which cloud you're using. 
--# [Azure cloud](#tab/azure) --| Address | Outbound TCP port | Purpose | Service tag | -||||| -| `login.microsoftonline.com` | 443 | Authentication to Microsoft Online Services | -| `*.wvd.microsoft.com` | 443 | Service traffic | WindowsVirtualDesktop | -| `*.prod.warm.ingest.monitor.core.windows.net` | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor | -| `catalogartifact.azureedge.net` | 443 | Azure Marketplace | AzureFrontDoor.Frontend | -| `gcs.prod.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud | -| `kms.core.windows.net` | 1688 | Windows activation | Internet | -| `azkms.core.windows.net` | 1688 | Windows activation | Internet | -| `mrsglobalsteus2prod.blob.core.windows.net` | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud | -| `wvdportalstorageblob.blob.core.windows.net` | 443 | Azure portal support | AzureCloud | -| `169.254.169.254` | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A | -| `168.63.129.16` | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A | -| `oneocsp.microsoft.com` | 80 | Certificates | N/A | -| `www.microsoft.com` | 80 | Certificates | N/A | --> [!IMPORTANT] -> We've finished transitioning the URLs we use for Agent traffic. We no longer support the following URLs. To prevent your session host VMs from showing a *Needs Assistance* status due to this, you must allow the URL `*.prod.warm.ingest.monitor.core.windows.net` if you haven't already. 
You should also remove the following URLs if you explicitly allowed them before the change: -> -> | Address | Outbound TCP port | Purpose | Service tag | -> |--|--|--|--| -> | `production.diagnostics.monitoring.core.windows.net` | 443 | Agent traffic | AzureCloud | -> | `*xt.blob.core.windows.net` | 443 | Agent traffic | AzureCloud | -> | `*eh.servicebus.windows.net` | 443 | Agent traffic | AzureCloud | -> | `*xt.table.core.windows.net` | 443 | Agent traffic | AzureCloud | -> | `*xt.queue.core.windows.net` | 443 | Agent traffic | AzureCloud | --The following table lists optional URLs that your session host virtual machines might also need to access for other services: --| Address | Outbound TCP port | Purpose | -|--|--|--| -| `login.windows.net` | 443 | Sign in to Microsoft Online Services and Microsoft 365 | -| `*.events.data.microsoft.com` | 443 | Telemetry Service | -| `www.msftconnecttest.com` | 80 | Detects if the session host is connected to the internet | -| `*.prod.do.dsp.mp.microsoft.com` | 443 | Windows Update | -| `*.sfx.ms` | 443 | Updates for OneDrive client software | -| `*.digicert.com` | 80 | Certificate revocation check | -| `*.azure-dns.com` | 443 | Azure DNS resolution | -| `*.azure-dns.net` | 443 | Azure DNS resolution | --# [Azure for US Government](#tab/azure-for-us-government) --| Address | Outbound TCP port | Purpose | Service tag | -|--|--|--|--| -| `login.microsoftonline.us` | 443 | Authentication to Microsoft Online Services | -| `*.wvd.azure.us` | 443 | Service traffic | WindowsVirtualDesktop | -| `*.prod.warm.ingest.monitor.core.usgovcloudapi.net` | 443 | Agent traffic<br />[Diagnostic output](diagnostics-log-analytics.md) | AzureMonitor | -| `gcs.monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | -| `kms.core.usgovcloudapi.net` | 1688 | Windows activation | Internet | -| `mrsglobalstugviffx.blob.core.usgovcloudapi.net` | 443 | Agent and side-by-side (SXS) stack updates | AzureCloud | -| 
`wvdportalstorageblob.blob.core.usgovcloudapi.net` | 443 | Azure portal support | AzureCloud | -| `169.254.169.254` | 80 | [Azure Instance Metadata service endpoint](../virtual-machines/windows/instance-metadata-service.md) | N/A | -| `168.63.129.16` | 80 | [Session host health monitoring](../virtual-network/network-security-groups-overview.md#azure-platform-considerations) | N/A | -| `ocsp.msocsp.com` | 80 | Certificates | N/A | --> [!IMPORTANT] -> We've finished transitioning the URLs we use for Agent traffic. We no longer support the following URLs. To prevent your session host VMs from showing a *Needs Assistance* status due to this, you must allow the URL `*.prod.warm.ingest.monitor.core.usgovcloudapi.net`, if you haven't already. You should also remove the following URLs if you explicitly allowed them before the change: -> -> | Address | Outbound TCP port | Purpose | Service tag | -> |--|--|--|--| -> | `monitoring.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | -> | `fairfax.warmpath.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | -> | `*xt.blob.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | -> | `*.servicebus.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | -> | `*xt.table.core.usgovcloudapi.net` | 443 | Agent traffic | AzureCloud | --The following table lists optional URLs that your session host virtual machines might also need to access for other services: --| Address | Outbound TCP port | Purpose | -|--|--|--| -| `*.events.data.microsoft.com` | 443 | Telemetry Service | -| `www.msftconnecttest.com` | 80 | Detects if the session host is connected to the internet | -| `*.prod.do.dsp.mp.microsoft.com` | 443 | Windows Update | -| `oneclient.sfx.ms` | 443 | Updates for OneDrive client software | -| `*.digicert.com` | 80 | Certificate revocation check | -| `*.azure-dns.com` | 443 | Azure DNS resolution | -| `*.azure-dns.net` | 443 | Azure DNS resolution | ----> [!TIP] -> You must use the wildcard character (\*) for URLs involving 
service traffic. If you prefer not to use a wildcard for agent-related traffic, here's how to find the specific URLs to allow instead: -> -> 1. Ensure your session host virtual machines are registered to a host pool. -> 1. Open **Event viewer**, then go to **Windows logs** > **Application** > **WVD-Agent** and look for event ID **3701**. -> 1. Unblock the URLs that you find under event ID 3701. The URLs under event ID 3701 are region-specific. You'll need to repeat this process with the relevant URLs for each Azure region you want to deploy your session host virtual machines in. --This list doesn't include URLs for other services like Microsoft Entra ID or Office 365. Microsoft Entra URLs can be found under IDs 56, 59 and 125 in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). --### Service tags and FQDN tags --A [virtual network service tag](../virtual-network/service-tags-overview.md) represents a group of IP address prefixes from a given Azure service. Microsoft manages the address prefixes encompassed by the service tag and automatically updates the service tag as addresses change, minimizing the complexity of frequent updates to network security rules. Service tags can be used in both Network Security Group ([NSG](../virtual-network/network-security-groups-overview.md)) and [Azure Firewall](../firewall/service-tags.md) rules to restrict outbound network access. Service tags can also be used in a User Defined Route ([UDR](../virtual-network/virtual-networks-udr-overview.md#user-defined)) to customize traffic routing behavior. --Azure Firewall supports Azure Virtual Desktop as an [FQDN tag](../firewall/fqdn-tags.md). For more information, see [Use Azure Firewall to protect Azure Virtual Desktop deployments](../firewall/protect-azure-virtual-desktop.md). --We recommend you use FQDN tags or service tags instead of URLs to prevent service issues. 
The listed URLs and tags only correspond to Azure Virtual Desktop sites and resources. They don't include URLs for other services like Microsoft Entra ID. For other services, see [Available service tags](../virtual-network/service-tags-overview.md#available-service-tags). --Azure Virtual Desktop currently doesn't have a list of IP address ranges that you can unblock to allow network traffic. We only support unblocking specific URLs. If you're using a Next Generation Firewall (NGFW), you'll need to use a dynamic list specifically made for Azure IPs to make sure you can connect. --## Remote Desktop clients --Any [Remote Desktop clients](users/connect-windows.md?toc=/azure/virtual-desktop/toc.json&bc=/azure/virtual-desktop/breadcrumb/toc.json) you use to connect to Azure Virtual Desktop must have access to the following URLs. Select the relevant tab based on which cloud you're using. Opening these URLs is essential for a reliable client experience. Blocking access to these URLs is unsupported and will affect service functionality. --# [Azure cloud](#tab/azure) --| Address | Outbound TCP port | Purpose | Client(s) | -|--|--|--|--| -| `login.microsoftonline.com` | 443 | Authentication to Microsoft Online Services | All | -| `*.wvd.microsoft.com` | 443 | Service traffic | All | -| `*.servicebus.windows.net` | 443 | Troubleshooting data | All | -| `go.microsoft.com` | 443 | Microsoft FWLinks | All | -| `aka.ms` | 443 | Microsoft URL shortener | All | -| `learn.microsoft.com` | 443 | Documentation | All | -| `privacy.microsoft.com` | 443 | Privacy statement | All | -| `query.prod.cms.rt.microsoft.com` | 443 | Download an MSI to update the client. Required for auto-updates. 
| [Windows Desktop](users/connect-windows.md) | --# [Azure for US Government](#tab/azure-for-us-government) --| Address | Outbound TCP port | Purpose | Client(s) | -|--|--|--|--| -| `login.microsoftonline.us` | 443 | Authentication to Microsoft Online Services | All | -| `*.wvd.azure.us` | 443 | Service traffic | All | -| `*.servicebus.usgovcloudapi.net` | 443 | Troubleshooting data | All | -| `go.microsoft.com` | 443 | Microsoft FWLinks | All | -| `aka.ms` | 443 | Microsoft URL shortener | All | -| `learn.microsoft.com` | 443 | Documentation | All | -| `privacy.microsoft.com` | 443 | Privacy statement | All | -| `query.prod.cms.rt.microsoft.com` | 443 | Download an MSI to update the client. Required for auto-updates. | [Windows Desktop](users/connect-windows.md) | ----These URLs only correspond to client sites and resources. This list doesn't include URLs for other services like Microsoft Entra ID or Office 365. Microsoft Entra URLs can be found under IDs 56, 59 and 125 in [Office 365 URLs and IP address ranges](/office365/enterprise/urls-and-ip-address-ranges#microsoft-365-common-and-office-online). --## Next steps --To learn how to unblock these URLs in Azure Firewall for your Azure Virtual Desktop deployment, see [Use Azure Firewall to protect Azure Virtual Desktop](../firewall/protect-azure-virtual-desktop.md). |
virtual-desktop | Watermarking | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/watermarking.md | Title: Watermarking in Azure Virtual Desktop description: Learn how to enable watermarking in Azure Virtual Desktop to help prevent sensitive information from being captured on client endpoints. Previously updated : 07/31/2023 Last updated : 11/16/2023 # Watermarking in Azure Virtual Desktop You'll need the following things before you can use watermarking: - [Windows Desktop client](users/connect-windows.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json), version 1.2.3317 or later, on Windows 10 and later. - [Web client](users/connect-web.md?toc=%2Fazure%2Fvirtual-desktop%2Ftoc.json).+ - [macOS client](users/connect-macos.md). ++ Note: iOS and Android clients don't support watermarking. - [Azure Virtual Desktop Insights](azure-monitor.md) configured for your environment. |
virtual-machine-scale-sets | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/policy-reference.md | |
virtual-machine-scale-sets | Virtual Machine Scale Sets Attach Detach Vm | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-attach-detach-vm.md | |
virtual-machines | Disks Incremental Snapshots | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-incremental-snapshots.md | |
virtual-machines | Hibernate Resume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/hibernate-resume.md | Refer to the [Hibernate troubleshooting guide](./hibernate-resume-troubleshootin ## Next Steps: - [Learn more about Azure billing](/azure/cost-management-billing/) - [Learn about Azure Virtual Desktop](../virtual-desktop/overview.md)-- [Look into Azure VM Sizes](sizes.md)+- [Look into Azure VM Sizes](sizes.md) |
virtual-machines | Maintenance Configurations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/maintenance-configurations.md | The following are the recommended limits for the mentioned indicators | Number of dynamic scopes per Resource Group or Subscription per Region | 250 | | Number of dynamic scopes per Maintenance Configuration | 50 | -The following are the Dynamic Scope Limits for **each dynamic scope** +The following are the Dynamic Scope recommended limits for **each dynamic scope** | Resource | Limit | |-|-| |
virtual-machines | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Machines description: Lists Azure Policy built-in policy definitions for Azure Virtual Machines. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
virtual-machines | Regions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/regions.md | Azure has some special regions that you may wish to use when building out your a * **US Gov Virginia** and **US Gov Iowa** * A physical and logical network-isolated instance of Azure for US government agencies and partners, operated by screened US persons. Includes additional compliance certifications such as [FedRAMP](https://www.microsoft.com/en-us/TrustCenter/Compliance/FedRAMP) and [DISA](https://www.microsoft.com/en-us/TrustCenter/Compliance/DISA). Read more about [Azure Government](https://azure.microsoft.com/features/gov/). * **China East** and **China North**- * These regions are available through a unique partnership between Microsoft and 21Vianet, whereby Microsoft does not directly maintain the datacenters. See more about [Microsoft Azure operated by 21Vianet](https://www.windowsazure.cn/). + * These regions are available through a unique partnership between Microsoft and 21Vianet, whereby Microsoft does not directly maintain the datacenters. * **Germany Central** and **Germany Northeast** * These regions are available via a data trustee model whereby customer data remains in Germany under control of T-Systems, a Deutsche Telekom company, acting as the German data trustee. |
virtual-machines | Security Controls Policy Image Builder | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy-image-builder.md | Title: Azure Policy Regulatory Compliance controls for Azure VM Image Builder description: Lists Azure Policy Regulatory Compliance controls available for Azure VM Image Builder. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
virtual-machines | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Virtual Machines description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Machines . These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
virtual-machines | N Series Amd Driver Setup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/n-series-amd-driver-setup.md | Create the VMs using CLI. (Azure AMD GPU driver extensions don't support NGads 5. Reboot the VM ### Verify driver installation-1. You can verify driver installation in Device Manager. The following example shows successful configuration of the Radeon Pro V620 card on an Azure NGads V620 VM. The exact driver date and version will depend on the driver package released.<br><br> -![NGads driver device manager](https://github.com/isgonzalez-MSFT/azure-docs-pr/assets/135761331/abc86bb4-5d3d-416f-bb7b-822461fd5c37) +1. You can verify driver installation in Device Manager. The following example shows successful configuration of the Radeon Pro V620 card on an Azure NGads V620 VM. The exact driver date and version will depend on the driver package released. ## NVv4 Series ## |
virtual-machines | Byos | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/byos.md | description: Learn about bring-your-own-subscription images for Red Hat Enterpri -+ Last updated 06/10/2020 |
virtual-network | Policy Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/policy-reference.md | Title: Built-in policy definitions for Azure Virtual Network description: Lists Azure Policy built-in policy definitions for Azure Virtual Network. These built-in policy definitions provide common approaches to managing your Azure resources. Previously updated : 11/15/2023 Last updated : 11/21/2023 |
virtual-network | Security Controls Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/security-controls-policy.md | Title: Azure Policy Regulatory Compliance controls for Azure Virtual Network description: Lists Azure Policy Regulatory Compliance controls available for Azure Virtual Network. These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Previously updated : 11/06/2023 Last updated : 11/21/2023 |
virtual-wan | Create Bgp Peering Hub Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/create-bgp-peering-hub-powershell.md | |
virtual-wan | Customer Controlled Gateway Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/customer-controlled-gateway-maintenance.md | |
virtual-wan | Expressroute Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/expressroute-powershell.md | |
virtual-wan | Global Hub Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/global-hub-profile.md | description: Learn about how to generate and download global and hub-level User Previously updated : 08/08/2022 Last updated : 11/21/2023 To generate and download VPN client profile configuration files, use the followi 1. Go to the **Virtual WAN**. 1. In the left pane, select **User VPN configurations**.-1. On the **User VPN configurations** page you'll see all of the User VPN configurations that you've created for your virtual WAN. In the **Hub** column, you'll see the hubs that are associated to each User VPN configuration. Click the **>** to expand and view the hub names. +1. On the **User VPN configurations** page, you'll see all of the User VPN configurations that you've created for your virtual WAN. In the **Hub** column, you'll see the hubs that are associated to each User VPN configuration. Click the **>** to expand and view the hub names. :::image type="content" source="./media/global-hub-profile/expand.png" alt-text="Screenshot that shows hubs list expanded." lightbox="./media/global-hub-profile/expand.png"::: This section pertains to connections using the OpenVPN tunnel type and the Azure VP When you configure a hub P2S gateway, Azure assigns an internal certificate to the gateway. This is different from the root certificate information that you specify when you want to use Certificate Authentication as your authentication method. The internal certificate that is assigned to the hub is used for all authentication types. This value is represented in the profile configuration files that you generate as *servervalidation/cert/hash*. The VPN client uses this value as part of the connection process. -If you have multiple hubs in different geographic regions, each hub may use a different Azure-level server validation certificate. 
However, the global profile only contains the server validation certificate hash value for 1 of the hubs. This means that if the certificate for that hub isn't working properly for any reason, the client doesn't have the necessary server validation certificate hash value for the other hubs. +If you have multiple hubs in different geographic regions, each hub can use a different Azure-level server validation certificate. However, the global profile only contains the server validation certificate hash value for 1 of the hubs. This means that if the certificate for that hub isn't working properly for any reason, the client doesn't have the necessary server validation certificate hash value for the other hubs. As a best practice, we recommend that you update your VPN client profile configuration file to include the certificate hash value of all the hubs that are attached to the global profile, and then configure the Azure VPN Client using the updated file. |
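The recommended fix — adding every hub's server validation certificate hash to the profile configuration file — is a mechanical XML edit. The sketch below illustrates the idea in Python against a deliberately simplified profile layout built around the `servervalidation/cert/hash` path the article mentions; the real Azure VPN Client profile file has more surrounding structure, and the element names and hash values here are hypothetical:

```python
import xml.etree.ElementTree as ET

def merge_cert_hashes(profile_xml: str, extra_hashes: list) -> str:
    """Add additional <cert><hash> entries under <servervalidation>.

    Illustrative only: assumes a simplified profile layout in which server
    validation certificate hashes live at servervalidation/cert/hash.
    Hashes already present are left alone, so merging is idempotent.
    """
    root = ET.fromstring(profile_xml)
    validation = root.find("servervalidation")
    existing = {h.text for h in validation.iter("hash")}
    for value in extra_hashes:
        if value not in existing:
            cert = ET.SubElement(validation, "cert")
            ET.SubElement(cert, "hash").text = value
    return ET.tostring(root, encoding="unicode")

# Hypothetical hash values for two hubs.
profile = (
    "<azvpnprofile><servervalidation>"
    "<cert><hash>AA11</hash></cert>"
    "</servervalidation></azvpnprofile>"
)
merged = merge_cert_hashes(profile, ["BB22"])
print(merged)
```

After merging, configuring the Azure VPN Client with the updated file gives the client a valid server validation hash for each hub, so a problem with one hub's certificate doesn't block connections through the others.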
virtual-wan | How To Nva Hub | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-nva-hub.md | The steps in this article help you create a **Barracuda CloudGen WAN** Network Virtual Appliance in the Virtual WAN hub. For deployment documentation of **Cisco SD-WAN** within Azure Virtual WAN, see [Cisco Cloud OnRamp for Multi-Cloud](https://www.cisco.com/c/en/us/td/docs/routers/sdwan/configuration/cloudonramp/ios-xe-17/cloud-onramp-book-xe/cloud-onramp-multi-cloud.html#Cisco_Concept.dita_c61e0e7a-fff8-4080-afee-47b81e8df701). -For deployment documentation of **VMware SD-WAN** within Azure Virtual WAN, see [Deployment Guide for VMware SD-WAN in Virtual WAN Hub](https://kb.vmware.com/s/article/82746) +For deployment documentation of **VMware SD-WAN** within Azure Virtual WAN, see [Deployment Guide for VMware SD-WAN in Virtual WAN Hub](https://docs.vmware.com/en/VMware-SD-WAN/index.html). ## Prerequisites |
virtual-wan | How To Virtual Hub Routing Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-powershell.md | |
virtual-wan | How To Virtual Hub Routing Preference Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/how-to-virtual-hub-routing-preference-powershell.md | |
virtual-wan | Howto Always On User Tunnel | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-always-on-user-tunnel.md | |
virtual-wan | Howto Virtual Hub Routing Preference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/howto-virtual-hub-routing-preference.md | description: Learn how to configure Virtual WAN virtual hub routing preference u Previously updated : 10/26/2022 Last updated : 11/21/2023 # Configure virtual hub routing preference - Azure portal |
virtual-wan | Manage Secure Access Resources Spoke P2s | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/manage-secure-access-resources-spoke-p2s.md | Convert the hub to a secured hub using the following article: [Configure Azure F Create rules that dictate the behavior of Azure Firewall. By securing the hub, we ensure that all packets that enter the virtual hub are subject to firewall processing before accessing your Azure resources. -Once you complete these steps, you will have created an architecture that allows VPN users to access the VM with private IP address 10.18.0.4, but **NOT** access the VM with private IP address 10.18.0.5 +Once you complete these steps, you'll have created an architecture that allows VPN users to access the VM with private IP address 10.18.0.4, but **NOT** access the VM with private IP address 10.18.0.5. 1. In the Azure portal, navigate to **Firewall Manager**. 1. Under Security, select **Azure Firewall policies**. Once you complete these steps, you will have created an architecture that allows 1. Select **Next: Rules**. 1. On the **Rules** tab, select **Add a rule collection**. 1. Provide a name for the collection. Set the type as **Network**. Add a priority value **100**.-1. Fill in the name of the rule, source type, source, protocol, destination ports, and destination type, as shown in the example below. Then, select **add**. This rule allows any IP address from the VPN client pool to access the VM with private IP address 10.18.04, but not any other resource connected to the virtual hub. Create any rules you want that fit your desired architecture and permissions rules. +1. Fill in the name of the rule, source type, source, protocol, destination ports, and destination type, as shown in the following example. Then, select **add**. This rule allows any IP address from the VPN client pool to access the VM with private IP address 10.18.0.4, but not any other resource connected to the virtual hub. 
Create any rules you want that fit your desired architecture and permissions rules. :::image type="content" source="./media/manage-secure-access-resources-spoke-p2s/rules.png" alt-text="Firewall rules" ::: |
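The intended allow/deny outcome of the rule collection above can be modeled in a few lines of Python with the standard `ipaddress` module. The 172.16.0.0/24 client pool below is a made-up example value — substitute your own P2S address pool:

```python
from ipaddress import ip_address, ip_network

# Hypothetical rule collection mirroring the network rule above: the VPN
# client pool (assumed to be 172.16.0.0/24 here) may reach only 10.18.0.4.
RULES = [
    {"name": "allow-vm1", "source": ip_network("172.16.0.0/24"),
     "destination": ip_network("10.18.0.4/32"), "action": "Allow"},
]

def evaluate(src: str, dst: str) -> str:
    """Simplified model: the real Azure Firewall evaluates rule collections
    by priority, but traffic matching no rule is denied by default."""
    for rule in RULES:
        if ip_address(src) in rule["source"] and ip_address(dst) in rule["destination"]:
            return rule["action"]
    return "Deny"  # implicit deny when no rule matches

print(evaluate("172.16.0.10", "10.18.0.4"))  # Allow
print(evaluate("172.16.0.10", "10.18.0.5"))  # Deny
```

This mirrors the architecture goal stated above: traffic from the client pool reaches the VM at 10.18.0.4 and nothing else connected to the virtual hub.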
virtual-wan | Monitoring Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/monitoring-best-practices.md | Title: Monitoring Virtual WAN - Best practices -description: Start here to learn monitoring best practices for Virtual WAN. +description: This article helps you learn monitoring best practices for Virtual WAN. Previously updated : 10/03/2023 Last updated : 11/22/2023 # Monitoring Azure Virtual WAN - Best practices Most of the recommendations in this article suggest creating Azure Monitor alerts. |Recommendation | Description| |||-|Create alert rule for increase in Tunnel Egress and/or Ingress packet drop count.| An increase in tunnel egress and/or ingress packet drop count may indicate an issue with the Azure VPN gateway, or with the remote VPN device. Select the **Tunnel Egress/Ingress Packet drop count** metric when creating the alert rule(s). Define a **static Threshold value** greater than **0** and the **Total** aggregation type when configuring the alert logic.<br><br>You can choose to monitor the **Connection** as a whole, or split the alert rule by **Instance** and **Remote IP** to be alerted for issues involving individual tunnels. To learn the difference between the concept of **VPN connection**, **link**, and **tunnel** in Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md).| -|Create alert rule to monitor BGP peer status.|When using BGP in your site-to-site connections, it's important to monitor the health of the BGP peerings between the gateway instances and the remote devices, as recurrent failures can disrupt connectivity.<br><br>Select the **BGP Peer Status** metric when creating the alert rule. Using a **static** threshold, choose the **Average** aggregation type and configure the alert to be triggered whenever the value is **less than 1**.<br><br>It's recommended to split the alert by **Instance** and **BGP Peer Address** to detect issues with individual peerings. 
Avoid selecting the gateway instance IPs as **BGP Peer Address** because this metric monitors the BGP status for every possible combination, including with the instance itself (which is always 0).| -|Create alert rule to monitor number of BGP routes advertised and learned.|**BGP Routes Advertised** and **BGP Routes Learned** monitor the number of routes advertised to and learned from peers by the VPN gateway, respectively. If these metrics drop to zero unexpectedly, it could be because thereΓÇÖs an issue with the gateway or with on-premises.<br><br>It's recommended to configure an alert for both these metrics to be triggered whenever their value is **zero**. Choose the **Total** aggregation type. Split by **Instance** to monitor individual gateway instances.| -|Create alert rule for tunnel overutilization.|The maximum throughput allowed per tunnel is determined by the scale units of the gateway instance where it terminates.<br><br>You may want to be alerted if a tunnel is at risk of nearing its maximum throughput, which can lead to performance and connectivity issues, and act proactively on it by investigating the root cause of the increased tunnel utilization or by increasing the gatewayΓÇÖs scale units.<br><br>Select **Tunnel Bandwidth** when creating the alert rule. Split by **Instance** and **Remote IP** to monitor all individual tunnels or choose specific tunnel(s) instead. Configure the alert to be triggered whenever the **Average** throughput is **greater than** a value that is close to the maximum throughput allowed per tunnel.<br><br>To learn more about how a tunnelΓÇÖs maximum throughput is impacted by the gatewayΓÇÖs scale units, see the [Virtual WAN FAQ](virtual-wan-faq.md).| +|Create alert rule for increase in Tunnel Egress and/or Ingress packet drop count.| An increase in tunnel egress and/or ingress packet drop count might indicate an issue with the Azure VPN gateway, or with the remote VPN device. 
Select the **Tunnel Egress/Ingress Packet drop count** metric when creating the alert rule(s). Define a **static Threshold value** greater than **0** and the **Total** aggregation type when configuring the alert logic.<br><br>You can choose to monitor the **Connection** as a whole, or split the alert rule by **Instance** and **Remote IP** to be alerted for issues involving individual tunnels. To learn the difference between the concept of **VPN connection**, **link**, and **tunnel** in Virtual WAN, see the [Virtual WAN FAQ](virtual-wan-faq.md).| +|Create alert rule to monitor BGP peer status.|When using BGP in your site-to-site connections, it's important to monitor the health of the BGP peerings between the gateway instances and the remote devices, as recurrent failures can disrupt connectivity.<br><br>Select the **BGP Peer Status** metric when creating the alert rule. Using a **static** threshold, choose the **Average** aggregation type and configure the alert to be triggered whenever the value is **less than 1**.<br><br>We recommend that you split the alert by **Instance** and **BGP Peer Address** to detect issues with individual peerings. Avoid selecting the gateway instance IPs as **BGP Peer Address** because this metric monitors the BGP status for every possible combination, including with the instance itself (which is always 0).| +|Create alert rule to monitor number of BGP routes advertised and learned.|**BGP Routes Advertised** and **BGP Routes Learned** monitor the number of routes advertised to and learned from peers by the VPN gateway, respectively. If these metrics drop to zero unexpectedly, it could be because there's an issue with the gateway or with on-premises.<br><br>We recommend that you configure an alert for both these metrics to be triggered whenever their value is **zero**. Choose the **Total** aggregation type. 
Split by **Instance** to monitor individual gateway instances.| +|Create alert rule for VPN gateway overutilization.|A VPN gateway's aggregate throughput is determined by the number of scale units per instance. Note that all tunnels that terminate in the same gateway instance will share its aggregate throughput. It's likely that tunnel stability will be affected if an instance is working at its capacity for a long period of time.<br><br>Select **Gateway S2S Bandwidth** when creating the alert rule. Configure the alert to be triggered whenever the **Average** throughput is **greater than** a value that is close to the maximum aggregate throughput of **both instances**. Alternatively, split the alert **by instance** and use the maximum throughput **per instance** as a reference.<br><br>It's good practice to determine the throughput needs per tunnel in advance in order to choose the appropriate number of scale units. To learn more about the supported scale unit values for site-to-site VPN gateways, see the [Virtual WAN FAQ](virtual-wan-faq.md). +|Create alert rule for tunnel overutilization.|The maximum throughput allowed per tunnel is determined by the scale units of the gateway instance where it terminates.<br><br>You might want to be alerted if a tunnel is at risk of nearing its maximum throughput, which can lead to performance and connectivity issues, and act proactively on it by investigating the root cause of the increased tunnel utilization or by increasing the gateway's scale units.<br><br>Select **Tunnel Bandwidth** when creating the alert rule. Split by **Instance** and **Remote IP** to monitor all individual tunnels or choose specific tunnel(s) instead. 
Configure the alert to be triggered whenever the **Average** throughput is **greater than** a value that is close to the maximum throughput allowed per tunnel.<br><br>To learn more about how a tunnel's maximum throughput is impacted by the gateway's scale units, see the [Virtual WAN FAQ](virtual-wan-faq.md).| **Design checklist - log query alerts** To configure log-based alerts, you must first create a diagnostic setting for your gateway. |Recommendation | Description| ||| |Create tunnel disconnect alert rule.|**Use Tunnel Diagnostic Logs** to track disconnect events in your site-to-site connections. A disconnect event can be due to a failure to negotiate SAs, unresponsiveness of the remote VPN device, among other causes. Tunnel Diagnostic Logs also provide the disconnect reason. See the **Create tunnel disconnect alert rule - log query** below this table to select disconnect events when creating the alert rule.<br><br>Configure the alert to be triggered whenever the number of rows resulting from running the query above is **greater than 0**. For this alert to be effective, select **Aggregation Granularity** to be between 1 and 5 minutes and the **Frequency of evaluation** to also be between 1 and 5 minutes. This way, after the **Aggregation Granularity** interval has passed, the number of rows is 0 again for a new interval.<br><br>For troubleshooting tips when analyzing Tunnel Diagnostic Logs, see [Troubleshoot Azure VPN gateway](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md#TunnelDiagnosticLog) using diagnostic logs. Additionally, use **IKE Diagnostic Logs** to complement your troubleshooting, as these logs contain detailed IKE-specific diagnostics.|-|Create BGP disconnect alert rule. |Use **Route Diagnostic Logs** to track route updates and issues with BGP sessions. Repeated BGP disconnect events can impact connectivity and cause downtime. 
See the **Create BGP disconnect rule alert- log query** below this table to select disconnect events when creating the alert rule.<br><br>Configure the alert to be triggered whenever the number of rows resulting from running the query above is **greater than 0**. For this alert to be effective, select **Aggregation Granularity** to be between 1 and 5 minutes and the **Frequency of evaluation** to also be between 1 and 5 minutes. This way, after the **Aggregation Granularity** interval has passed, the number of rows is 0 again for a new interval if the BGP sessions have been restored.<br><br>For more information about the data collected by Route Diagnostic Logs, see [Troubleshooting Azure VPN Gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md#RouteDiagnosticLog). | +|Create BGP disconnect alert rule. |Use **Route Diagnostic Logs** to track route updates and issues with BGP sessions. Repeated BGP disconnect events can affect connectivity and cause downtime. See the **Create BGP disconnect rule alert- log query** below this table to select disconnect events when creating the alert rule.<br><br>Configure the alert to be triggered whenever the number of rows resulting from running the query above is **greater than 0**. For this alert to be effective, select **Aggregation Granularity** to be between 1 and 5 minutes and the **Frequency of evaluation** to also be between 1 and 5 minutes. This way, after the **Aggregation Granularity** interval has passed, the number of rows is 0 again for a new interval if the BGP sessions have been restored.<br><br>For more information about the data collected by Route Diagnostic Logs, see [Troubleshooting Azure VPN Gateway using diagnostic logs](../vpn-gateway/troubleshoot-vpn-with-azure-diagnostics.md#RouteDiagnosticLog). | **Log queries** The following section details the configuration of metric-based alerts only. 
How ||| |Create alert rule for gateway overutilization.|The bandwidth of a point-to-site gateway is determined by the number of scale units configured. To learn more about point-to-site gateway scale units, see Point-to-site (User VPN).<br><br>**Use the Gateway P2S Bandwidth** metric to monitor the gateway's utilization and configure an alert rule that is triggered whenever the gateway's bandwidth is **greater than** a value near its aggregate throughput – for example, if the gateway was configured with 2 scale units, it will have an aggregate throughput of 1 Gbps. In this case, you could define a threshold value of 950 Mbps.<br><br>Use this alert to proactively investigate the root cause of the increased utilization, and ultimately increase the number of scale units, if needed. Select the **Average** aggregation type when configuring the alert rule.| |Create alert for P2S connection count nearing limit |The maximum number of point-to-site connections allowed is also determined by the number of scale units configured on the gateway. To learn more about point-to-site gateway scale units, see the FAQ for [Point-to-site (User VPN)](virtual-wan-faq.md#p2s-concurrent).<br><br>Use the **P2S Connection Count** metric to monitor the number of connections. Select this metric to configure an alert rule that is triggered whenever the number of connections is nearing the maximum allowed. For example, a 1-scale unit gateway supports up to 500 concurrent connections. In this case, you could configure the alert to be triggered whenever the number of connections is **greater than** 450.<br><br>Use this alert to determine whether an increase in the number of scale units is required or not. Choose the **Total** aggregation type when configuring the alert rule.|-|Create alert rule for User VPN routes count nearing limit.|The maximum number of User VPN routes is determined by the protocol used. 
IKEv2 has a protocol-level limit of 255 routes, whereas OpenVPN has a limit of 1000 routes. To learn more about this, see [VPN server configuration concepts](point-to-site-concepts.md#vpn-server-configuration-concepts).<br><br>You may want to be alerted if you're close to hitting the maximum number of User VPN routes and act proactively to avoid any downtime. Use the **User VPN Route Count** to monitor this and configure an alert rule that is triggered whenever the number of routes surpasses a value close to the limit. For example, if the limit is 255 routes, an appropriate **Threshold** value could be 230. Choose the **Total** aggregation type when configuring the alert rule.| +|Create alert rule for User VPN routes count nearing limit.|The maximum number of User VPN routes is determined by the protocol used. IKEv2 has a protocol-level limit of 255 routes, whereas OpenVPN has a limit of 1000 routes. To learn more about this, see [VPN server configuration concepts](point-to-site-concepts.md#vpn-server-configuration-concepts).<br><br>You might want to be alerted if you're close to hitting the maximum number of User VPN routes and act proactively to avoid any downtime. Use the **User VPN Route Count** to monitor this and configure an alert rule that is triggered whenever the number of routes surpasses a value close to the limit. For example, if the limit is 255 routes, an appropriate **Threshold** value could be 230. Choose the **Total** aggregation type when configuring the alert rule.| ### ExpressRoute gateway -This section of the article focuses on metric-based alerts. There are no diagnostic logs currently available for Virtual WAN ExpressRoute gateways. In addition to the alerts described below, which focus on the gateway component, it's recommended to use the available metrics, logs, and tools to monitor the ExpressRoute circuit. 
To learn more about ExpressRoute monitoring, see [ExpressRoute monitoring, metrics, and alerts](../expressroute/expressroute-monitoring-metrics-alerts.md). To learn about how you can use the ExpressRoute Traffic Collector tool, see [Configure ExpressRoute Traffic Collector for ExpressRoute Direct](../expressroute/how-to-configure-traffic-collector.md). +This section of the article focuses on metric-based alerts. There are no diagnostic logs currently available for Virtual WAN ExpressRoute gateways. In addition to the alerts described below, which focus on the gateway component, we recommend that you use the available metrics, logs, and tools to monitor the ExpressRoute circuit. To learn more about ExpressRoute monitoring, see [ExpressRoute monitoring, metrics, and alerts](../expressroute/expressroute-monitoring-metrics-alerts.md). To learn about how you can use the ExpressRoute Traffic Collector tool, see [Configure ExpressRoute Traffic Collector for ExpressRoute Direct](../expressroute/how-to-configure-traffic-collector.md). **Design checklist - metric alerts** * Create alert rule for Bits Received Per Second. * Create alert rule for CPU overutilization. * Create alert rule for Packets per Second.-* Create alert rule for number of routes advertised to peer nearing limit. -* Count alert rule for number of routes learned from peer nearing limit. +* Create alert rule for number of routes advertised to peer. +* Create alert rule for number of routes learned from peer. * Create alert rule for high frequency in route changes. |Recommendation | Description| |||-|Create alert rule for Bits Received Per Second.|**Bits Received per Second** monitors the total amount of traffic received by the gateway from the MSEEs.<br><br>You may want to be alerted if the amount of traffic received by the gateway is at risk of hitting its maximum throughput, as this can lead to performance and connectivity issues. 
This allows you to act proactively by investigating the root cause of the increased gateway utilization or increasing the gateway's maximum allowed throughput.<br><br>Choose the **Average** aggregation type and a **Threshold** value close to the maximum throughput provisioned for the gateway when configuring the alert rule.<br><br>Additionally, it's recommended to set an alert when the number of **Bits Received per Second** is near zero, as it may indicate an issue with the gateway or the MSEEs.<br><br>The maximum throughput of an ExpressRoute gateway is determined by number of scale units provisioned. To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| -|Create alert rule for CPU overutilization.|When using ExpressRoute gateways, it's important to monitor the CPU utilization. Prolonged high utilization can impact performance and connectivity.<br><br>Use the **CPU utilization** metric to monitor this and create an alert for whenever the CPU utilization is **greater than** 80%, so you can investigate the root cause and ultimately increase the number of scale units, if needed. Choose the **Average** aggregation type when configuring the alert rule.<br><br>To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| -|Create alert rule for packets received per second.|**Packets per second** monitors the number of inbound packets traversing the Virtual WAN ExpressRoute gateway.<br><br>You may want to be alerted if the number of **packets per second** is nearing the limit allowed for the number of scale units configured on the gateway.<br><br>Choose the Average aggregation type when configuring the alert rule. Choose a **Threshold** value close to the maximum number of **packets per second** allowed based on the number of scale units of the gateway. 
To learn more about ExpressRoute performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).<br><br>Additionally, it's recommended to set an alert when the number of **Packets per second** is near zero, as it may indicate an issue with the gateway or MSEEs.| -|Create alert rule for high frequency in route changes.|**Frequency of Routes changes** shows the change frequency of routes being learned and advertised from and to peers, including other types of branches such as site-to-site and point-to-site VPN. This metric provides visibility when a new branch or more circuits are being connected/disconnected.<br><br>This metric is a useful tool when identifying issues with BGP advertisements, such as flapping. It's recommended to set an alert **if** the environment is **static** and BGP changes aren't expected. Select a **threshold value** that is **greater than 1** and an **Aggregation Granularity** of 15 minutes to monitor BGP behavior consistently.<br><br>If the environment is dynamic and BGP changes are frequently expected, you may choose not to set an alert otherwise in order to avoid false positives. However, you can still consider this metric for observability of your network.| +|Create alert rule for Bits Received Per Second.|**Bits Received per Second** monitors the total amount of traffic received by the gateway from the MSEEs.<br><br>You might want to be alerted if the amount of traffic received by the gateway is at risk of hitting its maximum throughput, as this can lead to performance and connectivity issues. 
This allows you to act proactively by investigating the root cause of the increased gateway utilization or increasing the gateway's maximum allowed throughput.<br><br>Choose the **Average** aggregation type and a **Threshold** value close to the maximum throughput provisioned for the gateway when configuring the alert rule.<br><br>Additionally, we recommend that you set an alert when the number of **Bits Received per Second** is near zero, as it might indicate an issue with the gateway or the MSEEs.<br><br>The maximum throughput of an ExpressRoute gateway is determined by the number of scale units provisioned. To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| +|Create alert rule for CPU overutilization.|When using ExpressRoute gateways, it's important to monitor the CPU utilization. Prolonged high utilization can affect performance and connectivity.<br><br>Use the **CPU utilization** metric to monitor this and create an alert for whenever the CPU utilization is **greater than** 80%, so you can investigate the root cause and ultimately increase the number of scale units, if needed. Choose the **Average** aggregation type when configuring the alert rule.<br><br>To learn more about ExpressRoute gateway performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).| +|Create alert rule for packets received per second.|**Packets per second** monitors the number of inbound packets traversing the Virtual WAN ExpressRoute gateway.<br><br>You might want to be alerted if the number of **packets per second** is nearing the limit allowed for the number of scale units configured on the gateway.<br><br>Choose the Average aggregation type when configuring the alert rule. Choose a **Threshold** value close to the maximum number of **packets per second** allowed based on the number of scale units of the gateway. 
To learn more about ExpressRoute performance, see [About ExpressRoute connections in Azure Virtual WAN](virtual-wan-expressroute-about.md).<br><br>Additionally, we recommend that you set an alert when the number of **Packets per second** is near zero, as it might indicate an issue with the gateway or MSEEs.| +|Create alert rule for number of routes advertised to peer. |**Count of Routes Advertised to Peers** monitors the number of routes advertised from the ExpressRoute gateway to the virtual hub router and to the Microsoft Enterprise Edge Devices.<br><br>We recommend that you configure an alert only on the two BGP peers displayed as **ExpressRoute Device** to identify when the count of advertised routes approaches the documented limit of **1000**. For example, configure the alert to be triggered when the number of routes advertised is **greater than 950**.<br><br>We also recommend that you configure an alert when the number of routes advertised to the Microsoft Edge Devices is **zero** in order to proactively detect any connectivity issues.<br><br>To add these alerts, select the **Count of Routes Advertised to Peers** metric, and then select the **Add filter** option and the **ExpressRoute** devices.| +|Create alert rule for number of routes learned from peer.|**Count of Routes Learned from Peers** monitors the number of routes the ExpressRoute gateway learns from the virtual hub router and from the Microsoft Enterprise Edge Device.<br><br>We recommend that you configure an alert **only** on the two BGP peers displayed as **ExpressRoute Device** to identify when the count of learned routes approaches the [documented limit](../expressroute/expressroute-faqs.md#are-there-limits-on-the-number-of-routes-i-can-advertise) of 4000 for Standard SKU and 10,000 for Premium SKU circuits.<br><br>We also recommend that you configure an alert when the number of routes learned from the Microsoft Edge Devices is **zero**. 
This can help you detect when your on-premises network has stopped advertising routes.| +|Create alert rule for high frequency in route changes.|**Frequency of Routes changes** shows the change frequency of routes being learned and advertised from and to peers, including other types of branches such as site-to-site and point-to-site VPN. This metric provides visibility when a new branch or more circuits are being connected/disconnected.<br><br>This metric is a useful tool when identifying issues with BGP advertisements, such as flapping. We recommend that you set an alert **if** the environment is **static** and BGP changes aren't expected. Select a **threshold value** that is **greater than 1** and an **Aggregation Granularity** of 15 minutes to monitor BGP behavior consistently.<br><br>If the environment is dynamic and BGP changes are frequently expected, you might choose not to set an alert in order to avoid false positives. However, you can still consider this metric for observability of your network.| ## Virtual hub This section of the article focuses on metric-based alerts. Azure Firewall offer |Recommendation | Description| ||| |Create alert rule for risk of SNAT port exhaustion.|Azure Firewall provides 2,496 SNAT ports per public IP address configured per backend virtual machine scale instance. It's important to estimate in advance the number of SNAT ports that will fulfill your organizational requirements for outbound traffic to the Internet. Not doing so increases the risk of exhausting the number of available SNAT ports on the Azure Firewall, potentially causing outbound connectivity failures.<br><br>Use the **SNAT port utilization** metric to monitor the percentage of outbound SNAT ports currently in use. 
Create an alert rule for this metric to be triggered whenever this percentage surpasses **95%** (due to an unforeseen traffic increase, for example) so you can act accordingly by configuring an additional public IP address on the Azure Firewall, or by using an [Azure NAT Gateway](../nat-gateway/nat-overview.md) instead. Use the **Maximum** aggregation type when configuring the alert rule.<br><br>To learn more about how to interpret the **SNAT port utilization** metric, see [Overview of Azure Firewall logs and metrics](../firewall/logs-and-metrics.md#metrics). To learn more about how to scale SNAT ports in Azure Firewall, see [Scale SNAT ports with Azure NAT Gateway](../firewall/integrate-with-nat-gateway.md).|-|Create alert rule for firewall overutilization.|Azure Firewall maximum throughput differs depending on the SKU and features enabled. To learn more about Azure Firewall performance, see [Azure Firewall performance](../firewall/firewall-performance.md).<br><br>You may want to be alerted if your firewall is nearing its maximum throughput and troubleshoot the underlying cause, as this can have an impact in the firewall's performance.<br><br> Create an alert rule to be triggered whenever the **Throughput** metric surpasses a value nearing the firewall's maximum throughput – if the maximum throughput is 30Gbps, configure 25Gbps as the **Threshold** value, for example. The **Throughput** metric unit is **bits/sec**. Choose the **Average** aggregation type when creating the alert rule. +|Create alert rule for firewall overutilization.|Azure Firewall maximum throughput differs depending on the SKU and features enabled. 
To learn more about Azure Firewall performance, see [Azure Firewall performance](../firewall/firewall-performance.md).<br><br>You might want to be alerted if your firewall is nearing its maximum throughput and troubleshoot the underlying cause, as this can have an impact on the firewall's performance.<br><br> Create an alert rule to be triggered whenever the **Throughput** metric surpasses a value nearing the firewall's maximum throughput – if the maximum throughput is 30 Gbps, configure 25 Gbps as the **Threshold** value, for example. The **Throughput** metric unit is **bits/sec**. Choose the **Average** aggregation type when creating the alert rule. ## Next steps |
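Most thresholds recommended in the monitoring row above are simple percentages of the capacity figures it quotes (2,496 SNAT ports per public IP per instance, 1 Gbps of aggregate P2S throughput per 2 scale units, 500 concurrent connections per P2S scale unit). A small Python sketch of that arithmetic — the capacity constants are the figures quoted above, used for illustration, not a sizing guarantee:

```python
def snat_alert_threshold(public_ips: int, instances: int, pct: float = 0.95) -> int:
    """Azure Firewall provides 2,496 SNAT ports per public IP address per
    backend instance (figure quoted above); alert near exhaustion."""
    return int(2496 * public_ips * instances * pct)

def p2s_bandwidth_threshold_mbps(scale_units: int, pct: float = 0.95) -> int:
    """Per the example above, 2 scale units give 1 Gbps aggregate,
    i.e. 500 Mbps per scale unit."""
    return int(scale_units * 500 * pct)

def p2s_connection_threshold(scale_units: int, pct: float = 0.90) -> int:
    """A 1-scale-unit P2S gateway supports up to 500 concurrent connections."""
    return int(scale_units * 500 * pct)

print(p2s_bandwidth_threshold_mbps(2))  # 950, matching the 950 Mbps example above
print(p2s_connection_threshold(1))      # 450, matching the 450-connection example
print(snat_alert_threshold(1, 2))       # 95% of the SNAT ports of 1 IP on 2 instances
```

Computing thresholds this way keeps the alert rules consistent if you later change the number of scale units or public IP addresses: recompute and update the **Threshold** value instead of hand-picking a new number.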
virtual-wan | Openvpn Azure Ad Client Mac | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-client-mac.md | -> * The Azure VPN Client may not be available in all regions due to local regulations. +> * The Azure VPN Client might not be available in all regions due to local regulations. > * Microsoft Entra authentication is supported only for OpenVPN® protocol connections and requires the Azure VPN client. > Before you can connect and authenticate using Microsoft Entra ID, you must first Configure the following settings: * **Connection Name:** The name by which you want to refer to the connection profile.- * **VPN Server:** This name is the name that you want to use to refer to the server. The name you choose here does not need to be the formal name of a server. + * **VPN Server:** This name is the name that you want to use to refer to the server. The name you choose here doesn't need to be the formal name of a server. * **Server Validation** * **Certificate Information:** The certificate CA. * **Server Secret:** The server secret. Before you can connect and authenticate using Microsoft Entra ID, you must first 1. Using your credentials, sign in to connect. :::image type="content" source="media/openvpn-azure-ad-client-mac/add-4.png" alt-text="Screenshot of Azure VPN Client sign in to connect.":::-1. Once connected, you will see the **Connected** status. When you want to disconnect, click **Disconnect** to disconnect the connection. +1. Once connected, you'll see the **Connected** status. When you want to disconnect, click **Disconnect** to disconnect the connection. :::image type="content" source="media/openvpn-azure-ad-client-mac/add-5.png" alt-text="Screenshot of Azure VPN Client connected and disconnect button."::: |
virtual-wan | Openvpn Azure Ad Tenant | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-tenant.md | |
virtual-wan | Packet Capture Site To Site Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/packet-capture-site-to-site-portal.md | Verify that you have the following configuration already set up in your environm * A Virtual WAN and a virtual hub. * A site-to-site VPN gateway deployed in the virtual hub.-* You may also have connections connecting VPN sites to your site-to-site VPN gateway. +* You can also have connections connecting VPN sites to your site-to-site VPN gateway. ## <a name="storage"></a> Create a storage account and container In this section, you start the packet capture on the virtual hub. ## <a name="filters"></a> Optional: Specify filters -To simplify your packet captures, you may specify filters on your packet capture to focus on specific behaviors. +To simplify your packet captures, you can specify filters on your packet capture to focus on specific behaviors. | Parameter | Description | Default values | Available values | ||||| To simplify your packet captures, you may specify filters on your packet capture > [!NOTE]-> For TracingFlags and TCPFlags, you may specify multiple protocols by adding up the numerical values for the protocols you want to capture (same as a logical OR). For example, if you want to capture only ESP and OPVN packets, specify a TracingFlag value of 8+1 = 9. +> For TracingFlags and TCPFlags, you can specify multiple protocols by adding up the numerical values for the protocols you want to capture (same as a logical OR). For example, if you want to capture only ESP and OPVN packets, specify a TracingFlag value of 8+1 = 9. > ## Stop a packet capture This section helps you stop or abort a packet capture. -1. On the virtual hub page, click the **Packet Capture** button to open the **Packet Capture** page, then click **Stop**. This opens the **Stop Packet Capture** page. At this point, the packet capture is not yet stopped. +1. On the virtual hub page, click the **Packet Capture** button to open the **Packet Capture** page, then click **Stop**. This opens the **Stop Packet Capture** page. At this point, the packet capture isn't yet stopped. :::image type="content" source="./media/packet-capture-site-to-site-portal/packet-stop.png" alt-text="Graphic showing the Stop button." lightbox="./media/packet-capture-site-to-site-portal/packet-stop-expand.png"::: 1. On the **Stop Packet Capture** page, paste the *SAS URL* for the storage container that you created earlier into the **Output Sas Url** field. This is the location where the packet capture files will be stored. |
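The note in the row above sums flag values; because the values are distinct powers of two, a bitwise OR gives the same result. A small illustration (not part of the original article):

```azurepowershell-interactive
# ESP = 8, OPVN = 1 (values from the TracingFlags row of the filter table)
$tracingFlags = 8 -bor 1    # evaluates to 9, identical to 8 + 1 when flags don't overlap
```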
virtual-wan | Packet Capture Site To Site Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/packet-capture-site-to-site-powershell.md | Verify that you have the following configuration already set up in your environm * A Virtual WAN and a virtual hub. * A site-to-site VPN gateway deployed in the virtual hub.-* You may also have connections connecting VPN sites to your site-to-site VPN gateway. +* You can also have connections connecting VPN sites to your site-to-site VPN gateway. ### Working with Azure PowerShell Verify that you have the following configuration already set up in your environm ### Set up the environment -Use the following command to verify that you are using the correct subscription and are logged in as a user that has permissions to perform the packet capture on the site-to-site VPN gateway +Use the following command to verify that you're using the correct subscription and are logged in as a user that has permissions to perform the packet capture on the site-to-site VPN gateway ```azurepowershell-interactive $subid = "<insert Virtual WAN subscription ID here>" This section helps you start a packet capture for your site-to-site VPN gateway ## <a name="filters"></a> Optional: Specify filters -To simplify your packet captures, you may specify filters on your packet capture to focus on specific behaviors. +To simplify your packet captures, you can specify filters on your packet capture to focus on specific behaviors. >[!NOTE]-> For TracingFlags and TCPFlags, you may specify multiple protocols by adding up the numerical values for the protocols you wish to capture (same as a logical OR). For example, if you want to capture only ESP and OPVN packets, specify a TracingFlag value of 8+1 = 9. +> For TracingFlags and TCPFlags, you can specify multiple protocols by adding up the numerical values for the protocols you wish to capture (same as a logical OR). For example, if you want to capture only ESP and OPVN packets, specify a TracingFlag value of 8+1 = 9. | Parameter | Description | Default values | Available values| | | | | | Start-AzVpnGatewayPacketCapture -ResourceGroupName $rg -Name "<name of the Gatew We recommend that you let the packet capture run for at least 600 seconds before stopping. When you stop a packet capture, the parameters are similar to the parameters in the [Start a packet capture](#start) section. In the command, the SAS URL value was generated in the [Create a storage account](#storage) section. If the `SasUrl` parameter isn't configured correctly, the capture might fail with storage errors. -When you are ready to stop the packet capture, run the following command: +When you're ready to stop the packet capture, run the following command: ```azurepowershell-interactive Stop-AzVpnGatewayPacketCapture -ResourceGroupName $rg -Name <GatewayName> -SasUrl $sasurl |
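Filters can be supplied when starting the capture. A hedged sketch, assuming `Start-AzVpnGatewayPacketCapture` accepts a JSON filter string through a `-FilterData` parameter whose keys match the filter table; verify both the parameter and key names against your Az.Network version:

```azurepowershell-interactive
# Assumption: -FilterData takes a JSON string with keys from the filter table.
$filter = '{"TracingFlags":9,"TCPFlags":-1}'   # ESP + OPVN packets only
Start-AzVpnGatewayPacketCapture -ResourceGroupName $rg -Name "<GatewayName>" -FilterData $filter
```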
virtual-wan | Virtual Wan Expressroute About | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-expressroute-about.md | description: Learn about using ExpressRoute in Azure Virtual WAN to connect your Previously updated : 12/13/2022 Last updated : 11/21/2023 # About ExpressRoute connections in Azure Virtual WAN Dynamic routing (BGP) is supported. For more information, please see [Dynamic Ro ## ExpressRoute connection concepts | Concept| Description| Notes|-| --| --| --| +|--|--|--| | Propagate Default Route|If the Virtual WAN hub is configured with a 0.0.0.0/0 default route, this setting controls whether the 0.0.0.0/0 route is advertised to your ExpressRoute-connected site. The default route doesn't originate in the Virtual WAN hub. The route can be a static route in the default route table or 0.0.0.0/0 advertised from on-premises. | This field can be set to enabled or disabled.| | Routing Weight|If the Virtual WAN hub learns the same prefix from multiple connected ExpressRoute circuits, then the ExpressRoute connection with the higher weight will be preferred for traffic destined for this prefix. | This field can be set to a number between 0 and 32000.| ## ExpressRoute circuit concepts | Concept| Description| Notes|-| --| --| --| +|--|--|--| | Authorization Key| An authorization key is granted by a circuit owner and is valid for only one ExpressRoute connection. | To redeem and connect an ExpressRoute circuit that isn't in your subscription, you'll need to collect the authorization key from the ExpressRoute circuit owner.| | Peer circuit URI| This is the Resource ID of the ExpressRoute circuit (which you can find under the **Properties** setting pane of the ExpressRoute Circuit). | To redeem and connect an ExpressRoute circuit that isn't in your subscription, you'll need to collect the Peer Circuit URI from the ExpressRoute circuit owner. | |
virtual-wan | Virtual Wan Point To Site Azure Ad | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/virtual-wan-point-to-site-azure-ad.md | |
virtual-wan | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/whats-new.md | You can also find the latest Azure Virtual WAN updates and subscribe to the RSS | |||||| |Feature|Software-as-a-service|Palo Alto Networks Cloud NGFW|General Availability of [Palo Alto Networks Cloud NGFW](https://aka.ms/pancloudngfwdocs), the first software-as-a-service security offering deployable within the Virtual WAN hub.|July 2023|Palo Alto Networks Cloud NGFW is now deployable in all Virtual WAN hubs (new and old). See [Limitations of Palo Alto Networks Cloud NGFW](how-to-palo-alto-cloud-ngfw.md) for a full list of limitations and regional availability. Same limitations as routing intent.| |Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Fortinet NGFW](https://www.fortinet.com/products/next-generation-firewall)|General Availability of [Fortinet NGFW](https://aka.ms/fortinetngfwdocumentation) and [Fortinet SD-WAN/NGFW dual-role](https://aka.ms/fortinetdualroledocumentation) NVAs.|May 2023| Same limitations as routing intent. Doesn't support internet inbound scenario.|-|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Check Point CloudGuard Network Security for Azure Virtual WAN](https://www.checkpoint.com/cloudguard/microsoft-azure-security/wan/) |General Availability of [Check Point CloudGuard Network Security NVA deployable from Azure Marketplace](https://sc1.checkpoint.com/documents/IaaS/WebAdminGuides/EN/CP_CloudGuard_Network_for_Azure_vWAN_AdminGuide/Content/Topics-Azure-vWAN/Introduction.htm) within the Virtual WAN hub in all Azure regions.|May 2023|Same limitations as routing intent. Doesn't support internet inbound scenario.| +|Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Check Point CloudGuard Network Security for Azure Virtual WAN](https://www.checkpoint.com/cloudguard/microsoft-azure-security/wan/) |General Availability of Check Point CloudGuard Network Security NVA deployable from Azure Marketplace within the Virtual WAN hub in all Azure regions.|May 2023|Same limitations as routing intent. Doesn't support internet inbound scenario.| |Feature |Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs| [Versa SD-WAN](about-nva-hub.md#partners)|Preview of Versa SD-WAN.|November 2021| | |Feature|Network Virtual Appliances (NVAs)/Integrated Third-party solutions in Virtual WAN hubs|[Cisco Viptela, Barracuda and VMware (Velocloud) SD-WAN](about-nva-hub.md#partners) |General Availability of SD-WAN solutions in Virtual WAN.|June/July 2021| | |
vpn-gateway | Customer Controlled Gateway Maintenance | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/customer-controlled-gateway-maintenance.md | |
vpn-gateway | Tutorial Site To Site Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/tutorial-site-to-site-portal.md | |
vpn-gateway | Vpn Gateway Create Site To Site Rm Powershell | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-create-site-to-site-rm-powershell.md | Create your virtual network. -Location 'East US' -AddressPrefix 10.1.0.0/16 -Subnet $subnet1, $subnet2 ``` -### <a name="gatewaysubnet"></a>To add a gateway subnet to a virtual network you have already created +#### <a name="gatewaysubnet"></a>To add a gateway subnet to a virtual network you have already created Use the steps in this section if you already have a virtual network, but need to add a gateway subnet. $gwipconfig = New-AzVirtualNetworkGatewayIpConfig -Name gwipconfig1 -SubnetId $s ## <a name="CreateGateway"></a>5. Create the VPN gateway -Create the virtual network VPN gateway. --Use the following values: +Create the virtual network VPN gateway. Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. The following values are used in the example: * The *-GatewayType* for a site-to-site configuration is *Vpn*. The gateway type is always specific to the configuration that you're implementing. For example, other gateway configurations might require -GatewayType ExpressRoute. * The *-VpnType* can be *RouteBased* (referred to as a Dynamic Gateway in some documentation), or *PolicyBased* (referred to as a Static Gateway in some documentation). For more information about VPN gateway types, see [About VPN Gateway](vpn-gateway-about-vpngateways.md). New-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1 ` -VpnType RouteBased -GatewaySku VpnGw2 ``` -Creating a gateway can often take 45 minutes or more, depending on the selected gateway SKU. - ## <a name="ConfigureVPNDevice"></a>6. Configure your VPN device Site-to-site connections to an on-premises network require a VPN device. In this step, you configure your VPN device. When configuring your VPN device, you need the following items: |
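Because gateway creation can take 45 minutes or more, checking the provisioning state before moving on to VPN device configuration can save a failed next step. A small sketch reusing the names from the example above (not part of the original article):

```azurepowershell-interactive
# Poll the gateway created above; "Succeeded" means it's ready to configure
$gw = Get-AzVirtualNetworkGateway -Name VNet1GW -ResourceGroupName TestRG1
$gw.ProvisioningState
```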