Updates from: 10/09/2024 01:08:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Aad Sspr Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/aad-sspr-technical-profile.md
description: Custom policy reference for Microsoft Entra ID SSPR technical profile.
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure AD B2C, I want to define a Microsoft Entra ID self-service password reset technical profile.
active-directory-b2c Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/access-tokens.md
description: Learn how to request an access token from Azure Active Directory B2C.
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure Active Directory B2C with a web application and web API, I want to understand how to request an access token, so that I can authenticate and authorize users to access my APIs securely.
active-directory-b2c Active Directory Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/active-directory-technical-profile.md
description: Define a Microsoft Entra technical profile in a custom policy in Azure Active Directory B2C.
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to define a technical profile for Microsoft Entra user management, so that I can interact with a claims provider that supports the standardized protocol and perform operations like reading, writing, and deleting user accounts.
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Title: Token enrichment - Azure Active Directory B2C
description: Enrich tokens with claims from external identity data sources using APIs or outbound webhooks. -+ Last updated 01/17/2023 -+ zone_pivot_groups: b2c-policy-type # Customer intent: I want to enrich tokens with claims from external identity data sources using APIs or outbound webhooks.
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
Title: Add API connectors to sign up user flows description: Configure an API connector to be used in a sign-up user flow. --++ Last updated 01/24/2024
active-directory-b2c Add Captcha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-captcha.md
Title: Enable CAPTCHA in Azure Active Directory B2C
description: How to enable CAPTCHA for user flows and custom policies in Azure Active Directory B2C. -+ Last updated 05/03/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer, I want to enable CAPTCHA in consumer-facing application that is secured by Azure Active Directory B2C, so that I can protect my sign-in and sign-up flows from automated attacks.
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md
Last updated 03/22/2024 --++ #Customer Intent: As a developer integrating Azure AD B2C into my application, I want to add an identity provider, so that users can sign in with their existing social or enterprise accounts without creating a new account.
active-directory-b2c Add Native Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-native-application.md
Last updated 01/11/2024 --++ #Customer intent: As a developer integrating Azure Active Directory B2C with a native client application, I want to register the client resources in my tenant, so that my application can communicate with Azure Active Directory B2C.
active-directory-b2c Add Password Change Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-change-policy.md
description: Learn how to set up a custom policy so users can change their password.
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
description: Learn how to set up a password reset flow in Azure Active Directory B2C (Azure AD B2C). -+ Last updated 11/27/2023 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As an Azure AD B2C administrator, I want to set up a password reset flow for local accounts, so that users can reset their passwords if they forget them.
active-directory-b2c Add Profile Editing Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-profile-editing-policy.md
description: Learn how to set up a profile editing flow in Azure Active Directory B2C.
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Add Ropc Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-ropc-policy.md
description: Learn how to set up the resource owner password credentials (ROPC)
-+ Last updated 09/11/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer integrating Azure AD B2C into my application, I want to set up the resource owner password credentials flow, so that my application can exchange valid credentials for tokens and authenticate users.
active-directory-b2c Add Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-in-policy.md
description: Learn how to set up a sign-in flow in Azure Active Directory B2C.
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Add Sign Up And Sign In Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-sign-up-and-sign-in-policy.md
description: Learn how to set up a sign-up and sign-in flow in Azure Active Directory B2C. -+ Last updated 03/22/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Add Web Api Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-web-api-application.md
Previously updated : 01/11/2024 Last updated : 10/11/2024 --++ #Customer intent: As a developer integrating a web API with Azure Active Directory B2C, I want to register my application in the Azure portal, so that it can accept and respond to requests from client applications with access tokens.
This article shows you how to register web API resources in your Azure Active Directory B2C (Azure AD B2C) tenant so that they can accept and respond to requests by client applications that present an access token.
-To register an application in your Azure AD B2C tenant, you can use the Azure portal's new unified **App registrations** experience or the legacy **Applications (Legacy)** experience. [Learn more about the new experience](./app-registrations-training-guide.md).
-
-#### [App registrations](#tab/app-reg-ga/)
+To register an application in your Azure AD B2C tenant, follow these steps:
1. Sign in to the [Azure portal](https://portal.azure.com).
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
To register an application in your Azure AD B2C tenant, you can use the Azure po
1. Select **Register**.
1. Record the **Application (client) ID** for use in your web API's code.
-
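As a rough sketch of where these values end up, a web API's configuration typically captures the tenant name, the policy (user flow) name, and the **Application (client) ID** recorded above. The tenant, policy, and ID below are hypothetical placeholders; the metadata URL follows the b2clogin.com pattern described elsewhere in these articles.

```typescript
// Hypothetical values recorded from the registration steps above; replace them with your own.
const tenantName = "contosotenant";                          // <tenant>.onmicrosoft.com prefix
const policyName = "B2C_1_signupsignin1";                    // user flow or custom policy that signs users in
const apiClientId = "00000000-0000-0000-0000-000000000000";  // Application (client) ID recorded above

// OpenID Connect metadata document for the policy, following the b2clogin.com pattern.
// Token-validation middleware typically reads the signing keys and issuer from this document.
const openIdMetadataUrl =
  `https://${tenantName}.b2clogin.com/${tenantName}.onmicrosoft.com/${policyName}/v2.0/.well-known/openid-configuration`;

export const apiAuthConfig = { tenantName, policyName, apiClientId, openIdMetadataUrl };
```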
-#### [Applications (Legacy)](#tab/applications-legacy/)
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
-1. Choose **All services** in the top-left corner of the Azure portal, and then search for and select **Azure AD B2C**.
-1. Select **Applications (Legacy)**, and then select **Add**.
-1. Enter a name for the application. For example, *webapi1*.
-1. For **Include web app/ web API** and **Allow implicit flow**, select **Yes**.
-1. For **Reply URL**, enter an endpoint where Azure AD B2C should return any tokens that your application requests. In your production application, you might set the reply URL to a value such as `https://localhost:44332`. For testing purposes, set the reply URL to `https://jwt.ms`.
-1. For **App ID URI**, enter the identifier used for your web API. The full identifier URI including the domain is generated for you. For example, `https://contosotenant.onmicrosoft.com/api`.
-1. Select **Create**.
-1. On the properties page, record the application ID that you'll use when you configure the web application.
-
-* * *
-
## Configure scopes

Scopes provide a way to govern access to protected resources. Scopes are used by the web API to implement scope-based access control. For example, users of the web API could have both read and write access, or users of the web API might have only read access. In this tutorial, you use scopes to define read and write permissions for the web API.
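A minimal sketch of what scope-based access control can look like inside the web API, assuming that token-validation middleware has already run and attached the token's claims to the request. The scope names `demo.read` and `demo.write` are illustrative, not prescribed by the article.

```typescript
import type { Request, Response, NextFunction } from "express";

// Assumes earlier middleware (for example, passport-azure-ad or express-jwt) has already
// validated the bearer token and attached its claims to req.authInfo.
type TokenClaims = { scp?: string };

// Returns middleware that rejects requests whose token lacks the required scope.
export function requireScope(requiredScope: string) {
  return (req: Request & { authInfo?: TokenClaims }, res: Response, next: NextFunction) => {
    const granted = (req.authInfo?.scp ?? "").split(" "); // scp is a space-separated list of scopes
    if (granted.includes(requiredScope)) {
      return next();
    }
    return res.status(403).json({ error: `missing required scope: ${requiredScope}` });
  };
}

// Usage (hypothetical routes):
//   app.get("/hello", requireScope("demo.read"), handler);
//   app.post("/hello", requireScope("demo.write"), handler);
```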
To call a protected web API from an application, you need to grant your applicat
[!INCLUDE [active-directory-b2c-permissions-api](../../includes/active-directory-b2c-permissions-api.md)]
-Your application is registered to call the protected web API. A user authenticates with Azure AD B2C to use the application. The application obtains an authorization grant from Azure AD B2C to access the protected web API.
+Your application is registered to call the protected web API. A user authenticates with Azure AD B2C to use the application. The application obtains an authorization grant from Azure AD B2C to access the protected web API.
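One hedged way the client application might obtain that authorization grant and redeem it for an access token is with MSAL.js 2.x (`@azure/msal-browser`). Every tenant name, client ID, redirect URI, and scope URI below is a placeholder for the values from your own registrations.

```typescript
import { PublicClientApplication } from "@azure/msal-browser";

// Placeholder configuration for the client application registered in Azure AD B2C.
const msalInstance = new PublicClientApplication({
  auth: {
    clientId: "11111111-1111-1111-1111-111111111111",  // client app's Application (client) ID
    authority: "https://contosotenant.b2clogin.com/contosotenant.onmicrosoft.com/B2C_1_signupsignin1",
    knownAuthorities: ["contosotenant.b2clogin.com"],
    redirectUri: "http://localhost:3000",
  },
});

// Placeholder scope URI exposed by the protected web API.
const apiScopes = ["https://contosotenant.onmicrosoft.com/api/demo.read"];

export async function getApiToken(): Promise<string> {
  await msalInstance.initialize();
  const accounts = msalInstance.getAllAccounts();
  if (accounts.length === 0) {
    // No signed-in user yet: fall back to an interactive request.
    return (await msalInstance.acquireTokenPopup({ scopes: apiScopes })).accessToken;
  }
  try {
    // Prefer a cached or silently refreshed token for the signed-in account.
    return (await msalInstance.acquireTokenSilent({ scopes: apiScopes, account: accounts[0] })).accessToken;
  } catch {
    return (await msalInstance.acquireTokenPopup({ scopes: apiScopes })).accessToken;
  }
}
```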
active-directory-b2c Age Gating https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/age-gating.md
Title: Enable age gating in Azure Active Directory B2C description: Learn about how to identify minors using your application.- ---+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Analytics With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/analytics-with-application-insights.md
description: Learn how to enable event logs in Application Insights from Azure A
-+ Last updated 01/26/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Api Connector Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/api-connector-samples.md
Last updated 01/11/2024 --++ #Customer intent: As a developer integrating web APIs into user flows using API connectors, I want to access code samples that use API connectors, so that I can easily implement functionality such as fraud and abuse protection, identity verification, and invitation codes in my applications.
active-directory-b2c Api Connectors Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/api-connectors-overview.md
Title: About API connectors in Azure AD B2C description: Use Microsoft Entra API connectors to customize and extend your user flows and custom policies by using REST APIs or outbound webhooks to external identity data sources. --++ Last updated 01/11/2024
active-directory-b2c App Registrations Training Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/app-registrations-training-guide.md
description: An introduction to the new App registration experience in Azure AD
-+ Last updated 01/11/2024 -+ #Customer intent: As an Azure AD B2C user, I want to understand the new App registrations experience, so that I can manage all my app registrations in one place and take advantage of new features like unified app list, API permissions, and support for different account types.
active-directory-b2c Application Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/application-types.md
Title: Application types supported by Azure AD B2C description: Learn about the types of applications you can use with Azure Active Directory B2C.- + ---+ Previously updated : 01/11/2024---- Last updated : 10/11/2024+ #Customer intent: As a developer building an application that requires user authentication, I want to understand the different types of applications that can be used with Azure Active Directory B2C, so that I can choose the appropriate authentication method for my application.
To take advantage of this flow, your application can use an authentication libra
### Implicit grant flow
-Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), only support the [implicit grant flow](implicit-flow-single-page-application.md) or your application is implemented to use implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**.
-
-We **don't recommended** this approach.
+Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), support only the [implicit grant flow](implicit-flow-single-page-application.md), or your application might be implemented to use the implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, the implicit grant flow doesn't return a **Refresh token**.
This authentication flow doesn't include application scenarios that use cross-platform JavaScript frameworks such as Electron and React-Native. Those scenarios require further capabilities for interaction with the native platforms.
+> [!WARNING]
+> Microsoft recommends that you do *not* use the implicit grant flow. The recommended way to support SPAs is the [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md). Certain configurations of the implicit flow require a very high degree of trust in the application and carry risks that aren't present in other flows. Use the implicit flow only when other, more secure flows aren't viable. For more information, see the [security concerns with implicit grant flow](/entra/identity-platform/v2-oauth2-implicit-grant-flow#security-concerns-with-implicit-grant-flow).
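For contrast, here's a sketch of the authorization request the recommended flow sends. The endpoint shape follows the B2C authorization code flow documentation; the tenant, policy, client ID, redirect URI, and scope are placeholders, and in practice a library such as MSAL.js 2.x builds this URL and the PKCE values for you.

```typescript
// Rough sketch of an authorization code (with PKCE) request; all values are placeholders.
const authorizeUrl = new URL(
  "https://contosotenant.b2clogin.com/contosotenant.onmicrosoft.com/B2C_1_signupsignin1/oauth2/v2.0/authorize"
);
authorizeUrl.search = new URLSearchParams({
  client_id: "11111111-1111-1111-1111-111111111111",
  response_type: "code",                                   // an authorization code, not tokens, comes back on the redirect
  redirect_uri: "https://localhost:5000/redirect",
  scope: "openid offline_access https://contosotenant.onmicrosoft.com/api/demo.read",
  code_challenge: "<base64url(SHA-256(code_verifier))>",   // PKCE challenge derived from a random verifier
  code_challenge_method: "S256",
}).toString();

console.log(authorizeUrl.toString());
```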
+ ## Web APIs
You can use Azure AD B2C to secure web services such as your application's RESTful web API. Web APIs can use OAuth 2.0 to secure their data by authenticating incoming HTTP requests using tokens. The caller of a web API appends a token in the authorization header of an HTTP request:
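A minimal sketch of that call, with a placeholder API URL; the only essential part is the `Authorization: Bearer` header carrying the access token issued by Azure AD B2C.

```typescript
// Calls a protected web API with the access token in the Authorization header.
// The API URL is a placeholder; the token comes from the sign-in/token request shown earlier.
async function callProtectedApi(accessToken: string): Promise<unknown> {
  const response = await fetch("https://api.contoso.example/hello", {
    headers: {
      Authorization: `Bearer ${accessToken}`, // bearer token issued for the web API's scope
    },
  });
  if (!response.ok) {
    throw new Error(`API call failed: ${response.status}`);
  }
  return response.json();
}
```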
active-directory-b2c Authorization Code Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/authorization-code-flow.md
Title: Authorization code flow - Azure Active Directory B2C
description: Learn how to build web apps by using Azure AD B2C and OpenID Connect authentication protocol. -+ Last updated 01/11/2024 -+ # Customer intent: As a developer who is building a web app, I want to learn more about the OAuth 2.0 authorization code flow in Azure AD B2C, so that I can add sign-up, sign-in, and other identity management tasks to my app.
active-directory-b2c Azure Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/azure-monitor.md
description: Learn how to log Azure AD B2C events with Azure Monitor by using de
-+ -+ Last updated 09/11/2024
active-directory-b2c B2c Global Identity Funnel Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-funnel-based-design.md
description: Learn the funnel-based design consideration for Azure AD B2C to provide customer identity management for global customers. -+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer, and I need to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
active-directory-b2c B2c Global Identity Proof Of Concept Funnel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-funnel.md
description: Learn how to create a proof of concept for funnel-based approach fo
-+ Last updated 01/26/2024 -+ # Customer intent: As a developer, I want to understand how to build a global identity solution using a funnel-based approach, so I can implement it in my organization's Azure AD B2C environment.
active-directory-b2c B2c Global Identity Proof Of Concept Regional https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-proof-of-concept-regional.md
Title: Azure Active Directory B2C global identity framework proof of concept for
description: Learn how to create a proof of concept regional based approach for Azure AD B2C to provide customer identity and access management for global customers. -+ Last updated 01/24/2024 -+ # Customer intent: I'm a developer implementing Azure Active Directory B2C, and I want to configure region-based sign-up, sign-in, and password reset journeys. My goal is for users to be directed to the correct region and their data managed accordingly.
active-directory-b2c B2c Global Identity Region Based Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-region-based-design.md
description: Learn the region-based design consideration for Azure AD B2C to provide customer identity management for global customers. -+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer implementing a global identity solution. I need to understand the different scenarios and workflows for region-based design approach in Azure AD B2C. My goal is to design and implement the authentication and sign-up processes effectively for users from different regions.
active-directory-b2c B2c Global Identity Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2c-global-identity-solutions.md
description: Learn how to configure Azure AD B2C to provide customer identity and access management for global customers. -+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer building a customer-facing application. I need to understand the different approaches to implement an identity platform using Azure AD B2C tenants for a globally operating business model. I want to make an informed decision about the architecture that best suits my application's requirements.
active-directory-b2c B2clogin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/b2clogin.md
description: Learn about using b2clogin.com in your redirect URLs for Azure Active Directory B2C.
-+ Last updated 01/26/2024 -+ #Customer intent: As an Azure AD B2C application developer, I want to update the redirect URLs in my identity provider's applications to reference b2clogin.com or a custom domain, so that I can authenticate users with Azure AD B2C using the updated endpoints.
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
description: Recommendations and best practices to consider when working with Az
-+ Last updated 10/07/2024-+ #Customer intent: As an application developer integrating Azure Active Directory B2C, I want recommendations and best practices for integrating Azure AD B2C into my application environment, so that I can ensure a secure and efficient integration with Azure AD B2C.
active-directory-b2c Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/billing.md
Title: Billing model for Azure Active Directory B2C
description: Learn about Azure AD B2C's monthly active users (MAU) billing model, how to link an Azure AD B2C tenant to an Azure subscription, and how to select the appropriate premium tier pricing. -+ Last updated 09/11/2024 -+ #Customer intent: As a business decision maker managing an Azure AD B2C tenant, I want to understand the billing model based on monthly active users (MAU), so that I can determine the cost and pricing structure for my Azure AD B2C tenant.
active-directory-b2c Boolean Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/boolean-transformations.md
description: Boolean claims transformation examples for the Identity Experience
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer working with Azure Active Directory B2C, I want to understand how to use boolean claims transformations, so that I can manipulate and evaluate boolean claims in my application.
active-directory-b2c Buildingblocks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/buildingblocks.md
description: Specify the BuildingBlocks element of a custom policy in Azure Acti
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer creating a custom policy for Azure Active Directory B2C, I want to understand the structure and elements of the BuildingBlocks section, so that I can properly define the necessary components for my custom policy.
active-directory-b2c Captcha Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/captcha-technical-profile.md
description: Define a CAPTCHA technical profile custom policy in Azure Active Di
-+ Last updated 01/17/2024 -+ #Customer intent: As a developer integrating a customer-facing application with Azure AD B2C, I want to define a CAPTCHA technical profile, so that I can secure sign-up and sign-in flows from automated attacks.
active-directory-b2c Claim Resolver Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claim-resolver-overview.md
description: Learn how to use claims resolvers in a custom policy in Azure Activ
-+ Last updated 01/17/2024 -+ #Customer intent: As a developer using Azure AD B2C custom policies, I want to understand how to use claim resolvers in my technical profiles, so that I can provide context information about authorization requests and populate claims with dynamic values.
active-directory-b2c Claims Transformation Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claims-transformation-technical-profile.md
description: Define a claims transformation technical profile in a custom policy
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer working with Azure Active Directory B2C custom policies, I want to define a claims transformation technical profile, so that I can manipulate claims values, validate claims, or set default values for a set of output claims.
active-directory-b2c Claimsproviders https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimsproviders.md
description: Specify the ClaimsProvider element of a custom policy in Azure Acti
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating with Azure Active Directory B2C, I want to understand how claims providers work and how to configure their technical profiles, so that I can communicate with different parties and leverage their capabilities.
active-directory-b2c Claimsschema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimsschema.md
description: Specify the ClaimsSchema element of a custom policy in Azure Active
-+ Last updated 01/11/2024 -+
active-directory-b2c Claimstransformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/claimstransformations.md
description: Definition of the ClaimsTransformations element in the Identity Exp
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer creating custom policies in Azure Active Directory B2C, I want to understand how to use claims transformations, so that I can convert and manipulate claims in user journeys.
active-directory-b2c Client Credentials Grant Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/client-credentials-grant-flow.md
description: Learn how to set up the OAuth 2.0 client credentials flow in Azure Active Directory B2C. -+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ zone_pivot_groups: b2c-policy-type
The following example shows a client credentials user journey. The first and the
-## Next steps
-
-Learn how to [set up a resource owner password credentials flow in Azure AD B2C](add-ropc-policy.md)
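For orientation, here's a hedged sketch of the token request an application makes in the client credentials flow this article describes. The token endpoint, app IDs, secret handling, and `.default` scope below are assumptions modeled on the documented pattern; substitute the values from your own tenant, policy, and app registrations.

```typescript
// Sketch of a client credentials token request against an Azure AD B2C policy's token endpoint.
// All identifiers are placeholders; never hard-code a real client secret.
async function getAppToken(): Promise<string> {
  const tokenEndpoint =
    "https://contosotenant.b2clogin.com/contosotenant.onmicrosoft.com/B2C_1A_SIGNUP_SIGNIN/oauth2/v2.0/token";
  const clientSecret = "<client-secret-from-a-secure-store>";

  const response = await fetch(tokenEndpoint, {
    method: "POST",
    headers: { "Content-Type": "application/x-www-form-urlencoded" },
    body: new URLSearchParams({
      grant_type: "client_credentials",
      client_id: "22222222-2222-2222-2222-222222222222",
      client_secret: clientSecret,
      scope: "https://contosotenant.onmicrosoft.com/api/.default", // .default requests the app's configured permissions
    }),
  });

  const payload = await response.json();
  if (!response.ok) {
    throw new Error(`Token request failed: ${payload.error_description ?? response.status}`);
  }
  return payload.access_token as string;
}
```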
active-directory-b2c Conditional Access Identity Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-identity-protection-overview.md
Title: Identity Protection and Conditional Access in Azure AD B2C description: Learn how Identity Protection gives you visibility into risky sign-ins and risk detections. Find out how Conditional Access lets you enforce organizational policies based on risk events in your Azure AD B2C tenants.--++ Last updated 01/11/2024
active-directory-b2c Conditional Access Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-technical-profile.md
description: Custom policy reference for Conditional Access technical profiles i
-+ Last updated 01/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to define a Conditional Access technical profile in a custom policy, so that I can automate risk assessment and enforce organizational policies for sign-ins, including blocking access and challenging users with multi-factor authentication.
active-directory-b2c Conditional Access User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/conditional-access-user-flow.md
Title: Add Conditional Access to a user flow in Azure AD B2C description: Learn how to add Conditional Access to your Azure AD B2C user flows. Configure multifactor authentication (MFA) settings and Conditional Access policies in your user flows to enforce policies and remediate risky sign-ins. --++ Last updated 09/11/2024
active-directory-b2c Configure A Sample Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-a-sample-node-web-app.md
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to configure authentication in a Node.js web application using Azure Active Directory B2C, so that I can enable users to sign in, sign out, update profile, and reset password using Azure AD B2C user flows.
active-directory-b2c Configure Authentication In Azure Static App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-static-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication In Azure Web App File Based https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app-file-based.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication In Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-azure-web-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication In Sample Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-in-sample-node-web-app-with-api.md
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to configure authentication in a Node.js web API using Azure Active Directory B2C, so that I can protect the web API with token-based authentication and ensure that requests are accompanied by a valid access token.
active-directory-b2c Configure Authentication Sample Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-android-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication Sample Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-angular-spa-app.md
description: Learn how to use Azure Active Directory B2C to sign in and sign up
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication Sample Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-ios-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication Sample Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-python-web-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 06/04/2024 -+
active-directory-b2c Configure Authentication Sample React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-react-spa-app.md
description: Learn how to use Azure Active Directory B2C to sign in and sign up
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication Sample Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-spa-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+
Record the **Application (client) ID** to use later, when you configure the web
### Step 2.4: Enable the implicit grant flow
-In your own environment, if your SPA app uses MSAL.js 1.3 or earlier and the implicit grant flow or you configure [https://jwt.ms/](https://jwt.ms/) app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:
+You can enable the implicit grant flow for two reasons: when you're using MSAL.js version 1.3 or earlier, or when you use an app registration to [test a user flow](add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow#test-the-user-flow).
-1. In the left menu, under **Manage**, select **Authentication**.
+Use these steps to enable implicit grant flow for your app:
+
+1. Select the app registration you created.
+
+1. Under **Manage**, select **Authentication**.
1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
1. Select **Save**.
-If your app uses MSAL.js 2.0 or later, don't enable implicit flow grant as MSAL.js 2.0+ supports the authorization code flow with PKCE. The SPA app in this article uses PKCE flow, and so you don't need to enable implicit grant flow.
+> [!NOTE]
+> If your app uses MSAL.js 2.0 or later, don't enable the implicit grant flow, because MSAL.js 2.0+ supports the [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md). If you enable the implicit grant flow to test a user flow, make sure you disable the implicit grant flow settings before you deploy your app to production.
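A short sketch of what an MSAL.js 2.0+ configuration might look like for this SPA, using the authorization code flow with PKCE so the implicit grant settings above can stay cleared. All IDs, authority values, and URIs are placeholders for the values from your own registration.

```typescript
import { PublicClientApplication } from "@azure/msal-browser";

// Placeholder SPA configuration; MSAL.js 2.0+ uses the authorization code flow with PKCE by default.
const pca = new PublicClientApplication({
  auth: {
    clientId: "33333333-3333-3333-3333-333333333333",
    authority: "https://contosotenant.b2clogin.com/contosotenant.onmicrosoft.com/B2C_1_signupsignin1",
    knownAuthorities: ["contosotenant.b2clogin.com"],
    redirectUri: "http://localhost:3000",
  },
  cache: { cacheLocation: "sessionStorage" },
});

export async function signIn(): Promise<void> {
  await pca.initialize();
  // Redirects to the B2C-hosted sign-in page; tokens are returned via the authorization code flow.
  await pca.loginRedirect({ scopes: ["openid", "offline_access"] });
}
```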
### Step 2.5: Grant permissions
active-directory-b2c Configure Authentication Sample Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app-with-api.md
description: This article discusses using Azure Active Directory B2C to sign in
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication Sample Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-web-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Authentication Sample Wpf Desktop App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-authentication-sample-wpf-desktop-app.md
description: This article discusses how to use Azure Active Directory B2C to si
-+ Last updated 01/11/2024 -+
active-directory-b2c Configure Security Analytics Sentinel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-security-analytics-sentinel.md
description: Use Microsoft Sentinel to perform security analytics for Azure Acti
-+ Last updated 07/31/2024 -+ #Customer intent: As an IT professional, I want to gather logs and audit data using Microsoft Sentinel and Azure Monitor to secure applications that use Azure Active Directory B2C.
active-directory-b2c Configure Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-tokens.md
Title: Configure tokens - Azure Active Directory B2C
description: Learn how to configure the token lifetime and compatibility settings in Azure Active Directory B2C. -+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer configuring token lifetimes in Azure Active Directory B2C, I want to understand the options and settings available for token lifetime and compatibility, so that I can customize them to fit the needs of my application and ensure secure access to resources.
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-user-input.md
description: Learn how to customize user input and add user attributes to the sign-up or sign-in journey in Azure Active Directory B2C. -+ Last updated 12/13/2023 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to add a new attribute to the sign-up journey, customize the input type, and define whether it's required, so that I can collect specific user information during the sign-up process.
active-directory-b2c Contentdefinitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/contentdefinitions.md
description: Specify the ContentDefinitions element of a custom policy in Azure
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to customize the user interface of my application using Azure Active Directory B2C, so that I can provide a personalized and branded experience to my customers.
active-directory-b2c Cookie Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/cookie-definitions.md
description: Provides definitions for the cookies used in Azure Active Directory
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure AD B2C into my application, I want to understand the cookies used by Azure AD B2C, so that I can properly handle and manage them in my application's authentication flow.
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
description: Learn how to enable custom domains in your redirect URLs for Azure Active Directory B2C, so that my users have a seamless experience. -+ Last updated 03/01/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Custom Email Mailjet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-mailjet.md
description: Learn how to integrate with Mailjet to customize the verification e
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Custom Email Sendgrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-email-sendgrid.md
description: Learn how to integrate with SendGrid to customize the verification
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Custom Policies Series Branch User Journey https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-branch-user-journey.md
description: Learn how to enable or disable Technical Profiles based on claims v
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to create branching in the user journey based on the values of data in a custom policy, so that I can provide different user experiences and collect specific information from users based on their selections.
In this article, you learn how to use the `EnabledForUserJourneys` element inside a
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
- You must have [Visual Studio Code (VS Code)](https://code.visualstudio.com/) installed on your computer.
active-directory-b2c Custom Policies Series Call Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-call-rest-api.md
description: Learn how to make an HTTP call to external API by using Azure Activ
-+ Previously updated : 01/22/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer integrating customer-facing apps with Azure Active Directory B2C custom policy, I want to learn how to call a REST API from my custom policy, so that I can send and receive data from external services.
In [Create branching in user journey by using Azure AD B2C custom policies](cust
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
- You must have [Node.js](https://nodejs.org) installed on your computer.
active-directory-b2c Custom Policies Series Collect User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-collect-user-input.md
description: Learn how to collect user inputs from a user and manipulate them by
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to collect and manipulate user inputs by writing a custom policy, so that I can customize the user interface and process the inputs as claims in a JWT token.
In this article, you learn how to write a custom policy that collects user input
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
+
- You must have [Visual Studio Code (VS Code)](https://code.visualstudio.com/) installed on your computer.
- Complete the steps in [Write your first Azure AD B2C custom policy - Hello World!](custom-policies-series-hello-world.md). This article is part of the [Create and run your own custom policies how-to guide series](custom-policies-series-overview.md).
active-directory-b2c Custom Policies Series Hello World https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-hello-world.md
description: Learn how to write your first custom policy. A custom that shows of
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer creating a custom policy for Azure Active Directory B2C, I want to learn how to configure the signing and encryption keys, build the custom policy file, upload the policy file to Azure portal, and test the custom policy, so that I can customize user flows to meet my business specific needs.
While you can use pre-made custom policy [starter pack](https://github.com/Azure
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
- You must have [Visual Studio Code (VS Code)](https://code.visualstudio.com/) installed on your computer.
active-directory-b2c Custom Policies Series Install Xml Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-install-xml-extensions.md
description: Learn how to validate custom policy files by using the TrustFrameworkPolicy schema.
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer working with Azure AD B2C custom policies, I want to validate my custom policy files using the TrustFrameworkPolicy schema, so that I can ensure that my files are properly formatted and free of errors before uploading them.
active-directory-b2c Custom Policies Series Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-overview.md
description: Learn how to create and run your own custom policies in Azure Activ
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As an identity app developer using Azure Active Directory B2C, I want to learn how to create and run my own custom policies, so that I can create complex user journeys and customize the behavior of the user experience to meet my business specific needs.
active-directory-b2c Custom Policies Series Sign Up Or Sign In Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in-federation.md
description: Learn how to configure a sign-up and sign-in flow for a social acco
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer, I want to set up a sign-up and sign-in flow with a social account using Azure Active Directory B2C custom policy, so that users can sign in to my application using their social media credentials.
For local accounts, a user account is uniquely identified by using the `objectId
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
- You must have [Visual Studio Code (VS Code)](https://code.visualstudio.com/) installed on your computer.
active-directory-b2c Custom Policies Series Sign Up Or Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-sign-up-or-sign-in.md
description: Learn how to configure a sign-up and sign-in flow for a local accou
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer, I want to set up a sign-up and sign-in flow for a local account using Azure Active Directory B2C custom policy, so that users can create and sign in to their accounts in my application.
Azure AD B2C custom policy provides an OpenID Connect technical profile, which y
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
- You must have [Visual Studio Code (VS Code)](https://code.visualstudio.com/) installed on your computer.
active-directory-b2c Custom Policies Series Store User https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-store-user.md
description: Learn how to create a user account in Azure AD B2C storage by using
-+ Previously updated : 09/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to create and read user accounts using custom policies, so that I can store and retrieve user information from Microsoft Entra ID storage and issue JWT tokens.
In [Call a REST API by using Azure Active Directory B2C custom policy](custom-po
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
- You must have [Visual Studio Code (VS Code)](https://code.visualstudio.com/) installed on your computer.
active-directory-b2c Custom Policies Series Validate User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md
description: Learn how to validate user inputs by using Azure Active Directory B
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to validate user inputs by using custom policies, so that I can ensure that the data entered by users is accurate and meets the required criteria.
Azure Active Directory B2C (Azure AD B2C) custom policy not only allows you to m
- If you don't have one already, [create an Azure AD B2C tenant](tutorial-create-tenant.md) that is linked to your Azure subscription. -- [Register a web application](tutorial-register-applications.md), and [enable ID token implicit grant](tutorial-register-applications.md#enable-id-token-implicit-grant). For the Redirect URI, use https://jwt.ms.
+- [Register a web application](tutorial-register-applications.md).
- You must have [Visual Studio Code (VS Code)](https://code.visualstudio.com/) installed on your computer.
active-directory-b2c Custom Policy Developer Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-developer-notes.md
description: Notes for developers on configuring and maintaining Azure AD B2C wi
-+ Previously updated : 02/24/2024 Last updated : 10/10/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to understand the available features and their availability, so that I can make informed decisions about which features to use in my application development.
The following table summarizes the OAuth 2.0 and OpenId Connect application auth
[Authorization code with PKCE](authorization-code-flow.md)| GA | GA | Allows users to sign in to mobile and single-page applications. The application receives an authorization code using proof key for code exchange (PKCE). The authorization code is redeemed to acquire a token to call web APIs. |
[Client credentials flow](client-credentials-grant-flow.md)| Preview | Preview | Allows access to web-hosted resources by using the identity of an application. Commonly used for server-to-server interactions that must run in the background, without immediate interaction with a user. |
[Device authorization grant](https://tools.ietf.org/html/rfc8628)| NA | NA | Allows users to sign in to input-constrained devices such as a smart TV, IoT device, or printer. |
-[Implicit flow](implicit-flow-single-page-application.md) | GA | GA | Allows users to sign in to single-page applications. The app gets tokens directly without performing a back-end server credential exchange.|
+[Implicit flow](implicit-flow-single-page-application.md) | GA | GA | Allows users to sign in to single-page applications. The app gets tokens directly without performing a back-end server credential exchange. <br> **Note:** The recommended flow for supporting SPAs is [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md).|
[On-behalf-of](../active-directory/develop/v2-oauth2-on-behalf-of-flow.md)| NA | NA | An application invokes a service or web API, which in turn needs to call another service or web API. <br /> <br /> For the middle-tier service to make authenticated requests to the downstream service, pass a *client credential* token in the authorization header. Optionally, you can include a custom header with the Azure AD B2C user's token. |
[OpenId Connect](openid-connect.md) | GA | GA | OpenID Connect introduces the concept of an ID token, which is a security token that allows the client to verify the identity of the user. |
[OpenId Connect hybrid flow](openid-connect.md) | GA | GA | Allows a web application to retrieve the ID token on the authorize request along with an authorization code. |
active-directory-b2c Custom Policy Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-overview.md
description: A topic about Azure Active Directory B2C custom policies and the Id
-+ Last updated 01/11/2024 -+
active-directory-b2c Custom Policy Reference Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policy-reference-sso.md
description: Learn how to manage single sign-on sessions using custom policies i
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer configuring session behavior in Azure Active Directory B2C, I want to understand how to use session providers to manage single sign-on (SSO) sessions for different technical profiles, so that I can customize the SSO behavior and control the flow of my custom policy.
active-directory-b2c Customize Ui With Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui-with-html.md
description: Learn how to customize the user interface with HTML templates for your applications that use Azure Active Directory B2C. -+ Last updated 01/22/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Customize Ui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/customize-ui.md
description: Learn how to customize the user interface for your applications tha
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer, I want to customize the user interface of my application, so that I can provide a seamless and branded user experience for sign-up, sign-in, profile editing, and password resetting.
active-directory-b2c Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/data-residency.md
description: Region availability, data residency, high availability, SLA, and in
-+ Last updated 01/11/2024 -+
active-directory-b2c Date Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/date-transformations.md
description: Date claims transformation examples for the Identity Experience Fra
-+ Last updated 01/11/2024 -+
active-directory-b2c Deploy Custom Policies Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/deploy-custom-policies-devops.md
description: Learn how to deploy Azure AD B2C custom policies in a CI/CD pipeline.
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer managing Azure AD B2C custom policies, I want to automate the deployment process using Azure Pipelines, so that I can consistently test, build, and ship my code to any target.
active-directory-b2c Deploy Custom Policies Github Action https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/deploy-custom-policies-github-action.md
description: Learn how to deploy Azure AD B2C custom policies in a CI/CD pipeline.
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to automate the deployment of Azure Active Directory B2C custom policies using GitHub Actions, so that I can easily manage and deploy my custom policies without manual intervention.
active-directory-b2c Direct Signin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/direct-signin.md
description: Learn how to prepopulate the sign-in name or redirect straight to a
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Disable Email Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/disable-email-verification.md
description: Learn how to disable email verification during customer sign-up in
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Display Control Captcha https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-captcha.md
description: Learn how to define a CAPTCHA display controls custom policy in Azu
-+ Last updated 01/17/2024 -+ #Customer intent: As a developer integrating customer-facing apps with Azure AD B2C, I want to learn how to define a CAPTCHA display control for Azure AD B2C's custom policies so that I can protect my authentication flows from automated attacks.
active-directory-b2c Display Control Time Based One Time Password https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-time-based-one-time-password.md
description: Learn how to use Azure AD B2C TOTP display controls in the user jou
-+ Last updated 01/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to enable multifactor authentication using the TOTP method, so that end users can use an authenticator app to generate TOTP codes for enrollment and verification.
active-directory-b2c Display Control Verification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-control-verification.md
description: Learn how to use Azure AD B2C display controls to verify the claims
-+ Last updated 01/11/2024 -+ #Customer intent: As a user completing a verification process, I want to enter my email address or phone number and receive a verification code, so that I can verify my identity and proceed to the next step.
active-directory-b2c Display Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/display-controls.md
description: Reference for Azure AD B2C display controls. Use display controls f
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure Active Directory B2C, I want to understand how to define and use display controls, so that I can create user interface elements with special functionality that interact with the back-end service and perform actions on the page.
active-directory-b2c Embedded Login https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/embedded-login.md
description: Learn how to embed Azure Active Directory B2C user interface into your app with a custom policy -+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Enable Authentication Android App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-android-app-options.md
description: This article discusses several ways to enable Android mobile appli
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-android-app.md
description: Enable authentication in an Android application using Azure Active
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Angular Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app-options.md
description: Enable the use of Angular application options in several ways.
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Angular Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-angular-spa-app.md
description: Use the building blocks of Azure Active Directory B2C to sign in a
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Azure Static App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-azure-static-app-options.md
description: This article discusses several ways to enable Azure Static Web App
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication In Node Web App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-options.md
description: This article discusses several ways to enable Node.js web app auth
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication In Node Web App With Api Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-with-api-options.md
description: This article discusses several ways to enable Node.js web API auth
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication In Node Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app-with-api.md
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to enable authentication in my Node.js web API using Azure Active Directory B2C, so that I can protect my web API and authorize access using valid access tokens issued by Azure AD B2C.
active-directory-b2c Enable Authentication In Node Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-in-node-web-app.md
-+ Last updated 01/11/2024 -+ #Customer intent: As a Node.js web application developer, I want to enable Azure Active Directory B2C authentication in my application, so that users can sign in, sign out, update their profile, and reset their password using Azure AD B2C user flows.
active-directory-b2c Enable Authentication Ios App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app-options.md
description: This article discusses several ways to enable iOS Swift mobile app
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-ios-app.md
description: This article discusses how to enable authentication in an iOS Swif
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Python Web App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app-options.md
description: This article shows you how to enable the use of Python web applica
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Python Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-python-web-app.md
-+ Last updated 01/11/2024 -+ #Customer intent: As a Python web application developer, I want to enable Azure Active Directory B2C authentication in my application, so that users can sign in, sign out, update their profile, and reset their password using Azure AD B2C user flows.
active-directory-b2c Enable Authentication React Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app-options.md
description: Enable the use of React application options in several ways.
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication React Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-react-spa-app.md
description: Use the building blocks of Azure Active Directory B2C to sign in a
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Spa App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app-options.md
description: This article discusses several ways to enable the use of SPA appli
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-spa-app.md
description: This article discusses the building blocks of Azure Active Directo
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Web Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-api.md
description: This article discusses how to use Azure Active Directory B2C to pr
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Web App With Api Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-app-with-api-options.md
description: This article discusses how to enable the use of a web application
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Web App With Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-app-with-api.md
description: This article discusses the building blocks of an ASP.NET web app th
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Web Application Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-application-options.md
description: This article discusses several ways to enable web app authenticati
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Web Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-web-application.md
description: This article discusses how to use the building blocks of Azure Acti
-+ Last updated 01/11/2024 -+
active-directory-b2c Enable Authentication Wpf Desktop App Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/enable-authentication-wpf-desktop-app-options.md
description: Enable the use of WPF desktop application options by using several
-+ Last updated 01/11/2024 -+
active-directory-b2c Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/error-codes.md
description: A list of the error codes that can be returned by the Azure Active
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure Active Directory B2C into my application, I want to understand the possible error codes and their meanings, so that I can handle them appropriately and provide a better user experience.
active-directory-b2c Extensions App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/extensions-app.md
description: Restoring the b2c-extensions-app.
-+ Last updated 09/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to verify the presence of the b2c-extensions-app in my directory, so that I can ensure the correct functioning of Azure AD B2C and avoid any loss of user information.
active-directory-b2c External Identities Videos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/external-identities-videos.md
description: Learn about external identities in Azure AD B2C in the Microsoft id
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer working with Azure Active Directory B2C. I need videos that provide a deep-dive into the architecture and features of the service. My goal is to gain a better understanding of how to implement and utilize Azure AD B2C in my applications.
active-directory-b2c Find Help Open Support Ticket https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/find-help-open-support-ticket.md
description: Learn how to find technical, pre-sales, billing, and subscription h
-+ Last updated 01/11/2024 -+ #Customer intent: "As an Azure Active Directory B2C user experiencing technical issues, I want to open a support ticket, so that I can receive assistance from Microsoft support engineers to resolve my problem and contribute to service improvements."
active-directory-b2c Force Password Reset https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/force-password-reset.md
Title: Configure a force password reset flow in Azure AD B2C description: Learn how to set up a forced password reset flow in Azure Active Directory B2C.- ---+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ zone_pivot_groups: b2c-policy-type
Connect-MgGraph -Scopes 'Domain.ReadWrite.All'
$domainId = "contoso.com" $params = @{ passwordValidityPeriodInDays = 90
- passwordNotificationWindowInDays = 15
} Update-MgDomain -DomainId $domainId -BodyParameter $params ```
-> [!NOTE]
-> `passwordValidityPeriodInDays` indicates the length of time in days that a password remains valid before it must be changed. `passwordNotificationWindowInDays` indicates the length of time in days before the password expiration date when users receive their first notification to indicate that their password is about to expire.
-
-## Next steps
+- `passwordValidityPeriodInDays` is the length of time in days that a password remains valid before it must be changed.
-Set up a [self-service password reset](add-password-reset-policy.md).
+## Related content
+Set up a [self-service password reset](add-password-reset-policy.md).
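For reference, the password policy value written by the snippet above can be read back with Microsoft Graph PowerShell. The following is a minimal sketch, assuming the Microsoft Graph PowerShell SDK is installed and using the `contoso.com` example domain from the snippet; the property names follow the Microsoft Graph `domain` resource.
```powershell
# Sketch: read the domain back to confirm the updated password policy.
# Assumes you're signed in to the same tenant; "contoso.com" is the example
# domain used in the update above.
Connect-MgGraph -Scopes 'Domain.Read.All'

$domain = Get-MgDomain -DomainId "contoso.com"
$domain.PasswordValidityPeriodInDays   # expected to be 90 after Update-MgDomain
```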
active-directory-b2c General Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/general-transformations.md
description: General claims transformation examples for the Identity Experience
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to understand how to use general claims transformations, so that I can customize and manipulate user claims in my custom policies.
active-directory-b2c Https Cipher Tls Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/https-cipher-tls-requirements.md
description: Notes for developers on HTTPS cipher suite and TLS requirements whe
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure Active Directory B2C with my endpoints, I want to understand the TLS and cipher suite requirements, so that I can ensure my endpoints are compatible and establish a secure connection with Azure AD B2C.
active-directory-b2c Id Token Hint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/id-token-hint.md
description: Define an ID token hint technical profile in a custom policy in Azu
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure AD B2C with a relying party application, I want to define an ID token hint technical profile, so that I can send a JWT token with a hint about the user or the authorization request. This allows me to validate the token and extract the claims for further processing.
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs-saml.md
description: Set up AD FS 2016 using the SAML protocol and custom policies in Azure Active Directory B2C -+ Last updated 01/24/2024 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As an Azure AD B2C administrator, I want to add AD FS as a SAML identity provider using custom policies, so that users can sign in with their AD FS accounts and access Azure AD B2C resources.
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
description: Set up AD FS 2016 using the OpenID Connect protocol and custom policies in Azure Active Directory B2C -+ Last updated 01/24/2024 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Azure AD B2C with AD FS, I want to configure AD FS as an OpenID Connect identity provider, so that users can sign in with their AD FS accounts in Azure AD B2C.
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-amazon.md
description: Provide sign-up and sign-in to customers with Amazon accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with an Amazon account, so that users can authenticate using their Amazon credentials.
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-apple-id.md
description: Provide sign-up and sign-in to customers with Apple ID in your applications using Azure Active Directory B2C. -+ Last updated 11/02/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with an Apple ID, so that users can authenticate using their Apple ID.
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
description: Provide sign-up and sign-in to customers with Azure AD B2C accounts
-+ Last updated 10/11/2023 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
description: Add a multitenant Microsoft Entra identity provider using custom policies in Azure Active Directory B2C. -+ Last updated 11/16/2023 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-in for multi-tenant Microsoft Entra ID, so that users from multiple Entra tenants can sign in without configuring an identity provider for each tenant.
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
description: Set up sign-in for a specific Microsoft Entra organization in Azure
-+ Last updated 01/27/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Ebay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ebay.md
description: Provide sign-up and sign-in to customers with eBay accounts in your
-+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Azure Active Directory B2C, I want to set up sign-in with eBay as an identity provider, so that users can sign in with their eBay accounts.
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
description: Provide sign-up and sign-in to customers with Facebook accounts in your applications using Azure Active Directory B2C. -+ Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-in with a Facebook account, so that users can authenticate with their Facebook credentials and access my application.
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
description: Set up sign-up and sign-in with any OpenID Connect identity provide
-+ Last updated 12/28/2022 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Azure AD B2C with a custom OpenID Connect identity provider, I want to understand the steps to add the identity provider and configure the necessary settings, so that users can sign in securely using the custom identity provider.
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md
description: Configure sign-in SAML identity provider (IdP) options in Azure Active Directory B2C. -+ Last updated 03/20/2023 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Azure AD B2C with a SAML identity provider, I want to understand how to configure the SAML identity provider options, so that I can enable sign-in with the identity provider and map the claims correctly.
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md
description: Set up sign-up and sign-in with any SAML identity provider (IdP) in Azure Active Directory B2C. -+ Last updated 01/24/2024 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Azure AD B2C with a SAML identity provider, I want to configure the SAML technical profile and map the claims, so that users can sign in to my application using an existing social or enterprise identity.
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
description: Provide sign-up and sign-in to customers with GitHub accounts in yo
-+ Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to integrate GitHub as an identity provider, so that users can sign up and sign in with their GitHub accounts.
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
description: Provide sign-up and sign-in to customers with Google accounts in yo
-+ Last updated 12/13/2023 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer or IT administrator, I want to add sign-up and sign-in with a Google account, so that users can authenticate with their Google accounts.
active-directory-b2c Identity Provider Id Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-id-me.md
description: Provide sign-up and sign-in to customers with ID.me accounts in you
-+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer, I want to set up sign-up and sign-in with an ID.me account using Azure Active Directory B2C, so that I can enable users to authenticate with their ID.me accounts.
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md
description: Provide sign-up and sign-in to customers with LinkedIn accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a LinkedIn account, so that users can authenticate using their LinkedIn credentials.
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
description: Define the identity types users can use to sign up or sign in (email, username, phone number) in your Azure Active Directory B2C tenant. -+ Last updated 01/24/2024 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As an Azure AD B2C administrator, I want to configure the sign-in methods for local accounts, so that users can sign up and sign in to the application using their preferred method (email, username, or phone number).
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md
description: Provide sign-up and sign-in to customers with Microsoft Accounts in your applications using Azure Active Directory B2C. -+ Last updated 05/01/2023 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a Microsoft account, so that users can authenticate using their Microsoft account credentials.
active-directory-b2c Identity Provider Mobile Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md
description: Provide sign-up and sign-in to customers with Mobile ID in your app
-+ Last updated 04/08/2022 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Azure Active Directory B2C, I want to set up sign-up and sign-in with Mobile ID, so that I can provide a strong multi-factor authentication solution for my customers and protect access to company data and applications.
active-directory-b2c Identity Provider Ping One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md
description: Provide sign-up and sign-in to customers with PingOne accounts in your applications using Azure Active Directory B2C. -+ Last updated 12/2/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating PingOne with Azure Active Directory B2C, I want to set up sign-up and sign-in with a PingOne account, so that users can authenticate using their PingOne credentials.
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-qq.md
Title: Set up sign-up and sign-in with a QQ account using Azure Active Directory
description: Provide sign-up and sign-in to customers with QQ accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a QQ account, so that users can authenticate with their QQ accounts in my application.
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce-saml.md
description: Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C. -+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-in with a Salesforce SAML provider, so that users from my Salesforce organization can sign in to Azure AD B2C using their Salesforce accounts.
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce.md
description: Provide sign-up and sign-in to customers with Salesforce accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Salesforce with Azure Active Directory B2C, I want to set up sign-up and sign-in with a Salesforce account using Azure AD B2C, so that users can authenticate with their Salesforce credentials in my application.
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md
description: Provide sign-up and sign-in to customers with SwissID accounts in y
-+ Last updated 12/07/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating SwissID accounts with Azure Active Directory B2C, I want to set up sign-up and sign-in functionality for customers with SwissID accounts, so that they can easily access my application using their existing credentials.
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
description: Provide sign-up and sign-in to customers with X accounts in your applications using Azure Active Directory B2C. -+ Last updated 07/20/2022 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer setting up sign-up and sign-in with an X account using Azure Active Directory B2C, I want to configure X as an identity provider so that I can enable users to sign in with their X accounts.
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-wechat.md
description: Provide sign-up and sign-in to customers with WeChat accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a WeChat account, so that users can authenticate using their WeChat credentials.
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-weibo.md
description: Provide sign-up and sign-in to customers with Weibo accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer using Azure Active Directory B2C, I want to set up sign-up and sign-in with a Weibo account, so that users can authenticate with their Weibo credentials and access my application.
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
description: Learn about our partners who integrate with Azure AD B2C to provide
-+ Last updated 01/26/2024
active-directory-b2c Idp Pass Through User Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/idp-pass-through-user-flow.md
description: Learn how to pass an access token for OAuth 2.0 identity providers
-+ Last updated 09/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Implicit Flow Single Page Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/implicit-flow-single-page-application.md
description: Learn how to add single-page sign-in using the OAuth 2.0 implicit f
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer building a single-page application (SPA) with a JavaScript framework, I want to implement OAuth 2.0 implicit flow for sign-in using Azure AD B2C, so that I can securely authenticate users without server-to-server exchange and handle user flows like sign-up and profile management.
Many modern applications have a single-page app (SPA) front end that is written
- Full-page browser redirects away from the app can be invasive to the user experience.
-The recommended way of supporting SPAs is [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md).
+> [!WARNING]
+> Microsoft recommends that you do *not* use the implicit grant flow. Certain configurations of the implicit grant flow require a very high degree of trust in the application and carry risks that aren't present in other flows; use it only when more secure flows aren't viable. The recommended way of supporting SPAs is the [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md). For more information, see the [security concerns with implicit grant flow](/entra/identity-platform/v2-oauth2-implicit-grant-flow#security-concerns-with-implicit-grant-flow).
Some frameworks, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), only support the implicit grant flow. In these cases, Azure Active Directory B2C (Azure AD B2C) supports the OAuth 2.0 authorization implicit grant flow. The flow is described in [section 4.2 of the OAuth 2.0 specification](https://tools.ietf.org/html/rfc6749). In implicit flow, the app receives tokens directly from the Azure AD B2C authorize endpoint, without any server-to-server exchange. All authentication logic and session handling are done entirely in the JavaScript client with either a page redirect or a pop-up box.
GET https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/{policy}/oauth2/v2.0/
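The truncated request line above points at the authorize endpoint the implicit flow calls. To make the request shape concrete, here's a hedged sketch in PowerShell that builds an implicit-flow authorize URL of that form and opens it in a browser; the tenant name, policy name, client ID, and redirect URI are hypothetical placeholders, not values from this article.
```powershell
# Sketch only: construct an implicit-grant authorize request for Azure AD B2C.
# All values below are placeholders; substitute your own tenant, policy, and app registration.
# response_type=id_token+token asks for tokens directly (no code exchange);
# response_mode=fragment returns them in the URL fragment.
$tenant   = "contoso"
$policy   = "B2C_1_signupsignin1"
$clientId = "00000000-0000-0000-0000-000000000000"
$redirect = [uri]::EscapeDataString("https://jwt.ms")

$authorizeUrl = "https://$tenant.b2clogin.com/$tenant.onmicrosoft.com/$policy/oauth2/v2.0/authorize" +
    "?client_id=$clientId" +
    "&response_type=id_token+token" +
    "&redirect_uri=$redirect" +
    "&response_mode=fragment" +
    "&scope=openid" +
    "&nonce=defaultNonce"

Start-Process $authorizeUrl   # opens the default browser at the Azure AD B2C authorize endpoint
```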
## Next steps
-See the code sample: [Sign-in with Azure AD B2C in a JavaScript SPA](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/VanillaJSTestApp2.0/app/b2c).
+See the code sample: [Sign-in with Azure AD B2C in a JavaScript SPA](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/samples/msal-browser-samples/VanillaJSTestApp2.0/app/b2c).
active-directory-b2c Integer Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integer-transformations.md
description: Integer claims transformation examples for the Identity Experience
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer working with Azure Active Directory B2C, I want to understand how to use integer claims transformations, so that I can manipulate numeric claims and perform comparisons in my application.
active-directory-b2c Integrate With App Code Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/integrate-with-app-code-samples.md
Last updated 01/24/2024 --++ #Customer Intent: As a developer, I want to access code samples for Azure Active Directory B2C, so that I can learn how to integrate authentication and user management into my web, mobile, and desktop applications using Azure AD B2C.
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
description: Learn how to enable JavaScript and use page layout versions in Azur
-+ Last updated 10/17/2023 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer customizing the user interface of an application in Azure Active Directory B2C, I want to enable JavaScript and page layout versions, so that I can create a more interactive and customized user experience for my users.
active-directory-b2c Json Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/json-transformations.md
description: JSON claims transformation examples for the Identity Experience Fra
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer working with Azure AD B2C, I want to understand how to use JSON claims transformations, so that I can manipulate and generate JSON data for my authentication and authorization processes.
active-directory-b2c Jwt Issuer Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/jwt-issuer-technical-profile.md
description: Define a technical profile for a JSON web token (JWT) issuer in a c
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer implementing custom policies in Azure Active Directory B2C, I want to define a technical profile for a JWT token issuer, so that I can emit a JWT token that is returned to the relying party application during the authentication flow.
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md
Title: Language customization in Azure Active Directory B2C
description: Learn how to customize the language experience in your user flows in Azure Active Directory B2C. -+ Last updated 03/22/2024 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer integrating Azure Active Directory B2C into my application, I want to customize the language of a user flow, so that I can provide a localized experience for my customers in different locales.
active-directory-b2c Localization String Ids https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization-string-ids.md
description: Specify the IDs for a content definition with an ID of api.signupor
-+ Last updated 02/24/2024 -+ #Customer intent: As a developer implementing user interface localization in Azure AD B2C, I want to access the list of localization string IDs, so that I can use them in my policy to support multiple locales or languages in the user journeys.
active-directory-b2c Localization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/localization.md
description: Specify the Localization element of a custom policy in Azure Active
-+ Last updated 01/11/2024 -+
active-directory-b2c Manage Custom Policies Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-custom-policies-powershell.md
description: Use the Microsoft Graph PowerShell cmdlets for programmatic managem
-+ Last updated 01/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to manage custom policies using Microsoft Graph PowerShell, so that I can review, update, and delete policies in my Azure AD B2C tenant.
active-directory-b2c Manage User Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-user-access.md
description: Learn how to identify minors, collect date of birth and country/reg
-+ Last updated 01/11/2024 -+ #Customer intent: As an application developer using Azure Active Directory B2C, I want to manage user access to my application by identifying minors, requiring parental consent, gathering birth and country/region data, and capturing a terms-of-use agreement, so that I can comply with regulatory standards and provide appropriate experiences for different user groups.
active-directory-b2c Manage User Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-user-data.md
description: Learn how to delete or export user data in Azure AD B2C.
-+ Last updated 01/11/2024 -+
active-directory-b2c Manage Users Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/manage-users-portal.md
description: Learn how to use the Azure portal to create and delete consumer use
-+ Last updated 05/26/2023 -+ #Customer Intent: As an Azure AD B2C administrator, I want to manually create and delete consumer users in the Azure portal, so that I can manage user accounts for my applications.
active-directory-b2c Microsoft Graph Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-get-started.md
description: Prepare for managing Azure AD B2C resources with Microsoft Graph by
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to register a Microsoft Graph application, so that I can automate tenant management tasks in my Azure AD B2C directory, such as migrating user stores, deploying custom policies, and obtaining audit logs.
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
description: How to manage resources in an Azure AD B2C tenant by calling the Microsoft Graph API and using an application identity to automate the process. -+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to programmatically manage resources in my Azure AD B2C directory using Microsoft Graph API, so that I can automate user management tasks, such as creating, updating, and deleting users, identity providers, user flows, custom policies, and policy keys.
active-directory-b2c Multi Factor Auth Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-auth-technical-profile.md
description: Custom policy reference for Microsoft Entra ID multifactor authenti
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure AD B2C, I want to understand how to define a Microsoft Entra ID multifactor authentication technical profile, so that I can implement phone number verification and TOTP code verification in my custom policy.
active-directory-b2c Multi Factor Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multi-factor-authentication.md
Title: Multifactor authentication in Azure Active Directory B2C
description: How to enable multifactor authentication in consumer-facing applications secured by Azure Active Directory B2C. -+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer, I want to learn how to enable multifactor authentication in consumer-facing applications secured by Azure Active Directory B2C.
active-directory-b2c Multiple Token Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/multiple-token-endpoints.md
description: Learn how to enable a .NET web API to support tokens issued by mult
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer migrating an OWIN-based web API to a new domain, I want to enable support for multiple token issuers, so that I can migrate my web applications in a staged manner and remove support for the old token issuer from the API.
active-directory-b2c Oauth1 Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth1-technical-profile.md
description: Define an OAuth 1.0 technical profile in a custom policy in Azure A
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer implementing Azure Active Directory B2C custom policies, I want to define an OAuth1 technical profile, so that I can federate with an OAuth1 based identity provider like X and allow users to sign in with their existing social or enterprise identities.
active-directory-b2c Oauth2 Error Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth2-error-technical-profile.md
description: Define an OAuth2 custom error technical profile in a custom policy
-+ Last updated 05/07/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to define an OAuth2 custom error technical profile, so that I can handle and return custom error messages to my OAuth2 or OpenId Connect relying party application when something goes wrong within my policy.
active-directory-b2c Oauth2 Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth2-technical-profile.md
description: Define an OAuth2 technical profile in a custom policy in Azure Acti
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure AD B2C with an OAuth2 based identity provider, I want to define an OAuth2 technical profile in a custom policy, so that I can federate with the identity provider and allow users to sign in with their existing social or enterprise identities.
active-directory-b2c One Time Password Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/one-time-password-technical-profile.md
description: Learn how to set up a one-time password (OTP) scenario by using Azu
-+ Last updated 01/11/2024 -+
active-directory-b2c Openid Connect Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect-technical-profile.md
description: Define an OpenID Connect technical profile in a custom policy in Az
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to define an OpenID Connect technical profile in a custom policy, so that I can federate with an OpenID Connect based identity provider and allow users to sign in with their existing social or enterprise identities.
active-directory-b2c Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/openid-connect.md
description: Build web applications using the OpenID Connect authentication prot
-+ Last updated 01/11/2024 -+
active-directory-b2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/overview.md
Title: What is Azure Active Directory B2C?
description: Learn how you can use Azure Active Directory B2C to support external identities in your applications, including social sign-up with Facebook, Google, and other identity providers. -+ Last updated 01/24/2024 -+ #Customer Intent: As an IT administrator or developer, I want to understand what Azure Active Directory B2C is and how it can be used for customer identity access management, so that I can determine if it is the right solution for authenticating end users to my web/mobile applications and managing access to API resources.
active-directory-b2c Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/page-layout.md
description: Page layout version history for UI customization in custom policies
-+ Last updated 04/16/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to stay up-to-date with the latest page layout versions, so that I can ensure that my page elements reflect the latest security enhancements, accessibility standards, and fixes.
active-directory-b2c Partner Akamai Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai-secure-hybrid-access.md
description: Learn how to integrate Azure AD B2C authentication with Akamai for
-+ Last updated 11/23/2022 -+ zone_pivot_groups: b2c-policy-type #Customer Intent: As a developer building a desktop app, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social and enterprise accounts and protect my application and customer data.
active-directory-b2c Partner Akamai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-akamai.md
description: Configure Akamai Web Application Protector with Azure AD B2C
-+ Last updated 01/26/2024 -+ # Customer intent: I'm an IT admin, and I want to configure Azure Active Directory B2C with Akamai Enterprise Application Access for SSO and secure hybrid access. I want to enable Azure AD B2C authentication for end users accessing private applications secured by Akamai Enterprise Application Access.
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
description: Learn to configure Azure Active Directory B2C with the Arkose Labs
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer integrating Azure Active Directory B2C with the Arkose Labs platform. I need to configure the integration, so I can protect against bot attacks, account takeover, and fraudulent account openings.
active-directory-b2c Partner Asignio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-asignio.md
description: Learn how to configure Azure Active Directory B2C with Asignio for multifactor authentication -+ Previously updated : 10/03/2024 Last updated : 10/11/2024 -+ zone_pivot_groups: b2c-policy-type # Customer intent: As a developer integrating Asignio with Azure AD B2C for multifactor authentication, I want to configure an application with Asignio and set it up as an identity provider (IdP) in Azure AD B2C, so I can provide a passwordless, soft biometric, and multifactor authentication experience to customers.
Learn more: [Application types that can be used in Active Directory B2C](applica
For this tutorial, you register `https://jwt.ms`, a Microsoft web application that displays the decoded contents of a token; the token contents don't leave your browser.
-### Register a web application and enable ID token implicit grant
+### Register a web application
-Complete [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md?tabs=app-reg-ga)
+Complete the steps in the [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md?tabs=app-reg-ga) article.
->[!NOTE]
->Enable implicit flow only for testing purposes. Don't enable implicit flow in production.
## Configure Asignio as an identity provider in Azure AD B2C
active-directory-b2c Partner Bindid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bindid.md
description: Configure Azure AD B2C with Transmit Security hosted sign in for pa
-+ Last updated 06/21/2024 -+ zone_pivot_groups: b2c-policy-type # Customer intent: I'm a developer integrating Azure Active Directory B2C with Transmit Security BindID. I need instructions to configure integration, so I can enable passwordless authentication using FIDO2 biometrics for my application.
active-directory-b2c Partner Biocatch https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-biocatch.md
description: Tutorial to configure Azure Active Directory B2C with BioCatch to i
-+ Last updated 06/21/2024 -+ # Customer intent: I'm a developer integrating Azure AD B2C authentication with BioCatch technology. I need to configure the custom UI, policies, and user journey. My goal is to enhance the security of my Customer Identity and Access Management (CIAM) system by analyzing user physical and cognitive behaviors.
active-directory-b2c Partner Bloksec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-bloksec.md
description: Learn how to integrate Azure AD B2C authentication with BlokSec for
-+ Last updated 06/21/2024 -+ zone_pivot_groups: b2c-policy-type # Customer intent: I'm a developer integrating Azure Active Directory B2C with BlokSec for passwordless authentication. I need to configure integration, so I can simplify user sign-in and protect against identity-related attacks.
active-directory-b2c Partner Cloudflare https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-cloudflare.md
description: Tutorial to configure Azure Active Directory B2C with Cloudflare We
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer configuring Azure AD B2C with Cloudflare WAF. I need to enable and configure the Web Application Firewall, so I can protect my application from malicious attacks such as SQL Injection and cross-site scripting (XSS).
active-directory-b2c Partner Datawiza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-datawiza.md
description: Learn how to integrate Azure AD B2C authentication with Datawiza fo
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C with Datawiza Access Proxy (DAP). My goal is to enable single sign-on (SSO) and granular access control for on-premises legacy applications, without rewriting them.
active-directory-b2c Partner Deduce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-deduce.md
description: Learn how to integrate Azure AD B2C authentication with Deduce for identity verification -+ Last updated 01/26/2024 -+ # Customer intent: As an Azure AD B2C administrator, I want to integrate Deduce with Azure AD B2C authentication. I want to combat identity fraud and create a trusted user experience for my organization.
active-directory-b2c Partner Dynamics 365 Fraud Protection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-dynamics-365-fraud-protection.md
description: Tutorial to configure Azure AD B2C with Microsoft Dynamics 365 Frau
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer, and I want to integrate Microsoft Dynamics 365 Fraud Protection with Azure Active Directory B2C. I need to assess risk during attempts to create fraudulent accounts and sign-ins, and then block or challenge suspicious attempts.
active-directory-b2c Partner Eid Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-eid-me.md
description: Learn how to integrate Azure AD B2C authentication with eID-Me for identity verification -+ Last updated 06/21/2024 -+ zone_pivot_groups: b2c-policy-type # Customer intent: I'm an Azure AD B2C administrator, and I want to configure eID-Me as an identity provider (IdP). My goal is to enable users to verify their identity and sign in using eID-Me.
active-directory-b2c Partner Experian https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-experian.md
description: Learn how to integrate Azure AD B2C authentication with Experian fo
-+ Last updated 01/26/2024 -+ # Customer intent: I'm an Azure AD B2C administrator, and I want to integrate Experian CrossCore with Azure AD B2C. I need to verify user identification and perform risk analysis based on user attributes during sign-up.
active-directory-b2c Partner F5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-f5.md
--++ Last updated 06/21/2024
active-directory-b2c Partner Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-gallery.md
description: Learn how to integrate with our ISV partners to tailor your end-use
-+ Last updated 01/11/2024 -+ #Customer intent: As an Azure AD B2C user, I want to integrate with ISV partners for multifactor authentication, role-based access control, identity verification and proofing, fraud protection, and compliance with PSD2 SCA requirements, so that I can enhance the security and user experience of my applications.
active-directory-b2c Partner Grit App Proxy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-app-proxy.md
description: Learn how Grit's app proxy can migrate your applications to Azure AD B2C with no code change -+ Last updated 01/26/2024 -+ # Customer intent: I'm an application developer using header-based authentication, and I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I need to enable modern authentication experiences, enhance security, and save on licensing costs.
active-directory-b2c Partner Grit Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-authentication.md
description: Learn how Grit's biometric authentication with Azure AD B2C secures your account -+ Last updated 01/26/2024 -+ # Customer intent: As an application developer using header-based authentication, I want to migrate my legacy application to Azure Active Directory B2C with Grit app proxy. I want to enable modern authentication experiences, enhance security, and save on licensing costs.
active-directory-b2c Partner Grit Editor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-editor.md
description: Learn how Grit Visual IEF Editor enables fast authentication deployments in Azure AD B2C -+ Last updated 01/26/2024 -+ # Customer intent: I'm an Azure AD B2C administrator, and I want to use the Visual IEF Editor tool to create, modify, and deploy Azure AD B2C policies, without writing code.
active-directory-b2c Partner Grit Iam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-grit-iam.md
description: Learn how to integrate Azure AD B2C authentication with the Grit IAM B2B2C solution -+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer, and I want to integrate Azure Active Directory B2C authentication with the Grit IAM B2B2C solution. I need to provide secure and user-friendly identity and access management for my customers.
active-directory-b2c Partner Haventec https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-haventec.md
description: Learn to integrate Azure AD B2C with Haventec Authenticate for mult
-+ Last updated 06/21/2024 -+ # Customer intent: I'm a developer integrating Haventec Authenticate with Azure AD B2C. I need instructions to configure integration, so I can enable single-step, multi-factor passwordless authentication for my web and mobile applications.
active-directory-b2c Partner Hypr https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-hypr.md
description: Tutorial to configure Azure Active Directory B2C with Hypr for true
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer integrating HYPR with Azure AD B2C. I want a tutorial to configure the Azure AD B2C policy to enable passwordless authentication using HYPR for my customer applications.
active-directory-b2c Partner Idemia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idemia.md
description: Learn to integrate Azure AD B2C authentication with IDEMIA Mobile I
-+ Last updated 01/26/2024 -+ zone_pivot_groups: b2c-policy-type # Customer intent: I'm an Azure AD B2C administrator, and I want to configure IDEMIA Mobile ID integration with Azure AD B2C. I want users to authenticate using biometric authentication services and benefit from a trusted, government-issued digital ID.
active-directory-b2c Partner Idology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-idology.md
description: Learn how to integrate a sample online payment app in Azure AD B2C
-+ Last updated 01/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to integrate IDology with Azure AD B2C, so that I can verify and authenticate user identities using IDology's identity verification and proofing solutions.
active-directory-b2c Partner Itsme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-itsme.md
Title: itsme OpenID Connect with Azure Active Directory B2C description: Learn how to integrate Azure AD B2C authentication with itsme OIDC using client_secret user flow policy. itsme is a digital ID app. It allows you to log in securely without card-readers, passwords, two-factor authentication, and multiple PIN codes.------++++ Previously updated : 01/11/2024-- Last updated : 10/11/2024+ #Customer intent: As a developer integrating Azure AD B2C authentication with itsme OpenID Connect (OIDC), I want to configure the itsme Identity Provider in Azure AD B2C, so that users can sign in securely using their itsme digital ID app without the need for passwords or multiple PIN codes.
Please clarify step 1 in the description below - we don't have steps in this tut
1. Select **Register**.
- a. For testing purposes, select **Authentication**, and under **Implicit Grant**, select the **Access Tokens** and **ID Tokens** check boxes.
+To use this app registration to test the user flow, you need to enable implicit grant flow:
+
+1. Select the app registration you created.
+
+1. Under **Manage**, select **Authentication**.
- b. Select **Save**.
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
+
+1. Select **Save**.
+
+> [!NOTE]
+> If you enable implicit grant to test a user flow, make sure you disable the implicit grant flow settings before you deploy your app to production.
## Test the user flow
Please clarify step 1 in the description below - we don't have steps in this tut
1. Select **Run user flow**.
- a. **Application**: *select the registered app*
+ a. For **Application**, select the app that you registered.
- b. **Reply URL**: *select the redirect URL*
+ b. For **Reply URL**, select the redirect URL that you added to your app. For testing purposes, select `https://jwt.ms`.
1. The itsme **Identify yourself** page appears.
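For reference, selecting **Run user flow** with `https://jwt.ms` as the reply URL builds an authorization request against the user flow's authorize endpoint for you. The sketch below shows roughly what that request looks like; it's an illustration only, and the tenant name, user flow name, and client ID are placeholder assumptions rather than values from this tutorial.

```bash
# Placeholders: replace with your tenant, user flow (policy) name, and the app's client ID.
TENANT="contoso"
POLICY="B2C_1_signupsignin_itsme"
CLIENT_ID="<application-client-id>"

# response_type=id_token relies on the ID token implicit grant enabled earlier.
AUTH_URL="https://${TENANT}.b2clogin.com/${TENANT}.onmicrosoft.com/${POLICY}/oauth2/v2.0/authorize\
?client_id=${CLIENT_ID}\
&response_type=id_token\
&redirect_uri=https%3A%2F%2Fjwt.ms\
&response_mode=fragment\
&scope=openid\
&nonce=defaultNonce"

echo "Open this URL in a browser to test the user flow:"
echo "$AUTH_URL"
```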
For additional information, review the following articles:
* [Custom policies in Azure AD B2C](custom-policy-overview.md)
-* [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+* [Get started with custom policies in Azure AD B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Jumio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-jumio.md
description: Configure Azure Active Directory B2C with Jumio for automated ID ve
-+ Last updated 01/26/2024 -+ # Customer intent: I'm an Azure AD B2C administrator, and I want to integrate Jumio with Azure AD B2C. I need to enable real-time automated ID verification for user accounts and protect customer data.
active-directory-b2c Partner Keyless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md
description: Tutorial to configure Keyless with Azure Active Directory B2C for p
-+ Last updated 08/09/2024 -+ # Customer intent: I'm a developer integrating Azure AD B2C with Keyless for passwordless authentication. I need to configure Keyless with Azure AD B2C, so I can provide a secure and convenient passwordless authentication experience for my customer applications.
active-directory-b2c Partner Lexisnexis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-lexisnexis.md
description: Learn how to integrate Azure AD B2C authentication with LexisNexis
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer integrating Azure Active Directory B2C with LexisNexis ThreatMetrix. I want to configure the API and UI components, so I can verify user identities and perform risk analysis based on user attributes and device profiling information.
active-directory-b2c Partner N8identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-n8identity.md
description: Configure TheAccessHub Admin Tool with Azure Active Directory B2C f
-+ Last updated 01/26/2024 -+ # Customer intent: As an administrator managing customer accounts in Azure AD B2C, I want to configure TheAccessHub Admin Tool with Azure AD B2C. My goal is to migrate customer accounts, administer CSR requests, synchronize data, and customize notifications.
active-directory-b2c Partner Nevis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nevis.md
description: Learn how to integrate Azure AD B2C authentication with Nevis for p
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer, and I want to configure Nevis with Azure Active Directory B2C for passwordless authentication. I need to enable customer authentication and comply with Payment Services Directive 2 (PSD2) transaction requirements.
active-directory-b2c Partner Nok Nok https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-nok-nok.md
description: Configure Nok Nok Passport with Azure AD B2C to enable passwordless
-+ Last updated 06/21/2024 -+ # Customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party authentication provider. I want to learn how to configure Nok Nok Passport as an identity provider (IdP) in Azure AD B2C. My goal is to enable passwordless FIDO authentication for my users.
active-directory-b2c Partner Onfido https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-onfido.md
description: Learn how to integrate Azure AD B2C authentication with Onfido for
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer integrating Azure Active Directory B2C with Onfido. I need to configure the Onfido service to verify identity in the sign-up or sign-in flow. My goal is to meet Know Your Customer and identity requirements and provide a reliable onboarding experience, while reducing fraud.
active-directory-b2c Partner Ping Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-ping-identity.md
description: Learn how to integrate Azure AD B2C authentication with Ping Identi
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer, and I want to learn how to configure Ping Identity with Azure Active Directory B2C for secure hybrid access (SHA). I need to extend the capabilities of Azure AD B2C and enable secure hybrid access using PingAccess and PingFederate.
active-directory-b2c Partner Saviynt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-saviynt.md
description: Learn to configure Azure AD B2C with Saviynt for cross-application
-+ Last updated 01/26/2024 -+ # Customer intent: As a security manager, I want to integrate Azure Active Directory B2C with Saviynt. I need visibility, security, and governance over user life-cycle management and access control.
active-directory-b2c Partner Strata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-strata.md
description: Learn how to integrate Azure AD B2C authentication with whoIam for
-+ Last updated 01/26/2024 -+ # Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C with StrataMaverics Identity Orchestrator. I need to protect on-premises applications and enable customer single sign-on (SSO) to hybrid apps.
active-directory-b2c Partner Transmit Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-transmit-security.md
description: Learn how to configure Azure Active Directory B2C with Transmit Security for risk detection. -+ Last updated 06/04/2024 -+ zone_pivot_groups: b2c-policy-type # Customer intent: As a developer integrating Transmit Security with Azure AD B2C for risk detection, I want to configure a custom policy with Transmit Security and set it up in Azure AD B2C, so I can detect and remediate risks by using multi-factor authentication.
active-directory-b2c Partner Trusona https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-trusona.md
description: Learn how to add Trusona Authentication Cloud as an identity provider on Azure AD B2C to enable a "tap-and-go" passwordless authentication -+ Previously updated : 10/03/2024 Last updated : 10/11/2024 -+ zone_pivot_groups: b2c-policy-type # Customer intent: I'm a developer integrating Azure AD B2C authentication with Trusona Authentication Cloud. I want to configure Trusona Authentication Cloud as an identity provider (IdP) in Azure AD B2C, so I can enable passwordless authentication and provide a better user experience for my web application users.
To register a web application in your Azure AD B2C tenant, use our new unified a
1. Select **Register**. ### Enable ID token implicit grant
-If you register this app and configure it with `https://jwt.ms/` app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:
-1. In the left menu, under **Manage**, select **Authentication**.
+You can enable the implicit grant flow to use this app registration to [test a user flow](add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow#test-the-user-flow).
-1. Under **Implicit grant and hybrid flows**, select **ID tokens (used for implicit and hybrid flows)** check boxes.
+1. Select the app registration you created.
+
+1. Under **Manage**, select **Authentication**.
+
+1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes.
1. Select **Save**.
->[!NOTE]
->Enable implicit flow only for testing purposes. Don't enable implicit flow in production.
+> [!NOTE]
+> If you enable implicit grant to test a user flow, make sure you disable the implicit grant flow settings before you deploy your app to production.
+ ## Step 3: Configure Trusona Authentication Cloud as an IdP in Azure AD B2C
active-directory-b2c Partner Twilio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-twilio.md
description: Learn how to integrate a sample online payment app in Azure AD B2C
-+ Last updated 09/11/2024 -+ #Customer intent: As a developer integrating Azure AD B2C with Twilio Verify API, I want a walkthrough on how to integrate a sample online payment app with Twilio Verify API, so that I can comply with PSD2 transaction requirements through dynamic linking and strong customer authentication.
active-directory-b2c Partner Typingdna https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-typingdna.md
description: Learn how to integrate Azure AD B2C authentication with TypingDNA to help with Identity verification and proofing based on user typing pattern, provides ID verification solutions forcing multifactor authentication and helps to comply with SCA requirements for Payment Services Directive 2 (PSD2). -+ Last updated 01/26/2024 -+ # Customer intent: I'm an Azure AD B2C administrator, and I want to integrate TypingDNA with Azure AD B2C. I need to comply with Payment Services Directive 2 (PSD2) transaction requirements through keystroke dynamics and strong customer authentication.
active-directory-b2c Partner Web Application Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-web-application-firewall.md
description: Learn to configure Azure AD B2C with Azure Web Application Firewall
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer configuring Azure Active Directory B2C with Azure Web Application Firewall. I want to enable the WAF service for my B2C tenant with a custom domain, so I can protect my web applications from common exploits and vulnerabilities.
active-directory-b2c Partner Whoiam Rampart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam-rampart.md
description: Learn how to integrate Azure AD B2C authentication with WhoIAM Rampart -+ Last updated 07/31/2024 -+ # Customer intent: I'm a developer integrating WhoIAM Rampart with Azure AD B2C. I need to configure and integrate Rampart with Azure AD B2C using custom policies. My goal is to enable an integrated helpdesk and invitation-gated user registration experience for my application.
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
description: In this tutorial, learn how to integrate Azure AD B2C authenticatio
-+ Last updated 01/26/2024 -+ # Customer intent: I'm a developer integrating Azure Active Directory B2C with a third-party identity management system. I need a tutorial to configure WhoIAM Branded Identity Management System (BRIMS) with Azure AD B2C. My goal is to enable user verification with voice, SMS, and email in my application.
active-directory-b2c Partner Xid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-xid.md
description: Configure Azure Active Directory B2C with xID for passwordless authentication -+ Previously updated : 10/03/2024 Last updated : 10/11/2024 -+ # Customer intent: As an Azure AD B2C administrator, I want to configure xID as an identity provider, so users can sign in using xID and authenticate with their digital identity on their device.
Learn more: [Application types that can be used in Active Directory B2C](applica
For testing, you register `https://jwt.ms`, a Microsoft web application with decoded token contents, which don't leave your browser.
-### Register a web application and enable ID token implicit grant
+### Register a web application
-Complete [Tutorial: Register a web application in Azure AD B2C](tutorial-register-applications.md?tabs=app-reg-ga)
-
->[!NOTE]
->Enable implicit flow only for testing purposes. Don't enable implicit flow in production.
+Complete the steps in the [Tutorial: Register a web application in Azure Active Directory B2C](tutorial-register-applications.md?tabs=app-reg-ga) article.
<a name='create-a-xid-policy-key'></a>
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
description: Learn how to integrate Azure AD B2C authentication with Zscaler.
-+ Last updated 01/26/2024 -+ # Customer intent: As an IT admin, I want to integrate Azure Active Directory B2C authentication with Zscaler Private Access. I need to provide secure access to private applications and assets without the need for a virtual private network (VPN).
active-directory-b2c Password Complexity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/password-complexity.md
description: How to configure complexity requirements for passwords supplied by
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer or IT admin, I want to configure the complexity requirements for passwords, so that I can enforce strong password policies and customize password complexity rules for my user flows and custom policies.
active-directory-b2c Phone Authentication User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-authentication-user-flows.md
description: Define the identity types you can use (email, username, phone numbe
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to enable phone sign-up and sign-in for user flows, so that users can sign up for my application using their phone number as an identity option.
active-directory-b2c Phone Based Mfa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-based-mfa.md
description: Learn tips for securing phone-based multifactor authentication in y
-+ Last updated 09/11/2024 -+
active-directory-b2c Phone Factor Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-factor-technical-profile.md
description: Define a phone factor technical profile in a custom policy in Azure
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer implementing phone number verification in Azure AD B2C, I want to define a phone factor technical profile, so that I can provide a user interface for users to verify or enroll their phone numbers, support multiple phone numbers, and return claims indicating the status of the phone number.
active-directory-b2c Phone Number Claims Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/phone-number-claims-transformations.md
description: Custom policy reference for phone number claims transformations in
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure AD B2C, I want to understand how to define phone number claims transformations, so that I can convert phone number data types, validate phone number formats, and extract country/region codes and national numbers from phone numbers.
active-directory-b2c Policy Keys Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/policy-keys-overview.md
description: Learn about the types of encryption policy keys that can be used in
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Predicates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/predicates.md
description: Prevent malformed data from being added to your Azure AD B2C tenant
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to perform validation on user input data, so that I can ensure that only properly formed data is entered into the system.
active-directory-b2c Protocols Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/protocols-overview.md
description: How to build apps directly by using the protocols that are supporte
-+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer integrating Azure AD B2C into my application, I want to understand the authentication protocols supported by Azure AD B2C, so that I can choose the appropriate protocol for my application and ensure secure authentication and authorization for my users.
More information about the different types of tokens that are used in Azure AD B
When you're ready to review some example requests, you can start with one of the following tutorials. Each corresponds to a particular authentication scenario. If you need help with determining which flow is right for you, check out [the types of apps you can build by using Azure AD B2C](application-types.md). * [Build mobile and native applications by using OAuth 2.0](authorization-code-flow.md)
-* [Build web apps by using OpenID Connect](openid-connect.md)
-* [Build single-page apps using the OAuth 2.0 implicit flow](implicit-flow-single-page-application.md)
-
+* [Build web apps by using OpenID Connect](openid-connect.md)
active-directory-b2c Publish App To Azure Ad App Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/publish-app-to-azure-ad-app-gallery.md
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer of an Azure Active Directory B2C app, I want to publish my app to the Microsoft Entra app gallery, so that customers can easily find and deploy my app, enable single sign-on, and automate user setup within their Microsoft Entra tenant.
active-directory-b2c Quickstart Native App Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-native-app-desktop.md
description: In this Quickstart, run a sample WPF desktop application that uses
-+ Last updated 01/24/2023 -+ #Customer Intent: As a developer building a desktop app, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social and enterprise accounts and protect my application and customer data.
active-directory-b2c Quickstart Single Page App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-single-page-app.md
description: In this Quickstart, run a sample single-page application that uses
-+ Last updated 02/23/2023 -+ #Customer Intent: As a developer building a single-page app, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social accounts and call a protected web API to retrieve user information.
active-directory-b2c Quickstart Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md
description: In this Quickstart, run a sample ASP.NET web app that uses Azure Ac
-+ Last updated 01/24/2023 -+ #Customer Intent: As a developer building an ASP.NET application, I want to set up sign-in functionality using Azure Active Directory B2C, so that I can authenticate users with social or enterprise accounts and protect my application and customer data.
active-directory-b2c Register Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/register-apps.md
description: Learn how to register different apps types such as web app, web API
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer, I want to register my applications in Azure Active Directory B2C, so that I can enable authentication for various modern application architectures and specify the type of app that I want to register.
active-directory-b2c Relyingparty https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/relyingparty.md
description: Specify the RelyingParty element of a custom policy in Azure Active
-+ Last updated 01/22/2024 -+ #Customer intent: As a developer integrating Azure Active Directory B2C into my application, I want to understand how to configure the RelyingParty element, so that I can enforce user journeys and specify the claims needed for the issued token.
active-directory-b2c Restful Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/restful-technical-profile.md
description: Define a RESTful technical profile in a custom policy in Azure Acti
-+ Last updated 01/22/2024 -+ #Customer intent: As a developer customer-facing apps with Azure Active Directory B2C, I want to learn how to define a REST technical profile, so that I can send and receive data from external services.
active-directory-b2c Roles Resource Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/roles-resource-access-control.md
description: Learn how to use roles to control resource access.
-+ Last updated 02/24/2023 -+ #Customer Intent: As an Azure AD B2C administrator, I want to assign users the least privileged role required to access resources, so that I can ensure proper access control and security within my tenant.
active-directory-b2c Saml Identity Provider Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-identity-provider-technical-profile.md
description: Define a SAML technical profile in a custom policy in Azure Active
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure AD B2C with a SAML-based identity provider, I want to understand how to define a SAML identity provider technical profile, so that I can configure the necessary metadata and certificates for the integration.
active-directory-b2c Saml Issuer Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-issuer-technical-profile.md
description: Define a technical profile for a Security Assertion Markup Language
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer configuring SAML token issuance in Azure AD B2C, I want to define a technical profile for a SAML token issuer, so that I can emit a SAML token that is returned to the relying party application.
active-directory-b2c Saml Service Provider Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider-options.md
description: Learn how to configure Azure Active Directory B2C SAML service prov
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Saml Service Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/saml-service-provider.md
description: Learn how to configure Azure Active Directory B2C to provide SAML p
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Secure Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-api-management.md
description: Learn how to use access tokens issued by Azure Active Directory B2C
-+ Last updated 01/11/2024 -+ #Customer intent: As an API developer, I want to secure my Azure API Management API with Azure AD B2C, so that I can restrict access to only authenticated clients and ensure that only valid access tokens are accepted.
active-directory-b2c Secure Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/secure-rest-api.md
description: Secure your custom RESTful APIs used for API connectors in Azure AD B2C. -+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Security Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/security-architecture.md
description: End to end guidance on how to secure your Azure AD B2C solution.
-+ Last updated 05/09/2023 -+ #Customer intent: As a developer implementing Azure Active Directory B2C, I want to know the best practices for securing my identity solution, so that I can protect my users from bot attacks, fraudulent activities, and resource exhaustion.
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
description: Define a self-asserted technical profile in a custom policy in Azur
-+ Last updated 01/17/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to define a self-asserted technical profile with display, so that I can collect and validate user input.
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
description: Reference for service limits and restrictions for Azure Active Dire
-+ Last updated 05/11/2024-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/session-behavior.md
Title: Configure session behavior - Azure Active Directory B2C
description: Learn how to configure session behavior in Azure Active Directory B2C. -+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer configuring session behavior in Azure Active Directory B2C, I want to understand the different types of single sign-on sessions (Azure AD B2C session, federated identity provider session, application session) and how to configure their behavior, so that I can implement the most appropriate SSO method for my policy.
active-directory-b2c Sign In Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/sign-in-options.md
description: Learn about the sign-up and sign-in options you can use with Azure Active Directory B2C, including username and password, email, phone, or federation with social or external identity providers. -+ Last updated 03/22/2024 -+ #Customer Intent: As a developer integrating Azure AD B2C into my application, I want to understand the different sign-in options available so that I can choose the appropriate method for my users and configure the sign-in flow accordingly.
active-directory-b2c Social Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/social-transformations.md
description: Social account claims transformation examples for the Identity Expe
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure AD B2C, I want to understand how to use social account claims transformations, so that I can link new social identities with existing accounts, create alternative security IDs, get a list of identity providers, and remove alternative security IDs by identity provider.
active-directory-b2c Solution Articles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/solution-articles.md
Last updated 01/11/2024 --++ #Customer intent: As a developer, I want to access downloadable solution guides and training for Azure Active Directory B2C, so that I can understand how to implement and leverage Azure AD B2C for customer identity management in my applications.
active-directory-b2c String Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/string-transformations.md
description: String claims transformation examples for the Identity Experience F
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure AD B2C, I want to understand how to use string claims transformations, so that I can manipulate and compare string claims in my custom policies.
active-directory-b2c Stringcollection Transformations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/stringcollection-transformations.md
description: StringCollection claims transformation examples for the Identity Ex
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to understand how to use string collection claims transformations, so that I can add, parameterize, extract, and check values in string collections for claims.
active-directory-b2c Subjourneys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/subjourneys.md
description: Specify the sub journeys element of a custom policy in Azure Active
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer implementing user journeys in Azure AD B2C, I want to understand how to use sub journeys to organize and simplify the flow of orchestration steps, so that I can create reusable step sequences and implement branching to better represent the business logic.
active-directory-b2c Supported Azure Ad Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/supported-azure-ad-features.md
description: Learn about Microsoft Entra ID features, which are still supported
-+ Last updated 01/11/2024 -+ #Customer intent: As an Azure AD B2C tenant administrator, I want to understand the differences between Microsoft Entra ID and Azure AD B2C features, so that I can effectively manage user accounts and configure the appropriate features for my tenant.
active-directory-b2c Technical Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technical-overview.md
Title: Technical and feature overview - Azure Active Directory B2C
description: An in-depth introduction to the features and technologies in Azure Active Directory B2C. Azure Active Directory B2C has high availability globally. -+ Last updated 11/08/2023 -+ #Customer intent: As an IT admin or developer, I need to understand in more detail the technical aspects and features of Azure AD B2C and how it can help me build a customer-facing application.
active-directory-b2c Technicalprofiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technicalprofiles.md
description: Specify the TechnicalProfiles element of a custom policy in Azure A
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer integrating Azure Active Directory B2C into my application, I want to understand the different types of technical profiles available, so that I can choose the appropriate profile to communicate with Azure AD B2C and perform actions such as user creation, user profile reading, and authentication.
active-directory-b2c Tenant Management Check Tenant Creation Permission https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-check-tenant-creation-permission.md
description: Learn how to check tenant creation permission in Azure Active Direc
-+ Last updated 09/11/2024 -+ #Customer intent: "As an Azure AD B2C administrator, I want to restrict non-admin users from creating tenants, so that I can ensure security and prevent unauthorized access. Additionally, as a non-admin user, I want to check if I have permission to create a tenant, so that I can proceed with the necessary actions."
active-directory-b2c Tenant Management Directory Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-directory-quota.md
description: Learn how to manage directory size quota in your Azure AD B2C tenan
-+ Last updated 07/31/2024 -+ #Customer intent: As an Azure AD B2C tenant administrator, I want to monitor and manage the directory size quota, so that I can ensure that I don't exceed the maximum number of objects allowed in the directory and take necessary actions such as removing inactive users or requesting a quota increase.
active-directory-b2c Tenant Management Emergency Access Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-emergency-access-account.md
description: Learn how to manage emergency access accounts in Azure AD B2C tenan
-+ Last updated 09/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to create emergency access accounts with strong authentication and exclude them from conditional access policies, so that I can prevent accidental lockouts and ensure administrative access to the organization in case of emergencies.
active-directory-b2c Tenant Management Manage Administrator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-manage-administrator.md
description: Learn how to add an administrator account to your Azure Active Directory B2C tenant. Learn how to invite a guest account as an administrator into your Azure AD B2C tenant -+ Last updated 09/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to manage administrator accounts, add new administrators (work and guest accounts), assign roles to user accounts, remove role assignments, delete administrator accounts, and protect administrative accounts with multifactor authentication, so that I can control access and ensure security in my Azure AD B2C tenant.
active-directory-b2c Tenant Management Read Tenant Name https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management-read-tenant-name.md
description: Learn how to find tenant name and tenant ID
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer or IT administrator, I want to find my Azure AD B2C tenant details
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
description: Learn about detection and mitigation techniques for credential atta
-+ Last updated 09/20/2021 -+ #Customer Intent: As an Azure AD B2C administrator, I want to mitigate credential attacks by using smart lockout, so that I can protect user accounts from unauthorized access.
active-directory-b2c Tokens Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tokens-overview.md
description: Learn about the tokens used in Azure Active Directory B2C.
-+ Last updated 01/11/2024 -+
active-directory-b2c Troubleshoot With Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot-with-application-insights.md
description: How to set up Application Insights to trace the execution of your custom policies. -+ Last updated 01/22/2024 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer working with Azure Active Directory B2C, I want to collect logs from my custom policies using Application Insights, so that I can diagnose and troubleshoot any problems that may occur.
active-directory-b2c Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/troubleshoot.md
description: Learn about approaches to solving errors when working with custom p
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Trustframeworkpolicy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/trustframeworkpolicy.md
description: Specify the TrustFrameworkPolicy element of a custom policy in Azur
-+ Last updated 01/23/2024 -+ #Customer intent: As a developer creating custom policies for Azure Active Directory B2C, I want to understand the structure and elements of the TrustFrameworkPolicy XML files, so that I can define the necessary attributes, elements, and references for my policies.
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
description: Follow this tutorial to learn how to prepare for registering your a
-+ Last updated 09/11/2024 -+
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
Title: Tutorial - Create user flows and custom policies - Azure Active Directory
description: Follow this tutorial to learn how to create user flows and custom policies in the Azure portal to enable sign up, sign in, and user profile editing for your applications in Azure Active Directory B2C. -+ Last updated 11/10/2023 -+ zone_pivot_groups: b2c-policy-type #Customer intent: As a developer, I want to learn how to create user flows and custom policies in the Azure portal to enable sign up, sign in, and user profile editing for my applications in Azure Active Directory B2C.
active-directory-b2c Tutorial Delete Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-delete-tenant.md
description: Steps describing how to delete an Azure AD B2C tenant. Learn how to
-+ Last updated 09/11/2024 -+ #Customer intent: As an Azure AD B2C administrator, I want to delete the tenant and all associated resources, so that I can clean up after completing tutorials or testing.
active-directory-b2c Tutorial Register Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md
description: Follow this tutorial to learn how to register a web application in Azure Active Directory B2C using the Azure portal. -+ Previously updated : 11/13/2023 Last updated : 10/10/2024 -+ #Customer intent: As a developer or IT admin, I want to register my web application in Azure AD B2C so that I can enable my users to sign up, sign in, and manage their profiles.
For a web application, you need to create an application secret. The client secr
## Enable ID token implicit grant
-If you register this app and configure it with [https://jwt.ms/](https://jwt.ms/) app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:
+You can enable the implicit grant flow to use this app registration to [test a user flow](add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow#test-the-user-flow).
-1. In the left menu, under **Manage**, select **Authentication**.
+1. Select the app registration you created.
+
+1. Under **Manage**, select **Authentication**.
1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes. 1. Select **Save**. +
+> [!NOTE]
+> If you enable implicit grant to test a user flow, make sure you disable the implicit grant flow settings before you deploy your app to production.
++ ## Next steps In this article, you learned how to:
In this article, you learned how to:
> * Register a web application > * Create a client secret
-Learn how to [Create user flows in Azure Active Directory B2C](tutorial-create-user-flows.md)
+Learn how to [Create user flows in Azure Active Directory B2C](tutorial-create-user-flows.md)
active-directory-b2c Tutorial Register Spa https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-spa.md
Title: Register a single-page application in Azure Active Directory B2C
+ Title: Register a single-page app in Azure Active Directory B2C
description: Follow this guide to learn how to register a single-page application (SPA) in Azure Active Directory B2C using the Azure portal. -+ -+ Previously updated : 01/11/2024 Last updated : 10/11/2024 -+ #Customer intent: As a developer building a single-page application (SPA), I want to register the SPA in Azure Active Directory B2C, so that I can enable authentication and authorization for my application and allow users to sign in and access protected APIs.
To take advantage of this flow, your application can use an authentication libra
### Implicit grant flow
-Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), only support the implicit grant flow or your applications is implemented to use implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens. Unlike the authorization code flow, implicit grant flow doesn't return a **Refresh token**.
+Some libraries, like [MSAL.js 1.x](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib), only support the implicit grant flow, or your application might be implemented to use the implicit flow. In these cases, Azure AD B2C supports the [OAuth 2.0 implicit flow](implicit-flow-single-page-application.md). The implicit grant flow allows the application to get **ID** and **Access** tokens from the authorize endpoint. Unlike the authorization code flow, the implicit grant flow doesn't return a **Refresh token**.
![Single-page applications-implicit](./media/tutorial-single-page-app/spa-app.svg)
This authentication flow doesn't include application scenarios that use cross-pl
1. Select **Register**.
-## Enable the implicit flow
+## Enable the implicit grant flow
-If you're using MSAL.js 1.3 or an earlier version with the implicit grant flow in your SPA app, or if you configure the [https://jwt.ms/](https://jwt.ms/) app for testing a user flow or custom policy, you need to enable the implicit grant flow in the app registration:
+You can enable the implicit grant flow for two reasons: when you're using MSAL.js version 1.3 or earlier, or when you use an app registration to [test a user flow](add-sign-up-and-sign-in-policy.md?pivots=b2c-user-flow#test-the-user-flow).
-1. In the left menu, under **Manage**, select **Authentication**.
+Use these steps to enable implicit grant flow for your app:
+
+1. Select the app registration you created.
+
+1. Under **Manage**, select **Authentication**.
1. Under **Implicit grant and hybrid flows**, select both the **Access tokens (used for implicit flows)** and **ID tokens (used for implicit and hybrid flows)** check boxes. 1. Select **Save**.
-If your app uses MSAL.js 2.0 or later, don't enable implicit flow grant as MSAL.js 2.0+ supports the authorization code flow with PKCE.
+> [!NOTE]
+> If your app uses MSAL.js 2.0 or later, don't enable implicit grant flow as MSAL.js 2.0+ supports the [OAuth 2.0 Authorization code flow (with PKCE)](./authorization-code-flow.md). If you enable implicit grant to test a user flow, make sure you disable the implicit grant flow settings before you deploy your app to production.
-## Migrate from the implicit flow
+## Migrate from the implicit grant flow
-If you've an existing application that uses the implicit flow, we recommend that you migrate to use the authorization code flow by using a framework that supports it, like [MSAL.js 2.0+](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser).
+If you have an existing application that uses the implicit flow, we recommend that you migrate to the authorization code flow with PKCE by using a framework that supports it, such as [MSAL.js 2.0+](https://github.com/AzureAD/microsoft-authentication-library-for-js/tree/dev/lib/msal-browser).
When all your production SPA represented by an app registration starts using the authorization code flow, disable the implicit grant flow settings as follows:
active-directory-b2c User Flow Custom Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-custom-attributes.md
description: Define custom attributes for your application in Azure Active Direc
-+ Last updated 09/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c User Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-overview.md
description: Learn more about built-in user flows and the custom policy extensible policy framework of Azure Active Directory B2C. -+ Last updated 11/09/2023 -+ #Customer intent: As a developer, I want to understand the difference between user flows and custom policies, so that I can choose the best method for my business needs. I want to understand the scenarios that can be enabled with each method, and how to integrate them with my applications.
active-directory-b2c User Flow Versions Legacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-versions-legacy.md
description: Learn about legacy versions of user flows available in Azure Active
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to understand the differences between legacy and recommended user flow versions, so that I can choose the appropriate user flow for my production applications.
active-directory-b2c User Flow Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-versions.md
description: Learn about the versions of user flows available in Azure Active Di
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer using Azure Active Directory B2C, I want to understand the differences between Recommended user flows and Standard (Legacy) user flows, so that I can choose the appropriate user flow version for my application and ensure it is maintained and updated.
active-directory-b2c User Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-migration.md
description: Migrate user accounts from another identity provider to Azure AD B2
-+ Last updated 01/11/2024 -+ #Customer intent: As an IT admin migrating user accounts to Azure AD B2C, I want to understand the different migration methods (pre migration and seamless migration), so that I can choose the appropriate approach and write the necessary application or script using the Microsoft Graph API.
active-directory-b2c User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-overview.md
Title: Overview of user accounts in Azure Active Directory B2C
description: Learn about the types of user accounts that can be used in Azure Active Directory B2C. -+ Last updated 02/13/2024 -+ #Customer intent: As a developer or IT administrator, I want to understand the different types of user accounts available Azure AD B2C, so that I can properly manage and configure user accounts for my tenant.
active-directory-b2c User Profile Attributes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-profile-attributes.md
Title: User profile attributes in Azure Active Directory B2C
description: Learn about the user resource type attributes that Azure AD B2C directory user profile supports. Find out about built-in attributes, extensions, and how attributes map to Microsoft Graph. -+ Last updated 01/11/2024 -+
active-directory-b2c Userinfo Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userinfo-endpoint.md
description: Define a UserInfo endpoint in a custom policy in Azure Active Direc
-+ Last updated 01/11/2024 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md
description: Specify the UserJourneys element of a custom policy in Azure Active
-+ Last updated 01/17/2024 -+ #Customer intent: As a developer integrating Azure AD B2C into an application, I want to understand how custom policy user journeys work so that I can design the steps that a users goes through for the relying party application to obtain the desired claims for a user.
active-directory-b2c Validation Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/validation-technical-profile.md
description: Validate claims by using a validation technical profile in a custom
-+ Last updated 01/11/2024 -+ #Customer intent: As a developer implementing Azure Active Directory B2C custom policies, I want to define a validation technical profile, so that I can validate the output claims of a self-asserted technical profile and control the execution of subsequent validation technical profiles based on the success or failure of the validation.
active-directory-b2c View Audit Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/view-audit-logs.md
description: How to access Azure AD B2C audit logs programmatically and in the A
-+ Last updated 01/22/2024 -+
active-directory-b2c Whats New Docs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/whats-new-docs.md
Title: "What's new in Azure Active Directory business-to-customer (B2C)" description: "New and updated documentation for the Azure Active Directory business-to-customer (B2C)." Last updated 10/01/2024--++
api-management Developer Portal Wordpress Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/developer-portal-wordpress-plugin.md
In this step, create a new Microsoft Entra app. In later steps, you configure th
`https://<apim-instance-name>.developer.azure-api.net/signin`
-1. On the **Authentication** page, under **Single-page application**, select **Add URI** and enter the following URI, substituting the name of your API Management instance:
+1. Select **+ Add a platform** again, and then select **Single-page application**.
+1. On the **Configure single-page application** page, enter the following redirect URI, substituting the name of your API Management instance, and select **Configure**:
`https://<apim-instance-name>.developer.azure-api.net/`
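If you'd rather script the redirect URI configuration, here's a minimal sketch that uses Microsoft Graph through `az rest`. It isn't part of the article's steps: the object ID placeholder is an assumption, and the PATCH replaces the app registration's entire list of SPA redirect URIs with the two URIs shown above.

```bash
# Placeholders: the app registration's object ID (not the application/client ID)
# and the name of your API Management instance.
OBJECT_ID="<app-registration-object-id>"
APIM="<apim-instance-name>"

# Set both developer portal URLs as SPA redirect URIs. This replaces spa.redirectUris.
az rest --method PATCH \
  --url "https://graph.microsoft.com/v1.0/applications/${OBJECT_ID}" \
  --headers "Content-Type=application/json" \
  --body "{\"spa\": {\"redirectUris\": [\"https://${APIM}.developer.azure-api.net/signin\", \"https://${APIM}.developer.azure-api.net/\"]}}"
```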
application-gateway Configuration Infrastructure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configuration-infrastructure.md
After you configure *active public and private listeners* (with rules) *with the
| Source | Source ports | Destination | Destination ports | Protocol | Access | |||||||
-|`<as per need>`|Any|`<Public and Private<br/>frontend IPs>`|`<listener ports>`|TCP|Allow|
+|`<as per need>`|Any|`<Public and Private frontend IPs>`|`<listener ports>`|TCP|Allow|
**Infrastructure ports**: Allow incoming requests from the source as the **GatewayManager** service tag and **Any** destination. The destination port range differs based on SKU and is required for communicating the status of the backend health. These ports are protected/locked down by Azure certificates. External entities can't initiate changes on those endpoints without appropriate certificates in place.
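As a companion to the table above, the following Azure CLI sketch creates the two inbound rules on an existing network security group. It's a sketch under stated assumptions: the resource group and NSG names, the rule priorities, the `Internet` source, the listener ports, and the `65200-65535` infrastructure port range (commonly documented for the v2 SKU) are placeholders; use the values and port range that apply to your deployment.

```bash
RG="my-resource-group"     # placeholder resource group
NSG="appgw-subnet-nsg"     # placeholder NSG attached to the Application Gateway subnet

# Allow client traffic to the public and private frontend IPs on the listener ports.
az network nsg rule create --resource-group "$RG" --nsg-name "$NSG" \
  --name AllowListenerTraffic --priority 200 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes Internet --source-port-ranges '*' \
  --destination-address-prefixes "<public-and-private-frontend-ips>" \
  --destination-port-ranges 80 443

# Allow GatewayManager infrastructure traffic to any destination (port range assumed for v2).
az network nsg rule create --resource-group "$RG" --nsg-name "$NSG" \
  --name AllowGatewayManager --priority 210 --direction Inbound --access Allow \
  --protocol Tcp --source-address-prefixes GatewayManager --source-port-ranges '*' \
  --destination-address-prefixes '*' --destination-port-ranges 65200-65535
```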
automanage Overview About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/overview-about.md
# Azure Automanage machine best practices > [!CAUTION]
+> On September 30, 2027, the Azure Automanage Best Practices service will be retired. As a result, attempting to create a new configuration profile or onboarding a new subscription to the service will result in an error. Learn more [here](https://aka.ms/automanagemigration/) about how to migrate to Azure Policy before that date.
+
+> [!IMPORTANT]
> On 31 August 2024, both Automation Update Management and the Log Analytics agent it uses will be retired. Migrate to Azure Update Manager before that. Refer to guidance on migrating to Azure Update Manager [here](/azure/update-manager/guidance-migration-automation-update-management-azure-update-manager?WT.mc_id=Portal-Microsoft_Azure_Automation). [Migrate Now](https://portal.azure.com/). This article covers information about Azure Automanage machine best practices, which have the following benefits:
automanage Overview Configuration Profiles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automanage/overview-configuration-profiles.md
# Configuration profiles- > [!CAUTION]
+> On September 30, 2027, the Azure Automanage Best Practices service will be retired. As a result, attempting to create a new configuration profile or onboarding a new subscription to the service will result in an error. Learn more [here](https://aka.ms/automanagemigration/) about how to migrate to Azure Policy before that date.
+
+> [!IMPORTANT]
> On 31 August 2024, both Automation Update Management and the Log Analytics agent it uses will be retired. Migrate to Azure Update Manager before that. Refer to guidance on migrating to Azure Update Manager [here](/azure/update-manager/guidance-migration-automation-update-management-azure-update-manager?WT.mc_id=Portal-Microsoft_Azure_Automation). [Migrate Now](https://portal.azure.com/). When you are enabling Automanage for your machine, a configuration profile is required. Configuration profiles are the foundation of this service. They define which services we onboard your machines to and to some extent what the configuration of those services would be.
azure-cache-for-redis Cache Tutorial Aks Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-tutorial-aks-get-started.md
Title: 'Tutorial: Get started connecting an AKS application to a cache' description: In this tutorial, you learn how to connect your AKS-hosted application to an Azure Cache for Redis instance.---- Previously updated : 08/15/2023 Last updated : 10/01/2024 #CustomerIntent: As a developer, I want to see how to use a Azure Cache for Redis instance with an AKS container so that I see how I can use my cache instance with a Kubernetes cluster.
In this tutorial, you adapt the [AKS sample voting application](https://github.c
- An Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - An Azure Kubernetes Service Cluster - For more information on creating a cluster, see [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal).
+- A user assigned managed identity that you want to use to connect to your Azure Cache for Redis instance.
> [!IMPORTANT] > This tutorial assumes that you are familiar with basic Kubernetes concepts like containers, pods and service.
In this tutorial, you adapt the [AKS sample voting application](https://github.c
For this tutorial, use a Standard C1 cache. :::image type="content" source="media/cache-tutorial-aks-get-started/cache-new-instance.png" alt-text="Screenshot of creating a Standard C1 cache in the Azure portal":::
-1. On the **Advanced** tab, enable **Non-TLS port**.
- :::image type="content" source="media/cache-tutorial-aks-get-started/cache-non-tls.png" alt-text="Screenshot of the Advanced tab with Non-TLS enabled during cache creation.":::
- 1. Follow the steps through to create the cache.
-> [!IMPORTANT]
-> This tutorial uses a non-TLS port for demonstration, but we highly recommend that you use a TLS port for anything in production.
+1. Once your Redis cache instance is created, navigate to the **Authentication** tab. Select the user assigned managed identity you want to use to connect to your Redis cache instance, then select **Save**.
+
+1. Alternatively, you can navigate to Data Access Configuration on the Resource menu to create a new Redis user with your user assigned managed identity to connect to your cache.
+
+1. Take note of the user name for your Redis user from the portal. You use this user name with the AKS workload.
+
+## Run sample locally
+
+To run this sample locally, configure your user principal as a Redis User on your Redis instance. The code sample uses your user principal through [DefaultAzureCredential](https://learn.microsoft.com/en-us/dotnet/azure/sdk/authentication/?tabs=command-line#use-defaultazurecredential-in-an-application) to connect to the Redis instance.
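As a hedged illustration of that local setup (these aren't steps from the article): signing in with the Azure CLI gives `DefaultAzureCredential` a credential it can pick up, after which you can run the sample. This assumes the sample is a .NET project and that the account you sign in with is the one you configured as a Redis User.

```bash
# DefaultAzureCredential can use the Azure CLI credential for a local run,
# so sign in with the account you added as a Redis User on the cache.
az login

# Assumes the sample is a .NET project; run it from the sample's project directory.
dotnet run
```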
+
+## Configure your AKS cluster
-Creating the cache can take a few minutes. You can move to the next section while the process finishes.
+Follow these [steps](/azure/aks/workload-identity-deploy-cluster) to configure a workload identity for your AKS cluster. In particular, complete the following steps; a condensed CLI sketch follows the list:
-## Install and connect to your AKS cluster
+ - Enable OIDC issuer and workload identity
+ - Skip the step to create a user-assigned managed identity if you already created your managed identity. If you create a new managed identity, ensure that you create a new Redis User for it and assign the appropriate data access permissions.
+ - Create a Kubernetes service account annotated with the client ID of your user-assigned managed identity.
+ - Create a federated identity credential for your AKS cluster.
+
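The following Azure CLI sketch condenses those steps under stated assumptions: the resource group, cluster, identity, namespace, and service account names are placeholders; the user-assigned managed identity from the prerequisites already exists; and your Azure CLI and kubectl contexts point at the right subscription and cluster.

```bash
RG="mygroup"                  # placeholder resource group
CLUSTER="clustername"         # placeholder AKS cluster name
IDENTITY="redis-identity"     # placeholder user-assigned managed identity
NAMESPACE="default"
SERVICE_ACCOUNT="redis-workload-sa"

# Enable the OIDC issuer and workload identity on the cluster.
az aks update --resource-group "$RG" --name "$CLUSTER" \
  --enable-oidc-issuer --enable-workload-identity

# Collect the values needed for the service account and the federated credential.
OIDC_ISSUER=$(az aks show --resource-group "$RG" --name "$CLUSTER" \
  --query "oidcIssuerProfile.issuerUrl" -o tsv)
CLIENT_ID=$(az identity show --resource-group "$RG" --name "$IDENTITY" \
  --query "clientId" -o tsv)

# Create a Kubernetes service account annotated with the identity's client ID.
kubectl create serviceaccount "$SERVICE_ACCOUNT" --namespace "$NAMESPACE"
kubectl annotate serviceaccount "$SERVICE_ACCOUNT" --namespace "$NAMESPACE" \
  "azure.workload.identity/client-id=${CLIENT_ID}"

# Create a federated identity credential that trusts the cluster's OIDC issuer.
az identity federated-credential create --name "aks-redis-federated-cred" \
  --identity-name "$IDENTITY" --resource-group "$RG" \
  --issuer "$OIDC_ISSUER" \
  --subject "system:serviceaccount:${NAMESPACE}:${SERVICE_ACCOUNT}" \
  --audience api://AzureADTokenExchange
```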
+## Configure your workload that connects to Azure Cache for Redis
+
+Next, set up the AKS workload to connect to Azure Cache for Redis after you configure the AKS cluster.
+
+1. Download the code for the [sample app](https://github.com/Azure-Samples/azure-cache-redis-sample/connect-from-aks).
+
+1. Build and push the Docker image to your Azure Container Registry by using the [az acr build](/cli/azure/acr#az-acr-build) command.
+
+ ```bash
+ az acr build --image sample/connect-from-aks-sample:1.0 --registry yourcontainerregistry --file Dockerfile .
+ ```
+
+1. Attach your container registry to your AKS cluster by using the following command:
+
+ ```bash
+ az aks update --name clustername --resource-group mygroup --attach-acr youracrname
+ ```
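Optionally, you can confirm that the image is in the registry before you deploy. The registry and repository names below reuse the values from the az acr build step.

```bash
# List repositories in the registry and the tags for the sample image.
az acr repository list --name yourcontainerregistry --output table
az acr repository show-tags \
  --name yourcontainerregistry \
  --repository sample/connect-from-aks-sample \
  --output table
```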
+
+## Deploy your workload
In this section, you first install the Kubernetes CLI and then connect to an AKS cluster.
If you use Azure Cloud Shell, _kubectl_ is already installed, and you can skip t
### Connect to your AKS cluster
-Use the portal to copy the resource group and cluster name for your AKS cluster. To configure _kubectl_ to connect to your AKS cluster, use the following command with your resource group and cluster name:
-
-```bash
- az aks get-credentials --resource-group myResourceGroup --name myClusterName
- ```
-
-Verify that you're able to connect to your cluster by running the following command:
+1. Use the portal to copy the resource group and cluster name for your AKS cluster. To configure _kubectl_ to connect to your AKS cluster, use the following command with your resource group and cluster name:
-```bash
-kubectl get nodes
-```
-
-You should see similar output showing the list of your cluster nodes.
-
-```output
-NAME STATUS ROLES AGE VERSION
-aks-agentpool-21274953-vmss000001 Ready agent 1d v1.24.15
-aks-agentpool-21274953-vmss000003 Ready agent 1d v1.24.15
-aks-agentpool-21274953-vmss000006 Ready agent 1d v1.24.15
-```
-
-## Update the voting application to use Azure Cache for Redis
+ ```bash
+ az aks get-credentials --resource-group myResourceGroup --name myClusterName
+ ```
-Use the [.yml file](https://github.com/Azure-Samples/azure-voting-app-redis/blob/master/azure-vote-all-in-one-redis.yaml) in the sample for reference.
+1. Verify that you're able to connect to your cluster by running the following command:
-Make the following changes to the deployment file before you save the file as _azure-vote-sample.yaml_.
+ ```bash
+ kubectl get nodes
+ ```
-1. Remove the deployment and service named `azure-vote-back`. This deployment is used to deploy a Redis container to your cluster that is not required when using Azure Cache for Redis.
+ You should see similar output showing the list of your cluster nodes.
-2. Replace the value `REDIS` variable from "azure-vote-back" to the _hostname_ of the Azure Cache for Redis instance that you created earlier. This change indicates that your application should use Azure Cache for Redis instead of a Redis container.
+ ```bash
+ NAME STATUS ROLES AGE VERSION
+ aks-agentpool-21274953-vmss000001 Ready agent 1d v1.29.7
+ aks-agentpool-21274953-vmss000003 Ready agent 1d v1.29.7
+ aks-agentpool-21274953-vmss000006 Ready agent 1d v1.29.7
+ ```
-3. Define variable named `REDIS_PWD`, and set the value to the _access key_ for the Azure Cache for Redis instance that you created earlier.
+## Run your workload
-After all the changes, the deployment file should look like following file with your _hostname_ and _access key_. Save your file as _azure-vote-sample.yaml_.
+1. The following code describes the pod specification file that you use to run your workload. Note that the pod has the label _azure.workload.identity/use: "true"_ and specifies _serviceAccountName_, as required by AKS workload identity. When using access key authentication, replace the values of the AUTHENTICATION_TYPE, REDIS_HOSTNAME, and REDIS_ACCESSKEY environment variables.
-```YAML
-apiVersion: apps/v1
-kind: Deployment
-metadata:
- name: azure-vote-front
-spec:
- replicas: 1
- selector:
- matchLabels:
- app: azure-vote-front
- strategy:
- rollingUpdate:
- maxSurge: 1
- maxUnavailable: 1
- minReadySeconds: 5
- template:
+ ```yml
+ apiVersion: v1
+ kind: Pod
metadata:
+ name: entrademo-pod
labels:
- app: azure-vote-front
+ azure.workload.identity/use: "true" # Required. Only pods with this label can use workload identity.
spec:
- nodeSelector:
- "kubernetes.io/os": linux
+ serviceAccountName: workload-identity-sa
containers:
- - name: azure-vote-front
- image: mcr.microsoft.com/azuredocs/azure-vote-front:v1
- ports:
- - containerPort: 80
+ - name: entrademo-container
+ image: youracr.azurecr.io/connect-from-aks-sample:1.0
+ imagePullPolicy: Always
+ command: ["dotnet", "ConnectFromAKS.dll"]
resources:
- requests:
- cpu: 250m
limits:
- cpu: 500m
+ memory: "256Mi"
+ cpu: "500m"
+ requests:
+ memory: "128Mi"
+ cpu: "250m"
env:
- - name: REDIS
- value: myrediscache.redis.cache.windows.net
- - name: REDIS_PWD
- value: myrediscacheaccesskey
-
-apiVersion: v1
-kind: Service
-metadata:
- name: azure-vote-front
-spec:
- type: LoadBalancer
- ports:
- - port: 80
- selector:
- app: azure-vote-front
-```
-
-## Deploy and test your application
+ - name: AUTHENTICATION_TYPE
+ value: "MANAGED_IDENTITY" # change to ACCESS_KEY to authenticate using access key
+ - name: REDIS_HOSTNAME
+ value: "your redis hostname"
+ - name: REDIS_ACCESSKEY
+ value: "your access key"
+ - name: REDIS_PORT
+ value: "6380"
+ restartPolicy: Never
+
+ ```
-Run the following command to deploy this application to your AKS cluster:
+1. Save this file as _podspec.yaml_, and then apply it to your AKS cluster by running the following command:
-```bash
-kubectl apply -f azure-vote-sample.yaml
-```
+ ```bash
+ kubectl apply -f podspec.yaml
+ ```
-You get a response indicating your deployment and service was created:
+ You get a response indicating your pod was created:
-```output
-deployment.apps/azure-vote-front created
-service/azure-vote-front created
-```
+ ```bash
+ pod/entrademo-pod created
+ ```
-To test the application, run the following command to check if the pod is running:
+1. To test the application, run the following command to check if the pod is running:
-```bash
-kubectl get pods
-```
+ ```bash
+ kubectl get pods
+ ```
-You see your pod running successfully like:
+ You see your pod running successfully like:
-```output
-NAME READY STATUS RESTARTS AGE
-azure-vote-front-7dd44597dd-p4cnq 1/1 Running 0 68s
-```
+ ```bash
+ NAME READY STATUS RESTARTS AGE
+ entrademo-pod 0/1 Completed 0 42s
+ ```
-Run the following command to get the endpoint for your application:
+1. Because the sample is a console app, check the logs of the pod to verify that it ran as expected by using the following command:
-```bash
-kubectl get service azure-vote-front
-```
-
-You might see that the EXTERNAL-IP has status `<pending>` for a few minutes. Keep retrying until the status is replaced by an IP address.
-
-```output
-NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
-azure-vote-front LoadBalancer 10.0.166.147 20.69.136.105 80:30390/TCP 90s
-```
+ ```bash
+ kubectl logs entrademo-pod
+ ```
-Once the External-IP is available, open a web browser to the External-IP address of your service and you see the application running as follows:
+ You see logs like the following, indicating that your pod successfully connected to your Redis instance by using the user-assigned managed identity:
+ ```bash
+ Connecting with managed identity..
+ Retrieved value from Redis: Hello, Redis!
+ Success! Previous value: Hello, Redis!
+ ```
-## Clean up your deployment
+## Clean up your cluster
To clean up your cluster, run the following command: ```bash
-kubectl delete deployment azure-vote-front
-kubectl delete service azure-vote-front
+kubectl delete pod entrademo-pod
``` [!INCLUDE [cache-delete-resource-group](includes/cache-delete-resource-group.md)]
## Related content - [Quickstart: Deploy an Azure Kubernetes Service (AKS) cluster using the Azure portal](/azure/aks/learn/quick-kubernetes-deploy-portal)-- [AKS sample voting application](https://github.com/Azure-Samples/azure-voting-app-redis/tree/master)
+- [Quickstart: Deploy and configure workload identity on an Azure Kubernetes Service (AKS) cluster](/azure/aks/workload-identity-deploy-cluster)
+- [Azure Cache for Redis Entra ID Authentication](/azure/azure-cache-for-redis/cache-azure-active-directory-for-authentication)
backup Azure Kubernetes Service Backup Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/azure-kubernetes-service-backup-overview.md
Azure Backup for AKS currently supports the following two options when doing a r
2. **Patch**: This option allows patching of mutable variables in the backed-up resource onto the resource in the target cluster. If you want to update the number of replicas in the target cluster, you can opt for patching as an operation. >[!Note]
->AKS backup currently doesn't delete and recreate resources in the target cluster if they already exist. If you attempt to restore Persistent Volumess in the original location, delete the existing Persistent Volumes, and then do the restore operation.
+>AKS backup currently doesn't delete and recreate resources in the target cluster if they already exist. If you attempt to restore Persistent Volumes in the original location, delete the existing Persistent Volumes, and then do the restore operation.
## Use custom hooks for backup and restore
cost-management-billing Avoid Unused Subscriptions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/avoid-unused-subscriptions.md
Title: Avoid unused subscriptions
-description: Learn how to avoid having an unused subscription that gets automatically deleted.
+description: Learn how to prevent unused subscriptions from getting automatically blocked or deleted due to inactivity.
- Previously updated : 07/25/2024+ Last updated : 10/08/2024
+# customer intent: As a billing administrator, I want to prevent my subscriptions from getting blocked or deleted.
# Avoid unused subscriptions Unused and abandoned subscriptions can increase potential security risks to your Azure account. To reduce this risk, Microsoft takes measures to secure, protect, and ultimately delete unused Azure subscriptions.
+>[!NOTE]
+> This article only applies to Microsoft Online Service Program (MOSP) and Cloud Solution Provider (CSP) subscriptions.
+ ## What is an unused subscription? Unused subscriptions don't have usage, activity, or open support requests for more than one year (12 months). When a subscription enters the unused state, you receive a notification from Microsoft stating that your unused subscriptions will get blocked in 30 days.
databox Data Box Disk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-disk-overview.md
Use Data Box Disk to transfer terabytes of data in scenarios with limited networ
- **Incremental transfer** - when an initial bulk transfer is done using Data Box Disk (seed) followed by incremental transfers over the network. For example, Commvault and Data Box Disk are used to move backup copies to Azure. This migration is followed by copying incremental data over the network to Azure Storage. - **Periodic uploads** - when a large amount of data is generated periodically and needs to be moved to Azure. One possible example is the transfer of video content generated on oil rigs and windmill farms for energy exploration. Additionally, periodic uploads can be useful for advanced driver assist system (ADAS) data collection campaigns, where data is collected from test vehicles.
-### Ingestion of data from Data Box
-
-Azure providers and non-Azure providers can ingest data from Azure Data Box. The Azure services that provide data ingestion from Azure Data Box include:
--- **SharePoint Online** - use Azure Data Box and the SharePoint Migration Tool (SPMT) to migrate your file share content to SharePoint Online. Using Data Box, you remove the dependency on your WAN link to transfer the data. For more information, see [Use the Azure Data Box Heavy to migrate your file share content to SharePoint Online](data-box-heavy-migrate-spo.md).--- **Azure File Sync** - replicates files from your Data Box to an Azure file share, enabling you to centralize your file services in Azure while maintaining local access to your data. For more information, see [Deploy Azure File Sync](../storage/file-sync/file-sync-deployment-guide.md).--- **HDFS stores** - migrate data from an on-premises Hadoop Distributed File System (HDFS) store of your Hadoop cluster into Azure Storage using Data Box. For more information, see [Migrate from on-premises HDFS store to Azure Storage with Azure Data Box](../storage/blobs/data-lake-storage-migrate-on-premises-hdfs-cluster.md).--- **Azure Backup** - allows you to move large backups of critical enterprise data through offline mechanisms to an Azure Recovery Services Vault. For more information, see [Azure Backup overview](../backup/backup-overview.md).-
-You can use your Data Box data with many non-Azure service providers. For instance:
--- **[Veeam](https://helpcenter.veeam.com/docs/backup/hyperv/osr_adding_data_box.html?ver=100)** - allows you to back up and replicate large amounts of data from your Hyper-V machine to your Data Box.
+> [!IMPORTANT]
+> Azure Data Box Disk is now generally available with a hardware-encrypted option in select countries and regions. These Data Box Disk self-encrypting drives (SEDs) are well suited for data transfers from Linux systems and support data transfer rates similar to BitLocker-encrypted Data Box Disks on Windows. They're popular with some of our automotive customers building ADAS capabilities.
## The workflow
databox Data Box Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/databox/data-box-overview.md
Here are the various scenarios where Data Box can be used to export data from Az
- **Migrate back to on-premises or to another cloud service provider** - when you want to move all the data back to on-premises, or to another cloud service provider, export data via Data Box to migrate the workloads.
+### Ingestion of data from Data Box
+Azure providers and non-Azure providers can ingest data from Azure Data Box. The Azure services that provide data ingestion from Azure Data Box include:
+
+- **SharePoint Online** - use Azure Data Box and the SharePoint Migration Tool (SPMT) to migrate your file share content to SharePoint Online. Using Data Box, you remove the dependency on your WAN link to transfer the data. For more information, see [Use the Azure Data Box Heavy to migrate your file share content to SharePoint Online](data-box-heavy-migrate-spo.md).
+
+- **Azure File Sync** - replicates files from your Data Box to an Azure file share, enabling you to centralize your file services in Azure while maintaining local access to your data. For more information, see [Deploy Azure File Sync](../storage/file-sync/file-sync-deployment-guide.md).
+
+- **HDFS stores** - migrate data from an on-premises Hadoop Distributed File System (HDFS) store of your Hadoop cluster into Azure Storage using Data Box. For more information, see [Migrate from on-premises HDFS store to Azure Storage with Azure Data Box](../storage/blobs/data-lake-storage-migrate-on-premises-hdfs-cluster.md).
+
+- **Azure Backup** - allows you to move large backups of critical enterprise data through offline mechanisms to an Azure Recovery Services Vault. For more information, see [Azure Backup overview](../backup/backup-overview.md).
+
+You can use your Data Box data with many non-Azure service providers. For instance:
+
+- **[Veeam](https://helpcenter.veeam.com/docs/backup/hyperv/osr_adding_data_box.html?ver=100)** - allows you to back up and replicate large amounts of data from your Hyper-V machine to your Data Box.
+
## Benefits Data Box is designed to move large amounts of data to Azure with little to no impact to the network. The solution has the following benefits:
defender-for-iot Eiot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/eiot-sensor.md
- Title: Enhance device discovery with a Microsoft Defender for IoT Enterprise IoT network sensor
-description: Learn how to register an Enterprise IoT network sensor in Defender for IoT for extra device visibility not covered by Defender for Endpoint.
- Previously updated : 06/05/2023---
-# Discover Enterprise IoT devices with an Enterprise IoT network sensor (Public preview)
-
-> [!IMPORTANT]
-> Registering a new Enterprise IoT network sensor as described in this article is no longer available. For customers with the Azure Consumption Revenue (ACR) or legacy license, Defender for IoT maintains existing Enterprise IoT network sensors.
-
-This article describes how to register an Enterprise IoT network sensor in Microsoft Defender for IoT.
-
-Microsoft Defender XDR customers with an Enterprise IoT network sensor can see all discovered devices in the **Device inventory** in either Microsoft Defender XDR or Defender for IoT. You'll also get extra security value from more alerts, vulnerabilities, and recommendations in Microsoft Defender XDR for the newly discovered devices.
-
-If you're a Defender for IoT customer working solely in the Azure portal, an Enterprise IoT network sensor provides extra device visibility to Enterprise IoT devices, such as Voice over Internet Protocol (VoIP) devices, printers, and cameras, which might not be covered by your OT network sensors.
-
-Defender for IoT [alerts](how-to-manage-cloud-alerts.md) and [recommendations](recommendations.md) for devices discovered by the Enterprise IoT sensor only are available only in the Azure portal.
-
-For more information, see [Securing IoT devices in the enterprise](concept-enterprise.md).
-
-> [!IMPORTANT]
-> The Enterprise IoT Network sensor is currently in PREVIEW. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-
-## Prerequisites
-
-This section describes the prerequisites required before deploying an Enterprise IoT network sensor.
-
-### Azure requirements
--- To view Defender for IoT data in Microsoft Defender XDR, including devices, alerts, recommendations, and vulnerabilities, you must have **Enterprise IoT security** turned on in [Microsoft Defender XDR](eiot-defender-for-endpoint.md). -
- If you only want to view data in the Azure portal, you don't need Microsoft Defender XDR. You can also turn on **Enterprise IoT security** in Microsoft Defender XDR after registering your network sensor to bring [extra device visibility and security value](concept-enterprise.md#enterprise-iot-security-in-microsoft-defender-xdr) to your organization.
--- Make sure you can access the Azure portal as a [Security admin](../../role-based-access-control/built-in-roles.md#security-admin), [Contributor](../../role-based-access-control/built-in-roles.md#contributor), or [Owner](../../role-based-access-control/built-in-roles.md#owner) user. If you don't already have an Azure account, you can [create your free Azure account today](https://azure.microsoft.com/free/).-
-### Network requirements
--- Identify the devices and subnets you want to monitor so that you understand where to place an Enterprise IoT sensor in your network. You might want to deploy multiple Enterprise IoT sensors.--- Configure traffic mirroring in your network so that the traffic you want to monitor is mirrored to your Enterprise IoT sensor. Supported traffic mirroring methods are the same as for OT monitoring. For more information, see [Choose a traffic mirroring method for traffic monitoring](best-practices/traffic-mirroring-methods.md).-
-### Physical or virtual machine requirements
-
-Allocate a physical appliance or a virtual machine (VM) to use as your network sensor. Make sure that your machine has the following specifications:
-
-| Tier | Requirements |
-|--|--|
-| **Minimum** | To support up to 1 Gbps of data: <br><br>- 4 CPUs, each with 2.4 GHz or more<br>- 16-GB RAM of DDR4 or better<br>- 250 GB HDD |
-| **Recommended** | To support up to 15 Gbps of data: <br><br>- 8 CPUs, each with 2.4 GHz or more<br>- 32-GB RAM of DDR4 or better<br>- 500 GB HDD |
-
-Your machine must also have:
--- The [Ubuntu 18.04 Server](https://releases.ubuntu.com/18.04/) operating system. If you don't yet have Ubuntu installed, download the installation files to an external storage, such as a DVD or disk-on-key, and then install it on your appliance or VM. For more information, see the Ubuntu [Image Burning Guide](https://help.ubuntu.com/community/BurningIsoHowto).--- Network adapters, at least one for your switch monitoring (SPAN) port, and one for your management port to access the sensor's user interface-
-Your Enterprise IoT sensor must have access to the Azure cloud using a [direct connection](architecture-connections.md#direct-connections). Direct connections are configured for Enterprise IoT sensors using the same procedure as for OT sensors. For more information, see [Provision sensors for cloud management](ot-deploy/provision-cloud-management.md).
-
-## Prepare a physical appliance or VM
-
-This procedure describes how to prepare your physical appliance or VM to install the Enterprise IoT network sensor software.
-
-**To prepare your appliance**:
-
-1. Connect a network interface (NIC) from your physical appliance or VM to a switch as follows:
-
- - **Physical appliance** - Connect a monitoring NIC to a SPAN port directly by a copper or fiber cable.
-
- - **VM** - Connect a vNIC to a vSwitch, and configure your vSwitch security settings to accept *Promiscuous mode*. For more information, see, for example [Configure a SPAN monitoring interface for a virtual appliance](extra-deploy-enterprise-iot.md#configure-a-span-monitoring-interface-for-a-virtual-appliance).
-
-1. <a name="sign-in"></a>Sign in to your physical appliance or VM and run the following command to validate incoming traffic to the monitoring port:
-
- ```bash
- ifconfig
- ```
-
- The system displays a list of all monitored interfaces.
-
- Identify the interfaces that you want to monitor, which are usually the interfaces with no IP address listed. Interfaces with incoming traffic show an increasing number of RX packets.
-
-1. For each interface you want to monitor, run the following command to enable *Promiscuous mode* in the network adapter:
-
- ```bash
- ifconfig <monitoring port> up promisc
- ```
-
- Where `<monitoring port>` is an interface you want to monitor. Repeat this step for each interface you want to monitor.
-
-1. Ensure network connectivity by opening the following ports in your firewall:
-
- | Protocol | Transport | In/Out | Port | Purpose |
- |--|--|--|--|--|
- | HTTPS | TCP | In/Out | 443 | Cloud connection |
- | DNS | TCP/UDP | In/Out | 53 | Address resolution |
-
-1. Make sure that your physical appliance or VM can access the cloud using HTTPS on port 443 to the following Microsoft endpoints:
-
- - **EventHub**: `*.servicebus.windows.net`
- - **Storage**: `*.blob.core.windows.net`
- - **Download Center**: `download.microsoft.com`
- - **IoT Hub**: `*.azure-devices.net`
-
- > [!TIP]
- > You can also download and add the [Azure public IP ranges](https://www.microsoft.com/download/details.aspx?id=56519) so your firewall will allow the Azure endpoints that are specified above, along with their region.
- >
- > The Azure public IP ranges are updated weekly. New ranges appearing in the file will not be used in Azure for at least one week. To use this option, download the new json file every week and perform the necessary changes at your site to correctly identify services running in Azure.
-
-## Register an Enterprise IoT sensor in Defender for IoT
-
-This section describes how to register an Enterprise IoT sensor in Defender for IoT. When you're done registering your sensor, you continue on with installing the Enterprise IoT monitoring software on your sensor machine.
-
-**To register a sensor in the Azure portal**:
-
-1. Go to **Defender for IoT** > **Sites and sensors**, and then select **Onboard sensor** > **EIoT**.
-
-1. On the **Set up Enterprise IoT Security** page, enter the following details, and then select **Register**:
-
- - In the **Sensor name** field, enter a meaningful name for your sensor.
- - From the **Subscription** drop-down menu, select the subscription where you want to add your sensor.
-
- A **Sensor registration successful** screen shows your next steps and the command you'll need to start the sensor installation.
-
- For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/successful-registration.png" alt-text="Screenshot of the successful registration of an Enterprise IoT sensor.":::
-
-1. Copy the command to a safe location, where you're able to copy it to your physical appliance or VM in order to [install sensor software](#install-enterprise-iot-sensor-software).
-
-## Install Enterprise IoT sensor software
-
-This procedure describes how to install Enterprise IoT monitoring software on [your sensor machine](#prepare-a-physical-appliance-or-vm), either a physical appliance or VM.
-
-> [!NOTE]
-> While this procedure describes how to install sensor software on a VM using ESXi, enterprise IoT sensors are also supported using Hyper-V.
->
-
-**To install sensor software**:
-
-1. On your sensor machine, sign in to the sensor's CLI using a terminal, such as PuTTY, or MobaXterm.
-
-1. Run the command that you'd copied from the [sensor registration](#register-an-enterprise-iot-sensor-in-defender-for-iot) step. For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/enter-command.png" alt-text="Screenshot of running the command to install the Enterprise IoT sensor monitoring software.":::
-
- The process checks to see if the required Docker version is already installed. If itΓÇÖs not, the sensor installation also installs the latest Docker version.
-
- When the command process completes, the Ubuntu **Configure microsoft-eiot-sensor** wizard appears. In this wizard, use the up or down arrows to navigate, and the SPACE bar to select an option. Press ENTER to advance to the next screen.
-
-1. In the **Configure microsoft-eiot-sensor** wizard, in the **What is the name of the monitored interface?** screen, select one or more interfaces that you want to monitor with your sensor, and then select **OK**.
-
- For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/install-monitored-interface.png" alt-text="Screenshot of the Configuring microsoft-eiot-sensor screen.":::
-
-1. In the **Set up proxy server?** screen, select whether to set up a proxy server for your sensor. For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/proxy.png" alt-text="Screenshot of the Set up a proxy server screen.":::
-
- If you're setting up a proxy server, select **Yes**, and then define the proxy server host, port, username, and password, selecting **Ok** after each option.
-
- The installation takes a few minutes to complete.
-
-1. In the Azure portal, check that the **Sites and sensors** page now lists your new sensor.
-
- For example:
-
- :::image type="content" source="media/tutorial-get-started-eiot/view-sensor-listed.png" alt-text="Screenshot of your new Enterprise IoT sensor listed in the Sites and sensors page.":::
-
-In the **Sites and sensors** page, Enterprise IoT sensors are all automatically added to the same site, named **Enterprise network**. For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
-
-> [!TIP]
-> If you don't see your Enterprise IoT data in Defender for IoT as expected, make sure that you're viewing the Azure portal with the correct subscriptions selected. For more information, see [Manage Azure portal settings](/azure/azure-portal/set-preferences).
->
-> If you still don't view your data as expected, [validate your sensor setup](extra-deploy-enterprise-iot.md#validate-your-enterprise-iot-sensor-setup) from the CLI.
-
-## View newly detected Enterprise IoT devices
-
-Once you've validated your setup, the Defender for IoT **Device inventory** page will start to populate with new devices detected by your sensor after 15 minutes.
-
-If you're a Defender for Endpoint customer with a [legacy Enterprise IoT plan](whats-new.md#enterprise-iot-protection-now-included-in-microsoft-365-e5-and-e5-security-licenses), you're able to view all detected devices in the **Device inventory** pages, in both Defender for IoT and Microsoft Defender XDR. Detected devices include both devices detected by Defender for Endpoint and devices detected by the Enterprise IoT sensor.
-
-For more information, see [Manage your device inventory from the Azure portal](how-to-manage-device-inventory-for-organizations.md) and [Microsoft Defender XDR device discovery](/microsoft-365/security/defender-endpoint/machines-view-overview).
--
-## Delete an Enterprise IoT network sensor
-
-Delete a sensor if it's no longer in use with Defender for IoT.
-
-1. From the **Sites and sensors** page on the Azure portal, locate your sensor in the grid.
-
-1. In the row for your sensor, select the **...** options menu > **Delete sensor**.
-
-For more information, see [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md).
-
-> [!TIP]
-> You can also remove your sensor manually from the CLI. For more information, see [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md#remove-an-enterprise-iot-network-sensor-optional).
-
-If you want to cancel enterprise IoT security with Microsoft Defender XDR, do so from the Microsoft Defender Portal. For more information, see [Turn off enterprise IoT security](manage-subscriptions-enterprise.md#turn-off-enterprise-iot-security).
-
-## Next steps
--- [Extra steps and samples for Enterprise IoT deployment](extra-deploy-enterprise-iot.md)--- [Manage sensors in the Azure portal](how-to-manage-sensors-on-the-cloud.md)--- [View and manage alerts from the Azure portal](how-to-manage-cloud-alerts.md). For more information, see [Malware engine alerts](alert-engine-messages.md#malware-engine-alerts).--- [Enhance security posture with security recommendations](recommendations.md)
expressroute Traffic Collector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/expressroute/traffic-collector.md
Note: If your desired region is not yet supported, you can deploy ExpressRoute T
| | -- | | North American | <ul><li>Canada East</li><li>Canada Central</li><li>Central US</li><li>Central US EUAP</li><li>North Central US</li><li>South Central US</li><li>West Central US</li><li>East US</li><li>East US 2</li><li>West US</li><li>West US 2</li><li>West US 3</li></ul> | | South America | <ul><li>Brazil South</li><li>Brazil Southeast</li></ul> |
-| Europe | <ul><li>West Europe</li><li>North Europe</li><li>UK South</li><li>UK West</li><li>France Central</li><li>France South</li><li>Germany North</li><li>Germany West Central</li><li>Sweden Central</li><li>Sweden South</li><li>Switzerland North</li><li>Switzerland West</li><li>Norway East</li><li>Norway West</li></ul> |
-| Asia | <ul><li>East Asia</li><li>Southeast Asia</li><li>Central India</li><li>South India</li><li>Japan West</li><li>Korea South</li><li>UAE North</li></ul> |
+| Europe | <ul><li>West Europe</li><li>North Europe</li><li>UK South</li><li>UK West</li><li>France Central</li><li>France South</li><li>Germany North</li><li>Germany West Central</li><li>Sweden Central</li><li>Sweden South</li><li>Switzerland North</li><li>Switzerland West</li><li>Norway East</li><li>Norway West</li><li>Italy North</li><li>Poland Central</li></ul> |
+| Asia | <ul><li>East Asia</li><li>Southeast Asia</li><li>Central India</li><li>South India</li><li>Japan West</li><li>Korea South</li><li>UAE North</li><li>UAE Central</li></ul> |
| Africa | <ul><li>South Africa North</li><li>South Africa West</li></ul> | | Pacific | <ul><li>Australia Central</li><li>Australia Central 2</li><li>Australia East</li><li>Australia Southeast</li></ul> |
firewall Firewall Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/firewall-copilot.md
Microsoft Copilot for Security is a generative AI-powered security solution that
Azure Firewall is a cloud-native and intelligent network firewall security service that provides best of breed threat protection for your cloud workloads running in Azure. It's a fully stateful firewall as a service with built-in high availability and unrestricted cloud scalability.
-The Azure Firewall integration helps analysts perform detailed investigations of the malicious traffic intercepted by the IDPS and/or threat intelligence features of their firewalls across their entire fleet using natural language questions in the Copilot for Security standalone experience.
+The Azure Firewall integration helps analysts perform detailed investigations of the malicious traffic intercepted by the IDPS feature of their firewalls across their entire fleet using natural language questions in the Copilot for Security standalone experience.
This article introduces you to Copilot and includes sample prompts that can help Azure Firewall users.
For more information about writing effective Copilot for Security prompts, see [
- [Azure Structured Firewall Logs](firewall-structured-logs.md#resource-specific-mode) – the Azure Firewalls to be used with Copilot for Security must be configured with resource specific structured logs for IDPS and these logs must be sent to a Log Analytics workspace. - [Role Based Access Control for Azure Firewall](https://techcommunity.microsoft.com/t5/azure-network-security-blog/role-based-access-control-for-azure-firewall/ba-p/2245598) – the users using the Azure Firewall plugin in Copilot for Security must have the appropriate Azure RBAC roles to access the Firewall and associated Log Analytics workspace(s). 2. Go to [Microsoft Copilot for Security](https://go.microsoft.com/fwlink/?linkid=2247989) and sign in with your credentials.
-3. Ensure that the Azure Firewall plugin is turned on. In the prompt bar, select the **Sources** icon.
+1. In the prompt bar, select the **Sources** icon.
- :::image type="content" source="media/firewall-copilot/copilot-prompts-bar-sources.png" alt-text="Screenshot of the prompt bar in Microsoft Copilot for Security with the Sources icon highlighted.":::
+ :::image type="content" source="media/firewall-copilot/copilot-prompts-bar-sources.png" alt-text="Screenshot of the prompt bar in Microsoft Copilot for Security with the Sources icon highlighted.":::
-
- In the **Manage sources** pop-up window that appears, confirm that the **Azure Firewall** toggle is turned on, then close the window.
- :::image type="content" source="media/firewall-copilot/azure-firewall-plugin.png" alt-text="Screenshot showing the Azure Firewall plugin.":::
+ In the **Manage sources** pop-up window that appears, confirm that the **Azure Firewall** toggle is turned on, then close the window. No additional configuration is necessary; as long as structured logs are being sent to a Log Analytics workspace and you have the right RBAC permissions, Copilot finds the data it needs to answer your questions.
+
+ :::image type="content" source="media/firewall-copilot/azure-firewall-plugin.png" alt-text="Screenshot showing the Azure Firewall plugin.":::
- > [!NOTE]
- > Some roles can turn the toggle on or off for plugins like Azure Firewall. For more information, see [Manage plugins in Microsoft Copilot for Security](/copilot/security/manage-plugins?tabs=securitycopilotplugin).
+ > [!NOTE]
+ > Some roles can turn the toggle on or off for plugins like Azure Firewall. For more information, see [Manage plugins in Microsoft Copilot for Security](/copilot/security/manage-plugins?tabs=securitycopilotplugin).
4. Enter your prompt in the prompt bar.
Get **additional details** to enrich the threat information/profile of an IDPS s
- I see that the third signature ID is associated with CVE _\<CVE number\>_, tell me more about this CVE. > [!NOTE]
->The Microsoft Defender Threat Intelligence plugin is another source that Copilot for Security may use to provide threat intelligence for IDPS signatures.
--
+> The Microsoft Threat Intelligence plugin is another source that Copilot for Security may use to provide threat intelligence for IDPS signatures.
### Look for a given IDPS signature across your tenant, subscription, or resource group Perform a **fleet-wide search** (over any scope) for a threat across all your Firewalls instead of searching for the threat manually.
When you interact with Copilot for Security to get Azure Firewall data, Copilot
## Related content -- [What is Microsoft Copilot for Security?](/copilot/security/microsoft-security-copilot)
+- [What is Microsoft Copilot for Security?](/copilot/security/microsoft-security-copilot)
governance Australia Ism https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/australia-ism.md
Title: Regulatory Compliance details for Australian Government ISM PROTECTED description: Details of the Australian Government ISM PROTECTED Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark description: Details of the Microsoft cloud security benchmark Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Built In Initiatives https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-initiatives.md
Title: List of built-in policy initiatives description: List built-in policy initiatives for Azure Policy. Categories include Regulatory Compliance, Azure Machine Configuration, and more. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Built In Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/built-in-policies.md
Title: List of built-in policy definitions description: List built-in policy definitions for Azure Policy. Categories include Tags, Regulatory Compliance, Key Vault, Kubernetes, Azure Machine Configuration, and more. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Canada Federal Pbmm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/canada-federal-pbmm.md
Title: Regulatory Compliance details for Canada Federal PBMM description: Details of the Canada Federal PBMM Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Cis Azure 1 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Cis Azure 2 0 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-2-0-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 2.0.0 description: Details of the CIS Microsoft Azure Foundations Benchmark 2.0.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 description: Details of the CMMC Level 3 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High description: Details of the FedRAMP High Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate description: Details of the FedRAMP Moderate Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Azure Security Benchmark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-azure-security-benchmark.md
Title: Regulatory Compliance details for Microsoft cloud security benchmark (Azure Government) description: Details of the Microsoft cloud security benchmark (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Cis Azure 1 1 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-1-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.1.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Cis Azure 1 3 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cis-azure-1-3-0.md
Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) description: Details of the CIS Microsoft Azure Foundations Benchmark 1.3.0 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Cmmc L3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-cmmc-l3.md
Title: Regulatory Compliance details for CMMC Level 3 (Azure Government) description: Details of the CMMC Level 3 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Fedramp High https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-high.md
Title: Regulatory Compliance details for FedRAMP High (Azure Government) description: Details of the FedRAMP High (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Fedramp Moderate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-fedramp-moderate.md
Title: Regulatory Compliance details for FedRAMP Moderate (Azure Government) description: Details of the FedRAMP Moderate (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 (Azure Government) description: Details of the IRS 1075 September 2016 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 (Azure Government) description: Details of the ISO 27001:2013 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 (Azure Government) description: Details of the NIST SP 800-171 R2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 (Azure Government) description: Details of the NIST SP 800-53 Rev. 4 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 (Azure Government) description: Details of the NIST SP 800-53 Rev. 5 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Gov Soc 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/gov-soc-2.md
Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 (Azure Government) description: Details of the System and Organization Controls (SOC) 2 (Azure Government) Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Hipaa Hitrust 9 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/hipaa-hitrust-9-2.md
Title: Regulatory Compliance details for HIPAA HITRUST 9.2 description: Details of the HIPAA HITRUST 9.2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Irs 1075 Sept2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/irs-1075-sept2016.md
Title: Regulatory Compliance details for IRS 1075 September 2016 description: Details of the IRS 1075 September 2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Iso 27001 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/iso-27001.md
Title: Regulatory Compliance details for ISO 27001:2013 description: Details of the ISO 27001:2013 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Mcfs Baseline Confidential https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-confidential.md
Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Confidential Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Confidential Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Mcfs Baseline Global https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/mcfs-baseline-global.md
Title: Regulatory Compliance details for Microsoft Cloud for Sovereignty Baseline Global Policies description: Details of the Microsoft Cloud for Sovereignty Baseline Global Policies Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Nist Sp 800 171 R2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-171-r2.md
Title: Regulatory Compliance details for NIST SP 800-171 R2 description: Details of the NIST SP 800-171 R2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Nist Sp 800 53 R4 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r4.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 4 description: Details of the NIST SP 800-53 Rev. 4 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Nist Sp 800 53 R5 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nist-sp-800-53-r5.md
Title: Regulatory Compliance details for NIST SP 800-53 Rev. 5 description: Details of the NIST SP 800-53 Rev. 5 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Nl Bio Cloud Theme https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/nl-bio-cloud-theme.md
Title: Regulatory Compliance details for NL BIO Cloud Theme description: Details of the NL BIO Cloud Theme Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Pci Dss 3 2 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-3-2-1.md
Title: Regulatory Compliance details for PCI DSS 3.2.1 description: Details of the PCI DSS 3.2.1 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Pci Dss 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/pci-dss-4-0.md
Title: Regulatory Compliance details for PCI DSS v4.0 description: Details of the PCI DSS v4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Rbi Itf Banks 2016 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-banks-2016.md
Title: Regulatory Compliance details for Reserve Bank of India IT Framework for Banks v2016 description: Details of the Reserve Bank of India IT Framework for Banks v2016 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Rbi Itf Nbfc 2017 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rbi-itf-nbfc-2017.md
Title: Regulatory Compliance details for Reserve Bank of India - IT Framework for NBFC description: Details of the Reserve Bank of India - IT Framework for NBFC Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Rmit Malaysia https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/rmit-malaysia.md
Title: Regulatory Compliance details for RMIT Malaysia description: Details of the RMIT Malaysia Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Soc 2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/soc-2.md
Title: Regulatory Compliance details for System and Organization Controls (SOC) 2 description: Details of the System and Organization Controls (SOC) 2 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Spain Ens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/spain-ens.md
Title: Regulatory Compliance details for Spain ENS description: Details of the Spain ENS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Swift Csp Cscf 2021 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2021.md
Title: Regulatory Compliance details for SWIFT CSP-CSCF v2021 description: Details of the SWIFT CSP-CSCF v2021 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Swift Csp Cscf 2022 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/swift-csp-cscf-2022.md
Title: Regulatory Compliance details for SWIFT CSP-CSCF v2022 description: Details of the SWIFT CSP-CSCF v2022 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
governance Ukofficial Uknhs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/ukofficial-uknhs.md
Title: Regulatory Compliance details for UK OFFICIAL and UK NHS description: Details of the UK OFFICIAL and UK NHS Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment. Previously updated : 09/30/2024 Last updated : 10/08/2024
healthcare-apis Search Samples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/search-samples.md
# FHIR search examples for Azure API for FHIR
-Below are some examples of using Fast Healthcare Interoperability Resources (FHIR&#174;) search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
+The following are examples of using Fast Healthcare Interoperability Resources (FHIR&reg;) search operations, including search parameters and modifiers, chain and reverse chain search, composite search, viewing the next entry set for search results, and searching with a `POST` request. For more information about search, see [Overview of FHIR Search](overview-of-search.md).
## Search result parameters ### _include
-`_include` searches across resources for the ones that include the specified parameter of the resource. For example, you can search across `MedicationRequest` resources to find only the ones that include information about the prescriptions for a specific patient, which is the `reference` parameter `patient`. In the example below, this will pull all the `MedicationRequests` and all patients that are referenced from the `MedicationRequests`:
+`_include` searches across resources for the ones that include the specified parameter of the resource. For example, you can search across `MedicationRequest` resources to find only the ones that include information about the prescriptions for a specific patient, which is the `reference` parameter `patient`. The following example pulls all the `MedicationRequests` and all patients that are referenced from the `MedicationRequests`.
```rest GET [your-fhir-server]/MedicationRequest?_include=MedicationRequest:patient
GET [your-fhir-server]/Patient?_revinclude=Encounter:subject
``` ### _elements
-`_elements` narrows down the search result to a subset of fields to reduce the response size by omitting unnecessary data. The parameter accepts a comma-separated list of base elements:
+`_elements` narrows down the search result to a subset of fields to reduce the response size by omitting unnecessary data. The parameter accepts a comma-separated list of base elements.
```rest GET [your-fhir-server]/Patient?_elements=identifier,active ```
-In this request, you'll get back a bundle of patients, but each resource will only include the identifier(s) and the patient's active status. Resources in this returned response will contain a `meta.tag` value of `SUBSETTED` to indicate that they're an incomplete set of results.
+From this request, you get a bundle of patients where each resource only includes the identifiers and the patient's active status. Resources in this response contain a `meta.tag` value of `SUBSETTED` to indicate that they're an incomplete set of results.
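`_elements` can be combined with ordinary search parameters in the same request. As an illustration (the parameter values here are only an example), the following trims the response while filtering at the same time.

```rest
GET [your-fhir-server]/Patient?active=true&_elements=identifier,name
```

This returns only the `identifier` and `name` elements of the matching active patients, again tagged as `SUBSETTED`.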
## Search modifiers ### :not
-`:not` allows you to find resources where an attribute isn't true. For example, you could search for patients where the gender isn't female:
+`:not` allows you to find resources where an attribute isn't true. For example, you could search for patients where the gender isn't female.
```rest GET [your-fhir-server]/Patient?gender:not=female
As a return value, you would get all patient entries where the gender isn't fema
### :missing
-`:missing` returns all resources that don't have a value for the specified element when the value is `true`, and returns all the resources that contain the specified element when the value is `false`. For simple data type elements, `:missing=true` will match on all resources where the element is present with extensions but has an empty value. For example, if you want to find all `Patient` resources that are missing information on birth date, you can do:
+`:missing` returns all resources that don't have a value for the specified element when the value is `true`, and returns all the resources that contain the specified element when the value is `false`. For simple data type elements, `:missing=true` matches on all resources where the element is present with extensions but has an empty value. The following example shows how to find all `Patient` resources that are missing information on birth date.
```rest GET [your-fhir-server]/Patient?birthdate:missing=true
GET [your-fhir-server]/Patient?name:exact=Jon
This request returns `Patient` resources that have a name exactly matching `Jon`. If there are Patients with names such as `Jonathan` or `joN`, the search skips those resources because they don't exactly match the specified value. ### :contains
-`:contains` is used for `string` parameters and searches for resources with partial matches of the specified value anywhere in the string within the field being searched. `contains` is case insensitive and allows character concatenating. For example:
+`:contains` is used for `string` parameters and searches for resources with partial matches of the specified value anywhere in the string within the field being searched. `contains` isn't case sensitive and matches the value anywhere within the field. For example:
```rest GET [your-fhir-server]/Patient?address:contains=Meadow ```
-This request would return you all `Patient` resources with `address` fields that have values that contain the string "Meadow". This means you could have addresses that include values such as "Meadowers" or "59 Meadow ST" returned as search results.
+This request would return all `Patient` resources with `address` fields that have values that contain the string "Meadow". This means you could have addresses that include values such as "Meadowers" or "59 Meadow ST" returned as search results.
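The `:contains` modifier works on other string search parameters in the same way. As an illustration (the value here is only an example), the following matches partial names regardless of case.

```rest
GET [your-fhir-server]/Patient?name:contains=ann
```

This would match names such as "Anne", "Joann", or "ANNA".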
## Chained search
To perform a series of search operations that cover multiple reference parameter
This request would return all the `DiagnosticReport` resources with a patient subject named "Sarah". The period `.` after the field `Patient` performs the chained search on the reference parameter of the `subject` parameter.
-Another common use of a regular search (not a chained search) is finding all encounters for a specific patient. `Patient`s will often have one or more `Encounter`s with a subject. To search for all `Encounter` resources for a `Patient` with the provided `id`:
+Another common use of a regular search (not a chained search) is finding all encounters for a specific patient. `Patient`s often have one or more `Encounter`s with a subject. The following searches for all `Encounter` resources for a `Patient` with the provided `id`.
```rest GET [your-fhir-server]/Encounter?subject=Patient/78a14cbe-8968-49fd-a231-d43e6619399f ```
-Using chained search, you can find all the `Encounter` resources that match a particular piece of `Patient` information, such as the `birthdate`:
+Using chained search, you can find all the `Encounter` resources that match a particular piece of `Patient` information, such as the `birthdate`.
```rest GET [your-fhir-server]/Encounter?subject:Patient.birthdate=1987-02-20 ```
-This would allow not just searching `Encounter` resources for a single patient, but across all patients that have the specified birth date value.
+This would allow searching `Encounter` resources across all patients that have the specified birth date value.
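As a further illustration (the value here is only an example), you can chain on any search parameter defined for the referenced resource type, such as the patient's `gender`.

```rest
GET [your-fhir-server]/Encounter?subject:Patient.gender=female
```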
In addition, chained search can be done more than once in a single request by using the `&` symbol, which lets you search for multiple conditions at once. In such cases, chained search evaluates each parameter independently, rather than returning only the resources that satisfy all the conditions together:
GET [your-fhir-server]/Patient?general-practitioner:Practitioner.name=Sarah&gene
```
-This would return all `Patient` resources that have "Sarah" as the `generalPractitioner` and have a `generalPractitioner` that has the address with the state WA. In other words, if a patient had Sarah from the state NY and Bill from the state WA both referenced as the patient's `generalPractitioner`, the would be returned.
+This would return all `Patient` resources that have "Sarah" as the `generalPractitioner` and have a `generalPractitioner` that has the address with the state WA. In other words, if a patient had Sarah from the state NY and Bill from the state WA both referenced as the patient's `generalPractitioner`, both are returned.
-For scenarios in which the search has to be an `AND` operation that covers all conditions as a group, refer to the **composite search** example below.
+For scenarios in which the search has to be an `AND` operation that covers all conditions as a group, refer to the example in [**Composite search**](#composite-search).
## Reverse chain search
-Chain search lets you search for resources based on the properties of resources they refer to. Using reverse chain search, allows you do it the other way around. You can search for resources based on the properties of resources that refer to them, using `_has` parameter. For example, `Observation` resource has a search parameter `patient` referring to a Patient resource. To find all Patient resources that are referenced by `Observation` with a specific `code`:
+Chain search lets you search for resources based on the properties of resources they refer to. Reverse chain search allows you to do it the other way around: you can search for resources based on the properties of resources that refer to them, by using the `_has` parameter. For example, an `Observation` resource has a search parameter `patient` referring to a Patient resource. Use the following to find all Patient resources referenced by an `Observation` with a specific `code`.
```rest GET [base]/Patient?_has:Observation:patient:code=527 ```
-This request returns Patient resources that are referred by `Observation` with the code `527`.
+This request returns Patient resources referred by `Observation` with the code `527`.
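As a further illustration (the value here is only an example), `_has` works with any search parameter defined on the referencing resource, such as the observation's `status`.

```rest
GET [base]/Patient?_has:Observation:patient:status=final
```

This returns Patient resources that are referenced by at least one `Observation` with a `final` status.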
-In addition, reverse chain search can have a recursive structure. For example, if you want to search for all patients that have `Observation` where the observation has an audit event from a specific user `janedoe`, you could do:
+In addition, reverse chain search can have a recursive structure. For example, the following searches for all patients that have `Observation` where the observation has an audit event from a specific user `janedoe`.
```rest GET [base]/Patient?_has:Observation:patient:_has:AuditEvent:entity:agent:Practitioner.name=janedoe
GET [base]/Patient?_has:Observation:patient:_has:AuditEvent:entity:agent:Practit
## Composite search
-To search for resources that meet multiple conditions at once, use composite search that joins a sequence of single parameter values with a symbol `$`. The returned result would be the intersection of the resources that match all of the conditions specified by the joined search parameters. Such search parameters are called composite search parameters, and they define a new parameter that combines the multiple parameters in a nested structure. For example, if you want to find all `DiagnosticReport` resources that contain `Observation` with a potassium value less than or equal to 9.2:
+To search for resources that meet multiple conditions at once, use a composite search that joins a sequence of single parameter values with a symbol `$`. The result would be the intersection of the resources that match all of the conditions specified by the joined search parameters. Such search parameters are called composite search parameters, and they define a new parameter that combines the multiple parameters in a nested structure. For example, the following search finds all `DiagnosticReport` resources that contain `Observation` with a potassium value less than or equal to 9.2.
```rest GET [your-fhir-server]/DiagnosticReport?result.code-value-quantity=2823-3$lt9.2
This request specifies the component containing a code of `2823-3`, which in thi
## Search the next entry set
-The maximum number of entries that can be returned per a single search query is 1000. However, you might have more than 1000 entries that match the search query, and you might want to see the next set of entries after the first 1000 entries that were returned. In such case, you would use the continuation token `url` value in `searchset` as in the `Bundle` result below:
+The maximum number of entries that can be returned for a single search query is 1,000. If more than 1,000 entries match the search query, you can use the following procedure to see the entries beyond the first 1,000.<br>
+Use the continuation token `url` value in `searchset`, as in the following `Bundle` result.
```json "resourceType": "Bundle",
The maximum number of entries that can be returned per a single search query is
```
-And you would do a GET request for the provided URL under the field `relation: next`:
+Then do a GET request for the provided URL under the field `relation: next`.
```rest GET [your-fhir-server]/Patient?_sort=_lastUpdated&ct=WzUxMDAxNzc1NzgzODc5MjAwODBd ```
-This will return the next set of entries for your search result. The `searchset` is the complete set of search result entries, and the continuation token `url` is the link provided by the server for you to retrieve the entries that don't show up on the first set because the restriction on the maximum number of entries returned for a search query.
+This returns the next set of entries for your search result. The `searchset` is the complete set of search result entries, and the continuation token `url` is the link provided by the server for you to retrieve entries that don't show up in the first 1000.
## Search using POST
-All of the search examples mentioned above have used `GET` requests. You can also do search operations using `POST` requests using `_search`:
+All of the search examples previously mentioned used `GET` requests. You can also do search operations using `POST` requests using `_search`.
```rest POST [your-fhir-server]/Patient/_search?_id=45 ```
-This request would return all `Patient` resources with the `id` value of 45. Just as in GET requests, the server determines which of the set of resources meets the condition(s), and returns a bundle resource in the HTTP response.
+This request returns `Patient` resources with the `id` value of 45. As with GET requests, the server determines which of the set of resources meets the condition, and returns a bundle resource in the HTTP response.
-Another example of searching using POST where the query parameters are submitted as a form body is:
+Another example of searching using POST where the query parameters are submitted as a form body is as follows.
```rest POST [your-fhir-server]/Patient/_search
In this article, you learned about how to search using different search paramete
>[!div class="nextstepaction"] >[Overview of FHIR Search](overview-of-search.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/security-controls-policy.md
# Azure Policy Regulatory Compliance controls for Azure API for FHIR [Regulatory Compliance in Azure Policy](../../governance/policy/concepts/regulatory-compliance.md) provides Microsoft created and managed initiative definitions, known as _built-ins_, for the **compliance domains** and **security controls** related to different compliance standards. This
-page lists the **compliance domains** and **security controls** for Azure API for FHIR. You can
+page lists the **compliance domains** and **security controls** for Azure API for FHIR&reg;. You can
assign the built-ins for a **security control** individually to help make your Azure resources compliant with the specific standard.
In this article, you learned about the Azure Policy Regulatory Compliance contro
- Learn more about [Azure Policy Regulatory Compliance](../../governance/policy/concepts/regulatory-compliance.md). - See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy).
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
-
healthcare-apis Smart On Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/smart-on-fhir.md
Last updated 09/27/2023
# SMART on FHIR overview
-Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides at least three important benefits:
+Substitutable Medical Applications and Reusable Technologies ([SMART on FHIR&reg;](https://docs.smarthealthit.org/)) is a healthcare standard through which applications can access clinical information through a data store. It adds a security layer, based on open standards including OAuth2 and OpenID Connect, to FHIR interfaces to enable integration with EHR systems. Using SMART on FHIR provides important benefits, including:
- Applications have a known method for obtaining authentication/authorization to a FHIR repository. - Users accessing a FHIR repository with SMART on FHIR are restricted to resources associated with the user, rather than having access to all data in the repository. - Users have the ability to grant applications access to a further limited set of their data by using SMART clinical scopes.
-Below tutorials describe steps to enable SMART on FHIR applications with FHIR Service.
+The following tutorials describe steps to enable SMART on FHIR applications with FHIR Service.
## Prerequisites - An instance of the FHIR service - .NET SDK 6.0 - [Enable cross-origin resource sharing (CORS)](configure-cross-origin-resource-sharing.md)-- [Register public client application in Microsoft Entra ID](/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
- - After registering the application, make note of the applicationId for client application.
-- Ensure you have access to Azure Subscription of FHIR service, to create resources and add role assignments.
+- [Register a public client application in Microsoft Entra ID](/azure/healthcare-apis/azure-api-for-fhir/register-public-azure-ad-client-app)
+ - After registering the application, make note of the `applicationId` for the client application.
+- Ensure you have access to the Azure subscription of the FHIR service, so you can create resources and add role assignments.
## SMART on FHIR using Samples OSS (SMART on FHIR(Enhanced)) ### Step 1: Set up FHIR SMART user role
-Follow the steps listed under section [Manage Users: Assign Users to Role](../../role-based-access-control/role-assignments-portal.yml). Any user added to role - "FHIR SMART User" will be able to access the FHIR Service if their requests comply with the SMART on FHIR implementation Guide, such as request having access token, which includes a fhirUser claim and a clinical scopes claim. The access granted to the users in this role will then be limited by the resources associated to their fhirUser compartment and the restrictions in the clinical scopes.
+Follow the steps listed under [Manage Users: Assign Users to Role](../../role-based-access-control/role-assignments-portal.yml). Any user added to the "FHIR SMART User" role can access the FHIR service if their requests comply with the SMART on FHIR implementation guide, such as the request having an access token that includes a `fhirUser` claim and a clinical scopes claim. The access granted to users in this role is then limited to the resources associated with their `fhirUser` compartment and the restrictions in the clinical scopes.
### Step 2: FHIR server integration with samples
-[Follow the steps](https://aka.ms/azure-health-data-services-smart-on-fhir-sample) under Azure Health Data and AI Samples OSS. This will enable integration of FHIR server with other Azure Services (such as APIM, Azure functions and more).
+[Follow the steps](https://aka.ms/azure-health-data-services-smart-on-fhir-sample) found in Azure Health Data and AI Samples OSS. This enables integration of the FHIR server with other Azure services (such as API Management, Azure Functions, and more).
> [!NOTE]
-> Samples are open-source code, and you should review the information and licensing terms on GitHub before using it. They are not part of the Azure Health Data Service and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Microsoft Entra ID as the identity provider workflow.
-
+> Samples are open-source code, and you should review the information and licensing terms on GitHub before using them. They are not part of Azure Health Data Services and are not supported by Microsoft Support. These samples can be used to demonstrate how Azure Health Data Services and other open-source tools can be used together to demonstrate ONC (g)(10) compliance, using Microsoft Entra ID as the identity provider workflow.
## SMART on FHIR proxy <details> <summary> Click to expand! </summary> > [!NOTE]
-> This is another option to SMART on FHIR (Enhanced) mentioned above. SMART on FHIR Proxy option only enables EHR launch sequence.
+> This is an alternative to SMART on FHIR (Enhanced) described previously. The SMART on FHIR proxy option only enables the EHR launch sequence.
### Step 1: Set admin consent for your client application To use SMART on FHIR, you must first authenticate and authorize the app. The first time you use SMART on FHIR, you also must get administrative consent to let the app access your FHIR resources. If you don't have an ownership role in the app, contact the app owner and ask them to grant admin consent for you in the app.
-If you do have administrative privileges, complete the following steps to grant admin consent to yourself directly. (You also can grant admin consent to yourself later when you're prompted in the app.) You can complete the same steps to add other users as owners, so they can view and edit this app registration.
+If you do have administrative privileges, complete the following steps to grant admin consent to yourself directly. (You can also grant admin consent to yourself later when you're prompted in the app.) You can complete the same steps to add other users as owners, so they can view and edit this app registration.
To add yourself or another user as owner of an app:
To add yourself or another user as owner of an app:
SMART on FHIR requires that `Audience` has an identifier URI equal to the URI of the FHIR service. The standard configuration of the Azure API for FHIR uses an `Audience` value of `https://azurehealthcareapis.com`. However, you can also set a value matching the specific URL of your FHIR service (for example `https://MYFHIRAPI.azurehealthcareapis.com`). This is required when working with the SMART on FHIR proxy.
-To enable the SMART on FHIR proxy in the **Authentication** settings for your Azure API for FHIR instance, select the **SMART on FHIR proxy** check box:
+To enable the SMART on FHIR proxy in the **Authentication** settings for your Azure API for FHIR instance, select the **SMART on FHIR proxy** check box.
![Screenshot shows enabling the SMART on FHIR proxy.](media/tutorial-smart-on-fhir/enable-smart-on-fhir-proxy.png) The SMART on FHIR proxy acts as an intermediary between the SMART on FHIR app and Microsoft Entra ID. The authentication reply (the authentication code) must go to the SMART on FHIR proxy instead of the app itself. The proxy then forwards the reply to the app.
-Because of this two-step relay of the authentication code, you need to set the reply URL (callback) for your Microsoft Entra client application to a URL that is a combination of the reply URL for the SMART on FHIR proxy and the reply URL for the SMART on FHIR app. The combined reply URL takes this form:
+Because of this two-step relay of the authentication code, you need to set the reply URL (callback) for your Microsoft Entra client application to a URL that is a combination of the reply URL for the SMART on FHIR proxy and the reply URL for the SMART on FHIR app. The combined reply URL takes the following form.
```http https://MYFHIRAPI.azurehealthcareapis.com/AadSmartOnFhirProxy/callback/aHR0cHM6Ly9sb2NhbGhvc3Q6NTAwMS9zYW1wbGVhcHAvaW5kZXguaHRtbA ```
-In that reply, `aHR0cHM6Ly9sb2NhbGhvc3Q6NTAwMS9zYW1wbGVhcHAvaW5kZXguaHRtbA` is a URL-safe, base64-encoded version of the reply URL for the SMART on FHIR app. For the SMART on FHIR app launcher, when the app is running locally, the reply URL is `https://localhost:5001/sampleapp/https://docsupdatetracker.net/index.html`.
+In the reply, `aHR0cHM6Ly9sb2NhbGhvc3Q6NTAwMS9zYW1wbGVhcHAvaW5kZXguaHRtbA` is a URL-safe, base64-encoded version of the reply URL for the SMART on FHIR app. For the SMART on FHIR app launcher, when the app is running locally, the reply URL is `https://localhost:5001/sampleapp/index.html`.
-You can generate the combined reply URL by using a script like this:
+You can generate the combined reply URL by using a script like the following.
```PowerShell $replyUrl = "https://localhost:5001/sampleapp/index.html"
$encodedText = $encodedText.Replace('+','-');
$newReplyUrl = $FhirServerUrl.TrimEnd('/') + "/AadSmartOnFhirProxy/callback/" + $encodedText ```
-Add the reply URL to the public client application that you created earlier for Microsoft Entra ID:
+Add the reply URL to the public client application that you created earlier for Microsoft Entra ID.
![Screenshot show how reply url can be configured for the public client.](media/tutorial-smart-on-fhir/configure-reply-url.png) ### Step 3: Get a test patient
-To test the Azure API for FHIR and the SMART on FHIR proxy, you'll need to have at least one patient in the database. If you've not interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
+To test the Azure API for FHIR and the SMART on FHIR proxy, you need to have at least one patient in the database. If you haven't interacted with the API yet, and you don't have data in the database, see [Access the FHIR service using Postman](./../fhir/use-postman.md) to load a patient. Make a note of the ID of a specific patient.
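If you prefer not to use Postman, the following is a minimal sketch of creating a test patient directly against the FHIR API (the name is made up, and the request must carry a valid bearer token for your Azure API for FHIR instance).

```rest
POST https://MYFHIRAPI.azurehealthcareapis.com/Patient
Content-Type: application/fhir+json

{
  "resourceType": "Patient",
  "name": [ { "family": "Sample", "given": [ "Test" ] } ]
}
```

The `id` in the returned resource is the patient ID to note for the later steps.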
### Step 4: Download the SMART on FHIR app launcher The open-source [FHIR Server for Azure repository](https://github.com/Microsoft/fhir-server) includes a simple SMART on FHIR app launcher and a sample SMART on FHIR app. In this tutorial, use this SMART on FHIR launcher locally to test the setup.
-You can clone the GitHub repository and go to the application by using these commands:
+You can clone the GitHub repository and go to the application by using the following commands.
```PowerShell git clone https://github.com/Microsoft/fhir-server
The application needs a few configuration settings, which you can set in `appset
} ```
-We recommend that you use the `dotnet user-secrets` feature:
+We recommend that you use the `dotnet user-secrets` feature.
```PowerShell dotnet user-secrets set FhirServerUrl https://MYFHIRAPI.azurehealthcareapis.com dotnet user-secrets set ClientId <APP-ID> ```
-Use this command to run the application:
+Use this command to run the application.
```PowerShell dotnet run
dotnet run
### Step 5: Test the SMART on FHIR proxy
-After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen:
+After you start the SMART on FHIR app launcher, you can point your browser to `https://localhost:5001`, where you should see the following screen.
![Screenshot of SMART on FHIR app launcher.](media/tutorial-smart-on-fhir/smart-on-fhir-app-launcher.png)
-When you enter **Patient**, **Encounter**, or **Practitioner** information, you'll notice that the **Launch context** is updated. When you're using the Azure API for FHIR, the launch context is simply a JSON document that contains information about patient, practitioner, and more. This launch context is base64 encoded and passed to the SMART on FHIR app as the `launch` query parameter. According to the SMART on FHIR specification, this variable is opaque to the SMART on FHIR app and passed on to the identity provider.
+When you enter **Patient**, **Encounter**, or **Practitioner** information, notice that the **Launch context** is updated. When you're using the Azure API for FHIR, the launch context is simply a JSON document that contains information about patient, practitioner, and more. This launch context is base64 encoded and passed to the SMART on FHIR app as the `launch` query parameter. According to the SMART on FHIR specification, this variable is opaque to the SMART on FHIR app and passed on to the identity provider.
-The SMART on FHIR proxy uses this information to populate fields in the token response. The SMART on FHIR app *can* use these fields to control which patient it requests data for and how it renders the application on the screen. The SMART on FHIR proxy supports the following fields:
+The SMART on FHIR proxy uses this information to populate fields in the token response. The SMART on FHIR app *can* use these fields to control which patient it requests data for and how it renders the application on the screen. The SMART on FHIR proxy supports the following fields.
* `patient` * `encounter`
The SMART on FHIR proxy uses this information to populate fields in the token re
These fields are meant to provide guidance to the app, but they don't convey any security information. A SMART on FHIR application can ignore them.
-Notice that the SMART on FHIR app launcher updates the **Launch URL** information at the bottom of the page. Select **Launch** to start the sample app.
+Notice that the SMART on FHIR app launcher updates the **Launch URL** information at the bottom of the page.<br>
+
+Select **Launch** to start the sample app.
+ </details>
-
+ ## Migrate from SMART on FHIR Proxy to SMART on FHIR (Enhanced)+ [!INCLUDE [Migrate from SMART on FHIR Proxy to Enhanced](../includes/smart-on-fhir-proxy-migration.md)] ## Next steps
Now that you've learned about enabling SMART on FHIR functionality, see the sear
>[!div class="nextstepaction"] >[FHIR search examples](search-samples.md)
-
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+
healthcare-apis Store Profiles In Fhir https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/azure-api-for-fhir/store-profiles-in-fhir.md
# Store profiles in Azure API for FHIR
-HL7 Fast Healthcare Interoperability Resources (FHIR&#174;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context that FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications. [FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions.
+HL7 Fast Healthcare Interoperability Resources (FHIR&reg;) defines a standard and interoperable way to store and exchange healthcare data. Even within the base FHIR specification, it can be helpful to define other rules or extensions based on the context in which FHIR is being used. For such context-specific uses of FHIR, **FHIR profiles** are used for the extra layer of specifications. A [FHIR profile](https://www.hl7.org/fhir/profiling.html) allows you to narrow down and customize resource definitions using constraints and extensions.
Azure API for FHIR allows validating resources against profiles to see if the resources conform to the profiles. This article guides you through the basics of FHIR profiles and how to store them. For more information about FHIR profiles outside of this article, visit [HL7.org](https://www.hl7.org/fhir/profiling.html). ## FHIR profile: the basics
-A profile sets additional context on the resource that's represented as a `StructureDefinition` resource. A `StructureDefinition` defines a set of rules on the content of a resource or a data type, such as what elements a resource has and what values these elements can take.
+A profile sets additional context on the resource that's represented as a `StructureDefinition` resource. A `StructureDefinition` defines a set of rules on the content of a resource or data type, such as what elements a resource has and what values these elements can take.
-Below are some examples of how profiles can modify the base resource:
+Some examples of how profiles can modify the base resource are:
- Restrict cardinality: For example, you can set the maximum cardinality on an element to 0, which means that the element is ruled out in the specific context. - Restrict the contents of an element to a single fixed value. - Define required extensions for the resource.
-A `StructureDefinition` is identified by its canonical URL: `http://hl7.org/fhir/StructureDefinition/{profile}`
+A `StructureDefinition` is identified by its canonical URL: `http://hl7.org/fhir/StructureDefinition/{profile}`.
-For example:
+Here are some examples.
- `http://hl7.org/fhir/StructureDefinition/patient-birthPlace` is a base profile that requires information on the registered address of birth of the patient. - `http://hl7.org/fhir/StructureDefinition/bmi` is another base profile that defines how to represent Body Mass Index (BMI) observations. - `http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance` is a US Core profile that sets minimum expectations for `AllergyIntolerance` resource associated with a patient, and it identifies mandatory fields such as extensions and value sets.
-When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. Below you can see an example of the beginning of a 'Patient' resource, which has http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient profile.
+When a resource conforms to a profile, the profile is specified inside the `profile` element of the resource. The following is an example of the beginning of a `Patient` resource that has the profile http://hl7.org/fhir/us/carin-bb/StructureDefinition/C4BB-Patient.
```json {
When a resource conforms to a profile, the profile is specified inside the `prof
> [!NOTE] > Profiles must build on top of the base resource and cannot conflict with the base resource. For example, if an element has a cardinality of 1..1, the profile cannot make it optional.
-Profiles are also specified by various Implementation Guides (IGs). Some common IGs are listed below. For more information, visit the specific IG site to learn more about the IG and the profiles defined within it:
+Profiles are specified by various Implementation Guides (IGs). The following is a list of common IGs. For more information, visit the specific IG site to learn more about the IG and the profiles defined within it.
- [US Core](https://www.hl7.org/fhir/us/core/) - [CARIN Blue Button](https://hl7.org/fhir/us/carin-bb)
Profiles are also specified by various Implementation Guides (IGs). Some common
- [Argonaut](https://www.fhir.org/guides/argonaut/pd/) > [!NOTE]
-> The Azure API for FHIR does not store any profiles from implementation guides by default. You will need to load them into the Azure API for FHIR.
+> The Azure API for FHIR by default does not store any profiles from implementation guides. You will need to load them into the Azure API for FHIR.
## Accessing profiles and storing profiles
Conditional update: `PUT http://<your Azure API for FHIR base URL>/StructureDefi
} ```
-For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We've included a snippet of this profile for the example.
+For example, if you'd like to store the `us-core-allergyintolerance` profile, you'd use the following rest command with the US Core allergy intolerance profile in the body. We included a snippet of this profile for the example.
```rest PUT https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-allergyintolerance
PUT https://myAzureAPIforFHIR.azurehealthcareapis.com/StructureDefinition?url=ht
], "description" : "Defines constraints and extensions on the AllergyIntolerance resource for the minimal set of data to query and retrieve allergy information.", ```
-For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up to date profiles, you should get the profiles directly from HL7 and the implementation guide that defines them.
+For more examples, see the [US Core sample REST file](https://github.com/microsoft/fhir-server/blob/main/docs/rest/PayerDataExchange/USCore.http) on the open-source site that walks through storing US Core profiles. To get the most up-to-date profiles, you should get the profiles directly from HL7 and the implementation guide that defines them.
### Viewing profiles You can access your existing custom profiles using a `GET` request, ``GET http://<your Azure API for FHIR base URL>/StructureDefinition?url={canonicalUrl}``, where `{canonicalUrl}` is the canonical URL of your profile.
-For example, if you want to view US Core Goal resource profile:
+For example, use the following if you want to view a US Core Goal resource profile.
`GET https://myworkspace-myfhirserver.fhir.azurehealthcareapis.com/StructureDefinition?url=http://hl7.org/fhir/us/core/StructureDefinition/us-core-goal`
-This will return the `StructureDefinition` resource for US Core Goal profile, that will start like this:
+This returns the `StructureDefinition` resource for US Core Goal profile, which starts like the following.
```json {
This will return the `StructureDefinition` resource for US Core Goal profile, th
> You'll only see the profiles that you've loaded into Azure API for FHIR.
-Azure API for FHIR doesn't return `StructureDefinition` instances for the base profiles, but they can be found in the HL7 website, such as:
+Azure API for FHIR doesn't return `StructureDefinition` instances for the base profiles, but they can be found on the HL7 website, for example:
- `http://hl7.org/fhir/Observation.profile.json.html` - `http://hl7.org/fhir/Patient.profile.json.html`
Azure API for FHIR doesn't return `StructureDefinition` instances for the base p
### Profiles in the capability statement
-The `Capability Statement` lists all possible behaviors of Azure API for FHIR. Azure API for FHIR updates the capability statement with details of the stored profiles in the forms of:
+The `Capability Statement` lists all possible behaviors of Azure API for FHIR. Azure API for FHIR updates the capability statement with details of the stored profiles in the following forms.
- `CapabilityStatement.rest.resource.profile` - `CapabilityStatement.rest.resource.supportedProfile`
-For example, if you save a US Core Patient profile, which starts like this:
+For example, say you save a US Core Patient profile that starts like the following.
```json {
And send a `GET` request for your `metadata`:
`GET http://<your Azure API for FHIR base URL>/metadata`
-You'll be returned with a `CapabilityStatement` that includes the following information on the US Core Patient profile you uploaded to Azure API for FHIR:
+You're returned a `CapabilityStatement` that includes the following information on the US Core Patient profile you uploaded to Azure API for FHIR.
```json ...
You'll be returned with a `CapabilityStatement` that includes the following info
], ``` ### Bindings in Profiles
-A terminology service is a set of functions that can perform operations on medical "terminologies," such as validating codes, translating codes, expanding value sets, etc. The Azure API for FHIR service doesn't support terminology service. Information for supported operations ($), resource types and interactions can be found in the service's CapabilityStatement. Resource types ValueSet, StructureDefinition and CodeSystem are supported with basic CRUD operations and Search (as defined in the CapabilityStatement) as well as being leveraged by the system for use in $validate.
+A terminology service is a set of functions that can perform operations on medical "terminologies" such as validating codes, translating codes, expanding value sets, and other operations.<br>
+The Azure API for FHIR service doesn't support terminology service. Information for supported operations (`$`), resource types, and interactions can be found in the service's `CapabilityStatement`. Resource types `ValueSet`, `StructureDefinition` and `CodeSystem` are supported with basic create, read, update, and delete (CRUD) operations and Search (as defined in the `CapabilityStatement`), as well as being leveraged by the system for use in `$validate`.
-ValueSets can contain a complex set of rules and external references. Today, the service will only consider the pre-expanded inline codes. Customers need to upload supported ValueSets to the FHIR server prior to utilizing the $validate operation. The ValueSet resources must be uploaded to the FHIR server, using PUT or conditional update as mentioned under Storing Profiles section above.
+ValueSets can contain a complex set of rules and external references. Today, the service only considers the pre-expanded inline codes. Customers need to upload supported ValueSets to the FHIR server before utilizing the `$validate` operation. The `ValueSet` resources must be uploaded to the FHIR server, using PUT or conditional update, as mentioned in the [storing profiles](#storing-profiles) section.
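As a sketch (the canonical URL here is hypothetical), uploading a ValueSet follows the same conditional update pattern used for profiles, with the `ValueSet` resource itself, including its pre-expanded inline codes, as the request body.

```rest
PUT http://<your Azure API for FHIR base URL>/ValueSet?url=http://example.org/fhir/ValueSet/my-codes
```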
## Next steps
-In this article, you've learned about FHIR profiles. Next, you'll learn how you can use $validate to ensure that resources conform to these profiles.
+In this article, you learned about FHIR profiles. Next, learn how to use `$validate` to ensure that resources conform to these profiles.
>[!div class="nextstepaction"] >[Validate FHIR resources against profiles](validation-against-profiles.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Fhir Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/fhir-best-practices.md
Azure FHIR service supports data ingestion through the import operation, which o
To achieve optimal performance with the import operation, consider the following best practices.
-* **Do** use large files while ingesting data. The optimal DNJSON file size for import is 50 MB or larger (or 20,000 resources or more, with no upper limit). Combining smaller files into larger ones can enhance performance.
+* **Do** use large files while ingesting data. The optimal NDJSON file size for import is 50 MB or larger (or 20,000 resources or more, with no upper limit). Combining smaller files into larger ones can enhance performance.
* **Consider** using the import operation over HTTP API requests to ingest the data into FHIR service. The import operation provides a high throughput and is a scalable method for loading data. * **Consider** importing all FHIR resource files in a single import operation for optimal performance. Aim for a total file size of 100 GB or more (or 100 million resources, no upper limit) in one operation. Maximizing an import in this way helps reduce the overhead associated with managing multiple import jobs. * **Consider** running multiple concurrent imports only if necessary, but limit parallel import jobs. A single large import is designed to consume all available system resources, and processing throughput doesn't increase with concurrent import jobs.
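The preceding recommendations favor a single, large import over many individual API writes. As a rough sketch of the shape of that request (verify the exact endpoint, headers, and `Parameters` body against the import operation documentation for your FHIR service), the import is started with one asynchronous call.

```rest
POST [your-fhir-server]/$import
Prefer: respond-async
Content-Type: application/fhir+json
```

The request body is a `Parameters` resource that lists the NDJSON files to ingest.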
iot-hub-device-update Device Update Configure Repo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-configure-repo.md
Following this document, learn how to configure a package repository using [OSCo
## Prerequisites
-You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md) and Microsoft Azure Portal or Azure CLI to interact with devices via your IoT Hub. Follow the next steps to get started:
+You need an Azure account with an [IoT Hub](../iot-hub/iot-concepts-and-iot-hub.md), and the Azure portal or Azure CLI to interact with devices via your IoT Hub. Follow these steps to get started:
- Create a Device Update account and instance in your IoT Hub. See [how to create it](create-device-update-account.md). - Install the [IoT Hub Identity Service](https://azure.github.io/iot-identity-service/installation.html) (or skip if [IoT Edge 1.2](../iot-edge/how-to-provision-single-device-linux-symmetric.md?preserve-view=true&tabs=azure-portal%2cubuntu&view=iotedge-2020-11#install-iot-edge) or higher is already installed on the device).-- Install the Device Update agent on the device. See [how to](device-update-ubuntu-agent.md#manually-prepare-a-device).
+- Install the Device Update agent on the device. See [how to](device-update-agent-provisioning.md#on-iot-edge-enabled-devices).
- Install the OSConfig agent on the device. See [how to](/azure/osconfig/howto-install?tabs=package#step-11-connect-a-device-to-packagesmicrosoftcom). - Now that both the agent and IoT Hub Identity Service are present on the device, the next step is to configure the device with an identity so it can connect to Azure. See example [here](/azure/osconfig/howto-install?tabs=package#job-2--connect-to-azure) ## How to configure package repository for package updates
-Follow the below steps to update Azure IoT Edge on Ubuntu Server 18.04 x64 by configuring a source repository. The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration.
+Follow these steps to update Azure IoT Edge on Ubuntu Server 22.04 x64 by configuring a source repository. The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration.
1. Configure the package repository of your choice with OSConfig's configure package repo module. See [how to](/azure/osconfig/howto-pmc?tabs=portal%2Csingle#example-1--specify-desired-package-sources). This repository should be the location where you wish to store packages to be downloaded to the device. 2. Upload your packages to the repository configured above. 3. Create an [APT manifest](device-update-apt-manifest.md) to provide the Device Update agent with the information it needs to download and install the packages (and their dependencies) from the repository. 4. Follow the steps from [here](device-update-ubuntu-agent.md#prerequisites) to do a package update with Device Update. Device Update is used to deploy package updates to a large number of devices and at scale.
-5. Monitor results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment).
+5. Monitor results of the package update by following these [steps](device-update-ubuntu-agent.md#monitor-the-update-deployment).
iot-hub-device-update Device Update Ubuntu Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-ubuntu-agent.md
Title: Device Update for Azure IoT Hub tutorial using the Ubuntu Server 18.04 x64 package agent | Microsoft Docs
-description: Get started with Device Update for Azure IoT Hub by using the Ubuntu Server 18.04 x64 package agent.
-- Previously updated : 1/26/2022
+ Title: Device Update for Azure IoT Hub tutorial using the Ubuntu Server 22.04 x64 package agent | Microsoft Docs
+description: Get started with Device Update for Azure IoT Hub by using the Ubuntu Server 22.04 x64 package agent.
++ Last updated : 10/2/2024
-# Tutorial: Device Update for Azure IoT Hub using the package agent on Ubuntu Server 18.04 x64
+# Tutorial: Device Update for Azure IoT Hub using the package agent on Ubuntu Server 22.04 x64
Device Update for Azure IoT Hub supports image-based, package-based, and script-based updates. Package-based updates are targeted updates that alter only a specific component or application on the device. They lead to lower consumption of bandwidth and help reduce the time to download and install the update. Package-based updates also typically allow for less downtime of devices when you apply an update and avoid the overhead of creating images. They use an [APT manifest](device-update-apt-manifest.md), which provides the Device Update agent with the information it needs to download and install the packages specified in the APT manifest file (and their dependencies) from a designated repository.
-This tutorial walks you through updating Azure IoT Edge on Ubuntu Server 18.04 x64 by using the Device Update package agent. Although the tutorial demonstrates updating IoT Edge, by using similar steps you could update other packages, such as the container engine it uses.
+This tutorial walks you through updating Azure IoT Edge on Ubuntu Server 22.04 x64 by using the Device Update package agent. Although the tutorial demonstrates installing Microsoft Defender for IoT, you can use similar steps to update other packages, such as IoT Edge itself or the container engine it uses.
The tools and concepts in this tutorial still apply even if you plan to use a different OS platform configuration. Finish this introduction to an end-to-end update process. Then choose your preferred form of updating an OS platform to dive into the details.
In this tutorial, you'll learn how to:
## Prepare a device
-Prepare a device automatically or manually.
- ### Use the automated Deploy to Azure button
-For convenience, this tutorial uses a [cloud-init](/azure/virtual-machines/linux/using-cloud-init)-based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to help you quickly set up an Ubuntu 18.04 LTS virtual machine. It installs both the Azure IoT Edge runtime and the Device Update package agent. Then it automatically configures the device with provisioning information by using the device connection string for an IoT Edge device (prerequisite) that you supply. The Resource Manager template also avoids the need to start an SSH session to complete setup.
+For convenience, this tutorial uses a [cloud-init](/azure/virtual-machines/linux/using-cloud-init)-based [Azure Resource Manager template](../azure-resource-manager/templates/overview.md) to help you quickly set up an Ubuntu 22.04 LTS virtual machine (VM). It installs both the Azure IoT Edge runtime and the Device Update package agent. Then it automatically configures the device with provisioning information by using the device connection string for an IoT Edge device (prerequisite) that you supply. The Resource Manager template also avoids the need to start an SSH session to complete setup.
1. To begin, select the button:
- [![Screenshot showing the Deploy to Azure button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fdevice-update-tutorial%2FedgeDeploy.json).
+ [![Screenshot showing the Deploy to Azure button for iotedge-vm-deploy](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2Fazure%2Fiotedge-vm-deploy%2Fmain%2FedgeDeploy.json).
1. Fill in the available text boxes:
- > [!div class="mx-imgBorder"]
- > [![Screenshot showing the iotedge-vm-deploy template.](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)](../iot-edge/media/how-to-install-iot-edge-ubuntuvm/iotedge-vm-deploy.png)
- * **Subscription**: The active Azure subscription to deploy the virtual machine into. * **Resource group**: An existing or newly created resource group to contain the virtual machine and its associated resources. * **Region**: The [geographic region](https://azure.microsoft.com/global-infrastructure/locations/) to deploy the virtual machine into. This value defaults to the location of the selected resource group.
For convenience, this tutorial uses a [cloud-init](/azure/virtual-machines/linux
* **Admin Username**: A username, which is provided root privileges on deployment. * **Device Connection String**: A [device connection string](../iot-edge/how-to-provision-single-device-linux-symmetric.md#view-registered-devices-and-retrieve-provisioning-information) for a device that was created within your intended [IoT hub](../iot-hub/about-iot-hub.md). * **VM Size**: The [size](../cloud-services/cloud-services-sizes-specs.md) of the virtual machine to be deployed.
- * **Ubuntu OS Version**: The version of the Ubuntu OS to be installed on the base virtual machine. Leave the default value unchanged because it will be set to Ubuntu 18.04-LTS already.
+ * **Ubuntu OS Version**: The version of the Ubuntu OS to be installed on the base virtual machine. Leave the default value unchanged because it is set to Ubuntu 22.04-LTS already.
* **Authentication Type**: Choose **sshPublicKey** or **password** based on your preference. * **Admin Password or Key**: The value of the SSH Public Key or the value of the password based on the choice of authentication type. After all the boxes are filled in, select the checkbox at the bottom of the page to accept the terms. Select **Purchase** to begin the deployment.
-1. Verify that the deployment has completed successfully. Allow a few minutes after deployment completes for the post-installation and configuration to finish installing IoT Edge and the device package update agent.
+1. Verify that the deployment completed successfully. Allow a few minutes after the deployment completes for the post-installation and configuration to finish installing IoT Edge and the device package update agent.
- A virtual machine resource should have been deployed into the selected resource group. Note the machine name, which is in the format `vm-0000000000000`. Also note the associated **DNS name**, which is in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
+ A virtual machine resource should be deployed into the selected resource group. Note the machine name, which is in the format `vm-0000000000000`. Also note the associated **DNS name**, which is in the format `<dnsLabelPrefix>`.`<location>`.cloudapp.azure.com.
You can obtain the **DNS name** from the **Overview** section of the newly deployed virtual machine in the Azure portal.
For convenience, this tutorial uses a [cloud-init](/azure/virtual-machines/linux
> To SSH into this VM after setup, use the associated **DNS name** with the following command: `ssh <adminUsername>@<DNS_Name>`.
-1. Open the configuration details (See how to [set up configuration file here](device-update-configuration-file.md) with the command below. Set your connectionType as 'AIS' and connectionData as empty string. Please note that all values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents).
+1. Install the Device Update agent on the VM.
```bash
- sudo nano /etc/adu/du-config.json
+ sudo apt-get install deviceupdate-agent
```
-1. Restart the Device Update agent.
+2. Open the configuration details (see how to [set up the configuration file](device-update-configuration-file.md)) with the following command. Set your connectionType as 'AIS' and connectionData as an empty string. Note that all values with the 'Place value here' tag must be set. See [Configuring a Device Update agent](./device-update-configuration-file.md#example-du-configjson-file-contents).
```bash
- sudo systemctl restart deviceupdate-agent
- ```
-
-Device Update for Azure IoT Hub software packages are subject to the following license terms:
-
-* [Device update for IoT Hub license](https://github.com/Azure/iot-hub-device-update/blob/main/LICENSE)
-* [Delivery optimization client license](https://github.com/microsoft/do-client/blob/main/LICENSE)
-
-Read the license terms before you use a package. Your installation and use of a package constitutes your acceptance of these terms. If you don't agree with the license terms, don't use that package.
-
-### Manually prepare a device
-
-Similar to the steps automated by the [cloud-init script](https://github.com/Azure/iotedge-vm-deploy/blob/1.2.0-rc4/cloud-init.txt), the following manual steps are used to install and configure a device. Use these steps to prepare a physical device.
-
-1. Follow the instructions to [install the Azure IoT Edge runtime](../iot-edge/how-to-provision-single-device-linux-symmetric.md?view=iotedge-2020-11&preserve-view=true).
-
- > [!NOTE]
- > The Device Update agent doesn't depend on IoT Edge. But it does rely on the IoT Identity Service daemon that's installed with IoT Edge (1.2.0 and higher) to obtain an identity and connect to IoT Hub.
- >
- > Although not covered in this tutorial, the [IoT Identity Service daemon can be installed standalone on Linux-based IoT devices](https://azure.github.io/iot-identity-service/installation.html). The sequence of installation matters. The Device Update package agent must be installed _after_ the IoT Identity Service. Otherwise, the package agent won't be registered as an authorized component to establish a connection to IoT Hub.
-
-1. Install the Device Update agent .deb packages:
-
- ```bash
- sudo apt-get install deviceupdate-agent
- ```
-
-1. Enter your IoT device's module (or device, depending on how you [provisioned the device with Device Update](device-update-agent-provisioning.md)) primary connection string in the configuration file. Please note that all values with the 'Place value here' tag must be set. See [Configuring a DU agent](./device-update-configuration-file.md#example-du-configjson-file-contents).
-
- ```bash
- sudo /etc/adu/du-config.json
+ sudo nano /etc/adu/du-config.json
``` 1. Restart the Device Update agent.
Read the license terms before you use a package. Your installation and use of a
## Import the update
-1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and select the **Assets** dropdown list. Download `Tutorial_IoTEdge_PackageUpdate.zip` by selecting it. Extract the contents of the folder to discover a sample APT manifest (sample-1.0.2-aziot-edge-apt-manifest.json) and its corresponding import manifest (sample-1.0.2-aziot-edge-importManifest.json).
+1. Go to [Device Update releases](https://github.com/Azure/iot-hub-device-update/releases) in GitHub and select the **Assets** dropdown list. Download `Tutorial_IoTEdge_PackageUpdate.zip` by selecting it. Extract the contents of the folder to discover a sample APT manifest (sample-defender-iot-apt-manifest.json) and its corresponding import manifest (sample-defender-iot--importManifest.json).
1. Sign in to the [Azure portal](https://portal.azure.com/) and go to your IoT hub with Device Update. On the left pane, under **Automatic Device Management**, select **Updates**. 1. Select the **Updates** tab. 1. Select **+ Import New Update**.
For more information about tags and groups, see [Manage device groups](create-up
1. Select **Refresh** to view the latest status details.
-You've now completed a successful end-to-end package update by using Device Update for IoT Hub on an Ubuntu Server 18.04 x64 device.
+You've now completed a successful end-to-end package update by using Device Update for IoT Hub on an Ubuntu Server 22.04 x64 device.
## Clean up resources
migrate Tutorial Discover Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/migrate/vmware/tutorial-discover-vmware.md
In VMware vSphere Web Client, set up a read-only account to use for vCenter Serv
Your user account on your servers must have the required permissions to initiate discovery of installed applications, agentless dependency analysis, and discovery of web apps, and SQL Server instances and databases. You can provide the user account information in the appliance configuration manager. The appliance doesn't install agents on the servers.
-* For **Windows servers** and web apps discovery, create an account (local or domain) that has administrator permissions on the servers. Sentence should be - To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](./migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
+* For **Windows servers** and web apps discovery, create an account (local or domain) that has administrator permissions on the servers. To discover SQL Server instances and databases, the Windows or SQL Server account must be a member of the sysadmin server role or have [these permissions](./migrate-support-matrix-vmware.md#configure-the-custom-login-for-sql-server-discovery) for each SQL Server instance. Learn how to [assign the required role to the user account](/sql/relational-databases/security/authentication-access/server-level-roles).
* For **Linux servers**, provide a sudo user account with permissions to execute the ls and netstat commands, or create a user account that has the CAP_DAC_READ_SEARCH and CAP_SYS_PTRACE permissions on the /bin/netstat and /bin/ls files. If you provide a sudo user account, ensure that **NOPASSWD** is enabled for the account so that it can run the required commands without prompting for a password every time the sudo command is invoked (a sketch follows). > [!NOTE]
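As a minimal illustration (the account name *azmigrateuser*, the sudoers file name, and the binary paths are assumptions, not values from this article), you could grant passwordless sudo for only the required commands. Verify the paths on your distribution, for example with `which ls netstat`, before applying:

```bash
# Hypothetical example: allow the discovery account (name assumed) to run only
# the required commands through sudo without a password prompt.
echo 'azmigrateuser ALL=(ALL) NOPASSWD: /usr/bin/ls, /usr/bin/netstat' |
  sudo tee /etc/sudoers.d/azmigrate-discovery
sudo chmod 440 /etc/sudoers.d/azmigrate-discovery
```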
modeling-simulation-workbench Tutorial Install Slurm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/modeling-simulation-workbench/tutorial-install-slurm.md
+
+ Title: "Tutorial: Install the Slurm Workload Manager on Azure Modeling and Simulation Workbench"
+description: "Learn how to install the Slurm Workload Manager in the Azure Modeling and Simulation Workbench."
++++ Last updated : 10/02/2024+
+#CustomerIntent: As an administrator, I want to learn how to install, set up, and configure the Slurm workload manager in the Azure Modeling and Simulation Workbench.
++
+# Tutorial: Install the Slurm workload manager in the Azure Modeling and Simulation Workbench
+
+The [Slurm](https://slurm.schedmd.com/overview.html) Workload Manager is a scheduler used in microelectronics design and other high-performance computing scenarios to manage jobs across compute clusters. The Modeling and Simulation Workbench can be deployed with a range of high-performance virtual machines (VM) ideal for large, compute-intensive workloads. Slurm clusters consist of a *controller node* that manages, stages, and schedules jobs bound for the *compute nodes*. Compute nodes are where the actual workloads are performed. A *node* is an individual element of the cluster, such as a VM.
+
+The Slurm installation package is already available on all Modeling and Simulation Workbench Chamber VMs. This tutorial shows you how to create VMs for your Slurm cluster and install Slurm.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+>
+> * Create a cluster for Slurm
+> * Create an inventory of VMs
+> * Designate controller and compute nodes and install Slurm on each
+
+If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Prerequisites
+++
+## Sign in to the Azure portal and navigate to your workbench
+
+If you aren't already signed in to the Azure portal, go to [portal.azure.com](https://portal.azure.com). Navigate to your workbench, and then to the chamber where you want to create your Slurm cluster.
+
+## Create a cluster for Slurm
+
+Slurm requires one node to serve as the controller and a set of compute nodes where workloads execute. The controller is traditionally a modestly sized VM. The controller isn't used for computational workloads and is left deployed between jobs, while the compute nodes themselves are typically sized for a specific task and often deleted after the job. Learn about the different VMs available in Modeling and Simulation Workbench on the [VM Offerings page](./concept-vm-offerings.md).
+
+### Create the Slurm controller node
+
+1. From the chamber overview page, select **Chamber VM** from the **Settings** menu, then select either the **+ Create** button on the action menu along the top or the blue **Create chamber VM** button in the center of the page.
+ :::image type="content" source="media/tutorial-slurm/create-chamber-vm.png" alt-text="Screenshot of chamber VM overview page with Chamber VM in Settings and the create options on the page highlighted by red outlines." lightbox="media/tutorial-slurm/create-chamber-vm.png":::
+1. On the **Create chamber VM** page:
+ * Enter a **Name** for the VM. We recommend choosing a name that indicates it's the controller node.
+ * Select a VM size. For the controller, you can select the smallest VM available. The *D4s_v4* is currently the smallest.
+ * Leave the **Chamber VM image type** and **Chamber VM count** as the default of *Semiconductor* and *1*.
+ * Select **Review + create**.
+ :::image type="content" source="media/tutorial-slurm/configure-create-chamber-vm.png" alt-text="Screenshot of create chamber VM page with the name and VM size textboxes and the create button highlighted in red outline.":::
+1. After the validation check passes, select the **Create** button.
+
+Once the VM deploys, it's available in the connector desktop dashboard.
+
+### Create a Slurm compute cluster
+
+A *cluster* is a collection of VMs, individually referred to as *nodes*, that performs the actual work. The compute nodes have their workloads dispatched and managed by the controller node. Similar to the steps you took when you created the controller, return to the **Chamber VM** page to create a cluster. The Modeling and Simulation Workbench allows you to create multiple, identical VMs in a single step.
+
+1. On the **Create chamber VM** page:
+ * Enter a **Name** for the VM cluster. Use a name that identifies these VMs as compute nodes. For example, include the word "node" or the type of workload somewhere in the name.
+ * Select a VM appropriately sized for the workload. Refer to the [VM Offerings](concept-vm-offerings.md) page for guidance on VM offerings, capabilities, features, and sizes.
+ * Leave the **Chamber VM image type** as the default of *Semiconductor*.
+ * In the **Chamber VM count** box, enter the number of nodes required.
+ * Select **Review + create**.
+1. After the validation check passes, select the **Create** button.
+
+VMs are deployed in parallel and appear in the dashboard. You don't typically need to access worker nodes individually; however, you can use SSH to reach worker nodes in the same chamber if needed. In the next steps, you configure the compute nodes from the controller.
+
+### Connect to the controller node desktop
+
+Slurm installation is performed from the controller node.
+
+1. Navigate to the connector. From the **Settings** menu of the chamber, select **Connector**. Select the sole connector that appears in the resource list.
+ :::image type="content" source="media/tutorial-slurm/connector-overview.png" alt-text="Screenshot of connector overview page with Connector in Settings and the target connector highlighted with a red rectangle.":::
+1. From the connector page, select the **Desktop dashboard** URL.
+ :::image type="content" source="media/tutorial-slurm/connector-desktop-dashboard-url.png" alt-text="Screenshot of connector overview page with desktop dashboard URL highlighted in red rectangle.":::
+1. The desktop dashboard opens. Select your controller VM.
+
+## Create an inventory of VMs
+
+Slurm installation requires a technical inventory of the compute nodes and their host names.
+
+### Get a list of deployed VMs
+
+Configuring Slurm requires an inventory of nodes. From the controller node:
+
+1. Open a terminal in your desktop by selecting the terminal icon from the menu bar at the top.
+ :::image type="content" source="media/tutorial-slurm/open-terminal.png" alt-text="Screenshot of desktop with terminal button highlighted in red.":::
+1. Execute the following commands to print a list of all VMs in the chamber. In this example, we have one controller and five nodes. The command prints the IP addresses in the first column and the hostnames in the second. From the naming, you can see the controller node and the compute nodes.
+
+ ```bash
+ ip=$(hostname -i | cut -d'.' -f1-3)
+ for i in {1..254}; do host "$ip.$i" | grep -v "not found" | awk -F'[. ]' '{print $4"."$3"."$2"."$1" "$10}'; done
+ ```
+
+ Your output will be similar to:
+
+ ```bash
+ 10.163.4.4 wrkldvmslurmcont29085dd
+ 10.163.4.5 wrkldvmslurm-nod0aef63d
+ 10.163.4.6 wrkldvmslurm-nod10870ad
+ 10.163.4.7 wrkldvmslurm-node4689c2
+ 10.163.4.8 wrkldvmslurm-noddfe3a7c
+ 10.163.4.9 wrkldvmslurm-nod034b970
+ ```
+
+1. Create a file named *slurm_worker.txt* that contains just the worker nodes, one host name per line. To create it, remove the IP addresses in the first column and the controller node, which is listed first; one way to do this is shown in the sketch after this list. For the remaining steps of this tutorial, use this list to configure the compute nodes from your controller. In some steps, the nodes need to be in a comma-delimited format; in those instances, we use a command-line shortcut to format the list without creating a new file.
+
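+One way to build this file (a sketch, assuming the controller is the first entry returned and the host names are in the second column) is to rerun the discovery loop and filter its output:
+
+```bash
+# Build slurm_worker.txt: keep only the host name column and skip the first
+# entry (the controller). Adjust if your controller isn't listed first.
+ip=$(hostname -i | cut -d'.' -f1-3)
+for i in {1..254}; do host "$ip.$i" | grep -v "not found" | awk -F'[. ]' '{print $4"."$3"."$2"."$1" "$10}'; done \
+  | awk '{print $2}' | tail -n +2 > slurm_worker.txt
+```
+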
+### Gather technical specifications about the compute nodes
+
+Assuming that you created all the worker nodes in your cluster with the same VM size, choose any node to retrieve technical information about the platform. In this example, we use `head` to grab the first host name in the compute node list and use `ssh` to run the `lscpu` command on it:
+
+```bash
+$ ssh `head -1 ./slurm_worker.txt` lscpu
+The authenticity of host 'wrkldvmslurm-nod034b970 (10.163.4.9)' can't be established.
+ECDSA key fingerprint is SHA256:1I2zBg8N/1c0LBRfa+fOvpAoKe90OIr0FvgqwFSfoc0.
+Are you sure you want to continue connecting (yes/no/[fingerprint])? yes
+Warning: Permanently added 'wrkldvmslurm-nod034b970,10.163.4.9' (ECDSA) to the list of known hosts.
+Architecture: x86_64
+CPU op-mode(s): 32-bit, 64-bit
+Byte Order: Little Endian
+CPU(s): 4
+On-line CPU(s) list: 0-3
+Thread(s) per core: 2
+Core(s) per socket: 2
+Socket(s): 1
+NUMA node(s): 1
+Vendor ID: GenuineIntel
+CPU family: 6
+Model: 85
+Model name: Intel(R) Xeon(R) Platinum 8272CL CPU @ 2.60GHz
+Stepping: 7
+CPU MHz: 2593.907
+BogoMIPS: 5187.81
+Virtualization: VT-x
+Hypervisor vendor: Microsoft
+Virtualization type: full
+L1d cache: 32K
+L1i cache: 32K
+L2 cache: 1024K
+L3 cache: 36608K
+NUMA node0 CPU(s): 0-3
+Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ss ht syscall nx pdpe1gb rdtscp lm constant_tsc rep_good nopl xtopology cpuid pni pclmulqdq vmx ssse3 fma cx16 pcid sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand hypervisor lahf_lm abm 3dnowprefetch invpcid_single tpr_shadow vnmi ept vpid ept_ad fsgsbase bmi1 hle avx2 smep bmi2 erms invpcid rtm avx512f avx512dq rdseed adx smap clflushopt clwb avx512cd avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_vnni arch_capabilities
+```
+
+The SSH client asks you to verify the ECDSA key fingerprint of the remote machine. Take note of the following parameters (the sketch after this list shows one way to filter for just these lines):
+
+* **CPU(s)**
+* **Socket(s)**
+* **Core(s) per socket**
+* **Thread(s) per core**
+
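+To pull out just these lines (a convenience sketch, not a required step), you can filter the `lscpu` output:
+
+```bash
+# Show only the CPU count, socket count, cores per socket, and threads per core
+# of the first worker node.
+ssh "$(head -1 ./slurm_worker.txt)" lscpu | grep -E '^(CPU\(s\)|Socket\(s\)|Core\(s\) per socket|Thread\(s\) per core):'
+```
+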
+Slurm also requires an estimate of the available memory on the compute nodes. To obtain it, execute the `free` command on any of the compute nodes from your controller and note the **available** memory reported in the output. Again, we target the first worker node in the list by using `head` and send the command over `ssh`.
+
+```bash
+$ ssh `head -1 ./slurm_worker.txt` free
+ total used free shared buff/cache available
+Mem: 16139424 1433696 7885256 766356 6820472 13593564
+Swap: 0 0 0
+```
+
+Note the available memory listed in the **available** column.
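+
+If you prefer to capture this value in a shell variable for the installation step (a sketch under the same assumptions as the previous commands), you can parse the `free` output:
+
+```bash
+# Store the 'available' memory (in KB) reported by the first worker node.
+avail_mem=$(ssh "$(head -1 ./slurm_worker.txt)" free | awk '/^Mem:/ {print $7}')
+echo "$avail_mem"
+```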
+
+## Install Slurm on your cluster
+
+### Prerequisite: Install MariaDB
+
+Slurm requires MariaDB, a MySQL fork, to be installed from the Red Hat repository before installation. Azure maintains a private Red Hat repository mirror, and chamber VMs have access to it. Install and configure MariaDB with the following commands:
+
+```bash
+sudo yum install -y mariadb-server
+sudo systemctl start mariadb
+sudo systemctl enable mariadb
+mysql_secure_installation
+```
+
+The *mysql_secure_installation* script prompts for additional configuration.
+
+* The default database password isn't set. Press <kbd>Enter</kbd> when asked for the current password.
+* Enter *Y* when asked to set the root password. Create a new, secure root password for MariaDB, take note of it for later, then reenter it to confirm. You need this password when you configure the Slurm controller in the following step.
+* Enter *Y* for the remaining prompts:
+  * Removing anonymous users
+  * Disabling remote root login
+  * Removing the test database
+  * Reloading privilege tables
+
+### Install Slurm on the controller
+
+The Modeling and Simulation Workbench provides a setup script to speed installation. It requires the parameters you collected earlier in this tutorial. Execute these commands on the controller node, replacing the placeholders with the values you collected. The \<clusterNodes\> placeholder is a comma-separated list of host names with no spaces. The examples include a shortcut that reformats your compute node list in *slurm_worker.txt* into the proper comma-delimited format. The arguments of the *sdwChamberSlurm.sh* script are as follows:
+
+```bash
+sudo /usr/sdw/slurm/sdwChamberSlurm.sh CONTROLLER <databaseSecret> <clusterNodes> <numberOfCpus> <numberOfSockets> <coresPerSocket> <threadsPerCore> <availableMemory>
+```
+
+For this example, we use the list of nodes we created in the previous steps and substitute our values collected during discovery. The `paste` command is used to reformat the list of worker nodes into the comma-delimited format without needing to create a new file.
+
+```bash
+sudo /usr/sdw/slurm/sdwChamberSlurm.sh CONTROLLER <databaseSecret> `paste -d, -s ./slurm_worker.txt` 4 1 2 2 13593564
+```
+
+The output should be similar to:
+
+```bash
+Last metadata expiration check: 4:00:15 ago on Thu 03 Oct 2024 01:52:40 PM UTC.
+Package bzip2-devel-1.0.6-26.el8.x86_64 is already installed.
+Package gcc-8.5.0-18.2.el8_8.x86_64 is already installed..
+
+...
+
+Installed:
+ slurm-24.05.0-1.el8.x86_64 slurm-slurmctld-24.05.0-1.el8.x86_64 slurm-slurmdbd-24.05.0-1.el8.x86_64
+
+Complete!
+[INFO] Slurm Controller successfully installed.
+[INFO] Slurm Config successfully updated.
+[INFO] slurmdbd successfully started.
+[INFO] slurmctld successfully started.
+```
+
+> [!TIP]
+> If your installation shows any [ERROR] message in these steps, check that you haven't mistyped or misplaced any parameter. Review your information and repeat the step.
+
+### Install Slurm on compute nodes
+
+Slurm must now be installed on the compute nodes. To ease this task, use your home directory, which is mounted on all VMs, to distribute the files and scripts involved.
+
+From your user account, copy the *munge.key* file to your home directory.
+
+```bash
+cd
+sudo cp /etc/munge/munge.key .
+```
+
+Create a script named *node-munge.sh* to set up each node's **munge** settings. This script should be in your home directory.
+
+```bash
+$ cat > node-munge.sh <<END
+#!/bin/bash
+
+mkdir -p /etc/munge
+yum install -y munge
+cp munge.key /etc/munge/munge.key
+chown -R munge:munge /etc/munge/munge.key
+
+END
+```
+
+Using the same file of node host names that you used previously (*slurm_worker.txt*), execute the bash script you created on each node.
+
+```bash
+for host in `cat ./slurm_worker.txt`; do ssh $host sudo sh ~/node-munge.sh; done
+```
+
+Your output should be similar to:
+
+```bash
+Last metadata expiration check: 4:02:25 ago on Thu 03 Oct 2024 09:35:58 PM UTC.
+Dependencies resolved.
+================================================================================
+ Package
+ Arch Version Repository Size
+================================================================================
+Installing:
+ munge x86_64 0.5.13-2.el8 rhel-8-for-x86_64-appstream-eus-rhui-rpms 122 k
+
+...
+
+Running transaction
+ Preparing : 1/1
+ Running scriptlet: munge-0.5.13-2.el8.x86_64 1/1
+ Installing : munge-0.5.13-2.el8.x86_64 1/1
+ Running scriptlet: munge-0.5.13-2.el8.x86_64 1/1
+ Verifying : munge-0.5.13-2.el8.x86_64 1/1
+Installed products updated.
+
+Installed:
+ munge-0.5.13-2.el8.x86_64
+
+Complete!
+```
+
+> [!IMPORTANT]
+> After configuring the compute nodes, be sure to delete the *munge.key* file from your home directory.
+
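+For example, a simple cleanup sketch (removing the helper script as well is optional):
+
+```bash
+# Remove the copied munge key and the helper script from your home directory.
+rm -f ~/munge.key ~/node-munge.sh
+```
+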
+## Validate installation
+
+To validate that Slurm installed successfully, a Chamber Admin can execute the `sinfo` command on any Slurm node, either on the controller or on a compute node.
+
+```bash
+$ sinfo
+PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
+chamberSlurmPartition1* up infinite 5 idle wrkldvmslurm-nod0aef63d,wrkldvmslurm-nod034b970...
+```
+
+You can validate execution on the compute nodes by running a simple command with `srun`.
+
+```shell
+$ srun --nodes=6 hostname && srun sleep 30
+wrkldvmslurm-nod034b970
+wrkldvmslurm-nod0aef63d
+wrkldvmslurm-nod10870ad
+```
+
+If a job shows as *queued*, run `squeue` to list the job queue.
+
+```shell
+$ squeue
+ JOBID PARTITION NAME USER ST TIME NODES NODELIST(REASON)
+ 12 ChamberSl sleep jane.doe R 0:11 1 nodename1
+
+$ scontrol show job
+JobId=11 JobName=hostname
+ UserId=jane.doe(2982) GroupId=jane.doe(2982) MCS_label=N/A
+ Priority=4294901749 Nice=0 Account=(null) QOS=(null)
+ JobState=COMPLETED Reason=None Dependency=(null)
+ ...
+ NodeList=nodename[1-3]
+ BatchHost=nodename1
+ NumNodes=3 NumCPUs=48 NumTasks=3 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
+ ...
+ Command=hostname
+ WorkDir=/mount/sharedhome/jane.doe
+ Power=
+
+JobId=12 JobName=sleep
+ UserId=jane.doe(2982) GroupId=jane.doe(2982) MCS_label=N/A
+ Priority=429490148 Nice=0 Account=(null) QOS=(null)
+ JobState=COMPLETED Reason=None Dependency=(null)
+ ...
+ NodeList=nodename1
+ BatchHost=nodename1
+ NumNodes=1 NumCPUs=16 NumTasks=1 CPUs/Task=1 ReqB:S:C:T=0:0:*:*
+ ...
+ Command=sleep
+ WorkDir=/mount/sharedhome/jane.doe
+ Power=
+```
+
+## Troubleshooting
+
+If a node's state is reported as *down* or *drain*, use the `scontrol` command to return it to service. Follow that with the `sinfo` command to verify that the node is active again.
+
+```bash
+$ sudo -u slurm scontrol update nodename=nodename1,nodename2 state=resume
+
+$ sinfo
+PARTITION AVAIL TIMELIMIT NODES STATE NODELIST
+chamberSlurmPartition1* up infinite 3 idle nodename[1-3]
+```
+
+## Related content
+
+* [Slurm Workload Manager](https://slurm.schedmd.com/overview.html)
+* [Slurm Workload Manager Quick Start Administrator Guide](https://slurm.schedmd.com/quickstart_admin.html)
+* [Slurm Workload Manager configuration](https://slurm.schedmd.com/slurm.conf.html)
+* [Slurm Accounting Storage configuration](https://slurm.schedmd.com/slurmdbd.conf.html)
+* [VM Offerings on Modeling and Simulation Workbench](./concept-vm-offerings.md)
+* [Create chamber storage](./how-to-guide-manage-chamber-storage.md)
+* [Create shared storage](./how-to-guide-manage-shared-storage.md)
nat-gateway Manage Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/manage-nat-gateway.md
Previously updated : 02/16/2024 Last updated : 09/17/2024 #Customer intent: As a network administrator, I want to learn how to create and remove a NAT gateway resource from a virtual network subnet. I also want to learn how to add and remove public IP addresses and prefixes used for outbound connectivity.
This article explains how to manage the following aspects of NAT gateway:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+- An existing Azure Virtual Network and subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- - The example virtual network that is used in this article is named *myVNet*.
+ - The example virtual network that is used in this article is named *vnet-1*.
- - The example subnet is named *mySubnet*.
+ - The example subnet is named *subnet-1*.
- - The example NAT gateway is named *myNATgateway*.
+ - The example NAT gateway is named *nat-gateway*.
# [**Azure PowerShell**](#tab/manage-nat-powershell) - An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+- An existing Azure Virtual Network and subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- - The example virtual network that is used in this article is named *myVNet*.
+ - The example virtual network that is used in this article is named *vnet-1*.
- - The example subnet is named *mySubnet*.
+ - The example subnet is named *subnet-1*.
- - The example NAT gateway is named *myNATgateway*.
+ - The example NAT gateway is named *nat-gateway*.
To use Azure PowerShell for this article, you need:
To use Azure PowerShell for this article, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+- An existing Azure Virtual Network and subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- - The example virtual network that is used in this article is named *myVNet*.
+ - The example virtual network that is used in this article is named *vnet-1*.
- - The example subnet is named *mySubnet*.
+ - The example subnet is named *subnet-1*.
- - The example NAT gateway is named *myNATgateway*.
+ - The example NAT gateway is named *nat-gateway*.
To use Azure CLI for this article, you need:
To use Azure CLI for this article, you need:
- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). -- An existing Azure Virtual Network with a subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
+- An existing Azure Virtual Network and subnet. For more information, see [Quickstart: Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
- - The example virtual network that is used in this article is named *myVNet*.
+ - The example virtual network that is used in this article is named *vnet-1*.
- - The example subnet is named *mySubnet*.
+ - The example subnet is named *subnet-1*.
- - The example NAT gateway is named *myNATgateway*.
+ - The example NAT gateway is named *nat-gateway*.
You can create a NAT gateway resource and add it to an existing subnet by using
1. In the search box at the top of the Azure portal, enter *NAT gateway*. Select **NAT gateways** in the search results.
-1. Select **Create**.
+1. Select **+ Create**.
-1. Enter the following information in the **Basics** tab of **Create network address translation (NAT) gateway**.
+1. Enter or select the following information in the **Basics** tab of **Create network address translation (NAT) gateway**.
- - Select your **Subscription**.
- - Select your resource group or select **Create new** to create a new resource group.
- - **NAT gateway name**. Enter *myNATgateway*.
- - Select your **Region**. This example uses **East US 2**.
- - Select an **Availability zone**. This example uses **No Zone**. For more information about NAT gateway availability, see [NAT gateway and availability zones](nat-availability-zones.md). |
- - Select a **TCP idle timeout (minutes)**. This example uses the default of **4**.
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select your resource group or select **Create new** to create a new resource group. |
+ | **Instance details** | |
+ | NAT gateway name | Enter *nat-gateway*. |
+ | Region | Select your region. This example uses **East US 2**. |
+ | Availability zone | Select **No Zone**. For more information about NAT gateway availability, see [NAT gateway and availability zones](nat-availability-zones.md). |
+ | TCP idle timeout (minutes) | Select the default of **4**. |
1. Select the **Outbound IP** tab, or select **Next: Outbound IP**. 1. You can select an existing public IP address or prefix or both to associate with the NAT gateway and enable outbound connectivity.
- - To create a new public IP for the NAT gateway, select **Create a new public IP address**. Enter *myPublicIP-NAT* in **Name**. Select **OK**.
+ - To create a new public IP for the NAT gateway, select **Create a new public IP address**. Enter *public-ip-nat* in **Name**. Select **OK**.
- - To create a new public IP prefix for the NAT gateway, select **Create a new public IP prefix**. Enter *myPublicIPPrefix-NAT* in **Name**. Select a **Prefix size**. Select **OK**.
+ - To create a new public IP prefix for the NAT gateway, select **Create a new public IP prefix**. Enter *public-ip-prefix-nat* in **Name**. Select a **Prefix size**. Select **OK**.
1. Select the **Subnet** tab, or select **Next: Subnet**.
-1. Select your virtual network. In this example, select **myVNet** in the dropdown list.
+1. Select your virtual network. In this example, select **vnet-1** in the dropdown list.
-1. Select the checkbox next to **mySubnet**.
+1. Select the checkbox next to **subnet-1**.
1. Select **Review + create**.
Use the [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddr
```azurepowershell ## Create public IP address for NAT gateway ## $ip = @{
- Name = 'myPublicIP-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-nat'
+ ResourceGroupName = 'test-rg'
Location = 'eastus2' Sku = 'Standard' AllocationMethod = 'Static'
Use the [New-AzNatGateway](/powershell/module/az.network/new-aznatgateway) cmdle
```azurepowershell ## Place the virtual network into a variable. ## $net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
} $vnet = Get-AzVirtualNetwork @net ## Place the public IP address you created previously into a variable. ## $pip = @{
- Name = 'myPublicIP-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-nat'
+ ResourceGroupName = 'test-rg'
} $publicIP = Get-AzPublicIPAddress @pip ## Create NAT gateway resource ## $nat = @{
- ResourceGroupName = 'myResourceGroupNAT'
- Name = 'myNATgateway'
- IdleTimeoutInMinutes = '10'
+ ResourceGroupName = 'test-rg'
+ Name = 'nat-gateway'
+ IdleTimeoutInMinutes = '4'
Sku = 'Standard' Location = 'eastus2' PublicIpAddress = $publicIP
$natGateway = New-AzNatGateway @nat
## Create the subnet configuration. ## $sub = @{
- Name = 'mySubnet'
+ Name = 'subnet-1'
VirtualNetwork = $vnet NatGateway = $natGateway
- AddressPrefix = '10.0.2.0/24'
+ AddressPrefix = '10.0.0.0/24'
} Set-AzVirtualNetworkSubnetConfig @sub
Use the [New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefi
```azurepowershell ## Create public IP prefix for NAT gateway ## $ip = @{
- Name = 'myPublicIPPrefix-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-prefix-nat'
+ ResourceGroupName = 'test-rg'
Location = 'eastus2' Sku = 'Standard' PrefixLength ='29'
Use the [New-AzNatGateway](/powershell/module/az.network/new-aznatgateway) cmdle
```azurepowershell ## Place the virtual network into a variable. ## $net = @{
- Name = 'myVNet'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'vnet-1'
+ ResourceGroupName = 'test-rg'
} $vnet = Get-AzVirtualNetwork @net ## Place the public IP prefix you created previously into a variable. ## $pip = @{
- Name = 'myPublicIPPrefix-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-prefix-nat'
+ ResourceGroupName = 'test-rg'
} $publicIPprefix = Get-AzPublicIPPrefix @pip ## Create NAT gateway resource ## $nat = @{
- ResourceGroupName = 'myResourceGroupNAT'
- Name = 'myNATgateway'
- IdleTimeoutInMinutes = '10'
+ ResourceGroupName = 'test-rg'
+ Name = 'nat-gateway'
+ IdleTimeoutInMinutes = '4'
Sku = 'Standard' Location = 'eastus2' PublicIpPrefix = $publicIPprefix
$natGateway = New-AzNatGateway @nat
## Create the subnet configuration. ## $sub = @{
- Name = 'mySubnet'
+ Name = 'subnet-1'
VirtualNetwork = $vnet NatGateway = $natGateway
+ AddressPrefix = '10.0.0.0/24'
} Set-AzVirtualNetworkSubnetConfig @sub
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public
```azurecli az network public-ip create \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
--location eastus2 \
- --name myPublicIP-NAT \
+ --name public-ip-nat \
--sku standard ```
Use [az network nat gateway create](/cli/azure/network/nat/gateway#az-network-na
```azurecli az network nat gateway create \
- --resource-group myResourceGroup \
- --name myNATgateway \
- --public-ip-addresses myPublicIP-NAT \
- --idle-timeout 10
-
+ --resource-group test-rg \
+ --name nat-gateway \
+ --public-ip-addresses public-ip-nat \
+ --idle-timeout 4
``` Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vnet-subnet-update) to associate the NAT gateway with your virtual network subnet. ```azurecli az network vnet subnet update \
- --resource-group myResourceGroup \
- --vnet-name myVNet \
- --name mySubnet \
- --nat-gateway myNATgateway
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --name subnet-1 \
+ --nat-gateway nat-gateway
``` ### Public IP prefix
Use [az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-
```azurecli az network public-ip prefix create \ --length 29 \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
--location eastus2 \
- --name myPublicIPprefix-NAT
+ --name public-ip-prefix-nat
``` Use [az network nat gateway create](/cli/azure/network/nat/gateway#az-network-nat-gateway-create) to create a NAT gateway resource and associate the public IP prefix that you created. ```azurecli az network nat gateway create \
- --resource-group myResourceGroup \
- --name myNATgateway \
- --public-ip-prefixes myPublicIPprefix-NAT \
+ --resource-group test-rg \
+ --name nat-gateway \
+ --public-ip-prefixes public-ip-prefix-nat \
--idle-timeout 10 ```
Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vn
```azurecli az network vnet subnet update \
- --resource-group myResourceGroup \
- --vnet-name myVNet \
- --name mySubnet \
- --nat-gateway myNATgateway
+ --resource-group test-rg \
+ --vnet-name vnet-1 \
+ --name subnet-1 \
+ --nat-gateway nat-gateway
``` # [**Bicep**](#tab/manage-nat-bicep)
To remove a NAT gateway from an existing subnet, complete the following steps.
1. In the search box at the top of the Azure portal, enter *NAT gateway*. Select **NAT gateways** in the search results.
-1. Select **myNATgateway**.
+1. Select **nat-gateway**.
1. Under **Settings**, select **Subnets**.
You can now associate the NAT gateway with a different subnet or virtual network
1. In the search box at the top of the Azure portal, enter *NAT gateway*. Select **NAT gateways** in the search results.
-1. Select **myNATgateway**.
+1. Select **nat-gateway**.
1. Select **Delete**.
You can now associate the NAT gateway with a different subnet or virtual network
# [**Azure PowerShell**](#tab/manage-nat-powershell)
-Removing the NAT gateway from a subnet by using Azure PowerShell isn't currently supported.
+Use [Set-AzVirtualNetworkSubnetConfig](/powershell/module/az.network/set-azvirtualnetworksubnetconfig) to remove the NAT gateway association from the subnet by setting the value to $null. Use [Set-AzVirtualNetwork](/powershell/module/az.network/set-azvirtualnetwork) to update the virtual network configuration.
++
+```azurepowershell
+# Specify the resource group and NAT gateway name
+$resourceGroupName = "test-rg"
+
+# Specify the virtual network name and subnet name
+$virtualNetworkName = "vnet-1"
+$subnetName = "subnet-1"
+
+# Get the virtual network
+$vnet = @{
+ Name = $virtualNetworkName
+ ResourceGroupName = $resourceGroupName
+}
+$virtualNetwork = Get-AzVirtualNetwork @vnet
+
+# Get the subnet
+$subnet = $virtualNetwork.Subnets | Where-Object {$_.Name -eq $subnetName}
+
+# Remove the NAT gateway association from the subnet
+$subnet.NatGateway = $null
+
+# Update the subnet configuration
+$subConfig = @{
+ Name = $subnetName
+ VirtualNetwork = $virtualNetwork
+ AddressPrefix = $subnet.AddressPrefix
+}
+Set-AzVirtualNetworkSubnetConfig @subConfig
+
+# Update the virtual network
+Set-AzVirtualNetwork -VirtualNetwork $virtualNetwork
+
+```
+
+Use [Remove-AzNatGateway](/powershell/module/az.network/remove-aznatgateway) to delete the NAT gateway resource.
+
+```azurepowershell
+# Specify the resource group and NAT gateway name
+$resourceGroupName = "test-rg"
+$natGatewayName = "nat-gateway"
+
+$nat = @{
+ Name = $natGatewayName
+ ResourceGroupName = $resourceGroupName
+}
+Remove-AzNatGateway @nat
+```
# [**Azure CLI**](#tab/manage-nat-cli)
Use [az network vnet subnet update](/cli/azure/network/vnet/subnet#az-network-vn
```azurecli az network vnet subnet update \
-      --resource-group myResourceGroup \
-      --vnet-name myVNet \
-      --name mySubnet \
+      --resource-group test-rg \
+      --vnet-name vnet-1 \
+      --name subnet-1 \
      --remove natGateway ```
Use [az network nat gateway delete](/cli/azure/network/nat/gateway#az-network-na
```azurecli az network nat gateway delete \
- --name myNATgateway \
- --resource-group myResourceGroup
+ --name nat-gateway \
+ --resource-group test-rg
``` # [**Bicep**](#tab/manage-nat-bicep)
-Use the Azure portal, Azure PowerShell, or Azure CLI to remove a NAT gateway from a subnet and delete the resource.
+```bicep
+@description('Location of the resources')
+param location string = resourceGroup().location
+
+var existingVNetName = 'vnet-1'
+var existingSubnetName = 'subnet-1'
+
+resource vnet 'Microsoft.Network/virtualNetworks@2023-05-01' existing = {
+ name: existingVNetName
+}
+output vnetid string = vnet.id
+
+resource updatedsubnet01 'Microsoft.Network/virtualNetworks/subnets@2023-06-01' = {
+ parent: vnet
+ name: existingSubnetName
+ properties: {
+ addressPrefix: vnet.properties.subnets[0].properties.addressPrefix
+ }
+}
+
+```
Complete the following steps to add or remove a public IP address from a NAT gat
| Setting | Value | | - | -- | | Subscription | Select your subscription. |
- | Resource group | Select your resource group. The example uses **myResourceGroup**. |
+ | Resource group | Select your resource group. The example uses **test-rg**. |
| Region | Select a region. This example uses **East US 2**. |
- | Name | Enter *myPublicIP-NAT2*. |
+ | Name | Enter *public-ip-nat2*. |
| IP version | Select **IPv4**. | | SKU | Select **Standard**. | | Availability zone | Select the default of **Zone-redundant**. |
Complete the following steps to add or remove a public IP address from a NAT gat
1. In the search box at the top of the Azure portal, enter *NAT gateway*. Select **NAT gateways** in the search results.
-1. Select **myNATgateway**.
+1. Select **nat-gateway**.
1. Under **Settings**, select **Outbound IP**.
Complete the following steps to add or remove a public IP address from a NAT gat
To add a public IP address to the NAT gateway, add it to an array object along with the current IP addresses. The PowerShell cmdlets replace all the addresses.
-In this example, the existing IP address associated with the NAT gateway is named *myPublicIP-NAT*. Replace this value with an array that contains both myPublicIP-NAT and a new IP address. If you have multiple IP addresses already configured, you must also add them to the array.
+In this example, the existing IP address associated with the NAT gateway is named *public-ip-nat*. Replace this value with an array that contains both public-ip-nat and a new IP address. If you have multiple IP addresses already configured, you must also add them to the array.
Use [New-AzPublicIpAddress](/powershell/module/az.network/new-azpublicipaddress) to create a new IP address for the NAT gateway. ```azurepowershell ## Create public IP address for NAT gateway ## $ip = @{
- Name = 'myPublicIP-NAT2'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-nat2'
+ ResourceGroupName = 'test-rg'
Location = 'eastus2' Sku = 'Standard' AllocationMethod = 'Static'
Use [Set-AzNatGateway](/powershell/module/az.network/set-aznatgateway) to add th
```azurepowershell ## Place NAT gateway into a variable. ## $ng = @{
- Name = 'myNATgateway'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'nat-gateway'
+ ResourceGroupName = 'test-rg'
} $nat = Get-AzNatGateway @ng ## Place the existing public IP address associated with the NAT gateway into a variable. ## $ip = @{
- Name = 'myPublicIP-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-nat'
+ ResourceGroupName = 'test-rg'
} $publicIP1 = Get-AzPublicIPaddress @ip ## Place the public IP address you created previously into a variable. ## $ip = @{
- Name = 'myPublicIP-NAT2'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-nat2'
+ ResourceGroupName = 'test-rg'
} $publicIP2 = Get-AzPublicIPaddress @ip
Set-AzNatGateway @nt
### Remove public IP address
-To remove a public IP from a NAT gateway, create an array object that *doesn't* contain the IP address you want to remove. For example, you have a NAT gateway configured with two public IP addresses. You want to remove one of the IP addresses. The IP addresses associated with the NAT gateway are named myPublicIP-NAT and myPublicIP-NAT2. To remove myPublicIP-NAT2, create an array object for the PowerShell command that contains *only* myPublicIP-NAT. When you apply the command, the array is reapplied to the NAT gateway, and myPublicIP-NAT is the only associated public IP address.
+To remove a public IP from a NAT gateway, create an array object that *doesn't* contain the IP address you want to remove. For example, you have a NAT gateway configured with two public IP addresses. You want to remove one of the IP addresses. The IP addresses associated with the NAT gateway are named public-ip-nat and public-ip-nat2. To remove public-ip-nat2, create an array object for the PowerShell command that contains *only* public-ip-nat. When you apply the command, the array is reapplied to the NAT gateway, and public-ip-nat is the only associated public IP address.
Use [Set-AzNatGateway](/powershell/module/az.network/set-aznatgateway) to remove a public IP address from the NAT gateway. ```azurepowershell ## Place NAT gateway into a variable. ## $ng = @{
- Name = 'myNATgateway'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'nat-gateway'
+ ResourceGroupName = 'test-rg'
} $nat = Get-AzNatGateway @ng ## Place the existing public IP address associated with the NAT gateway into a variable. ## $ip = @{
- Name = 'myPublicIP-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-nat'
+ ResourceGroupName = 'test-rg'
} $publicIP1 = Get-AzPublicIPaddress @ip ## Place the second public IP address into a variable. ## $ip = @{
- Name = 'myPublicIP-NAT2'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-nat2'
+ ResourceGroupName = 'test-rg'
} $publicIP2 = Get-AzPublicIPAddress @ip
Set-AzNatGateway @nt
### Add public IP address
-In this example, the existing public IP address associated with the NAT gateway is named *myPublicIP-NAT*.
+In this example, the existing public IP address associated with the NAT gateway is named *public-ip-nat*.
Use [az network public-ip create](/cli/azure/network/public-ip#az-network-public-ip-create) to create a new IP address for the NAT gateway. ```azurecli az network public-ip create \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
--location eastus2 \
- --name myPublicIP-NAT2 \
+ --name public-ip-nat2 \
--sku standard ```
Use [az network nat gateway update](/cli/azure/network/nat/gateway#az-network-na
```azurecli az network nat gateway update \
- --name myNATgateway \
- --resource-group myResourceGroup \
- --public-ip-addresses myPublicIP-NAT myPublicIP-NAT2
+ --name nat-gateway \
+ --resource-group test-rg \
+ --public-ip-addresses public-ip-nat public-ip-nat2
``` ### Remove public IP address
-Use [az network nat gateway update](/cli/azure/network/nat/gateway#az-network-nat-gateway-update) to remove a public IP address from the NAT gateway. The Azure CLI command replaces the values. It doesn't remove a value. To remove a public IP address, include any IP address in the command that you want to keep. Omit the value that you want to remove. For example, you have a NAT gateway configured with two public IP addresses. You want to remove one of the IP addresses. The IP addresses associated with the NAT gateway are named myPublicIP-NAT and myPublicIP-NAT2. To remove myPublicIP-NAT2, omit the name of the IP address from the command. The command reapplies the IP addresses listed in the command to the NAT gateway. It removes any IP address not listed.
+Use [az network nat gateway update](/cli/azure/network/nat/gateway#az-network-nat-gateway-update) to remove a public IP address from the NAT gateway. The Azure CLI command replaces the values. It doesn't remove a value. To remove a public IP address, include any IP address in the command that you want to keep. Omit the value that you want to remove. For example, you have a NAT gateway configured with two public IP addresses. You want to remove one of the IP addresses. The IP addresses associated with the NAT gateway are named public-ip-nat and public-ip-nat2. To remove public-ip-nat2, omit the name of the IP address from the command. The command reapplies the IP addresses listed in the command to the NAT gateway. It removes any IP address not listed.
```azurecli az network nat gateway update \
- --name myNATgateway \
- --resource-group myResourceGroup \
- --public-ip-addresses myPublicIP-NAT
+ --name nat-gateway \
+ --resource-group test-rg \
+ --public-ip-addresses public-ip-nat
``` # [**Bicep**](#tab/manage-nat-bicep)
Complete the following steps to add or remove a public IP prefix from a NAT gate
| - | -- | | **Project details** | | | Subscription | Select your subscription. |
- | Resource group | Select your resource group. This example uses **myResourceGroup**. |
+ | Resource group | Select your resource group. This example uses **test-rg**. |
| **Instance details** | |
- | Name | Enter *myPublicIPPrefix-NAT*. |
+ | Name | Enter *public-ip-prefix-nat*. |
| Region | Select your region. This example uses **East US 2**. | | IP version | Select **IPv4**. | | Prefix ownership | Select **Microsoft owned**. |
Complete the following steps to add or remove a public IP prefix from a NAT gate
1. In the search box at the top of the Azure portal, enter *NAT gateway*. Select **NAT gateways** in the search results.
-1. Select **myNATgateway**.
+1. Select **nat-gateway**.
1. Under **Settings**, select **Outbound IP**.
Complete the following steps to add or remove a public IP prefix from a NAT gate
To add a public IP prefix to the NAT gateway, add it to an array object along with the current IP prefixes. The PowerShell cmdlets replace all the IP prefixes.
-In this example, the existing public IP prefix associated with the NAT gateway is named *myPublicIPprefix-NAT*. Replace this value with an array that contains both myPublicIPprefix-NAT and a new IP address prefix. If you have multiple IP prefixes already configured, you must also add them to the array.
+In this example, the existing public IP prefix associated with the NAT gateway is named *public-ip-prefix-nat*. Replace this value with an array that contains both public-ip-prefix-nat and a new IP address prefix. If you have multiple IP prefixes already configured, you must also add them to the array.
Use [New-AzPublicIpPrefix](/powershell/module/az.network/new-azpublicipprefix) to create a new public IP prefix for the NAT gateway. ```azurepowershell ## Create public IP prefix for NAT gateway ## $ip = @{
- Name = 'myPublicIPPrefix-NAT2'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-prefix-nat2'
+ ResourceGroupName = 'test-rg'
Location = 'eastus2' Sku = 'Standard' PrefixLength = '29'
Use [Set-AzNatGateway](/powershell/module/az.network/set-aznatgateway) to add th
```azurepowershell ## Place NAT gateway into a variable. ## $ng = @{
- Name = 'myNATgateway'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'nat-gateway'
+ ResourceGroupName = 'test-rg'
} $nat = Get-AzNatGateway @ng ## Place the existing public IP prefix associated with the NAT gateway into a variable. ## $ip = @{
- Name = 'myPublicIPprefix-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-prefix-nat'
+ ResourceGroupName = 'test-rg'
} $prefixIP1 = Get-AzPublicIPPrefix @ip ## Place the public IP prefix you created previously into a variable. ## $ip = @{
- Name = 'myPublicIPprefix-NAT2'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-prefix-nat2'
+ ResourceGroupName = 'test-rg'
} $prefixIP2 = Get-AzPublicIPprefix @ip
Set-AzNatGateway @nt
### Remove public IP prefix
-To remove a public IP prefix from a NAT gateway, create an array object that *doesn't* contain the IP address prefix that you want to remove. For example, you have a NAT gateway configured with two public IP prefixes. You want to remove one of the IP prefixes. The IP prefixes associated with the NAT gateway are named myPublicIPprefix-NAT and myPublicIPprefix-NAT2. To remove myPublicIPprefix-NAT2, create an array object for the PowerShell command that contains *only* myPublicIPprefix-NAT. When you apply the command, the array is reapplied to the NAT gateway, and myPublicIPprefix-NAT is the only prefix associated.
+To remove a public IP prefix from a NAT gateway, create an array object that *doesn't* contain the IP address prefix that you want to remove. For example, you have a NAT gateway configured with two public IP prefixes. You want to remove one of the IP prefixes. The IP prefixes associated with the NAT gateway are named public-ip-prefix-nat and public-ip-prefix-nat2. To remove public-ip-prefix-nat2, create an array object for the PowerShell command that contains *only* public-ip-prefix-nat. When you apply the command, the array is reapplied to the NAT gateway, and public-ip-prefix-nat is the only prefix associated.
Use the [Set-AzNatGateway](/powershell/module/az.network/set-aznatgateway) cmdlet to remove a public IP prefix from the NAT gateway. ```azurepowershell ## Place NAT gateway into a variable. ## $ng = @{
- Name = 'myNATgateway'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'nat-gateway'
+ ResourceGroupName = 'test-rg'
} $nat = Get-AzNatGateway @ng ## Place the existing public IP prefix associated with the NAT gateway into a variable. ## $ip = @{
- Name = 'myPublicIPprefix-NAT'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-prefix-nat'
+ ResourceGroupName = 'test-rg'
} $prefixIP1 = Get-AzPublicIPPrefix @ip ## Place the secondary public IP prefix into a variable. ## $ip = @{
- Name = 'myPublicIPprefix-NAT2'
- ResourceGroupName = 'myResourceGroup'
+ Name = 'public-ip-prefix-nat2'
+ ResourceGroupName = 'test-rg'
} $prefixIP2 = Get-AzPublicIPprefix @ip
Set-AzNatGateway @nt
### Add public IP prefix
-In this example, the existing public IP prefix associated with the NAT gateway is named *myPublicIPprefix-NAT*.
+In this example, the existing public IP prefix associated with the NAT gateway is named *public-ip-prefix-nat*.
Use [az network public-ip prefix create](/cli/azure/network/public-ip/prefix#az-network-public-ip-prefix-create) to create a public IP prefix for the NAT gateway. ```azurecli az network public-ip prefix create \ --length 29 \
- --resource-group myResourceGroup \
+ --resource-group test-rg \
--location eastus2 \
- --name myPublicIPprefix-NAT2
+ --name public-ip-prefix-nat2
``` Use [az network nat gateway update](/cli/azure/network/nat/gateway#az-network-nat-gateway-update) to add the public IP prefix that you created to the NAT gateway. The Azure CLI command replaces values. It doesn't add a value. To add the new IP address prefix to the NAT gateway, you must also include any other IP prefixes associated to the NAT gateway. ```azurecli az network nat gateway update \
- --name myNATgateway \
- --resource-group myResourceGroup \
- --public-ip-prefixes myPublicIPprefix-NAT myPublicIPprefix-NAT2
+ --name nat-gateway \
+ --resource-group test-rg \
+ --public-ip-prefixes public-ip-prefix-nat public-ip-prefix-nat2
``` ### Remove public IP prefix
-Use [az network nat gateway update](/cli/azure/network/nat/gateway#az-network-nat-gateway-update) to remove a public IP prefix from the NAT gateway. The Azure CLI command replaces the values. It doesn't remove a value. To remove a public IP prefix, include any prefix in the command that you wish to keep. Omit the one you want to remove. For example, you have a NAT gateway configured with two public IP prefixes. You want to remove one of the prefixes. The IP prefixes associated with the NAT gateway are named myPublicIPprefix-NAT and myPublicIPprefix-NAT2. To remove myPublicIPprefix-NAT2, omit the name of the IP prefix from the command. The command reapplies the IP prefixes listed in the command to the NAT gateway. It removes any IP address not listed.
+Use [az network nat gateway update](/cli/azure/network/nat/gateway#az-network-nat-gateway-update) to remove a public IP prefix from the NAT gateway. The Azure CLI command replaces the values. It doesn't remove a value. To remove a public IP prefix, include any prefix in the command that you wish to keep. Omit the one you want to remove. For example, you have a NAT gateway configured with two public IP prefixes. You want to remove one of the prefixes. The IP prefixes associated with the NAT gateway are named public-ip-prefix-nat and public-ip-prefix-nat2. To remove public-ip-prefix-nat2, omit the name of the IP prefix from the command. The command reapplies the IP prefixes listed in the command to the NAT gateway. It removes any IP address not listed.
```azurecli az network nat gateway update \
- --name myNATgateway \
- --resource-group myResourceGroup \
- --public-ip-prefixes myPublicIPprefix-NAT
+ --name nat-gateway \
+ --resource-group test-rg \
+ --public-ip-prefixes public-ip-prefix-nat
``` # [**Bicep**](#tab/manage-nat-bicep)
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
- Title: 'Tutorial: Monitor network communication with virtual machine scale set - Azure portal'-
-description: In this tutorial, you'll learn how to use Azure Network Watcher connection monitor tool to monitor network communication with a virtual machine scale set using the Azure portal.
---- Previously updated : 05/31/2024-
-#CustomerIntent: I need to monitor communication between a virtual machine scale set and a virtual machine. If the communication fails, I need to know why, so that I can resolve the problem.
---
-# Tutorial: Monitor network communication with a virtual machine scale set using the Azure portal
-
-Successful communication between a virtual machine scale set and another endpoint, such as virtual machine (VM), can be critical for your organization. Sometimes, the introduction of configuration changes can break communication.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Create a virtual machine scale set and a VM.
-> * Monitor communication between a scale set and a VM by using Connection monitor.
-> * Generate alerts on Connection monitor metrics.
-> * Diagnose a communication problem between two VMs, and learn how to resolve it.
-
-> [!NOTE]
-> This tutorial uses Connection monitor (classic). To experience enhanced connectivity monitoring, try the updated version of [Connection monitor](connection-monitor-overview.md).
-
-> [!IMPORTANT]
-> As of July 1, 2021, you can't add new connection monitors in Connection monitor (classic) but you can continue to use earlier versions that were created prior to that date. To minimize service disruption to your current workloads, [migrate from Connection monitor (classic) to the latest Connection monitor](migrate-to-connection-monitor-from-connection-monitor-classic.md) in Azure Network Watcher before February 29, 2024.
--
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
-
-## Prerequisites
-
-* An Azure subscription
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create a virtual machine scale set
-
-In the following sections, you create a virtual machine scale set.
-
-### Create a load balancer
-
-[Azure Load Balancer](../load-balancer/load-balancer-overview.md) distributes incoming traffic among healthy virtual machine instances.
-
-First, create a public standard load balancer using the Azure portal. The name and public IP address you create are automatically configured as the load balancer's front end.
-
-1. In the search box, enter **load balancer** and then, under **Marketplace** in the search results, select **Load balancer**.
-1. On the **Basics** pane of the **Create load balancer** page, do the following:
-
- | Setting | Value |
- | | |
- | Subscription | Select your subscription. |
- | Resource group | Select **Create new** and then, in the box, type **myVMSSResourceGroup**.|
- | Name | Enter **myLoadBalancer**. |
- | Region | Select **East US**. |
- | Type | Select **Public**. |
- | SKU | Select **Standard**. |
- | Public IP address | Select **Create new**. |
- | Public IP address name | Enter **myPip**. |
- | Assignment| Select **Static**. |
- | Availability zone | Select **Zone-redundant**. |
-
-1. Select **Review + create**.
-1. After it passes validation, select **Create**.
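If you prefer scripting the same configuration, here's a minimal Azure CLI sketch of the load balancer described in the preceding steps. It assumes the names used in this tutorial (myVMSSResourceGroup, myPip, myLoadBalancer) and the East US region; adjust them to match your environment.

```azurecli
# Create the resource group used by this tutorial.
az group create --name myVMSSResourceGroup --location eastus

# Create a zone-redundant standard public IP address for the load balancer front end.
az network public-ip create \
    --resource-group myVMSSResourceGroup \
    --name myPip \
    --sku Standard \
    --allocation-method Static \
    --zone 1 2 3

# Create the public standard load balancer with a backend pool for the scale set.
az network lb create \
    --resource-group myVMSSResourceGroup \
    --name myLoadBalancer \
    --sku Standard \
    --public-ip-address myPip \
    --backend-pool-name myBackendPool
```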
-
-### Create a virtual machine scale set
-
-You can deploy a scale set with a Windows Server image or Linux images such as Ubuntu or Red Hat Enterprise Linux.
-
-1. Type **Scale set** in the search box. In the results, under **Marketplace**, select **Virtual machine scale sets**.
-1. On the **Virtual machine scale sets** pane, select **Create**. The **Create a virtual machine scale set** page opens.
-1. On the **Basics** pane, under **Project details**, ensure that the correct subscription is selected, and then select **myVMSSResourceGroup** in the resource group list.
-1. For **Name**, type **myScaleSet**.
-1. For **Region**, select a region that's close to your area.
-1. Under **Orchestration**, for **Orchestration mode**, ensure that the **Uniform** option is selected.
-1. For **Image**, select a marketplace image. In this example, we've chosen *Ubuntu Server 18.04 LTS*.
-1. Enter your username, and then select the authentication type you prefer.
- - A **Password** must be at least 12 characters long and contain three of the following: a lowercase character, an uppercase character, a number, and a special character. For more information, see [username and password requirements](/azure/virtual-machines/windows/faq#what-are-the-password-requirements-when-creating-a-vm-).
- - If you select a Linux OS disk image, you can instead choose **SSH public key**. Provide only your public key, such as *~/.ssh/id_rsa.pub*. You can use the Azure Cloud Shell from the portal to [create and use SSH keys](/azure/virtual-machines/linux/mac-create-ssh-keys).
-
-1. Select **Next**.
-1. Leave the defaults for the **Instance** and **Disks** pages.
-1. On the **Networking** page, under **Load balancing**, select **Yes** to put the scale set instances behind a load balancer.
-1. For **Load balancing options**, select **Azure load balancer**.
-1. For **Select a load balancer**, select **myLoadBalancer**, which you created earlier.
-1. For **Select a backend pool**, select **Create new**, type **myBackendPool**, and then select **Create**.
-1. When you're done, select **Review + create**.
-1. After it passes validation, select **Create** to deploy the scale set.
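For reference, a roughly equivalent deployment can be sketched with the Azure CLI. This is an illustrative example, not part of the portal procedure: it assumes the names used above (myVMSSResourceGroup, myScaleSet, myLoadBalancer, myBackendPool), an Ubuntu image alias that's available in your CLI version, and SSH key authentication.

```azurecli
# Create a Uniform scale set behind the existing load balancer and backend pool.
az vmss create \
    --resource-group myVMSSResourceGroup \
    --name myScaleSet \
    --orchestration-mode Uniform \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys \
    --instance-count 2 \
    --lb myLoadBalancer \
    --backend-pool-name myBackendPool \
    --upgrade-policy-mode Automatic
```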
--
-After the scale set has been created, enable the Network Watcher extension in the scale set by doing the following:
-
-1. Under **Settings**, select **Extensions**.
-
-1. Select **Add extension**, and then select **Network Watcher Agent for Linux**. (The scale set in this example uses an Ubuntu image. If you deployed a Windows image instead, select **Network Watcher Agent for Windows**.)
-
-1. Under **Network Watcher Agent for Linux**, select **Create**.
-1. Under **Install extension**, select **OK**.
-1. Under **Extensions**, select **OK**.
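The same extension can also be added from the Azure CLI. The following sketch assumes the Ubuntu-based scale set created above; swap in `NetworkWatcherAgentWindows` if your scale set runs Windows.

```azurecli
# Install the Network Watcher agent extension on the scale set model.
az vmss extension set \
    --resource-group myVMSSResourceGroup \
    --vmss-name myScaleSet \
    --name NetworkWatcherAgentLinux \
    --publisher Microsoft.Azure.NetworkWatcher

# For a scale set with a manual upgrade policy, apply the updated model to all instances.
az vmss update-instances \
    --resource-group myVMSSResourceGroup \
    --name myScaleSet \
    --instance-ids "*"
```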
-
-## Create the VM
-
-Complete the steps in the "Create the first virtual machine" section of [Tutorial: Monitor network communication between two virtual machines by using the Azure portal](./connection-monitor.md), but **with the following changes**:
-
-|Step|Setting|Value|
-|:|||
-| 1 | Select a version of **Ubuntu Server**. | |
-| 3 | Name | Enter **myVm2**. |
-| 3 | Authentication type | Paste your SSH public key or select **Password**, and then enter a password. |
-| 3 | Resource group | Select **Use existing**, and then select **myResourceGroup**. |
-| 6 | Extensions | Select **Network Watcher Agent for Linux**. |
-
-The VM takes a few minutes to deploy. Wait for it to finish deploying before you continue with the remaining steps.
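If you'd rather script this VM as well, the following Azure CLI sketch captures the same intent. It's a hedged example that assumes the names used in this tutorial (myResourceGroup, myVm2) and an Ubuntu image; the linked tutorial's portal steps remain the authoritative procedure.

```azurecli
# Create the second endpoint VM with SSH key authentication.
az vm create \
    --resource-group myResourceGroup \
    --name myVm2 \
    --image Ubuntu2204 \
    --admin-username azureuser \
    --generate-ssh-keys

# Install the Network Watcher agent on the VM.
az vm extension set \
    --resource-group myResourceGroup \
    --vm-name myVm2 \
    --name NetworkWatcherAgentLinux \
    --publisher Microsoft.Azure.NetworkWatcher
```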
-
-## Create a connection monitor
-
-To create a monitor in Connection monitor by using the Azure portal:
-
-1. On the Azure portal home page, go to **Network Watcher**.
-1. On the left pane, in the **Monitoring** section, select **Connection monitor**.
-
- You'll see a list of the connection monitors that were created in Connection monitor. To see the connection monitors that were created in the classic Connection monitor, select the **Connection monitor** tab.
-
- :::image type="content" source="./media/connection-monitor-2-preview/cm-resource-view.png" alt-text="Screenshot that lists the connection monitors that were created in Connection monitor.":::
-
-1. On the **Connection monitor** dashboard, at the upper left, select **Create**.
-
-1. On the **Basics** pane, enter information for your connection monitor:
-
- a. **Connection Monitor Name**: Enter a name for your connection monitor. Use the standard naming rules for Azure resources.
- b. **Subscription**: Select a subscription for your connection monitor.
- c. **Region**: Select a region for your connection monitor. You can select only the source VMs that are created in this region.
- d. **Workspace configuration**: Your workspace holds your monitoring data. Do either of the following:
- * To use the default workspace, select the checkbox.
- * To choose a custom workspace, clear the checkbox, and then select the subscription and region for your custom workspace.
-
- :::image type="content" source="./media/connection-monitor-2-preview/create-cm-basics.png" alt-text="Screenshot that shows the 'Basics' pane in Connection monitor.":::
-
-1. Select **Next: Test groups**.
-
-1. Add sources, destinations, and test configurations in your test groups. To learn about setting up test groups, see [Create test groups in Connection monitor](#create-test-groups-in-a-connection-monitor).
-
- :::image type="content" source="./media/connection-monitor-2-preview/create-tg.png" alt-text="Screenshot that shows the 'Test groups' pane in Connection monitor.":::
-
-1. At the bottom of the pane, select **Next: Create Alerts**. To learn about creating alerts, see [Create alerts in Connection monitor](#create-alerts-in-connection-monitor).
-
- :::image type="content" source="./media/connection-monitor-2-preview/create-alert.png" alt-text="Screenshot that shows the 'Create alerts' pane.":::
-
-1. At the bottom of the pane, select **Next: Review + create**.
-
-1. On the **Review + create** pane, review the basic information and test groups before you create the connection monitor. If you need to edit the connection monitor, you can do so by going back to the respective panes.
-
- :::image type="content" source="./media/connection-monitor-2-preview/review-create-cm.png" alt-text="Screenshot that shows the 'Review + create' pane in Connection monitor.":::
-
- > [!NOTE]
- > The **Review + create** pane shows the cost per month during the Connection monitor stage. Currently, the **Current Cost/Month** column shows no charge. When Connection monitor becomes generally available, this column will show a monthly charge.
- >
- > Even during the Connection monitor stage, Log Analytics ingestion charges apply.
-
-1. When you're ready to create the connection monitor, at the bottom of the **Review + create** pane, select **Create**.
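Connection monitors can also be created with the Azure CLI. The following is a minimal sketch, not a full equivalent of the portal flow: it assumes a single source VM and a single public destination, and that the parameter names match the `az network watcher connection-monitor create` command in your installed CLI version (check `--help` if they differ).

```azurecli
# Create a connection monitor that tests TCP connectivity from a source VM to a destination.
az network watcher connection-monitor create \
    --name myConnectionMonitor \
    --location eastus \
    --endpoint-source-name sourceVm \
    --endpoint-source-resource-id $(az vm show --resource-group myResourceGroup --name myVm2 --query id --output tsv) \
    --endpoint-dest-name destinationEndpoint \
    --endpoint-dest-address bing.com \
    --test-config-name tcpTestConfig \
    --protocol Tcp \
    --tcp-port 443
```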
-
-## Create test groups in a connection monitor
-
- > [!NOTE]
- > Connection monitor now supports the auto-enabling of monitoring extensions for Azure and non-Azure endpoints, thus eliminating the need for manual installation of monitoring solutions during the creation of Connection monitor.
-
-Each test group in a connection monitor includes sources and destinations that get tested on network parameters. They're tested for the percentage of checks that fail and the round-trip time (RTT) over test configurations.
-
-In the Azure portal, to create a test group in a connection monitor, do the following:
-
-1. **Disable test group**: You can select this checkbox to disable monitoring for all sources and destinations that the test group specifies. This selection is cleared by default.
-1. **Name**: Name your test group.
-1. **Sources**: You can specify both Azure VMs and on-premises machines as sources if agents are installed on them. To learn about installing an agent for your source, see [Install monitoring agents](./connection-monitor-overview.md#install-monitoring-agents).
-
- * To choose Azure agents, select the **Azure endpoints** tab. Here you see only VMs or virtual machine scale sets that are bound to the region that you specified when you created the connection monitor. By default, VMs and virtual machine scale sets are grouped into the subscription that they belong to. These groups are collapsed.
-
- You can drill down from the **Subscription** level to other levels in the hierarchy:
-
- **Subscription** > **Resource group** > **Virtual network** > **Subnet** > **VMs with agents**
-
- You can also change the **Group by** selector to start the tree from any other level. For example, if you group by virtual network, you see the VMs that have agents in the hierarchy **Virtual network** > **Subnet** > **VMs with agents**.
-
-      When you select a virtual network, subnet, single VM, or virtual machine scale set, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-sources-1.png" alt-text="Screenshot that shows the 'Add Sources' pane and the Azure endpoints, including the 'virtual machine scale set' pane, in Connection monitor.":::
-
-    * To choose on-premises agents, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have the Network Performance Monitor configured.
-
- 1. Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created. You can also change the **Group by** selector to group by agents.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-sources.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection monitor.":::
-
-1. To choose recently used endpoints, you can use the **Recent endpoint** pane.
-
-    You aren't limited to endpoints that already have monitoring agents enabled. You can select Azure or non-Azure endpoints without the agent installed and proceed with creating the connection monitor. During the creation process, the monitoring agents for the endpoints will be automatically enabled.
-
- :::image type="content" source="./media/connection-monitor-2-preview/unified-enablement.png" alt-text="Screenshot that shows the 'Add Sources' pane and the 'Non-Azure endpoints' pane in Connection Monitor with unified enablement.":::
-
-1. When you finish setting up sources, select **Done** at the bottom of the pane. You can still edit basic properties like the endpoint name by selecting the endpoint in the **Create Test Group** view.
-
-1. **Destinations**: You can monitor connectivity to an Azure VM, an on-premises machine, or any endpoint (a public IP, URL, or FQDN) by specifying it as a destination. In a single test group, you can add Azure VMs, on-premises machines, Office 365 URLs, Dynamics 365 URLs, and custom endpoints.
-
- * To choose Azure VMs as destinations, select the **Azure endpoints** tab. By default, the Azure VMs are grouped into a subscription hierarchy that's in the region that you selected under **Create Connection Monitor** on the **Basics** pane. You can change the region and choose Azure VMs from the new region. Then you can drill down from the **Subscription** level to other levels in the hierarchy, just as you can when you set the source Azure endpoints.
-
- You can select virtual networks, subnets, or single VMs, as you can when you set the source Azure endpoints. When you select a virtual network, subnet, or single VM, the corresponding resource ID is set as the endpoint. By default, all VMs in the selected virtual network or subnet that have the Network Watcher extension participate in monitoring. To reduce the scope, either select specific subnets or agents or change the value of the scope property.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests1.png" alt-text="<Screenshot that shows the 'Add Destinations' pane and the 'Azure endpoints' pane.>":::
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-azure-dests2.png" alt-text="<Screenshot that shows the 'Add Destinations' pane at the Subscription level.>":::
-
- * To choose non-Azure agents as destinations, select the **Non-Azure endpoints** tab. By default, agents are grouped into workspaces by region. All these workspaces have Network Performance Monitor configured.
-
- If you need to add Network Performance Monitor to your workspace, get it from Azure Marketplace. For information about how to add Network Performance Monitor, see [Monitoring solutions in Azure Monitor](/previous-versions/azure/azure-monitor/insights/solutions). For information about how to configure agents for on-premises machines, see [Agents for on-premises machines](connection-monitor-overview.md#agents-for-on-premises-machines).
-
- Under **Create Connection Monitor**, on the **Basics** pane, the default region is selected. If you change the region, you can choose agents from workspaces in the new region. You can select one or more agents or subnets. In the **Subnet** view, you can select specific IPs for monitoring. If you add multiple subnets, a custom on-premises network named **OnPremises_Network_1** will be created.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-non-azure-dest.png" alt-text="Screenshot that shows the 'Add Destinations' pane and the 'Non-Azure endpoints' pane.":::
-
- * To choose public endpoints as destinations, select the **External Addresses** tab. The list of endpoints includes Office 365 test URLs and Dynamics 365 test URLs, grouped by name. You also can choose endpoints that were created in other test groups in the same connection monitor.
-
- To add an endpoint, in the upper-right corner, select **Add Endpoint**. Then provide an endpoint name and URL, IP, or FQDN.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-endpoints.png" alt-text="Screenshot that shows where to add public endpoints as destinations in Connection monitor.":::
-
- * To choose recently used endpoints, go to the **Recent endpoint** pane.
-
-1. When you finish choosing destinations, select **Done**. You can still edit basic properties, such as the endpoint name, by selecting the endpoint in the **Create Test Group** view.
-
-1. **Test configurations**: You can add one or more test configurations to a test group. Create a new test configuration by using the **New configuration** pane. Or add a test configuration from another test group in the same Connection monitor from the **Choose existing** pane.
-
- a. **Test configuration name**: Name the test configuration.
- b. **Protocol**: Select **TCP**, **ICMP**, or **HTTP**. To change HTTP to HTTPS, select **HTTP** as the protocol, and then select **443** as the port.
- c. **Create TCP test configuration**: This checkbox appears only if you select **HTTP** in the **Protocol** list. Select this checkbox to create another test configuration that uses the same sources and destinations that you specified elsewhere in your configuration. The new test configuration is named **\<name of test configuration>_networkTestConfig**.
- d. **Disable traceroute**: This checkbox applies when the protocol is TCP or ICMP. Select this box to stop sources from discovering topology and hop-by-hop RTT.
- e. **Destination port**: You can provide a destination port of your choice.
- f. **Listen on port**: This checkbox applies when the protocol is TCP. Select this checkbox to open the chosen TCP port if it's not already open.
- g. **Test Frequency**: In this list, specify how frequently sources will ping destinations on the protocol and port that you specified.
-
- You can choose 30 seconds, 1 minute, 5 minutes, 15 minutes, or 30 minutes. Select **custom** to enter another frequency that's between 30 seconds and 30 minutes. Sources will test connectivity to destinations based on the value that you choose. For example, if you select 30 seconds, sources will check connectivity to the destination at least once in every 30-second period.
- h. **Success Threshold**: You can set thresholds on the following network parameters:
-
    * **Checks failed**: Set the percentage of checks that can fail when sources check connectivity to destinations by using the criteria that you specified. For the TCP or ICMP protocol, the percentage of failed checks can be equated to the percentage of packet loss. For the HTTP protocol, this value represents the percentage of HTTP requests that received no response.
-
- * **Round trip time**: Set the RTT, in milliseconds, for how long sources can take to connect to the destination over the test configuration.
-
- :::image type="content" source="./media/connection-monitor-2-preview/add-test-config.png" alt-text="Screenshot that shows where to set up a test configuration in Connection monitor.":::
-
-1. **Test Groups**: You can add one or more Test Groups to a Connection monitor. These test groups can consist of multiple Azure or non-Azure endpoints.
-
- For selected Azure VMs or Azure virtual machine scale sets and non-Azure endpoints without monitoring extensions, the extension for Azure VMs and the Network Performance Monitor solution for non-Azure endpoints will be auto-enabled after the creation of Connection monitor begins.
-
- If the selected virtual machine scale set is set for manual upgrade, you'll have to upgrade the scale set after the Network Watcher extension installation. Doing so lets you continue setting up the Connection monitor with virtual machine scale sets as endpoints. If the virtual machine scale set is set to auto-upgrade, you don't need to worry about upgrading after the installation of the Network Watcher extension.
-
    For a virtual machine scale set that's set to manual upgrade, you can consent to an auto-upgrade of the scale set when the Network Watcher extension is auto-enabled during the creation of the connection monitor. This approach eliminates the need to manually upgrade the virtual machine scale set after you install the Network Watcher extension.
-
- :::image type="content" source="./media/connection-monitor-2-preview/consent-vmss-auto-upgrade.png" alt-text="Screenshot that shows where to set up a test group and consent for an auto-upgrade of the virtual machine scale set in Connection monitor.":::
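Test configurations map to CLI parameters as well. As a rough sketch only (the parameter names below are assumed from the `az network watcher connection-monitor test-configuration add` command; verify them with `--help` before use), adding a TCP test configuration with a custom frequency and success thresholds to an existing connection monitor might look like this:

```azurecli
# Add a TCP test configuration with a 60-second frequency and success thresholds.
# myConnectionMonitor and myTestGroup are placeholders for existing resources.
az network watcher connection-monitor test-configuration add \
    --connection-monitor myConnectionMonitor \
    --location eastus \
    --name tcpTestConfig60s \
    --protocol Tcp \
    --tcp-port 443 \
    --frequency 60 \
    --threshold-failed-percent 5 \
    --threshold-round-trip-time 100 \
    --test-groups myTestGroup
```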
--
-## Create alerts in Connection monitor
-
-You can set up alerts on tests that are failing based on the thresholds set in test configurations.
-
-In the Azure portal, to create alerts for a connection monitor, specify values for these fields:
-
-* **Create alert**: You can select this checkbox to create a metric alert in Azure Monitor. When you select this checkbox, the other fields will be enabled for editing. Additional charges for the alert will be applicable, based on the [pricing for alerts](https://azure.microsoft.com/pricing/details/monitor/).
-
-* **Scope** > **Resource** > **Hierarchy**: These values are automatically filled, based on the values specified on the **Basics** pane.
-
-* **Condition name**: The alert is created on the `Test Result(preview)` metric. When a connection monitor test returns a failing result, the alert rule fires.
-
-* **Action group name**: You can enter your email directly or you can create alerts via action groups. If you enter your email directly, an action group with the name **NPM Email ActionGroup** is created. The email ID is added to that action group. If you choose to use action groups, you need to select a previously created action group. To learn how to create an action group, see [Create action groups in the Azure portal](/azure/azure-monitor/alerts/action-groups). After the alert is created, you can [manage your alerts](/azure/azure-monitor/alerts/alerts-metric#view-and-manage-with-azure-portal).
-
-* **Alert rule name**: The name of the connection monitor.
-
-* **Enable rule upon creation**: Select this checkbox to enable the alert rule based on the condition. Disable this checkbox if you want to create the rule without enabling it.
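Equivalent alert rules can be scripted with `az monitor metrics alert create`. The following is a hedged sketch: the metric name `ChecksFailedPercent`, the scope (the regional Network Watcher resource), and the action group name are assumptions based on how Connection Monitor emits metrics, so confirm them in your environment before relying on this.

```azurecli
# Create a metric alert that fires when more than 5 percent of checks fail.
# myActionGroup is a placeholder for an existing action group in the same resource group.
az monitor metrics alert create \
    --name connection-monitor-checks-failed \
    --resource-group NetworkWatcherRG \
    --scopes $(az network watcher list --query "[?location=='eastus'].id" --output tsv) \
    --condition "avg ChecksFailedPercent > 5" \
    --action myActionGroup \
    --description "Connection monitor checks failed alert"
```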
--
-After you complete all the steps, monitoring extensions are automatically enabled for any endpoints that don't already have them, and then the connection monitor is created.
-
-After the creation process is successful, it takes about 5 minutes for the connection monitor to be displayed on the dashboard.
-
-## Virtual machine scale set coverage
-
-Currently, Connection monitor provides default coverage for the scale set instances that are selected as endpoints. Only a default percentage of the added scale set instances is randomly selected to monitor connectivity from the scale set to the endpoint.
-
-As a best practice, to avoid loss of data due to downscaling of instances, we recommend that you select *all* instances in a scale set while you're creating a test group, instead of selecting a particular few for monitoring your endpoints.
-
-## Scale limits
-
-Connection monitors have these scale limits:
-
-* Maximum connection monitors per subscription per region: 100
-* Maximum test groups per connection monitor: 20
-* Maximum sources and destinations per connection monitor: 100
-* Maximum test configurations per connection monitor: 2 via the Azure portal
-
-## Clean up resources
-
-When no longer needed, delete the **myResourceGroup** resource group and all of the resources it contains:
-
-1. In the search box at the top of the portal, enter ***myResourceGroup***. Select **myResourceGroup** from the search results.
-
-1. Select **Delete resource group**.
-
-1. In **Delete a resource group**, enter ***myResourceGroup***, and then select **Delete**.
-
-1. Select **Delete** to confirm the deletion of the resource group and all its resources.
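If you created the resources with the Azure CLI, you can remove them the same way. This one-liner assumes the resource group name used in the cleanup steps above.

```azurecli
# Delete the resource group and everything in it (runs asynchronously).
az group delete --name myResourceGroup --yes --no-wait
```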
-
-## Next step
-
-To learn how to diagnose and troubleshoot problems with virtual network gateways, advance to the next tutorial:
-
-> [!div class="nextstepaction"]
-> [Diagnose communication problems between networks](diagnose-communication-problem-between-networks.md)
operational-excellence Overview Relocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md
Each service specific guide can contain service-specific information on topics s
- Links to how-tos and relevant product-specific relocation information.
-## Service categories across region types
--
## Azure services relocation guides

The following tables provide links to each Azure service relocation document. The tables also provide information on which kind of relocation method is supported.
-### ![An icon that signifies this service is foundational.](./media/relocation/icon-foundational.svg) Foundational services
+
+### Analytics
| Product | Relocation | Relocation with data migration | Resource Mover |
| | | | |
[Azure Event Hubs](relocation-event-hub.md)| ✅ | ❌| ❌ |
[Azure Event Hubs Cluster](relocation-event-hub-cluster.md)| ✅ | ❌ | ❌ |
-[Azure Key Vault](./relocation-key-vault.md)| ✅ | ✅| ❌ |
-[Azure Load Balancer](../load-balancer/move-across-regions-external-load-balancer-portal.md)| ✅ | ✅| ❌ |
-[Azure Site Recovery (Recovery Services vaults)](relocation-site-recovery.md)| ✅ | ✅| ❌ |
+[Azure Stream Analytics - Stream Analytics jobs](../stream-analytics/copy-job.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Stream Analytics - Stream Analytics cluster](../stream-analytics/move-cluster.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
+[Power BI](/power-bi/admin/service-admin-region-move?toc=/azure/operational-excellence/toc.json)| ✅ |❌ | ❌ |
+
+### Compute
+
+| Product | Relocation | Relocation with data migration | Resource Mover |
+| | | | |
+[Azure App Service](../app-service/manage-move-across-regions.md?toc=/azure/operational-excellence/toc.json)|✅ | ❌| ❌ |
+[Azure Batch](../batch/account-move.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
+[Azure Functions](relocation-functions.md)|✅ |❌ | ❌ |
+[Azure Static Web Apps](./relocation-static-web-apps.md) | ✅ |❌ | ❌ |
[Azure Virtual Machines]( ../resource-mover/tutorial-move-region-virtual-machines.md?toc=/azure/operational-excellence/toc.json)| ❌ | ❌| ✅ |
[Azure Virtual Machine Scale Sets](./relocation-virtual-machine-scale-sets.md)|❌ |✅ | ❌ |
-[Azure Virtual Network](./relocation-virtual-network.md)| ✅| ❌ | ✅ |
-[Azure Virtual Network - Network Security Groups](./relocation-virtual-network-nsg.md)|✅ |❌ | ✅ |
-### ![An icon that signifies this service is mainstream.](./media/relocation/icon-mainstream.svg) Mainstream services
+### Containers
-| Product | Relocation |Relocation with data migration | Resource Mover |
+| Product | Relocation | Relocation with data migration | Resource Mover |
| | | | |
-[Azure API Management](../api-management/api-management-howto-migrate.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
-[Azure Application Gateway and Web Application Firewall](relocation-app-gateway.md)| ✅ | ❌| ❌ |
-[Azure App Service](../app-service/manage-move-across-regions.md?toc=/azure/operational-excellence/toc.json)|✅ | ❌| ❌ |
-[Azure Backup](relocation-backup.md)| ✅ | ❌| ❌ |
-[Azure Batch](../batch/account-move.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
-[Azure Cache for Redis](../azure-cache-for-redis/cache-moving-resources.md?toc=/azure/operational-excellence/toc.json)| ✅ | ❌| ❌ |
[Azure Container Registry](relocation-container-registry.md)|✅ | ✅| ❌ |
+[Azure Functions](relocation-functions.md)|✅ |❌ | ❌ |
+[Azure Kubernetes Service](relocation-kubernetes-service.md)|✅ |✅ | ❌ |
++
+### Databases
+
+| Product | Relocation | Relocation with data migration | Resource Mover |
+| | | | |
+[Azure Cache for Redis](../azure-cache-for-redis/cache-moving-resources.md?toc=/azure/operational-excellence/toc.json)| ✅ | ❌| ❌ |
[Azure Cosmos DB](relocation-cosmos-db.md)|✅ | ✅| ❌ |
[Azure Database for MariaDB Server](/azure/mariadb/howto-move-regions-portal?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
[Azure Database for MySQL Server](/azure/mysql/howto-move-regions-portal?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
[Azure Database for PostgreSQL](./relocation-postgresql-flexible-server.md)| ✅ | ✅| ❌ |
+### Integration
+
+| Product | Relocation |Relocation with data migration | Resource Mover |
+| | | | |
+[Azure API Management](../api-management/api-management-howto-migrate.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Logic apps](../logic-apps/move-logic-app-resources.md?toc=/azure/operational-excellence/toc.json)| ✅| ❌ | ❌ |
++
+### Internet of Things
+
+| Product | Relocation |Relocation with data migration | Resource Mover |
+| | | | |
+[Azure API Management](../api-management/api-management-howto-migrate.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Cosmos DB](relocation-cosmos-db.md)|✅ | ✅| ❌ |
[Azure Event Grid domains](relocation-event-grid-domains.md)| ✅ | ❌| ❌ |
[Azure Event Grid custom topics](relocation-event-grid-custom-topics.md)| ✅ | ❌| ❌ |
[Azure Event Grid system topics](relocation-event-grid-system-topics.md)| ✅ | ❌| ❌ |
-[Azure Firewall](./relocation-firewall.md)|❌ | ✅| ❌ |
[Azure Functions](relocation-functions.md)|✅ |❌ | ❌ |
-[Azure Kubernetes Service](relocation-kubernetes-service.md)|✅ |✅ | ❌ |
-[Azure Logic apps](../logic-apps/move-logic-app-resources.md?toc=/azure/operational-excellence/toc.json)| ✅| ❌ | ❌ |
-[Azure Monitor - Log Analytics](./relocation-log-analytics.md)| ✅| ❌ | ❌ |
-[Azure Private Link Service](./relocation-private-link.md) | ✅| ❌ | ❌ |
-[Azure Storage Account](relocation-storage-account.md)| ✅ | ✅| ❌ |
-[Managed identities for Azure resources](relocation-storage-account.md)| ✅| ❌ | ❌ |
+[Azure IoT Hub](/azure/iot-hub/iot-hub-how-to-clone?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
[Azure Stream Analytics - Stream Analytics jobs](../stream-analytics/copy-job.md?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
[Azure Stream Analytics - Stream Analytics cluster](../stream-analytics/move-cluster.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
-### ![An icon that signifies this service is strategic.](./media/relocation/icon-strategic.svg) Strategic services
-
-| Product | Relocation | Relocation with data migration | Resource Mover |
+### Management and governance
+| Product | Relocation |Relocation with data migration | Resource Mover |
| | | | |
[Azure Automation](./relocation-automation.md)| ✅ | ✅| ❌ |
-[Azure IoT Hub](/azure/iot-hub/iot-hub-how-to-clone?toc=/azure/operational-excellence/toc.json)| ✅ | ✅| ❌ |
+[Azure Backup](relocation-backup.md)| ✅ | ❌| ❌ |
+[Azure Monitor - Log Analytics](./relocation-log-analytics.md)| ✅| ❌ | ❌ |
+[Azure Site Recovery (Recovery Services vaults)](relocation-site-recovery.md)| ✅ | ✅| ❌ |
++
+### Networking
+
+| Product | Relocation |Relocation with data migration | Resource Mover |
+| | | | |
+[Azure Application Gateway and Web Application Firewall](relocation-app-gateway.md)| ✅ | ❌| ❌ |
+[Azure Load Balancer](../load-balancer/move-across-regions-external-load-balancer-portal.md)| ✅ | ✅| ❌ |
+[Azure Private Link Service](./relocation-private-link.md) | ✅| ❌ | ❌ |
+[Azure Virtual Network](./relocation-virtual-network.md)| ✅| ❌ | ✅ |
+[Azure Virtual Network - Network Security Groups](./relocation-virtual-network-nsg.md)|✅ |❌ | ✅ |
+
+### Security
+
+| Product | Relocation |Relocation with data migration | Resource Mover |
+| | | | |
+[Azure Firewall](./relocation-firewall.md)|❌ | ✅| ❌ |
+[Azure Application Gateway and Web Application Firewall](relocation-app-gateway.md)| ✅ | ❌| ❌ |
+[Azure Key Vault](./relocation-key-vault.md)| ✅ | ✅| ❌ |
+[Managed identities for Azure resources](relocation-storage-account.md)| ✅| ❌ | ❌ |
+
+### Storage
+
+| Product | Relocation |Relocation with data migration | Resource Mover |
+| | | | |
+[Azure Backup](relocation-backup.md)| ✅ | ❌| ❌ |
[Azure NetApp Files](./relocation-netapp.md)| ✅ | ✅| ❌ |
-[Azure Static Web Apps](./relocation-static-web-apps.md) | ✅ |❌ | ❌ |
-[Power BI](/power-bi/admin/service-admin-region-move?toc=/azure/operational-excellence/toc.json)| ✅ |❌ | ❌ |
+[Azure Storage Account](relocation-storage-account.md)| ✅ | ✅| ❌ |
## Additional information
reliability Availability Service By Category https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-service-by-category.md
As mentioned previously, Azure classifies services into three categories: founda
> | Virtual Machines: HBv3-series |
> | Virtual Machines: HCv1-series |
> | Virtual Machines: LSv2-series |
+> | Virtual Machines: LSv3-series |
> | Virtual Machines: Mv2-series |
> | Virtual Machines: NCv3-series |
> | Virtual Machines: NCasT4 v3-series |
sap Businessobjects Deployment Guide Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/businessobjects-deployment-guide-windows.md
For details on how to create a storage account, see [Create a FileStorage storag
#### Create Azure file shares
-The next step is to create Azure files in the storage account. Azure files use a provisioned model for premium file shares. In a provisioned business model, you proactively specify to Azure files what your storage requirements are, rather than being billed based on what you use. To understand more about this model, see [Provisioned model](../../storage/files/understanding-billing.md#provisioned-model). In this example, we create two Azure files: frsinput (256 GB) and frsoutput (256 GB) for the SAP BOBI file store.
+The next step is to create Azure file shares in the storage account. Azure Files uses a provisioned model for premium file shares. In a provisioned billing model, you proactively specify your storage requirements to Azure Files, rather than being billed based on what you use. To understand more about this model, see [Provisioned model](../../storage/files/understanding-billing.md#provisioned-v1-model). In this example, we create two Azure file shares, frsinput (256 GB) and frsoutput (256 GB), for the SAP BOBI file store.
1. Go to the storage account **azusbobi** > **File shares**.
1. Select **New file share**.
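For reference, the same shares can be provisioned with the Azure CLI. This is a sketch under the assumptions of this example (storage account azusbobi and two 256-GB provisioned shares named frsinput and frsoutput); replace the resource group placeholder, which isn't shown in this excerpt.

```azurecli
# Create two provisioned premium file shares for the SAP BOBI file store.
az storage share-rm create \
    --resource-group <resource-group-name> \
    --storage-account azusbobi \
    --name frsinput \
    --quota 256

az storage share-rm create \
    --resource-group <resource-group-name> \
    --storage-account azusbobi \
    --name frsoutput \
    --quota 256
```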
sap High Availability Guide Rhel Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-rhel-nfs-azure-files.md
The SAP file systems that don't need to be mounted via NFS can also be deployed
When you plan your deployment with NFS on Azure Files, consider the following important points:
-* The minimum share size is 100 GiB. You only pay for the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-model).
+* The minimum share size is 100 GiB. You only pay for the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-v1-model).
* Size your NFS shares not only based on capacity requirements but also on IOPS and throughput requirements. For more information, see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
* Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues with NFS on Azure Files, see [Troubleshoot Azure file share performance](../../storage/files/files-troubleshoot-performance.md).
* For SAP J2EE systems, it's not supported to place `/usr/sap/<SID>/J<nr>` on NFS on Azure Files.
sap High Availability Guide Suse Nfs Azure Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-azure-files.md
Next, deploy the NFS shares in the storage account you created. In this example,
When you plan your deployment with NFS on Azure Files, consider the following important points:
-* The minimum share size is 100 GiB. You only pay for the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-model).
+* The minimum share size is 100 GiB. You only pay for the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-v1-model).
* Size your NFS shares not only based on capacity requirements, but also on IOPS and throughput requirements. For details see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
* Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues on Azure Files, consult [Troubleshoot Azure file shares performance](../../storage/files/files-troubleshoot-performance.md).
* For SAP J2EE systems, it's not supported to place `/usr/sap/<SID>/J<nr>` on NFS on Azure Files.
sap High Availability Guide Suse Nfs Simple Mount https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-nfs-simple-mount.md
The SAP file systems that don't need to be mounted via NFS can also be deployed
When you plan your deployment with NFS on Azure Files, consider the following important points:
-* The minimum share size is 100 gibibytes (GiB). You pay for only the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-model).
+* The minimum share size is 100 GiB. You pay for only the [capacity of the provisioned shares](../../storage/files/understanding-billing.md#provisioned-v1-model).
* Size your NFS shares not only based on capacity requirements, but also on IOPS and throughput requirements. For details, see [Azure file share targets](../../storage/files/storage-files-scale-targets.md#azure-file-share-scale-targets).
* Test the workload to validate your sizing and ensure that it meets your performance targets. To learn how to troubleshoot performance issues with NFS on Azure Files, consult [Troubleshoot Azure file share performance](../../storage/files/files-troubleshoot-performance.md).
* For SAP J2EE systems, placing `/usr/sap/<SID>/J<nr>` on NFS on Azure Files isn't supported.
security Azure CA Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/azure-CA-details.md
Previously updated : 07/22/2024 Last updated : 10/07/2024
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| Certificate Authority | Serial Number /<br>Thumbprint |
|- |- |
-| [Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | 0x20000b9<br>D4DE20D05E66FC53FE1A50882C78DB2852CAE474 |
+|
| [DigiCert Global Root CA](https://cacerts.digicert.com/DigiCertGlobalRootCA.crt) | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 | | [DigiCert Global Root G2](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 | | [DigiCert Global Root G3](https://cacerts.digicert.com/DigiCertGlobalRootG3.crt) | 0x055556bcf25ea43535c3a40fd5ab4572<br>7E04DE896A3E666D00E687D33FFAD93BE83D349E |
-| [Entrust Root Certification Authority G2](https://web.entrust.com/root-certificates/entrust_g2_ca.cer) | 4a538c28<br>8cf427fd790c3ad166068de81e57efbb932272d4 |
+| [Entrust Root Certification Authority G2](http://web.entrust.com/root-certificates/entrust_g2_ca.cer) | 4a538c28<br>8CF427FD790C3AD166068DE81E57EFBB932272D4 |
| [Microsoft ECC Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
-| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73a5e64a3bff8316ff0edccc618a906e4eae4d74 |
+| [Microsoft RSA Root Certificate Authority 2017](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |
### Subordinate Certificate Authorities
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
|- |- | | [DigiCert Basic RSA CN CA G2](https://crt.sh/?d=2545289014) | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 | | [DigiCert Cloud Services CA-1](https://crt.sh/?d=12624881) | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 |
-| [DigiCert Cloud Services CA-1](https://crt.sh/?d=B3F6B64A07BB9611F47174407841F564FB991F29) | 0f171a48c6f223809218cd2ed6ddc0e8<br>b3f6b64a07bb9611f47174407841f564fb991f29 |
+| [DigiCert Cloud Services CA-1](https://crt.sh/?d=3439320284) | 0f171a48c6f223809218cd2ed6ddc0e8<br>B3F6B64A07BB9611F47174407841F564FB991F29 |
| [DigiCert SHA2 Secure Server CA](https://crt.sh/?d=3422153451) | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C | | [DigiCert TLS Hybrid ECC SHA384 2020 CA1](https://crt.sh/?d=3422153452) | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 | | [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
-| [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=6938FD4D98BAB03FAADB97B34396831E3780AEA1) | 0a3508d55c292b017df8ad65c00ff7e4<br>6938fd4d98bab03faadb97b34396831e3780aea1 |
-| [Entrust Certification Authority - L1K](https://aia.entrust.net/l1k-chain256.cer) | 0ee94cc30000000051d37785<br>f21c12f46cdb6b2e16f09f9419cdff328437b2d7 |
-| [Entrust Certification Authority - L1M](https://aia.entrust.net/l1m-chain256.cer) | 61a1e7d20000000051d366a6<br>cc136695639065fab47074d28c55314c66077e90 |
+| [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=3427370830) | 0a3508d55c292b017df8ad65c00ff7e4<br>6938FD4D98BAB03FAADB97B34396831E3780AEA1 |
+| [Entrust Certification Authority - L1K](http://aia.entrust.net/l1k-chain256.cer) | 0ee94cc30000000051d37785<br>F21C12F46CDB6B2E16F09F9419CDFF328437B2D7 |
+| [Entrust Certification Authority - L1M](http://aia.entrust.net/l1m-chain256.cer) | 61a1e7d20000000051d366a6<br>CC136695639065FAB47074D28C55314C66077E90 |
| [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF | | [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x01529ee8368f0b5d72ba433e2d8ea62d<br>56D955C849887874AA1767810366D90ADF6C8536 | | [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 | | [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0efb7e547edf0ff1069aee57696d7ba0<br>31600991ED5FEC63D355A5484A6DCC787EAD89BC | | [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E |
-| [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) | 0x33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 |
-| [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) | 0x33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 |
-| [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) | 0x330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 |
-| [Microsoft ECC TLS Issuing EOC CA 02](https://crt.sh/?d=4814787085) | 0x330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a |
-| [Microsoft RSA TLS CA 01](https://crt.sh/?d=3124375355) | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A |
-| [Microsoft RSA TLS CA 02](https://crt.sh/?d=3124375356) | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 |
-| [Microsoft RSA TLS Issuing AOC CA 01](https://crt.sh/?d=4789678141) | 0x330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee |
-| [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092) | 0x3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e |
-| [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) | 0x33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 |
-| [Microsoft RSA TLS Issuing EOC CA 02](https://crt.sh/?d=4814787087) | 0x3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 |
+| [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) | 0x33000000282bfd23e7d1add707000000000028<br>30AB5C33EB4B77D4CBFF00A11EE0A7507D9DD316 |
+| [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) | 0x33000000290f8a6222ef6a5695000000000029<br>3709CD92105D074349D00EA8327F7D5303D729C8 |
+| [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) | 0x330000002a2d006485fdacbfeb00000000002a<br>5FA13B879B2AD1B12E69D476E6CAD90D01013B46 |
+| [Microsoft ECC TLS Issuing EOC CA 02](https://crt.sh/?d=4814787085) | 0x330000002be6902838672b667900000000002b<br>58A1D8B1056571D32BE6A7C77ED27F73081D6E7A |
+| [Microsoft RSA TLS Issuing AOC CA 01](https://crt.sh/?d=4789678141) | 0x330000002ffaf06f6697e2469c00000000002f<br>4697FDBED95739B457B347056F8F16A975BAF8EE |
+| [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092) | 0x3300000030c756cc88f5c1e7eb000000000030<br>90ED2E9CB40D0CB49A20651033086B1EA2F76E0E |
+| [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) | 0x33000000310c4914b18c8f339a000000000031<br>A04D3750DEBFCCF1259D553DBEC33162C6B42737 |
+| [Microsoft RSA TLS Issuing EOC CA 02](https://crt.sh/?d=4814787087) | 0x3300000032444d7521341496a9000000000032<br>697C6404399CC4E7BB3C0D4A8328B71DD3205563 |
# [Certificate Authority chains](#tab/certificate-authority-chains)
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| Certificate Authority | Serial Number<br>Thumbprint |
|- |- |
-| [**Baltimore CyberTrust Root**](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | 020000b9<br>d4de20d05e66fc53fe1a50882c78db2852cae474 |
-| └ [Microsoft RSA TLS CA 01](https://crt.sh/?d=3124375355) | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A |
-| └ [Microsoft RSA TLS CA 02](https://crt.sh/?d=3124375356) | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 |
| [**DigiCert Global Root CA**](https://cacerts.digicert.com/DigiCertGlobalRootCA.crt) | 0x083be056904246b1a1756ac95991c74a<br>A8985D3A65E5E5C4B2D7D66D40C6DD2FB19C5436 |
| └ [DigiCert Basic RSA CN CA G2](https://crt.sh/?d=2545289014) | 0x02f7e1f982bad009aff47dc95741b2f6<br>4D1FA5D1FB1AC3917C08E43F65015E6AEA571179 |
| └ [DigiCert Cloud Services CA-1](https://crt.sh/?d=12624881) | 0x019ec1c6bd3f597bb20c3338e551d877<br>81B68D6CD2F221F8F534E677523BB236BBA1DC56 |
+| └ [DigiCert Cloud Services CA-1](https://crt.sh/?d=3439320284) | 0f171a48c6f223809218cd2ed6ddc0e8<br>B3F6B64A07BB9611F47174407841F564FB991F29 |
| └ [DigiCert SHA2 Secure Server CA](https://crt.sh/?d=3422153451) | 0x02742eaa17ca8e21c717bb1ffcfd0ca0<br>626D44E704D1CEABE3BF0D53397464AC8080142C |
| └ [DigiCert TLS Hybrid ECC SHA384 2020 CA1](https://crt.sh/?d=3422153452) | 0x0a275fe704d6eecb23d5cd5b4b1a4e04<br>51E39A8BDB08878C52D6186588A0FA266A69CF28 |
| └ [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=4385364571) | 0x06d8d904d5584346f68a2fa754227ec4<br>1C58A3A8518E8759BF075B76B750D4F2DF264FCD |
+| └ [DigiCert TLS RSA SHA256 2020 CA1](https://crt.sh/?d=3427370830) | 0a3508d55c292b017df8ad65c00ff7e4<br>6938FD4D98BAB03FAADB97B34396831E3780AEA1 |
| └ [GeoTrust Global TLS RSA4096 SHA256 2022 CA1](https://crt.sh/?d=6670931375) | 0x0f622f6f21c2ff5d521f723a1d47d62d<br>7E6DB7B7584D8CF2003E0931E6CFC41A3A62D3DF |
| [**DigiCert Global Root G2**](https://cacerts.digicert.com/DigiCertGlobalRootG2.crt) | 0x033af1e6a711a9a0bb2864b11d09fae5<br>DF3C24F9BFD666761B268073FE06D1CC8D4F82A4 |
| └ [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003%20-%20xsign.crt) | 0x05196526449a5e3d1a38748f5dcfebcc<br>F9388EA2C9B7D632B66A2B0B406DF1D37D3901F6 |
Any entity trying to access Microsoft Entra identity services via the TLS/SSL pr
| └ [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004%20-%20xsign.crt) | 0x02393d48d702425a7cb41c000b0ed7ca<br>FB73FDC24F06998E070A06B6AFC78FDF2A155B25 |
| └ [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007%20-%20xsign.crt) | 0x0f1f157582cdcd33734bdc5fcd941a33<br>3BE6CA5856E3B9709056DA51F32CBC8970A83E28 |
| └ [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008%20-%20xsign.crt) | 0x0ef2e5d83681520255e92c608fbc2ff4<br>716DF84638AC8E6EEBE64416C8DD38C2A25F6630 |
-| [**Entrust Root Certification Authority G2**](https://web.entrust.com/root-certificates/entrust_g2_ca.cer) | 4a538c28<br>8cf427fd790c3ad166068de81e57efbb932272d4 |
-| Γöö [Entrust Certification Authority - L1K](https://aia.entrust.net/l1k-chain256.cer) | 0ee94cc30000000051d37785<br>f21c12f46cdb6b2e16f09f9419cdff328437b2d7 |
-| Γöö [Entrust Certification Authority - L1M](https://aia.entrust.net/l1m-chain256.cer) | 61a1e7d20000000051d366a6<br>cc136695639065fab47074d28c55314c66077e90 |
+| [**Entrust Root Certification Authority G2**](http://web.entrust.com/root-certificates/entrust_g2_ca.cer) | 4a538c28<br>8CF427FD790C3AD166068DE81E57EFBB932272D4 |
+| Γöö [Entrust Certification Authority - L1K](http://aia.entrust.net/l1k-chain256.cer) | 0ee94cc30000000051d37785<br>F21C12F46CDB6B2E16F09F9419CDFF328437B2D7 |
+| Γöö [Entrust Certification Authority - L1M](http://aia.entrust.net/l1m-chain256.cer) | 61a1e7d20000000051d366a6<br>CC136695639065FAB47074D28C55314C66077E90 |
| [**Microsoft ECC Root Certificate Authority 2017**](https://www.microsoft.com/pkiops/certs/Microsoft%20ECC%20Root%20Certificate%20Authority%202017.crt) | 0x66f23daf87de8bb14aea0c573101c2ec<br>999A64C37FF47D9FAB95F14769891460EEC4C3C5 |
| └ [Microsoft Azure ECC TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2003.crt) | 0x330000003322a2579b5e698bcc000000000033<br>91503BE7BF74E2A10AA078B48B71C3477175FEC3 |
| └ [Microsoft Azure ECC TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2004.crt) | 0x33000000322164aedab61f509d000000000032<br>406E3B38EFF35A727F276FE993590B70F8224AED |
| └ [Microsoft Azure ECC TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2007.crt) | 0x3300000034c732435db22a0a2b000000000034<br>AB3490B7E37B3A8A1E715036522AB42652C3CFFE |
| └ [Microsoft Azure ECC TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2008.crt) | 0x3300000031526979844798bbb8000000000031<br>CF33D5A1C2F0355B207FCE940026E6C1580067FD |
-| └ [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) |33000000282bfd23e7d1add707000000000028<br>30ab5c33eb4b77d4cbff00a11ee0a7507d9dd316 |
-| └ [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) |33000000290f8a6222ef6a5695000000000029<br>3709cd92105d074349d00ea8327f7d5303d729c8 |
-| └ [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) |330000002a2d006485fdacbfeb00000000002a<br>5fa13b879b2ad1b12e69d476e6cad90d01013b46 |
-| └ [Microsoft ECC TLS Issuing EOC CA 02](https://crt.sh/?d=4814787085) |330000002be6902838672b667900000000002b<br>58a1d8b1056571d32be6a7c77ed27f73081d6e7a |
+| └ [Microsoft ECC TLS Issuing AOC CA 01](https://crt.sh/?d=4789656467) |33000000282bfd23e7d1add707000000000028<br>30AB5C33EB4B77D4CBFF00A11EE0A7507D9DD316 |
+| └ [Microsoft ECC TLS Issuing AOC CA 02](https://crt.sh/?d=4814787086) |33000000290f8a6222ef6a5695000000000029<br>3709CD92105D074349D00EA8327F7D5303D729C8 |
+| └ [Microsoft ECC TLS Issuing EOC CA 01](https://crt.sh/?d=4814787088) |330000002a2d006485fdacbfeb00000000002a<br>5FA13B879B2AD1B12E69D476E6CAD90D01013B46 |
+| └ [Microsoft ECC TLS Issuing EOC CA 02](https://crt.sh/?d=4814787085) |330000002be6902838672b667900000000002b<br>58A1D8B1056571D32BE6A7C77ED27F73081D6E7A |
| [**Microsoft RSA Root Certificate Authority 2017**](https://www.microsoft.com/pkiops/certs/Microsoft%20RSA%20Root%20Certificate%20Authority%202017.crt) | 0x1ed397095fd8b4b347701eaabe7f45b3<br>73A5E64A3BFF8316FF0EDCCC618A906E4EAE4D74 |
| └ [Microsoft Azure RSA TLS Issuing CA 03](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2003.crt) | 0x330000003968ea517d8a7e30ce000000000039<br>37461AACFA5970F7F2D2BAC5A659B53B72541C68 |
| └ [Microsoft Azure RSA TLS Issuing CA 04](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2004.crt) | 0x330000003cd7cb44ee579961d000000000003c<br>7304022CA8A9FF7E3E0C1242E0110E643822C45E |
| └ [Microsoft Azure RSA TLS Issuing CA 07](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2007.crt) | 0x330000003bf980b0c83783431700000000003b<br>0E5F41B697DAADD808BF55AD080350A2A5DFCA93 |
| └ [Microsoft Azure RSA TLS Issuing CA 08](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20RSA%20TLS%20Issuing%20CA%2008.crt) | 0x330000003a5dc2ffc321c16d9b00000000003a<br>512C8F3FB71EDACF7ADA490402E710B10C73026E |
| └ [Microsoft RSA TLS Issuing AOC CA 01](https://crt.sh/?d=4789678141) |330000002ffaf06f6697e2469c00000000002f<br>4697fdbed95739b457b347056f8f16a975baf8ee |
-| └ [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092) |3300000030c756cc88f5c1e7eb000000000030<br>90ed2e9cb40d0cb49a20651033086b1ea2f76e0e |
-| └ [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) |33000000310c4914b18c8f339a000000000031<br>a04d3750debfccf1259d553dbec33162c6b42737 |
-| └ [Microsoft RSA TLS Issuing EOC CA 02](https://crt.sh/?d=4814787087) |3300000032444d7521341496a9000000000032<br>697c6404399cc4e7bb3c0d4a8328b71dd3205563 |
+| └ [Microsoft RSA TLS Issuing AOC CA 02](https://crt.sh/?d=4814787092) |3300000030c756cc88f5c1e7eb000000000030<br>90ED2E9CB40D0CB49A20651033086B1EA2F76E0E |
+| └ [Microsoft RSA TLS Issuing EOC CA 01](https://crt.sh/?d=4814787098) |33000000310c4914b18c8f339a000000000031<br>A04D3750DEBFCCF1259D553DBEC33162C6B42737 |
+| └ [Microsoft RSA TLS Issuing EOC CA 02](https://crt.sh/?d=4814787087) |3300000032444d7521341496a9000000000032<br>697C6404399CC4E7BB3C0D4A8328B71DD3205563 |
AIA:
- `www.microsoft.com`

CRL:
-- `crl.microsoft.com`
- `crl3.digicert.com`
- `crl4.digicert.com`
- `crl.digicert.cn`
- `cdp.geotrust.com`
-- `mscrl.microsoft.com`
- `www.microsoft.com`

OCSP:
-- `ocsp.msocsp.com`
- `ocsp.digicert.com`
- `ocsp.digicert.cn`
- `oneocsp.microsoft.com`
To determine if the **Microsoft ECC Root Certificate Authority 2017** and **Micr
1. Open a terminal window on your system.
1. Run the following command:

   ```bash
   keytool -list -keystore $JAVA_HOME/jre/lib/security/cacerts
   ```

   - `$JAVA_HOME` refers to the path to the Java home directory.
   - If you're unsure of the path, you can find it by running the following command:
To determine if the **Microsoft ECC Root Certificate Authority 2017** and **Micr
   ...
   ```

1. To add a root certificate to the trusted root certificate store in Java, you can use the `keytool` utility. The following example adds the **Microsoft ECC Root Certificate Authority 2017** and **Microsoft RSA Root Certificate Authority 2017** root certificates:

   ```bash
   keytool -import -file microsoft-ecc-root-ca.crt -alias microsoft-ecc-root-ca -keystore $JAVA_HOME/jre/lib/security/cacerts
   keytool -import -file microsoft-rsa-root-ca.crt -alias microsoft-rsa-root-ca -keystore $JAVA_HOME/jre/lib/security/cacerts
   ```

   > [!NOTE]
   > In this example, `microsoft-ecc-root-ca.crt` and `microsoft-rsa-root-ca.crt` are the names of the files that contain the **Microsoft ECC Root Certificate Authority 2017** and **Microsoft RSA Root Certificate Authority 2017** root certificates, respectively.
To determine if the **Microsoft ECC Root Certificate Authority 2017** and **Micr
The C) for additional information.
-Microsoft updated Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs) on February 15, 2021, to comply with changes set forth by the CA/Browser Forum. See the Azure TLS certificate changes article for additional information.
+Microsoft updated Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs) on February 15, 2021, to comply with changes set forth by the CA/Browser Forum. See the Azure TLS certificate changes article for additional information.
### Article change log
+- October 8, 2024: Removed the following CAs and CDP endpoints: crl.microsoft.com, mscrl.microsoft.com, and ocsp.msocsp.com.
+
+ | Certificate Authority | Serial Number<br>Thumbprint |
+ |- |- |
+ |[Baltimore CyberTrust Root](https://cacerts.digicert.com/BaltimoreCyberTrustRoot.crt) | 0x20000b9<br>D4DE20D05E66FC53FE1A50882C78DB2852CAE474 |
+ |[Microsoft RSA TLS CA 01](https://crt.sh/?d=3124375355) | 0x0f14965f202069994fd5c7ac788941e2<br>703D7A8F0EBF55AAA59F98EAF4A206004EB2516A |
+ |[Microsoft RSA TLS CA 02](https://crt.sh/?d=3124375356) | 0x0fa74722c53d88c80f589efb1f9d4a3a<br>B0C2D2D13CDD56CDAA6AB6E2C04440BE4A429C75 |
- July 22, 2024: Added Entrust CAs from a parallel Microsoft 365 article to provide a comprehensive list.
- June 27, 2024: Removed the following CAs, which were superseded by both versions of Microsoft Azure ECC TLS Issuing CAs 03, 04, 07, and 08.

| Certificate Authority | Serial Number<br>Thumbprint |
|- |- |
- | [Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer)|0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0|
+ |[Microsoft Azure ECC TLS Issuing CA 01](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2001.cer)|0x09dc42a5f574ff3a389ee06d5d4de440<br>92503D0D74A7D3708197B6EE13082D52117A6AB0|
|[Microsoft Azure ECC TLS Issuing CA 01](https://crt.sh/?d=2616305805)|0x330000001aa9564f44321c54b900000000001a<br>CDA57423EC5E7192901CA1BF6169DBE48E8D1268|
|[Microsoft Azure ECC TLS Issuing CA 02](https://www.microsoft.com/pki/certs/Microsoft%20Azure%20ECC%20TLS%20Issuing%20CA%2002.cer)|0x0e8dbe5ea610e6cbb569c736f6d7004b<br>1E981CCDDC69102A45C6693EE84389C3CF2329F1|
|[Microsoft Azure ECC TLS Issuing CA 02](https://crt.sh/?d=2616326233)|0x330000001b498d6736ed5612c200000000001b<br>489FF5765030EB28342477693EB183A4DED4D2A6|
Microsoft updated Azure services to use TLS certificates from a different set of
|[Microsoft Azure TLS Issuing CA 06](https://www.microsoft.com/pkiops/certs/Microsoft%20Azure%20TLS%20Issuing%20CA%2006.cer)| 0x02e79171fb8021e93fe2d983834c50c0<br>30E01761AB97E59A06B41EF20AF6F2DE7EF4F7B0|
|[Microsoft Azure TLS Issuing CA 06](https://crt.sh/?d=2616330106)|0x3300000020a2f1491a37fbd31f000000000020<br>8F1FD57F27C828D7BE29743B4D02CD7E6E5F43E6|

-- July 17, 2023: Added 16 new subordinate Certificate Authorities
-- February 7, 2023: Added eight new subordinate Certificate Authorities
+- July 17, 2023: Added 16 new subordinate Certificate Authorities.
+- February 7, 2023: Added eight new subordinate Certificate Authorities.
## Next steps
service-connector Tutorial Python Aks Sql Database Connection String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-python-aks-sql-database-connection-string.md
Last updated 07/23/2024
+zone_pivot_group_filename: service-connector/zone-pivot-groups.json
+zone_pivot_groups: aks-authtype
# Tutorial: Connect an AKS app to Azure SQL Database (preview)
In this tutorial, you learn how to connect an application deployed to AKS, to an
> * Update your application code
> * Clean up Azure resources.
-> [!WARNING]
-> Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable. See the [tutorial using a managed identity](tutorial-python-aks-storage-workload-identity.md).
## Prerequisites
az provider register --namespace Microsoft.KubernetesConfiguration
### Create a new connection
-Create a service connection between your AKS cluster and your SQL database in the Azure portal or the Azure CLI.
+
+Create a service connection between your AKS cluster and your SQL database using Microsoft Entra Workload ID.
+
+### [Azure portal](#tab/azure-portal)
+
+1. In the [Azure portal](https://portal.azure.com/), navigate to your AKS cluster resource.
+2. Select **Settings** > **Service Connector (Preview)** > **Create**.
+3. On the **Basics** tab, configure the following settings:
+
+ * **Kubernetes namespace**: Select **default**.
+ * **Service type**: Select **SQL Database**.
+ * **Connection name**: Use the connection name provided by Service Connector or enter your own connection name.
+ * **Subscription**: Select the subscription that includes the Azure SQL Database service.
+ * **SQL server**: Select your SQL server.
+ * **SQL database**: Select your SQL database.
+ * **Client type**: The code language or framework you use to connect to the target service, such as **Python**.
+
+ :::image type="content" source="media/tutorial-ask-sql/create-connection.png" alt-text="Screenshot of the Azure portal showing the form to create a new connection to a SQL database in AKS.":::
+
+4. Select **Next: Authentication**. On the **Authentication** tab, select **Workload Identity** and choose one **User assigned managed identity**.
+5. Select **Next: Networking** > **Next: Review + create** > **Create On Cloud Shell**.
+6. Cloud Shell launches and runs the commands to create the connection. You might need to confirm some configuration changes while the commands run. After the command completes successfully, the connection information is displayed, and you can select the refresh button in the **Service Connector** pane to show the latest result.
+
+### [Azure CLI](#tab/azure-cli)
+
+Create a service connection to the SQL database using the [`az aks connection create sql`](/cli/azure/aks/connection/create#az-aks-connection-create-sql) command. You can run this command in two different ways:
+
 * Generate the new connection step by step.
+
+ ```azurecli-interactive
+ az aks connection create sql
+ ```
+
 * Generate the new connection at once. Make sure you replace the following placeholders with your own information: `<source-subscription>`, `<source_resource_group>`, `<cluster>`, `<target-subscription>`, `<target_resource_group>`, `<server>`, `<database>`, `<identity-subscription>`, `<resource_group>`, and `<identity_name>`.
+
+ ```azurecli-interactive
+ az aks connection create sql \
+ --source-id /subscriptions/<source-subscription>/resourceGroups/<source_resource_group>/providers/Microsoft.ContainerService/managedClusters/<cluster> \
+ --target-id /subscriptions/<target-subscription>/resourceGroups/<target_resource_group>/providers/Microsoft.Sql/servers/<server>/databases/<database> \
+ --workload-identity /subscriptions/<identity-subscription>/resourcegroups/<resource_group>/providers/Microsoft.ManagedIdentity/userAssignedIdentities/<identity_name>
+ ```
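After the command completes, you can confirm that the connection exists. As a sketch, assuming the same placeholder resource group and cluster names used above:

```azurecli-interactive
# List the Service Connector connections configured on the AKS cluster.
az aks connection list \
    --resource-group <source_resource_group> \
    --name <cluster> \
    --output table
```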
---
+> [!WARNING]
+> Microsoft recommends that you use the most secure authentication flow available. The authentication flow described in this procedure requires a very high degree of trust in the application, and carries risks that are not present in other flows. You should only use this flow when other more secure flows, such as managed identities, aren't viable. Select the authentication method *[Workload ID (Recommended)](tutorial-python-aks-sql-database-connection-string.md?pivots=workload-id#create-a-new-connection)*.
+
+Create a service connection between your AKS cluster and your SQL database using a connection string.
### [Azure portal](#tab/azure-portal)
Create a service connection to the SQL database using the [`az aks connection cr
## Update your container

Now that you created a connection between your AKS cluster and the database, you need to retrieve the connection secrets and deploy them in your container.
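Service Connector stores the connection string in a Kubernetes secret in the namespace you selected. As a rough sketch (the secret name below is hypothetical; Service Connector generates the actual name, so list the secrets first to find it):

```bash
# List secrets in the namespace used by the connection, then inspect the one created by Service Connector.
kubectl get secrets --namespace default
kubectl describe secret <service-connector-secret-name> --namespace default
```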
storage Storage Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-disaster-recovery-guidance.md
Each type of failover has a unique set of use cases, corresponding expectations
| Type | Failover Scope | Use case | Expected data loss | Hierarchical Namespace (HNS) supported | |-|--|-|--|-|
-| Customer-managed planned failover (preview) | Storage account | The storage service endpoints for the primary and secondary regions are available, and you want to perform disaster recovery testing. <br></br> The storage service endpoints for the primary region are available, but another service is preventing your workloads from functioning properly.<br><br>To proactively prepare for large-scale disasters, such as a hurricane, that may impact a region. | [No](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#hierarchical-namespace-hns) |
+| Customer-managed planned failover (preview) | Storage account | The storage service endpoints for the primary and secondary regions are available, and you want to perform disaster recovery testing. <br></br> The storage service endpoints for the primary region are available, but another service is preventing your workloads from functioning properly.<br><br>To proactively prepare for large-scale disasters, such as a hurricane, that might affect a region. | [No](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#hierarchical-namespace-hns) |
| Customer-managed (unplanned) failover | Storage account | The storage service endpoints for the primary region become unavailable, but the secondary region is available. <br></br> You received an Azure Advisory in which Microsoft advises you to perform a failover operation of storage accounts potentially affected by an outage. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes <br> *(In preview)*](#hierarchical-namespace-hns) | | Microsoft-managed | Entire region | The primary region becomes unavailable due to a significant disaster, but the secondary region is available. | [Yes](#anticipate-data-loss-and-inconsistencies) | [Yes](#hierarchical-namespace-hns) |
The following table summarizes the resulting redundancy configuration at every s
### Customer-managed planned failover (preview)
-Planned failover can be utilized in multiple scenarios including planned disaster recovery testing, a proactive approach to large scale disasters, or to recover from non-storage related outages.
+Planned failover can be utilized in multiple scenarios including planned disaster recovery testing, a proactive approach to large scale disasters, or to recover from nonstorage related outages.
During the planned failover process, the primary and secondary regions are swapped. The original primary region is demoted and becomes the new secondary region. At the same time, the original secondary region is promoted and becomes the new primary. After the failover completes, users can proceed to access data in the new primary region and administrators can validate their disaster recovery plan. The storage account must be available in both the primary and secondary regions before a planned failover can be initiated.
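For reference, a customer-managed planned failover can be initiated from the Azure CLI. This is a minimal sketch, assuming a recent CLI version that supports the preview `--failover-type Planned` parameter on `az storage account failover`; the account and resource group names are placeholders.

```azurecli-interactive
# Initiate a planned failover (preview) of the storage account to its secondary region.
az storage account failover \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --failover-type Planned
```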
To understand the effect of this type of failover on your users and applications
### Microsoft-managed failover
-Microsoft may initiate a regional failover in extreme circumstances, such as a catastrophic disaster that impacts an entire geo region. During these events, no action on your part is required. If your storage account is configured for RA-GRS or RA-GZRS, your applications can read from the secondary region during a Microsoft-managed failover. However, you don't have write access to your storage account until the failover process is complete.
+Microsoft might initiate a regional failover in extreme circumstances, such as a catastrophic disaster that impacts an entire geo region. During these events, no action on your part is required. If your storage account is configured for RA-GRS or RA-GZRS, your applications can read from the secondary region during a Microsoft-managed failover. However, you don't have write access to your storage account until the failover process is complete.
> [!IMPORTANT]
> Use customer-managed failover options to develop, test, and implement your disaster recovery plans. **Do not** rely on Microsoft-managed failover, which might only be used in extreme circumstances.
The following table can be used to reference feature support.
| **Object Replication** | Unsupported | Unsupported | | **SFTP** | Supported (preview) | Supported (preview) | | **NFSv3** | GRS is unsupported | GRS is unsupported |
-| **Storage Actions** | Unsupported | Unsupported |
+| **Storage Actions** | Supported<sup>1</sup> | Supported<sup>1</sup> |
| **Point-in-time restore (PITR)** | Unsupported | Supported |
+<sup>1</sup> If you initiate a customer-managed planned or unplanned failover, storage tasks can't operate on the account until it fails back to the original primary region. [Learn more](../../reliability/reliability-storage-actions.md#cross-region-disaster-recovery-and-business-continuity).
+ ### Failover isn't for account migration
-Storage account failovers are a temporary solution which can be used to either help plan and test your DR plans, or to recover from a service outage. Failover shouldn't be used as part of your data migration strategy. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
+Storage account failovers are a temporary solution used to develop and test your disaster recovery (DR) plans, or to recover from a service outage. Failover shouldn't be used as part of your data migration strategy. For information about how to migrate your storage accounts, see [Azure Storage migration overview](storage-migration-overview.md).
### Storage accounts containing archived blobs
Storage accounts containing archived blobs support account failover. However, af
Microsoft provides two REST APIs for working with Azure Storage resources. These APIs form the basis of all actions you can perform against Azure Storage. The Azure Storage REST API enables you to work with data in your storage account, including blob, queue, file, and table data. The Azure Storage resource provider REST API enables you to manage the storage account and related resources.
-After a failover is complete, clients can again read and write Azure Storage data in the new primary region. However, the Azure Storage resource provider does not fail over, so resource management operations must still take place in the primary region. If the primary region is unavailable, you will not be able to perform management operations on the storage account.
+After a failover is complete, clients can once again read and write Azure Storage data in the new primary region. However, the Azure Storage resource provider doesn't fail over, so resource management operations must still take place in the primary region. If the primary region is unavailable, you aren't able to perform management operations on the storage account.
-Because the Azure Storage resource provider does not fail over, the [Location](/dotnet/api/microsoft.azure.management.storage.models.trackedresource.location) property will return the original primary location after the failover is complete.
+Because the Azure Storage resource provider doesn't fail over, the [Location](/dotnet/api/microsoft.azure.management.storage.models.trackedresource.location) property will return the original primary location after the failover is complete.
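One way to observe this behavior is to query the account's location properties and geo-replication status from the management plane. A minimal sketch, assuming the `geoReplicationStats` expansion is available for the account's redundancy configuration; names are placeholders.

```azurecli-interactive
# Show the reported primary/secondary locations and the last geo-replication sync time.
az storage account show \
    --name <storage-account-name> \
    --resource-group <resource-group-name> \
    --expand geoReplicationStats \
    --query "{location:location, primary:primaryLocation, secondary:secondaryLocation, lastSyncTime:geoReplicationStats.lastSyncTime}"
```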
### Azure virtual machines
storage Storage Failover Customer Managed Planned https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-failover-customer-managed-planned.md
During the planned failover process, your storage account's primary and secondar
This article describes what happens during a customer-managed planned failover and failback at every stage of the process. To understand how a failover due to an unexpected storage endpoint outage works, see [How customer-managed (unplanned) failover](storage-failover-customer-managed-unplanned.md).
+<br>
+<iframe width="560" height="315" src="
+https://www.youtube-nocookie.com/embed/lcQfwWsck58?si=I92_-lGOLcr4pUSk"
+title="YouTube video player" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share" allowfullscreen></iframe>
+ [!INCLUDE [storage-failover.planned-preview](../../../includes/storage-failover.planned-preview.md)] [!INCLUDE [storage-failover-user-unplanned-preview-lst](../../../includes/storage-failover-user-unplanned-preview-lst.md)]
storage File Sync Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-planning.md
description: Plan for a deployment with Azure File Sync, a service that allows y
Previously updated : 04/23/2024 Last updated : 10/08/2024
Azure File Sync is supported with the following versions of Windows Server:
| Version | Supported SKUs | Supported deployment options | ||-||
+| Windows Server 2025 | Azure, Datacenter, Essentials, Standard, and IoT | Full and Core |
| Windows Server 2022 | Azure, Datacenter, Essentials, Standard, and IoT | Full and Core | | Windows Server 2019 | Datacenter, Essentials, Standard, and IoT | Full and Core | | Windows Server 2016 | Datacenter, Essentials, Standard, and Storage Server | Full and Core |
In this case, Azure File Sync would need about 209,500,000 KiB (209.5 GiB) of sp
### Data Deduplication
-**Windows Server 2022, Windows Server 2019, and Windows Server 2016**
-Data Deduplication is supported irrespective of whether cloud tiering is enabled or disabled on one or more server endpoints on the volume for Windows Server 2016, Windows Server 2019, and Windows Server 2022. Enabling Data Deduplication on a volume with cloud tiering enabled lets you cache more files on-premises without provisioning more storage.
+**Windows Server 2025, Windows Server 2022, Windows Server 2019, and Windows Server 2016**
+Data Deduplication is supported irrespective of whether cloud tiering is enabled or disabled on one or more server endpoints on the volume for Windows Server 2016, Windows Server 2019, Windows Server 2022, and Windows Server 2025. Enabling Data Deduplication on a volume with cloud tiering enabled lets you cache more files on-premises without provisioning more storage.
When Data Deduplication is enabled on a volume with cloud tiering enabled, Dedup optimized files within the server endpoint location will be tiered similar to a normal file based on the cloud tiering policy settings. Once the Dedup optimized files have been tiered, the Data Deduplication garbage collection job will run automatically to reclaim disk space by removing unnecessary chunks that are no longer referenced by other files on the volume.
Azure File Sync doesn't support Data Deduplication and cloud tiering on the same
- For ongoing Deduplication optimization jobs, cloud tiering with date policy will get delayed by the Data Deduplication [MinimumFileAgeDays](/powershell/module/deduplication/set-dedupvolume) setting, if the file isn't already tiered. - Example: If the MinimumFileAgeDays setting is seven days and cloud tiering date policy is 30 days, the date policy will tier files after 37 days. - Note: Once a file is tiered by Azure File Sync, the Deduplication optimization job will skip the file.-- If a server running Windows Server 2012 R2 with the Azure File Sync agent installed is upgraded to Windows Server 2016, Windows Server 2019 or Windows Server 2022, the following steps must be performed to support Data Deduplication and cloud tiering on the same volume:
+- If a server running Windows Server 2012 R2 with the Azure File Sync agent installed is upgraded to Windows Server 2016, Windows Server 2019, Windows Server 2022, or Windows Server 2025, the following steps must be performed to support Data Deduplication and cloud tiering on the same volume:
- Uninstall the Azure File Sync agent for Windows Server 2012 R2 and restart the server.
- - Download the Azure File Sync agent for the new server operating system version (Windows Server 2016, Windows Server 2019, or Windows Server 2022).
+ - Download the Azure File Sync agent for the new server operating system version (Windows Server 2016, Windows Server 2019, Windows Server 2022, or Windows Server 2025).
- Install the Azure File Sync agent and restart the server. Note: The Azure File Sync configuration settings on the server are retained when the agent is uninstalled and reinstalled.
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 09/03/2024 Last updated : 10/08/2024
The following Azure File Sync agent versions are supported:
| Milestone | Agent version number | Release date | Status | |-|-|--||
-| V19 Release - [KB5040924](https://support.microsoft.com/topic/e44fc142-8a24-4dea-9bf9-6e884b4b342e)| 19.1.0.0 | September 3, 2024 | Supported - Flighting |
+| V19 Release - [KB5040924](https://support.microsoft.com/topic/e44fc142-8a24-4dea-9bf9-6e884b4b342e)| 19.1.0.0 | September 3, 2024 | Supported |
| V18.2 Release - [KB5023059](https://support.microsoft.com/topic/613d00dc-998b-4885-86b9-73750195baf5)| 18.2.0.0 | July 9, 2024 | Supported | | V18.1 Release - [KB5023057](https://support.microsoft.com/topic/961af341-40f2-4e95-94c4-f2854add60a5)| 18.1.0.0 | June 11, 2024 | Supported - Security Update | | V17.3 Release - [KB5039814](https://support.microsoft.com/topic/97bd6ab9-fa4c-42c0-a510-cdb1d23825bf)| 17.3.0.0 | June 11, 2024 | Supported - Security Update | | V18 Release - [KB5023057](https://support.microsoft.com/topic/feb374ad-6256-4eeb-9371-eb85071f756f)| 18.0.0.0 | May 8, 2024 | Supported | | V17.2 Release - [KB5023055](https://support.microsoft.com/topic/dfa4c285-a4cb-4561-b0ed-bbd4ae09d91d)| 17.2.0.0 | February 28, 2024 | Supported | | V17.1 Release - [KB5023054](https://support.microsoft.com/topic/azure-file-sync-agent-v17-1-release-february-2024-security-only-update-bd1ce41c-27f4-4e3d-a80f-92f74817c55b)| 17.1.0.0 | February 13, 2024 | Supported - Security Update |
-| V16.2 Release - [KB5023052](https://support.microsoft.com/topic/azure-file-sync-agent-v16-2-release-february-2024-security-only-update-8247bf99-8f51-4eb6-b378-b86b6d1d45b8)| 16.2.0.0 | February 13, 2024 | Supported - Security Update - Agent version will expire on October 7, 2024|
| V17.0 Release - [KB5023053](https://support.microsoft.com/topic/azure-file-sync-agent-v17-release-december-2023-flighting-2d8cba16-c035-4c54-b35d-1bd8fd795ba9)| 17.0.0.0 | December 6, 2023 | Supported |
-| V16.0 Release - [KB5013877](https://support.microsoft.com/topic/ffdc8fe2-c653-43c8-8b47-0865267fd520)| 16.0.0.0 | January 30, 2023 | Supported - Agent version will expire on October 7, 2024 |
## Unsupported versions
The following Azure File Sync agent versions have expired and are no longer supp
| Milestone | Agent version number | Release date | Status | |-|-|--||
+| V16 Release | 16.0.0.0 - 16.2.0.0 | N/A | Not Supported - Agent versions expired on October 7, 2024 |
| V15 Release | 15.0.0.0 - 15.2.0.0 | N/A | Not Supported - Agent versions expired on March 19, 2024 | | V14 Release | 14.0.0.0 | N/A | Not Supported - Agent versions expired on February 8, 2024 | | V13 Release | 13.0.0.0 | N/A | Not Supported - Agent versions expired on August 8, 2022 |
Azure File Sync support for system-assigned managed identities will be in previe
**Sync performance improvements** Sync performance has significantly improved for file share migrations and when metadata-only is changed (for example, ACL changes). Performance numbers will be posted when they are available.
+**Support for Windows Server 2025**
+The Azure File Sync agent is now supported on Windows Server 2025.
+ **Miscellaneous reliability and telemetry improvements for cloud tiering and sync** ### Evaluation Tool
The following release notes are for Azure File Sync version 17.1.0.0 (released F
### Improvements and issues that are fixed Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397).
-## Version 16.2.0.0 (Security Update)
-The following release notes are for Azure File Sync version 16.2.0.0 (released February 13, 2024). This release contains security updates for the Azure File Sync agent. These notes are in addition to the release notes listed for version 16.0.0.0.
-
-### Improvements and issues that are fixed
-Fixes an issue that might allow unauthorized users to create new files in locations they aren't allowed to. This is a security-only update. For more information about this vulnerability, see [CVE-2024-21397](https://msrc.microsoft.com/update-guide/en-US/advisory/CVE-2024-21397).
- ## Version 17.0.0.0 The following release notes are for Azure File Sync version 17.0.0.0 (released December 6, 2023). This release contains improvements for the Azure File Sync service and agent.
The following items don't sync, but the rest of the system continues to operate
### Cloud tiering - If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations. - When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.-
-## Version 16.0.0.0
-The following release notes are for Azure File Sync version 16.0.0.0 (released January 30, 2023). This release contains improvements for the Azure File Sync service and agent.
-
-### Improvements and issues that are fixed
-**Improved Azure File Sync service availability**
-Azure File Sync is now a zone-redundant service, which means an outage in a zone has limited impact while improving the service resiliency to minimize customer impact. To fully use this improvement, configure your storage accounts to use zone-redundant storage (ZRS) or Geo-zone redundant storage (GZRS) replication. To learn more about different redundancy options for your storage accounts, see [Azure Files redundancy](../files/files-redundancy.md).
-
-**Immediately run server change enumeration to detect files changes that were missed on the server**
-Azure File Sync uses the [Windows USN journal](/windows/win32/fileio/change-journals) feature on Windows Server to immediately detect files that were changed and upload them to the Azure file share. If files changed are missed due to journal wrap or other issues, the files won't sync to the Azure file share until the changes are detected. Azure File Sync has a server change enumeration job that runs every 24 hours on the server endpoint path to detect changes that were missed by the USN journal. If you don't want to wait until the next server change enumeration job runs, you can now use the `Invoke-StorageSyncServerChangeDetection` PowerShell cmdlet to immediately run server change enumeration on a server endpoint path.
-
-To immediately run server change enumeration on a server endpoint path, run the following PowerShell commands:
-```powershell
-Import-Module "C:\Program Files\Azure\StorageSyncAgent\StorageSync.Management.ServerCmdlets.dll"
-Invoke-StorageSyncServerChangeDetection -ServerEndpointPath <path>
-```
-> [!NOTE]
-> By default, the server change enumeration scan will only check the modified timestamp. To perform a deeper check, use the -DeepScan parameter.
-
-**Bug fix for the PowerShell script FileSyncErrorsReport.ps1**
-
-**Miscellaneous reliability and telemetry improvements for cloud tiering and sync**
-
-### Evaluation Tool
-Before deploying Azure File Sync, you should evaluate whether it's compatible with your system using the Azure File Sync evaluation tool. This tool is an Azure PowerShell cmdlet that checks for potential issues with your file system and dataset, such as unsupported characters or an unsupported OS version. For installation and usage instructions, see [Evaluation Tool](file-sync-planning.md#evaluation-cmdlet) section in the planning guide.
-
-### Agent installation and server configuration
-For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).
--- The agent installation package must be installed with elevated (admin) permissions.-- The agent isn't supported on Nano Server deployment option.-- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022.-- The agent installation package is for a specific operating system version. If a server with an Azure File Sync agent installed is upgraded to a newer operating system version, you must uninstall the existing agent, restart the server, and install the agent for the new server operating system (Windows Server 2016, Windows Server 2019, or Windows Server 2022).-- The agent requires at least 2 GiB of memory. If the server is running in a virtual machine with dynamic memory enabled, the VM should be configured with a minimum 2048 MiB of memory. See [Recommended system resources](file-sync-planning.md#recommended-system-resources) for more information.-- The Storage Sync Agent (FileSyncSvc) service doesn't support server endpoints located on a volume that has the system volume information (SVI) directory compressed. This configuration will lead to unexpected results.-
-### Interoperability
-- Antivirus, backup, and other applications that access tiered files can cause undesirable recall unless they respect the offline attribute and skip reading the content of those files. For more information, see [Troubleshoot Azure File Sync](/troubleshoot/azure/azure-storage/file-sync-troubleshoot?toc=/azure/storage/file-sync/toc.json).-- File Server Resource Manager (FSRM) file screens can cause endless sync failures when files are blocked because of the file screen.-- Running sysprep on a server that has the Azure File Sync agent installed isn't supported and can lead to unexpected results. The Azure File Sync agent should be installed after deploying the server image and completing sysprep mini-setup.-
-### Sync limitations
-The following items don't sync, but the rest of the system continues to operate normally:
--- Files with unsupported characters. See [Troubleshooting guide](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#handling-unsupported-characters) for a list of unsupported characters.-- Files or directories that end with a period.-- Paths that are longer than 2,048 characters.-- The system access control list (SACL) portion of a security descriptor that's used for auditing.-- Extended attributes.-- Alternate data streams.-- Reparse points.-- Hard links.-- Compression (if it's set on a server file) isn't preserved when changes sync to that file from other endpoints.-- Any file that's encrypted with EFS (or other user mode encryption) that prevents the service from reading the data.-
-> [!NOTE]
-> Azure File Sync always encrypts data in transit. Data is always encrypted at rest in Azure.
-
-### Server endpoint
-- A server endpoint can be created only on an NTFS volume. ReFS, FAT, FAT32, and other file systems aren't currently supported by Azure File Sync.-- Cloud tiering isn't supported on the system volume. To create a server endpoint on the system volume, disable cloud tiering when creating the server endpoint.-- Failover Clustering is supported only with clustered disks, but not with Cluster Shared Volumes (CSVs).-- A server endpoint can't be nested. It can coexist on the same volume in parallel with another endpoint.-- Don't store an OS or application paging file within a server endpoint location.-
-### Cloud endpoint
-- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share.-- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](/troubleshoot/azure/azure-storage/file-sync-troubleshoot-sync-errors?toc=/azure/storage/file-sync/toc.json#troubleshoot-rbac)).-
-> [!NOTE]
-> When creating the cloud endpoint, the storage sync service and storage account must be in the same Azure AD tenant. Once the cloud endpoint is created, the storage sync service and storage account can be moved to different Azure AD tenants.
-
-### Cloud tiering
-- If a tiered file is copied to another location by using Robocopy, the resulting file isn't tiered. The offline attribute might be set because Robocopy incorrectly includes that attribute in copy operations.-- When copying files using Robocopy, use the /MIR option to preserve file timestamps. This will ensure older files are tiered sooner than recently accessed files.
storage Analyze Files Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/analyze-files-metrics.md
In comparison, the following chart shows a situation where both the client and t
:::image type="content" source="media/analyze-files-metrics/latency-same-region.png" alt-text="Screenshot showing latency metrics when the client and Azure file share are located in the same region." lightbox="media/analyze-files-metrics/latency-same-region.png" border="false":::
-Another latency indicator to look that for might suggest a problem is an increased frequency or abnormal spikes in **Success Server Latency**. This is commonly due to throttling due to exceeding the Azure Files [scale limits](storage-files-scale-targets.md) for standard file shares, or an under-provisioned [Azure Files Premium Share](understanding-billing.md#provisioning-method).
+Another latency indicator that might suggest a problem is an increased frequency of, or abnormal spikes in, **Success Server Latency**. This is commonly caused by throttling when you exceed the Azure Files [scale limits](storage-files-scale-targets.md) for standard file shares, or by an under-provisioned [Azure Files Premium Share](understanding-billing.md#provisioned-v1-model).
For more information, see [Troubleshoot high latency, low throughput, or low IOPS](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=%2Fazure%2Fstorage%2Ffiles%2Ftoc.json&tabs=windows#high-latency-low-throughput-or-low-iops).
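To look at this metric outside the portal, you can query it through Azure Monitor. A minimal sketch, assuming the standard `SuccessServerLatency` metric on the file service resource; the resource ID is a placeholder.

```azurecli-interactive
# Query the average Success Server Latency for the file service over the last hour, in 5-minute buckets.
az monitor metrics list \
    --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Storage/storageAccounts/<account>/fileServices/default" \
    --metric SuccessServerLatency \
    --interval PT5M \
    --offset 1h \
    --aggregation Average
```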
storage Files Nfs Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-nfs-protocol.md
The status of items that appear in this table might change over time as support
## Performance
-NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned model](understanding-billing.md#provisioned-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress might face additional latencies due to the high number of open and close operations.
+NFS Azure file shares are only offered on premium file shares, which store data on solid-state drives (SSD). The IOPS and throughput of NFS shares scale with the provisioned capacity. See the [provisioned model](understanding-billing.md#provisioned-v1-model) section of the **Understanding billing** article to understand the formulas for IOPS, IO bursting, and throughput. The average IO latencies are low-single-digit-millisecond for small IO size, while average metadata latencies are high-single-digit-millisecond. Metadata heavy operations such as untar and workloads like WordPress might face additional latencies due to the high number of open and close operations.
> [!NOTE]
> You can use the `nconnect` Linux mount option to improve performance for NFS Azure file shares at scale. For more information, see [Improve NFS Azure file share performance](nfs-performance.md).
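For example, mounting an NFS Azure file share with `nconnect` looks roughly like the following. This is a sketch with placeholder names; see the linked performance article for the recommended options.

```bash
# Mount an NFS Azure file share using up to 4 TCP connections (nconnect).
sudo mkdir -p /mount/<storage-account>/<share-name>
sudo mount -t nfs <storage-account>.file.core.windows.net:/<storage-account>/<share-name> \
    /mount/<storage-account>/<share-name> \
    -o vers=4,minorversion=1,sec=sys,nconnect=4
```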
storage Files Reserve Capacity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-reserve-capacity.md
An Azure Files Reservation covers only the amount of data that is stored in a su
### Reservations and snapshots
-If you're taking snapshots of Azure file shares, there are differences in how Reservations work for standard versus premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the Reservation and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the Reservation. For more information, see [Snapshots](understanding-billing.md#snapshots).
+If you're taking snapshots of Azure file shares, there are differences in how Reservations work for standard versus premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the Reservation and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the Reservation.
### Supported tiers and redundancy options
storage Files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/files-whats-new.md
Formula changes:
| Burst limit | `MIN(MAX(4000, 3 * ProvisionedGiB), 100000)` | `MIN(MAX(10000, 3 * ProvisionedGiB), 100000)` | For more information, see:-- [The provisioned model for premium Azure file shares](understanding-billing.md#provisioned-model)
+- [The provisioned model for premium Azure file shares](understanding-billing.md#provisioned-v1-model)
- [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) #### NFSv4.1 protocol support is generally available
Formula changes:
| Throughput (MiB/sec) | <ul><li>Ingress: `40 + CEILING(0.04 * ProvisionedGiB)`</li><li>Egress: `60 + CEILING(0.06 * ProvisionedGiB)`</li></ul> | `100 + CEILING(0.04 * ProvisionedGiB) + CEILING(0.06 * ProvisionedGiB)` | For more information, see:-- [The provisioned model for premium Azure file shares](understanding-billing.md#provisioned-model)
+- [The provisioned model for premium Azure file shares](understanding-billing.md#provisioned-v1-model)
- [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) ### 2021 quarter 3 (July, August, September)
storage Smb Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/smb-performance.md
The following tips might help you optimize performance:
- Ensure that your storage account and your client are co-located in the same Azure region to reduce network latency.
- Use multi-threaded applications and spread load across multiple files.
- Performance benefits of SMB Multichannel increase with the number of files distributing load.
+- Premium share performance is bound by provisioned share size (IOPS/egress/ingress) and single file limits. For details, see [Understanding provisioning for premium file shares](understanding-billing.md#provisioned-v1-model).
- Maximum performance of a single VM client is still bound to VM limits. For example, [Standard_D32s_v3](/azure/virtual-machines/dv3-dsv3-series) can support a maximum bandwidth of 16,000 Mbps (or 2 GBps); egress from the VM (writes to storage) is metered, while ingress (reads from storage) is not. File share performance is subject to machine network limits, CPUs, internal storage available network bandwidth, IO sizes, parallelism, as well as other factors.
- The initial test is usually a warm-up. Discard the results and repeat the test.
- If performance is limited by a single client and workload is still below provisioned share limits, you can achieve higher performance by spreading load over multiple clients.
storage Storage Files Planning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-planning.md
With both SMB and NFS file shares, Azure Files offers enterprise-grade file shar
| Supported protocol versions | SMB 3.1.1, SMB 3.0, SMB 2.1 | NFS 4.1 | | Recommended OS | <ul><li>Windows 11, version 21H2+</li><li>Windows 10, version 21H1+</li><li>Windows Server 2019+</li><li>Linux kernel version 5.3+</li></ul> | Linux kernel version 4.3+ | | [Available tiers](storage-files-planning.md#storage-tiers) | Premium, transaction optimized, hot, and cool | Premium |
-| Billing model | <ul><li>[Provisioned capacity for premium file shares](./understanding-billing.md#provisioned-model)</li><li>[Pay-as-you-go for standard file shares](./understanding-billing.md#pay-as-you-go-model)</li></ul> | [Provisioned capacity](./understanding-billing.md#provisioned-model) |
+| Billing model | <ul><li>[Provisioned capacity for premium file shares](./understanding-billing.md#provisioned-v1-model)</li><li>[Pay-as-you-go for standard file shares](./understanding-billing.md#pay-as-you-go-model)</li></ul> | [Provisioned capacity](./understanding-billing.md#provisioned-v1-model) |
| [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md#storage-account-endpoints) | Supported | Supported | | [Redundancy](storage-files-planning.md#redundancy) | LRS, ZRS, GRS, GZRS | LRS, ZRS | | File system semantics | Win32 | POSIX |
storage Storage Files Scale Targets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-scale-targets.md
Title: Azure Files scalability and performance targets
-description: Learn about the scalability and performance targets for Azure storage accounts, Azure Files, and Azure File Sync, including file share capacity, IOPS, throughput, ingress, egress, and operations.
+description: Learn about the scalability and performance targets for Azure Files and Azure File Sync, including file share storage, IOPS, and throughput.
Last updated 08/12/2024 + # Scalability and performance targets for Azure Files and Azure File Sync-
-[Azure Files](storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) and Network File System (NFS) file system protocols. This article discusses the scalability and performance targets for Azure storage accounts, Azure Files, and Azure File Sync.
+[Azure Files](storage-files-introduction.md) offers fully managed file shares in the cloud that are accessible via the Server Message Block (SMB) and Network File System (NFS) file system protocols. This article discusses the scalability and performance targets for Azure Files and Azure File Sync.
The targets listed here might be affected by other variables in your deployment. For example, the performance of I/O for a file might be impacted by your SMB client's behavior and by your available network bandwidth. You should test your usage pattern to determine whether the scalability and performance of Azure Files meet your requirements.

## Applies to
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Management model | Billing model | Media tier | Redundancy | SMB | NFS |
+|-|-|-|-|:-:|:-:|
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Local (LRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Zone (ZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Geo (GRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | GeoZone (GZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Local (LRS) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Zone (ZRS) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)|
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Local (LRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Zone (ZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Geo (GRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | GeoZone (GZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
## Azure Files scale targets

Azure file shares are deployed into storage accounts, which are top-level objects that represent a shared pool of storage. This pool of storage can be used to deploy multiple file shares. There are therefore three categories to consider: storage accounts, Azure file shares, and individual files.

### Storage account scale targets

Storage account scale targets apply at the storage account level. There are two main types of storage accounts for Azure Files:

-- **General purpose version 2 (GPv2) storage accounts**: GPv2 storage accounts allow you to deploy Azure file shares on standard/hard disk-based (HDD-based) hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables. File shares can be deployed into the transaction optimized (default), hot, or cool tiers.
-- **FileStorage storage accounts**: FileStorage storage accounts allow you to deploy Azure file shares on premium/solid-state disk-based (SSD-based) hardware. FileStorage accounts can only be used to store Azure file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a FileStorage account.
-| Attribute | GPv2 storage accounts (standard) | FileStorage storage accounts (premium) |
-|-|-|-|
-| Number of storage accounts per region per subscription | 250<sup>1</sup> | 250<sup>1</sup> |
-| Maximum storage account capacity | 5 PiB<sup>2</sup> | 100 TiB (provisioned) |
-| Maximum number of file shares | Unlimited | Unlimited, total provisioned size of all shares must be less than max than the max storage account capacity |
-| Maximum concurrent request rate | 20,000 IOPS<sup>2</sup> | 102,400 IOPS |
-| Throughput (ingress + egress) for LRS/GRS<br /><ul><li>Australia East</li><li>Central US</li><li>East Asia</li><li>East US 2</li><li>Japan East</li><li>Korea Central</li><li>North Europe</li><li>South Central US</li><li>Southeast Asia</li><li>UK South</li><li>West Europe</li><li>West US</li></ul> | <ul><li>Ingress: 7,152 MiB/sec</li><li>Egress: 14,305 MiB/sec</li></ul> | 10,340 MiB/sec |
-| Throughput (ingress + egress) for ZRS<br /><ul><li>Australia East</li><li>Central US</li><li>East US</li><li>East US 2</li><li>Japan East</li><li>North Europe</li><li>South Central US</li><li>Southeast Asia</li><li>UK South</li><li>West Europe</li><li>West US 2</li></ul> | <ul><li>Ingress: 7,152 MiB/sec</li><li>Egress: 14,305 MiB/sec</li></ul> | 10,340 MiB/sec |
-| Throughput (ingress + egress) for redundancy/region combinations not listed in the previous row | <ul><li>Ingress: 2,980 MiB/sec</li><li>Egress: 5,960 MiB/sec</li></ul> | 10,340 MiB/sec |
-| Maximum number of virtual network rules | 200 | 200 |
-| Maximum number of IP address rules | 200 | 200 |
-| Management read operations | 800 per 5 minutes | 800 per 5 minutes |
-| Management write operations | 10 per second/1200 per hour | 10 per second/1200 per hour |
-| Management list operations | 100 per 5 minutes | 100 per 5 minutes |
-
-<sup>1</sup> With a quota increase, you can create up to 500 storage accounts with standard endpoints per region. For more information, see [Increase Azure Storage account quotas](/azure/quotas/storage-account-quota-requests).
-<sup>2</sup> General-purpose version 2 storage accounts support higher capacity limits and higher limits for ingress by request. To request an increase in account limits, contact [Azure Support](https://azure.microsoft.com/support/faq/).
+- **FileStorage storage accounts**: FileStorage storage accounts allow you to deploy Azure file shares with a provisioned billing model. FileStorage accounts can only be used to store Azure file shares; no other storage resources (blob containers, queues, tables, etc.) can be deployed in a FileStorage account.
+
+- **General purpose version 2 (GPv2) storage accounts**: GPv2 storage accounts allow you to deploy pay-as-you-go file shares on HDD-based hardware. In addition to storing Azure file shares, GPv2 storage accounts can store other storage resources such as blob containers, queues, or tables.
+
+| Attribute | SSD provisioned v1 | HDD provisioned v2 | HDD pay-as-you-go |
+|-|-|-|-|
+| Storage account kind | FileStorage | FileStorage | StorageV2 |
+| SKUs | <ul><li>Premium_LRS</li><li>Premium_ZRS</li></ul> | <ul><li>StandardV2_LRS</li><li>StandardV2_ZRS</li><li>StandardV2_GRS</li><li>StandardV2_GZRS</li></ul> | <ul><li>Standard_LRS</li><li>Standard_ZRS</li><li>Standard_GRS</li><li>Standard_GZRS</li></ul> |
+| Number of storage accounts per region per subscription | 250 | 250 | 250 |
+| Maximum storage capacity | 100 TiB | 4 PiB | 5 PiB |
+| Maximum number of file shares | 1024 (recommended to use 50 or fewer) | 50 | Unlimited (recommended to use 50 or fewer) |
+| Maximum IOPS | 102,400 IOPS | 50,000 IOPS | 20,000 IOPS |
+| Maximum throughput | 10,340 MiB / sec | 5,120 MiB / sec | <ul><li>Select regions:<ul><li>Ingress: 7,680 MiB / sec</li><li>Egress: 25,600 MiB / sec</li></ul></li><li>Default:<ul><li>Ingress: 3,200 MiB / sec</li><li>Egress: 6,400 MiB / sec</li></ul></li></ul> |
+| Maximum number of virtual network rules | 200 | 200 | 200 |
+| Maximum number of IP address rules | 200 | 200 | 200 |
+| Management read operations | 800 per 5 minutes | 800 per 5 minutes | 800 per 5 minutes |
+| Management write operations | 10 per second/1200 per hour | 10 per second/1200 per hour | 10 per second/1200 per hour |
+| Management list operations | 100 per 5 minutes | 100 per 5 minutes | 100 per 5 minutes |
+
+#### Selected regions with increased maximum throughput for HDD pay-as-you-go
+The following regions have an increased maximum throughput for HDD pay-as-you-go storage accounts (StorageV2):
+
+- East Asia
+- Southeast Asia
+- Australia East
+- Brazil South
+- Canada Central
+- China East 2
+- China North 3
+- North Europe
+- West Europe
+- France Central
+- Germany West Central
+- Central India
+- Japan East
+- Jio India West
+- Korea Central
+- Norway East
+- South Africa North
+- Sweden Central
+- UAE North
+- UK South
+- Central US
+- East US
+- East US 2
+- US Gov Virginia
+- US Gov Arizona
+- North Central US
+- South Central US
+- West US
+- West US 2
+- West US 3
### Azure file share scale targets

Azure file share scale targets apply at the file share level.
-| Attribute | Standard file shares<sup>1</sup> | Premium file shares |
+| Attribute | SSD provisioned v1 | HDD provisioned v2 | HDD pay-as-you-go |
|-|-|-|-|
-| Minimum size of a file share | No minimum | 100 GiB (provisioned) |
-| Provisioned size increase/decrease unit | N/A | 1 GiB |
-| Maximum size of a file share | 100 TiB | 100 TiB |
-| Maximum number of files in a file share | No limit | No limit |
-| Maximum request rate (Max IOPS) | 20,000 | <ul><li>Baseline IOPS: 3000 + 1 IOPS per GiB, up to 102,400</li><li>IOPS bursting: Max (10,000, 3x IOPS per GiB), up to 102,400</li></ul> |
-| Throughput (ingress + egress) for a single file share (MiB/sec) | Up to storage account limits | 100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB) |
-| Maximum number of share snapshots | 200 snapshots | 200 snapshots |
-| Maximum object name length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters |
-| Maximum length of individual pathname component<sup>2</sup> (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters |
-| Hard link limit (NFS only) | N/A | 178 |
-| Maximum number of SMB Multichannel channels | N/A | 4 |
-| Maximum number of stored access policies per file share | 5 | 5 |
-
-<sup>1</sup> The limits for standard file shares apply to all three of the tiers available for standard file shares: transaction optimized, hot, and cool.
-
-<sup>2</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.
+| Storage provisioning unit | 1 GiB | 1 GiB | N/A |
+| IOPS provisioning unit | N/A | 1 IO / sec | N/A |
+| Throughput provisioning unit | N/A | 1 MiB / sec | N/A |
+| Minimum storage size | 100 GiB (provisioned) | 32 GiB (provisioned) | 0 bytes |
+| Maximum storage size | 100 TiB | 256 TiB | 100 TiB |
+| Maximum number of files | Unlimited | Unlimited | Unlimited |
+| Maximum IOPS | 102,400 IOPS (dependent on provisioning) | 50,000 IOPS (dependent on provisioning) | 20,000 IOPS |
+| Maximum throughput | 10,340 MiB / sec (dependent on provisioning) | 5,120 MiB / sec (dependent on provisioning) | Up to storage account limits |
+| Maximum number of share snapshots | 200 snapshots | 200 snapshots | 200 snapshots |
+| Maximum filename length<sup>3</sup> (full pathname including all directories, file names, and backslash characters) | 2,048 characters | 2,048 characters | 2,048 characters |
+| Maximum length of individual pathname component<sup>3</sup> (in the path \A\B\C\D, each letter represents a directory or file that is an individual component) | 255 characters | 255 characters | 255 characters |
+| Hard link limit (NFS only) | 178 | N/A | N/A |
+| Maximum number of SMB Multichannel channels | 4 | N/A | N/A |
+| Maximum number of stored access policies per file share | 5 | 5 | 5 |
+
+<sup>3</sup> Azure Files enforces certain [naming rules](/rest/api/storageservices/naming-and-referencing-shares--directories--files--and-metadata#directory-and-file-names) for directory and file names.
### File scale targets

File scale targets apply to individual files stored in Azure file shares.
-| Attribute | Files in standard file shares | Files in premium file shares |
+| Attribute | SSD provisioned v1 | HDD provisioned v2 | HDD pay-as-you-go |
|-|-|-|-|
-| Maximum file size | 4 TiB | 4 TiB |
-| Maximum concurrent request rate | 1,000 IOPS | Up to 8,000<sup>1</sup> |
-| Maximum ingress for a file | 60 MiB/sec | 200 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup> |
-| Maximum egress for a file | 60 MiB/sec | 300 MiB/sec (Up to 1 GiB/s with SMB Multichannel)<sup>2</sup> |
-| Maximum concurrent handles for root directory<sup>3</sup> | 10,000 handles | 10,000 handles |
-| Maximum concurrent handles per file and directory<sup>3</sup> | 2,000 handles | 2,000 handles |
-
-<sup>1 Applies to read and write I/Os (typically smaller I/O sizes less than or equal to 64 KiB). Metadata operations, other than reads and writes, may be lower. These are soft limits, and throttling can occur beyond these limits.</sup>
-
-<sup>2 Subject to machine network limits, available bandwidth, I/O sizes, queue depth, and other factors. For details see [SMB Multichannel performance](./smb-performance.md).</sup>
-
-<sup>3 Azure Files supports 10,000 open handles on the root directory and 2,000 open handles per file and directory within the share. The number of active users supported per share is dependent on the applications that are accessing the share. If your applications aren't opening a handle on the root directory, Azure Files can support more than 10,000 active users per share. However, if you're using Azure Files to store disk images for large-scale virtual desktop workloads, you might run out of handles for the root directory or per file/directory. In this case, you might need to use multiple Azure file shares. For more information, see [Azure Files sizing guidance for Azure Virtual Desktop](#azure-files-sizing-guidance-for-azure-virtual-desktop).</sup>
+| Maximum file size | 4 TiB | 4 TiB | 4 TiB |
+| Maximum data IOPS per file | 8,000 IOPS | 1,000 IOPS | 1,000 IOPS |
+| Maximum throughput per file | 1,024 MiB / sec | 60 MiB / sec | 60 MiB / sec |
+| Maximum concurrent handles for root directory | 10,000 handles | 10,000 handles | 10,000 handles |
+| Maximum concurrent handles per file and directory | 2,000 handles | 2,000 handles | 2,000 handles |
### Azure Files sizing guidance for Azure Virtual Desktop
storage Storage How To Create File Share https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-how-to-create-file-share.md
Follow these instructions to create a new Azure file share using the Azure porta
- **Tier**: The selected tier for a standard file share. This field is only available in a **general purpose (GPv2)** storage account type. You can choose transaction optimized, hot, or cool. You can change the share's tier at any time. We recommend picking the **Transaction optimized** tier during a migration to minimize transaction expenses, and then switching to a lower tier if desired after the migration is complete.
- - **Provisioned capacity**: For premium file shares only, the provisioned capacity is the amount that you'll be billed for regardless of actual usage. This field is only available in a **FileStorage** storage account type. The IOPS and throughput available on a premium file share are based on the provisioned capacity, so you can provision more capacity to get more performance. The minimum size for a premium file share is 100 GiB. For more information on how to plan for a premium file share, see [provisioning premium file shares](understanding-billing.md#provisioned-model).
+ - **Provisioned capacity**: For premium file shares only, the provisioned capacity is the amount that you'll be billed for regardless of actual usage. This field is only available in a **FileStorage** storage account type. The IOPS and throughput available on a premium file share are based on the provisioned capacity, so you can provision more capacity to get more performance. The minimum size for a premium file share is 100 GiB. For more information on how to plan for a premium file share, see [provisioning premium file shares](understanding-billing.md#provisioned-v1-model).
1. Select the **Backup** tab. By default, [backup is enabled](../../backup/backup-azure-files.md) when you create an Azure file share using the Azure portal. If you want to disable backup for the file share, uncheck the **Enable backup** checkbox. If you want backup enabled, you can either leave the defaults or create a new Recovery Services Vault in the same region and subscription as the storage account. To create a new backup policy, select **Create a new policy**.
storage Understand Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understand-performance.md
The following table summarizes the expected performance targets between standard
| Write latency (single-digit milliseconds) | Yes | Yes |
| Read latency (single-digit milliseconds) | No | Yes |
-Premium file shares offer a provisioning model that guarantees the following performance profile based on share size. For more information, see [Provisioned model](understanding-billing.md#provisioned-model). Burst credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. Earned credits are used later to enable bursting when operations would exceed the baseline IOPS.
+Premium file shares offer a provisioning model that guarantees the following performance profile based on share size. For more information, see the [provisioned v1 model](understanding-billing.md#provisioned-v1-model). Burst credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. Earned credits are used later to enable bursting when operations would exceed the baseline IOPS.
| **Capacity (GiB)** | **Baseline IOPS** | **Burst IOPS** | **Burst credits** | **Throughput (ingress + egress)** |
|-|-|-|-|-|
Whether you're assessing performance requirements for a new or existing workload
- **IOPS and throughput requirements:** Premium file shares support larger IOPS and throughput limits than standard file shares. See [file share scale targets](./storage-files-scale-targets.md#azure-file-share-scale-targets) for more information. -- **Workload duration and frequency:** Short (minutes) and infrequent (hourly) workloads will be less likely to achieve the upper performance limits of standard file shares compared to long-running, frequently occurring workloads. On premium file shares, workload duration is helpful when determining the correct performance profile to use based on the provisioning size. Depending on how long the workload needs to [burst](understanding-billing.md#bursting) for and how long it spends below the baseline IOPS, you can determine if you're accumulating enough bursting credits to consistently satisfy your workload at peak times. Finding the right balance will reduce costs compared to over-provisioning the file share. A common mistake is to run performance tests for only a few minutes, which is often misleading. To get a realistic view of performance, be sure to test at a sufficiently high frequency and duration.
+- **Workload duration and frequency:** Short (minutes) and infrequent (hourly) workloads will be less likely to achieve the upper performance limits of standard file shares compared to long-running, frequently occurring workloads. On premium file shares, workload duration is helpful when determining the correct performance profile to use based on the provisioning size. Depending on how long the workload needs to [burst](understanding-billing.md#provisioned-v1-bursting) for and how long it spends below the baseline IOPS, you can determine if you're accumulating enough bursting credits to consistently satisfy your workload at peak times. Finding the right balance will reduce costs compared to over-provisioning the file share. A common mistake is to run performance tests for only a few minutes, which is often misleading. To get a realistic view of performance, be sure to test at a sufficiently high frequency and duration.
- **Workload parallelization:** For workloads that perform operations in parallel, such as through multiple threads, processes, or application instances on the same client, premium file shares provide a clear advantage over standard file shares: SMB Multichannel. See [Improve SMB Azure file share performance](smb-performance.md) for more information.
storage Understanding Billing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/understanding-billing.md
Title: Understand Azure Files billing
-description: Learn how to interpret the provisioned and pay-as-you-go billing models for SMB and NFS Azure file shares. Understand total cost of ownership, storage reservations, and burst credits.
+description: Learn how to interpret the provisioned and pay-as-you-go billing models for Azure Files. Understand total cost of ownership, storage reservations, and burst credits.
Previously updated : 05/13/2024 Last updated : 10/08/2024 + # Understand Azure Files billing models
+Azure Files supports two different media tiers of storage, SSD and HDD, which allow you to tailor your file shares to the performance and price requirements of your scenario:
-Azure Files provides two distinct billing models: provisioned and pay-as-you-go. The provisioned model is only available for premium file shares, which are file shares deployed in the **FileStorage** storage account kind. The pay-as-you-go model is only available for standard file shares, which are file shares deployed in the **general purpose version 2 (GPv2)** storage account kind. This article explains how both models work to help you understand your monthly Azure Files bill.
+- **SSD (premium)**: file shares hosted on solid-state drives (SSDs) provide consistent high performance and low latency, within single-digit milliseconds for most IO operations.
+- **HDD (standard)**: file shares hosted on hard disk drives (HDDs) provide cost-effective storage for general purpose use.
- :::column:::
- > [!VIDEO https://www.youtube-nocookie.com/embed/m5_-GsKv4-o]
- :::column-end:::
- :::column:::
- This video is an interview that discusses the basics of the Azure Files billing model. It covers how to optimize costs for Azure file shares, and how to compare Azure Files to other file storage offerings on-premises and in the cloud.
- :::column-end:::
+Azure Files has multiple pricing models including provisioned and pay-as-you-go options:
-For Azure Files pricing information, see [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
+- **Provisioned billing models**: In a provisioned billing model, the primary costs of the file share are based on the amount of storage, IOPS (input and output operations per second), and throughput you provision when you create or update your file share, regardless of how much you use. Azure Files has two different provisioned models: *provisioned v2* and *provisioned v1*.
+ - **Provisioned v2**: In the provisioned v2 model, you provision storage, IOPS, and throughput independently, although we provide recommendations to help you with first-time provisioning.
+ - **Provisioned v1**: In the provisioned v1 model, you provision the amount of storage you need for the share while IOPS and throughput are determined based on how much storage you provision. The provisioned v1 model for Azure Files is only available for SSD file shares.
+
+- **Pay-as-you-go billing model**: In a pay-as-you-go model, the cost of the file share is based on how much you use the share, in the form of used storage, transaction, and data transfer costs. The pay-as-you-go model for Azure Files is only available for HDD file shares. We recommend using the provisioned v2 model for new HDD file share deployments.
+
+This article explains how the billing models for Azure Files work to help you understand your monthly Azure Files bill. For Azure Files pricing information, see the [Azure Files pricing page](https://azure.microsoft.com/pricing/details/storage/files/).
## Applies to
-| File share type | SMB | NFS |
-|-|:-:|:-:|
-| Standard file shares (GPv2), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Standard file shares (GPv2), GRS/GZRS | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
-| Premium file shares (FileStorage), LRS/ZRS | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Management model | Billing model | Media tier | Redundancy | SMB | NFS |
+|-|-|-|-|:-:|:-:|
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Local (LRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Zone (ZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | Geo (GRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v2 | HDD (standard) | GeoZone (GZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Local (LRS) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png) |
+| Microsoft.Storage | Provisioned v1 | SSD (premium) | Zone (ZRS) | ![Yes](../media/icons/yes-icon.png) | ![Yes](../media/icons/yes-icon.png)|
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Local (LRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Zone (ZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | Geo (GRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
+| Microsoft.Storage | Pay-as-you-go | HDD (standard) | GeoZone (GZRS) | ![Yes](../media/icons/yes-icon.png) | ![No](../media/icons/no-icon.png) |
## Storage units
Azure Files uses the base-2 units of measurement to represent storage capacity: KiB, MiB, GiB, and TiB.

| Acronym | Definition | Unit |
Azure Files uses the base-2 units of measurement to represent storage capacity:
| GiB | 1024 MiB (1,073,741,824 bytes) | gibibyte |
| TiB | 1024 GiB (1,099,511,627,776 bytes) | tebibyte |
-Although the base-2 units of measure are commonly used by most operating systems and tools to measure storage quantities, they're frequently mislabeled as the base-10 units, which you might be more familiar with: KB, MB, GB, and TB. Although the reasons for the mislabeling vary, the common reason why operating systems like Windows mislabel the storage units is because many operating systems began using these acronyms before they were standardized by the IEC, BIPM, and NIST.
+Although the base-2 units of measure are commonly used by most operating systems and tools to measure storage quantities, they're frequently mislabeled as the base-10 units, which you might be more familiar with: KB, MB, GB, and TB. Although the reasons for the mislabeling vary, the common reason why operating systems like Windows mislabel the storage units is because many operating systems began using these acronyms before they were standardized by the IEC (International Electrotechnical Commission), BIPM (International Bureau of Weights and Measures), and NIST (US National Institute of Standards and Technology).
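As a quick illustration of the base-2 versus base-10 difference, here's a minimal Python sketch; the constants and the example "1 TB" drive are only illustrative:

```python
# Base-2 storage units used by Azure Files.
KIB, MIB, GIB, TIB = 2**10, 2**20, 2**30, 2**40

print(TIB)           # 1099511627776 bytes in one TiB
print(10**12 / TIB)  # ~0.909: a "1 TB" (base-10) quantity is only about 0.91 TiB
```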
The following table shows how common operating systems measure and label storage:
The following table shows how common operating systems measure and label storage
Check with your operating system vendor if your operating system isn't listed.

## File share total cost of ownership checklist
If you're migrating to Azure Files from on-premises or comparing Azure Files to other cloud storage solutions, you should consider the following factors to ensure a fair, apples-to-apples comparison:
-- **How do you pay for storage, IOPS, and bandwidth?** With Azure Files, the billing model you use depends on whether you're deploying [premium](#provisioned-model) or [standard](#pay-as-you-go-model) file shares. Most cloud solutions have models that align with the principles of either provisioned storage, such as price determinism and simplicity, or pay-as-you-go storage, which can optimize costs by only charging you for what you actually use. Of particular interest for provisioned models are minimum provisioned share size, the provisioning unit, and the ability to increase and decrease provisioning.
+- **How do you pay for storage, IOPS, and bandwidth?** Most cloud solutions have models that align with the principles of either provisioned storage, such as price determinism and simplicity, or pay-as-you-go storage, which can optimize costs by only charging you for what you actually use. Of particular interest for provisioned models are minimum provisioned share size, the provisioning unit, and the ability to increase and decrease provisioning.
- **Are there any methods to optimize storage costs?** You can use [Azure Files Reservations](#reservations) to achieve an up to 36% discount on storage. Other solutions might employ strategies like deduplication or compression to optionally optimize storage efficiency. However, these storage optimization strategies often have non-monetary costs, such as reducing performance. Azure Files Reservations have no side effects on performance. - **How do you achieve storage resiliency and redundancy?** With Azure Files, storage resiliency and redundancy are included in the product offering. All tiers and redundancy levels ensure that data is highly available and at least three copies of your data are accessible. When considering other file storage options, consider whether storage resiliency and redundancy is built in or something you must assemble yourself. -- **What do you need to manage?** With Azure Files, the basic unit of management is a storage account. Other solutions might require additional management, such as operating system updates or virtual resource management such as VMs, disks, and network IP addresses.
+- **What do you need to manage?** With Azure Files, the basic unit of management is a storage account. Other solutions might require extra management, such as operating system updates or virtual resource management such as VMs, disks, and network IP addresses.
- **What are the costs of value-added products?** Azure Files supports integrations with multiple first- and third-party [value-added services](#value-added-services). Value-added services such as Azure Backup, Azure File Sync, and Microsoft Defender for Storage provide backup, replication and caching, and security functionality for Azure Files. Value-added solutions, whether on-premises or in the cloud, have their own licensing and product costs, but are often considered part of the total cost of ownership for file storage.
-## Reservations
+## Provisioned v2 model
+The provisioned v2 model for Azure Files pairs predictability of total cost of ownership with flexibility, allowing you to create a file share that meets your exact storage and performance requirements. When you create a new provisioned v2 file share, you specify how much storage, IOPS, and throughput your file share needs. The amount of each quantity that you provision determines your total bill.
-Azure Files supports reservations (also referred to as *reserved instances*), which enable you to achieve a discount on storage by pre-committing to storage utilization. You should consider purchasing reserved instances for any production workload, or dev/test workloads with consistent footprints. When you purchase a Reservation, you must specify the following dimensions:
+The amount of storage, IOPS, and throughput you provision are the guaranteed limits of your file share's usage. For example, if you provision a 2 TiB share and upload 2 TiB of data to your share, your share will be full and you will not be able to add more data unless you increase the size of your share, or delete some of the data. Credit-based IOPS bursting provides added flexibility around usage, on a best-effort basis, while credits remain.
-- **Capacity size**: Reservations can be for either 10 TiB or 100 TiB, with more significant discounts for purchasing a higher capacity Reservation. You can purchase multiple Reservations, including Reservations of different capacity sizes to meet your workload requirements. For example, if your production deployment has 120 TiB of file shares, you could purchase one 100 TiB Reservation and two 10 TiB Reservations to meet the total storage capacity requirements.-- **Term**: You can purchase reservations for either a one-year or three-year term, with more significant discounts for purchasing a longer Reservation term.-- **Tier**: The tier of Azure Files for the Reservation. Reservations currently are available for the premium, hot, and cool tiers.-- **Location**: The Azure region for the Reservation. Reservations are available in a subset of Azure regions.-- **Redundancy**: The storage redundancy for the Reservation. Reservations are supported for all redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.-- **Billing frequency**: Indicates how often the account is billed for the Reservation. Options include *Monthly* or *Upfront*.
+The amount of storage, IOPS, and throughput you provision can be dynamically scaled up or down as your needs change. However, you can decrease a provisioned quantity only after 24 hours have elapsed since your last increase of that quantity. Storage, IOPS, and throughput changes are effective within a few minutes after a provisioning change.
-Once you purchase a Reservation, it will automatically be consumed by your existing storage utilization. If you use more storage than you have reserved, you'll pay list price for the balance not covered by the Reservation. Transaction, bandwidth, data transfer, and metadata storage charges aren't included in the Reservation.
+By default, when you create a new file share using the provisioned v2 model, we provide a recommendation for how many IOPS and how much throughput you need based on the amount of provisioned storage you specify. These recommendations are based on typical customer usage for that amount of provisioned storage on that media tier, but your workload might need more or less than the "typical file share", so you can optionally provision IOPS and throughput to match your individual file share requirements.
-There are differences in how Reservations work with Azure file share snapshots for standard and premium file shares. If you're taking snapshots of standard file shares, then the snapshot differentials count against the Reservation and are billed as part of the normal used storage meter. However, if you're taking snapshots of premium file shares, then the snapshots are billed using a separate meter and don't count against the Reservation. For more information, see [Snapshots](#snapshots).
+### Availability
+The provisioned v2 model is provided for file shares in storage accounts with the *FileStorage* storage account kind. At present, the following subset of storage account SKUs is available:
-For more information on how to purchase Reservations, see [Optimize costs for Azure Files with Reservations](files-reserve-capacity.md).
+| Storage account kind | Storage account SKU | Type of file share available |
+|-|-|-|
+| FileStorage | StandardV2_LRS | HDD provisioned v2 file shares with the Local (LRS) redundancy specified. |
+| FileStorage | StandardV2_ZRS | HDD provisioned v2 file shares with the Zone (ZRS) redundancy specified. |
+| FileStorage | StandardV2_GRS | HDD provisioned v2 file shares with the Geo (GRS) redundancy specified. |
+| FileStorage | StandardV2_GZRS | HDD provisioned v2 file shares with the GeoZone (GZRS) redundancy specified. |
+
+Currently, these SKUs are generally available in a limited subset of regions:
+
+- France Central
+- France South
+- Australia East
+- Australia Southeast
+- East Asia
+- Southeast Asia
-## Provisioned model
+### Provisioning detail
+When you create a provisioned v2 file share, you specify the provisioned capacity for the file share in terms of storage, IOPS, and throughput. File shares are limited based on the following attributes:
+
+| Item | HDD value |
+|-|-|
+| Storage provisioning unit | 1 GiB |
+| IOPS provisioning unit | 1 IO / sec |
+| Throughput provisioning unit | 1 MiB / sec |
+| Minimum provisioned storage per file share | 32 GiB |
+| Minimum provisioned IOPS per file share | 500 IOPS |
+| Minimum provisioned throughput per file share | 60 MiB / sec |
+| Maximum provisioned storage per file share | 256 TiB (262,144 GiB) |
+| Maximum provisioned IOPS per file share | 50,000 IOPS |
+| Maximum provisioned throughput per file share | 5,120 MiB / sec |
+| Maximum provisioned storage per storage account | 4 PiB (4,194,304 GiB) |
+| Maximum provisioned IOPS per storage account | 50,000 IOPS |
+| Maximum provisioned throughput per storage account | 5,120 MiB / sec |
+| Maximum number of file shares per storage account | 50 file shares |
+
+By default, we recommend IOPS and throughput provisioning based on the provisioned storage you specify. These recommendation formulas are based on typical customer usage for that amount of provisioned storage for that media tier in Azure Files:
+
+| Formula name | HDD formula |
+|-|-|
+| IOPS recommendation | `MIN(MAX(1000 + 0.2 * ProvisionedStorageGiB, 500), 50000)` |
+| Throughput recommendation | `MIN(MAX(60 + 0.02 * ProvisionedStorageGiB, 60), 5120)` |
+
+Depending on your individual file share requirements, you may find that you require more or less IOPS or throughput than our recommendations, and can optionally override these recommendations with your own values as desired.
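To make the recommendation formulas concrete, here's a minimal Python sketch that evaluates them for a given provisioned size. The function names and the example size are illustrative, and any rounding of fractional results is an assumption rather than documented service behavior:

```python
def recommended_iops(provisioned_storage_gib: float) -> float:
    # Default IOPS recommendation for an HDD provisioned v2 share.
    return min(max(1000 + 0.2 * provisioned_storage_gib, 500), 50000)

def recommended_throughput_mibps(provisioned_storage_gib: float) -> float:
    # Default throughput (MiB / sec) recommendation for an HDD provisioned v2 share.
    return min(max(60 + 0.02 * provisioned_storage_gib, 60), 5120)

# Example: a 10,240 GiB (10 TiB) share.
print(recommended_iops(10240))              # 3048.0
print(recommended_throughput_mibps(10240))  # 264.8 (actual rounding isn't specified here)
```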
+
+### Provisioned v2 bursting
+Credit-based IOPS bursting provides added flexibility around IOPS usage. This flexibility is best used as a buffer against unanticipated IO-spikes. For established IO patterns, we recommend provisioning for IO peaks.
+
+Burst IOPS credits accumulate whenever traffic for your file share is below the provisioned (baseline) IOPS. Whenever a file share's IOPS usage exceeds the provisioned IOPS and burst IOPS credits are available, the file share can burst up to the maximum allowed burst IOPS limit. File shares can continue to burst as long as credits remain. Each IO beyond the provisioned IOPS consumes one credit, and once all credits are consumed, the share returns to the provisioned IOPS. You don't have to do anything special to use bursting; it happens automatically and operates on a best-effort basis.
+
+Share credits have three states:
-Azure Files uses a provisioned model for premium file shares. In a provisioned billing model, you proactively specify what your storage requirements are, rather than being billed based on what you use. A provisioned model for storage is similar to buying an on-premises storage solution because when you provision an Azure file share with a certain amount of storage capacity, you pay for that storage capacity regardless of whether you use it or not. Unlike purchasing physical media on-premises, provisioned file shares can be dynamically scaled up or down depending on your storage and IO performance characteristics.
+- Accruing, when the file share is using less than the provisioned IOPS.
+- Declining, when the file share is using more than the provisioned IOPS and in the bursting mode.
+- Constant, when the file share is using exactly the provisioned IOPS and no credits are being accrued or used.
-You can increase the provisioned size of the file share at any time, but you can decrease it only when 24 hours has elapsed since the last increase. After waiting for 24 hours without a quota increase, you can decrease the share quota as many times as you like, until you increase it again. IOPS/throughput scale changes will be effective within a few minutes after the provisioned size change.
+A new file share starts with the full number of credits in its burst bucket. Burst credits don't accrue if the share IOPS fall below the provisioned limit due to throttling by the server. The following formulas are used to determine the burst IOPS limit and the number of credits possible for a file share:
+
+| Item | HDD formula |
+|-|-|
+| Burst IOPS limit | `MIN(MAX(3 * ProvisionedIOPS, 5000), 50000)` |
+| Burst IOPS credits | `(BurstLimit - ProvisionedIOPS) * 3600` |
+
+The following table illustrates a few examples of these formulas for various provisioned IOPS amounts:
+
+| Provisioned IOPS | HDD burst IOPS limit | HDD burst credits |
+|-|-|-|
+| 500 | Up to 5,000 | 16,200,000 |
+| 1,000 | Up to 5,000 | 14,400,000 |
+| 3,000 | Up to 9,000 | 21,600,000 |
+| 5,000 | Up to 15,000 | 36,000,000 |
+| 10,000 | Up to 30,000 | 72,000,000 |
+| 25,000 | Up to 50,000 | 90,000,000 |
+| 50,000 | Up to 50,000 | 0 |
+
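As a rough illustration, the following Python sketch evaluates the burst formulas above and reproduces the rows of the preceding table; the function names are ours, not an Azure API:

```python
def burst_iops_limit(provisioned_iops: int) -> int:
    # Maximum burst IOPS for an HDD provisioned v2 share.
    return min(max(3 * provisioned_iops, 5000), 50000)

def burst_iops_credits(provisioned_iops: int) -> int:
    # Size of the burst credit bucket; one credit is consumed per IO above provisioned IOPS.
    return (burst_iops_limit(provisioned_iops) - provisioned_iops) * 3600

for p in (500, 1000, 3000, 5000, 10000, 25000, 50000):
    print(p, burst_iops_limit(p), burst_iops_credits(p))
# 500 -> 5000, 16200000 ... 50000 -> 50000, 0 (matching the table above)
```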
+### Provisioned v2 snapshots
+Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server. For more information on share snapshots, see [Overview of snapshots for Azure Files](storage-snapshots-files.md).
+
+Snapshots are always differential from the live share and from each other. In the provisioned v2 billing model, if the total differential size of all snapshots fits within the excess provisioned storage space of the file share, there is no extra cost for snapshot storage. If the size of the live share data plus the differential snapshot data is greater than the provisioned storage of the share, the excess used capacity of the snapshots is billed against the **Overflow Snapshot Usage** meter. The formula for determining the amount of overflow is: `MAX((LiveShareUsedGiB + SnapshotDifferentialUsedGiB) - ProvisionedStorageGiB, 0)`
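A minimal sketch of the overflow calculation, using hypothetical usage figures rather than real billing data:

```python
def overflow_snapshot_usage_gib(live_share_used_gib: float,
                                snapshot_differential_used_gib: float,
                                provisioned_storage_gib: float) -> float:
    # GiB billed against the Overflow Snapshot Usage meter.
    return max((live_share_used_gib + snapshot_differential_used_gib)
               - provisioned_storage_gib, 0)

# 1,800 GiB of live data plus 500 GiB of snapshot differentials on a
# 2,048 GiB provisioned share overflows by 252 GiB.
print(overflow_snapshot_usage_gib(1800, 500, 2048))  # 252
```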
+
+Some value-added services for Azure Files use snapshots as part of their value proposition. See [value-added services for Azure Files](#value-added-services) for more information.
+
+### Provisioned v2 soft-delete
+Deleted file shares in storage accounts with soft-delete enabled are billed based on the used storage capacity of the deleted share for the duration of the soft-delete period. To ensure that a deleted file share can always be restored, the provisioned storage, IOPS, and throughput of the share count against the storage account's limits until the file share is purged; however, they aren't billed. For more information on soft-delete, see [How to enable soft delete on Azure file shares](storage-files-enable-soft-delete.md).
+
+### Provisioned v2 billing meters
+File shares provisioned using the provisioned v2 billing model are billed against the following five billing meters:
+
+- **Provisioned Storage**: The amount of storage provisioned in GiB.
+- **Provisioned IOPS**: The amount of IOPS (IO / sec) provisioned.
+- **Provisioned Throughput MiBPS**: The amount of throughput provisioned in MiB / sec.
+- **Overflow Snapshot Usage**: Any amount of differential snapshot usage in GiB that does not fit within the provisioned storage capacity. See [provisioned v2 snapshots](#provisioned-v2-snapshots) for more information.
+- **Soft-Deleted Usage**: Used storage capacity in GiB for soft-deleted file shares. See [provisioned v2 soft-delete](#provisioned-v2-soft-delete) for more information.
+
+Consumption against the provisioned v2 billing meters is emitted hourly in terms of hourly units. For example, for a share with 1,024 GiB provisioned, you should see the following (a worked calculation appears after this list):
+
+- 1,024 units against the **Provisioned Storage** meter for an individual hour.
+- 24,576 units against the **Provisioned Storage** meter if aggregated for a day.
+- A variable number of units if aggregated for a month depending on the number of days in the month:
+ - 28 day month (normal February): 688,128 units against the **Provisioned Storage** meter.
+ - 29 day month (leap year February): 712,704 units against the **Provisioned Storage** meter.
+ - 30 day month: 737,280 units against the **Provisioned Storage** meter.
+ - 31 day month: 761,856 units against the **Provisioned Storage** meter.
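Here's a quick worked calculation of the hourly-unit arithmetic above, purely for illustration:

```python
def provisioned_storage_units(provisioned_gib: int, hours: int) -> int:
    # Provisioned v2 meters emit one unit per provisioned GiB per hour.
    return provisioned_gib * hours

share_gib = 1024
print(provisioned_storage_units(share_gib, 1))        # 1,024 for a single hour
print(provisioned_storage_units(share_gib, 24))       # 24,576 for a day
print(provisioned_storage_units(share_gib, 30 * 24))  # 737,280 for a 30-day month
```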
+
+## Provisioned v1 model
+The provisioned v1 model provides storage, IOPS, and throughput in a fixed ratio to each other, similar to how storage is purchased in an on-premises storage solution. When you create a new provisioned v1 file share, you specify how much storage your share needs; IOPS and throughput are computed from that value. The provisioned v1 model for Azure Files is only available for SSD file shares.
+
+The amount of storage you provision determines the guaranteed storage, IOPS, and throughput limits of your file share's usage. For example, if you provision a 2 TiB share and upload 2 TiB of data to your share, your share will be full and you will not be able to add more data unless you increase the size of your share, or delete some of the data. Credit-based IOPS bursting provides added flexibility around usage, on a best-effort basis, while credits remain.
+
+Unlike storage purchased on-premises, provisioned v1 file shares can be dynamically scaled up or down as your needs change. However, you can decrease the provisioned storage only after 24 hours have elapsed since your last storage increase. Storage, IOPS, and throughput changes are effective within a few minutes after a provisioning change.
It's possible to decrease the size of your provisioned share below your used GiB. If you do, you won't lose data, but you'll still be billed for the size used and receive the performance of the provisioned share, not the size used.
-### Provisioning method
+### Availability
+The provisioned v1 model is provided for SSD file shares in storage accounts with the *FileStorage* storage account kind:
+
+| Storage account kind | Storage account SKU | Type of file share available |
+|-|-|-|
+| FileStorage | Premium_LRS | SSD provisioned v1 file share with the Local (LRS) redundancy specified. |
+| FileStorage | Premium_ZRS | SSD provisioned v1 file share with the Zone (ZRS) redundancy specified. |
-When you provision a premium file share, you specify how many GiBs your workload requires. Each GiB that you provision entitles you to more IOPS and throughput on a fixed ratio. In addition to the baseline IOPS that you're guaranteed, each premium file share supports bursting on a best-effort basis. The formulas for IOPS and throughput are as follows:
+SSD file shares using the provisioned v1 model are generally available in most Azure regions. See [Azure products by region](https://azure.microsoft.com/explore/global-infrastructure/products-by-region) for more information.
+
+### Provisioning detail
+When you create a provisioned v1 file share, you specify how much storage your share needs. Each GiB that you provision entitles you to more IOPS and throughput in a fixed ratio. File shares are limited based on the following attributes:
| Item | Value |
|-|-|
-| Minimum size of a file share | 100 GiB |
-| Provisioning unit | 1 GiB |
-| Baseline IOPS formula | `MIN(3000 + 1 * ProvisionedStorageGiB, 102400)` |
-| Burst limit | `MIN(MAX(10000, 3 * ProvisionedStorageGiB), 102400)` |
+| Storage provisioning unit | 1 GiB |
+| Minimum provisioned storage per file share | 100 GiB |
+| Maximum provisioned storage per file share | 100 TiB (102,400 GiB) |
+| Maximum provisioned storage per storage account | 100 TiB (102,400 GiB) |
+
+The amount of IOPS and throughput provisioned on the share are determined by the following formulas:
+
+| Item | Formula |
+|-|-|
+| Computed provisioned (baseline) IOPS | `MIN(3000 + 1 * ProvisionedStorageGiB, 102400)` |
+| Computed provisioned throughput (MiB / sec) | `100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB)` |
+
+Depending on your individual file share requirements, you may find that you require more IOPS or throughput than our provisioning formulas provide. In this case, you need to provision more storage to get the required IOPS or throughput.
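For illustration, this small Python sketch evaluates the provisioned v1 formulas above; the function names and the example size are ours:

```python
import math

def v1_baseline_iops(provisioned_storage_gib: int) -> int:
    # Computed provisioned (baseline) IOPS for an SSD provisioned v1 share.
    return min(3000 + 1 * provisioned_storage_gib, 102400)

def v1_throughput_mibps(provisioned_storage_gib: int) -> int:
    # Computed provisioned throughput (MiB / sec) for an SSD provisioned v1 share.
    return (100
            + math.ceil(0.04 * provisioned_storage_gib)
            + math.ceil(0.06 * provisioned_storage_gib))

# Example: a 2,048 GiB share gets 5,048 baseline IOPS and 305 MiB / sec.
print(v1_baseline_iops(2048), v1_throughput_mibps(2048))  # 5048 305
```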
+
+### Provisioned v1 bursting
+Credit-based IOPS bursting provides added flexibility around IOPS usage. This flexibility is best used as a buffer against unanticipated IO-spikes. For established IO patterns, we recommend provisioning for IO peaks.
+
+Burst IOPS credits accumulate whenever traffic for your file share is below the provisioned (baseline) IOPS. Whenever a file share's IOPS usage exceeds the provisioned IOPS and burst IOPS credits are available, the file share can burst up to the maximum allowed burst IOPS limit. File shares can continue to burst as long as credits remain. Each IO beyond the provisioned IOPS consumes one credit, and once all credits are consumed, the share returns to the provisioned IOPS. You don't have to do anything special to use bursting; it happens automatically and operates on a best-effort basis.
+
+Share credits have three states:
+
+- Accruing, when the file share is using less than the provisioned IOPS.
+- Declining, when the file share is using more than the provisioned IOPS and in the bursting mode.
+- Constant, when the file share is using exactly the provisioned IOPS and no credits are being accrued or used.
+
+A new file share starts with the full number of credits in its burst bucket. Burst credits don't accrue if the share IOPS fall below the provisioned limit due to throttling by the server. The following formulas are used to determine the burst IOPS limit and the number of credits possible for a file share:
+
+| Item | Formula |
+|-|-|
+| Burst limit | `MIN(MAX(3 * ProvisionedStorageGiB, 10000), 102400)` |
| Burst credits | `(BurstLimit - BaselineIOPS) * 3600` |
-| Throughput rate (ingress + egress) (MiB/sec) | `100 + CEILING(0.04 * ProvisionedStorageGiB) + CEILING(0.06 * ProvisionedStorageGiB)` |
-The following table illustrates a few examples of these formulae for the provisioned share sizes:
+The following table illustrates a few examples of these formulas for the provisioned share sizes:
| Capacity (GiB) | Baseline IOPS | Burst IOPS | Burst credits | Throughput (ingress + egress) (MiB/sec) |
|-|-|-|-|-|
The following table illustrates a few examples of these formulae for the provisi
| 51,200 | 54,200 | Up to 102,400 | 173,520,000 | 5,220 |
| 102,400 | 102,400 | Up to 102,400 | 0 | 10,340 |
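The following sketch reproduces the last two rows of the table from the baseline and burst formulas above; the function names are illustrative only:

```python
def v1_baseline_iops(provisioned_storage_gib: int) -> int:
    return min(3000 + 1 * provisioned_storage_gib, 102400)

def v1_burst_limit(provisioned_storage_gib: int) -> int:
    return min(max(3 * provisioned_storage_gib, 10000), 102400)

def v1_burst_credits(provisioned_storage_gib: int) -> int:
    # One burst credit is consumed per IO above baseline IOPS.
    return (v1_burst_limit(provisioned_storage_gib)
            - v1_baseline_iops(provisioned_storage_gib)) * 3600

print(v1_burst_limit(51200), v1_burst_credits(51200))    # 102400 173520000
print(v1_burst_limit(102400), v1_burst_credits(102400))  # 102400 0
```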
-Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, and parallelism, among many other factors. To achieve maximum benefit from parallelization, we recommend enabling [SMB Multichannel](files-smb-protocol.md#smb-multichannel) on premium file shares. Refer to [SMB performance](smb-performance.md) and [performance troubleshooting guide](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json) for some common performance issues and workarounds.
+Effective file share performance is subject to machine network limits, available network bandwidth, IO sizes, and parallelism, among many other factors. To achieve maximum benefit from parallelization, we recommend enabling [SMB Multichannel](files-smb-protocol.md#smb-multichannel) on SSD file shares. Refer to [SMB performance](smb-performance.md) and [performance troubleshooting guide](/troubleshoot/azure/azure-storage/files-troubleshoot-performance?toc=/azure/storage/files/toc.json) for some common performance issues and workarounds.
-### Bursting
+### Provisioned v1 snapshots
+Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server. For more information on share snapshots, see [Overview of snapshots for Azure Files](storage-snapshots-files.md).
-If your workload needs extra performance to meet peak demand, you can use burst credits to go above the file share's baseline IOPS limit. Bursting is automated and operates based on a credit system. It works on a best effort basis, and the burst limit isn't a guarantee.
+Snapshots are always differential from the live share and from each other. In the provisioned v1 billing model, the total differential size is billed against a usage meter, regardless of how much provisioned storage is unused. The used snapshot storage meter has a lower price than the provisioned storage meter.
-Credits accumulate in a burst bucket whenever traffic for your file share is below baseline IOPS. Earned credits are used later to enable bursting when operations would exceed the baseline IOPS.
+### Provisioned v1 soft-delete
+Deleted file shares in storage accounts with soft-delete enabled are billed based on the used storage capacity of the deleted share for the duration of the soft-delete period. The soft-deleted usage storage capacity is emitted against the used snapshot storage meter. For more information on soft-delete, see [How to enable soft delete on Azure file shares](storage-files-enable-soft-delete.md).
-Whenever a share exceeds the baseline IOPS and has credits in a burst bucket, it will burst up to the maximum allowed peak burst rate. Shares can continue to burst as long as credits are remaining, but this is based on the number of burst credits accrued. Each IO beyond baseline IOPS consumes one credit. Once all credits are consumed, the share returns to the baseline IOPS.
+### Provisioned v1 billing meters
+File shares provisioned using the provisioned v1 billing model are billed against the following two meters:
-Share credits have three states:
+- **Premium Provisioned**: The amount of storage provisioned in GiB.
+- **Premium Snapshots**: The amount of used snapshots and used soft-deleted capacity.
-- Accruing, when the file share is using less than the baseline IOPS.-- Declining, when the file share is using more than the baseline IOPS and in the bursting mode.-- Constant, when the files share is using exactly the baseline IOPS and there are either no credits accrued or used.
+Consumption against the provisioned v1 billing meters is emitted hourly in terms of monthly units. For example, for a share with 1,024 GiB provisioned, you should see the following (a worked calculation appears after this list):
-A new file share starts with the full number of credits in its burst bucket. Burst credits won't accrue if the share IOPS fall below baseline due to throttling by the server.
+- A variable number of units for an individual hour depending on the number of days in the month:
+ - 28 day month (normal February): 1.5238 units against the **Premium Provisioned** meter.
+ - 29 day month (leap year February): 1.4713 units against the **Premium Provisioned** meter.
+ - 30 day month: 1.4222 units against the **Premium Provisioned** meter.
+ - 31 day month: 1.3763 units against the **Premium Provisioned** meter.
+- A variable number of units if aggregated for a day depending on the number of days in the month:
+ - 28 day month (normal February): 36.5714 units against the **Premium Provisioned** meter.
+ - 29 day month (leap year February): 35.3103 units against the **Premium Provisioned** meter.
+ - 30 day month: 34.1333 units against the **Premium Provisioned** meter.
+ - 31 day month: 33.0323 units against the **Premium Provisioned** meter.
+- 1024 units against the **Premium Provisioned** meter if aggregated for a month.
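Here's a quick worked calculation of the monthly-unit arithmetic above, purely for illustration:

```python
def premium_provisioned_units(provisioned_gib: int, hours: int, days_in_month: int) -> float:
    # Provisioned v1 meters are emitted hourly in monthly units, so one hour
    # contributes provisioned_gib / (hours in the month) units.
    return provisioned_gib * hours / (days_in_month * 24)

share_gib = 1024
print(round(premium_provisioned_units(share_gib, 1, 30), 4))   # 1.4222 for one hour
print(round(premium_provisioned_units(share_gib, 24, 30), 4))  # 34.1333 for one day
print(premium_provisioned_units(share_gib, 30 * 24, 30))       # 1024.0 for the full month
```
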
## Pay-as-you-go model
+In the pay-as-you-go model, the amount you pay is determined by how much you use, rather than based on a provisioned amount. At a high level, you pay a cost for the amount of logical data stored, and you're also charged for transactions based on your usage of that data. The pay-as-you-go billing model can be difficult to plan for as part of a budgeting process, because the model is driven by end-user consumption. We therefore recommend using the [provisioned v2 model](#provisioned-v2-model) for new file share deployments. The pay-as-you-go model is only available for HDD file shares.
+
+### Availability
+The pay-as-you-go model is provided for HDD file shares in storage accounts with the *StorageV2* or *Storage* storage account kind:
-Azure Files uses a pay-as-you-go billing model for standard file shares. In this model, the amount you pay is determined by how much you actually use, rather than based on a provisioned amount. At a high level, you pay a cost for the amount of logical data stored, and you're also charged for transactions based on your usage of that data. A pay-as-you-go model can be cost-efficient, because you don't need to overprovision to account for future growth or performance requirements. You also don't need to deprovision if your workload and data footprint vary over time. On the other hand, a pay-as-you-go billing model can be difficult to plan as part of a budgeting process, because the model is driven by end-user consumption.
+| Storage account kind | Storage account SKU | Type of file share available |
+|-|-|-|
+| StorageV2 or Storage | Standard_LRS | HDD pay-as-you-go file share with the Local (LRS) redundancy specified. |
+| StorageV2 or Storage | Standard_ZRS | HDD pay-as-you-go file share with the Zone (ZRS) redundancy specified. |
+| StorageV2 or Storage | Standard_GRS | HDD pay-as-you-go file share with the Geo (GRS) redundancy specified. |
+| StorageV2 or Storage | Standard_GZRS | HDD pay-as-you-go file share with the GeoZone (GZRS) redundancy specified. |
-### Differences in standard tiers
+HDD file shares using the pay-as-you-go model are generally available in all Azure regions.
-When you create a standard file share, you pick between the following tiers: transaction optimized, hot, and cool. All three tiers are stored on the exact same standard storage hardware. The main difference for these three tiers is their data at-rest storage prices, which are lower in cooler tiers, and the transaction prices, which are higher in the cooler tiers. This means:
+### Differences in access tiers
+When you create an HDD file share, you pick one of the following access tiers: transaction optimized, hot, and cool. All three access tiers are stored on the exact same storage hardware. The main difference between these three access tiers is their data at-rest storage prices, which are lower in cooler tiers, and their transaction prices, which are higher in the cooler tiers. This means:
-- Transaction optimized, as the name implies, optimizes the price for high transaction workloads. Transaction optimized has the highest data at-rest storage price, but the lowest transaction prices.
+- Transaction optimized, as the name implies, optimizes the price for high IOPS (transaction) workloads. Transaction optimized has the highest data at-rest storage price, but the lowest transaction prices.
- Hot is for active workloads that don't involve a large number of transactions. It has a slightly lower data at-rest storage price, but slightly higher transaction prices as compared to transaction optimized. Think of it as the middle ground between the transaction optimized and cool tiers. - Cool optimizes the price for workloads that don't have high activity, offering the lowest data at-rest storage price, but the highest transaction prices.
-If you put an infrequently accessed workload in the transaction optimized tier, you'll pay almost nothing for the few times in a month that you make transactions against your share. However, you'll pay a high amount for the data storage costs. If you moved this same share to the cool tier, you'd still pay almost nothing for the transaction costs, simply because you're infrequently making transactions for this workload. However, the cool tier has a much cheaper data storage price. Selecting the appropriate tier for your use case allows you to considerably reduce your costs.
+If you put an infrequently accessed workload in the transaction optimized access tier, you'll pay almost nothing for the few times in a month that you make transactions against your share. However, you'll pay a high amount for the data storage costs. If you moved this same share to the cool access tier, you'd still pay almost nothing for the transaction costs, simply because you're infrequently making transactions for this workload. However, the cool access tier has a much cheaper data storage price. Selecting the appropriate access tier for your use case allows you to considerably reduce your costs.
-Similarly, if you put a highly accessed workload in the cool tier, you'll pay a lot more in transaction costs, but less for data storage costs. This can lead to a situation where the increased costs from the transaction prices increase outweigh the savings from the decreased data storage price, leading you to pay more money on cool than you would have on transaction optimized. For some usage levels, it's possible that the hot tier will be the most cost efficient, and the cool tier will be more expensive than transaction optimized.
+Similarly, if you put a highly accessed workload in the cool access tier, you'll pay a lot more in transaction costs, but less for data storage costs. This can lead to a situation where the increase in transaction costs outweighs the savings from the lower data storage price, leading you to pay more for cool than you would have for transaction optimized. For some usage levels, it's possible that the hot access tier will be the most cost efficient, and the cool access tier will be more expensive than transaction optimized.
-Your workload and activity level will determine the most cost efficient tier for your standard file share. In practice, the best way to pick the most cost efficient tier involves looking at the actual resource consumption of the share (data stored, write transactions, etc.). For standard file shares, we recommend starting in the transaction optimized tier during the initial migration into Azure Files, and then picking the correct tier based on usage after the migration is complete. Transaction usage during migration is not typically indicative of normal transaction usage.
+Your workload and activity level will determine the most cost efficient access tier for your pay-as-you-go file share. In practice, the best way to pick the most cost efficient access tier involves looking at the actual resource consumption of the share (data stored, write transactions, etc.). For pay-as-you-go file shares, we recommend starting in the transaction optimized tier during the initial migration into Azure Files, and then picking the correct access tier based on usage after the migration is complete. Transaction usage during migration is not typically indicative of normal transaction usage.
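As a rough way to reason about tier selection, the following Python sketch compares monthly costs across access tiers. The prices and workload figures are entirely hypothetical placeholders, not real Azure Files rates; substitute current values from the pricing page before drawing any conclusions:

```python
# Placeholder per-unit prices -- NOT real Azure Files prices.
TIERS = {
    #                        $/GiB-month, $/10k writes, $/10k reads
    "transaction optimized": (0.060, 0.015, 0.0015),
    "hot":                   (0.025, 0.020, 0.0025),
    "cool":                  (0.015, 0.025, 0.0030),
}

def monthly_cost(tier: str, used_gib: float, writes: int, reads: int) -> float:
    storage_price, write_price, read_price = TIERS[tier]
    return (used_gib * storage_price
            + writes / 10_000 * write_price
            + reads / 10_000 * read_price)

# Compare tiers for a 5 TiB share with a given monthly transaction mix.
for tier in TIERS:
    print(tier, round(monthly_cost(tier, 5 * 1024, 2_000_000, 10_000_000), 2))
```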
### What are transactions?
When you mount an Azure file share on a computer using SMB, the Azure file share is exposed on your computer as if it were local storage. This means that applications, scripts, and other programs on your computer can access the files and folders on the Azure file share without needing to know that they're stored in Azure. When you read or write to a file, the application you're using performs a series of API calls to the file system API provided by your operating system. Your operating system then interprets these calls into SMB protocol transactions, which are sent over the wire to Azure Files to fulfill. A task that the end user perceives as a single operation, such as reading a file from start to finish, might be translated into multiple SMB transactions served by Azure Files.
The following table shows the categorization of each transaction:
| Delete transactions | <ul><li>`DeleteShare`</li></ul> | <ul><li>`ClearRange`</li><li>`DeleteDirectory`</li><li>`DeleteFile`</li></ul> |

> [!NOTE]
-> NFS 4.1 is only available for premium file shares, which use the provisioned billing model. Transactions don't affect billing for premium file shares.
-
-### Switching between standard tiers
+> NFS 4.1 is only available for SSD file shares, which use a provisioned billing model. Transaction buckets don't affect billing for provisioned file shares.
-Although you can change a standard file share between the three standard file share tiers, the best practice to optimize costs after the initial migration is to pick the most cost optimal tier to be in, and stay there unless your access pattern changes. This is because changing the tier of a standard file share results in additional costs as follows:
+### Switching between access tiers
+Although you can move a pay-as-you-go file share between the three access tiers, the best practice to optimize costs after the initial migration is to pick the most cost-optimal access tier and stay there unless your access pattern changes. This is because changing the access tier of a pay-as-you-go file share results in additional costs, as follows:
-- Transactions: When you move a share from a hotter tier to a cooler tier, you'll incur the cooler tier's write transaction charge for each file in the share. Moving a file share from a cooler tier to a hotter tier will incur the cool tier's read transaction charge for each file in the share.
+- Transactions: When you move a share from a hotter access tier to a cooler access tier, you'll incur the cooler access tier's write transaction charge for each file in the share. Moving a file share from a cooler access tier to a hotter access tier will incur the cooler access tier's read transaction charge for each file in the share.
-- Data retrieval: If you're moving from the cool tier to hot or transaction optimized, you'll incur a data retrieval charge based on the size of data moved. Only the cool tier has a data retrieval charge.
+- Data retrieval: If you're moving from the cool access tier to hot or transaction optimized, you'll incur a data retrieval charge based on the size of data moved. Only the cool access tier has a data retrieval charge.
-The following table illustrates the cost breakdown of moving tiers:
+The following table illustrates the cost breakdown of moving access tiers:
-| Tier | Transaction optimized (destination) | Hot (destination) | Cool (destination) |
+| Access tier | Transaction optimized (destination) | Hot (destination) | Cool (destination) |
|-|-|-|-|
| **Transaction optimized (source)** | -- | <ul><li>1 hot write transaction per file.</li></ul> | <ul><li>1 cool write transaction per file.</li></ul> |
| **Hot (source)** | <ul><li>1 hot read transaction per file.</li></ul> | -- | <ul><li>1 cool write transaction per file.</li></ul> |
| **Cool (source)** | <ul><li>1 cool read transaction per file.</li><li>Data retrieval per total used GiB.</li></ul> | <ul><li>1 cool read transaction per file.</li><li>Data retrieval per total used GiB.</li></ul> | -- |
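To illustrate how these per-file charges add up, here's a small sketch using placeholder prices (not real Azure Files rates, and the helper names are ours):

```python
# Placeholder prices -- NOT real Azure Files prices.
COOL_WRITE_PER_10K = 0.025      # $ per 10,000 cool write transactions
COOL_READ_PER_10K = 0.003       # $ per 10,000 cool read transactions
COOL_RETRIEVAL_PER_GIB = 0.01   # $ per GiB retrieved from the cool tier

def cost_hot_to_cool(file_count: int) -> float:
    # Hot -> cool: one cool write transaction per file.
    return file_count / 10_000 * COOL_WRITE_PER_10K

def cost_cool_to_hot(file_count: int, used_gib: float) -> float:
    # Cool -> hot: one cool read transaction per file plus data retrieval
    # on the total used capacity.
    return file_count / 10_000 * COOL_READ_PER_10K + used_gib * COOL_RETRIEVAL_PER_GIB

print(round(cost_hot_to_cool(1_000_000), 2))        # one million files
print(round(cost_cool_to_hot(1_000_000, 2048), 2))  # one million files, 2 TiB used
```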
-Although there's no formal limit on how often you can change the tier of your file share, your share will take time to transition based on the amount of data in your share. You can't change the tier of the share while the file share is transitioning between tiers. Changing the tier of the file share doesn't impact regular file share access.
-
-Although there's no direct mechanism to move between premium and standard file shares because they're contained in different storage account types, you can use a copy tool such as robocopy to move between premium and standard file shares.
+Although there's no formal limit on how often you can change the access tier of your file share, your share will take time to transition based on the amount of data in your share. You can't change the access tier of the share while the file share is transitioning between access tiers. Changing the access tier of the file share doesn't impact regular file share access.
-### Choosing a tier
+### Choosing an access tier
+Regardless of how you migrate existing data into Azure Files, we recommend initially creating the file share in the transaction optimized access tier due to the large number of transactions incurred during migration. After your migration is complete and you've operated for a few days or weeks with regular usage, you can plug your transaction counts into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to figure out which access tier is best suited for your workload.
-Regardless of how you migrate existing data into Azure Files, we recommend initially creating the file share in transaction optimized tier due to the large number of transactions incurred during migration. After your migration is complete and you've operated for a few days or weeks with regular usage, you can plug your transaction counts into the [pricing calculator](https://azure.microsoft.com/pricing/calculator/) to figure out which tier is best suited for your workload.
-
-Because standard file shares only show transaction information at the storage account level, using the storage metrics to estimate which tier is cheaper at the file share level is an imperfect science. If possible, we recommend deploying only one file share in each storage account to ensure full visibility into billing.
+Because pay-as-you-go file shares only show transaction information at the storage account level, using the storage metrics to estimate which access tier is cheaper at the file share level is an imperfect science. If possible, we recommend deploying only one file share in each storage account to ensure full visibility into billing.
To see previous transactions:
To see previous transactions:
> [!NOTE] > Make sure you view transactions over a period of time to get a better idea of average number of transactions. Ensure that the chosen time period doesn't overlap with initial provisioning. Multiply the average number of transactions during this time period to get the estimated transactions for an entire month.
-## Provisioned/quota, logical size, and physical size
+### Pay-as-you-go snapshots
+Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server. For more information on share snapshots, see [Overview of snapshots for Azure Files](storage-snapshots-files.md).
-Azure Files tracks three distinct quantities with respect to share capacity:
+Snapshots are always differential from the live share and from each other. In the pay-as-you-go billing model, the total differential size is billed against the normal used storage meter. This means that you won't see a separate line item on your bill representing snapshots for your pay-as-you-go storage account. This also means that differential snapshot usage counts against reservations that are purchased for pay-as-you-go file shares.
-- **Provisioned size or quota**: With both premium and standard file shares, you specify the maximum size that the file share is allowed to grow to. In premium file shares, this value is called the provisioned size. Whatever amount you provision is what you pay for, regardless of how much you actually use. In standard file shares, this value is called quota and doesn't directly affect your bill. Provisioned size is a required field for premium file shares. For standard file shares, if provisioned size isn't directly specified, the share will default to the maximum value supported by the storage account (100 TiB).
+### Pay-as-you-go soft-delete
+Deleted file shares in storage accounts with soft-delete enabled are billed based on the used storage capacity of the deleted file share for the duration of the soft-delete period. The soft-deleted used storage capacity is emitted against the normal used storage meter. This means that you won't see a separate line item on your bill representing soft-deleted file shares for your pay-as-you-go storage account. This also means that soft-deleted file share usage counts against reservations that are purchased for pay-as-you-go file shares.
-- **Logical size**: The logical size of a file share or file relates to how big it is without considering how it's actually stored, where additional optimizations might be applied. The logical size of the file is how many KiB/MiB/GiB would be transferred over the wire if you copied it to a different location. In both premium and standard file shares, the total logical size of the file share is used for enforcement against provisioned size/quota. In standard file shares, the logical size is the quantity used for the data at-rest usage billing. Logical size is referred to as "size" in the Windows properties dialog for a file/folder and as "content length" by Azure Files metrics.
+### Pay-as-you-go billing meters
+File shares created using the pay-as-you-go billing model are billed against the following meters:
-- **Physical size**: The physical size of the file relates to the size of the file as encoded on disk. This might align with the file's logical size, or it might be smaller, depending on how the file has been written to by the operating system. A common reason for the logical size and physical size to be different is by using [sparse files](/windows/win32/fileio/sparse-files). The physical size of the files in the share is used for snapshot billing, although allocated ranges are shared between snapshots if they are unchanged (differential storage). To learn more about how snapshots are billed in Azure Files, see [Snapshots](#snapshots).
+- **Data Stored**: The used storage including the live shares, differential snapshots, and soft-deleted file shares in GiB.
+- **Metadata**: The size of the file system metadata associated with files and directories such as access control lists (ACLs) and other properties in GiB. This billing meter is only used for file shares in the hot or cool access tiers.
+- **Write Operations**: The number of write transaction buckets (1 bucket = 10,000 transactions).
+- **List Operations**: The number of list transaction buckets (1 bucket = 10,000 transactions).
+- **Read Operations**: The number of read transaction buckets (1 bucket = 10,000 transactions).
+- **Other Operations** / **Protocol Operations**: The number of other transaction buckets (1 bucket = 10,000 transactions).
+- **Data Retrieval**: The amount of data read from the file share in GiB. This meter is only used for file shares in the cool access tier.
+- **Geo-Replication Data Transfer**: If the file share is configured with geo-redundant (Geo) or geo-zone-redundant (GeoZone) storage, the amount of data written to the file share that is replicated to the secondary region, in GiB.
-## Snapshots
+Consumption against the **Data Stored** and **Metadata** billing meters is emitted hourly in terms of monthly units. For example, for a share with 1024 used GiB, you should see:
-Azure Files supports snapshots, which are similar to volume shadow copies (VSS) on Windows File Server. Snapshots are always differential from the live share and from each other, meaning that you're always paying only for what's different in each snapshot. For more information on share snapshots, see [Overview of snapshots for Azure Files](storage-snapshots-files.md).
+- A variable number of units for an individual hour depending on the number of days in the month:
+ - 28 day month (normal February): 1.5238 units against the **Data Stored** meter.
+ - 29 day month (leap year February): 1.4713 units against the **Data Stored** meter.
+ - 30 day month: 1.4222 units against the **Data Stored** meter.
+ - 31 day month: 1.3763 units against the **Data Stored** meter.
+- A variable number of units if aggregated for a day depending on the number of days in the month:
+ - 28 day month (normal February): 36.5714 units against the **Data Stored** meter.
+ - 29 day month (leap year February): 35.3103 units against the **Data Stored** meter.
+ - 30 day month: 34.1333 units against the **Data Stored** meter.
+ - 31 day month: 33.0323 units against the **Data Stored** meter.
+- 1024 units against the **Data Stored** meter if aggregated for a month.
-Snapshots don't count against file share size limits, although you're limited to a specific number of snapshots. To see the current snapshot limits, see [Azure file share scale targets](storage-files-scale-targets.md#azure-file-share-scale-targets).
+Consumption against the other meters (for example, **Write Operations** or **Data Retrieval**) is also emitted hourly, but because those meters aren't expressed in terms of a timeframe, there are no special unit transformations to be aware of.
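If you want to double-check the unit math above, the hourly and daily figures are just the 1,024 monthly GiB units spread evenly across the hours or days of the month. A minimal sketch:

```python
# Reproduce the Data Stored examples above: a share with 1,024 used GiB is billed
# in monthly units, emitted evenly across every hour of the month.
used_gib = 1024

for days in (28, 29, 30, 31):
    hourly = used_gib / (days * 24)   # units emitted for an individual hour
    daily = used_gib / days           # units aggregated for a day
    print(f"{days}-day month: {hourly:.4f} units/hour, {daily:.4f} units/day")

# Aggregated over the full month, the total is always 1,024 units.
```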
-Snapshots are always billed based on the differential storage utilization of each snapshot. However, this looks slightly different between premium file shares and standard file shares:
+## Provisioned/quota, logical size, and physical size
+Azure Files tracks three distinct quantities with respect to share capacity:
-- In premium file shares, snapshots are billed against their own snapshot meter, which has a reduced price over the provisioned storage price. This means that you'll see a separate line item on your bill representing snapshots for premium file shares for each FileStorage storage account on your bill.
+- **Provisioned size or quota**: With both provisioned and pay-as-you-go file shares, you specify the maximum size that the file share is allowed to grow to. In provisioned file shares, this value is called the provisioned size. Whatever amount you provision is what you pay for, regardless of how much you actually use. In pay-as-you-go file shares, this value is called quota and doesn't directly affect your bill. Provisioned size is a required field for provisioned file shares. For pay-as-you-go file shares, if provisioned size isn't directly specified, the share will default to the maximum value supported by the storage account (100 TiB).
-- In standard file shares, snapshots are billed as part of the normal used storage meter, although you're still only billed for the differential cost of the snapshot. This means that you won't see a separate line item on your bill representing snapshots for each standard storage account containing Azure file shares. This also means that differential snapshot usage counts against Reservations that are purchased for standard file shares.
+- **Logical size**: The logical size of a file share or file relates to how big it is without considering how it's actually stored, where storage optimizations might be applied. The logical size of the file is how many KiB/MiB/GiB would be transferred over the wire if you copied it to a different location. In both provisioned and pay-as-you-go file shares, the total logical size of the file share is used for enforcement against provisioned size/quota. In pay-as-you-go file shares, the logical size is the quantity used for the data at-rest usage billing. Logical size is referred to as "size" in the Windows properties dialog for a file/folder and as "content length" by Azure Files metrics.
-Some value-added services for Azure Files use snapshots as part of their value proposition. See [value-added services for Azure Files](#value-added-services) for more information.
+- **Physical size**: The physical size of the file relates to the size of the file as encoded on disk. This might align with the file's logical size, or it might be smaller, depending on how the file has been written to by the operating system. A common reason for the logical size and physical size to be different is the use of [sparse files](/windows/win32/fileio/sparse-files). The physical size of the files in the share is used for snapshot billing, although allocated ranges are shared between snapshots if they are unchanged (differential storage).
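As a quick illustration of the logical versus physical distinction, the following sketch creates a sparse file and compares the two sizes. It's an assumption-laden example: it expects a Linux or macOS filesystem where seeking past the end of a file leaves an unallocated hole, and `st_blocks` isn't available on Windows.

```python
import os

# Create a file with a ~1 GiB logical size but almost nothing allocated on disk.
# On most Linux/macOS filesystems this produces a sparse file; behavior varies
# by filesystem, and st_blocks isn't available on Windows.
path = "sparse-demo.bin"
with open(path, "wb") as f:
    f.seek(1024**3 - 1)   # seek to just under 1 GiB
    f.write(b"\0")        # write a single byte at the end

st = os.stat(path)
logical_gib = st.st_size / 1024**3             # "size" / "content length"
physical_gib = st.st_blocks * 512 / 1024**3    # blocks actually allocated on disk

print(f"logical size:  {logical_gib:.3f} GiB")
print(f"physical size: {physical_gib:.6f} GiB")
os.remove(path)
```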
## Value-added services- Like many on-premises storage solutions, Azure Files provides integration points for first- and third-party products to integrate with customer-owned file shares. Although these solutions can provide considerable extra value to Azure Files, you should consider the extra costs that these services add to the total cost of an Azure Files solution. Costs break down into three buckets: - **Licensing costs for the value-added service.** These might come in the form of a fixed cost per customer, end user (sometimes called a "head cost"), Azure file share or storage account. They might also be based on units of storage utilization, such as a fixed cost for every 500 GiB chunk of data in the file share. -- **Transaction costs for the value-added service.** Some value-added services have their own concept of transactions distinct from what Azure Files views as a transaction. These transactions will show up on your bill under the value-added service's charges; however, they relate directly to how you use the value-added service with your file share.
+- **Transaction costs for the value-added service.** Some value-added services have their own concept of transactions on top of the Azure Files billing model selected. These transactions will show up on your bill under the value-added service's charges; however, they relate directly to how you use the value-added service with your file share.
-- **Azure Files costs for using a value-added service.** Azure Files doesn't directly charge customers for adding value-added services, but as part of adding value to the Azure file share, the value-added service might increase the costs that you see on your Azure file share. This is easy to see with standard file shares, because standard file shares have a pay-as-you-go model with transaction charges. If the value-added service does transactions against the file share on your behalf, they will show up in your Azure Files transaction bill even though you didn't directly do those transactions yourself. This applies to premium file shares as well, although it might be less noticeable. Additional transactions against premium file shares from value-added services count against your provisioned IOPS numbers, meaning that value-added services might require provisioning more storage to have enough IOPS or throughput available for your workload.
+- **Azure Files costs for using a value-added service.** Azure Files doesn't directly charge customers for adding value-added services, but as part of adding value to the Azure file share, the value-added service might increase the costs that you see on your Azure file share. This is easy to see with pay-as-you-go file shares, because of transaction charges. If the value-added service does transactions against the file share on your behalf, they will show up in your Azure Files transaction bill even though you didn't directly do those transactions yourself. This applies to provisioned file shares as well, although it might be less noticeable. Transactions against provisioned file shares from value-added services count against your provisioned IOPS numbers, meaning that value-added services might require provisioning more storage to have enough IOPS or throughput available for your workload.
When computing the total cost of ownership for your file share, you should consider the costs of Azure Files and of all value-added services that you would like to use with Azure Files. There are multiple value-added first- and third-party services. This document covers a subset of the common first-party services customers use with Azure file shares. You can learn more about services not listed here by reading the pricing page for that service. ### Azure File Sync- Azure File Sync is a value-added service for Azure Files that synchronizes one or more on-premises Windows file shares with an Azure file share. Because the cloud Azure file share has a complete copy of the data in a synchronized file share that is available on-premises, you can transform your on-premises Windows File Server into a cache of the Azure file share to reduce your on-premises footprint. Learn more by reading [Introduction to Azure File Sync](../file-sync/file-sync-introduction.md). When considering the total cost of ownership for a solution deployed using Azure File Sync, you should consider the following cost aspects: [!INCLUDE [storage-file-sync-cost-categories](../../../includes/storage-file-sync-cost-categories.md)]
-To optimize costs for Azure Files with Azure File Sync, you should consider the tier of your file share. For more information on how to pick the tier for each file share, see [choosing a file share tier](#choosing-a-tier).
-
-If you're migrating to Azure File Sync from StorSimple, see [Comparing the costs of StorSimple to Azure File Sync](../file-sync/file-sync-storsimple-cost-comparison.md).
- ### Azure Backup- Azure Backup provides a serverless backup solution for Azure Files that seamlessly integrates with your file shares, and with other value-added services such as Azure File Sync. Azure Backup for Azure Files is a snapshot-based backup solution that provides a scheduling mechanism for automatically taking snapshots on an administrator-defined schedule. It also provides a user-friendly interface for restoring deleted files/folders or the entire share to a particular point in time. To learn more, see [About Azure file share backup](../../backup/azure-file-share-backup-overview.md?toc=/azure/storage/files/toc.json). When considering the costs of using Azure Backup, consider the following:
- **Protected instance licensing cost for Azure file share data.** Azure Backup charges a protected instance licensing cost per storage account containing backed up Azure file shares. A protected instance is defined as 250 GiB of Azure file share storage. Storage accounts containing less than 250 GiB are subject to a fractional protected instance cost. For more information, see [Azure Backup pricing](https://azure.microsoft.com/pricing/details/backup/). You must select *Azure Files* from the list of services Azure Backup can protect.
- **Azure Files costs.** Azure Backup increases the costs of Azure Files in the following ways:
- - **Differential costs from Azure file share snapshots.** Azure Backup automates taking Azure file share snapshots on an administrator-defined schedule. Snapshots are always differential; however, the additional cost added to the total bill depends on the length of time snapshots are kept and the amount of churn on the file share during that time. This dictates how different the snapshot is from the live file share and therefore how much additional data is stored by Azure Files.
+ - **Differential costs from Azure file share snapshots.** Azure Backup automates taking Azure file share snapshots on an administrator-defined schedule. Snapshots are always differential; however, the added cost depends on the length of time snapshots are kept and the amount of churn on the file share during that time. This dictates how different the snapshot is from the live file share and therefore how much extra data is stored by Azure Files.
- - **Transaction costs from restore operations.** Restore operations from the snapshot to the live share will cause transactions. For standard file shares, this means that reads from snapshots/writes from restores will be billed as normal file share transactions. For premium file shares, these operations are counted against the provisioned IOPS for the file share.
+ - **Transaction costs from restore operations.** Restore operations from the snapshot to the live share will cause transactions. For pay-as-you-go file shares, this means that reads from snapshots/writes from restores are billed as normal file share transactions. For provisioned file shares, these operations are counted against the provisioned IOPS for the file share.
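To ballpark the protected instance licensing portion of the bill, here's a minimal sketch. The per-instance price is a placeholder and the proportional treatment of fractional instances is an assumption; the authoritative rules and rates are on the Azure Backup pricing page.

```python
# Estimate Azure Backup protected instance licensing for backed-up Azure file shares.
# One protected instance = 250 GiB of Azure file share storage per storage account;
# accounts under 250 GiB incur a fractional protected instance cost.
PRICE_PER_INSTANCE = 5.00  # PLACEHOLDER $/protected instance/month; use the published rate

def protected_instance_cost(backed_up_gib_per_account):
    total = 0.0
    for gib in backed_up_gib_per_account:
        instances = gib / 250  # assumes fractional instances are billed proportionally
        total += instances * PRICE_PER_INSTANCE
    return total

# Example: three storage accounts with 100, 600, and 1,200 GiB of backed-up shares.
print(f"~${protected_instance_cost([100, 600, 1200]):,.2f}/month (placeholder pricing)")
```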
### Microsoft Defender for Storage- Microsoft Defender supports Azure Files as part of its Microsoft Defender for Storage product. Microsoft Defender for Storage detects unusual and potentially harmful attempts to access or exploit your Azure file shares over SMB or FileREST. Microsoft Defender for Storage is enabled on the subscription level for all file shares in storage accounts in that subscription. Microsoft Defender for Storage doesn't support antivirus capabilities for Azure file shares.
-The main cost from Microsoft Defender for Storage is an additional set of transaction costs that the product levies on top of the transactions that are done against the Azure file share. Although these costs are based on the transactions incurred in Azure Files, they aren't part of the billing for Azure Files, but rather are part of the Microsoft Defender pricing. Microsoft Defender for Storage charges a transaction rate even on premium file shares, where Azure Files includes transactions as part of IOPS provisioning. The current transaction rate can be found on [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) under the *Microsoft Defender for Storage* table row.
+The main cost from Microsoft Defender for Storage is an extra set of transaction costs that the product levies on top of the transactions that are done against the Azure file share. Although these costs are based on the transactions incurred in Azure Files, they aren't part of the billing for Azure Files, but rather are part of the Microsoft Defender pricing. Microsoft Defender for Storage charges a transaction rate even on provisioned file shares, where Azure Files includes transactions as part of IOPS provisioning. The current transaction rate can be found on [Microsoft Defender for Cloud pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/) under the *Microsoft Defender for Storage* table row.
Transaction-heavy file shares will incur significant costs using Microsoft Defender for Storage. Based on these costs, you might want to opt out of Microsoft Defender for Storage for specific storage accounts. For more information, see [Exclude a storage account from Microsoft Defender for Storage protections](/azure/defender-for-cloud/defender-for-storage-exclude).
-## See also
+## Reservations
+Azure Files supports reservations (also referred to as *reserved instances*) for the provisioned v1 and pay-as-you-go models. Reservations enable you to achieve a discount on storage by pre-committing to storage utilization. You should consider purchasing reserved instances for any production workload, or dev/test workloads with consistent footprints. When you purchase a Reservation, you must specify the following dimensions:
+- **Capacity size**: Reservations can be for either 10 TiB or 100 TiB, with more significant discounts for purchasing a higher capacity Reservation. You can purchase multiple Reservations, including Reservations of different capacity sizes to meet your workload requirements. For example, if your production deployment has 120 TiB of file shares, you could purchase one 100 TiB Reservation and two 10 TiB Reservations to meet the total storage capacity requirements.
+- **Term**: You can purchase reservations for either a one-year or three-year term, with more significant discounts for purchasing a longer Reservation term.
+- **Tier**: The tier of Azure Files for the Reservation. Reservations currently are available for the premium (SSD), hot (HDD), and cool (HDD) tiers.
+- **Location**: The Azure region for the Reservation. Reservations are available in a subset of Azure regions.
+- **Redundancy**: The storage redundancy for the Reservation. Reservations are supported for all redundancies Azure Files supports, including LRS, ZRS, GRS, and GZRS.
+- **Billing frequency**: Indicates how often the account is billed for the Reservation. Options include *Monthly* or *Upfront*.
+
+Once you purchase a Reservation, it will automatically be consumed by your existing storage utilization. If you use more storage than you have reserved, you'll pay list price for the balance not covered by the Reservation. Transaction, bandwidth, data transfer, and metadata storage charges aren't included in the Reservation.
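The split between reserved capacity and list-price overage can be sketched as follows. Prices are placeholders, and the sketch deliberately ignores the transaction, bandwidth, data transfer, and metadata charges that reservations never cover.

```python
# Sketch of how reserved capacity and list-price overage combine for storage billing.
# All prices are PLACEHOLDERS; reservations don't cover transactions, bandwidth,
# data transfer, or metadata, so those meters are excluded here.
reserved_tib = 100 + 10 + 10        # e.g., one 100 TiB plus two 10 TiB reservations
used_tib = 135                      # actual storage consumption for the month

reservation_monthly_cost = 1800.00  # placeholder: effective monthly cost of the reservations
list_price_per_gib = 0.02           # placeholder: pay-as-you-go $/GiB-month

overage_tib = max(0, used_tib - reserved_tib)
overage_cost = overage_tib * 1024 * list_price_per_gib

print(f"reserved: {reserved_tib} TiB, used: {used_tib} TiB, overage: {overage_tib} TiB")
print(f"estimated storage bill: ~${reservation_monthly_cost + overage_cost:,.2f}/month")
```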
+
+There are differences in how Reservations work with Azure file share snapshots for pay-as-you-go and provisioned v1 file shares. If you're taking snapshots of pay-as-you-go file shares, then the snapshot differentials count against the Reservation and are billed as part of the normal used storage meter. However, if you're taking snapshots of provisioned v1 file shares, then the snapshots are billed using a separate meter and don't count against the Reservation.
+
+For more information on how to purchase Reservations, see [Optimize costs for Azure Files with Reservations](files-reserve-capacity.md).
+
+## See also
- [Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/).
- [Planning for an Azure Files deployment](storage-files-planning.md) and [Planning for an Azure File Sync deployment](../file-sync/file-sync-planning.md).
- [Create a file share](storage-how-to-create-file-share.md) and [Deploy Azure File Sync](../file-sync/file-sync-deployment-guide.md).
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
zone_pivot_groups: remote-desktop-clients Previously updated : 07/16/2024 Last updated : 10/08/2024 # Compare Remote Desktop app features across platforms and devices
The following table shows which security features are available on each platform
| Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
|--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
| Screen capture protection | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-| Watermarking | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Watermarking | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
::: zone-end
+| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
+|--|:-:|:-:|:-:|:-:|:-:|
+| Screen capture protection | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
+| Watermarking | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
++ | Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:| | Screen capture protection | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
virtual-desktop Onedrive Remoteapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/onedrive-remoteapp.md
Title: Use Microsoft OneDrive with a RemoteApp (preview) - Azure Virtual Desktop
+ Title: Use Microsoft OneDrive with a RemoteApp - Azure Virtual Desktop
description: Learn how to use Microsoft OneDrive with a RemoteApp in Azure Virtual Desktop.
virtual-desktop Required Fqdn Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/required-fqdn-endpoint.md
Select the relevant tab based on which cloud you're using.
| `privacy.microsoft.com` | TCP | 443 | Privacy statement | All | | `query.prod.cms.rt.microsoft.com` | TCP | 443 | Download an MSI to update the client. Required for automatic updates. | [Windows Desktop](users/connect-windows.md) | | `graph.microsoft.com` | TCP | 443 | Service traffic | All |
-| `windows.cloud.microsoft.com` | TCP | 443 | Connection center | All |
+| `windows.cloud.microsoft` | TCP | 443 | Connection center | All |
| `windows365.microsoft.com` | TCP | 443 | Service traffic | All | # [Azure for US Government](#tab/azure-for-us-government)
Select the relevant tab based on which cloud you're using.
| `privacy.microsoft.com` | TCP | 443 | Privacy statement | All | | `query.prod.cms.rt.microsoft.com` | TCP | 443 | Download an MSI to update the client. Required for automatic updates. | [Windows Desktop](users/connect-windows.md) | | `graph.microsoft.com` | TCP | 443 | Service traffic | All |
-| `windows.cloud.microsoft.com` | TCP | 443 | Connection center | All |
+| `windows.cloud.microsoft` | TCP | 443 | Connection center | All |
| `windows365.microsoft.com` | TCP | 443 | Service traffic | All |
virtual-desktop Watermarking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/watermarking.md
Title: Watermarking in Azure Virtual Desktop
description: Learn how to enable watermarking in Azure Virtual Desktop to help prevent sensitive information from being captured on client endpoints. Previously updated : 04/29/2024 Last updated : 10/08/2024
You'll need the following things before you can use watermarking:
- Windows - macOS - iOS and iPadOS
+ - Android/Chrome OS (preview)
- Web browser - [Azure Virtual Desktop Insights](azure-monitor.md) configured for your environment.
virtual-desktop Whats New Documentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-documentation.md
description: Learn about new and updated articles to the Azure Virtual Desktop d
Previously updated : 09/05/2024 Last updated : 10/08/2024 # What's new in documentation for Azure Virtual Desktop We update documentation for Azure Virtual Desktop regularly. In this article, we highlight articles for new features and where there are significant updates to existing articles. To learn what's new in the service, see [What's new for Azure Virtual Desktop](whats-new.md).
+## September 2024
+
+In September 2024, we made the following changes to the documentation:
+
+- Updated [Enable GPU acceleration for Azure Virtual Desktop](graphics-enable-gpu-acceleration.md) to add support for High Efficiency Video Coding (HEVC), also known as H.265, which is in preview.
+
+- Updated [Use Microsoft OneDrive with a RemoteApp](onedrive-remoteapp.md) to reflect that the feature is now generally available.
+
+- Published a new article where you can learn [What's new in the Azure Virtual Desktop SxS Network Stack](whats-new-sxs.md).
+ ## August 2024 In August 2024, we made the following changes to the documentation:
virtual-network-manager Concept Ip Address Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/concept-ip-address-management.md
Title: What is IP address management in Azure Virtual Network Manager?
-description: Learn about IP address management in Azure Virtual Network Manager and how it can help you manage IP addresses in your virtual networks.
+ Title: What is IP address management (IPAM) in Azure Virtual Network Manager?
+description: Learn about IP address management (IPAM) in Azure Virtual Network Manager and how it can help you manage IP addresses in your virtual networks.
Previously updated : 10/2/2024 Last updated : 10/08/2024 #customer intent: As a network administrator, I want to learn about IP address management in Azure Virtual Network Manager so that I can manage IP addresses in my virtual networks.
-# What is IP address management in Azure Virtual Network Manager?
+# What is IP address management (IPAM) in Azure Virtual Network Manager?
[!INCLUDE [virtual-network-manager-ipam](../../includes/virtual-network-manager-ipam.md)]
-In this article, you learn about the IP address management feature in Azure Virtual Network Manager and how it can help you manage IP addresses in your virtual networks. With Azure Virtual Network Manager's IP Address Management, you can create pools for IP address planning, automatically assign nonoverlapping classless inter-domain routing (CIDR) addresses to Azure resources, and prevent address space conflicts across on-premises and multicloud environments.
+In this article, you learn about the IP address management (IPAM) feature in Azure Virtual Network Manager and how it can help you manage IP addresses in your virtual networks. With Azure Virtual Network Manager's IP address management, you can create pools for IP address planning, automatically assign nonoverlapping classless inter-domain routing (CIDR) addresses to Azure resources, and prevent address space conflicts across on-premises and multicloud environments.
-## What is IP address management?
+## What is IP address management (IPAM)?
-In Azure Virtual Network Manager, IP address management helps you centrally manage IP addresses in your virtual networks using IP address pools. The following are some key features of IP address manager in Azure Virtual Network
+In Azure Virtual Network Manager, IP address management (IPAM) helps you centrally manage IP addresses in your virtual networks using IP address pools. The following are some key features of IPAM in Azure Virtual Network
- Create pools for IP address planning.
In Azure Virtual Network Manager, IP address management helps you centrally mana
- Support for IPv4 and IPv6 address pools.
-## How does IP address manager work in Azure Virtual Network Manager?
+## How does IPAM work in Azure Virtual Network Manager?
-The IP address manager feature in Azure Virtual Network Manager works through the following key components:
+The IPAM feature in Azure Virtual Network Manager works through the following key components:
- Managing IP Address Pools - Allocating IP addresses to Azure resources-- Delegating IP address management permissions
+- Delegating IPAM permissions
- Simplifying resource creation ### Manage IP address pools
-IP address manager allows network administrators to plan and organize IP address usage by creating pools with address spaces and respective sizes. These pools act as containers for groups of CIDRs, enabling logical grouping for specific networking purposes. You can create a structured hierarchy of pools, dividing a larger pool into smaller, more manageable pools, aiding in more granular control and organization of your network's IP address space.
+IPAM allows network administrators to plan and organize IP address usage by creating pools with address spaces and respective sizes. These pools act as containers for groups of CIDRs, enabling logical grouping for specific networking purposes. You can create a structured hierarchy of pools, dividing a larger pool into smaller, more manageable pools, aiding in more granular control and organization of your network's IP address space.
-There are two types of pools in IP address
+There are two types of pools in IPAM:
- Root pool: The first pool created in your instance is the root pool. This represents your entire IP address range.
- Child pool: A child pool is a subset of the root pool or another child pool. You can create multiple child pools within a root pool or another child pool. You can have up to seven layers of pools.

### Allocating IP addresses to Azure resources
-When it comes to allocation, you can assign Azure resources with CIDRs, such as virtual networks, to a specific pool. This helps in identifying which CIDRs are currently in use. There's also the option to allocate static CIDRs to a pool, useful for occupying CIDRs that are either not currently in use within Azure or are part of Azure resources not yet supported by the IP address manager service. Allocated CIDRs are released back to the pool if the associated resource is removed or deleted, ensuring efficient utilization and management of the IP space.
+When it comes to allocation, you can assign Azure resources with CIDRs, such as virtual networks, to a specific pool. This helps in identifying which CIDRs are currently in use. There's also the option to allocate static CIDRs to a pool, useful for occupying CIDRs that are either not currently in use within Azure or are part of Azure resources not yet supported by the IPAM service. Allocated CIDRs are released back to the pool if the associated resource is removed or deleted, ensuring efficient utilization and management of the IP space.
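Conceptually, this allocation behavior resembles carving child CIDRs out of a pool and rejecting anything that would overlap an existing allocation. The sketch below uses Python's standard `ipaddress` module purely to illustrate the idea; it isn't the Azure Virtual Network Manager API, and the `Pool` class and its methods are hypothetical.

```python
import ipaddress

# Conceptual model only: hand out child CIDRs from a pool without overlaps,
# and return the space to the pool when an allocation is released.
class Pool:
    def __init__(self, cidr):
        self.network = ipaddress.ip_network(cidr)
        self.allocations = []

    def allocate(self, prefix_len):
        """Return the first /prefix_len subnet that doesn't overlap an existing allocation."""
        for candidate in self.network.subnets(new_prefix=prefix_len):
            if not any(candidate.overlaps(existing) for existing in self.allocations):
                self.allocations.append(candidate)
                return candidate
        raise RuntimeError("pool exhausted")

    def release(self, cidr):
        """Releasing an allocation makes its range available again."""
        self.allocations.remove(ipaddress.ip_network(cidr))

root = Pool("10.0.0.0/16")   # root pool covering the whole range
vnet_a = root.allocate(24)   # e.g., address space for one virtual network
vnet_b = root.allocate(24)   # guaranteed not to overlap vnet_a
print(vnet_a, vnet_b)        # 10.0.0.0/24 10.0.1.0/24
```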
-### Delegating permissions for IP address management
+### Delegating permissions for IPAM
-With IP address manager, you can delegate permission to other users to utilize the IP address manager pools, ensuring controlled access and management while democratizing pool allocation. These permissions allow users to see the pools they have access to, aiding in choosing the right pool for their needs.
+With IPAM, you can delegate permission to other users to utilize the IP address pools, ensuring controlled access and management while democratizing pool allocation. These permissions allow users to see the pools they have access to, aiding in choosing the right pool for their needs.
Delegating permissions also allows others to view usage statistics and lists of resources associated with the pool. Within your network manager, complete usage statistics are available including: - The total number of IPs in pool.
Additionally, it shows details for pools and resources associated with pools, gi
When creating CIDR-supporting resources like virtual networks, CIDRs are automatically allocated from the selected pool, simplifying the resource creation process. The system ensures that the automatically allocated CIDRs don't overlap within the pool, maintaining network integrity and preventing conflicts.
-## Permission requirements for IP address manager in Azure Virtual Network Manager
+## Permission requirements for IPAM in Azure Virtual Network Manager
-When using IP address management, the **IPAM Pool User** role alone is sufficient for delegation. During the public preview, you also need to grant **Network Manager Read** access to ensure full discoverability of IP address pools and virtual networks across the Network Manager's scope. Without this role, users with only the **IPAM Pool User** role won't be able to see available pools and virtual networks.
+When using IPAM, the **IPAM Pool User** role alone is sufficient for delegation. During the public preview, you also need to grant **Network Manager Read** access to ensure full discoverability of IP address pools and virtual networks across the Network Manager's scope. Without this role, users with only the **IPAM Pool User** role won't see available pools and virtual networks.
Learn more about [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md). ## Known issues -- When virtual networks are associated with an IP address manager pool, peering sync may show as out of sync, even though peering is functioning correctly.-- When a VNet is moved to a different subscription, the references in IP address manager are not updated, leading to inconsistent management status.-- When multiple requests for the same VNet are made, it can result in duplicate allocations entries.-- When entering an IP address space, the address space entered must be a valid address range (valid starting address and valid size), else a failure will be encountered when sending a request. Currently, the portal does not validate CIDR input prior to sending requests.
+- When virtual networks are associated with an IP address management pool, peering sync can show as out of sync, even though peering is functioning correctly.
+- When a virtual network is moved to a different subscription, the references in IPAM aren't updated, leading to inconsistent management status.
+- When multiple requests for the same virtual network are made, it can result in duplicate allocation entries.
+- When entering an IP address space, the address space entered must be a valid address range (valid starting address and valid size), else a failure is encountered when sending a request. Currently, the portal doesn't validate CIDR input prior to sending requests.
## Next steps
virtual-network-manager How To Manage Ip Addresses Network Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network-manager/how-to-manage-ip-addresses-network-manager.md
Previously updated : 10/2/2024 Last updated : 10/08/2024 #customer intent: As a network administrator, I want to learn how to manage IP addresses with Azure Virtual Network Manager so that I can create and assign IP address pools to my virtual networks.
[!INCLUDE [virtual-network-manager-ipam](../../includes/virtual-network-manager-ipam.md)]
-Azure Virtual Network Manager allows you to manage IP addresses by creating and assigning IP address pools to your virtual networks. This article shows you how to create and assign IP address pools to your virtual networks with IP address management in Azure Virtual Network Manager.
+Azure Virtual Network Manager allows you to manage IP addresses by creating and assigning IP address pools to your virtual networks. This article shows you how to create and assign IP address pools to your virtual networks with IP address management (IPAM) in Azure Virtual Network Manager.
## Prerequisites
In this step, you review the allocation usage of the IP address pool. This helps
:::image type="content" source="media/how-to-manage-ip-addresses/review-ip-address-pool-allocations-by-resource.png" alt-text="Screenshot of ip address pool allocations highlighting individual resource information.":::
-## Delegating permissions for IP address management
+## Delegating permissions for IP address management (IPAM)
In this step, you delegate permissions to other users to manage IP address pools in your network manager using [Azure role-based access control (RBAC)](../role-based-access-control/check-access.md). This allows you to control access to the IP address pools and ensure that only authorized users can manage the pools.
In this step, you create a virtual network with a nonoverlapping CIDR range by a
## Next steps > [!div class="nextstepaction"]
-> [What is IP address management in Azure Virtual Network Manager](./concept-ip-address-management.md)
+> [What is IP address management (IPAM) in Azure Virtual Network Manager](./concept-ip-address-management.md)
virtual-wan Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/openvpn-azure-ad-tenant.md
Previously updated : 09/24/2024 Last updated : 10/08/2024 #Note that Audience values are not sensitive data.
Verify that you have a Microsoft Entra tenant. If you don't have a Microsoft Ent
1. Create two accounts in the newly created Microsoft Entra tenant. For steps, see [Add or delete a new user](../active-directory/fundamentals/add-users-azure-active-directory.md).
- * Global administrator account
+ * [Cloud Application Administrator role](/entra/identity/role-based-access-control/permissions-reference#cloud-application-administrator)
* User account
- The global administrator account will be used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
-1. Assign one of the accounts the **Global administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Microsoft Entra ID](../active-directory/fundamentals/active-directory-users-assign-role-azure-portal.md).
+ The Cloud Application Administrator role is used to grant consent to the Azure VPN app registration. The user account can be used to test OpenVPN authentication.
+1. Assign one of the accounts the **Cloud Application Administrator** role. For steps, see [Assign administrator and non-administrator roles to users with Microsoft Entra ID](/azure/active-directory-b2c/tenant-management-read-tenant-name).
## <a name="enable-authentication"></a>3. Grant consent to the Azure VPN app registration
vpn-gateway Openvpn Azure Ad Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/openvpn-azure-ad-tenant.md
description: Learn how to set up a Microsoft Entra tenant and P2S gateway for P2
Previously updated : 08/14/2024 Last updated : 10/08/2024 #Note that Audience values are not sensitive data.
vpn-gateway Point To Site Entra Vpn Client Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-entra-vpn-client-mac.md
description: Learn how to configure macOS client computers to connect to Azure u
Previously updated : 07/24/2024 Last updated : 10/07/2024
If you used the P2S server configuration steps as mentioned in the [Prerequisite
When you generate and download a VPN client profile configuration package, all the necessary configuration settings for VPN clients are contained in a VPN client profile configuration zip file. The VPN client profile configuration files are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client profile configuration files and apply the new configuration to all of the VPN clients that you want to connect.
-Locate and unzip the VPN client profile configuration package you generated and downloaded (listed in the [Prequisites](#prerequisites)). Open the **AzureVPN** folder. In this folder, you'll see either the **azurevpnconfig_aad.xml** file or the **azurevpnconfig.xml** file, depending on whether your P2S configuration includes multiple authentication types. The .xml file contains the settings you use to configure the VPN client profile.
+Locate and unzip the VPN client profile configuration package you generated and downloaded (listed in the [Prerequisites](#prerequisites)). Open the **AzureVPN** folder. In this folder, you'll see either the **azurevpnconfig_aad.xml** file or the **azurevpnconfig.xml** file, depending on whether your P2S configuration includes multiple authentication types. The .xml file contains the settings you use to configure the VPN client profile.
## Import VPN client profile configuration files
Locate and unzip the VPN client profile configuration package you generated and
1. On the Azure VPN Client page, select **Import**.
- :::image type="content" source="media/point-to-site-entra-vpn-client-mac/import.png" alt-text="Screenshot of Azure VPN Client import selection." lightbox="media/point-to-site-entra-vpn-client-mac/import.png":::
1. Navigate to the folder containing the file that you want to import, select it, then click **Open**. 1. On this screen, notice the connection values are populated using the values in the imported VPN client configuration file.
Locate and unzip the VPN client profile configuration package you generated and
1. Click **Save** to save the connection profile configuration. 1. In the VPN connections pane, select the connection profile that you saved. Then, click **Connect**.-
- :::image type="content" source="media/point-to-site-entra-vpn-client-mac/connect.png" alt-text="Screenshot of Azure VPN Client clicking Connect." lightbox="media/point-to-site-entra-vpn-client-mac/connect.png":::
1. Once connected, the status changes to **Connected**. To disconnect from the session, click **Disconnect**. ## Create a connection manually
You can configure the Azure VPN Client with optional configuration settings such
## Next steps
-For more information, see [Create a Microsoft Entra tenant for P2S Open VPN connections that use Microsoft Entra authentication](openvpn-azure-ad-tenant.md).
+For more information, see [Configure P2S VPN Gateway for Microsoft Entra ID authentication](point-to-site-entra-gateway.md).
vpn-gateway Point To Site How To Vpn Client Install Azure Cert https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-how-to-vpn-client-install-azure-cert.md
For information about generating certificates, see the [Generate certificates](v
## <a name="installmac"></a>macOS
->[!NOTE]
->macOS VPN clients are supported for the [Resource Manager deployment model](../azure-resource-manager/management/deployment-models.md) only. They are not supported for the classic deployment model.
- [!INCLUDE [Install on Mac](../../includes/vpn-gateway-certificates-install-mac-client-cert-include.md)] ## <a name="installlinux"></a>Linux
vpn-gateway Point To Site Vpn Client Cert Mac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/point-to-site-vpn-client-cert-mac.md
description: Learn how to configure the VPN client for VPN Gateway P2S configura
Previously updated : 06/18/2024 Last updated : 10/07/2024
The workflow for this article is as follows:
For certificate authentication, a client certificate must be installed on each client computer. The client certificate you want to use must be exported with the private key, and must contain all certificates in the certification path. Additionally, for some configurations, you'll also need to install root certificate information.
-For information about working with certificates, see [Point-to site: Generate certificates - Linux](vpn-gateway-certificates-point-to-site.md).
+For information about working with certificates, see [Generate and export certificates](vpn-gateway-certificates-point-to-site.md).
## View the VPN client profile configuration files
All of the necessary configuration settings for the VPN clients are contained in
The VPN client profile configuration files are specific to the P2S VPN gateway configuration for the virtual network. If there are any changes to the P2S VPN configuration after you generate the files, such as changes to the VPN protocol type or authentication type, you need to generate new VPN client profile configuration files and apply the new configuration to all of the VPN clients that you want to connect.
-Unzip the file to view the folders. When you configure macOS native clients, you use the files in the **Generic** folder. The Generic folder is present if IKEv2 was configured on the gateway. You can find all the information that you need to configure the native VPN client in the **Generic** folder. If you don't see the Generic folder, check the following items, then generate the zip file again.
+Unzip the file to view the folders. When you configure macOS native clients, you use the files in the **Generic** folder. The Generic folder is present if IKEv2 was configured on the gateway. If you don't see the Generic folder, check the following items, then generate the zip file again.
* Check the tunnel type for your configuration. It's likely that IKEv2 wasn't selected as a tunnel type.
-* On the VPN gateway, verify that the SKU isn't Basic. The VPN Gateway Basic SKU doesn't support IKEv2. You'll have to rebuild the gateway with the appropriate SKU and tunnel type if you want macOS clients to connect.
+* Verify that the gateway isn't configured with the Basic SKU. The VPN Gateway Basic SKU doesn't support IKEv2. You'll have to rebuild the gateway with the appropriate SKU and tunnel type if you want macOS clients to connect.
The **Generic** folder contains the following files.
## Install certificates
+You'll need both the root certificate and the child certificate installed on your Mac. The child certificate must be exported with the private key and must contain all certificates in the certification path.
+ ### Root certificate
-1. Copy the root certificate file - **VpnServerRoot.cer** - to your Mac. Double-click the certificate. Depending on your operating system, the certificate will either automatically install, or you'll see the **Add Certificates** page.
+1. Copy the root certificate file (the .cer file) to your Mac. Double-click the certificate. Depending on your operating system, the certificate will either automatically install, or you'll see the **Add Certificates** page.
1. If you see the **Add Certificates** page, for **Keychain:** click the arrows and select **login** from the dropdown. 1. Click **Add** to import the file. ### Client certificate
-The client certificate is used for authentication and is required. Typically, you can just click the client certificate to install. For more information about how to install a client certificate, see [Install a client certificate](point-to-site-how-to-vpn-client-install-azure-cert.md).
+The client certificate (.pfx file) is used for authentication and is required. Typically, you can just click the client certificate to install. For more information about how to install a client certificate, see [Install a client certificate](point-to-site-how-to-vpn-client-install-azure-cert.md).
-### Verify certificate install
+### Verify certificates are installed
Verify that both the client and the root certificate are installed.
Verify that both the client and the root certificate are installed.
## Configure VPN client profile
-1. Go to **System Preferences -> Network**. On the Network page, click **'+'** to create a new VPN client connection profile for a P2S connection to the Azure virtual network.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/new.png" alt-text="Screenshot shows the Network window to click on +." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/new.png":::
-
-1. On the **Select the interface** page, click the arrows next to **Interface:**. From the dropdown, click **VPN**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/vpn.png" alt-text="Screenshot shows the Network window with the option to select an interface, VPN is selected." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/vpn.png":::
-
-1. For **VPN Type**, from the dropdown, click **IKEv2**. In the **Service Name** field, specify a friendly name for the profile, then click **Create**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/service-name.png" alt-text="Screenshot shows the Network window with the option to select an interface, select VPN type, and enter a service name." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/service-name.png":::
-
-1. Go to the VPN client profile that you downloaded. In the **Generic** folder, open the **VpnSettings.xml** file using a text editor. In the example, you can see information about the tunnel type and the server address. Even though there are two VPN types listed, this VPN client will connect over IKEv2. Copy the **VpnServer** tag value.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/vpn-server.png" alt-text="Screenshot shows the VpnSettings.xml file open with the VpnServer tag highlighted." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/vpn-server.png":::
-
-1. Paste the **VpnServer** tag value in both the **Server Address** and **Remote ID** fields of the profile. Leave **Local ID** blank. Then, click **Authentication Settings...**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/server-address.png" alt-text="Screenshot shows server info pasted to fields." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/server-address.png":::
-
-## Configure authentication settings
+Use the steps in the [Mac User Guide](https://support.apple.com/guide/mac-help/set-up-a-vpn-connection-on-mac-mchlp2963/mac) that are appropriate for your operating system version to add a VPN client profile configuration with the following settings.
-Configure authentication settings.
+* Select **IKEv2** as the VPN type.
+* For **Display Name**, enter a friendly name for the profile.
+* For both **Server Address** and **Remote ID**, use the value from the **VpnServer** tag in the **VpnSettings.xml** file.
-1. On the **Authentication Settings** page, for the Authentication settings field, click the arrows to select **Certificate**.
+ :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/vpn-server.png" alt-text="Screenshot shows the VpnSettings.xml file with the VpnServer tag highlighted." lightbox="./media/point-to-site-vpn-client-cert-mac/vpn-server.png":::
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/monterey/certificate.png" alt-text="Screenshot shows authentication settings with certificate selected." lightbox="./media/point-to-site-vpn-client-cert-mac/monterey/certificate.png":::
+* For **Authentication** settings, select **Certificate**.
+* For the **Certificate**, choose the child certificate you want to use for authentication. If you have multiple certificates, you can select **Show Certificate** to see more information about each certificate.
+* For **Local ID**, type the name of the child certificate that you selected.
-1. Click **Select** to open the **Choose An Identity** page.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/monterey/select.png" alt-text="Screenshot to click Select." lightbox="./media/point-to-site-vpn-client-cert-mac/monterey/select.png":::
-
-1. The **Choose An Identity** page displays a list of certificates for you to choose from. If you're unsure which certificate to use, you can select **Show Certificate** to see more information about each certificate. Click the proper certificate, then click **Continue**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/monterey/choose-identity.png" alt-text="Screenshot shows certificate properties." lightbox="./media/point-to-site-vpn-client-cert-mac/monterey/choose-identity.png":::
-
-1. On the **Authentication Settings** page, verify that the correct certificate is shown, then click **OK**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/monterey/verify.png" alt-text="Screenshot shows the Choose An Identity dialog box where you can select the proper certificate." lightbox="./media/point-to-site-vpn-client-cert-mac/monterey/verify.png":::
-
-## Specify certificate
-
-1. In the **Local ID** field, specify the name of the certificate. In this example, it's **P2SChildCertMac**.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/monterey/local-id.png" alt-text="Screenshot shows local ID value." lightbox="./media/point-to-site-vpn-client-cert-mac/monterey/local-id.png":::
-
-1. Click **Apply** to save all changes.
+Once you've finished configuring the VPN client profile, save the profile.
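If you'd rather not open **VpnSettings.xml** by hand, a small, hypothetical helper like the following could pull out the **VpnServer** value for you. It assumes the value lives in a `VpnServer` element and strips any XML namespace the file might declare.

```python
import sys
import xml.etree.ElementTree as ET

# Hypothetical helper: print the VpnServer value from a VpnSettings.xml file so you
# can paste it into the Server Address and Remote ID fields of the VPN profile.
def find_vpn_server(path):
    tree = ET.parse(path)
    for elem in tree.iter():
        tag = elem.tag.split("}")[-1]  # drop any "{namespace}" prefix
        if tag == "VpnServer" and elem.text:
            return elem.text.strip()
    return None

if __name__ == "__main__":
    path = sys.argv[1] if len(sys.argv) > 1 else "VpnSettings.xml"
    print(find_vpn_server(path) or "VpnServer element not found")
```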
## Connect
-1. Click **Connect** to start the P2S connection to the Azure virtual network. You might need to enter your "login" keychain password.
-
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/select-connect.png" alt-text="Screenshot shows connect button." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/select-connect.png":::
-
-1. Once the connection has been established, the status shows as **Connected** and you can view the IP address that was pulled from the VPN client address pool.
+The steps to connect are specific to the macOS operating system version. Refer to the [Mac User Guide](https://support.apple.com/guide/mac-help/set-up-a-vpn-connection-on-mac-mchlp2963/mac). Select the operating system version that you're using and follow the steps to connect.
- :::image type="content" source="./media/point-to-site-vpn-client-cert-mac/mac/connected.png" alt-text="Screenshot shows Connected." lightbox="./media/point-to-site-vpn-client-cert-mac/mac/connected.png":::
+Once the connection has been established, the status shows as **Connected**. The IP address is allocated from the VPN client address pool.
## Next steps