Updates from: 01/21/2023 02:28:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Title: Token enrichment - Azure Active Directory B2C description: Enrich tokens with claims from external identity data sources using APIs or outbound webhooks. -+ Previously updated : 11/09/2021-+ Last updated : 01/17/2023+ zone_pivot_groups: b2c-policy-type - # Enrich tokens with claims from external sources using API connectors- [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]- Azure Active Directory B2C (Azure AD B2C) enables identity developers to integrate an interaction with a RESTful API into their user flow using [API connectors](api-connectors-overview.md). It enables developers to dynamically retrieve data from external identity sources. At the end of this walkthrough, you'll be able to create an Azure AD B2C user flow that interacts with APIs to enrich tokens with information from external sources.- ::: zone pivot="b2c-user-flow"- You can use API connectors applied to the **Before sending the token (preview)** step to enrich tokens for your applications with information from external sources. When a user signs in or signs up, Azure AD B2C will call the API endpoint configured in the API connector, which can query information about a user in downstream services such as cloud services, custom user stores, custom permission systems, legacy identity systems, and more. [!INCLUDE [b2c-public-preview-feature](../../includes/active-directory-b2c-public-preview.md)]
## Prerequisites [!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+- An API endpoint. You can create an API endpoint using one of our [samples](api-connector-samples.md#api-connector-rest-api-samples).
## Create an API connector To use an [API connector](api-connectors-overview.md), you first create the API connector and then enable it in a user flow. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Under **Azure services**, select **Azure AD B2C**.
-4. Select **API connectors**, and then select **New API connector**.
+1. Under **Azure services**, select **Azure AD B2C**.
+1. Select **API connectors**, and then select **New API connector**.
- ![Screenshot of the basic API connector configuration](media/add-api-connector-token-enrichment/api-connector-new.png)
+ ![Screenshot showing the API connectors page in the Azure portal with the New API Connector button highlighted.](media/add-api-connector-token-enrichment/api-connector-new.png)
-5. Provide a display name for the call. For example, **Enrich token from external source**.
-6. Provide the **Endpoint URL** for the API call.
-7. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](secure-rest-api.md).
+1. Provide a display name for the call. For example, **Enrich token from external source**.
+1. Provide the **Endpoint URL** for the API call.
+1. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](secure-rest-api.md).
- ![Screenshot of authentication configuration for an API connector](media/add-api-connector-token-enrichment/api-connector-config.png)
+ ![Screenshot showing sample authentication configuration for an API connector.](media/add-api-connector-token-enrichment/api-connector-config.png)
-8. Select **Save**.
+1. Select **Save**.
## Enable the API connector in a user flow Follow these steps to add an API connector to a sign-up user flow. 1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Under **Azure services**, select **Azure AD B2C**.
-4. Select **User flows**, and then select the user flow you want to add the API connector to.
-5. Select **API connectors**, and then select the API endpoint you want to invoke at the **Before sending the token (preview)** step in the user flow:
+1. Under **Azure services**, select **Azure AD B2C**.
+1. Select **User flows**, and then select the user flow you want to add the API connector to.
+1. Select **API connectors**, and then select the API endpoint you want to invoke at the **Before sending the token (preview)** step in the user flow:
- ![Screenshot of selecting an API connector for a user flow step](media/add-api-connector-token-enrichment/api-connectors-user-flow-select.png)
+ ![Screenshot of selecting an API connector for a user flow step.](media/add-api-connector-token-enrichment/api-connectors-user-flow-select.png)
-6. Select **Save**.
+1. Select **Save**.
This step only exists for **Sign up and sign in (Recommended)**, **Sign up (Recommended)**, and **Sign in (Recommended)** user flows. ## Example request sent to the API at this step- An API connector at this step is invoked when a token is about to be issued during sign-ins and sign-ups. - An API connector materializes as an **HTTP POST** request, sending user attributes ('claims') as key-value pairs in a JSON body. Attributes are serialized similarly to [Microsoft Graph](/graph/api/resources/user#properties) user properties. - ```http POST <API-endpoint> Content-type: application/json- { "email": "johnsmith@fabrikam.onmicrosoft.com", "identities": [
"ui_locales":"en-US" } ```- The claims that are sent to the API depend on the information defined for the user.- Only user properties and custom attributes listed in the **Azure AD B2C** > **User attributes** experience are available to be sent in the request.- Custom attributes exist in the **extension_\<extensions-app-id>_CustomAttribute** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [Define custom attributes in Azure AD B2C](user-flow-custom-attributes.md).- Additionally, these claims are typically sent in all requests for this step: - **UI Locales ('ui_locales')** - An end-user's locale(s) as configured on their device. This can be used by your API to return internationalized responses. - **Step ('step')** - The step or point on the user flow that the API connector was invoked for. Value for this step is `
> [!IMPORTANT] > If a claim does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check and handle the case in which a claim is not in the request.- ## Expected response types from the web API at this step- When the web API receives an HTTP request from Azure AD during a user flow, it can return a "continuation response."- ### Continuation response- A continuation response indicates that the user flow should continue to the next step: issuing the token.- In a continuation response, the API can return additional claims. A claim returned by the API that you wish to return in the token must be a built-in claim or [defined as a custom attribute](user-flow-custom-attributes.md) and must be selected in the **Application claims** configuration of the user flow. - The claim value in the token will be that returned by the API, not the value in the directory. Some claim values cannot be overwritten by the API response. Claims that can be returned by the API correspond to the set found under **User attributes** with the exception of `email`.- > [!NOTE] > The API is only invoked during an initial authentication. When using refresh tokens to silently get new access or ID tokens, the token will include the values evaluated during the initial authentication. - ## Example response- ### Example of a continuation response- ```http HTTP/1.1 200 OK Content-type: application/json- { "version": "1.0.0", "action": "Continue",
"extension_<extensions-app-id>_CustomAttribute": "value" // return claim } ```- | Parameter | Type | Required | Description | | -- | -- | -- | -- | | version | String | Yes | The version of your API. | | action | String | Yes | Value must be `Continue`. | | \<builtInUserAttribute> | \<attribute-type> | No | They can be returned in the token if selected as an **Application claim**. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`, it is *optional*. They can returned in the token if selected as an **Application claim**. |
-
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The returned claim doesn't need to contain `_<extensions-app-id>_`; including it is *optional*. The claim can be returned in the token if selected as an **Application claim**. |
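As a concrete sketch of the continuation-response contract above, the following Python helper (a hypothetical illustration, not an official SDK; the function name and custom-attribute key are placeholders) assembles a valid response body:

```python
import json

def build_continuation_response(version="1.0.0", **claims):
    """Assemble the JSON body of an API connector continuation response.

    Extra claims are merged into the payload; custom attributes must be
    keyed in the extension_<extensions-app-id>_AttributeName format.
    """
    body = {"version": version, "action": "Continue"}
    body.update(claims)
    return json.dumps(body)

# Return a custom attribute claim so it can be issued into the token
# (only if it's also selected as an Application claim in the user flow).
response = build_continuation_response(
    **{"extension_<extensions-app-id>_CustomAttribute": "value"}
)
```

Remember that a claim returned this way only reaches the token if it's also selected under **Application claims** in the user flow configuration.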
::: zone-end- ::: zone pivot="b2c-custom-policy" In this scenario, we enrich the user's token data by integrating with a corporate line-of-business workflow. During sign-up or sign-in with local or federated account, Azure AD B2C invokes a REST API to get the user's extended profile data from a remote data source. In this sample, Azure AD B2C sends the user's unique identifier, the objectId. The REST API then returns the user's account balance (a random number). Use this sample as a starting point to integrate with your own CRM system, marketing database, or any line-of-business workflow.- You can also design the interaction as a validation technical profile. This is suitable when the REST API will be validating data on screen and returning claims. For more information, see [Walkthrough: Add an API connector to a sign-up user flow](add-api-connector.md).- ## Prerequisites- - Complete the steps in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You should have a working custom policy for sign-up and sign-in with local accounts. - Learn how to [Integrate REST API claims exchanges in your Azure AD B2C custom policy](api-connectors-overview.md).- ## Prepare a REST API endpoint- For this walkthrough, you should have a REST API that validates whether a user's Azure AD B2C objectId is registered in your back-end system. If registered, the REST API returns the user account balance. Otherwise, the REST API registers the new account in the directory and returns the starting balance `50.00`.- The following JSON code illustrates the data Azure AD B2C will send to your REST API endpoint. - ```json { "objectId": "User objectId", "lang": "Current UI language" } ```- Once your REST API validates the data, it must return an HTTP 200 (Ok), with the following JSON data:- ```json { "balance": "760.50" } ```- The setup of the REST API endpoint is outside the scope of this article. 
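The request/response contract described above can be sketched as a plain Python handler (hypothetical names and an in-memory store standing in for your back-end system; a real endpoint would run in Azure Functions or another host and authenticate its caller):

```python
import json

# Hypothetical in-memory store standing in for your back-end system.
ACCOUNTS = {}
STARTING_BALANCE = "50.00"

def get_profile(request_body: str) -> str:
    """Return the balance for an Azure AD B2C objectId, registering new users.

    Input:  {"objectId": "...", "lang": "..."}
    Output: HTTP 200 body {"balance": "<amount>"}
    """
    data = json.loads(request_body)
    # New users are registered with the starting balance, as described above.
    balance = ACCOUNTS.setdefault(data["objectId"], STARTING_BALANCE)
    return json.dumps({"balance": balance})

# First call registers the user and returns the starting balance.
first = get_profile('{"objectId": "abc-123", "lang": "en"}')
```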
We have created an [Azure Functions](../azure-functions/functions-reference.md) sample. You can access the complete Azure function code at [GitHub](https://github.com/azure-ad-b2c/rest-api/tree/master/source-code/azure-function).- ## Define claims- A claim provides temporary storage of data during an Azure AD B2C policy execution. You can declare claims within the [claims schema](claimsschema.md) section. - 1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>. 1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it. 1. Locate the [ClaimsSchema](claimsschema.md) element. If the element doesn't exist, add it. 1. Add the following claims to the **ClaimsSchema** element. - ```xml <ClaimType Id="balance"> <DisplayName>Your Balance</DisplayName>
<DataType>string</DataType> </ClaimType> ```- ## Add the RESTful API technical profile - A [Restful technical profile](restful-technical-profile.md) provides support for interfacing with your own RESTful service. Azure AD B2C sends data to the RESTful service in an `InputClaims` collection and receives data back in an `OutputClaims` collection. Find the **ClaimsProviders** element in your <em>**`TrustFrameworkExtensions.xml`**</em> file and add a new claims provider as follows:- ```xml <ClaimsProvider> <DisplayName>REST APIs</DisplayName>
</TechnicalProfiles> </ClaimsProvider> ``` - In this example, the `userLanguage` will be sent to the REST service as `lang` within the JSON payload. The value of the `userLanguage` claim contains the current user language ID. For more information, see [claim resolver](claim-resolver-overview.md).- ### Configure the RESTful API technical profile - After you deploy your REST API, set the metadata of the `REST-GetProfile` technical profile to reflect your own REST API, including:- - **ServiceUrl**. Set the URL of the REST API endpoint. - **SendClaimsIn**. Specify how the input claims are sent to the RESTful claims provider. - **AuthenticationType**. Set the type of authentication being performed by the RESTful claims provider such as `Basic` or `ClientCertificate` - **AllowInsecureAuthInProduction**. In a production environment, make sure to set this metadata to `false`. See the [RESTful technical profile metadata](restful-technical-profile.md#metadata) for more configurations.- The comments above `AuthenticationType` and `AllowInsecureAuthInProduction` specify changes you should make when you move to a production environment. To learn how to secure your RESTful APIs for production, see [Secure your RESTful API](secure-rest-api.md).- ## Add an orchestration step- [User journeys](userjourneys.md) specify explicit paths through which a policy allows a relying party application to obtain the desired claims for a user. A user journey is represented as an orchestration sequence that must be followed through for a successful transaction. You can add or subtract orchestration steps. In this case, you will add a new orchestration step that is used to augment the information provided to the application after the user sign-up or sign-in via the REST API call.- 1. Open the base file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkBase.xml`**</em>. 1. Search for the `<UserJourneys>` element. Copy the entire element, and then delete it. 1. 
Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>. 1. Paste the `<UserJourneys>` into the extensions file, after the close of the `<ClaimsProviders>` element. 1. Locate the `<UserJourney Id="SignUpOrSignIn">`, and add the following orchestration step before the last one.- ```xml <OrchestrationStep Order="7" Type="ClaimsExchange"> <ClaimsExchanges>
</ClaimsExchanges> </OrchestrationStep> ```- 1. Refactor the last orchestration step by changing the `Order` to `8`. Your final two orchestration steps should look like the following:- ```xml <OrchestrationStep Order="7" Type="ClaimsExchange"> <ClaimsExchanges> <ClaimsExchange Id="RESTGetProfile" TechnicalProfileReferenceId="REST-GetProfile" /> </ClaimsExchanges> </OrchestrationStep>- <OrchestrationStep Order="8" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" /> ```- 1. Repeat the last two steps for the **ProfileEdit** and **PasswordReset** user journeys.-- ## Include a claim in the token - To return the `balance` claim back to the relying party application, add an output claim to the <em>`SocialAndLocalAccounts/`**`SignUpOrSignIn.xml`**</em> file. Adding an output claim will issue the claim into the token after a successful user journey, and will be sent to the application. Modify the technical profile element within the relying party section to add `balance` as an output claim. ```xml
</TechnicalProfile> </RelyingParty> ```- Repeat this step for the **ProfileEdit.xml** and **PasswordReset.xml** user journeys.- Save the files you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*. - ## Test the custom policy- 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directories + subscriptions** icon in the portal toolbar. 1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
1. Select the sign-up or sign-in policy that you uploaded, and click the **Run now** button. 1. You should be able to sign up using an email address or a Facebook account. 1. The token sent back to your application includes the `balance` claim.- ```json { "typ": "JWT",
} ``` ::: zone-end- ::: zone pivot="b2c-user-flow" ## Best practices and how to troubleshoot
Ensure that:
* If you're using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended you use at minimum the [Premium plan](../azure-functions/functions-scale.md) in production. * Ensure high availability of your API. * Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
-
+ [!INCLUDE [active-directory-b2c-https-cipher-tls-requirements](../../includes/active-directory-b2c-https-cipher-tls-requirements.md)] ### Use logging
+### Using serverless cloud functions
+
+Serverless functions, like [HTTP triggers in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md), provide a way to create API endpoints to use with the API connector. The serverless cloud function can also call other web APIs, data stores, and other cloud services for complex scenarios.
+
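As an illustration of the shape such an endpoint takes (a hypothetical handler, not the Azure Functions programming model; the credential is a stand-in for whatever authentication you configure), the core of an HTTP-triggered connector endpoint reduces to: verify the caller, read the incoming claims, return a response:

```python
import base64
import json

# Stand-in Basic credential; use the secret you configure on the API connector.
EXPECTED_AUTH = "Basic " + base64.b64encode(b"b2c-user:secret").decode()

def handle_request(headers: dict, body: str):
    """Sketch of an API connector endpoint: authenticate, then enrich."""
    if headers.get("Authorization") != EXPECTED_AUTH:
        return 401, json.dumps({"error": "unauthorized"})
    claims = json.loads(body)
    # Incoming claims (email, ui_locales, ...) would drive a lookup in
    # downstream services here; this sketch simply continues the flow.
    return 200, json.dumps({"version": "1.0.0", "action": "Continue"})

status, payload = handle_request(
    {"Authorization": EXPECTED_AUTH},
    '{"email": "johnsmith@fabrikam.onmicrosoft.com", "ui_locales": "en-US"}',
)
```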
+### Using logging
In general, it's helpful to use the logging tools enabled by your web API service, like [Application insights](../azure-functions/functions-monitoring.md), to monitor your API for unexpected error codes, exceptions, and poor performance. * Monitor for HTTP status codes that aren't HTTP 200 or 400. * A 401 or 403 HTTP status code typically indicates there's an issue with your authentication. Double-check your API's authentication layer and the corresponding configuration in the API connector. * Use more aggressive levels of logging (for example "trace" or "debug") in development if needed. * Monitor your API for long response times. - Additionally, Azure AD B2C logs metadata about the API transactions that happen during user authentications via a user flow. To find these: 1. Go to **Azure AD B2C**
-2. Under **Activities**, select **Audit logs**.
-3. Filter the list view: For **Date**, select the time interval you want, and for **Activity**, select **An API was called as part of a user flow**.
-4. Inspect individual logs. Each row represents an API connector attempting to be called during a user flow. If an API call fails and a retry occurs, it's still represented as a single row. The `numberOfAttempts` indicates the number of times your API was called. This value can be `1`or `2`. Other information about the API call is detailed in the logs.
-
- ![Screenshot of an example audit log with API connector transaction](media/add-api-connector-token-enrichment/example-anonymized-audit-log.png)
-
+1. Under **Activities**, select **Audit logs**.
+1. Filter the list view: For **Date**, select the time interval you want, and for **Activity**, select **An API was called as part of a user flow**.
+1. Inspect individual logs. Each row represents an attempt to call an API connector during a user flow. If an API call fails and a retry occurs, it's still represented as a single row. The `numberOfAttempts` indicates the number of times your API was called. This value can be `1` or `2`. Other information about the API call is detailed in the logs.
+ ![Screenshot of an example audit log with API connector transaction.](media/add-api-connector-token-enrichment/example-anonymized-audit-log.png)
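For example, given exported audit log rows shaped like the entries described above (field names other than `numberOfAttempts` are assumed for illustration), you could flag API calls that needed a retry:

```python
# Hypothetical exported rows; only numberOfAttempts comes from the log schema above.
logs = [
    {"activity": "An API was called as part of a user flow", "numberOfAttempts": 1},
    {"activity": "An API was called as part of a user flow", "numberOfAttempts": 2},
]

# A numberOfAttempts of 2 means the first call failed and was retried.
retried = [row for row in logs if row["numberOfAttempts"] > 1]
```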
::: zone-end- ## Next steps- ::: zone pivot="b2c-user-flow"- - Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples). - [Secure your API Connector](secure-rest-api.md)- ::: zone-end- ::: zone pivot="b2c-custom-policy"- To learn how to secure your APIs, see the following articles:- - [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](add-api-connector-token-enrichment.md) - [Secure your RESTful API](secure-rest-api.md) - [Reference: RESTful technical profile](restful-technical-profile.md)- ::: zone-end
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
Last updated 12/20/2022---++ zone_pivot_groups: b2c-policy-type
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md
Title: Add an identity provider - Azure Active Directory B2C description: Learn how to add an identity provider to your Active Directory B2C tenant. -+ - Previously updated : 04/08/2022+ Last updated : 01/19/2022
You can configure Azure AD B2C to allow users to sign in to your application with external social or enterprise identity providers.
With external identity provider federation, you can offer your consumers the ability to sign in with their existing social or enterprise accounts, without having to create a new account just for your application.
-On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once they select one of the external identity providers, they're taken (redirected) to the selected provider's website to complete the sign in process. After the user successfully signs in, they're returned to Azure AD B2C for authentication of the account in your application.
+On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once they select one of the external identity providers, they're taken (redirected) to the selected provider's website to complete the sign-in process. After the user successfully signs in, they're returned to Azure AD B2C for authentication of the account in your application.
-![Mobile sign-in example with a social account (Facebook)](media/add-identity-provider/external-idp.png)
+![Diagram showing mobile sign-in example with a social account (Facebook).](media/add-identity-provider/external-idp.png)
You can add identity providers that are supported by Azure Active Directory B2C (Azure AD B2C) to your [user flows](user-flow-overview.md) using the Azure portal. You can also add identity providers to your [custom policies](user-flow-overview.md).
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
Title: Set up a password reset flow
description: Learn how to set up a password reset flow in Azure Active Directory B2C (Azure AD B2C). -+
Last updated 10/25/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
Define your application and service architecture, inventory current systems, and plan your migration to Azure AD B2C.
| Usability vs. security | Your solution must strike the right balance between application usability and your organization's acceptable level of risk. | | Move on-premises dependencies to the cloud | To help ensure a resilient solution, consider moving existing application dependencies to the cloud. | | Migrate existing apps to b2clogin.com | The deprecation of login.microsoftonline.com will go into effect for all Azure AD B2C tenants on 04 December 2020. [Learn more](b2clogin.md). |
+| Use Identity Protection and Conditional Access | Use these capabilities for significantly greater control over risky authentications and access policies. Azure AD B2C Premium P2 is required. [Learn more](conditional-access-identity-protection-overview.md). |
+|Tenant size | Plan with your Azure AD B2C tenant size in mind. By default, an Azure AD B2C tenant can accommodate 1.25 million objects (user accounts and applications). You can increase this limit to 5.25 million objects by adding a custom domain to your tenant and verifying it. If you need a bigger tenant, contact [Support](find-help-open-support-ticket.md).|
## Implementation
Stay up to date with the state of the service and find support options.
| Best practice | Description | |--|--| | [Service updates](https://azure.microsoft.com/updates/?product=active-directory-b2c) | Stay up to date with Azure AD B2C product updates and announcements. |
-| [Microsoft Support](support-options.md) | File a support request for Azure AD B2C technical issues. Billing and subscription management support is provided at no cost. |
+| [Microsoft Support](find-help-open-support-ticket.md) | File a support request for Azure AD B2C technical issues. Billing and subscription management support is provided at no cost. |
| [Azure status](https://azure.status.microsoft/status) | View the current health status of all Azure services. |+
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-user-input.md
Title: Add user attributes and customize user input
description: Learn how to customize user input and add user attributes to the sign-up or sign-in journey in Azure Active Directory B2C. -+
Last updated 12/28/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
Previously updated : 07/26/2022 Last updated : 11/3/2022
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign in process rather than redirecting to the Azure AD B2C default domain *&lt;tenant-name&gt;.b2clogin.com*.
+This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). A verified custom domain gives you benefits such as:
+
+- It provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than being redirected to the Azure AD B2C default domain *&lt;tenant-name&gt;.b2clogin.com*.
+
+- You increase the number of objects (user accounts and applications) you can create in your Azure AD B2C tenant from the default 1.25 million to 5.25 million.
![Screenshot demonstrates an Azure AD B2C custom domain user experience.](./media/custom-domain/custom-domain-user-experience.png)
active-directory-b2c Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-protection-investigate-risk.md
Last updated 09/16/2021 --++ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs-saml.md
Title: Add AD FS as a SAML identity provider by using custom policies
description: Set up AD FS 2016 using the SAML protocol and custom policies in Azure Active Directory B2C -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
Title: Add AD FS as an OpenID Connect identity provider by using custom policies
description: Set up AD FS 2016 using the OpenID Connect protocol and custom policies in Azure Active Directory B2C -+
Last updated 06/08/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-amazon.md
Title: Set up sign-up and sign-in with an Amazon account
description: Provide sign-up and sign-in to customers with Amazon accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-apple-id.md
Title: Set up sign-up and sign-in with an Apple ID
description: Provide sign-up and sign-in to customers with Apple ID in your applications using Azure Active Directory B2C. -+
Last updated 11/02/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
Title: Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant
description: Provide sign-up and sign-in to customers with Azure AD B2C accounts from another tenant in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Title: Set up sign-in for multi-tenant Azure AD by custom policies
description: Add a multi-tenant Azure AD identity provider using custom policies in Azure Active Directory B2C. -+
Last updated 11/17/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Title: Set up sign-in for an Azure AD organization
description: Set up sign-in for a specific Azure Active Directory organization in Azure Active Directory B2C. -+ Last updated 10/11/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Ebay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ebay.md
Title: Set up sign-up and sign-in with an eBay account
description: Provide sign-up and sign-in to customers with eBay accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
Title: Set up sign-up and sign-in with a Facebook account
description: Provide sign-up and sign-in to customers with Facebook accounts in your applications using Azure Active Directory B2C. -+
Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
Title: Set up sign-up and sign-in with OpenID Connect
description: Set up sign-up and sign-in with any OpenID Connect identity provider (IdP) in Azure Active Directory B2C. -+ Last updated 12/28/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Title: Set sign-in with SAML identity provider options
description: Configure sign-in SAML identity provider (IdP) options in Azure Active Directory B2C. -+
Last updated 01/13/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md
Title: Set up sign-up and sign-in with SAML identity provider
description: Set up sign-up and sign-in with any SAML identity provider (IdP) in Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
Title: Set up sign-up and sign-in with a GitHub account
description: Provide sign-up and sign-in to customers with GitHub accounts in your applications using Azure Active Directory B2C. -+
Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
Title: Set up sign-up and sign-in with a Google account
description: Provide sign-up and sign-in to customers with Google accounts in your applications using Azure Active Directory B2C. -+
Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Id Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-id-me.md
Title: Set up sign-up and sign-in with an ID.me account
description: Provide sign-up and sign-in to customers with ID.me accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md
Title: Set up sign-up and sign-in with a LinkedIn account
description: Provide sign-up and sign-in to customers with LinkedIn accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
Title: Set up Azure AD B2C local account identity provider
description: Define the identity types users can use to sign up or sign in (email, username, phone number) in your Azure Active Directory B2C tenant. -+ Last updated 09/02/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md
Title: Set up sign-up and sign-in with a Microsoft Account
description: Provide sign-up and sign-in to customers with Microsoft Accounts in your applications using Azure Active Directory B2C. -+
Last updated 01/13/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Mobile Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md
Title: Set up sign-up and sign-in with Mobile ID
description: Provide sign-up and sign-in to customers with Mobile ID in your applications using Azure Active Directory B2C. -+ Last updated 04/08/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Ping One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md
Title: Set up sign-up and sign-in with a PingOne account
description: Provide sign-up and sign-in to customers with PingOne accounts in your applications using Azure Active Directory B2C. -+
Last updated 12/2/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-qq.md
Title: Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C description: Provide sign-up and sign-in to customers with QQ accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce-saml.md
Title: Set up sign-in with a Salesforce SAML provider by using SAML protocol
description: Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce.md
Title: Set up sign-up and sign-in with a Salesforce account
description: Provide sign-up and sign-in to customers with Salesforce accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md
Title: Set up sign-up and sign-in with a SwissID account
description: Provide sign-up and sign-in to customers with SwissID accounts in your applications using Azure Active Directory B2C. -+ Last updated 12/07/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
Title: Set up sign-up and sign-in with a Twitter account
description: Provide sign-up and sign-in to customers with Twitter accounts in your applications using Azure Active Directory B2C. -+
Last updated 07/20/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-wechat.md
Title: Set up sign-up and sign-in with a WeChat account
description: Provide sign-up and sign-in to customers with WeChat accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-weibo.md
Title: Set up sign-up and sign-in with a Weibo account
description: Provide sign-up and sign-in to customers with Weibo accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
description: Learn about our partners who integrate with Azure AD B2C to provide identity proofing and verification solutions -+ - Previously updated : 09/13/2022 Last updated : 01/18/2023 - # Identity verification and proofing partners
-With Azure AD B2C partners, customers can enable identity verification and proofing of their end users before allowing account registration or access. Identity verification and proofing can check document, knowledge-based information and liveness.
+With Azure Active Directory B2C (Azure AD B2C) and solutions from software-vendor partners, customers can enable end-user identity verification and proofing for account registration. Identity verification and proofing can check documents, knowledge-based information, and liveness.
+
+## Architecture diagram
+
+The following architecture diagram illustrates the verification and proofing flow.
-A high-level architecture diagram explains the flow.
+ ![Diagram of the identity proofing flow, from registration to access approval.](./media/partner-gallery/third-party-identity-proofing.png)
-![Diagram shows the identity proofing flow](./media/partner-gallery/third-party-identity-proofing.png)
+1. User begins registration with a device.
+2. User enters information.
+3. A digital-risk score is assessed, and then third-party identity proofing and identity validation occur.
+4. Identity is validated or rejected.
+5. User attributes are passed to Azure Active Directory B2C.
+6. If user verification is successful, a user account is created in Azure AD B2C during sign-in.
+7. Based on the verification result, the user receives an access-approved or access-denied message.
-Microsoft partners with the following ISV partners.
+## Software vendors and integration documentation
-| ISV partner | Description and integration walkthroughs |
-|:-|:--|
-| ![Screenshot of a deduce logo.](./medi) is an identity verification and proofing provider focused on stopping account takeover and registration fraud. It helps combat identity fraud and creates a trusted user experience. |
-| ![Screenshot of a eid-me logo](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
-|![Screenshot of an Experian logo.](./medi) is an Identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
-|![Screenshot of an IDology logo.](./medi) is an Identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.|
-|![Screenshot of a Jumio logo.](./medi) is an ID verification service, which enables real-time automated ID verification, safeguarding customer data. |
| ![Screenshot of a LexisNexis logo.](./medi) is a profiling and identity validation provider that verifies user identification and provides comprehensive risk assessment based on user's device. |
-| ![Screenshot of a Onfido logo](./medi) is a document ID and facial biometrics verification solution that allows companies to meet *Know Your Customer* and identity requirements in real time. |
+Microsoft partners with independent software vendors (ISVs). Use the following table to locate an ISV and related integration documentation.
-## Additional information
+| ISV logo | ISV link and description| Integration documentation|
+||||
+| ![Screenshot of the Deduce logo.](./medi)|
+| ![Screenshot of the eID-Me logo.](./medi)|
+|![Screenshot of the Experian logo.](./medi)|
+|![Screenshot of the IDology logo.](./medi)|
+|![Screenshot of the Jumio logo.](./medi)|
+| ![Screenshot of the LexisNexis logo.](./medi)|
+| ![Screenshot of the Onfido logo.](./medi)|
-- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
+## Resources
-- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
+- [Azure AD B2C custom policy overview](custom-policy-overview.md)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
## Next steps
-Select a partner in the tables mentioned to learn how to integrate their solution with Azure AD B2C.
+Select a partner from the preceding table and contact them to get started on integrating their solution with Azure AD B2C. Each partner has a similar contact process for requesting a product demo.
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
Title: JavaScript and page layout versions
description: Learn how to enable JavaScript and use page layout versions in Azure Active Directory B2C. -+
Last updated 10/26/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md
Title: Language customization in Azure Active Directory B2C description: Learn about customizing the language experience in your user flows in Azure Active Directory B2C. -+
Last updated 12/28/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
Previously updated : 03/03/2022 Last updated : 11/3/2022
Watch this video to learn about Azure AD B2C user migration using Microsoft Grap
## Prerequisites
-To use MS Graph API, and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so. Follow the steps in the [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-get-started.md) article to create an application registration that your management application can use.
+- To use the Microsoft Graph API and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the required permissions. Follow the steps in the [Register a Microsoft Graph application](microsoft-graph-get-started.md) article to create an application registration that your management application can use.
## User management > [!NOTE]
For user flows, these extension properties are [managed by using the Azure porta
> [!NOTE] > In Azure AD, directory extensions are managed through the [extensionProperty resource type](/graph/api/resources/extensionproperty) and its associated methods. However, because they are used in B2C through the `b2c-extensions-app` app which should not be updated, they are managed in Azure AD B2C using the [identityUserFlowAttribute resource type](/graph/api/resources/identityuserflowattribute) and its associated methods.
+## Tenant usage
+
+Use the [Get organization details](/graph/api/organization-get) API to get your directory size quota. You need to add the `$select` query parameter as shown in the following HTTP request:
+
+```http
+ GET https://graph.microsoft.com/v1.0/organization/organization-id?$select=directorySizeQuota
+```
+Replace `organization-id` with your organization or tenant ID.
+
+The response to the above request looks similar to the following JSON snippet:
+
+```json
+{
+ "directorySizeQuota": {
+ "used": 156,
+ "total": 1250000
+ }
+}
+```
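Interpreting the quota numbers is straightforward arithmetic; the following minimal Python sketch parses the response shown above and reports how full the directory is. The `directory_usage` helper name is hypothetical, not part of any Microsoft SDK, and acquiring the Graph access token is out of scope here.

```python
def directory_usage(org_response: dict) -> tuple[int, int, float]:
    """Extract (used, total, percent_used) from a Get organization
    response that was requested with ?$select=directorySizeQuota."""
    quota = org_response["directorySizeQuota"]
    used, total = quota["used"], quota["total"]
    return used, total, 100.0 * used / total

# Using the sample response shown above:
used, total, pct = directory_usage(
    {"directorySizeQuota": {"used": 156, "total": 1250000}}
)
print(f"{used:,} of {total:,} objects used ({pct:.4f}%)")
```

A percentage approaching 100 is the signal to clean up unused objects or request a higher quota.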
## Audit logs - [List audit logs](/graph/api/directoryaudit-list)
active-directory-b2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/overview.md
Title: What is Azure Active Directory B2C? description: Learn how you can use Azure Active Directory B2C to support external identities in your applications, including social sign-up with Facebook, Google, and other identity providers. -+
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Previously updated : 1/4/2023 Last updated : 01/18/2023
Username and password are stored as environment variables, not part of the repos
- [Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose) - Find the Azure AD B2C sign-up user flow - [Azure AD B2C custom policy overview](./custom-policy-overview.md)-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
Previously updated : 12/19/2022 Last updated : 01/18/2023
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
Previously updated : 12/20/2022 Last updated : 01/18/2023
active-directory-b2c Quickstart Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md
Title: "Quickstart: Set up sign-in for an ASP.NET web app"
description: In this Quickstart, run a sample ASP.NET web app that uses Azure Active Directory B2C to provide account sign-in. -+ Previously updated : 10/01/2021- Last updated : 01/17/2023+
In this quickstart, you use an ASP.NET application to sign in using a social ide
## Prerequisites -- [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload.
+- [Visual Studio 2022](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload.
- A social account from Facebook, Google, or Microsoft. - [Download a zip file](https://github.com/Azure-Samples/active-directory-b2c-dotnet-webapp-and-webapi/archive/master.zip) or clone the sample web application from GitHub.
In this quickstart, you use an ASP.NET application to sign in using a social ide
## Run the application in Visual Studio 1. In the sample application project folder, open the **B2C-WebAPI-DotNet.sln** solution in Visual Studio.
-2. For this quickstart, you run both the **TaskWebApp** and **TaskService** projects at the same time. Right-click the **B2C-WebAPI-DotNet** solution in Solution Explorer, and then select **Set StartUp Projects**.
-3. Select **Multiple startup projects** and change the **Action** for both projects to **Start**.
-4. Select **OK**.
-5. Press **F5** to debug both applications. Each application opens in its own browser tab:
+1. For this quickstart, you run both the **TaskWebApp** and **TaskService** projects at the same time. Right-click the **B2C-WebAPI-DotNet** solution in Solution Explorer, and then select **Set StartUp Projects**.
+1. Select **Multiple startup projects** and change the **Action** for both projects to **Start**.
+1. Select **OK**.
+1. Press **F5** to debug both applications. Each application opens in its own browser tab:
- `https://localhost:44316/` - The ASP.NET web application. You interact directly with this application in the quickstart. - `https://localhost:44332/` - The web API that's called by the ASP.NET web application.
In this quickstart, you use an ASP.NET application to sign in using a social ide
1. Select **Sign up / Sign in** in the ASP.NET web application to start the workflow.
- ![Sample ASP.NET web app in browser with sign up/sign link highlighted](./media/quickstart-web-app-dotnet/web-app-sign-in.png)
+ ![Screenshot showing the sample ASP.NET web app in browser with sign up/sign in link highlighted](./media/quickstart-web-app-dotnet/web-app-sign-in.png)
The sample supports several sign-up options including using a social identity provider or creating a local account using an email address. For this quickstart, use a social identity provider account from either Facebook, Google, or Microsoft.
-2. Azure AD B2C presents a sign-in page for a fictitious company called Fabrikam for the sample web application. To sign up using a social identity provider, select the button of the identity provider you want to use.
+1. Azure AD B2C presents a sign-in page for a fictitious company called Fabrikam for the sample web application. To sign up using a social identity provider, select the button of the identity provider you want to use.
- ![Sign In or Sign Up page showing identity provider buttons](./media/quickstart-web-app-dotnet/sign-in-or-sign-up-web.png)
+ ![Screenshot of the Sign In or Sign Up page showing identity provider buttons](./media/quickstart-web-app-dotnet/sign-in-or-sign-up-web.png)
You authenticate (sign in) using your social account credentials and authorize the application to read information from your social account. By granting access, the application can retrieve profile information from the social account such as your name and city.
-3. Finish the sign-in process for the identity provider.
+1. Finish the sign-in process for the identity provider.
## Edit your profile
Azure Active Directory B2C provides functionality to allow users to update their
1. In the application menu bar, select your profile name, and then select **Edit profile** to edit the profile you created.
- ![Sample web app in browser with Edit profile link highlighted](./media/quickstart-web-app-dotnet/edit-profile-web.png)
+ ![Screenshot of the sample web app in browser with the edit profile link highlighted](./media/quickstart-web-app-dotnet/edit-profile-web.png)
-2. Change your **Display name** or **City**, and then select **Continue** to update your profile.
+1. Change your **Display name** or **City**, and then select **Continue** to update your profile.
The change is displayed in the upper right portion of the web application's home page.
Azure Active Directory B2C provides functionality to allow users to update their
1. Select **To-Do List** to enter and modify your to-do list items.
-2. In the **New Item** text box, enter text. To call the Azure AD B2C protected web API that adds a to-do list item, select **Add**.
+1. In the **New Item** text box, enter text. To call the Azure AD B2C protected web API that adds a to-do list item, select **Add**.
- ![Sample web app in browser with Add a to-do list item](./media/quickstart-web-app-dotnet/add-todo-item-web.png)
+ ![Screenshot of the sample web app in browser with To-Do List link and Add button highlighted.](./media/quickstart-web-app-dotnet/add-todo-item-web.png)
The ASP.NET web application includes an Azure AD access token in the request to the protected web API resource to perform operations on the user's to-do list items.
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
Previously updated : 12/01/2022 Last updated : 12/29/2022 zone_pivot_groups: b2c-policy-type
The following table lists the administrative configuration limits in the Azure A
|Number of sign-out URLs per application |1 | |String Limit per Attribute |250 Chars | |Number of B2C tenants per subscription |20 |
+|Total number of objects (user accounts and applications) per tenant (default limit)|1.25 million |
+|Total number of objects (user accounts and applications) per tenant (using a verified custom domain)|5.25 million |
|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 | |Number of policies per Azure AD B2C tenant (user flows + custom policies) |200 | |Maximum policy file size |1024 KB |
active-directory-b2c Sign In Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/sign-in-options.md
Title: Sign-in options supported by Azure AD B2C
description: Learn about the sign-up and sign-in options you can use with Azure Active Directory B2C, including username and password, email, phone, or federation with social or external identity providers. -+ Previously updated : 11/03/2022- Last updated : 01/18/2022+
Email sign-up is enabled by default in your local account identity provider sett
- **Sign-up**: users are prompted for an email address, which is verified at sign-up (optional) and becomes their login ID. The user then enters any other information requested on the sign-up page, for example, display name, given name, and surname. Then they select **Continue** to create an account. - **Password reset**: Users enter and verify their email, after which the user can reset the password
-![Email sign-up or sign-in experience](./media/sign-in-options/local-account-email-experience.png)
+![Series of screenshots showing email sign-up or sign-in experience.](./media/sign-in-options/local-account-email-experience.png)
Learn how to configure email sign-in in your local account identity provider. ## Username sign-in
Your local account identity provider includes a Username option that lets users
- **Sign-up**: Users will be prompted for a username, which will become their login ID. Users will also be prompted for an email address, which will be verified at sign-up. The email address will be used during a password reset flow. The user enters any other information requested on the sign-up page, for example, Display Name, Given Name, and Surname. The user then selects Continue to create the account. - **Password reset**: Users must enter their username and the associated email address. The email address must be verified, after which, the user can reset the password.
-![Username sign-up or sign-in experience](./media/sign-in-options/local-account-username-experience.png)
+![Series of screenshots showing username sign-up or sign-in experience.](./media/sign-in-options/local-account-username-experience.png)
## Phone sign-in
Phone sign-in is a passwordless option in your local account identity provider s
1. Next, the user is asked to provide a **recovery email**. The user enters their email address, and then selects *Send verification code*. A code is sent to the user's email inbox, which they can retrieve and enter in the Verification code box. Then the user selects Verify code. 1. Once the code is verified, the user selects *Create* to create their account.
-![Phone sign-up or sign-in experience](./media/sign-in-options/local-account-phone-experience.png)
+![Series of screenshots showing phone sign-up or sign-in experience.](./media/sign-in-options/local-account-phone-experience.png)
### Pricing for phone sign-in
One-time passwords are sent to your users by using SMS text messages. Depending
When you enable phone sign-up and sign-in for your user flows, it's also a good idea to enable the recovery email feature. With this feature, a user can provide an email address that can be used to recover their account when they don't have their phone. This email address is used for account recovery only. It can't be used for signing in. -- When the recovery email prompt is **On**, a user signing up for the first time is prompted to verify a backup email. A user who hasn't provided a recovery email before is asked to verify a backup email during next sign in.
+- When the recovery email prompt is **On**, a user signing up for the first time is prompted to verify a backup email. A user who hasn't provided a recovery email before is asked to verify a backup email during next sign-in.
- When recovery email is **Off**, a user signing up or signing in isn't shown the recovery email prompt. The following screenshots demonstrate the phone recovery flow:
-![Phone recovery user flow](./media/sign-in-options/local-account-change-phone-flow.png)
+![Diagram showing phone recovery user flow.](./media/sign-in-options/local-account-change-phone-flow.png)
## Phone or email sign-in You can choose to combine the [phone sign-in](#phone-sign-in), and the [email sign-in](#email-sign-in) in your local account identity provider settings. In the sign-up or sign-in page, user can type a phone number, or email address. Based on the user input, Azure AD B2C takes the user to the corresponding flow.
-![Phone or email sign-up or sign-in experience](./media/sign-in-options/local-account-phone-and-email-experience.png)
+![Series of screenshots showing phone or email sign-up or sign-in experience.](./media/sign-in-options/local-account-phone-and-email-experience.png)
++
+## Federated sign-in
+
+You can configure Azure AD B2C to allow users to sign in to your application with credentials from external social or enterprise identity providers (IdPs). Azure AD B2C supports many [external identity providers](add-identity-provider.md) and any identity provider that supports OAuth 1.0, OAuth 2.0, OpenID Connect, and SAML protocols.
+
+With external identity provider federation, you can offer your consumers the ability to sign in with their existing social or enterprise accounts, without having to create a new account just for your application.
+
+On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once they select one of the external identity providers, they're redirected to the selected provider's website to complete the sign-in process. After the user successfully signs in, they're returned to Azure AD B2C for authentication of the account in your application.
+
+![Diagram showing mobile sign-in example with a social account (Facebook).](media/add-identity-provider/external-idp.png)
+
+You can add identity providers that are supported by Azure Active Directory B2C (Azure AD B2C) to your [user flows](user-flow-overview.md) using the Azure portal. You can also add identity providers to your [custom policies](custom-policy-overview.md).
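In a custom policy, federation with an external provider is declared as a claims provider technical profile in the policy XML. The following is a minimal, hedged sketch for a hypothetical OpenID Connect provider; the metadata URL, client ID, and the `B2C_1A_ContosoAppSecret` policy key name are placeholder assumptions, not values from this article. The generic OpenID Connect and SAML identity provider articles cover the authoritative element-by-element steps.

```xml
<!-- Hypothetical example: federate with an external OIDC provider. -->
<ClaimsProvider>
  <Domain>contoso.com</Domain>
  <DisplayName>Contoso</DisplayName>
  <TechnicalProfiles>
    <TechnicalProfile Id="Contoso-OpenIdConnect">
      <DisplayName>Sign in with Contoso</DisplayName>
      <Protocol Name="OpenIdConnect" />
      <Metadata>
        <!-- Discovery document of the external provider (placeholder URL). -->
        <Item Key="METADATA">https://idp.contoso.com/.well-known/openid-configuration</Item>
        <Item Key="client_id">00000000-0000-0000-0000-000000000000</Item>
        <Item Key="response_types">code</Item>
        <Item Key="scope">openid profile</Item>
        <Item Key="response_mode">form_post</Item>
      </Metadata>
      <CryptographicKeys>
        <!-- Policy key holding the client secret (placeholder name). -->
        <Key Id="client_secret" StorageReferenceId="B2C_1A_ContosoAppSecret" />
      </CryptographicKeys>
      <OutputClaims>
        <!-- Map the provider's claims to Azure AD B2C claim types. -->
        <OutputClaim ClaimTypeReferenceId="issuerUserId" PartnerClaimType="sub" />
        <OutputClaim ClaimTypeReferenceId="givenName" PartnerClaimType="given_name" />
        <OutputClaim ClaimTypeReferenceId="identityProvider" DefaultValue="contoso.com" />
      </OutputClaims>
    </TechnicalProfile>
  </TechnicalProfiles>
</ClaimsProvider>
```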
## Next steps - Find out more about the built-in policies provided by [User flows in Azure Active Directory B2C](user-flow-overview.md).-- [Configure your local account identity provider](identity-provider-local.md).
+- [Configure your local account identity provider](identity-provider-local.md).
active-directory-b2c Technical Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technical-overview.md
Title: Technical and feature overview - Azure Active Directory B2C description: An in-depth introduction to the features and technologies in Azure Active Directory B2C. Azure Active Directory B2C has high availability globally. -+
Last updated 10/26/2022 -+
active-directory-b2c Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management.md
Previously updated : 11/24/2022 Last updated : 12/29/2022
To get your Azure AD B2C tenant ID, follow these steps:
1. In the Azure portal, search for and select **Azure Active Directory**. 1. In the **Overview**, copy the **Tenant ID**.
-![Screenshot demonstrates how to get the Azure AD B2C tenant ID.](./media/tenant-management/get-azure-ad-b2c-tenant-id.png)
+![Screenshot demonstrating how to get the Azure AD B2C tenant ID.](./media/tenant-management/get-azure-ad-b2c-tenant-id.png)
+
+## Get your tenant usage
+
+You can read your Azure AD B2C tenant's total directory size and how much of it is in use. To do so, follow the steps in [Get tenant usage by using Microsoft Graph API](microsoft-graph-operations.md#tenant-usage).
## Next steps
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
Title: Mitigate credential attacks - Azure AD B2C
description: Learn about detection and mitigation techniques for credential attacks (password attacks) in Azure Active Directory B2C, including smart account lockout features. -+ Last updated 09/20/2021-+
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Previously updated : 07/12/2022 Last updated : 01/20/2023
Before your applications can interact with Azure Active Directory B2C (Azure AD B2C), they must be registered in a tenant that you manage.
-> [!NOTE]
-> You can create up to 20 tenants per subscription. This limit helps protect against threats to your resources, such as denial-of-service attacks, and is enforced in both the Azure portal and the underlying tenant creation API. If you need to create more than 20 tenants, please contact [Microsoft Support](support-options.md).
->
-> If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant first](./faq.yml?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant-). A role of at least Subscription Administrator is required. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name.
- In this article, you learn how to: > [!div class="checklist"]
In this article, you learn how to:
> * Switch to the directory containing your Azure AD B2C tenant > * Add the Azure AD B2C resource as a **Favorite** in the Azure portal
-You learn how to register an application in the next tutorial.
+Before you create your Azure AD B2C tenant, you need to take the following considerations into account:
+
+- You can create up to **20** tenants per subscription. This limit helps protect against threats to your resources, such as denial-of-service attacks, and is enforced in both the Azure portal and the underlying tenant creation API. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md).
+
+- By default, each tenant can accommodate a total of **1.25 million** objects (user accounts and applications), but you can increase this limit to **5.25 million** objects when you add and verify a custom domain. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md). However, if you created your tenant before **September 2022**, this limit doesn't affect you, and your tenant will retain the size allocated to it at creation, that is, **50 million** objects.
+
+- If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant first](./faq.yml?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant-). A role of at least *Subscription Administrator* is required. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name.
## Prerequisites
You learn how to register an application in the next tutorial.
![Select the Create a resource button](media/tutorial-create-tenant/create-a-resource.png) 1. Search for **Azure Active Directory B2C**, and then select **Create**.
-2. Select **Create a new Azure AD B2C Tenant**.
+
+1. Select **Create a new Azure AD B2C Tenant**.
![Create a new Azure AD B2C tenant selected in Azure portal](media/tutorial-create-tenant/portal-02-create-tenant.png)
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
Title: Tutorial - Create user flows and custom policies - Azure Active Directory B2C description: Follow this tutorial to learn how to create user flows and custom policies in the Azure portal to enable sign up, sign in, and user profile editing for your applications in Azure Active Directory B2C. -+ Last updated 10/26/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Tutorial Register Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md
Title: "Tutorial: Register a web application in Azure Active Directory B2C"
description: Follow this tutorial to learn how to register a web application in Azure Active Directory B2C using the Azure portal. -+
Last updated 10/26/2022 -+
active-directory-b2c User Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-overview.md
Title: User flows and custom policies in Azure Active Directory B2C
description: Learn more about built-in user flows and the custom policy extensible policy framework of Azure Active Directory B2C. -+
Last updated 10/24/2022 -+
active-directory-b2c User Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-migration.md
description: Migrate user accounts from another identity provider to Azure AD B2
- Previously updated : 10/24/2022 Last updated : 12/29/2022
Watch this video to learn about Azure AD B2C user migration strategies and steps
>[!Video https://www.youtube.com/embed/lCWR6PGUgz0] +
+> [!NOTE]
+> Before you start the migration, make sure your Azure AD B2C tenant's unused quota can accommodate all the users you expect to migrate. Learn how to [Get your tenant usage](microsoft-graph-operations.md#tenant-usage). If you need to increase your tenant's quota limit, contact [Microsoft Support](find-help-open-support-ticket.md).
+ ## Pre migration In the pre migration flow, your migration application performs these steps for each user account:
active-directory-b2c User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-overview.md
Title: Overview of user accounts in Azure Active Directory B2C description: Learn about the types of user accounts that can be used in Azure Active Directory B2C. -+ Last updated 12/28/2022-+
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
Before you get started with Application Proxy Complex application scenario apps,
To configure (and update) Application Segments for a complex app using the API, you first [create a wildcard application](application-proxy-wildcard.md#create-a-wildcard-application), and then update the application's onPremisesPublishing property to configure the application segments and respective CORS settings. > [!NOTE]
-> One application segment is supported in preview. Support for multiple application segment to be announced soon.
+> Two application segments per complex application are supported with a [Microsoft Azure AD Premium subscription](https://azure.microsoft.com/pricing/details/active-directory). License requirements for more than two application segments per complex application will be announced soon.
If successful, this method returns a `204 No Content` response code and does not return anything in the response body. ## Example
active-directory Application Proxy Integrate With Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-logic-apps.md
+
+ Title: Securely integrate Azure Logic Apps with on-premises APIs using Azure Active Directory Application Proxy
+description: Azure Active Directory's Application Proxy lets cloud-native logic apps securely access on-premises APIs to bridge your workload.
+++++++ Last updated : 01/19/2023++++
+# Securely integrate Azure Logic Apps with on-premises APIs using Azure Active Directory Application Proxy
+
+Azure Logic Apps is a service for creating managed workflows in a no-code environment that can integrate with various external services and systems. It can help automate a wide range of business processes, such as data integration, data processing, and event-driven scenarios.
+While Logic Apps integrate easily with other public and cloud-based services, you may need to use them with protected, on-premises applications and services without exposing those services to the public via port forwarding or a traditional reverse proxy.
+
+This article describes the steps necessary to utilize the Azure AD Application Proxy solution to provide secure access to a Logic App, while protecting the internal application from unwanted actors. The process and end result is similar to [Access on-premises APIs with Azure Active Directory Application Proxy](./application-proxy-secure-api-access.md) with special attention paid to utilizing the API from within a Logic App.
+
+## Overview
+
+The following diagram shows a traditional way to publish on-premises APIs for access from Azure Logic Apps. This approach requires opening incoming TCP ports 80 and/or 443 to the API service.
+
+![Diagram that shows Logic App to API direct connection.](./media/application-proxy-integrate-with-logic-apps/azure-logic-app-to-api-connection-direct.png)
+
+The following diagram shows how you can use Azure AD Application Proxy to securely publish APIs for use with Logic Apps (or other Azure Cloud services) without opening any incoming ports:
+
+![Diagram that shows Logic App to API connection via Azure Application Proxy.](./media/application-proxy-integrate-with-logic-apps/azure-logic-app-to-api-connection-app-proxy.png)
+
+The Azure AD App Proxy and associated connector facilitate secure authorization and integration to your on-premises services without additional configuration to your network security infrastructure.
+
+## Prerequisites
+
+To follow this tutorial, you will need:
+
+- Admin access to an Azure directory, with an account that can create and register apps
+- The *Logic App Contributor* role (or higher) in an active tenant
+- Azure Application Proxy connector deployed and an application configured as detailed in [Add an on-premises app - Application Proxy in Azure Active Directory](./application-proxy-add-on-premises-application.md)
+
+> [!NOTE]
+> While granting a user entitlement and testing the sign on is recommended, it is not required for this guide.
+
+## Configure the Application Access
+
+When a new Enterprise Application is created, a matching App Registration is also created. The App Registration allows configuration of secure programmatic access using certificates, secrets, or federated credentials. For integration with a Logic App, we'll configure a client secret and the required API permissions.
+
+1. From the Azure portal, open **Azure Active Directory**
+
+2. Select the **App Registrations** menu item from the navigation pane
+
+ ![Screenshot of the Azure Active Directory App Registration Menu Item.](./media/application-proxy-integrate-with-logic-apps/app-registration-menu.png)
+
+3. From the *App Registrations* window, select the **All applications** tab option
+
+4. Navigate to the application with a matching name to your deployed App Proxy application. For example, if you deployed *Sample App 1* as an Enterprise Application, click the **Sample App 1** registration item
+
+ > [!NOTE]
+ > If an associated application cannot be found, it may have not been automatically created or may have been deleted. A registration can be created using the **New Registration** button.
+
+5. From the *Sample App 1* detail page, take note of the *Application (client) ID* and *Directory (tenant) ID* fields. These will be used later.
+
+ ![Screenshot of the Azure Active Directory App Registration Detail.](./media/application-proxy-integrate-with-logic-apps/app-registration-detail.png)
+
+6. Select the **API permissions** menu item from the navigation pane
+
+ ![Screenshot of the Azure Active Directory App Registration API Permissions Menu Item.](./media/application-proxy-integrate-with-logic-apps/api-permissions-menu.png)
+
+7. From the *API permissions* page:
+
+ 1. Click the **Add a permission** button
+
+ 2. In the *Request API permissions* pop-up:
+
+ 1. Select the **APIs my organization uses** tab
+
+ 2. Search for your app by name (e.g. *Sample App 1*) and select the item
+
+ 3. Ensure *Delegated Permissions* is **selected**, then **check** the box for *user_impersonation*
+
+ 4. Click **Add permissions**
+
+ 3. Verify the configured permission appears
+
+ ![Screenshot of the Azure Active Directory App Registration API Permissions Detail.](./media/application-proxy-integrate-with-logic-apps/api-permissions-detail.png)
+
+8. Select the **Certificates & secrets** menu item from the navigation pane
+
+ ![Screenshot of the Azure Active Directory App Registration Certificates and Secrets Menu Item.](./media/application-proxy-integrate-with-logic-apps/certificates-and-secrets-menu.png)
+
+9. From the *Certificates & secrets* page:
+
+ 1. Select the **Client secrets** tab item
+
+ 2. Click the **New client secret** button
+
+ 3. From the *Add a client secret* pop-up:
+
+ 1. Enter a **Description** and desired expiration
+
+ 2. Click **Add**
+
+ 4. Verify the new client secret appears
+
+ 5. Click the **Copy** button for the *Value* of the newly created secret. Save this value securely for later use; it is only shown one time.
+
+ ![Screenshot of the Azure Active Directory App Registration Client Secret Detail.](./media/application-proxy-integrate-with-logic-apps/client-secret-detail.png)
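With the tenant ID, client ID, and secret collected above, a client can obtain an access token through the OAuth2 client credentials grant (this is the exchange the Logic App's *Active Directory OAuth* authentication performs for you). The sketch below only builds the form that would be POSTed to the v1.0 token endpoint; the field names follow standard Azure AD v1 conventions, and all values are hypothetical placeholders.

```python
# Hypothetical sketch of the client credentials form a client would POST to
# https://login.windows.net/{tenant_id}/oauth2/token. The Logic App performs
# this exchange itself; values below are placeholders only.
def build_token_request(tenant_id: str, client_id: str, secret: str, audience: str):
    url = f"https://login.windows.net/{tenant_id}/oauth2/token"
    form = {
        "grant_type": "client_credentials",
        "client_id": client_id,    # Application (client) ID noted earlier
        "client_secret": secret,   # secret Value copied above
        "resource": audience,      # public FQDN of the proxied application
    }
    return url, form

url, form = build_token_request(
    "<Directory (tenant) ID>", "<Application (client) ID>",
    "<client secret value>", "sampleapp1.msappproxy.net")
print(form["grant_type"])  # client_credentials
```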
+
+## Configure the Logic App
+
+1. From the Logic App, open the **Designer** view
+
+2. Select a desired trigger (if prompted)
+
+3. Add a new step and select the **HTTP** operation
+
+ ![Screenshot of the Azure Logic App Trigger Options Pane.](./media/application-proxy-integrate-with-logic-apps/logic-app-trigger-menu.png)
+
+4. In the operation details:
+
+ 1. *Method*: Select the desired HTTP method to be sent to the internal API
+
+ 2. *URI*: Fill in with the *public* FQDN of your application registered in Azure AD, along with the additional URI required for API access (e.g. *sampleapp1.msappproxy.net/api/1/status*)
+
+ > [!NOTE]
+ > Specific values for API will depend on your internal application. Refer to your application's documentation for more information.
+
+ 3. *Headers*: Enter any desired headers to be sent to the internal API
+
+ 4. *Queries*: Enter any desired queries to be sent to the internal API
+
+ 5. *Body*: Enter any desired body contents to be sent to the internal API
+
+ 6. *Cookie*: Enter any desired cookie(s) to be sent to the internal API
+
+ 7. Click *Add new parameter*, then check *Authentication*
+
+ 8. From the *Authentication type*, select *Active Directory OAuth*
+
+ 9. For the authentication, fill the following details:
+
+ 1. *Authority*: Enter *https://login.windows.net*
+
+ 2. *Tenant*: Enter the **Directory (tenant) ID** noted in *Configure the Application Access*
+
+ 3. *Audience*: Enter the *public* FQDN of your application registered in Azure AD (e.g. *sampleapp1.msappproxy.net*)
+
+ 4. *Client ID*: Enter the **Application (client) ID** noted in *Configure the Application Access*
+
+ 5. *Credential Type*: **Secret**
+
+ 6. *Secret*: Enter the **secret value** noted in *Configure the Application Access*
+
+ ![Screenshot of Azure Logic App HTTP ActionConfiguration.](./media/application-proxy-integrate-with-logic-apps/logic-app-http-configuration.png)
+
+5. Save the logic app and test with your trigger
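In the workflow definition (code view), the authentication settings above map to an *ActiveDirectoryOAuth* object on the HTTP action. The sketch below models that object as a dictionary; the property names follow the Logic Apps workflow definition language, and all values are hypothetical placeholders.

```python
# Hypothetical sketch of the HTTP action's authentication block as it
# appears in the Logic App workflow definition. All values are placeholders.
authentication = {
    "type": "ActiveDirectoryOAuth",
    "authority": "https://login.windows.net",
    "tenant": "<Directory (tenant) ID>",
    "audience": "sampleapp1.msappproxy.net",   # public FQDN of the proxied app
    "clientId": "<Application (client) ID>",
    "secret": "<client secret value>",
}
print(authentication["type"])  # ActiveDirectoryOAuth
```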
+
+## Caveats
+
+- APIs that require authentication/authorization require special handling when using this method. Because Azure Active Directory OAuth is used for access, the requests sent already contain an *Authorization* header that cannot also be utilized by the internal API (unless SSO is configured). As a workaround, some applications offer authentication or authorization via methods other than an *Authorization* header. For example, GitLab allows a header titled *PRIVATE-TOKEN*, and Atlassian JIRA allows requesting a Cookie that can be used in later requests.
+
+- While the Logic App HTTP action shows cleartext values, it is highly recommended to store the App Registration Secret Key in Azure Key Vault for secure retrieval and use.
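As a concrete illustration of the first caveat: the OAuth bearer token occupies the *Authorization* header, so the internal API's own credential must travel in a separate, application-specific header. A minimal sketch, using GitLab's *PRIVATE-TOKEN* header from the example above (your API's header name and token formats will differ):

```python
# Hypothetical sketch: the OAuth token fills the Authorization header
# (consumed by App Proxy), so the internal API's credential goes in a
# separate application-specific header (here GitLab's PRIVATE-TOKEN).
def build_headers(oauth_token: str, api_token: str) -> dict:
    return {
        "Authorization": f"Bearer {oauth_token}",  # consumed by App Proxy
        "PRIVATE-TOKEN": api_token,                # consumed by the internal API
    }

headers = build_headers("<oauth access token>", "<internal API token>")
print(sorted(headers))  # ['Authorization', 'PRIVATE-TOKEN']
```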
+
+## See Also
+
+- [How to configure an Application Proxy application](./application-proxy-config-how-to.md)
+- [Access on-premises APIs with Azure Active Directory Application Proxy](./application-proxy-secure-api-access.md)
+- [Common scenarios, examples, tutorials, and walkthroughs for Azure Logic Apps](../../logic-apps/logic-apps-examples-and-scenarios.md)
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 09/12/2022 Last updated : 01/18/2023
Users may have a combination of up to five OATH hardware tokens or authenticator
>[!IMPORTANT] >The preview is only supported in Azure Global and Azure Government clouds. +
+## Determine OATH token registration type in mysecurityinfo
+Users can manage and add OATH token registrations by accessing https://aka.ms/mysecurityinfo or by selecting Security info from My Account. Specific icons are used to differentiate whether the OATH token registration is hardware or software based.
+
+OATH token registration type | Icon
+ |
+OATH software token | <img width="63" alt="Software OATH token" src="media/concept-authentication-methods/software-oath-token-icon.png">
+OATH hardware token | <img width="63" alt="Hardware OATH token" src="media/concept-authentication-methods/hardware-oath-token-icon.png">
++ ## Next steps Learn more about configuring authentication methods using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
Sometimes, your users may get messages from Multi-Factor Authentication because
| **OathCodeIncorrect** | Wrong code entered\OATH Code Incorrect | The user entered the wrong code. Have them try again by requesting a new code or signing in again. | | **SMSAuthFailedMaxAllowedCodeRetryReached** | Maximum allowed code retry reached | The user failed the verification challenge too many times. Depending on your settings, they may need to be unblocked by an admin now. | | **SMSAuthFailedWrongCodeEntered** | Wrong code entered/Text Message OTP Incorrect | The user entered the wrong code. Have them try again by requesting a new code or signing in again. |
+| **AuthenticationThrottled** | Too many attempts by user in a short period of time. Throttling. | Microsoft may limit repeated authentication attempts that are performed by the same user in a short period of time. This limitation does not apply to the Microsoft Authenticator or verification code. If you have hit these limits, you can use the Authenticator App, use a verification code, or try to sign in again in a few minutes. |
+| **AuthenticationMethodLimitReached** | Authentication Method Limit Reached. Throttling. | Microsoft may limit repeated authentication attempts that are performed by the same user using the same authentication method type in a short period of time, specifically Voice call or SMS. This limitation does not apply to the Microsoft Authenticator or verification code. If you have hit these limits, you can use the Authenticator App, use a verification code, or try to sign in again in a few minutes.|
## Errors that require support
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
This article answers frequently asked questions (FAQs) about Permissions Managem
## What's Permissions Management?
-Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
## What are the prerequisites to use Permissions Management?
No, Permissions Management is a hosted cloud offering.
## Can non-Azure customers use Permissions Management?
-Yes, non-Azure customers can use our solution. Permissions Management is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
+Yes, non-Azure customers can use our solution. Permissions Management is a multicloud solution so even customers who have no subscription to Azure can benefit from it.
## Is Permissions Management available for tenants hosted in the European Union (EU)?
Yes, Permissions Management is currently for tenants hosted in the European Unio
## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does Permissions Management provide?
-Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while Permissions Management allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
+Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while Permissions Management allows multicloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
## What public cloud infrastructures are supported by Permissions Management?
You can read our blog and visit our web page. You can also get in touch with you
## What is the data destruction/decommission process?
-If a customer initiates a free Permissions Management 90-day trial, but does not follow up and convert to a paid license within 90 days of the free trial expiration, we will delete all collected data on or just before 90 days.
+If a customer initiates a free Permissions Management 45-day trial, but does not follow up and convert to a paid license within 45 days of the free trial expiration, we will delete all collected data on or just before 45 days.
-If a customer decides to discontinue licensing the service, we will also delete all previously collected data within 90 days of license termination.
+If a customer decides to discontinue licensing the service, we will also delete all previously collected data within 45 days of license termination.
We also have the ability to remove, export or modify specific data should the Global Admin using the Entra Permissions Management service file an official Data Subject Request. This can be initiated by opening a ticket in the Azure portal [New support request - Microsoft Entra admin center](https://entra.microsoft.com/#blade/Microsoft_Azure_Support/NewSupportRequestV3Blade/callerName/ActiveDirectory/issueType/technical), or alternately contacting your local Microsoft representative. ## Do I require a license to use Entra Permissions Management?
-Yes, as of July 1st, 2022, new customers must acquire a free 90-trial license or a paid license to use the service. You can enable a trial here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) or you can directly purchase resource-based licenses here: [https://aka.ms/BuyPermissionsManagement](https://aka.ms/BuyPermissionsManagement)
+Yes, as of July 1st, 2022, new customers must acquire a free 45-day trial license or a paid license to use the service. You can enable a trial here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) or you can directly purchase resource-based licenses here: [https://aka.ms/BuyPermissionsManagement](https://aka.ms/BuyPermissionsManagement)
## What do I do if IΓÇÖm using Public Preview version of Entra Permissions Management? If you are using the Public Preview version of Entra Permissions Management, your current deployment(s) will continue to work through October 1st.
-After October 1st you will need to move over to use the newly released version of the service and enable a 90-day trial or purchase licenses to continue using the service.
+After October 1st you will need to move over to use the newly released version of the service and enable a 45-day trial or purchase licenses to continue using the service.
## What do I do if IΓÇÖm using the legacy version of the CloudKnox service?
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
Previously updated : 02/23/2022 Last updated : 01/20/2023 # Generate and download the Permissions analytics report
-This article describes how to generate and download the **Permissions analytics report** in Permissions Management.
+This article describes how to generate and download the **Permissions analytics report** in Permissions Management for AWS, Azure, and GCP. You can generate the report in Excel or PDF format.
-> [!NOTE]
-> This topic applies only to Amazon Web Services (AWS) users.
## Generate the Permissions analytics report 1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab. The **Systems Reports** subtab displays a list of reports in the **Reports** table.
-1. Find **Permissions Analytics Report** in the list, and to download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. Select **Permissions Analytics Report** from the list. To download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
The following message displays: **Successfully Started To Generate On Demand Report.**
active-directory How To Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
Previously updated : 01/11/2023 Last updated : 01/20/2023
# Attribute mapping in Azure AD Connect cloud sync
-You can use the cloud sync feature of Azure Active Directory (Azure AD) Connect to map attributes between your on-premises user or group objects and the objects in Azure AD. This capability has been added to the cloud sync configuration.
+You can use the cloud sync attribute mapping feature to map attributes between your on-premises user or group objects and the objects in Azure AD.
+
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-1.png" alt-text="Screenshot of new UX screen attribute mapping." lightbox="media/how-to-attribute-mapping/new-ux-mapping-1.png":::
You can customize (change, delete, or create) the default attribute mappings according to your business needs. For a list of attributes that are synchronized, see [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md).
For more information on how to map UserType, see [Map UserType with cloud sync](
## Understand properties of attribute mappings
-Along with the type property, attribute mappings support certain attributes. These attributes will depend on the type of mapping you have selected. The following sections describe the supported attribute mappings for each of the individual types
+Along with the type property, attribute mappings support certain attributes. These attributes depend on the type of mapping you have selected. The following sections describe the supported attributes for each of the individual types. The following types of attribute mapping are available:
+- Direct
+- Constant
+- Expression
### Direct mapping attributes The following are the attributes supported by a direct mapping:
The following are the attributes supported by a direct mapping:
- **Always**: Apply this mapping on both user-creation and update actions. - **Only during creation**: Apply this mapping only on user-creation actions.
- ![Screenshot for direct](media/how-to-attribute-mapping/mapping-7.png)
### Constant mapping attributes The following are the attributes supported by a constant mapping:
The following are the attributes supported by a constant mapping:
- **Always**: Apply this mapping on both user-creation and update actions. - **Only during creation**: Apply this mapping only on user-creation actions.
- ![Screenshot for constant](media/how-to-attribute-mapping/mapping-9.png)
- ### Expression mapping attributes The following are the attributes supported by an expression mapping:
The following are the attributes supported by an expression mapping:
- **Always**: Apply this mapping on both user-creation and update actions. - **Only during creation**: Apply this mapping only on user-creation actions.
- ![Screenshot for expression](media/how-to-attribute-mapping/mapping-10.png)
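For instance, an expression mapping can combine source attributes before writing the target. The expression below is a hypothetical example (source attributes are referenced in square brackets; `Join` is one of the functions documented for cloud sync expressions); adapt it to your own schema:

```
Join(" ", [givenName], [surname])
```

This would populate the target attribute (for example, *displayName*) with the given name and surname separated by a space.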
- ## Add an attribute mapping
-To use the new capability, follow these steps:
-
-1. In the Azure portal, select **Azure Active Directory**.
-2. Select **Azure AD Connect**.
-3. Select **Manage cloud sync**.
-
- ![Screenshot that shows the link for managing cloud sync.](media/how-to-install/install-6.png)
-
-4. Under **Configuration**, select your configuration.
-5. Select **Click to edit mappings**. This link opens the **Attribute mappings** screen.
+To use attribute mapping, follow these steps:
- ![Screenshot that shows the link for adding attributes.](media/how-to-attribute-mapping/mapping-6.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="media/how-to-on-demand-provision/new-ux-1.png":::
-6. Select **Add attribute**.
+ 4. Under **Configuration**, select your configuration.
+ 5. On the left, select **Attribute mapping**.
+ 6. At the top, ensure that you have the correct object type selected. That is, user, group, or contact.
+ 7. Click **Add attribute mapping**.
- ![Screenshot that shows the button for adding an attribute, along with lists of attributes and mapping types.](media/how-to-attribute-mapping/mapping-1.png)
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-3.png" alt-text="Screenshot of adding an attribute mapping." lightbox="media/how-to-attribute-mapping/new-ux-mapping-3.png":::
-7. Select the mapping type. This can be one of the following:
+ 8. Select the mapping type. This can be one of the following:
- **Direct**: The target attribute is populated with the value of an attribute of the linked object in Active Directory. - **Constant**: The target attribute is populated with a specific string that you specify. - **Expression**: The target attribute is populated based on the result of a script-like expression. - **None**: The target attribute is left unmodified.
-
- For more information see See [Understanding attribute types](#understand-types-of-attribute-mapping) above.
-8. Depending on what you have selected in the previous step, different options will be available for filling in. See the [Understand properties of attribute mappings](#understand-properties-of-attribute-mappings)sections above for information on these attributes.
-9. Select when to apply this mapping, and then select **Apply**.
-11. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
-12. Select **Save schema**.
+
+ 9. Depending on what you have selected in the previous step, different options will be available for filling in.
+ 10. Select when to apply this mapping, and then select **Apply**.
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-4.png" alt-text="Screenshot of saving an attribute mapping." lightbox="media/how-to-attribute-mapping/new-ux-mapping-4.png":::
+
+ 11. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
+ 12. Select **Save schema**. You will be notified that once you save the schema, a synchronization will occur. Click **OK**.
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-5.png" alt-text="Screenshot of saving schema." lightbox="media/how-to-attribute-mapping/new-ux-mapping-5.png":::
- ![Screenshot that shows the Save schema button.](media/how-to-attribute-mapping/mapping-3.png)
+ 13. Once the save is successful, you'll see a notification on the right.
+
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-6.png" alt-text="Screenshot of successful schema save." lightbox="media/how-to-attribute-mapping/new-ux-mapping-6.png":::
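As a rough analogy, the four mapping types behave like this (illustrative Python only — cloud sync evaluates its own expression language, and the attribute names below are made up):

```python
# Illustrative analogy only: how each mapping type derives a target value.
# Cloud sync uses its own expression language; these names are hypothetical.

def apply_mapping(mapping_type, source, constant=None, expression=None):
    if mapping_type == "Direct":
        return source.get("mail")      # copy a source attribute as-is
    if mapping_type == "Constant":
        return constant                # the same fixed string for every object
    if mapping_type == "Expression":
        return expression(source)      # computed from source attributes
    return None                        # "None": leave the target unmodified

user = {"givenName": "Britta", "surname": "Simon", "mail": "bsimon@contoso.com"}
print(apply_mapping("Direct", user))                         # bsimon@contoso.com
print(apply_mapping("Constant", user, constant="Employee"))  # Employee
print(apply_mapping("Expression", user,
                    expression=lambda u: u["givenName"] + " " + u["surname"]))
```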
## Test your attribute mapping To test your attribute mapping, you can use [on-demand provisioning](how-to-on-demand-provision.md):
-1. In the Azure portal, select **Azure Active Directory**.
-2. Select **Azure AD Connect**.
-3. Select **Manage provisioning**.
-4. Under **Configuration**, select your configuration.
-5. Under **Validate**, select the **Provision a user** button.
-6. On the **Provision on demand** screen, enter the distinguished name of a user or group and select the **Provision** button.
-
- The screen shows that the provisioning is in progress.
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+ 4. Under **Configuration**, select your configuration.
+ 5. On the left, select **Provision on demand**.
+ 6. Enter the distinguished name of a user and select the **Provision** button.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-2.png" alt-text="Screenshot of user distinguished name." lightbox="media/how-to-on-demand-provision/new-ux-2.png":::
- ![Screenshot that shows provisioning in progress.](media/how-to-attribute-mapping/mapping-4.png)
+ 7. After provisioning finishes, a success screen appears with four green check marks. Any errors appear to the left.
-8. After provisioning finishes, a success screen appears with four green check marks.
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-3.png" alt-text="Screenshot of on-demand success." lightbox="media/how-to-on-demand-provision/new-ux-3.png":::
- Under **Perform action**, select **View details**. On the right, you should see the new attribute synchronized and the expression applied.
- ![Screenshot that shows success and export details.](media/how-to-attribute-mapping/mapping-5.png)
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
Previously updated : 01/11/2023 Last updated : 01/20/2023
# Create a new configuration for Azure AD Connect cloud sync
-The following document will guide you through configuring Azure AD Connect cloud sync. For additional information and an example of how to configure cloud sync, see the video below.
+The following document will guide you through configuring Azure AD Connect cloud sync.
+
+The following documentation demonstrates the new guided user experience for Azure AD Connect cloud sync. If you don't see the images below, select **Preview features** at the top. Select it again to revert to the old experience.
+
+ :::image type="content" source="media/how-to-configure/new-ux-configure-19.png" alt-text="Screenshot of enable preview features." lightbox="media/how-to-configure/new-ux-configure-19.png":::
+
+For additional information and an example of how to configure cloud sync, see the video below.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWKact]
The following document will guide you through configuring Azure AD Connect cloud
## Configure provisioning To configure provisioning, follow these steps.
- 1. In the Azure portal, select **Azure Active Directory**
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
-
- ![Manage provisioning](media/how-to-install/install-6.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="media/how-to-on-demand-provision/new-ux-1.png":::
4. Select **New configuration**.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-1.png" alt-text="Screenshot of adding a configuration." lightbox="media/how-to-configure/new-ux-configure-1.png":::
5. On the configuration screen, select your domain and whether to enable password hash sync. Click **Create**.
- ![Create new configuration](media/how-to-configure/configure-1.png)
+ :::image type="content" source="media/how-to-configure/new-ux-configure-2.png" alt-text="Screenshot of a new configuration." lightbox="media/how-to-configure/new-ux-configure-2.png":::
+ 6. The **Get started** screen will open. From here, you can continue configuring cloud sync.
- 6. The Edit provisioning configuration screen will open.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-3.png" alt-text="Screenshot of the getting started screen." lightbox="media/how-to-configure/new-ux-configure-3.png":::
- ![Edit configuration](media/how-to-configure/con-1.png)
 + 7. The configuration is split into the following five sections.
- 7. Enter a **Notification email**. This email will be notified when provisioning isn't healthy. It is recommended that you keep **Prevent accidental deletion** enabled and set the **Accidental deletion threshold** to a number that you wish to be notified about. For more information, see [accidental deletes](#accidental-deletions) below.
- 8. Move the selector to Enable, and select Save.
+|Section|Description|
+|--|--|
+|1. Add [scoping filters](#scope-provisioning-to-specific-users-and-groups)|Use this section to define what objects appear in Azure AD|
+|2. Map [attributes](#attribute-mapping)|Use this section to map attributes between your on-premises users/groups and Azure AD objects|
+|3. [Test](#on-demand-provisioning)|Test your configuration before deploying it|
+|4. View [default properties](#accidental-deletions-and-email-notifications)|View the default settings prior to enabling them and make changes where appropriate|
+|5. Enable [your configuration](#enable-your-configuration)|Once ready, enable the configuration and users/groups will begin synchronizing|
>[!NOTE] > During the configuration process the synchronization service account will be created with the format **ADToAADSyncServiceAccount@[TenantID].onmicrosoft.com** and you may get an error if multi-factor authentication is enabled for the synchronization service account, or other interactive authentication policies are accidentally enabled for the synchronization account. Removing multi-factor authentication or any interactive authentication policies for the synchronization service account should resolve the error and you can complete the configuration smoothly. ## Scope provisioning to specific users and groups
-You can scope the agent to synchronize specific users and groups by using on-premises Active Directory groups or organizational units. You can't configure groups and organizational units within a configuration.
+You can scope the agent to synchronize specific users and groups by using on-premises Active Directory groups or organizational units.
+
+ :::image type="content" source="media/how-to-configure/new-ux-configure-4.png" alt-text="Screenshot of scoping filters icon." lightbox="media/how-to-configure/new-ux-configure-4.png":::
++
+You can't configure groups and organizational units within a configuration.
>[!NOTE] > You cannot use nested groups with group scoping. Nested objects beyond the first level will not be included when scoping using security groups. Only use group scope filtering for pilot scenarios as there are limitations to syncing large groups.
 + 1. On the **Getting started** configuration screen, click **Add scoping filters** next to the **Add scoping filters** icon, or click **Scoping filters** on the left under **Manage**.
- 1. In the Azure portal, select **Azure Active Directory**.
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
- 4. Under **Configuration**, select your configuration.
-
- ![Configuration section](media/how-to-configure/scope-1.png)
+ :::image type="content" source="media/how-to-configure/new-ux-configure-5.png" alt-text="Screenshot of scoping filters." lightbox="media/how-to-configure/new-ux-configure-5.png":::
- 5. Under **Configure**, select **Edit scoping filters** to change the scope of the configuration rule.
- 6. On the right, you can change the scope. Click **Done** and **Save** when you have finished.
- 7. Once you have changed the scope, you should [restart provisioning](#restart-provisioning) to initiate an immediate synchronization of the changes.
+ 2. Select the scoping filter. The filter can be one of the following:
+ - **All users**: Scopes the configuration to apply to all users that are being synchronized.
+ - **Selected security groups**: Scopes the configuration to apply to specific security groups.
+ - **Selected organizational units**: Scopes the configuration to apply to specific OUs.
+ 3. For security groups and organizational units, supply the appropriate distinguished name and click **Add**.
+ 4. Once your scoping filters are configured, click **Save**.
+ 5. After saving, you should see a message telling you what you still need to do to configure cloud sync. You can click the link to continue.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-16.png" alt-text="Screenshot of the nudge for scoping filters." lightbox="media/how-to-configure/new-ux-configure-16.png":::
 + 6. Once you've changed the scope, you should [restart provisioning](#restart-provisioning) to initiate an immediate synchronization of the changes.
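For reference, the distinguished names supplied for security groups and organizational units take the standard LDAP form; for example (hypothetical names):

```
CN=CloudSyncPilot,CN=Groups,DC=contoso,DC=com    (a security group)
OU=Sales,OU=CorpUsers,DC=contoso,DC=com          (an organizational unit)
```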
## Attribute mapping
-Azure AD Connect cloud sync allows you to easily map attributes between your on-premises user/group objects and the objects in Azure AD. You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings. For more information, see [attribute mapping](how-to-attribute-mapping.md).
+Azure AD Connect cloud sync allows you to easily map attributes between your on-premises user/group objects and the objects in Azure AD.
+++
+You can customize the default attribute mappings according to your business needs: change or delete existing mappings, or create new ones.
++
+After saving, you should see a message telling you what you still need to do to configure cloud sync. You can click the link to continue.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-17.png" alt-text="Screenshot of the nudge for attribute filters." lightbox="media/how-to-configure/new-ux-configure-17.png":::
++
+For more information, see [attribute mapping](how-to-attribute-mapping.md).
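An **Expression** mapping is written in the cloud sync expression language. For example, a mapping along the following lines (hypothetical attribute and domain — see the expressions reference for the supported functions) builds a value from a source attribute and a constant:

```
Join("@", [mailNickname], "contoso.com")
```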
## On-demand provisioning
-Azure AD Connect cloud sync allows you to test configuration changes, by applying these changes to a single user or group. You can use this to validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD. For more information, see [on-demand provisioning](how-to-on-demand-provision.md).
+Azure AD Connect cloud sync allows you to test configuration changes by applying them to a single user or group.
++
+You can use this to validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD.
++
+After testing, you should see a message telling you what you still need to do to configure cloud sync. You can click the link to continue.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-18.png" alt-text="Screenshot of the nudge for testing." lightbox="media/how-to-configure/new-ux-configure-18.png":::
++
+For more information, see [on-demand provisioning](how-to-on-demand-provision.md).
+
+## Accidental deletions and email notifications
+The default properties section provides information on accidental deletions and email notifications.
+
-## Accidental deletions
-The accidental delete feature is designed to protect you from accidental configuration changes and changes to your on-premises directory that would affect many users and groups. This feature allows you to:
+The accidental delete feature is designed to protect you from accidental configuration changes and changes to your on-premises directory that would affect many users and groups.
+
+This feature allows you to:
 - Configure the ability to prevent accidental deletes automatically.
 - Set the number of objects (threshold) beyond which the configuration will take effect.
The accidental delete feature is designed to protect you from accidental configu
For more information, see [Accidental deletes](how-to-accidental-deletes.md)
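The threshold behavior can be sketched as follows (an illustrative analogy, not the service's actual implementation):

```python
def should_block_deletions(staged_deletions, threshold, prevention_enabled=True):
    """Sketch of accidental-delete protection: when prevention is enabled and
    the number of staged deletions meets or exceeds the threshold, the export
    is held and the notification email address is alerted instead of the
    deletions being applied."""
    return prevention_enabled and staged_deletions >= threshold

print(should_block_deletions(600, 500))  # True  -> deletions held, admin notified
print(should_block_deletions(20, 500))   # False -> deletions proceed normally
```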
+Click the **pencil** next to **Basics** to change the defaults in a configuration.
++
+## Enable your configuration
+Once you've finalized and tested your configuration, you can enable it.
++
+Click **Enable configuration** to enable it.
++ ## Quarantines Cloud sync monitors the health of your configuration and places unhealthy objects in a quarantine state. If most or all of the calls made against the target system consistently fail because of an error, for example, invalid admin credentials, the sync job is marked as in quarantine. For more information, see the troubleshooting section on [quarantines](how-to-troubleshoot.md#provisioning-quarantined-problems). ## Restart provisioning
-If you don't want to wait for the next scheduled run, trigger the provisioning run by using the **Restart provisioning** button.
+If you don't want to wait for the next scheduled run, trigger the provisioning run by using the **Restart sync** button.
1. In the Azure portal, select **Azure Active Directory**.
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
4. Under **Configuration**, select your configuration.
- ![Configuration selection to restart provisioning](media/how-to-configure/scope-1.png)
+ :::image type="content" source="media/how-to-configure/new-ux-configure-14.png" alt-text="Screenshot of restarting sync." lightbox="media/how-to-configure/new-ux-configure-14.png":::
- 5. At the top, select **Restart provisioning**.
+ 5. At the top, select **Restart sync**.
## Remove a configuration To delete a configuration, follow these steps. 1. In the Azure portal, select **Azure Active Directory**.
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
4. Under **Configuration**, select your configuration.
-
- ![Configuration selection to remove configuration](media/how-to-configure/scope-1.png)
- 5. At the top of the configuration screen, select **Delete**.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-15.png" alt-text="Screenshot of deletion." lightbox="media/how-to-configure/new-ux-configure-15.png":::
+
+ 5. At the top of the configuration screen, select **Delete configuration**.
 >[!IMPORTANT]
 >There's no confirmation prior to deleting a configuration. Make sure this is the action you want to take before you select **Delete configuration**.
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md
Previously updated : 01/11/2023 Last updated : 01/20/2023
active-directory How To On Demand Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-on-demand-provision.md
For additional information and an example see the following video.
## Validate a user To use on-demand provisioning, follow these steps:
-1. In the Azure portal, select **Azure Active Directory**.
-2. Select **Azure AD Connect**.
-3. Select **Manage cloud sync**.
-
- ![Screenshot that shows the link for managing cloud sync.](media/how-to-install/install-6.png)
-4. Under **Configuration**, select your configuration.
-5. Under **Validate**, select the **Provision a user** button.
-
- ![Screenshot that shows the button for provisioning a user.](media/how-to-on-demand-provision/on-demand-2.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="media/how-to-on-demand-provision/new-ux-1.png":::
-6. On the **Provision on demand** screen, enter the distinguished name of a user and select the **Provision** button.
+ 4. Under **Configuration**, select your configuration.
+ 5. On the left, select **Provision on demand**.
+ 6. Enter the distinguished name of a user and select the **Provision** button.
- ![Screenshot that shows a username and a Provision button.](media/how-to-on-demand-provision/on-demand-3.png)
-7. After provisioning finishes, a success screen appears with four green check marks. Any errors appear to the left.
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-2.png" alt-text="Screenshot of user distinguished name." lightbox="media/how-to-on-demand-provision/new-ux-2.png":::
+
+ 7. After provisioning finishes, a success screen appears with four green check marks. Any errors appear to the left.
- ![Screenshot that shows successful provisioning.](media/how-to-on-demand-provision/on-demand-4.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-3.png" alt-text="Screenshot of on-demand success." lightbox="media/how-to-on-demand-provision/new-ux-3.png":::
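On-demand provisioning can also be invoked programmatically through the Microsoft Graph `provisionOnDemand` action on the synchronization job. A rough sketch follows — the IDs are placeholders, and the exact subject identifier to supply depends on your configuration:

```http
POST https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipalId}/synchronization/jobs/{jobId}/provisionOnDemand
Content-Type: application/json

{
  "parameters": [
    {
      "ruleId": "{ruleId}",
      "subjects": [
        { "objectId": "{sourceObjectId}", "objectTypeName": "User" }
      ]
    }
  ]
}
```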
## Get details about provisioning Now you can look at the user information and determine if the changes that you made in the configuration have been applied. The rest of this article describes the individual sections that appear in the details of a successfully synchronized user.
Now you can look at the user information and determine if the changes that you m
### Import user The **Import user** section provides information on the user who was imported from Active Directory. This is what the user looks like before provisioning into Azure AD. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about an imported user.](media/how-to-on-demand-provision/on-demand-5.png)
- By using this information, you can see the various attributes (and their values) that were imported. If you created a custom attribute mapping, you can see the value here.
-![Screenshot that shows user details.](media/how-to-on-demand-provision/on-demand-6.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-4.png" alt-text="Screenshot of import user." lightbox="media/how-to-on-demand-provision/new-ux-4.png":::
### Determine if user is in scope The **Determine if user is in scope** section provides information on whether the user who was imported to Azure AD is in scope. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about user scope.](media/how-to-on-demand-provision/on-demand-7.png)
- By using this information, you can see if the user is in scope.
-![Screenshot that shows user scope details.](media/how-to-on-demand-provision/on-demand-10a.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-5.png" alt-text="Screenshot of scope determination." lightbox="media/how-to-on-demand-provision/new-ux-5.png":::
### Match user between source and target system The **Match user between source and target system** section provides information on whether the user already exists in Azure AD and whether a join should occur instead of provisioning a new user. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about a matched user.](media/how-to-on-demand-provision/on-demand-8.png)
- By using this information, you can see whether a match was found or if a new user is going to be created.
-![Screenshot that shows user information.](media/how-to-on-demand-provision/on-demand-11.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-6.png" alt-text="Screenshot of matching user." lightbox="media/how-to-on-demand-provision/new-ux-6.png":::
The matching details show a message with one of the three following operations:
 - **Create**: A user is created in Azure AD.
Depending on the type of operation that you've performed, the message will vary.
### Perform action The **Perform action** section provides information on the user who was provisioned or exported into Azure AD after the configuration was applied. This is what the user looks like after provisioning into Azure AD. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about a performed action.](media/how-to-on-demand-provision/on-demand-9.png)
- By using this information, you can see the values of the attributes after the configuration was applied. Do they look similar to what was imported, or are they different? Was the configuration applied successfully? This process enables you to trace the attribute transformation as it moves through the cloud and into your Azure AD tenant.
-![Screenshot that shows traced attribute details.](media/how-to-on-demand-provision/on-demand-12.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-7.png" alt-text="Screenshot of perform action." lightbox="media/how-to-on-demand-provision/new-ux-7.png":::
## Next steps
active-directory How To Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-sso.md
Title: 'How to use Single Sign-on with cloud sync'
-description: This article describes how to install and use sso with cloud sync.
+ Title: 'How to use single sign-on with cloud sync'
+description: This article describes how to install and use single sign-on with cloud sync.
Previously updated : 01/28/2020 Last updated : 01/18/2023
-# Using Single Sign-On with cloud sync
+# Using single sign-on with cloud sync
The following document describes how to use single sign-on with cloud sync. [!INCLUDE [active-directory-cloud-provisioning-sso.md](../../../includes/active-directory-cloud-provisioning-sso.md)]
active-directory How To Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-troubleshoot.md
description: This article describes how to troubleshoot problems that might aris
Previously updated : 10/13/2021 Last updated : 01/18/2023 ms.prod: windows-server-threshold ms.technology: identity-adfs
active-directory Plan Cloud Sync Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/plan-cloud-sync-topologies.md
Previously updated : 09/10/2021 Last updated : 01/17/2023
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-error-codes.md
Previously updated : 01/14/2021 Last updated : 01/18/2023
The following is a list of error codes and their description
|Error code|Details|Scenario|Resolution| |--|--|--|--| |TimeOut|Error Message: We've detected a request timeout error when contacting the on-premises agent and synchronizing your configuration. For additional issues related to your cloud sync agent, please see our troubleshooting guidance.|Request to HIS timed out. Current Timeout value is 10 minutes.|See our [troubleshooting guidance](how-to-troubleshoot.md)|
-|HybridSynchronizationActiveDirectoryInternalServerError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.30b500eaf9c643b2b78804e80c1421fe.5c291d3c-d29f-4570-9d6b-f0c2fa3d5926. Additional details: Processing of the HTTP request resulted in an exception. |Could not process the parameters received in SCIM request to a Search request.|Please see the HTTP response returned by the 'Response' property of this exception for details.|
-|HybridIdentityServiceNoAgentsAssigned|Error Message: We are unable to find an active agent for the domain you are trying to sync. Please check to see if the agents have been removed. If so, re-install the agent again.|There are no agents running. Probably agents have been removed. Register a new agent.|"In this case, you will not see any agent assigned to the domain in portal.|
-|HybridIdentityServiceNoActiveAgents|Error Message: We are unable to find an active agent for the domain you are trying to sync. Please check to see if the agent is running by going to the server, where the agent is installed, and check to see if "Microsoft Azure AD Cloud Sync Agent" under Services is running.|"Agents are not listening to the ServiceBus endpoint. [The agent is behind a firewall that does not allow connections to service bus](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md#use-the-outbound-proxy-server)|
-|HybridIdentityServiceInvalidResource|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.3a2a0d8418f34f54a03da5b70b1f7b0c.d583d090-9cd3-4d0a-aee6-8d666658c3e9. Additional details: There seems to be an issue with your cloud sync setup. Please re-register your cloud sync agent on your on-prem AD domain and restart configuration from Azure Portal.|The resource name must be set so HIS knows which agent to contact.|Please re-register your cloud sync agent on your on-prem AD domain and restart configuration from Azure Portal.|
-|HybridIdentityServiceAgentSignalingError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.92d2e8750f37407fa2301c9e52ad7e9b.efb835ef-62e8-42e3-b495-18d5272eb3f9. Additional details: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration).|Service Bus is not able to send a message to the agent. Could be an outage in service bus, or the agent is not responsive.|If this issue persists, please contact support with Job ID (from status pane of your configuration).|
+|HybridSynchronizationActiveDirectoryInternalServerError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.30b500eaf9c643b2b78804e80c1421fe.5c291d3c-d29f-4570-9d6b-f0c2fa3d5926. Additional details: Processing of the HTTP request resulted in an exception. |Couldn't process the parameters received in SCIM request to a Search request.|Please see the HTTP response returned by the 'Response' property of this exception for details.|
+|HybridIdentityServiceNoAgentsAssigned|Error Message: We're unable to find an active agent for the domain you're trying to sync. Please check to see if the agents have been removed. If so, re-install the agent again.|There are no agents running. Probably agents have been removed. Register a new agent.|In this case, you won't see any agent assigned to the domain in the portal.|
+|HybridIdentityServiceNoActiveAgents|Error Message: We're unable to find an active agent for the domain you're trying to sync. Please check to see if the agent is running by going to the server, where the agent is installed, and check to see if "Microsoft Azure AD Cloud Sync Agent" under Services is running.|Agents aren't listening to the ServiceBus endpoint. [The agent is behind a firewall that doesn't allow connections to service bus](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md#use-the-outbound-proxy-server)|
+|HybridIdentityServiceInvalidResource|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.3a2a0d8418f34f54a03da5b70b1f7b0c.d583d090-9cd3-4d0a-aee6-8d666658c3e9. Additional details: There seems to be an issue with your cloud sync setup. Please re-register your cloud sync agent on your on-premises AD domain and restart configuration from Azure portal.|The resource name must be set so HIS knows which agent to contact.|Please re-register your cloud sync agent on your on-premises AD domain and restart configuration from Azure portal.|
+|HybridIdentityServiceAgentSignalingError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.92d2e8750f37407fa2301c9e52ad7e9b.efb835ef-62e8-42e3-b495-18d5272eb3f9. Additional details: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration).|Service Bus isn't able to send a message to the agent. Could be an outage in service bus, or the agent isn't responsive.|If this issue persists, please contact support with Job ID (from status pane of your configuration).|
|AzureDirectoryServiceServerBusy|Error Message: An error occurred. Error Code: 81. Error Description: Azure Active Directory is currently busy. This operation will be retried automatically. If this issue persists for more than 24 hours, contact Technical Support. Tracking ID: 8a4ab3b5-3664-4278-ab64-9cff37fd3f4f Server Name:|Azure Active Directory is currently busy.|If this issue persists for more than 24 hours, contact Technical Support.|
-|AzureActiveDirectoryInvalidCredential|Error Message: We found an issue with the service account that is used to run Azure AD Connect Cloud Sync. You can repair the cloud service account by following the instructions at [here](./how-to-troubleshoot.md). If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsInvalid AADSTS50034: The user account {EmailHidden} does not exist in the skydrive365.onmicrosoft.com directory. To sign into this application, the account must be added to the directory. Trace ID: 14b63033-3bc9-4bd4-b871-5eb4b3500200 Correlation ID: 57d93ed1-be4d-483c-997c-a3b6f03deb00 Timestamp: 2021-01-12 21:08:29Z |This error is thrown when the sync service account ADToAADSyncServiceAccount doesn't exist in the tenant. It can be due to accidental deletion of the account.|Use [Repair-AADCloudSyncToolsAccount](reference-powershell.md#repair-aadcloudsynctoolsaccount) to fix the service account.|
-|AzureActiveDirectoryExpiredCredentials|Error Message: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsExpired AADSTS50055: The password is expired. Trace ID: 989b1841-dbe5-49c9-ab6c-9aa25f7b0e00 Correlation ID: 1c69b196-1c3a-4381-9187-c84747807155 Timestamp: 2021-01-12 20:59:31Z | Response status code does not indicate success: 401 (Unauthorized).<br> AAD Sync service account credentials are expired.|You can repair the cloud service account by following the instructions at https://go.microsoft.com/fwlink/?linkid=2150988. If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: Your administrative Azure Active Directory tenant credentials were exchanged for an OAuth token that has since expired."|
+|AzureActiveDirectoryInvalidCredential|Error Message: We found an issue with the service account that is used to run Azure AD Connect Cloud Sync. You can repair the cloud service account by following the instructions at [here](./how-to-troubleshoot.md). If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsInvalid AADSTS50034: The user account {EmailHidden} doesn't exist in the skydrive365.onmicrosoft.com directory. To sign into this application, the account must be added to the directory. Trace ID: 14b63033-3bc9-4bd4-b871-5eb4b3500200 Correlation ID: 57d93ed1-be4d-483c-997c-a3b6f03deb00 Timestamp: 2021-01-12 21:08:29Z |This error is thrown when the sync service account ADToAADSyncServiceAccount doesn't exist in the tenant. It can be due to accidental deletion of the account.|Use [Repair-AADCloudSyncToolsAccount](reference-powershell.md#repair-aadcloudsynctoolsaccount) to fix the service account.|
+|AzureActiveDirectoryExpiredCredentials|Error Message: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsExpired AADSTS50055: The password is expired. Trace ID: 989b1841-dbe5-49c9-ab6c-9aa25f7b0e00 Correlation ID: 1c69b196-1c3a-4381-9187-c84747807155 Timestamp: 2021-01-12 20:59:31Z | Response status code doesn't indicate success: 401 (Unauthorized).<br> Azure AD Sync service account credentials are expired.|You can repair the cloud service account by following the instructions at https://go.microsoft.com/fwlink/?linkid=2150988. If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: Your administrative Azure Active Directory tenant credentials were exchanged for an OAuth token that has since expired."|
|AzureActiveDirectoryAuthenticationFailed|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.60b943e88f234db2b887f8cb91dee87c.707be0d2-c6a9-405d-a3b9-de87761dc3ac. Additional details: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: UnexpectedError.|Unknown error.|If this issue persists, please contact support with Job ID (from status pane of your configuration).| ## Next steps
active-directory Reference Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-expressions.md
Previously updated : 12/02/2019 Last updated : 01/18/2023
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-powershell.md
Previously updated : 11/03/2021 Last updated : 01/17/2023
active-directory Reference Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-version-history.md
Previously updated : 11/19/2020 Last updated : 01/17/2023
active-directory Tutorial Basic Ad Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-basic-ad-azure.md
Previously updated : 12/02/2019 Last updated : 01/18/2023
You can use the environment you create in the tutorial to test various aspects o
This tutorial consists of ## Prerequisites The following are prerequisites required for completing this tutorial-- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It is suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
+- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
- An [external network adapter](/virtualization/hyper-v-on-windows/quick-start/connect-to-network) to allow the virtual machine to communicate with the internet. - An [Azure subscription](https://azure.microsoft.com/free) - A copy of Windows Server 2016
In order to finish building the virtual machine, you need to finish the operatin
1. Hyper-V Manager, double-click on the virtual machine 2. Click on the Start button.
-3. You will be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
+3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
4. On the Windows Server start up screen select your language and click **Next**. 5. Click **Install Now**. 6. Enter your license key and click **Next**.
Now you need to create an Azure AD tenant so that you can synchronize our users
6. Once this has completed, click the **here** link, to manage the directory. ## Create a global administrator in Azure AD
-Now that you have an Azure AD tenant, you will create a global administrator account. To create the global administrator account do the following.
+Now that you have an Azure AD tenant, you'll create a global administrator account. To create the global administrator account do the following.
1. Under **Manage**, select **Users**.</br> ![Screenshot that shows the "Overview" menu with "Users" selected.](media/tutorial-single-forest/administrator-1.png)</br> 2. Select **All users** and then select **+ New user**.
-3. Provide a name and username for this user. This will be your Global Admin for the tenant. You will also want to change the **Directory role** to **Global administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
+3. Provide a name and username for this user. This will be your Global Admin for the tenant. You'll also want to change the **Directory role** to **Global administrator.** You can also show the temporary password. When you're done, select **Create**.</br>
![Create](media/tutorial-single-forest/administrator-2.png)</br> 4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new global administrator account and the temporary password.
-5. Change the password for the global administrator to something that you will remember.
+5. Change the password for the global administrator to something that you'll remember.
## Optional: Additional server and forest The following is an optional section that provides steps for creating an additional server and/or forest. This can be used in some of the more advanced tutorials such as [Pilot for Azure AD Connect to cloud sync](tutorial-pilot-aadc-aadccp.md).
In order to finish building the virtual machine, you need to finish the operatin
1. Hyper-V Manager, double-click on the virtual machine 2. Click on the Start button.
-3. You will be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
+3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
4. On the Windows Server start up screen select your language and click **Next**. 5. Click **Install Now**. 6. Enter your license key and click **Next**.
active-directory Tutorial Existing Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-existing-forest.md
Title: Tutorial - Integrate an existing forest and a new forest with a single Azure AD tenant by using Azure AD Connect cloud sync
+ Title: Tutorial - Integrate an existing forest and a new forest with a single Azure AD tenant using Azure AD Connect cloud sync.
description: Learn how to add cloud sync to an existing hybrid identity environment.
Previously updated : 11/11/2022 Last updated : 01/17/2023
-# Tutorial: Integrate an existing forest and a new forest with a single Azure AD tenant
+# Integrate an existing forest and a new forest with a single Azure AD tenant
This tutorial walks you through adding cloud sync to an existing hybrid identity environment.
This tutorial walks you through adding cloud sync to an existing hybrid identity
You can use the environment you create in this tutorial for testing or for getting more familiar with how a hybrid identity works.
-In this scenario, you sync an existing forest with an Azure AD tenant by using Azure Active Directory (Azure AD) Connect. You want to sync a new forest with the same Azure AD tenant. You'll set up cloud sync for the new forest.
+In this scenario, there's an existing forest synced using Azure AD Connect sync to an Azure AD tenant. And you have a new forest that you want to sync to the same Azure AD tenant. You'll set up cloud sync for the new forest.
## Prerequisites
+### In the Azure Active Directory admin center
-Before you begin, set up your environments.
-
-### In the Azure AD admin center
-
-1. Create a cloud-only global administrator account on your Azure AD tenant.
-
- This way, you can manage the configuration of your tenant if your on-premises services fail or become unavailable. [Learn how to add a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Complete this step to ensure that you don't get locked out of your tenant.
-
-1. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
+1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
+2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
### In your on-premises environment
-1. Identify a domain-joined host server that's running Windows Server 2012 R2 or later, with at least 4 GB of RAM and .NET 4.7.1+ runtime.
-
-1. If there's a firewall between your servers and Azure AD, configure the following items:
+1. Identify a domain-joined host server running Windows Server 2012 R2 or greater with a minimum of 4 GB of RAM and the .NET 4.7.1+ runtime
+2. If there's a firewall between your servers and Azure AD, configure the following items:
- Ensure that agents can make *outbound* requests to Azure AD over the following ports: | Port number | How it's used | | | |
- | **80** | Downloads the certificate revocation lists (CRLs) while it validates the TLS/SSL certificate. |
- | **443** | Handles all outbound communication with the service. |
- | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
+ | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
+ | **443** | Handles all outbound communication with the service |
+ | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed on the Azure AD portal. |
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.-
- - If your firewall or proxy allows you to specify safe suffixes, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If it doesn't, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
-
+ - If your firewall or proxy allows you to specify safe suffixes, then add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
- Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.-
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Because these URLs are used to validate certificates for other Microsoft products, you might already have these URLs unblocked.
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
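The outbound requirements above can be spot-checked from the prospective agent server before installing anything. The following is a minimal sketch using the built-in `Test-NetConnection` cmdlet; the endpoint list is only a sample of the requirements in the table above, and wildcard suffixes such as **\*.msappproxy.net** can't be probed this way:

```powershell
# Quick outbound connectivity check from the prospective agent server.
# These endpoints are a sample of the requirements above, not the full set.
$checks = @(
    @{ Host = 'login.microsoftonline.com'; Port = 443 },
    @{ Host = 'login.windows.net';         Port = 443 },
    @{ Host = 'crl.microsoft.com';         Port = 80  }
)
foreach ($c in $checks) {
    $result = Test-NetConnection -ComputerName $c.Host -Port $c.Port -WarningAction SilentlyContinue
    '{0}:{1} -> {2}' -f $c.Host, $c.Port, $result.TcpTestSucceeded
}
```

A `TcpTestSucceeded` value of `False` for any endpoint suggests a firewall or proxy rule still needs to be opened.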
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, the agent is DC1. To install the agent, do the following:
+If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, the agent host would be DC1. To install the agent, follow these steps:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
If you're using the [Basic Active Directory and Azure environment](tutorial-basi
[!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)] ## Configure Azure AD Connect cloud sync-
-To configure the cloud sync setup, do the following:
+ Use the following steps to configure provisioning:
1. Sign in to the Azure AD portal.
-1. Select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select **Manage cloud sync**.
+2. Select **Azure Active Directory**
+3. Select **Azure AD Connect**
+4. Select **Manage cloud sync**
- ![Screenshot that highlights the "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-1. Select **New Configuration**.
+5. Select **New Configuration**
- ![Screenshot of the Azure AD Connect cloud sync page, with the "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
+ ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-1. On the **Configuration** page, enter a **Notification email**, move the selector to **Enable**, and then select **Save**.
+6. On the configuration screen, enter a **Notification email**, move the selector to **Enable**, and select **Save**.
- ![Screenshot of the "Edit provisioning configuration" page.](media/how-to-configure/configure-2.png)
+ ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)
1. The configuration status should now be **Healthy**.
- ![Screenshot of Azure AD Connect cloud sync page, showing a "Healthy" status.](media/how-to-configure/manage-4.png)
+ ![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)
+
+## Verify users are created and synchronization is occurring
-## Verify that users are created and synchronization is occurring
+You'll now verify that the users in your on-premises directory have been synchronized and now exist in your Azure AD tenant. This process may take a few hours to complete. To verify users are synchronized, do the following:
-You'll now verify that the users in your on-premises Active Directory have been synchronized and exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users are synchronized, do the following:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
-1. On the left pane, select **Azure Active Directory**.
-1. Under **Manage**, select **Users**.
-1. Verify that the new users are displayed in your tenant.
+1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+2. On the left, select **Azure Active Directory**
+3. Under **Manage**, select **Users**.
+4. Verify that you see the new users in your tenant
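If you prefer to script the check, the Microsoft Graph PowerShell module can list directory-synced accounts. This is a sketch, not part of the tutorial's required steps; it assumes the `Microsoft.Graph.Users` module is installed and filters client-side on the `OnPremisesSyncEnabled` property:

```powershell
# Assumption: Install-Module Microsoft.Graph.Users has been run beforehand.
Connect-MgGraph -Scopes 'User.Read.All'

# OnPremisesSyncEnabled is true for accounts synced from on-premises AD.
Get-MgUser -All -Property DisplayName, UserPrincipalName, OnPremisesSyncEnabled |
    Where-Object { $_.OnPremisesSyncEnabled } |
    Select-Object DisplayName, UserPrincipalName
```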
-## Test signing in with one of your users
+## Test signing in with one of your users
-1. Go to the [Microsoft My Apps](https://myapps.microsoft.com) page.
-1. Sign in with a user account that was created in your new tenant. You'll need to sign in by using the following format: *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
+1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
+2. Sign in with a user account that was created in your new tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
- ![Screenshot that shows the My Apps portal with signed-in users.](media/tutorial-single-forest/verify-1.png)
+ ![Screenshot that shows the My Apps portal with a signed-in user.](media/tutorial-single-forest/verify-1.png)
You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced Active Directory forest
-description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced by using Azure Active Directory (Azure AD) Connect sync.
+ Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced AD forest
+description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
Previously updated : 11/11/2022 Last updated : 01/18/2023
-# Pilot cloud sync for an existing synced Active Directory forest
+# Pilot cloud sync for an existing synced AD forest
-This tutorial walks you through piloting cloud sync for a test Active Directory forest that's already synced by using Azure Active Directory (Azure AD) Connect sync.
+This tutorial walks you through piloting cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
![Diagram that shows the Azure AD Connect cloud sync flow.](media/tutorial-migrate-aadc-aadccp/diagram-2.png) ## Considerations
-Before you try this tutorial, keep the following in mind:
+Before you try this tutorial, consider the following items:
-* You should be familiar with the basics of cloud sync.
+1. Ensure that you're familiar with basics of cloud sync.
-* Ensure that you're running Azure AD Connect cloud sync version 1.4.32.0 or later and you've configured the sync rules as documented.
+1. Ensure that you're running Azure AD Connect sync version 1.4.32.0 or later and have configured the sync rules as documented.
-* When you're piloting, you'll be removing a test organizational unit (OU) or group from the Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
+1. When piloting, you'll be removing a test OU or group from Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
- - **User objects**: The objects in Azure AD that are soft-deleted and can be restored.
- - **Group objects**: The objects in Azure AD that are hard-deleted and can't be restored.
+ - User objects, the objects in Azure AD are soft-deleted and can be restored.
+ - Group objects, the objects in Azure AD are hard-deleted and can't be restored.
- A new link type has been introduced in Azure AD Connect sync, which will prevent deletions in a piloting scenario.
+ A new link type has been introduced in Azure AD Connect sync, which will prevent the deletion in a piloting scenario.
-* Ensure that the objects in the pilot scope have *ms-ds-consistencyGUID* populated so that cloud sync hard matches the objects.
+1. Ensure that the objects in the pilot scope have ms-ds-consistencyGUID populated so cloud sync hard matches the objects.
> [!NOTE]
- > Azure AD Connect sync doesn't populate *ms-ds-consistencyGUID* by default for group objects.
+ > Azure AD Connect sync does not populate *ms-ds-consistencyGUID* by default for group objects.
-* This configuration is for advanced scenarios. Be sure to follow the steps documented in this tutorial precisely.
+1. This configuration is for advanced scenarios. Ensure that you follow the steps documented in this tutorial precisely.
## Prerequisites
-Before you begin, be sure that you've set up your environment to meet the following prerequisites:
+The following are prerequisites required for completing this tutorial:
-- A test environment with [Azure AD connect version 1.4.32.0 or later](https://www.microsoft.com/download/details.aspx?id=47594).
-
- To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md).
+- A test environment with Azure AD Connect sync version 1.4.32.0 or later
+- An OU or group that is in scope of sync and can be used for the pilot. We recommend starting with a small set of objects.
+- A server running Windows Server 2012 R2 or later that will host the provisioning agent.
+- Source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID*
-- An OU or group that's in scope of sync and can be used in the pilot. We recommend starting with a small set of objects.
+## Update Azure AD Connect
-- Windows Server 2012 R2 or later, which will host the provisioning agent.
-
-- The source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID*.
+At a minimum, you should have [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) 1.4.32.0. To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md).
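To confirm the installed version meets the 1.4.32.0 minimum, you can enumerate the standard Uninstall registry keys on the sync server. A sketch; the `DisplayName` pattern is an assumption and may differ slightly between releases:

```powershell
# List the installed Azure AD Connect version from the Uninstall registry keys.
# The DisplayName match is an assumption; adjust if your entry is named differently.
Get-ItemProperty 'HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\Uninstall\*' |
    Where-Object { $_.DisplayName -like '*Azure AD Connect*' } |
    Select-Object DisplayName, DisplayVersion
```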
## Stop the scheduler
-Azure AD Connect sync synchronizes changes occurring in your on-premises directory by using a scheduler. To modify and add custom rules, disable the scheduler so that synchronizations won't run while you're making the changes. To stop the scheduler:
+Azure AD Connect sync synchronizes changes occurring in your on-premises directory using a scheduler. In order to modify and add custom rules, you want to disable the scheduler so that synchronizations won't run while you're making the changes. To stop the scheduler, use the following steps:
-1. On the server that's running Azure AD Connect sync, open PowerShell with administrative privileges.
-1. Run `Stop-ADSyncSyncCycle`, and then select **Enter**.
-1. Run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
+1. On the server that is running Azure AD Connect sync, open PowerShell with administrative privileges.
+2. Run `Stop-ADSyncSyncCycle` and press Enter.
+3. Run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
>[!NOTE]
->If you're running your own custom scheduler for Azure AD Connect sync, be sure to disable the scheduler.
+>If you're running your own custom scheduler for Azure AD Connect sync, disable that scheduler as well.
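The steps above can be run together in one elevated PowerShell session on the sync server, with `Get-ADSyncScheduler` confirming the result. A sketch:

```powershell
# Run elevated on the Azure AD Connect sync server.
Import-Module ADSync                          # usually auto-loaded; explicit here
Stop-ADSyncSyncCycle                          # stop any in-progress sync cycle
Set-ADSyncScheduler -SyncCycleEnabled $false  # disable the built-in scheduler

# Verify: SyncCycleEnabled should now be False.
Get-ADSyncScheduler | Select-Object SyncCycleEnabled
```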
-## Create a custom user inbound rule
+## Create custom user inbound rule
-1. Open **Synchronization Rules Editor** from the application menu in the desktop, as shown in the following screenshot:
+ 1. Launch the Synchronization Rules Editor from the application menu on the desktop as shown below:
- ![Screenshot of the "Synchronization Rules Editor" command.](media/tutorial-migrate-aadc-aadccp/user-8.png)
+ ![Screenshot of the synchronization rule editor menu.](media/tutorial-migrate-aadc-aadccp/user-8.png)
-1. Under **Direction**, select **Inbound** from the dropdown list, and then select **Add new rule**.
+ 2. Select **Inbound** from the drop-down list for Direction and select **Add new rule**.
- ![Screenshot of the "View and manage your synchronization rules" pane, with "Inbound" and the "Add new rule" button selected.](media/tutorial-migrate-aadc-aadccp/user-1.png)
+ ![Screenshot that shows the "View and manage your synchronization rules" window with "Inbound" and the "Add new rule" button selected.](media/tutorial-migrate-aadc-aadccp/user-1.png)
-1. On the **Description** page, do the following:
+ 3. On the **Description** page, enter the following and select **Next**:
- - **Name**: Give the rule a meaningful name.
- - **Description**: Add a meaningful description.
- - **Connected System**: Select the Active Directory connector that you're writing the custom sync rule for.
- - **Connected System Object Type**: Select **User**.
- - **Metaverse Object Type**: Select **Person**.
- - **Link Type**: Select **Join**.
- - **Precedence**: Enter a value that's unique in the system.
- - **Tag**: Leave this field empty.
+ - **Name:** Give the rule a meaningful name
+ - **Description:** Add a meaningful description
+ - **Connected System:** Choose the AD connector that you're writing the custom sync rule for
+ - **Connected System Object Type:** User
+ - **Metaverse Object Type:** Person
+ - **Link Type:** Join
+ - **Precedence:** Provide a value that is unique in the system
+ - **Tag:** Leave this empty
![Screenshot that shows the "Create inbound synchronization rule - Description" page with values entered.](media/tutorial-migrate-aadc-aadccp/user-2.png)
-1. On the **Scoping filter** page, enter the OU or security group that the pilot is based on.
-
- To filter on OU, add the OU portion of the *distinguished name* (DN). This rule will be applied to all users who are in that OU. for example, if DN ends with "OU=CPUsers,DC=contoso,DC=com, add this filter.
+ 4. On the **Scoping filter** page, enter the OU or security group that you want the pilot based on. To filter on OU, add the OU portion of the distinguished name. This rule will be applied to all users who are in that OU. So, if the DN ends with "OU=CPUsers,DC=contoso,DC=com", you would add this filter. Then select **Next**.
|Rule|Attribute|Operator|Value| |--|-|-|--|
- |Scoping&nbsp;OU|DN|ENDSWITH|The distinguished name of the OU.|
- |Scoping&nbsp;group||ISMEMBEROF|The distinguished name of the security group.|
+ |Scoping OU|DN|ENDSWITH|Distinguished name of the OU.|
+ |Scoping group||ISMEMBEROF|Distinguished name of the security group.|
- ![Screenshot that shows the "Create inbound synchronization rule" page with a scoping filter value entered.](media/tutorial-migrate-aadc-aadccp/user-3.png)
+ ![Screenshot that shows the **Create inbound synchronization rule - Scoping filter** page with a scoping filter value entered.](media/tutorial-migrate-aadc-aadccp/user-3.png)
-1. Select **Next**.
-1. On the **Join** rules page, select **Next**.
-1. Under **Add transformations**, do the following:
-
- * **FlowType**: Select **Constant**.
- * **Target Attribute**: Select **cloudNoFlow**.
- * **Source**: Select **True**.
+ 5. On the **Join** rules page, select **Next**.
+ 6. On the **Transformations** page, add a Constant transformation: flow True to cloudNoFlow attribute. Select **Add**.
![Screenshot that shows the **Create inbound synchronization rule - Transformations** page with a **Constant transformation** flow added.](media/tutorial-migrate-aadc-aadccp/user-4.png)
-1. Select **Next**.
-
-1. Select **Add**.
-
-Follow the same steps for all object types (*user*, *group*, and *contact*). Repeat the steps for each configured AD Connector and Active Directory forest.
-
-## Create a custom user outbound rule
+Follow the same steps for all object types (user, group, and contact). Repeat the steps for each configured AD connector and each AD forest.
-1. In the **Direction** dropdown list, select **Outbound**, and then select **Add rule**.
+## Create custom user outbound rule
- ![Screenshot that highlights the selected "Outbound" direction and the "Add new rule" button.](media/tutorial-migrate-aadc-aadccp/user-5.png)
+ 1. Select **Outbound** from the drop-down list for Direction and select **Add rule**.
-1. On the **Description** page, do the following:
+ ![Screenshot that shows the **Outbound** Direction selected and the **Add new rule** button highlighted.](media/tutorial-migrate-aadc-aadccp/user-5.png)
- - **Name**: Give the rule a meaningful name.
- - **Description**: Add a meaningful description.
- - **Connected System**: Select the Azure AD connector that you're writing the custom sync rule for.
- - **Connected System Object Type**: Select **User**.
- - **Metaverse Object Type**: Select **Person**.
- - **Link Type**: Select **JoinNoFlow**.
- - **Precedence**: Enter a value that's unique in the system.
- - **Tag**: Leave this field empty.
+ 2. On the **Description** page, enter the following and select **Next**:
- ![Screenshot of the "Create outbound synchronization rule" pane with properties entered.](media/tutorial-migrate-aadc-aadccp/user-6.png)
+ - **Name:** Give the rule a meaningful name
+ - **Description:** Add a meaningful description
+ - **Connected System:** Choose the Azure AD connector that you're writing the custom sync rule for
+ - **Connected System Object Type:** User
+ - **Metaverse Object Type:** Person
+ - **Link Type:** JoinNoFlow
+ - **Precedence:** Provide a value that is unique in the system
+ - **Tag:** Leave this empty
-1. Select **Next**.
+ ![Screenshot that shows the **Description** page with properties entered.](media/tutorial-migrate-aadc-aadccp/user-6.png)
-1. On the **Create outbound synchronization rule** pane, under **Add scoping filters**, do the following:
-
- * **Attribute**: Select **cloudNoFlow**.
- * **Operator**: Select **EQUAL**.
- * **Value**: Select **True**.
+ 3. On the **Scoping filter** page, choose **cloudNoFlow** equal **True**. Then select **Next**.
![Screenshot that shows a custom rule.](media/tutorial-migrate-aadc-aadccp/user-7.png)
-1. Select **Next**.
-
-1. On the **Join** rules pane, select **Next**.
-
-1. On the **Transformations** pane, select **Add**.
+ 4. On the **Join** rules page, select **Next**.
+ 5. On the **Transformations** page, select **Add**.
-Follow the same steps for all object types (*user*, *group*, and *contact*).
+Follow the same steps for all object types (user, group, and contact).
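Before re-enabling the scheduler later, you can confirm your custom inbound and outbound rules were saved by listing non-standard rules with `Get-ADSyncRule`. A sketch run on the sync server:

```powershell
# List custom (non-standard) sync rules and the properties set in the steps above.
Get-ADSyncRule |
    Where-Object { -not $_.IsStandardRule } |
    Select-Object Name, Direction, Precedence, LinkType
```

You should see one inbound rule per configured AD connector (LinkType `Join`) plus the outbound rule (LinkType `JoinNoFlow`) for each object type.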
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, the agent is CP1. To install the agent, do the following:
+If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, the agent host would be CP1. To install the agent, follow these steps:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
-## Verify the agent installation
+## Verify agent installation
[!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)] ## Configure Azure AD Connect cloud sync
-To configure the cloud sync setup, do the following:
+Use the following steps to configure provisioning:
-1. Sign in to the Azure AD portal.
-1. Select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select the **Manage provisioning (Preview)** link.
+1. Sign in to the Azure AD portal.
+2. Select **Azure Active Directory**
+3. Select **Azure AD Connect**
+4. Select **Manage cloud sync**
- ![Screenshot that shows the "Manage provisioning (Preview)" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-1. Select **New Configuration**
+5. Select **New Configuration**
- ![Screenshot that highlights the "New configuration" link.](media/tutorial-single-forest/configure-1.png)
+ ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-1. On the **Configure** pane, under **Settings**, enter a **Notification email** and then, under **Deploy**, move the selector to **Enable**.
+6. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
- ![Screenshot of the "Configure" pane, with a notification email entered and "Enable" selected.](media/tutorial-single-forest/configure-2.png)
+ ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/tutorial-single-forest/configure-2.png)
-1. Select **Save**.
+7. Under **Configure**, select **All users** to change the scope of the configuration rule.
-1. Under **Scope**, select the **All users** link to change the scope of the configuration rule.
-
- ![Screenshot of the "Configure" pane, with the "All users" link highlighted.](media/how-to-configure/scope-2.png)
+ ![Screenshot of Configure screen with "All users" highlighted next to "Scope users".](media/how-to-configure/scope-2.png)
-1. Under **Scope users**, change the scope to include the OU that you created: **OU=CPUsers,DC=contoso,DC=com**.
+8. On the right, change the scope to include the specific OU you created "OU=CPUsers,DC=contoso,DC=com".
- ![Screenshot of the "Scope users" page, highlighting the scope that's changed to the OU you created.](media/tutorial-existing-forest/scope-2.png)
+ ![Screenshot of the Scope users screen highlighting the scope changed to the OU you created.](media/tutorial-existing-forest/scope-2.png)
-1. Select **Done** and **Save**.
-
- The scope should now be set to **1 organizational unit**.
+9. Select **Done** and **Save**.
+10. The scope should now be set to one organizational unit.
- ![Screenshot of the "Configure" page, with "1 organizational unit" highlighted next to "Scope users".](media/tutorial-existing-forest/scope-3.png)
+ ![Screenshot of Configure screen with "1 organizational unit" highlighted next to "Scope users".](media/tutorial-existing-forest/scope-3.png)
-## Verify that users have been set up by cloud sync
+## Verify users are provisioned by cloud sync
-You'll now verify that the users in your on-premises Active Directory have been synchronized and now exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users have been synchronized, do the following:
+You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. This process may take a few hours to complete. To verify users are provisioned by cloud sync, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
-1. On the left pane, select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select **Manage cloud sync**.
-1. Select the **Logs** button.
-1. Search for a username to confirm that the user has been set up by cloud sync.
+1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+2. On the left, select **Azure Active Directory**.
+3. Select **Azure AD Connect**.
+4. Select **Manage cloud sync**.
+5. Select the **Logs** button.
+6. Search for a username to confirm that the user is provisioned by cloud sync.
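In addition to the portal, you can spot-check a synchronized user from PowerShell. A minimal sketch using the AzureAD module — the UPN shown is a placeholder, and you must sign in with `Connect-AzureAD` first:

```powershell
# Connect, then look up a synchronized user by UPN to confirm provisioning.
Connect-AzureAD
Get-AzureADUser -Filter "userPrincipalName eq 'user@contoso.com'"
```

If the cmdlet returns the user object, the account has been provisioned to the tenant.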
Additionally, you can verify that the user and group exist in Azure AD.

## Start the scheduler
-Azure AD Connect sync synchronizes changes that occur in your on-premises directory by using a scheduler. Now that you've modified the rules, you can restart the scheduler.
+Azure AD Connect sync synchronizes changes occurring in your on-premises directory using a scheduler. Now that you've modified the rules, you can restart the scheduler. Use the following steps:
-1. On the server that's running Azure AD Connect sync, open PowerShell with administrative privileges.
-1. Run `Set-ADSyncScheduler -SyncCycleEnabled $true`.
-1. Run `Start-ADSyncSyncCycle`, and then select <kbd>Enter</kbd>.
+1. On the server that is running Azure AD Connect sync, open PowerShell with administrative privileges.
+2. Run `Set-ADSyncScheduler -SyncCycleEnabled $true`.
+3. Run `Start-ADSyncSyncCycle`, then press <kbd>Enter</kbd>.
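To confirm the scheduler state after restarting it, you can query it on the same server. A quick check — the `Get-ADSyncScheduler` cmdlet is installed with Azure AD Connect sync:

```powershell
# Show whether the sync cycle is enabled and when the next cycle runs.
Get-ADSyncScheduler | Select-Object SyncCycleEnabled, NextSyncCyclePolicyType, NextSyncCycleStartTimeInUTC
```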
> [!NOTE]
-> If you're running your own custom scheduler for Azure AD Connect sync, be sure to enable the scheduler.
-
-After the scheduler is enabled, Azure AD Connect stops exporting any changes on objects with `cloudNoFlow=true` in the metaverse, unless any reference attribute (such as `manager`) is being updated.
+> If you are running your own custom scheduler for Azure AD Connect sync, make sure the scheduler is enabled.
-If there's any reference attribute update on the object, Azure AD Connect will ignore the `cloudNoFlow` signal and export all updates on the object.
+Once the scheduler is enabled, Azure AD Connect will stop exporting any changes on objects with `cloudNoFlow=true` in the metaverse, unless any reference attribute (such as `manager`) is being updated. In case there's any reference attribute update on the object, Azure AD Connect will ignore the `cloudNoFlow` signal and export all updates on the object.
-## Does your setup work?
+## Something went wrong
-If the pilot doesn't work as you had expected, you can go back to the Azure AD Connect sync setup by doing the following:
+If the pilot doesn't work as expected, you can go back to the Azure AD Connect sync setup by following these steps:
-1. Disable the provisioning configuration in the Azure portal.
-1. Disable all the custom sync rules that were created for cloud provisioning by using the Sync Rule Editor tool. Disabling the rules should result in a full sync of all the connectors.
+1. Disable provisioning configuration in the Azure portal.
+2. Disable all the custom sync rules created for cloud provisioning by using the Sync Rule Editor tool. Disabling the rules should cause a full sync on all the connectors.
## Next steps
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-single-forest.md
Title: Tutorial - Integrate a single forest with a single Azure AD tenant
-description: This article describes the prerequisites and the hardware requirements for using Azure AD Connect cloud sync.
+description: This topic describes the prerequisites and the hardware requirements for cloud sync.
Previously updated : 11/11/2022 Last updated : 01/17/2023
# Tutorial: Integrate a single forest with a single Azure AD tenant
-This tutorial walks you through creating a hybrid identity environment by using Azure Active Directory (Azure AD) Connect cloud sync.
+This tutorial walks you through creating a hybrid identity environment using Azure Active Directory (Azure AD) Connect cloud sync.
![Diagram that shows the Azure AD Connect cloud sync flow.](media/tutorial-single-forest/diagram-2.png)
You can use the environment you create in this tutorial for testing or for getti
## Prerequisites
-Before you begin, set up your environments by doing the following.
- ### In the Azure Active Directory admin center
-1. Create a cloud-only global administrator account on your Azure AD tenant.
-
- This way, you can manage the configuration of your tenant if your on-premises services fail or become unavailable. [Learn how to add a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Complete this step to ensure that you don't get locked out of your tenant.
-
-1. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
+1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
+2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
### In your on-premises environment
-1. Identify a domain-joined host server that's running Windows Server 2016 or later, with at least 4 GB of RAM and .NET 4.7.1+ runtime.
-
-1. If there's a firewall between your servers and Azure AD, configure the following items:
+1. Identify a domain-joined host server running Windows Server 2016 or greater with a minimum of 4 GB of RAM and the .NET 4.7.1+ runtime.
+2. If there's a firewall between your servers and Azure AD, configure the following items:
 - Ensure that agents can make *outbound* requests to Azure AD over the following ports:

 | Port number | How it's used |
 | --- | --- |
- | **80** | Downloads the certificate revocation lists (CRLs) while it validates the TLS/SSL certificate. |
- | **443** | Handles all outbound communication with the service. |
- | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
+ | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
+ | **443** | Handles all outbound communication with the service |
+ | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed on the Azure AD portal. |
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.-
- - If your firewall or proxy allows you to specify safe suffixes, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
-
+ - If your firewall or proxy allows you to specify safe suffixes, then add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
- Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.-
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Because these URLs are used to validate certificates for other Microsoft products, you might already have these URLs unblocked.
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1. To install the agent, follow these steps:
+If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1. To install the agent, follow these steps:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
If you're using the [Basic Active Directory and Azure environment](tutorial-basi
## Configure Azure AD Connect cloud sync
-To configure provisioning, do the following:
+Use the following steps to configure and start the provisioning:
-1. Sign in to the Azure AD portal.
-1. Select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select **Manage cloud sync**.
+1. Sign in to the Azure AD portal.
+1. Select **Azure Active Directory**
+1. Select **Azure AD Connect**
+1. Select **Manage cloud sync**
- ![Screenshot that shows the "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-1. Select **New Configuration**.
+1. Select **New Configuration**
+
+ [![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)](media/tutorial-single-forest/configure-1.png#lightbox)
- ![Screenshot of the Azure AD Connect cloud sync page, with the "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png#lightbox)
+1. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
-1. On the **Configuration** page, enter a **Notification email**, move the selector to **Enable**, and then select **Save**.
+ [![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)](media/how-to-configure/configure-2.png#lightbox)
- ![Screenshot of the "Edit provisioning configuration" page.](media/how-to-configure/configure-2.png#lightbox)
+1. The configuration status should now be **Healthy**.
-1. The configuration status should now be **Healthy**.
+ [![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)](media/how-to-configure/manage-4.png#lightbox)
- ![Screenshot of the "Azure AD Connect cloud sync" page, showing a "Healthy" status.](media/how-to-configure/manage-4.png#lightbox)
+## Verify users are created and synchronization is occurring
-## Verify that users are created and synchronization is occurring
+You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. The sync operation may take a few hours to complete. To verify users are synchronized, follow these steps:
-You'll now verify that the users in your on-premises directory have been synchronized and exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users are synchronized, do the following:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
-1. On the left pane, select **Azure Active Directory**.
-1. Under **Manage**, select **Users**.
-1. Verify that the new users are displayed in your tenant.
+1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+2. On the left, select **Azure Active Directory**
+3. Under **Manage**, select **Users**.
+4. Verify that the new users appear in your tenant
## Test signing in with one of your users
-1. Go to the [Microsoft My Apps](https://myapps.microsoft.com) page.
-1. Sign in with a user account that was created in your new tenant. You'll need to sign in by using the following format: *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
+1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
+
+1. Sign in with a user account that was created in your tenant. You'll need to sign in using the format *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
- ![Screenshot that shows the My Apps portal with signed-in users.](media/tutorial-single-forest/verify-1.png)
+ ![Screenshot that shows the My Apps portal with a signed-in user.](media/tutorial-single-forest/verify-1.png)
-You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
+You've now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
## Next steps

- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud provisioning?](what-is-cloud-sync.md)
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
Previously updated : 01/25/2022 Last updated : 01/17/2023
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-provisioning.md
Previously updated : 12/05/2019 Last updated : 01/17/2023
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Web APIs have one of the following versions selected as a default during registr
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiJlZjFkYTlkNC1mZjc3LTRjM2UtYTAwNS04NDBjM2Y4MzA3NDUiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC9mYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTUyMjIyOS8iLCJpYXQiOjE1MzcyMzMxMDYsIm5iZiI6MTUzNzIzMzEwNiwiZXhwIjoxNTM3MjM3MDA2LCJhY3IiOiIxIiwiYWlvIjoiQVhRQWkvOElBQUFBRm0rRS9RVEcrZ0ZuVnhMaldkdzhLKzYxQUdyU091TU1GNmViYU1qN1hPM0libUQzZkdtck95RCtOdlp5R24yVmFUL2tES1h3NE1JaHJnR1ZxNkJuOHdMWG9UMUxrSVorRnpRVmtKUFBMUU9WNEtjWHFTbENWUERTL0RpQ0RnRTIyMlRJbU12V05hRU1hVU9Uc0lHdlRRPT0iLCJhbXIiOlsid2lhIl0sImFwcGlkIjoiNzVkYmU3N2YtMTBhMy00ZTU5LTg1ZmQtOGMxMjc1NDRmMTdjIiwiYXBwaWRhY3IiOiIwIiwiZW1haWwiOiJBYmVMaUBtaWNyb3NvZnQuY29tIiwiZmFtaWx5X25hbWUiOiJMaW5jb2xuIiwiZ2l2ZW5fbmFtZSI6IkFiZSAoTVNGVCkiLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC83MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMjIyNDcvIiwiaXBhZGRyIjoiMjIyLjIyMi4yMjIuMjIiLCJuYW1lIjoiYWJlbGkiLCJvaWQiOiIwMjIyM2I2Yi1hYTFkLTQyZDQtOWVjMC0xYjJiYjkxOTQ0MzgiLCJyaCI6IkkiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJzdWIiOiJsM19yb0lTUVUyMjJiVUxTOXlpMmswWHBxcE9pTXo1SDNaQUNvMUdlWEEiLCJ0aWQiOiJmYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTU2ZmQ0MjkiLCJ1bmlxdWVfbmFtZSI6ImFiZWxpQG1pY3Jvc29mdC5jb20iLCJ1dGkiOiJGVnNHeFlYSTMwLVR1aWt1dVVvRkFBIiwidmVyIjoiMS4wIn0.D3H6pMUtQnoJAGq6AHd ``` -- v2.0 for applications that support consumer accounts. The following example shows a v1.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
+- v2.0 for applications that support consumer accounts. The following example shows a v2.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
``` eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiI2ZTc0MTcyYi1iZTU2LTQ4NDMtOWZmNC1lNjZhMzliYjEyZTMiLCJpc3MiOiJodHRwczovL2xvZ2luLm1pY3Jvc29mdG9ubGluZS5jb20vNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3L3YyLjAiLCJpYXQiOjE1MzcyMzEwNDgsIm5iZiI6MTUzNzIzMTA0OCwiZXhwIjoxNTM3MjM0OTQ4LCJhaW8iOiJBWFFBaS84SUFBQUF0QWFaTG8zQ2hNaWY2S09udHRSQjdlQnE0L0RjY1F6amNKR3hQWXkvQzNqRGFOR3hYZDZ3TklJVkdSZ2hOUm53SjFsT2NBbk5aY2p2a295ckZ4Q3R0djMzMTQwUmlvT0ZKNGJDQ0dWdW9DYWcxdU9UVDIyMjIyZ0h3TFBZUS91Zjc5UVgrMEtJaWpkcm1wNjlSY3R6bVE9PSIsImF6cCI6IjZlNzQxNzJiLWJlNTYtNDg0My05ZmY0LWU2NmEzOWJiMTJlMyIsImF6cGFjciI6IjAiLCJuYW1lIjoiQWJlIExpbmNvbG4iLCJvaWQiOiI2OTAyMjJiZS1mZjFhLTRkNTYtYWJkMS03ZTRmN2QzOGU0NzQiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhYmVsaUBtaWNyb3NvZnQuY29tIiwicmgiOiJJIiwic2NwIjoiYWNjZXNzX2FzX3VzZXIiLCJzdWIiOiJIS1pwZmFIeVdhZGVPb3VZbGl0anJJLUtmZlRtMjIyWDVyclYzeERxZktRIiwidGlkIjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3IiwidXRpIjoiZnFpQnFYTFBqMGVRYTgyUy1JWUZBQSIsInZlciI6IjIuMCJ9.pj4N-w_3Us9DrBLfpCt
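Either way, you can tell which version a given access token is by decoding its payload (the middle, base64url-encoded segment) and reading the `ver` claim. A minimal PowerShell sketch — `$jwt` is a placeholder for a real token:

```powershell
# Decode the JWT payload and read the "ver" claim ("1.0" or "2.0").
$jwt = "<access token>"                                     # placeholder
$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
$claims = [System.Text.Encoding]::UTF8.GetString([Convert]::FromBase64String($payload)) | ConvertFrom-Json
$claims.ver
```

This is for inspection only; it does not validate the token's signature.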
active-directory Active Directory Authentication Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-authentication-protocols.md
Last updated 09/27/2021-+
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
- Title: Customize Azure AD tenant app claims (PowerShell)
-description: Learn how to customize claims emitted in tokens for an application in a specific Azure Active Directory tenant.
------- Previously updated : 01/06/2023----
-# Customize claims emitted in tokens for a specific app in a tenant
-
-A claim is information that an identity provider states about a user inside the token they issue for that user. Claims customization is used by tenant admins to customize the claims emitted in tokens for a specific application in their tenant. You can use claims-mapping policies to:
-
-- Select which claims are included in tokens.
-- Create claim types that don't already exist.
-- Choose or change the source of data emitted in specific claims.
-
-Claims customization supports configuring claim-mapping policies for the WS-Fed, SAML, OAuth, and OpenID Connect protocols.
-
-This feature replaces and supersedes the [claims customization](active-directory-saml-claims-customization.md) offered through the Azure portal. On the same application, if you customize claims using the portal in addition to the Microsoft Graph/PowerShell method detailed in this document, tokens issued for that application will ignore the configuration in the portal. Configurations made through the methods detailed in this document won't be reflected in the portal.
-
-In this article, we walk through a few common scenarios that can help you understand how to use the [claims-mapping policy type](reference-claims-mapping-policy-type.md).
-
-## Get started
-
-In the following examples, you create, update, link, and delete policies for service principals. Claims-mapping policies can only be assigned to service principal objects. If you're new to Azure Active Directory (Azure AD), we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
-
-When creating a claims-mapping policy, you can also emit a claim from a directory extension attribute in tokens. Use _ExtensionID_ for the extension attribute instead of _ID_ in the `ClaimsSchema` element. For more information about using extension attributes, see [Using directory extension attributes](active-directory-schema-extensions.md).
-
-The [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview) is required to configure claims-mapping policies. The PowerShell module is in preview, while the claims mapping and token creation runtime in Azure is generally available. Updates to the preview PowerShell module could require you to update or change your configuration scripts.
-
-To get started, do the following steps:
-
-1. Download the latest [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview).
-1. Run the [Connect-AzureAD](/powershell/module/azuread/connect-azuread?view=azureadps-2.0-preview&preserve-view=true) command to sign in to your Azure AD admin account. Run this command each time you start a new session.
-
- ```powershell
- Connect-AzureAD -Confirm
- ```
-
-1. To see all policies that have been created in your organization, run the following command. We recommend that you run this command after most operations in the following scenarios, to check that your policies are being created as expected.
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-Next, create a claims mapping policy and assign it to a service principal. See these examples for common scenarios:
-
-- [Omit the basic claims from tokens](#omit-the-basic-claims-from-tokens)
-- [Include the EmployeeID and TenantCountry as claims in tokens](#include-the-employeeid-and-tenantcountry-as-claims-in-tokens)
-- [Use a claims transformation in tokens](#use-a-claims-transformation-in-tokens)
-
-After creating a claims mapping policy, configure your application to acknowledge that tokens will contain customized claims. For more information, read [security considerations](#security-considerations).
-
-## Omit the basic claims from tokens
-
-In this example, you create a policy that removes the [basic claim set](reference-claims-mapping-policy-type.md#claim-sets) from tokens issued to linked service principals.
-
-1. Create a claims-mapping policy. This policy, linked to specific service principals, removes the basic claim set from tokens.
-
- 1. To create the policy, run this command:
-
- ```powershell
- New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false"}}') -DisplayName "OmitBasicClaims" -Type "ClaimsMappingPolicy"
- ```
-
- 2. To see your new policy, and to get the policy ObjectId, run the following command:
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
-
- 1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
- 2. When you have the ObjectId of your service principal, run the following command:
-
- ```powershell
- Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
- ```
-
-## Include the EmployeeID and TenantCountry as claims in tokens
-
-In this example, you create a policy that adds the EmployeeID and TenantCountry to tokens issued to linked service principals. The EmployeeID is emitted as the name claim type in both SAML tokens and JWTs. The TenantCountry is emitted as the country/region claim type in both SAML tokens and JWTs. In this example, we continue to include the basic claims set in the tokens.
-
-1. Create a claims-mapping policy. This policy, linked to specific service principals, adds the EmployeeID and TenantCountry claims to tokens.
-
- 1. To create the policy, run the following command:
-
- ```powershell
- New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema": [{"Source":"user","ID":"employeeid","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/employeeid","JwtClaimType":"employeeid"},{"Source":"company","ID":"tenantcountry","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/country","JwtClaimType":"country"}]}}') -DisplayName "ExtraClaimsExample" -Type "ClaimsMappingPolicy"
- ```
-
- When you define a claims mapping policy for a directory extension attribute, use the `ExtensionID` property instead of the `ID` property within the body of the `ClaimsSchema` array.
-
- 2. To see your new policy, and to get the policy ObjectId, run the following command:
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
-
- 1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
- 2. When you have the ObjectId of your service principal, run the following command:
-
- ```powershell
- Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
- ```
-
-## Use a claims transformation in tokens
-
-In this example, you create a policy that emits a custom claim "JoinedData" to JWTs issued to linked service principals. This claim contains a value created by joining the data stored in the extensionattribute1 attribute on the user object with ".sandbox". In this example, we exclude the basic claims set in the tokens.
-
-1. Create a claims-mapping policy. This policy, linked to specific service principals, emits the JoinedData claim in tokens.
-
- 1. To create the policy, run the following command:
-
- ```powershell
- -
- ```
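The policy definition was left blank in the excerpt above and stays elided. As a hedged sketch only — the transformation ID `JoinTheData`, output claim ID `DataJoin`, and display name are illustrative, not values from the original article — a Join-transformation policy of the kind this section describes follows the documented `ClaimsSchema`/`ClaimsTransformations` shape:

```powershell
# Sketch: emit "JoinedData" by joining extensionattribute1 with "sandbox" using "." as separator,
# excluding the basic claim set. IDs and display name are hypothetical.
New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false","ClaimsSchema":[{"Source":"user","ID":"extensionattribute1"},{"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],"ClaimsTransformations":[{"ID":"JoinTheData","TransformationMethod":"Join","InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}],"InputParameters":[{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],"OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}') -DisplayName "TransformClaimsExample" -Type "ClaimsMappingPolicy"
```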
-
- 2. To see your new policy, and to get the policy ObjectId, run the following command:
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
-
- 1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
- 2. When you have the ObjectId of your service principal, run the following command:
-
- ```powershell
- Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
- ```
-
-## Security considerations
-
-Applications that receive tokens rely on the fact that the claim values are authoritatively issued by Azure AD and can't be tampered with. However, when you modify the token contents through claims-mapping policies, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified by the creator of the claims-mapping policy to protect themselves from claims-mapping policies created by malicious actors. This can be done in one of the following ways:
-
-- [Configure a custom signing key](#configure-a-custom-signing-key)
-- Or, [update the application manifest](#update-the-application-manifest) to accept mapped claims.
-
-Without this, Azure AD will return an [`AADSTS50146` error code](reference-aadsts-error-codes.md#aadsts-error-codes).
-
-### Configure a custom signing key
-
-For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. If you set up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app uses the Azure global signing key, which can't be used for customizing claims in tokens. To get custom claims in tokens, create a custom signing key from a certificate and add it to the service principal. For testing purposes, you can use a self-signed certificate. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
-
-Add the following information to the service principal:
-
-- Private key (as a [key credential](/graph/api/resources/keycredential))
-- Password (as a [password credential](/graph/api/resources/passwordcredential))
-- Public key (as a [key credential](/graph/api/resources/keycredential))
-
-Extract the private and public key base-64 encoded from the PFX file export of your certificate. Make sure that the `keyId` for the `keyCredential` used for "Sign" matches the `keyId` of the `passwordCredential`. You can generate the `customkeyIdentifier` by getting the hash of the cert's thumbprint.
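One way to obtain those base-64 strings and the `customKeyIdentifier` is to load the PFX in PowerShell and export each piece. A sketch under stated assumptions — the file name and password are placeholders:

```powershell
# Load the PFX, emit base-64 strings for the Sign (private) and Verify (public) keys,
# and derive a customKeyIdentifier from the certificate thumbprint.
$pwd = "MyPassword1!"                                       # placeholder
$pfx = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2(".\signingcert.pfx", $pwd, "Exportable")
$privateKey = [Convert]::ToBase64String($pfx.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Pfx, $pwd))
$publicKey  = [Convert]::ToBase64String($pfx.Export([System.Security.Cryptography.X509Certificates.X509ContentType]::Cert))
$sha256 = [System.Security.Cryptography.SHA256]::Create()
$customKeyIdentifier = [Convert]::ToBase64String($sha256.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($pfx.Thumbprint)))
```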
-
-#### Request
-
-The following shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The "key" value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property usage is "Sign". For the public key, the property usage is "Verify".
-
-```
-PATCH https://graph.microsoft.com/v1.0/servicePrincipals/f47a6776-bca7-4f2e-bc6c-eec59d058e3e
-
-Content-type: application/json
-Authorization: Bearer {token}
-
-{
- "keyCredentials":[
- {
- "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
- "endDateTime": "2021-04-22T22:10:13Z",
- "keyId": "4c266507-3e74-4b91-aeba-18a25b450f6e",
- "startDateTime": "2020-04-22T21:50:13Z",
- "type": "X509CertAndPassword",
- "usage": "Sign",
- "key":"MIIKIAIBAz.....HBgUrDgMCERE20nuTptI9MEFCh2Ih2jaaLZBZGeZBRFVNXeZmAAgIH0A==",
- "displayName": "CN=contoso"
- },
- {
- "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
- "endDateTime": "2021-04-22T22:10:13Z",
- "keyId": "e35a7d11-fef0-49ad-9f3e-aacbe0a42c42",
- "startDateTime": "2020-04-22T21:50:13Z",
- "type": "AsymmetricX509Cert",
- "usage": "Verify",
- "key": "MIIDJzCCAg+gAw......CTxQvJ/zN3bafeesMSueR83hlCSyg==",
- "displayName": "CN=contoso"
- }
-
- ],
- "passwordCredentials": [
- {
- "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
- "keyId": "4c266507-3e74-4b91-aeba-18a25b450f6e",
- "endDateTime": "2022-01-27T19:40:33Z",
- "startDateTime": "2020-04-20T19:40:33Z",
- "secretText": "mypassword"
- }
- ]
-}
-```
-
-#### Configure a custom signing key using PowerShell
-
-Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
-
-To run this script, you need:
-
-1. The object ID of your application's service principal, found in the **Overview** pane of your application's entry in [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) in the Azure portal.
-2. An app registration to sign in a user and get an access token to call Microsoft Graph. Get the application (client) ID of this app in the **Overview** pane of the application's entry in [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal. The app registration should have the following configuration:
- - A redirect URI of "http://localhost" listed in the **Mobile and desktop applications** platform configuration
- - In **API permissions**, Microsoft Graph delegated permissions **Application.ReadWrite.All** and **User.Read** (make sure you grant Admin consent to these permissions)
-3. A user who logs in to get the Microsoft Graph access token. The user should be one of the following Azure AD administrative roles (required to update the service principal):
- - Cloud Application Administrator
- - Application Administrator
- - Global Administrator
-4. A certificate to configure as a custom signing key for our application. You can either create a self-signed certificate or obtain one from your trusted certificate authority. The following certificate components are used in the script:
- - public key (typically a .cer file)
- - private key in PKCS#12 format (in .pfx file)
- - password for the private key (pfx file)
-
-The private key must be in PKCS#12 format because Azure AD doesn't support other formats. Using the wrong format can result in the error "Invalid certificate: Key value is invalid certificate" when using Microsoft Graph to PATCH the service principal with a `keyCredentials` property that contains the certificate info.
-
-```powershell
-
-$fqdn="fourthcoffeetest.onmicrosoft.com" # this is used for the 'issued to' and 'issued by' field of the certificate
-$pwd="mypassword" # password for exporting the certificate private key
-$location="C:\\temp" # path to folder where both the pfx and cer file will be written to
-
-# Create a self-signed cert
-$cert = New-SelfSignedCertificate -certstorelocation cert:\currentuser\my -DnsName $fqdn
-$pwdSecure = ConvertTo-SecureString -String $pwd -Force -AsPlainText
-$path = 'cert:\currentuser\my\' + $cert.Thumbprint
-$cerFile = $location + "\\" + $fqdn + ".cer"
-$pfxFile = $location + "\\" + $fqdn + ".pfx"
-
-# Export the public and private keys
-Export-PfxCertificate -cert $path -FilePath $pfxFile -Password $pwdSecure
-Export-Certificate -cert $path -FilePath $cerFile
-
-$ClientID = "<app-id>"
-$loginURL = "https://login.microsoftonline.com"
-$tenantdomain = "fourthcoffeetest.onmicrosoft.com"
-$redirectURL = "http://localhost" # this reply URL is needed for PowerShell Core
-[string[]] $Scopes = "https://graph.microsoft.com/.default"
-$pfxpath = $pfxFile # path to pfx file
-$cerpath = $cerFile # path to cer file
-$SPOID = "<service-principal-id>"
-$graphuri = "https://graph.microsoft.com/v1.0/serviceprincipals/$SPOID"
-$password = $pwd # password for the pfx file
-
-# choose the correct folder name for MSAL based on PowerShell version 5.1 (.Net) or PowerShell Core (.Net Core)
-
-if ($PSVersionTable.PSVersion.Major -gt 5)
- {
- $core = $true
- $foldername = "netcoreapp2.1"
- }
-else
- {
- $core = $false
- $foldername = "net45"
- }
-
-# Load the MSAL/microsoft.identity/client assembly -- needed once per PowerShell session
-[System.Reflection.Assembly]::LoadFrom((Get-ChildItem C:/Users/<username>/.nuget/packages/microsoft.identity.client/4.32.1/lib/$foldername/Microsoft.Identity.Client.dll).fullname) | out-null
-
-$global:app = $null
-
-$ClientApplicationBuilder = [Microsoft.Identity.Client.PublicClientApplicationBuilder]::Create($ClientID)
-[void]$ClientApplicationBuilder.WithAuthority($("$loginURL/$tenantdomain"))
-[void]$ClientApplicationBuilder.WithRedirectUri($redirectURL)
-
-$global:app = $ClientApplicationBuilder.Build()
-
-Function Get-GraphAccessTokenFromMSAL {
- [Microsoft.Identity.Client.AuthenticationResult] $authResult = $null
- $AcquireTokenParameters = $global:app.AcquireTokenInteractive($Scopes)
- [IntPtr] $ParentWindow = [System.Diagnostics.Process]::GetCurrentProcess().MainWindowHandle
- if ($ParentWindow)
- {
- [void]$AcquireTokenParameters.WithParentActivityOrWindow($ParentWindow)
- }
- try {
- $authResult = $AcquireTokenParameters.ExecuteAsync().GetAwaiter().GetResult()
- }
- catch {
- $ErrorMessage = $_.Exception.Message
- Write-Host $ErrorMessage
- }
-
- return $authResult
-}
-
-$myvar = Get-GraphAccessTokenFromMSAL
-if ($myvar)
-{
- $GraphAccessToken = $myvar.AccessToken
- Write-Host "Access Token: " $myvar.AccessToken
- #$GraphAccessToken = "eyJ0eXAiOiJKV1QiL ... iPxstltKQ"
-
- # this is for PowerShell Core
- $Secure_String_Pwd = ConvertTo-SecureString $password -AsPlainText -Force
-
- # reading certificate files and creating Certificate Object
- if ($core)
- {
- $pfx_cert = get-content $pfxpath -AsByteStream -Raw
- $cer_cert = get-content $cerpath -AsByteStream -Raw
- $cert = Get-PfxCertificate -FilePath $pfxpath -Password $Secure_String_Pwd
- }
- else
- {
- $pfx_cert = get-content $pfxpath -Encoding Byte
- $cer_cert = get-content $cerpath -Encoding Byte
- # Write-Host "Enter password for the pfx file..."
- # calling Get-PfxCertificate in PowerShell 5.1 prompts for password
- # $cert = Get-PfxCertificate -FilePath $pfxpath
- $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($pfxpath, $password)
- }
-
- # base 64 encode the private key and public key
- $base64pfx = [System.Convert]::ToBase64String($pfx_cert)
- $base64cer = [System.Convert]::ToBase64String($cer_cert)
-
- # getting id for the keyCredential object
- $guid1 = New-Guid
- $guid2 = New-Guid
-
- # get the custom key identifier from the certificate thumbprint:
- $hasher = [System.Security.Cryptography.HashAlgorithm]::Create('sha256')
- $hash = $hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($cert.Thumbprint))
- $customKeyIdentifier = [System.Convert]::ToBase64String($hash)
-
- # get end date and start date for our keycredentials
- $endDateTime = ($cert.NotAfter).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
- $startDateTime = ($cert.NotBefore).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
-
- # building our json payload
- $object = [ordered]@{
- keyCredentials = @(
- [ordered]@{
- customKeyIdentifier = $customKeyIdentifier
- endDateTime = $endDateTime
- keyId = $guid1
- startDateTime = $startDateTime
- type = "X509CertAndPassword"
- usage = "Sign"
- key = $base64pfx
- displayName = "CN=fourthcoffeetest"
- },
- [ordered]@{
- customKeyIdentifier = $customKeyIdentifier
- endDateTime = $endDateTime
- keyId = $guid2
- startDateTime = $startDateTime
- type = "AsymmetricX509Cert"
- usage = "Verify"
- key = $base64cer
- displayName = "CN=fourthcoffeetest"
- }
- )
- passwordCredentials = @(
- [ordered]@{
- customKeyIdentifier = $customKeyIdentifier
- keyId = $guid1
- endDateTime = $endDateTime
- startDateTime = $startDateTime
- secretText = $password
- }
- )
- }
-
- $json = $object | ConvertTo-Json -Depth 99
- Write-Host "JSON Payload:"
- Write-Output $json
-
- # Request Header
- $Header = @{}
- $Header.Add("Authorization","Bearer $($GraphAccessToken)")
- $Header.Add("Content-Type","application/json")
-
- try
- {
- Invoke-RestMethod -Uri $graphuri -Method "PATCH" -Headers $Header -Body $json
- }
- catch
- {
- # Dig into the exception to get the Response details.
- # Note that value__ is not a typo.
- Write-Host "StatusCode:" $_.Exception.Response.StatusCode.value__
- Write-Host "StatusDescription:" $_.Exception.Response.StatusDescription
- }
-
- Write-Host "Complete Request"
-}
-else
-{
- Write-Host "Failed to get access token"
-}
-```
-
-#### Validate token signing key
-
-Apps that have claims mapping enabled must validate their token signing keys by appending `appid={client_id}` to their [OpenID Connect metadata requests](v2-protocols-oidc.md#fetch-the-openid-configuration-document). Below is the format of the OpenID Connect metadata document you should use:
-
-```
-https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid={client-id}
-```
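For example, the tenant- and app-specific metadata URL can be assembled like this (the tenant and client ID values in the usage line are placeholders):

```python
from urllib.parse import urlencode

def oidc_metadata_url(tenant: str, client_id: str) -> str:
    """Build the OpenID Connect metadata URL with the appid query parameter appended."""
    base = f"https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration"
    return f"{base}?{urlencode({'appid': client_id})}"

# Placeholder tenant and client ID, for illustration only
print(oidc_metadata_url("contoso.onmicrosoft.com", "00000000-0000-0000-0000-000000000000"))
```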
-
-### Update the application manifest
-
-For single tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md). As documented on the [apiApplication resource type](/graph/api/resources/apiapplication#properties), this allows an application to use claims mapping without specifying a custom signing key.
-
-Don't set the `acceptMappedClaims` property to `true` for multi-tenant apps, because doing so can allow malicious actors to create claims-mapping policies for your app.
-
-This configuration requires the requested token audience to use a verified domain name of your Azure AD tenant. Set the **Application ID URI** (the `identifierUris` property in the application manifest), for example, to `https://contoso.com/my-api` or, using the default tenant name, `https://contoso.onmicrosoft.com/my-api`.
-
-If you're not using a verified domain, Azure AD will return an `AADSTS501461` error code with message _"AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier, or use an application-specific signing key."_
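A rough local sanity check for this condition can be sketched as follows. The function name and domain list are illustrative only; Azure AD performs the authoritative validation server side when the token is requested:

```python
from urllib.parse import urlparse

def audience_uses_verified_domain(identifier_uri: str, verified_domains: list) -> bool:
    """Return True if the host of the Application ID URI matches one of the
    tenant's verified domains. Illustrative sketch, not Azure AD's actual logic."""
    host = urlparse(identifier_uri).hostname or ""
    return host.lower() in {d.lower() for d in verified_domains}

domains = ["contoso.com", "contoso.onmicrosoft.com"]  # hypothetical verified domains
print(audience_uses_verified_domain("https://contoso.com/my-api", domains))   # expected: True
print(audience_uses_verified_domain("https://fabrikam.com/my-api", domains))  # expected: False
```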
-
-## Next steps
-- Read the [claims-mapping policy type](reference-claims-mapping-policy-type.md) reference article to learn more.
-- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md).
-- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
active-directory Active Directory Enterprise App Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-enterprise-app-role-management.md
Title: Configure role claim for enterprise Azure AD apps description: Learn how to configure the role claim issued in the SAML token for enterprise applications in Azure Active Directory -+
Last updated 11/11/2021-+ # Configure the role claim issued in the SAML token for enterprise applications
active-directory Active Directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-applications-are-added.md
Title: How and why apps are added to Azure AD description: What does it mean for an application to be added to Azure AD and how do they get there? -+
Last updated 10/26/2022-+
active-directory Active Directory How To Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-to-integrate.md
Title: How to integrate with the Microsoft identity platform description: Learn the benefits of integrating your application with the Microsoft identity platform, and get resources for features like simplified sign-in, identity management, multi-factor authentication, and access control. -+
Last updated 10/01/2020-+
active-directory Authentication Flows App Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md
Title: Microsoft identity platform authentication flows & app scenarios description: Learn about application scenarios for the Microsoft identity platform, including authenticating identities, acquiring tokens, and calling protected APIs. -+ ms.assetid:
Last updated 05/05/2022-++ #Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform.
active-directory Claims Challenge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/claims-challenge.md
Title: Claims challenges, claims requests, and client capabilities description: Explanation of claims challenges, claims requests, and client capabilities in the Microsoft identity platform. --++ Previously updated : 05/11/2021- Last updated : 01/19/2023+ # Customer intent: As an application developer, I want to learn how to handle claims challenges returned from APIs protected by the Microsoft identity platform.
active-directory Configure Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md
- Title: Set lifetimes for tokens
-description: Learn how to set lifetimes for tokens issued by the Microsoft identity platform. Learn how to manage an organization's default policy, create a policy for web sign-in, create a policy for a native app that calls a web API, and manage an advanced policy.
-------- Previously updated : 10/17/2022----
-# Configure token lifetime policies (preview)
-
-In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. You can specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. Lifetimes can be set for all apps in your organization, for a specific service principal, or across organizations (multi-tenant applications).
-
-For more information, see [configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
-
-## Get started
-
-To get started, download the latest [Azure AD PowerShell Module Public Preview release](https://www.powershellgallery.com/packages/AzureADPreview).
-
-Next, run the `Connect-AzureAD` command to sign in to your Azure Active Directory (Azure AD) admin account. Run this command each time you start a new session.
-
-```powershell
-Connect-AzureAD -Confirm
-```
-
-## Create a policy for web sign-in
-
-In the following steps, you'll create a policy that requires users to authenticate more frequently in your web app. This policy sets the lifetime of the access/ID tokens to the service principal of your web app.
-
-1. Create a token lifetime policy.
-
- This policy, for web sign-in, sets the access/ID token lifetime to two hours.
-
- To create the policy, run the [New-AzureADPolicy](/powershell/module/azuread/new-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- $policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00"}}') -DisplayName "WebPolicyScenario" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
- ```
-
- To see your new policy, and to get the policy **ObjectId**, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- Get-AzureADPolicy -Id $policy.Id
- ```
-
-1. Assign the policy to your service principal. You also need to get the **ObjectId** of your service principal.
-
- Use the [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) cmdlet to see all your organization's service principals or a single service principal.
-
- ```powershell
- # Get ID of the service principal
- $sp = Get-AzureADServicePrincipal -Filter "DisplayName eq '<service principal display name>'"
- ```
-
- When you have the service principal, run the [Add-AzureADServicePrincipalPolicy](/powershell/module/azuread/add-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- # Assign policy to a service principal
- Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
- ```
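Before creating a policy, it can help to sanity-check the `AccessTokenLifetime` string (a .NET-style `dd.hh:mm:ss` duration, such as the `02:00:00` used above). The bounds below reflect my reading of the documented limits for access token lifetimes (10 minutes to 1 day); treat the configurable token lifetimes article as authoritative:

```python
from datetime import timedelta

def parse_lifetime(value: str) -> timedelta:
    """Parse a dd.hh:mm:ss or hh:mm:ss lifetime string as used in TokenLifetimePolicy definitions."""
    days = 0
    if "." in value:
        day_part, value = value.split(".", 1)
        days = int(day_part)
    hours, minutes, seconds = (int(p) for p in value.split(":"))
    return timedelta(days=days, hours=hours, minutes=minutes, seconds=seconds)

# Assumed bounds for access token lifetime; verify against the
# configurable token lifetimes documentation.
MIN_LIFETIME = timedelta(minutes=10)
MAX_LIFETIME = timedelta(days=1)

lifetime = parse_lifetime("02:00:00")
print(MIN_LIFETIME <= lifetime <= MAX_LIFETIME)
```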
-
-## View existing policies in a tenant
-
-To see all policies that have been created in your organization, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet. Any policies with property values that differ from the defaults are in scope of the refresh and session token configuration retirement.
-
-```powershell
-Get-AzureADPolicy -All $true
-```
-
-To see which apps and service principals are linked to a specific policy that you identified, run the following [`Get-AzureADPolicyAppliedObject`](/powershell/module/azuread/get-azureadpolicyappliedobject?view=azureadps-2.0-preview&preserve-view=true) cmdlet by replacing `1a37dad8-5da7-4cc8-87c7-efbc0326cf20` with any of your policy IDs. Then you can decide whether to configure Conditional Access sign-in frequency or remain with the Azure AD defaults.
-
-```powershell
-Get-AzureADPolicyAppliedObject -id 1a37dad8-5da7-4cc8-87c7-efbc0326cf20
-```
-
-If your tenant has policies which define custom values for the refresh and session token configuration properties, Microsoft recommends you update those policies to values that reflect the defaults described above. If no changes are made, Azure AD will automatically honor the default values.
-
-### Troubleshooting
-Some users have reported a `Get-AzureADPolicy : The term 'Get-AzureADPolicy' is not recognized` error after running the `Get-AzureADPolicy` cmdlet. As a workaround, run the following to uninstall/re-install the AzureAD module, and then install the AzureADPreview module:
-
-```powershell
-# Uninstall the AzureAD Module
-UnInstall-Module AzureAD
-
-# Install the AzureAD Preview Module adding the -AllowClobber
-Install-Module AzureADPreview -AllowClobber
-# Note: You can't install both the preview and the GA version on the same computer at the same time.
-
-Connect-AzureAD
-Get-AzureADPolicy -All $true
-```
-
-## Next steps
-Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
Title: Convert single-tenant app to multi-tenant on Azure AD description: Shows how to convert an existing single-tenant app to a multi-tenant app that can sign in a user from any Azure AD tenant. -+ Last updated 10/20/2022-+ #Customer intent: As an Azure user, I want to convert a single tenant app to an Azure AD multi-tenant app so any Azure AD user can sign in,
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
Title: Create an Azure AD app and service principal in the portal description: Create a new Azure Active Directory app and service principal to manage access to resources with role-based access control in Azure Resource Manager. -+ Last updated 10/11/2022-+
active-directory Howto Remove App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-remove-app.md
Title: "How to: Remove a registered app from the Microsoft identity platform" description: In this how-to, you learn how to remove an application registered with the Microsoft identity platform. -+
Last updated 07/28/2022-+ #Customer intent: As an application developer, I want to know how to remove my application from the Microsoft identity registered.
active-directory Howto Restore App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restore-app.md
Title: "How to: Restore or remove a recently deleted application with the Microsoft identity platform" description: In this how-to, you learn how to restore or permanently delete a recently deleted application registered with the Microsoft identity platform. -+
Last updated 07/28/2022-++ #Customer intent: As an application developer, I want to know how to restore or permanently delete my recently deleted application from the Microsoft identity platform.
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/id-tokens.md
Title: Microsoft identity platform ID tokens description: Learn how to use id_tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints. -+ Previously updated : 01/25/2022- Last updated : 01/19/2023+
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
Title: Mark an app as publisher verified
-description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher has verified their identity using a Microsoft Partner Network account that has completed the verification process and has associated this MPN account with their application registration.
+description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Microsoft Partner Network (MPN) account that has completed the verification process and has associated this MPN account with that application registration.
active-directory Microsoft Graph Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/microsoft-graph-intro.md
- Title: Microsoft Graph API
-description: The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources.
-------- Previously updated : 10/08/2021----
-# Microsoft Graph API
-
-The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
-
-Microsoft Graph exposes REST APIs and client libraries to access data in the following Microsoft cloud services:
-
-- Microsoft 365
-- Enterprise Mobility and Security
-- Windows 10
-- Dynamics 365 Business Central
-
-## Versions
-
-The following versions of the Microsoft Graph API are currently available:
-
-- **Beta version**: The beta version includes APIs that are currently in preview and are accessible in the `https://graph.microsoft.com/beta` endpoint. To start using the beta APIs, see [Microsoft Graph beta endpoint reference](/graph/api/overview?view=graph-rest-beta&preserve-view=true).
-- **v1.0 version**: The v1.0 version includes APIs that are generally available and ready for production use. The v1.0 version is accessible in the `https://graph.microsoft.com/v1.0` endpoint. To start using the v1.0 APIs, see [Microsoft Graph REST API v1.0 reference](/graph/api/overview?view=graph-rest-1.0&preserve-view=true).
-
-For more information about Microsoft Graph API versions, see [Versioning, support, and breaking change policies for Microsoft Graph](/graph/versioning-and-support).
-
-## Get started
-
-To read from or write to a resource such as a user or an email message, you construct a request that looks like the following pattern:
-
-`{HTTP method} https://graph.microsoft.com/{version}/{resource}?{query-parameters}`
-
-For more information about the elements of the constructed request, see [Use the Microsoft Graph API](/graph/use-the-api)
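The request pattern above can be sketched in Python. This is illustrative only; the `$select` and `$top` values in the usage line are standard OData query options, and the helper name is my own:

```python
from urllib.parse import urlencode

GRAPH_BASE = "https://graph.microsoft.com"

def graph_url(version: str, resource: str, query=None) -> str:
    """Assemble a Microsoft Graph request URL from version, resource, and optional query parameters."""
    url = f"{GRAPH_BASE}/{version}/{resource}"
    if query:
        url += "?" + urlencode(query)
    return url

# e.g. GET the first five users, selecting only displayName
print(graph_url("v1.0", "users", {"$select": "displayName", "$top": "5"}))
```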
-
-Quickstart samples are available to show you how to access the power of the Microsoft Graph API. The samples that are available access two services with one authentication: Microsoft account and Outlook. Each quickstart accesses information from Microsoft account users' profiles and displays events from their calendar.
-The quickstarts involve four steps:
-
-- Select your platform
-- Get your app ID (client ID)
-- Build the sample
-- Sign in, and view events on your calendar
-
-When you complete the quickstart, you have an app that's ready to run. For more information, see the [Microsoft Graph quickstart FAQ](/graph/quick-start-faq). To get started with the samples, see [Microsoft Graph QuickStart](https://developer.microsoft.com/graph/quick-start).
-
-## Tools
-
-**Microsoft Graph Explorer** is a web-based tool that you can use to build and test requests to the Microsoft Graph API. Access Microsoft Graph Explorer at https://developer.microsoft.com/graph/graph-explorer.
-
-**Postman** is another tool you can use for making requests to the Microsoft Graph API. You can download Postman at https://www.getpostman.com. To interact with Microsoft Graph in Postman, use the [Microsoft Graph Postman collection](/graph/use-postman).
-
-## Next steps
-
-For more information about Microsoft Graph, including usage information and tutorials, see:
-
-- [Use the Microsoft Graph API](/graph/use-the-api)
-- [Microsoft Graph tutorials](/graph/tutorials)
active-directory Msal Android Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md
android ms.devlang: java Previously updated : 10/15/2020 Last updated : 01/18/2023 - # Enable cross-app SSO on Android using MSAL
In this how-to, you'll learn how to configure the SDKs used by your application
This how-to assumes you know how to: -- Provision your app using the Azure portal. For more information on this topic, see the instructions for creating an app in [the Android tutorial](./tutorial-v2-android.md#create-a-project)-- Integrate your application with the [Microsoft Authentication Library for Android](https://github.com/AzureAD/microsoft-authentication-library-for-android).
+- Provision your app using the Azure portal. For more information, see the instructions for creating an app in [the Android tutorial](./tutorial-v2-android.md#create-a-project)
+- Integrate your application with the [MSAL for Android](https://github.com/AzureAD/microsoft-authentication-library-for-android)
-## Methods for single sign-on
+## Methods for SSO
There are two ways for applications using MSAL for Android to achieve SSO:
-* Through a [broker application](#sso-through-brokered-authentication)
-* Through the [system browser](#sso-through-system-browser)
+- Through a [broker application](#sso-through-brokered-authentication)
+- Through the [system browser](#sso-through-system-browser)
-
- It is recommended to use a broker application for benefits like device-wide SSO, account management, and conditional access. However, it requires your users to download additional applications.
+ It's recommended to use a broker application for benefits like device-wide SSO, account management, and conditional access. However, it requires your users to download additional applications.
## SSO through brokered authentication
-We recommend that you use one of Microsoft's authentication brokers to participate in device-wide single sign-on (SSO) and to meet organizational Conditional Access policies. Integrating with a broker provides the following benefits:
+We recommend that you use one of Microsoft's authentication brokers to participate in device-wide SSO and to meet organizational Conditional Access policies. Integrating with a broker provides the following benefits:
-- Device single sign-on
+- Device SSO
- Conditional Access for: - Intune App Protection - Device Registration (Workplace Join) - Mobile Device Management - Device-wide Account Management
- - via Android AccountManager & Account Settings
+ - via Android AccountManager & Account Settings
- "Work Account" - custom account type On Android, the Microsoft Authentication Broker is a component that's included in the [Microsoft Authenticator](https://play.google.com/store/apps/details?id=com.azure.authenticator) and [Intune Company Portal](https://play.google.com/store/apps/details?id=com.microsoft.windowsintune.companyportal) apps.
-The following diagram illustrates the relationship between your app, the Microsoft Authentication Library (MSAL), and Microsoft's authentication brokers.
+The following diagram illustrates the relationship between your app, MSAL, and Microsoft's authentication brokers.
![Diagram showing how an application relates to MSAL, broker apps, and the Android account manager.](./media/brokered-auth/brokered-deployment-diagram.png)
If a device doesn't already have a broker app installed, MSAL instructs the user
#### When a broker is installed
-When a broker is installed on a device, all subsequent interactive token requests (calls to `acquireToken()`) are handled by the broker rather than locally by MSAL. Any SSO state previously available to MSAL is not available to the broker. As a result, the user will need to authenticate again, or select an account from the existing list of accounts known to the device.
+When a broker is installed on a device, all subsequent interactive token requests (calls to `acquireToken()`) are handled by the broker rather than locally by MSAL. Any SSO state previously available to MSAL isn't available to the broker. As a result, the user will need to authenticate again, or select an account from the existing list of accounts known to the device.
Installing a broker doesn't require the user to sign in again. Only when the user needs to resolve an `MsalUiRequiredException` will the next request go to the broker. `MsalUiRequiredException` can be thrown for several reasons, and needs to be resolved interactively. For example:
Installing a broker doesn't require the user to sign in again. Only when the use
#### When a broker is uninstalled
-If there is only one broker hosting app installed, and it is removed, then the user will need to sign in again. Uninstalling the active broker removes the account and associated tokens from the device.
+If there's only one broker hosting app installed, and it's removed, then the user will need to sign in again. Uninstalling the active broker removes the account and associated tokens from the device.
-If Intune Company Portal is installed and is operating as the active broker, and Microsoft Authenticator is also installed, then if the Intune Company Portal (active broker) is uninstalled the user will need to sign in again. Once they sign in again, the Microsoft Authenticator app becomes the active broker.
+If Intune Company Portal is installed and operating as the active broker, and Microsoft Authenticator is also installed, then uninstalling the Intune Company Portal (the active broker) requires the user to sign in again. After the user signs in again, the Microsoft Authenticator app becomes the active broker.
### Integrating with a broker
Windows:
keytool -exportcert -alias androiddebugkey -keystore %HOMEPATH%\.android\debug.keystore | openssl sha1 -binary | openssl base64 ```
-Once you've generated a signature hash with *keytool*, use the Azure portal to generate the redirect URI:
+Once you've generated a signature hash with _keytool_, use the Azure portal to generate the redirect URI:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select your Android app in **App registrations**.
-1. Select **Authentication** > **Add a platform** > **Android**.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="/azure/active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you registered your application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations**, then select your application.
+1. Under **Manage**, select **Authentication** > **Add a platform** > **Android**.
1. In the **Configure your Android app** pane that opens, enter the **Signature hash** that you generated earlier and a **Package name**. 1. Select the **Configure** button.
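Putting these pieces together: the Android redirect URI the portal generates has the form `msauth://<package-name>/<url-encoded-signature-hash>`. A sketch of that assembly follows; the package name and hash below are placeholders, and the URI generated by the Azure portal is authoritative:

```python
from urllib.parse import quote

def android_redirect_uri(package_name: str, signature_hash_b64: str) -> str:
    """Assemble an MSAL Android redirect URI: msauth://<package>/<url-encoded hash>.
    Illustrative only; use the URI the Azure portal generates."""
    return f"msauth://{package_name}/{quote(signature_hash_b64, safe='')}"

# Placeholder package name and Base64 signature hash
print(android_redirect_uri("com.contoso.app", "1wIqXSqBj7w+h11ZifsnqwgyKrY="))
```

Note that the Base64 hash must be URL-encoded, so `+`, `/`, and `=` become `%2B`, `%2F`, and `%3D`.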
If you get an `MsalClientException` with error code `"BROKER_BIND_FAILURE"`, the
It might not be immediately clear that broker integration is working, but you can use the following steps to check:

1. On your Android device, complete a request using the broker.
-1. In the settings on your Android device, look for a newly created account corresponding to the account that you authenticated with. The account should be of type *Work account*.
+1. In the settings on your Android device, look for a newly created account corresponding to the account that you authenticated with. The account should be of type _Work account_.
You can remove the account from settings if you want to repeat the test.

## SSO through system browser
-Android applications have the option to use the WebView, system browser, or Chrome Custom Tabs for authentication user experience. If the application is not using brokered authentication, it will need to use the system browser rather than the native webview in order to achieve SSO.
+Android applications have the option to use the WebView, system browser, or Chrome Custom Tabs for authentication user experience. If the application isn't using brokered authentication, it will need to use the system browser rather than the native webview in order to achieve SSO.
### Authorization agents

Choosing a specific strategy for authorization agents is optional and represents additional functionality apps can customize. Most apps will use the MSAL defaults (see [Understand the Android MSAL configuration file](msal-configuration.md) to see the various defaults).
-MSAL supports authorization using a `WebView`, or the system browser. The image below shows how it looks using the `WebView`, or the system browser with CustomTabs or without CustomTabs:
+MSAL supports authorization using a `WebView` or the system browser. The image below shows how it looks using the `WebView`, or the system browser with CustomTabs or without CustomTabs:
![MSAL login examples](./media/authorization-agents/sign-in-ui.jpg)
-### Single sign-on implications
+### SSO implications
By default, applications integrated with MSAL use the system browser's Custom Tabs to authorize. Unlike WebViews, Custom Tabs share a cookie jar with the default system browser enabling fewer sign-ins with web or other native apps that have integrated with Custom Tabs. If the application uses a `WebView` strategy without integrating Microsoft Authenticator or Company Portal support into their app, users won't have a single sign-on experience across the device or between native apps and web apps.
-If the application uses MSAL with a broker like Microsoft Authenticator or Intune Company Portal, then users can have a SSO experience across applications if the they have an active sign-in with one of the apps.
+If the application uses MSAL with a broker like Microsoft Authenticator or Intune Company Portal, then users can have an SSO experience across applications if they have an active sign-in with one of the apps.
### WebView
To use the in-app WebView, put the following line in the app configuration JSON
```json
"authorization_user_agent" : "WEBVIEW"
```
-When using the in-app `WebView`, the user signs in directly to the app. The tokens are kept inside the sandbox of the app and aren't available outside the app's cookie jar. As a result, the user can't have a SSO experience across applications unless the apps integrate with the Authenticator or Company Portal.
+When using the in-app `WebView`, the user signs in directly to the app. The tokens are kept inside the sandbox of the app and aren't available outside the app's cookie jar. As a result, the user can't have an SSO experience across applications unless the apps integrate with the Authenticator or Company Portal.
However, `WebView` does provide the capability to customize the look and feel for sign-in UI. See [Android WebViews](https://developer.android.com/reference/android/webkit/WebView) for more about how to do this customization.
By default, MSAL uses the browser and a [custom tabs](https://developer.chrome.c
```json
"authorization_user_agent" : "BROWSER"
```
-Use this approach to provide a SSO experience through the device's browser. MSAL uses a shared cookie jar, which allows other native apps or web apps to achieve SSO on the device by using the persist session cookie set by MSAL.
+Use this approach to provide an SSO experience through the device's browser. MSAL uses a shared cookie jar, which allows other native apps or web apps to achieve SSO on the device by using the persistent session cookie set by MSAL.
### Browser selection heuristic
Because it's impossible for MSAL to specify the exact browser package to use on
MSAL primarily retrieves the default browser from the package manager and checks if it is in a tested list of safe browsers. If not, MSAL falls back on using the Webview rather than launching another non-default browser from the safe list. The default browser will be chosen regardless of whether it supports custom tabs. If the browser supports Custom Tabs, MSAL will launch the Custom Tab. Custom Tabs have a look and feel closer to an in-app `WebView` and allow basic UI customization. See [Custom Tabs in Android](https://developer.chrome.com/multidevice/android/customtabs) to learn more.
-If there are no browser packages on the device, MSAL uses the in-app `WebView`. If the device default setting isn't changed, the same browser should be launched for each sign in to ensure a SSO experience.
+If there are no browser packages on the device, MSAL uses the in-app `WebView`. If the device default setting isn't changed, the same browser should be launched for each sign-in to ensure an SSO experience.
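The selection logic described above can be sketched as follows. This is an illustrative approximation only, not MSAL's actual implementation; the method name `chooseAgent` and the string return values are hypothetical:

```java
import java.util.List;

public class BrowserSelection {
    // Hypothetical sketch of the heuristic: use the default browser if it's
    // in the tested safe list (preferring a Custom Tab when supported);
    // otherwise fall back to the in-app WebView.
    static String chooseAgent(String defaultBrowser, List<String> safeList,
                              boolean supportsCustomTabs) {
        if (defaultBrowser == null || !safeList.contains(defaultBrowser)) {
            return "WEBVIEW"; // no safe default browser: use the in-app WebView
        }
        // The default browser is chosen regardless of Custom Tabs support;
        // Custom Tabs are used only if that browser supports them.
        return supportsCustomTabs ? "CUSTOM_TAB:" + defaultBrowser
                                  : "BROWSER:" + defaultBrowser;
    }

    public static void main(String[] args) {
        List<String> safe = List.of("com.android.chrome", "org.mozilla.firefox");
        System.out.println(chooseAgent("com.android.chrome", safe, true));   // CUSTOM_TAB:com.android.chrome
        System.out.println(chooseAgent("com.example.unknown", safe, true));  // WEBVIEW
        System.out.println(chooseAgent(null, safe, false));                  // WEBVIEW
    }
}
```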
#### Tested Browsers

The following browsers have been tested to see if they correctly redirect to the `"redirect_uri"` specified in the configuration file:
-| Device | Built-in Browser | Chrome | Opera | Microsoft Edge | UC Browser | Firefox |
-| -- |:-:| --:|--:|--:|--:|--:|
-| Nexus 4 (API 17) | pass | pass |not applicable |not applicable |not applicable |not applicable |
-| Samsung S7 (API 25) | pass<sup>1</sup> | pass | pass | pass | fail |pass |
-| Huawei (API 26) |pass<sup>2</sup> | pass | fail | pass | pass |pass |
-| Vivo (API 26) |pass|pass|pass|pass|pass|fail|
-| Pixel 2 (API 26) |pass | pass | pass | pass | fail |pass |
-| Oppo | pass | not applicable<sup>3</sup>|not applicable |not applicable |not applicable | not applicable|
-| OnePlus (API 25) |pass | pass | pass | pass | fail |pass |
-| Nexus (API 28) |pass | pass | pass | pass | fail |pass |
-|MI | pass | pass | pass | pass | fail |pass |
+| Device | Built-in Browser | Chrome | Opera | Microsoft Edge | UC Browser | Firefox |
+| - | :--: | -: | -: | -: | -: | -: |
+| Nexus 4 (API 17) | pass | pass | not applicable | not applicable | not applicable | not applicable |
+| Samsung S7 (API 25) | pass<sup>1</sup> | pass | pass | pass | fail | pass |
+| Huawei (API 26) | pass<sup>2</sup> | pass | fail | pass | pass | pass |
+| Vivo (API 26) | pass | pass | pass | pass | pass | fail |
+| Pixel 2 (API 26) | pass | pass | pass | pass | fail | pass |
+| Oppo | pass | not applicable<sup>3</sup> | not applicable | not applicable | not applicable | not applicable |
+| OnePlus (API 25) | pass | pass | pass | pass | fail | pass |
+| Nexus (API 28) | pass | pass | pass | pass | fail | pass |
+| MI | pass | pass | pass | pass | fail | pass |
<sup>1</sup>Samsung's built-in browser is Samsung Internet.<br/>
<sup>2</sup>Huawei's built-in browser is Huawei Browser.<br/>
The following browsers have been tested to see if they correctly redirect to the
## Next steps
-[Shared device mode for Android devices](msal-android-shared-devices.md) allows you to configure an Android device so that it can be easily shared by multiple employees.
+[Shared device mode for Android devices](msal-android-shared-devices.md) allows you to configure an Android device so that it can be easily shared by multiple employees.
active-directory Msal Compare Msal Js And Adal Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
Once your changes are done, run the app and test your authentication scenario:
```shell
npm start
```
-## Example: Securing web apps with ADAL Node vs. MSAL Node
+## Example: Securing a SPA with ADAL.js vs. MSAL.js
The snippets below demonstrate the minimal code required for a single-page application authenticating users with the Microsoft identity platform and getting an access token for Microsoft Graph, using first ADAL.js and then MSAL.js:
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
# Publisher verification
-Publisher verification gives app users and organization admins information about the authenticity of a developer who publishes an app that integrates with the Microsoft identity platform.
+Publisher verification gives app users and organization admins information about the authenticity of the developer's organization that publishes an app that integrates with the Microsoft identity platform.
-An app that's publisher verified means that the app's publisher has verified their identity with Microsoft. Identity verification includes using a [Microsoft Partner Network (MPN)](https://partner.microsoft.com/membership) account that's been [verified](/partner-center/verification-responses) and associating the MPN account with an app registration.
+An app that's publisher verified means that the app's publisher (app developer) has verified the authenticity of their organization with Microsoft. Verifying an app includes using a Microsoft Partner Network (MPN) account that's been [verified](/partner-center/verification-responses) and associating the MPN account with an app registration.
When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages:
active-directory Redirect Uris Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/redirect-uris-ios.md
Title: Use redirect URIs with MSAL (iOS/macOS)
-description: Learn about the differences between the Microsoft Authentication Library for ObjectiveC (MSAL for iOS and macOS) and Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate between them.
+description: Learn about the differences between the Microsoft Authentication Library for Objective-C (MSAL for iOS and macOS) and Azure AD Authentication Library for Objective-C (ADAL.ObjC) and how to migrate between them.
Previously updated : 08/28/2019 Last updated : 01/18/2023 #Customer intent: As an application developer, I want to learn about how to use redirect URIs.
-# Using redirect URIs with the Microsoft authentication library for iOS and macOS
+# Using redirect URIs with the Microsoft Authentication Library (MSAL) for iOS and macOS
When a user authenticates, Azure Active Directory (Azure AD) sends the token to the app by using the redirect URI registered with the Azure AD application.
-The Microsoft Authentication library (MSAL) requires that the redirect URI be registered with the Azure AD app in a specific format. MSAL uses a default redirect URI, if you don't specify one. The format is `msauth.[Your_Bundle_Id]://auth`.
+MSAL requires that the redirect URI be registered with the Azure AD app in a specific format. MSAL uses a default redirect URI if you don't specify one. The format is `msauth.[Your_Bundle_Id]://auth`.
The default redirect URI format works for most apps and scenarios, including brokered authentication and system web view. Use the default format whenever possible.
-However, you may need to change the redirect URI for advanced scenarios, as described below.
+However, you may need to change the redirect URI for advanced scenarios, as described in the following section.
## Scenarios that require a different redirect URI
-### Cross-app single sign on (SSO)
+### Cross-app single sign-on (SSO)
-For the Microsoft Identity platform to share tokens across apps, each app needs to have the same client ID or application ID. This is the unique identifier provided when you registered your app in the portal (not the application bundle ID that you register per app with Apple).
+For the Microsoft identity platform to share tokens across apps, each app needs to have the same client ID or application ID. The client ID is the unique identifier provided when you registered your app in the Azure portal (not the application bundle ID that you register per app with Apple).
The redirect URIs need to be different for each iOS app. This allows the Microsoft identity service to uniquely identify different apps that share an application ID. Each application can have multiple redirect URIs registered in the Azure portal, and each app in your suite will have a different redirect URI. For example, given the following application registration in the Azure portal:
-* Client ID: `ABCDE-12345` (this is a single client ID)
-* RedirectUris: `msauth.com.contoso.app1://auth`, `msauth.com.contoso.app2://auth`, `msauth.com.contoso.app3://auth`
+- Client ID: `ABCDE-12345`
+- RedirectUris: `msauth.com.contoso.app1://auth`, `msauth.com.contoso.app2://auth`, `msauth.com.contoso.app3://auth`
App1 uses redirect URI `msauth.com.contoso.app1://auth`.\
App2 uses `msauth.com.contoso.app2://auth`.\
App3 uses `msauth.com.contoso.app3://auth`.
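The default redirect URI format above can be expressed as a one-line transformation of the app's bundle (or package) identifier. The helper below is illustrative only and not part of MSAL:

```java
public class RedirectUri {
    // Hypothetical helper: builds the default MSAL redirect URI
    // (msauth.[Bundle_Id]://auth) from an app's bundle identifier.
    static String defaultRedirectUri(String bundleId) {
        return "msauth." + bundleId + "://auth";
    }

    public static void main(String[] args) {
        System.out.println(defaultRedirectUri("com.contoso.app1")); // msauth.com.contoso.app1://auth
        System.out.println(defaultRedirectUri("com.contoso.app2")); // msauth.com.contoso.app2://auth
    }
}
```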
### Migrating from ADAL to MSAL
-When migrating code that used the Azure AD Authentication Library (ADAL) to MSAL, you may already have a redirect URI configured for your app. You can continue using the same redirect URI as long as your ADAL app was configured to support brokered scenarios and your redirect URI satisfies the MSAL redirect URI format requirements.
+When migrating code that used the Azure Active Directory Authentication Library (ADAL) to MSAL, you may already have a redirect URI configured for your app. You can continue using the same redirect URI as long as your ADAL app was configured to support brokered scenarios and your redirect URI satisfies the MSAL redirect URI format requirements.
## MSAL redirect URI format requirements
-* The MSAL redirect URI must be in the form `<scheme>://host`
+- The MSAL redirect URI must be in the form `<scheme>://host`
- Where `<scheme>` is a unique string that identifies your app. It's primarily based on the Bundle Identifier of your application to guarantee uniqueness. For example, if your app's Bundle ID is `com.contoso.myapp`, your redirect URI would be in the form: `msauth.com.contoso.myapp://auth`.
+ Where `<scheme>` is a unique string that identifies your app. It's primarily based on the Bundle Identifier of your application to guarantee uniqueness. For example, if your app's Bundle ID is `com.contoso.myapp`, your redirect URI would be in the form: `msauth.com.contoso.myapp://auth`.
- If you're migrating from ADAL, your redirect URI will likely have this format: `<scheme>://[Your_Bundle_Id]`, where `scheme` is a unique string. This format will continue to work when you use MSAL.
+ If you're migrating from ADAL, your redirect URI will likely have this format: `<scheme>://[Your_Bundle_Id]`, where `scheme` is a unique string. The format will continue to work when you use MSAL.
-* `<scheme>` must be registered in your app's Info.plist under `CFBundleURLTypes > CFBundleURLSchemes`. In this example, Info.plist has been opened as source code:
+- `<scheme>` must be registered in your app's Info.plist under `CFBundleURLTypes > CFBundleURLSchemes`. In this example, Info.plist has been opened as source code:
- ```xml
- <key>CFBundleURLTypes</key>
- <array>
- <dict>
- <key>CFBundleURLSchemes</key>
- <array>
- <string>msauth.[BUNDLE_ID]</string>
- </array>
- </dict>
- </array>
- ```
+ ```xml
+ <key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.[BUNDLE_ID]</string>
+ </array>
+ </dict>
+ </array>
+ ```
MSAL will verify whether your redirect URI is registered correctly, and return an error if it isn't.
-
-* If you want to use universal links as a redirect URI, the `<scheme>` must be `https` and doesn't need to be declared in `CFBundleURLSchemes`. Instead, configure the app and domain per Apple's instructions at [Universal Links for Developers](https://developer.apple.com/ios/universal-links/) and call the `handleMSALResponse:sourceApplication:` method of `MSALPublicClientApplication` when your application is opened through a universal link.
+
+- If you want to use universal links as a redirect URI, the `<scheme>` must be `https` and doesn't need to be declared in `CFBundleURLSchemes`. Instead, configure the app and domain per Apple's instructions at [Universal Links for Developers](https://developer.apple.com/ios/universal-links/) and call the `handleMSALResponse:sourceApplication:` method of `MSALPublicClientApplication` when your application is opened through a universal link.
## Use a custom redirect URI
-To use a custom redirect URI, pass the `redirectUri` parameter to `MSALPublicClientApplicationConfig` and pass that object to `MSALPublicClientApplication` when you initialize the object. If the redirect URI is invalid, the initializer will return `nil` and set the `redirectURIError`with additional information. For example:
+To use a custom redirect URI, pass the `redirectUri` parameter to `MSALPublicClientApplicationConfig` and pass that object to `MSALPublicClientApplication` when you initialize the object. If the redirect URI is invalid, the initializer will return `nil` and set the `redirectURIError` with additional information. For example:
Objective-C:
let config = MSALPublicClientApplicationConfig(clientId: "your-client-id",
                                                authority: authority)
do {
    let application = try MSALPublicClientApplication(configuration: config)
- // continue on with application
+ // continue on with application
} catch let error as NSError {
    // handle error here
-}
+}
```

## Handle the URL opened event

Your application should call MSAL when it receives any response through URL schemes or universal links. Call the `handleMSALResponse:sourceApplication:` method of `MSALPublicClientApplication` when your application is opened. Here's an example for custom schemes:
Objective-C:
openURL:(NSURL *)url options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options {
- return [MSALPublicClientApplication handleMSALResponse:url
+ return [MSALPublicClientApplication handleMSALResponse:url
                                sourceApplication:options[UIApplicationOpenURLOptionsSourceApplicationKey]];
}
```
func application(_ app: UIApplication, open url: URL, options: [UIApplication.Op
}
```

## Next steps

Learn more about [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
active-directory Reference Saml Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-saml-tokens.md
Title: SAML 2.0 token claims reference description: Claims reference with details on the claims included in SAML 2.0 tokens issued by the Microsoft identity platform, including their JWT equivalents.-+
Previously updated : 03/29/2021- Last updated : 01/19/2023+
active-directory Request Custom Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/request-custom-claims.md
Title: Request custom claims (MSAL iOS/macOS) description: Learn how to request custom claims. -+ Previously updated : 08/26/2019- Last updated : 01/19/2023+
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
Title: Configure a web app that signs in users description: Learn how to build a web app that signs in users (code configuration) -+
Last updated 12/8/2022-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-registration.md
Title: Register a web app that signs in users description: Learn how to register a web app that signs in users -+
Last updated 12/6/2022-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-overview.md
Title: Sign in users from a Web app description: Learn how to build a web app that signs in users (overview) -+
Last updated 10/12/2022-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-production.md
Title: Move web app that signs in users to production description: Learn how to build a web app that signs in users (move to production) -+
Last updated 09/17/2019-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md
Title: Write a web app that signs in/out users description: Learn how to build a web app that signs in/out users -+
Last updated 07/14/2020-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
Title: Application types for the Microsoft identity platform description: The types of apps and scenarios supported by the Microsoft identity platform. -+
Last updated 09/09/2022-+
active-directory Web App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart.md
Previously updated : 11/16/2021 Last updated : 01/18/2023 zone_pivot_groups: web-app-quickstart
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
Title: Create a trust relationship between a user-assigned managed identity and an external identity provider description: Set up a trust relationship between a user-assigned managed identity in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates. -+ Previously updated : 10/24/2022- Last updated : 01/19/2023+ zone_pivot_groups: identity-wif-mi-methods
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
Title: Create a trust relationship between an app and an external identity provider description: Set up a trust relationship between an app in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates. -+ Previously updated : 12/13/2022- Last updated : 01/19/2023+ zone_pivot_groups: identity-wif-apps-methods
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Phone Standard_USGOV_GCCHIGH | MCOEV_USGOV_GCCHIGH | 985fcb26-7b94-475b-b512-89356697be71 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft Teams Phone Resource Account | PHONESYSTEM_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | Microsoft 365 Phone Standard Resource Account (f47330e9-c134-43b3-9993-e7f004506889) |
| Microsoft Teams Phone Resource Account for GCC | PHONESYSTEM_VIRTUALUSER_GOV | 2cf22bcb-0c9e-4bc6-8daf-7e7654c0f285 | MCOEV_VIRTUALUSER_GOV (0628a73f-3b4a-4989-bd7b-0f8823144313) | Microsoft 365 Phone Standard Resource Account for Government (0628a73f-3b4a-4989-bd7b-0f8823144313) |
-| Microsoft Teams Premium | Microsoft_Teams_Premium | 989a1621-93bc-4be0-835c-fe30171d6463 | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
+| Microsoft Teams Premium | Microsoft_Teams_Premium | 989a1621-93bc-4be0-835c-fe30171d6463 | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>MCO_VIRTUAL_APPT (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Virtual Appointments (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
| Microsoft Teams Rooms Basic | Microsoft_Teams_Rooms_Basic | 6af4b3d6-14bb-4a2a-960c-6c902aad34f3 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Microsoft Teams Rooms Basic without Audio Conferencing | Microsoft_Teams_Rooms_Basic_without_Audio_Conferencing | 50509a35-f0bd-4c5e-89ac-22f0e16a00f8 | TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
| Microsoft Teams Rooms Pro | Microsoft_Teams_Rooms_Pro | 4cde982a-ede4-4409-9ae6-b003453c8ea6 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Previously updated : 10/24/2022 Last updated : 01/20/2023
Setting up SAML/WS-Fed IdP federation doesn't change the authentication method
Currently, the Azure AD SAML/WS-Fed federation feature doesn't support sending a signed authentication token to the SAML identity provider.
+**What permissions are required to configure a SAML/WS-Fed identity provider?**
+
+You need to be an [External Identity Provider Administrator](../roles/permissions-reference.md#external-identity-provider-administrator) or a [Global Administrator](../roles/permissions-reference.md#global-administrator) in your Azure AD tenant to configure a SAML/WS-Fed identity provider.
+
## Step 1: Determine if the partner needs to update their DNS text records

Depending on the partner's IdP, the partner might need to update their DNS records to enable federation with you. Use the following steps to determine if DNS updates are needed.
Next, you'll configure federation with the IdP configured in step 1 in Azure AD.
### To configure federation in the Azure AD portal
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
-2. Select **External Identities** > **All identity providers**.
-3. Select **New SAML/WS-Fed IdP**.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an External Identity Provider Administrator or a Global Administrator.
+2. In the left pane, select **Azure Active Directory**.
+3. Select **External Identities** > **All identity providers**.
+4. Select **New SAML/WS-Fed IdP**.
![Screenshot showing button for adding a new SAML or WS-Fed IdP.](media/direct-federation/new-saml-wsfed-idp.png)
On the **All identity providers** page, you can view the list of SAML/WS-Fed ide
![Screenshot showing an identity provider in the SAML WS-Fed list](media/direct-federation/new-saml-wsfed-idp-list-multi.png)
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
-1. Select **External Identities**.
-1. Select **All identity providers**.
-1. Under **SAML/WS-Fed identity providers**, scroll to an identity provider in the list or use the search box.
-1. To update the certificate or modify configuration details:
+1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator.
+2. In the left pane, select **Azure Active Directory**.
+3. Select **External Identities**.
+4. Select **All identity providers**.
+5. Under **SAML/WS-Fed identity providers**, scroll to an identity provider in the list or use the search box.
+6. To update the certificate or modify configuration details:
   - In the **Configuration** column for the identity provider, select the **Edit** link.
   - On the configuration page, modify any of the following details:
     - **Display name** - Display name for the partner's organization.
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
Previously updated : 01/06/2023 Last updated : 01/20/2023
To use a Facebook account as an [identity provider](identity-providers.md), you
Now you'll set the Facebook client ID and client secret, either by entering them in the Azure AD portal or by using PowerShell. You can test your Facebook configuration by signing up via a user flow on an app enabled for self-service sign-up.

### To configure Facebook federation in the Azure AD portal
-1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator.
2. Under **Azure services**, select **Azure Active Directory**.
3. In the left menu, select **External Identities**.
4. Select **All identity providers**, then select **Facebook**.
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Previously updated : 07/12/2022 Last updated : 01/20/2023
First, create a new project in the Google Developers Console to obtain a client
You'll now set the Google client ID and client secret. You can use the Azure portal or PowerShell to do so. Be sure to test your Google federation configuration by inviting yourself. Use a Gmail address and try to redeem the invitation with your invited Google account.

**To configure Google federation in the Azure portal**
-1. Go to the [Azure portal](https://portal.azure.com). On the left pane, select **Azure Active Directory**.
-2. Select **External Identities**.
-3. Select **All identity providers**, and then select the **Google** button.
-4. Enter the client ID and client secret you obtained earlier. Select **Save**:
+1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator.
+2. In the left pane, select **Azure Active Directory**.
+3. Select **External Identities**.
+4. Select **All identity providers**, and then select the **Google** button.
+5. Enter the client ID and client secret you obtained earlier. Select **Save**:
![Screenshot that shows the Add Google identity provider page.](media/google-federation/google-identity-provider.png)
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/identity-providers.md
Previously updated : 09/14/2022 Last updated : 01/20/2023
External Identities offers a variety of identity providers.
> [!NOTE]
> Federated SAML/WS-Fed IdPs can't be used in your self-service sign-up user flows.
+To configure federation with Google, Facebook, or a SAML/WS-Fed identity provider, you'll need to be an [External Identity Provider Administrator](../roles/permissions-reference.md#external-identity-provider-administrator) or a [Global Administrator](../roles/permissions-reference.md#global-administrator) in your Azure AD tenant.
+
## Adding social identity providers

Azure AD is enabled by default for self-service sign-up, so users always have the option of signing up using an Azure AD account. However, you can enable other identity providers, including social identity providers like Google or Facebook. To set up social identity providers in your Azure AD tenant, you'll create an application at the identity provider and configure credentials. You'll obtain a client or app ID and a client or app secret, which you can then add to your Azure AD tenant.
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Title: Leave an organization - Azure Active Directory-
+ Title: Leave an organization as a guest user
+ description: Shows how an Azure AD B2B guest user can leave an organization by using the Access Panel. Previously updated : 12/16/2022 Last updated : 01/17/2023 --++
adobe-target: true
# Leave an organization as an external user
-As an Azure Active Directory (Azure AD) [B2B collaboration](what-is-b2b.md) or [B2B direct connect](b2b-direct-connect-overview.md) user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
+As an Azure Active Directory (Azure AD) B2B collaboration or B2B direct connect user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
-You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization. This article is intended for administrators. If you're a user looking for information about how to manage and leave an organization, see the [Manage organizations article.](https://support.microsoft.com/account-billing/manage-organizations-for-a-work-or-school-account-in-the-my-account-portal-a9b65a70-fec5-4a1a-8e00-09f99ebdea17)
+## Before you begin
+You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization. This article is intended for administrators. If you're a user looking for information about how to manage and leave an organization, see the [Manage organizations article.](https://support.microsoft.com/account-billing/manage-organizations-for-a-work-or-school-account-in-the-my-account-portal-a9b65a70-fec5-4a1a-8e00-09f99ebdea17)
## What organizations do I belong to?

1. To view the organizations you belong to, first open your **My Account** page. You either have a work or school account created by an organization or a personal account such as for Xbox, Hotmail, or Outlook.com.
   - If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
- - If you're using a personal account or email one-time passcode, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com or https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
+ - If you're using a personal account or email one-time passcode, you'll need to use a My Account URL that includes your tenant name or tenant ID.
+ For example:
+ https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com
+ or
+ https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
1. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
In the **Home organization** section, there's no link to **Leave** your organiza
For the external organizations listed under **Other organizations you collaborate with**, you might not be able to leave on your own, for example when:

- the organization you want to leave doesn't allow users to leave by themselves
- your account has been disabled
Administrators can use the **External user leave settings** to control whether e
- **Yes**: Users can leave the organization themselves without approval from your admin or privacy contact.
- **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin or privacy contact to request removal from your organization.

:::image type="content" source="media/leave-the-organization/external-user-leave-settings.png" alt-text="Screenshot showing External user leave settings in the portal.":::

### Account removal
If desired, a tenant administrator can permanently delete the account at any tim
1. Select the check box next to a deleted user, and then select **Delete permanently**.
-Permanent deletion can be initiated by the admin, or it happens at the end of the soft deletion period. Permanent deletion can take up to an extra 30 days for data removal ([learn more](/compliance/regulatory/gdpr-dsr-azure#step-5-delete)).
+Permanent deletion can be initiated by the admin, or it happens at the end of the soft deletion period. Permanent deletion can take up to an extra 30 days for data removal.
+
+For B2B direct connect users, data removal begins as soon as the user selects **Leave** in the confirmation message and can take up to 30 days to complete.
-> [!NOTE]
-> For B2B direct connect users, data removal begins as soon as the user selects **Leave** in the confirmation message and can take up to 30 days to complete ([learn more](/compliance/regulatory/gdpr-dsr-azure#delete-a-users-data-when-there-is-no-account-in-the-azure-tenant)).
## Next steps
-- Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)
-- [Use audit logs and access reviews](auditing-and-reporting.md)
+- Learn more about [user deletion](/compliance/regulatory/gdpr-dsr-azure#step-5-delete) and about how to delete a user's data when there's [no account in the Azure tenant](/compliance/regulatory/gdpr-dsr-azure#delete-a-users-data-when-there-is-no-account-in-the-azure-tenant).
+- For more information about GDPR, see the GDPR section of the [Service Trust portal](https://servicetrust.microsoft.com/ViewPage/GDPRGetStarted).
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
Previously updated : 07/13/2021 Last updated : 01/16/2023
To use an [API connector](api-connectors-overview.md), you first create the API
3. In the left menu, select **External Identities**.
4. Select **All API connectors**, and then select **New API connector**.
- :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-new.png" alt-text="Providing the basic configuration like target URL and display name for an API connector during the creation experience.":::
+ :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-new.png" alt-text="Screenshot of adding a new API connector to External Identities.":::
5. Provide a display name for the call. For example, **Check approval status**.
6. Provide the **Endpoint URL** for the API call.
7. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](self-service-sign-up-secure-api-connector.md).
- :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-config.png" alt-text="Providing authentication configuration for an API connector during the creation experience.":::
+ :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-config.png" alt-text="Screenshot of configuring an API connector.":::
8. Select **Save**.
Content-type: application/json
}
```
-The exact claims sent to the API depends on which information is provided by the identity provider. 'email' is always sent.
+The exact claims sent to the API depend on which information is provided by the identity provider. 'email' is always sent.
### Expected response types from the web API at this step
Content-type: application/json
  "ui_locales":"en-US"
}
```
-The exact claims sent to the API depends on which information is collected from the user or is provided by the identity provider.
+The exact claims sent to the API depend on which information is collected from the user or is provided by the identity provider.
### Expected response types from the web API at this step
A blocking response exits the user flow. It can be purposely issued by the API t
See an example of a [blocking response](#example-of-a-blocking-response).

### Validation-error response
- When the API responds with a validation-error response, the user flow stays on the attribute collection page and a `userMessage` is displayed to the user. The user can then edit and resubmit the form. This type of response can be used for input validation.
+ When the API responds with a validation-error response, the user flow stays on the attribute collection page, and a `userMessage` is displayed to the user. The user can then edit and resubmit the form. This type of response can be used for input validation.
See an example of a [validation-error response](#example-of-a-validation-error-response).
Content-type: application/json
| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `Continue`. |
| \<builtInUserAttribute> | \<attribute-type> | No | Values can be stored in the directory if they're selected as a **Claim to receive** in the API connector configuration and **User attributes** for a user flow. Values can be returned in the token if selected as an **Application claim**. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`, it is *optional*. Returned values can overwrite values collected from a user. |
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim doesn't need to contain `_<extensions-app-id>_`, it's *optional*. Returned values can overwrite values collected from a user. |
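The continuation shape described in the table above can be sketched as a small helper. This is an illustrative sketch, not the service's own code; the extensions app ID and the `CustomAttribute` name are hypothetical placeholders, and the extension-attribute prefix is optional in a real response.

```python
import json

def continuation_response(extensions_app_id, **claims):
    # Continuation response sketch: the user flow proceeds, and the
    # returned claim values can overwrite values collected from the user.
    body = {"version": "1.0.0", "action": "Continue"}
    for name, value in claims.items():
        # The extension-attribute prefix is optional in the response;
        # it's included here only to show the naming convention.
        body[f"extension_{extensions_app_id}_{name}"] = value
    return json.dumps(body)

# Hypothetical custom attribute returned by the API:
resp = json.loads(continuation_response("abc123", CustomAttribute="approved"))
```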
### Example of a blocking response
Content-type: application/json
{
  "version": "1.0.0",
  "action": "ShowBlockPage",
- "userMessage": "There was a problem with your request. You are not able to sign up at this time.",
+ "userMessage": "There was an error with your request. Please try again or contact support.",
}
```
Ensure that:
* Your API implements an authentication method outlined in [secure your API Connector](self-service-sign-up-secure-api-connector.md).
* Your API responds as quickly as possible to ensure a fluid user experience.
* Azure AD will wait for a maximum of *20 seconds* to receive a response. If none is received, it will make *one more attempt (retry)* at calling your API.
- * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../../azure-functions/functions-scale.md)
+ * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../../azure-functions/functions-scale.md#overview-of-plans)
* Ensure high availability of your API.
* Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
* Your endpoints must comply with the Azure AD TLS and cipher security requirements. For more information, see [TLS and cipher suite requirements](../../active-directory-b2c/https-cipher-tls-requirements.md).
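The timeout behavior described above (wait up to 20 seconds, then retry exactly once) can be sketched as follows. This is only an illustration of the documented call pattern, not Azure AD's actual implementation; `call_api` is a hypothetical stand-in for the HTTP call to your endpoint.

```python
# Illustrative sketch: wait up to 20 seconds for a response and make
# one more attempt (retry) if the first call times out.
TIMEOUT_SECONDS = 20
MAX_ATTEMPTS = 2  # initial call + one retry

def call_with_retry(call_api):
    last_error = None
    for _attempt in range(MAX_ATTEMPTS):
        try:
            return call_api(timeout=TIMEOUT_SECONDS)
        except TimeoutError as err:
            last_error = err
    raise last_error

# A fast endpoint succeeds on the first attempt:
result = call_with_retry(lambda timeout: {"action": "Continue"})
```

This is why a hosting plan that keeps the API warm matters: a cold start that exceeds the timeout burns both attempts and fails the user flow.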
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/2-secure-access-current-state.md
If your email and network plans are enabled, you can investigate content sharing
* Identify, prevent, and monitor accidental sharing
  * [Learn about data loss prevention](/microsoft-365/compliance/dlp-learn-about-dlp?view=o365-worldwide&preserve-view=true )
* Identify unauthorized apps
- * [Microsoft Defender for Cloud Apps](/security/business/siem-and-xdr/microsoft-defender-cloud-apps?rtc=1)
+ * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps)
## Next steps
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
By default, Teams allows external access. The organization can communicate with
Sharing through SharePoint and OneDrive adds users not in the Entitlement Management process.

* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
-* [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office.md)
+* [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office)
### Documents in email
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
Previously updated : 01/06/2023 Last updated : 01/17/2023
Use the following list to plan for authentication deployment.
  * See, [What is Conditional Access?](../conditional-access/overview.md)
  * See, [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
* **Azure AD self-service password reset (SSPR)** - Help users reset a password without administrator intervention:
- * See, [Passwordless authentication options for Azure AD](/articles/active-directory/authentication/concept-authentication-passwordless.md)
+ * See, [Passwordless authentication options for Azure AD](../authentication/concept-authentication-passwordless.md)
  * See, [Plan an Azure Active Directory self-service password reset deployment](../authentication/howto-sspr-deployment.md)
* **Passwordless authentication** - Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 security keys:
  * See, [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)
Use the following list to plan for authentication deployment.
Use the following list to help deploy applications and devices.

* **Single sign-on (SSO)** - Enable user access to apps and resources while signing in once, without being required to enter credentials again:
- * See, [What is SSO in Azure AD?](/articles/active-directory/manage-apps/what-is-single-sign-on.md)
+ * See, [What is SSO in Azure AD?](../manage-apps/what-is-single-sign-on.md)
  * See, [Plan a SSO deployment](../manage-apps/plan-sso-deployment.md)
* **My Apps portal** - A web-based portal to discover and access applications. Enable user productivity with self-service, for instance requesting access to groups, or managing access to resources on behalf of others.
  * See, [My Apps portal overview](../manage-apps/myapps-overview.md)
Use the following list to help deploy applications and devices.
The following list describes features and services for productivity gains in hybrid scenarios.

* **Active Directory Federation Services (AD FS)** - Migrate user authentication from federation to cloud with pass-through authentication or password hash sync:
- * See, [What is federation with Azure AD?](/articles/active-directory/hybrid/whatis-fed.md)
+ * See, [What is federation with Azure AD?](../hybrid/whatis-fed.md)
  * See, [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)
* **Azure AD Application Proxy** - Enable employees to be productive at any place or time, and from any device. Learn about software as a service (SaaS) apps in the cloud and corporate apps on-premises. Azure AD Application Proxy enables access without virtual private networks (VPNs) or demilitarized zones (DMZs):
- * See, [Remote access to on-premises applications through Azure AD Application Proxy](/articles/active-directory/app-proxy/application-proxy.md)
+ * See, [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md)
  * See, [Plan an Azure AD Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md)
* **Seamless single sign-on (Seamless SSO)** - Use Seamless SSO for user sign-in, on corporate devices connected to a corporate network. Users don't need to enter passwords to sign in to Azure AD, and usually don't need to enter usernames. Authorized users access cloud-based apps without extra on-premises components:
  * See, [Azure Active Directory SSO: Quickstart](../hybrid/how-to-connect-sso-quick-start.md)
- * See, [Azure Active Directory Seamless SSO: Technical deep dive](/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md)
+ * See, [Azure Active Directory Seamless SSO: Technical deep dive](../hybrid/how-to-connect-sso-how-it-works.md)
## Users
Learn more: [Secure access for a connected world—meet Microsoft Entra](https:/
* **Reporting and monitoring** - Your Azure AD reporting and monitoring solution design has dependencies and constraints: legal, security, operations, environment, and processes.
  * See, [Azure Active Directory reporting and monitoring deployment dependencies](../reports-monitoring/plan-monitoring-and-reporting.md)
* **Access reviews** - Understand and manage access to resources:
- * See, [What are access reviews?](/articles/active-directory/governance/access-reviews-overview.md)
+ * See, [What are access reviews?](../governance/access-reviews-overview.md)
  * See, [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md)
* **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access.
  * See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md)
In your first phase, target IT, usability, and other users who can test and prov
Widen the pilot to larger groups of users by using dynamic membership, or by manually adding users to the targeted group(s).
-Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)]
+Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
Previously updated : 1/5/2023 Last updated : 01/17/2023
Technology project success depends on managing expectations, outcomes, and respo
- Ask questions, get answers, and receive notifications
- Identify a partner or resource outside your organization to support you
-Learn more: [Include the right stakeholders](./active-directory-deployment-plans.md)
+Learn more: [Include the right stakeholders](active-directory-deployment-plans.md)
### Communications
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
In addition, you can integrate application delivery controllers like F5 BIG-IP A
For apps that are built within your company, your developers can use the [Microsoft identity platform](../develop/index.yml) to implement authentication and authorization. Applications integrated with the platform will be [registered with Azure AD](../develop/quickstart-register-app.md) and managed just like any other app in your portfolio.
-Developers can use the platform for both internal-use apps and customer facing apps, and there are other benefits that come with using the platform. [Microsoft Authentication Libraries (MSAL)](../develop/msal-overview.md), which is part of the platform, allows developers to enable modern experiences like multi-factor authentication and the use of security keys to access their apps without needing to implement it themselves. Additionally, apps integrated with the Microsoft identity platform can access [Microsoft Graph](../develop/microsoft-graph-intro.md) - a unified API endpoint providing the Microsoft 365 data that describes the patterns of productivity, identity, and security in an organization. Developers can use this information to implement features that increase productivity for your users. For example, by identifying the people the user has been interacting with recently and surfacing them in the app's UI.
+Developers can use the platform for both internal-use apps and customer facing apps, and there are other benefits that come with using the platform. [Microsoft Authentication Libraries (MSAL)](../develop/msal-overview.md), which is part of the platform, allows developers to enable modern experiences like multi-factor authentication and the use of security keys to access their apps without needing to implement it themselves. Additionally, apps integrated with the Microsoft identity platform can access [Microsoft Graph](/graph/overview) - a unified API endpoint providing the Azure AD data that describes the patterns of productivity, identity, and security in an organization. Developers can use this information to implement features that increase productivity for your users. For example, by identifying the people the user has been interacting with recently and surfacing them in the app's UI.
We have a [video series](https://www.youtube.com/watch?v=zjezqZPPOfc&amp;list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX) that provides a comprehensive introduction to the platform as well as [many code samples](../develop/sample-v2-code.md) in supported languages and platforms.
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Secure collaboration with your external partners ensures they have correct access to internal resources, and for the expected duration. Learn about governance practices to reduce security risks, meet compliance goals, and ensure accurate access.
+## Governance benefits
+
Governed collaboration improves clarity of ownership of access, reduces exposure of sensitive resources, and enables you to attest to access policy.

* Manage external organizations, and their users who access resources
* Ensure access is correct, reviewed, and time bound
* Empower business owners to manage collaboration with delegation
+## Collaboration methods
+
Traditionally, organizations use one of two methods to collaborate:

* Create locally managed credentials for external users, or
* Establish federations with partner identity providers (IdP)
-
+
Both methods have drawbacks. For more information, see the following table.

| Area of concern | Local credentials | Federation |
Both methods have drawbacks. For more information, see the following table.
Azure Active Directory (Azure AD) B2B integrates with other tools in Azure AD, and Microsoft 365 services. Azure AD B2B simplifies collaboration, reduces expense, and increases security.
-Azure AD B2B benefits:
+## Azure AD B2B benefits
- If the home identity is disabled or deleted, external users can't access resources
- User home IdP handles authentication and credential management
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly.
+## December 2022
+
+### General Availability - Risk-based Conditional Access for workload identities
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+Customers can now bring one of the most powerful forms of access control in the industry to workload identities. Conditional Access supports risk-based policies for workload identities. Organizations can block sign-in attempts when Identity Protection detects compromised apps or services. For more information, see: [Create a risk-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-risk-based-conditional-access-policy).
+++
+### General Availability - API to recover accidentally deleted Service Principals
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Identity Lifecycle Management
+
+Restore a recently deleted application, group, servicePrincipal, administrative unit, or user object from deleted items. If an item was accidentally deleted, you can fully restore the item. This isn't applicable to security groups, which are deleted permanently. A recently deleted item will remain available for up to 30 days. After 30 days, the item is permanently deleted. For more information, see: [servicePrincipal resource type](/graph/api/resources/serviceprincipal).
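+As a sketch of the restore call, Microsoft Graph exposes a `POST /directory/deletedItems/{id}/restore` endpoint. The snippet below only builds the request rather than sending it; the object ID is a hypothetical placeholder, and an access token with appropriate permissions is assumed for the real call.

```python
# Sketch of the Microsoft Graph request that restores a soft-deleted
# directory object (for example a servicePrincipal) from deleted items.
GRAPH_BASE = "https://graph.microsoft.com/v1.0"

def restore_request(object_id):
    # POST with an empty body restores the object within its
    # 30-day soft-deletion window.
    return "POST", f"{GRAPH_BASE}/directory/deletedItems/{object_id}/restore"

method, url = restore_request("00000000-0000-0000-0000-000000000000")
```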
+++
+### General Availability - Using Staged rollout to test Cert Based Authentication (CBA)
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** Identity Security & Protection
+
+We're excited to announce the general availability of hybrid cloud Kerberos trust, a new Windows Hello for Business deployment model to enable a passwordless sign-in experience. With this new model, we've made Windows Hello for Business much easier to deploy than the existing key trust and certificate trust deployment models by removing the need for maintaining complicated public key infrastructure (PKI), and Azure Active Directory (AD) Connect synchronization wait times. For more information, see: [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
+++
## November 2022
-### General availability - Windows Hello for Business, cloud Kerberos trust deployment
+### General Availability - Windows Hello for Business, cloud Kerberos trust deployment
We're excited to announce the general availability of hybrid cloud Kerberos trus
-### General availability - Expression builder with Application Provisioning
+### General Availability - Expression builder with Application Provisioning
**Type:** Changed feature **Service category:** Provisioning
Accidental deletion of users in your apps or in your on-premises directory could
-### General availability - SSPR writeback is now available for disconnected forests using Azure AD Connect Cloud sync
+### General Availability - SSPR writeback is now available for disconnected forests using Azure AD Connect Cloud sync
Azure AD Connect Cloud Sync Password writeback now provides customers the abilit
-### General availability - Prevent accidental deletions
+### General Availability - Prevent accidental deletions
For more information, see: [Enable accidental deletions prevention in the Azure
-### General availability - Create group in administrative unit
+### General Availability - Create group in administrative unit
**Type:** New feature **Service category:** RBAC
Groups Administrators and other roles scoped to an administrative unit can now c
-### General availability - Number matching for Microsoft Authenticator notifications
+### General Availability - Number matching for Microsoft Authenticator notifications
For more information, see: [How to use number matching in multifactor authentica
-### General availability - Additional context in Microsoft Authenticator notifications
+### General Availability - Additional context in Microsoft Authenticator notifications
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Previously updated : 12/19/2022 Last updated : 01/18/2023
In Active Directory, the default UPN suffix is the domain DNS name where you cre
For example, if you add labs.contoso.com and change the user UPNs and email to reflect that, the result is: username@labs.contoso.com.
->[!IMPORTANT]
-> If you change the suffix in Active Directory, add and verify a matching custom domain name in Azure AD.
-> [Add your custom domain name using the Azure Active Directory portal](../fundamentals/add-custom-domain.md)
+ >[!IMPORTANT]
+ > If you change the suffix in Active Directory, add and verify a matching custom domain name in Azure AD.
+ > [Add your custom domain name using the Azure Active Directory portal](../fundamentals/add-custom-domain.md)
![Screenshot of the Add customer domain option, under Custom domain names.](./media/howto-troubleshoot-upn-changes/custom-domains.png)
Users sign in to Azure AD with their userPrincipalName attribute value.
When you use Azure AD with on-premises Active Directory, user accounts are synchronized by using the Azure AD Connect service. The Azure AD Connect wizard uses the userPrincipalName attribute from the on-premises Active Directory as the UPN in Azure AD. You can change it to a different attribute in a custom installation.
->[!NOTE]
-> Define a process for when you update a User Principal Name (UPN) of a user, or for your organization.
+ >[!NOTE]
+ > Define a process for when you update a User Principal Name (UPN) of a user, or for your organization.
When you synchronize user accounts from Active Directory to Azure AD, ensure the UPNs in Active Directory map to verified domains in Azure AD.
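As a hedged illustration of the mapping rule above (the helper name and sample UPNs are made up, not from the article), a sync pre-check could flag any on-premises UPN whose suffix isn't a verified Azure AD domain:

```python
# Hypothetical pre-sync check: flag UPNs whose suffix is not a verified
# Azure AD custom domain. Such users would fall back to the tenant's
# default onmicrosoft.com domain during synchronization.

def unverified_upns(upns, verified_domains):
    """Return UPNs whose suffix is not in the verified-domain set."""
    verified = {d.lower() for d in verified_domains}
    return [u for u in upns if u.rsplit("@", 1)[-1].lower() not in verified]

users = ["alice@contoso.com", "bob@labs.contoso.com", "carol@corp.local"]
print(unverified_upns(users, ["contoso.com", "labs.contoso.com"]))
```

Here `carol@corp.local` uses a non-routable suffix, so it would be the one to remediate before sync.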
Learn more: [How UPN changes affect the OneDrive URL and OneDrive features](/sha
## Teams Meeting Notes known issues and workarounds
-Use Teams Meeting Notes to take and share notes.
-
-Learn more: [Take meeting notes in Teams](/office/take-meeting-notes-in-teams-3eadf032-0ef8-4d60-9e21-0691d317d103).
+Use Teams Meeting Notes to take and share notes.
### Known issues
active-directory Plan Hybrid Identity Design Considerations Data Protection Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-data-protection-strategy.md
na Previously updated : 04/29/2019 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Directory Sync Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-directory-sync-requirements.md
na Previously updated : 07/18/2017 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Hybrid Id Management Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md
na Previously updated : 04/29/2019 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Lifecycle Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-lifecycle-adoption-strategy.md
na Previously updated : 05/30/2018 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Multifactor Auth Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md
na Previously updated : 07/18/2017 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Nextsteps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-nextsteps.md
na Previously updated : 07/18/2017 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-overview.md
na Previously updated : 05/30/2018 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md
na Previously updated : 04/18/2022 Last updated : 01/19/2023
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
na Previously updated : 06/02/2021 Last updated : 01/19/2023
active-directory Reference Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adconnectivitytools.md
Previously updated : 05/31/2019 Last updated : 01/19/2023
active-directory Reference Connect Adsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsync.md
Previously updated : 11/30/2020 Last updated : 01/19/2023
active-directory Reference Connect Adsyncconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsyncconfig.md
Previously updated : 01/24/2019 Last updated : 01/19/2023
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsynctools.md
Previously updated : 11/30/2020 Last updated : 01/19/2023
active-directory Reference Connect Germany https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-germany.md
na Previously updated : 07/12/2017 Last updated : 01/19/2023
active-directory Reference Connect Government Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-government-cloud.md
Previously updated : 04/14/2020 Last updated : 01/19/2023
active-directory Reference Connect Health User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-user-privacy.md
na Previously updated : 04/26/2018 Last updated : 01/19/2023
active-directory Reference Connect Health Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-version-history.md
na Previously updated : 08/10/2020 Last updated : 01/19/2023
The Azure Active Directory team regularly updates Azure AD Connect Health with n
Azure AD Connect Health for Sync is integrated with Azure AD Connect installation. Read more about [Azure AD Connect release history](./reference-connect-version-history.md). For feature feedback, vote at [Connect Health User Voice channel](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789)
+## 19 January 2023
+**Agent Update**
+- Azure AD Connect Health agent for Azure AD Connect (version 3.2.2188.23)
+ - We fixed a bug where, under certain circumstances, Azure AD Connect sync errors were not getting uploaded or shown in the portal.
+ ## September 2021 **Agent Update** - Azure AD Connect Health agent for AD FS (version 3.1.113.0)
active-directory Reference Connect Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-instances.md
na Previously updated : 05/27/2019 Last updated : 01/19/2023
active-directory Reference Connect Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-ports.md
na Previously updated : 03/04/2020 Last updated : 01/19/2023
active-directory Reference Connect Pta Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-pta-version-history.md
ms.assetid: ef2797d7-d440-4a9a-a648-db32ad137494
Previously updated : 04/14/2020 Last updated : 01/19/2023
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
na Previously updated : 04/15/2020 Last updated : 01/19/2023
active-directory Reference Connect Sync Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-functions-reference.md
na Previously updated : 07/12/2017 Last updated : 01/19/2023
active-directory Reference Connect User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-user-privacy.md
na Previously updated : 05/21/2018 Last updated : 01/19/2023
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history-archive.md
ms.assetid:
Previously updated : 07/23/2020 Last updated : 01/19/2023
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
This is a bug fix release. There are no functional changes in this release.
## Next steps
-Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
na Previously updated : 01/11/2022 Last updated : 01/19/2023
active-directory Tshoot Connect Install Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-install-issues.md
na Previously updated : 01/31/2019 Last updated : 01/19/2023
active-directory Tshoot Connect Largeobjecterror Usercertificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-largeobjecterror-usercertificate.md
na Previously updated : 07/13/2017 Last updated : 01/19/2023
active-directory Tshoot Connect Object Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-object-not-syncing.md
na Previously updated : 08/10/2018 Last updated : 01/19/2023
active-directory Tshoot Connect Objectsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-objectsync.md
na Previously updated : 04/29/2019 Last updated : 01/19/2023
active-directory Tshoot Connect Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-pass-through-authentication.md
na Previously updated : 01/25/2021 Last updated : 01/19/2023
active-directory Tshoot Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-password-hash-synchronization.md
na Previously updated : 03/13/2017 Last updated : 01/19/2023
active-directory Tshoot Connect Recover From Localdb 10Gb Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-recover-from-localdb-10gb-limit.md
na Previously updated : 07/17/2017 Last updated : 01/19/2023
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sso.md
ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
Previously updated : 10/07/2019 Last updated : 01/19/2023
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sync-errors.md
na Previously updated : 01/21/2022 Last updated : 01/19/2023
active-directory Tshoot Connect Tshoot Sql Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-tshoot-sql-connectivity.md
na Previously updated : 11/30/2020 Last updated : 01/19/2023
active-directory Tutorial Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-federation.md
na Previously updated : 11/11/2022 Last updated : 01/19/2023
active-directory Tutorial Phs Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-phs-backup.md
Previously updated : 04/25/2019 Last updated : 01/19/2023
active-directory What Is Inter Directory Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/what-is-inter-directory-provisioning.md
Previously updated : 10/30/2020 Last updated : 01/19/2023
Azure AD currently supports three methods for accomplishing inter-directory prov
- [Azure AD Connect](whatis-azure-ad-connect.md) - the Microsoft tool designed to meet and accomplish your hybrid identity, including inter-directory provisioning from Active Directory to Azure AD. -- [Azure AD Connect Cloud Provisioning](../cloud-sync/what-is-cloud-sync.md) -a new Microsoft agent designed to meet and accomplish your hybrid identity goals. It is provides a light-weight inter -directory provisioning experience between Active Directory and Azure AD.
+- [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md) - a new Microsoft agent designed to meet and accomplish your hybrid identity goals. It provides a lightweight inter-directory provisioning experience between Active Directory and Azure AD.
- [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) - Microsoft's on-premises identity and access management solution that helps you manage the users, credentials, policies, and access within your organization. Additionally, MIM provides advanced inter-directory provisioning to achieve hybrid identity environments for Active Directory, Azure AD, and other directories.
active-directory Whatis Aadc Admin Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-aadc-admin-agent.md
Previously updated : 06/30/2022 Last updated : 01/19/2023
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
Previously updated : 12/2/2022 Last updated : 01/19/2023
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect.md
Previously updated : 10/06/2021 Last updated : 01/19/2023
active-directory Whatis Fed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-fed.md
na Previously updated : 11/28/2018 Last updated : 01/19/2023
active-directory Whatis Hybrid Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-hybrid-identity.md
ms.assetid: 59bd209e-30d7-4a89-ae7a-e415969825ea
Previously updated : 05/17/2019 Last updated : 01/19/2023
active-directory Whatis Phs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-phs.md
Previously updated : 06/25/2020 Last updated : 01/19/2023
active-directory Create Service Principal Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/create-service-principal-cross-tenant.md
In this article, you'll learn how to create an enterprise application in your te
Before you proceed to add the application using any of these options, check whether the enterprise application is already in your tenant by attempting to sign in to the application. If the sign-in is successful, the enterprise application already exists in your tenant.
-If you have verified that the application isn't in your tenant, proceed with any of the following ways to add the enterprise application to your tenant using the appId
+If you have verified that the application isn't in your tenant, proceed with any of the following ways to add the enterprise application to your tenant.
## Prerequisites
To add an enterprise application to your Azure AD tenant, you need:
- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.-- The client ID of the multi-tenant application.
+- The client ID (also called appId in Microsoft Graph) of the multi-tenant application.
## Create an enterprise application
where:
:::zone-end :::zone pivot="ms-graph"
-From the Microsoft Graph explorer window:
+You can use an API client such as [Graph Explorer](https://aka.ms/ge) to work with Microsoft Graph.
-1. To create the enterprise application, insert the following query:
+1. Grant the client app the *Application.ReadWrite.All* permission.
+
+1. To create the enterprise application, run the following query. The appId is the client ID of the application.
```http
- POST /servicePrincipals.
- ```
-1. Supply the following request in the **Request body**.
-
+ POST https://graph.microsoft.com/v1.0/servicePrincipals
+ Content-type: application/json
+
{ "appId": "fc876dd1-6bcb-4304-b9b6-18ddf1526b62" }
-1. Grant the Application.ReadWrite.All permission under the **Modify permissions** tab and select **Run query**.
+
+ ```
-1. To delete the enterprise application you created, run the query:
+1. To delete the enterprise application you created, run the query.
```http
- DELETE /servicePrincipals/{objectID}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals(appId='fc876dd1-6bcb-4304-b9b6-18ddf1526b62')
``` :::zone-end :::zone pivot="azure-cli"
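As a sketch of the `POST /servicePrincipals` call above built with the standard library (the token is a placeholder; acquire one with the *Application.ReadWrite.All* permission through your own auth flow before sending):

```python
import json
import urllib.request

GRAPH = "https://graph.microsoft.com/v1.0"
APP_ID = "fc876dd1-6bcb-4304-b9b6-18ddf1526b62"  # client ID from the article
TOKEN = "<access-token>"  # placeholder; do not hard-code real tokens

# Build the same POST request shown in the Graph Explorer steps.
req = urllib.request.Request(
    f"{GRAPH}/servicePrincipals",
    data=json.dumps({"appId": APP_ID}).encode(),
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment only with a real access token
print(req.get_method(), req.full_url)
```

The request is constructed but not sent, so the shape of the call can be inspected without a tenant.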
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
To delete an enterprise application, you need:
:::zone pivot="ms-graph" Delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-1. To get the list of applications in your tenant, run the following query.
+1. To get the list of service principals in your tenant, run the following query.
+ ```http
- GET /servicePrincipals
+ GET https://graph.microsoft.com/v1.0/servicePrincipals
```+ 1. Record the ID of the enterprise app you want to delete. 1. Delete the enterprise application.
-
+ ```http
- DELETE /servicePrincipals/{id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}
``` + :::zone-end ## Next steps - [Restore a deleted enterprise application](restore-application.md)+
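A minimal sketch of the list-then-delete flow above: the `GET /servicePrincipals` response is filtered by `appId` to find the object ID that `DELETE /servicePrincipals/{id}` expects. The payload shape is trimmed and the IDs are invented for illustration:

```python
# Hypothetical trimmed shape of a GET /servicePrincipals response.
sample = {
    "value": [
        {"id": "11111111-1111-1111-1111-111111111111",
         "appId": "fc876dd1-6bcb-4304-b9b6-18ddf1526b62",
         "displayName": "Contoso App"},
        {"id": "22222222-2222-2222-2222-222222222222",
         "appId": "99999999-9999-9999-9999-999999999999",
         "displayName": "Other App"},
    ]
}

def sp_id_for_app(payload, app_id):
    """Return the service principal object ID for a given client ID (appId)."""
    for sp in payload["value"]:
        if sp["appId"] == app_id:
            return sp["id"]
    return None

# This object ID is the {servicePrincipal-id} used in the DELETE call.
print(sp_id_for_app(sample, "fc876dd1-6bcb-4304-b9b6-18ddf1526b62"))
```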
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
As an admin, you can choose to try out new app launcher features while they are
To enable or disable previews for your app launchers: -- Sign in to the Azure portal as a global administrator for your directory.
+- Sign in to the Azure portal as a global administrator, application administrator, or cloud application administrator for your directory.
- Search for and select **Azure Active Directory**, then select **Enterprise applications**. - On the left menu, select **App launchers**, then select **Settings**. - Under **Preview settings**, toggle the checkboxes for the previews you want to enable or disable. To opt into a preview, toggle the associated checkbox to the checked state. To opt out of a preview, toggle the associated checkbox to the unchecked state.
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
Use the following steps to hide all Microsoft 365 applications from the My Apps
1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator for your directory. 1. Select **Azure Active Directory**. 1. Select **Enterprise applications**.
-1. Select **User settings**.
-1. For **Users can only see Office 365 apps in the Office 365 portal**, select **Yes**.
-1. Select **Save**.
+1. Select **App launchers**.
+2. Select **Settings**.
+3. For **Users can only see Microsoft 365 apps in the Microsoft 365 portal**, select **Yes**.
+4. Select **Save**.
:::zone-end ## Next steps
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Previously updated : 12/16/2022 Last updated : 01/19/2023
Azure Active Directory (Azure AD) supports modern authentication protocols that help keep applications secure. However, many business applications work in a protected corporate network, and some use legacy authentication methods. As companies build Zero Trust strategies and support hybrid and cloud environments, there are solutions that connect apps to Azure AD and provide authentication for legacy applications.
-Learn more: [Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)
+Learn more: [Zero Trust security](../../security/fundamentals/zero-trust.md)
Azure AD natively supports modern protocols:
After the SaaS applications are registered in Azure AD, the applications need to
### Connect apps to Azure AD with legacy authentication
-Your solution can enable the customer to use SSO and Azure Active Directory features, even unsupported applications. To allow access with legacy protocols, your application calls Azure AD to authenticate the user and apply Azure AD Conditional Access policies. Enable this integration from your console. Create a SAML or an OIDC application registration between your solution and Azure AD.
+Your solution can enable the customer to use SSO and Azure Active Directory features, even unsupported applications. To allow access with legacy protocols, your application calls Azure AD to authenticate the user and apply [Azure AD Conditional Access policies](../conditional-access/overview.md). Enable this integration from your console. Create a SAML or an OIDC application registration between your solution and Azure AD.
#### Create a SAML application registration
https://graph.microsoft.com/v1.0/applications/{Application Object ID}
### Apply Conditional Access policies
-Customers and partners can use the Microsoft Graph API to create or apply Conditional Access policies to customer applications. For partners, customers can apply these policies from your solution without using the Azure portal. There are two options to apply Azure AD Conditional Access policies:
+Customers and partners can use the Microsoft Graph API to create or apply per application [Conditional Access policies](../conditional-access/overview.md). For partners, customers can apply these policies from your solution without using the Azure portal. There are two options to apply Azure AD Conditional Access policies:
-- Assign the application to a Conditional Access policy-- Create a new Conditional Access policy and assign the application to it
+- [Assign the application to a Conditional Access policy](#use-a-conditional-access-policy)
+- [Create a new Conditional Access policy and assign the application to it](#create-a-new-conditional-access-policy)
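One way to picture the first option is a `PATCH` against an existing policy's application conditions via Microsoft Graph. This is a hedged sketch, not the article's exact method; `policy_id` and `app_client_id` are placeholders:

```python
import json

GRAPH = "https://graph.microsoft.com/v1.0"
policy_id = "<policy-id>"          # placeholder: existing CA policy ID
app_client_id = "<app-client-id>"  # placeholder: app to bring under the policy

# PATCH {GRAPH}/identity/conditionalAccess/policies/{policy_id}
url = f"{GRAPH}/identity/conditionalAccess/policies/{policy_id}"
patch_body = {
    "conditions": {
        "applications": {
            # Note: PATCH replaces the array, so in practice read the policy
            # first and append to its existing includeApplications list.
            "includeApplications": [app_client_id]
        }
    }
}
print("PATCH", url)
print(json.dumps(patch_body, indent=2))
```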
#### Use a Conditional Access policy
The following software-defined perimeter (SDP) solutions providers connect with
* **Strata Maverics Identity Orchestrator** * [Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) * **Zscaler Private Access**
- * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
+ * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
Title: Secure hybrid access
-description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD.
+ Title: Secure hybrid access, protect legacy apps with Azure Active Directory
+description: Find partner solutions to integrate your legacy on-premises, public cloud, or private cloud applications with Azure AD.
Previously updated : 8/17/2021 Last updated : 01/17/2023
-# Secure hybrid access: Secure legacy apps with Azure Active Directory
+# Secure hybrid access: Protect legacy apps with Azure Active Directory
-You can now protect your on-premises and cloud legacy authentication applications by connecting them to Azure Active Directory (AD) with:
+In this article, learn to protect your on-premises and cloud legacy authentication applications by connecting them to Azure Active Directory (Azure AD).
-- [Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy)
+* **[Application Proxy](#secure-hybrid-access-with-application-proxy)**:
+ * [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md)
+ * Protect users, apps, and data in the cloud and on-premises
+ * [Use it to publish on-premises web applications externally](../app-proxy/what-is-application-proxy.md)
+
+* **[Secure hybrid access through Azure AD partner integrations](#partner-integrations-for-apps-on-premises-and-legacy-authentication)**:
+ * [Pre-built solutions](#secure-hybrid-access-through-azure-ad-partner-integrations)
+ * [Apply Conditional Access policies per application](secure-hybrid-access-integrations.md#apply-conditional-access-policies)
+
+In addition to Application Proxy, you can strengthen your security posture with [Azure AD Conditional Access](../conditional-access/overview.md) and [Identity Protection](../identity-protection/overview-identity-protection.md).
-- [Secure hybrid access: Secure legacy apps with Azure Active Directory](#secure-hybrid-access-secure-legacy-apps-with-azure-active-directory)
- - [Secure hybrid access through Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy)
- - [Secure hybrid access through Azure AD partner integrations](#secure-hybrid-access-through-azure-ad-partner-integrations)
+## Single sign-on and multi-factor authentication
-You can bridge the gap and strengthen your security posture across all applications with Azure AD capabilities like [Azure AD Conditional Access](../conditional-access/overview.md) and [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). By having Azure AD as an Identity provider (IDP), you can use modern authentication and authorization methods like [single sign-on (SSO)](what-is-single-sign-on.md) and [multifactor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to secure your on-premises legacy applications.
+With Azure AD as an identity provider (IdP), you can use modern authentication and authorization methods like [single sign-on (SSO)](what-is-single-sign-on.md) and [Azure AD Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md) to secure legacy, on-premises applications.
-## Secure hybrid access through Azure AD Application Proxy
+## Secure hybrid access with Application Proxy
-Using [Application Proxy](../app-proxy/what-is-application-proxy.md) you can provide [secure remote access](../app-proxy/application-proxy-add-on-premises-application.md) to your on-premises web applications. Your users don't need to use a VPN. Users benefit by easily connecting to their applications from any device after a [SSO](../app-proxy/application-proxy-config-sso-how-to.md#how-to-configure-single-sign-on). Application Proxy provides remote access as a service and allows you to [easily publish your applications](../app-proxy/application-proxy-add-on-premises-application.md) to users outside the corporate network. It helps you scale your cloud access management without requiring you to modify your on-premises applications. [Plan an Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) deployment as a next step.
+Use Application Proxy to protect users, apps, and data in the cloud, and on premises. Use this tool for secure remote access to on-premises web applications. Users don't need to use a virtual private network (VPN); they connect to applications from devices with SSO.
-## Secure hybrid access through Azure AD partner integrations
+Learn more:
-In addition to [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md), Microsoft partners with third-party providers to enable secure access to your on-premises applications and applications that use legacy authentication.
+* [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md)
+* [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
+* [How to configure SSO to an Application Proxy application](../app-proxy/application-proxy-config-sso-how-to.md)
+* [Using Azure AD Application Proxy to publish on-premises apps for remote users](../app-proxy/what-is-application-proxy.md)
-![Illustration of Secure Hybrid Access partner integrations and Application Proxy providing access to legacy and on-premises applications after authentication with Azure AD.](./media/secure-hybrid-access/secure-hybrid-access.png)
+### Application publishing and access management
-The following partners offer pre-built solutions to support **conditional access policies per application** and provide detailed guidance for integrating with Azure AD.
+Use Application Proxy remote access as a service to publish applications to users outside the corporate network. Help improve your cloud access management without requiring modification to your on-premises applications. Plan an [Azure AD Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md).
-- [Akamai Enterprise Application Access](../saas-apps/akamai-tutorial.md)
+## Partner integrations for apps: on-premises and legacy authentication
-- [Citrix Application Delivery Controller (ADC)](../saas-apps/citrix-netscaler-tutorial.md)
+Microsoft partners with various companies that deliver pre-built solutions for on-premises applications, and applications that use legacy authentication. The following diagram illustrates a user flow from sign-in to secure access to apps and data.
-- [Datawiza Access Broker](../manage-apps/datawiza-with-azure-ad.md)
+ ![Diagram of secure hybrid access integrations and Application Proxy providing user access.](./media/secure-hybrid-access/secure-hybrid-access.png)
-- [F5 BIG-IP APM (ADC)](../manage-apps/f5-aad-integration.md)
+### Secure hybrid access through Azure AD partner integrations
-- [F5 BIG-IP APM VPN](../manage-apps/f5-aad-password-less-vpn.md)
+The following partners offer solutions to support [Conditional Access policies per application](secure-hybrid-access-integrations.md#apply-conditional-access-policies). Use the tables in the following sections to learn about the partners and Azure AD integration documentation.
-- [Kemp](../saas-apps/kemp-tutorial.md)
+|Partner|Integration documentation|
+|||
+|Akamai Technologies|[Tutorial: Azure AD SSO integration with Akamai](../saas-apps/akamai-tutorial.md)|
+|Citrix Systems, Inc.|[Tutorial: Azure AD SSO integration with Citrix ADC SAML Connector for Azure AD (Kerberos-based authentication)](../saas-apps/citrix-netscaler-tutorial.md)|
+|Datawiza|[Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](datawiza-with-azure-ad.md)|
+|F5, Inc.|[Integrate F5 BIG-IP with Azure AD](f5-aad-integration.md)</br>[Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)|
+|Progress Software Corporation, Progress Kemp|[Tutorial: Azure AD SSO integration with Kemp LoadMaster Azure AD integration](../saas-apps/kemp-tutorial.md)|
+|Perimeter 81 Ltd.|[Tutorial: Azure AD SSO integration with Perimeter 81](../saas-apps/perimeter-81-tutorial.md)|
+|Silverfort|[Tutorial: Configure Secure Hybrid Access with Azure AD and Silverfort](silverfort-azure-ad-integration.md)|
+|Strata Identity, Inc.|[Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md)|
-- [Perimeter 81](../saas-apps/perimeter-81-tutorial.md)
+#### Partners with pre-built solutions and integration documentation
-- [Silverfort Authentication Platform](../manage-apps/silverfort-azure-ad-integration.md)
+|Partner|Integration documentation|
+|||
+|Amazon Web Services, Inc.|[Tutorial: Azure AD SSO integration with AWS ClientVPN](../saas-apps/aws-clientvpn-tutorial.md)|
+|Check Point Software Technologies Ltd.|[Tutorial: Azure AD SSO integration with Check Point Remote Secure Access VPN](../saas-apps/check-point-remote-access-vpn-tutorial.md)|
+|Cisco Systems, Inc.|[Tutorial: Azure AD SSO integration with Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)|
+|Cloudflare, Inc.|[Tutorial: Configure Cloudflare with Azure AD for secure hybrid access](cloudflare-azure-ad-integration.md)|
+|Fortinet, Inc.|[Tutorial: Azure AD SSO integration with FortiGate SSL VPN](../saas-apps/fortigate-ssl-vpn-tutorial.md)|
+|Palo Alto Networks|[Tutorial: Azure AD SSO integration with Palo Alto Networks Admin UI](../saas-apps/paloaltoadmin-tutorial.md)|
+|Pulse Secure|[Tutorial: Azure AD SSO integration with Pulse Connect Secure (PCS)](../saas-apps/pulse-secure-pcs-tutorial.md)</br>[Tutorial: Azure AD SSO integration with Pulse Secure Virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)|
+|Zscaler, Inc.|[Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)|
-- [Strata](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md)
+## Next steps
+Select a partner in the preceding tables to learn how to integrate their solution with Azure AD.
-The following partners offer pre-built solutions and detailed guidance for integrating with Azure AD.
-- [AWS](../saas-apps/aws-clientvpn-tutorial.md)
-- [Check Point](../saas-apps/check-point-remote-access-vpn-tutorial.md)
-- [Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)
-- [Cloudflare](../manage-apps/cloudflare-azure-ad-integration.md)
-- [Fortinet](../saas-apps/fortigate-ssl-vpn-tutorial.md)
-- [Palo Alto Networks Global Protect](../saas-apps/paloaltoadmin-tutorial.md)
-- [Pulse Secure Pulse Connect Secure (PCS)](../saas-apps/pulse-secure-pcs-tutorial.md)
-- [Pulse Secure Virtual Traffic Manager (VTM)](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)
-- [Zscaler Private Access (ZPA)](../saas-apps/zscalerprivateaccess-tutorial.md)
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Previously updated : 12/19/2022 Last updated : 01/20/2023 # Customer intent: For an Azure AD administrator to monitor logs and report on access
Learn more:
#### Stream logs to storage and SIEM tools

* [Integrate Azure AD logs with Azure Monitor logs](./howto-integrate-activity-logs-with-log-analytics.md).
-* [Analyze Azure AD activity logs with Azure Monitor logs](/MicrosoftDocs/azure-docs/blob/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md).
+* [Analyze Azure AD activity logs with Azure Monitor logs](../reports-monitoring/howto-analyze-activity-logs-log-analytics.md).
* Learn how to [stream logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md).
* Learn how to [Archive Azure AD logs to an Azure Storage account](./quickstart-azure-monitor-route-logs-to-storage-account.md).
* [Integrate Azure AD logs with Splunk by using Azure Monitor](./howto-integrate-activity-logs-with-splunk.md)
active-directory Security Emergency Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-emergency-access.md
Some organizations use AD Domain Services and AD FS or similar identity provider
Organizations need to ensure that the credentials for emergency access accounts are kept secure and known only to individuals who are authorized to use them. Some customers use a smartcard for Windows Server AD, a [FIDO2 security key](../authentication/howto-authentication-passwordless-security-key.md) for Azure AD and others use passwords. A password for an emergency access account is usually separated into two or three parts, written on separate pieces of paper, and stored in secure, fireproof safes that are in secure, separate locations.
-If using passwords, make sure the accounts have strong passwords that do not expire the password. Ideally, the passwords should be at least 16 characters long and randomly generated.
+If using passwords, make sure the accounts have strong passwords that do not expire. Ideally, the passwords should be at least 16 characters long and randomly generated.
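As an illustration of the recommendation above, a minimal sketch of generating a strong, non-expiring emergency-access password with Python's `secrets` module (the 16-character minimum and character set follow the guidance in this section; the function name is hypothetical):

```python
import secrets
import string

def generate_emergency_password(length: int = 16) -> str:
    """Generate a random password for an emergency access account.

    Uses a cryptographically secure source and enforces the
    16-character minimum recommended above.
    """
    if length < 16:
        raise ValueError("Emergency access passwords should be at least 16 characters")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

password = generate_emergency_password(20)
```

The resulting value can then be separated into parts and stored in separate secure locations, as described above.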
## Monitor sign-in and audit logs
active-directory Firstbird Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/firstbird-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Firstbird | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Firstbird.
-------- Previously updated : 11/21/2022---
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Firstbird
-
-In this tutorial, you'll learn how to integrate Firstbird with Azure Active Directory (Azure AD). When you integrate Firstbird with Azure AD, you can:
-
-* Control in Azure AD who has access to Firstbird.
-* Enable your users to be automatically signed-in to Firstbird with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Firstbird single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
---
-* Firstbird supports **SP and IDP** initiated SSO
-* Firstbird supports **Just In Time** user provisioning
--
-## Adding Firstbird from the gallery
-
-To configure the integration of Firstbird into Azure AD, you need to add Firstbird from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Firstbird** in the search box.
-1. Select **Firstbird** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
--
-## Configure and test Azure AD single sign-on for Firstbird
-
-Configure and test Azure AD SSO with Firstbird using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Firstbird.
-
-To configure and test Azure AD SSO with Firstbird, complete the following building blocks:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Firstbird SSO](#configure-firstbird-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Firstbird test user](#create-firstbird-test-user)** - to have a counterpart of B.Simon in Firstbird that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Firstbird** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<company-domain>.auth.1brd.com/saml/sp`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<company-domain>.auth.1brd.com/saml/callback`
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<company-domain>.1brd.com/login`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Firstbird Client support team](mailto:support@firstbird.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. Firstbird application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
-
- ![image](common/edit-attribute.png)
-
-1. In addition to above, Firstbird application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirement.
-
- | Name | Source Attribute|
- | | |
- | first_name | `user.givenname` |
- | last_name | `user.surname` |
- | email | `user.mail` |
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-1. On the **Set up Firstbird** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Firstbird.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Firstbird**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Firstbird SSO
-
-Once you have completed these steps, please send Firstbird the Federation Metadata XML in a support request via e-email to [support@firstbird.com](mailto:support@firstbird.com) with the subject: "SSO configuration".
-
-Firstbird will then store the configuration in the system accordingly and activate SSO for your account. After that, a member of the support staff will contact you to verify the configuration.
-
-> [!NOTE]
-> You need to have the SSO option included in your contract.
-
-### Create Firstbird test user
-
-In this section, a user called B.Simon is created in Firstbird. Firstbird supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Firstbird, a new one is created after authentication.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Firstbird tile in the Access Panel, you should be automatically signed in to the Firstbird for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
-- [List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory](./tutorial-list.md)
-- [What is application access and single sign-on with Azure Active Directory?](../manage-apps/what-is-single-sign-on.md)
-- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)
-- [Try Firstbird with Azure AD](https://aad.portal.azure.com/)
active-directory Radancys Employee Referrals Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/radancys-employee-referrals-tutorial.md
+
+ Title: Azure AD SSO integration with Radancy's Employee Referrals
+description: Learn how to configure single sign-on between Azure Active Directory and Radancy's Employee Referrals.
++++++++ Last updated : 01/19/2023++++
+# Azure AD SSO integration with Radancy's Employee Referrals
+
+In this tutorial, you'll learn how to integrate Radancy's Employee Referrals with Azure Active Directory (Azure AD). When you integrate Radancy's Employee Referrals with Azure AD, you can:
+
+* Control in Azure AD who has access to Radancy's Employee Referrals.
+* Enable your users to be automatically signed-in to Radancy's Employee Referrals with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Radancy's Employee Referrals single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Radancy's Employee Referrals supports **SP and IDP** initiated SSO.
+* Radancy's Employee Referrals supports **Just In Time** user provisioning.
+
+## Add Radancy's Employee Referrals from the gallery
+
+To configure the integration of Radancy's Employee Referrals into Azure AD, you need to add Radancy's Employee Referrals from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add a new application, select **New application**.
+1. In the **Add from the gallery** section, type **Radancy's Employee Referrals** in the search box.
+1. Select **Radancy's Employee Referrals** from the results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Radancy's Employee Referrals
+
+Configure and test Azure AD SSO with Radancy's Employee Referrals using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Radancy's Employee Referrals.
+
+To configure and test Azure AD SSO with Radancy's Employee Referrals, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Radancy's Employee Referrals SSO](#configure-radancys-employee-referrals-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Radancy's Employee Referrals test user](#create-radancys-employee-referrals-test-user)** - to have a counterpart of B.Simon in Radancy's Employee Referrals that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Radancy's Employee Referrals** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<company-domain>.auth.1brd.com/saml/sp`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<company-domain>.auth.1brd.com/saml/callback`
+
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<company-domain>.1brd.com/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Radancy's Employee Referrals Client support team](mailto:support@firstbird.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
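The three URL patterns above differ only in the tenant's company domain, so they can be sketched as a small helper. This is an illustration only: `contoso` is a hypothetical placeholder, and the real values must come from the vendor's support team as noted above.

```python
def radancy_saml_urls(company_domain: str) -> dict:
    """Build the SAML endpoint URLs from a company domain,
    following the patterns in the Basic SAML Configuration section."""
    return {
        "identifier": f"https://{company_domain}.auth.1brd.com/saml/sp",
        "reply_url": f"https://{company_domain}.auth.1brd.com/saml/callback",
        "sign_on_url": f"https://{company_domain}.1brd.com/login",
    }

urls = radancy_saml_urls("contoso")
```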
+
+1. Radancy's Employee Referrals application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of token attributes configuration.](common/edit-attribute.png "Image")
+
+1. In addition to the above, the Radancy's Employee Referrals application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them as per your requirements.
+
+ | Name | Source Attribute|
+ | | |
+ | first_name | user.givenname |
+ | last_name | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Radancy's Employee Referrals** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Radancy's Employee Referrals.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Radancy's Employee Referrals**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Radancy's Employee Referrals SSO
+
+1. Log in to the Radancy's Employee Referrals website as an administrator.
+
+1. Navigate to **Account Preferences** > **Authentication** > **Single Sign-On**.
+
+1. In the **SAML IdP Metadata Configuration** section, perform the following steps:
+
+ ![Screenshot shows how to upload the Federation Metadata.](media/radancys-employee-referrals-tutorial/certificate.png "Federation")
+
+ 1. In the **Entity ID** textbox, paste the **Azure AD Identifier** value, which you've copied from the Azure portal.
+
+ 1. In the **SSO-service URL** textbox, paste the **Login URL** value, which you've copied from the Azure portal.
+
+ 1. In the **Signing certificate** textbox, paste the **Federation Metadata XML** file, which you've downloaded from the Azure portal.
+
+ 1. **Save configuration** and verify the setup.
+
+ > [!NOTE]
+ > You need to have the SSO option included in your contract.
+
+### Create Radancy's Employee Referrals test user
+
+In this section, a user called B.Simon is created in Radancy's Employee Referrals. Radancy's Employee Referrals supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Radancy's Employee Referrals, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in Azure portal. This will redirect to Radancy's Employee Referrals Sign-on URL where you can initiate the login flow.
+
+* Go to Radancy's Employee Referrals Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in Azure portal and you should be automatically signed in to the Radancy's Employee Referrals for which you set up the SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Radancy's Employee Referrals tile in My Apps, if configured in SP mode you would be redirected to the application sign-on page for initiating the login flow, and if configured in IDP mode, you should be automatically signed in to the Radancy's Employee Referrals for which you set up the SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Radancy's Employee Referrals, you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Memo 22 09 Other Areas Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md
You can use analytics in the following tools to aggregate information from Azure
Automation is an important aspect of Zero Trust, particularly in remediation of alerts that occur because of threats or security changes in your environment. In Azure AD, automation integrations are possible to help remediate alerts or perform actions that can improve your security posture. Automations are based on information received from monitoring and analytics.
-[Microsoft Graph API](../develop/microsoft-graph-intro.md) REST calls are the most common way to programmatically access Azure AD. This API-based access requires an Azure AD identity with the necessary authorizations and scope. With the Graph API, you can integrate Microsoft's and other tools. Follow the principles outlined in this article when you're performing the integration.
+[Microsoft Graph API](/graph/overview) REST calls are the most common way to programmatically access Azure AD. This API-based access requires an Azure AD identity with the necessary authorizations and scope. With the Graph API, you can integrate Microsoft's and other tools. Follow the principles outlined in this article when you're performing the integration.
We recommend that you set up an Azure function or an Azure logic app to use a [system-assigned managed identity](../managed-identities-azure-resources/overview.md). Your logic app or function contains the steps or code necessary to automate the desired actions. You assign permissions to the managed identity to grant the service principal the necessary directory permissions to perform the required actions. Grant managed identities only the minimum rights necessary.
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/standards-overview.md
# Configure Azure Active Directory to meet identity standards
-In today's world of interconnected infrastructures, compliance with governmental and industry frameworks and standards is often mandatory. Microsoft engages with governments, regulators, and standards bodies to understand and meet compliance requirements for Azure. There are [90 Azure compliance certifications](../../compliance/index.yml), which include many for various regions and countries. Azure has 35 compliance offerings for key industries including,
+In today's world of interconnected infrastructures, compliance with governmental and industry frameworks and standards is often mandatory. Microsoft engages with governments, regulators, and standards bodies to understand and meet compliance requirements for Azure. There are [90 Azure compliance certifications](../../compliance/index.yml), which include many for various countries/regions. Azure has 35 compliance offerings for key industries including,
* Health * Government
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
Stateless application migration is the most straightforward case:
Carefully plan your migration of stateful applications to avoid data loss or unexpected downtime.
-* If you use Azure Files, you can mount the file share as a volume into the new cluster. See [Mount Static Azure Files as a Volume](./azure-files-volume.md#mount-file-share-as-a-persistent-volume).
-* If you use Azure Managed Disks, you can only mount the disk if unattached to any VM. See [Mount Static Azure Disk as a Volume](./azure-disk-volume.md#mount-disk-as-a-volume).
+* If you use Azure Files, you can mount the file share as a volume into the new cluster. See [Mount Static Azure Files as a Volume](./azure-csi-files-storage-provision.md#mount-file-share-as-a-persistent-volume).
+* If you use Azure Managed Disks, you can only mount the disk if unattached to any VM. See [Mount Static Azure Disk as a Volume](./azure-csi-disk-storage-provision.md#mount-disk-as-a-volume).
* If neither of those approaches works, you can use backup and restore options. See [Velero on Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md).

#### Azure Files
If not, one possible migration approach involves the following steps:
If you want to start with an empty share and make a copy of the source data, you can use the [`az storage file copy`](/cli/azure/storage/file/copy) commands to migrate your data. - #### Migrating persistent volumes If you're migrating existing persistent volumes to AKS, you'll generally follow these steps:
If you're migrating existing persistent volumes to AKS, you'll generally follow
1. Take snapshots of the disks.
1. Create new managed disks from the snapshots.
1. Create persistent volumes in AKS.
-1. Update pod specifications to [use existing volumes](./azure-disk-volume.md) rather than PersistentVolumeClaims (static provisioning).
+1. Update pod specifications to [use existing volumes](./azure-disk-csi.md) rather than PersistentVolumeClaims (static provisioning).
1. Deploy your application to AKS.
1. Validate your application is working correctly.
1. Point your live traffic to your new AKS cluster.
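The "Create persistent volumes in AKS" step can be sketched as a static PersistentVolume manifest that binds an existing managed disk through the Azure Disk CSI driver (`disk.csi.azure.com`). This is an illustration under assumptions: the disk URI segments in angle brackets are hypothetical placeholders, and the manifest is emitted as JSON, which Kubernetes accepts alongside YAML.

```python
import json

def static_azure_disk_pv(name: str, disk_uri: str, size_gi: int) -> dict:
    """Minimal PersistentVolume manifest that statically binds an
    existing Azure managed disk via the Azure Disk CSI driver."""
    return {
        "apiVersion": "v1",
        "kind": "PersistentVolume",
        "metadata": {"name": name},
        "spec": {
            "capacity": {"storage": f"{size_gi}Gi"},
            "accessModes": ["ReadWriteOnce"],
            "persistentVolumeReclaimPolicy": "Retain",
            "csi": {
                "driver": "disk.csi.azure.com",
                # Resource URI of the managed disk created from the snapshot.
                "volumeHandle": disk_uri,
            },
        },
    }

manifest = static_azure_disk_pv(
    "migrated-pv",
    "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/disks/<disk-name>",
    128,
)
print(json.dumps(manifest, indent=2))
```

A matching PersistentVolumeClaim would then reference this volume by name so pods can mount it without dynamic provisioning.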
Some open-source tools can help you create managed disks and migrate volumes bet
* [Azure CLI Disk Copy extension](https://github.com/noelbundick/azure-cli-disk-copy-extension) copies and converts disks across resource groups and Azure regions.
* [Azure Kube CLI extension](https://github.com/yaron2/azure-kube-cli) enumerates ACS Kubernetes volumes and migrates them to an AKS cluster.

### Deployment of your cluster configuration

We recommend that you use your existing Continuous Integration (CI) and Continuous Delivery (CD) pipeline to deploy a known-good configuration to AKS. You can use Azure Pipelines to [build and deploy your applications to AKS](/azure/devops/pipelines/ecosystems/kubernetes/aks-template). Clone your existing deployment tasks and ensure that `kubeconfig` points to the new AKS cluster.
You may want to move your AKS cluster to a [different region supported by AKS][r
In addition, if you have any services running on your AKS cluster, you will need to install and configure those services on your cluster in the new region.

In this article, we summarized migration details for:

> [!div class="checklist"]
In this article, we summarized migration details for:
> * Considerations for stateful applications
> * Deployment of your cluster configuration

[region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
aks Aks Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md
Title: Azure Kubernetes Service support and help options description: How to obtain help and support for questions or problems when you create solutions using Azure Kubernetes Service. -+ Last updated 10/18/2022
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster.- Previously updated : 12/27/2022- Last updated : 01/18/2023
The Azure Blob storage Container Storage Interface (CSI) driver is a [CSI specif
By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles.
-Mounting Azure Blob storage as a file system into a container or pod, enables you to use blob storage with a number of applications that work massive amounts of unstructured data. For example:
+When you mount Azure Blob storage as a file system into a container or pod, it enables you to use blob storage with a number of applications that work with massive amounts of unstructured data. For example:
* Log file data * Images, documents, and streaming video or audio
To enable the driver on an existing cluster, include the `--enable-blob-driver`
az aks update --enable-blob-driver -n myAKSCluster -g myResourceGroup ```
-You're prompted to confirm there isn't an open-source Blob CSI driver installed. After confirming, it may take several minutes to complete this action. Once it's complete, you should see in the output the status of enabling the driver on your cluster. The following example is resembles the section indicating the results of the previous command:
+You're prompted to confirm there isn't an open-source Blob CSI driver installed. After you confirm, it may take several minutes to complete this action. Once it's complete, you should see in the output the status of enabling the driver on your cluster. The following example resembles the section indicating the results of the previous command:
```output "storageProfile": {
To have a storage volume persist for your workload, you can use a StatefulSet. T
## Next steps -- To learn how to manually set up a static persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-static].-- To learn how to dynamically set up a persistent volume, see [Create and use a dynamic persistent volume with Azure Blob storage][azure-csi-blob-storage-dynamic].
+- To learn how to set up a static or dynamic persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-provision].
- To learn how to use the CSI driver for Azure Disks, see [Use Azure Disks with CSI driver][azure-disk-csi-driver]
- To learn how to use the CSI driver for Azure Files, see [Use Azure Files with CSI driver][azure-files-csi-driver]
- For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage].
To have a storage volume persist for your workload, you can use a StatefulSet. T
[concepts-storage]: concepts-storage.md [persistent-volume]: concepts-storage.md#persistent-volumes [csi-drivers-aks]: csi-storage-drivers.md
-[azure-csi-blob-storage-dynamic]: azure-csi-blob-storage-dynamic.md
-[azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
-[csi-storage-driver-overview]: csi-storage-drivers.md
+[azure-csi-blob-storage-provision]: azure-csi-blob-storage-provision.md
[azure-disk-csi-driver]: azure-disk-csi.md [azure-files-csi-driver]: azure-files-csi.md [install-azure-cli]: /cli/azure/install_azure_cli
aks Azure Csi Blob Storage Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-dynamic.md
- Title: Create a dynamic Azure Blob storage persistent volume in Azure Kubernetes Service (AKS)-
-description: Learn how to dynamically create a persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 07/21/2022---
-# Dynamically create and use a persistent volume with Azure Blob storage in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect using [blobfuse][blobfuse-overview] or [Network File System][nfs-overview] (NFS).
-
-This article shows you how to install the Container Storage Interface (CSI) driver and dynamically create an Azure Blob storage container to attach to a pod in AKS.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
---- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support].--- [Enable the Blob storage CSI driver][enable-blob-csi-driver] (preview) on your AKS cluster.-
-## Dynamic provisioning parameters
-
-|Name | Description | Example | Mandatory | Default value|
-| | | | | |
-|skuName | Specify an Azure storage account type (alias: `storageAccountType`). | `Standard_LRS`, `Premium_LRS`, `Standard_GRS`, `Standard_RAGRS` | No | `Standard_LRS`|
-|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.|
-|resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
-|storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
-|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`|
-|containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. |
-|containerNamePrefix | Specify Azure storage directory prefix created by driver. | my |Can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
-|server | Specify Azure storage account domain name. | Existing storage account DNS domain name, for example `<storage-account>.privatelink.blob.core.windows.net`. | No | If empty, driver uses default `<storage-account>.blob.core.windows.net` or other sovereign cloud storage account DNS domain name.|
-|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true`,`false` | No | `false`|
-|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net` | No | If empty, driver will use default storage endpoint suffix according to cloud environment.|
-|tags | [tags][az-tags] would be created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | ""|
-|matchTags | Match tags when driver tries to find a suitable storage account. | `true`,`false` | No | `false`|
-| | **Following parameters are only for blobfuse** | | | |
-|subscriptionID | Specify Azure subscription ID where blob storage directory will be created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
-|storeAccountKey | Specify store account key to Kubernetes secret. <br><br> Note: <br> `false` means driver uses kubelet identity to get account key. | `true`,`false` | No | `true`|
-|secretName | Specify secret name to store account key. | | No |
-|secretNamespace | Specify the namespace of secret to store account key. | `default`,`kube-system`, etc. | No | pvc namespace |
-|isHnsEnabled | Enable `Hierarchical namespace` for Azure DataLake storage account. | `true`,`false` | No | `false`|
-| | **Following parameters are only for NFS protocol** | | | |
-|mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No |
-
-## Create a persistent volume claim using built-in storage class
-
-A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation.
-
-1. Create a file named `blob-nfs-pvc.yaml` and copy in the following YAML.
-
- ```yml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: azure-blob-storage
- annotations:
- volume.beta.kubernetes.io/storage-class: azureblob-nfs-premium
- spec:
- accessModes:
- - ReadWriteMany
- storageClassName: my-blobstorage
- resources:
- requests:
- storage: 5Gi
- ```
-
-2. Create the persistent volume claim with the kubectl create command:
-
- ```bash
- kubectl create -f blob-nfs-pvc.yaml
- ```
-
-Once completed, the Blob storage container will be created. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
-
-```bash
-kubectl get pvc azure-blob-storage
-```
-
-The output of the command resembles the following example:
-
-```bash
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-azure-blob-storage Bound pvc-b88e36c5-c518-4d38-a5ee-337a7dda0a68 5Gi RWX azureblob-nfs-premium 92m
-```
-
-## Use the persistent volume claim
-
-The following YAML creates a pod that uses the persistent volume claim **azure-blob-storage** to mount the Azure Blob storage at the `/mnt/blob' path.
-
-1. Create a file named `blob-nfs-pv`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step.
-
- ```yml
- kind: Pod
- apiVersion: v1
- metadata:
- name: mypod
- spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/blob"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: azure-blob-storage
- ```
-
-2. Create the pod with the [kubectl apply][kubectl-apply] command:
-
- ```bash
- kubectl apply -f blob-nfs-pv.yaml
- ```
-
-3. After the pod is in the running state, run the following command to create a new file called `test.txt`.
-
- ```bash
- kubectl exec mypod -- touch /mnt/blob/test.txt
- ```
-
-4. To validate the disk is correctly mounted, run the following command, and verify you see the `test.txt` file in the output:
-
- ```bash
- kubectl exec mypod -- ls /mnt/blob
- ```
-
- The output of the command resembles the following example:
-
- ```bash
- test.txt
- ```
-
-## Create a custom storage class
-
-The default storage classes suit the most common scenarios, but not all. For some cases, you might want to have your own storage class customized with your own parameters. To demonstrate, two examples are shown. One based on using the NFS protocol, and the other using blobfuse.
-
-### Storage class using NFS protocol
-
-In this example, the following manifest configures mounting a Blob storage container using the NFS protocol. Use it to add the *tags* parameter.
-
-1. Create a file named `blob-nfs-sc.yaml`, and paste the following example manifest:
-
- ```yml
- apiVersion: storage.k8s.io/v1
- kind: StorageClass
- metadata:
- name: azureblob-nfs-premium
- provisioner: blob.csi.azure.com
- parameters:
- protocol: nfs
- tags: environment=Development
- volumeBindingMode: Immediate
- ```
-
-2. Create the storage class with the [kubectl apply][kubectl-apply] command:
-
- ```bash
- kubectl apply -f blob-nfs-sc.yaml
- ```
-
- The output of the command resembles the following example:
-
- ```bash
- storageclass.storage.k8s.io/blob-nfs-premium created
- ```
-
-### Storage class using blobfuse
-
-In this example, the following manifest configures using blobfuse and mount a Blob storage container. Use it to update the *skuName* parameter.
-
-1. Create a file named `blobfuse-sc.yaml`, and paste the following example manifest:
-
- ```yml
- apiVersion: storage.k8s.io/v1
- kind: StorageClass
- metadata:
- name: azureblob-fuse-premium
- provisioner: blob.csi.azure.com
- parameters:
- skuName: Standard_GRS # available values: Standard_LRS, Premium_LRS, Standard_GRS, Standard_RAGRS
- reclaimPolicy: Delete
- volumeBindingMode: Immediate
- allowVolumeExpansion: true
- mountOptions:
- - -o allow_other
- - --file-cache-timeout-in-seconds=120
- - --use-attr-cache=true
- - --cancel-list-on-mount-seconds=10 # prevent billing charges on mounting
- - -o attr_timeout=120
- - -o entry_timeout=120
- - -o negative_timeout=120
- - --log-level=LOG_WARNING # LOG_WARNING, LOG_INFO, LOG_DEBUG
- - --cache-size-mb=1000 # Default will be 80% of available memory, eviction will happen beyond that.
- ```
-
-2. Create the storage class with the [kubectl apply][kubectl-apply] command:
-
- ```bash
- kubectl apply -f blobfuse-sc.yaml
- ```
-
- The output of the command resembles the following example:
-
- ```bash
- storageclass.storage.k8s.io/blob-fuse-premium created
- ```
-
-## Next steps
--- To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-storage-csi].-- To learn how to manually set up a static persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-static].-- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].-
-<!-- LINKS - external -->
-[kubectl-create]: https://kubernetes.io/docs/user-guide/kubectl/v1.8/#create
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share
-[blobfuse-overview]: https://github.com/Azure/azure-storage-fuse
-[nfs-overview]: https://en.wikipedia.org/wiki/Network_File_System
-
-<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
-[use-tags]: use-tags.md
-[use-managed-identity]: use-managed-identity.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[sas-tokens]: ../storage/common/storage-sas-overview.md
-[mount-blob-storage-nfs]: ../storage/blobs/network-file-system-protocol-support-how-to.md
-[azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
-[blob-storage-csi-driver]: azure-blob-csi.md
-[azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
-[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
+
+ Title: Create a persistent volume with Azure Blob storage in Azure Kubernetes Service (AKS)
+
+description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
+ Last updated : 01/18/2023+++
+# Create and use a volume with Azure Blob storage in Azure Kubernetes Service (AKS)
+
+Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect using [blobfuse][blobfuse-overview] or [Network File System][nfs-overview] (NFS).
+
+This article shows you how to:
+
+* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating an Azure Blob storage container to attach to a pod.
+* Work with a static PV by creating an Azure Blob storage container, or use an existing one and attach it to a pod.
+
+For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
+
+## Before you begin
+
+- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support].
+
+- [Enable the Blob storage CSI driver][enable-blob-csi-driver] on your AKS cluster.
+
+## Dynamically provision a volume
+
+This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of Blob storage for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container.
+
+### Dynamic provisioning parameters
+
+|Name | Description | Example | Mandatory | Default value|
+| | | | | |
+|skuName | Specify an Azure storage account type (alias: `storageAccountType`). | `Standard_LRS`, `Premium_LRS`, `Standard_GRS`, `Standard_RAGRS` | No | `Standard_LRS`|
+|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.|
+|resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
+|storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
+|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`|
+|containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. |
+|containerNamePrefix | Specify the Azure storage directory prefix created by the driver. Can only contain lowercase letters, numbers, and hyphens, and must be fewer than 21 characters. | my | No ||
+|server | Specify Azure storage account domain name. | Existing storage account DNS domain name, for example `<storage-account>.privatelink.blob.core.windows.net`. | No | If empty, driver uses default `<storage-account>.blob.core.windows.net` or other sovereign cloud storage account DNS domain name.|
+|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true`,`false` | No | `false`|
+|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net` | No | If empty, driver will use default storage endpoint suffix according to cloud environment.|
+|tags | Specify [tags][az-tags] to be created in the new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | ""|
+|matchTags | Match tags when driver tries to find a suitable storage account. | `true`,`false` | No | `false`|
+| | **Following parameters are only for blobfuse** | | | |
+|subscriptionID | Specify Azure subscription ID where blob storage directory will be created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
+|storeAccountKey | Specify store account key to Kubernetes secret. <br><br> Note: <br> `false` means driver uses kubelet identity to get account key. | `true`,`false` | No | `true`|
+|secretName | Specify secret name to store account key. | | No |
+|secretNamespace | Specify the namespace of secret to store account key. | `default`,`kube-system`, etc. | No | pvc namespace |
+|isHnsEnabled | Enable `Hierarchical namespace` for Azure Data Lake storage account. | `true`,`false` | No | `false`|
+| | **Following parameters are only for NFS protocol** | | | |
+|mountPermissions | Specify mounted folder permissions. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No | `0777` |
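+
+To see how the parameters in the table above map onto a storage class, here's a minimal sketch. The class name, container prefix, and tag values are illustrative assumptions, not values from this article:
+
+```yml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: blob-fuse-example
+provisioner: blob.csi.azure.com
+parameters:
+  skuName: Standard_LRS        # storage account type (see table above)
+  protocol: fuse               # blobfuse mount; use "nfs" for NFS v3
+  containerNamePrefix: demo    # containers created by the driver start with this prefix
+  tags: environment=Development
+reclaimPolicy: Delete
+volumeBindingMode: Immediate
+```
+
+Any parameter omitted from `parameters:` falls back to the default value listed in the table.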
+
+### Create a persistent volume claim using built-in storage class
+
+A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation.
+
+1. Create a file named `blob-nfs-pvc.yaml` and copy in the following YAML.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: azure-blob-storage
+ annotations:
+ volume.beta.kubernetes.io/storage-class: azureblob-nfs-premium
+ spec:
+ accessModes:
+ - ReadWriteMany
+    storageClassName: azureblob-nfs-premium
+ resources:
+ requests:
+ storage: 5Gi
+ ```
+
+2. Create the persistent volume claim with the [kubectl create][kubectl-create] command:
+
+ ```bash
+ kubectl create -f blob-nfs-pvc.yaml
+ ```
+
+Once completed, the Blob storage container will be created. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
+
+```bash
+kubectl get pvc azure-blob-storage
+```
+
+The output of the command resembles the following example:
+
+```output
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+azure-blob-storage Bound pvc-b88e36c5-c518-4d38-a5ee-337a7dda0a68 5Gi RWX azureblob-nfs-premium 92m
+```
+
+#### Use the persistent volume claim
+
+The following YAML creates a pod that uses the persistent volume claim **azure-blob-storage** to mount the Azure Blob storage at the `/mnt/blob` path.
+
+1. Create a file named `blob-nfs-pv.yaml`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step.
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/blob"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: azure-blob-storage
+ ```
+
+2. Create the pod with the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f blob-nfs-pv.yaml
+ ```
+
+3. After the pod is in the running state, run the following command to create a new file called `test.txt`.
+
+ ```bash
+ kubectl exec mypod -- touch /mnt/blob/test.txt
+ ```
+
+4. To validate the disk is correctly mounted, run the following command, and verify you see the `test.txt` file in the output:
+
+ ```bash
+ kubectl exec mypod -- ls /mnt/blob
+ ```
+
+ The output of the command resembles the following example:
+
+ ```bash
+ test.txt
+ ```
+
+### Create a custom storage class
+
+The default storage classes suit the most common scenarios, but not all. For some cases, you might want to have your own storage class customized with your own parameters. The following two examples demonstrate this: one using the NFS protocol, and the other using blobfuse.
+
+#### Storage class using NFS protocol
+
+In this example, the following manifest configures mounting a Blob storage container using the NFS protocol, and adds the *tags* parameter.
+
+1. Create a file named `blob-nfs-sc.yaml`, and paste the following example manifest:
+
+ ```yml
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: azureblob-nfs-premium
+ provisioner: blob.csi.azure.com
+ parameters:
+ protocol: nfs
+ tags: environment=Development
+ volumeBindingMode: Immediate
+ ```
+
+2. Create the storage class with the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f blob-nfs-sc.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```bash
+   storageclass.storage.k8s.io/azureblob-nfs-premium created
+ ```
+
+#### Storage class using blobfuse
+
+In this example, the following manifest configures a storage class that uses blobfuse to mount a Blob storage container, and updates the *skuName* parameter.
+
+1. Create a file named `blobfuse-sc.yaml`, and paste the following example manifest:
+
+ ```yml
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: azureblob-fuse-premium
+ provisioner: blob.csi.azure.com
+ parameters:
+ skuName: Standard_GRS # available values: Standard_LRS, Premium_LRS, Standard_GRS, Standard_RAGRS
+ reclaimPolicy: Delete
+ volumeBindingMode: Immediate
+ allowVolumeExpansion: true
+ mountOptions:
+ - -o allow_other
+ - --file-cache-timeout-in-seconds=120
+ - --use-attr-cache=true
+ - --cancel-list-on-mount-seconds=10 # prevent billing charges on mounting
+ - -o attr_timeout=120
+ - -o entry_timeout=120
+ - -o negative_timeout=120
+ - --log-level=LOG_WARNING # LOG_WARNING, LOG_INFO, LOG_DEBUG
+ - --cache-size-mb=1000 # Default will be 80% of available memory, eviction will happen beyond that.
+ ```
+
+2. Create the storage class with the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f blobfuse-sc.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```bash
+   storageclass.storage.k8s.io/azureblob-fuse-premium created
+ ```
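+
+A PVC that consumes the custom class above references it by name in `storageClassName`. The following is a minimal sketch; the claim name and requested size are illustrative assumptions:
+
+```yml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: azure-blob-fuse-pvc
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: azureblob-fuse-premium  # name of the custom class created above
+  resources:
+    requests:
+      storage: 5Gi
+```
+
+Because the class sets `volumeBindingMode: Immediate`, the driver provisions the backing Blob storage container as soon as the claim is created.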
+
+## Statically provision a volume
+
+This section provides guidance for cluster administrators who want to create one or more persistent volumes that include details of Blob storage for use by a workload.
+
+### Static provisioning parameters
+
+|Name | Description | Example | Mandatory | Default value|
+| | | | | |
+|volumeHandle | Specify a value the driver can use to uniquely identify the storage blob container in the cluster. | A recommended way to produce a unique value is to combine the globally unique storage account name and container name: `{account-name}_{container-name}`.<br> Note: The `#` character is reserved for internal use and can't be used in a volume handle. | Yes ||
+|volumeAttributes.resourceGroup | Specify Azure resource group name. | myResourceGroup | No | If empty, driver uses the same resource group name as current cluster.|
+|volumeAttributes.storageAccount | Specify an existing Azure storage account name. | storageAccountName | Yes ||
+|volumeAttributes.containerName | Specify existing container name. | container | Yes ||
+|volumeAttributes.protocol | Specify blobfuse mount or NFS v3 mount. | `fuse`, `nfs` | No | `fuse`|
+| | **Following parameters are only for blobfuse** | | | |
+|volumeAttributes.secretName | Secret name that stores storage account name and key (only applies for SMB).| | No ||
+|volumeAttributes.secretNamespace | Specify namespace of secret to store account key. | `default` | No | Pvc namespace|
+|nodeStageSecretRef.name | Specify secret name that stores one of the following:<br> `azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | Existing Kubernetes secret name | No ||
+|nodeStageSecretRef.namespace | Specify the namespace of secret. | Kubernetes namespace | Yes ||
+| | **Following parameters are only for NFS protocol** | | | |
+|volumeAttributes.mountPermissions | Specify mounted folder permissions. | `0777` | No ||
+| | **Following parameters are only for NFS VNet setting** | | | |
+|vnetResourceGroup | Specify VNet resource group hosting virtual network. | myResourceGroup | No | If empty, driver uses the `vnetResourceGroup` value specified in the Azure cloud config file.|
+|vnetName | Specify the virtual network name. | aksVNet | No | If empty, driver uses the `vnetName` value specified in the Azure cloud config file.|
+|subnetName | Specify the existing subnet name of the agent node. | aksSubnet | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
+| | **Following parameters are only for feature: blobfuse<br> [Managed Identity and Service Principal Name authentication](https://github.com/Azure/azure-storage-fuse#environment-variables)** | | | |
+|volumeAttributes.AzureStorageAuthType | Specify the authentication type. | `Key`, `SAS`, `MSI`, `SPN` | No | `Key`|
+|volumeAttributes.AzureStorageIdentityClientID | Specify the Identity Client ID. | | No ||
+|volumeAttributes.AzureStorageIdentityObjectID | Specify the Identity Object ID. | | No ||
+|volumeAttributes.AzureStorageIdentityResourceID | Specify the Identity Resource ID. | | No ||
+|volumeAttributes.MSIEndpoint | Specify the MSI endpoint. | | No ||
+|volumeAttributes.AzureStorageSPNClientID | Specify the Azure Service Principal Name (SPN) Client ID. | | No ||
+|volumeAttributes.AzureStorageSPNTenantID | Specify the Azure SPN Tenant ID. | | No ||
+|volumeAttributes.AzureStorageAADEndpoint | Specify the Azure Active Directory (Azure AD) endpoint. | | No ||
+| | **Following parameters are only for feature: blobfuse read account key or SAS token from key vault** | | | |
+|volumeAttributes.keyVaultURL | Specify Azure Key Vault DNS name. | {vault-name}.vault.azure.net | No ||
+|volumeAttributes.keyVaultSecretName | Specify Azure Key Vault secret name. | Existing Azure Key Vault secret name. | No ||
+|volumeAttributes.keyVaultSecretVersion | Azure Key Vault secret version. | Existing version | No |If empty, driver uses current version.|
+
+### Create a Blob storage container
+
+When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role on the blob storage resource group.
+
+For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
+
+```azurecli
+az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+```
+
+The output of the command resembles the following example:
+
+```output
+MC_myResourceGroup_myAKSCluster_eastus
+```
+
+Next, create a container for storing blobs by following the steps in [Manage blob storage][manage-blob-storage] to authorize access and then create the container.
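+
+If you prefer the CLI, one way to create the container is shown below. This is a sketch, not a step from the original article: the account and container names are placeholders, and `--auth-mode login` assumes your signed-in identity has a data-plane role such as Storage Blob Data Contributor on the account.
+
+```azurecli
+az storage container create \
+    --account-name <storage-account-name> \
+    --name <container-name> \
+    --auth-mode login
+```
+
+The container name you choose here is the value you later pass to the driver as `containerName` (and as part of `volumeHandle`) when statically provisioning the volume.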
+
+### Mount volume
+
+In this section, you mount the persistent volume using the NFS protocol or Blobfuse.
+
+#### [Mount volume using NFS protocol](#tab/mount-nfs)
+
+Mounting Blob storage using the NFS v3 protocol doesn't authenticate using an account key. Your storage account needs to reside in the same virtual network as your AKS cluster, or in a peered virtual network. The only way to secure the data in your storage account is by using a virtual network and other network security settings. For more information on how to set up NFS access to your storage account, see [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](../storage/blobs/network-file-system-protocol-support-how-to.md).
+
+The following example demonstrates how to mount a Blob storage container as a persistent volume using the NFS protocol.
+
+1. Create a file named `pv-blob-nfs.yaml` and copy in the following YAML. Under `storageClass`, update `resourceGroup`, `storageAccount`, and `containerName`.
+
+ > [!NOTE]
+ > `volumeHandle` value should be a unique volumeID for every identical storage blob container in the cluster.
+ > The character `#` is reserved for internal use and cannot be used.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-blob
+ spec:
+ capacity:
+ storage: 1Pi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
+ storageClassName: azureblob-nfs-premium
+ csi:
+ driver: blob.csi.azure.com
+ readOnly: false
+ # make sure volumeid is unique for every identical storage blob container in the cluster
+ # character `#` is reserved for internal use and cannot be used in volumehandle
+ volumeHandle: unique-volumeid
+ volumeAttributes:
+ resourceGroup: resourceGroupName
+ storageAccount: storageAccountName
+ containerName: containerName
+ protocol: nfs
+ ```
+
+ > [!NOTE]
+    > While the [Kubernetes API](https://github.com/kubernetes/kubernetes/blob/release-1.26/pkg/apis/core/types.go#L303-L306) **capacity** attribute is mandatory, this value isn't used by the Azure Blob storage CSI driver because you can flexibly write data until you reach your storage account's capacity limit. The value of the `capacity` attribute is used only for size matching between *PersistentVolumes* and *PersistentVolumeClaims*. We recommend using a fictitiously high value; with the manifest above, the pod sees a mounted volume with a fictitious size of 1 PiB.
+
+2. Run the following command to create the persistent volume using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pv-blob-nfs.yaml
+ ```
+
+3. Create a `pvc-blob-nfs.yaml` file with a *PersistentVolumeClaim*. For example:
+
+ ```yml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ name: pvc-blob
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 10Gi
+ volumeName: pv-blob
+ storageClassName: azureblob-nfs-premium
+ ```
+
+4. Run the following command to create the persistent volume claim using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pvc-blob-nfs.yaml
+ ```
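+
+A simple way to produce a unique `volumeHandle` for each container is to combine the globally unique storage account name with the container name (a sketch; both names are placeholders):
+
+```bash
+ACCOUNT_NAME="storageAccountName"
+CONTAINER_NAME="containerName"
+# The '#' character is reserved for internal use and must not appear in the handle.
+VOLUME_HANDLE="${ACCOUNT_NAME}_${CONTAINER_NAME}"
+echo "$VOLUME_HANDLE"
+```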
+
+#### [Mount volume using Blobfuse](#tab/mount-blobfuse)
+
+Kubernetes needs credentials to access the Blob storage container created earlier: either an Azure storage account access key or a SAS token. These credentials are stored in a Kubernetes secret, which is referenced when you create a Kubernetes pod.
+
+1. Use the `kubectl create secret` command to create the secret. You can authenticate using a storage account key stored in a [Kubernetes secret][kubernetes-secret], or using [shared access signature][sas-tokens] (SAS) tokens.
+
+ # [Secret](#tab/secret)
+
+ The following example creates a [Secret object][kubernetes-secret] named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey*. You need to provide the account name and key from an existing Azure storage account.
+
+ ```bash
+ kubectl create secret generic azure-secret --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountkey="KEY" --type=Opaque
+ ```
+
+ # [SAS tokens](#tab/sas-tokens)
+
+    The following example creates a [Secret object][kubernetes-secret] named *azure-sas-token* and populates the *azurestorageaccountname* and *azurestorageaccountsastoken*. You need to provide the account name and shared access signature from an existing Azure storage account.
+
+ ```bash
+    kubectl create secret generic azure-sas-token --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountsastoken="sastoken" --type=Opaque
+ ```
+
+
+
+2. Create a `pv-blobfuse.yaml` file. Under `volumeAttributes`, update `containerName`. Under `nodeStageSecretRef`, update `name` with the name of the Secret object created earlier. For example:
+
+ > [!NOTE]
+ > `volumeHandle` value should be a unique volumeID for every identical storage blob container in the cluster.
+ > The character `#` is reserved for internal use and cannot be used.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-blob
+ spec:
+ capacity:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
+ storageClassName: azureblob-fuse-premium
+ mountOptions:
+ - -o allow_other
+ - --file-cache-timeout-in-seconds=120
+ csi:
+ driver: blob.csi.azure.com
+ readOnly: false
+ # volumeid has to be unique for every identical storage blob container in the cluster
+ # character `#` is reserved for internal use and cannot be used in volumehandle
+ volumeHandle: unique-volumeid
+ volumeAttributes:
+ containerName: containerName
+ nodeStageSecretRef:
+ name: azure-secret
+ namespace: default
+ ```
+
+3. Run the following command to create the persistent volume using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pv-blobfuse.yaml
+ ```
+
+4. Create a `pvc-blobfuse.yaml` file with a *PersistentVolumeClaim*. For example:
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: pvc-blob
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 10Gi
+ volumeName: pv-blob
+ storageClassName: azureblob-fuse-premium
+ ```
+
+5. Run the following command to create the persistent volume claim using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pvc-blobfuse.yaml
+ ```
+++
+### Use the persistent volume
+
+The following YAML creates a pod that uses the persistent volume claim named **pvc-blob** created earlier to mount the Azure Blob storage at the `/mnt/blob` path.
+
+1. Create a file named `nginx-pod-blob.yaml`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step when creating a persistent volume for NFS or Blobfuse.
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: nginx-blob
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
+ name: nginx-blob
+ volumeMounts:
+ - name: blob01
+ mountPath: "/mnt/blob"
+ volumes:
+ - name: blob01
+ persistentVolumeClaim:
+ claimName: pvc-blob
+ ```
+
+2. Run the following command to create the pod and mount the PVC using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f nginx-pod-blob.yaml
+ ```
+
+3. Run the following command to verify that the Blob storage is mounted in the pod:
+
+ ```bash
+ kubectl exec -it nginx-blob -- df -h
+ ```
+
+ The output from the command resembles the following example:
+
+ ```bash
+ Filesystem Size Used Avail Use% Mounted on
+ ...
+ blobfuse 14G 41M 13G 1% /mnt/blob
+ ...
+ ```
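+
+Optionally, to confirm the mount is writable, you can write a file and read it back (a sketch using the pod created above; the file name is arbitrary):
+
+```bash
+kubectl exec nginx-blob -- sh -c 'echo hello > /mnt/blob/test.txt && cat /mnt/blob/test.txt'
+```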
+
+## Next steps
+
+- To learn how to use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-storage-csi].
+- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+
+<!-- LINKS - external -->
+[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
+[blobfuse-overview]: https://github.com/Azure/azure-storage-fuse
+[nfs-overview]: https://en.wikipedia.org/wiki/Network_File_System
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+
+<!-- LINKS - internal -->
+[operator-best-practices-storage]: operator-best-practices-storage.md
+[concepts-storage]: concepts-storage.md
+[azure-blob-storage-csi]: azure-blob-csi.md
+[azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
+[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
+[az-tags]: ../azure-resource-manager/management/tag-resources.md
+[sas-tokens]: ../storage/common/storage-sas-overview.md
+[rbac-contributor-role]: ../role-based-access-control/built-in-roles.md#contributor
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[manage-blob-storage]: ../storage/blobs/blob-containers-cli.md
aks Azure Csi Blob Storage Static https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-static.md
- Title: Create a static persistent volume with Azure Blob storage in Azure Kubernetes Service (AKS)-
-description: Learn how to create a static persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 12/27/2022---
-# Create and use a static volume with Azure Blob storage in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect using [blobfuse][blobfuse-overview] or [Network File System][nfs-overview] (NFS).
-
-This article shows you how to create an Azure Blob storage container or use an existing one and attach it to a pod in AKS.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
---- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support].--- [Enable the Blob storage CSI driver][enable-blob-csi-driver] on your AKS cluster.-
-## Static provisioning parameters
-
-|Name | Description | Example | Mandatory | Default value|
-| | | | | |
-|volumeHandle | Specify a value the driver can use to uniquely identify the storage blob container in the cluster. | A recommended way to produce a unique value is to combine the globally unique storage account name and container name: {account-name}_{container-name}. Note: The # character is reserved for internal use and can't be used in a volume handle. | Yes ||
-|volumeAttributes.resourceGroup | Specify Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
-|volumeAttributes.storageAccount | Specify existing Azure storage account name. | storageAccountName | Yes ||
-|volumeAttributes.containerName | Specify existing container name. | container | Yes ||
-|volumeAttributes.protocol | Specify blobfuse mount or NFS v3 mount. | `fuse`, `nfs` | No | `fuse`|
-| | **Following parameters are only for blobfuse** | | | |
-|volumeAttributes.secretName | Secret name that stores storage account name and key (only applies for SMB).| | No ||
-|volumeAttributes.secretNamespace | Specify namespace of secret to store account key. | `default` | No | Pvc namespace|
-|nodeStageSecretRef.name | Specify secret name that stores (see examples below):<br>`azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | |Existing Kubernetes secret name | No |
-|nodeStageSecretRef.namespace | Specify the namespace of secret. | k8s namespace | Yes ||
-| | **Following parameters are only for NFS protocol** | | | |
-|volumeAttributes.mountPermissions | Specify mounted folder permissions. | `0777` | No ||
-| | **Following parameters are only for NFS VNet setting** | | | |
-|vnetResourceGroup | Specify VNet resource group hosting virtual network. | myResourceGroup | No | If empty, driver uses the `vnetResourceGroup` value specified in the Azure cloud config file.|
-|vnetName | Specify the virtual network name. | aksVNet | No | If empty, driver uses the `vnetName` value specified in the Azure cloud config file.|
-|subnetName | Specify the existing subnet name of the agent node. | aksSubnet | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
-| | **Following parameters are only for feature: blobfuse<br> [Managed Identity and Service Principal Name authentication](https://github.com/Azure/azure-storage-fuse#environment-variables)** | | | |
-|volumeAttributes.AzureStorageAuthType | Specify the authentication type. | `Key`, `SAS`, `MSI`, `SPN` | No | `Key`|
-|volumeAttributes.AzureStorageIdentityClientID | Specify the Identity Client ID. | | No ||
-|volumeAttributes.AzureStorageIdentityObjectID | Specify the Identity Object ID. | | No ||
-|volumeAttributes.AzureStorageIdentityResourceID | Specify the Identity Resource ID. | | No ||
-|volumeAttributes.MSIEndpoint | Specify the MSI endpoint. | | No ||
-|volumeAttributes.AzureStorageSPNClientID | Specify the Azure Service Principal Name (SPN) Client ID. | | No ||
-|volumeAttributes.AzureStorageSPNTenantID | Specify the Azure SPN Tenant ID. | | No ||
-|volumeAttributes.AzureStorageAADEndpoint | Specify the Azure Active Directory (Azure AD) endpoint. | | No ||
-| | **Following parameters are only for feature: blobfuse read account key or SAS token from key vault** | | | |
-|volumeAttributes.keyVaultURL | Specify Azure Key Vault DNS name. | {vault-name}.vault.azure.net | No ||
-|volumeAttributes.keyVaultSecretName | Specify Azure Key Vault secret name. | Existing Azure Key Vault secret name. | No ||
-|volumeAttributes.keyVaultSecretVersion | Azure Key Vault secret version. | Existing version | No |If empty, driver uses current version.|
-
-## Create a Blob storage container
-
-When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role to the blob storage resource group.
-
-For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
-
-```azurecli
-az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-```
-
-The output of the command resembles the following example:
-
-```azurecli
-MC_myResourceGroup_myAKSCluster_eastus
-```
-
-Next, create a container for storing blobs following the steps in the [Manage blob storage][manage-blob-storage] to authorize access and then create the container.
-
-## Mount Blob storage as a volume using NFS
-
-Mounting Blob storage using the NFS v3 protocol doesn't authenticate using an account key. Your AKS cluster needs to reside in the same or peered virtual network as the agent node. The only way to secure the data in your storage account is by using a virtual network and other network security settings. For more information on how to set up NFS access to your storage account, see [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](../storage/blobs/network-file-system-protocol-support-how-to.md).
-
-The following example demonstrates how to mount a Blob storage container as a persistent volume using the NFS protocol.
-
-1. Create a file named `pv-blob-nfs.yaml` and copy in the following YAML. Under `storageClass`, update `resourceGroup`, `storageAccount`, and `containerName`.
-
- ```yml
- apiVersion: v1
- kind: PersistentVolume
- metadata:
- name: pv-blob
- spec:
- capacity:
- storage: 10Gi
- accessModes:
- - ReadWriteMany
- persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
- storageClassName: azureblob-nfs-premium
- csi:
- driver: blob.csi.azure.com
- readOnly: false
- # make sure volumeid is unique for every identical storage blob container in the cluster
- # character `#` is reserved for internal use and cannot be used in volumehandle
- volumeHandle: unique-volumeid
- volumeAttributes:
- resourceGroup: resourceGroupName
- storageAccount: storageAccountName
- containerName: containerName
- protocol: nfs
- ```
-
-2. Run the following command to create the persistent volume using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pv-blob-nfs.yaml
- ```
-
-3. Create a `pvc-blob-nfs.yaml` file with a *PersistentVolumeClaim*. For example:
-
- ```yml
- kind: PersistentVolumeClaim
- apiVersion: v1
- metadata:
- name: pvc-blob
- spec:
- accessModes:
- - ReadWriteMany
- resources:
- requests:
- storage: 10Gi
- volumeName: pv-blob
- storageClassName: azureblob-nfs-premium
- ```
-
-4. Run the following command to create the persistent volume claim using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pvc-blob-nfs.yaml
- ```
-
-## Mount Blob storage as a volume using Blobfuse
-
-Kubernetes needs credentials to access the Blob storage container created earlier, which is either an Azure access key or SAS tokens. These credentials are stored in a Kubernetes secret, which is referenced when you create a Kubernetes pod.
-
-1. Use the `kubectl create secret` command to create the secret. You can authenticate using a [Kubernetes secret][kubernetes-secret] or [shared access signature][sas-tokens] (SAS) tokens.
-
- # [Secret](#tab/secret)
-
- The following example creates a [Secret object][kubernetes-secret] named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey*. You need to provide the account name and key from an existing Azure storage account.
-
- ```bash
- kubectl create secret generic azure-secret --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountkey="KEY" --type=Opaque
- ```
-
- # [SAS tokens](#tab/sas-tokens)
-
-    The following example creates a [Secret object][kubernetes-secret] named *azure-sas-token* and populates the *azurestorageaccountname* and *azurestorageaccountsastoken*. You need to provide the account name and shared access signature from an existing Azure storage account.
-
- ```bash
-    kubectl create secret generic azure-sas-token --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountsastoken="sastoken" --type=Opaque
- ```
-
-
-
-2. Create a `pv-blobfuse.yaml` file. Under `volumeAttributes`, update `containerName`. Under `nodeStageSecretRef`, update `name` with the name of the Secret object created earlier. For example:
-
- ```yml
- apiVersion: v1
- kind: PersistentVolume
- metadata:
- name: pv-blob
- spec:
- capacity:
- storage: 10Gi
- accessModes:
- - ReadWriteMany
- persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
- storageClassName: azureblob-fuse-premium
- mountOptions:
- - -o allow_other
- - --file-cache-timeout-in-seconds=120
- csi:
- driver: blob.csi.azure.com
- readOnly: false
- # make sure volumeid is unique for every identical storage blob container in the cluster
- # character `#` is reserved for internal use and cannot be used in volumehandle
- volumeHandle: unique-volumeid
- volumeAttributes:
- containerName: containerName
- nodeStageSecretRef:
- name: azure-secret
- namespace: default
- ```
-
-3. Run the following command to create the persistent volume using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pv-blobfuse.yaml
- ```
-
-4. Create a `pvc-blobfuse.yaml` file with a *PersistentVolumeClaim*. For example:
-
- ```yml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: pvc-blob
- spec:
- accessModes:
- - ReadWriteMany
- resources:
- requests:
- storage: 10Gi
- volumeName: pv-blob
- storageClassName: azureblob-fuse-premium
- ```
-
-5. Run the following command to create the persistent volume claim using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pvc-blobfuse.yaml
- ```
-
-## Use the persistent volume
-
-The following YAML creates a pod that uses the persistent volume or persistent volume claim named **pvc-blob** created earlier, to mount the Azure Blob storage at the `/mnt/blob` path.
-
-1. Create a file named `nginx-pod-blob.yaml`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step when creating a persistent volume for NFS or Blobfuse.
-
- ```yml
- kind: Pod
- apiVersion: v1
- metadata:
- name: nginx-blob
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
- name: nginx-blob
- volumeMounts:
- - name: blob01
- mountPath: "/mnt/blob"
- volumes:
- - name: blob01
- persistentVolumeClaim:
- claimName: pvc-blob
- ```
-
-2. Run the following command to create the pod and mount the PVC using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f nginx-pod-blob.yaml
- ```
-
-3. Run the following command to create an interactive shell session with the pod to verify the Blob storage mounted:
-
- ```bash
- kubectl exec -it nginx-blob -- df -h
- ```
-
- The output from the command resembles the following example:
-
- ```bash
- Filesystem Size Used Avail Use% Mounted on
- ...
- blobfuse 14G 41M 13G 1% /mnt/blob
- ...
- ```
-
-## Next steps
--- To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-storage-csi].-- To learn how to manually set up a dynamic persistent volume, see [Create and use a dynamic volume with Azure Blob storage][azure-csi-blob-storage-dynamic].-- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].-
-<!-- LINKS - external -->
-[kubectl-create]: https://kubernetes.io/docs/user-guide/kubectl/v1.8/#create
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[blobfuse-overview]: https://github.com/Azure/azure-storage-fuse
-[nfs-overview]: https://en.wikipedia.org/wiki/Network_File_System
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-
-<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
-[use-tags]: use-tags.md
-[use-managed-identity]: use-managed-identity.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[sas-tokens]: ../storage/common/storage-sas-overview.md
-[azure-csi-blob-storage-dynamic]: azure-csi-blob-storage-dynamic.md
-[azure-blob-storage-csi]: azure-blob-csi.md
-[rbac-contributor-role]: ../role-based-access-control/built-in-roles.md#contributor
-[az-aks-show]: /cli/azure/aks#az-aks-show
-[manage-blob-storage]: ../storage/blobs/blob-containers-cli.md
-[azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
-[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
+
+ Title: Create a persistent volume with Azure Disks in Azure Kubernetes Service (AKS)
+
+description: Learn how to create a static or dynamic persistent volume with Azure Disks for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
+ Last updated : 01/18/2023++
+# Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS)
+
+A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and it can be provisioned dynamically or statically. This article shows you how to dynamically or statically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.
+
+> [!NOTE]
+> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
+
+This article shows you how to:
+
+* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure managed disks to attach to a pod.
+* Work with a static PV by creating one or more Azure managed disks, or using an existing one, and attaching it to a pod.
+
+For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
+
+## Before you begin
+
+- An Azure [storage account][azure-storage-account].
+
+- The Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+The Azure Disks CSI driver has a limit of 32 volumes per node, and the actual volume count varies based on the VM size of the node pool. Run the [kubectl get][kubectl-get] command to determine the number of volumes that can be allocated per node:
+
+```console
+kubectl get CSINode <nodename> -o yaml
+```
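+
+To read just the allocatable count for the Azure Disks driver, a JSONPath query can be used instead (a sketch; `<nodename>` is a placeholder):
+
+```console
+kubectl get CSINode <nodename> -o jsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'
+```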
+
+## Dynamically provision a volume
+
+This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of Azure Disk storage for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure managed disk.
+
+### Dynamic provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value |
+| --- | --- | --- | --- | --- |
+|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
+|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|
+|cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
+|location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
+|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver will use the same resource group name as current AKS cluster|
+|DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] IOPS Capability (minimum: 2 IOPS/GiB ) | 100~160000 | No | `500`|
+|DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] Throughput Capability(minimum: 0.032/GiB) | 1~2000 | No | `100`|
+|LogicalSectorSize | Logical sector size in bytes for ultra disk. Supported values are 512 and 4096. | `512`, `4096` | No | `4096`|
+|tags | Azure Disk [tags][azure-tags] | Tag format: `key1=val1,key2=val2` | No | ""|
+|diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest][disk-encryption] | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
+|diskEncryptionType | Encryption type of the disk encryption set. | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
+|writeAcceleratorEnabled | [Write Accelerator on Azure Disks][azure-disk-write-accelerator] | `true`, `false` | No | ""|
+|networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | `AllowAll`, `DenyAll`, `AllowPrivate` | No | `AllowAll`|
+|diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
+|enableBursting | [Enable on-demand bursting][on-demand-bursting] beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disk and when the disk size > 512 GB. Ultra and shared disk isn't supported. Bursting is disabled by default. | `true`, `false` | No | `false`|
+|useragent | User agent used for [customer usage attribution][customer-usage-attribution] | | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`|
+|enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this parameter can speed up disk attachment, you may encounter Azure API throttling limits when there's a large number of volume attachments. | `true`, `false` | No | `false`|
+|subscriptionID | Specify Azure subscription ID where the Azure Disks is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
+| | **Following parameters are only for v2** | | | |
+| enableAsyncAttach | The v2 driver uses a different strategy to manage Azure API throttling and ignores this parameter. | | No | |
+| maxShares | The total number of shared disk mounts allowed for the disk. Setting the value to 2 or more enables attachment replicas. | Supported values depend on the disk size. See [Share an Azure managed disk][share-azure-managed-disk] for supported values. | No | 1 |
+| maxMountReplicaCount | The number of replicas attachments to maintain. | This value must be in the range `[0..(maxShares - 1)]` | No | If `accessMode` is `ReadWriteMany`, the default is `0`. Otherwise, the default is `maxShares - 1` |
+
+### Built-in storage classes
+
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
+
+Each AKS cluster includes four pre-created storage classes, two of them configured to work with Azure Disks:
+
+* The *default* storage class provisions a standard SSD Azure Disk.
+ * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance.
+* The *managed-csi-premium* storage class provisions a premium Azure Disk.
+  * Premium disks are backed by high-performance, low-latency SSDs, ideal for VMs running production workloads. If the AKS nodes in your cluster use premium storage, select the *managed-csi-premium* class.
+
+If you use one of the default storage classes, you can't update the volume size after the storage class is created. To be able to update the volume size after a storage class is created, add the line `allowVolumeExpansion: true` to one of the default storage classes, or create your own custom storage class. Reducing the size of a PVC isn't supported (to prevent data loss). You can edit an existing storage class using the `kubectl edit sc` command.
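+
+As a non-interactive alternative to `kubectl edit sc`, the same flag can be set with a patch (a sketch applied to the built-in *managed-csi* class):
+
+```bash
+kubectl patch sc managed-csi --type merge -p '{"allowVolumeExpansion": true}'
+```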
+
+For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger][disk-host-cache-setting].
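+
+For example, a custom storage class combining volume expansion with disabled caching might look like the following sketch (the class name `managed-csi-custom` and the `Premium_LRS` SKU are placeholder choices):
+
+```yaml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: managed-csi-custom
+provisioner: disk.csi.azure.com
+parameters:
+  skuName: Premium_LRS
+  cachingmode: None
+allowVolumeExpansion: true
+reclaimPolicy: Delete
+```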
+
+For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts].
+
+Use the [kubectl get sc][kubectl-get] command to see the pre-created storage classes. The following example shows the pre-created storage classes available within an AKS cluster:
+
+```bash
+kubectl get sc
+```
+
+The output of the command resembles the following example:
+
+```console
+NAME                PROVISIONER          AGE
+default (default)   disk.csi.azure.com   1h
+managed-csi         disk.csi.azure.com   1h
+```
+
+> [!NOTE]
+> Persistent volume claims are specified in GiB but Azure managed disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. For more information, see [Pricing and performance of managed disks][managed-disk-pricing-performance].
+
+### Create a persistent volume claim
+
+A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.
+
+Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5 GB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: azure-managed-disk
+spec:
+  accessModes:
+    - ReadWriteOnce
+  storageClassName: managed-csi
+  resources:
+    requests:
+      storage: 5Gi
+```
+
+> [!TIP]
+> To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*.
+
+Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file:
+
+```bash
+kubectl apply -f azure-pvc.yaml
+```
+
+The output of the command resembles the following example:
+
+```console
+persistentvolumeclaim/azure-managed-disk created
+```
+
+### Use the persistent volume
+
+Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *azure-managed-disk* to mount the Azure Disk at the path `/mnt/azure`. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+
+Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest.
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+  name: mypod
+spec:
+  containers:
+    - name: mypod
+      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+      resources:
+        requests:
+          cpu: 100m
+          memory: 128Mi
+        limits:
+          cpu: 250m
+          memory: 256Mi
+      volumeMounts:
+        - mountPath: "/mnt/azure"
+          name: volume
+  volumes:
+    - name: volume
+      persistentVolumeClaim:
+        claimName: azure-managed-disk
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
+
+```bash
+kubectl apply -f azure-pvc-disk.yaml
+```
+
+The output of the command resembles the following example:
+
+```console
+pod/mypod created
+```
+
+You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command, as shown in the following condensed example:
+
+```bash
+kubectl describe pod mypod
+```
+
+The output of the command resembles the following example:
+
+```console
+[...]
+Volumes:
+  volume:
+    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+    ClaimName:  azure-managed-disk
+    ReadOnly:   false
+  default-token-smm2n:
+    Type:        Secret (a volume populated by a Secret)
+    SecretName:  default-token-smm2n
+    Optional:    false
+[...]
+Events:
+  Type    Reason                 Age   From                               Message
+  ----    ------                 ----  ----                               -------
+  Normal  Scheduled              2m    default-scheduler                  Successfully assigned mypod to aks-nodepool1-79590246-0
+  Normal  SuccessfulMountVolume  2m    kubelet, aks-nodepool1-79590246-0  MountVolume.SetUp succeeded for volume "default-token-smm2n"
+  Normal  SuccessfulMountVolume  1m    kubelet, aks-nodepool1-79590246-0  MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
+```
+
+### Use Azure ultra disks
+
+To use Azure ultra disk, see [Use ultra disks on Azure Kubernetes Service (AKS)][use-ultra-disks].
+
+### Back up a persistent volume
+
+To back up the data in your persistent volume, take a snapshot of the managed disk for the volume. You can then use this snapshot to create a restored disk and attach it to pods as a means of restoring the data.
+
+First, get the volume name with the [kubectl get][kubectl-get] command, such as for the PVC named *azure-managed-disk*:
+
+```bash
+kubectl get pvc azure-managed-disk
+```
+
+The output of the command resembles the following example:
+
+```console
+NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+azure-managed-disk   Bound    pvc-faf0f176-8b8d-11e8-923b-deb28c58d242   5Gi        RWO            managed-csi    3m
+```
+
+This volume name forms the underlying Azure disk name. Query for the disk ID with [az disk list][az-disk-list] and provide your PVC volume name, as shown in the following example:
+
+```azurecli
+az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv
+
+/subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
+```
+
+Use the disk ID to create a snapshot disk with [az snapshot create][az-snapshot-create]. The following example creates a snapshot named *pvcSnapshot* in the same resource group as the AKS cluster *MC_myResourceGroup_myAKSCluster_eastus*. You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to.
+
+```azurecli
+az snapshot create \
+ --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --name pvcSnapshot \
+ --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
+```
+
+Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
+
+### Restore and use a snapshot
+
+To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with [az disk create][az-disk-create]. This operation preserves the original resource if you then need to access the original data snapshot. The following example creates a disk named *pvcRestored* from the snapshot named *pvcSnapshot*:
+
+```azurecli
+az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot
+```
+
+To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the [az disk show][az-disk-show] command. The following example gets the disk ID for *pvcRestored* created in the previous step:
+
+```azurecli
+az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv
+```
+
+Create a pod manifest named `azure-restored.yaml` and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at */mnt/azure*:
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+  name: mypodrestored
+spec:
+  containers:
+    - name: mypodrestored
+      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+      resources:
+        requests:
+          cpu: 100m
+          memory: 128Mi
+        limits:
+          cpu: 250m
+          memory: 256Mi
+      volumeMounts:
+        - mountPath: "/mnt/azure"
+          name: volume
+  volumes:
+    - name: volume
+      azureDisk:
+        kind: Managed
+        diskName: pvcRestored
+        diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
+
+```bash
+kubectl apply -f azure-restored.yaml
+```
+
+The output of the command resembles the following example:
+
+```console
+pod/mypodrestored created
+```
+
+You can use `kubectl describe pod mypodrestored` to view details of the pod, such as the following condensed example that shows the volume information:
+
+```bash
+kubectl describe pod mypodrestored
+```
+
+The output of the command resembles the following example:
+
+```console
+[...]
+Volumes:
+  volume:
+    Type:         AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
+    DiskName:     pvcRestored
+    DiskURI:      /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
+    Kind:         Managed
+    FSType:       ext4
+    CachingMode:  ReadWrite
+    ReadOnly:     false
+[...]
+```
+
+### Using Azure tags
+
+For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+
+## Statically provision a volume
+
+This section provides guidance for cluster administrators who want to create one or more persistent volumes that include details of Azure Disks storage for use by a workload.
+
+### Static provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value|
+| | | | | |
+|volumeHandle| Azure disk URI | `/subscriptions/{sub-id}/resourcegroups/{group-name}/providers/microsoft.compute/disks/{disk-id}` | Yes | N/A|
+|volumeAttributes.fsType | File system type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows |
+|volumeAttributes.partition | Partition number of the existing disk (only supported on Linux) | `1`, `2`, `3` | No | Empty (no partition) </br>- Make sure partition format is like `-part1` |
+|volumeAttributes.cachingMode | [Disk host cache setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
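+
+These parameters map to the `csi` block of a *PersistentVolume* manifest. The following fragment is a sketch, with placeholder subscription, resource group, and disk names, that mounts the first partition of an existing disk with read-only host caching; a complete *PersistentVolume* example appears later in this article:
+
+```yaml
+  csi:
+    driver: disk.csi.azure.com
+    volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/<resourceGroup>/providers/Microsoft.Compute/disks/<diskName>
+    volumeAttributes:
+      fsType: ext4
+      partition: "1"
+      cachingMode: ReadOnly
+```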
+
+### Create an Azure disk
+
+When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role on the disk's resource group. In this exercise, you create the disk in the same resource group as your cluster.
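+
+If your disk does live in a separate resource group, the role assignment can be sketched with the Azure CLI as follows. This is an illustration rather than part of the walkthrough; the identity client ID and resource group names are placeholders you must replace with your own values:
+
+```azurecli
+# Look up the kubelet identity used by the cluster's node pool
+az aks show --resource-group myResourceGroup --name myAKSCluster \
+    --query identityProfile.kubeletidentity.clientId -o tsv
+
+# Grant Contributor on the resource group that holds the disk (placeholders)
+az role assignment create \
+    --assignee <kubelet-identity-client-id> \
+    --role Contributor \
+    --scope /subscriptions/<subscriptionID>/resourceGroups/<diskResourceGroup>
+```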
+
+1. Identify the resource group name using the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` parameter. The following example gets the node resource group for the AKS cluster name *myAKSCluster* in the resource group name *myResourceGroup*:
+
+ ```azurecli-interactive
+ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+
+ MC_myResourceGroup_myAKSCluster_eastus
+ ```
+
+2. Create a disk using the [az disk create][az-disk-create] command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as *myAKSDisk*. The following example creates a *20*GiB disk, and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk.
+
+ ```azurecli-interactive
+ az disk create \
+ --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --name myAKSDisk \
+ --size-gb 20 \
+ --query id --output tsv
+ ```
+
+ > [!NOTE]
+ > Azure Disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See [Pricing and Performance of Managed Disks][managed-disk-pricing-performance].
+
+ The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section.
+
+    ```console
+    /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+    ```
+
+### Mount disk as a volume
+
+1. Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with disk resource ID from the previous step. For example:
+
+    ```yaml
+    apiVersion: v1
+    kind: PersistentVolume
+    metadata:
+      name: pv-azuredisk
+    spec:
+      capacity:
+        storage: 20Gi
+      accessModes:
+        - ReadWriteOnce
+      persistentVolumeReclaimPolicy: Retain
+      storageClassName: managed-csi
+      csi:
+        driver: disk.csi.azure.com
+        readOnly: false
+        volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+        volumeAttributes:
+          fsType: ext4
+    ```
+
+2. Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
+
+    ```yaml
+    apiVersion: v1
+    kind: PersistentVolumeClaim
+    metadata:
+      name: pvc-azuredisk
+    spec:
+      accessModes:
+        - ReadWriteOnce
+      resources:
+        requests:
+          storage: 20Gi
+      volumeName: pv-azuredisk
+      storageClassName: managed-csi
+    ```
+
+3. Use the [kubectl apply][kubectl-apply] commands to create the *PersistentVolume* and *PersistentVolumeClaim*, referencing the two YAML files created earlier:
+
+ ```bash
+ kubectl apply -f pv-azuredisk.yaml
+ kubectl apply -f pvc-azuredisk.yaml
+ ```
+
+4. To verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*, run the following command:
+
+ ```bash
+ kubectl get pvc pvc-azuredisk
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+    NAME            STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+    pvc-azuredisk   Bound    pv-azuredisk   20Gi       RWO            managed-csi    5s
+ ```
+
+5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
+
+    ```yaml
+    apiVersion: v1
+    kind: Pod
+    metadata:
+      name: mypod
+    spec:
+      nodeSelector:
+        kubernetes.io/os: linux
+      containers:
+        - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+          name: mypod
+          resources:
+            requests:
+              cpu: 100m
+              memory: 128Mi
+            limits:
+              cpu: 250m
+              memory: 256Mi
+          volumeMounts:
+            - name: azure
+              mountPath: /mnt/azure
+      volumes:
+        - name: azure
+          persistentVolumeClaim:
+            claimName: pvc-azuredisk
+    ```
+
+6. Run the [kubectl apply][kubectl-apply] command to apply the configuration and mount the volume, referencing the YAML configuration file created in the previous steps:
+
+ ```bash
+ kubectl apply -f azure-disk-pod.yaml
+ ```
+
+## Next steps
+
+- To learn how to use CSI driver for Azure Disks storage, see [Use Azure Disks storage with CSI driver][azure-disks-storage-csi].
+- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+
+<!-- LINKS - external -->
+[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
+[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
+[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+
+<!-- LINKS - internal -->
+[azure-storage-account]: ../storage/common/storage-introduction.md
+[azure-disks-storage-csi]: azure-disk-csi.md
+[azure-files-pvc]: azure-files-dynamic-pv.md
+[az-disk-list]: /cli/azure/disk#az_disk_list
+[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
+[az-disk-create]: /cli/azure/disk#az_disk_create
+[az-disk-show]: /cli/azure/disk#az_disk_show
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[install-azure-cli]: /cli/azure/install-azure-cli
+[operator-best-practices-storage]: operator-best-practices-storage.md
+[concepts-storage]: concepts-storage.md
+[storage-class-concepts]: concepts-storage.md#storage-classes
+[use-tags]: use-tags.md
+[share-azure-managed-disk]: ../virtual-machines/disks-shared.md
+[disk-host-cache-setting]: ../virtual-machines/windows/premium-storage-performance.md#disk-caching
+[use-ultra-disks]: use-ultra-disks.md
+[ultra-ssd-disks]: ../virtual-machines/linux/disks-ultra-ssd.md
+[azure-tags]: ../azure-resource-manager/management/tag-resources.md
+[disk-encryption]: ../virtual-machines/windows/disk-encryption.md
+[azure-disk-write-accelerator]: ../virtual-machines/windows/how-to-enable-write-accelerator.md
+[on-demand-bursting]: ../virtual-machines/disk-bursting.md
+[customer-usage-attribution]: ../marketplace/azure-partner-customer-usage-attribution.md
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
+
+ Title: Create a persistent volume with Azure Files in Azure Kubernetes Service (AKS)
+
+description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
+ Last updated : 01/18/2023
+# Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
+
+A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
+
+This article shows you how to:
+
+* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure file shares to attach to a pod.
+* Work with a static PV by creating one or more Azure file shares, or use an existing one and attach it to a pod.
+
+For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
+
+## Before you begin
+
+- An Azure [storage account][azure-storage-account].
+
+- The Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+## Dynamically provision a volume
+
+This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of one or more shares on Azure Files for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Files file share.
+
+### Dynamic provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value |
+| | | | | |
+|skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `Standard_LRS`<br> Minimum file share size for Premium account type is 100 GB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.|
+|fsType | File system type | `ext4`, `ext3`, `ext2`, `xfs`| No | `ext4` for Linux|
+|location | Specify Azure region where Azure storage account will be created. | For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
+|resourceGroup | Specify the resource group where the Azure storage account will be created | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
+|shareName | Specify Azure file share name | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
+|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
+|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If folder name does not exist in file share, mount will fail. |
+|shareAccessTier | [Access tier for file share][storage-tiers] | General purpose v2 account can choose between `TransactionOptimized` (default), `Hot`, and `Cool`. Premium storage account type for file shares only. | No | Empty. Use default setting for different storage account types.|
+|accountAccessTier | [Access tier for storage account][access-tiers-overview] | Standard account can choose `Hot` or `Cool`, and Premium account can only choose `Premium`. | No | Empty. Use default setting for different storage account types. |
+|server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
+|disableDeleteRetentionPolicy | Specify whether to disable DeleteRetentionPolicy for the storage account created by the driver. | `true` or `false` | No | `false` |
+|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` |
+|requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` |
+|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. |
+|tags | [Tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
+|matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` |
+| | **Following parameters are only for SMB protocol** | | |
+|subscriptionID | Specify Azure subscription ID where Azure file share is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided. |
+|storeAccountKey | Specify whether to store account key to Kubernetes secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
+|secretName | Specify secret name to store account key. | | No |
+|secretNamespace | Specify the namespace of secret to store account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc | No | Pvc namespace, for example `csi.storage.k8s.io/pvc/namespace` |
+|useDataPlaneAPI | Specify whether to use the [data plane API][data-plane-api] for file share create/delete/resize operations. This can work around SRP API throttling because the data plane API has almost no limits, but it fails when a firewall or virtual network setting is configured on the storage account. | `true` or `false` | No | `false` |
+| | **Following parameters are only for NFS protocol** | | |
+|rootSquashType | Specify root squashing behavior on the share. The default is `NoRootSquash` | `AllSquash`, `NoRootSquash`, `RootSquash` | No |
+|mountPermissions | Mounted folder permissions. The default is `0777`. If set to `0`, driver doesn't perform `chmod` after mount | `0777` | No |
+| | **Following parameters are only for VNet setting. For example, NFS, private end point** | | |
+|vnetResourceGroup | Specify VNet resource group where virtual network is defined. | Existing resource group name. | No | If empty, driver uses the `vnetResourceGroup` value in Azure cloud config file. |
+|vnetName | Virtual network name | Existing virtual network name. | No | If empty, driver uses the `vnetName` value in Azure cloud config file. |
+|subnetName | Subnet name | Existing subnet name of the agent node. | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
+|fsGroupChangePolicy | Indicates how volume's ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch`|
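+
+To illustrate how several of these parameters combine, the following storage class sketch dynamically provisions premium NFS file shares. The class name and share name prefix are hypothetical, and the `protocol` parameter selects NFS (which requires a Premium account type, as noted in the table):
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: azurefile-csi-nfs
+provisioner: file.csi.azure.com
+allowVolumeExpansion: true
+parameters:
+  skuName: Premium_LRS
+  protocol: nfs
+  shareNamePrefix: aksnfs
+  rootSquashType: NoRootSquash
+```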
+
+### Create a storage class
+
+A storage class is used to define how an Azure file share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure Files file share. Choose one of the following [Azure storage redundancy][storage-skus] SKUs for *skuName*:
+
+* *Standard_LRS* - standard locally redundant storage (LRS)
+* *Standard_GRS* - standard geo-redundant storage (GRS)
+* *Standard_ZRS* - standard zone redundant storage (ZRS)
+* *Standard_RAGRS* - standard read-access geo-redundant storage (RA-GRS)
+* *Premium_LRS* - premium locally redundant storage (LRS)
+* *Premium_ZRS* - premium zone redundant storage (ZRS)
+
+> [!NOTE]
+> The minimum size for a premium file share is 100 GB.
+
+For more information on Kubernetes storage classes for Azure Files, see [Kubernetes Storage Classes][kubernetes-storage-classes].
+
+Create a file named `azure-file-sc.yaml` and copy in the following example manifest. For more information on *mountOptions*, see the [Mount options][mount-options] section.
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: my-azurefile
+provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
+allowVolumeExpansion: true
+mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - actimeo=30
+parameters:
+ skuName: Premium_LRS
+```
+
+Create the storage class with the [kubectl apply][kubectl-apply] command:
+
+```bash
+kubectl apply -f azure-file-sc.yaml
+```
+
+### Create a persistent volume claim
+
+A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. The following YAML can be used to create a persistent volume claim that is *100 GB* in size with *ReadWriteMany* access. For more information on access modes, see the [Kubernetes persistent volume][access-modes] documentation.
+
+Now create a file named `azure-file-pvc.yaml` and copy in the following YAML. Make sure that the *storageClassName* matches the storage class created in the last step:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+  name: my-azurefile
+spec:
+  accessModes:
+    - ReadWriteMany
+  storageClassName: my-azurefile
+  resources:
+    requests:
+      storage: 100Gi
+```
+
+> [!NOTE]
+> If using the *Premium_LRS* sku for your storage class, the minimum value for *storage* must be *100Gi*.
+
+Create the persistent volume claim with the [kubectl apply][kubectl-apply] command:
+
+```bash
+kubectl apply -f azure-file-pvc.yaml
+```
+
+Once completed, the file share will be created. A Kubernetes secret is also created that includes connection information and credentials. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
+
+```bash
+kubectl get pvc my-azurefile
+```
+
+The output of the command resembles the following example:
+
+```console
+NAME           STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
+my-azurefile   Bound    pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477   100Gi      RWX            my-azurefile   5m
+```
+
+### Use the persistent volume
+
+The following YAML creates a pod that uses the persistent volume claim *my-azurefile* to mount the Azure Files file share at the */mnt/azure* path. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+
+Create a file named `azure-pvc-files.yaml`, and copy in the following YAML. Make sure that the *claimName* matches the PVC created in the last step.
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+  name: mypod
+spec:
+  containers:
+    - name: mypod
+      image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+      resources:
+        requests:
+          cpu: 100m
+          memory: 128Mi
+        limits:
+          cpu: 250m
+          memory: 256Mi
+      volumeMounts:
+        - mountPath: "/mnt/azure"
+          name: volume
+  volumes:
+    - name: volume
+      persistentVolumeClaim:
+        claimName: my-azurefile
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command.
+
+```bash
+kubectl apply -f azure-pvc-files.yaml
+```
+
+You now have a running pod with your Azure Files file share mounted in the */mnt/azure* directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command. The following condensed example output shows the volume mounted in the container:
+
+```console
+Containers:
+  mypod:
+    Container ID:  docker://053bc9c0df72232d755aa040bfba8b533fa696b123876108dec400e364d2523e
+    Image:         mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+    Image ID:      docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
+    State:         Running
+      Started:     Fri, 01 Mar 2019 23:56:16 +0000
+    Ready:         True
+    Mounts:
+      /mnt/azure from volume (rw)
+      /var/run/secrets/kubernetes.io/serviceaccount from default-token-8rv4z (ro)
+[...]
+Volumes:
+  volume:
+    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+    ClaimName:  my-azurefile
+    ReadOnly:   false
+[...]
+```
+
+### Mount options
+
+The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.13.0 and above. If dynamically creating the persistent volume with a storage class, mount options can be specified on the storage class object. The following example sets *0777*:
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: my-azurefile
+provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
+allowVolumeExpansion: true
+mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - actimeo=30
+parameters:
+ skuName: Premium_LRS
+```
+
+### Using Azure tags
+
+For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+
+## Statically provision a volume
+
+This section provides guidance for cluster administrators who want to create one or more persistent volumes that include details of an existing Azure Files share to use with a workload.
+
+### Static provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value |
+| | | | | |
+|volumeAttributes.resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver uses the same resource group name as current cluster. |
+|volumeAttributes.storageAccount | Specify an existing Azure storage account name. | storageAccountName | Yes ||
+|volumeAttributes.shareName | Specify an Azure file share name. | fileShareName | Yes ||
+|volumeAttributes.folderName | Specify a folder name in Azure file share. | folderName | No | If folder name doesn't exist in file share, mount would fail. |
+|volumeAttributes.protocol | Specify file share protocol. | `smb`, `nfs` | No | `smb` |
+|volumeAttributes.server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
+| | **Following parameters are only for SMB protocol** | | | |
+|volumeAttributes.secretName | Specify a secret name that stores storage account name and key. | | No |
+|volumeAttributes.secretNamespace | Specify a secret namespace. | `default`,`kube-system`, etc. | No | PVC namespace (`csi.storage.k8s.io/pvc/namespace`) |
+|nodeStageSecretRef.name | Specify a secret name that stores storage account name and key. | Existing secret name | Yes ||
+|nodeStageSecretRef.namespace | Specify a secret namespace. | Kubernetes namespace | Yes ||
+| | **The following parameters are only for the NFS protocol** | | | |
+|volumeAttributes.fsGroupChangePolicy | Indicates how a volume's ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch`, `Always`, `None` | No | `OnRootMismatch` |
+|volumeAttributes.mountPermissions | Specify mounted folder permissions. | | No | `0777` |
+
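+As an illustration, the parameters above map onto a `PersistentVolume` spec. The following is a sketch of an NFS static volume; the resource group, storage account, and share names are placeholders, not values used elsewhere in this article:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolume
+metadata:
+  name: azurefile-nfs
+spec:
+  capacity:
+    storage: 100Gi
+  accessModes:
+    - ReadWriteMany
+  persistentVolumeReclaimPolicy: Retain
+  csi:
+    driver: file.csi.azure.com
+    volumeHandle: unique-volumeid-nfs  # must be unique for every share in the cluster
+    volumeAttributes:
+      resourceGroup: myResourceGroup   # placeholder; optional if same as the cluster
+      storageAccount: myStorageAccount # placeholder
+      shareName: myNfsShare            # placeholder
+      protocol: nfs
+      mountPermissions: "0755"
+```
+
+Because the protocol is NFS, no secret is referenced; the SMB-only parameters in the table don't apply.
+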
+### Create an Azure file share
+
+Before you can use an Azure Files file share as a Kubernetes volume, you must create an Azure Storage account and the file share. In this article, you create the storage account and file share in the node resource group.
+
+1. Get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**.
+
+ ```azurecli
+ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+ ```
+
+ The output of the command resembles the following example:
+
+    ```console
+ MC_myResourceGroup_myAKSCluster_eastus
+ ```
+
+2. The following command creates a storage account using the Standard_LRS SKU. Replace the following placeholders:
+
+ * `myAKSStorageAccount` with the name of the storage account
+ * `nodeResourceGroupName` with the name of the resource group that the AKS cluster nodes are hosted in
+ * `location` with the name of the region to create the resource in. It should be the same region as the AKS cluster nodes.
+
+ ```azurecli
+ az storage account create -n myAKSStorageAccount -g nodeResourceGroupName -l location --sku Standard_LRS
+ ```
+
+3. Run the following command to export the connection string as an environment variable, which is used when you create the Azure file share in a later step. Replace `storageAccountName` and `resourceGroupName` with the storage account and resource group names from the previous step.
+
+ ```azurecli
+ export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n storageAccountName -g resourceGroupName -o tsv)
+ ```
+
+4. Create the file share using the [az storage share create][az-storage-share-create] command. Replace the placeholder `shareName` with a name you want to use for the share.
+
+ ```azurecli
+ az storage share create -n shareName --connection-string $AZURE_STORAGE_CONNECTION_STRING
+ ```
+
+5. Run the following command to export the storage account key as an environment variable. Replace `$AKS_PERS_RESOURCE_GROUP` and `$AKS_PERS_STORAGE_ACCOUNT_NAME` with the resource group and storage account names used in the previous steps.
+
+ ```azurecli
+ STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
+ ```
+
+6. Run the following commands to echo the storage account name and key. Copy these values; you need them when you create the Kubernetes volume later in this article.
+
+ ```azurecli
+ echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
+ echo Storage account key: $STORAGE_KEY
+ ```
+
+### Create a Kubernetes secret
+
+Kubernetes needs credentials to access the file share created in the previous step. These credentials are stored in a [Kubernetes secret][kubernetes-secret], which is referenced when you create a Kubernetes pod.
+
+Use the `kubectl create secret` command to create the secret. The following example creates a secret named *azure-secret* and populates *azurestorageaccountname* and *azurestorageaccountkey* with the values from the previous step. To use an existing Azure storage account, provide the account name and key instead.
+
+```bash
+kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
+```
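+
+Equivalently, the secret can be sketched declaratively. The values below are placeholders you must replace; Kubernetes base64-encodes `stringData` values for you:
+
+```yaml
+apiVersion: v1
+kind: Secret
+metadata:
+  name: azure-secret
+type: Opaque
+stringData:
+  azurestorageaccountname: myAKSStorageAccount   # placeholder: your storage account name
+  azurestorageaccountkey: <storage-account-key>  # placeholder: key from the previous step
+```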
+
+### Mount file share as an inline volume
+
+> [!NOTE]
+> Inline volumes can only access secrets in the same namespace as the pod. To specify a different secret namespace, use the [persistent volume example][persistent-volume-example] below instead.
+
+To mount the Azure Files file share into your pod, configure the volume in the container spec. Create a new file named `azure-files-pod.yaml` with the following contents. If you changed the name of the file share or secret, update *shareName* and *secretName*. You can also update the `mountPath`, which is the path where the file share is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ nodeSelector:
+ kubernetes.io/os: linux
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: mypod
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - name: azure
+ mountPath: /mnt/azure
+ volumes:
+ - name: azure
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeAttributes:
+ secretName: azure-secret # required
+ shareName: aksshare # required
+ mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional
+```
+
+Use the [kubectl apply][kubectl-apply] command to create the pod.
+
+```bash
+kubectl apply -f azure-files-pod.yaml
+```
+
+You now have a running pod with an Azure Files file share mounted at */mnt/azure*. You can verify the share is mounted successfully using the [kubectl describe][kubectl-describe] command:
+
+```bash
+kubectl describe pod mypod
+```
+
+### Mount file share as a persistent volume
+
+The following example demonstrates how to mount a file share as a persistent volume.
+
+1. Create a file named `azurefiles-pv.yaml` and copy in the following YAML. Under `csi`, update `resourceGroup`, `volumeHandle`, and `shareName`. For mount options, the default value for `file_mode` and `dir_mode` is *0777*.
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: azurefile
+ spec:
+ capacity:
+ storage: 5Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: azurefile-csi
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeHandle: unique-volumeid # make sure this volumeid is unique for every identical share in the cluster
+ volumeAttributes:
+ resourceGroup: resourceGroupName # optional, only set this when storage account is not in the same resource group as node
+ shareName: aksshare
+ nodeStageSecretRef:
+ name: azure-secret
+ namespace: default
+ mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - nosharesock
+ - nobrl
+ ```
+
+2. Run the following command to create the persistent volume using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f azurefiles-pv.yaml
+ ```
+
+3. Create an *azurefiles-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*, and copy in the following YAML.
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: azurefile
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: azurefile-csi
+ volumeName: azurefile
+ resources:
+ requests:
+ storage: 5Gi
+ ```
+
+4. Use the [kubectl apply][kubectl-apply] command to create the *PersistentVolumeClaim*.
+
+    ```bash
+    kubectl apply -f azurefiles-mount-options-pvc.yaml
+    ```
+
+5. Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume* by running the following command.
+
+ ```bash
+ kubectl get pvc azurefile
+ ```
+
+ The output from the command resembles the following example:
+
+ ```console
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+    azurefile   Bound    azurefile   5Gi        RWX            azurefile-csi   5s
+ ```
+
+6. Update your container spec to reference your *PersistentVolumeClaim* and update your pod. For example:
+
+ ```yaml
+ ...
+ volumes:
+ - name: azure
+ persistentVolumeClaim:
+ claimName: azurefile
+ ```
+
+7. Because a pod spec can't be updated in place, use [kubectl delete][kubectl-delete] and [kubectl apply][kubectl-apply] commands to delete and then re-create the pod:
+
+ ```bash
+ kubectl delete pod mypod
+
+ kubectl apply -f azure-files-pod.yaml
+ ```
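+
+Assembled in full, the updated pod manifest combines the earlier `azure-files-pod.yaml` spec with the claim reference from step 6. This is a sketch built only from the examples above:
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+  name: mypod
+spec:
+  nodeSelector:
+    kubernetes.io/os: linux
+  containers:
+    - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+      name: mypod
+      resources:
+        requests:
+          cpu: 100m
+          memory: 128Mi
+        limits:
+          cpu: 250m
+          memory: 256Mi
+      volumeMounts:
+        - name: azure
+          mountPath: /mnt/azure
+  volumes:
+    - name: azure
+      persistentVolumeClaim:
+        claimName: azurefile   # the PersistentVolumeClaim created earlier
+```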
+
+## Next steps
+
+For Azure Files CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
+
+For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+
+<!-- LINKS - external -->
+[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
+[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
+[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share
+[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file
+[kubernetes-persistent-volume]: https://kubernetes.io/docs/concepts/storage/persistent-volumes
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+[data-plane-api]: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+
+<!-- LINKS - internal -->
+[azure-storage-account]: ../storage/common/storage-introduction.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[operator-best-practices-storage]: operator-best-practices-storage.md
+[concepts-storage]: concepts-storage.md
+[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
+[use-tags]: use-tags.md
+[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
+[storage-skus]: ../storage/common/storage-redundancy.md
+[mount-options]: #mount-options
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[az-storage-share-create]: /cli/azure/storage/share#az-storage-share-create
+[storage-tiers]: ../storage/files/storage-files-planning.md#storage-tiers
+[access-tiers-overview]: ../storage/blobs/access-tiers-overview.md
+[tag-resources]: ../azure-resource-manager/management/tag-resources.md
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster.- Previously updated : 10/13/2022 Last updated : 01/18/2023 # Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
- [Volume clone](#clone-volumes) - [Resize disk PV without downtime (Preview)](#resize-a-persistent-volume-without-downtime-preview)
+> [!NOTE]
+> Depending on the VM SKU that's being used, the Azure Disks CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
+ ## Storage class driver dynamic disks parameters |Name | Meaning | Available Value | Mandatory | Default value
In addition to in-tree driver features, Azure Disks CSI driver supports the foll
|LogicalSectorSize | Logical sector size in bytes for Ultra disk. Supported values are 512 and 4096. 4096 is the default. | `512`, `4096` | No | `4096`| |tags | Azure Disk [tags](../azure-resource-manager/management/tag-resources.md) | Tag format: `key1=val1,key2=val2` | No | ""| |diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest](../virtual-machines/windows/disk-encryption.md) | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
-|diskEncryptionType | Encryption type of the disk encryption set | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
+|diskEncryptionType | Encryption type of the disk encryption set. | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
|writeAcceleratorEnabled | [Write Accelerator on Azure Disks](../virtual-machines/windows/how-to-enable-write-accelerator.md) | `true`, `false` | No | ""| |networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | `AllowAll`, `DenyAll`, `AllowPrivate` | No | `AllowAll`|
-|diskAccessID | ARM ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
+|diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
|enableBursting | [Enable on-demand bursting](../virtual-machines/disk-bursting.md) beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disk and when the disk size > 512 GB. Ultra and shared disk isn't supported. Bursting is disabled by default. | `true`, `false` | No | `false`| |useragent | User agent used for [customer usage attribution](../marketplace/azure-partner-customer-usage-attribution.md)| | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`| |enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this parameter can speed up disk attachment, you may encounter Azure API throttling limit when there are large number of volume attachments. | `true`, `false` | No | `false`|
-|subscriptionID | Specify Azure subscription ID where the Azure Disks will be created | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
+|subscriptionID | Specify the Azure subscription ID where the Azure disks are created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
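+
+Several of these parameters can be combined in a custom storage class. The following is an illustrative sketch, not a recommended configuration; the tag values are placeholders, and `skuName` comes from the full parameter table in the CSI driver documentation:
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: managed-csi-custom
+provisioner: disk.csi.azure.com
+parameters:
+  skuName: Premium_LRS
+  tags: environment=test,owner=platform-team   # placeholder tags, format key1=val1,key2=val2
+  networkAccessPolicy: DenyAll                 # prevent SAS URI generation for disks/snapshots
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+```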
## Use CSI persistent volumes with Azure Disks
-A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disk for use by a single pod in an AKS cluster. For static provisioning, see [Create a static volume with Azure Disks](azure-disk-volume.md).
+A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disk for use by a single pod in an AKS cluster. For static provisioning, see [Create a static volume with Azure Disks](azure-csi-disk-storage-provision.md#statically-provision-a-volume).
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
The output of the command resembles the following example:
## Next steps - To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver][azure-files-csi].-- To learn how to use CSI driver for Azure Blob storage (preview), see [Use Azure Blob storage with CSI driver][azure-blob-csi] (preview).
+- To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-csi].
- For more information about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. <!-- LINKS - external -->
The output of the command resembles the following example:
[az-on-demand-bursting]: ../virtual-machines/disk-bursting.md#on-demand-bursting [enable-on-demand-bursting]: ../virtual-machines/disks-enable-bursting.md?tabs=azure-cli [az-premium-ssd]: ../virtual-machines/disks-types.md#premium-ssds
+[general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
- Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
-description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS)
-- Previously updated : 05/17/2022--
-#Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
--
-# Create a static volume with Azure disks in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If a single pod needs access to storage, you can use Azure disks to present a native volume for application use. This article shows you how to manually create an Azure disk and attach it to a pod in AKS.
-
-> [!NOTE]
-> An Azure disk can only be mounted to a single pod at a time. If you need to share a persistent volume across multiple pods, use [Azure Files][azure-files-volume].
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-If you want to interact with Azure disks on an AKS cluster with 1.20 or previous version, see the [Kubernetes plugin for Azure disks][kubernetes-disks].
-
-The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count will change based on the size of the node/node pool. Run the following command to determine the number of volumes that can be allocated per node:
-
-```console
-kubectl get CSINode <nodename> -o yaml
-```
-
-## Storage class static provisioning
-
-The following table describes the Storage Class parameters for the Azure disk CSI driver static provisioning:
-
-|Name | Meaning | Available Value | Mandatory | Default value|
-| | | | | |
-|volumeHandle| Azure disk URI | `/subscriptions/{sub-id}/resourcegroups/{group-name}/providers/microsoft.compute/disks/{disk-id}` | Yes | N/A|
-|volumeAttributes.fsType | File system type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows |
-|volumeAttributes.partition | Partition number of the existing disk (only supported on Linux) | `1`, `2`, `3` | No | Empty (no partition) </br>- Make sure partition format is like `-part1` |
-|volumeAttributes.cachingMode | [Disk host cache setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching)| `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
-
-## Create an Azure disk
-
-When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If instead you created the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role to the disk's resource group. In this exercise, you're going to create the disk in the same resource group as your cluster.
-
-1. Identify the resource group name using the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` parameter. The following example gets the node resource group for the AKS cluster name *myAKSCluster* in the resource group name *myResourceGroup*:
-
- ```azurecli-interactive
- $ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-
- MC_myResourceGroup_myAKSCluster_eastus
- ```
-
-2. Create a disk using the [az disk create][az-disk-create] command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as *myAKSDisk*. The following example creates a *20*GiB disk, and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk.
-
- ```azurecli-interactive
- az disk create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
- --name myAKSDisk \
- --size-gb 20 \
- --query id --output tsv
- ```
-
- > [!NOTE]
- > Azure disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See [Pricing and Performance of Managed Disks][managed-disk-pricing-performance].
-
- The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section.
-
- ```console
- /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
- ```
-
-## Mount disk as a volume
-
-1. Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with disk resource ID from the previous step. For example:
-
- ```yaml
- apiVersion: v1
- kind: PersistentVolume
- metadata:
- name: pv-azuredisk
- spec:
- capacity:
- storage: 20Gi
- accessModes:
- - ReadWriteOnce
- persistentVolumeReclaimPolicy: Retain
- storageClassName: managed-csi
- csi:
- driver: disk.csi.azure.com
- readOnly: false
- volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myAKSCluster_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
- volumeAttributes:
- fsType: ext4
- ```
-
-2. Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
-
- ```yaml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: pvc-azuredisk
- spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: 20Gi
- volumeName: pv-azuredisk
- storageClassName: managed-csi
- ```
-
-3. Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*, referencing the two YAML files created earlier:
-
- ```console
- kubectl apply -f pv-azuredisk.yaml
- kubectl apply -f pvc-azuredisk.yaml
- ```
-
-4. To verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*, run the
-following command:
-
- ```console
- $ kubectl get pvc pvc-azuredisk
-
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s
- ```
-
-5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
-
- ```yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: mypod
- spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: azure
- mountPath: /mnt/azure
- volumes:
- - name: azure
- persistentVolumeClaim:
- claimName: pvc-azuredisk
- ```
-
-6. Run the following command to apply the configuration and mount the volume, referencing the YAML
-configuration file created in the previous steps:
-
- ```console
- kubectl apply -f azure-disk-pod.yaml
- ```
-
-## Next steps
-
-To learn about our recommended storage and backup practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-<!-- LINKS - external -->
-[kubernetes-disks]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_disk/README.md
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
-
-<!-- LINKS - internal -->
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-group-list]: /cli/azure/group#az_group_list
-[az-resource-show]: /cli/azure/resource#az_resource_show
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[install-azure-cli]: /cli/azure/install-azure-cli
-[azure-files-volume]: azure-files-volume.md
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
aks Azure Disks Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disks-dynamic-pv.md
- Title: Dynamically create Azure Disks volume-
-description: Learn how to dynamically create a persistent volume with Azure Disks in Azure Kubernetes Service (AKS)
-- Previously updated : 07/21/2022--
-#Customer intent: As a developer, I want to learn how to dynamically create and attach storage to pods in AKS.
--
-# Dynamically create and use a persistent volume with Azure Disks in Azure Kubernetes Service (AKS)
-
-A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.
-
-> [!NOTE]
-> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count will change based on the size of the node/node pool. Run the following command to determine the number of volumes that can be allocated per node:
-
-```console
-kubectl get CSINode <nodename> -o yaml
-```
-
-## Built-in storage classes
-
-A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
-
-Each AKS cluster includes four pre-created storage classes, two of them configured to work with Azure Disks:
-
-* The *default* storage class provisions a standard SSD Azure Disk.
- * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance.
-* The *managed-csi-premium* storage class provisions a premium Azure Disk.
- * Premium disks are backed by SSD-based high-performance, low-latency disks. Perfect for VMs running production workloads. If the AKS nodes in your cluster use premium storage, select the *managed-premium* class.
-
-If you use one of the default storage classes, you can't update the volume size after the storage class is created. To be able to update the volume size after a storage class is created, add the line `allowVolumeExpansion: true` to one of the default storage classes, or you can create your own custom storage class. It's not supported to reduce the size of a PVC (to prevent data loss). You can edit an existing storage class by using the `kubectl edit sc` command.
-
-For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger](../virtual-machines/premium-storage-performance.md#disk-caching).
-
-For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts].
-
-Use the [kubectl get sc][kubectl-get] command to see the pre-created storage classes. The following example shows the pre-create storage classes available within an AKS cluster:
-
-```bash
-kubectl get sc
-```
-
-The output of the command resembles the following example:
-
-```bash
-NAME PROVISIONER AGE
-default (default) disk.csi.azure.com 1h
-managed-csi disk.csi.azure.com 1h
-```
-
-> [!NOTE]
-> Persistent volume claims are specified in GiB but Azure managed disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. For more information, see [Pricing and performance of managed disks][managed-disk-pricing-performance].
-
-## Create a persistent volume claim
-
-A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.
-
-Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5 GB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class.
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: azure-managed-disk
-spec:
- accessModes:
- - ReadWriteOnce
- storageClassName: managed-csi
- resources:
- requests:
- storage: 5Gi
-```
-
-> [!TIP]
-> To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*.
-
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file:
-
-```bash
-kubectl apply -f azure-pvc.yaml
-```
-
-The output of the command resembles the following example:
-
-```bash
-persistentvolumeclaim/azure-managed-disk created
-```
-
-## Use the persistent volume
-
-Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *azure-managed-disk* to mount the Azure Disk at the path `/mnt/azure`. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
-
-Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest.
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: azure-managed-disk
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
-```console
-kubectl apply -f azure-pvc-disk.yaml
-
-pod/mypod created
-```
-
-You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod via `kubectl describe pod mypod`, as shown in the following condensed example:
-
-```bash
-kubectl describe pod mypod
-```
-
-The output of the command resembles the following example:
-
-```bash
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: azure-managed-disk
- ReadOnly: false
- default-token-smm2n:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-smm2n
- Optional: false
-[...]
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 2m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
- Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-smm2n"
- Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
-[...]
-```
-
-## Use Ultra Disks
-
-To use ultra disk, see [Use Ultra Disks on Azure Kubernetes Service (AKS)](use-ultra-disks.md).
-
-## Back up a persistent volume
-
-To back up the data in your persistent volume, take a snapshot of the managed disk for the volume. You can then use this snapshot to create a restored disk and attach to pods as a means of restoring the data.
-
-First, get the volume name with the `kubectl get pvc` command, such as for the PVC named *azure-managed-disk*:
-
-```console
-$ kubectl get pvc azure-managed-disk
-
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-azure-managed-disk Bound pvc-faf0f176-8b8d-11e8-923b-deb28c58d242 5Gi RWO managed-csi 3m
-```
-
-This volume name forms the underlying Azure Disk name. Query for the disk ID with [az disk list][az-disk-list] and provide your PVC volume name, as shown in the following example:
-
-```azurecli
-az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv
-
-/subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
-```
-
-Use the disk ID to create a snapshot disk with [az snapshot create][az-snapshot-create]. The following example creates a snapshot named *pvcSnapshot* in the same resource group as the AKS cluster (*MC_myResourceGroup_myAKSCluster_eastus*). You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to.
-
-```azurecli
-az snapshot create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
- --name pvcSnapshot \
- --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
-```
-
-Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
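-
-If you want to confirm the snapshot is ready before restoring from it, you can check its provisioning state. This sketch uses the standard `az snapshot show` command; it should print `Succeeded` once the snapshot has completed:
-
-```azurecli
-az snapshot show \
-    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
-    --name pvcSnapshot \
-    --query provisioningState -o tsv
-```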
-
-## Restore and use a snapshot
-
-To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with [az disk create][az-disk-create]. This operation preserves the original snapshot, so the original data remains available if you need it again. The following example creates a disk named *pvcRestored* from the snapshot named *pvcSnapshot*:
-
-```azurecli
-az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot
-```
-
-To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the [az disk show][az-disk-show] command. The following example gets the disk ID for *pvcRestored* created in the previous step:
-
-```azurecli
-az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv
-```
-
-Create a pod manifest named `azure-restored.yaml` and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at */mnt/azure*:
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypodrestored
-spec:
- containers:
- - name: mypodrestored
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- azureDisk:
- kind: Managed
- diskName: pvcRestored
- diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
-```bash
-$ kubectl apply -f azure-restored.yaml
-```
-
-The output of the command resembles the following example:
-
-```bash
-pod/mypodrestored created
-```
-
-You can use `kubectl describe pod mypodrestored` to view details of the pod, such as the following condensed example that shows the volume information:
-
-```bash
-kubectl describe pod mypodrestored
-```
-
-The output of the command resembles the following example:
-
-```bash
-[...]
-Volumes:
- volume:
- Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
- DiskName: pvcRestored
- DiskURI: /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
- Kind: Managed
- FSType: ext4
- CachingMode: ReadWrite
- ReadOnly: false
-[...]
-```
-
-## Using Azure tags
-
-For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
-## Next steps
-
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-Learn more about Kubernetes persistent volumes using Azure Disks.
-
-> [!div class="nextstepaction"]
-> [Kubernetes plugin for Azure Disks][azure-disk-volume]
-
-<!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
-
-<!-- LINKS - internal -->
-[azure-disk-volume]: azure-disk-volume.md
-[azure-files-pvc]: azure-files-dynamic-pv.md
-[premium-storage]: ../virtual-machines/disks-types.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[use-tags]: use-tags.md
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster.- Previously updated : 01/03/2023-- Last updated : 01/18/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to the original in-tree driver features, Azure Files CSI driver supp
| | | | | |skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `StandardSSD_LRS`<br> Minimum file share size for Premium account type is 100 GiB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.| |fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`| Yes | `ext4` for Linux|
-|location | Specify Azure region where Azure storage account will be created. | `eastus`, `westus`, etc. | No | If empty, driver uses the same location name as current AKS cluster.|
-|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
+|location | Specify Azure region where Azure storage account will be created. | For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
+|resourceGroup | Specify the resource group where the Azure Disks will be created. | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
|shareName | Specify Azure file share name | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
-|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, hyphens, and length should be less than 21 characters. | No |
+|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, and hyphens, and must be fewer than 21 characters long. | No |
|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If folder name does not exist in file share, mount will fail. | |shareAccessTier | [Access tier for file share][storage-tiers] | General purpose v2 account can choose between `TransactionOptimized` (default), `Hot`, and `Cool`. Premium storage account type for file shares only. | No | Empty. Use default setting for different storage account types.| |accountAccessTier | [Access tier for storage account][access-tiers-overview] | Standard account can choose `Hot` or `Cool`, and Premium account can only choose `Premium`. | No | Empty. Use default setting for different storage account types. |
In addition to the original in-tree driver features, Azure Files CSI driver supp
|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` | |requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` | |storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. |
-|tags | [tags][tag-resources] are created in newly created storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
+|tags | [Tags][tag-resources] are created in the new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
|matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` | | | **Following parameters are only for SMB protocol** | | | |subscriptionID | Specify Azure subscription ID where Azure file share is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided. |
-|storeAccountKey | Specify whether to store account key to k8s secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
+|storeAccountKey | Specify whether to store account key to Kubernetes secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
|secretName | Specify secret name to store account key. | | No | |secretNamespace | Specify the namespace of secret to store account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc | No | Pvc namespace, for example `csi.storage.k8s.io/pvc/namespace` | |useDataPlaneAPI | Specify whether to use [data plane API][data-plane-api] for file share create/delete/resize. This could solve the SRP API throttling issue because the data plane API has almost no limit, while it would fail when there is firewall or Vnet setting on storage account. | `true` or `false` | No | `false` |
In addition to the original in-tree driver features, Azure Files CSI driver supp
## Use a persistent volume with Azure Files
-A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share][azure-files-pvc-manual].
+A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share](azure-files-storage-provision.md#statically-provision-a-volume).
With Azure Files shares, there is no limit as to how many can be mounted on a node.
A storage class is used to define how an Azure file share is created. A storage
> [!NOTE] > Azure Files supports Azure Premium Storage. The minimum premium file share capacity is 100 GiB.
-When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
+When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
- `azurefile-csi`: Uses Azure Standard Storage to create an Azure Files share. - `azurefile-csi-premium`: Uses Azure Premium Storage to create an Azure Files share.
You can request a larger volume for a PVC. Edit the PVC object, and specify a la
> [!NOTE] > A new PV is never created to satisfy the claim. Instead, an existing volume is resized.
-In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100GiB file share. We can confirm that by running:
+In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100 GiB file share. We can confirm that by running:
```bash kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
The output of the commands resembles the following example:
[azure-disk-csi]: azure-disk-csi.md [azure-blob-csi]: azure-blob-csi.md [persistent-volume-claim-overview]: concepts-storage.md#persistent-volume-claims
+[access-tier-file-share]: ../storage/files/storage-files-planning.md#storage-tiers
+[access-tier-storage-account]: ../storage/blobs/access-tiers-overview.md
+[azure-tags]: ../azure-resource-manager/management/tag-resources.md
[azure-disk-volume]: azure-disk-volume.md [azure-files-pvc]: azure-files-dynamic-pv.md [azure-files-pvc-manual]: azure-files-volume.md
aks Azure Files Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-dynamic-pv.md
- Title: Dynamically create Azure Files share-
-description: Learn how to dynamically create a persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 05/31/2022--
-#Customer intent: As a developer, I want to learn how to dynamically create and attach storage using Azure Files to pods in AKS.
--
-# Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS)
-
-A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-## Create a storage class
-
-A storage class is used to define how an Azure file share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure file shares. Choose one of the following [Azure storage redundancy][storage-skus] options for *skuName*:
-
-* *Standard_LRS* - standard locally redundant storage (LRS)
-* *Standard_GRS* - standard geo-redundant storage (GRS)
-* *Standard_ZRS* - standard zone redundant storage (ZRS)
-* *Standard_RAGRS* - standard read-access geo-redundant storage (RA-GRS)
-* *Premium_LRS* - premium locally redundant storage (LRS)
-* *Premium_ZRS* - premium zone redundant storage (ZRS)
-
-> [!NOTE]
-> The minimum premium file share size is 100 GiB.
-
-For more information on Kubernetes storage classes for Azure Files, see [Kubernetes Storage Classes][kubernetes-storage-classes].
-
-Create a file named `azure-file-sc.yaml` and copy in the following example manifest. For more information on *mountOptions*, see the [Mount options][mount-options] section.
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: my-azurefile
-provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
-allowVolumeExpansion: true
-mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=0
- - gid=0
- - mfsymlinks
- - cache=strict
- - actimeo=30
-parameters:
- skuName: Premium_LRS
-```
-
-Create the storage class with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f azure-file-sc.yaml
-```
-
-## Create a persistent volume claim
-
-A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. The following YAML can be used to create a persistent volume claim *100 GiB* in size with *ReadWriteMany* access. For more information on access modes, see the [Kubernetes persistent volume][access-modes] documentation.
-
-Now create a file named `azure-file-pvc.yaml` and copy in the following YAML. Make sure that the *storageClassName* matches the storage class created in the last step:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: my-azurefile
-spec:
- accessModes:
- - ReadWriteMany
- storageClassName: my-azurefile
- resources:
- requests:
- storage: 100Gi
-```
-
-> [!NOTE]
-> If using the *Premium_LRS* sku for your storage class, the minimum value for *storage* must be *100Gi*.
-
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f azure-file-pvc.yaml
-```
-
-Once completed, the file share will be created. A Kubernetes secret is also created that includes connection information and credentials. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
-
-```console
-$ kubectl get pvc my-azurefile
-
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 100Gi RWX my-azurefile 5m
-```
-
-## Use the persistent volume
-
-The following YAML creates a pod that uses the persistent volume claim *my-azurefile* to mount the Azure file share at the */mnt/azure* path. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
-
-Create a file named `azure-pvc-files.yaml`, and copy in the following YAML. Make sure that the *claimName* matches the PVC created in the last step.
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command.
-
-```console
-kubectl apply -f azure-pvc-files.yaml
-```
-
-You now have a running pod with your Azure Files share mounted in the */mnt/azure* directory. This configuration can be seen when inspecting your pod via `kubectl describe pod mypod`. The following condensed example output shows the volume mounted in the container:
-
-```
-Containers:
- mypod:
- Container ID: docker://053bc9c0df72232d755aa040bfba8b533fa696b123876108dec400e364d2523e
- Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- Image ID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
- State: Running
- Started: Fri, 01 Mar 2019 23:56:16 +0000
- Ready: True
- Mounts:
- /mnt/azure from volume (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from default-token-8rv4z (ro)
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: my-azurefile
- ReadOnly: false
-[...]
-```
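-
-You can also confirm the share is mounted from inside the pod with `kubectl exec`:
-
-```console
-kubectl exec mypod -- df -h /mnt/azure
-```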
-
-## Mount options
-
-The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.13.0 and above. If dynamically creating the persistent volume with a storage class, mount options can be specified on the storage class object. The following example sets *0777*:
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: my-azurefile
-provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
-allowVolumeExpansion: true
-mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=0
- - gid=0
- - mfsymlinks
- - cache=strict
- - actimeo=30
-parameters:
- skuName: Premium_LRS
-```
-
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
-## Next steps
-
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-For storage class parameters, see [Dynamic Provision](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#dynamic-provision).
-
-Learn more about Kubernetes persistent volumes using Azure Files.
-
-> [!div class="nextstepaction"]
-> [Kubernetes plugin for Azure Files][kubernetes-files]
-
-<!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[pv-static]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#static
-[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
-
-<!-- LINKS - internal -->
-[az-group-create]: /cli/azure/group#az_group_create
-[az-group-list]: /cli/azure/group#az_group_list
-[az-resource-show]: /cli/azure/aks#az_aks_show
-[az-storage-account-create]: /cli/azure/storage/account#az_storage_account_create
-[az-storage-create]: /cli/azure/storage/account#az_storage_account_create
-[az-storage-key-list]: /cli/azure/storage/account/keys#az_storage_account_keys_list
-[az-storage-share-create]: /cli/azure/storage/share#az_storage_share_create
-[mount-options]: #mount-options
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[storage-skus]: ../storage/common/storage-redundancy.md
-[kubernetes-rbac]: concepts-identity.md#role-based-access-controls-rbac
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
-[use-tags]: use-tags.md
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
- Title: Manually create Azure Files share-
-description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 12/26/2022--
-#Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
---
-# Manually create and use a volume with Azure Files share in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to manually create an Azure Files share and attach it to a pod in AKS.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-If you want to interact with Azure Files on an AKS cluster with 1.20 or previous version, see the [Kubernetes plugin for Azure Files][kubernetes-files].
-
-## Create an Azure file share
-
-Before you can use Azure Files as a Kubernetes volume, you must create an Azure Storage account and the file share. The following commands create a resource group named *myAKSShare*, a storage account, and a Files share named *aksshare*:
-
-```azurecli-interactive
-# Change these four parameters as needed for your own environment
-AKS_PERS_STORAGE_ACCOUNT_NAME=mystorageaccount$RANDOM
-AKS_PERS_RESOURCE_GROUP=myAKSShare
-AKS_PERS_LOCATION=eastus
-AKS_PERS_SHARE_NAME=aksshare
-
-# Create a resource group
-az group create --name $AKS_PERS_RESOURCE_GROUP --location $AKS_PERS_LOCATION
-
-# Create a storage account
-az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -l $AKS_PERS_LOCATION --sku Standard_LRS
-
-# Export the connection string as an environment variable; it's used when creating the Azure file share
-export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -o tsv)
-
-# Create the file share
-az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
-
-# Get storage account key
-STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
-
-# Echo storage account name and key
-echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
-echo Storage account key: $STORAGE_KEY
-```
-
-Make a note of the storage account name and key shown at the end of the script output. These values are needed when you create the Kubernetes volume in one of the following steps.
-
-## Create a Kubernetes secret
-
-Kubernetes needs credentials to access the file share created in the previous step. These credentials are stored in a [Kubernetes secret][kubernetes-secret], which is referenced when you create a Kubernetes pod.
-
-Use the `kubectl create secret` command to create the secret. The following example creates a secret named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey* from the previous step. To use an existing Azure storage account, provide the account name and key.
-
-```console
-kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
-```
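-
-To confirm the secret exists and contains the *azurestorageaccountname* and *azurestorageaccountkey* values, you can describe it:
-
-```console
-kubectl describe secret azure-secret
-```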
-
-## Mount file share as an inline volume
-> [!NOTE]
-> Inline volumes can only access secrets in the same namespace as the pod. To specify a different secret namespace, use the [persistent volume example][persistent-volume-example] below instead.
-
-To mount the Azure Files share into your pod, configure the volume in the container spec. Create a new file named `azure-files-pod.yaml` with the following contents. If you changed the name of the Files share or secret name, update the *shareName* and *secretName*. If desired, update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: azure
- mountPath: /mnt/azure
- volumes:
- - name: azure
- csi:
- driver: file.csi.azure.com
- readOnly: false
- volumeAttributes:
- secretName: azure-secret # required
- shareName: aksshare # required
- mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional
-```
-
-Use the `kubectl` command to create the pod.
-
-```console
-kubectl apply -f azure-files-pod.yaml
-```
-
-You now have a running pod with an Azure Files share mounted at */mnt/azure*. You can use `kubectl describe pod mypod` to verify the share is mounted successfully.
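For a quick check from inside the pod, commands like the following can help (a sketch; they assume the pod is running, and output will vary):

```console
kubectl exec mypod -- df -h /mnt/azure
kubectl exec mypod -- ls /mnt/azure
```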
-
-## Mount file share as a persistent volume
-> [!NOTE]
-> For SMB mounts, if the `nodeStageSecretRef` field isn't provided in the PV configuration, the Azure Files driver tries to get the `azure-storage-account-{accountname}-secret` secret in the pod namespace. If that secret doesn't exist, it gets the account key directly from the Azure storage account API using the kubelet identity (make sure the kubelet identity has reader access to the storage account).
-> The default value for *fileMode* and *dirMode* is *0777*.
-
-Create an *azurefile-mount-options-pv.yaml* file with a *PersistentVolume*. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: azurefile
-spec:
- capacity:
- storage: 5Gi
- accessModes:
- - ReadWriteMany
- persistentVolumeReclaimPolicy: Retain
- storageClassName: azurefile-csi
- csi:
- driver: file.csi.azure.com
- readOnly: false
- volumeHandle: unique-volumeid # make sure volumeid is unique for every identical share in the cluster
- volumeAttributes:
- resourceGroup: EXISTING_RESOURCE_GROUP_NAME # optional, only set this when storage account is not in the same resource group as agent node
- shareName: aksshare
- nodeStageSecretRef:
- name: azure-secret
- namespace: default
- mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=0
- - gid=0
- - mfsymlinks
- - cache=strict
- - nosharesock
- - nobrl
-```
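The `volumeHandle` must be unique for every identical share in the cluster. One simple way to generate one (a sketch; any cluster-unique string works) is to combine the share name with a suffix:

```shell
# Derive a cluster-unique volumeHandle from the share name plus a suffix.
share_name="aksshare"
suffix=$(date +%s)            # or uuidgen, if available
volume_handle="${share_name}-${suffix}"
echo "$volume_handle"
```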
-
-Create an *azurefile-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: azurefile
-spec:
- accessModes:
- - ReadWriteMany
- storageClassName: azurefile-csi
- volumeName: azurefile
- resources:
- requests:
- storage: 5Gi
-```
-
-Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*.
-
-```console
-kubectl apply -f azurefile-mount-options-pv.yaml
-kubectl apply -f azurefile-mount-options-pvc.yaml
-```
-
-Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*.
-
-```console
-$ kubectl get pvc azurefile
-
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-azurefile Bound azurefile 5Gi RWX azurefile 5s
-```
-
-Update your container spec to reference your *PersistentVolumeClaim* and update your pod. For example:
-
-```yaml
-...
- volumes:
- - name: azure
- persistentVolumeClaim:
- claimName: azurefile
-```
-
-As the pod spec can't be updated in place, use `kubectl` commands to delete, and then re-create the pod:
-
-```console
-kubectl delete pod mypod
-
-kubectl apply -f azure-files-pod.yaml
-```
-
-## Next steps
-
-For Azure Files CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
-
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-<!-- LINKS - external -->
-[kubectl-create]: https://kubernetes.io/docs/user-guide/kubectl/v1.8/#create
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
-[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share
-
-<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
-[use-tags]: use-tags.md
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Certificate Rotation in Azure Kubernetes Service (AKS)
description: Learn certificate rotation in an Azure Kubernetes Service (AKS) cluster. Previously updated : 09/12/2022 Last updated : 01/19/2023 # Certificate rotation in Azure Kubernetes Service (AKS)
This article shows you how certificate rotation works in your AKS cluster.
This article requires that you are running the Azure CLI version 2.0.77 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+## Limitation
+
+Certificate rotation is not supported for stopped AKS clusters.
+ ## AKS certificates, Certificate Authorities, and Service Accounts AKS generates and uses the following certificates, Certificate Authorities, and Service Accounts:
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS) description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims- Previously updated : 08/10/2022 Last updated : 01/18/2023
Kubernetes typically treats individual pods as ephemeral, disposable resources.
Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview]. > [!NOTE]
-> The Azure Disks CSI driver has a limit of 32 volumes per node. Other Azure Storage services don't have an equivalent limit.
+> Depending on the VM SKU that's being used, the Azure Disks CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
### Azure Disks
For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-d
| `azureblob-nfs-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using the NFS v3 protocol. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. | | `azureblob-fuse-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using BlobFuse. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. |
-Unless you specify a StorageClass for a persistent volume, the default StorageClass will be used. Ensure volumes use the appropriate storage you need when requesting persistent volumes.
+Unless you specify a StorageClass for a persistent volume, the default StorageClass will be used. Ensure volumes use the appropriate storage you need when requesting persistent volumes.
> [!IMPORTANT]
-> Starting in Kubernetes version 1.21, AKS will use CSI drivers only and by default. The `default` class will be the same as `managed-csi`
+> Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. While existing in-tree persistent volumes continue to function, starting with version 1.26, AKS will no longer support volumes created using in-tree driver and storage provisioned for files and disk.
+>
+> The `default` class will be the same as `managed-csi`.
You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod:
For more information on core Kubernetes and AKS concepts, see the following arti
[operator-best-practices-storage]: operator-best-practices-storage.md [csi-storage-drivers]: csi-storage-drivers.md [azure-blob-csi]: azure-blob-csi.md
+[general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
+
+ Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
+description: Learn how to configure Azure CNI (advanced) networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
++ Last updated : 01/09/2023+++
+# Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
+
+A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, which results in the need to rebuild your entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allocating pod IPs from a subnet separate from the subnet hosting the AKS cluster.
+
+It offers the following benefits:
+
+* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node.
+* **Scalable and flexible**: Node and pod subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same VNet. You can also configure a separate pod subnet for a node pool.
+* **High performance**: Since pods are assigned VNet IPs, they have direct connectivity to other cluster pods and resources in the VNet. The solution supports very large clusters without any degradation in performance.
+* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pods in a node pool using a Virtual Network NAT, and using NSGs to filter traffic between node pools.
+* **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this new solution.
+
+This article shows you how to use Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS.
+
+## Prerequisites
+
+> [!NOTE]
+> When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service isn't supported.
+
+* Review the [prerequisites](configure-azure-cni.md#prerequisites) for configuring basic Azure CNI networking in AKS, as the same prerequisites apply to this article.
+* Review the [deployment parameters](configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS, as the same parameters apply.
+* Only Linux node clusters and node pools are supported.
+* AKS Engine and DIY clusters aren't supported.
+* Azure CLI version `2.37.0` or later.
+
+## Plan IP addressing
+
+Planning your IP addressing is much simpler with this feature. Since the nodes and pods scale independently, their address spaces can also be planned separately. Since pod subnets can be configured to the granularity of a node pool, you can always add a new subnet when you add a node pool. The system pods in a cluster/node pool also receive IPs from the pod subnet, so this behavior needs to be accounted for.
+
+IPs are allocated to nodes in batches of 16. Pod subnet IP allocation should be planned with a minimum of 16 IPs per node in the cluster; nodes will request 16 IPs on startup and will request another batch of 16 any time there are <8 IPs unallocated in their allotment.
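The batching rule above can be sketched as follows (a simulation of a single node's allotment, not driver code):

```shell
# Simulate one node: start with a batch of 16 IPs and request another
# batch of 16 whenever fewer than 8 remain unallocated.
allocated=16
used=0
for pod in $(seq 1 20); do
  used=$((used + 1))
  if [ $((allocated - used)) -lt 8 ]; then
    allocated=$((allocated + 16))
  fi
done
echo "pods=$used allocated=$allocated"   # 20 pods => 32 IPs reserved
```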
+
+The planning of IPs for Kubernetes services and the Docker bridge remains unchanged.
+
+## Maximum pods per node in a cluster with dynamic allocation of IPs and enhanced subnet support
+
+The pods per node values when using Azure CNI with dynamic allocation of IPs slightly differ from the traditional CNI behavior:
+
+|CNI|Default|Configurable at deployment|
+|--| :--: |--|
+|Traditional Azure CNI|30|Yes (up to 250)|
+|Azure CNI with dynamic allocation of IPs|250|Yes (up to 250)|
+
+All other guidance related to configuring the maximum pods per node remains the same.
+
+## Deployment parameters
+
+The [deployment parameters](configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS are all valid, with two exceptions:
+
+* The **subnet** parameter now refers to the subnet related to the cluster's nodes.
+* An additional parameter **pod subnet** is used to specify the subnet whose IP addresses will be dynamically allocated to pods.
+
+## Configure networking with dynamic allocation of IPs and enhanced subnet support - Azure CLI
+
+Using dynamic allocation of IPs and enhanced subnet support in your cluster is similar to the default method for configuring a cluster with Azure CNI. The following example walks through creating a new virtual network with a subnet for nodes and a subnet for pods, and creating a cluster that uses Azure CNI with dynamic allocation of IPs and enhanced subnet support. Be sure to replace variables such as `$subscription` with your own values.
+
+Create the virtual network with two subnets.
+
+```azurecli-interactive
+resourceGroup="myResourceGroup"
+vnet="myVirtualNetwork"
+location="westcentralus"
+
+# Create the resource group
+az group create --name $resourceGroup --location $location
+
+# Create our two subnet network
+az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
+```
+
+Create the cluster, referencing the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`.
+
+```azurecli-interactive
+clusterName="myAKSCluster"
+subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
+
+az aks create -n $clusterName -g $resourceGroup -l $location \
+ --max-pods 250 \
+ --node-count 2 \
+ --network-plugin azure \
+ --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
+ --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet
+```
+
+### Add a node pool
+
+When adding a node pool, reference the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`. The following example creates two new subnets that are then referenced in the creation of a new node pool:
+
+```azurecli-interactive
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name node2subnet --address-prefixes 10.242.0.0/16 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name pod2subnet --address-prefixes 10.243.0.0/16 -o none
+
+az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
+ --max-pods 250 \
+ --node-count 2 \
+ --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/node2subnet \
+ --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/pod2subnet \
+ --no-wait
+```
+
+## Dynamic allocation of IP addresses and enhanced subnet support FAQs
+
+* **Can I assign multiple pod subnets to a cluster/node pool?**
+
+ Only one subnet can be assigned to a cluster or node pool. However, multiple clusters or node pools can share a single subnet.
+
+* **Can I assign Pod subnets from a different VNet altogether?**
+
+ No, the pod subnet should be from the same VNet as the cluster.
+
+* **Can some node pools in a cluster use the traditional CNI while others use the new CNI?**
+
+ The entire cluster should use only one type of CNI.
+
+## Next steps
+
+Learn more about networking in AKS in the following articles:
+
+* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
+* [Use an internal load balancer with Azure Kubernetes Service (AKS)](internal-lb.md)
+
+* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
+* [Enable the HTTP application routing add-on][aks-http-app-routing]
+* [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal]
+* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls]
+* [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]
+
+<!-- LINKS - Internal -->
+[aks-ingress-basic]: ingress-basic.md
+[aks-ingress-tls]: ingress-tls.md
+[aks-ingress-static-tls]: ingress-static-ip.md
+[aks-http-app-routing]: http-application-routing.md
+[aks-ingress-internal]: ingress-internal-ip.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
# Configure Azure CNI networking in Azure Kubernetes Service (AKS)
-By default, AKS clusters use [kubenet][kubenet], and a virtual network and subnet are created for you. With *kubenet*, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.
+By default, AKS clusters use [kubenet][kubenet] and create a virtual network and subnet. With *kubenet*, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.
-With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
+With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
-This article shows you how to use *Azure CNI* networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
+This article shows you how to use Azure CNI networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
## Prerequisites
The following screenshot from the Azure portal shows an example of configuring t
:::image type="content" source="../aks/media/networking-overview/portal-01-networking-advanced.png" alt-text="Screenshot from the Azure portal showing an example of configuring these settings during AKS cluster creation.":::
-## Dynamic allocation of IPs and enhanced subnet support
-
-A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allocating pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
-
-* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node.
-
-* **Scalable and flexible**: Node and pod subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same VNet. You can also configure a separate pod subnet for a node pool.
-
-* **High performance**: Since pod are assigned VNet IPs, they have direct connectivity to other cluster pod and resources in the VNet. The solution supports very large clusters without any degradation in performance.
-
-* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pod in a node pool using a VNet Network NAT, and using NSGs to filter traffic between node pools.
-
-* **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this new solution.
-
-### Additional prerequisites
-
-> [!NOTE]
-> When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service is not supported.
-
-The [prerequisites][prerequisites] already listed for Azure CNI still apply, but there are a few additional limitations:
-
-* AKS Engine and DIY clusters are not supported.
-* Azure CLI version `2.37.0` or later.
-
-### Planning IP addressing
-
-When using this feature, planning is much simpler. Since the nodes and pods scale independently, their address spaces can also be planned separately. Since pod subnets can be configured to the granularity of a node pool, customers can always add a new subnet when they add a node pool. The system pods in a cluster/node pool also receive IPs from the pod subnet, so this behavior needs to be accounted for.
-
-IPs are allocated to nodes in batches of 16. Pod subnet IP allocation should be planned with a minimum of 16 IPs per node in the cluster; nodes will request 16 IPs on startup and will request another batch of 16 any time there are <8 IPs unallocated in their allotment.
-
-The planning of IPs for Kubernetes services and Docker bridge remain unchanged.
-
-### Maximum pods per node in a cluster with dynamic allocation of IPs and enhanced subnet support
-
-The pods per node values when using Azure CNI with dynamic allocation of IPs have changed slightly from the traditional CNI behavior:
-
-|CNI|Default|Configurable at deployment|
-|--| :--: |--|
-|Traditional Azure CNI|30|Yes (up to 250)|
-|Azure CNI with dynamic allocation of IPs|250|Yes (up to 250)|
-
-All other guidance related to configuring the maximum pods per node remains the same.
-
-### Additional deployment parameters
-
-The deployment parameters described above are all still valid, with one exception:
-
-* The **subnet** parameter now refers to the subnet related to the cluster's nodes.
-* An additional parameter **pod subnet** is used to specify the subnet whose IP addresses will be dynamically allocated to pods.
-
-### Configure networking - CLI with dynamic allocation of IPs and enhanced subnet support
-
-Using dynamic allocation of IPs and enhanced subnet support in your cluster is similar to the default method for configuring a cluster Azure CNI. The following example walks through creating a new virtual network with a subnet for nodes and a subnet for pods, and creating a cluster that uses Azure CNI with dynamic allocation of IPs and enhanced subnet support. Be sure to replace variables such as `$subscription` with your own values:
-
-First, create the virtual network with two subnets:
-
-```azurecli-interactive
-resourceGroup="myResourceGroup"
-vnet="myVirtualNetwork"
-location="westcentralus"
-
-# Create the resource group
-az group create --name $resourceGroup --location $location
-
-# Create our two subnet network
-az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
-```
-
-Then, create the cluster, referencing the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`:
-
-```azurecli-interactive
-clusterName="myAKSCluster"
-subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
-
-az aks create -n $clusterName -g $resourceGroup -l $location \
- --max-pods 250 \
- --node-count 2 \
- --network-plugin azure \
- --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
- --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet
-```
-
-#### Adding node pool
-
-When adding node pool, reference the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`. The following example creates two new subnets that are then referenced in the creation of a new node pool:
-
-```azurecli-interactive
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name node2subnet --address-prefixes 10.242.0.0/16 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name pod2subnet --address-prefixes 10.243.0.0/16 -o none
-
-az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
- --max-pods 250 \
- --node-count 2 \
- --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/node2subnet \
- --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/pod2subnet \
- --no-wait
-```
-## Monitor IP subnet usage
+## Monitor IP subnet usage
Azure CNI provides the capability to monitor IP subnet usage. To enable IP subnet usage monitoring, follow the steps below: ### Get the YAML file
-1. Download or grep the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
-2. Find azure_subnet_ip_usage in integrations. Set `enabled` to `true`.
-3. Save the file.
+
+1. Download or grep the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
+2. Find azure_subnet_ip_usage in integrations. Set `enabled` to `true`.
+3. Save the file.
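Based on the steps above, the edited fragment of *container-azm-ms-agentconfig.yaml* should look roughly like this (section and key names inferred from the steps; confirm against the downloaded file):

```yaml
integrations: |-
    [integrations.azure_subnet_ip_usage]
        enabled = true
```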
### Get the AKS credentials
Set the variables for subscription, resource group and cluster. Consider the fol
## Frequently asked questions
-The following questions and answers apply to the **Azure CNI** networking configuration.
-
-* *Can I deploy VMs in my cluster subnet?*
+* **Can I deploy VMs in my cluster subnet?**
Yes.
-* *What source IP do external systems see for traffic that originates in an Azure CNI-enabled pod?*
+* **What source IP do external systems see for traffic that originates in an Azure CNI-enabled pod?**
Systems in the same virtual network as the AKS cluster see the pod IP as the source address for any traffic from the pod. Systems outside the AKS cluster virtual network see the node IP as the source address for any traffic from the pod.
-* *Can I configure per-pod network policies?*
+* **Can I configure per-pod network policies?**
Yes, Kubernetes network policy is available in AKS. To get started, see [Secure traffic between pods by using network policies in AKS][network-policy].
-* *Is the maximum number of pods deployable to a node configurable?*
+* **Is the maximum number of pods deployable to a node configurable?**
Yes, when you deploy a cluster with the Azure CLI or a Resource Manager template. See [Maximum pods per node](#maximum-pods-per-node). You can't change the maximum number of pods per node on an existing cluster.
-* *How do I configure additional properties for the subnet that I created during AKS cluster creation? For example, service endpoints.*
+* **How do I configure additional properties for the subnet that I created during AKS cluster creation? For example, service endpoints.**
The complete list of properties for the virtual network and subnets that you create during AKS cluster creation can be configured in the standard virtual network configuration page in the Azure portal.
-* *Can I use a different subnet within my cluster virtual network for the* **Kubernetes service address range**?
+* **Can I use a different subnet within my cluster virtual network for the *Kubernetes service address range*?**
It's not recommended, but this configuration is possible. The service address range is a set of virtual IPs (VIPs) that Kubernetes assigns to internal services in your cluster. Azure Networking has no visibility into the service IP range of the Kubernetes cluster. Because of the lack of visibility into the cluster's service address range, it's possible to later create a new subnet in the cluster virtual network that overlaps with the service address range. If such an overlap occurs, Kubernetes could assign a service an IP that's already in use by another resource in the subnet, causing unpredictable behavior or failures. By ensuring you use an address range outside the cluster's virtual network, you can avoid this overlap risk.
-### Dynamic allocation of IP addresses and enhanced subnet support FAQs
-
-The following questions and answers apply to the **Azure CNI network configuration when using Dynamic allocation of IP addresses and enhanced subnet support**.
-
-* *Can I assign multiple pod subnets to a cluster/node pool?*
-
- Only one subnet can be assigned to a cluster or node pool. However, multiple clusters or node pools can share a single subnet.
-
-* *Can I assign Pod subnets from a different VNet altogether?*
-
- No, the pod subnet should be from the same VNet as the cluster.
-
-* *Can some node pools in a cluster use the traditional CNI while others use the new CNI?*
-
- The entire cluster should use only one type of CNI.
- ## Next steps
+To configure Azure CNI networking with dynamic IP allocation and enhanced subnet support, see [Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS](configure-azure-cni-dynamic-ip-allocation.md).
+ Learn more about networking in AKS in the following articles: * [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
+
+ Title: Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS)
+description: Learn how to migrate from in-tree persistent volume to the Container Storage Interface (CSI) driver in an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 01/18/2023++++
+# Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS)
+
+The [Container Storage Interface (CSI) driver][csi-driver-overview] was introduced in Azure Kubernetes Service (AKS) starting with version 1.21. With CSI adopted as the standard, your existing stateful workloads that use in-tree Persistent Volumes (PVs) should be migrated or upgraded to use the CSI driver.
+
+To make this process as simple as possible, and to ensure no data loss, this article provides different migration options. These options include scripts to help ensure a smooth migration from in-tree to Azure Disks and Azure Files CSI drivers.
+
+## Before you begin
+
+* The Azure CLI version 2.37.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* Kubectl access and cluster administrator permissions to create, get, list, and delete a PVC or PV, volume snapshot, or volume snapshot content. For an Azure Active Directory (Azure AD) RBAC enabled cluster, you must be a member of the [Azure Kubernetes Service RBAC Cluster Admin][aks-rbac-cluster-admin-role] role.
+
+## Migrate Disk volumes
+
+Migration from in-tree to CSI is supported using two migration options:
+
+* Create a static volume
+* Create a dynamic volume
+
+### Create a static volume
+
+Using this option, you create a PV by statically assigning its `claimRef` to a new PVC that you'll create later, and specifying the `volumeName` in the *PersistentVolumeClaim*.
++
+The benefits of this approach are:
+
+* It's simple and can be automated.
+* No need to clean up original configuration using in-tree storage class.
+* Low risk: you're only performing a logical deletion of the Kubernetes PV/PVC; the actual physical data isn't deleted.
+* No extra costs, because you don't have to create more Azure objects, such as disks and snapshots.
+
+The following are important considerations to evaluate:
+
+* Transitioning from the original dynamically provisioned volumes to static volumes requires constructing and managing PV objects manually.
+* Potential application downtime when redeploying the new application with reference to the new PVC object.
+
+#### Migration
+
+1. Update the existing PV `ReclaimPolicy` from **Delete** to **Retain** by running the following command:
+
+ ```bash
+ kubectl patch pv pvName -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ ```
+
+ Replace **pvName** with the name of your selected PersistentVolume. Alternatively, if you want to update the reclaimPolicy for multiple PVs, create a file named **patchReclaimPVs.sh** and copy in the following code.
+
+ ```bash
+    #!/bin/sh
+    # Patch the PersistentVolume if its ReclaimPolicy is Delete
+ namespace=$1
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ echo "Reclaim Policy for Persistent Volume $pv is $reclaimPolicy"
+ if [[ $reclaimPolicy == "Delete" ]]; then
+ echo "Updating ReclaimPolicy for $pv to Retain"
+ kubectl patch pv $pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ fi
+ fi
+ done
+ ```
+
+    Execute the script with the `namespace` parameter to specify the cluster namespace: `./patchReclaimPVs.sh <namespace>`.
+
+2. Get a list of all the PVCs in the namespace sorted by **creationTimestamp** by running the following command. Set the namespace using the `-n` argument along with the actual cluster namespace.
+
+ ```bash
+ kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ ```
+
+ This step is helpful if you have a large number of PVs that need to be migrated, and you want to migrate a few at a time. Running this command enables you to identify which PVCs were created in a given time frame. When you run the *CreatePV.sh* script, two of the parameters are start time and end time that enable you to only migrate the PVCs during that period of time.
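    The window checks in the script work because ISO-8601 UTC timestamps sort lexicographically, so a plain shell string comparison is enough to decide whether a PVC falls between the start and end times. A minimal standalone sketch (the times are illustrative):

    ```bash
    # ISO-8601 UTC timestamps compare correctly as plain strings, which is what
    # the migration scripts rely on for their start/end window checks.
    starttimestamp="2022-04-20T00:00:00Z"
    endtimestamp="2022-04-21T00:00:00Z"
    pvcCreationTime="2022-04-20T13:19:56Z"

    if [[ $pvcCreationTime > $starttimestamp && $endtimestamp > $pvcCreationTime ]]; then
        echo "PVC is inside the migration window"
    else
        echo "PVC is outside the migration window"
    fi
    ```

    This is also why the timestamps must keep the uppercase `T` and `Z`: the comparison is character-by-character, not date-aware.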
+
+3. Create a file named **CreatePV.sh** and copy in the following code. For each PersistentVolume in the namespace that uses the source storage class, the script does the following:
+
+    * Creates a new PersistentVolume named `<existing-pv-name>-csi`.
+    * Creates a new PersistentVolumeClaim named `<existing-pvc-name>-csi` that's bound to the new PV through its `volumeName`.
+
+ ```bash
+ #!/bin/sh
+ #kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ # TimeFormat 2022-04-20T13:19:56Z
+ namespace=$1
+ fileName=$(date +%Y%m%d%H%M)-$namespace
+ existingStorageClass=$2
+ storageClassNew=$3
+ starttimestamp=$4
+ endtimestamp=$5
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pvcCreationTime=$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.metadata.creationTimestamp}')
+ if [[ $pvcCreationTime > $starttimestamp ]]; then
+ if [[ $endtimestamp > $pvcCreationTime ]]; then
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ storageClass="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.storageClassName}')"
+ echo $pvc
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ if [[ $reclaimPolicy == "Retain" ]]; then
+ if [[ $storageClass == $existingStorageClass ]]; then
+ storageSize="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.capacity.storage}')"
+          # The disk SKU comes from the storage class parameters; adjust the key
+          # (for example, "skuName") if your in-tree class uses a different one.
+          skuName="$(kubectl get storageclass $storageClass -o jsonpath='{.parameters.storageaccounttype}')"
+ diskURI="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.azureDisk.diskURI}')"
+ persistentVolumeReclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+
+ cat >$pvc-csi.yaml <<EOF
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ annotations:
+ pv.kubernetes.io/provisioned-by: disk.csi.azure.com
+ name: $pv-csi
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: $storageSize
+ claimRef:
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ name: $pvc-csi
+ namespace: $namespace
+ csi:
+ driver: disk.csi.azure.com
+ volumeAttributes:
+ csi.storage.k8s.io/pv/name: $pv-csi
+ csi.storage.k8s.io/pvc/name: $pvc-csi
+ csi.storage.k8s.io/pvc/namespace: $namespace
+ requestedsizegib: "$storageSize"
+ skuname: $skuName
+ volumeHandle: $diskURI
+ persistentVolumeReclaimPolicy: $persistentVolumeReclaimPolicy
+ storageClassName: $storageClassNew
+    ---
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: $pvc-csi
+ namespace: $namespace
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: $storageClassNew
+ resources:
+ requests:
+ storage: $storageSize
+ volumeName: $pv-csi
+ EOF
+ kubectl apply -f $pvc-csi.yaml
+ line="PVC:$pvc,PV:$pv,StorageClassTarget:$storageClassNew"
+ printf '%s\n' "$line" >>$fileName
+ fi
+ fi
+ fi
+ fi
+ fi
+ done
+ ```
+
+4. To create a new PersistentVolume for all PersistentVolumes in the namespace, execute the script **CreatePV.sh** with the following parameters:
+
+ * `namespace` - The cluster namespace
+ * `sourceStorageClass` - The in-tree storage driver-based StorageClass
+    * `targetCSIStorageClass` - The CSI storage driver-based StorageClass. This can be one of the default storage classes with the provisioner set to **disk.csi.azure.com** or **file.csi.azure.com**, or a custom storage class whose provisioner is set to one of those two values.
+    * `startTimeStamp` - Provide a start time in the format **yyyy-mm-ddThh:mm:ssZ**, for example `2022-04-20T00:00:00Z`. The `T` and `Z` must be uppercase to match Kubernetes creation timestamps in string comparisons.
+    * `endTimeStamp` - Provide an end time in the same **yyyy-mm-ddThh:mm:ssZ** format.
+
+ ```bash
+ ./CreatePV.sh <namespace> <sourceIntreeStorageClass> <targetCSIStorageClass> <startTimestamp> <endTimestamp>
+ ```
+
+5. Update your application to use the new PVC.
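    One way to do this, sketched here with hypothetical names, is to patch the workload so its volume's `claimName` points at the PVC the script created:

    ```bash
    # Hypothetical example: repoint the first volume of a Deployment at the new
    # CSI-backed PVC. Replace the deployment, namespace, and PVC names.
    kubectl patch deployment <deploymentName> -n <namespace> --type json \
        -p '[{"op": "replace", "path": "/spec/template/spec/volumes/0/persistentVolumeClaim/claimName", "value": "<pvcName>-csi"}]'
    ```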
+
+### Create a dynamic volume
+
+Using this option, you dynamically create a Persistent Volume from a Persistent Volume Claim.
++
+The benefits of this approach are:
+
+* It's less risky because all new objects are created while retaining other copies with snapshots.
+
+* No need to construct PVs separately and add volume name in PVC manifest.
+
+The following are important considerations to evaluate:
+
+* While this approach is less risky, it does create multiple objects that will increase your storage costs.
+
+* During creation of the new volume(s), your application is unavailable.
+
+* Deletion steps should be performed with caution. Temporary [resource locks][azure-resource-locks] can be applied to your resource group until migration is completed and your application is successfully verified.
+
+* Perform data validation/verification as new disks are created from snapshots.
+
+#### Migration
+
+Before proceeding, verify the following:
+
+* For specific workloads where data is written to memory before being written to disk, stop the application to allow in-memory data to be flushed to disk.
+* A `VolumeSnapshotClass` should exist, as shown in the following example YAML:
+
+ ```yml
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshotClass
+ metadata:
+ name: custom-disk-snapshot-sc
+ driver: disk.csi.azure.com
+ deletionPolicy: Delete
+ parameters:
+ incremental: "false"
+ ```
+
+1. Get a list of all the PVCs in a specified namespace sorted by *creationTimestamp* by running the following command. Set the namespace using the `--namespace` argument along with the actual cluster namespace.
+
+ ```bash
+ kubectl get pvc --namespace <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ ```
+
+    This step is helpful if you have a large number of PVs that need to be migrated, and you want to migrate a few at a time. Running this command enables you to identify which PVCs were created in a given time frame. When you run the *MigrateToCSI.sh* script, two of the parameters are a start time and end time that enable you to only migrate the PVCs during that period of time.
+
+2. Create a file named **MigrateToCSI.sh** and copy in the following code. The script does the following:
+
+ * Creates a full disk snapshot using the Azure CLI
+ * Creates `VolumesnapshotContent`
+ * Creates `VolumeSnapshot`
+ * Creates a new PVC from `VolumeSnapshot`
+    * Creates a new file with the filename `<namespace>-timestamp`, which contains a list of all the old resources that need to be cleaned up.
+
+ ```bash
+ #!/bin/sh
+ #kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ # TimeFormat 2022-04-20T13:19:56Z
+ namespace=$1
+ fileName=$namespace-$(date +%Y%m%d%H%M)
+ existingStorageClass=$2
+ storageClassNew=$3
+ volumestorageClass=$4
+ starttimestamp=$5
+ endtimestamp=$6
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pvcCreationTime=$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.metadata.creationTimestamp}')
+ if [[ $pvcCreationTime > $starttimestamp ]]; then
+ if [[ $endtimestamp > $pvcCreationTime ]]; then
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ storageClass="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.storageClassName}')"
+ echo $pvc
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ if [[ $storageClass == $existingStorageClass ]]; then
+ storageSize="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.capacity.storage}')"
+ diskURI="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.azureDisk.diskURI}')"
+ targetResourceGroup="$(cut -d'/' -f5 <<<"$diskURI")"
+ echo $diskURI
+ echo $targetResourceGroup
+ persistentVolumeReclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ az snapshot create --resource-group $targetResourceGroup --name $pvc-$fileName --source "$diskURI"
+ snapshotPath=$(az snapshot list --resource-group $targetResourceGroup --query "[?name == '$pvc-$fileName'].id | [0]")
+ snapshotHandle=$(echo "$snapshotPath" | tr -d '"')
+ echo $snapshotHandle
+ sleep 10
+ # Create Restore File
+ cat <<EOF >$pvc-csi.yml
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshotContent
+ metadata:
+ name: $pvc-$fileName
+ spec:
+ deletionPolicy: 'Delete'
+ driver: 'disk.csi.azure.com'
+ volumeSnapshotClassName: $volumestorageClass
+ source:
+ snapshotHandle: $snapshotHandle
+ volumeSnapshotRef:
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshot
+ name: $pvc-$fileName
+ namespace: $1
+    ---
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshot
+ metadata:
+ name: $pvc-$fileName
+ namespace: $1
+ spec:
+ volumeSnapshotClassName: $volumestorageClass
+ source:
+ volumeSnapshotContentName: $pvc-$fileName
+    ---
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: csi-$pvc
+ namespace: $1
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: $storageClassNew
+ resources:
+ requests:
+ storage: $storageSize
+ dataSource:
+ name: $pvc-$fileName
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+
+ EOF
+ kubectl create -f $pvc-csi.yml
+          line="OLDPVC:$pvc,OLDPV:$pv,VolumeSnapshotContent:$pvc-$fileName,VolumeSnapshot:$pvc-$fileName,OLDdisk:$diskURI"
+ printf '%s\n' "$line" >>$fileName
+ fi
+ fi
+ fi
+ fi
+ done
+ ```
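    The script derives the snapshot's target resource group from the fifth `/`-separated field of the disk URI. A standalone sketch with an illustrative URI:

    ```bash
    # The resource group is the fifth '/'-separated field of an Azure disk URI.
    # The subscription ID and resource names below are illustrative.
    diskURI="/subscriptions/00000000-0000-0000-0000-000000000000/resourceGroups/MC_myGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvc-12345"
    targetResourceGroup="$(cut -d'/' -f5 <<<"$diskURI")"
    echo "$targetResourceGroup"
    ```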
+
+3. To migrate the disk volumes, execute the script **MigrateToCSI.sh** with the following parameters:
+
+ * `namespace` - The cluster namespace
+ * `sourceStorageClass` - The in-tree storage driver-based StorageClass
+ * `targetCSIStorageClass` - The CSI storage driver-based StorageClass
+ * `volumeSnapshotClass` - Name of the volume snapshot class. For example, `custom-disk-snapshot-sc`.
+    * `startTimeStamp` - Provide a start time in the format **yyyy-mm-ddThh:mm:ssZ**, for example `2022-04-20T00:00:00Z`. The `T` and `Z` must be uppercase to match Kubernetes creation timestamps in string comparisons.
+    * `endTimeStamp` - Provide an end time in the same **yyyy-mm-ddThh:mm:ssZ** format.
+
+ ```bash
+ ./MigrateToCSI.sh <namespace> <sourceStorageClass> <TargetCSIstorageClass> <VolumeSnapshotClass> <startTimestamp> <endTimestamp>
+ ```
+
+4. Update your application to use the new PVC.
+
+5. Manually delete the older resources, including the in-tree PVC/PV, VolumeSnapshot, and VolumeSnapshotContent objects. Otherwise, maintaining the in-tree PVC/PV and snapshot objects generates more cost.
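    A cleanup sketch, using hypothetical names of the kind recorded in the `<namespace>-timestamp` file the script writes:

    ```bash
    # Hypothetical cleanup: remove the old in-tree objects and snapshot artifacts
    # once the new volumes are verified. All names are illustrative.
    kubectl delete pvc <oldPvcName> -n <namespace>
    kubectl delete pv <oldPvName>
    kubectl delete volumesnapshot <snapshotName> -n <namespace>
    kubectl delete volumesnapshotcontent <snapshotContentName>
    ```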
+
+## Migrate File share volumes
+
+Migration from in-tree to CSI is supported by creating a static volume.
+
+### Migration
+
+1. Update the existing PV `ReclaimPolicy` from **Delete** to **Retain** by running the following command:
+
+ ```bash
+ kubectl patch pv pvName -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ ```
+
+ Replace **pvName** with the name of your selected PersistentVolume. Alternatively, if you want to update the reclaimPolicy for multiple PVs, create a file named **patchReclaimPVs.sh** and copy in the following code.
+
+ ```bash
+    #!/bin/sh
+    # Patch the PersistentVolume if its ReclaimPolicy is Delete
+ namespace=$1
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ echo "Reclaim Policy for Persistent Volume $pv is $reclaimPolicy"
+ if [[ $reclaimPolicy == "Delete" ]]; then
+ echo "Updating ReclaimPolicy for $pv to Retain"
+ kubectl patch pv $pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ fi
+ fi
+ done
+ ```
+
+    Execute the script with the `namespace` parameter to specify the cluster namespace: `./patchReclaimPVs.sh <namespace>`.
+
+2. Create a new Storage Class with the provisioner set to `file.csi.azure.com`, or you can use one of the default StorageClasses with the CSI file provisioner.
+
+3. Get the `secretName` and `shareName` from the existing *PersistentVolumes* by running the following command:
+
+ ```bash
+ kubectl describe pv pvName
+ ```
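    If you prefer a scriptable form, the same values can be read with JSONPath, assuming the PV uses the in-tree `azureFile` volume source:

    ```bash
    kubectl get pv pvName -o jsonpath='{.spec.azureFile.secretName}{"\n"}{.spec.azureFile.shareName}{"\n"}'
    ```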
+
+4. Create a new PV using the new StorageClass, and the `shareName` and `secretName` from the in-tree PV. Create a file named *azurefile-mount-pv.yaml* and copy in the following code. Under `csi`, update `resourceGroup`, `volumeHandle`, and `shareName`. For mount options, the default value for `fileMode` and `dirMode` is **0777**.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: azurefile
+ spec:
+ capacity:
+ storage: 5Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: azurefile-csi
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeHandle: unique-volumeid # make sure volumeid is unique for every identical share in the cluster
+ volumeAttributes:
+ resourceGroup: EXISTING_RESOURCE_GROUP_NAME # optional, only set this when storage account is not in the same resource group as the cluster nodes
+ shareName: aksshare
+ nodeStageSecretRef:
+ name: azure-secret
+ namespace: default
+ mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - nosharesock
+ - nobrl
+ ```
+
+5. Create a file named *azurefile-mount-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume* using the following code.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: azurefile
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: azurefile-csi
+ volumeName: azurefile
+ resources:
+ requests:
+ storage: 5Gi
+ ```
+
+6. Use the `kubectl` command to create the *PersistentVolume*.
+
+ ```bash
+ kubectl apply -f azurefile-mount-pv.yaml
+ ```
+
+7. Use the `kubectl` command to create the *PersistentVolumeClaim*.
+
+ ```bash
+ kubectl apply -f azurefile-mount-pvc.yaml
+ ```
+
+8. Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume* by running the following command.
+
+ ```bash
+ kubectl get pvc azurefile
+ ```
+
+ The output resembles the following:
+
+ ```output
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ azurefile Bound azurefile 5Gi RWX azurefile 5s
+ ```
+
+9. Update your container spec to reference your *PersistentVolumeClaim* and update your pod. For example, copy the following code and create a file named *azure-files-pod.yaml*.
+
+ ```yml
+ ...
+ volumes:
+ - name: azure
+ persistentVolumeClaim:
+ claimName: azurefile
+ ```
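    A minimal complete pod manifest built around this claim might look like the following; the pod name, image, and mount path are illustrative:

    ```yml
    apiVersion: v1
    kind: Pod
    metadata:
      name: mypod
    spec:
      containers:
        - name: mypod
          image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
          volumeMounts:
            - name: azure
              mountPath: /mnt/azure
      volumes:
        - name: azure
          persistentVolumeClaim:
            claimName: azurefile
    ```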
+
+10. The pod spec can't be updated in place. Use the following `kubectl` commands to delete and then re-create the pod.
+
+ ```bash
+ kubectl delete pod mypod
+ ```
+
+ ```bash
+ kubectl apply -f azure-files-pod.yaml
+ ```
+
+## Next steps
+
+For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][aks-storage-backups-best-practices].
+
+<!-- LINKS - internal -->
+[install-azure-cli]: /cli/azure/install-azure-cli
+[aks-rbac-cluster-admin-role]: manage-azure-rbac.md#create-role-assignments-for-users-to-access-cluster
+[azure-resource-locks]: /azure/azure-resource-manager/management/lock-resources
+[csi-driver-overview]: csi-storage-drivers.md
+[aks-storage-backups-best-practices]: operator-best-practices-storage.md
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS) description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster- Previously updated : 11/16/2022 Last updated : 01/19/2023
The CSI storage driver support on AKS allows you to natively use:
- [**Azure Blob storage**](azure-blob-csi.md) can be used to mount Blob storage (or object storage) as a file system into a container or pod. Using Blob storage enables your cluster to support applications that work with large unstructured datasets like log file data, images or documents, HPC, and others. Additionally, if you ingest data into [Azure Data Lake storage](../storage/blobs/data-lake-storage-introduction.md), you can directly mount and use it in AKS without configuring another interim filesystem. > [!IMPORTANT]
-> Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. Existing in-tree persistent volumes will continue to function. However, internally Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
+> Starting with Kubernetes version 1.26, the in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation isn't planned; however, you should migrate to the corresponding CSI drivers *disks.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-csi-drivers].
> > *In-tree drivers* refers to the storage drivers that are part of the core Kubernetes code opposed to the CSI drivers, which are plug-ins.
The CSI storage driver support on AKS allows you to natively use:
- You need the Azure CLI version 2.42 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. - If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the Azure Blob storage driver.
-## Disable CSI storage drivers on a new or existing cluster
-
-To disable CSI storage drivers on a new cluster, include one of the following parameters depending on the storage system:
-
-* `--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi].
-* `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi].
-* `--disable-blob-driver` allows you to disable the [Azure Blob storage CSI driver][azure-blob-csi].
-* `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller].
-
-```azurecli
-az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
-```
-
-To disable CSI storage drivers on an existing cluster, use one of the parameters listed earlier depending on the storage system:
-
-```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
-```
- ## Enable CSI storage drivers on an existing cluster To enable CSI storage drivers on a new cluster, include one of the following parameters depending on the storage system:
To enable CSI storage drivers on a new cluster, include one of the following par
az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-blob-driver --enable-snapshot-controller ```
-## Migrate custom in-tree storage classes to CSI
-
-If you've created in-tree driver storage classes, those storage classes continue to work since CSI migration is turned on after upgrading your cluster to 1.21.x. If you want to use CSI features you'll need to perform the migration.
+It may take several minutes for this action to complete. Once it's finished, the output shows the status of the drivers on your cluster. The following example shows the section of the output that indicates the Blob storage CSI driver is enabled:
-Migrating these storage classes involves deleting the existing ones, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
-
-### Migrate storage class provisioner
+```output
+"storageProfile": {
+ "blobCsiDriver": {
+ "enabled": true
+ },
+```
-The following example YAML manifest shows the difference between the in-tree storage class definition configured to use Azure Disks, and the equivalent using a CSI storage class definition. The CSI storage system supports the same features as the in-tree drivers, so the only change needed would be the value for `provisioner`.
+## Disable CSI storage drivers on a new or existing cluster
-#### Original in-tree storage class definition
+To disable CSI storage drivers on a new cluster, include one of the following parameters depending on the storage system:
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: custom-managed-premium
-provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Delete
-parameters:
- storageAccountType: Premium_LRS
-```
+* `--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi].
+* `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi].
+* `--disable-blob-driver` allows you to disable the [Azure Blob storage CSI driver][azure-blob-csi].
+* `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller].
-#### CSI storage class definition
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: custom-managed-premium
-provisioner: disk.csi.azure.com
-reclaimPolicy: Delete
-parameters:
- storageAccountType: Premium_LRS
+```azurecli
+az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
```
-The CSI storage system supports the same features as the In-tree drivers, so the only change needed would be the provisioner.
-
-## Migrate in-tree persistent volumes
-
-> [!IMPORTANT]
-> If your in-tree persistent volume `reclaimPolicy` is set to **Delete**, you need to change its policy to **Retain** to persist your data. This can be achieved using a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
->
-> ```bash
-> kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
-> ```
+To disable CSI storage drivers on an existing cluster, use one of the parameters listed earlier depending on the storage system:
-### Migrate in-tree Azure Disks persistent volumes
+```azurecli
+az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
+```
-If you have in-tree Azure Disks persistent volumes, get `diskURI` from in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
+## Migrate custom in-tree storage classes to CSI
-### Migrate in-tree Azure File persistent volumes
+If you've created custom storage classes based on the in-tree drivers, they continue to work after you upgrade your cluster to 1.21.x because CSI migration is enabled. However, if you want to use CSI features, you'll need to perform the migration.
-If you have in-tree Azure File persistent volumes, get `secretName`, `shareName` from in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes
+To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-csi-drivers].
## Next steps
If you have in-tree Azure File persistent volumes, get `secretName`, `shareName`
- To use the CSI driver for Azure Files, see [Use Azure Files with CSI drivers][azure-files-csi]. - To use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI drivers][azure-blob-csi] - For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage].-- For more information on CSI migration, see [Kubernetes In-Tree to CSI Volume Migration][csi-migration-community].
+- For more information on CSI migration, see [Kubernetes in-tree to CSI Volume Migration][csi-migration-community].
<!-- LINKS - external --> [csi-migration-community]: https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta [snapshot-controller]: https://kubernetes-csi.github.io/docs/snapshot-controller.html <!-- LINKS - internal -->
-[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-a-volume
-[azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume
+[azure-disk-static-mount]: azure-csi-disk-storage-provision.md#mount-disk-as-a-volume
+[azure-file-static-mount]: azure-csi-files-storage-provision.md#mount-file-share-as-a-persistent-volume
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md [azure-blob-csi]: azure-blob-csi.md [azure-disk-csi]: azure-disk-csi.md
-[azure-files-csi]: azure-files-csi.md
+[azure-files-csi]: azure-files-csi.md
+[migrate-from-in-tree-csi-drivers]: csi-migrate-in-tree-volumes.md
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
Learn more about Kubernetes services in the [Kubernetes services documentation][
[install-azure-cli]: /cli/azure/install-azure-cli [aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources [different-subnet]: #specify-a-different-subnet
-[aks-vnet-subnet]: /aks/configure-kubenet.md#create-a-virtual-network-and-subnet
-[unique-subnet]: /aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet
+[aks-vnet-subnet]: configure-kubenet.md#create-a-virtual-network-and-subnet
+[unique-subnet]: use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS.
[aks-scale]: ./tutorial-kubernetes-scale.md [aks-upgrade]: ./upgrade-cluster.md [azure-devops]: ../devops-project/overview.md
-[azure-disk]: ./azure-disks-dynamic-pv.md
-[azure-files]: ./azure-files-dynamic-pv.md
+[azure-disk]: ./azure-disk-csi.md
+[azure-files]: ./azure-files-csi.md
[container-health]: ../azure-monitor/containers/container-insights-overview.md [aks-master-logs]: monitor-aks-reference.md#resource-logs [aks-supported versions]: supported-kubernetes-versions.md
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
Title: Monitoring AKS data reference description: Important reference material needed when you monitor AKS -+ Last updated 07/18/2022
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
To increase the node limit beyond 1000, you must have the following pre-requisit
## Networking considerations and best practices * Use Managed NAT for cluster egress with at least 2 public IPs on the NAT Gateway. For more information, see [Managed NAT Gateway with AKS][Managed NAT Gateway - Azure Kubernetes Service].
-* Use Azure CNI with Dynamic IP allocation for optimum IP utilization, and scale up to 50k application pods per cluster with one routable IP per pod. For more information, see [Configure Azure CNI networking in AKS][Configure Azure CNI networking in Azure Kubernetes Service (AKS)].
+* Use Azure CNI with Dynamic IP allocation for optimum IP utilization, and scale up to 50k application pods per cluster with one routable IP per pod. For more information, see [Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS][Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)].
* When using internal Kubernetes services behind an internal load balancer, we recommend creating an internal load balancer or internal service below 750 node scale for optimal scaling performance and load balancer elasticity. > [!NOTE]
To increase the node limit beyond 1000, you must have the following pre-requisit
<!-- Links - External --> [Managed NAT Gateway - Azure Kubernetes Service]: nat-gateway.md
-[Configure Azure CNI networking in Azure Kubernetes Service (AKS)]: configure-azure-cni.md#dynamic-allocation-of-ips-and-enhanced-subnet-support
+[Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)]: configure-azure-cni-dynamic-ip-allocation.md
[max surge]: upgrade-cluster.md?tabs=azure-cli#customize-node-surge-upgrade [Azure portal]: https://portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22subId%22%3A+%22%22%2C%0D%0A%09%22pesId%22%3A+%225a3a423f-8667-9095-1770-0a554a934512%22%2C%0D%0A%09%22supportTopicId%22%3A+%2280ea0df7-5108-8e37-2b0e-9737517f0b96%22%2C%0D%0A%09%22contextInfo%22%3A+%22AksLabelDeprecationMarch22%22%2C%0D%0A%09%22caller%22%3A+%22Microsoft_Azure_ContainerService+%2B+AksLabelDeprecationMarch22%22%2C%0D%0A%09%22severity%22%3A+%223%22%0D%0A%7D [uptime SLA]: uptime-sla.md
aks Operator Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-storage.md
This article focused on storage best practices in AKS. For more information abou
<!-- LINKS - Internal --> [aks-concepts-storage]: concepts-storage.md [vm-sizes]: ../virtual-machines/sizes.md
-[dynamic-disks]: azure-disks-dynamic-pv.md
-[dynamic-files]: azure-files-dynamic-pv.md
+[dynamic-disks]: azure-disk-csi.md
+[dynamic-files]: azure-files-csi.md
[reclaim-policy]: concepts-storage.md#storage-classes [aks-concepts-storage-pvcs]: concepts-storage.md#persistent-volume-claims [aks-concepts-storage-classes]: concepts-storage.md#storage-classes
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service
description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 01/05/2023 -+ # Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS)
aks Spot Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spot-node-pool.md
Title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster. -+ Last updated 01/21/2022
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
The Kubernetes community releases minor versions roughly every three months. Rec
Minor version releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs.
->[!WARNING]
-> Due to an issue with Calico and AKS. It is highly reccomended that customers using Calico do not upgrade or create new clusters on v1.25.
- ## Kubernetes versions Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme for each version:
Get-AzAksVersion -Location eastus
## AKS Kubernetes release calendar
-For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes#History).
- > [!NOTE]
-> The asterisk (*) states that a date has not been finalized; because of this, the timeline below is subject to change. Please continue to check the release calendar for updates.
+> AKS follows 12 months of support for a GA Kubernetes version. To read more about our support policy for Kubernetes versioning, please read our [FAQ](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli#faq).
+
+For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes#History).
| K8s version | Upstream release | AKS preview | AKS GA | End of life | |--|-|--||-|
-| 1.21 | Apr-08-21 | May 2021 | Jul 2021 | 1.24 GA |
-| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | 1.25 GA |
-| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | 1.26 GA |
-| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | 1.27 GA
-| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | 1.28 GA
-| 1.26 | Dec 2022 | Jan 2023 | Feb 2023 | 1.29 GA
-| 1.27 | Apr 2023 | May 2023 | Jun 2023 | 1.30 GA
-| 1.28 | * | * | * | 1.31 GA
+| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | Dec 2022 |
+| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | Apr 2023 |
+| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | Jul 2023
+| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023
+| 1.26 | Dec 2022 | Jan 2023 | Feb 2023 | Feb 2024
+| 1.27 | Apr 2023 | May 2023 | Jun 2023 | Jun 2024
> [!NOTE] > To see real-time updates of region release status and version release notes, visit the [AKS release status webpage][aks-release]. To learn more about the release status webpage, see [AKS release tracker][aks-tracker].
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
For more information what cluster operations may trigger specific upgrade events
[release-tracker]: ./release-tracker.md [node-image-upgrade]: ./node-image-upgrade.md [gh-actions-upgrade]: ./node-upgrade-github-actions.md
-[operator-guide-patching]: /azure/architecture/operator-guides/aks/aks-upgrade-practices.md#considerations
+[operator-guide-patching]: /azure/architecture/operator-guides/aks/aks-upgrade-practices#considerations
[supported-k8s-versions]: ./supported-kubernetes-versions.md#kubernetes-version-support-policy [ts-nsg]: /troubleshoot/azure/azure-kubernetes/upgrade-fails-because-of-nsg-rules [ts-pod-drain]: /troubleshoot/azure/azure-kubernetes/error-code-poddrainfailure
aks Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-policy.md
Title: Use Azure Policy to secure your cluster description: Use Azure Policy to secure an Azure Kubernetes Service (AKS) cluster.-+ Last updated 09/12/2022
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
This feature can only be set at cluster creation or node pool creation time.
> Azure ultra disks require nodepools deployed in availability zones and regions that support these disks as well as only specific VM series. See the [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations). ### Limitations+ - Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations) before proceeding.-- The supported size range for a Ultra disks is between 100 and 1500
+- The supported size range for ultra disks is between 100 and 1500.
-## Create a new cluster that can use Ultra disks
+## Create a new cluster that can use ultra disks
-Create an AKS cluster that is able to leverage Ultra Disks by using the following CLI commands. Use the `--enable-ultra-ssd` flag to set the `EnableUltraSSD` feature.
+Create an AKS cluster that can use Azure ultra disks by using the following CLI commands. Use the `--enable-ultra-ssd` flag to set the `EnableUltraSSD` feature.
Create an Azure resource group: ```azurecli-interactive
-# Create an Azure resource group
az group create --name myResourceGroup --location westus2 ```
-Create the AKS cluster with support for Ultra Disks.
+Create an AKS-managed Azure AD cluster with support for ultra disks.
```azurecli-interactive
-# Create an AKS-managed Azure AD cluster
-az aks create -g MyResourceGroup -n MyManagedCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
+az aks create -g MyResourceGroup -n myAKSCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
``` If you want to create clusters without ultra disk support, you can do so by omitting the `--enable-ultra-ssd` parameter.
-## Enable Ultra disks on an existing cluster
+## Enable ultra disks on an existing cluster
You can enable ultra disks on existing clusters by adding a new node pool that supports ultra disks to your cluster. Configure a new node pool to use ultra disks by using the `--enable-ultra-ssd` flag.
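For example, a node pool with ultra disk support could be added with a command along these lines (a sketch — the node pool name `ultrapool`, zones, and VM size are illustrative values; check the `az aks nodepool add` reference for your scenario):

```azurecli-interactive
az aks nodepool add \
    --resource-group myResourceGroup \
    --cluster-name myAKSCluster \
    --name ultrapool \
    --node-count 2 \
    --zones 1 2 \
    --node-vm-size Standard_D2s_v3 \
    --enable-ultra-ssd
```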
If you want to create new node pools without support for ultra disks, you can do
## Use ultra disks dynamically with a storage class
-To use ultra disks in our deployments or stateful sets you can use a [storage class for dynamic provisioning](azure-disks-dynamic-pv.md).
+To use ultra disks in your deployments or stateful sets, you can use a [storage class for dynamic provisioning][azure-disk-volume].
### Create the storage class
parameters:
Create the storage class with the [kubectl apply][kubectl-apply] command and specify your *azure-ultra-disk-sc.yaml* file: ```console
-$ kubectl apply -f azure-ultra-disk-sc.yaml
+kubectl apply -f azure-ultra-disk-sc.yaml
+```
+The output from the command resembles the following example:
+```console
storageclass.storage.k8s.io/ultra-disk-sc created ```
spec:
Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-ultra-disk-pvc.yaml* file: ```console
-$ kubectl apply -f azure-ultra-disk-pvc.yaml
+kubectl apply -f azure-ultra-disk-pvc.yaml
+```
+
+The output from the command resembles the following example:
+```console
persistentvolumeclaim/ultra-disk created ```
spec:
Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example: ```console
-$ kubectl apply -f nginx-ultra.yaml
+kubectl apply -f nginx-ultra.yaml
+```
+The output from the command resembles the following example:
+
+```console
pod/nginx-ultra created ``` You now have a running pod with your Azure disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod via `kubectl describe pod nginx-ultra`, as shown in the following condensed example: ```console
-$ kubectl describe pod nginx-ultra
+kubectl describe pod nginx-ultra
[...] Volumes:
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/ <!-- LINKS - internal -->
-[azure-disk-volume]: azure-disk-volume.md
-[azure-files-pvc]: azure-files-dynamic-pv.md
+[azure-disk-volume]: azure-disk-csi.md
+[azure-files-pvc]: azure-files-csi.md
[premium-storage]: ../virtual-machines/disks-types.md [az-disk-list]: /cli/azure/disk#az_disk_list [az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
Title: Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS) description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster.- Previously updated : 09/30/2022 Last updated : 01/12/2023 # Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
-This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. This ensures pods are scheduled onto nodes that have the required CPU and memory resources.
+This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA makes certain that pods are scheduled onto nodes that have the required CPU and memory resources.
## Benefits
The following steps create a deployment with two pods, each running a single con
The pod has 100 millicpu and 50 mebibytes of memory reserved in this example. For this sample application, the pod needs less than 100 millicpu to run, so its CPU reservation is larger than needed. The pod also reserves much less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see if the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
-1. Wait for the vpa-updater to launch a new hamster pod. This should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
+1. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
```bash kubectl get --watch pods -l app=hamster
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a Pod and replace it with a new Pod. If a Pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the Pod and replaces it with a Pod that meets the target attribute.
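As a reference for the discussion above, a minimal `VerticalPodAutoscaler` object might look like the following sketch. The target Deployment name `hamster` matches the sample earlier in this walkthrough; treat the exact values as illustrative:

```yml
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: hamster-vpa
spec:
  # Workload whose containers VPA should analyze and resize
  targetRef:
    apiVersion: "apps/v1"
    kind: Deployment
    name: hamster
  updatePolicy:
    # "Auto" lets the vpa-updater evict and relaunch pods with new requests
    updateMode: "Auto"
```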
+## Metrics server VPA throttling
+
+With AKS clusters version 1.24 and higher, vertical pod autoscaling is enabled for the metrics server. VPA enables you to adjust the resource limit when the metrics server is experiencing consistent CPU and memory resource constraints.
+
+If the metrics server throttling rate is high and the memory usage of its two pods is unbalanced, this indicates that the metrics server requires more resources than the default values specify.
+
+To update the coefficient values, create a ConfigMap in the overlay *kube-system* namespace to override the values in the metrics server specification. Perform the following steps to update the metrics server.
+
+1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.
+
+ ```yml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100m
+ cpuPerNode: 1m
+ baseMemory: 100Mi
+ memoryPerNode: 8Mi
+ ```
+
+ This example ConfigMap changes the resource limits and requests to the following:
+
+ * cpu: (100+1n) millicore
+ * memory: (100+8n) mebibyte
+
+ Where *n* is the number of nodes.
+
+2. Create the ConfigMap using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```bash
+ kubectl apply -f metrics-server-config.yaml
+ ```
+
+Be cautious when setting *baseCPU*, *cpuPerNode*, *baseMemory*, and *memoryPerNode*, because AKS doesn't validate the ConfigMap. As a recommended practice, increase the values gradually to avoid unnecessary resource consumption. Proactively monitor resource usage when updating or creating the ConfigMap. A large number of resource requests could negatively impact the node.
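As a quick sanity check of the coefficient formula above, this small bash sketch computes the values for a hypothetical 10-node cluster:

```bash
# Compute the metrics server resource values implied by the ConfigMap
# coefficients above, for a hypothetical 10-node cluster.
nodes=10
echo "cpu: $((100 + 1 * nodes))m"      # baseCPU + cpuPerNode * n
echo "memory: $((100 + 8 * nodes))Mi"  # baseMemory + memoryPerNode * n
```

For a 10-node cluster this yields `cpu: 110m` and `memory: 180Mi`.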
+ ## Next steps This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
aks Virtual Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes.md
Title: Use virtual nodes description: Overview of how using virtual node with Azure Kubernetes Services (AKS)- Previously updated : 09/06/2022 Last updated : 01/18/2023 # Create and configure an Azure Kubernetes Services (AKS) cluster to use virtual nodes
-To rapidly scale application workloads in an AKS cluster, you can use virtual nodes. With virtual nodes, you have quick provisioning of pods, and only pay per second for their execution time. You don't need to wait for Kubernetes cluster autoscaler to deploy VM compute nodes to run the additional pods. Virtual nodes are only supported with Linux pods and nodes.
+To rapidly scale application workloads in an AKS cluster, you can use virtual nodes. With virtual nodes, you have quick provisioning of pods, and only pay per second for their execution time. You don't need to wait for Kubernetes cluster autoscaler to deploy VM compute nodes to run more pods. Virtual nodes are only supported with Linux pods and nodes.
-The virtual nodes add-on for AKS, is based on the open source project [Virtual Kubelet][virtual-kubelet-repo].
+The virtual nodes add-on for AKS is based on the open source project [Virtual Kubelet][virtual-kubelet-repo].
-This article gives you an overview of the region availability and networking requirements for using virtual nodes, as well as the known limitations.
+This article gives you an overview of the region availability and networking requirements for using virtual nodes, and the known limitations.
## Regional availability
-All regions, where ACI supports VNET SKUs, are supported for virtual nodes deployments. For more details, see [Resource availability for Azure Container Instances in Azure regions](../container-instances/container-instances-region-availability.md).
+All regions where ACI supports VNet SKUs are supported for virtual node deployments. For more information, see [Resource availability for Azure Container Instances in Azure regions](../container-instances/container-instances-region-availability.md).
-For available CPU and memory SKUs in each region, please check the [Azure Container Instances Resource availability for Azure Container Instances in Azure regions - Linux container groups](../container-instances/container-instances-region-availability.md#linux-container-groups)
+For available CPU and memory SKUs in each region, review [Resource availability for Azure Container Instances in Azure regions - Linux container groups](../container-instances/container-instances-region-availability.md#linux-container-groups).
## Network requirements
-Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To provide this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). By default, AKS clusters are created with *basic* networking (kubenet).
+Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To support this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). By default, AKS clusters are created with *basic* networking (kubenet).
Pods running in Azure Container Instances (ACI) need access to the AKS API server endpoint, in order to configure networking. ## Known limitations
-Virtual Nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios are not yet supported with Virtual nodes:
+Virtual nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios aren't supported with virtual nodes:
* Using service principal to pull ACR images. [Workaround](https://github.com/virtual-kubelet/azure-aci/blob/master/README.md#private-registry) is to use [Kubernetes secrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) * [Virtual Network Limitations](../container-instances/container-instances-vnet.md) including VNet peering, Kubernetes network policies, and outbound traffic to the internet with network security groups. * Init containers * [Host aliases](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/) * [Arguments](../container-instances/container-instances-exec.md#restrictions) for exec in ACI
-* [DaemonSets](concepts-clusters-workloads.md#statefulsets-and-daemonsets) will not deploy pods to the virtual nodes
+* [DaemonSets](concepts-clusters-workloads.md#statefulsets-and-daemonsets) won't deploy pods to the virtual nodes
* Virtual nodes support scheduling Linux pods. You can manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider to schedule Windows Server containers to ACI. * Virtual nodes require AKS clusters with Azure CNI networking. * Using api server authorized ip ranges for AKS.
-* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-volume.md).
-* Using IPv6 is not supported.
+* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-csi.md).
+* Using IPv6 isn't supported.
* Virtual nodes don't support the [Container hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) feature. ## Next steps
Virtual nodes are often one component of a scaling solution in AKS. For more inf
- [Use the Kubernetes horizontal pod autoscaler][aks-hpa] - [Use the Kubernetes cluster autoscaler][aks-cluster-autoscaler]-- [Check out the Autoscale sample for Virtual Nodes][virtual-node-autoscale]
+- [Check out the Autoscale sample for virtual nodes][virtual-node-autoscale]
- [Read more about the Virtual Kubelet open source library][virtual-kubelet-repo] <!-- LINKS - external -->
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The API Management *gateway* (also called *data plane* or *runtime*) is the serv
[!INCLUDE [api-management-gateway-role](../../includes/api-management-gateway-role.md)] +
+> [!NOTE]
+> All requests to the API Management gateway, including those rejected by policy configurations, count toward configured rate limits, quotas, and billing limits if applied in the service tier.
++ ## Managed and self-hosted API Management offers both managed and self-hosted gateways:
The following table compares features available in the managed gateway versus th
> [!NOTE] > * Some features of managed and self-hosted gateways are supported only in certain [service tiers](api-management-features.md) or with certain [deployment environments](self-hosted-gateway-overview.md#packaging) for self-hosted gateways.
+> * For the current supported features of the self-hosted gateway, ensure that you have upgraded to the latest major version of the self-hosted gateway [container image](self-hosted-gateway-overview.md#container-images).
> * See also self-hosted gateway [limitations](self-hosted-gateway-overview.md#limitations). ### Infrastructure
The following table compares features available in the managed gateway versus th
### Policies
-Managed and self-hosted gateways support all available [policies](api-management-howto-policies.md) in policy definitions with the following exceptions.
+Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions.
-| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
+| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> |
| | -- | -- | - |
-| [Dapr integration](api-management-dapr-policies.md) | ❌ | ❌ | ✔️ |
+| [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |
| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ❌ | ❌ |
-| [Quota and rate limit](api-management-access-restriction-policies.md) | ✔️ | ✔️<sup>1</sup> | ✔️<sup>2</sup>
+| [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup>
| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ |
-<sup>1</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
-<sup>2</sup> By default, rate limit counts in self-hosted gateways are per-gateway, per-node.
+<sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/>
+<sup>2</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
+<sup>3</sup> [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
+ ### Monitoring
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
Previously updated : 12/17/2021 Last updated : 01/17/2023 # Guidance for running self-hosted gateway on Kubernetes in production
By default, a self-hosted gateway is deployed with a **RollingUpdate** deploymen
We recommend reducing container logs to warnings (`warn`) to improve for performance. Learn more in our [self-hosted gateway configuration reference](self-hosted-gateway-settings-reference.md).
+## Request throttling
+
+Request throttling in a self-hosted gateway can be enabled by using the API Management [rate-limit](rate-limit-policy.md) or [rate-limit-by-key](rate-limit-by-key-policy.md) policy. Configure rate limit counts to synchronize among gateway instances across cluster nodes by exposing the following ports in the Kubernetes deployment for instance discovery:
+
+* Port 4290 (UDP), for the rate limiting synchronization
+* Port 4291 (UDP), for sending heartbeats to other instances
+
+> [!NOTE]
+> [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)]
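A sketch of how these ports might be exposed in the self-hosted gateway Deployment's container spec follows; the container name and image tag are placeholders, so verify them against your own deployment manifest:

```yml
containers:
- name: contoso-apim-gateway          # placeholder container name
  image: mcr.microsoft.com/azure-api-management/gateway:v2  # verify image/tag for your deployment
  ports:
  - name: rate-limit-sync
    containerPort: 4290               # rate limiting synchronization
    protocol: UDP
  - name: heartbeat
    containerPort: 4291               # heartbeats to other instances
    protocol: UDP
```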
+ ## Security The self-hosted gateway is able to run as non-root in Kubernetes allowing customers to run the gateway securely.
securityContext:
> [!WARNING] > When using local CA certificates, the self-hosted gateway must run with user ID (UID) `1001` in order to manage the CA certificates otherwise the gateway will not start up. + ## Next steps * To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
The policy inserts the policy fragment as-is at the location you select in the p
| Attribute | Description | Required | Default | | | -- | -- | - |
-| fragment-id | A string. Policy expression allowed. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
+| fragment-id | A string. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
## Usage
In the following example, the policy fragment named *myFragment* is added in the
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
The `mock-response` policy, as the name implies, is used to mock APIs and operat
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation - [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- [Policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
+ ## Examples ```xml
The `mock-response` policy, as the name implies, is used to mock APIs and operat
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation - [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted
+### Usage notes
+
+* [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
++ ## Example In the following example, the rate limit of 10 calls per 60 seconds is keyed by the caller IP address. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerIP`.
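The example described above (which this excerpt elides) would take roughly the following shape — a sketch based on the attribute names in the API Management policy reference, so confirm them against the `rate-limit-by-key` policy documentation:

```xml
<!-- Limit each caller IP address to 10 calls per 60 seconds, and store
     the remaining allowed calls in the remainingCallsPerIP variable. -->
<rate-limit-by-key calls="10"
    renewal-period="60"
    counter-key="@(context.Request.IpAddress)"
    remaining-calls-variable-name="remainingCallsPerIP" />
```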
api-management Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md
Previously updated : 12/08/2022 Last updated : 01/11/2023
To understand the difference between rate limits and quotas, [see Rate limits an
* This policy can be used only once per policy definition. * Except where noted, [policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy. * This policy is only applied when an API is accessed using a subscription key.
+* [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
+ ## Example
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
description: Learn how Azure App Service helps you host your RESTful APIs with C
ms.assetid: a820e400-06af-4852-8627-12b3db4a8e70 ms.devlang: csharp Previously updated : 04/28/2020 Last updated : 01/31/2023
Next, you enable the built-in CORS support in App Service for your API.
![CORS error in browser client](./media/app-service-web-tutorial-rest-api/azure-app-service-cors-error.png)
- Because of the domain mismatch between the browser app (`http://localhost:5000`) and remote resource (`http://<app_name>.azurewebsites.net`), and the fact that your API in App Service is not sending the `Access-Control-Allow-Origin` header, your browser has prevented cross-domain content from loading in your browser app.
+ The domain mismatch between the browser app (`http://localhost:5000`) and remote resource (`http://<app_name>.azurewebsites.net`) is recognized by your browser as a cross-origin resource request. Also, because your REST API in the App Service app isn't sending the `Access-Control-Allow-Origin` header, the browser has prevented cross-domain content from loading.
In production, your browser app would have a public URL instead of the localhost URL, but the way to enable CORS to a localhost URL is the same as a public URL.
In the Cloud Shell, enable CORS to your client's URL by using the [`az webapp co
az webapp cors add --resource-group myResourceGroup --name <app-name> --allowed-origins 'http://localhost:5000' ```
-You can set more than one client URL in `properties.cors.allowedOrigins` (`"['URL1','URL2',...]"`). You can also enable all client URLs with `"['*']"`.
-
-> [!NOTE]
-> If your app requires credentials such as cookies or authentication tokens to be sent, the browser may require the `ACCESS-CONTROL-ALLOW-CREDENTIALS` header on the response. To enable this in App Service, set `properties.cors.supportCredentials` to `true` in your CORS config. This cannot be enabled when `allowedOrigins` includes `'*'`.
-
-> [!NOTE]
-> Specifying `AllowAnyOrigin` and `AllowCredentials` is an insecure configuration and can result in cross-site request forgery. The CORS service returns an invalid CORS response when an app is configured with both methods.
+You can add multiple allowed origins by running the command multiple times or by adding a comma-separated list in `--allowed-origins`. To allow all origins, use `--allowed-origins '*'`.
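For example, to allow a deployed front end in addition to the local origin, you can run the command once per origin (the resource group, app name, and second origin below are placeholders):

```azurecli
az webapp cors add --resource-group myResourceGroup --name <app-name> --allowed-origins 'http://localhost:5000'
az webapp cors add --resource-group myResourceGroup --name <app-name> --allowed-origins 'https://www.contoso.com'
```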
### Test CORS again
Refresh the browser app at `http://localhost:5000`. The error message in the **C
Congratulations, you're running an API in Azure App Service with CORS support.
-## App Service CORS vs. your CORS
+## Frequently asked questions
+
+- [App Service CORS vs. your CORS](#app-service-cors-vs-your-cors)
+- [How do I set allowed origins to a wildcard subdomain?](#how-do-i-set-allowed-origins-to-a-wildcard-subdomain)
+- [How do I enable the ACCESS-CONTROL-ALLOW-CREDENTIALS header on the response?](#how-do-i-enable-the-access-control-allow-credentials-header-on-the-response)
+
+#### App Service CORS vs. your CORS
You can use your own CORS utilities instead of App Service CORS for more flexibility. For example, you may want to specify different allowed origins for different routes or methods. Since App Service CORS lets you specify one set of accepted origins for all API routes and methods, you would want to use your own CORS code. See how ASP.NET Core does it at [Enabling Cross-Origin Requests (CORS)](/aspnet/core/security/cors).
The built-in App Service CORS feature does not have options to allow only specif
> >
+#### How do I set allowed origins to a wildcard subdomain?
+
+A wildcard subdomain like `*.contoso.com` is more restrictive than the wildcard origin `*`. However, the app's CORS management page in the Azure portal doesn't let you set a wildcard subdomain as an allowed origin. You can, however, set one by using the Azure CLI, like so:
+
+```azurecli-interactive
+az webapp cors add --resource-group <group-name> --name <app-name> --allowed-origins 'https://*.contoso.com'
+```
+
+#### How do I enable the ACCESS-CONTROL-ALLOW-CREDENTIALS header on the response?
+
+If your app requires credentials such as cookies or authentication tokens to be sent, the browser may require the `ACCESS-CONTROL-ALLOW-CREDENTIALS` header on the response. To enable this in App Service, set `properties.cors.supportCredentials` to `true`.
+
+```azurecli-interactive
+az resource update --name web --resource-group <group-name> \
+ --namespace Microsoft.Web --resource-type config \
+ --parent sites/<app-name> --set properties.cors.supportCredentials=true
+```
+
+This operation is not allowed when allowed origins include the wildcard origin `'*'`. Specifying `AllowAnyOrigin` and `AllowCredentials` is an insecure configuration and can result in cross-site request forgery. To allow credentials, try replacing the wildcard origin with [wildcard subdomains](#how-do-i-set-allowed-origins-to-a-wildcard-subdomain).
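To confirm the setting took effect, you can query it back with a command in the same style as the update above (a sketch; group and app names are placeholders):

```azurecli
az resource show --name web --resource-group <group-name> \
    --namespace Microsoft.Web --resource-type config \
    --parent sites/<app-name> --query properties.cors.supportCredentials
```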
+ [!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)] <a name="next"></a>
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
zone_pivot_groups: app-service-containers-code
# Mount Azure Storage as a local share in App Service ::: zone pivot="code-windows"
-> [!NOTE]
-> Mounting Azure Storage as a local share for App Service on Windows code (non-container) is currently in preview.
->
+ This guide shows how to mount Azure Storage Files as a network share in Windows code (non-container) in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-portal.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include: - Configure persistent storage for your App Service app and manage the storage separately.
The following features are supported for Linux containers:
| **Share name** | Files share to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. | | **Mount path** | Directory inside your app service that you want to mount. Only `/mounts/pathname` is supported.|
+ | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
::: zone-end ::: zone pivot="container-windows" | Setting | Description |
The following features are supported for Linux containers:
| **Share name** | Files share to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. | | **Mount path** | Directory inside your Windows container that you want to mount. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`) as it's not supported.|
+ | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
::: zone-end ::: zone pivot="container-linux" | Setting | Description |
The following features are supported for Linux containers:
| **Storage container** or **Share name** | Files share or Blobs container to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. | | **Mount path** | Directory inside the Linux container to mount to Azure Storage. Do not use `/` or `/home`.|
+ | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
::: zone-end # [Azure CLI](#tab/cli)
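As a sketch of the equivalent step with the Azure CLI, the `az webapp config storage-account add` command creates a mount with the settings described in the tables above (all names below are placeholders):

```azurecli
# All names are placeholders; --custom-id is an identifier you choose for the mount.
az webapp config storage-account add --resource-group <group-name> \
    --name <app-name> --custom-id <mount-id> \
    --storage-type AzureFiles --account-name <storage-account> \
    --share-name <share-name> --access-key "<access-key>" \
    --mount-path /mounts/<path-name>
```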
To validate that the Azure Storage is mounted successfully for the app:
## Best practices ::: zone pivot="code-windows"+
+- Azure Storage mounts can be configured as a virtual directory to serve static content. To configure the virtual directory, in the left navigation click **Configuration** > **Path Mappings** > **New Virtual Application or Directory**. Set the **Physical path** to the **Mount path** defined on the Azure Storage mount.
 - To avoid potential issues related to latency, place the app and the Azure Storage account in the same Azure region. Note, however, that if the app and Azure Storage account are in the same Azure region, and if you grant access from App Service IP addresses in the [Azure Storage firewall configuration](../storage/common/storage-network-security.md), then these IP restrictions are not honored. - In the Azure Storage account, avoid [regenerating the access key](../storage/common/storage-account-keys-manage.md) that's used to mount the storage in the app. The storage account contains two different keys. Azure App Service stores the Azure storage account key. Use a stepwise approach to ensure that the storage mount remains available to the app during key regeneration. For example, assuming that you used **key1** to configure storage mount in your app:
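The stepwise rotation above can be sketched with the Azure CLI (names are placeholders; in the CLI, `--key primary` corresponds to **key1**):

```azurecli
# 1. Point the existing mount at key2 so the app no longer depends on key1.
az webapp config storage-account update --resource-group <group-name> \
    --name <app-name> --custom-id <mount-id> --access-key "<value-of-key2>"
# 2. Regenerate key1 (the "primary" key) now that the mount uses key2.
az storage account keys renew --resource-group <group-name> \
    --account-name <storage-account> --key primary
```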
app-service Configure Gateway Required Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-gateway-required-vnet-integration.md
+
+ Title: Configure gateway-required virtual network integration for your app
+description: Integrate your app in Azure App Service with Azure virtual networks using gateway-required virtual network integration.
++ Last updated : 01/20/2023+++
+# Configure gateway-required virtual network integration
+
+Gateway-required virtual network integration supports connecting to a virtual network in another region or to a classic virtual network. Gateway-required virtual network integration only works for Windows plans. We recommend using [regional virtual network integration](./overview-vnet-integration.md) to integrate with virtual networks.
+
+Gateway-required virtual network integration:
+
+* Enables an app to connect to only one virtual network at a time.
+* Enables up to five virtual networks to be integrated within an App Service plan.
+* Allows the same virtual network to be used by multiple apps in an App Service plan without affecting the total number that can be used by an App Service plan. If you have six apps using the same virtual network in the same App Service plan, that counts as one virtual network being used.
+* Is subject to the SLA on the gateway, which can affect the overall [SLA](https://azure.microsoft.com/support/legal/sla/).
+* Enables your apps to use the DNS that the virtual network is configured with.
+* Requires a virtual network route-based gateway configured with an SSTP point-to-site VPN before it can be connected to an app.
+
+You can't use gateway-required virtual network integration:
+
+* With a virtual network connected with ExpressRoute.
+* From a Linux app.
+* From a [Windows container](./quickstart-custom-container.md).
+* To access service endpoint-secured resources.
+* To resolve App Settings referencing a network protected Key Vault.
+* With a coexistence gateway that supports both ExpressRoute and point-to-site or site-to-site VPNs.
+
+[Regional virtual network integration](./overview-vnet-integration.md) mitigates the above mentioned limitations.
+
+## Set up a gateway in your Azure virtual network
+
+To create a gateway:
+
+1. [Create the VPN gateway and subnet](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#creategw). Select a route-based VPN type.
+
+1. [Set the point-to-site addresses](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#addresspool). If the gateway isn't in the basic SKU, then IKEV2 must be disabled in the point-to-site configuration and SSTP must be selected. The point-to-site address space must be in the RFC 1918 address blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
+
+If you create the gateway for use with gateway-required virtual network integration, you don't need to upload a certificate. Creating the gateway can take 30 minutes. You won't be able to integrate your app with your virtual network until the gateway is created.
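The two steps above can be approximated with the Azure CLI (a sketch; all names and the address pool are placeholders, and the gateway must be route-based with SSTP as described above):

```azurecli
az network vnet-gateway create --resource-group <group-name> \
    --name <gateway-name> --vnet <vnet-name> \
    --public-ip-address <pip-name> --gateway-type Vpn \
    --vpn-type RouteBased --sku VpnGw1
# Set an SSTP point-to-site address pool inside the RFC 1918 ranges.
az network vnet-gateway update --resource-group <group-name> \
    --name <gateway-name> --address-prefixes 172.16.0.0/24 \
    --client-protocol SSTP
```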
+
+## How gateway-required virtual network integration works
+
+Gateway-required virtual network integration is built on top of point-to-site VPN technology. Point-to-site VPNs limit network access to the virtual machine that hosts the app. Apps are restricted to sending traffic out to the internet only through hybrid connections or through virtual network integration. When your app is configured in the portal to use gateway-required virtual network integration, a complex negotiation is managed on your behalf to create and assign certificates on the gateway and the application side. The result is that the workers used to host your apps can directly connect to the virtual network gateway in the selected virtual network.
++
+## Access on-premises resources
+
+Apps can access on-premises resources by integrating with virtual networks that have site-to-site connections. If you use gateway-required virtual network integration, update your on-premises VPN gateway routes with your point-to-site address blocks. When the site-to-site VPN is first set up, the scripts used to configure it should set up routes properly. If you add the point-to-site addresses after you create your site-to-site VPN, you need to update the routes manually. Details on how to do that vary per gateway and aren't described here.
+
+BGP routes from on-premises won't be propagated automatically into App Service. You need to manually propagate them on the point-to-site configuration using the steps in this document [Advertise custom routes for P2S VPN clients](../vpn-gateway/vpn-gateway-p2s-advertise-custom-routes.md).
+
+> [!NOTE]
+> The gateway-required virtual network integration feature doesn't integrate an app with a virtual network that has an ExpressRoute gateway. Even if the ExpressRoute gateway is configured in [coexistence mode](../expressroute/expressroute-howto-coexist-resource-manager.md), the virtual network integration doesn't work. If you need to access resources through an ExpressRoute connection, use the regional virtual network integration feature or an [App Service Environment](./environment/intro.md), which runs in your virtual network.
+
+## Peering
+
+If you use gateway-required virtual network integration with peering, you need to configure a few more items. To configure peering to work with your app:
+
+1. Add a peering connection on the virtual network your app connects to. When you add the peering connection, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow gateway transit**.
+1. Add a peering connection on the virtual network that's being peered to the virtual network you're connected to. When you add the peering connection on the destination virtual network, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow remote gateways**.
+1. Go to **App Service plan** > **Networking** > **VNet integration** in the portal. Select the virtual network your app connects to. Under the routing section, add the address range of the virtual network that's peered with the virtual network your app is connected to.
+
+## Manage virtual network integration
+
+Connecting and disconnecting with a virtual network is at an app level. Operations that can affect virtual network integration across multiple apps are at the App Service plan level. From the app > **Networking** > **VNet integration** portal, you can get details on your virtual network. You can see similar information at the App Service plan level in the **App Service plan** > **Networking** > **VNet integration** portal.
+
+The only operation you can take in the app view of your virtual network integration instance is to disconnect your app from the virtual network it's currently connected to. To disconnect your app from a virtual network, select **Disconnect**. Your app is restarted when you disconnect from a virtual network. Disconnecting doesn't change your virtual network. The subnet or gateway isn't removed. If you then want to delete your virtual network, first disconnect your app from the virtual network and delete the resources in it, such as gateways.
+
+The App Service plan virtual network integration UI shows you all the virtual network integrations used by the apps in your App Service plan. To see details on each virtual network, select the virtual network you're interested in. There are two actions you can perform here for gateway-required virtual network integration:
+
+* **Sync network**: The sync network operation is used only for the gateway-required virtual network integration feature. Performing a sync network operation ensures that your certificates and network information are in sync. If you add or change the DNS of your virtual network, perform a sync network operation. This operation restarts any apps that use this virtual network. This operation won't work if you're using an app and a virtual network belonging to different subscriptions.
+* **Add routes**: Adding routes drives outbound traffic into your virtual network.
+
+The private IP assigned to the instance is exposed via the environment variable `WEBSITE_PRIVATE_IP`. The Kudu console UI also shows the list of environment variables available to the web app. This IP is an IP from the address range of the point-to-site address pool configured on the virtual network gateway. This IP will be used by the web app to connect to the resources through the Azure virtual network.
+
+> [!NOTE]
+> The value of `WEBSITE_PRIVATE_IP` is bound to change. However, it will be an IP within the point-to-site address range, so you'll need to allow access from the entire address range.
+>
+
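For example, code running in the app can read the assigned address from the environment (a minimal bash sketch; outside App Service the variable is simply unset):

```bash
# WEBSITE_PRIVATE_IP is set by the platform when VNet integration is enabled.
# It falls within the point-to-site address pool and can change on restart.
private_ip="${WEBSITE_PRIVATE_IP:-not set}"
echo "Outbound private IP: ${private_ip}"
```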
+## Gateway-required virtual network integration routing
+
+The routes that are defined in your virtual network are used to direct traffic into your virtual network from your app. To send more outbound traffic into the virtual network, add those address blocks here. This capability only works with gateway-required virtual network integration. Route tables don't affect your app traffic when you use gateway-required virtual network integration.
+
+## Gateway-required virtual network integration certificates
+
+When gateway-required virtual network integration is enabled, there's a required exchange of certificates to ensure the security of the connection. Along with the certificates are the DNS configuration, routes, and other similar things that describe the network.
+
+If certificates or network information is changed, select **Sync Network**. When you select **Sync Network**, you cause a brief outage in connectivity between your app and your virtual network. Your app isn't restarted, but the loss of connectivity could cause your site to not function properly.
+
+## Pricing details
+
+Three charges are related to the use of the gateway-required virtual network integration feature:
+
+* **App Service plan pricing tier charges**: Your apps need to be in a Basic, Standard, Premium, Premium v2, or Premium v3 App Service plan. For more information on those costs, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
+* **Data transfer costs**: There's a charge for data egress, even if the virtual network is in the same datacenter. Those charges are described in [Data transfer pricing details](https://azure.microsoft.com/pricing/details/data-transfers/).
+* **VPN gateway costs**: There's a cost to the virtual network gateway that's required for the point-to-site VPN. For more information, see [VPN gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway/).
+
+## Troubleshooting
+
+Many things can prevent your app from reaching a specific host and port. Most of the time it's one of these things:
+
+* **A firewall is in the way.** If you have a firewall in the way, you hit the TCP timeout. The TCP timeout is 21 seconds in this case. Use the **tcpping** tool to test connectivity. TCP timeouts can be caused by many things beyond firewalls, but start there.
+* **DNS isn't accessible.** The DNS timeout is 3 seconds per DNS server. If you have two DNS servers, the timeout is 6 seconds. Use nameresolver to see if DNS is working. You can't use nslookup, because that doesn't use the DNS your virtual network is configured with. If DNS is inaccessible, a firewall or NSG could be blocking access to DNS, or the DNS server could be down.
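From the Kudu debug console, both checks take a simple form (the host name and port below are examples):

```console
tcpping internal-db.contoso.com:1433
nameresolver internal-db.contoso.com
```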
+
+If those items don't answer your problems, look first for things like:
+
+* Is the point-to-site address range in the RFC 1918 ranges (10.0.0.0-10.255.255.255 / 172.16.0.0-172.31.255.255 / 192.168.0.0-192.168.255.255)?
+* Does the gateway show as being up in the portal? If your gateway is down, then bring it back up.
+* Do certificates show as being in sync, or do you suspect that the network configuration was changed? If your certificates are out of sync or you suspect that a change was made to your virtual network configuration that wasn't synced with your ASPs, select **Sync Network**.
+* If you're going across a VPN, is the on-premises gateway configured to route traffic back up to Azure? If you can reach endpoints in your virtual network but not on-premises, check your routes.
+* Are you trying to use a coexistence gateway that supports both point to site and ExpressRoute? Coexistence gateways aren't supported with virtual network integration.
+
+Debugging networking issues is a challenge because you can't see what's blocking access to a specific host:port combination. Some causes include:
+
+* You have a firewall up on your host that prevents access to the application port from your point-to-site IP range. Crossing subnets often requires public access.
+* Your target host is down.
+* Your application is down.
+* You had the wrong IP or hostname.
+* Your application is listening on a different port than what you expected. You can match your process ID with the listening port by using `netstat -aon` on the endpoint host.
+* Your network security groups are configured in such a manner that they prevent access to your application host and port from your point-to-site IP range.
+
+You don't know what address your app actually uses. It could be any address in the point-to-site address range, so you need to allow access from the entire address range.
+
+More debug steps include:
+
+* Connect to a VM in your virtual network and attempt to reach your resource host:port from there. To test for TCP access, use the PowerShell command **Test-NetConnection**. The syntax is:
+
+```powershell
+Test-NetConnection hostname [optional: -Port]
+```
+
+* Bring up an application on a VM and test access to that host and port from the console from your app by using **tcpping**.
+
+### On-premises resources
+
+If your app can't reach a resource on-premises, check if you can reach the resource from your virtual network. Use the **Test-NetConnection** PowerShell command to check for TCP access. If your VM can't reach your on-premises resource, your VPN or ExpressRoute connection might not be configured properly.
+
+If your virtual network-hosted VM can reach your on-premises system but your app can't, the cause is likely one of the following reasons:
+
+* Your routes aren't configured with your subnet or point-to-site address ranges in your on-premises gateway.
+* Your network security groups are blocking access for your point-to-site IP range.
+* Your on-premises firewalls are blocking traffic from your point-to-site IP range.
+* You're trying to reach a non-RFC 1918 address by using the regional virtual network integration feature.
+
+For more information, see [virtual network integration troubleshooting guide](/troubleshoot/azure/app-service/troubleshoot-vnet-integration-apps).
app-service Configure Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/configure-network-settings.md
ASE_NAME="[myAseName]"
RESOURCE_GROUP_NAME="[myResourceGroup]" az appservice ase update --name $ASE_NAME -g $RESOURCE_GROUP_NAME --allow-new-private-endpoint-connection true
-az appservice ase list-addresses -n --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query properties.allowNewPrivateEndpointConnections
+az appservice ase list-addresses --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query allowNewPrivateEndpointConnections
``` The setting is also available for configuration through Azure portal at the App Service Environment configuration:
If you want to enable FTP access, you can run the following Azure CLI command:
```azurecli ASE_NAME="[myAseName]" RESOURCE_GROUP_NAME="[myResourceGroup]"
-az resource update --name $ASE_NAME/configurations/networking --set properties.ftpEnabled=true -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration"
+az appservice ase update --name $ASE_NAME -g $RESOURCE_GROUP_NAME --allow-incoming-ftp-connections true
-az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.ftpEnabled
+az appservice ase list-addresses --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query ftpEnabled
``` The setting is also available for configuration through Azure portal at the App Service Environment configuration:
Run the following Azure CLI command to enable remote debugging access:
```azurecli ASE_NAME="[myAseName]" RESOURCE_GROUP_NAME="[myResourceGroup]"
-az resource update --name $ASE_NAME/configurations/networking --set properties.RemoteDebugEnabled=true -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration"
+az appservice ase update --name $ASE_NAME -g $RESOURCE_GROUP_NAME --allow-remote-debugging true
-az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.remoteDebugEnabled
+az appservice ase list-addresses --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query remoteDebugEnabled
``` The setting is also available for configuration through Azure portal at the App Service Environment configuration:
The setting is also available for configuration through Azure portal at the App
## Next steps > [!div class="nextstepaction"]
-> [Create an App Service Environment from a template](create-from-template.md)
+> [Create an App Service Environment from a template](./how-to-create-from-template.md)
> [!div class="nextstepaction"] > [Deploy your app to Azure App Service using FTP](../deploy-ftp.md)
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md
Title: Create an ASE with ARM
+ Title: Create an ASE with Azure Resource Manager
description: Learn how to create an external or ILB App Service environment by using an Azure Resource Manager template. ms.assetid: 6eb7d43d-e820-4a47-818c-80ff7d3b6f8e Previously updated : 10/11/2021 Last updated : 01/20/2023 # Create an ASE by using an Azure Resource Manager template ## Overview
-> [!NOTE]
-> This article is about the App Service Environment v2 and App Service Environment v3 which are used with Isolated App Service plans
->
+
+> [!IMPORTANT]
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
Azure App Service environments (ASEs) can be created with an internet-accessible endpoint or an endpoint on an internal address in an Azure Virtual Network. When created with an internal endpoint, that endpoint is provided by an Azure component called an internal load balancer (ILB). The ASE on an internal IP address is called an ILB ASE. The ASE with a public endpoint is called an External ASE. An ASE can be created by using the Azure portal or an Azure Resource Manager template. This article walks through the steps and syntax you need to create an External ASE or ILB ASE with Resource Manager templates. To learn how to create an ASEv2 in the Azure portal, see [Make an External ASE][MakeExternalASE] or [Make an ILB ASE][MakeILBASE].
-To learn how to create an ASEv3 in Azure portal, see [Create ASEv3][Create ASEv3].
-When you create an ASE in the Azure portal, you can create your virtual network at the same time or choose a preexisting virtual network to deploy into.
+When you create an ASE in the Azure portal, you can create your virtual network at the same time or choose a pre-existing virtual network to deploy into.
When you create an ASE from a template, you must start with: * An Azure Virtual Network. * A subnet in that virtual network. We recommend an ASE subnet size of `/24` with 256 addresses to accommodate future growth and scaling needs. After the ASE is created, you can't change the size.
-* When you creating an ASE into preexisting virtual network and subnet, the existing resource group name, virtual network name and subnet name are required.
* The subscription you want to deploy into. * The location you want to deploy into.
-To automate your ASE creation, follow they guidelines in the sections below. If you are creating an ILB ASEv2 with custom dnsSuffix (for example, `internal-contoso.com`), there are a few more things to do.
+To automate your ASE creation, follow the guidelines in the following sections. If you're creating an ILB ASEv2 with a custom dnsSuffix (for example, `internal-contoso.com`), there are a few more things to do.
1. After your ILB ASE with custom dnsSuffix is created, a TLS/SSL certificate that matches your ILB ASE domain should be uploaded.
To automate your ASE creation, follow they guidelines in the sections below. If
## Create the ASE
-A Resource Manager template that creates an ASE and its associated parameters file is available on GitHub for [ASEv3][asev3quickstarts] and [ASEv2][quickstartasev2create].
-
-If you want to make an ASE, use these Resource Manager template [ASEv3][asev3quickstarts] or [ASEv2][quickstartilbasecreate] example. They cater to that use case. Most of the parameters in the *azuredeploy.parameters.json* file are common to the creation of ILB ASEs and External ASEs. The following list calls out parameters of special note, or that are unique, when you create an ILB ASE with an existing subnet.
-### ASEv3 parameters
-* *aseName*: Required. This parameter defines an unique ASE name.
-* *internalLoadBalancingMode*: Required. In most cases, set this to 3, which means both HTTP/HTTPS traffic on ports 80/443. If this property is set to 0, the HTTP/HTTPS traffic remains on the public VIP.
-* *zoneRedundant*: Required. In most cases, set this to false, which means the ASE will not be deployed into Availability Zones(AZ). Zonal ASEs can be deployed in some regions, you can refer to [this][AZ Support for ASEv3].
-* *dedicatedHostCount*: Required. In most cases, set this to 0, which means the ASE will be deployed as normal without dedicated hosts deployed.
-* *useExistingVnetandSubnet*: Required. Set to true if using an existing virtual network and subnet.
-* *vNetResourceGroupName*: Required if using an existing virtual network and subnet. This parameter defines the resource group name of the existing virtual network and subnet where ASE will reside.
-* *virtualNetworkName*: Required if using an existing virtual network and subnet. This parameter defines the virtual network name of the existing virtual network and subnet where ASE will reside.
-* *subnetName*: Required if using an existing virtual network and subnet. This parameter defines the subnet name of the existing virtual network and subnet where ASE will reside.
-* *createPrivateDNS*: Set to true if you want to create a private DNS zone after ASEv3 created. For an ILB ASE, when set this parameter to true, it will create a private DNS zone as ASE name with *appserviceenvironment.net* DNS suffix.
-### ASEv2 parameters
-* *aseName*: This parameter defines an unique ASE name.
+A Resource Manager template that creates an ASE and its associated parameters file is available on GitHub for [ASEv2][quickstartasev2create].
+
+If you want to make an ASE, use this [ASEv2][quickstartilbasecreate] Resource Manager template example. Most of the parameters in the *azuredeploy.parameters.json* file are common to the creation of ILB ASEs and External ASEs. The following list calls out parameters of special note, or that are unique, when you create an ILB ASE with an existing subnet.
+
+### Parameters
+* *aseName*: This parameter defines a unique ASE name.
* *location*: This parameter defines the location of the App Service Environment.
* *existingVirtualNetworkName*: This parameter defines the virtual network name of the existing virtual network and subnet where ASE will reside.
* *existingVirtualNetworkResourceGroup*: This parameter defines the resource group name of the existing virtual network and subnet where ASE will reside.
Obtain a valid TLS/SSL certificate by using internal certificate authorities, pu
* **Subject**: This attribute must be set to **.your-root-domain-here.com*.
* **Subject Alternative Name**: This attribute must include both **.your-root-domain-here.com* and **.scm.your-root-domain-here.com*. TLS connections to the SCM/Kudu site associated with each app use an address of the form *your-app-name.scm.your-root-domain-here.com*.
-With a valid TLS/SSL certificate in hand, two additional preparatory steps are needed. Convert/save the TLS/SSL certificate as a .pfx file. Remember that the .pfx file must include all intermediate and root certificates. Secure it with a password.
+With a valid TLS/SSL certificate in hand, two more preparatory steps are needed. Convert/save the TLS/SSL certificate as a .pfx file. Remember that the .pfx file must include all intermediate and root certificates. Secure it with a password.
The .pfx file needs to be converted into a base64 string because the TLS/SSL certificate is uploaded by using a Resource Manager template. Because Resource Manager templates are text files, the .pfx file must be converted into a base64 string. This way it can be included as a parameter of the template.
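The conversion itself is a plain base64 encoding of the .pfx file's bytes. As a minimal sketch (the helper name and placeholder bytes are hypothetical; any base64-capable tool produces an equivalent string):

```python
import base64

def pfx_to_base64(pfx_bytes: bytes) -> str:
    # Returns the string to paste in as the pfxBlobString parameter.
    return base64.b64encode(pfx_bytes).decode("ascii")

# Placeholder bytes; in practice, read your real .pfx file instead:
#   with open("exportedcert.pfx", "rb") as f:
#       blob = pfx_to_base64(f.read())
blob = pfx_to_base64(b"\x30\x82\x00\x00")
```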
The parameters in the *azuredeploy.parameters.json* file are listed here:
* *existingAseLocation*: Text string containing the Azure region where the ILB ASE was deployed. For example: "South Central US".
* *pfxBlobString*: The base64-encoded string representation of the .pfx file. Use the code snippet shown earlier and copy the string contained in "exportedcert.pfx.b64". Paste it in as the value of the *pfxBlobString* attribute.
* *password*: The password used to secure the .pfx file.
-* *certificateThumbprint*: The certificate's thumbprint. If you retrieve this value from PowerShell (for example, *$certificate.Thumbprint* from the earlier code snippet), you can use the value as is. If you copy the value from the Windows certificate dialog box, remember to strip out the extraneous spaces. The *certificateThumbprint* should look something like AF3143EB61D43F6727842115BB7F17BBCECAECAE.
+* *certificateThumbprint*: The certificate's thumbprint. If you retrieve this value from PowerShell (for example, `$certificate.Thumbprint` from the earlier code snippet), you can use the value as is. If you copy the value from the Windows certificate dialog box, remember to strip out the extraneous spaces. The *certificateThumbprint* should look something like AF3143EB61D43F6727842115BB7F17BBCECAECAE.
* *certificateName*: A friendly string identifier of your own choosing used to identify the certificate. The name is used as part of the unique Resource Manager identifier for the *Microsoft.Web/certificates* entity that represents the TLS/SSL certificate. The name *must* end with the following suffix: \_yourASENameHere_InternalLoadBalancingASE. The Azure portal uses this suffix as an indicator that the certificate is used to secure an ILB-enabled ASE.

An abbreviated example of *azuredeploy.parameters.json* is shown here:
$parameterPath="PATH\azuredeploy.parameters.json"
New-AzResourceGroupDeployment -Name "CHANGEME" -ResourceGroupName "YOUR-RG-NAME-HERE" -TemplateFile $templatePath -TemplateParameterFile $parameterPath ```
-It takes roughly 40 minutes per ASE front end to apply the change. For example, for a default-sized ASE that uses two front ends, the template takes around one hour and 20 minutes to complete. While the template is running, the ASE can't scale.
+It takes roughly 40 minutes per ASE front end to apply the change. For example, for a default-sized ASE that uses two front ends, the template takes around 1 hour and 20 minutes to complete. While the template is running, the ASE can't scale.
After the template finishes, apps on the ILB ASE can be accessed over HTTPS. The connections are secured by using the default TLS/SSL certificate. The default TLS/SSL certificate is used when apps on the ILB ASE are addressed by using a combination of the application name plus the default host name. For example, `https://mycustomapp.internal-contoso.com` uses the default TLS/SSL certificate for **.internal-contoso.com*.
However, just like apps that run on the public multitenant service, developers c
[MakeILBASE]: ./create-ilb-ase.md [ASENetwork]: ./network-info.md [UsingASE]: ./using-an-ase.md
-[UDRs]: ../../virtual-network/virtual-networks-udr-overview.md
-[NSGs]: ../../virtual-network/network-security-groups-overview.md
[ConfigureASEv1]: app-service-web-configure-an-app-service-environment.md [ASEv1Intro]: app-service-app-service-environment-intro.md
-[mobileapps]: /previous-versions/azure/app-service-mobile/app-service-mobile-value-prop
-[Functions]: ../../azure-functions/index.yml
[Pricing]: https://azure.microsoft.com/pricing/details/app-service/ [ARMOverview]: ../../azure-resource-manager/management/overview.md
-[ConfigureSSL]: ../../app-service/configure-ssl-certificate.md
-[Kudu]: https://azure.microsoft.com/resources/videos/super-secret-kudu-debug-console-for-azure-web-sites/
-[ASEWAF]: ./integrate-with-application-gateway.md
-[AppGW]: ../../web-application-firewall/ag/ag-overview.md
-[ILBASEv1Template]: app-service-app-service-environment-create-ilb-ase-resourcemanager.md
-[Create ASEv3]: creation.md
-[asev3quickstarts]: https://azure.microsoft.com/resources/templates/web-app-asp-app-on-asev3-create
-[AZ Support for ASEv3]: zone-redundancy.md
+[ConfigureSSL]: ../../app-service/configure-ssl-certificate.md
app-service How To Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-create-from-template.md
+
+ Title: Create an App Service Environment (ASE) v3 with Azure Resource Manager
+description: Learn how to create an external or ILB App Service Environment v3 by using an Azure Resource Manager template.
++ Last updated : 01/20/2023++
+# Create an App Service Environment by using an Azure Resource Manager template
+
+App Service Environment can be created by using an Azure Resource Manager template, which allows you to do repeatable deployments.
+
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
+
+## Overview
+
+Azure App Service Environment can be created with an internet-accessible endpoint or an endpoint on an internal address in an Azure Virtual Network. When created with an internal endpoint, that endpoint is provided by an Azure component called an internal load balancer (ILB). The App Service Environment on an internal IP address is called an ILB ASE. The App Service Environment with a public endpoint is called an External ASE.
+
+An ASE can be created by using the Azure portal or an Azure Resource Manager template. This article walks through the steps and syntax you need to create an External ASE or ILB ASE with Resource Manager templates. Learn [how to create an App Service Environment in Azure portal](./creation.md).
+
+When you create an App Service Environment in the Azure portal, you can create your virtual network at the same time or choose a pre-existing virtual network to deploy into.
+
+When you create an App Service Environment from a template, you must start with:
+
+* An Azure Virtual Network.
+* A subnet in that virtual network. We recommend a subnet size of `/24` with 256 addresses to accommodate future growth and scaling needs. After the App Service Environment is created, you can't change the size.
+* The location you want to deploy into.
+
+## Configuring the App Service Environment
+
+The basic Resource Manager template that creates an App Service Environment looks like this:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('aseName')]",
+ "location": "[resourceGroup().location]",
+ "kind": "ASEV3",
+ "properties": {
+ "internalLoadBalancingMode": "Web, Publishing",
+ "virtualNetwork": {
+ "id": "[parameters('subnetResourceId')]"
+ },
+ "networkingConfiguration": { },
+ "customDnsSuffixConfiguration": { }
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ }
+}
+```
+
+In addition to the core properties, there are other configuration options that you can use to configure your App Service Environment.
+
+* *name*: Required. This parameter defines a unique App Service Environment name.
+* *virtualNetwork -> id*: Required. Specifies the resource ID of the subnet. The subnet must be empty and delegated to *Microsoft.Web/hostingEnvironments*.
+* *internalLoadBalancingMode*: Required. In most cases, set this property to "Web, Publishing", which means both HTTP/HTTPS traffic and FTP traffic is on an internal VIP (Internal Load Balancer). If this property is set to "None", all traffic remains on the public VIP (External Load Balancer).
+* *zoneRedundant*: Optional. Defines (true/false) whether the App Service Environment is deployed into Availability Zones (AZ). For more information, see [zone redundancy](./zone-redundancy.md).
+* *dedicatedHostCount*: Optional. In most cases, set this property to 0 or leave it out. You can set it to 2 if you want to deploy your App Service Environment with physical hardware isolation on dedicated hosts.
+* *upgradePreference*: Optional. Defines whether an upgrade starts automatically or whether you're given a 15-day window to start the deployment. Valid values are "None", "Early", "Late", and "Manual". For more information, see [upgrade preference](./how-to-upgrade-preference.md).
+* *clusterSettings*: Optional. For more information, see [cluster settings](./app-service-app-service-environment-custom-settings.md).
+* *networkingConfiguration -> allowNewPrivateEndpointConnections*: Optional. For more information, see [networking configuration](./configure-network-settings.md#allow-new-private-endpoint-connections).
+* *networkingConfiguration -> remoteDebugEnabled*: Optional. For more information, see [networking configuration](./configure-network-settings.md#remote-debugging-access).
+* *networkingConfiguration -> ftpEnabled*: Optional. For more information, see [networking configuration](./configure-network-settings.md#ftp-access).
+* *networkingConfiguration -> inboundIpAddressOverride*: Optional. Allows you to create an App Service Environment with your own Azure public IP address (specify the resource ID) or define a static IP for ILB deployments. This setting can't be changed after the App Service Environment is created.
+* *customDnsSuffixConfiguration*: Optional. Allows you to specify a custom domain suffix for the App Service Environment. Requires a valid certificate stored in Azure Key Vault and access through a managed identity. For more information about the specific parameters, see [configure a custom domain suffix](./how-to-custom-domain-suffix.md).
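For reference, a minimal *azuredeploy.parameters.json* sketch for the basic template shown earlier (the ASE name and the subnet path segments are placeholders you must replace):

```json
{
  "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentParameters.json#",
  "contentVersion": "1.0.0.0",
  "parameters": {
    "aseName": {
      "value": "my-ase-name"
    },
    "subnetResourceId": {
      "value": "/subscriptions/<subscription-id>/resourceGroups/<rg-name>/providers/Microsoft.Network/virtualNetworks/<vnet-name>/subnets/<subnet-name>"
    }
  }
}
```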
+
+### Deploying the App Service Environment
+
+After you create the Resource Manager template, for example named *azuredeploy.json*, and optionally a parameters file, for example named *azuredeploy.parameters.json*, you can create the App Service Environment by using the following Azure CLI snippet. Change the file paths to match the Resource Manager template-file locations on your machine. Remember to supply your own value for the resource group name:
+
+```azurecli
+templatePath="PATH/azuredeploy.json"
+parameterPath="PATH/azuredeploy.parameters.json"
+
+az deployment group create --resource-group "YOUR-RG-NAME-HERE" --template-file $templatePath --parameters $parameterPath
+```
+
+It takes about two hours for the App Service Environment to be created.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](./using.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](./networking.md)
+
+> [!div class="nextstepaction"]
+> [Certificates in App Service Environment v3](./overview-certificates.md)
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
The custom domain suffix defines a root domain that can be used by the App Servi
The custom domain suffix is for the App Service Environment. This feature is different from a custom domain binding on an App Service. For more information on custom domain bindings, see [Map an existing custom DNS name to Azure App Service](../app-service-web-tutorial-custom-domain.md).
-If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for **.scm.CUSTOM-DOMAIN*, the scm site will then also be reachable from *APP-NAME.scm.CUSTOM-DOMAIN*. You can only access scm over custom domain using basic authentication. Single sign-on is only possible with the default root domain.
+If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for **.scm.CUSTOM-DOMAIN*, the scm site will then also be reachable from *APP-NAME.scm.CUSTOM-DOMAIN*. You can only access scm over custom domain using basic authentication. Single sign-on is only possible with the default root domain.
+
+Unlike earlier versions, the FTPS endpoints for your App Services on your App Service Environment v3 can only be reached using the default domain suffix.
## Prerequisites
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
A few features that were available in earlier versions of App Service Environmen
- Monitor your traffic with Network Watcher or network security group (NSG) flow logs. - Perform a backup and restore operation on a storage account behind a firewall.
+- Access the FTPS endpoint using a custom domain suffix.
## Pricing
app-service Overview Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-security.md
App Service authentication and authorization support multiple authentication pro
When authenticating against a back-end service, App Service provides two different mechanisms depending on your need: - **Service identity** - Sign in to the remote resource using the identity of the app itself. App Service lets you easily create a [managed identity](overview-managed-identity.md), which you can use to authenticate with other services, such as [Azure SQL Database](/azure/sql-database/) or [Azure Key Vault](../key-vault/index.yml). For an end-to-end tutorial of this approach, see [Secure Azure SQL Database connection from App Service using a managed identity](tutorial-connect-msi-sql-database.md).-- **On-behalf-of (OBO)** - Make delegated access to remote resources on behalf of the user. With Azure Active Directory as the authentication provider, your App Service app can perform delegated sign-in to a remote service, such as [Microsoft Graph API](../active-directory/develop/microsoft-graph-intro.md) or a remote API app in App Service. For an end-to-end tutorial of this approach, see [Authenticate and authorize users end-to-end in Azure App Service](tutorial-auth-aad.md).
+- **On-behalf-of (OBO)** - Make delegated access to remote resources on behalf of the user. With Azure Active Directory as the authentication provider, your App Service app can perform delegated sign-in to a remote service, such as [Microsoft Graph](/graph/overview) or a remote API app in App Service. For an end-to-end tutorial of this approach, see [Authenticate and authorize users end-to-end in Azure App Service](tutorial-auth-aad.md).
## Connectivity to remote resources
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Title: Integrate your app with an Azure virtual network
description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 10/05/2022 Last updated : 01/20/2023
-# Integrate your app with an Azure virtual network
+# <a name="regional-virtual-network-integration"></a>Integrate your app with an Azure virtual network
-This article describes the Azure App Service virtual network integration feature and how to set it up with apps in [App Service](./overview.md). With [Azure virtual networks](../virtual-network/virtual-networks-overview.md), you can place many of your Azure resources in a non-internet-routable network. The App Service virtual network integration feature enables your apps to access resources in or through a virtual network. Virtual network integration doesn't enable your apps to be accessed privately.
+This article describes the Azure App Service virtual network integration feature and how to set it up with apps in [App Service](./overview.md). With [Azure virtual networks](../virtual-network/virtual-networks-overview.md), you can place many of your Azure resources in a non-internet-routable network. The App Service virtual network integration feature enables your apps to access resources in or through a virtual network.
+
+>[!NOTE]
+> Information about Gateway-required virtual network integration has [moved to a new location](./configure-gateway-required-vnet-integration.md).
App Service has two variations:
+* The dedicated compute pricing tiers, which include the Basic, Standard, Premium, Premium v2, and Premium v3.
+* The App Service Environment, which deploys directly into your virtual network with dedicated supporting infrastructure and uses the Isolated and Isolated v2 pricing tiers.
-Learn [how to enable virtual network integration](./configure-vnet-integration-enable.md).
+The virtual network integration feature is used in Azure App Service dedicated compute pricing tiers. If your app is in an [App Service Environment](./environment/overview.md), it's already integrated with a virtual network and doesn't require you to configure the virtual network integration feature to reach resources in the same virtual network. For more information on all the networking features, see [App Service networking features](./networking-features.md).
+
+Virtual network integration gives your app access to resources in your virtual network, but it doesn't grant inbound private access to your app from the virtual network. Private site access refers to making an app accessible only from a private network, such as from within an Azure virtual network. Virtual network integration is used only to make outbound calls from your app into your virtual network. Refer to [private endpoint](./networking/private-endpoint.md) for inbound private access.
+
+The virtual network integration feature:
-## Regional virtual network integration
+* Requires a [supported Basic or Standard](./overview-vnet-integration.md#limitations), Premium, Premium v2, Premium v3, or Elastic Premium App Service pricing tier.
+* Supports TCP and UDP.
+* Works with App Service apps, function apps and Logic apps.
-Regional virtual network integration supports connecting to a virtual network in the same region and doesn't require a gateway. Using regional virtual network integration enables your app to access:
+There are some things that virtual network integration doesn't support, like:
+
+* Mounting a drive.
+* Windows Server Active Directory domain join.
+* NetBIOS.
+
+Virtual network integration supports connecting to a virtual network in the same region. Using virtual network integration enables your app to access:
* Resources in the virtual network you're integrated with. * Resources in virtual networks peered to the virtual network your app is integrated with including global peering connections.
Regional virtual network integration supports connecting to a virtual network in
* Service endpoint-secured services. * Private endpoint-enabled services.
-When you use regional virtual network integration, you can use the following Azure networking features:
+When you use virtual network integration, you can use the following Azure networking features:
* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app. * **Route tables (UDRs)**: You can place a route table on the integration subnet to send outbound traffic where you want.
+* **NAT gateway**: You can use [NAT gateway](./networking/nat-gateway-integration.md) to get a dedicated outbound IP and mitigate SNAT port exhaustion.
-### How regional virtual network integration works
+Learn [how to enable virtual network integration](./configure-vnet-integration-enable.md).
-Apps in App Service are hosted on worker roles. Regional virtual network integration works by mounting virtual interfaces to the worker roles with addresses in the delegated subnet. Because the from address is in your virtual network, it can access most things in or through your virtual network like a VM in your virtual network would. The networking implementation is different than running a VM in your virtual network. That's why some networking features aren't yet available for this feature.
+## <a name="how-regional-virtual-network-integration-works"></a> How virtual network integration works
+Apps in App Service are hosted on worker roles. Virtual network integration works by mounting virtual interfaces to the worker roles with addresses in the delegated subnet. Because the from address is in your virtual network, it can access most things in or through your virtual network like a VM in your virtual network would.
-When regional virtual network integration is enabled, your app makes outbound calls through your virtual network. The outbound addresses that are listed in the app properties portal are the addresses still used by your app. However, if your outbound call is to a virtual machine or private endpoint in the integration virtual network or peered virtual network, the outbound address will be an address from the integration subnet. The private IP assigned to an instance is exposed via the environment variable, WEBSITE_PRIVATE_IP.
-When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network, and outbound traffic to the internet will go through the same channels as normal.
+When virtual network integration is enabled, your app makes outbound calls through your virtual network. The outbound addresses that are listed in the app properties portal are the addresses still used by your app. However, if your outbound call is to a virtual machine or private endpoint in the integration virtual network or peered virtual network, the outbound address will be an address from the integration subnet. The private IP assigned to an instance is exposed via the environment variable, WEBSITE_PRIVATE_IP.
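As a small sketch of consuming that variable inside an app (the fallback message is an assumption for environments where integration isn't enabled):

```python
import os

# WEBSITE_PRIVATE_IP is only populated on instances where virtual
# network integration is enabled; handle its absence gracefully.
private_ip = os.environ.get("WEBSITE_PRIVATE_IP")
if private_ip:
    print(f"Outbound calls into the virtual network use {private_ip}")
else:
    print("Virtual network integration is not enabled on this instance")
```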
-The feature supports two virtual interface per worker. Two virtual interfaces per worker means two regional virtual network integrations per App Service plan. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet. If you need an app to connect to additional virtual networks or additional subnets in the same virtual network, you need to create another App Service plan. The virtual interfaces used isn't a resource that customers have direct access to.
+When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network. Outbound traffic to the internet will be routed directly from the app.
-### Subnet requirements
+The feature supports two virtual interfaces per worker. Two virtual interfaces per worker mean two virtual network integrations per App Service plan. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet. If you need an app to connect to more virtual networks or more subnets in the same virtual network, you need to create another App Service plan. The virtual interfaces used aren't resources customers have direct access to.
-Virtual network integration depends on a dedicated subnet. When you create a subnet, the Azure subnet loses five IPs from the start. One address is used from the integration subnet for each plan instance. If you scale your app to four instances, then four addresses are used.
+## Subnet requirements
-When you scale up or down in size, the required address space is doubled for a short period of time. This change affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the effect this has on horizontal scale.
+Virtual network integration depends on a dedicated subnet. When you create a subnet, the Azure subnet consumes five IPs from the start. One address is used from the integration subnet for each plan instance. If you scale your app to four instances, then four addresses are used.
+
+When you scale up or down in size, the required address space is doubled for a short period of time. The scale operation affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the effect the available addresses have on horizontal scale.
| CIDR block size | Maximum available addresses | Maximum horizontal scale (instances)<sup>*</sup> |
|--|--|--|
When you scale up or down in size, the required address space is doubled for a s
<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
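The sizing rule above can be approximated with a short sketch. The two assumptions are taken from the text: Azure reserves five addresses in every subnet, and a scale operation temporarily doubles the required address space, which halves the effective instance count:

```python
def subnet_capacity(cidr_prefix: int) -> tuple[int, int]:
    """Return (available addresses, max horizontal scale) for an
    integration subnet, assuming five Azure-reserved addresses and
    a temporary doubling of address space during scale operations."""
    available = 2 ** (32 - cidr_prefix) - 5
    max_instances = available // 2
    return available, max_instances

# A /26 has 64 total addresses, leaving 59 available.
print(subnet_capacity(26))
```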
-Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal you can use a /28 subnet.
+Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal, you can use a /28 subnet.
>[!NOTE]
> Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If you have, for example, 10 Windows Container App Service plan instances with 4 apps running, you'll need 50 IP addresses, plus additional addresses to support horizontal (up/down) scale.

When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration.
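The arithmetic in the note works out as instances × (1 + apps), since each plan instance uses one address plus one per app; a quick sketch:

```python
def windows_container_addresses(instances: int, apps: int) -> int:
    # One address per plan instance, plus one per app on each instance.
    return instances * (1 + apps)

# 10 instances running 4 apps each need 50 addresses, before any
# headroom for horizontal scale.
print(windows_container_addresses(10, 4))
```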
-### Permissions
+## Permissions
-You must have at least the following Role-based access control permissions on the subnet or at a higher level to configure regional virtual network integration through Azure portal, CLI or when setting the `virtualNetworkSubnetId` site property directly:
+You must have at least the following Role-based access control permissions on the subnet or at a higher level to configure virtual network integration through the Azure portal, CLI, or when setting the `virtualNetworkSubnetId` site property directly:
| Action | Description |
|-|-|
You must have at least the following Role-based access control permissions on th
If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it will also automatically be registered when creating the first web app in a subscription.
-### Routes
+## Routes
-You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out.
+You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and [app settings with Key Vault reference](./app-service-key-vault-references.md). [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out.
Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration.
-#### Application routing
+### Application routing
Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
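In a Resource Manager template, the **Route All** behavior maps to the site's `vnetRouteAllEnabled` site configuration setting; a hedged fragment (the parameter names are placeholders, and the property layout is assumed from the `Microsoft.Web/sites` schema):

```json
{
  "type": "Microsoft.Web/sites",
  "apiVersion": "2022-03-01",
  "name": "[parameters('siteName')]",
  "location": "[resourceGroup().location]",
  "properties": {
    "virtualNetworkSubnetId": "[parameters('subnetResourceId')]",
    "siteConfig": {
      "vnetRouteAllEnabled": true
    }
  }
}
```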
Application routing applies to traffic that is sent from your app after it has b
Learn [how to configure application routing](./configure-vnet-integration-routing.md#configure-application-routing).

> [!NOTE]
-> Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before 1. August 2022 you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. An example could be to add a temporary subnet, associate/dissociate an NSG temporarily or configure a service endpoint temporarily. For more information and troubleshooting see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
+> Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. Support is determined by a setting on the subscription where the virtual network is deployed. For virtual networks and subnets created before 1 August 2022, you need to initiate a temporary configuration change to the virtual network or subnet so the setting is synchronized from the subscription. For example, add a temporary subnet, temporarily associate and dissociate an NSG, or temporarily configure a service endpoint. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
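As an illustrative sketch of the **Route All** setting described above (the resource names `my-rg` and `my-app` are placeholders), application routing can be toggled with the Azure CLI:

```azurecli
# Route all outbound app traffic into the integrated virtual network.
az webapp config set --resource-group my-rg --name my-app \
  --vnet-route-all-enabled true

# Route only private (RFC1918) traffic into the virtual network.
az webapp config set --resource-group my-rg --name my-app \
  --vnet-route-all-enabled false
```

Changing the setting takes effect for new outbound connections from the app; existing connections aren't rerouted.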
-#### Configuration routing
+### Configuration routing
When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic goes directly over the public route, but for the individual components mentioned here, you can actively configure it to be routed through the virtual network integration.
-##### Content share
+#### Content share
Bringing your own storage for content is often used in Functions, where [content share](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
To route content share traffic through the virtual network integration, you must ensure that the routing setting is configured.
In addition to configuring the routing, you must also ensure that any firewall or network security group configured on traffic from the subnet allows traffic to ports 443 and 445.
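One way to configure content share routing for a Functions app is the `WEBSITE_CONTENTOVERVNET` app setting. This is a sketch with placeholder names (`my-rg`, `my-func-app`), not the only supported mechanism:

```azurecli
# Route the Functions content share traffic through the virtual network integration.
az webapp config appsettings set --resource-group my-rg --name my-func-app \
  --settings WEBSITE_CONTENTOVERVNET=1
```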
-##### Container image pull
+#### Container image pull
When using custom containers, you can pull the container over the virtual network integration. To route the container pull traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure image pull routing](./configure-vnet-integration-routing.md#container-image-pull).
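As a sketch (placeholder names `my-rg` and `my-app`), image pull routing can be configured through the `WEBSITE_PULL_IMAGE_OVER_VNET` app setting:

```azurecli
# Pull the custom container image over the virtual network integration,
# for example from a registry that blocks public access.
az webapp config appsettings set --resource-group my-rg --name my-app \
  --settings WEBSITE_PULL_IMAGE_OVER_VNET=true
```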
-##### App settings using Key Vault references
+#### App settings using Key Vault references
App settings using Key Vault references will attempt to get secrets over the public route. If the Key Vault is blocking public traffic and the app is using virtual network integration, an attempt will then be made to get the secrets through the virtual network integration.
> * Configuring SSL/TLS certificates from private Key Vaults is currently not supported.
> * App Service Logs to private storage accounts is currently not supported. We recommend using Diagnostics Logging and allowing Trusted Services for the storage account.
-#### Network routing
+### Network routing
You can use route tables to route outbound traffic from your app without restriction. Common destinations can include firewall devices or gateways. You can also use a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) to block outbound traffic to resources in your virtual network or the internet. An NSG that's applied to your integration subnet is in effect regardless of any route tables applied to your integration subnet.
-Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes won't affect replies to inbound app requests and inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the Access Restrictions feature.
+Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes don't apply to replies to inbound app requests, and inbound rules in an NSG don't apply to your app. Virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the [access restrictions](./overview-access-restrictions.md) feature or [private endpoints](./networking/private-endpoint.md).
-When configuring network security groups or route tables that affect outbound traffic, you must make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, this could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`.
+When configuring network security groups or route tables that apply to outbound traffic, make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides the APIs and services the app is calling, these could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoints such as Azure Active Directory. If you're using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`.
When you want to route outbound traffic on-premises, you can use a route table to send outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back. Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. Similar to user-defined routes, BGP routes affect traffic according to your routing scope setting.
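As a sketch of the route table approach (all names and the firewall IP `10.0.1.4` are placeholders), outbound traffic from the integration subnet can be forced through a network virtual appliance:

```azurecli
# Create a route table with a default route pointing at a firewall appliance.
az network route-table create --resource-group my-rg --name app-routes
az network route-table route create --resource-group my-rg \
  --route-table-name app-routes --name to-firewall \
  --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance \
  --next-hop-ip-address 10.0.1.4

# Associate the route table with the integration subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name integration-subnet --route-table app-routes
```

Remember that the route table only affects traffic that is actually sent through the virtual network integration per your routing settings.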
-### Service endpoints
+## Service endpoints
-Regional virtual network integration enables you to reach Azure services that are secured with service endpoints. To access a service endpoint-secured service, follow these steps:
+Virtual network integration enables you to reach Azure services that are secured with service endpoints. To access a service endpoint-secured service, follow these steps:
-1. Configure regional virtual network integration with your web app to connect to a specific subnet for integration.
+1. Configure virtual network integration with your web app to connect to a specific subnet for integration.
1. Go to the destination service and configure service endpoints against the integration subnet.
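The second step above can be sketched with the Azure CLI (placeholder names; `Microsoft.KeyVault` stands in for whichever destination service you're securing):

```azurecli
# Enable a service endpoint for the destination service on the integration subnet.
az network vnet subnet update --resource-group my-rg --vnet-name my-vnet \
  --name integration-subnet --service-endpoints Microsoft.KeyVault
```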
-### Private endpoints
+## Private endpoints
If you want to make calls to [private endpoints](./networking/private-endpoint.md), make sure that your DNS lookups resolve to the private endpoint. You can enforce this behavior in one of the following ways:
* Manage the private endpoint in the DNS server used by your app. To manage the configuration, you must know the private endpoint IP address. Then point the endpoint you're trying to reach to that address by using an A record.
* Configure your own DNS server to forward to Azure DNS private zones.
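The A record option can be sketched for a Key Vault private endpoint as follows. The zone name `privatelink.vaultcore.azure.net` is the standard zone for Key Vault; the resource names and the IP `10.0.2.5` are placeholder assumptions:

```azurecli
# Create the private DNS zone and link it to the virtual network.
az network private-dns zone create --resource-group my-rg \
  --name privatelink.vaultcore.azure.net
az network private-dns link vnet create --resource-group my-rg \
  --zone-name privatelink.vaultcore.azure.net --name vault-dns-link \
  --virtual-network my-vnet --registration-enabled false

# Point the vault name at the private endpoint IP with an A record.
az network private-dns record-set a add-record --resource-group my-rg \
  --zone-name privatelink.vaultcore.azure.net --record-set-name my-vault \
  --ipv4-address 10.0.2.5
```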
-### Azure DNS private zones
+## Azure DNS private zones
After your app integrates with your virtual network, it uses the same DNS server that your virtual network is configured with. If no custom DNS is specified, it uses Azure default DNS and any private zones linked to the virtual network.
-### Limitations
+## Limitations
-There are some limitations with using regional virtual network integration:
+There are some limitations with using virtual network integration:
* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Basic and Standard tier, but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Basic or Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created.
* The feature can't be used by Isolated plan apps that are in an App Service Environment.
* The integration subnet can't have [service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) enabled.
* The integration subnet can be used by only one App Service plan.
* You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network.
-* You can have two regional virtual network integration per App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration.
-* You can't change the subscription of an app or a plan while there's an app that's using regional virtual network integration.
-
-## Gateway-required virtual network integration
-
-Gateway-required virtual network integration supports connecting to a virtual network in another region or to a classic virtual network. Gateway-required virtual network integration:
-
-* Enables an app to connect to only one virtual network at a time.
-* Enables up to five virtual networks to be integrated within an App Service plan.
-* Allows the same virtual network to be used by multiple apps in an App Service plan without affecting the total number that can be used by an App Service plan. If you have six apps using the same virtual network in the same App Service plan that counts as one virtual network being used.
-* SLA on the gateway can affect the overall [SLA](https://azure.microsoft.com/support/legal/sla/).
-* Enables your apps to use the DNS that the virtual network is configured with.
-* Requires a virtual network route-based gateway configured with an SSTP point-to-site VPN before it can be connected to an app.
-
-You can't use gateway-required virtual network integration:
-
-* With a virtual network connected with ExpressRoute.
-* From a Linux app.
-* From a [Windows container](./quickstart-custom-container.md).
-* To access service endpoint-secured resources.
-* To resolve App Settings referencing a network protected Key Vault.
-* With a coexistence gateway that supports both ExpressRoute and point-to-site or site-to-site VPNs.
-
-### Set up a gateway in your Azure virtual network
-
-To create a gateway:
-
-1. [Create the VPN gateway and subnet](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#creategw). Select a route-based VPN type.
-
-1. [Set the point-to-site addresses](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#addresspool). If the gateway isn't in the basic SKU, then IKEV2 must be disabled in the point-to-site configuration and SSTP must be selected. The point-to-site address space must be in the RFC 1918 address blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
-
-If you create the gateway for use with gateway-required virtual network integration, you don't need to upload a certificate. Creating the gateway can take 30 minutes. You won't be able to integrate your app with your virtual network until the gateway is created.
-
-### How gateway-required virtual network integration works
+* You can't have more than two virtual network integrations per App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration. Currently, you can only configure the first integration through the Azure portal. The second integration must be created by using Azure Resource Manager templates or Azure CLI commands.
+* You can't change the subscription of an app or a plan while there's an app that's using virtual network integration.
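The Azure CLI route for adding an integration (for example, the second one on a plan, which can't be created in the portal) can be sketched as follows, with placeholder resource names:

```azurecli
# Integrate an app with a dedicated integration subnet in an existing virtual network.
az webapp vnet-integration add --resource-group my-rg --name my-second-app \
  --vnet my-vnet --subnet second-integration-subnet
```

Each integration needs its own delegated subnet; two apps on the same plan can share one integration, but two integrations can't share a subnet.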
-Gateway-required virtual network integration is built on top of point-to-site VPN technology. Point-to-site VPNs limit network access to the virtual machine that hosts the app. Apps are restricted to send traffic out to the internet only through hybrid connections or through virtual network integration. When your app is configured with the portal to use gateway-required virtual network integration, a complex negotiation is managed on your behalf to create and assign certificates on the gateway and the application side. The result is that the workers used to host your apps can directly connect to the virtual network gateway in the selected virtual network.
+## Access on-premises resources
+No extra configuration is required for the virtual network integration feature to reach through your virtual network to on-premises resources. You simply need to connect your virtual network to on-premises resources by using ExpressRoute or a site-to-site VPN.
-### Access on-premises resources
+## Peering
-Apps can access on-premises resources by integrating with virtual networks that have site-to-site connections. If you use gateway-required virtual network integration, update your on-premises VPN gateway routes with your point-to-site address blocks. When the site-to-site VPN is first set up, the scripts used to configure it should set up routes properly. If you add the point-to-site addresses after you create your site-to-site VPN, you need to update the routes manually. Details on how to do that vary per gateway and aren't described here.
-
-BGP routes from on-premises won't be propagated automatically into App Service. You need to manually propagate them on the point-to-site configuration using the steps in this document [Advertise custom routes for P2S VPN clients](../vpn-gateway/vpn-gateway-p2s-advertise-custom-routes.md).
-
-No extra configuration is required for the regional virtual network integration feature to reach through your virtual network to on-premises resources. You simply need to connect your virtual network to on-premises resources by using ExpressRoute or a site-to-site VPN.
-
-> [!NOTE]
-> The gateway-required virtual network integration feature doesn't integrate an app with a virtual network that has an ExpressRoute gateway. Even if the ExpressRoute gateway is configured in [coexistence mode](../expressroute/expressroute-howto-coexist-resource-manager.md), the virtual network integration doesn't work. If you need to access resources through an ExpressRoute connection, use the regional virtual network integration feature or an [App Service Environment](./environment/intro.md), which runs in your virtual network.
-
-### Peering
-
-If you use peering with regional virtual network integration, you don't need to do any more configuration.
-
-If you use gateway-required virtual network integration with peering, you need to configure a few more items. To configure peering to work with your app:
-
-1. Add a peering connection on the virtual network your app connects to. When you add the peering connection, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow gateway transit**.
-1. Add a peering connection on the virtual network that's being peered to the virtual network you're connected to. When you add the peering connection on the destination virtual network, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow remote gateways**.
-1. Go to **App Service plan** > **Networking** > **VNet integration** in the portal. Select the virtual network your app connects to. Under the routing section, add the address range of the virtual network that's peered with the virtual network your app is connected to.
+If you use peering with virtual network integration, you don't need to do any more configuration.
## Manage virtual network integration

Connecting and disconnecting with a virtual network is at an app level. Operations that can affect virtual network integration across multiple apps are at the App Service plan level. From the app > **Networking** > **VNet integration** portal, you can get details on your virtual network. You can see similar information at the App Service plan level in the **App Service plan** > **Networking** > **VNet integration** portal.
-The only operation you can take in the app view of your virtual network integration instance is to disconnect your app from the virtual network it's currently connected to. To disconnect your app from a virtual network, select **Disconnect**. Your app is restarted when you disconnect from a virtual network. Disconnecting doesn't change your virtual network. The subnet or gateway isn't removed. If you then want to delete your virtual network, first disconnect your app from the virtual network and delete the resources in it, such as gateways.
-
-The App Service plan virtual network integration UI shows you all the virtual network integrations used by the apps in your App Service plan. To see details on each virtual network, select the virtual network you're interested in. There are two actions you can perform here for gateway-required virtual network integration:
-
-* **Sync network**: The sync network operation is used only for the gateway-required virtual network integration feature. Performing a sync network operation ensures that your certificates and network information are in sync. If you add or change the DNS of your virtual network, perform a sync network operation. This operation restarts any apps that use this virtual network. This operation won't work if you're using an app and a virtual network belonging to different subscriptions.
-* **Add routes**: Adding routes drives outbound traffic into your virtual network.
+In the app view of your virtual network integration instance, you can disconnect your app from the virtual network and you can configure application routing. To disconnect your app from a virtual network, select **Disconnect**. Your app is restarted when you disconnect from a virtual network. Disconnecting doesn't change your virtual network. The subnet isn't removed. If