Updates from: 01/21/2023 02:28:24
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Add Api Connector Token Enrichment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector-token-enrichment.md
Title: Token enrichment - Azure Active Directory B2C
description: Enrich tokens with claims from external identity data sources using APIs or outbound webhooks.
Previously updated : 11/09/2021
Last updated : 01/17/2023
zone_pivot_groups: b2c-policy-type

# Enrich tokens with claims from external sources using API connectors

[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]

Azure Active Directory B2C (Azure AD B2C) enables identity developers to integrate an interaction with a RESTful API into their user flow using [API connectors](api-connectors-overview.md). It enables developers to dynamically retrieve data from external identity sources. At the end of this walkthrough, you'll be able to create an Azure AD B2C user flow that interacts with APIs to enrich tokens with information from external sources.

::: zone pivot="b2c-user-flow"

You can use API connectors applied to the **Before sending the token (preview)** step to enrich tokens for your applications with information from external sources. When a user signs in or signs up, Azure AD B2C will call the API endpoint configured in the API connector, which can query information about a user in downstream services such as cloud services, custom user stores, custom permission systems, legacy identity systems, and more.

[!INCLUDE [b2c-public-preview-feature](../../includes/active-directory-b2c-public-preview.md)]
You can create an API endpoint using one of our [samples](api-connector-samples.md#api-connector-rest-api-samples).
## Prerequisites

[!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
+- An API endpoint. You can create an API endpoint using one of our [samples](api-connector-samples.md#api-connector-rest-api-samples).
## Create an API connector

To use an [API connector](api-connectors-overview.md), you first create the API connector and then enable it in a user flow.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Under **Azure services**, select **Azure AD B2C**.
-4. Select **API connectors**, and then select **New API connector**.
+1. Under **Azure services**, select **Azure AD B2C**.
+1. Select **API connectors**, and then select **New API connector**.
- ![Screenshot of the basic API connector configuration](media/add-api-connector-token-enrichment/api-connector-new.png)
+ ![Screenshot showing the API connectors page in the Azure portal with the New API Connector button highlighted.](media/add-api-connector-token-enrichment/api-connector-new.png)
-5. Provide a display name for the call. For example, **Enrich token from external source**.
-6. Provide the **Endpoint URL** for the API call.
-7. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](secure-rest-api.md).
+1. Provide a display name for the call. For example, **Enrich token from external source**.
+1. Provide the **Endpoint URL** for the API call.
+1. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](secure-rest-api.md).
- ![Screenshot of authentication configuration for an API connector](media/add-api-connector-token-enrichment/api-connector-config.png)
+ ![Screenshot showing sample authentication configuration for an API connector.](media/add-api-connector-token-enrichment/api-connector-config.png)
-8. Select **Save**.
+1. Select **Save**.
## Enable the API connector in a user flow

Follow these steps to add an API connector to a sign-up user flow.

1. Sign in to the [Azure portal](https://portal.azure.com/).
-2. Under **Azure services**, select **Azure AD B2C**.
-4. Select **User flows**, and then select the user flow you want to add the API connector to.
-5. Select **API connectors**, and then select the API endpoint you want to invoke at the **Before sending the token (preview)** step in the user flow:
+1. Under **Azure services**, select **Azure AD B2C**.
+1. Select **User flows**, and then select the user flow you want to add the API connector to.
+1. Select **API connectors**, and then select the API endpoint you want to invoke at the **Before sending the token (preview)** step in the user flow:
- ![Screenshot of selecting an API connector for a user flow step](media/add-api-connector-token-enrichment/api-connectors-user-flow-select.png)
+ ![Screenshot of selecting an API connector for a user flow step.](media/add-api-connector-token-enrichment/api-connectors-user-flow-select.png)
-6. Select **Save**.
+1. Select **Save**.
This step only exists for **Sign up and sign in (Recommended)**, **Sign up (Recommended)**, and **Sign in (Recommended)** user flows.

## Example request sent to the API at this step

An API connector at this step is invoked when a token is about to be issued during sign-ins and sign-ups.

An API connector materializes as an **HTTP POST** request, sending user attributes ('claims') as key-value pairs in a JSON body. Attributes are serialized similarly to [Microsoft Graph](/graph/api/resources/user#properties) user properties.

```http
POST <API-endpoint>
Content-type: application/json

{
 "email": "johnsmith@fabrikam.onmicrosoft.com",
 "identities": [
     ...
 ],
 "ui_locales":"en-US"
}
```

The claims that are sent to the API depend on the information defined for the user.

Only user properties and custom attributes listed in the **Azure AD B2C** > **User attributes** experience are available to be sent in the request.

Custom attributes exist in the **extension_\<extensions-app-id>_CustomAttribute** format in the directory. Your API should expect to receive claims in this same serialized format. For more information on custom attributes, see [Define custom attributes in Azure AD B2C](user-flow-custom-attributes.md).

Additionally, these claims are typically sent in all requests for this step:

- **UI Locales ('ui_locales')** - An end-user's locale(s) as configured on their device. This can be used by your API to return internationalized responses.
- **Step ('step')** - The step or point on the user flow that the API connector was invoked for. Value for this step is `
> [!IMPORTANT]
> If a claim does not have a value at the time the API endpoint is called, the claim will not be sent to the API. Your API should be designed to explicitly check and handle the case in which a claim is not in the request.

## Expected response types from the web API at this step

When the web API receives an HTTP request from Azure AD during a user flow, it can return a "continuation response."

### Continuation response

A continuation response indicates that the user flow should continue to the next step: issuing the token.

In a continuation response, the API can return additional claims. A claim returned by the API that you wish to return in the token must be a built-in claim or [defined as a custom attribute](user-flow-custom-attributes.md) and must be selected in the **Application claims** configuration of the user flow.

The claim value in the token will be that returned by the API, not the value in the directory. Some claim values cannot be overwritten by the API response. Claims that can be returned by the API correspond to the set found under **User attributes** with the exception of `email`.

> [!NOTE]
> The API is only invoked during an initial authentication. When using refresh tokens to silently get new access or ID tokens, the token will include the values evaluated during the initial authentication.

## Example response

### Example of a continuation response

```http
HTTP/1.1 200 OK
Content-type: application/json

{
 "version": "1.0.0",
 "action": "Continue",
 ...
 "extension_<extensions-app-id>_CustomAttribute": "value" // return claim
}
```

| Parameter | Type | Required | Description |
| -- | -- | -- | -- |
| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `Continue`. |
| \<builtInUserAttribute> | \<attribute-type> | No | They can be returned in the token if selected as an **Application claim**. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`, it is *optional*. They can returned in the token if selected as an **Application claim**. |
-
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`, it is *optional*. They can be returned in the token if selected as an **Application claim**. |
::: zone-end

::: zone pivot="b2c-custom-policy"

In this scenario, we enrich the user's token data by integrating with a corporate line-of-business workflow. During sign-up or sign-in with local or federated account, Azure AD B2C invokes a REST API to get the user's extended profile data from a remote data source. In this sample, Azure AD B2C sends the user's unique identifier, the objectId. The REST API then returns the user's account balance (a random number). Use this sample as a starting point to integrate with your own CRM system, marketing database, or any line-of-business workflow.

You can also design the interaction as a validation technical profile. This is suitable when the REST API will be validating data on screen and returning claims. For more information, see [Walkthrough: Add an API connector to a sign-up user flow](add-api-connector.md).

## Prerequisites

- Complete the steps in [Get started with custom policies](tutorial-create-user-flows.md?pivots=b2c-custom-policy). You should have a working custom policy for sign-up and sign-in with local accounts.
- Learn how to [Integrate REST API claims exchanges in your Azure AD B2C custom policy](api-connectors-overview.md).

## Prepare a REST API endpoint

For this walkthrough, you should have a REST API that validates whether a user's Azure AD B2C objectId is registered in your back-end system. If registered, the REST API returns the user account balance. Otherwise, the REST API registers the new account in the directory and returns the starting balance `50.00`.

The following JSON code illustrates the data Azure AD B2C will send to your REST API endpoint.

```json
{
    "objectId": "User objectId",
    "lang": "Current UI language"
}
```

Once your REST API validates the data, it must return an HTTP 200 (Ok), with the following JSON data:

```json
{
    "balance": "760.50"
}
```

The setup of the REST API endpoint is outside the scope of this article. We have created an [Azure Functions](../azure-functions/functions-reference.md) sample. You can access the complete Azure function code at [GitHub](https://github.com/azure-ad-b2c/rest-api/tree/master/source-code/azure-function).
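Purely as an illustration of this request and response contract, here's a minimal sketch of such an endpoint written with Flask. The framework choice, route name, and in-memory store are assumptions made for the sketch; the sample linked above implements the same contract as an Azure Function.

```python
import random

from flask import Flask, jsonify, request

app = Flask(__name__)

# In-memory stand-in for a real user store, keyed by the user's objectId.
accounts: dict[str, str] = {}

@app.post("/api/get-profile")  # hypothetical route
def get_profile():
    data = request.get_json()
    object_id = data["objectId"]  # Azure AD B2C sends the user's unique identifier

    if object_id not in accounts:
        # Unknown user: register the account and return the starting balance.
        accounts[object_id] = "50.00"
    else:
        # Known user: return the account balance (a random number in this sample).
        accounts[object_id] = f"{random.uniform(0, 1000):.2f}"

    return jsonify({"balance": accounts[object_id]})
```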
## Define claims

A claim provides temporary storage of data during an Azure AD B2C policy execution. You can declare claims within the [claims schema](claimsschema.md) section.

1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
1. Search for the [BuildingBlocks](buildingblocks.md) element. If the element doesn't exist, add it.
1. Locate the [ClaimsSchema](claimsschema.md) element. If the element doesn't exist, add it.
1. Add the following claims to the **ClaimsSchema** element.

    ```xml
    <ClaimType Id="balance">
      <DisplayName>Your Balance</DisplayName>
      <DataType>string</DataType>
    </ClaimType>
    ```

## Add the RESTful API technical profile

A [Restful technical profile](restful-technical-profile.md) provides support for interfacing with your own RESTful service. Azure AD B2C sends data to the RESTful service in an `InputClaims` collection and receives data back in an `OutputClaims` collection. Find the **ClaimsProviders** element in your <em>**`TrustFrameworkExtensions.xml`**</em> file and add a new claims provider as follows:

```xml
<ClaimsProvider>
  <DisplayName>REST APIs</DisplayName>
  <TechnicalProfiles>
    <!-- ... the REST-GetProfile technical profile goes here ... -->
  </TechnicalProfiles>
</ClaimsProvider>
```
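The body of the `REST-GetProfile` technical profile is elided above. The following sketch is representative, built from the claim names in this walkthrough and the metadata described in the next section; the `ServiceUrl` is a placeholder, so verify the element details against the [RESTful technical profile](restful-technical-profile.md) reference:

```xml
<TechnicalProfile Id="REST-GetProfile">
  <DisplayName>Get user extended profile</DisplayName>
  <Protocol Name="Proprietary" Handler="Web.TPEngine.Providers.RestfulProvider, Web.TPEngine, Version=1.0.0.0, Culture=neutral, PublicKeyToken=null" />
  <Metadata>
    <!-- Placeholder: point to your deployed REST API endpoint. -->
    <Item Key="ServiceUrl">https://your-api.example.com/api/get-profile</Item>
    <Item Key="SendClaimsIn">Body</Item>
    <!-- Set AuthenticationType to Basic or ClientCertificate in production environments. -->
    <Item Key="AuthenticationType">None</Item>
    <!-- REMOVE the following line in production environments. -->
    <Item Key="AllowInsecureAuthInProduction">true</Item>
  </Metadata>
  <InputClaims>
    <InputClaim ClaimTypeReferenceId="objectId" />
    <InputClaim ClaimTypeReferenceId="userLanguage" PartnerClaimType="lang" DefaultValue="{Culture:LCID}" AlwaysUseDefaultValue="true" />
  </InputClaims>
  <OutputClaims>
    <OutputClaim ClaimTypeReferenceId="balance" />
  </OutputClaims>
</TechnicalProfile>
```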
In this example, the `userLanguage` will be sent to the REST service as `lang` within the JSON payload. The value of the `userLanguage` claim contains the current user language ID. For more information, see [claim resolver](claim-resolver-overview.md).

### Configure the RESTful API technical profile

After you deploy your REST API, set the metadata of the `REST-GetProfile` technical profile to reflect your own REST API, including:

- **ServiceUrl**. Set the URL of the REST API endpoint.
- **SendClaimsIn**. Specify how the input claims are sent to the RESTful claims provider.
- **AuthenticationType**. Set the type of authentication being performed by the RESTful claims provider such as `Basic` or `ClientCertificate`.
- **AllowInsecureAuthInProduction**. In a production environment, make sure to set this metadata to `false`.

See the [RESTful technical profile metadata](restful-technical-profile.md#metadata) for more configurations.

The comments above `AuthenticationType` and `AllowInsecureAuthInProduction` specify changes you should make when you move to a production environment. To learn how to secure your RESTful APIs for production, see [Secure your RESTful API](secure-rest-api.md).

## Add an orchestration step

[User journeys](userjourneys.md) specify explicit paths through which a policy allows a relying party application to obtain the desired claims for a user. A user journey is represented as an orchestration sequence that must be followed through for a successful transaction. You can add or subtract orchestration steps. In this case, you will add a new orchestration step that is used to augment the information provided to the application after the user sign-up or sign-in via the REST API call.

1. Open the base file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkBase.xml`**</em>.
1. Search for the `<UserJourneys>` element. Copy the entire element, and then delete it.
1. Open the extensions file of your policy. For example, <em>`SocialAndLocalAccounts/`**`TrustFrameworkExtensions.xml`**</em>.
1. Paste the `<UserJourneys>` into the extensions file, after the close of the `<ClaimsProviders>` element.
1. Locate the `<UserJourney Id="SignUpOrSignIn">`, and add the following orchestration step before the last one.

    ```xml
    <OrchestrationStep Order="7" Type="ClaimsExchange">
      <ClaimsExchanges>
        <ClaimsExchange Id="RESTGetProfile" TechnicalProfileReferenceId="REST-GetProfile" />
      </ClaimsExchanges>
    </OrchestrationStep>
    ```
1. Refactor the last orchestration step by changing the `Order` to `8`. Your final two orchestration steps should look like the following:

    ```xml
    <OrchestrationStep Order="7" Type="ClaimsExchange">
      <ClaimsExchanges>
        <ClaimsExchange Id="RESTGetProfile" TechnicalProfileReferenceId="REST-GetProfile" />
      </ClaimsExchanges>
    </OrchestrationStep>

    <OrchestrationStep Order="8" Type="SendClaims" CpimIssuerTechnicalProfileReferenceId="JwtIssuer" />
    ```

1. Repeat the last two steps for the **ProfileEdit** and **PasswordReset** user journeys.

## Include a claim in the token

To return the `balance` claim back to the relying party application, add an output claim to the <em>`SocialAndLocalAccounts/`**`SignUpOrSignIn.xml`**</em> file. Adding an output claim will issue the claim into the token after a successful user journey, and will be sent to the application. Modify the technical profile element within the relying party section to add `balance` as an output claim.

```xml
<RelyingParty>
  <!-- ... -->
  <TechnicalProfile Id="PolicyProfile">
    <!-- ... -->
    <OutputClaims>
      <!-- ... -->
      <OutputClaim ClaimTypeReferenceId="balance" />
    </OutputClaims>
  </TechnicalProfile>
</RelyingParty>
```
Repeat this step for the **ProfileEdit.xml** and **PasswordReset.xml** user journeys.

Save the files you changed: *TrustFrameworkBase.xml*, *TrustFrameworkExtensions.xml*, *SignUpOrSignin.xml*, *ProfileEdit.xml*, and *PasswordReset.xml*.

## Test the custom policy

1. Sign in to the [Azure portal](https://portal.azure.com).
1. Make sure you're using the directory that contains your Azure AD tenant by selecting the **Directories + subscriptions** icon in the portal toolbar.
1. On the **Portal settings | Directories + subscriptions** page, find your Azure AD directory in the **Directory name** list, and then select **Switch**.
1. Select the sign-up or sign-in policy that you uploaded, and click the **Run now** button.
1. You should be able to sign up using an email address or a Facebook account.
1. The token sent back to your application includes the `balance` claim.

    ```json
    {
      "typ": "JWT",
      ...
      "balance": "..."
    }
    ```

::: zone-end

::: zone pivot="b2c-user-flow"

## Best practices and how to troubleshoot
Ensure that:
* If you're using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended you use at minimum the [Premium plan](../azure-functions/functions-scale.md) in production.
* Ensure high availability of your API.
* Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
-
+ [!INCLUDE [active-directory-b2c-https-cipher-tls-requirements](../../includes/active-directory-b2c-https-cipher-tls-requirements.md)]
+### Using serverless cloud functions
+
+Serverless functions, like [HTTP triggers in Azure Functions](../azure-functions/functions-bindings-http-webhook-trigger.md), provide a way to create API endpoints to use with the API connector. The serverless cloud function can also call other web APIs, data stores, and cloud services for complex scenarios, as sketched below.
+
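The sketch below shows what such an endpoint can look like as an HTTP-triggered Azure Function using the Python v2 programming model. It returns the continuation response described earlier in this article; the route name, the stubbed lookup, and the custom attribute are illustrative placeholders, not part of an official sample.

```python
import json

import azure.functions as func

app = func.FunctionApp()

# Hypothetical route; its full URL becomes the API connector's Endpoint URL.
@app.route(route="token-enrichment", auth_level=func.AuthLevel.FUNCTION)
def token_enrichment(req: func.HttpRequest) -> func.HttpResponse:
    # Azure AD B2C posts the user's claims as JSON key-value pairs.
    try:
        claims = req.get_json()
    except ValueError:
        return func.HttpResponse(status_code=400)

    object_id = claims.get("objectId")  # read whichever claims your lookup needs

    # Query a downstream store for enrichment data (stubbed here).
    loyalty_tier = "gold" if object_id else "standard"

    # Continuation response: the user flow continues and the token is issued.
    # Replace <extensions-app-id> with your b2c-extensions-app ID (dashes removed);
    # the attribute must be selected as an Application claim to land in the token.
    body = {
        "version": "1.0.0",
        "action": "Continue",
        "extension_<extensions-app-id>_CustomAttribute": loyalty_tier,
    }
    return func.HttpResponse(json.dumps(body), mimetype="application/json")
```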
+### Using logging
In general, it's helpful to use the logging tools enabled by your web API service, like [Application insights](../azure-functions/functions-monitoring.md), to monitor your API for unexpected error codes, exceptions, and poor performance.

* Monitor for HTTP status codes that aren't HTTP 200 or 400.
* A 401 or 403 HTTP status code typically indicates there's an issue with your authentication. Double-check your API's authentication layer and the corresponding configuration in the API connector.
* Use more aggressive levels of logging (for example "trace" or "debug") in development if needed.
* Monitor your API for long response times.

Additionally, Azure AD B2C logs metadata about the API transactions that happen during user authentications via a user flow. To find these:

1. Go to **Azure AD B2C**
-2. Under **Activities**, select **Audit logs**.
-3. Filter the list view: For **Date**, select the time interval you want, and for **Activity**, select **An API was called as part of a user flow**.
-4. Inspect individual logs. Each row represents an API connector attempting to be called during a user flow. If an API call fails and a retry occurs, it's still represented as a single row. The `numberOfAttempts` indicates the number of times your API was called. This value can be `1`or `2`. Other information about the API call is detailed in the logs.
-
- ![Screenshot of an example audit log with API connector transaction](media/add-api-connector-token-enrichment/example-anonymized-audit-log.png)
-
+1. Under **Activities**, select **Audit logs**.
+1. Filter the list view: For **Date**, select the time interval you want, and for **Activity**, select **An API was called as part of a user flow**.
+1. Inspect individual logs. Each row represents an API connector attempting to be called during a user flow. If an API call fails and a retry occurs, it's still represented as a single row. The `numberOfAttempts` indicates the number of times your API was called. This value can be `1` or `2`. Other information about the API call is detailed in the logs.
+ ![Screenshot of an example audit log with API connector transaction.](media/add-api-connector-token-enrichment/example-anonymized-audit-log.png)
::: zone-end

## Next steps

::: zone pivot="b2c-user-flow"

- Get started with our [samples](api-connector-samples.md#api-connector-rest-api-samples).
- [Secure your API Connector](secure-rest-api.md)

::: zone-end

::: zone pivot="b2c-custom-policy"

To learn how to secure your APIs, see the following articles:

- [Walkthrough: Integrate REST API claims exchanges in your Azure AD B2C user journey as an orchestration step](add-api-connector-token-enrichment.md)
- [Secure your RESTful API](secure-rest-api.md)
- [Reference: RESTful technical profile](restful-technical-profile.md)

::: zone-end
active-directory-b2c Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-api-connector.md
Last updated 12/20/2022---++ zone_pivot_groups: b2c-policy-type
active-directory-b2c Add Identity Provider https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-identity-provider.md
Title: Add an identity provider - Azure Active Directory B2C description: Learn how to add an identity provider to your Active Directory B2C tenant. -+ - Previously updated : 04/08/2022+ Last updated : 01/19/2022
You can configure Azure AD B2C to allow users to sign in to your application with credentials from external social or enterprise identity providers (IdPs).
With external identity provider federation, you can offer your consumers the ability to sign in with their existing social or enterprise accounts, without having to create a new account just for your application.
-On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once they select one of the external identity providers, they're taken (redirected) to the selected provider's website to complete the sign in process. After the user successfully signs in, they're returned to Azure AD B2C for authentication of the account in your application.
+On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once they select one of the external identity providers, they're taken (redirected) to the selected provider's website to complete the sign-in process. After the user successfully signs in, they're returned to Azure AD B2C for authentication of the account in your application.
-![Mobile sign-in example with a social account (Facebook)](media/add-identity-provider/external-idp.png)
+![Diagram showing mobile sign-in example with a social account (Facebook).](media/add-identity-provider/external-idp.png)
You can add identity providers that are supported by Azure Active Directory B2C (Azure AD B2C) to your [user flows](user-flow-overview.md) using the Azure portal. You can also add identity providers to your [custom policies](user-flow-overview.md).
active-directory-b2c Add Password Reset Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/add-password-reset-policy.md
Title: Set up a password reset flow
description: Learn how to set up a password reset flow in Azure Active Directory B2C (Azure AD B2C). -+
Last updated 10/25/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/best-practices.md
Define your application and service architecture, inventory current systems, and
| Usability vs. security | Your solution must strike the right balance between application usability and your organization's acceptable level of risk. |
| Move on-premises dependencies to the cloud | To help ensure a resilient solution, consider moving existing application dependencies to the cloud. |
| Migrate existing apps to b2clogin.com | The deprecation of login.microsoftonline.com will go into effect for all Azure AD B2C tenants on 04 December 2020. [Learn more](b2clogin.md). |
+| Use Identity Protection and Conditional Access | Use these capabilities for significantly greater control over risky authentications and access policies. Azure AD B2C Premium P2 is required. [Learn more](conditional-access-identity-protection-overview.md). |
+| Tenant size | Plan with Azure AD B2C tenant size in mind. By default, an Azure AD B2C tenant can accommodate 1.25 million objects (user accounts and applications). You can increase this limit to 5.25 million objects by adding a custom domain to your tenant and verifying it. If you need a bigger tenant size, contact [Support](find-help-open-support-ticket.md). |
-| Use Identity Protection and Conditional Access | Use these capabilities for greater control over risky authentications and access policies. Azure AD B2C Premium P2 is required. [Learn more](conditional-access-identity-protection-overview.md). |

## Implementation
Stay up to date with the state of the service and find support options.
| Best practice | Description |
|--|--|
| [Service updates](https://azure.microsoft.com/updates/?product=active-directory-b2c) | Stay up to date with Azure AD B2C product updates and announcements. |
-| [Microsoft Support](support-options.md) | File a support request for Azure AD B2C technical issues. Billing and subscription management support is provided at no cost. |
+| [Microsoft Support](find-help-open-support-ticket.md) | File a support request for Azure AD B2C technical issues. Billing and subscription management support is provided at no cost. |
| [Azure status](https://azure.status.microsoft/status) | View the current health status of all Azure services. |+
active-directory-b2c Configure User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/configure-user-input.md
Title: Add user attributes and customize user input
description: Learn how to customize user input and add user attributes to the sign-up or sign-in journey in Azure Active Directory B2C. -+
Last updated 12/28/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Custom Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-domain.md
Previously updated : 07/26/2022 Last updated : 11/3/2022
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). Using a custom domain with your application provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign in process rather than redirecting to the Azure AD B2C default domain *&lt;tenant-name&gt;.b2clogin.com*.
+This article describes how to enable custom domains in your redirect URLs for Azure Active Directory B2C (Azure AD B2C). By using a verified custom domain, you gain benefits such as:
+
+- It provides a more seamless user experience. From the user's perspective, they remain in your domain during the sign-in process rather than redirecting to the Azure AD B2C default domain *&lt;tenant-name&gt;.b2clogin.com*.
+
+- You increase the number of objects (user accounts and applications) you can create in your Azure AD B2C tenant from the default 1.25 million to 5.25 million.
![Screenshot demonstrates an Azure AD B2C custom domain user experience.](./media/custom-domain/custom-domain-user-experience.png)
active-directory-b2c Identity Protection Investigate Risk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-protection-investigate-risk.md
Last updated 09/16/2021 --++ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Adfs Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs-saml.md
Title: Add AD FS as a SAML identity provider by using custom policies
description: Set up AD FS 2016 using the SAML protocol and custom policies in Azure Active Directory B2C -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Adfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-adfs.md
Title: Add AD FS as an OpenID Connect identity provider by using custom policies
description: Set up AD FS 2016 using the OpenID Connect protocol and custom policies in Azure Active Directory B2C -+
Last updated 06/08/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Amazon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-amazon.md
Title: Set up sign-up and sign-in with an Amazon account
description: Provide sign-up and sign-in to customers with Amazon accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Apple Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-apple-id.md
Title: Set up sign-up and sign-in with an Apple ID
description: Provide sign-up and sign-in to customers with Apple ID in your applications using Azure Active Directory B2C. -+
Last updated 11/02/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Azure Ad B2c https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-b2c.md
Title: Set up sign-up and sign-in with an Azure AD B2C account from another Azure AD B2C tenant
description: Provide sign-up and sign-in to customers with Azure AD B2C accounts from another tenant in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Azure Ad Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-multi-tenant.md
Title: Set up sign-in for multi-tenant Azure AD by custom policies
description: Add a multi-tenant Azure AD identity provider using custom policies in Azure Active Directory B2C. -+
Last updated 11/17/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Azure Ad Single Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-azure-ad-single-tenant.md
Title: Set up sign-in for an Azure AD organization
description: Set up sign-in for a specific Azure Active Directory organization in Azure Active Directory B2C. -+ Last updated 10/11/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Ebay https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ebay.md
Title: Set up sign-up and sign-in with an eBay account
description: Provide sign-up and sign-in to customers with eBay accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Facebook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-facebook.md
Title: Set up sign-up and sign-in with a Facebook account
description: Provide sign-up and sign-in to customers with Facebook accounts in your applications using Azure Active Directory B2C. -+
Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Openid Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-openid-connect.md
Title: Set up sign-up and sign-in with OpenID Connect
description: Set up sign-up and sign-in with any OpenID Connect identity provider (IdP) in Azure Active Directory B2C. -+ Last updated 12/28/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Saml Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml-options.md
Title: Set sign-in with SAML identity provider options
description: Configure sign-in SAML identity provider (IdP) options in Azure Active Directory B2C. -+
Last updated 01/13/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Generic Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-generic-saml.md
Title: Set up sign-up and sign-in with SAML identity provider
description: Set up sign-up and sign-in with any SAML identity provider (IdP) in Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Github https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-github.md
Title: Set up sign-up and sign-in with a GitHub account
description: Provide sign-up and sign-in to customers with GitHub accounts in your applications using Azure Active Directory B2C. -+
Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Google https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-google.md
Title: Set up sign-up and sign-in with a Google account
description: Provide sign-up and sign-in to customers with Google accounts in your applications using Azure Active Directory B2C. -+
Last updated 03/10/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Id Me https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-id-me.md
Title: Set up sign-up and sign-in with a ID.me account
description: Provide sign-up and sign-in to customers with ID.me accounts in your applications using Azure Active Directory B2C. -+ Last updated 09/16/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Linkedin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-linkedin.md
Title: Set up sign-up and sign-in with a LinkedIn account
description: Provide sign-up and sign-in to customers with LinkedIn accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Local https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-local.md
Title: Set up Azure AD B2C local account identity provider
description: Define the identity types uses can use to sign-up or sign-in (email, username, phone number) in your Azure Active Directory B2C tenant. -+ Last updated 09/02/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Microsoft Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-microsoft-account.md
Title: Set up sign-up and sign-in with a Microsoft Account
description: Provide sign-up and sign-in to customers with Microsoft Accounts in your applications using Azure Active Directory B2C. -+
Last updated 01/13/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Mobile Id https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-mobile-id.md
Title: Set up sign-up and sign-in with Mobile ID
description: Provide sign-up and sign-in to customers with Mobile ID in your applications using Azure Active Directory B2C. -+ Last updated 04/08/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Ping One https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-ping-one.md
Title: Set up sign-up and sign-in with a PingOne account
description: Provide sign-up and sign-in to customers with PingOne accounts in your applications using Azure Active Directory B2C. -+
Last updated 12/2/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Qq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-qq.md
Title: Set up sign-up and sign-in with a QQ account using Azure Active Directory B2C description: Provide sign-up and sign-in to customers with QQ accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Salesforce Saml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce-saml.md
Title: Set up sign-in with a Salesforce SAML provider by using SAML protocol
description: Set up sign-in with a Salesforce SAML provider by using SAML protocol in Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Salesforce https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-salesforce.md
Title: Set up sign-up and sign-in with a Salesforce account
description: Provide sign-up and sign-in to customers with Salesforce accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Swissid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-swissid.md
Title: Set up sign-up and sign-in with a SwissID account
description: Provide sign-up and sign-in to customers with SwissID accounts in your applications using Azure Active Directory B2C. -+ Last updated 12/07/2021-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
Title: Set up sign-up and sign-in with a Twitter account
description: Provide sign-up and sign-in to customers with Twitter accounts in your applications using Azure Active Directory B2C. -+
Last updated 07/20/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Wechat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-wechat.md
Title: Set up sign-up and sign-in with a WeChat account
description: Provide sign-up and sign-in to customers with WeChat accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Provider Weibo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-weibo.md
Title: Set up sign-up and sign-in with a Weibo account
description: Provide sign-up and sign-in to customers with Weibo accounts in your applications using Azure Active Directory B2C. -+
Last updated 09/16/2021 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Identity Verification Proofing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-verification-proofing.md
description: Learn about our partners who integrate with Azure AD B2C to provide identity proofing and verification solutions -+ - Previously updated : 09/13/2022 Last updated : 01/18/2023 - # Identity verification and proofing partners
-With Azure AD B2C partners, customers can enable identity verification and proofing of their end users before allowing account registration or access. Identity verification and proofing can check document, knowledge-based information and liveness.
+With Azure Active Directory B2C (Azure AD B2C) and solutions from software-vendor partners, customers can enable end-user identity verification and proofing for account registration. Identity verification and proofing can check documents, knowledge-based information, and liveness.
+
+## Architecture diagram
+
+The following architecture diagram illustrates the verification and proofing flow.
-A high-level architecture diagram explains the flow.
+ ![Diagram of the identity proofing flow, from registration to access approval.](./media/partner-gallery/third-party-identity-proofing.png)
-![Diagram shows the identity proofing flow](./media/partner-gallery/third-party-identity-proofing.png)
+1. User begins registration with a device.
+2. User enters information.
+3. Digital-risk score is assessed, then third-party identity proofing and identity validation occurs.
+4. Identity is validated or rejected.
+5. User attributes are passed to Azure Active Directory B2C.
+6. If user verification is successful, a user account is created in Azure AD B2C during sign-in.
+7. Based on the verification result, the user receives an access-approved or -denied message.
-Microsoft partners with the following ISV partners.
+## Software vendors and integration documentation
-| ISV partner | Description and integration walkthroughs |
-|:-|:--|
-| ![Screenshot of a deduce logo.](./medi) is an identity verification and proofing provider focused on stopping account takeover and registration fraud. It helps combat identity fraud and creates a trusted user experience. |
-| ![Screenshot of a eid-me logo](./medi) is an identity verification and decentralized digital identity solution for Canadian citizens. It enables organizations to meet Identity Assurance Level (IAL) 2 and Know Your Customer (KYC) requirements. |
-|![Screenshot of an Experian logo.](./medi) is an Identity verification and proofing provider that performs risk assessments based on user attributes to prevent fraud. |
-|![Screenshot of an IDology logo.](./medi) is an Identity verification and proofing provider with ID verification solutions, fraud prevention solutions, compliance solutions, and others.|
-|![Screenshot of a Jumio logo.](./medi) is an ID verification service, which enables real-time automated ID verification, safeguarding customer data. |
-| ![Screenshot of a LexisNexis logo.](./medi) is a profiling and identity validation provider that verifies user identification and provides comprehensive risk assessment based on user's device. |
-| ![Screenshot of a Onfido logo](./medi) is a document ID and facial biometrics verification solution that allows companies to meet *Know Your Customer* and identity requirements in real time. |
+Microsoft partners with independent software vendors (ISVs). Use the following table to locate an ISV and related integration documentation.
-## Additional information
+| ISV logo | ISV link and description| Integration documentation|
+||||
+| ![Screenshot of the Deduce logo.](./medi)|
+| ![Screenshot of the eID-Me logo.](./medi)|
+|![Screenshot of the Experian logo.](./medi)|
+|![Screenshot of the IDology logo.](./medi)|
+|![Screenshot of the Jumio logo.](./medi)|
+| ![Screenshot of the LexisNexis logo.](./medi)|
+| ![Screenshot of the Onfido logo.](./medi)|
-- [Custom policies in Azure AD B2C](./custom-policy-overview.md)
+## Resources
-- [Get started with custom policies in Azure AD B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
+- [Azure AD B2C custom policy overview](custom-policy-overview.md)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](tutorial-create-user-flows.md?pivots=b2c-custom-policy&tabs=applications)
## Next steps
-Select a partner in the tables mentioned to learn how to integrate their solution with Azure AD B2C.
+Select and contact a partner from the previous table to get started on solution integration with Azure AD B2C. The partners have similar contact processes for requesting a product demo.
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
Title: JavaScript and page layout versions
description: Learn how to enable JavaScript and use page layout versions in Azure Active Directory B2C. -+
Last updated 10/26/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Language Customization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/language-customization.md
Title: Language customization in Azure Active Directory B2C description: Learn about customizing the language experience in your user flows in Azure Active Directory B2C. -+
Last updated 12/28/2022 -+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Microsoft Graph Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/microsoft-graph-operations.md
Previously updated : 03/03/2022 Last updated : 11/3/2022
Watch this video to learn about Azure AD B2C user migration using Microsoft Graph API.
## Prerequisites
-To use MS Graph API, and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so. Follow the steps in the [Manage Azure AD B2C with Microsoft Graph](microsoft-graph-get-started.md) article to create an application registration that your management application can use.
+- To use the MS Graph API and interact with resources in your Azure AD B2C tenant, you need an application registration that grants the permissions to do so. Follow the steps in the [Register a Microsoft Graph application](microsoft-graph-get-started.md) article to create an application registration that your management application can use.
## User management > [!NOTE]
For user flows, these extension properties are [managed by using the Azure porta
> [!NOTE] > In Azure AD, directory extensions are managed through the [extensionProperty resource type](/graph/api/resources/extensionproperty) and its associated methods. However, because they are used in B2C through the `b2c-extensions-app` app which should not be updated, they are managed in Azure AD B2C using the [identityUserFlowAttribute resource type](/graph/api/resources/identityuserflowattribute) and its associated methods.
+## Tenant usage
+
+Use the [Get organization details](/graph/api/organization-get) API to get your directory size quota. You need to add the `$select` query parameter as shown in the following HTTP request:
+
+```http
+ GET https://graph.microsoft.com/v1.0/organization/organization-id?$select=directorySizeQuota
+```
+Replace `organization-id` with your organization or tenant ID.
+
+The response to the above request looks similar to the following JSON snippet:
+
+```json
+{
+ "directorySizeQuota": {
+ "used": 156,
+ "total": 1250000
+ }
+}
+```
## Audit logs

- [List audit logs](/graph/api/directoryaudit-list)
active-directory-b2c Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/overview.md
Title: What is Azure Active Directory B2C? description: Learn how you can use Azure Active Directory B2C to support external identities in your applications, including social sign-up with Facebook, Google, and other identity providers. -+
active-directory-b2c Partner Arkose Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-arkose-labs.md
Previously updated : 1/4/2023 Last updated : 01/18/2023
Username and password are stored as environment variables, not part of the repos
- [Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose](https://github.com/Azure-Samples/active-directory-b2c-node-sign-up-user-flow-arkose) - Find the Azure AD B2C sign-up user flow - [Azure AD B2C custom policy overview](./custom-policy-overview.md)-- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
+- [Tutorial: Create user flows and custom policies in Azure Active Directory B2C](./tutorial-create-user-flows.md?pivots=b2c-custom-policy)
active-directory-b2c Partner Whoiam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-whoiam.md
Previously updated : 12/19/2022 Last updated : 01/18/2023
active-directory-b2c Partner Zscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-zscaler.md
Previously updated : 12/20/2022 Last updated : 01/18/2023
active-directory-b2c Quickstart Web App Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/quickstart-web-app-dotnet.md
Title: "Quickstart: Set up sign-in for an ASP.NET web app"
description: In this Quickstart, run a sample ASP.NET web app that uses Azure Active Directory B2C to provide account sign-in. -+ Previously updated : 10/01/2021- Last updated : 01/17/2023+
In this quickstart, you use an ASP.NET application to sign in using a social ide
## Prerequisites -- [Visual Studio 2019](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload.
+- [Visual Studio 2022](https://www.visualstudio.com/downloads/) with the **ASP.NET and web development** workload.
- A social account from Facebook, Google, or Microsoft. - [Download a zip file](https://github.com/Azure-Samples/active-directory-b2c-dotnet-webapp-and-webapi/archive/master.zip) or clone the sample web application from GitHub.
In this quickstart, you use an ASP.NET application to sign in using a social ide
## Run the application in Visual Studio 1. In the sample application project folder, open the **B2C-WebAPI-DotNet.sln** solution in Visual Studio.
-2. For this quickstart, you run both the **TaskWebApp** and **TaskService** projects at the same time. Right-click the **B2C-WebAPI-DotNet** solution in Solution Explorer, and then select **Set StartUp Projects**.
-3. Select **Multiple startup projects** and change the **Action** for both projects to **Start**.
-4. Select **OK**.
-5. Press **F5** to debug both applications. Each application opens in its own browser tab:
+1. For this quickstart, you run both the **TaskWebApp** and **TaskService** projects at the same time. Right-click the **B2C-WebAPI-DotNet** solution in Solution Explorer, and then select **Set StartUp Projects**.
+1. Select **Multiple startup projects** and change the **Action** for both projects to **Start**.
+1. Select **OK**.
+1. Press **F5** to debug both applications. Each application opens in its own browser tab:
- `https://localhost:44316/` - The ASP.NET web application. You interact directly with this application in the quickstart. - `https://localhost:44332/` - The web API that's called by the ASP.NET web application.
In this quickstart, you use an ASP.NET application to sign in using a social ide
1. Select **Sign up / Sign in** in the ASP.NET web application to start the workflow.
- ![Sample ASP.NET web app in browser with sign up/sign link highlighted](./media/quickstart-web-app-dotnet/web-app-sign-in.png)
+ ![Screenshot showing the sample ASP.NET web app in browser with the sign up/sign in link highlighted.](./media/quickstart-web-app-dotnet/web-app-sign-in.png)
The sample supports several sign-up options including using a social identity provider or creating a local account using an email address. For this quickstart, use a social identity provider account from either Facebook, Google, or Microsoft.
-2. Azure AD B2C presents a sign-in page for a fictitious company called Fabrikam for the sample web application. To sign up using a social identity provider, select the button of the identity provider you want to use.
+1. Azure AD B2C presents a sign-in page for a fictitious company called Fabrikam for the sample web application. To sign up using a social identity provider, select the button of the identity provider you want to use.
- ![Sign In or Sign Up page showing identity provider buttons](./media/quickstart-web-app-dotnet/sign-in-or-sign-up-web.png)
+ ![Screenshot of the Sign In or Sign Up page with identity provider buttons.](./media/quickstart-web-app-dotnet/sign-in-or-sign-up-web.png)
You authenticate (sign in) using your social account credentials and authorize the application to read information from your social account. By granting access, the application can retrieve profile information from the social account such as your name and city.
-3. Finish the sign-in process for the identity provider.
+1. Finish the sign-in process for the identity provider.
## Edit your profile
Azure Active Directory B2C provides functionality to allow users to update their
1. In the application menu bar, select your profile name, and then select **Edit profile** to edit the profile you created.
- ![Sample web app in browser with Edit profile link highlighted](./media/quickstart-web-app-dotnet/edit-profile-web.png)
+ ![Screenshot of the sample web app in browser with the edit profile link highlighted](./media/quickstart-web-app-dotnet/edit-profile-web.png)
-2. Change your **Display name** or **City**, and then select **Continue** to update your profile.
+1. Change your **Display name** or **City**, and then select **Continue** to update your profile.
The change is displayed in the upper right portion of the web application's home page.
Azure Active Directory B2C provides functionality to allow users to update their
1. Select **To-Do List** to enter and modify your to-do list items.
-2. In the **New Item** text box, enter text. To call the Azure AD B2C protected web API that adds a to-do list item, select **Add**.
+1. In the **New Item** text box, enter text. To call the Azure AD B2C protected web API that adds a to-do list item, select **Add**.
- ![Sample web app in browser with Add a to-do list item](./media/quickstart-web-app-dotnet/add-todo-item-web.png)
+ ![Screenshot of the sample web app in browser with To-Do List link and Add button highlighted.](./media/quickstart-web-app-dotnet/add-todo-item-web.png)
The ASP.NET web application includes an Azure AD access token in the request to the protected web API resource to perform operations on the user's to-do list items.
active-directory-b2c Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/service-limits.md
Previously updated : 12/01/2022 Last updated : 12/29/2022 zone_pivot_groups: b2c-policy-type
The following table lists the administrative configuration limits in the Azure AD B2C service.
|Number of sign-out URLs per application |1 |
|String Limit per Attribute |250 Chars |
|Number of B2C tenants per subscription |20 |
+|Total number of objects (user accounts and applications) per tenant (default limit)|1.25 million |
+|Total number of objects (user accounts and applications) per tenant (using a verified custom domain)|5.25 million |
|Levels of [inheritance](custom-policy-overview.md#inheritance-model) in custom policies |10 |
|Number of policies per Azure AD B2C tenant (user flows + custom policies) |200 |
|Maximum policy file size |1024 KB |
active-directory-b2c Sign In Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/sign-in-options.md
Title: Sign-in options supported by Azure AD B2C
description: Learn about the sign-up and sign-in options you can use with Azure Active Directory B2C, including username and password, email, phone, or federation with social or external identity providers. -+ Previously updated : 11/03/2022- Last updated : 01/18/2022+
Email sign-up is enabled by default in your local account identity provider settings.
- **Sign-up**: users are prompted for an email address, which is verified at sign-up (optional) and becomes their login ID. The user then enters any other information requested on the sign-up page, for example, display name, given name, and surname. Then they select **Continue** to create an account.
- **Password reset**: Users enter and verify their email, after which the user can reset the password
-![Email sign-up or sign-in experience](./media/sign-in-options/local-account-email-experience.png)
+![Series of screenshots showing email sign-up or sign-in experience.](./media/sign-in-options/local-account-email-experience.png)
Learn how to configure email sign-in in your local account identity provider. ## Username sign-in
Your local account identity provider includes a Username option that lets users sign up and sign in to your application with a username.
- **Sign-up**: Users will be prompted for a username, which will become their login ID. Users will also be prompted for an email address, which will be verified at sign-up. The email address will be used during a password reset flow. The user enters any other information requested on the sign-up page, for example, Display Name, Given Name, and Surname. The user then selects Continue to create the account.
- **Password reset**: Users must enter their username and the associated email address. The email address must be verified, after which, the user can reset the password.
-![Username sign-up or sign-in experience](./media/sign-in-options/local-account-username-experience.png)
+![Series of screenshots showing username sign-up or sign-in experience.](./media/sign-in-options/local-account-username-experience.png)
## Phone sign-in
Phone sign-in is a passwordless option in your local account identity provider s
1. Next, the user is asked to provide a **recovery email**. The user enters their email address, and then selects *Send verification code*. A code is sent to the user's email inbox, which they can retrieve and enter in the Verification code box. Then the user selects Verify code.
1. Once the code is verified, the user selects *Create* to create their account.
-![Phone sign-up or sign-in experience](./media/sign-in-options/local-account-phone-experience.png)
+![Series of screenshots showing phone sign-up or sign-in experience.](./media/sign-in-options/local-account-phone-experience.png)
### Pricing for phone sign-in
One-time passwords are sent to your users by using SMS text messages. Depending
When you enable phone sign-up and sign-in for your user flows, it's also a good idea to enable the recovery email feature. With this feature, a user can provide an email address that can be used to recover their account when they don't have their phone. This email address is used for account recovery only. It can't be used for signing in. -- When the recovery email prompt is **On**, a user signing up for the first time is prompted to verify a backup email. A user who hasn't provided a recovery email before is asked to verify a backup email during next sign in.
+- When the recovery email prompt is **On**, a user signing up for the first time is prompted to verify a backup email. A user who hasn't provided a recovery email before is asked to verify a backup email during next sign-in.
- When recovery email is **Off**, a user signing up or signing in isn't shown the recovery email prompt.

The following screenshots demonstrate the phone recovery flow:
-![Phone recovery user flow](./media/sign-in-options/local-account-change-phone-flow.png)
+![Diagram showing phone recovery user flow.](./media/sign-in-options/local-account-change-phone-flow.png)
## Phone or email sign-in

You can choose to combine the [phone sign-in](#phone-sign-in) and the [email sign-in](#email-sign-in) in your local account identity provider settings. On the sign-up or sign-in page, users can type a phone number or an email address. Based on the user input, Azure AD B2C takes the user to the corresponding flow.
-![Phone or email sign-up or sign-in experience](./media/sign-in-options/local-account-phone-and-email-experience.png)
+![Series of screenshots showing phone or email sign-up or sign-in experience.](./media/sign-in-options/local-account-phone-and-email-experience.png)
++
+## Federated sign-in
+
+You can configure Azure AD B2C to allow users to sign in to your application with credentials from external social or enterprise identity providers (IdPs). Azure AD B2C supports many [external identity providers](add-identity-provider.md) and any identity provider that supports the OAuth 1.0, OAuth 2.0, OpenID Connect, or SAML protocols.
+
+With external identity provider federation, you can offer your consumers the ability to sign in with their existing social or enterprise accounts, without having to create a new account just for your application.
+
+On the sign-up or sign-in page, Azure AD B2C presents a list of external identity providers the user can choose for sign-in. Once they select one of the external identity providers, they're redirected to the selected provider's website to complete the sign-in process. After the user successfully signs in, they're returned to Azure AD B2C for authentication of the account in your application.
+
+![Diagram showing mobile sign-in example with a social account (Facebook).](media/add-identity-provider/external-idp.png)
+
+You can add identity providers that are supported by Azure Active Directory B2C (Azure AD B2C) to your [user flows](user-flow-overview.md) using the Azure portal. You can also add identity providers to your [custom policies](custom-policy-overview.md).
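+As a rough illustration of the redirect flow described above, the sketch below builds the OpenID Connect authorize request that starts a sign-in through a user flow; the tenant name, user flow name, client ID, and redirect URI are hypothetical placeholders, and the optional `domain_hint` parameter can send the user straight to a configured identity provider:

```python
# A minimal sketch (placeholder values) of the authorize request that
# begins a sign-in through an Azure AD B2C user flow.
from urllib.parse import urlencode

tenant = "contoso"                     # hypothetical B2C tenant name
params = {
    "p": "B2C_1_signupsignin",         # hypothetical user flow (policy) name
    "client_id": "00000000-0000-0000-0000-000000000000",  # your app registration
    "response_type": "code",
    "redirect_uri": "https://jwt.ms",
    "scope": "openid",
    "domain_hint": "facebook.com",     # optional: skip the provider picker
}
authorize_url = (
    f"https://{tenant}.b2clogin.com/{tenant}.onmicrosoft.com/"
    f"oauth2/v2.0/authorize?{urlencode(params)}"
)
print(authorize_url)  # open in a browser to start the federated sign-in
```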
## Next steps - Find out more about the built-in policies provided by [User flows in Azure Active Directory B2C](user-flow-overview.md).-- [Configure your local account identity provider](identity-provider-local.md).
+- [Configure your local account identity provider](identity-provider-local.md).
active-directory-b2c Technical Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/technical-overview.md
Title: Technical and feature overview - Azure Active Directory B2C description: An in-depth introduction to the features and technologies in Azure Active Directory B2C. Azure Active Directory B2C has high availability globally. -+
Last updated 10/26/2022 -+
active-directory-b2c Tenant Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tenant-management.md
Previously updated : 11/24/2022 Last updated : 12/29/2022
To get your Azure AD B2C tenant ID, follow these steps:
1. In the Azure portal, search for and select **Azure Active Directory**. 1. In the **Overview**, copy the **Tenant ID**.
-![Screenshot demonstrates how to get the Azure AD B2C tenant ID.](./media/tenant-management/get-azure-ad-b2c-tenant-id.png)
+![Screenshot demonstrates how to get the Azure AD B2C tenant ID.](./media/tenant-management/get-azure-ad-b2c-tenant-id.png)
+
+## Get your tenant usage
+
+You can read your Azure AD B2C tenant's total directory size, and how much of it is in use. To do so, follow the steps in [Get tenant usage by using Microsoft Graph API](microsoft-graph-operations.md#tenant-usage).
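+The linked article walks through the Graph call; as a hedged sketch (token acquisition omitted, and the `directorySizeQuota` property read from the beta `organization` resource as described there), the request looks roughly like this:

```python
# A minimal sketch of reading tenant usage with Microsoft Graph (beta).
# Assumes an access token with Organization.Read.All; verify endpoint and
# property names against the linked Graph article.
import requests

token = "<access-token>"          # acquire via MSAL or similar
tenant_id = "<your-tenant-id>"

resp = requests.get(
    f"https://graph.microsoft.com/beta/organization/{tenant_id}"
    "?$select=directorySizeQuota",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
quota = resp.json()["directorySizeQuota"]
print(f"Used {quota['used']} of {quota['total']} directory objects")
```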
## Next steps
active-directory-b2c Threat Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/threat-management.md
Title: Mitigate credential attacks - Azure AD B2C
description: Learn about detection and mitigation techniques for credential attacks (password attacks) in Azure Active Directory B2C, including smart account lockout features. -+ Last updated 09/20/2021-+
active-directory-b2c Tutorial Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-tenant.md
Previously updated : 07/12/2022 Last updated : 01/20/2023
Before your applications can interact with Azure Active Directory B2C (Azure AD B2C), they must be registered in a tenant that you manage.
-> [!NOTE]
-> You can create up to 20 tenants per subscription. This limit helps protect against threats to your resources, such as denial-of-service attacks, and is enforced in both the Azure portal and the underlying tenant creation API. If you need to create more than 20 tenants, please contact [Microsoft Support](support-options.md).
->
-> If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant first](./faq.yml?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant-). A role of at least Subscription Administrator is required. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name.
- In this article, you learn how to: > [!div class="checklist"]
In this article, you learn how to:
> * Switch to the directory containing your Azure AD B2C tenant
> * Add the Azure AD B2C resource as a **Favorite** in the Azure portal
-You learn how to register an application in the next tutorial.
+Before you create your Azure AD B2C tenant, you need to take the following considerations into account:
+
+- You can create up to **20** tenants per subscription. This limit helps protect against threats to your resources, such as denial-of-service attacks, and is enforced in both the Azure portal and the underlying tenant creation API. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md).
+
+- By default, each tenant can accommodate a total of **1.25 million** objects (user accounts and applications), but you can increase this limit to **5.25 million** objects when you add and verify a custom domain. If you want to increase this limit, please contact [Microsoft Support](find-help-open-support-ticket.md). However, if you created your tenant before **September 2022**, this limit doesn't affect you, and your tenant will retain the size allocated to it at creation, that is, **50 million** objects.
+
+- If you want to reuse a tenant name that you previously tried to delete, but you see the error "Already in use by another directory" when you enter the domain name, you'll need to [follow these steps to fully delete the tenant first](./faq.yml?tabs=app-reg-ga#how-do-i-delete-my-azure-ad-b2c-tenant-). A role of at least *Subscription Administrator* is required. After deleting the tenant, you might also need to sign out and sign back in before you can reuse the domain name.
## Prerequisites
You learn how to register an application in the next tutorial.
![Select the Create a resource button](media/tutorial-create-tenant/create-a-resource.png) 1. Search for **Azure Active Directory B2C**, and then select **Create**.
-2. Select **Create a new Azure AD B2C Tenant**.
+
+1. Select **Create a new Azure AD B2C Tenant**.
![Create a new Azure AD B2C tenant selected in Azure portal](media/tutorial-create-tenant/portal-02-create-tenant.png)
active-directory-b2c Tutorial Create User Flows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-create-user-flows.md
Title: Tutorial - Create user flows and custom policies - Azure Active Directory B2C description: Follow this tutorial to learn how to create user flows and custom policies in the Azure portal to enable sign up, sign in, and user profile editing for your applications in Azure Active Directory B2C. -+ Last updated 10/26/2022-+ zone_pivot_groups: b2c-policy-type
active-directory-b2c Tutorial Register Applications https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/tutorial-register-applications.md
Title: "Tutorial: Register a web application in Azure Active Directory B2C"
description: Follow this tutorial to learn how to register a web application in Azure Active Directory B2C using the Azure portal. -+
Last updated 10/26/2022 -+
active-directory-b2c User Flow Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-flow-overview.md
Title: User flows and custom policies in Azure Active Directory B2C
description: Learn more about built-in user flows and the custom policy extensible policy framework of Azure Active Directory B2C. -+
Last updated 10/24/2022 -+
active-directory-b2c User Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-migration.md
description: Migrate user accounts from another identity provider to Azure AD B2
- Previously updated : 10/24/2022 Last updated : 12/29/2022
Watch this video to learn about Azure AD B2C user migration strategies and steps
>[!Video https://www.youtube.com/embed/lCWR6PGUgz0] +
+> [!NOTE]
+> Before you start the migration, make sure your Azure AD B2C tenant's unused quota can accommodate all the users you expect to migrate. Learn how to [Get your tenant usage](microsoft-graph-operations.md#tenant-usage). If you need to increase your tenant's quota limit, contact [Microsoft Support](find-help-open-support-ticket.md).
+ ## Pre migration In the pre migration flow, your migration application performs these steps for each user account:
active-directory-b2c User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/user-overview.md
Title: Overview of user accounts in Azure Active Directory B2C description: Learn about the types of user accounts that can be used in Azure Active Directory B2C. -+ Last updated 12/28/2022-+
active-directory Application Proxy Configure Complex Application https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-configure-complex-application.md
Before you get started with Application Proxy Complex application scenario apps,
To configure (and update) Application Segments for a complex app using the API, you first [create a wildcard application](application-proxy-wildcard.md#create-a-wildcard-application), and then update the application's onPremisesPublishing property to configure the application segments and respective CORS settings. > [!NOTE]
-> One application segment is supported in preview. Support for multiple application segment to be announced soon.
+> Two application segments per complex application are supported with a [Microsoft Azure AD Premium subscription](https://azure.microsoft.com/pricing/details/active-directory). License requirements for more than two application segments per complex application will be announced soon.
If successful, this method returns a `204 No Content` response code and does not return anything in the response body.

## Example
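A hedged sketch of such an update follows; the application object ID and URLs are placeholders, and the payload shape should be verified against the current `onPremisesPublishing` beta reference before relying on it:

```python
# A sketch (placeholder IDs/URLs) of configuring application segments by
# patching the app's onPremisesPublishing property via Microsoft Graph beta.
import requests

token = "<access-token>"                 # e.g., with Application.ReadWrite.All
app_object_id = "<application-object-id>"

body = {
    "onPremisesApplicationSegments": [
        {
            "externalUrl": "https://home.contoso.net/",
            "internalUrl": "https://home.contoso.net/",
            "corsConfigurations": [],
        }
    ]
}
resp = requests.patch(
    f"https://graph.microsoft.com/beta/applications/{app_object_id}/onPremisesPublishing",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
    timeout=30,
)
# Per the note above, a successful update returns 204 No Content.
assert resp.status_code == 204, resp.text
```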
active-directory Application Proxy Integrate With Logic Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/app-proxy/application-proxy-integrate-with-logic-apps.md
+
+ Title: Securely integrate Azure Logic Apps with on-premises APIs using Azure Active Directory Application Proxy
+description: Azure Active Directory's Application Proxy lets cloud-native logic apps securely access on-premises APIs to bridge your workload.
+++++++ Last updated : 01/19/2023++++
+# Securely integrate Azure Logic Apps with on-premises APIs using Azure Active Directory Application Proxy
+
+Azure Logic Apps is a service that lets you easily create managed workflows in a no-code environment and integrate them with various external services and systems. It can help automate a wide range of business processes, such as data integration, data processing, and event-driven scenarios.
+While Logic Apps integrate easily with other public and cloud-based services, you may need to use Logic Apps with protected, on-premises applications and services without exposing the service to the public via port forwarding or a traditional reverse proxy.
+
+This article describes the steps necessary to utilize the Azure AD Application Proxy solution to provide secure access to a Logic App, while protecting the internal application from unwanted actors. The process and end result is similar to [Access on-premises APIs with Azure Active Directory Application Proxy](./application-proxy-secure-api-access.md) with special attention paid to utilizing the API from within a Logic App.
+
+## Overview
+
+The following diagram shows a traditional way to publish on-premises APIs for access from Azure Logic Apps. This approach requires opening incoming TCP ports 80 and/or 443 to the API service.
+
+![Diagram that shows Logic App to API direct connection.](./media/application-proxy-integrate-with-logic-apps/azure-logic-app-to-api-connection-direct.png)
+
+The following diagram shows how you can use Azure AD Application Proxy to securely publish APIs for use with Logic Apps (or other Azure Cloud services) without opening any incoming ports:
+
+![Diagram that shows Logic App to API connection via Azure Application Proxy.](./media/application-proxy-integrate-with-logic-apps/azure-logic-app-to-api-connection-app-proxy.png)
+
+The Azure AD App Proxy and associated connector facilitate secure authorization and integration to your on-premises services without additional configuration to your network security infrastructure.
+
+## Prerequisites
+
+To follow this tutorial, you will need:
+
+- Admin access to an Azure directory, with an account that can create and register apps
+- The *Logic App Contributor* role (or higher) in an active tenant
+- Azure Application Proxy connector deployed and an application configured as detailed in [Add an on-premises app - Application Proxy in Azure Active Directory](./application-proxy-add-on-premises-application.md)
+
+> [!NOTE]
+> While granting a user entitlement and testing the sign-on is recommended, it isn't required for this guide.
+
+## Configure the Application Access
+
+When a new Enterprise Application is created, a matching App Registration is also created. The App Registration allows configuration of secure programmatic access using certificates, secrets, or federated credentials. For integration with a Logic App, you need to configure a client secret and the API permissions.
+
+1. From the Azure portal, open **Azure Active Directory**
+
+2. Select the **App Registrations** menu item from the navigation pane
+
+ ![Screenshot of the Azure Active Directory App Registration Menu Item.](./media/application-proxy-integrate-with-logic-apps/app-registration-menu.png)
+
+3. From the *App Registrations* window, select the **All applications** tab option
+
+4. Navigate to the application with a matching name to your deployed App Proxy application. For example, if you deployed *Sample App 1* as an Enterprise Application, click the **Sample App 1** registration item
+
+ > [!NOTE]
+ > If an associated application cannot be found, it may have not been automatically created or may have been deleted. A registration can be created using the **New Registration** button.
+
+5. From the *Sample App 1* detail page, take note of the *Application (client) ID* and *Directory (tenant) ID* fields. These will be used later.
+
+ ![Screenshot of the Azure Active Directory App Registration Detail.](./media/application-proxy-integrate-with-logic-apps/app-registration-detail.png)
+
+6. Select the **API permissions** menu item from the navigation pane
+
+ ![Screenshot of the Azure Active Directory App Registration API Permissions Menu Item.](./media/application-proxy-integrate-with-logic-apps/api-permissions-menu.png)
+
+7. From the *API permissions* page:
+
+ 1. Click the **Add a permission** button
+
+ 2. In the *Request API permissions* pop-up:
+
+ 1. Select the **APIs my organization uses** tab
+
+ 2. Search for your app by name (e.g. *Sample App 1*) and select the item
+
+ 3. Ensure *Delegated Permissions* is **selected**, then **check** the box for *user_impersonation*
+
+ 4. Click **Add permissions**
+
+ 3. Verify the configured permission appears
+
+ ![Screenshot of the Azure Active Directory App Registration API Permissions Detail.](./media/application-proxy-integrate-with-logic-apps/api-permissions-detail.png)
+
+8. Select the **Certificates & secrets** menu item from the navigation pane
+
+ ![Screenshot of the Azure Active Directory App Registration Certificates and Secrets Menu Item.](./media/application-proxy-integrate-with-logic-apps/certificates-and-secrets-menu.png)
+
+9. From the *Certificates & secrets* page:
+
+ 1. Select the **Client secrets** tab item
+
+ 2. Click the **New client secret** button
+
+ 3. From the *Add a client secret* pop-up:
+
+ 1. Enter a **Description** and desired expiration
+
+ 2. Click **Add**
+
+ 4. Verify the new client secret appears
+
+ 5. Click the **Copy** button for the *Value* of the newly created secret. Save this value securely for later use; it's shown only one time.
+
+ ![Screenshot of the Azure Active Directory App Registration Client Secret Detail.](./media/application-proxy-integrate-with-logic-apps/client-secret-detail.png)
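+Before wiring these values into the Logic App, you can optionally sanity-check them with a client-credentials token request. The sketch below mirrors the grant that the *Active Directory OAuth* authentication performs; the tenant ID, client ID, and secret are the values noted in this section, and the audience FQDN shown is a placeholder:

```python
# A hedged sketch: verify the client ID and secret can obtain a token for
# the App Proxy application before configuring the Logic App.
import requests

tenant_id = "<directory-tenant-id>"
client_id = "<application-client-id>"
client_secret = "<client-secret-value>"
audience = "https://sampleapp1.msappproxy.net"   # hypothetical public FQDN

resp = requests.post(
    f"https://login.windows.net/{tenant_id}/oauth2/token",
    data={
        "grant_type": "client_credentials",
        "client_id": client_id,
        "client_secret": client_secret,
        "resource": audience,
    },
    timeout=30,
)
resp.raise_for_status()
print("Token acquired:", resp.json()["access_token"][:40], "...")
```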
+
+## Configure the Logic App
+
+1. From the Logic App, open the **Designer** view
+
+2. Select a desired trigger (if prompted)
+
+3. Add a new step and select the **HTTP** operation
+
+ ![Screenshot of the Azure Logic App Trigger Options Pane.](./media/application-proxy-integrate-with-logic-apps/logic-app-trigger-menu.png)
+
+4. In the operation details:
+
+ 1. *Method*: Select the desired HTTP method to be sent to the internal API
+
+ 2. *URI*: Fill in with the *public* FQDN of your application registered in Azure AD, along with the additional URI required for API access (e.g. *sampleapp1.msappproxy.net/api/1/status*)
+
+ > [!NOTE]
+ > Specific values for API will depend on your internal application. Refer to your application's documentation for more information.
+
+ 3. *Headers*: Enter any desired headers to be sent to the internal API
+
+ 4. *Queries*: Enter any desired queries to be sent to the internal API
+
+ 5. *Body*: Enter any desired body contents to be sent to the internal API
+
+ 6. *Cookie*: Enter any desired cookie(s) to be sent to the internal API
+
+ 7. Click *Add new parameter*, then check *Authentication*
+
+ 8. From the *Authentication type*, select *Active Directory OAuth*
+
+ 9. For the authentication, fill the following details:
+
+ 1. *Authority*: Enter *https://login.windows.net*
+
+ 2. *Tenant*: Enter the **Directory (tenant) ID** noted in *Configure the Application Access*
+
+ 3. *Audience*: Enter the *public* FQDN of your application registered in Azure AD (e.g. *sampleapp1.msappproxy.net*)
+
+ 4. *Client ID*: Enter the **Application (client) ID** noted in *Configure the Application Access*
+
+ 5. *Credential Type*: **Secret**
+
+ 6. *Secret*: Enter the **secret value** noted in *Configure the Application Access*
+
+ ![Screenshot of Azure Logic App HTTP Action Configuration.](./media/application-proxy-integrate-with-logic-apps/logic-app-http-configuration.png)
+
+5. Save the logic app and test with your trigger
+
+## Caveats
+
+- APIs that require authentication or authorization need special handling when used with this method. Because Azure Active Directory OAuth is used for access, the requests already contain an *Authorization* header that can't also be used by the internal API (unless SSO is configured). As a workaround, some applications offer authentication or authorization through methods other than an *Authorization* header. For example, GitLab allows for a header titled *PRIVATE-TOKEN*, and Atlassian JIRA allows for requesting a cookie that can be used in later requests (see the sketch after this list)
+
+- While the Logic App HTTP action shows cleartext values, it is highly recommended to store the App Registration Secret Key in Azure Key Vault for secure retrieval and use.
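+As a concrete illustration of the first caveat, here's a hedged sketch of calling a GitLab instance published through Application Proxy: the OAuth bearer token occupies the *Authorization* header, so the GitLab personal access token travels in *PRIVATE-TOKEN* instead (the FQDN and both token values are placeholders):

```python
# A sketch of the header workaround described above for GitLab behind
# Azure AD Application Proxy.
import requests

proxy_token = "<app-proxy-oauth-access-token>"   # from the OAuth grant
gitlab_token = "<gitlab-personal-access-token>"

resp = requests.get(
    "https://sampleapp1.msappproxy.net/api/v4/projects",
    headers={
        "Authorization": f"Bearer {proxy_token}",  # consumed by App Proxy
        "PRIVATE-TOKEN": gitlab_token,             # consumed by GitLab itself
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json())
```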
+
+## See Also
+
+- [How to configure an Application Proxy application](./application-proxy-config-how-to.md)
+- [Access on-premises APIs with Azure Active Directory Application Proxy](./application-proxy-secure-api-access.md)
+- [Common scenarios, examples, tutorials, and walkthroughs for Azure Logic Apps](../../logic-apps/logic-apps-examples-and-scenarios.md)
active-directory Concept Authentication Oath Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/concept-authentication-oath-tokens.md
Previously updated : 09/12/2022 Last updated : 01/18/2023
Users may have a combination of up to five OATH hardware tokens or authenticator
>[!IMPORTANT] >The preview is only supported in Azure Global and Azure Government clouds. +
+## Determine OATH token registration type in mysecurityinfo
+Users can manage and add OATH token registrations by accessing https://aka.ms/mysecurityinfo or by selecting Security info from My Account. Specific icons are used to differentiate whether the OATH token registration is hardware- or software-based.
+
+OATH token registration type | Icon
+ |
+OATH software token | <img width="63" alt="Software OATH token" src="media/concept-authentication-methods/software-oath-token-icon.png">
+OATH hardware token | <img width="63" alt="Hardware OATH token" src="media/concept-authentication-methods/hardware-oath-token-icon.png">
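Outside the Security info page, one way to inspect a user's OATH registrations programmatically is the Graph authentication methods API; the sketch below is hedged (the software OATH collection is currently exposed on the beta endpoint, and the user and permission shown are assumptions):

```python
# A hedged sketch of listing a user's software OATH token registrations
# with Microsoft Graph beta. Requires a token with an authentication
# method read permission such as UserAuthenticationMethod.Read.All.
import requests

token = "<access-token>"
user = "user@contoso.com"     # hypothetical user

resp = requests.get(
    f"https://graph.microsoft.com/beta/users/{user}/authentication/softwareOathMethods",
    headers={"Authorization": f"Bearer {token}"},
    timeout=30,
)
resp.raise_for_status()
for method in resp.json().get("value", []):
    print(method.get("id"))
```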
++ ## Next steps Learn more about configuring authentication methods using the [Microsoft Graph REST API](/graph/api/resources/authenticationmethods-overview).
active-directory Howto Mfa Nps Extension Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/authentication/howto-mfa-nps-extension-errors.md
Sometimes, your users may get messages from Multi-Factor Authentication because
| **OathCodeIncorrect** | Wrong code entered\OATH Code Incorrect | The user entered the wrong code. Have them try again by requesting a new code or signing in again. |
| **SMSAuthFailedMaxAllowedCodeRetryReached** | Maximum allowed code retry reached | The user failed the verification challenge too many times. Depending on your settings, they may need to be unblocked by an admin now. |
| **SMSAuthFailedWrongCodeEntered** | Wrong code entered/Text Message OTP Incorrect | The user entered the wrong code. Have them try again by requesting a new code or signing in again. |
+| **AuthenticationThrottled** | Too many attempts by user in a short period of time. Throttling. | Microsoft may limit repeated authentication attempts that are performed by the same user in a short period of time. This limitation does not apply to the Microsoft Authenticator or verification codes. If you have hit these limits, you can use the Authenticator app or a verification code, or try to sign in again in a few minutes. |
+| **AuthenticationMethodLimitReached** | Authentication Method Limit Reached. Throttling. | Microsoft may limit repeated authentication attempts that are performed by the same user using the same authentication method type in a short period of time, specifically voice call or SMS. This limitation does not apply to the Microsoft Authenticator or verification codes. If you have hit these limits, you can use the Authenticator app or a verification code, or try to sign in again in a few minutes.|
## Errors that require support
active-directory Faqs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/faqs.md
This article answers frequently asked questions (FAQs) about Permissions Managem
## What's Permissions Management?
-Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multi-cloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
+Permissions Management is a cloud infrastructure entitlement management (CIEM) solution that provides comprehensive visibility into permissions assigned to all identities. For example, over-privileged workload and user identities, actions, and resources across multicloud infrastructures in Microsoft Azure, Amazon Web Services (AWS), and Google Cloud Platform (GCP). Permissions Management detects, automatically right-sizes, and continuously monitors unused and excessive permissions. It deepens the Zero Trust security strategy by augmenting the least privilege access principle.
## What are the prerequisites to use Permissions Management?
No, Permissions Management is a hosted cloud offering.
## Can non-Azure customers use Permissions Management?
-Yes, non-Azure customers can use our solution. Permissions Management is a multi-cloud solution so even customers who have no subscription to Azure can benefit from it.
+Yes, non-Azure customers can use our solution. Permissions Management is a multicloud solution so even customers who have no subscription to Azure can benefit from it.
## Is Permissions Management available for tenants hosted in the European Union (EU)?
Yes, Permissions Management is currently for tenants hosted in the European Unio
## If I'm already using Azure AD Privileged Identity Management (PIM) for Azure, what value does Permissions Management provide?
-Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while Permissions Management allows multi-cloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
+Permissions Management complements Azure AD PIM. Azure AD PIM provides just-in-time access for admin roles in Azure (as well as Microsoft Online Services and apps that use groups), while Permissions Management allows multicloud discovery, remediation, and monitoring of privileged access across Azure, AWS, and GCP.
## What public cloud infrastructures are supported by Permissions Management?
You can read our blog and visit our web page. You can also get in touch with you
## What is the data destruction/decommission process?
-If a customer initiates a free Permissions Management 90-day trial, but does not follow up and convert to a paid license within 90 days of the free trial expiration, we will delete all collected data on or just before 90 days.
+If a customer initiates a free Permissions Management 45-day trial, but does not follow up and convert to a paid license within 45 days of the free trial expiration, we will delete all collected data on or just before 45 days.
-If a customer decides to discontinue licensing the service, we will also delete all previously collected data within 90 days of license termination.
+If a customer decides to discontinue licensing the service, we will also delete all previously collected data within 45 days of license termination.
We also have the ability to remove, export or modify specific data should the Global Admin using the Entra Permissions Management service file an official Data Subject Request. This can be initiated by opening a ticket in the Azure portal [New support request - Microsoft Entra admin center](https://entra.microsoft.com/#blade/Microsoft_Azure_Support/NewSupportRequestV3Blade/callerName/ActiveDirectory/issueType/technical), or alternately contacting your local Microsoft representative. ## Do I require a license to use Entra Permissions Management?
-Yes, as of July 1st, 2022, new customers must acquire a free 90-trial license or a paid license to use the service. You can enable a trial here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) or you can directly purchase resource-based licenses here: [https://aka.ms/BuyPermissionsManagement](https://aka.ms/BuyPermissionsManagement)
+Yes, as of July 1st, 2022, new customers must acquire a free 45-day trial license or a paid license to use the service. You can enable a trial here: [https://aka.ms/TryPermissionsManagement](https://aka.ms/TryPermissionsManagement) or you can directly purchase resource-based licenses here: [https://aka.ms/BuyPermissionsManagement](https://aka.ms/BuyPermissionsManagement)
## What do I do if I'm using the Public Preview version of Entra Permissions Management? If you are using the Public Preview version of Entra Permissions Management, your current deployment(s) will continue to work through October 1st.
-After October 1st you will need to move over to use the newly released version of the service and enable a 90-day trial or purchase licenses to continue using the service.
+After October 1st, you'll need to move over to the newly released version of the service and enable a 45-day trial or purchase licenses to continue using the service.
## What do I do if I'm using the legacy version of the CloudKnox service?
active-directory Product Permissions Analytics Reports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-infrastructure-entitlement-management/product-permissions-analytics-reports.md
Previously updated : 02/23/2022 Last updated : 01/20/2023 # Generate and download the Permissions analytics report
-This article describes how to generate and download the **Permissions analytics report** in Permissions Management.
+This article describes how to generate and download the **Permissions analytics report** in Permissions Management for AWS, Azure, and GCP. You can generate the report in Excel format or as a PDF.
-> [!NOTE]
-> This topic applies only to Amazon Web Services (AWS) users.
## Generate the Permissions analytics report

1. In the Permissions Management home page, select the **Reports** tab, and then select the **Systems Reports** subtab. The **Systems Reports** subtab displays a list of reports in the **Reports** table.
-1. Find **Permissions Analytics Report** in the list, and to download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
+1. Select **Permissions Analytics Report** from the list. To download the report, select the down arrow to the right of the report name, or from the ellipses **(...)** menu, select **Download**.
The following message displays: **Successfully Started To Generate On Demand Report.**
active-directory How To Attribute Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-attribute-mapping.md
Previously updated : 01/11/2023 Last updated : 01/20/2023
# Attribute mapping in Azure AD Connect cloud sync
-You can use the cloud sync feature of Azure Active Directory (Azure AD) Connect to map attributes between your on-premises user or group objects and the objects in Azure AD. This capability has been added to the cloud sync configuration.
+You can use the cloud sync attribute mapping feature to map attributes between your on-premises user or group objects and the objects in Azure AD.
+
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-1.png" alt-text="Screenshot of new UX screen attribute mapping." lightbox="media/how-to-attribute-mapping/new-ux-mapping-1.png":::
You can customize (change, delete, or create) the default attribute mappings according to your business needs. For a list of attributes that are synchronized, see [Attributes synchronized to Azure Active Directory](../hybrid/reference-connect-sync-attributes-synchronized.md?context=azure%2factive-directory%2fcloud-provisioning%2fcontext%2fcp-context/hybrid/reference-connect-sync-attributes-synchronized.md).
For more information on how to map UserType, see [Map UserType with cloud sync](
## Understand properties of attribute mappings
-Along with the type property, attribute mappings support certain attributes. These attributes will depend on the type of mapping you have selected. The following sections describe the supported attribute mappings for each of the individual types
+Along with the type property, attribute mappings support certain attributes. These attributes will depend on the type of mapping you have selected. The following sections describe the supported attribute mappings for each of the individual types. The following types of attribute mapping are available:
+- Direct
+- Constant
+- Expression
### Direct mapping attributes The following are the attributes supported by a direct mapping:
The following are the attributes supported by a direct mapping:
- **Always**: Apply this mapping on both user-creation and update actions. - **Only during creation**: Apply this mapping only on user-creation actions.
- ![Screenshot for direct](media/how-to-attribute-mapping/mapping-7.png)
### Constant mapping attributes The following are the attributes supported by a constant mapping:
The following are the attributes supported by a constant mapping:
- **Always**: Apply this mapping on both user-creation and update actions. - **Only during creation**: Apply this mapping only on user-creation actions.
- ![Screenshot for constant](media/how-to-attribute-mapping/mapping-9.png)
- ### Expression mapping attributes The following are the attributes supported by an expression mapping:
The following are the attributes supported by an expression mapping:
- **Always**: Apply this mapping on both user-creation and update actions. - **Only during creation**: Apply this mapping only on user-creation actions.
- ![Screenshot for expression](media/how-to-attribute-mapping/mapping-10.png)
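For instance, an expression mapping uses the provisioning expression syntax. A hedged example (the source attribute and domain suffix are placeholders) that composes a userPrincipalName from the on-premises mailNickname:

```
Join("@", [mailNickname], "contoso.com")
```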
- ## Add an attribute mapping
-To use the new capability, follow these steps:
-
-1. In the Azure portal, select **Azure Active Directory**.
-2. Select **Azure AD Connect**.
-3. Select **Manage cloud sync**.
-
- ![Screenshot that shows the link for managing cloud sync.](media/how-to-install/install-6.png)
-
-4. Under **Configuration**, select your configuration.
-5. Select **Click to edit mappings**. This link opens the **Attribute mappings** screen.
+To use attribute mapping, follow these steps:
- ![Screenshot that shows the link for adding attributes.](media/how-to-attribute-mapping/mapping-6.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="media/how-to-on-demand-provision/new-ux-1.png":::
-6. Select **Add attribute**.
+ 4. Under **Configuration**, select your configuration.
+ 5. On the left, select **Attribute mapping**.
+ 6. At the top, ensure that you have the correct object type selected. That is, user, group, or contact.
+ 7. Click **Add attribute mapping**.
- ![Screenshot that shows the button for adding an attribute, along with lists of attributes and mapping types.](media/how-to-attribute-mapping/mapping-1.png)
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-3.png" alt-text="Screenshot of adding an attribute mapping." lightbox="media/how-to-attribute-mapping/new-ux-mapping-3.png":::
-7. Select the mapping type. This can be one of the following:
+ 8. Select the mapping type. This can be one of the following:
- **Direct**: The target attribute is populated with the value of an attribute of the linked object in Active Directory.
- **Constant**: The target attribute is populated with a specific string that you specify.
- **Expression**: The target attribute is populated based on the result of a script-like expression.
- **None**: The target attribute is left unmodified.
-
- For more information see See [Understanding attribute types](#understand-types-of-attribute-mapping) above.
-8. Depending on what you have selected in the previous step, different options will be available for filling in. See the [Understand properties of attribute mappings](#understand-properties-of-attribute-mappings)sections above for information on these attributes.
-9. Select when to apply this mapping, and then select **Apply**.
-11. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
-12. Select **Save schema**.
+
+ 9. Depending on what you have selected in the previous step, different options will be available for filling in.
+ 10. Select when to apply this mapping, and then select **Apply**.
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-4.png" alt-text="Screenshot of saving an attribute mapping." lightbox="media/how-to-attribute-mapping/new-ux-mapping-4.png":::
+
+ 11. Back on the **Attribute mappings** screen, you should see your new attribute mapping.
+ 12. Select **Save schema**. You will be notified that once you save the schema, a synchronization will occur. Click **OK**.
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-5.png" alt-text="Screenshot of saving schema." lightbox="media/how-to-attribute-mapping/new-ux-mapping-5.png":::
- ![Screenshot that shows the Save schema button.](media/how-to-attribute-mapping/mapping-3.png)
+ 13. Once the save is successful, you'll see a notification on the right.
+
+ :::image type="content" source="media/how-to-attribute-mapping/new-ux-mapping-6.png" alt-text="Screenshot of successful schema save." lightbox="media/how-to-attribute-mapping/new-ux-mapping-6.png":::
## Test your attribute mapping To test your attribute mapping, you can use [on-demand provisioning](how-to-on-demand-provision.md):
-1. In the Azure portal, select **Azure Active Directory**.
-2. Select **Azure AD Connect**.
-3. Select **Manage provisioning**.
-4. Under **Configuration**, select your configuration.
-5. Under **Validate**, select the **Provision a user** button.
-6. On the **Provision on demand** screen, enter the distinguished name of a user or group and select the **Provision** button.
-
- The screen shows that the provisioning is in progress.
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+ 4. Under **Configuration**, select your configuration.
+ 5. On the left, select **Provision on demand**.
+ 6. Enter the distinguished name of a user and select the **Provision** button.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-2.png" alt-text="Screenshot of user distinguished name." lightbox="media/how-to-on-demand-provision/new-ux-2.png":::
- ![Screenshot that shows provisioning in progress.](media/how-to-attribute-mapping/mapping-4.png)
+ 7. After provisioning finishes, a success screen appears with four green check marks. Any errors appear to the left.
-8. After provisioning finishes, a success screen appears with four green check marks.
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-3.png" alt-text="Screenshot of on-demand success." lightbox="media/how-to-on-demand-provision/new-ux-3.png":::
- Under **Perform action**, select **View details**. On the right, you should see the new attribute synchronized and the expression applied.
- ![Screenshot that shows success and export details.](media/how-to-attribute-mapping/mapping-5.png)
active-directory How To Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-configure.md
Previously updated : 01/11/2023 Last updated : 01/20/2023
# Create a new configuration for Azure AD Connect cloud sync
-The following document will guide you through configuring Azure AD Connect cloud sync. For additional information and an example of how to configure cloud sync, see the video below.
+The following document will guide you through configuring Azure AD Connect cloud sync.
+
+The following documentation demonstrates the new guided user experience for Azure AD Connect cloud sync. If you don't see the images below, select **Preview features** at the top. You can select this again to revert to the old experience.
+
+ :::image type="content" source="media/how-to-configure/new-ux-configure-19.png" alt-text="Screenshot of enable preview features." lightbox="media/how-to-configure/new-ux-configure-19.png":::
+
+For additional information and an example of how to configure cloud sync, see the video below.
> [!VIDEO https://www.microsoft.com/en-us/videoplayer/embed/RWKact]
The following document will guide you through configuring Azure AD Connect cloud
## Configure provisioning To configure provisioning, follow these steps.
- 1. In the Azure portal, select **Azure Active Directory**
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
-
- ![Manage provisioning](media/how-to-install/install-6.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="media/how-to-on-demand-provision/new-ux-1.png":::
4. Select **New configuration**.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-1.png" alt-text="Screenshot of adding a configuration." lightbox="media/how-to-configure/new-ux-configure-1.png":::
5. On the configuration screen, select your domain and whether to enable password hash sync. Click **Create**.
- ![Create new configuration](media/how-to-configure/configure-1.png)
+ :::image type="content" source="media/how-to-configure/new-ux-configure-2.png" alt-text="Screenshot of a new configuration." lightbox="media/how-to-configure/new-ux-configure-2.png":::
+ 6. The **Get started** screen will open. From here, you can continue configuring cloud sync.
- 6. The Edit provisioning configuration screen will open.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-3.png" alt-text="Screenshot of the getting started screen." lightbox="media/how-to-configure/new-ux-configure-3.png":::
- ![Edit configuration](media/how-to-configure/con-1.png)
+ 7. The configuration is split into the following five sections.
- 7. Enter a **Notification email**. This email will be notified when provisioning isn't healthy. It is recommended that you keep **Prevent accidental deletion** enabled and set the **Accidental deletion threshold** to a number that you wish to be notified about. For more information, see [accidental deletes](#accidental-deletions) below.
- 8. Move the selector to Enable, and select Save.
+|Section|Description|
+|--|--|
+|1. Add [scoping filters](#scope-provisioning-to-specific-users-and-groups)|Use this section to define what objects appear in Azure AD|
+|2. Map [attributes](#attribute-mapping)|Use this section to map attributes between your on-premises users/groups with Azure AD objects|
+|3. [Test](#on-demand-provisioning)|Test your configuration before deploying it|
+|4. View [default properties](#accidental-deletions-and-email-notifications)|View the default setting prior to enabling them and make changes where appropriate|
+|5. Enable [your configuration](#enable-your-configuration)|Once ready, enable the configuration and users/groups will begin synchronizing|
>[!NOTE] > During the configuration process the synchronization service account will be created with the format **ADToAADSyncServiceAccount@[TenantID].onmicrosoft.com** and you may get an error if multi-factor authentication is enabled for the synchronization service account, or other interactive authentication policies are accidentally enabled for the synchronization account. Removing multi-factor authentication or any interactive authentication policies for the synchronization service account should resolve the error and you can complete the configuration smoothly. ## Scope provisioning to specific users and groups
-You can scope the agent to synchronize specific users and groups by using on-premises Active Directory groups or organizational units. You can't configure groups and organizational units within a configuration.
+You can scope the agent to synchronize specific users and groups by using on-premises Active Directory groups or organizational units.
+
+ :::image type="content" source="media/how-to-configure/new-ux-configure-4.png" alt-text="Screenshot of scoping filters icon." lightbox="media/how-to-configure/new-ux-configure-4.png":::
++
+You can't configure groups and organizational units within a configuration.
>[!NOTE] > You cannot use nested groups with group scoping. Nested objects beyond the first level will not be included when scoping using security groups. Only use group scope filtering for pilot scenarios as there are limitations to syncing large groups.
+ 1. On the **Getting started** configuration screen, click **Add scoping filters** next to the **Add scoping filters** icon, or click **Scoping filters** on the left under **Manage**.
- 1. In the Azure portal, select **Azure Active Directory**.
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
- 4. Under **Configuration**, select your configuration.
-
- ![Configuration section](media/how-to-configure/scope-1.png)
+ :::image type="content" source="media/how-to-configure/new-ux-configure-5.png" alt-text="Screenshot of scoping filters." lightbox="media/how-to-configure/new-ux-configure-5.png":::
- 5. Under **Configure**, select **Edit scoping filters** to change the scope of the configuration rule.
- 6. On the right, you can change the scope. Click **Done** and **Save** when you have finished.
- 7. Once you have changed the scope, you should [restart provisioning](#restart-provisioning) to initiate an immediate synchronization of the changes.
+ 2. Select the scoping filter. The filter can be one of the following:
+ - **All users**: Scopes the configuration to apply to all users that are being synchronized.
+ - **Selected security groups**: Scopes the configuration to apply to specific security groups.
+ - **Selected organizational units**: Scopes the configuration to apply to specific OUs.
+ 3. For security groups and organizational units, supply the appropriate distinguished name and click **Add**.
+ 4. Once your scoping filters are configured, click **Save**.
+ 5. After saving, you should see a message telling you what you still need to do to configure cloud sync. You can click the link to continue.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-16.png" alt-text="Screenshot of the nudge for scoping filters." lightbox="media/how-to-configure/new-ux-configure-16.png":::
+ 6. Once you've changed the scope, you should [restart provisioning](#restart-provisioning) to initiate an immediate synchronization of the changes.
## Attribute mapping
-Azure AD Connect cloud sync allows you to easily map attributes between your on-premises user/group objects and the objects in Azure AD. You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings. For more information, see [attribute mapping](how-to-attribute-mapping.md).
+Azure AD Connect cloud sync allows you to easily map attributes between your on-premises user/group objects and the objects in Azure AD.
+++
+You can customize the default attribute-mappings according to your business needs. So, you can change or delete existing attribute-mappings, or create new attribute-mappings.
++
+After saving, you should see a message telling you what you still need to do to configure cloud sync. You can click the link to continue.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-17.png" alt-text="Screenshot of the nudge for attribute filters." lightbox="media/how-to-configure/new-ux-configure-17.png":::
++
+For more information, see [attribute mapping](how-to-attribute-mapping.md).
## On-demand provisioning
-Azure AD Connect cloud sync allows you to test configuration changes, by applying these changes to a single user or group. You can use this to validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD. For more information, see [on-demand provisioning](how-to-on-demand-provision.md).
+Azure AD Connect cloud sync allows you to test configuration changes, by applying these changes to a single user or group.
++
+You can use this to validate and verify that the changes made to the configuration were applied properly and are being correctly synchronized to Azure AD.
++
+After testing, you should see a message telling you what you still need to do to configure cloud sync. You can click the link to continue.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-18.png" alt-text="Screenshot of the nudge for testing." lightbox="media/how-to-configure/new-ux-configure-18.png":::
++
+For more information, see [on-demand provisioning](how-to-on-demand-provision.md).
+
+## Accidental deletions and email notifications
+The default properties section provides information on accidental deletions and email notifications.
+
-## Accidental deletions
-The accidental delete feature is designed to protect you from accidental configuration changes and changes to your on-premises directory that would affect many users and groups. This feature allows you to:
+The accidental delete feature is designed to protect you from accidental configuration changes and changes to your on-premises directory that would affect many users and groups.
+
+This feature allows you to:
- Configure the ability to prevent accidental deletes automatically.
- Set the number of objects (threshold) beyond which the configuration will take effect
The accidental delete feature is designed to protect you from accidental configu
For more information, see [Accidental deletes](how-to-accidental-deletes.md)
+Click the **pencil** next to **Basics** to change the defaults in a configuration.
++
+## Enable your configuration
+Once you've finalized and tested your configuration, you can enable it.
++
+Click **Enable configuration** to enable it.
++ ## Quarantines Cloud sync monitors the health of your configuration and places unhealthy objects in a quarantine state. If most or all of the calls made against the target system consistently fail because of an error, for example, invalid admin credentials, the sync job is marked as in quarantine. For more information, see the troubleshooting section on [quarantines](how-to-troubleshoot.md#provisioning-quarantined-problems). ## Restart provisioning
-If you don't want to wait for the next scheduled run, trigger the provisioning run by using the **Restart provisioning** button.
+If you don't want to wait for the next scheduled run, trigger the provisioning run by using the **Restart sync** button.
1. In the Azure portal, select **Azure Active Directory**.
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
4. Under **Configuration**, select your configuration.
- ![Configuration selection to restart provisioning](media/how-to-configure/scope-1.png)
+ :::image type="content" source="media/how-to-configure/new-ux-configure-14.png" alt-text="Screenshot of restarting sync." lightbox="media/how-to-configure/new-ux-configure-14.png":::
- 5. At the top, select **Restart provisioning**.
+ 5. At the top, select **Restart sync**.
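If you'd rather script this than use the portal button, the restart can also be triggered through the Graph synchronization API. A hedged sketch follows; the service principal and job IDs are placeholders, and the request details should be checked against the synchronizationJob restart reference:

```python
# A sketch of restarting a cloud sync provisioning job via Microsoft Graph.
import requests

token = "<access-token>"                 # e.g., with Synchronization.ReadWrite.All
sp_id = "<service-principal-object-id>"  # the provisioning service principal
job_id = "<synchronization-job-id>"

resp = requests.post(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{sp_id}"
    f"/synchronization/jobs/{job_id}/restart",
    headers={"Authorization": f"Bearer {token}"},
    json={"criteria": {"resetScope": "Full"}},
    timeout=30,
)
resp.raise_for_status()   # expect 204 No Content on success
```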
## Remove a configuration To delete a configuration, follow these steps. 1. In the Azure portal, select **Azure Active Directory**.
- 2. Select **Azure AD Connect**.
- 3. Select **Manage cloud sync**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
4. Under **Configuration**, select your configuration.
-
- ![Configuration selection to remove configuration](media/how-to-configure/scope-1.png)
- 5. At the top of the configuration screen, select **Delete**.
+ :::image type="content" source="media/how-to-configure/new-ux-configure-15.png" alt-text="Screenshot of deletion." lightbox="media/how-to-configure/new-ux-configure-15.png":::
+
+ 5. At the top of the configuration screen, select **Delete configuration**.
>[!IMPORTANT] >There's no confirmation prior to deleting a configuration. Make sure this is the action you want to take before you select **Delete**.
active-directory How To Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-install.md
Previously updated : 01/11/2023 Last updated : 01/20/2023
active-directory How To On Demand Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-on-demand-provision.md
For additional information and an example, see the following video.
## Validate a user To use on-demand provisioning, follow these steps:
-1. In the Azure portal, select **Azure Active Directory**.
-2. Select **Azure AD Connect**.
-3. Select **Manage cloud sync**.
-
- ![Screenshot that shows the link for managing cloud sync.](media/how-to-install/install-6.png)
-4. Under **Configuration**, select your configuration.
-5. Under **Validate**, select the **Provision a user** button.
-
- ![Screenshot that shows the button for provisioning a user.](media/how-to-on-demand-provision/on-demand-2.png)
+ 1. In the Azure portal, select **Azure Active Directory**.
+ 2. On the left, select **Azure AD Connect**.
+ 3. On the left, select **Cloud sync**.
+
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-1.png" alt-text="Screenshot of new UX screen." lightbox="media/how-to-on-demand-provision/new-ux-1.png":::
-6. On the **Provision on demand** screen, enter the distinguished name of a user and select the **Provision** button.
+ 4. Under **Configuration**, select your configuration.
+ 5. On the left, select **Provision on demand**.
+ 6. Enter the distinguished name of a user and select the **Provision** button.
- ![Screenshot that shows a username and a Provision button.](media/how-to-on-demand-provision/on-demand-3.png)
-7. After provisioning finishes, a success screen appears with four green check marks. Any errors appear to the left.
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-2.png" alt-text="Screenshot of user distinguished name." lightbox="media/how-to-on-demand-provision/new-ux-2.png":::
+
+ 7. After provisioning finishes, a success screen appears with four green check marks. Any errors appear to the left.
- ![Screenshot that shows successful provisioning.](media/how-to-on-demand-provision/on-demand-4.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-3.png" alt-text="Screenshot of on-demand success." lightbox="media/how-to-on-demand-provision/new-ux-3.png":::
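If you're not sure of a user's distinguished name, you can read it from Active Directory before running on-demand provisioning. The following is a minimal sketch using the ActiveDirectory PowerShell module; the sAMAccountName `jsmith` is a hypothetical placeholder, not a value from this article.

```powershell
# Requires the RSAT ActiveDirectory module on a domain-joined machine.
Import-Module ActiveDirectory

# Look up the distinguished name to paste into the "Provision on demand" page.
# 'jsmith' is a placeholder sAMAccountName.
Get-ADUser -Identity 'jsmith' |
    Select-Object -ExpandProperty DistinguishedName
# Example output: CN=John Smith,OU=CPUsers,DC=contoso,DC=com
```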
## Get details about provisioning Now you can look at the user information and determine if the changes that you made in the configuration have been applied. The rest of this article describes the individual sections that appear in the details of a successfully synchronized user.
Now you can look at the user information and determine if the changes that you m
### Import user The **Import user** section provides information on the user who was imported from Active Directory. This is what the user looks like before provisioning into Azure AD. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about an imported user.](media/how-to-on-demand-provision/on-demand-5.png)
- By using this information, you can see the various attributes (and their values) that were imported. If you created a custom attribute mapping, you can see the value here.
-![Screenshot that shows user details.](media/how-to-on-demand-provision/on-demand-6.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-4.png" alt-text="Screenshot of import user." lightbox="media/how-to-on-demand-provision/new-ux-4.png":::
### Determine if user is in scope The **Determine if user is in scope** section provides information on whether the user who was imported to Azure AD is in scope. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about user scope.](media/how-to-on-demand-provision/on-demand-7.png)
- By using this information, you can see if the user is in scope.
-![Screenshot that shows user scope details.](media/how-to-on-demand-provision/on-demand-10a.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-5.png" alt-text="Screenshot of scope determination." lightbox="media/how-to-on-demand-provision/new-ux-5.png":::
### Match user between source and target system The **Match user between source and target system** section provides information on whether the user already exists in Azure AD and whether a join should occur instead of provisioning a new user. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about a matched user.](media/how-to-on-demand-provision/on-demand-8.png)
- By using this information, you can see whether a match was found or if a new user is going to be created.
-![Screenshot that shows user information.](media/how-to-on-demand-provision/on-demand-11.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-6.png" alt-text="Screenshot of matching user." lightbox="media/how-to-on-demand-provision/new-ux-6.png":::
The matching details show a message with one of the following three operations: - **Create**: A user is created in Azure AD.
Depending on the type of operation that you've performed, the message will vary.
### Perform action The **Perform action** section provides information on the user who was provisioned or exported into Azure AD after the configuration was applied. This is what the user looks like after provisioning into Azure AD. Select the **View details** link to display this information.
-![Screenshot of the button for viewing details about a performed action.](media/how-to-on-demand-provision/on-demand-9.png)
- By using this information, you can see the values of the attributes after the configuration was applied. Do they look similar to what was imported, or are they different? Was the configuration applied successfully? This process enables you to trace the attribute transformation as it moves through the cloud and into your Azure AD tenant.
-![Screenshot that shows traced attribute details.](media/how-to-on-demand-provision/on-demand-12.png)
+ :::image type="content" source="media/how-to-on-demand-provision/new-ux-7.png" alt-text="Screenshot of perform action." lightbox="media/how-to-on-demand-provision/new-ux-7.png":::
## Next steps
active-directory How To Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-sso.md
Title: 'How to use Single Sign-on with cloud sync'
-description: This article describes how to install and use sso with cloud sync.
+ Title: 'How to use single sign-on with cloud sync'
+description: This article describes how to install and use single sign-on with cloud sync.
Previously updated : 01/28/2020 Last updated : 01/18/2023
-# Using Single Sign-On with cloud sync
+# Using single sign-on with cloud sync
The following document describes how to use single sign-on with cloud sync. [!INCLUDE [active-directory-cloud-provisioning-sso.md](../../../includes/active-directory-cloud-provisioning-sso.md)]
active-directory How To Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/how-to-troubleshoot.md
description: This article describes how to troubleshoot problems that might aris
Previously updated : 10/13/2021 Last updated : 01/18/2023 ms.prod: windows-server-threshold ms.technology: identity-adfs
active-directory Plan Cloud Sync Topologies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/plan-cloud-sync-topologies.md
Previously updated : 09/10/2021 Last updated : 01/17/2023
active-directory Reference Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-error-codes.md
Previously updated : 01/14/2021 Last updated : 01/18/2023
The following is a list of error codes and their descriptions
|Error code|Details|Scenario|Resolution| |--|--|--|--| |TimeOut|Error Message: We've detected a request timeout error when contacting the on-premises agent and synchronizing your configuration. For additional issues related to your cloud sync agent, please see our troubleshooting guidance.|Request to HIS timed out. Current Timeout value is 10 minutes.|See our [troubleshooting guidance](how-to-troubleshoot.md)|
-|HybridSynchronizationActiveDirectoryInternalServerError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.30b500eaf9c643b2b78804e80c1421fe.5c291d3c-d29f-4570-9d6b-f0c2fa3d5926. Additional details: Processing of the HTTP request resulted in an exception. |Could not process the parameters received in SCIM request to a Search request.|Please see the HTTP response returned by the 'Response' property of this exception for details.|
-|HybridIdentityServiceNoAgentsAssigned|Error Message: We are unable to find an active agent for the domain you are trying to sync. Please check to see if the agents have been removed. If so, re-install the agent again.|There are no agents running. Probably agents have been removed. Register a new agent.|"In this case, you will not see any agent assigned to the domain in portal.|
-|HybridIdentityServiceNoActiveAgents|Error Message: We are unable to find an active agent for the domain you are trying to sync. Please check to see if the agent is running by going to the server, where the agent is installed, and check to see if "Microsoft Azure AD Cloud Sync Agent" under Services is running.|"Agents are not listening to the ServiceBus endpoint. [The agent is behind a firewall that does not allow connections to service bus](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md#use-the-outbound-proxy-server)|
-|HybridIdentityServiceInvalidResource|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.3a2a0d8418f34f54a03da5b70b1f7b0c.d583d090-9cd3-4d0a-aee6-8d666658c3e9. Additional details: There seems to be an issue with your cloud sync setup. Please re-register your cloud sync agent on your on-prem AD domain and restart configuration from Azure Portal.|The resource name must be set so HIS knows which agent to contact.|Please re-register your cloud sync agent on your on-prem AD domain and restart configuration from Azure Portal.|
-|HybridIdentityServiceAgentSignalingError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.92d2e8750f37407fa2301c9e52ad7e9b.efb835ef-62e8-42e3-b495-18d5272eb3f9. Additional details: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration).|Service Bus is not able to send a message to the agent. Could be an outage in service bus, or the agent is not responsive.|If this issue persists, please contact support with Job ID (from status pane of your configuration).|
+|HybridSynchronizationActiveDirectoryInternalServerError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.30b500eaf9c643b2b78804e80c1421fe.5c291d3c-d29f-4570-9d6b-f0c2fa3d5926. Additional details: Processing of the HTTP request resulted in an exception. |Couldn't process the parameters received in SCIM request to a Search request.|Please see the HTTP response returned by the 'Response' property of this exception for details.|
+|HybridIdentityServiceNoAgentsAssigned|Error Message: We're unable to find an active agent for the domain you're trying to sync. Please check to see if the agents have been removed. If so, reinstall the agent.|There are no agents running. Probably agents have been removed. Register a new agent.|In this case, you won't see any agent assigned to the domain in the portal.|
+|HybridIdentityServiceNoActiveAgents|Error Message: We're unable to find an active agent for the domain you're trying to sync. Please check to see if the agent is running by going to the server, where the agent is installed, and check to see if "Microsoft Azure AD Cloud Sync Agent" under Services is running.|Agents aren't listening to the ServiceBus endpoint.|[The agent is behind a firewall that doesn't allow connections to service bus](../app-proxy/application-proxy-configure-connectors-with-proxy-servers.md#use-the-outbound-proxy-server)|
+|HybridIdentityServiceInvalidResource|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.3a2a0d8418f34f54a03da5b70b1f7b0c.d583d090-9cd3-4d0a-aee6-8d666658c3e9. Additional details: There seems to be an issue with your cloud sync setup. Please re-register your cloud sync agent on your on-premises AD domain and restart configuration from the Azure portal.|The resource name must be set so HIS knows which agent to contact.|Please re-register your cloud sync agent on your on-premises AD domain and restart configuration from the Azure portal.|
+|HybridIdentityServiceAgentSignalingError|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.92d2e8750f37407fa2301c9e52ad7e9b.efb835ef-62e8-42e3-b495-18d5272eb3f9. Additional details: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration).|Service Bus isn't able to send a message to the agent. Could be an outage in service bus, or the agent isn't responsive.|If this issue persists, please contact support with Job ID (from status pane of your configuration).|
|AzureDirectoryServiceServerBusy|Error Message: An error occurred. Error Code: 81. Error Description: Azure Active Directory is currently busy. This operation will be retried automatically. If this issue persists for more than 24 hours, contact Technical Support. Tracking ID: 8a4ab3b5-3664-4278-ab64-9cff37fd3f4f Server Name:|Azure Active Directory is currently busy.|If this issue persists for more than 24 hours, contact Technical Support.|
-|AzureActiveDirectoryInvalidCredential|Error Message: We found an issue with the service account that is used to run Azure AD Connect Cloud Sync. You can repair the cloud service account by following the instructions at [here](./how-to-troubleshoot.md). If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsInvalid AADSTS50034: The user account {EmailHidden} does not exist in the skydrive365.onmicrosoft.com directory. To sign into this application, the account must be added to the directory. Trace ID: 14b63033-3bc9-4bd4-b871-5eb4b3500200 Correlation ID: 57d93ed1-be4d-483c-997c-a3b6f03deb00 Timestamp: 2021-01-12 21:08:29Z |This error is thrown when the sync service account ADToAADSyncServiceAccount doesn't exist in the tenant. It can be due to accidental deletion of the account.|Use [Repair-AADCloudSyncToolsAccount](reference-powershell.md#repair-aadcloudsynctoolsaccount) to fix the service account.|
-|AzureActiveDirectoryExpiredCredentials|Error Message: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsExpired AADSTS50055: The password is expired. Trace ID: 989b1841-dbe5-49c9-ab6c-9aa25f7b0e00 Correlation ID: 1c69b196-1c3a-4381-9187-c84747807155 Timestamp: 2021-01-12 20:59:31Z | Response status code does not indicate success: 401 (Unauthorized).<br> AAD Sync service account credentials are expired.|You can repair the cloud service account by following the instructions at https://go.microsoft.com/fwlink/?linkid=2150988. If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: Your administrative Azure Active Directory tenant credentials were exchanged for an OAuth token that has since expired."|
+|AzureActiveDirectoryInvalidCredential|Error Message: We found an issue with the service account that is used to run Azure AD Connect Cloud Sync. You can repair the cloud service account by following the instructions [here](./how-to-troubleshoot.md). If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsInvalid AADSTS50034: The user account {EmailHidden} doesn't exist in the skydrive365.onmicrosoft.com directory. To sign into this application, the account must be added to the directory. Trace ID: 14b63033-3bc9-4bd4-b871-5eb4b3500200 Correlation ID: 57d93ed1-be4d-483c-997c-a3b6f03deb00 Timestamp: 2021-01-12 21:08:29Z |This error is thrown when the sync service account ADToAADSyncServiceAccount doesn't exist in the tenant. It can be due to accidental deletion of the account.|Use [Repair-AADCloudSyncToolsAccount](reference-powershell.md#repair-aadcloudsynctoolsaccount) to fix the service account.|
+|AzureActiveDirectoryExpiredCredentials|Error Message: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: CredentialsExpired AADSTS50055: The password is expired. Trace ID: 989b1841-dbe5-49c9-ab6c-9aa25f7b0e00 Correlation ID: 1c69b196-1c3a-4381-9187-c84747807155 Timestamp: 2021-01-12 20:59:31Z | Response status code doesn't indicate success: 401 (Unauthorized).<br> Azure AD Sync service account credentials are expired.|You can repair the cloud service account by following the instructions at https://go.microsoft.com/fwlink/?linkid=2150988. If the error persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: Your administrative Azure Active Directory tenant credentials were exchanged for an OAuth token that has since expired.|
|AzureActiveDirectoryAuthenticationFailed|Error Message: We were unable to process this request at this point. If this issue persists, please contact support and provide the following job identifier: AD2AADProvisioning.60b943e88f234db2b887f8cb91dee87c.707be0d2-c6a9-405d-a3b9-de87761dc3ac. Additional details: We were unable to process this request at this point. If this issue persists, please contact support with Job ID (from status pane of your configuration). Additional Error Details: UnexpectedError.|Unknown error.|If this issue persists, please contact support with Job ID (from status pane of your configuration).| ## Next steps
active-directory Reference Expressions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-expressions.md
Previously updated : 12/02/2019 Last updated : 01/18/2023
active-directory Reference Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-powershell.md
Previously updated : 11/03/2021 Last updated : 01/17/2023
active-directory Reference Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/reference-version-history.md
Previously updated : 11/19/2020 Last updated : 01/17/2023
active-directory Tutorial Basic Ad Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-basic-ad-azure.md
Previously updated : 12/02/2019 Last updated : 01/18/2023
You can use the environment you create in the tutorial to test various aspects o
This tutorial consists of ## Prerequisites The following are prerequisites required for completing this tutorial-- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It is suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
+- A computer with [Hyper-V](/windows-server/virtualization/hyper-v/hyper-v-technology-overview) installed. It's suggested to do this on either a [Windows 10](/virtualization/hyper-v-on-windows/about/supported-guest-os) or a [Windows Server 2016](/windows-server/virtualization/hyper-v/supported-windows-guest-operating-systems-for-hyper-v-on-windows) computer.
- An [external network adapter](/virtualization/hyper-v-on-windows/quick-start/connect-to-network) to allow the virtual machine to communicate with the internet. - An [Azure subscription](https://azure.microsoft.com/free) - A copy of Windows Server 2016
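If you prefer scripting the Hyper-V setup, the following is a hedged sketch of creating the virtual machine from PowerShell instead of Hyper-V Manager. The VM name, paths, ISO location, and adapter name are placeholders, not values from this tutorial; adjust them to your environment.

```powershell
# Create an external switch bound to a physical adapter (adapter name is a placeholder).
New-VMSwitch -Name 'External' -NetAdapterName 'Ethernet' -AllowManagementOS $true

# Create a Generation 2 VM for the server and attach the Windows Server ISO.
New-VM -Name 'DC1' -MemoryStartupBytes 4GB -Generation 2 `
    -NewVHDPath 'C:\VMs\DC1.vhdx' -NewVHDSizeBytes 60GB -SwitchName 'External'
Add-VMDvdDrive -VMName 'DC1' -Path 'C:\ISO\WindowsServer2016.iso'

# Boot from the DVD first so Windows Setup starts.
Set-VMFirmware -VMName 'DC1' -FirstBootDevice (Get-VMDvdDrive -VMName 'DC1')
Start-VM -Name 'DC1'
```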
In order to finish building the virtual machine, you need to finish the operatin
1. In Hyper-V Manager, double-click the virtual machine. 2. Click the Start button.
-3. You will be prompted to ΓÇÿPress any key to boot from CD or DVDΓÇÖ. Go ahead and do so.
+3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
4. On the Windows Server start up screen select your language and click **Next**. 5. Click **Install Now**. 6. Enter your license key and click **Next**.
Now you need to create an Azure AD tenant so that you can synchronize our users
6. Once this has completed, click the **here** link, to manage the directory. ## Create a global administrator in Azure AD
-Now that you have an Azure AD tenant, you will create a global administrator account. To create the global administrator account do the following.
+Now that you have an Azure AD tenant, you'll create a global administrator account. To create the global administrator account, do the following:
1. Under **Manage**, select **Users**.</br> ![Screenshot that shows the "Overview" menu with "Users" selected.](media/tutorial-single-forest/administrator-1.png)</br> 2. Select **All users** and then select **+ New user**.
-3. Provide a name and username for this user. This will be your Global Admin for the tenant. You will also want to change the **Directory role** to **Global administrator.** You can also show the temporary password. When you are done, select **Create**.</br>
+3. Provide a name and username for this user. This will be your global administrator for the tenant. You'll also want to change the **Directory role** to **Global administrator**. You can also show the temporary password. When you're done, select **Create**.</br>
![Create](media/tutorial-single-forest/administrator-2.png)</br> 4. Once this has completed, open a new web browser and sign-in to myapps.microsoft.com using the new global administrator account and the temporary password.
-5. Change the password for the global administrator to something that you will remember.
+5. Change the password for the global administrator to something that you'll remember.
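If you'd rather script the account creation, the following is a hedged sketch using the AzureAD PowerShell module (`Install-Module AzureAD`). The UPN, display name, and password are placeholders; note that in this module the global administrator role may appear under its legacy display name, "Company Administrator".

```powershell
# Connect as an existing administrator of the tenant.
Connect-AzureAD

# Create the cloud-only user with a temporary password (all values are placeholders).
$password = New-Object -TypeName Microsoft.Open.AzureAD.Model.PasswordProfile
$password.Password = 'TempP@ssw0rd!'
$user = New-AzureADUser -DisplayName 'CSAdmin' `
    -UserPrincipalName 'CSAdmin@contoso.onmicrosoft.com' `
    -MailNickName 'CSAdmin' -AccountEnabled $true -PasswordProfile $password

# Add the user to the global administrator role (shown as "Company Administrator"
# in this module). If the role isn't listed yet, it must be activated first.
$role = Get-AzureADDirectoryRole | Where-Object { $_.DisplayName -eq 'Company Administrator' }
Add-AzureADDirectoryRoleMember -ObjectId $role.ObjectId -RefObjectId $user.ObjectId
```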
## Optional: Additional server and forest The following is an optional section that provides steps for creating an additional server and/or forest. This can be used in some of the more advanced tutorials such as [Pilot for Azure AD Connect to cloud sync](tutorial-pilot-aadc-aadccp.md).
In order to finish building the virtual machine, you need to finish the operatin
1. In Hyper-V Manager, double-click the virtual machine. 2. Click the Start button.
-3. You will be prompted to ΓÇÿPress any key to boot from CD or DVDΓÇÖ. Go ahead and do so.
+3. You'll be prompted to 'Press any key to boot from CD or DVD'. Go ahead and do so.
4. On the Windows Server start up screen select your language and click **Next**. 5. Click **Install Now**. 6. Enter your license key and click **Next**.
active-directory Tutorial Existing Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-existing-forest.md
Title: Tutorial - Integrate an existing forest and a new forest with a single Azure AD tenant by using Azure AD Connect cloud sync
+ Title: Tutorial - Integrate an existing forest and a new forest with a single Azure AD tenant using Azure AD Connect cloud sync
description: Learn how to add cloud sync to an existing hybrid identity environment.
Previously updated : 11/11/2022 Last updated : 01/17/2023
-# Tutorial: Integrate an existing forest and a new forest with a single Azure AD tenant
+# Integrate an existing forest and a new forest with a single Azure AD tenant
This tutorial walks you through adding cloud sync to an existing hybrid identity environment.
This tutorial walks you through adding cloud sync to an existing hybrid identity
You can use the environment you create in this tutorial for testing or for getting more familiar with how a hybrid identity works.
-In this scenario, you sync an existing forest with an Azure AD tenant by using Azure Active Directory (Azure AD) Connect. You want to sync a new forest with the same Azure AD tenant. You'll set up cloud sync for the new forest.
+In this scenario, an existing forest is already synced to an Azure AD tenant by using Azure AD Connect sync, and you have a new forest that you want to sync to the same tenant. You'll set up cloud sync for the new forest.
## Prerequisites
+### In the Azure Active Directory admin center
-Before you begin, set up your environments.
-
-### In the Azure AD admin center
-
-1. Create a cloud-only global administrator account on your Azure AD tenant.
-
- This way, you can manage the configuration of your tenant if your on-premises services fail or become unavailable. [Learn how to add a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Complete this step to ensure that you don't get locked out of your tenant.
-
-1. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
+1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
+2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
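As a hedged alternative to the portal, the custom-domain steps can be scripted with the AzureAD PowerShell module; `contoso.com` below is a placeholder domain, and the DNS record must be created at your registrar before confirmation succeeds.

```powershell
# A sketch of adding and verifying a custom domain with the AzureAD module.
Connect-AzureAD
New-AzureADDomain -Name 'contoso.com'

# Retrieve the DNS records (for example, the TXT record) to create at your registrar.
Get-AzureADDomainVerificationDnsRecord -Name 'contoso.com'

# After the DNS record propagates, confirm ownership of the domain.
Confirm-AzureADDomain -Name 'contoso.com'
```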
### In your on-premises environment
-1. Identify a domain-joined host server that's running Windows Server 2012 R2 or later, with at least 4 GB of RAM and .NET 4.7.1+ runtime.
-
-1. If there's a firewall between your servers and Azure AD, configure the following items:
+1. Identify a domain-joined host server running Windows Server 2012 R2 or later with a minimum of 4 GB of RAM and the .NET 4.7.1+ runtime
+2. If there's a firewall between your servers and Azure AD, configure the following items:
- Ensure that agents can make *outbound* requests to Azure AD over the following ports: | Port number | How it's used | | | |
- | **80** | Downloads the certificate revocation lists (CRLs) while it validates the TLS/SSL certificate. |
- | **443** | Handles all outbound communication with the service. |
- | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
+ | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
+ | **443** | Handles all outbound communication with the service |
+ | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed on the Azure AD portal. |
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.-
- - If your firewall or proxy allows you to specify safe suffixes, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If it doesn't, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
-
+ - If your firewall or proxy allows you to specify safe suffixes, then add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
- Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.-
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Because these URLs are used to validate certificates for other Microsoft products, you might already have these URLs unblocked.
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
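To spot-check these firewall rules from the host server before installing the agent, one option (a sketch, not part of the official guidance) is PowerShell's `Test-NetConnection`. The endpoints below are taken from the requirements above; wildcard suffixes such as **\*.msappproxy.net** can't be tested directly this way.

```powershell
# Verify outbound reachability for endpoints the agent needs (ports 80 and 443).
$targets = @(
    @{ Host = 'login.microsoftonline.com'; Port = 443 },
    @{ Host = 'login.windows.net';         Port = 443 },
    @{ Host = 'crl.microsoft.com';         Port = 80  }
)
foreach ($t in $targets) {
    Test-NetConnection -ComputerName $t.Host -Port $t.Port |
        Select-Object ComputerName, RemotePort, TcpTestSucceeded
}
```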
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, the agent is DC1. To install the agent, do the following:
+If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, the host server would be DC1. To install the agent, follow these steps:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
If you're using the [Basic Active Directory and Azure environment](tutorial-basi
[!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)] ## Configure Azure AD Connect cloud sync-
-To configure the cloud sync setup, do the following:
+ Use the following steps to configure provisioning:
1. Sign in to the Azure AD portal.
-1. Select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select **Manage cloud sync**.
+2. Select **Azure Active Directory**
+3. Select **Azure AD Connect**
+4. Select **Manage cloud sync**
- ![Screenshot that highlights the "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-1. Select **New Configuration**.
+5. Select **New Configuration**
- ![Screenshot of the Azure AD Connect cloud sync page, with the "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
+ ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-1. On the **Configuration** page, enter a **Notification email**, move the selector to **Enable**, and then select **Save**.
+6. On the configuration screen, enter a **Notification email**, move the selector to **Enable**, and select **Save**.
- ![Screenshot of the "Edit provisioning configuration" page.](media/how-to-configure/configure-2.png)
+ ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)
1. The configuration status should now be **Healthy**.
- ![Screenshot of Azure AD Connect cloud sync page, showing a "Healthy" status.](media/how-to-configure/manage-4.png)
+ ![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)
+
+## Verify users are created and synchronization is occurring
-## Verify that users are created and synchronization is occurring
+You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. This process may take a few hours to complete. To verify users are synchronized, do the following:
-You'll now verify that the users in your on-premises Active Directory have been synchronized and exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users are synchronized, do the following:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
-1. On the left pane, select **Azure Active Directory**.
-1. Under **Manage**, select **Users**.
-1. Verify that the new users are displayed in your tenant.
+1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+2. On the left, select **Azure Active Directory**
+3. Under **Manage**, select **Users**.
+4. Verify that you see the new users in your tenant. (A PowerShell spot-check follows this list.)
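Besides the portal, you can spot-check a synced user from PowerShell. The following is a minimal sketch with the AzureAD module; 'John Smith' is a placeholder name.

```powershell
Connect-AzureAD

# Search for a user that should have been synchronized from the new forest.
Get-AzureADUser -SearchString 'John Smith' |
    Select-Object DisplayName, UserPrincipalName, DirSyncEnabled
# DirSyncEnabled = True indicates the object is synchronized from on-premises.
```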
-## Test signing in with one of your users
+## Test signing in with one of your users
-1. Go to the [Microsoft My Apps](https://myapps.microsoft.com) page.
-1. Sign in with a user account that was created in your new tenant. You'll need to sign in by using the following format: *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
+1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
+2. Sign in with a user account that was created in your new tenant. You'll need to sign in using the following format: (user@domain.onmicrosoft.com). Use the same password that the user uses to sign in on-premises.
- ![Screenshot that shows the My Apps portal with signed-in users.](media/tutorial-single-forest/verify-1.png)
+ ![Screenshot that shows the My Apps portal with a signed-in user.](media/tutorial-single-forest/verify-1.png)
You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
active-directory Tutorial Pilot Aadc Aadccp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-pilot-aadc-aadccp.md
Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced Active Directory forest
-description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced by using Azure Active Directory (Azure AD) Connect sync.
+ Title: Tutorial - Pilot Azure AD Connect cloud sync for an existing synced AD forest
+description: Learn how to pilot cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
Previously updated : 11/11/2022 Last updated : 01/18/2023
-# Pilot cloud sync for an existing synced Active Directory forest
+# Pilot cloud sync for an existing synced AD forest
-This tutorial walks you through piloting cloud sync for a test Active Directory forest that's already synced by using Azure Active Directory (Azure AD) Connect sync.
+This tutorial walks you through piloting cloud sync for a test Active Directory forest that is already synced using Azure Active Directory (Azure AD) Connect sync.
![Diagram that shows the Azure AD Connect cloud sync flow.](media/tutorial-migrate-aadc-aadccp/diagram-2.png) ## Considerations
-Before you try this tutorial, keep the following in mind:
+Before you try this tutorial, consider the following items:
-* You should be familiar with the basics of cloud sync.
+1. Ensure that you're familiar with basics of cloud sync.
-* Ensure that you're running Azure AD Connect cloud sync version 1.4.32.0 or later and you've configured the sync rules as documented.
+1. Ensure that you're running Azure AD Connect sync version 1.4.32.0 or later and have configured the sync rules as documented.
-* When you're piloting, you'll be removing a test organizational unit (OU) or group from the Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
+1. When piloting, you'll be removing a test OU or group from Azure AD Connect sync scope. Moving objects out of scope leads to deletion of those objects in Azure AD.
- - **User objects**: The objects in Azure AD that are soft-deleted and can be restored.
- - **Group objects**: The objects in Azure AD that are hard-deleted and can't be restored.
+ - User objects: the objects in Azure AD are soft-deleted and can be restored.
+ - Group objects: the objects in Azure AD are hard-deleted and can't be restored.
- A new link type has been introduced in Azure AD Connect sync, which will prevent deletions in a piloting scenario.
+ A new link type has been introduced in Azure AD Connect sync, which will prevent deletions in a piloting scenario.
-* Ensure that the objects in the pilot scope have *ms-ds-consistencyGUID* populated so that cloud sync hard matches the objects.
+1. Ensure that the objects in the pilot scope have *ms-ds-consistencyGUID* populated so that cloud sync hard-matches the objects.
> [!NOTE]
- > Azure AD Connect sync doesn't populate *ms-ds-consistencyGUID* by default for group objects.
+ > Azure AD Connect sync does not populate *ms-ds-consistencyGUID* by default for group objects.
-* This configuration is for advanced scenarios. Be sure to follow the steps documented in this tutorial precisely.
+1. This configuration is for advanced scenarios. Ensure that you follow the steps documented in this tutorial precisely.
## Prerequisites
-Before you begin, be sure that you've set up your environment to meet the following prerequisites:
+The following are prerequisites required for completing this tutorial:
-- A test environment with [Azure AD connect version 1.4.32.0 or later](https://www.microsoft.com/download/details.aspx?id=47594).
-
- To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md).
+- A test environment with Azure AD Connect sync version 1.4.32.0 or later
+- An OU or group that is in scope of sync and can be used for the pilot. We recommend starting with a small set of objects.
+- A server running Windows Server 2012 R2 or later that will host the provisioning agent.
+- Source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID*
-- An OU or group that's in scope of sync and can be used in the pilot. We recommend starting with a small set of objects.
+## Update Azure AD Connect
-- Windows Server 2012 R2 or later, which will host the provisioning agent.--- The source anchor for Azure AD Connect sync should be either *objectGuid* or *ms-ds-consistencyGUID*.
+At a minimum, you should have [Azure AD Connect](https://www.microsoft.com/download/details.aspx?id=47594) version 1.4.32.0. To update Azure AD Connect sync, complete the steps in [Azure AD Connect: Upgrade to the latest version](../hybrid/how-to-upgrade-previous-version.md).
## Stop the scheduler
-Azure AD Connect sync synchronizes changes occurring in your on-premises directory by using a scheduler. To modify and add custom rules, disable the scheduler so that synchronizations won't run while you're making the changes. To stop the scheduler:
+Azure AD Connect sync synchronizes changes occurring in your on-premises directory using a scheduler. In order to modify and add custom rules, you want to disable the scheduler so that synchronizations won't run while you're making the changes. To stop the scheduler, use the following steps:
-1. On the server that's running Azure AD Connect sync, open PowerShell with administrative privileges.
-1. Run `Stop-ADSyncSyncCycle`, and then select **Enter**.
-1. Run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
+1. On the server that is running Azure AD Connect sync, open PowerShell with administrative privileges.
+2. Run `Stop-ADSyncSyncCycle`, and then press Enter.
+3. Run `Set-ADSyncScheduler -SyncCycleEnabled $false`.
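To confirm the scheduler is actually disabled before you edit rules, one quick check (a sketch using the same ADSync cmdlets) is:

```powershell
# Disable the scheduler and confirm the change took effect.
Set-ADSyncScheduler -SyncCycleEnabled $false
(Get-ADSyncScheduler).SyncCycleEnabled   # Should return False
```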
>[!NOTE]
->If you're running your own custom scheduler for Azure AD Connect sync, be sure to disable the scheduler.
+>If you are running your own custom scheduler for Azure AD Connect sync, then please disable the scheduler.
-## Create a custom user inbound rule
+## Create custom user inbound rule
-1. Open **Synchronization Rules Editor** from the application menu in the desktop, as shown in the following screenshot:
+ 1. Launch the Synchronization Rules Editor from the application menu on the desktop, as shown below:
- ![Screenshot of the "Synchronization Rules Editor" command.](media/tutorial-migrate-aadc-aadccp/user-8.png)
+ ![Screenshot of the synchronization rule editor menu.](media/tutorial-migrate-aadc-aadccp/user-8.png)
-1. Under **Direction**, select **Inbound** from the dropdown list, and then select **Add new rule**.
+ 2. Select **Inbound** from the drop-down list for Direction and select **Add new rule**.
- ![Screenshot of the "View and manage your synchronization rules" pane, with "Inbound" and the "Add new rule" button selected.](media/tutorial-migrate-aadc-aadccp/user-1.png)
+ ![Screenshot that shows the "View and manage your synchronization rules" window with "Inbound" and the "Add new rule" button selected.](media/tutorial-migrate-aadc-aadccp/user-1.png)
-1. On the **Description** page, do the following:
+ 3. On the **Description** page, enter the following and select **Next**:
- - **Name**: Give the rule a meaningful name.
- - **Description**: Add a meaningful description.
- - **Connected System**: Select the Active Directory connector that you're writing the custom sync rule for.
- - **Connected System Object Type**: Select **User**.
- - **Metaverse Object Type**: Select **Person**.
- - **Link Type**: Select **Join**.
- - **Precedence**: Enter a value that's unique in the system.
- - **Tag**: Leave this field empty.
+ - **Name:** Give the rule a meaningful name
+ - **Description:** Add a meaningful description
+ - **Connected System:** Choose the AD connector that you're writing the custom sync rule for
+ - **Connected System Object Type:** User
+ - **Metaverse Object Type:** Person
+ - **Link Type:** Join
+ - **Precedence:** Provide a value that is unique in the system
+ - **Tag:** Leave this empty
![Screenshot that shows the "Create inbound synchronization rule - Description" page with values entered.](media/tutorial-migrate-aadc-aadccp/user-2.png)
-1. On the **Scoping filter** page, enter the OU or security group that the pilot is based on.
-
- To filter on OU, add the OU portion of the *distinguished name* (DN). This rule will be applied to all users who are in that OU. for example, if DN ends with "OU=CPUsers,DC=contoso,DC=com, add this filter.
+ 4. On the **Scoping filter** page, enter the OU or security group that you want the pilot based on. To filter on OU, add the OU portion of the distinguished name. This rule will be applied to all users who are in that OU. So, if the DN ends with "OU=CPUsers,DC=contoso,DC=com", you would add this filter. Then select **Next**.
|Rule|Attribute|Operator|Value| |--|-|-|--|
- |Scoping&nbsp;OU|DN|ENDSWITH|The distinguished name of the OU.|
- |Scoping&nbsp;group||ISMEMBEROF|The distinguished name of the security group.|
+ |Scoping OU|DN|ENDSWITH|Distinguished name of the OU.|
+ |Scoping group||ISMEMBEROF|Distinguished name of the security group.|
- ![Screenshot that shows the "Create inbound synchronization rule" page with a scoping filter value entered.](media/tutorial-migrate-aadc-aadccp/user-3.png)
+ ![Screenshot that shows the **Create inbound synchronization rule - Scoping filter** page with a scoping filter value entered.](media/tutorial-migrate-aadc-aadccp/user-3.png)
-1. Select **Next**.
-1. On the **Join** rules page, select **Next**.
-1. Under **Add transformations**, do the following:
-
- * **FlowType**: Select **Constant**.
- * **Target Attribute**: Select **cloudNoFlow**.
- * **Source**: Select **True**.
+ 5. On the **Join** rules page, select **Next**.
+ 6. On the **Transformations** page, add a **Constant** transformation that flows **True** to the **cloudNoFlow** attribute. Select **Add**.
![Screenshot that shows the **Create inbound synchronization rule - Transformations** page with a **Constant transformation** flow added.](media/tutorial-migrate-aadc-aadccp/user-4.png)
-1. Select **Next**.
-
-1. Select **Add**.
-
-Follow the same steps for all object types (*user*, *group*, and *contact*). Repeat the steps for each configured AD Connector and Active Directory forest.
-
-## Create a custom user outbound rule
+The same steps need to be followed for all object types (user, group, and contact). Repeat the steps for each configured AD connector and each AD forest; a scripted sketch of the rule follows below.
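For repeated pilots, the same inbound rule can be scripted. The sketch below follows the pattern of scripts exported by the Synchronization Rules Editor; the rule name, connector GUID, precedence, and OU are placeholders, so export one of your own rules to confirm the exact parameters before relying on this.

```powershell
# Hedged sketch: scripted equivalent of the inbound cloudNoFlow rule above.
Import-Module ADSync

New-ADSyncRule `
    -Name 'In from AD - User CloudNoFlow Pilot' `
    -Identifier ([guid]::NewGuid()) `
    -Description 'Pilot rule: flow cloudNoFlow=True for the pilot OU' `
    -Direction 'Inbound' `
    -Precedence 50 `
    -SourceObjectType 'user' `
    -TargetObjectType 'person' `
    -Connector '<AD-connector-GUID>' `
    -LinkType 'Join' `
    -OutVariable syncRule

# Scope the rule to the pilot OU (DN ENDSWITH).
New-Object -TypeName 'Microsoft.IdentityManagement.PowerShell.ObjectModel.ScopeCondition' `
    -ArgumentList 'dn', 'OU=CPUsers,DC=contoso,DC=com', 'ENDSWITH' -OutVariable condition0

Add-ADSyncScopeConditionGroup -SynchronizationRule $syncRule[0] `
    -ScopeConditions @($condition0[0]) -OutVariable syncRule

# Flow the constant True to the metaverse cloudNoFlow attribute.
Add-ADSyncAttributeFlowMapping -SynchronizationRule $syncRule[0] `
    -Source @('True') -Destination 'cloudNoFlow' `
    -FlowType 'Constant' -ValueMergeType 'Update' -OutVariable syncRule

# Commit the rule to the sync engine.
Add-ADSyncRule -SynchronizationRule $syncRule[0]
```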
-1. In the **Direction** dropdown list, select **Outbound**, and then select **Add rule**.
+## Create custom user outbound rule
- ![Screenshot that highlights the selected "Outbound" direction and the "Add new rule" button.](media/tutorial-migrate-aadc-aadccp/user-5.png)
+ 1. Select **Outbound** from the drop-down list for Direction and select **Add rule**.
-1. On the **Description** page, do the following:
+ ![Screenshot that shows the **Outbound** Direction selected and the **Add new rule** button highlighted.](media/tutorial-migrate-aadc-aadccp/user-5.png)
- - **Name**: Give the rule a meaningful name.
- - **Description**: Add a meaningful description.
- - **Connected System**: Select the Azure AD connector that you're writing the custom sync rule for.
- - **Connected System Object Type**: Select **User**.
- - **Metaverse Object Type**: Select **Person**.
- - **Link Type**: Select **JoinNoFlow**.
- - **Precedence**: Enter a value that's unique in the system.
- - **Tag**: Leave this field empty.
+ 2. On the **Description** page, enter the following and select **Next**:
- ![Screenshot of the "Create outbound synchronization rule" pane with properties entered.](media/tutorial-migrate-aadc-aadccp/user-6.png)
+ - **Name:** Give the rule a meaningful name
+ - **Description:** Add a meaningful description
+ - **Connected System:** Choose the Azure AD connector that you're writing the custom sync rule for
+ - **Connected System Object Type:** User
+ - **Metaverse Object Type:** Person
+ - **Link Type:** JoinNoFlow
+ - **Precedence:** Provide a value that is unique in the system
+ - **Tag:** Leave this empty
-1. Select **Next**.
+ ![Screenshot that shows the **Description** page with properties entered.](media/tutorial-migrate-aadc-aadccp/user-6.png)
-1. On the **Create outbound synchronization rule** pane, under **Add scoping filters**, do the following:
-
- * **Attribute**: Select **cloudNoFlow**.
- * **Operator**: Select **EQUAL**.
- * **Value**: Select **True**.
+ 3. On the **Scoping filter** page, choose **cloudNoFlow** equal **True**. Then select **Next**.
![Screenshot that shows a custom rule.](media/tutorial-migrate-aadc-aadccp/user-7.png)
-1. Select **Next**.
-
-1. On the **Join** rules pane, select **Next**.
-
-1. On the **Transformations** pane, select **Add**.
+ 4. On the **Join** rules page, select **Next**.
+ 5. On the **Transformations** page, select **Add**.
-Follow the same steps for all object types (*user*, *group*, and *contact*).
+The same steps need to be followed for all object types (user, group, and contact).
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, the agent is CP1. To install the agent, do the following:
+If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, the host server would be CP1. To install the agent, follow these steps:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
-## Verify the agent installation
+## Verify agent installation
[!INCLUDE [active-directory-cloud-sync-how-to-verify-installation](../../../includes/active-directory-cloud-sync-how-to-verify-installation.md)] ## Configure Azure AD Connect cloud sync
-To configure the cloud sync setup, do the following:
+Use the following steps to configure provisioning:
-1. Sign in to the Azure AD portal.
-1. Select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select the **Manage provisioning (Preview)** link.
+1. Sign in to the Azure AD portal.
+2. Select **Azure Active Directory**
+3. Select **Azure AD Connect**
+4. Select **Manage cloud sync**
- ![Screenshot that shows the "Manage provisioning (Preview)" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-1. Select **New Configuration**
+5. Select **New Configuration**
- ![Screenshot that highlights the "New configuration" link.](media/tutorial-single-forest/configure-1.png)
+ ![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)
-1. On the **Configure** pane, under **Settings**, enter a **Notification email** and then, under **Deploy**, move the selector to **Enable**.
+6. On the configuration screen, enter a **Notification email**, move the selector to **Enable** and select **Save**.
- ![Screenshot of the "Configure" pane, with a notification email entered and "Enable" selected.](media/tutorial-single-forest/configure-2.png)
+ ![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/tutorial-single-forest/configure-2.png)
-1. Select **Save**.
+7. Under **Configure**, select **All users** to change the scope of the configuration rule.
-1. Under **Scope**, select the **All users** link to change the scope of the configuration rule.
-
- ![Screenshot of the "Configure" pane, with the "All users" link highlighted.](media/how-to-configure/scope-2.png)
+ ![Screenshot of Configure screen with "All users" highlighted next to "Scope users".](media/how-to-configure/scope-2.png)
-1. Under **Scope users**, change the scope to include the OU that you created: **OU=CPUsers,DC=contoso,DC=com**.
+8. On the right, change the scope to include the specific OU you created "OU=CPUsers,DC=contoso,DC=com".
- ![Screenshot of the "Scope users" page, highlighting the scope that's changed to the OU you created.](media/tutorial-existing-forest/scope-2.png)
+ ![Screenshot of the Scope users screen highlighting the scope changed to the OU you created.](media/tutorial-existing-forest/scope-2.png)
-1. Select **Done** and **Save**.
-
- The scope should now be set to **1 organizational unit**.
+9. Select **Done** and **Save**.
+10. The scope should now be set to one organizational unit.
- ![Screenshot of the "Configure" page, with "1 organizational unit" highlighted next to "Scope users".](media/tutorial-existing-forest/scope-3.png)
+ ![Screenshot of Configure screen with "1 organizational unit" highlighted next to "Scope users".](media/tutorial-existing-forest/scope-3.png)
-## Verify that users have been set up by cloud sync
+## Verify users are provisioned by cloud sync
-You'll now verify that the users in your on-premises Active Directory have been synchronized and now exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users have been synchronized, do the following:
+You'll now verify that the users that you had in your on-premises directory have been synchronized and now exist in your Azure AD tenant. This process may take a few hours to complete. To verify users are provisioned by cloud sync, follow these steps:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
-1. On the left pane, select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select **Manage cloud sync**.
-1. Select the **Logs** button.
-1. Search for a username to confirm that the user has been set up by cloud sync.
+1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+2. On the left, select **Azure Active Directory**
+3. Select **Azure AD Connect**
+4. Select **Manage cloud sync**
+5. Select the **Logs** button
+6. Search for a username to confirm that the user is provisioned by cloud sync
Additionally, you can verify that the user and group exist in Azure AD. ## Start the scheduler
-Azure AD Connect sync synchronizes changes that occur in your on-premises directory by using a scheduler. Now that you've modified the rules, you can restart the scheduler.
+Azure AD Connect sync synchronizes changes occurring in your on-premises directory using a scheduler. Now that you've modified the rules, you can restart the scheduler. Use the following steps:
-1. On the server that's running Azure AD Connect sync, open PowerShell with administrative privileges.
-1. Run `Set-ADSyncScheduler -SyncCycleEnabled $true`.
-1. Run `Start-ADSyncSyncCycle`, and then select <kbd>Enter</kbd>.
+1. On the server that is running Azure AD Connect sync, open PowerShell with administrative privileges.
+2. Run `Set-ADSyncScheduler -SyncCycleEnabled $true`.
+3. Run `Start-ADSyncSyncCycle`, then press <kbd>Enter</kbd>.
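Equivalently, you can re-enable the scheduler, trigger a delta cycle, and confirm the state in one short session (a sketch using the same ADSync cmdlets):

```powershell
# Re-enable the scheduler and trigger a delta synchronization immediately.
Set-ADSyncScheduler -SyncCycleEnabled $true
Start-ADSyncSyncCycle -PolicyType Delta
(Get-ADSyncScheduler).SyncCycleEnabled   # Should return True
```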
> [!NOTE]
-> If you're running your own custom scheduler for Azure AD Connect sync, be sure to enable the scheduler.
-
-After the scheduler is enabled, Azure AD Connect stops exporting any changes on objects with `cloudNoFlow=true` in the metaverse, unless any reference attribute (such as `manager`) is being updated.
+> If you are running your own custom scheduler for Azure AD Connect sync, then please enable the scheduler.
-If there's any reference attribute update on the object, Azure AD Connect will ignore the `cloudNoFlow` signal and export all updates on the object.
+Once the scheduler is enabled, Azure AD Connect will stop exporting any changes on objects with `cloudNoFlow=true` in the metaverse, unless any reference attribute (such as `manager`) is being updated. In case there's any reference attribute update on the object, Azure AD Connect will ignore the `cloudNoFlow` signal and export all updates on the object.
-## Does your setup work?
+## Something went wrong
-If the pilot doesn't work as you had expected, you can go back to the Azure AD Connect sync setup by doing the following:
+If the pilot doesn't work as expected, you can go back to the Azure AD Connect sync setup by following the steps below:
-1. Disable the provisioning configuration in the Azure portal.
-1. Disable all the custom sync rules that were created for cloud provisioning by using the Sync Rule Editor tool. Disabling the rules should result in a full sync of all the connectors.
+1. Disable provisioning configuration in the Azure portal.
+2. Disable all the custom sync rules created for cloud provisioning by using the Sync Rule Editor tool. Disabling the rules should cause a full sync on all the connectors.
## Next steps
active-directory Tutorial Single Forest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/tutorial-single-forest.md
Title: Tutorial - Integrate a single forest with a single Azure AD tenant
-description: This article describes the prerequisites and the hardware requirements for using Azure AD Connect cloud sync.
+description: This topic describes the prerequisites and the hardware requirements for cloud sync.
Previously updated : 11/11/2022 Last updated : 01/17/2023
# Tutorial: Integrate a single forest with a single Azure AD tenant
-This tutorial walks you through creating a hybrid identity environment by using Azure Active Directory (Azure AD) Connect cloud sync.
+This tutorial walks you through creating a hybrid identity environment using Azure Active Directory (Azure AD) Connect cloud sync.
![Diagram that shows the Azure AD Connect cloud sync flow.](media/tutorial-single-forest/diagram-2.png)
You can use the environment you create in this tutorial for testing or for getti
## Prerequisites
-Before you begin, set up your environments by doing the following.
- ### In the Azure Active Directory admin center
-1. Create a cloud-only global administrator account on your Azure AD tenant.
-
- This way, you can manage the configuration of your tenant if your on-premises services fail or become unavailable. [Learn how to add a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Complete this step to ensure that you don't get locked out of your tenant.
-
-1. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
+1. Create a cloud-only global administrator account on your Azure AD tenant. This way, you can manage the configuration of your tenant should your on-premises services fail or become unavailable. Learn about [adding a cloud-only global administrator account](../fundamentals/add-users-azure-active-directory.md). Completing this step is critical to ensure that you don't get locked out of your tenant.
+2. Add one or more [custom domain names](../fundamentals/add-custom-domain.md) to your Azure AD tenant. Your users can sign in with one of these domain names.
### In your on-premises environment
-1. Identify a domain-joined host server that's running Windows Server 2016 or later, with at least 4 GB of RAM and .NET 4.7.1+ runtime.
-
-1. If there's a firewall between your servers and Azure AD, configure the following items:
+1. Identify a domain-joined host server running Windows Server 2016 or later with a minimum of 4 GB of RAM and the .NET 4.7.1+ runtime
+2. If there's a firewall between your servers and Azure AD, configure the following items:
- Ensure that agents can make *outbound* requests to Azure AD over the following ports: | Port number | How it's used | | | |
- | **80** | Downloads the certificate revocation lists (CRLs) while it validates the TLS/SSL certificate. |
- | **443** | Handles all outbound communication with the service. |
- | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed in the Azure AD portal. |
+ | **80** | Downloads the certificate revocation lists (CRLs) while validating the TLS/SSL certificate |
+ | **443** | Handles all outbound communication with the service |
+ | **8080** (optional) | Agents report their status every 10 minutes over port 8080, if port 443 is unavailable. This status is displayed on the Azure AD portal. |
If your firewall enforces rules according to the originating users, open these ports for traffic from Windows services that run as a network service.-
- - If your firewall or proxy allows you to specify safe suffixes, add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
-
+ - If your firewall or proxy allows you to specify safe suffixes, then add connections to **\*.msappproxy.net** and **\*.servicebus.windows.net**. If not, allow access to the [Azure datacenter IP ranges](https://www.microsoft.com/download/details.aspx?id=41653), which are updated weekly.
   - Your agents need access to **login.windows.net** and **login.microsoftonline.com** for initial registration. Open your firewall for those URLs as well.
- - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Because these URLs are used to validate certificates for other Microsoft products, you might already have these URLs unblocked.
+ - For certificate validation, unblock the following URLs: **mscrl.microsoft.com:80**, **crl.microsoft.com:80**, **ocsp.msocsp.com:80**, and **www\.microsoft.com:80**. Since these URLs are used for certificate validation with other Microsoft products, you may already have these URLs unblocked.
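Before you install the agent, you can spot-check outbound reachability from the intended host. The following is a minimal sketch using `Test-NetConnection` (included with Windows Server 2016 and later); the hosts and ports come from the lists above:

```powershell
# Spot-check outbound connectivity from the intended agent host.
# Hosts and ports are taken from the firewall requirements above.
$targets = @(
    @{ Host = 'login.microsoftonline.com'; Port = 443 },
    @{ Host = 'login.windows.net';         Port = 443 },
    @{ Host = 'crl.microsoft.com';         Port = 80  }
)

foreach ($t in $targets) {
    $r = Test-NetConnection -ComputerName $t.Host -Port $t.Port -WarningAction SilentlyContinue
    '{0}:{1} reachable: {2}' -f $t.Host, $t.Port, $r.TcpTestSucceeded
}
```

A `False` result usually points to a proxy or firewall rule that still needs the safe suffixes or IP ranges described above.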
## Install the Azure AD Connect provisioning agent
-If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1. To install the agent, follow these steps:
+If you're using the [Basic AD and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1. To install the agent, follow these steps:
[!INCLUDE [active-directory-cloud-sync-how-to-install](../../../includes/active-directory-cloud-sync-how-to-install.md)]
If you're using the [Basic Active Directory and Azure environment](tutorial-basic-ad-azure.md) tutorial, it would be DC1.
## Configure Azure AD Connect cloud sync
-To configure provisioning, do the following:
+Use the following steps to configure and start provisioning:
-1. Sign in to the Azure AD portal.
-1. Select **Azure Active Directory**.
-1. Select **Azure AD Connect**.
-1. Select **Manage cloud sync**.
+1. Sign in to the Azure AD portal.
+1. Select **Azure Active Directory**.
+1. Select **Azure AD Connect**.
+1. Select **Manage cloud sync**.
- ![Screenshot that shows the "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
+ ![Screenshot showing "Manage cloud sync" link.](media/how-to-configure/manage-1.png)
-1. Select **New Configuration**.
+1. Select **New Configuration**.
+
+ [![Screenshot of Azure AD Connect cloud sync screen with "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png)](media/tutorial-single-forest/configure-1.png#lightbox)
- ![Screenshot of the Azure AD Connect cloud sync page, with the "New configuration" link highlighted.](media/tutorial-single-forest/configure-1.png#lightbox)
+1. On the configuration screen, enter a **Notification email**, move the selector to **Enable**, and select **Save**.
-1. On the **Configuration** page, enter a **Notification email**, move the selector to **Enable**, and then select **Save**.
+ [![Screenshot of Configure screen with Notification email filled in and Enable selected.](media/how-to-configure/configure-2.png)](media/how-to-configure/configure-2.png#lightbox)
- ![Screenshot of the "Edit provisioning configuration" page.](media/how-to-configure/configure-2.png#lightbox)
+1. The configuration status should now be **Healthy**.
-1. The configuration status should now be **Healthy**.
+ [![Screenshot of Azure AD Connect cloud sync screen showing Healthy status.](media/how-to-configure/manage-4.png)](media/how-to-configure/manage-4.png#lightbox)
- ![Screenshot of the "Azure AD Connect cloud sync" page, showing a "Healthy" status.](media/how-to-configure/manage-4.png#lightbox)
+## Verify users are created and synchronization is occurring
-## Verify that users are created and synchronization is occurring
+You'll now verify that the users in your on-premises directory have been synchronized and now exist in your Azure AD tenant. The sync operation may take a few hours to complete. To verify that users are synchronized, follow these steps:
-You'll now verify that the users in your on-premises directory have been synchronized and exist in your Azure AD tenant. This process might take a few hours to complete. To verify that the users are synchronized, do the following:
-1. Sign in to the [Azure portal](https://portal.azure.com) with an account that has an Azure subscription.
-1. On the left pane, select **Azure Active Directory**.
-1. Under **Manage**, select **Users**.
-1. Verify that the new users are displayed in your tenant.
+1. Browse to the [Azure portal](https://portal.azure.com) and sign in with an account that has an Azure subscription.
+2. On the left, select **Azure Active Directory**.
+3. Under **Manage**, select **Users**.
+4. Verify that the new users appear in your tenant.
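If you'd rather script this check, the following sketch lists directory-synced users. It assumes the Microsoft Graph PowerShell SDK is installed and that you can consent to `User.Read.All`; filtering on `onPremisesSyncEnabled` is an advanced query, hence the consistency-level and count parameters:

```powershell
# List users that were synchronized from on-premises.
Connect-MgGraph -Scopes 'User.Read.All'
Get-MgUser -Filter 'onPremisesSyncEnabled eq true' `
    -ConsistencyLevel eventual -CountVariable syncedCount -All |
    Select-Object DisplayName, UserPrincipalName
"Synced users found: $syncedCount"
```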
## Test signing in with one of your users
-1. Go to the [Microsoft My Apps](https://myapps.microsoft.com) page.
-1. Sign in with a user account that was created in your new tenant. You'll need to sign in by using the following format: *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
+1. Browse to [https://myapps.microsoft.com](https://myapps.microsoft.com)
+
+1. Sign in with a user account that was created in your tenant. You'll need to sign in using the format *user@domain.onmicrosoft.com*. Use the same password that the user uses to sign in on-premises.
- ![Screenshot that shows the My Apps portal with signed-in users.](media/tutorial-single-forest/verify-1.png)
+   ![Screenshot that shows the My Apps portal with a signed-in user.](media/tutorial-single-forest/verify-1.png)
-You have now successfully set up a hybrid identity environment that you can use to test and familiarize yourself with what Azure has to offer.
+You've now successfully configured a hybrid identity environment using Azure AD Connect cloud sync.
## Next steps

- [What is provisioning?](what-is-provisioning.md)
-- [What is Azure AD Connect cloud sync?](what-is-cloud-sync.md)
+- [What is Azure AD Connect cloud provisioning?](what-is-cloud-sync.md)
active-directory What Is Cloud Sync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-cloud-sync.md
Previously updated : 01/25/2022 Last updated : 01/17/2023
active-directory What Is Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/cloud-sync/what-is-provisioning.md
Previously updated : 12/05/2019 Last updated : 01/17/2023
active-directory Access Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/access-tokens.md
Web APIs have one of the following versions selected as a default during registration:
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsIng1dCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiJlZjFkYTlkNC1mZjc3LTRjM2UtYTAwNS04NDBjM2Y4MzA3NDUiLCJpc3MiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC9mYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTUyMjIyOS8iLCJpYXQiOjE1MzcyMzMxMDYsIm5iZiI6MTUzNzIzMzEwNiwiZXhwIjoxNTM3MjM3MDA2LCJhY3IiOiIxIiwiYWlvIjoiQVhRQWkvOElBQUFBRm0rRS9RVEcrZ0ZuVnhMaldkdzhLKzYxQUdyU091TU1GNmViYU1qN1hPM0libUQzZkdtck95RCtOdlp5R24yVmFUL2tES1h3NE1JaHJnR1ZxNkJuOHdMWG9UMUxrSVorRnpRVmtKUFBMUU9WNEtjWHFTbENWUERTL0RpQ0RnRTIyMlRJbU12V05hRU1hVU9Uc0lHdlRRPT0iLCJhbXIiOlsid2lhIl0sImFwcGlkIjoiNzVkYmU3N2YtMTBhMy00ZTU5LTg1ZmQtOGMxMjc1NDRmMTdjIiwiYXBwaWRhY3IiOiIwIiwiZW1haWwiOiJBYmVMaUBtaWNyb3NvZnQuY29tIiwiZmFtaWx5X25hbWUiOiJMaW5jb2xuIiwiZ2l2ZW5fbmFtZSI6IkFiZSAoTVNGVCkiLCJpZHAiOiJodHRwczovL3N0cy53aW5kb3dzLm5ldC83MmY5ODhiZi04NmYxLTQxYWYtOTFhYi0yZDdjZDAxMjIyNDcvIiwiaXBhZGRyIjoiMjIyLjIyMi4yMjIuMjIiLCJuYW1lIjoiYWJlbGkiLCJvaWQiOiIwMjIyM2I2Yi1hYTFkLTQyZDQtOWVjMC0xYjJiYjkxOTQ0MzgiLCJyaCI6IkkiLCJzY3AiOiJ1c2VyX2ltcGVyc29uYXRpb24iLCJzdWIiOiJsM19yb0lTUVUyMjJiVUxTOXlpMmswWHBxcE9pTXo1SDNaQUNvMUdlWEEiLCJ0aWQiOiJmYTE1ZDY5Mi1lOWM3LTQ0NjAtYTc0My0yOWYyOTU2ZmQ0MjkiLCJ1bmlxdWVfbmFtZSI6ImFiZWxpQG1pY3Jvc29mdC5jb20iLCJ1dGkiOiJGVnNHeFlYSTMwLVR1aWt1dVVvRkFBIiwidmVyIjoiMS4wIn0.D3H6pMUtQnoJAGq6AHd
```

-- v2.0 for applications that support consumer accounts. The following example shows a v1.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
+- v2.0 for applications that support consumer accounts. The following example shows a v2.0 token (this token example won't validate because the keys have rotated prior to publication and personal information has been removed):
```
eyJ0eXAiOiJKV1QiLCJhbGciOiJSUzI1NiIsImtpZCI6Imk2bEdrM0ZaenhSY1ViMkMzbkVRN3N5SEpsWSJ9.eyJhdWQiOiI2ZTc0MTcyYi1iZTU2LTQ4NDMtOWZmNC1lNjZhMzliYjEyZTMiLCJpc3MiOiJodHRwczovL2xvZ2luLm1pY3Jvc29mdG9ubGluZS5jb20vNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3L3YyLjAiLCJpYXQiOjE1MzcyMzEwNDgsIm5iZiI6MTUzNzIzMTA0OCwiZXhwIjoxNTM3MjM0OTQ4LCJhaW8iOiJBWFFBaS84SUFBQUF0QWFaTG8zQ2hNaWY2S09udHRSQjdlQnE0L0RjY1F6amNKR3hQWXkvQzNqRGFOR3hYZDZ3TklJVkdSZ2hOUm53SjFsT2NBbk5aY2p2a295ckZ4Q3R0djMzMTQwUmlvT0ZKNGJDQ0dWdW9DYWcxdU9UVDIyMjIyZ0h3TFBZUS91Zjc5UVgrMEtJaWpkcm1wNjlSY3R6bVE9PSIsImF6cCI6IjZlNzQxNzJiLWJlNTYtNDg0My05ZmY0LWU2NmEzOWJiMTJlMyIsImF6cGFjciI6IjAiLCJuYW1lIjoiQWJlIExpbmNvbG4iLCJvaWQiOiI2OTAyMjJiZS1mZjFhLTRkNTYtYWJkMS03ZTRmN2QzOGU0NzQiLCJwcmVmZXJyZWRfdXNlcm5hbWUiOiJhYmVsaUBtaWNyb3NvZnQuY29tIiwicmgiOiJJIiwic2NwIjoiYWNjZXNzX2FzX3VzZXIiLCJzdWIiOiJIS1pwZmFIeVdhZGVPb3VZbGl0anJJLUtmZlRtMjIyWDVyclYzeERxZktRIiwidGlkIjoiNzJmOTg4YmYtODZmMS00MWFmLTkxYWItMmQ3Y2QwMTFkYjQ3IiwidXRpIjoiZnFpQnFYTFBqMGVRYTgyUy1JWUZBQSIsInZlciI6IjIuMCJ9.pj4N-w_3Us9DrBLfpCt
```
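To inspect claims such as `ver` in tokens like these yourself, a small sketch that base64url-decodes the JWT payload can help (debugging only; it performs no signature validation):

```powershell
# Decode the payload (middle segment) of a JWT for inspection.
$jwt = '<paste a token here>'
$payload = $jwt.Split('.')[1].Replace('-', '+').Replace('_', '/')
# Restore the padding that base64url encoding strips.
switch ($payload.Length % 4) { 2 { $payload += '==' } 3 { $payload += '=' } }
[System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String($payload)) | ConvertFrom-Json
```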
active-directory Active Directory Authentication Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-authentication-protocols.md
Last updated 09/27/2021-+
active-directory Active Directory Claims Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-claims-mapping.md
- Title: Customize Azure AD tenant app claims (PowerShell)
-description: Learn how to customize claims emitted in tokens for an application in a specific Azure Active Directory tenant.
------- Previously updated : 01/06/2023----
-# Customize claims emitted in tokens for a specific app in a tenant
-
-A claim is information that an identity provider states about a user inside the token they issue for that user. Claims customization is used by tenant admins to customize the claims emitted in tokens for a specific application in their tenant. You can use claims-mapping policies to:
-
-- Select which claims are included in tokens.
-- Create claim types that don't already exist.
-- Choose or change the source of data emitted in specific claims.
-
-Claims customization supports configuring claim-mapping policies for the WS-Fed, SAML, OAuth, and OpenID Connect protocols.
-
-This feature replaces and supersedes the [claims customization](active-directory-saml-claims-customization.md) offered through the Azure portal. On the same application, if you customize claims using the portal in addition to the Microsoft Graph/PowerShell method detailed in this document, tokens issued for that application will ignore the configuration in the portal. Configurations made through the methods detailed in this document won't be reflected in the portal.
-
-In this article, we walk through a few common scenarios that can help you understand how to use the [claims-mapping policy type](reference-claims-mapping-policy-type.md).
-
-## Get started
-
-In the following examples, you create, update, link, and delete policies for service principals. Claims-mapping policies can only be assigned to service principal objects. If you're new to Azure Active Directory (Azure AD), we recommend that you [learn about how to get an Azure AD tenant](quickstart-create-new-tenant.md) before you proceed with these examples.
-
-When creating a claims-mapping policy, you can also emit a claim from a directory extension attribute in tokens. Use _ExtensionID_ for the extension attribute instead of _ID_ in the `ClaimsSchema` element. For more information about using extension attributes, see [Using directory extension attributes](active-directory-schema-extensions.md).
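As an illustration, a `ClaimsSchema` entry for such an attribute might look like the following sketch; the attribute name `extension_<appId>_skypeId` is hypothetical, so substitute the ID of the application that registered the extension:

```powershell
# Hypothetical ClaimsSchema fragment for a directory extension attribute.
# Note ExtensionID in place of ID; extension_<appId>_skypeId is illustrative.
$schemaEntry = '{"Source":"user","ExtensionID":"extension_<appId>_skypeId","JwtClaimType":"skypeId"}'
```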
-
-The [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview) is required to configure claims-mapping policies. The PowerShell module is in preview, while the claims mapping and token creation runtime in Azure is generally available. Updates to the preview PowerShell module could require you to update or change your configuration scripts.
-
-To get started, do the following steps:
-
-1. Download the latest [Azure AD PowerShell Module public preview release](https://www.powershellgallery.com/packages/AzureADPreview).
-1. Run the [Connect-AzureAD](/powershell/module/azuread/connect-azuread?view=azureadps-2.0-preview&preserve-view=true) command to sign in to your Azure AD admin account. Run this command each time you start a new session.
-
- ```powershell
- Connect-AzureAD -Confirm
- ```
-
-1. To see all policies that have been created in your organization, run the following command. We recommend that you run this command after most operations in the following scenarios, to check that your policies are being created as expected.
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-Next, create a claims mapping policy and assign it to a service principal. See these examples for common scenarios:
-
-- [Omit the basic claims from tokens](#omit-the-basic-claims-from-tokens)
-- [Include the EmployeeID and TenantCountry as claims in tokens](#include-the-employeeid-and-tenantcountry-as-claims-in-tokens)
-- [Use a claims transformation in tokens](#use-a-claims-transformation-in-tokens)
-
-After creating a claims mapping policy, configure your application to acknowledge that tokens will contain customized claims. For more information, read [security considerations](#security-considerations).
-
-## Omit the basic claims from tokens
-
-In this example, you create a policy that removes the [basic claim set](reference-claims-mapping-policy-type.md#claim-sets) from tokens issued to linked service principals.
-
-1. Create a claims-mapping policy. This policy, linked to specific service principals, removes the basic claim set from tokens.
-
- 1. To create the policy, run this command:
-
- ```powershell
- New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false"}}') -DisplayName "OmitBasicClaims" -Type "ClaimsMappingPolicy"
- ```
-
- 2. To see your new policy, and to get the policy ObjectId, run the following command:
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
-
- 1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
- 2. When you have the ObjectId of your service principal, run the following command:
-
- ```powershell
- Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
- ```
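To confirm the assignment took effect, you can list the policies now linked to the service principal; a quick sketch using the same preview module:

```powershell
# List the policies linked to the service principal.
Get-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal>
```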
-
-## Include the EmployeeID and TenantCountry as claims in tokens
-
-In this example, you create a policy that adds the EmployeeID and TenantCountry to tokens issued to linked service principals. The EmployeeID is emitted as the name claim type in both SAML tokens and JWTs. The TenantCountry is emitted as the country/region claim type in both SAML tokens and JWTs. In this example, we continue to include the basic claims set in the tokens.
-
-1. Create a claims-mapping policy. This policy, linked to specific service principals, adds the EmployeeID and TenantCountry claims to tokens.
-
- 1. To create the policy, run the following command:
-
- ```powershell
- New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"true", "ClaimsSchema": [{"Source":"user","ID":"employeeid","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/employeeid","JwtClaimType":"employeeid"},{"Source":"company","ID":"tenantcountry","SamlClaimType":"http://schemas.xmlsoap.org/ws/2005/05/identity/claims/country","JwtClaimType":"country"}]}}') -DisplayName "ExtraClaimsExample" -Type "ClaimsMappingPolicy"
- ```
-
- When you define a claims mapping policy for a directory extension attribute, use the `ExtensionID` property instead of the `ID` property within the body of the `ClaimsSchema` array.
-
- 2. To see your new policy, and to get the policy ObjectId, run the following command:
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
-
- 1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
- 2. When you have the ObjectId of your service principal, run the following command:
-
- ```powershell
- Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
- ```
-
-## Use a claims transformation in tokens
-
-In this example, you create a policy that emits a custom claim "JoinedData" to JWTs issued to linked service principals. This claim contains a value created by joining the data stored in the extensionattribute1 attribute on the user object with ".sandbox". In this example, we exclude the basic claims set in the tokens.
-
-1. Create a claims-mapping policy. This policy, linked to specific service principals, emits the JoinedData claim to tokens.
-
- 1. To create the policy, run the following command:
-
- ```powershell
-    # The command was elided here; this is a reconstruction based on the description
-    # above (join extensionattribute1 with ".sandbox" into a "JoinedData" JWT claim).
-    New-AzureADPolicy -Definition @('{"ClaimsMappingPolicy":{"Version":1,"IncludeBasicClaimSet":"false","ClaimsSchema":[{"Source":"user","ID":"extensionattribute1"},{"Source":"transformation","ID":"DataJoin","TransformationId":"JoinTheData","JwtClaimType":"JoinedData"}],"ClaimsTransformations":[{"ID":"JoinTheData","TransformationMethod":"Join","InputClaims":[{"ClaimTypeReferenceId":"extensionattribute1","TransformationClaimType":"string1"}],"InputParameters":[{"ID":"string2","Value":"sandbox"},{"ID":"separator","Value":"."}],"OutputClaims":[{"ClaimTypeReferenceId":"DataJoin","TransformationClaimType":"outputClaim"}]}]}}') -DisplayName "TransformClaimsExample" -Type "ClaimsMappingPolicy"
- ```
-
- 2. To see your new policy, and to get the policy ObjectId, run the following command:
-
- ```powershell
- Get-AzureADPolicy
- ```
-
-1. Assign the policy to your service principal. You also need to get the ObjectId of your service principal.
-
- 1. To see all your organization's service principals, you can [query the Microsoft Graph API](/graph/traverse-the-graph). Or, in [Microsoft Graph Explorer](https://developer.microsoft.com/graph/graph-explorer), sign in to your Azure AD account.
- 2. When you have the ObjectId of your service principal, run the following command:
-
- ```powershell
- Add-AzureADServicePrincipalPolicy -Id <ObjectId of the ServicePrincipal> -RefObjectId <ObjectId of the Policy>
- ```
-
-## Security considerations
-
-Applications that receive tokens rely on the fact that the claim values are authoritatively issued by Azure AD and can't be tampered with. However, when you modify the token contents through claims-mapping policies, these assumptions may no longer be correct. Applications must explicitly acknowledge that tokens have been modified by the creator of the claims-mapping policy to protect themselves from claims-mapping policies created by malicious actors. This can be done in one of the following ways:
-
-- [Configure a custom signing key](#configure-a-custom-signing-key)
-- Or, [update the application manifest](#update-the-application-manifest) to accept mapped claims.
-
-Without this, Azure AD will return an [`AADSTS50146` error code](reference-aadsts-error-codes.md#aadsts-error-codes).
-
-### Configure a custom signing key
-
-For multi-tenant apps, a custom signing key should be used. Don't set `acceptMappedClaims` in the app manifest. If you set up an app in the Azure portal, you get an app registration object and a service principal in your tenant. That app uses the Azure global signing key, which can't be used for customizing claims in tokens. To get custom claims in tokens, create a custom signing key from a certificate and add it to the service principal. For testing purposes, you can use a self-signed certificate. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
-
-Add the following information to the service principal:
-
-- Private key (as a [key credential](/graph/api/resources/keycredential))
-- Password (as a [password credential](/graph/api/resources/passwordcredential))
-- Public key (as a [key credential](/graph/api/resources/keycredential))
-
-Extract the private and public key base-64 encoded from the PFX file export of your certificate. Make sure that the `keyId` for the `keyCredential` used for "Sign" matches the `keyId` of the `passwordCredential`. You can generate the `customkeyIdentifier` by getting the hash of the cert's thumbprint.
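A sketch of that hash computation (mirroring what the full script later in this section does), where `$cert` is your certificate object:

```powershell
# Derive customKeyIdentifier as the SHA-256 hash of the certificate thumbprint.
$hasher = [System.Security.Cryptography.SHA256]::Create()
$hash = $hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($cert.Thumbprint))
$customKeyIdentifier = [System.Convert]::ToBase64String($hash)
```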
-
-#### Request
-
-The following shows the format of the HTTP PATCH request to add a custom signing key to a service principal. The "key" value in the `keyCredentials` property is shortened for readability. The value is base-64 encoded. For the private key, the property usage is "Sign". For the public key, the property usage is "Verify".
-
-```
-PATCH https://graph.microsoft.com/v1.0/servicePrincipals/f47a6776-bca7-4f2e-bc6c-eec59d058e3e
-
-Content-type: application/json
-Authorization: Bearer {token}
-
-{
- "keyCredentials":[
- {
- "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
- "endDateTime": "2021-04-22T22:10:13Z",
- "keyId": "4c266507-3e74-4b91-aeba-18a25b450f6e",
- "startDateTime": "2020-04-22T21:50:13Z",
- "type": "X509CertAndPassword",
- "usage": "Sign",
- "key":"MIIKIAIBAz.....HBgUrDgMCERE20nuTptI9MEFCh2Ih2jaaLZBZGeZBRFVNXeZmAAgIH0A==",
- "displayName": "CN=contoso"
- },
- {
- "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
- "endDateTime": "2021-04-22T22:10:13Z",
- "keyId": "e35a7d11-fef0-49ad-9f3e-aacbe0a42c42",
- "startDateTime": "2020-04-22T21:50:13Z",
- "type": "AsymmetricX509Cert",
- "usage": "Verify",
- "key": "MIIDJzCCAg+gAw......CTxQvJ/zN3bafeesMSueR83hlCSyg==",
- "displayName": "CN=contoso"
- }
-
- ],
- "passwordCredentials": [
- {
- "customKeyIdentifier": "lY85bR8r6yWTW6jnciNEONwlVhDyiQjdVLgPDnkI5mA=",
- "keyId": "4c266507-3e74-4b91-aeba-18a25b450f6e",
- "endDateTime": "2022-01-27T19:40:33Z",
- "startDateTime": "2020-04-20T19:40:33Z",
- "secretText": "mypassword"
- }
- ]
-}
-```
-
-#### Configure a custom signing key using PowerShell
-
-Use PowerShell to [instantiate an MSAL Public Client Application](msal-net-initializing-client-applications.md#initializing-a-public-client-application-from-code) and use the [Authorization Code Grant](v2-oauth2-auth-code-flow.md) flow to obtain a delegated permission access token for Microsoft Graph. Use the access token to call Microsoft Graph and configure a custom signing key for the service principal. After configuring the custom signing key, your application code needs to [validate the token signing key](#validate-token-signing-key).
-
-To run this script, you need:
-
-1. The object ID of your application's service principal, found in the **Overview** pane of your application's entry in [Enterprise Applications](https://portal.azure.com/#blade/Microsoft_AAD_IAM/StartboardApplicationsMenuBlade/AllApps/menuId/) in the Azure portal.
-2. An app registration to sign in a user and get an access token to call Microsoft Graph. Get the application (client) ID of this app in the **Overview** pane of the application's entry in [App registrations](https://portal.azure.com/#blade/Microsoft_AAD_IAM/ActiveDirectoryMenuBlade/RegisteredApps) in the Azure portal. The app registration should have the following configuration:
- - A redirect URI of "http://localhost" listed in the **Mobile and desktop applications** platform configuration
- - In **API permissions**, Microsoft Graph delegated permissions **Application.ReadWrite.All** and **User.Read** (make sure you grant Admin consent to these permissions)
-3. A user who logs in to get the Microsoft Graph access token. The user should be one of the following Azure AD administrative roles (required to update the service principal):
- - Cloud Application Administrator
- - Application Administrator
- - Global Administrator
-4. A certificate to configure as a custom signing key for our application. You can either create a self-signed certificate or obtain one from your trusted certificate authority. The following certificate components are used in the script:
- - public key (typically a .cer file)
- - private key in PKCS#12 format (in .pfx file)
- - password for the private key (pfx file)
-
-The private key must be in PKCS#12 format since Azure AD doesn't support other format types. Using the wrong format can result in the error "Invalid certificate: Key value is invalid certificate" when using Microsoft Graph to PATCH the service principal with a `keyCredentials` containing the certificate info.
-
-```powershell
-
-$fqdn="fourthcoffeetest.onmicrosoft.com" # this is used for the 'issued to' and 'issued by' field of the certificate
-$pwd="mypassword" # password for exporting the certificate private key
-$location="C:\\temp" # path to folder where both the pfx and cer file will be written to
-
-# Create a self-signed cert
-$cert = New-SelfSignedCertificate -certstorelocation cert:\currentuser\my -DnsName $fqdn
-$pwdSecure = ConvertTo-SecureString -String $pwd -Force -AsPlainText
-$path = 'cert:\currentuser\my\' + $cert.Thumbprint
-$cerFile = $location + "\\" + $fqdn + ".cer"
-$pfxFile = $location + "\\" + $fqdn + ".pfx"
-
-# Export the public and private keys
-Export-PfxCertificate -cert $path -FilePath $pfxFile -Password $pwdSecure
-Export-Certificate -cert $path -FilePath $cerFile
-
-$ClientID = "<app-id>"
-$loginURL = "https://login.microsoftonline.com"
-$tenantdomain = "fourthcoffeetest.onmicrosoft.com"
-$redirectURL = "http://localhost" # this reply URL is needed for PowerShell Core
-[string[]] $Scopes = "https://graph.microsoft.com/.default"
-$pfxpath = $pfxFile # path to pfx file
-$cerpath = $cerFile # path to cer file
-$SPOID = "<service-principal-id>"
-$graphuri = "https://graph.microsoft.com/v1.0/serviceprincipals/$SPOID"
-$password = $pwd # password for the pfx file
--
-# choose the correct folder name for MSAL based on PowerShell version 5.1 (.Net) or PowerShell Core (.Net Core)
-
-if ($PSVersionTable.PSVersion.Major -gt 5)
- {
- $core = $true
- $foldername = "netcoreapp2.1"
- }
-else
- {
- $core = $false
- $foldername = "net45"
- }
-
-# Load the MSAL/microsoft.identity/client assembly -- needed once per PowerShell session
-[System.Reflection.Assembly]::LoadFrom((Get-ChildItem C:/Users/<username>/.nuget/packages/microsoft.identity.client/4.32.1/lib/$foldername/Microsoft.Identity.Client.dll).fullname) | out-null
-
-$global:app = $null
-
-$ClientApplicationBuilder = [Microsoft.Identity.Client.PublicClientApplicationBuilder]::Create($ClientID)
-[void]$ClientApplicationBuilder.WithAuthority($("$loginURL/$tenantdomain"))
-[void]$ClientApplicationBuilder.WithRedirectUri($redirectURL)
-
-$global:app = $ClientApplicationBuilder.Build()
-
-Function Get-GraphAccessTokenFromMSAL {
- [Microsoft.Identity.Client.AuthenticationResult] $authResult = $null
- $AquireTokenParameters = $global:app.AcquireTokenInteractive($Scopes)
- [IntPtr] $ParentWindow = [System.Diagnostics.Process]::GetCurrentProcess().MainWindowHandle
- if ($ParentWindow)
- {
- [void]$AquireTokenParameters.WithParentActivityOrWindow($ParentWindow)
- }
- try {
- $authResult = $AquireTokenParameters.ExecuteAsync().GetAwaiter().GetResult()
- }
- catch {
- $ErrorMessage = $_.Exception.Message
- Write-Host $ErrorMessage
- }
-
- return $authResult
-}
-
-$myvar = Get-GraphAccessTokenFromMSAL
-if ($myvar)
-{
- $GraphAccessToken = $myvar.AccessToken
- Write-Host "Access Token: " $myvar.AccessToken
- #$GraphAccessToken = "eyJ0eXAiOiJKV1QiL ... iPxstltKQ"
--
- # this is for PowerShell Core
- $Secure_String_Pwd = ConvertTo-SecureString $password -AsPlainText -Force
-
- # reading certificate files and creating Certificate Object
- if ($core)
- {
- $pfx_cert = get-content $pfxpath -AsByteStream -Raw
- $cer_cert = get-content $cerpath -AsByteStream -Raw
- $cert = Get-PfxCertificate -FilePath $pfxpath -Password $Secure_String_Pwd
- }
- else
- {
- $pfx_cert = get-content $pfxpath -Encoding Byte
- $cer_cert = get-content $cerpath -Encoding Byte
- # Write-Host "Enter password for the pfx file..."
- # calling Get-PfxCertificate in PowerShell 5.1 prompts for password
- # $cert = Get-PfxCertificate -FilePath $pfxpath
- $cert = [System.Security.Cryptography.X509Certificates.X509Certificate2]::new($pfxpath, $password)
- }
-
- # base 64 encode the private key and public key
- $base64pfx = [System.Convert]::ToBase64String($pfx_cert)
- $base64cer = [System.Convert]::ToBase64String($cer_cert)
-
- # getting id for the keyCredential object
- $guid1 = New-Guid
- $guid2 = New-Guid
-
- # get the custom key identifier from the certificate thumbprint:
- $hasher = [System.Security.Cryptography.HashAlgorithm]::Create('sha256')
- $hash = $hasher.ComputeHash([System.Text.Encoding]::UTF8.GetBytes($cert.Thumbprint))
- $customKeyIdentifier = [System.Convert]::ToBase64String($hash)
-
- # get end date and start date for our keycredentials
- $endDateTime = ($cert.NotAfter).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
- $startDateTime = ($cert.NotBefore).ToUniversalTime().ToString( "yyyy-MM-ddTHH:mm:ssZ" )
-
- # building our json payload
- $object = [ordered]@{
- keyCredentials = @(
- [ordered]@{
- customKeyIdentifier = $customKeyIdentifier
- endDateTime = $endDateTime
- keyId = $guid1
- startDateTime = $startDateTime
- type = "X509CertAndPassword"
- usage = "Sign"
- key = $base64pfx
- displayName = "CN=fourthcoffeetest"
- },
- [ordered]@{
- customKeyIdentifier = $customKeyIdentifier
- endDateTime = $endDateTime
- keyId = $guid2
- startDateTime = $startDateTime
- type = "AsymmetricX509Cert"
- usage = "Verify"
- key = $base64cer
- displayName = "CN=fourthcoffeetest"
- }
- )
- passwordCredentials = @(
- [ordered]@{
- customKeyIdentifier = $customKeyIdentifier
- keyId = $guid1
- endDateTime = $endDateTime
- startDateTime = $startDateTime
- secretText = $password
- }
- )
- }
-
- $json = $object | ConvertTo-Json -Depth 99
- Write-Host "JSON Payload:"
- Write-Output $json
-
- # Request Header
- $Header = @{}
- $Header.Add("Authorization","Bearer $($GraphAccessToken)")
- $Header.Add("Content-Type","application/json")
-
- try
- {
- Invoke-RestMethod -Uri $graphuri -Method "PATCH" -Headers $Header -Body $json
- }
- catch
- {
- # Dig into the exception to get the Response details.
- # Note that value__ is not a typo.
- Write-Host "StatusCode:" $_.Exception.Response.StatusCode.value__
- Write-Host "StatusDescription:" $_.Exception.Response.StatusDescription
- }
-
- Write-Host "Complete Request"
-}
-else
-{
- Write-Host "Fail to get Access Token"
-}
-```
-
-#### Validate token signing key
-
-Apps that have claims mapping enabled must validate their token signing keys by appending `appid={client_id}` to their [OpenID Connect metadata requests](v2-protocols-oidc.md#fetch-the-openid-configuration-document). Below is the format of the OpenID Connect metadata document you should use:
-
-```
-https://login.microsoftonline.com/{tenant}/v2.0/.well-known/openid-configuration?appid={client-id}
-```
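For instance, a quick sketch that fetches the app-specific metadata and inspects the keys endpoint; the tenant and client ID values are placeholders:

```powershell
# Fetch the OpenID Connect metadata scoped to the app's custom signing key.
$tenant   = 'contoso.onmicrosoft.com'                  # placeholder
$clientId = '00000000-0000-0000-0000-000000000000'     # placeholder
$metadata = Invoke-RestMethod "https://login.microsoftonline.com/$tenant/v2.0/.well-known/openid-configuration?appid=$clientId"
$metadata.jwks_uri   # keys endpoint that reflects the custom signing key
```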
-
-### Update the application manifest
-
-For single tenant apps, you can set the `acceptMappedClaims` property to `true` in the [application manifest](reference-app-manifest.md). As documented on the [apiApplication resource type](/graph/api/resources/apiapplication#properties), this allows an application to use claims mapping without specifying a custom signing key.
-
-Don't set `acceptMappedClaims` property to `true` for multi-tenant apps, which can allow malicious actors to create claims-mapping policies for your app.
-
-This does require the requested token audience to use a verified domain name of your Azure AD tenant, which means you should set the `Application ID URI` (represented by `identifierUris` in the application manifest), for example, to `https://contoso.com/my-api` or (simply using the default tenant name) `https://contoso.onmicrosoft.com/my-api`.
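As a sketch using the AzureAD module (the object ID and URI are placeholders), setting the identifier URI might look like:

```powershell
# Point the Application ID URI at a verified domain of the tenant.
Set-AzureADApplication -ObjectId <app object ID> -IdentifierUris @('https://contoso.com/my-api')
```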
-
-If you're not using a verified domain, Azure AD will return an `AADSTS501461` error code with message _"AcceptMappedClaims is only supported for a token audience matching the application GUID or an audience within the tenant's verified domains. Either change the resource identifier, or use an application-specific signing key."_
-
-## Next steps
-
-- Read the [claims-mapping policy type](reference-claims-mapping-policy-type.md) reference article to learn more.
-- To learn how to customize claims issued in the SAML token through the Azure portal, see [How to: Customize claims issued in the SAML token for enterprise applications](active-directory-saml-claims-customization.md).
-- To learn more about extension attributes, see [Using directory extension attributes in claims](active-directory-schema-extensions.md).
active-directory Active Directory Enterprise App Role Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-enterprise-app-role-management.md
Title: Configure role claim for enterprise Azure AD apps description: Learn how to configure the role claim issued in the SAML token for enterprise applications in Azure Active Directory -+
Last updated 11/11/2021-+ # Configure the role claim issued in the SAML token for enterprise applications
active-directory Active Directory How Applications Are Added https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-applications-are-added.md
Title: How and why apps are added to Azure AD description: What does it mean for an application to be added to Azure AD and how do they get there? -+
Last updated 10/26/2022-+
active-directory Active Directory How To Integrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/active-directory-how-to-integrate.md
Title: How to integrate with the Microsoft identity platform description: Learn the benefits of integrating your application with the Microsoft identity platform, and get resources for features like simplified sign-in, identity management, multi-factor authentication, and access control. -+
Last updated 10/01/2020-+
active-directory Authentication Flows App Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/authentication-flows-app-scenarios.md
Title: Microsoft identity platform authentication flows & app scenarios description: Learn about application scenarios for the Microsoft identity platform, including authenticating identities, acquiring tokens, and calling protected APIs. -+ ms.assetid:
Last updated 05/05/2022-++ #Customer intent: As an app developer, I want to learn about authentication flows and application scenarios so I can create applications protected by the Microsoft identity platform.
active-directory Claims Challenge https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/claims-challenge.md
Title: Claims challenges, claims requests, and client capabilities description: Explanation of claims challenges, claims requests, and client capabilities in the Microsoft identity platform. --++ Previously updated : 05/11/2021- Last updated : 01/19/2023+ # Customer intent: As an application developer, I want to learn how to handle claims challenges returned from APIs protected by the Microsoft identity platform.
active-directory Configure Token Lifetimes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/configure-token-lifetimes.md
- Title: Set lifetimes for tokens
-description: Learn how to set lifetimes for tokens issued by the Microsoft identity platform. Learn how to manage an organization's default policy, create a policy for web sign-in, create a policy for a native app that calls a web API, and manage an advanced policy.
-------- Previously updated : 10/17/2022----
-# Configure token lifetime policies (preview)
-
-In the following steps, you'll implement a common policy scenario that imposes new rules for token lifetime. It's possible to specify the lifetime of an access, SAML, or ID token issued by the Microsoft identity platform. This can be set for all apps in your organization or for a specific service principal. Lifetimes can also be set across multiple organizations (multi-tenant applications).
-
-For more information, see [configurable token lifetimes](active-directory-configurable-token-lifetimes.md).
-
-## Get started
-
-To get started, download the latest [Azure AD PowerShell Module Public Preview release](https://www.powershellgallery.com/packages/AzureADPreview).
-
-Next, run the `Connect-AzureAD` command to sign in to your Azure Active Directory (Azure AD) admin account. Run this command each time you start a new session.
-
-```powershell
-Connect-AzureAD -Confirm
-```
-
-## Create a policy for web sign-in
-
-In the following steps, you'll create a policy that requires users to authenticate more frequently in your web app. This policy sets the lifetime of the access/ID tokens for the service principal of your web app.
-
-1. Create a token lifetime policy.
-
- This policy, for web sign-in, sets the access/ID token lifetime to two hours.
-
- To create the policy, run the [New-AzureADPolicy](/powershell/module/azuread/new-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- $policy = New-AzureADPolicy -Definition @('{"TokenLifetimePolicy":{"Version":1,"AccessTokenLifetime":"02:00:00"}}') -DisplayName "WebPolicyScenario" -IsOrganizationDefault $false -Type "TokenLifetimePolicy"
- ```
-
- To see your new policy, and to get the policy **ObjectId**, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- Get-AzureADPolicy -Id $policy.Id
- ```
-
-1. Assign the policy to your service principal. You also need to get the **ObjectId** of your service principal.
-
- Use the [Get-AzureADServicePrincipal](/powershell/module/azuread/get-azureadserviceprincipal) cmdlet to see all your organization's service principals or a single service principal.
-
- ```powershell
- # Get ID of the service principal
- $sp = Get-AzureADServicePrincipal -Filter "DisplayName eq '<service principal display name>'"
- ```
-
- When you have the service principal, run the [Add-AzureADServicePrincipalPolicy](/powershell/module/azuread/add-azureadserviceprincipalpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet:
-
- ```powershell
- # Assign policy to a service principal
- Add-AzureADServicePrincipalPolicy -Id $sp.ObjectId -RefObjectId $policy.Id
- ```
-
-## View existing policies in a tenant
-
-To see all policies that have been created in your organization, run the [Get-AzureADPolicy](/powershell/module/azuread/get-azureadpolicy?view=azureadps-2.0-preview&preserve-view=true) cmdlet. Any results with defined property values that differ from the defaults listed above are in scope of the retirement.
-
-```powershell
-Get-AzureADPolicy -All $true
-```
-
-To see which apps and service principals are linked to a specific policy that you identified, run the following [`Get-AzureADPolicyAppliedObject`](/powershell/module/azuread/get-azureadpolicyappliedobject?view=azureadps-2.0-preview&preserve-view=true) cmdlet by replacing `1a37dad8-5da7-4cc8-87c7-efbc0326cf20` with any of your policy IDs. Then you can decide whether to configure Conditional Access sign-in frequency or remain with the Azure AD defaults.
-
-```powershell
-Get-AzureADPolicyAppliedObject -id 1a37dad8-5da7-4cc8-87c7-efbc0326cf20
-```
-
-If your tenant has policies that define custom values for the refresh and session token configuration properties, Microsoft recommends you update those policies to values that reflect the defaults described above. If no changes are made, Azure AD will automatically honor the default values.
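If you instead decide a custom policy should simply be retired so the defaults apply, a sketch for removing it (the ID below is the same placeholder used above):

```powershell
# Delete a token lifetime policy; the tenant then falls back to the defaults.
Remove-AzureADPolicy -Id 1a37dad8-5da7-4cc8-87c7-efbc0326cf20
```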
-
-### Troubleshooting
-Some users have reported a `Get-AzureADPolicy : The term 'Get-AzureADPolicy' is not recognized` error after running the `Get-AzureADPolicy` cmdlet. As a workaround, run the following to uninstall/re-install the AzureAD module, and then install the AzureADPreview module:
-
-```powershell
-# Uninstall the AzureAD Module
-UnInstall-Module AzureAD
-
-# Install the AzureAD Preview Module adding the -AllowClobber
-Install-Module AzureADPreview -AllowClobber
-# Note: You can't install both the preview and the GA version on the same computer at the same time.
-
-Connect-AzureAD
-Get-AzureADPolicy -All $true
-```
-
-## Next steps
-Learn about [authentication session management capabilities](../conditional-access/howto-conditional-access-session-lifetime.md) in Azure AD Conditional Access.
active-directory Howto Convert App To Be Multi Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-convert-app-to-be-multi-tenant.md
Title: Convert single-tenant app to multi-tenant on Azure AD description: Shows how to convert an existing single-tenant app to a multi-tenant app that can sign in a user from any Azure AD tenant. -+ Last updated 10/20/2022-+ #Customer intent: As an Azure user, I want to convert a single tenant app to an Azure AD multi-tenant app so any Azure AD user can sign in,
active-directory Howto Create Service Principal Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-create-service-principal-portal.md
Title: Create an Azure AD app and service principal in the portal description: Create a new Azure Active Directory app and service principal to manage access to resources with role-based access control in Azure Resource Manager. -+ Last updated 10/11/2022-+
active-directory Howto Remove App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-remove-app.md
Title: "How to: Remove a registered app from the Microsoft identity platform" description: In this how-to, you learn how to remove an application registered with the Microsoft identity platform. -+
Last updated 07/28/2022-+ #Customer intent: As an application developer, I want to know how to remove my application from the Microsoft identity registered.
active-directory Howto Restore App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/howto-restore-app.md
Title: "How to: Restore or remove a recently deleted application with the Microsoft identity platform" description: In this how-to, you learn how to restore or permanently delete a recently deleted application registered with the Microsoft identity platform. -+
Last updated 07/28/2022-++ #Customer intent: As an application developer, I want to know how to restore or permanently delete my recently deleted application from the Microsoft identity platform.
active-directory Id Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/id-tokens.md
Title: Microsoft identity platform ID tokens description: Learn how to use id_tokens emitted by the Azure AD v1.0 and Microsoft identity platform (v2.0) endpoints. -+ Previously updated : 01/25/2022- Last updated : 01/19/2023+
active-directory Mark App As Publisher Verified https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/mark-app-as-publisher-verified.md
Title: Mark an app as publisher verified
-description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher has verified their identity using a Microsoft Partner Network account that has completed the verification process and has associated this MPN account with their application registration.
+description: Describes how to mark an app as publisher verified. When an application is marked as publisher verified, it means that the publisher (application developer) has verified the authenticity of their organization using a Microsoft Partner Network (MPN) account that has completed the verification process and has associated this MPN account with that application registration.
active-directory Microsoft Graph Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/microsoft-graph-intro.md
- Title: Microsoft Graph API
-description: The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources.
-------- Previously updated : 10/08/2021----
-# Microsoft Graph API
-
-The Microsoft Graph API is a RESTful web API that enables you to access Microsoft Cloud service resources. After you register your app and get authentication tokens for a user or service, you can make requests to the Microsoft Graph API. For more information, see [Overview of Microsoft Graph](/graph/overview).
-
-Microsoft Graph exposes REST APIs and client libraries to access data on the following Microsoft cloud services:
-
-- Microsoft 365
-- Enterprise Mobility and Security
-- Windows 10
-- Dynamics 365 Business Central
-
-## Versions
-
-The following versions of the Microsoft Graph API are currently available:
-
-- **Beta version**: The beta version includes APIs that are currently in preview and are accessible in the `https://graph.microsoft.com/beta` endpoint. To start using the beta APIs, see [Microsoft Graph beta endpoint reference](/graph/api/overview?view=graph-rest-beta&preserve-view=true).
-- **v1.0 version**: The v1.0 version includes APIs that are generally available and ready for production use. The v1.0 version is accessible in the `https://graph.microsoft.com/v1.0` endpoint. To start using the v1.0 APIs, see [Microsoft Graph REST API v1.0 reference](/graph/api/overview?view=graph-rest-1.0&preserve-view=true).
-
-For more information about Microsoft Graph API versions, see [Versioning, support, and breaking change policies for Microsoft Graph](/graph/versioning-and-support).
--
-## Get started
-
-To read from or write to a resource such as a user or an email message, you construct a request that looks like the following pattern:
-
-`{HTTP method} https://graph.microsoft.com/{version}/{resource}?{query-parameters}`
-
-For more information about the elements of the constructed request, see [Use the Microsoft Graph API](/graph/use-the-api)
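As a concrete illustration of that pattern, a sketch that reads the signed-in user's profile with a previously acquired access token (the token value is a placeholder):

```powershell
# GET https://graph.microsoft.com/v1.0/me with a bearer token.
$token = '<access token for Microsoft Graph>'
Invoke-RestMethod -Uri 'https://graph.microsoft.com/v1.0/me' -Headers @{ Authorization = "Bearer $token" }
```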
-
-Quickstart samples are available to show you how to access the power of the Microsoft Graph API. The samples that are available access two services with one authentication: Microsoft account and Outlook. Each quickstart accesses information from Microsoft account users' profiles and displays events from their calendar.
-The quickstarts involve four steps:
-
-- Select your platform
-- Get your app ID (client ID)
-- Build the sample
-- Sign in, and view events on your calendar
-
-When you complete the quickstart, you have an app that's ready to run. For more information, see the [Microsoft Graph quickstart FAQ](/graph/quick-start-faq). To get started with the samples, see [Microsoft Graph QuickStart](https://developer.microsoft.com/graph/quick-start).
-
-## Tools
-
-**Microsoft Graph Explorer** is a web-based tool that you can use to build and test requests to the Microsoft Graph API. Access Microsoft Graph Explorer at https://developer.microsoft.com/graph/graph-explorer.
-
-**Postman** is another tool you can use for making requests to the Microsoft Graph API. You can download Postman at https://www.getpostman.com. To interact with Microsoft Graph in Postman, use the [Microsoft Graph Postman collection](/graph/use-postman).
-
-## Next steps
-
-For more information about Microsoft Graph, including usage information and tutorials, see:
-
-- [Use the Microsoft Graph API](/graph/use-the-api)
-- [Microsoft Graph tutorials](/graph/tutorials)
active-directory Msal Android Single Sign On https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-android-single-sign-on.md
android ms.devlang: java Previously updated : 10/15/2020 Last updated : 01/18/2023 - # Enable cross-app SSO on Android using MSAL
In this how-to, you'll learn how to configure the SDKs used by your application
This how-to assumes you know how to:

-- Provision your app using the Azure portal. For more information on this topic, see the instructions for creating an app in [the Android tutorial](./tutorial-v2-android.md#create-a-project)
-- Integrate your application with the [Microsoft Authentication Library for Android](https://github.com/AzureAD/microsoft-authentication-library-for-android).
+- Provision your app using the Azure portal. For more information, see the instructions for creating an app in [the Android tutorial](./tutorial-v2-android.md#create-a-project)
+- Integrate your application with the [MSAL for Android](https://github.com/AzureAD/microsoft-authentication-library-for-android)
-## Methods for single sign-on
+## Methods for SSO
There are two ways for applications using MSAL for Android to achieve SSO:
-* Through a [broker application](#sso-through-brokered-authentication)
-* Through the [system browser](#sso-through-system-browser)
+- Through a [broker application](#sso-through-brokered-authentication)
+- Through the [system browser](#sso-through-system-browser)
-
- It is recommended to use a broker application for benefits like device-wide SSO, account management, and conditional access. However, it requires your users to download additional applications.
+ It's recommended to use a broker application for benefits like device-wide SSO, account management, and conditional access. However, it requires your users to download additional applications.
## SSO through brokered authentication
-We recommend that you use one of Microsoft's authentication brokers to participate in device-wide single sign-on (SSO) and to meet organizational Conditional Access policies. Integrating with a broker provides the following benefits:
+We recommend that you use one of Microsoft's authentication brokers to participate in device-wide SSO and to meet organizational Conditional Access policies. Integrating with a broker provides the following benefits:
-- Device single sign-on
+- Device SSO
- Conditional Access for:
  - Intune App Protection
  - Device Registration (Workplace Join)
  - Mobile Device Management
- Device-wide Account Management
- - via Android AccountManager & Account Settings
+ - via Android AccountManager & Account Settings
- "Work Account" - custom account type On Android, the Microsoft Authentication Broker is a component that's included in the [Microsoft Authenticator](https://play.google.com/store/apps/details?id=com.azure.authenticator) and [Intune Company Portal](https://play.google.com/store/apps/details?id=com.microsoft.windowsintune.companyportal) apps.
-The following diagram illustrates the relationship between your app, the Microsoft Authentication Library (MSAL), and Microsoft's authentication brokers.
+The following diagram illustrates the relationship between your app, the MSAL, and Microsoft's authentication brokers.
![Diagram showing how an application relates to MSAL, broker apps, and the Android account manager.](./media/brokered-auth/brokered-deployment-diagram.png)
If a device doesn't already have a broker app installed, MSAL instructs the user
#### When a broker is installed
-When a broker is installed on a device, all subsequent interactive token requests (calls to `acquireToken()`) are handled by the broker rather than locally by MSAL. Any SSO state previously available to MSAL is not available to the broker. As a result, the user will need to authenticate again, or select an account from the existing list of accounts known to the device.
+When a broker is installed on a device, all subsequent interactive token requests (calls to `acquireToken()`) are handled by the broker rather than locally by MSAL. Any SSO state previously available to MSAL isn't available to the broker. As a result, the user will need to authenticate again, or select an account from the existing list of accounts known to the device.
Installing a broker doesn't require the user to sign in again. Only when the user needs to resolve an `MsalUiRequiredException` will the next request go to the broker. `MsalUiRequiredException` can be thrown for several reasons, and needs to be resolved interactively. For example:
Installing a broker doesn't require the user to sign in again. Only when the use
#### When a broker is uninstalled
-If there is only one broker hosting app installed, and it is removed, then the user will need to sign in again. Uninstalling the active broker removes the account and associated tokens from the device.
+If there's only one broker hosting app installed, and it's removed, then the user will need to sign in again. Uninstalling the active broker removes the account and associated tokens from the device.
-If Intune Company Portal is installed and is operating as the active broker, and Microsoft Authenticator is also installed, then if the Intune Company Portal (active broker) is uninstalled the user will need to sign in again. Once they sign in again, the Microsoft Authenticator app becomes the active broker.
+If Intune Company Portal is installed and operating as the active broker, and Microsoft Authenticator is also installed, then uninstalling the Intune Company Portal (the active broker) requires the user to sign in again. Once they sign in again, the Microsoft Authenticator app becomes the active broker.
### Integrating with a broker
Windows:

```
keytool -exportcert -alias androiddebugkey -keystore %HOMEPATH%\.android\debug.keystore | openssl sha1 -binary | openssl base64
```
-Once you've generated a signature hash with *keytool*, use the Azure portal to generate the redirect URI:
+Once you've generated a signature hash with _keytool_, use the Azure portal to generate the redirect URI:
-1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a> and select your Android app in **App registrations**.
-1. Select **Authentication** > **Add a platform** > **Android**.
+1. Sign in to the <a href="https://portal.azure.com/" target="_blank">Azure portal</a>.
+1. If you have access to multiple tenants, use the **Directories + subscriptions** filter :::image type="icon" source="/azure/active-directory/develop/media/common/portal-directory-subscription-filter.png" border="false"::: in the top menu to switch to the tenant in which you registered your application.
+1. Search for and select **Azure Active Directory**.
+1. Under **Manage**, select **App registrations**.
+1. Under **Manage**, select **App registrations**, then select your application.
+1. Under **Manage**, select **Authentication** > **Add a platform** > **Android**.
1. In the **Configure your Android app** pane that opens, enter the **Signature hash** that you generated earlier and a **Package name**.
1. Select the **Configure** button.
If you get an `MsalClientException` with error code `"BROKER_BIND_FAILURE"`, the
It might not be immediately clear that broker integration is working, but you can use the following steps to check: 1. On your Android device, complete a request using the broker.
-1. In the settings on your Android device, look for a newly created account corresponding to the account that you authenticated with. The account should be of type *Work account*.
+1. In the settings on your Android device, look for a newly created account corresponding to the account that you authenticated with. The account should be of type _Work account_.
You can remove the account from settings if you want to repeat the test. ## SSO through system browser
-Android applications have the option to use the WebView, system browser, or Chrome Custom Tabs for authentication user experience. If the application is not using brokered authentication, it will need to use the system browser rather than the native webview in order to achieve SSO.
+Android applications have the option to use the WebView, system browser, or Chrome Custom Tabs for authentication user experience. If the application isn't using brokered authentication, it will need to use the system browser rather than the native webview in order to achieve SSO.
### Authorization agents Choosing a specific strategy for authorization agents is optional and represents additional functionality apps can customize. Most apps will use the MSAL defaults (see [Understand the Android MSAL configuration file](msal-configuration.md) to see the various defaults).
-MSAL supports authorization using a `WebView`, or the system browser. The image below shows how it looks using the `WebView`, or the system browser with CustomTabs or without CustomTabs:
+MSAL supports authorization using a `WebView` or the system browser. The image below shows how it looks using the `WebView`, or the system browser with and without CustomTabs:
![MSAL login examples](./media/authorization-agents/sign-in-ui.jpg)
-### Single sign-on implications
+### SSO implications
By default, applications integrated with MSAL use the system browser's Custom Tabs to authorize. Unlike WebViews, Custom Tabs share a cookie jar with the default system browser enabling fewer sign-ins with web or other native apps that have integrated with Custom Tabs. If the application uses a `WebView` strategy without integrating Microsoft Authenticator or Company Portal support into their app, users won't have a single sign-on experience across the device or between native apps and web apps.
-If the application uses MSAL with a broker like Microsoft Authenticator or Intune Company Portal, then users can have a SSO experience across applications if the they have an active sign-in with one of the apps.
+If the application uses MSAL with a broker like Microsoft Authenticator or Intune Company Portal, then users can have an SSO experience across applications if they have an active sign-in with one of the apps.
### WebView
To use the in-app WebView, put the following line in the app configuration JSON
"authorization_user_agent" : "WEBVIEW" ```
-When using the in-app `WebView`, the user signs in directly to the app. The tokens are kept inside the sandbox of the app and aren't available outside the app's cookie jar. As a result, the user can't have a SSO experience across applications unless the apps integrate with the Authenticator or Company Portal.
+When using the in-app `WebView`, the user signs in directly to the app. The tokens are kept inside the sandbox of the app and aren't available outside the app's cookie jar. As a result, the user can't have an SSO experience across applications unless the apps integrate with the Authenticator or Company Portal.
However, `WebView` does provide the capability to customize the look and feel for sign-in UI. See [Android WebViews](https://developer.android.com/reference/android/webkit/WebView) for more about how to do this customization.
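For orientation, here's a minimal sketch of what an Android MSAL configuration file might look like with this setting in context (the `client_id`, `redirect_uri`, and signature-hash values are placeholders, not real registration values):

```json
{
  "client_id": "00000000-0000-0000-0000-000000000000",
  "redirect_uri": "msauth://com.example.app/EXAMPLE_SIGNATURE_HASH",
  "account_mode": "SINGLE",
  "authorization_user_agent": "WEBVIEW"
}
```

Swapping the `authorization_user_agent` value between `WEBVIEW`, `BROWSER`, and `DEFAULT` is the only change needed to move between the strategies described in this section.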
By default, MSAL uses the browser and a [custom tabs](https://developer.chrome.c
"authorization_user_agent" : "BROWSER" ```
-Use this approach to provide a SSO experience through the device's browser. MSAL uses a shared cookie jar, which allows other native apps or web apps to achieve SSO on the device by using the persist session cookie set by MSAL.
+Use this approach to provide an SSO experience through the device's browser. MSAL uses a shared cookie jar, which allows other native apps or web apps to achieve SSO on the device by using the persistent session cookie set by MSAL.
### Browser selection heuristic
Because it's impossible for MSAL to specify the exact browser package to use on
MSAL primarily retrieves the default browser from the package manager and checks if it is in a tested list of safe browsers. If not, MSAL falls back on using the Webview rather than launching another non-default browser from the safe list. The default browser will be chosen regardless of whether it supports custom tabs. If the browser supports Custom Tabs, MSAL will launch the Custom Tab. Custom Tabs have a look and feel closer to an in-app `WebView` and allow basic UI customization. See [Custom Tabs in Android](https://developer.chrome.com/multidevice/android/customtabs) to learn more.
-If there are no browser packages on the device, MSAL uses the in-app `WebView`. If the device default setting isn't changed, the same browser should be launched for each sign in to ensure a SSO experience.
+If there are no browser packages on the device, MSAL uses the in-app `WebView`. If the device default setting isn't changed, the same browser should be launched for each sign-in to ensure an SSO experience.
#### Tested Browsers

The following browsers have been tested to see if they correctly redirect to the `"redirect_uri"` specified in the configuration file:
-| Device | Built-in Browser | Chrome | Opera | Microsoft Edge | UC Browser | Firefox |
-| -- |:-:| --:|--:|--:|--:|--:|
-| Nexus 4 (API 17) | pass | pass |not applicable |not applicable |not applicable |not applicable |
-| Samsung S7 (API 25) | pass<sup>1</sup> | pass | pass | pass | fail |pass |
-| Huawei (API 26) |pass<sup>2</sup> | pass | fail | pass | pass |pass |
-| Vivo (API 26) |pass|pass|pass|pass|pass|fail|
-| Pixel 2 (API 26) |pass | pass | pass | pass | fail |pass |
-| Oppo | pass | not applicable<sup>3</sup>|not applicable |not applicable |not applicable | not applicable|
-| OnePlus (API 25) |pass | pass | pass | pass | fail |pass |
-| Nexus (API 28) |pass | pass | pass | pass | fail |pass |
-|MI | pass | pass | pass | pass | fail |pass |
+| Device | Built-in Browser | Chrome | Opera | Microsoft Edge | UC Browser | Firefox |
+| - | :--: | -: | -: | -: | -: | -: |
+| Nexus 4 (API 17) | pass | pass | not applicable | not applicable | not applicable | not applicable |
+| Samsung S7 (API 25) | pass<sup>1</sup> | pass | pass | pass | fail | pass |
+| Huawei (API 26) | pass<sup>2</sup> | pass | fail | pass | pass | pass |
+| Vivo (API 26) | pass | pass | pass | pass | pass | fail |
+| Pixel 2 (API 26) | pass | pass | pass | pass | fail | pass |
+| Oppo | pass | not applicable<sup>3</sup> | not applicable | not applicable | not applicable | not applicable |
+| OnePlus (API 25) | pass | pass | pass | pass | fail | pass |
+| Nexus (API 28) | pass | pass | pass | pass | fail | pass |
+| MI | pass | pass | pass | pass | fail | pass |
<sup>1</sup>Samsung's built-in browser is Samsung Internet.<br/> <sup>2</sup>Huawei's built-in browser is Huawei Browser.<br/>
The following browsers have been tested to see if they correctly redirect to the
## Next steps
-[Shared device mode for Android devices](msal-android-shared-devices.md) allows you to configure an Android device so that it can be easily shared by multiple employees.
+[Shared device mode for Android devices](msal-android-shared-devices.md) allows you to configure an Android device so that it can be easily shared by multiple employees.
active-directory Msal Compare Msal Js And Adal Js https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/msal-compare-msal-js-and-adal-js.md
Once your changes are done, run the app and test your authentication scenario:
npm start
```
-## Example: Securing web apps with ADAL Node vs. MSAL Node
+## Example: Securing a SPA with ADAL.js vs. MSAL.js
The snippets below demonstrate the minimal code required for a single-page application authenticating users with the Microsoft identity platform and getting an access token for Microsoft Graph, using first ADAL.js and then MSAL.js:
active-directory Publisher Verification Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/publisher-verification-overview.md
# Publisher verification
-Publisher verification gives app users and organization admins information about the authenticity of a developer who publishes an app that integrates with the Microsoft identity platform.
+Publisher verification gives app users and organization admins information about the authenticity of the organization that publishes an app integrating with the Microsoft identity platform.
-An app that's publisher verified means that the app's publisher has verified their identity with Microsoft. Identity verification includes using a [Microsoft Partner Network (MPN)](https://partner.microsoft.com/membership) account that's been [verified](/partner-center/verification-responses) and associating the MPN account with an app registration.
+When an app is publisher verified, it means that the app's publisher (app developer) has verified the authenticity of their organization with Microsoft. Verifying an app includes using a Microsoft Partner Network (MPN) account that's been [verified](/partner-center/verification-responses) and associating the MPN account with an app registration.
When the publisher of an app has been verified, a blue *verified* badge appears in the Azure Active Directory (Azure AD) consent prompt for the app and on other webpages:
active-directory Redirect Uris Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/redirect-uris-ios.md
Title: Use redirect URIs with MSAL (iOS/macOS)
-description: Learn about the differences between the Microsoft Authentication Library for ObjectiveC (MSAL for iOS and macOS) and Azure AD Authentication Library for ObjectiveC (ADAL.ObjC) and how to migrate between them.
+description: Learn about the differences between the Microsoft Authentication Library for Objective-C (MSAL for iOS and macOS) and Azure AD Authentication Library for Objective-C (ADAL.ObjC) and how to migrate between them.
Previously updated : 08/28/2019 Last updated : 01/18/2023 #Customer intent: As an application developer, I want to learn about how to use redirect URIs.
-# Using redirect URIs with the Microsoft authentication library for iOS and macOS
+# Using redirect URIs with the Microsoft Authentication Library (MSAL) for iOS and macOS
When a user authenticates, Azure Active Directory (Azure AD) sends the token to the app by using the redirect URI registered with the Azure AD application.
-The Microsoft Authentication library (MSAL) requires that the redirect URI be registered with the Azure AD app in a specific format. MSAL uses a default redirect URI, if you don't specify one. The format is `msauth.[Your_Bundle_Id]://auth`.
+MSAL requires that the redirect URI be registered with the Azure AD app in a specific format. MSAL uses a default redirect URI if you don't specify one. The format is `msauth.[Your_Bundle_Id]://auth`.
The default redirect URI format works for most apps and scenarios, including brokered authentication and system web view. Use the default format whenever possible.
-However, you may need to change the redirect URI for advanced scenarios, as described below.
+However, you may need to change the redirect URI for advanced scenarios, as described in the following section.
## Scenarios that require a different redirect URI
-### Cross-app single sign on (SSO)
+### Cross-app single sign-on (SSO)
-For the Microsoft Identity platform to share tokens across apps, each app needs to have the same client ID or application ID. This is the unique identifier provided when you registered your app in the portal (not the application bundle ID that you register per app with Apple).
+For the Microsoft identity platform to share tokens across apps, each app needs to have the same client ID or application ID. The client ID is the unique identifier provided when you registered your app in the Azure portal (not the application bundle ID that you register per app with Apple).
The redirect URIs need to be different for each iOS app. This allows the Microsoft identity service to uniquely identify different apps that share an application ID. Each application can have multiple redirect URIs registered in the Azure portal, and each app in your suite will have a different redirect URI.

For example, given the following application registration in the Azure portal:
-* Client ID: `ABCDE-12345` (this is a single client ID)
-* RedirectUris: `msauth.com.contoso.app1://auth`, `msauth.com.contoso.app2://auth`, `msauth.com.contoso.app3://auth`
+- Client ID: `ABCDE-12345`
+- RedirectUris: `msauth.com.contoso.app1://auth`, `msauth.com.contoso.app2://auth`, `msauth.com.contoso.app3://auth`
App1 uses redirect `msauth.com.contoso.app1://auth`.\
App2 uses `msauth.com.contoso.app2://auth`.\
App3 uses `msauth.com.contoso.app3://auth`.
### Migrating from ADAL to MSAL
-When migrating code that used the Azure AD Authentication Library (ADAL) to MSAL, you may already have a redirect URI configured for your app. You can continue using the same redirect URI as long as your ADAL app was configured to support brokered scenarios and your redirect URI satisfies the MSAL redirect URI format requirements.
+When migrating code that used the Azure Active Directory Authentication Library (ADAL) to MSAL, you may already have a redirect URI configured for your app. You can continue using the same redirect URI as long as your ADAL app was configured to support brokered scenarios and your redirect URI satisfies the MSAL redirect URI format requirements.
## MSAL redirect URI format requirements
-* The MSAL redirect URI must be in the form `<scheme>://host`
+- The MSAL redirect URI must be in the form `<scheme>://host`
- Where `<scheme>` is a unique string that identifies your app. It's primarily based on the Bundle Identifier of your application to guarantee uniqueness. For example, if your app's Bundle ID is `com.contoso.myapp`, your redirect URI would be in the form: `msauth.com.contoso.myapp://auth`.
+ Where `<scheme>` is a unique string that identifies your app. It's primarily based on the Bundle Identifier of your application to guarantee uniqueness. For example, if your app's Bundle ID is `com.contoso.myapp`, your redirect URI would be in the form: `msauth.com.contoso.myapp://auth`.
- If you're migrating from ADAL, your redirect URI will likely have this format: `<scheme>://[Your_Bundle_Id]`, where `scheme` is a unique string. This format will continue to work when you use MSAL.
+ If you're migrating from ADAL, your redirect URI will likely have this format: `<scheme>://[Your_Bundle_Id]`, where `scheme` is a unique string. The format will continue to work when you use MSAL.
-* `<scheme>` must be registered in your app's Info.plist under `CFBundleURLTypes > CFBundleURLSchemes`. In this example, Info.plist has been opened as source code:
+- `<scheme>` must be registered in your app's Info.plist under `CFBundleURLTypes > CFBundleURLSchemes`. In this example, Info.plist has been opened as source code:
- ```xml
- <key>CFBundleURLTypes</key>
- <array>
- <dict>
- <key>CFBundleURLSchemes</key>
- <array>
- <string>msauth.[BUNDLE_ID]</string>
- </array>
- </dict>
- </array>
- ```
+ ```xml
+ <key>CFBundleURLTypes</key>
+ <array>
+ <dict>
+ <key>CFBundleURLSchemes</key>
+ <array>
+ <string>msauth.[BUNDLE_ID]</string>
+ </array>
+ </dict>
+ </array>
+ ```
MSAL will verify that your redirect URI is registered correctly, and return an error if it isn't.
-
-* If you want to use universal links as a redirect URI, the `<scheme>` must be `https` and doesn't need to be declared in `CFBundleURLSchemes`. Instead, configure the app and domain per Apple's instructions at [Universal Links for Developers](https://developer.apple.com/ios/universal-links/) and call the `handleMSALResponse:sourceApplication:` method of `MSALPublicClientApplication` when your application is opened through a universal link.
+
+- If you want to use universal links as a redirect URI, the `<scheme>` must be `https` and doesn't need to be declared in `CFBundleURLSchemes`. Instead, configure the app and domain per Apple's instructions at [Universal Links for Developers](https://developer.apple.com/ios/universal-links/) and call the `handleMSALResponse:sourceApplication:` method of `MSALPublicClientApplication` when your application is opened through a universal link.
## Use a custom redirect URI
-To use a custom redirect URI, pass the `redirectUri` parameter to `MSALPublicClientApplicationConfig` and pass that object to `MSALPublicClientApplication` when you initialize the object. If the redirect URI is invalid, the initializer will return `nil` and set the `redirectURIError`with additional information. For example:
+To use a custom redirect URI, pass the `redirectUri` parameter to `MSALPublicClientApplicationConfig` and pass that object to `MSALPublicClientApplication` when you initialize the object. If the redirect URI is invalid, the initializer will return `nil` and set the `redirectURIError` with additional information. For example:
Objective-C:
let config = MSALPublicClientApplicationConfig(clientId: "your-client-id",
authority: authority)
do {
    let application = try MSALPublicClientApplication(configuration: config)
- // continue on with application
+ // continue on with application
} catch let error as NSError { // handle error here
-}
+}
```

## Handle the URL opened event

Your application should call MSAL when it receives any response through URL schemes or universal links. Call the `handleMSALResponse:sourceApplication:` method of `MSALPublicClientApplication` when your application is opened. Here's an example for custom schemes:
Objective-C:
openURL:(NSURL *)url options:(NSDictionary<UIApplicationOpenURLOptionsKey,id> *)options {
- return [MSALPublicClientApplication handleMSALResponse:url
+ return [MSALPublicClientApplication handleMSALResponse:url
sourceApplication:options[UIApplicationOpenURLOptionsSourceApplicationKey]];
}
```
func application(_ app: UIApplication, open url: URL, options: [UIApplication.Op
}
```

## Next steps

Learn more about [Authentication flows and application scenarios](authentication-flows-app-scenarios.md)
active-directory Reference Saml Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/reference-saml-tokens.md
Title: SAML 2.0 token claims reference description: Claims reference with details on the claims included in SAML 2.0 tokens issued by the Microsoft identity platform, including their JWT equivalents.-+
Previously updated : 03/29/2021- Last updated : 01/19/2023+
active-directory Request Custom Claims https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/request-custom-claims.md
Title: Request custom claims (MSAL iOS/macOS) description: Learn how to request custom claims. -+ Previously updated : 08/26/2019- Last updated : 01/19/2023+
active-directory Scenario Web App Sign User App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-configuration.md
Title: Configure a web app that signs in users description: Learn how to build a web app that signs in users (code configuration) -+
Last updated 12/8/2022-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User App Registration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-app-registration.md
Title: Register a web app that signs in users description: Learn how to register a web app that signs in users -+
Last updated 12/6/2022-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-overview.md
Title: Sign in users from a Web app description: Learn how to build a web app that signs in users (overview) -+
Last updated 10/12/2022-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-production.md
Title: Move web app that signs in users to production description: Learn how to build a web app that signs in users (move to production) -+
Last updated 09/17/2019-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory Scenario Web App Sign User Sign In https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/scenario-web-app-sign-user-sign-in.md
Title: Write a web app that signs in/out users description: Learn how to build a web app that signs in/out users -+
Last updated 07/14/2020-++ #Customer intent: As an application developer, I want to know how to write a web app that signs in users by using the Microsoft identity platform.
active-directory V2 App Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/v2-app-types.md
Title: Application types for the Microsoft identity platform description: The types of apps and scenarios supported by the Microsoft identity platform. -+
Last updated 09/09/2022-+
active-directory Web App Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/web-app-quickstart.md
Previously updated : 11/16/2021 Last updated : 01/18/2023 zone_pivot_groups: web-app-quickstart
active-directory Workload Identity Federation Create Trust User Assigned Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust-user-assigned-managed-identity.md
Title: Create a trust relationship between a user-assigned managed identity and an external identity provider description: Set up a trust relationship between a user-assigned managed identity in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates. -+ Previously updated : 10/24/2022- Last updated : 01/19/2023+ zone_pivot_groups: identity-wif-mi-methods
active-directory Workload Identity Federation Create Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/develop/workload-identity-federation-create-trust.md
Title: Create a trust relationship between an app and an external identity provider description: Set up a trust relationship between an app in Azure AD and an external identity provider. This allows a software workload outside of Azure to access Azure AD protected resources without using secrets or certificates. -+ Previously updated : 12/13/2022- Last updated : 01/19/2023+ zone_pivot_groups: identity-wif-apps-methods
active-directory Licensing Service Plan Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/enterprise-users/licensing-service-plan-reference.md
When managing licenses in [the Azure portal](https://portal.azure.com/#blade/Mic
| Microsoft Teams Phone Standard_USGOV_GCCHIGH | MCOEV_USGOV_GCCHIGH | 985fcb26-7b94-475b-b512-89356697be71 | MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) | MICROSOFT 365 PHONE SYSTEM (4828c8ec-dc2e-4779-b502-87ac9ce28ab7) |
| Microsoft Teams Phone Resource Account | PHONESYSTEM_VIRTUALUSER | 440eaaa8-b3e0-484b-a8be-62870b9ba70a | MCOEV_VIRTUALUSER (f47330e9-c134-43b3-9993-e7f004506889) | Microsoft 365 Phone Standard Resource Account (f47330e9-c134-43b3-9993-e7f004506889) |
| Microsoft Teams Phone Resource Account for GCC | PHONESYSTEM_VIRTUALUSER_GOV | 2cf22bcb-0c9e-4bc6-8daf-7e7654c0f285 | MCOEV_VIRTUALUSER_GOV (0628a73f-3b4a-4989-bd7b-0f8823144313) | Microsoft 365 Phone Standard Resource Account for Government (0628a73f-3b4a-4989-bd7b-0f8823144313) |
-| Microsoft Teams Premium | Microsoft_Teams_Premium | 989a1621-93bc-4be0-835c-fe30171d6463 | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
+| Microsoft Teams Premium | Microsoft_Teams_Premium | 989a1621-93bc-4be0-835c-fe30171d6463 | MICROSOFT_ECDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>TEAMSPRO_MGMT (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>TEAMSPRO_CUST (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>TEAMSPRO_PROTECTION (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>TEAMSPRO_VIRTUALAPPT (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>MCO_VIRTUAL_APPT (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>TEAMSPRO_WEBINAR (78b58230-ec7e-4309-913c-93a45cc4735b) | Microsoft eCDN (85704d55-2e73-47ee-93b4-4b8ea14db92b)<br/>Microsoft Teams Premium Intelligent (0504111f-feb8-4a3c-992a-70280f9a2869)<br/>Microsoft Teams Premium Personalized (cc8c0802-a325-43df-8cba-995d0c6cb373)<br/>Microsoft Teams Premium Secure (f8b44f54-18bb-46a3-9658-44ab58712968)<br/>Microsoft Teams Premium Virtual Appointment (9104f592-f2a7-4f77-904c-ca5a5715883f)<br/>Microsoft Teams Premium Virtual Appointments (711413d0-b36e-4cd4-93db-0a50a4ab7ea3)<br/>Microsoft Teams Premium Webinar (78b58230-ec7e-4309-913c-93a45cc4735b) |
| Microsoft Teams Rooms Basic | Microsoft_Teams_Rooms_Basic | 6af4b3d6-14bb-4a2a-960c-6c902aad34f3 | MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Basic without Audio Conferencing | Microsoft_Teams_Rooms_Basic_without_Audio_Conferencing | 50509a35-f0bd-4c5e-89ac-22f0e16a00f8 | TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) | | Microsoft Teams Rooms Pro | Microsoft_Teams_Rooms_Pro | 4cde982a-ede4-4409-9ae6-b003453c8ea6 | AAD_PREMIUM (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>MCOMEETADV (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>MCOEV (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>INTUNE_A (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>TEAMS1 (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>MCOSTANDARD (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>WHITEBOARD_PLAN3 (4a51bca5-1eff-43f5-878c-177680f191af) | Azure Active Directory Premium P1 (41781fb2-bc02-4b7c-bd55-b576c07bb09d)<br/>Microsoft 365 Audio Conferencing (3e26ee1f-8a5f-4d52-aee2-b81ce45c8f40)<br/>Microsoft 365 Phone System (4828c8ec-dc2e-4779-b502-87ac9ce28ab7)<br/>Microsoft Intune (c1ec4a95-1f05-45b3-a911-aa3fa01094f5)<br/>Microsoft Teams (57ff2da0-773e-42df-b2af-ffb7a2317929)<br/>Skype for Business Online (Plan 2) (0feaeb32-d00e-4d66-bd5a-43b5b83db82c)<br/>Whiteboard (Plan 3) (4a51bca5-1eff-43f5-878c-177680f191af) |
active-directory Direct Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/direct-federation.md
Previously updated : 10/24/2022 Last updated : 01/20/2023
Setting up SAML/WS-Fed IdP federation doesn't change the authentication method
Currently, the Azure AD SAML/WS-Fed federation feature doesn't support sending a signed authentication token to the SAML identity provider.
+**What permissions are required to configure a SAML/WS-Fed identity provider?**
+
+You need to be an [External Identity Provider Administrator](../roles/permissions-reference.md#external-identity-provider-administrator) or a [Global Administrator](../roles/permissions-reference.md#global-administrator) in your Azure AD tenant to configure a SAML/WS-Fed identity provider.
+ ## Step 1: Determine if the partner needs to update their DNS text records

Depending on the partner's IdP, the partner might need to update their DNS records to enable federation with you. Use the following steps to determine if DNS updates are needed.
Next, you'll configure federation with the IdP configured in step 1 in Azure AD.
### To configure federation in the Azure AD portal
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
-2. Select **External Identities** > **All identity providers**.
-3. Select **New SAML/WS-Fed IdP**.
+1. Sign in to the [Azure portal](https://portal.azure.com/) as an External Identity Provider Administrator or a Global Administrator.
+2. In the left pane, select **Azure Active Directory**.
+3. Select **External Identities** > **All identity providers**.
+4. Select **New SAML/WS-Fed IdP**.
![Screenshot showing button for adding a new SAML or WS-Fed IdP.](media/direct-federation/new-saml-wsfed-idp.png)
On the **All identity providers** page, you can view the list of SAML/WS-Fed ide
![Screenshot showing an identity provider in the SAML WS-Fed list](media/direct-federation/new-saml-wsfed-idp-list-multi.png)
-1. Go to the [Azure portal](https://portal.azure.com/). In the left pane, select **Azure Active Directory**.
-1. Select **External Identities**.
-1. Select **All identity providers**.
-1. Under **SAML/WS-Fed identity providers**, scroll to an identity provider in the list or use the search box.
-1. To update the certificate or modify configuration details:
+1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator.
+2. In the left pane, select **Azure Active Directory**.
+3. Select **External Identities**.
+4. Select **All identity providers**.
+5. Under **SAML/WS-Fed identity providers**, scroll to an identity provider in the list or use the search box.
+6. To update the certificate or modify configuration details:
   - In the **Configuration** column for the identity provider, select the **Edit** link.
   - On the configuration page, modify any of the following details:
     - **Display name** - Display name for the partner's organization.
active-directory Facebook Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/facebook-federation.md
Previously updated : 01/06/2023 Last updated : 01/20/2023
To use a Facebook account as an [identity provider](identity-providers.md), you
Now you'll set the Facebook client ID and client secret, either by entering it in the Azure AD portal or by using PowerShell. You can test your Facebook configuration by signing up via a user flow on an app enabled for self-service sign-up.

### To configure Facebook federation in the Azure AD portal
-1. Sign in to the [Azure portal](https://portal.azure.com) as the global administrator of your Azure AD tenant.
+1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator.
2. Under **Azure services**, select **Azure Active Directory**.
3. In the left menu, select **External Identities**.
4. Select **All identity providers**, then select **Facebook**.
active-directory Google Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/google-federation.md
Previously updated : 07/12/2022 Last updated : 01/20/2023
First, create a new project in the Google Developers Console to obtain a client
You'll now set the Google client ID and client secret. You can use the Azure portal or PowerShell to do so. Be sure to test your Google federation configuration by inviting yourself. Use a Gmail address and try to redeem the invitation with your invited Google account.

**To configure Google federation in the Azure portal**
-1. Go to the [Azure portal](https://portal.azure.com). On the left pane, select **Azure Active Directory**.
-2. Select **External Identities**.
-3. Select **All identity providers**, and then select the **Google** button.
-4. Enter the client ID and client secret you obtained earlier. Select **Save**:
+1. Sign in to the [Azure portal](https://portal.azure.com) as an External Identity Provider Administrator or a Global Administrator.
+2. In the left pane, select **Azure Active Directory**.
+3. Select **External Identities**.
+4. Select **All identity providers**, and then select the **Google** button.
+5. Enter the client ID and client secret you obtained earlier. Select **Save**:
![Screenshot that shows the Add Google identity provider page.](media/google-federation/google-identity-provider.png)
active-directory Identity Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/identity-providers.md
Previously updated : 09/14/2022 Last updated : 01/20/2023
External Identities offers a variety of identity providers.
> [!NOTE]
> Federated SAML/WS-Fed IdPs can't be used in your self-service sign-up user flows.
+To configure federation with Google, Facebook, or a SAML/WS-Fed identity provider, you'll need to be an [External Identity Provider Administrator](../roles/permissions-reference.md#external-identity-provider-administrator) or a [Global Administrator](../roles/permissions-reference.md#global-administrator) in your Azure AD tenant.
+ ## Adding social identity providers

Azure AD is enabled by default for self-service sign-up, so users always have the option of signing up using an Azure AD account. However, you can enable other identity providers, including social identity providers like Google or Facebook. To set up social identity providers in your Azure AD tenant, you'll create an application at the identity provider and configure credentials. You'll obtain a client or app ID and a client or app secret, which you can then add to your Azure AD tenant.
active-directory Leave The Organization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/leave-the-organization.md
Title: Leave an organization - Azure Active Directory-
+ Title: Leave an organization as a guest user
+ description: Shows how an Azure AD B2B guest user can leave an organization by using the Access Panel. Previously updated : 12/16/2022 Last updated : 01/17/2023 --++
adobe-target: true
# Leave an organization as an external user
-As an Azure Active Directory (Azure AD) [B2B collaboration](what-is-b2b.md) or [B2B direct connect](b2b-direct-connect-overview.md) user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
+As an Azure Active Directory (Azure AD) B2B collaboration or B2B direct connect user, you can leave an organization at any time if you no longer need to use apps from that organization, or maintain any association.
-You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization. This article is intended for administrators. If you're a user looking for information about how to manage and leave an organization, see the [Manage organizations article.](https://support.microsoft.com/account-billing/manage-organizations-for-a-work-or-school-account-in-the-my-account-portal-a9b65a70-fec5-4a1a-8e00-09f99ebdea17)
+## Before you begin
+You can usually leave an organization on your own without having to contact an administrator. However, in some cases this option won't be available and you'll need to contact your tenant admin, who can delete your account in the external organization. This article is intended for administrators. If you're a user looking for information about how to manage and leave an organization, see the [Manage organizations article.](https://support.microsoft.com/account-billing/manage-organizations-for-a-work-or-school-account-in-the-my-account-portal-a9b65a70-fec5-4a1a-8e00-09f99ebdea17)
## What organizations do I belong to?

1. To view the organizations you belong to, first open your **My Account** page. You either have a work or school account created by an organization or a personal account such as for Xbox, Hotmail, or Outlook.com.
   - If you're using a work or school account, go to https://myaccount.microsoft.com and sign in.
- - If you're using a personal account or email one-time passcode, you'll need to use a My Account URL that includes your tenant name or tenant ID, for example: https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com or https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
+ - If you're using a personal account or email one-time passcode, you'll need to use a My Account URL that includes your tenant name or tenant ID.
+ For example:
+ https://myaccount.microsoft.com?tenantId=wingtiptoys.onmicrosoft.com
+ or
+ https://myaccount.microsoft.com?tenantId=ab123456-cd12-ef12-gh12-ijk123456789.
1. Select **Organizations** from the left navigation pane or select the **Manage organizations** link from the **Organizations** block.
In the **Home organization** section, there's no link to **Leave** your organiza
For the external organizations listed under **Other organizations you collaborate with**, you might not be able to leave on your own, for example when:

- the organization you want to leave doesn't allow users to leave by themselves
- your account has been disabled
Administrators can use the **External user leave settings** to control whether e
- **Yes**: Users can leave the organization themselves without approval from your admin or privacy contact.
- **No**: Users can't leave your organization themselves. They'll see a message guiding them to contact your admin or privacy contact to request removal from your organization.
- :::image type="content" source="media/leave-the-organization/external-user-leave-settings.png" alt-text="Screenshot showing External user leave settings in the portal.":::

### Account removal
If desired, a tenant administrator can permanently delete the account at any tim
1. Select the check box next to a deleted user, and then select **Delete permanently**.
-Permanent deletion can be initiated by the admin, or it happens at the end of the soft deletion period. Permanent deletion can take up to an extra 30 days for data removal ([learn more](/compliance/regulatory/gdpr-dsr-azure#step-5-delete)).
+Permanent deletion can be initiated by the admin, or it happens at the end of the soft deletion period. Permanent deletion can take up to an extra 30 days for data removal.
+
+For B2B direct connect users, data removal begins as soon as the user selects **Leave** in the confirmation message and can take up to 30 days to complete.
-> [!NOTE]
-> For B2B direct connect users, data removal begins as soon as the user selects **Leave** in the confirmation message and can take up to 30 days to complete ([learn more](/compliance/regulatory/gdpr-dsr-azure#delete-a-users-data-when-there-is-no-account-in-the-azure-tenant)).
## Next steps

-- Learn more about [Azure AD B2B collaboration](what-is-b2b.md) and [Azure AD B2B direct connect](b2b-direct-connect-overview.md)
-- [Use audit logs and access reviews](auditing-and-reporting.md)
+- Learn more about [user deletion](/compliance/regulatory/gdpr-dsr-azure#step-5-delete) and about how to delete a user's data when there's [no account in the Azure tenant](/compliance/regulatory/gdpr-dsr-azure#delete-a-users-data-when-there-is-no-account-in-the-azure-tenant).
+- For more information about GDPR, see the GDPR section of the [Service Trust portal](https://servicetrust.microsoft.com/ViewPage/GDPRGetStarted).
active-directory Self Service Sign Up Add Api Connector https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/external-identities/self-service-sign-up-add-api-connector.md
Previously updated : 07/13/2021 Last updated : 01/16/2023
To use an [API connector](api-connectors-overview.md), you first create the API
3. In the left menu, select **External Identities**.
4. Select **All API connectors**, and then select **New API connector**.
- :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-new.png" alt-text="Providing the basic configuration like target URL and display name for an API connector during the creation experience.":::
+ :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-new.png" alt-text="Screenshot of adding a new API connector to External Identities.":::
5. Provide a display name for the call. For example, **Check approval status**.
6. Provide the **Endpoint URL** for the API call.
7. Choose the **Authentication type** and configure the authentication information for calling your API. Learn how to [Secure your API Connector](self-service-sign-up-secure-api-connector.md).
- :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-config.png" alt-text="Providing authentication configuration for an API connector during the creation experience.":::
+ :::image type="content" source="media/self-service-sign-up-add-api-connector/api-connector-config.png" alt-text="Screenshot of configuring an API connector.":::
8. Select **Save**.
Content-type: application/json
}
```
-The exact claims sent to the API depends on which information is provided by the identity provider. 'email' is always sent.
+The exact claims sent to the API depend on which information is provided by the identity provider. 'email' is always sent.
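For a rough sense of the shape, the request body at this step might look like the following sketch (claim values are illustrative, and the exact set varies by identity provider):

```json
{
  "email": "johnsmith@fabrikam.onmicrosoft.com",
  "identities": [
    {
      "signInType": "federated",
      "issuer": "facebook.com",
      "issuerAssignedId": "0123456789"
    }
  ],
  "ui_locales": "en-US"
}
```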
### Expected response types from the web API at this step
Content-type: application/json
"ui_locales":"en-US" } ```
-The exact claims sent to the API depends on which information is collected from the user or is provided by the identity provider.
+The exact claims sent to the API depend on which information is collected from the user or is provided by the identity provider.
### Expected response types from the web API at this step
A blocking response exits the user flow. It can be purposely issued by the API t
See an example of a [blocking response](#example-of-a-blocking-response).

### Validation-error response
- When the API responds with a validation-error response, the user flow stays on the attribute collection page and a `userMessage` is displayed to the user. The user can then edit and resubmit the form. This type of response can be used for input validation.
+ When the API responds with a validation-error response, the user flow stays on the attribute collection page, and a `userMessage` is displayed to the user. The user can then edit and resubmit the form. This type of response can be used for input validation.
See an example of a [validation-error response](#example-of-a-validation-error-response).
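For orientation, a validation-error response generally has the following shape (a minimal sketch; the `userMessage` text is illustrative):

```json
{
  "version": "1.0.0",
  "status": 400,
  "action": "ValidationError",
  "userMessage": "Please enter a valid postal code."
}
```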
Content-type: application/json
| version | String | Yes | The version of your API. |
| action | String | Yes | Value must be `Continue`. |
| \<builtInUserAttribute> | \<attribute-type> | No | Values can be stored in the directory if they're selected as a **Claim to receive** in the API connector configuration and **User attributes** for a user flow. Values can be returned in the token if selected as an **Application claim**. |
-| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim does not need to contain `_<extensions-app-id>_`, it is *optional*. Returned values can overwrite values collected from a user. |
+| \<extension\_{extensions-app-id}\_CustomAttribute> | \<attribute-type> | No | The claim doesn't need to contain `_<extensions-app-id>_`, it's *optional*. Returned values can overwrite values collected from a user. |
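Putting the table together, a minimal continuation response might look like the following sketch (`postalCode` stands in for any built-in user attribute, and the extension claim follows the optional pattern above):

```json
{
  "version": "1.0.0",
  "action": "Continue",
  "postalCode": "12349",
  "extension_CustomAttribute": "value"
}
```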
### Example of a blocking response
Content-type: application/json
{ "version": "1.0.0", "action": "ShowBlockPage",
- "userMessage": "There was a problem with your request. You are not able to sign up at this time.",
+ "userMessage": "There was an error with your request. Please try again or contact support.",
}
```
Ensure that:
* Your API implements an authentication method outlined in [secure your API Connector](self-service-sign-up-secure-api-connector.md).
* Your API responds as quickly as possible to ensure a fluid user experience.
* Azure AD will wait for a maximum of *20 seconds* to receive a response. If none is received, it will make *one more attempt (retry)* at calling your API.
- * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../../azure-functions/functions-scale.md)
+ * If using a serverless function or scalable web service, use a hosting plan that keeps the API "awake" or "warm" in production. For Azure Functions, it's recommended to use at minimum the [Premium plan](../../azure-functions/functions-scale.md#overview-of-plans)
* Ensure high availability of your API.
* Monitor and optimize performance of downstream APIs, databases, or other dependencies of your API.
* Your endpoints must comply with the Azure AD TLS and cipher security requirements. For more information, see [TLS and cipher suite requirements](../../active-directory-b2c/https-cipher-tls-requirements.md).
active-directory 2 Secure Access Current State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/2-secure-access-current-state.md
If your email and network plans are enabled, you can investigate content sharing
* Identify, prevent, and monitor accidental sharing
  * [Learn about data loss prevention](/microsoft-365/compliance/dlp-learn-about-dlp?view=o365-worldwide&preserve-view=true)
* Identify unauthorized apps
- * [Microsoft Defender for Cloud Apps](/security/business/siem-and-xdr/microsoft-defender-cloud-apps?rtc=1)
+ * [Microsoft Defender for Cloud Apps overview](/defender-cloud-apps/what-is-defender-for-cloud-apps)
## Next steps
active-directory 5 Secure Access B2b https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/5-secure-access-b2b.md
By default, Teams allows external access. The organization can communicate with
Sharing through SharePoint and OneDrive adds users not in the Entitlement Management process.

* [Secure external access to Microsoft Teams, SharePoint, and OneDrive for Business](9-secure-access-teams-sharepoint.md)
-* [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office.md)
+* [Block OneDrive use from Office](/office365/troubleshoot/group-policy/block-onedrive-use-from-office)
### Documents in email
active-directory Active Directory Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/active-directory-deployment-plans.md
Previously updated : 01/06/2023 Last updated : 01/17/2023
Use the following list to plan for authentication deployment.
* See, [What is Conditional Access?](../conditional-access/overview.md)
* See, [Plan a Conditional Access deployment](../conditional-access/plan-conditional-access.md)
* **Azure AD self-service password reset (SSPR)** - Help users reset a password without administrator intervention:
- * See, [Passwordless authentication options for Azure AD](/articles/active-directory/authentication/concept-authentication-passwordless.md)
+ * See, [Passwordless authentication options for Azure AD](../authentication/concept-authentication-passwordless.md)
* See, [Plan an Azure Active Directory self-service password reset deployment](../authentication/howto-sspr-deployment.md)
* **Passwordless authentication** - Implement passwordless authentication using the Microsoft Authenticator app or FIDO2 security keys:
* See, [Enable passwordless sign-in with Microsoft Authenticator](../authentication/howto-authentication-passwordless-phone.md)
Use the following list to plan for authentication deployment.
Use the following list to help deploy applications and devices.

* **Single sign-on (SSO)** - Enable user access to apps and resources while signing in once, without being required to enter credentials again:
- * See, [What is SSO in Azure AD?](/articles/active-directory/manage-apps/what-is-single-sign-on.md)
+ * See, [What is SSO in Azure AD?](../manage-apps/what-is-single-sign-on.md)
* See, [Plan a SSO deployment](../manage-apps/plan-sso-deployment.md)
* **My Apps portal** - A web-based portal to discover and access applications. Enable user productivity with self-service, for instance requesting access to groups, or managing access to resources on behalf of others.
* See, [My Apps portal overview](../manage-apps/myapps-overview.md)
Use the following list to help deploy applications and devices.
The following list describes features and services for productivity gains in hybrid scenarios.

* **Active Directory Federation Services (AD FS)** - Migrate user authentication from federation to cloud with pass-through authentication or password hash sync:
- * See, [What is federation with Azure AD?](/articles/active-directory/hybrid/whatis-fed.md)
+ * See, [What is federation with Azure AD?](../hybrid/whatis-fed.md)
* See, [Migrate from federation to cloud authentication](../hybrid/migrate-from-federation-to-cloud-authentication.md)
* **Azure AD Application Proxy** - Enable employees to be productive at any place or time, and from any device. Learn about software as a service (SaaS) apps in the cloud and corporate apps on-premises. Azure AD Application Proxy enables access without virtual private networks (VPNs) or demilitarized zones (DMZs):
- * See, [Remote access to on-premises applications through Azure AD Application Proxy](/articles/active-directory/app-proxy/application-proxy.md)
+ * See, [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md)
* See, [Plan an Azure AD Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md)
* **Seamless single sign-on (Seamless SSO)** - Use Seamless SSO for user sign-in on corporate devices connected to a corporate network. Users don't need to enter passwords to sign in to Azure AD, and usually don't need to enter usernames. Authorized users access cloud-based apps without extra on-premises components:
* See, [Azure Active Directory SSO: Quickstart](../hybrid/how-to-connect-sso-quick-start.md)
- * See, [Azure Active Directory Seamless SSO: Technical deep dive](/articles/active-directory/hybrid/how-to-connect-sso-how-it-works.md)
+ * See, [Azure Active Directory Seamless SSO: Technical deep dive](../hybrid/how-to-connect-sso-how-it-works.md)
## Users
Learn more: [Secure access for a connected worldΓÇömeet Microsoft Entra](https:/
* **Reporting and monitoring** - Your Azure AD reporting and monitoring solution design has dependencies and constraints: legal, security, operations, environment, and processes.
* See, [Azure Active Directory reporting and monitoring deployment dependencies](../reports-monitoring/plan-monitoring-and-reporting.md)
* **Access reviews** - Understand and manage access to resources:
- * See, [What are access reviews?](/articles/active-directory/governance/access-reviews-overview.md)
+ * See, [What are access reviews?](../governance/access-reviews-overview.md)
* See, [Plan a Microsoft Entra access reviews deployment](../governance/deploy-access-reviews.md)
* **Identity governance** - Meet your compliance and risk management objectives for access to critical applications. Learn how to enforce accurate access.
* See, [Govern access for applications in your environment](../governance/identity-governance-applications-prepare.md)
In your first phase, target IT, usability, and other users who can test and prov
Widen the pilot to larger groups of users by using dynamic membership, or by manually adding users to the targeted group(s).
-Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)]
+Learn more: [Dynamic membership rules for groups in Azure Active Directory](../enterprise-users/groups-dynamic-membership.md)
active-directory Azure Active Directory B2c Deployment Plans https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/azure-active-directory-b2c-deployment-plans.md
Previously updated : 1/5/2023 Last updated : 01/17/2023
Technology project success depends on managing expectations, outcomes, and respo
- Ask questions, get answers, and receive notifications
- Identify a partner or resource outside your organization to support you
-Learn more: [Include the right stakeholders](./active-directory-deployment-plans.md)
+Learn more: [Include the right stakeholders](active-directory-deployment-plans.md)
### Communications
active-directory Five Steps To Full Application Integration With Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/five-steps-to-full-application-integration-with-azure-ad.md
In addition, you can integrate application delivery controllers like F5 BIG-IP A
For apps that are built within your company, your developers can use the [Microsoft identity platform](../develop/index.yml) to implement authentication and authorization. Applications integrated with the platform will be [registered with Azure AD](../develop/quickstart-register-app.md) and managed just like any other app in your portfolio.
-Developers can use the platform for both internal-use apps and customer facing apps, and there are other benefits that come with using the platform. [Microsoft Authentication Libraries (MSAL)](../develop/msal-overview.md), which is part of the platform, allows developers to enable modern experiences like multi-factor authentication and the use of security keys to access their apps without needing to implement it themselves. Additionally, apps integrated with the Microsoft identity platform can access [Microsoft Graph](../develop/microsoft-graph-intro.md) - a unified API endpoint providing the Microsoft 365 data that describes the patterns of productivity, identity, and security in an organization. Developers can use this information to implement features that increase productivity for your users. For example, by identifying the people the user has been interacting with recently and surfacing them in the app's UI.
+Developers can use the platform for both internal-use apps and customer-facing apps, and there are other benefits that come with using the platform. [Microsoft Authentication Libraries (MSAL)](../develop/msal-overview.md), which is part of the platform, allows developers to enable modern experiences like multi-factor authentication and the use of security keys to access their apps without needing to implement them themselves. Additionally, apps integrated with the Microsoft identity platform can access [Microsoft Graph](/graph/overview) - a unified API endpoint providing the Azure AD data that describes the patterns of productivity, identity, and security in an organization. Developers can use this information to implement features that increase productivity for your users. For example, by identifying the people the user has been interacting with recently and surfacing them in the app's UI.
We have a [video series](https://www.youtube.com/watch?v=zjezqZPPOfc&amp;list=PLLasX02E8BPBxGouWlJV-u-XZWOc2RkiX) that provides a comprehensive introduction to the platform as well as [many code samples](../develop/sample-v2-code.md) in supported languages and platforms.
active-directory Secure External Access Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/secure-external-access-resources.md
Secure collaboration with your external partners ensures they have correct access to internal resources, and for the expected duration. Learn about governance practices to reduce security risks, meet compliance goals, and ensure accurate access.
+## Governance benefits
+ Governed collaboration improves clarity of ownership of access, reduces exposure of sensitive resources, and enables you to attest to access policy.

* Manage external organizations, and their users who access resources
* Ensure access is correct, reviewed, and time bound
* Empower business owners to manage collaboration with delegation
+## Collaboration methods
+ Traditionally, organizations use one of two methods to collaborate:

* Create locally managed credentials for external users, or
* Establish federations with partner identity providers (IdP)
-
+ Both methods have drawbacks. For more information, see the following table. | Area of concern | Local credentials | Federation |
Both methods have drawbacks. For more information, see the following table.
Azure Active Directory (Azure AD) B2B integrates with other tools in Azure AD, and Microsoft 365 services. Azure AD B2B simplifies collaboration, reduces expense, and increases security.
-Azure AD B2B benefits:
+## Azure AD B2B benefits
- If the home identity is disabled or deleted, external users can't access resources - User home IdP handles authentication and credential management
active-directory Whats New Sovereign Clouds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/fundamentals/whats-new-sovereign-clouds.md
Azure AD receives improvements on an ongoing basis. To stay up to date with the
This page is updated monthly, so revisit it regularly.
+## December 2022
+
+### General Availability - Risk-based Conditional Access for workload identities
+
+**Type:** New feature
+**Service category:** Conditional Access
+**Product capability:** Identity Security & Protection
+
+Customers can now bring one of the most powerful forms of access control in the industry to workload identities. Conditional Access supports risk-based policies for workload identities. Organizations can block sign-in attempts when Identity Protection detects compromised apps or services. For more information, see: [Create a risk-based Conditional Access policy](../conditional-access/workload-identity.md#create-a-risk-based-conditional-access-policy).
+++
+### General Availability - API to recover accidentally deleted Service Principals
+
+**Type:** New feature
+**Service category:** Enterprise Apps
+**Product capability:** Identity Lifecycle Management
+
+Restore a recently deleted application, group, servicePrincipal, administrative unit, or user object from deleted items. If an item was accidentally deleted, you can fully restore the item. This isn't applicable to security groups, which are deleted permanently. A recently deleted item will remain available for up to 30 days. After 30 days, the item is permanently deleted. For more information, see: [servicePrincipal resource type](/graph/api/resources/serviceprincipal).
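For example, a minimal sketch of listing recently deleted service principals and then restoring one by its object ID; the {object-id} placeholder stands for the ID returned by the list call:

```http
GET https://graph.microsoft.com/v1.0/directory/deletedItems/microsoft.graph.servicePrincipal

POST https://graph.microsoft.com/v1.0/directory/deletedItems/{object-id}/restore
```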
+++
+### General Availability - Using Staged rollout to test Cert Based Authentication (CBA)
+
+**Type:** New feature
+**Service category:** Authentications (Logins)
+**Product capability:** Identity Security & Protection
+
+We're excited to announce the general availability of using Staged Rollout to test Azure AD certificate-based authentication (CBA). Staged Rollout lets you move a targeted set of users from federated authentication to cloud authentication, so you can validate CBA for those users before enabling it across the tenant. For more information, see: [Migrate to cloud authentication using Staged Rollout](../hybrid/how-to-connect-staged-rollout.md).
+++
## November 2022
-### General availability - Windows Hello for Business, cloud Kerberos trust deployment
+### General Availability - Windows Hello for Business, cloud Kerberos trust deployment
We're excited to announce the general availability of hybrid cloud Kerberos trus
-### General availability - Expression builder with Application Provisioning
+### General Availability - Expression builder with Application Provisioning
**Type:** Changed feature
**Service category:** Provisioning
Accidental deletion of users in your apps or in your on-premises directory could
-### General availability - SSPR writeback is now available for disconnected forests using Azure AD Connect Cloud sync
+### General Availability - SSPR writeback is now available for disconnected forests using Azure AD Connect Cloud sync
Azure AD Connect Cloud Sync Password writeback now provides customers the abilit
-### General availability - Prevent accidental deletions
+### General Availability - Prevent accidental deletions
For more information, see: [Enable accidental deletions prevention in the Azure
-### General availability - Create group in administrative unit
+### General Availability - Create group in administrative unit
**Type:** New feature
**Service category:** RBAC
Groups Administrators and other roles scoped to an administrative unit can now c
-### General availability - Number matching for Microsoft Authenticator notifications
+### General Availability - Number matching for Microsoft Authenticator notifications
For more information, see: [How to use number matching in multifactor authentica
-### General availability - Additional context in Microsoft Authenticator notifications
+### General Availability - Additional context in Microsoft Authenticator notifications
active-directory Howto Troubleshoot Upn Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/howto-troubleshoot-upn-changes.md
Previously updated : 12/19/2022 Last updated : 01/18/2023
In Active Directory, the default UPN suffix is the domain DNS name where you cre
For example, if you add labs.contoso.com and change the user UPNs and email to reflect that, the result is: username@labs.contoso.com.
->[!IMPORTANT]
-> If you change the suffix in Active Directory, add and verify a matching custom domain name in Azure AD.
-> [Add your custom domain name using the Azure Active Directory portal](../fundamentals/add-custom-domain.md)
+ >[!IMPORTANT]
+ > If you change the suffix in Active Directory, add and verify a matching custom domain name in Azure AD.
+ > [Add your custom domain name using the Azure Active Directory portal](../fundamentals/add-custom-domain.md)
![Screenshot of the Add customer domain option, under Custom domain names.](./media/howto-troubleshoot-upn-changes/custom-domains.png)
Users sign in to Azure AD with their userPrincipalName attribute value.
When you use Azure AD with on-premises Active Directory, user accounts are synchronized by using the Azure AD Connect service. The Azure AD Connect wizard uses the userPrincipalName attribute from the on-premises Active Directory as the UPN in Azure AD. You can change it to a different attribute in a custom installation.
->[!NOTE]
-> Define a process for when you update a User Principal Name (UPN) of a user, or for your organization.
+ >[!NOTE]
+ > Define a process for when you update a User Principal Name (UPN) of a user, or for your organization.
When you synchronize user accounts from Active Directory to Azure AD, ensure the UPNs in Active Directory map to verified domains in Azure AD.
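For illustration, a sketch of updating a user's UPN through Microsoft Graph once the matching custom domain is verified; the object ID is a placeholder, and the call assumes the User.ReadWrite.All permission:

```http
PATCH https://graph.microsoft.com/v1.0/users/{user-object-id}
Content-type: application/json

{ "userPrincipalName": "username@labs.contoso.com" }
```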
Learn more: [How UPN changes affect the OneDrive URL and OneDrive features](/sha
## Teams Meeting Notes known issues and workarounds
-Use Teams Meeting Notes to take and share notes.
-
-Learn more: [Take meeting notes in Teams](/office/take-meeting-notes-in-teams-3eadf032-0ef8-4d60-9e21-0691d317d103).
+Use Teams Meeting Notes to take and share notes.
### Known issues
active-directory Plan Hybrid Identity Design Considerations Data Protection Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-data-protection-strategy.md
na Previously updated : 04/29/2019 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Directory Sync Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-directory-sync-requirements.md
na Previously updated : 07/18/2017 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Hybrid Id Management Tasks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-hybrid-id-management-tasks.md
na Previously updated : 04/29/2019 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Lifecycle Adoption Strategy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-lifecycle-adoption-strategy.md
na Previously updated : 05/30/2018 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Multifactor Auth Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-multifactor-auth-requirements.md
na Previously updated : 07/18/2017 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Nextsteps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-nextsteps.md
na Previously updated : 07/18/2017 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-overview.md
na Previously updated : 05/30/2018 Last updated : 01/19/2023
active-directory Plan Hybrid Identity Design Considerations Tools Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/plan-hybrid-identity-design-considerations-tools-comparison.md
na Previously updated : 04/18/2022 Last updated : 01/19/2023
active-directory Reference Connect Accounts Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-accounts-permissions.md
na Previously updated : 06/02/2021 Last updated : 01/19/2023
active-directory Reference Connect Adconnectivitytools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adconnectivitytools.md
Previously updated : 05/31/2019 Last updated : 01/19/2023
active-directory Reference Connect Adsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsync.md
Previously updated : 11/30/2020 Last updated : 01/19/2023
active-directory Reference Connect Adsyncconfig https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsyncconfig.md
Previously updated : 01/24/2019 Last updated : 01/19/2023
active-directory Reference Connect Adsynctools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-adsynctools.md
Previously updated : 11/30/2020 Last updated : 01/19/2023
active-directory Reference Connect Germany https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-germany.md
na Previously updated : 07/12/2017 Last updated : 01/19/2023
active-directory Reference Connect Government Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-government-cloud.md
Previously updated : 04/14/2020 Last updated : 01/19/2023
active-directory Reference Connect Health User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-user-privacy.md
na Previously updated : 04/26/2018 Last updated : 01/19/2023
active-directory Reference Connect Health Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-health-version-history.md
na Previously updated : 08/10/2020 Last updated : 01/19/2023
The Azure Active Directory team regularly updates Azure AD Connect Health with n
Azure AD Connect Health for Sync is integrated with Azure AD Connect installation. Read more about [Azure AD Connect release history](./reference-connect-version-history.md). For feature feedback, vote at [Connect Health User Voice channel](https://feedback.azure.com/d365community/forum/22920db1-ad25-ec11-b6e6-000d3a4f0789).
+## 19 January 2023
+**Agent Update**
+- Azure AD Connect Health agent for Azure AD Connect (version 3.2.2188.23)
+ - We fixed a bug where, under certain circumstances, Azure AD Connect sync errors were not getting uploaded or shown in the portal.
+
## September 2021
**Agent Update**
- Azure AD Connect Health agent for AD FS (version 3.1.113.0)
active-directory Reference Connect Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-instances.md
na Previously updated : 05/27/2019 Last updated : 01/19/2023
active-directory Reference Connect Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-ports.md
na Previously updated : 03/04/2020 Last updated : 01/19/2023
active-directory Reference Connect Pta Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-pta-version-history.md
ms.assetid: ef2797d7-d440-4a9a-a648-db32ad137494
Previously updated : 04/14/2020 Last updated : 01/19/2023
active-directory Reference Connect Sync Attributes Synchronized https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-attributes-synchronized.md
na Previously updated : 04/15/2020 Last updated : 01/19/2023
active-directory Reference Connect Sync Functions Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-sync-functions-reference.md
na Previously updated : 07/12/2017 Last updated : 01/19/2023
active-directory Reference Connect User Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-user-privacy.md
na Previously updated : 05/21/2018 Last updated : 01/19/2023
active-directory Reference Connect Version History Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history-archive.md
ms.assetid:
Previously updated : 07/23/2020 Last updated : 01/19/2023
active-directory Reference Connect Version History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/reference-connect-version-history.md
This is a bug fix release. There are no functional changes in this release.
## Next steps
-Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
+Learn more about how to [integrate your on-premises identities with Azure AD](whatis-hybrid-identity.md).
active-directory Tshoot Connect Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-connectivity.md
na Previously updated : 01/11/2022 Last updated : 01/19/2023
active-directory Tshoot Connect Install Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-install-issues.md
na Previously updated : 01/31/2019 Last updated : 01/19/2023
active-directory Tshoot Connect Largeobjecterror Usercertificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-largeobjecterror-usercertificate.md
na Previously updated : 07/13/2017 Last updated : 01/19/2023
active-directory Tshoot Connect Object Not Syncing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-object-not-syncing.md
na Previously updated : 08/10/2018 Last updated : 01/19/2023
active-directory Tshoot Connect Objectsync https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-objectsync.md
na Previously updated : 04/29/2019 Last updated : 01/19/2023
active-directory Tshoot Connect Pass Through Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-pass-through-authentication.md
na Previously updated : 01/25/2021 Last updated : 01/19/2023
active-directory Tshoot Connect Password Hash Synchronization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-password-hash-synchronization.md
na Previously updated : 03/13/2017 Last updated : 01/19/2023
active-directory Tshoot Connect Recover From Localdb 10Gb Limit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-recover-from-localdb-10gb-limit.md
na Previously updated : 07/17/2017 Last updated : 01/19/2023
active-directory Tshoot Connect Sso https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sso.md
ms.assetid: 9f994aca-6088-40f5-b2cc-c753a4f41da7
Previously updated : 10/07/2019 Last updated : 01/19/2023
active-directory Tshoot Connect Sync Errors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-sync-errors.md
na Previously updated : 01/21/2022 Last updated : 01/19/2023
active-directory Tshoot Connect Tshoot Sql Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tshoot-connect-tshoot-sql-connectivity.md
na Previously updated : 11/30/2020 Last updated : 01/19/2023
active-directory Tutorial Federation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-federation.md
na Previously updated : 11/11/2022 Last updated : 01/19/2023
active-directory Tutorial Phs Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/tutorial-phs-backup.md
Previously updated : 04/25/2019 Last updated : 01/19/2023
active-directory What Is Inter Directory Provisioning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/what-is-inter-directory-provisioning.md
Previously updated : 10/30/2020 Last updated : 01/19/2023
Azure AD currently supports three methods for accomplishing inter-directory prov
- [Azure AD Connect](whatis-azure-ad-connect.md) - the Microsoft tool designed to meet and accomplish your hybrid identity, including inter-directory provisioning from Active Directory to Azure AD. -- [Azure AD Connect Cloud Provisioning](../cloud-sync/what-is-cloud-sync.md) -a new Microsoft agent designed to meet and accomplish your hybrid identity goals. It is provides a light-weight inter -directory provisioning experience between Active Directory and Azure AD.
+- [Azure AD Connect cloud sync](../cloud-sync/what-is-cloud-sync.md) - a new Microsoft agent designed to meet and accomplish your hybrid identity goals. It provides a lightweight inter-directory provisioning experience between Active Directory and Azure AD.
- [Microsoft Identity Manager](/microsoft-identity-manager/microsoft-identity-manager-2016) - Microsoft's on-premises identity and access management solution that helps you manage the users, credentials, policies, and access within your organization. Additionally, MIM provides advanced inter-directory provisioning to achieve hybrid identity environments for Active Directory, Azure AD, and other directories.
active-directory Whatis Aadc Admin Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-aadc-admin-agent.md
Previously updated : 06/30/2022 Last updated : 01/19/2023
active-directory Whatis Azure Ad Connect V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect-v2.md
Previously updated : 12/2/2022 Last updated : 01/19/2023
active-directory Whatis Azure Ad Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-azure-ad-connect.md
Previously updated : 10/06/2021 Last updated : 01/19/2023
active-directory Whatis Fed https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-fed.md
na Previously updated : 11/28/2018 Last updated : 01/19/2023
active-directory Whatis Hybrid Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-hybrid-identity.md
ms.assetid: 59bd209e-30d7-4a89-ae7a-e415969825ea
Previously updated : 05/17/2019 Last updated : 01/19/2023
active-directory Whatis Phs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/hybrid/whatis-phs.md
Previously updated : 06/25/2020 Last updated : 01/19/2023
active-directory Create Service Principal Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/create-service-principal-cross-tenant.md
In this article, you'll learn how to create an enterprise application in your te
Before you proceed to add the application using any of these options, check whether the enterprise application is already in your tenant by attempting to sign in to the application. If the sign-in is successful, the enterprise application already exists in your tenant.
-If you have verified that the application isn't in your tenant, proceed with any of the following ways to add the enterprise application to your tenant using the appId
+If you have verified that the application isn't in your tenant, proceed with any of the following ways to add the enterprise application to your tenant.
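As a quick alternative to signing in, a sketch of a Microsoft Graph query that looks up a service principal by its appId; a 404 response means the application isn't in your tenant yet (the appId shown is a placeholder):

```http
GET https://graph.microsoft.com/v1.0/servicePrincipals(appId='fc876dd1-6bcb-4304-b9b6-18ddf1526b62')
```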
## Prerequisites
To add an enterprise application to your Azure AD tenant, you need:
- An Azure AD user account. If you don't already have one, you can [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F). - One of the following roles: Global Administrator, Cloud Application Administrator, or Application Administrator.-- The client ID of the multi-tenant application.
+- The client ID (also called appId in Microsoft Graph) of the multi-tenant application.
## Create an enterprise application
where:
:::zone-end :::zone pivot="ms-graph"
-From the Microsoft Graph explorer window:
+You can use an API client such as [Graph Explorer](https://aka.ms/ge) to work with Microsoft Graph.
-1. To create the enterprise application, insert the following query:
+1. Grant the client app the *Application.ReadWrite.All* permission.
+
+1. To create the enterprise application, run the following query. The appId is the client ID of the application.
```http
- POST /servicePrincipals.
- ```
-1. Supply the following request in the **Request body**.
-
+ POST https://graph.microsoft.com/v1.0/servicePrincipals
+ Content-type: application/json
+
{ "appId": "fc876dd1-6bcb-4304-b9b6-18ddf1526b62" }
-1. Grant the Application.ReadWrite.All permission under the **Modify permissions** tab and select **Run query**.
+
+ ```
-1. To delete the enterprise application you created, run the query:
+1. To delete the enterprise application you created, run the query.
```http
- DELETE /servicePrincipals/{objectID}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals(appId='fc876dd1-6bcb-4304-b9b6-18ddf1526b62')
``` :::zone-end :::zone pivot="azure-cli"
active-directory Delete Application Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/delete-application-portal.md
To delete an enterprise application, you need:
:::zone pivot="ms-graph" Delete an enterprise application using [Graph Explorer](https://developer.microsoft.com/graph/graph-explorer).
-1. To get the list of applications in your tenant, run the following query.
+1. To get the list of service principals in your tenant, run the following query.
+ ```http
- GET /servicePrincipals
+ GET https://graph.microsoft.com/v1.0/servicePrincipals
   ```
+
1. Record the ID of the enterprise app you want to delete.
1. Delete the enterprise application.
-
+ ```http
- DELETE /servicePrincipals/{id}
+ DELETE https://graph.microsoft.com/v1.0/servicePrincipals/{servicePrincipal-id}
   ```
+
:::zone-end

## Next steps

- [Restore a deleted enterprise application](restore-application.md)
active-directory End User Experiences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/end-user-experiences.md
As an admin, you can choose to try out new app launcher features while they are
To enable or disable previews for your app launchers: -- Sign in to the Azure portal as a global administrator for your directory.
+- Sign in to the Azure portal as a global administrator, application administrator, or cloud application administrator for your directory.
- Search for and select **Azure Active Directory**, then select **Enterprise applications**. - On the left menu, select **App launchers**, then select **Settings**. - Under **Preview settings**, toggle the checkboxes for the previews you want to enable or disable. To opt into a preview, toggle the associated checkbox to the checked state. To opt out of a preview, toggle the associated checkbox to the unchecked state.
active-directory Hide Application From User Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/hide-application-from-user-portal.md
Use the following steps to hide all Microsoft 365 applications from the My Apps
1. Sign in to the [Azure portal](https://portal.azure.com) as a global administrator for your directory. 1. Select **Azure Active Directory**. 1. Select **Enterprise applications**.
-1. Select **User settings**.
-1. For **Users can only see Office 365 apps in the Office 365 portal**, select **Yes**.
-1. Select **Save**.
+1. Select **App launchers**.
+1. Select **Settings**.
+1. For **Users can only see Microsoft 365 apps in the Microsoft 365 portal**, select **Yes**.
+1. Select **Save**.
:::zone-end ## Next steps
active-directory Secure Hybrid Access Integrations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access-integrations.md
Previously updated : 12/16/2022 Last updated : 01/19/2023
Azure Active Directory (Azure AD) supports modern authentication protocols that help keep applications secure. However, many business applications work in a protected corporate network, and some use legacy authentication methods. As companies build Zero Trust strategies and support hybrid and cloud environments, there are solutions that connect apps to Azure AD and provide authentication for legacy applications.
-Learn more: [Zero Trust Deployment Guide for Microsoft Azure Active Directory](/security/blog/2020/04/30/zero-trust-deployment-guide-azure-active-directory/)
+Learn more: [Zero Trust security](../../security/fundamentals/zero-trust.md)
Azure AD natively supports modern protocols:
After the SaaS applications are registered in Azure AD, the applications need to
### Connect apps to Azure AD with legacy authentication
-Your solution can enable the customer to use SSO and Azure Active Directory features, even unsupported applications. To allow access with legacy protocols, your application calls Azure AD to authenticate the user and apply Azure AD Conditional Access policies. Enable this integration from your console. Create a SAML or an OIDC application registration between your solution and Azure AD.
+Your solution can enable the customer to use SSO and Azure Active Directory features, even unsupported applications. To allow access with legacy protocols, your application calls Azure AD to authenticate the user and apply [Azure AD Conditional Access policies](../conditional-access/overview.md). Enable this integration from your console. Create a SAML or an OIDC application registration between your solution and Azure AD.
#### Create a SAML application registration
https://graph.microsoft.com/v1.0/applications/{Application Object ID}
### Apply Conditional Access policies
-Customers and partners can use the Microsoft Graph API to create or apply Conditional Access policies to customer applications. For partners, customers can apply these policies from your solution without using the Azure portal. There are two options to apply Azure AD Conditional Access policies:
+Customers and partners can use the Microsoft Graph API to create or apply per-application [Conditional Access policies](../conditional-access/overview.md). For partners, customers can apply these policies from your solution without using the Azure portal. There are two options to apply Azure AD Conditional Access policies (see the sketch after this list):
-- Assign the application to a Conditional Access policy-- Create a new Conditional Access policy and assign the application to it
+- [Assign the application to a Conditional Access policy](#use-a-conditional-access-policy)
+- [Create a new Conditional Access policy and assign the application to it](#create-a-new-conditional-access-policy)
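For example, a minimal sketch of the second option: creating a policy in report-only mode that requires MFA for a single application. The display name and client ID are placeholders:

```http
POST https://graph.microsoft.com/v1.0/identity/conditionalAccess/policies
Content-type: application/json

{
  "displayName": "Require MFA for the partner-integrated app",
  "state": "enabledForReportingButNotEnforced",
  "conditions": {
    "users": { "includeUsers": ["All"] },
    "applications": { "includeApplications": ["{application-client-id}"] }
  },
  "grantControls": { "operator": "OR", "builtInControls": ["mfa"] }
}
```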
#### Use a Conditional Access policy
The following software-defined perimeter (SDP) solutions providers connect with
* **Strata Maverics Identity Orchestrator** * [Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md) * **Zscaler Private Access**
- * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
+ * [Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)
active-directory Secure Hybrid Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/manage-apps/secure-hybrid-access.md
Title: Secure hybrid access
-description: This article describes partner solutions for integrating your legacy on-premises, public cloud, or private cloud applications with Azure AD.
+ Title: Secure hybrid access, protect legacy apps with Azure Active Directory
+description: Find partner solutions to integrate your legacy on-premises, public cloud, or private cloud applications with Azure AD.
Previously updated : 8/17/2021 Last updated : 01/17/2023
-# Secure hybrid access: Secure legacy apps with Azure Active Directory
+# Secure hybrid access: Protect legacy apps with Azure Active Directory
-You can now protect your on-premises and cloud legacy authentication applications by connecting them to Azure Active Directory (AD) with:
+In this article, learn to protect your on-premises and cloud legacy authentication applications by connecting them to Azure Active Directory (Azure AD).
-- [Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy)
+* **[Application Proxy](#secure-hybrid-access-with-application-proxy)**:
+ * [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md)
+ * Protect users, apps, and data in the cloud and on-premises
+ * [Use it to publish on-premises web applications externally](../app-proxy/what-is-application-proxy.md)
+
+* **[Secure hybrid access through Azure AD partner integrations](#partner-integrations-for-apps-on-premises-and-legacy-authentication)**:
+ * [Pre-built solutions](#secure-hybrid-access-through-azure-ad-partner-integrations)
+ * [Apply Conditional Access policies per application](secure-hybrid-access-integrations.md#apply-conditional-access-policies)
+
+In addition to Application Proxy, you can strengthen your security posture with [Azure AD Conditional Access](../conditional-access/overview.md) and [Identity Protection](../identity-protection/overview-identity-protection.md).
-- [Secure hybrid access: Secure legacy apps with Azure Active Directory](#secure-hybrid-access-secure-legacy-apps-with-azure-active-directory)
- - [Secure hybrid access through Azure AD Application Proxy](#secure-hybrid-access-through-azure-ad-application-proxy)
- - [Secure hybrid access through Azure AD partner integrations](#secure-hybrid-access-through-azure-ad-partner-integrations)
+## Single sign-on and multi-factor authentication
-You can bridge the gap and strengthen your security posture across all applications with Azure AD capabilities like [Azure AD Conditional Access](../conditional-access/overview.md) and [Azure AD Identity Protection](../identity-protection/overview-identity-protection.md). By having Azure AD as an Identity provider (IDP), you can use modern authentication and authorization methods like [single sign-on (SSO)](what-is-single-sign-on.md) and [multifactor authentication (MFA)](../authentication/concept-mfa-howitworks.md) to secure your on-premises legacy applications.
+With Azure AD as an identity provider (IdP), you can use modern authentication and authorization methods like [single sign-on (SSO)](what-is-single-sign-on.md) and [Azure AD Multi-Factor Authentication (MFA)](../authentication/concept-mfa-howitworks.md) to secure legacy, on-premises applications.
-## Secure hybrid access through Azure AD Application Proxy
+## Secure hybrid access with Application Proxy
-Using [Application Proxy](../app-proxy/what-is-application-proxy.md) you can provide [secure remote access](../app-proxy/application-proxy-add-on-premises-application.md) to your on-premises web applications. Your users don't need to use a VPN. Users benefit by easily connecting to their applications from any device after a [SSO](../app-proxy/application-proxy-config-sso-how-to.md#how-to-configure-single-sign-on). Application Proxy provides remote access as a service and allows you to [easily publish your applications](../app-proxy/application-proxy-add-on-premises-application.md) to users outside the corporate network. It helps you scale your cloud access management without requiring you to modify your on-premises applications. [Plan an Azure AD Application Proxy](../app-proxy/application-proxy-deployment-plan.md) deployment as a next step.
+Use Application Proxy to protect users, apps, and data in the cloud, and on premises. Use this tool for secure remote access to on-premises web applications. Users don't need to use a virtual private network (VPN); they connect to applications from devices with SSO.
-## Secure hybrid access through Azure AD partner integrations
+Learn more:
-In addition to [Azure AD Application Proxy](../app-proxy/what-is-application-proxy.md), Microsoft partners with third-party providers to enable secure access to your on-premises applications and applications that use legacy authentication.
+* [Remote access to on-premises applications through Azure AD Application Proxy](../app-proxy/application-proxy.md)
+* [Tutorial: Add an on-premises application for remote access through Application Proxy in Azure AD](../app-proxy/application-proxy-add-on-premises-application.md)
+* [How to configure SSO to an Application Proxy application](../app-proxy/application-proxy-config-sso-how-to.md)
+* [Using Azure AD Application Proxy to publish on-premises apps for remote users](../app-proxy/what-is-application-proxy.md)
-![Illustration of Secure Hybrid Access partner integrations and Application Proxy providing access to legacy and on-premises applications after authentication with Azure AD.](./media/secure-hybrid-access/secure-hybrid-access.png)
+### Application publishing and access management
-The following partners offer pre-built solutions to support **conditional access policies per application** and provide detailed guidance for integrating with Azure AD.
+Use Application Proxy remote access as a service to publish applications to users outside the corporate network. Help improve your cloud access management without requiring modification to your on-premises applications. Plan an [Azure AD Application Proxy deployment](../app-proxy/application-proxy-deployment-plan.md).
-- [Akamai Enterprise Application Access](../saas-apps/akamai-tutorial.md)
+## Partner integrations for apps: on-premises and legacy authentication
-- [Citrix Application Delivery Controller (ADC)](../saas-apps/citrix-netscaler-tutorial.md)
+Microsoft partners with various companies that deliver pre-built solutions for on-premises applications, and applications that use legacy authentication. The following diagram illustrates a user flow from sign-in to secure access to apps and data.
-- [Datawiza Access Broker](../manage-apps/datawiza-with-azure-ad.md)
+ ![Diagram of secure hybrid access integrations and Application Proxy providing user access.](./media/secure-hybrid-access/secure-hybrid-access.png)
-- [F5 BIG-IP APM (ADC)](../manage-apps/f5-aad-integration.md)
+### Secure hybrid access through Azure AD partner integrations
-- [F5 BIG-IP APM VPN](../manage-apps/f5-aad-password-less-vpn.md)
+The following partners offer solutions to support [Conditional Access policies per application](secure-hybrid-access-integrations.md#apply-conditional-access-policies). Use the tables in the following sections to learn about the partners and Azure AD integration documentation.
-- [Kemp](../saas-apps/kemp-tutorial.md)
+|Partner|Integration documentation|
+| --- | --- |
+|Akamai Technologies|[Tutorial: Azure AD SSO integration with Akamai](../saas-apps/akamai-tutorial.md)|
+|Citrix Systems, Inc.|[Tutorial: Azure AD SSO integration with Citrix ADC SAML Connector for Azure AD (Kerberos-based authentication)](../saas-apps/citrix-netscaler-tutorial.md)|
+|Datawiza|[Tutorial: Configure Secure Hybrid Access with Azure AD and Datawiza](datawiza-with-azure-ad.md)|
+|F5, Inc.|[Integrate F5 BIG-IP with Azure AD](f5-aad-integration.md)</br>[Tutorial: Configure F5 BIG-IP SSL-VPN for Azure AD SSO](f5-aad-password-less-vpn.md)|
+|Progress Software Corporation, Progress Kemp|[Tutorial: Azure AD SSO integration with Kemp LoadMaster Azure AD integration](../saas-apps/kemp-tutorial.md)|
+|Perimeter 81 Ltd.|[Tutorial: Azure AD SSO integration with Perimeter 81](../saas-apps/perimeter-81-tutorial.md)|
+|Silverfort|[Tutorial: Configure Secure Hybrid Access with Azure AD and Silverfort](silverfort-azure-ad-integration.md)|
+|Strata Identity, Inc.|[Integrate Azure AD SSO with Maverics Identity Orchestrator SAML Connector](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md)|
-- [Perimeter 81](../saas-apps/perimeter-81-tutorial.md)
+#### Partners with pre-built solutions and integration documentation
-- [Silverfort Authentication Platform](../manage-apps/silverfort-azure-ad-integration.md)
+|Partner|Integration documentation|
+| --- | --- |
+|Amazon Web Service, Inc.|[Tutorial: Azure AD SSO integration with AWS ClientVPN](../saas-apps/aws-clientvpn-tutorial.md)|
+|Check Point Software Technologies Ltd.|[Tutorial: Azure AD single SSO integration with Check Point Remote Secure Access VPN](../saas-apps/check-point-remote-access-vpn-tutorial.md)|
+|Cisco Systems, Inc.|[Tutorial: Azure AD SSO integration with Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)|
+|Cloudflare, Inc.|[Tutorial: Configure Cloudflare with Azure AD for secure hybrid access](cloudflare-azure-ad-integration.md)|
+|Fortinet, Inc.|[Tutorial: Azure AD SSO integration with FortiGate SSL VPN](../saas-apps/fortigate-ssl-vpn-tutorial.md)|
+|Palo Alto Networks|[Tutorial: Azure AD SSO integration with Palo Alto Networks Admin UI](../saas-apps/paloaltoadmin-tutorial.md)|
+|Pulse Secure|[Tutorial: Azure AD SSO integration with Pulse Connect Secure (PCS)](../saas-apps/pulse-secure-pcs-tutorial.md)</br>[Tutorial: Azure AD SSO integration with Pulse Secure Virtual Traffic Manager](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)|
+|Zscaler, Inc.|[Tutorial: Integrate Zscaler Private Access with Azure AD](../saas-apps/zscalerprivateaccess-tutorial.md)|
-- [Strata](../saas-apps/maverics-identity-orchestrator-saml-connector-tutorial.md)
+## Next steps
+Select a partner in the preceding tables to learn how to integrate their solution with Azure AD.
-The following partners offer pre-built solutions and detailed guidance for integrating with Azure AD.
--- [AWS](../saas-apps/aws-clientvpn-tutorial.md)--- [Check Point](../saas-apps/check-point-remote-access-vpn-tutorial.md)--- [Cisco AnyConnect](../saas-apps/cisco-anyconnect.md)--- [Cloudflare](../manage-apps/cloudflare-azure-ad-integration.md)--- [Fortinet](../saas-apps/fortigate-ssl-vpn-tutorial.md)--- [Palo Alto Networks Global Protect](../saas-apps/paloaltoadmin-tutorial.md)--- [Pulse Secure Pulse Connect Secure (PCS)](../saas-apps/pulse-secure-pcs-tutorial.md)--- [Pulse Secure Virtual Traffic Manager (VTM)](../saas-apps/pulse-secure-virtual-traffic-manager-tutorial.md)--- [Zscaler Private Access (ZPA)](../saas-apps/zscalerprivateaccess-tutorial.md)
active-directory Plan Monitoring And Reporting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/reports-monitoring/plan-monitoring-and-reporting.md
Previously updated : 12/19/2022 Last updated : 01/20/2023 # Customer intent: For an Azure AD administrator to monitor logs and report on access
Learn more:
#### Stream logs to storage and SIEM tools * [Integrate Azure AD logs with Azure Monitor logs](./howto-integrate-activity-logs-with-log-analytics.md).
-* [Analyze Azure AD activity logs with Azure Monitor logs](/MicrosoftDocs/azure-docs/blob/main/articles/active-directory/reports-monitoring/howto-analyze-activity-logs-log-analytics.md).
+* [Analyze Azure AD activity logs with Azure Monitor logs](../reports-monitoring/howto-analyze-activity-logs-log-analytics.md).
* Learn how to [stream logs to an event hub](./tutorial-azure-monitor-stream-logs-to-event-hub.md). * Learn how to [Archive Azure AD logs to an Azure Storage account](./quickstart-azure-monitor-route-logs-to-storage-account.md). * [Integrate Azure AD logs with Splunk by using Azure Monitor](./howto-integrate-activity-logs-with-splunk.md)
active-directory Security Emergency Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/roles/security-emergency-access.md
Some organizations use AD Domain Services and AD FS or similar identity provider
Organizations need to ensure that the credentials for emergency access accounts are kept secure and known only to individuals who are authorized to use them. Some customers use a smartcard for Windows Server AD, a [FIDO2 security key](../authentication/howto-authentication-passwordless-security-key.md) for Azure AD, and others use passwords. A password for an emergency access account is usually separated into two or three parts, written on separate pieces of paper, and stored in secure, fireproof safes that are in secure, separate locations.
-If using passwords, make sure the accounts have strong passwords that do not expire the password. Ideally, the passwords should be at least 16 characters long and randomly generated.
+If using passwords, make sure the accounts have strong passwords that do not expire. Ideally, the passwords should be at least 16 characters long and randomly generated.
## Monitor sign-in and audit logs
active-directory Firstbird Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/firstbird-tutorial.md
- Title: 'Tutorial: Azure Active Directory single sign-on (SSO) integration with Firstbird | Microsoft Docs'
-description: Learn how to configure single sign-on between Azure Active Directory and Firstbird.
-------- Previously updated : 11/21/2022---
-# Tutorial: Azure Active Directory single sign-on (SSO) integration with Firstbird
-
-In this tutorial, you'll learn how to integrate Firstbird with Azure Active Directory (Azure AD). When you integrate Firstbird with Azure AD, you can:
-
-* Control in Azure AD who has access to Firstbird.
-* Enable your users to be automatically signed-in to Firstbird with their Azure AD accounts.
-* Manage your accounts in one central location - the Azure portal.
-
-To learn more about SaaS app integration with Azure AD, see [What is application access and single sign-on with Azure Active Directory](../manage-apps/what-is-single-sign-on.md).
-
-## Prerequisites
-
-To get started, you need the following items:
-
-* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
-* Firstbird single sign-on (SSO) enabled subscription.
-
-## Scenario description
-
-In this tutorial, you configure and test Azure AD SSO in a test environment.
---
-* Firstbird supports **SP and IDP** initiated SSO
-* Firstbird supports **Just In Time** user provisioning
--
-## Adding Firstbird from the gallery
-
-To configure the integration of Firstbird into Azure AD, you need to add Firstbird from the gallery to your list of managed SaaS apps.
-
-1. Sign in to the [Azure portal](https://portal.azure.com) using either a work or school account, or a personal Microsoft account.
-1. On the left navigation pane, select the **Azure Active Directory** service.
-1. Navigate to **Enterprise Applications** and then select **All Applications**.
-1. To add new application, select **New application**.
-1. In the **Add from the gallery** section, type **Firstbird** in the search box.
-1. Select **Firstbird** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
-
- Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, as well as walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
--
-## Configure and test Azure AD single sign-on for Firstbird
-
-Configure and test Azure AD SSO with Firstbird using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Firstbird.
-
-To configure and test Azure AD SSO with Firstbird, complete the following building blocks:
-
-1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
- 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
- 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
-1. **[Configure Firstbird SSO](#configure-firstbird-sso)** - to configure the single sign-on settings on application side.
- 1. **[Create Firstbird test user](#create-firstbird-test-user)** - to have a counterpart of B.Simon in Firstbird that is linked to the Azure AD representation of user.
-1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
-
-## Configure Azure AD SSO
-
-Follow these steps to enable Azure AD SSO in the Azure portal.
-
-1. In the [Azure portal](https://portal.azure.com/), on the **Firstbird** application integration page, find the **Manage** section and select **single sign-on**.
-1. On the **Select a single sign-on method** page, select **SAML**.
-1. On the **Set up single sign-on with SAML** page, click the edit/pen icon for **Basic SAML Configuration** to edit the settings.
-
- ![Edit Basic SAML Configuration](common/edit-urls.png)
-
-1. On the **Basic SAML Configuration** section, if you wish to configure the application in **IDP** initiated mode, enter the values for the following fields:
-
- a. In the **Identifier** text box, type a URL using the following pattern:
- `https://<company-domain>.auth.1brd.com/saml/sp`
-
- b. In the **Reply URL** text box, type a URL using the following pattern:
- `https://<company-domain>.auth.1brd.com/saml/callback`
-
-1. Click **Set additional URLs** and perform the following step if you wish to configure the application in **SP** initiated mode:
-
- In the **Sign-on URL** text box, type a URL using the following pattern:
- `https://<company-domain>.1brd.com/login`
-
- > [!NOTE]
- > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Firstbird Client support team](mailto:support@firstbird.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
-
-1. Firstbird application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
-
- ![image](common/edit-attribute.png)
-
-1. In addition to above, Firstbird application expects few more attributes to be passed back in SAML response which are shown below. These attributes are also pre populated but you can review them as per your requirement.
-
- | Name | Source Attribute|
- | | |
- | first_name | `user.givenname` |
- | last_name | `user.surname` |
- | email | `user.mail` |
-
-1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
-
- ![The Certificate download link](common/metadataxml.png)
-
-1. On the **Set up Firstbird** section, copy the appropriate URL(s) based on your requirement.
-
- ![Copy configuration URLs](common/copy-configuration-urls.png)
-
-### Create an Azure AD test user
-
-In this section, you'll create a test user in the Azure portal called B.Simon.
-
-1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
-1. Select **New user** at the top of the screen.
-1. In the **User** properties, follow these steps:
- 1. In the **Name** field, enter `B.Simon`.
- 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
- 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
- 1. Click **Create**.
-
-### Assign the Azure AD test user
-
-In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Firstbird.
-
-1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
-1. In the applications list, select **Firstbird**.
-1. In the app's overview page, find the **Manage** section and select **Users and groups**.
-
- ![The "Users and groups" link](common/users-groups-blade.png)
-
-1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
-
- ![The Add User link](common/add-assign-user.png)
-
-1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
-1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
-1. In the **Add Assignment** dialog, click the **Assign** button.
-
-## Configure Firstbird SSO
-
-Once you have completed these steps, please send Firstbird the Federation Metadata XML in a support request via e-email to [support@firstbird.com](mailto:support@firstbird.com) with the subject: "SSO configuration".
-
-Firstbird will then store the configuration in the system accordingly and activate SSO for your account. After that, a member of the support staff will contact you to verify the configuration.
-
-> [!NOTE]
-> You need to have the SSO option included in your contract.
-
-### Create Firstbird test user
-
-In this section, a user called B.Simon is created in Firstbird. Firstbird supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Firstbird, a new one is created after authentication.
-
-## Test SSO
-
-In this section, you test your Azure AD single sign-on configuration using the Access Panel.
-
-When you click the Firstbird tile in the Access Panel, you should be automatically signed in to the Firstbird for which you set up SSO. For more information about the Access Panel, see [Introduction to the Access Panel](https://support.microsoft.com/account-billing/sign-in-and-start-apps-from-the-my-apps-portal-2f3b1bae-0e5a-4a86-a33e-876fbd2a4510).
-
-## Additional resources
--- [ List of Tutorials on How to Integrate SaaS Apps with Azure Active Directory ](./tutorial-list.md)--- [What is application access and single sign-on with Azure Active Directory? ](../manage-apps/what-is-single-sign-on.md)--- [What is conditional access in Azure Active Directory?](../conditional-access/overview.md)--- [Try Firstbird with Azure AD](https://aad.portal.azure.com/)
active-directory Radancys Employee Referrals Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/saas-apps/radancys-employee-referrals-tutorial.md
+
+ Title: Azure AD SSO integration with Radancy's Employee Referrals
+description: Learn how to configure single sign-on between Azure Active Directory and Radancy's Employee Referrals.
++++++++ Last updated : 01/19/2023++++
+# Azure AD SSO integration with Radancy's Employee Referrals
+
+In this tutorial, you'll learn how to integrate Radancy's Employee Referrals with Azure Active Directory (Azure AD). When you integrate Radancy's Employee Referrals with Azure AD, you can:
+
+* Control in Azure AD who has access to Radancy's Employee Referrals.
+* Enable your users to be automatically signed-in to Radancy's Employee Referrals with their Azure AD accounts.
+* Manage your accounts in one central location - the Azure portal.
+
+## Prerequisites
+
+To get started, you need the following items:
+
+* An Azure AD subscription. If you don't have a subscription, you can get a [free account](https://azure.microsoft.com/free/).
+* Radancy's Employee Referrals single sign-on (SSO) enabled subscription.
+
+## Scenario description
+
+In this tutorial, you configure and test Azure AD SSO in a test environment.
+
+* Radancy's Employee Referrals supports **SP and IDP** initiated SSO.
+* Radancy's Employee Referrals supports **Just In Time** user provisioning.
+
+## Add Radancy's Employee Referrals from the gallery
+
+To configure the integration of Radancy's Employee Referrals into Azure AD, you need to add Radancy's Employee Referrals from the gallery to your list of managed SaaS apps.
+
+1. Sign in to the Azure portal using either a work or school account, or a personal Microsoft account.
+1. On the left navigation pane, select the **Azure Active Directory** service.
+1. Navigate to **Enterprise Applications** and then select **All Applications**.
+1. To add new application, select **New application**.
+1. In the **Add from the gallery** section, type **Radancy's Employee Referrals** in the search box.
+1. Select **Radancy's Employee Referrals** from results panel and then add the app. Wait a few seconds while the app is added to your tenant.
+
+Alternatively, you can also use the [Enterprise App Configuration Wizard](https://portal.office.com/AdminPortal/home?Q=Docs#/azureadappintegration). In this wizard, you can add an application to your tenant, add users/groups to the app, assign roles, and walk through the SSO configuration as well. [Learn more about Microsoft 365 wizards.](/microsoft-365/admin/misc/azure-ad-setup-guides)
+
+## Configure and test Azure AD SSO for Radancy's Employee Referrals
+
+Configure and test Azure AD SSO with Radancy's Employee Referrals using a test user called **B.Simon**. For SSO to work, you need to establish a link relationship between an Azure AD user and the related user in Radancy's Employee Referrals.
+
+To configure and test Azure AD SSO with Radancy's Employee Referrals, perform the following steps:
+
+1. **[Configure Azure AD SSO](#configure-azure-ad-sso)** - to enable your users to use this feature.
+ 1. **[Create an Azure AD test user](#create-an-azure-ad-test-user)** - to test Azure AD single sign-on with B.Simon.
+ 1. **[Assign the Azure AD test user](#assign-the-azure-ad-test-user)** - to enable B.Simon to use Azure AD single sign-on.
+1. **[Configure Radancy's Employee Referrals SSO](#configure-radancys-employee-referrals-sso)** - to configure the single sign-on settings on application side.
+ 1. **[Create Radancy's Employee Referrals test user](#create-radancys-employee-referrals-test-user)** - to have a counterpart of B.Simon in Radancy's Employee Referrals that is linked to the Azure AD representation of the user.
+1. **[Test SSO](#test-sso)** - to verify whether the configuration works.
+
+## Configure Azure AD SSO
+
+Follow these steps to enable Azure AD SSO in the Azure portal.
+
+1. In the Azure portal, on the **Radancy's Employee Referrals** application integration page, find the **Manage** section and select **single sign-on**.
+1. On the **Select a single sign-on method** page, select **SAML**.
+1. On the **Set up single sign-on with SAML** page, click the pencil icon for **Basic SAML Configuration** to edit the settings.
+
+ ![Screenshot shows how to edit Basic SAML Configuration.](common/edit-urls.png "Basic Configuration")
+
+1. On the **Basic SAML Configuration** section, perform the following steps:
+
+ a. In the **Identifier** text box, type a URL using the following pattern:
+ `https://<company-domain>.auth.1brd.com/saml/sp`
+
+ b. In the **Reply URL** text box, type a URL using the following pattern:
+ `https://<company-domain>.auth.1brd.com/saml/callback`
+
+1. Perform the following step if you wish to configure the application in **SP** initiated mode:
+
+ In the **Sign-on URL** text box, type a URL using the following pattern:
+ `https://<company-domain>.1brd.com/login`
+
+ > [!NOTE]
+ > These values are not real. Update these values with the actual Identifier, Reply URL and Sign-on URL. Contact [Radancy's Employee Referrals Client support team](mailto:support@firstbird.com) to get these values. You can also refer to the patterns shown in the **Basic SAML Configuration** section in the Azure portal.
+
+1. Radancy's Employee Referrals application expects the SAML assertions in a specific format, which requires you to add custom attribute mappings to your SAML token attributes configuration. The following screenshot shows the list of default attributes.
+
+ ![Screenshot shows the image of token attributes configuration.](common/edit-attribute.png "Image")
+
+1. In addition to the above, the Radancy's Employee Referrals application expects a few more attributes to be passed back in the SAML response, which are shown below. These attributes are also prepopulated, but you can review them as per your requirement.
+
+ | Name | Source Attribute|
+ | | |
+ | first_name | user.givenname |
+ | last_name | user.surname |
+ | email | user.mail |
+
+1. On the **Set up single sign-on with SAML** page, in the **SAML Signing Certificate** section, find **Federation Metadata XML** and select **Download** to download the certificate and save it on your computer.
+
+ ![Screenshot shows the Certificate download link.](common/metadataxml.png "Certificate")
+
+1. On the **Set up Radancy's Employee Referrals** section, copy the appropriate URL(s) based on your requirement.
+
+ ![Screenshot shows how to copy configuration URL.](common/copy-configuration-urls.png "Metadata")
+
+### Create an Azure AD test user
+
+In this section, you'll create a test user in the Azure portal called B.Simon.
+
+1. From the left pane in the Azure portal, select **Azure Active Directory**, select **Users**, and then select **All users**.
+1. Select **New user** at the top of the screen.
+1. In the **User** properties, follow these steps:
+ 1. In the **Name** field, enter `B.Simon`.
+ 1. In the **User name** field, enter the username@companydomain.extension. For example, `B.Simon@contoso.com`.
+ 1. Select the **Show password** check box, and then write down the value that's displayed in the **Password** box.
+ 1. Click **Create**.
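+
+If you prefer scripting, you can create the same test user with the Azure CLI. The following is a hedged sketch rather than part of the documented flow; the UPN and password are placeholder values:
+
+```azurecli
+# Create the B.Simon test user in Azure AD (illustrative values)
+az ad user create --display-name "B.Simon" --user-principal-name "B.Simon@contoso.com" --password "<strong-password>"
+```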
+
+### Assign the Azure AD test user
+
+In this section, you'll enable B.Simon to use Azure single sign-on by granting access to Radancy's Employee Referrals.
+
+1. In the Azure portal, select **Enterprise Applications**, and then select **All applications**.
+1. In the applications list, select **Radancy's Employee Referrals**.
+1. In the app's overview page, find the **Manage** section and select **Users and groups**.
+1. Select **Add user**, then select **Users and groups** in the **Add Assignment** dialog.
+1. In the **Users and groups** dialog, select **B.Simon** from the Users list, then click the **Select** button at the bottom of the screen.
+1. If you're expecting any role value in the SAML assertion, in the **Select Role** dialog, select the appropriate role for the user from the list and then click the **Select** button at the bottom of the screen.
+1. In the **Add Assignment** dialog, click the **Assign** button.
+
+## Configure Radancy's Employee Referrals SSO
+
+1. Log in to the Radancy's Employee Referrals website as an administrator.
+
+1. Navigate to **Account Preferences** > **Authentication** > **Single Sign-On**.
+
+1. In the **SAML IdP Metadata Configuration** section, perform the following steps:
+
+ ![Screenshot shows how to upload the Federation Metadata.](media/radancys-employee-referrals-tutorial/certificate.png "Federation")
+
+ 1. In the **Entity ID** textbox, paste the **Azure AD Identifier** value, which you've copied from the Azure portal.
+
+ 1. In the **SSO-service URL** textbox, paste the **Login URL** value, which you've copied from the Azure portal.
+
+    1. In the **Signing certificate** textbox, paste the contents of the **Federation Metadata XML** file, which you've downloaded from the Azure portal.
+
+    1. Select **Save configuration** and verify the setup.
+
+ > [!NOTE]
+ > You need to have the SSO option included in your contract.
+
+### Create Radancy's Employee Referrals test user
+
+In this section, a user called B.Simon is created in Radancy's Employee Referrals. Radancy's Employee Referrals supports just-in-time user provisioning, which is enabled by default. There is no action item for you in this section. If a user doesn't already exist in Radancy's Employee Referrals, a new one is created after authentication.
+
+## Test SSO
+
+In this section, you test your Azure AD single sign-on configuration with the following options.
+
+#### SP initiated:
+
+* Click on **Test this application** in the Azure portal. This will redirect to the Radancy's Employee Referrals Sign-on URL, where you can initiate the login flow.
+
+* Go to Radancy's Employee Referrals Sign-on URL directly and initiate the login flow from there.
+
+#### IDP initiated:
+
+* Click on **Test this application** in the Azure portal, and you should be automatically signed in to the Radancy's Employee Referrals instance for which you set up SSO.
+
+You can also use Microsoft My Apps to test the application in any mode. When you click the Radancy's Employee Referrals tile in My Apps, if configured in SP mode you're redirected to the application sign-on page to initiate the login flow; if configured in IDP mode, you're automatically signed in to the Radancy's Employee Referrals instance for which you set up SSO. For more information about My Apps, see [Introduction to My Apps](../user-help/my-apps-portal-end-user-access.md).
+
+## Next steps
+
+Once you configure Radancy's Employee Referrals you can enforce session control, which protects against exfiltration and infiltration of your organization's sensitive data in real time. Session control extends from Conditional Access. [Learn how to enforce session control with Microsoft Cloud App Security](/cloud-app-security/proxy-deployment-aad).
active-directory Memo 22 09 Other Areas Zero Trust https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/memo-22-09-other-areas-zero-trust.md
You can use analytics in the following tools to aggregate information from Azure
Automation is an important aspect of Zero Trust, particularly in remediation of alerts that occur because of threats or security changes in your environment. In Azure AD, automation integrations are possible to help remediate alerts or perform actions that can improve your security posture. Automations are based on information received from monitoring and analytics.
-[Microsoft Graph API](../develop/microsoft-graph-intro.md) REST calls are the most common way to programmatically access Azure AD. This API-based access requires an Azure AD identity with the necessary authorizations and scope. With the Graph API, you can integrate Microsoft's and other tools. Follow the principles outlined in this article when you're performing the integration.
+[Microsoft Graph API](/graph/overview) REST calls are the most common way to programmatically access Azure AD. This API-based access requires an Azure AD identity with the necessary authorizations and scope. With the Graph API, you can integrate Microsoft's and other tools. Follow the principles outlined in this article when you're performing the integration.
We recommend that you set up an Azure function or an Azure logic app to use a [system-assigned managed identity](../managed-identities-azure-resources/overview.md). Your logic app or function contains the steps or code necessary to automate the desired actions. You assign permissions to the managed identity to grant the service principal the necessary directory permissions to perform the required actions. Grant managed identities only the minimum rights necessary.
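As a hedged illustration of the first step only (the resource names are hypothetical), you can enable a system-assigned managed identity on an existing function app with the Azure CLI and capture the principal ID you'll later grant the minimum required permissions:

```azurecli
# Enable a system-assigned managed identity on an existing function app
az functionapp identity assign --resource-group myResourceGroup --name myAutomationFunc

# Capture the identity's principal (object) ID for subsequent permission grants
az functionapp identity show --resource-group myResourceGroup --name myAutomationFunc --query principalId -o tsv
```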
active-directory Standards Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory/standards/standards-overview.md
# Configure Azure Active Directory to meet identity standards
-In today's world of interconnected infrastructures, compliance with governmental and industry frameworks and standards is often mandatory. Microsoft engages with governments, regulators, and standards bodies to understand and meet compliance requirements for Azure. There are [90 Azure compliance certifications](../../compliance/index.yml), which include many for various regions and countries. Azure has 35 compliance offerings for key industries including,
+In today's world of interconnected infrastructures, compliance with governmental and industry frameworks and standards is often mandatory. Microsoft engages with governments, regulators, and standards bodies to understand and meet compliance requirements for Azure. There are [90 Azure compliance certifications](../../compliance/index.yml), which include many for various countries/regions. Azure has 35 compliance offerings for key industries including,
* Health * Government
aks Aks Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-migration.md
Stateless application migration is the most straightforward case:
Carefully plan your migration of stateful applications to avoid data loss or unexpected downtime.
-* If you use Azure Files, you can mount the file share as a volume into the new cluster. See [Mount Static Azure Files as a Volume](./azure-files-volume.md#mount-file-share-as-a-persistent-volume).
-* If you use Azure Managed Disks, you can only mount the disk if unattached to any VM. See [Mount Static Azure Disk as a Volume](./azure-disk-volume.md#mount-disk-as-a-volume).
+* If you use Azure Files, you can mount the file share as a volume into the new cluster. See [Mount Static Azure Files as a Volume](./azure-csi-files-storage-provision.md#mount-file-share-as-a-persistent-volume).
+* If you use Azure Managed Disks, you can only mount the disk if unattached to any VM. See [Mount Static Azure Disk as a Volume](./azure-csi-disk-storage-provision.md#mount-disk-as-a-volume).
* If neither of those approaches work, you can use a backup and restore options. See [Velero on Azure](https://github.com/vmware-tanzu/velero-plugin-for-microsoft-azure/blob/master/README.md). #### Azure Files
If not, one possible migration approach involves the following steps:
If you want to start with an empty share and make a copy of the source data, you can use the [`az storage file copy`](/cli/azure/storage/file/copy) commands to migrate your data. - #### Migrating persistent volumes If you're migrating existing persistent volumes to AKS, you'll generally follow these steps:
If you're migrating existing persistent volumes to AKS, you'll generally follow
1. Take snapshots of the disks. 1. Create new managed disks from the snapshots. 1. Create persistent volumes in AKS.
-1. Update pod specifications to [use existing volumes](./azure-disk-volume.md) rather than PersistentVolumeClaims (static provisioning).
+1. Update pod specifications to [use existing volumes](./azure-disk-csi.md) rather than PersistentVolumeClaims (static provisioning).
1. Deploy your application to AKS. 1. Validate your application is working correctly. 1. Point your live traffic to your new AKS cluster.
Some open-source tools can help you create managed disks and migrate volumes bet
* [Azure CLI Disk Copy extension](https://github.com/noelbundick/azure-cli-disk-copy-extension) copies and converts disks across resource groups and Azure regions. * [Azure Kube CLI extension](https://github.com/yaron2/azure-kube-cli) enumerates ACS Kubernetes volumes and migrates them to an AKS cluster. - ### Deployment of your cluster configuration We recommend that you use your existing Continuous Integration (CI) and Continuous Deliver (CD) pipeline to deploy a known-good configuration to AKS. You can use Azure Pipelines to [build and deploy your applications to AKS](/azure/devops/pipelines/ecosystems/kubernetes/aks-template). Clone your existing deployment tasks and ensure that `kubeconfig` points to the new AKS cluster.
You may want to move your AKS cluster to a [different region supported by AKS][r
In addition, if you have any services running on your AKS cluster, you will need to install and configure those services on your cluster in the new region. - In this article, we summarized migration details for: > [!div class="checklist"]
In this article, we summarized migration details for:
> * Considerations for stateful applications > * Deployment of your cluster configuration - [region-availability]: https://azure.microsoft.com/global-infrastructure/services/?products=kubernetes-service
aks Aks Support Help https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/aks-support-help.md
Title: Azure Kubernetes Service support and help options description: How to obtain help and support for questions or problems when you create solutions using Azure Kubernetes Service. -+ Last updated 10/18/2022
aks Azure Blob Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-blob-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Blob storage on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Blob storage in an Azure Kubernetes Service (AKS) cluster.- Previously updated : 12/27/2022- Last updated : 01/18/2023
The Azure Blob storage Container Storage Interface (CSI) driver is a [CSI specif
By adopting and using CSI, AKS now can write, deploy, and iterate plug-ins to expose new or improve existing storage systems in Kubernetes. Using CSI drivers in AKS avoids having to touch the core Kubernetes code and wait for its release cycles.
-Mounting Azure Blob storage as a file system into a container or pod, enables you to use blob storage with a number of applications that work massive amounts of unstructured data. For example:
+When you mount Azure Blob storage as a file system into a container or pod, you can use blob storage with a number of applications that work with massive amounts of unstructured data. For example:
* Log file data * Images, documents, and streaming video or audio
To enable the driver on an existing cluster, include the `--enable-blob-driver`
az aks update --enable-blob-driver -n myAKSCluster -g myResourceGroup ```
-You're prompted to confirm there isn't an open-source Blob CSI driver installed. After confirming, it may take several minutes to complete this action. Once it's complete, you should see in the output the status of enabling the driver on your cluster. The following example is resembles the section indicating the results of the previous command:
+You're prompted to confirm there isn't an open-source Blob CSI driver installed. After you confirm, it may take several minutes to complete this action. Once it's complete, you should see in the output the status of enabling the driver on your cluster. The following example resembles the section indicating the results of the previous command:
```output "storageProfile": {
To have a storage volume persist for your workload, you can use a StatefulSet. T
## Next steps -- To learn how to manually set up a static persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-static].-- To learn how to dynamically set up a persistent volume, see [Create and use a dynamic persistent volume with Azure Blob storage][azure-csi-blob-storage-dynamic].
+- To learn how to set up a static or dynamic persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-provision].
- To learn how to use CSI driver for Azure Disks, see [Use Azure Disks with CSI driver][azure-disk-csi-driver] - To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver][azure-files-csi-driver] - For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage].
To have a storage volume persist for your workload, you can use a StatefulSet. T
[concepts-storage]: concepts-storage.md [persistent-volume]: concepts-storage.md#persistent-volumes [csi-drivers-aks]: csi-storage-drivers.md
-[azure-csi-blob-storage-dynamic]: azure-csi-blob-storage-dynamic.md
-[azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
-[csi-storage-driver-overview]: csi-storage-drivers.md
+[azure-csi-blob-storage-provision]: azure-csi-blob-storage-provision.md
[azure-disk-csi-driver]: azure-disk-csi.md [azure-files-csi-driver]: azure-files-csi.md [install-azure-cli]: /cli/azure/install_azure_cli
aks Azure Csi Blob Storage Dynamic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-dynamic.md
- Title: Create a dynamic Azure Blob storage persistent volume in Azure Kubernetes Service (AKS)-
-description: Learn how to dynamically create a persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 07/21/2022---
-# Dynamically create and use a persistent volume with Azure Blob storage in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect using [blobfuse][blobfuse-overview] or [Network File System][nfs-overview] (NFS).
-
-This article shows you how to install the Container Storage Interface (CSI) driver and dynamically create an Azure Blob storage container to attach to a pod in AKS.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
---- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support].--- [Enable the Blob storage CSI driver][enable-blob-csi-driver] (preview) on your AKS cluster.-
-## Dynamic provisioning parameters
-
-|Name | Description | Example | Mandatory | Default value|
-| | | | | |
-|skuName | Specify an Azure storage account type (alias: `storageAccountType`). | `Standard_LRS`, `Premium_LRS`, `Standard_GRS`, `Standard_RAGRS` | No | `Standard_LRS`|
-|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.|
-|resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
-|storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
-|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`|
-|containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. |
-|containerNamePrefix | Specify Azure storage directory prefix created by driver. | my |Can only contain lowercase letters, numbers, hyphens, and length should be fewer than 21 characters. | No |
-|server | Specify Azure storage account domain name. | Existing storage account DNS domain name, for example `<storage-account>.privatelink.blob.core.windows.net`. | No | If empty, driver uses default `<storage-account>.blob.core.windows.net` or other sovereign cloud storage account DNS domain name.|
-|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true`,`false` | No | `false`|
-|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net` | No | If empty, driver will use default storage endpoint suffix according to cloud environment.|
-|tags | [tags][az-tags] would be created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | ""|
-|matchTags | Match tags when driver tries to find a suitable storage account. | `true`,`false` | No | `false`|
-| | **Following parameters are only for blobfuse** | | | |
-|subscriptionID | Specify Azure subscription ID where blob storage directory will be created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
-|storeAccountKey | Specify store account key to Kubernetes secret. <br><br> Note: <br> `false` means driver uses kubelet identity to get account key. | `true`,`false` | No | `true`|
-|secretName | Specify secret name to store account key. | | No |
-|secretNamespace | Specify the namespace of secret to store account key. | `default`,`kube-system`, etc. | No | pvc namespace |
-|isHnsEnabled | Enable `Hierarchical namespace` for Azure DataLake storage account. | `true`,`false` | No | `false`|
-| | **Following parameters are only for NFS protocol** | | | |
-|mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No |
-
-## Create a persistent volume claim using built-in storage class
-
-A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation.
-
-1. Create a file named `blob-nfs-pvc.yaml` and copy in the following YAML.
-
- ```yml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: azure-blob-storage
- annotations:
- volume.beta.kubernetes.io/storage-class: azureblob-nfs-premium
- spec:
- accessModes:
- - ReadWriteMany
- storageClassName: my-blobstorage
- resources:
- requests:
- storage: 5Gi
- ```
-
-2. Create the persistent volume claim with the kubectl create command:
-
- ```bash
- kubectl create -f blob-nfs-pvc.yaml
- ```
-
-Once completed, the Blob storage container will be created. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
-
-```bash
-kubectl get pvc azure-blob-storage
-```
-
-The output of the command resembles the following example:
-
-```bash
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-azure-blob-storage Bound pvc-b88e36c5-c518-4d38-a5ee-337a7dda0a68 5Gi RWX azureblob-nfs-premium 92m
-```
-
-## Use the persistent volume claim
-
-The following YAML creates a pod that uses the persistent volume claim **azure-blob-storage** to mount the Azure Blob storage at the `/mnt/blob' path.
-
-1. Create a file named `blob-nfs-pv`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step.
-
- ```yml
- kind: Pod
- apiVersion: v1
- metadata:
- name: mypod
- spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/blob"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: azure-blob-storage
- ```
-
-2. Create the pod with the [kubectl apply][kubectl-apply] command:
-
- ```bash
- kubectl apply -f blob-nfs-pv.yaml
- ```
-
-3. After the pod is in the running state, run the following command to create a new file called `test.txt`.
-
- ```bash
- kubectl exec mypod -- touch /mnt/blob/test.txt
- ```
-
-4. To validate the disk is correctly mounted, run the following command, and verify you see the `test.txt` file in the output:
-
- ```bash
- kubectl exec mypod -- ls /mnt/blob
- ```
-
- The output of the command resembles the following example:
-
- ```bash
- test.txt
- ```
-
-## Create a custom storage class
-
-The default storage classes suit the most common scenarios, but not all. For some cases, you might want to have your own storage class customized with your own parameters. To demonstrate, two examples are shown. One based on using the NFS protocol, and the other using blobfuse.
-
-### Storage class using NFS protocol
-
-In this example, the following manifest configures mounting a Blob storage container using the NFS protocol. Use it to add the *tags* parameter.
-
-1. Create a file named `blob-nfs-sc.yaml`, and paste the following example manifest:
-
- ```yml
- apiVersion: storage.k8s.io/v1
- kind: StorageClass
- metadata:
- name: azureblob-nfs-premium
- provisioner: blob.csi.azure.com
- parameters:
- protocol: nfs
- tags: environment=Development
- volumeBindingMode: Immediate
- ```
-
-2. Create the storage class with the [kubectl apply][kubectl-apply] command:
-
- ```bash
- kubectl apply -f blob-nfs-sc.yaml
- ```
-
- The output of the command resembles the following example:
-
- ```bash
- storageclass.storage.k8s.io/blob-nfs-premium created
- ```
-
-### Storage class using blobfuse
-
-In this example, the following manifest configures using blobfuse and mount a Blob storage container. Use it to update the *skuName* parameter.
-
-1. Create a file named `blobfuse-sc.yaml`, and paste the following example manifest:
-
- ```yml
- apiVersion: storage.k8s.io/v1
- kind: StorageClass
- metadata:
- name: azureblob-fuse-premium
- provisioner: blob.csi.azure.com
- parameters:
- skuName: Standard_GRS # available values: Standard_LRS, Premium_LRS, Standard_GRS, Standard_RAGRS
- reclaimPolicy: Delete
- volumeBindingMode: Immediate
- allowVolumeExpansion: true
- mountOptions:
- - -o allow_other
- - --file-cache-timeout-in-seconds=120
- - --use-attr-cache=true
- - --cancel-list-on-mount-seconds=10 # prevent billing charges on mounting
- - -o attr_timeout=120
- - -o entry_timeout=120
- - -o negative_timeout=120
- - --log-level=LOG_WARNING # LOG_WARNING, LOG_INFO, LOG_DEBUG
- - --cache-size-mb=1000 # Default will be 80% of available memory, eviction will happen beyond that.
- ```
-
-2. Create the storage class with the [kubectl apply][kubectl-apply] command:
-
- ```bash
- kubectl apply -f blobfuse-sc.yaml
- ```
-
- The output of the command resembles the following example:
-
- ```bash
- storageclass.storage.k8s.io/blob-fuse-premium created
- ```
-
-## Next steps
--- To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-storage-csi].-- To learn how to manually set up a static persistent volume, see [Create and use a volume with Azure Blob storage][azure-csi-blob-storage-static].-- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].-
-<!-- LINKS - external -->
-[kubectl-create]: https://kubernetes.io/docs/user-guide/kubectl/v1.8/#create
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share
-[blobfuse-overview]: https://github.com/Azure/azure-storage-fuse
-[nfs-overview]: https://en.wikipedia.org/wiki/Network_File_System
-
-<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
-[use-tags]: use-tags.md
-[use-managed-identity]: use-managed-identity.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[sas-tokens]: ../storage/common/storage-sas-overview.md
-[mount-blob-storage-nfs]: ../storage/blobs/network-file-system-protocol-support-how-to.md
-[azure-csi-blob-storage-static]: azure-csi-blob-storage-static.md
-[blob-storage-csi-driver]: azure-blob-csi.md
-[azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
-[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
aks Azure Csi Blob Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-provision.md
+
+ Title: Create a persistent volume with Azure Blob storage in Azure Kubernetes Service (AKS)
+
+description: Learn how to create a static or dynamic persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
+ Last updated : 01/18/2023+++
+# Create and use a volume with Azure Blob storage in Azure Kubernetes Service (AKS)
+
+Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect using [blobfuse][blobfuse-overview] or [Network File System][nfs-overview] (NFS).
+
+This article shows you how to:
+
+* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating an Azure Blob storage container to attach to a pod.
+* Work with a static PV by creating an Azure Blob storage container or using an existing one, and attaching it to a pod.
+
+For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
+
+## Before you begin
+
+- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support].
+
+- [Enable the Blob storage CSI driver][enable-blob-csi-driver] on your AKS cluster.
+
+## Dynamically provision a volume
+
+This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of Blob storage for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container.
+
+### Dynamic provisioning parameters
+
+|Name | Description | Example | Mandatory | Default value|
+| | | | | |
+|skuName | Specify an Azure storage account type (alias: `storageAccountType`). | `Standard_LRS`, `Premium_LRS`, `Standard_GRS`, `Standard_RAGRS` | No | `Standard_LRS`|
+|location | Specify an Azure location. | `eastus` | No | If empty, driver will use the same location name as current cluster.|
+|resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
+|storageAccount | Specify an Azure storage account name.| storageAccountName | - No for blobfuse mount </br> - Yes for NFSv3 mount. | - For blobfuse mount: if empty, driver finds a suitable storage account that matches `skuName` in the same resource group. If a storage account name is provided, storage account must exist. </br> - For NFSv3 mount, storage account name must be provided.|
+|protocol | Specify blobfuse mount or NFSv3 mount. | `fuse`, `nfs` | No | `fuse`|
+|containerName | Specify the existing container (directory) name. | container | No | If empty, driver creates a new container name, starting with `pvc-fuse` for blobfuse or `pvc-nfs` for NFS v3. |
+|containerNamePrefix | Specify the Azure storage directory prefix created by the driver. Can contain only lowercase letters, numbers, and hyphens, and must be fewer than 21 characters in length. | my | No ||
+|server | Specify Azure storage account domain name. | Existing storage account DNS domain name, for example `<storage-account>.privatelink.blob.core.windows.net`. | No | If empty, driver uses default `<storage-account>.blob.core.windows.net` or other sovereign cloud storage account DNS domain name.|
+|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true`,`false` | No | `false`|
+|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net` | No | If empty, driver will use default storage endpoint suffix according to cloud environment.|
+|tags | [Tags][az-tags] to be created in the new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | ""|
+|matchTags | Match tags when driver tries to find a suitable storage account. | `true`,`false` | No | `false`|
+| | **Following parameters are only for blobfuse** | | | |
+|subscriptionID | Specify Azure subscription ID where blob storage directory will be created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
+|storeAccountKey | Specify store account key to Kubernetes secret. <br><br> Note: <br> `false` means driver uses kubelet identity to get account key. | `true`,`false` | No | `true`|
+|secretName | Specify secret name to store account key. | | No |
+|secretNamespace | Specify the namespace of secret to store account key. | `default`,`kube-system`, etc. | No | pvc namespace |
+|isHnsEnabled | Enable `Hierarchical namespace` for Azure Data Lake storage account. | `true`,`false` | No | `false`|
+| | **Following parameters are only for NFS protocol** | | | |
+|mountPermissions | Specify mounted folder permissions. |The default is `0777`. If set to `0`, driver won't perform `chmod` after mount. | `0777` | No |
+
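+These parameters map onto the `parameters` block of a StorageClass manifest. As a hedged sketch only (the resource group and storage account names are hypothetical), a class that provisions NFS volumes against a specific existing account might look like the following:
+
+```yml
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: azureblob-nfs-existing-account   # hypothetical class name
+provisioner: blob.csi.azure.com
+parameters:
+  protocol: nfs                  # NFSv3 mounts require storageAccount to be set
+  skuName: Premium_LRS           # storage account type
+  resourceGroup: myResourceGroup # resource group that contains the account
+  storageAccount: mystorageacct  # existing storage account name
+volumeBindingMode: Immediate
+```
+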
+### Create a persistent volume claim using built-in storage class
+
+A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Blob storage container. The following YAML can be used to create a persistent volume claim 5 GB in size with *ReadWriteMany* access, using the built-in storage class. For more information on access modes, see the [Kubernetes persistent volume][kubernetes-volumes] documentation.
+
+1. Create a file named `blob-nfs-pvc.yaml` and copy in the following YAML.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: azure-blob-storage
+ annotations:
+ volume.beta.kubernetes.io/storage-class: azureblob-nfs-premium
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: my-blobstorage
+ resources:
+ requests:
+ storage: 5Gi
+ ```
+
+2. Create the persistent volume claim with the [kubectl create][kubectl-create] command:
+
+ ```bash
+ kubectl create -f blob-nfs-pvc.yaml
+ ```
+
+Once completed, the Blob storage container will be created. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
+
+```bash
+kubectl get pvc azure-blob-storage
+```
+
+The output of the command resembles the following example:
+
+```bash
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+azure-blob-storage Bound pvc-b88e36c5-c518-4d38-a5ee-337a7dda0a68 5Gi RWX azureblob-nfs-premium 92m
+```
+
+#### Use the persistent volume claim
+
+The following YAML creates a pod that uses the persistent volume claim **azure-blob-storage** to mount the Azure Blob storage at the `/mnt/blob` path.
+
+1. Create a file named `blob-nfs-pv.yaml`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step.
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: mypod
+ spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/blob"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: azure-blob-storage
+ ```
+
+2. Create the pod with the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f blob-nfs-pv.yaml
+ ```
+
+3. After the pod is in the running state, run the following command to create a new file called `test.txt`.
+
+ ```bash
+ kubectl exec mypod -- touch /mnt/blob/test.txt
+ ```
+
+4. To validate the disk is correctly mounted, run the following command, and verify you see the `test.txt` file in the output:
+
+ ```bash
+ kubectl exec mypod -- ls /mnt/blob
+ ```
+
+ The output of the command resembles the following example:
+
+ ```bash
+ test.txt
+ ```
+
+### Create a custom storage class
+
+The default storage classes suit the most common scenarios, but not all. For some cases, you might want to have your own storage class customized with your own parameters. To demonstrate, two examples are shown: one using the NFS protocol, and the other using blobfuse.
+
+#### Storage class using NFS protocol
+
+In this example, the following manifest configures mounting a Blob storage container using the NFS protocol. It also demonstrates setting the *tags* parameter.
+
+1. Create a file named `blob-nfs-sc.yaml`, and paste the following example manifest:
+
+ ```yml
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: azureblob-nfs-premium
+ provisioner: blob.csi.azure.com
+ parameters:
+ protocol: nfs
+ tags: environment=Development
+ volumeBindingMode: Immediate
+ ```
+
+2. Create the storage class with the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f blob-nfs-sc.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```bash
+   storageclass.storage.k8s.io/azureblob-nfs-premium created
+ ```
+
+#### Storage class using blobfuse
+
+In this example, the following manifest configures mounting a Blob storage container using blobfuse. It also demonstrates setting the *skuName* parameter.
+
+1. Create a file named `blobfuse-sc.yaml`, and paste the following example manifest:
+
+ ```yml
+ apiVersion: storage.k8s.io/v1
+ kind: StorageClass
+ metadata:
+ name: azureblob-fuse-premium
+ provisioner: blob.csi.azure.com
+ parameters:
+ skuName: Standard_GRS # available values: Standard_LRS, Premium_LRS, Standard_GRS, Standard_RAGRS
+ reclaimPolicy: Delete
+ volumeBindingMode: Immediate
+ allowVolumeExpansion: true
+ mountOptions:
+ - -o allow_other
+ - --file-cache-timeout-in-seconds=120
+ - --use-attr-cache=true
+ - --cancel-list-on-mount-seconds=10 # prevent billing charges on mounting
+ - -o attr_timeout=120
+ - -o entry_timeout=120
+ - -o negative_timeout=120
+ - --log-level=LOG_WARNING # LOG_WARNING, LOG_INFO, LOG_DEBUG
+ - --cache-size-mb=1000 # Default will be 80% of available memory, eviction will happen beyond that.
+ ```
+
+2. Create the storage class with the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f blobfuse-sc.yaml
+ ```
+
+ The output of the command resembles the following example:
+
+ ```bash
+   storageclass.storage.k8s.io/azureblob-fuse-premium created
+ ```
+
+## Statically provision a volume
+
+This section provides guidance for cluster administrators who want to create one or more persistent volumes that include details of Blob storage for use by a workload.
+
+### Static provisioning parameters
+
+|Name | Description | Example | Mandatory | Default value|
+| | | | | |
+|volumeHandle | Specify a value the driver can use to uniquely identify the storage blob container in the cluster. | A recommended way to produce a unique value is to combine the globally unique storage account name and container name: `{account-name}_{container-name}`.<br> Note: The `#` character is reserved for internal use and can't be used in a volume handle. | Yes ||
+|volumeAttributes.resourceGroup | Specify Azure resource group name. | myResourceGroup | No | If empty, driver uses the same resource group name as current cluster.|
+|volumeAttributes.storageAccount | Specify an existing Azure storage account name. | storageAccountName | Yes ||
+|volumeAttributes.containerName | Specify existing container name. | container | Yes ||
+|volumeAttributes.protocol | Specify blobfuse mount or NFS v3 mount. | `fuse`, `nfs` | No | `fuse`|
+| | **Following parameters are only for blobfuse** | | | |
+|volumeAttributes.secretName | Secret name that stores storage account name and key (only applies for SMB).| | No ||
+|volumeAttributes.secretNamespace | Specify namespace of secret to store account key. | `default` | No | Pvc namespace|
+|nodeStageSecretRef.name | Specify the secret name that stores one of the following:<br> `azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | Existing Kubernetes secret name | No ||
+|nodeStageSecretRef.namespace | Specify the namespace of secret. | Kubernetes namespace | Yes ||
+| | **Following parameters are only for NFS protocol** | | | |
+|volumeAttributes.mountPermissions | Specify mounted folder permissions. | `0777` | No ||
+| | **Following parameters are only for NFS VNet setting** | | | |
+|vnetResourceGroup | Specify VNet resource group hosting virtual network. | myResourceGroup | No | If empty, driver uses the `vnetResourceGroup` value specified in the Azure cloud config file.|
+|vnetName | Specify the virtual network name. | aksVNet | No | If empty, driver uses the `vnetName` value specified in the Azure cloud config file.|
+|subnetName | Specify the existing subnet name of the agent node. | aksSubnet | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
+| | **Following parameters are only for feature: blobfuse<br> [Managed Identity and Service Principal Name authentication](https://github.com/Azure/azure-storage-fuse#environment-variables)** | | | |
+|volumeAttributes.AzureStorageAuthType | Specify the authentication type. | `Key`, `SAS`, `MSI`, `SPN` | No | `Key`|
+|volumeAttributes.AzureStorageIdentityClientID | Specify the Identity Client ID. | | No ||
+|volumeAttributes.AzureStorageIdentityObjectID | Specify the Identity Object ID. | | No ||
+|volumeAttributes.AzureStorageIdentityResourceID | Specify the Identity Resource ID. | | No ||
+|volumeAttributes.MSIEndpoint | Specify the MSI endpoint. | | No ||
+|volumeAttributes.AzureStorageSPNClientID | Specify the Azure Service Principal Name (SPN) Client ID. | | No ||
+|volumeAttributes.AzureStorageSPNTenantID | Specify the Azure SPN Tenant ID. | | No ||
+|volumeAttributes.AzureStorageAADEndpoint | Specify the Azure Active Directory (Azure AD) endpoint. | | No ||
+| | **Following parameters are only for feature: blobfuse read account key or SAS token from key vault** | | | |
+|volumeAttributes.keyVaultURL | Specify Azure Key Vault DNS name. | {vault-name}.vault.azure.net | No ||
+|volumeAttributes.keyVaultSecretName | Specify Azure Key Vault secret name. | Existing Azure Key Vault secret name. | No ||
+|volumeAttributes.keyVaultSecretVersion | Azure Key Vault secret version. | Existing version | No |If empty, driver uses current version.|
+
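+As an illustration, the authentication parameters in the table appear under `volumeAttributes` of a static *PersistentVolume*. The following fragment is a hedged sketch that assumes a managed identity already has access to the account; the account, container, and client ID values are placeholders:
+
+```yml
+  csi:
+    driver: blob.csi.azure.com
+    volumeHandle: mystorageacct_mycontainer  # {account-name}_{container-name}
+    volumeAttributes:
+      storageAccount: mystorageacct
+      containerName: mycontainer
+      AzureStorageAuthType: MSI
+      AzureStorageIdentityClientID: "<identity-client-id>"
+```
+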
+### Create a Blob storage container
+
+When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If instead you create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role on the blob storage resource group.
+
+For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
+
+```azurecli
+az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+```
+
+The output of the command resembles the following example:
+
+```azurecli
+MC_myResourceGroup_myAKSCluster_eastus
+```
+
+Next, follow the steps in [Manage blob storage][manage-blob-storage] to authorize access, and then create a container for storing blobs.
+
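+If you prefer the Azure CLI over the portal steps, a hedged equivalent (the account and container names are placeholders) is:
+
+```azurecli
+az storage container create --account-name mystorageacct --name mycontainer --auth-mode login
+```
+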
+### Mount volume
+
+In this section, you mount the persistent volume using the NFS protocol or blobfuse.
+
+#### [Mount volume using NFS protocol](#tab/mount-nfs)
+
+Mounting Blob storage using the NFS v3 protocol doesn't authenticate using an account key. Your storage account must reside in the same virtual network as the AKS cluster nodes, or in a peered virtual network. The only way to secure the data in your storage account is by using a virtual network and other network security settings. For more information on how to set up NFS access to your storage account, see [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](../storage/blobs/network-file-system-protocol-support-how-to.md).
+
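+Before creating the volume, you can optionally confirm that the target storage account has NFS v3 enabled. This is a hedged check (the account name is a placeholder); the `enableNfsV3` property reports `true` when the protocol is available:
+
+```azurecli
+az storage account show --name mystorageacct --query enableNfsV3 -o tsv
+```
+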
+The following example demonstrates how to mount a Blob storage container as a persistent volume using the NFS protocol.
+
+1. Create a file named `pv-blob-nfs.yaml` and copy in the following YAML. Under `storageClass`, update `resourceGroup`, `storageAccount`, and `containerName`.
+
+ > [!NOTE]
+ > `volumeHandle` value should be a unique volumeID for every identical storage blob container in the cluster.
+ > The character `#` is reserved for internal use and cannot be used.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-blob
+ spec:
+ capacity:
+ storage: 1Pi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
+ storageClassName: azureblob-nfs-premium
+ csi:
+ driver: blob.csi.azure.com
+ readOnly: false
+ # make sure volumeid is unique for every identical storage blob container in the cluster
+ # character `#` is reserved for internal use and cannot be used in volumehandle
+ volumeHandle: unique-volumeid
+ volumeAttributes:
+ resourceGroup: resourceGroupName
+ storageAccount: storageAccountName
+ containerName: containerName
+ protocol: nfs
+ ```
+
+ > [!NOTE]
+   > While the [Kubernetes API](https://github.com/kubernetes/kubernetes/blob/release-1.26/pkg/apis/core/types.go#L303-L306) **capacity** attribute is mandatory, this value isn't used by the Azure Blob storage CSI driver because you can flexibly write data until you reach your storage account's capacity limit. The value of the `capacity` attribute is used only for size matching between *PersistentVolumes* and *PersistentVolumeClaims*. We recommend using a fictitious high value. In this example, the pod sees a mounted volume with a fictitious size of 1 PiB.
+
+2. Run the following command to create the persistent volume using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pv-blob-nfs.yaml
+ ```
+
+3. Create a `pvc-blob-nfs.yaml` file with a *PersistentVolumeClaim*. For example:
+
+ ```yml
+ kind: PersistentVolumeClaim
+ apiVersion: v1
+ metadata:
+ name: pvc-blob
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 10Gi
+ volumeName: pv-blob
+ storageClassName: azureblob-nfs-premium
+ ```
+
+4. Run the following command to create the persistent volume claim using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pvc-blob-nfs.yaml
+ ```
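+
+To confirm the claim bound to the volume, you can run a quick check (not part of the original walkthrough):
+
+```bash
+kubectl get pvc pvc-blob
+```
+
+A successfully bound claim reports a `STATUS` of `Bound` against the `pv-blob` volume.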
+
+#### [Mount volume using Blobfuse](#tab/mount-blobfuse)
+
+Kubernetes needs credentials to access the Blob storage container created earlier, either an Azure storage account access key or a SAS token. These credentials are stored in a Kubernetes secret, which is referenced when you create a Kubernetes pod.
+
+1. Use the `kubectl create secret` command to create the secret. You can authenticate using an account key stored in a [Kubernetes secret][kubernetes-secret], or using [shared access signature][sas-tokens] (SAS) tokens.
+
+ # [Secret](#tab/secret)
+
+ The following example creates a [Secret object][kubernetes-secret] named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey*. You need to provide the account name and key from an existing Azure storage account.
+
+ ```bash
+ kubectl create secret generic azure-secret --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountkey="KEY" --type=Opaque
+ ```
+
+ # [SAS tokens](#tab/sas-tokens)
+
+    The following example creates a [Secret object][kubernetes-secret] named *azure-sas-token* and populates the *azurestorageaccountname* and *azurestorageaccountsastoken*. You need to provide the account name and shared access signature from an existing Azure storage account.
+
+ ```bash
+    kubectl create secret generic azure-sas-token --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountsastoken="sastoken" --type=Opaque
+ ```
+
+
+
+2. Create a `pv-blobfuse.yaml` file. Under `volumeAttributes`, update `containerName`. Under `nodeStageSecretRef`, update `name` with the name of the Secret object created earlier. For example:
+
+ > [!NOTE]
+ > `volumeHandle` value should be a unique volumeID for every identical storage blob container in the cluster.
+ > The character `#` is reserved for internal use and cannot be used.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-blob
+ spec:
+ capacity:
+ storage: 10Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
+ storageClassName: azureblob-fuse-premium
+ mountOptions:
+ - -o allow_other
+ - --file-cache-timeout-in-seconds=120
+ csi:
+ driver: blob.csi.azure.com
+ readOnly: false
+ # volumeid has to be unique for every identical storage blob container in the cluster
+ # character `#` is reserved for internal use and cannot be used in volumehandle
+ volumeHandle: unique-volumeid
+ volumeAttributes:
+ containerName: containerName
+ nodeStageSecretRef:
+ name: azure-secret
+ namespace: default
+ ```
+
+3. Run the following command to create the persistent volume using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pv-blobfuse.yaml
+ ```
+
+4. Create a `pvc-blobfuse.yaml` file with a *PersistentVolumeClaim*. For example:
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: pvc-blob
+ spec:
+ accessModes:
+ - ReadWriteMany
+ resources:
+ requests:
+ storage: 10Gi
+ volumeName: pv-blob
+ storageClassName: azureblob-fuse-premium
+ ```
+
+5. Run the following command to create the persistent volume claim using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f pvc-blobfuse.yaml
+ ```
+++
+### Use the persistent volume
+
+The following YAML creates a pod that uses the persistent volume claim named **pvc-blob** created earlier to mount the Azure Blob storage at the `/mnt/blob` path.
+
+1. Create a file named `nginx-pod-blob.yaml`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step, whether you provisioned the volume with NFS or blobfuse.
+
+ ```yml
+ kind: Pod
+ apiVersion: v1
+ metadata:
+ name: nginx-blob
+ spec:
+ nodeSelector:
+ "kubernetes.io/os": linux
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
+ name: nginx-blob
+ volumeMounts:
+ - name: blob01
+ mountPath: "/mnt/blob"
+ volumes:
+ - name: blob01
+ persistentVolumeClaim:
+ claimName: pvc-blob
+ ```
+
+2. Run the following command to create the pod and mount the PVC using the [kubectl create][kubectl-create] command referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f nginx-pod-blob.yaml
+ ```
+
+3. Run the following command to create an interactive shell session with the pod to verify the Blob storage mounted:
+
+ ```bash
+ kubectl exec -it nginx-blob -- df -h
+ ```
+
+ The output from the command resembles the following example:
+
+ ```bash
+ Filesystem Size Used Avail Use% Mounted on
+ ...
+ blobfuse 14G 41M 13G 1% /mnt/blob
+ ...
+ ```
+
+## Next steps
+
+- To learn how to use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-storage-csi].
+- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+
+<!-- LINKS - external -->
+[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
+[blobfuse-overview]: https://github.com/Azure/azure-storage-fuse
+[nfs-overview]: https://en.wikipedia.org/wiki/Network_File_System
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+
+<!-- LINKS - internal -->
+[operator-best-practices-storage]: operator-best-practices-storage.md
+[concepts-storage]: concepts-storage.md
+[azure-blob-storage-csi]: azure-blob-csi.md
+[azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
+[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
+[az-tags]: ../azure-resource-manager/management/tag-resources.md
+[sas-tokens]: ../storage/common/storage-sas-overview.md
+[rbac-contributor-role]: ../role-based-access-control/built-in-roles.md#contributor
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[manage-blob-storage]: ../storage/blobs/blob-containers-portal.md
aks Azure Csi Blob Storage Static https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-blob-storage-static.md
- Title: Create a static persistent volume with Azure Blob storage in Azure Kubernetes Service (AKS)-
-description: Learn how to create a static persistent volume with Azure Blob storage for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 12/27/2022---
-# Create and use a static volume with Azure Blob storage in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Blob storage to connect using [blobfuse][blobfuse-overview] or [Network File System][nfs-overview] (NFS).
-
-This article shows you how to create an Azure Blob storage container or use an existing one and attach it to a pod in AKS.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
---- If you don't have a storage account that supports the NFS v3 protocol, review [NFS v3 support with Azure Blob storage][azure-blob-storage-nfs-support].--- [Enable the Blob storage CSI driver][enable-blob-csi-driver] on your AKS cluster.-
-## Static provisioning parameters
-
-|Name | Description | Example | Mandatory | Default value|
-| | | | | |
-|volumeHandle | Specify a value the driver can use to uniquely identify the storage blob container in the cluster. | A recommended way to produce a unique value is to combine the globally unique storage account name and container name: {account-name}_{container-name}. Note: The # character is reserved for internal use and can't be used in a volume handle. | Yes ||
-|volumeAttributes.resourceGroup | Specify Azure resource group name. | myResourceGroup | No | If empty, driver will use the same resource group name as current cluster.|
-|volumeAttributes.storageAccount | Specify existing Azure storage account name. | storageAccountName | Yes ||
-|volumeAttributes.containerName | Specify existing container name. | container | Yes ||
-|volumeAttributes.protocol | Specify blobfuse mount or NFS v3 mount. | `fuse`, `nfs` | No | `fuse`|
-| | **Following parameters are only for blobfuse** | | | |
-|volumeAttributes.secretName | Secret name that stores storage account name and key (only applies for SMB).| | No ||
-|volumeAttributes.secretNamespace | Specify namespace of secret to store account key. | `default` | No | Pvc namespace|
-|nodeStageSecretRef.name | Specify secret name that stores (see examples below):<br>`azurestorageaccountkey`<br>`azurestorageaccountsastoken`<br>`msisecret`<br>`azurestoragespnclientsecret`. | |Existing Kubernetes secret name | No |
-|nodeStageSecretRef.namespace | Specify the namespace of secret. | k8s namespace | Yes ||
-| | **Following parameters are only for NFS protocol** | | | |
-|volumeAttributes.mountPermissions | Specify mounted folder permissions. | `0777` | No ||
-| | **Following parameters are only for NFS VNet setting** | | | |
-|vnetResourceGroup | Specify VNet resource group hosting virtual network. | myResourceGroup | No | If empty, driver uses the `vnetResourceGroup` value specified in the Azure cloud config file.|
-|vnetName | Specify the virtual network name. | aksVNet | No | If empty, driver uses the `vnetName` value specified in the Azure cloud config file.|
-|subnetName | Specify the existing subnet name of the agent node. | aksSubnet | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
-| | **Following parameters are only for feature: blobfuse<br> [Managed Identity and Service Principal Name authentication](https://github.com/Azure/azure-storage-fuse#environment-variables)** | | | |
-|volumeAttributes.AzureStorageAuthType | Specify the authentication type. | `Key`, `SAS`, `MSI`, `SPN` | No | `Key`|
-|volumeAttributes.AzureStorageIdentityClientID | Specify the Identity Client ID. | | No ||
-|volumeAttributes.AzureStorageIdentityObjectID | Specify the Identity Object ID. | | No ||
-|volumeAttributes.AzureStorageIdentityResourceID | Specify the Identity Resource ID. | | No ||
-|volumeAttributes.MSIEndpoint | Specify the MSI endpoint. | | No ||
-|volumeAttributes.AzureStorageSPNClientID | Specify the Azure Service Principal Name (SPN) Client ID. | | No ||
-|volumeAttributes.AzureStorageSPNTenantID | Specify the Azure SPN Tenant ID. | | No ||
-|volumeAttributes.AzureStorageAADEndpoint | Specify the Azure Active Directory (Azure AD) endpoint. | | No ||
-| | **Following parameters are only for feature: blobfuse read account key or SAS token from key vault** | | | |
-|volumeAttributes.keyVaultURL | Specify Azure Key Vault DNS name. | {vault-name}.vault.azure.net | No ||
-|volumeAttributes.keyVaultSecretName | Specify Azure Key Vault secret name. | Existing Azure Key Vault secret name. | No ||
-|volumeAttributes.keyVaultSecretVersion | Azure Key Vault secret version. | Existing version | No |If empty, driver uses current version.|
-
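-To see how several of these parameters fit together, the following minimal sketch shows a blobfuse persistent volume that authenticates with a managed identity instead of an account key. It's illustrative only: the volume handle, storage account, container, and client ID values are placeholders, not required names.
-
-```yml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
-  name: pv-blob-msi
-spec:
-  capacity:
-    storage: 10Gi
-  accessModes:
-    - ReadWriteMany
-  persistentVolumeReclaimPolicy: Retain
-  storageClassName: azureblob-fuse-premium
-  csi:
-    driver: blob.csi.azure.com
-    readOnly: false
-    volumeHandle: account-name_container-name  # placeholder; must be unique in the cluster
-    volumeAttributes:
-      storageAccount: storageAccountName       # placeholder
-      containerName: containerName             # placeholder
-      AzureStorageAuthType: MSI
-      AzureStorageIdentityClientID: "<identity-client-id>"  # placeholder
-```
-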
-## Create a Blob storage container
-
-When you create an Azure Blob storage resource for use with AKS, you can create the resource in the node resource group. This approach allows the AKS cluster to access and manage the blob storage resource. If you instead create the blob storage resource in a separate resource group, you must grant the Azure Kubernetes Service managed identity for your cluster the [Contributor][rbac-contributor-role] role on the blob storage resource group.
-
-For this article, create the container in the node resource group. First, get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**:
-
-```azurecli
-az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-```
-
-The output of the command resembles the following example:
-
-```azurecli
-MC_myResourceGroup_myAKSCluster_eastus
-```
-
-Next, create a container for storing blobs by following the steps in [Manage blob storage][manage-blob-storage] to authorize access and create the container.
-
-## Mount Blob storage as a volume using NFS
-
-Mounting Blob storage using the NFS v3 protocol doesn't authenticate with an account key. Your storage account must reside in the same virtual network as your AKS cluster, or in a peered virtual network. The only way to secure the data in your storage account is by using a virtual network and other network security settings. For more information on how to set up NFS access to your storage account, see [Mount Blob Storage by using the Network File System (NFS) 3.0 protocol](../storage/blobs/network-file-system-protocol-support-how-to.md).
-
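-Before mounting, you can confirm that an existing storage account was created with NFS v3 support by querying it with the Azure CLI. This is a quick sketch; the `enableNfsV3` property name is our assumption for the storage account resource, so verify it against your CLI version:
-
-```azurecli
-# Returns true when the storage account supports the NFS v3 protocol (assumed property name)
-az storage account show --name storageAccountName --resource-group myResourceGroup --query enableNfsV3
-```
-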
-The following example demonstrates how to mount a Blob storage container as a persistent volume using the NFS protocol.
-
-1. Create a file named `pv-blob-nfs.yaml` and copy in the following YAML. Under `volumeAttributes`, update `resourceGroup`, `storageAccount`, and `containerName`.
-
- ```yml
- apiVersion: v1
- kind: PersistentVolume
- metadata:
- name: pv-blob
- spec:
- capacity:
- storage: 10Gi
- accessModes:
- - ReadWriteMany
- persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
- storageClassName: azureblob-nfs-premium
- csi:
- driver: blob.csi.azure.com
- readOnly: false
- # make sure volumeid is unique for every identical storage blob container in the cluster
- # character `#` is reserved for internal use and cannot be used in volumehandle
- volumeHandle: unique-volumeid
- volumeAttributes:
- resourceGroup: resourceGroupName
- storageAccount: storageAccountName
- containerName: containerName
- protocol: nfs
- ```
-
-2. Run the following command to create the persistent volume using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pv-blob-nfs.yaml
- ```
-
-3. Create a `pvc-blob-nfs.yaml` file with a *PersistentVolumeClaim*. For example:
-
- ```yml
- kind: PersistentVolumeClaim
- apiVersion: v1
- metadata:
- name: pvc-blob
- spec:
- accessModes:
- - ReadWriteMany
- resources:
- requests:
- storage: 10Gi
- volumeName: pv-blob
- storageClassName: azureblob-nfs-premium
- ```
-
-4. Run the following command to create the persistent volume claim using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pvc-blob-nfs.yaml
- ```
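-
-   To confirm the claim bound to the persistent volume, you can check its status; `pvc-blob` should report **Bound**:
-
-   ```bash
-   kubectl get pvc pvc-blob
-   ```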
-
-## Mount Blob storage as a volume using Blobfuse
-
-Kubernetes needs credentials to access the Blob storage container created earlier. These credentials are either an Azure storage account access key or a SAS token, and they're stored in a Kubernetes secret, which is referenced when you create a Kubernetes pod.
-
-1. Use the `kubectl create secret` command to create the secret. You can authenticate using a storage account key or a [shared access signature][sas-tokens] (SAS) token, stored in a [Kubernetes secret][kubernetes-secret].
-
- # [Secret](#tab/secret)
-
- The following example creates a [Secret object][kubernetes-secret] named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey*. You need to provide the account name and key from an existing Azure storage account.
-
- ```bash
- kubectl create secret generic azure-secret --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountkey="KEY" --type=Opaque
- ```
-
- # [SAS tokens](#tab/sas-tokens)
-
-   The following example creates a [Secret object][kubernetes-secret] named *azure-sas-token* and populates the *azurestorageaccountname* and *azurestorageaccountsastoken*. You need to provide the account name and shared access signature from an existing Azure storage account.
-
- ```bash
-   kubectl create secret generic azure-sas-token --from-literal azurestorageaccountname=NAME --from-literal azurestorageaccountsastoken="sastoken" --type=Opaque
- ```
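-
-   If you don't already have a SAS token, one way to generate a container-level token is with the Azure CLI. This is a minimal sketch that assumes you authorize with the account key; replace the placeholders with your own values:
-
-   ```azurecli
-   # Generates a SAS token for the container, valid until the expiry date you supply
-   az storage container generate-sas \
-       --account-name NAME \
-       --name containerName \
-       --permissions dlrw \
-       --expiry <expiry-utc> \
-       --auth-mode key -o tsv
-   ```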
-
-
-
-2. Create a `pv-blobfuse.yaml` file. Under `volumeAttributes`, update `containerName`. Under `nodeStageSecretRef`, update `name` with the name of the Secret object created earlier. For example:
-
- ```yml
- apiVersion: v1
- kind: PersistentVolume
- metadata:
- name: pv-blob
- spec:
- capacity:
- storage: 10Gi
- accessModes:
- - ReadWriteMany
- persistentVolumeReclaimPolicy: Retain # If set as "Delete" container would be removed after pvc deletion
- storageClassName: azureblob-fuse-premium
- mountOptions:
- - -o allow_other
- - --file-cache-timeout-in-seconds=120
- csi:
- driver: blob.csi.azure.com
- readOnly: false
- # make sure volumeid is unique for every identical storage blob container in the cluster
- # character `#` is reserved for internal use and cannot be used in volumehandle
- volumeHandle: unique-volumeid
- volumeAttributes:
- containerName: containerName
- nodeStageSecretRef:
- name: azure-secret
- namespace: default
- ```
-
-3. Run the following command to create the persistent volume using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pv-blobfuse.yaml
- ```
-
-4. Create a `pvc-blobfuse.yaml` file with a *PersistentVolumeClaim*. For example:
-
- ```yml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: pvc-blob
- spec:
- accessModes:
- - ReadWriteMany
- resources:
- requests:
- storage: 10Gi
- volumeName: pv-blob
- storageClassName: azureblob-fuse-premium
- ```
-
-5. Run the following command to create the persistent volume claim using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f pvc-blobfuse.yaml
- ```
-
-## Use the persistent volume
-
-The following YAML creates a pod that uses the persistent volume claim named **pvc-blob** created earlier to mount the Azure Blob storage at the `/mnt/blob` path.
-
-1. Create a file named `nginx-pod-blob.yaml`, and copy in the following YAML. Make sure that the **claimName** matches the PVC created in the previous step when creating a persistent volume for NFS or Blobfuse.
-
- ```yml
- kind: Pod
- apiVersion: v1
- metadata:
- name: nginx-blob
- spec:
- nodeSelector:
- "kubernetes.io/os": linux
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.17.3-alpine
- name: nginx-blob
- volumeMounts:
- - name: blob01
- mountPath: "/mnt/blob"
- volumes:
- - name: blob01
- persistentVolumeClaim:
- claimName: pvc-blob
- ```
-
-2. Run the following command to create the pod and mount the PVC using the `kubectl create` command referencing the YAML file created earlier:
-
- ```bash
- kubectl create -f nginx-pod-blob.yaml
- ```
-
-3. Run the following command to verify from inside the pod that the Blob storage is mounted:
-
- ```bash
- kubectl exec -it nginx-blob -- df -h
- ```
-
- The output from the command resembles the following example:
-
- ```bash
- Filesystem Size Used Avail Use% Mounted on
- ...
- blobfuse 14G 41M 13G 1% /mnt/blob
- ...
- ```
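-
-   As an extra check, you can write a file to the mount and read it back. This is a quick sketch; the file name is arbitrary:
-
-   ```bash
-   kubectl exec nginx-blob -- sh -c "echo hello > /mnt/blob/test.txt && cat /mnt/blob/test.txt"
-   ```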
-
-## Next steps
-
-- To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-storage-csi].
-- To learn how to manually set up a dynamic persistent volume, see [Create and use a dynamic volume with Azure Blob storage][azure-csi-blob-storage-dynamic].
-- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-<!-- LINKS - external -->
-[kubectl-create]: https://kubernetes.io/docs/user-guide/kubectl/v1.8/#create
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[blobfuse-overview]: https://github.com/Azure/azure-storage-fuse
-[nfs-overview]: https://en.wikipedia.org/wiki/Network_File_System
-
-<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
-[use-tags]: use-tags.md
-[use-managed-identity]: use-managed-identity.md
-[sas-tokens]: ../storage/common/storage-sas-overview.md
-[azure-csi-blob-storage-dynamic]: azure-csi-blob-storage-dynamic.md
-[azure-blob-storage-csi]: azure-blob-csi.md
-[rbac-contributor-role]: ../role-based-access-control/built-in-roles.md#contributor
-[az-aks-show]: /cli/azure/aks#az-aks-show
-[manage-blob-storage]: ../storage/blobs/blob-containers-cli.md
-[azure-blob-storage-nfs-support]: ../storage/blobs/network-file-system-protocol-support.md
-[enable-blob-csi-driver]: azure-blob-csi.md#before-you-begin
aks Azure Csi Disk Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-disk-storage-provision.md
+
+ Title: Create a persistent volume with Azure Disks in Azure Kubernetes Service (AKS)
+
+description: Learn how to create a static or dynamic persistent volume with Azure Disks for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
+ Last updated : 01/18/2023++
+# Create and use a volume with Azure Disks in Azure Kubernetes Service (AKS)
+
+A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.
+
+> [!NOTE]
+> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
+
+This article shows you how to:
+
+* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure managed disks to attach to a pod.
+* Work with a static PV by creating one or more Azure managed disks, or use an existing one and attach it to a pod.
+
+For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
+
+## Before you begin
+
+- An Azure [storage account][azure-storage-account].
+
+- The Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count changes based on the size of the node/node pool. Run the [kubectl get][kubectl-get] command to determine the number of volumes that can be allocated per node:
+
+```console
+kubectl get CSINode <nodename> -o yaml
+```
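+
+If you only need the allocatable volume count, you can filter the same object with a JSONPath expression instead of scanning the full YAML. This is a sketch that assumes the Azure Disks CSI driver is registered on the node as `disk.csi.azure.com`:
+
+```console
+kubectl get CSINode <nodename> -o jsonpath='{.spec.drivers[?(@.name=="disk.csi.azure.com")].allocatable.count}'
+```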
+
+## Dynamically provision a volume
+
+This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of Azure Disk storage for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Disk.
+
+### Dynamic provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value|
+| | | | | |
+|skuName | Azure Disks storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Premium_LRS`, `StandardSSD_LRS`, `PremiumV2_LRS`, `UltraSSD_LRS`, `Premium_ZRS`, `StandardSSD_ZRS` | No | `StandardSSD_LRS`|
+|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows|
+|cachingMode | [Azure Data Disk Host Cache Setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
+|location | Specify Azure region where Azure Disks will be created | `eastus`, `westus`, etc. | No | If empty, driver will use the same location name as current AKS cluster|
+|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver will use the same resource group name as current AKS cluster|
+|DiskIOPSReadWrite | [UltraSSD disk][ultra-ssd-disks] IOPS capability (minimum: 2 IOPS/GiB) | 100~160000 | No | `500`|
+|DiskMBpsReadWrite | [UltraSSD disk][ultra-ssd-disks] throughput capability (minimum: 0.032/GiB) | 1~2000 | No | `100`|
+|LogicalSectorSize | Logical sector size in bytes for ultra disk. Supported values are 512 and 4096. | `512`, `4096` | No | `4096`|
+|tags | Azure Disk [tags][azure-tags] | Tag format: `key1=val1,key2=val2` | No | ""|
+|diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest][disk-encryption] | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
+|diskEncryptionType | Encryption type of the disk encryption set. | `EncryptionAtRestWithCustomerKey` (by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
+|writeAcceleratorEnabled | [Write Accelerator on Azure Disks][azure-disk-write-accelerator] | `true`, `false` | No | ""|
+|networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | `AllowAll`, `DenyAll`, `AllowPrivate` | No | `AllowAll`|
+|diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
+|enableBursting | [Enable on-demand bursting][on-demand-bursting] beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disk and when the disk size > 512 GB. Ultra and shared disk isn't supported. Bursting is disabled by default. | `true`, `false` | No | `false`|
+|useragent | User agent used for [customer usage attribution][customer-usage-attribution] | | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`|
+|enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this parameter can speed up disk attachment, you may hit the Azure API throttling limit when there's a large number of volume attachments. | `true`, `false` | No | `false`|
+|subscriptionID | Specify the Azure subscription ID where the Azure Disks are created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
+| | **Following parameters are only for v2** | | | |
+| enableAsyncAttach | The v2 driver uses a different strategy to manage Azure API throttling and ignores this parameter. | | No | |
+| maxShares | The total number of shared disk mounts allowed for the disk. Setting the value to 2 or more enables attachment replicas. | Supported values depend on the disk size. See [Share an Azure managed disk][share-azure-managed-disk] for supported values. | No | 1 |
+| maxMountReplicaCount | The number of replicas attachments to maintain. | This value must be in the range `[0..(maxShares - 1)]` | No | If `accessMode` is `ReadWriteMany`, the default is `0`. Otherwise, the default is `maxShares - 1` |
+
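+To see how a few of these parameters combine, the following minimal sketch defines a custom storage class for premium disks with read-only host caching and resource tags. The class name and tag values are illustrative, not required names:
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: managed-premium-retain   # illustrative name
+provisioner: disk.csi.azure.com
+parameters:
+  skuName: Premium_LRS
+  cachingMode: ReadOnly
+  tags: dept=engineering,env=test   # illustrative tag values
+reclaimPolicy: Retain
+allowVolumeExpansion: true
+```
+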
+### Built-in storage classes
+
+A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
+
+Each AKS cluster includes four pre-created storage classes, two of them configured to work with Azure Disks:
+
+* The *default* storage class provisions a standard SSD Azure Disk.
+ * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still delivering reliable performance.
+* The *managed-csi-premium* storage class provisions a premium Azure Disk.
+    * Premium disks are backed by SSD-based high-performance, low-latency disks. They're ideal for VMs running production workloads. If the AKS nodes in your cluster use premium storage, select the *managed-csi-premium* class.
+
+If you use one of the default storage classes, you can't update the volume size after the storage class is created. To be able to update the volume size, add the line `allowVolumeExpansion: true` to one of the default storage classes, or create your own custom storage class. Reducing the size of a PVC isn't supported (to prevent data loss). You can edit an existing storage class using the `kubectl edit sc` command, as shown in the sketch below.
+
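+To allow volume expansion on an existing class, for instance, you could open it for editing and add the flag. A quick sketch:
+
+```bash
+# Opens the storage class in your default editor; add this top-level line, then save:
+#   allowVolumeExpansion: true
+kubectl edit sc managed-csi
+```
+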
+For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger][disk-host-cache-setting].
+
+For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts].
+
+Use the [kubectl get sc][kubectl-get] command to see the pre-created storage classes. The following example shows the pre-created storage classes available within an AKS cluster:
+
+```bash
+kubectl get sc
+```
+
+The output of the command resembles the following example:
+
+```console
+NAME PROVISIONER AGE
+default (default) disk.csi.azure.com 1h
+managed-csi disk.csi.azure.com 1h
+```
+
+> [!NOTE]
+> Persistent volume claims are specified in GiB but Azure managed disks are billed by SKU for a specific size. These SKUs range from 32 GiB for S4 or P4 disks to 32 TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. For more information, see [Pricing and performance of managed disks][managed-disk-pricing-performance].
+
+### Create a persistent volume claim
+
+A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.
+
+Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5 GiB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class.
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: azure-managed-disk
+spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: managed-csi
+ resources:
+ requests:
+ storage: 5Gi
+```
+
+> [!TIP]
+> To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*.
+
+Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file:
+
+```bash
+kubectl apply -f azure-pvc.yaml
+```
+
+The output of the command resembles the following example:
+
+```console
+persistentvolumeclaim/azure-managed-disk created
+```
+
+### Use the persistent volume
+
+Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *azure-managed-disk* to mount the Azure Disk at the path `/mnt/azure`. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+
+Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest.
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: azure-managed-disk
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
+
+```console
+kubectl apply -f azure-pvc-disk.yaml
+```
+
+The output of the command resembles the following example:
+
+```console
+pod/mypod created
+```
+
+You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command, as shown in the following condensed example:
+
+```bash
+kubectl describe pod mypod
+```
+
+The output of the command resembles the following example:
+
+```console
+[...]
+Volumes:
+ volume:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: azure-managed-disk
+ ReadOnly: false
+ default-token-smm2n:
+ Type: Secret (a volume populated by a Secret)
+ SecretName: default-token-smm2n
+ Optional: false
+[...]
+Events:
+ Type Reason Age From Message
+  ----     ------                  ----  ----                               -------
+ Normal Scheduled 2m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
+ Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-smm2n"
+ Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
+[...]
+```
+
+### Use Azure ultra disks
+
+To use Azure ultra disk, see [Use ultra disks on Azure Kubernetes Service (AKS)][use-ultra-disks].
+
+### Back up a persistent volume
+
+To back up the data in your persistent volume, take a snapshot of the managed disk for the volume. You can then use this snapshot to create a restored disk and attach to pods as a means of restoring the data.
+
+First, get the volume name with the [kubectl get][kubectl-get] command, such as for the PVC named *azure-managed-disk*:
+
+```bash
+kubectl get pvc azure-managed-disk
+```
+
+The output of the command resembles the following example:
+
+```console
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+azure-managed-disk Bound pvc-faf0f176-8b8d-11e8-923b-deb28c58d242 5Gi RWO managed-premium 3m
+```
+
+This volume name forms the underlying Azure disk name. Query for the disk ID with [az disk list][az-disk-list] and provide your PVC volume name, as shown in the following example:
+
+```azurecli
+az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv
+
+/subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
+```
+
+Use the disk ID to create a snapshot disk with [az snapshot create][az-snapshot-create]. The following example creates a snapshot named *pvcSnapshot* in the same resource group as the AKS cluster *MC_myResourceGroup_myAKSCluster_eastus*. You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to.
+
+```azurecli
+az snapshot create \
+ --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --name pvcSnapshot \
+  --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
+```
+
+Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
+
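+To check whether the snapshot is ready, you can query its provisioning state with the Azure CLI. A quick sketch:
+
+```azurecli
+# Reports "Succeeded" once the snapshot has been created
+az snapshot show \
+  --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+  --name pvcSnapshot \
+  --query provisioningState -o tsv
+```
+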
+### Restore and use a snapshot
+
+To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with [az disk create][az-disk-create]. This operation preserves the original resource if you then need to access the original data snapshot. The following example creates a disk named *pvcRestored* from the snapshot named *pvcSnapshot*:
+
+```azurecli
+az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot
+```
+
+To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the [az disk show][az-disk-show] command. The following example gets the disk ID for *pvcRestored* created in the previous step:
+
+```azurecli
+az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv
+```
+
+Create a pod manifest named `azure-restored.yaml` and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at */mnt/azure*:
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+ name: mypodrestored
+spec:
+ containers:
+ - name: mypodrestored
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ azureDisk:
+ kind: Managed
+ diskName: pvcRestored
+ diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
+
+```bash
+kubectl apply -f azure-restored.yaml
+```
+
+The output of the command resembles the following example:
+
+```console
+pod/mypodrestored created
+```
+
+You can use `kubectl describe pod mypodrestored` to view details of the pod, such as the following condensed example that shows the volume information:
+
+```bash
+kubectl describe pod mypodrestored
+```
+
+The output of the command resembles the following example:
+
+```console
+[...]
+Volumes:
+ volume:
+ Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
+ DiskName: pvcRestored
+ DiskURI: /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroupAKS_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
+ Kind: Managed
+ FSType: ext4
+ CachingMode: ReadWrite
+ ReadOnly: false
+[...]
+```
+
+### Using Azure tags
+
+For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+
+## Statically provision a volume
+
+This section provides guidance for cluster administrators who want to create one or more persistent volumes that include details of Azure Disks storage for use by a workload.
+
+### Static provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value|
+| | | | | |
+|volumeHandle| Azure disk URI | `/subscriptions/{sub-id}/resourcegroups/{group-name}/providers/microsoft.compute/disks/{disk-id}` | Yes | N/A|
+|volumeAttributes.fsType | File system type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows |
+|volumeAttributes.partition | Partition number of the existing disk (only supported on Linux). The partition name must follow the format `-part1`. | `1`, `2`, `3` | No | Empty (no partition) |
+|volumeAttributes.cachingMode | [Disk host cache setting][disk-host-cache-setting] | `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
+
+### Create an Azure disk
+
+When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role on the disk's resource group. In this exercise, you're going to create the disk in the same resource group as your cluster.
+
+1. Identify the resource group name using the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` parameter. The following example gets the node resource group for the AKS cluster named *myAKSCluster* in the resource group named *myResourceGroup*:
+
+ ```azurecli-interactive
+ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+
+ MC_myResourceGroup_myAKSCluster_eastus
+ ```
+
+2. Create a disk using the [az disk create][az-disk-create] command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as *myAKSDisk*. The following example creates a *20 GiB* disk, and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk.
+
+ ```azurecli-interactive
+ az disk create \
+ --resource-group MC_myResourceGroup_myAKSCluster_eastus \
+ --name myAKSDisk \
+ --size-gb 20 \
+ --query id --output tsv
+ ```
+
+ > [!NOTE]
+ > Azure Disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See [Pricing and Performance of Managed Disks][managed-disk-pricing-performance].
+
+ The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section.
+
+ ```console
+    /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+ ```
+
+### Mount disk as a volume
+
+1. Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with disk resource ID from the previous step. For example:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: pv-azuredisk
+ spec:
+ capacity:
+ storage: 20Gi
+ accessModes:
+ - ReadWriteOnce
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: managed-csi
+ csi:
+ driver: disk.csi.azure.com
+ readOnly: false
+        volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
+ volumeAttributes:
+ fsType: ext4
+ ```
+
+2. Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: pvc-azuredisk
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ resources:
+ requests:
+ storage: 20Gi
+ volumeName: pv-azuredisk
+ storageClassName: managed-csi
+ ```
+
+3. Use the [kubectl apply][kubectl-apply] commands to create the *PersistentVolume* and *PersistentVolumeClaim*, referencing the two YAML files created earlier:
+
+ ```bash
+ kubectl apply -f pv-azuredisk.yaml
+ kubectl apply -f pvc-azuredisk.yaml
+ ```
+
+4. To verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*, run the following command:
+
+ ```bash
+ kubectl get pvc pvc-azuredisk
+ ```
+
+ The output of the command resembles the following example:
+
+ ```console
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ pvc-azuredisk Bound pv-azuredisk 20Gi RWO 5s
+ ```
+
+5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
+
+ ```yaml
+ apiVersion: v1
+ kind: Pod
+ metadata:
+ name: mypod
+ spec:
+ nodeSelector:
+ kubernetes.io/os: linux
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: mypod
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - name: azure
+ mountPath: /mnt/azure
+ volumes:
+ - name: azure
+ persistentVolumeClaim:
+ claimName: pvc-azuredisk
+ ```
+
+6. Run the [kubectl apply][kubectl-apply] command to apply the configuration and mount the volume, referencing the YAML configuration file created in the previous steps:
+
+ ```bash
+ kubectl apply -f azure-disk-pod.yaml
+ ```
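+
+    Optionally, you can confirm that the disk is mounted inside the pod. A quick check:
+
+    ```bash
+    # Shows the filesystem backing /mnt/azure once the volume attach completes
+    kubectl exec mypod -- df -h /mnt/azure
+    ```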
+
+## Next steps
+
+- To learn how to use CSI driver for Azure Disks storage, see [Use Azure Disks storage with CSI driver][azure-disks-storage-csi].
+- For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+
+<!-- LINKS - external -->
+[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
+[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
+[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+
+<!-- LINKS - internal -->
+[azure-storage-account]: ../storage/common/storage-introduction.md
+[azure-disks-storage-csi]: azure-disk-csi.md
+[azure-files-pvc]: azure-files-dynamic-pv.md
+[az-disk-list]: /cli/azure/disk#az_disk_list
+[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
+[az-disk-create]: /cli/azure/disk#az_disk_create
+[az-disk-show]: /cli/azure/disk#az_disk_show
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[install-azure-cli]: /cli/azure/install-azure-cli
+[operator-best-practices-storage]: operator-best-practices-storage.md
+[concepts-storage]: concepts-storage.md
+[storage-class-concepts]: concepts-storage.md#storage-classes
+[use-tags]: use-tags.md
+[share-azure-managed-disk]: ../virtual-machines/disks-shared.md
+[disk-host-cache-setting]: ../virtual-machines/windows/premium-storage-performance.md#disk-caching
+[use-ultra-disks]: use-ultra-disks.md
+[ultra-ssd-disks]: ../virtual-machines/linux/disks-ultra-ssd.md
+[azure-tags]: ../azure-resource-manager/management/tag-resources.md
+[disk-encryption]: ../virtual-machines/windows/disk-encryption.md
+[azure-disk-write-accelerator]: ../virtual-machines/windows/how-to-enable-write-accelerator.md
+[on-demand-bursting]: ../virtual-machines/disk-bursting.md
+[customer-usage-attribution]: ../marketplace/azure-partner-customer-usage-attribution.md
aks Azure Csi Files Storage Provision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-csi-files-storage-provision.md
+
+ Title: Create a persistent volume with Azure Files in Azure Kubernetes Service (AKS)
+
+description: Learn how to create a static or dynamic persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
+ Last updated : 01/18/2023++
+# Create and use a volume with Azure Files in Azure Kubernetes Service (AKS)
+
+A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
+
+This article shows you how to:
+
+* Work with a dynamic persistent volume (PV) by installing the Container Storage Interface (CSI) driver and dynamically creating one or more Azure file shares to attach to a pod.
+* Work with a static PV by creating one or more Azure file shares, or use an existing one and attach it to a pod.
+
+For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
+
+## Before you begin
+
+- An Azure [storage account][azure-storage-account].
+
+- The Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+
+## Dynamically provision a volume
+
+This section provides guidance for cluster administrators who want to provision one or more persistent volumes that include details of one or more shares on Azure Files for use by a workload. A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure Files file share.
+
+### Dynamic provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value|
+| | | | | |
+|skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `Standard_LRS`<br> Minimum file share size for Premium account type is 100 GB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.|
+|fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`| Yes | `ext4` for Linux|
+|location | Specify Azure region where Azure storage account will be created. | For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
+|resourceGroup | Specify the resource group where the Azure file share will be created. | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
+|shareName | Specify Azure file share name | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
+|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, and hyphens, and must be fewer than 21 characters long. | No ||
+|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If the folder name doesn't exist in the file share, the mount fails. |
+|shareAccessTier | [Access tier for file share][storage-tiers] | General purpose v2 account can choose between `TransactionOptimized` (default), `Hot`, and `Cool`. Premium storage account type for file shares only. | No | Empty. Use default setting for different storage account types.|
+|accountAccessTier | [Access tier for storage account][access-tiers-overview] | Standard account can choose `Hot` or `Cool`, and Premium account can only choose `Premium`. | No | Empty. Use default setting for different storage account types. |
+|server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
+|disableDeleteRetentionPolicy | Specify whether disable DeleteRetentionPolicy for storage account created by driver. | `true` or `false` | No | `false` |
+|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` |
+|requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` |
+|storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. |
+|tags | [Tags][tag-resources] are created in new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
+|matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` |
+| | **Following parameters are only for SMB protocol** | | |
+|subscriptionID | Specify Azure subscription ID where Azure file share is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided. |
+|storeAccountKey | Specify whether to store account key to Kubernetes secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
+|secretName | Specify secret name to store account key. | | No ||
+|secretNamespace | Specify the namespace of secret to store account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc | No | Pvc namespace, for example `csi.storage.k8s.io/pvc/namespace` |
+|useDataPlaneAPI | Specify whether to use [data plane API][data-plane-api] for file share create/delete/resize. This can work around the SRP API throttling issue because the data plane API has almost no limit, but it fails when there's a firewall or VNet setting on the storage account. | `true` or `false` | No | `false` |
+| | **Following parameters are only for NFS protocol** | | |
+|rootSquashType | Specify root squashing behavior on the share. | `AllSquash`, `NoRootSquash`, `RootSquash` | No | `NoRootSquash` |
+|mountPermissions | Mounted folder permissions. If set to `0`, driver doesn't perform `chmod` after mount. | `0777` | No | `0777` |
+| | **Following parameters are only for VNet setting. For example, NFS, private end point** | | |
+|vnetResourceGroup | Specify VNet resource group where virtual network is defined. | Existing resource group name. | No | If empty, driver uses the `vnetResourceGroup` value in Azure cloud config file. |
+|vnetName | Virtual network name | Existing virtual network name. | No | If empty, driver uses the `vnetName` value in Azure cloud config file. |
+|subnetName | Subnet name | Existing subnet name of the agent node. | No | If empty, driver uses the `subnetName` value in Azure cloud config file. |
+|fsGroupChangePolicy | Indicates how volume's ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch`|
+
+### Create a storage class
+
+A storage class is used to define how an Azure file share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure Files file share. Choose one of the following [Azure storage redundancy][storage-skus] SKUs for *skuName*:
+
+* *Standard_LRS* - standard locally redundant storage (LRS)
+* *Standard_GRS* - standard geo-redundant storage (GRS)
+* *Standard_ZRS* - standard zone redundant storage (ZRS)
+* *Standard_RAGRS* - standard read-access geo-redundant storage (RA-GRS)
+* *Premium_LRS* - premium locally redundant storage (LRS)
+* *Premium_ZRS* - premium zone redundant storage (ZRS)
+
+> [!NOTE]
+> The minimum size for a premium file share is 100 GB.
+
+For more information on Kubernetes storage classes for Azure Files, see [Kubernetes Storage Classes][kubernetes-storage-classes].
+
+Create a file named `azure-file-sc.yaml` and copy in the following example manifest. For more information on *mountOptions*, see the [Mount options][mount-options] section.
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: my-azurefile
+provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
+allowVolumeExpansion: true
+mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - actimeo=30
+parameters:
+ skuName: Premium_LRS
+```
+
+Create the storage class with the [kubectl apply][kubectl-apply] command:
+
+```bash
+kubectl apply -f azure-file-sc.yaml
+```
+
+### Create a persistent volume claim
+
+A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. The following YAML can be used to create a persistent volume claim that's *100 GiB* in size with *ReadWriteMany* access. For more information on access modes, see the [Kubernetes persistent volume][access-modes] documentation.
+
+Now create a file named `azure-file-pvc.yaml` and copy in the following YAML. Make sure that the *storageClassName* matches the storage class created in the last step:
+
+```yaml
+apiVersion: v1
+kind: PersistentVolumeClaim
+metadata:
+ name: my-azurefile
+spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: my-azurefile
+ resources:
+ requests:
+ storage: 100Gi
+```
+
+> [!NOTE]
+> If using the *Premium_LRS* sku for your storage class, the minimum value for *storage* must be *100Gi*.
+
+Create the persistent volume claim with the [kubectl apply][kubectl-apply] command:
+
+```bash
+kubectl apply -f azure-file-pvc.yaml
+```
+
+Once completed, the file share will be created. A Kubernetes secret is also created that includes connection information and credentials. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
+
+```bash
+kubectl get pvc my-azurefile
+```
+
+The output of the command resembles the following example:
+
+```console
+NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+my-azurefile   Bound    pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477   100Gi      RWX            my-azurefile   5m
+```
+
+### Use the persistent volume
+
+The following YAML creates a pod that uses the persistent volume claim *my-azurefile* to mount the Azure Files file share at the */mnt/azure* path. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+
+Create a file named `azure-pvc-files.yaml`, and copy in the following YAML. Make sure that the *claimName* matches the PVC created in the last step.
+
+```yaml
+kind: Pod
+apiVersion: v1
+metadata:
+ name: mypod
+spec:
+ containers:
+ - name: mypod
+ image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - mountPath: "/mnt/azure"
+ name: volume
+ volumes:
+ - name: volume
+ persistentVolumeClaim:
+ claimName: my-azurefile
+```
+
+Create the pod with the [kubectl apply][kubectl-apply] command.
+
+```bash
+kubectl apply -f azure-pvc-files.yaml
+```
+
+You now have a running pod with your Azure Files file share mounted in the */mnt/azure* directory. This configuration can be seen when inspecting your pod using the [kubectl describe][kubectl-describe] command. The following condensed example output shows the volume mounted in the container:
+
+```console
+Containers:
+ mypod:
+ Container ID: docker://053bc9c0df72232d755aa040bfba8b533fa696b123876108dec400e364d2523e
+ Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ Image ID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
+ State: Running
+ Started: Fri, 01 Mar 2019 23:56:16 +0000
+ Ready: True
+ Mounts:
+ /mnt/azure from volume (rw)
+ /var/run/secrets/kubernetes.io/serviceaccount from default-token-8rv4z (ro)
+[...]
+Volumes:
+ volume:
+ Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
+ ClaimName: my-azurefile
+ ReadOnly: false
+[...]
+```
+
+### Mount options
+
+The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.13.0 and above. If dynamically creating the persistent volume with a storage class, mount options can be specified on the storage class object. The following example sets *0777*:
+
+```yaml
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+ name: my-azurefile
+provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
+allowVolumeExpansion: true
+mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - actimeo=30
+parameters:
+ skuName: Premium_LRS
+```
+
+### Using Azure tags
+
+For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
+
+## Statically provision a volume
+
+This section provides guidance for cluster administrators who want to create one or more persistent volumes that include details of an existing Azure Files share to use with a workload.
+
+### Static provisioning parameters
+
+|Name | Meaning | Available Value | Mandatory | Default value |
+| | | | | |
+|volumeAttributes.resourceGroup | Specify an Azure resource group name. | myResourceGroup | No | If empty, driver uses the same resource group name as current cluster. |
+|volumeAttributes.storageAccount | Specify an existing Azure storage account name. | storageAccountName | Yes ||
+|volumeAttributes.shareName | Specify an Azure file share name. | fileShareName | Yes ||
+|volumeAttributes.folderName | Specify a folder name in Azure file share. | folderName | No | If the folder name doesn't exist in the file share, the mount fails. |
+|volumeAttributes.protocol | Specify file share protocol. | `smb`, `nfs` | No | `smb` |
+|volumeAttributes.server | Specify Azure storage account server address | Existing server address, for example `accountname.privatelink.file.core.windows.net`. | No | If empty, driver uses default `accountname.file.core.windows.net` or other sovereign cloud account address. |
+| | **Following parameters are only for SMB protocol** | | | |
+|volumeAttributes.secretName | Specify a secret name that stores storage account name and key. | | No |
+|volumeAttributes.secretNamespace | Specify a secret namespace. | `default`,`kube-system`, etc. | No | PVC namespace (`csi.storage.k8s.io/pvc/namespace`) |
+|nodeStageSecretRef.name | Specify a secret name that stores storage account name and key. | Existing secret name | Yes ||
+|nodeStageSecretRef.namespace | Specify a secret namespace. | Kubernetes namespace | Yes ||
+| | **Following parameters are only for NFS protocol** | | | |
+|volumeAttributes.fsGroupChangePolicy | Indicates how a volume's ownership is changed by the driver. Pod `securityContext.fsGroupChangePolicy` is ignored. | `OnRootMismatch` (default), `Always`, `None` | No | `OnRootMismatch` |
+|volumeAttributes.mountPermissions | Specify mounted folder permissions. The default is `0777` | | No ||
+
+### Create an Azure file share
+
+Before you can use an Azure Files file share as a Kubernetes volume, you must create an Azure storage account and the file share. In this article, you'll create the storage account and file share in the node resource group.
+
+1. Get the resource group name with the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` query parameter. The following example gets the node resource group for the AKS cluster named **myAKSCluster** in the resource group named **myResourceGroup**.
+
+ ```azurecli
+ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
+ ```
+
+ The output of the command resembles the following example:
+
+ ```azurecli
+ MC_myResourceGroup_myAKSCluster_eastus
+ ```
+
+2. The following command creates a storage account using the Standard_LRS SKU. Replace the following placeholders:
+
+ * `myAKSStorageAccount` with the name of the storage account
+ * `nodeResourceGroupName` with the name of the resource group that the AKS cluster nodes are hosted in
+ * `location` with the name of the region to create the resource in. It should be the same region as the AKS cluster nodes.
+
+ ```azurecli
+ az storage account create -n myAKSStorageAccount -g nodeResourceGroupName -l location --sku Standard_LRS
+ ```
+
+3. Run the following command to export the connection string as an environment variable. This is used when creating the Azure file share in a later step.
+
+ ```azurecli
+    export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n myAKSStorageAccount -g nodeResourceGroupName -o tsv)
+ ```
+
+4. Create the file share using the [az storage share create][az-storage-share-create] command. Replace the placeholder `shareName` with a name you want to use for the share.
+
+ ```azurecli
+ az storage share create -n shareName --connection-string $AZURE_STORAGE_CONNECTION_STRING
+ ```
+
+5. Run the following command to export the storage account key as an environment variable.
+
+ ```azurecli
+    STORAGE_KEY=$(az storage account keys list --resource-group nodeResourceGroupName --account-name myAKSStorageAccount --query "[0].value" -o tsv)
+ ```
+
+6. Run the following commands to echo the storage account name and key. Copy this information as these values are needed when you create the Kubernetes volume later in this article.
+
+ ```azurecli
+    echo Storage account name: myAKSStorageAccount
+ echo Storage account key: $STORAGE_KEY
+ ```
+
+### Create a Kubernetes secret
+
+Kubernetes needs credentials to access the file share created in the previous step. These credentials are stored in a [Kubernetes secret][kubernetes-secret], which is referenced when you create a Kubernetes pod.
+
+Use the `kubectl create secret` command to create the secret. The following example creates a secret named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey* from the previous step. To use an existing Azure storage account, provide the account name and key.
+
+```bash
+kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=myAKSStorageAccount --from-literal=azurestorageaccountkey=$STORAGE_KEY
+```
+
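+You can confirm the secret was created, without printing the key itself, by describing it. A quick check:
+
+```bash
+# Lists the secret's data keys and their sizes in bytes; values aren't shown
+kubectl describe secret azure-secret
+```
+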
+### Mount file share as an inline volume
+
+> [!NOTE]
+> Inline volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, [please use the persistent volume example][persistent-volume-example] below instead.
+
+To mount the Azure Files file share into your pod, configure the volume in the container spec. Create a new file named `azure-files-pod.yaml` with the following contents. If you changed the name of the file share or secret name, update the *shareName* and *secretName*. If desired, update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
+
+```yaml
+apiVersion: v1
+kind: Pod
+metadata:
+ name: mypod
+spec:
+ nodeSelector:
+ kubernetes.io/os: linux
+ containers:
+ - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
+ name: mypod
+ resources:
+ requests:
+ cpu: 100m
+ memory: 128Mi
+ limits:
+ cpu: 250m
+ memory: 256Mi
+ volumeMounts:
+ - name: azure
+ mountPath: /mnt/azure
+ volumes:
+ - name: azure
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeAttributes:
+ secretName: azure-secret # required
+ shareName: aksshare # required
+ mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional
+```
+
+Use the [kubectl apply][kubectl-apply] command to create the pod.
+
+```bash
+kubectl apply -f azure-files-pod.yaml
+```
+
+You now have a running pod with an Azure Files file share mounted at */mnt/azure*. You can verify the share is mounted successfully using the [kubectl describe][kubectl-describe] command:
+
+```bash
+kubectl describe pod mypod
+```
+
+### Mount file share as a persistent volume
+
+The following example demonstrates how to mount a file share as a persistent volume.
+
+1. Create a file named `azurefiles-pv.yaml` and copy in the following YAML. Under `csi`, update `resourceGroup`, `volumeHandle`, and `shareName`. For mount options, the default value for *fileMode* and *dirMode* is *0777*.
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: azurefile
+ spec:
+ capacity:
+ storage: 5Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: azurefile-csi
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeHandle: unique-volumeid # make sure this volumeid is unique for every identical share in the cluster
+ volumeAttributes:
+ resourceGroup: resourceGroupName # optional, only set this when storage account is not in the same resource group as node
+ shareName: aksshare
+ nodeStageSecretRef:
+ name: azure-secret
+ namespace: default
+ mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - nosharesock
+ - nobrl
+ ```
+
+2. Create the persistent volume using the [kubectl create][kubectl-create] command, referencing the YAML file created earlier:
+
+ ```bash
+ kubectl create -f azurefiles-pv.yaml
+ ```
+
+3. Create an *azurefiles-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*, and copy in the following YAML.
+
+ ```yaml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: azurefile
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: azurefile-csi
+ volumeName: azurefile
+ resources:
+ requests:
+ storage: 5Gi
+ ```
+
+4. Create the *PersistentVolumeClaim* using the [kubectl apply][kubectl-apply] command:
+
+ ```bash
+ kubectl apply -f azurefiles-mount-options-pvc.yaml
+ ```
+
+5. Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume* by running the following command.
+
+ ```bash
+ kubectl get pvc azurefile
+ ```
+
+ The output from the command resembles the following example:
+
+ ```console
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ azurefile Bound azurefile 5Gi RWX azurefile-csi 5s
+ ```
+
+6. Update your container spec to reference your *PersistentVolumeClaim*. For example:
+
+ ```yaml
+ ...
+ volumes:
+ - name: azure
+ persistentVolumeClaim:
+ claimName: azurefile
+ ```
+
+7. Because a pod spec can't be updated in place, use [kubectl delete][kubectl-delete] and [kubectl apply][kubectl-apply] commands to delete and then re-create the pod:
+
+ ```bash
+ kubectl delete pod mypod
+
+ kubectl apply -f azure-files-pod.yaml
+ ```
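+
+You can then confirm the re-created pod reaches the *Running* state, for example:
+
+```bash
+# Watch the pod until its STATUS column shows Running, then press Ctrl+C
+kubectl get pod mypod --watch
+```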
+
+## Next steps
+
+For Azure Files CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
+
+For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
+
+<!-- LINKS - external -->
+[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
+[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
+[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share
+[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file
+[kubernetes-persistent-volume]: https://kubernetes.io/docs/concepts/storage/persistent-volumes
+[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
+[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
+[kubectl-create]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#create
+[data-plane-api]: https://pkg.go.dev/github.com/Azure/azure-sdk-for-go/sdk/storage/azblob
+[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
+[kubectl-delete]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#delete
+
+<!-- LINKS - internal -->
+[azure-storage-account]: ../storage/common/storage-introduction.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[operator-best-practices-storage]: operator-best-practices-storage.md
+[concepts-storage]: concepts-storage.md
+[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
+[use-tags]: use-tags.md
+[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
+[storage-skus]: ../storage/common/storage-redundancy.md
+[mount-options]: #mount-options
+[az-aks-show]: /cli/azure/aks#az-aks-show
+[az-storage-share-create]: /cli/azure/storage/share#az-storage-share-create
+[storage-tiers]: ../storage/files/storage-files-planning.md#storage-tiers
+[access-tiers-overview]: ../storage/blobs/access-tiers-overview.md
+[tag-resources]: ../azure-resource-manager/management/tag-resources.md
aks Azure Disk Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Disks on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Disks in an Azure Kubernetes Service (AKS) cluster.- Previously updated : 10/13/2022 Last updated : 01/18/2023 # Use the Azure Disks Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to in-tree driver features, Azure Disks CSI driver supports the following features:
- [Volume clone](#clone-volumes) - [Resize disk PV without downtime (Preview)](#resize-a-persistent-volume-without-downtime-preview)
+> [!NOTE]
+> Depending on the VM SKU that's being used, the Azure Disks CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
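+
+One way to see the volume limit reported for a given node is to query its *CSINode* object, as in the following sketch (replace `<nodename>` with one of your node names):
+
+```bash
+# The allocatable volume count for the Azure Disks CSI driver appears under spec.drivers
+kubectl get CSINode <nodename> -o yaml
+```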
+ ## Storage class driver dynamic disks parameters |Name | Meaning | Available Value | Mandatory | Default value
In addition to in-tree driver features, Azure Disks CSI driver supports the following features:
|LogicalSectorSize | Logical sector size in bytes for Ultra disk. Supported values are 512 and 4096. 4096 is the default. | `512`, `4096` | No | `4096`| |tags | Azure Disk [tags](../azure-resource-manager/management/tag-resources.md) | Tag format: `key1=val1,key2=val2` | No | ""| |diskEncryptionSetID | ResourceId of the disk encryption set to use for [enabling encryption at rest](../virtual-machines/windows/disk-encryption.md) | format: `/subscriptions/{subs-id}/resourceGroups/{rg-name}/providers/Microsoft.Compute/diskEncryptionSets/{diskEncryptionSet-name}` | No | ""|
-|diskEncryptionType | Encryption type of the disk encryption set | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
+|diskEncryptionType | Encryption type of the disk encryption set. | `EncryptionAtRestWithCustomerKey`(by default), `EncryptionAtRestWithPlatformAndCustomerKeys` | No | ""|
|writeAcceleratorEnabled | [Write Accelerator on Azure Disks](../virtual-machines/windows/how-to-enable-write-accelerator.md) | `true`, `false` | No | ""| |networkAccessPolicy | NetworkAccessPolicy property to prevent generation of the SAS URI for a disk or a snapshot | `AllowAll`, `DenyAll`, `AllowPrivate` | No | `AllowAll`|
-|diskAccessID | ARM ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
+|diskAccessID | Azure Resource ID of the DiskAccess resource to use private endpoints on disks | | No | ``|
|enableBursting | [Enable on-demand bursting](../virtual-machines/disk-bursting.md) beyond the provisioned performance target of the disk. On-demand bursting should only be applied to Premium disks and only when the disk size is greater than 512 GB. Ultra and shared disks aren't supported. Bursting is disabled by default. | `true`, `false` | No | `false`| |useragent | User agent used for [customer usage attribution](../marketplace/azure-partner-customer-usage-attribution.md)| | No | Generated Useragent formatted `driverName/driverVersion compiler/version (OS-ARCH)`| |enableAsyncAttach | Allow multiple disk attach operations (in batch) on one node in parallel.<br> While this parameter can speed up disk attachment, you may encounter Azure API throttling limits when there's a large number of volume attachments. | `true`, `false` | No | `false`|
-|subscriptionID | Specify Azure subscription ID where the Azure Disks will be created | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
+|subscriptionID | Specify the Azure subscription ID where the Azure disks are created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided.|
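+
+As a minimal sketch of how a few of these parameters fit together, the following hypothetical StorageClass tags each dynamically provisioned disk and blocks SAS URI generation; the class name and tag values are placeholders:
+
+```bash
+# Apply a custom StorageClass for the Azure Disks CSI driver inline
+cat <<EOF | kubectl apply -f -
+apiVersion: storage.k8s.io/v1
+kind: StorageClass
+metadata:
+  name: managed-csi-tagged
+provisioner: disk.csi.azure.com
+parameters:
+  tags: dept=finance,env=test      # Azure tags applied to each provisioned disk
+  networkAccessPolicy: DenyAll     # prevent SAS URI generation for disks and snapshots
+reclaimPolicy: Delete
+allowVolumeExpansion: true
+EOF
+```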
## Use CSI persistent volumes with Azure Disks
-A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disk for use by a single pod in an AKS cluster. For static provisioning, see [Create a static volume with Azure Disks](azure-disk-volume.md).
+A [persistent volume](concepts-storage.md#persistent-volumes) (PV) represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. This article shows you how to dynamically create PVs with Azure disk for use by a single pod in an AKS cluster. For static provisioning, see [Create a static volume with Azure Disks](azure-csi-disk-storage-provision.md#statically-provision-a-volume).
For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
The output of the command resembles the following example:
## Next steps - To learn how to use CSI driver for Azure Files, see [Use Azure Files with CSI driver][azure-files-csi].-- To learn how to use CSI driver for Azure Blob storage (preview), see [Use Azure Blob storage with CSI driver][azure-blob-csi] (preview).
+- To learn how to use CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI driver][azure-blob-csi].
- For more information about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage]. <!-- LINKS - external -->
The output of the command resembles the following example:
[az-on-demand-bursting]: ../virtual-machines/disk-bursting.md#on-demand-bursting [enable-on-demand-bursting]: ../virtual-machines/disks-enable-bursting.md?tabs=azure-cli [az-premium-ssd]: ../virtual-machines/disks-types.md#premium-ssds
+[general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
aks Azure Disk Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disk-volume.md
- Title: Create a static volume for pods in Azure Kubernetes Service (AKS)
-description: Learn how to manually create a volume with Azure disks for use with a pod in Azure Kubernetes Service (AKS)
-- Previously updated : 05/17/2022--
-#Customer intent: As a developer, I want to learn how to manually create and attach storage to a specific pod in AKS.
--
-# Create a static volume with Azure disks in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If a single pod needs access to storage, you can use Azure disks to present a native volume for application use. This article shows you how to manually create an Azure disk and attach it to a pod in AKS.
-
-> [!NOTE]
-> An Azure disk can only be mounted to a single pod at a time. If you need to share a persistent volume across multiple pods, use [Azure Files][azure-files-volume].
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-If you want to interact with Azure disks on an AKS cluster with 1.20 or previous version, see the [Kubernetes plugin for Azure disks][kubernetes-disks].
-
-The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count will change based on the size of the node/node pool. Run the following command to determine the number of volumes that can be allocated per node:
-
-```console
-kubectl get CSINode <nodename> -o yaml
-```
-
-## Storage class static provisioning
-
-The following table describes the Storage Class parameters for the Azure disk CSI driver static provisioning:
-
-|Name | Meaning | Available Value | Mandatory | Default value|
-| | | | | |
-|volumeHandle| Azure disk URI | `/subscriptions/{sub-id}/resourcegroups/{group-name}/providers/microsoft.compute/disks/{disk-id}` | Yes | N/A|
-|volumeAttributes.fsType | File system type | `ext4`, `ext3`, `ext2`, `xfs`, `btrfs` for Linux, `ntfs` for Windows | No | `ext4` for Linux, `ntfs` for Windows |
-|volumeAttributes.partition | Partition number of the existing disk (only supported on Linux) | `1`, `2`, `3` | No | Empty (no partition) </br>- Make sure partition format is like `-part1` |
-|volumeAttributes.cachingMode | [Disk host cache setting](../virtual-machines/windows/premium-storage-performance.md#disk-caching)| `None`, `ReadOnly`, `ReadWrite` | No | `ReadOnly`|
-
-## Create an Azure disk
-
-When you create an Azure disk for use with AKS, you can create the disk resource in the **node** resource group. This approach allows the AKS cluster to access and manage the disk resource. If you instead create the disk in a separate resource group, you must grant the Azure Kubernetes Service (AKS) managed identity for your cluster the `Contributor` role on the disk's resource group; a sketch of such a role assignment follows. In this exercise, you're going to create the disk in the same resource group as your cluster.
-
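-For example, you could grant the role with the Azure CLI; the identity ID and resource group names below are placeholders to replace with your own values:
-
-```azurecli
-# Grant the cluster's managed identity Contributor rights on the resource group that holds the disk
-az role assignment create \
-    --assignee <cluster-managed-identity-client-id> \
-    --role Contributor \
-    --scope /subscriptions/<subscriptionID>/resourceGroups/<diskResourceGroup>
-```
-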
-1. Identify the resource group name using the [az aks show][az-aks-show] command and add the `--query nodeResourceGroup` parameter. The following example gets the node resource group for the AKS cluster name *myAKSCluster* in the resource group name *myResourceGroup*:
-
- ```azurecli-interactive
- $ az aks show --resource-group myResourceGroup --name myAKSCluster --query nodeResourceGroup -o tsv
-
- MC_myResourceGroup_myAKSCluster_eastus
- ```
-
-2. Create a disk using the [az disk create][az-disk-create] command. Specify the node resource group name obtained in the previous command, and then a name for the disk resource, such as *myAKSDisk*. The following example creates a *20 GiB* disk and outputs the ID of the disk after it's created. If you need to create a disk for use with Windows Server containers, add the `--os-type windows` parameter to correctly format the disk.
-
- ```azurecli-interactive
- az disk create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
- --name myAKSDisk \
- --size-gb 20 \
- --query id --output tsv
- ```
-
- > [!NOTE]
- > Azure disks are billed by SKU for a specific size. These SKUs range from 32GiB for S4 or P4 disks to 32TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. See [Pricing and Performance of Managed Disks][managed-disk-pricing-performance].
-
- The disk resource ID is displayed once the command has successfully completed, as shown in the following example output. This disk ID is used to mount the disk in the next section.
-
- ```console
- /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
- ```
-
-## Mount disk as a volume
-
-1. Create a *pv-azuredisk.yaml* file with a *PersistentVolume*. Update `volumeHandle` with disk resource ID from the previous step. For example:
-
- ```yaml
- apiVersion: v1
- kind: PersistentVolume
- metadata:
- name: pv-azuredisk
- spec:
- capacity:
- storage: 20Gi
- accessModes:
- - ReadWriteOnce
- persistentVolumeReclaimPolicy: Retain
- storageClassName: managed-csi
- csi:
- driver: disk.csi.azure.com
- readOnly: false
- volumeHandle: /subscriptions/<subscriptionID>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/myAKSDisk
- volumeAttributes:
- fsType: ext4
- ```
-
-2. Create a *pvc-azuredisk.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
-
- ```yaml
- apiVersion: v1
- kind: PersistentVolumeClaim
- metadata:
- name: pvc-azuredisk
- spec:
- accessModes:
- - ReadWriteOnce
- resources:
- requests:
- storage: 20Gi
- volumeName: pv-azuredisk
- storageClassName: managed-csi
- ```
-
-3. Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*, referencing the two YAML files created earlier:
-
- ```console
- kubectl apply -f pv-azuredisk.yaml
- kubectl apply -f pvc-azuredisk.yaml
- ```
-
-4. To verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*, run the
-following command:
-
- ```console
- $ kubectl get pvc pvc-azuredisk
-
- NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
- pvc-azuredisk Bound pv-azuredisk 20Gi RWO managed-csi 5s
- ```
-
-5. Create an *azure-disk-pod.yaml* file to reference your *PersistentVolumeClaim*. For example:
-
- ```yaml
- apiVersion: v1
- kind: Pod
- metadata:
- name: mypod
- spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: azure
- mountPath: /mnt/azure
- volumes:
- - name: azure
- persistentVolumeClaim:
- claimName: pvc-azuredisk
- ```
-
-6. Run the following command to apply the configuration and mount the volume, referencing the YAML
-configuration file created in the previous steps:
-
- ```console
- kubectl apply -f azure-disk-pod.yaml
- ```
-
-## Next steps
-
-To learn about our recommended storage and backup practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-<!-- LINKS - external -->
-[kubernetes-disks]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_disk/README.md
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
-
-<!-- LINKS - internal -->
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-group-list]: /cli/azure/group#az_group_list
-[az-resource-show]: /cli/azure/resource#az_resource_show
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[install-azure-cli]: /cli/azure/install-azure-cli
-[azure-files-volume]: azure-files-volume.md
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
aks Azure Disks Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-disks-dynamic-pv.md
- Title: Dynamically create Azure Disks volume-
-description: Learn how to dynamically create a persistent volume with Azure Disks in Azure Kubernetes Service (AKS)
-- Previously updated : 07/21/2022--
-#Customer intent: As a developer, I want to learn how to dynamically create and attach storage to pods in AKS.
--
-# Dynamically create and use a persistent volume with Azure Disks in Azure Kubernetes Service (AKS)
-
-A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. This article shows you how to dynamically create persistent volumes with Azure Disks for use by a single pod in an Azure Kubernetes Service (AKS) cluster.
-
-> [!NOTE]
-> An Azure disk can only be mounted with *Access mode* type *ReadWriteOnce*, which makes it available to one node in AKS. If you need to share a persistent volume across multiple nodes, use [Azure Files][azure-files-pvc].
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-The Azure Disks CSI driver has a limit of 32 volumes per node. The volume count will change based on the size of the node/node pool. Run the following command to determine the number of volumes that can be allocated per node:
-
-```console
-kubectl get CSINode <nodename> -o yaml
-```
-
-## Built-in storage classes
-
-A storage class is used to define how a unit of storage is dynamically created with a persistent volume. For more information on Kubernetes storage classes, see [Kubernetes Storage Classes][kubernetes-storage-classes].
-
-Each AKS cluster includes four pre-created storage classes, two of them configured to work with Azure Disks:
-
-* The *default* storage class provisions a standard SSD Azure Disk.
- * Standard storage is backed by Standard SSDs and delivers cost-effective storage while still providing reliable performance.
-* The *managed-csi-premium* storage class provisions a premium Azure Disk.
- * Premium disks are backed by SSD-based high-performance, low-latency disks, which are ideal for VMs running production workloads. If the AKS nodes in your cluster use premium storage, select the *managed-csi-premium* class.
-
-If you use one of the default storage classes, you can't update the volume size after the storage class is created. To be able to update the volume size, add the line `allowVolumeExpansion: true` to one of the default storage classes, or create your own custom storage class. Reducing the size of a PVC isn't supported (to prevent data loss). You can edit an existing storage class by using the `kubectl edit sc` command.
-
-For example, if you want to use a disk of size 4 TiB, you must create a storage class that defines `cachingmode: None` because [disk caching isn't supported for disks 4 TiB and larger](../virtual-machines/premium-storage-performance.md#disk-caching).
-
-For more information about storage classes and creating your own storage class, see [Storage options for applications in AKS][storage-class-concepts].
-
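-As a sketch of such a custom class (the class name is a placeholder), the following allows expansion and disables caching, which is required for disks 4 TiB and larger:
-
-```bash
-# Apply a custom StorageClass for the Azure Disks CSI driver inline
-cat <<EOF | kubectl apply -f -
-apiVersion: storage.k8s.io/v1
-kind: StorageClass
-metadata:
-  name: managed-csi-expandable
-provisioner: disk.csi.azure.com
-parameters:
-  skuName: Premium_LRS
-  cachingmode: None           # required for disks 4 TiB and larger
-allowVolumeExpansion: true    # permits increasing PVC size after creation
-EOF
-```
-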
-Use the [kubectl get sc][kubectl-get] command to see the pre-created storage classes. The following example shows the pre-created storage classes available within an AKS cluster:
-
-```bash
-kubectl get sc
-```
-
-The output of the command resembles the following example:
-
-```bash
-NAME PROVISIONER AGE
-default (default) disk.csi.azure.com 1h
-managed-csi disk.csi.azure.com 1h
-```
-
-> [!NOTE]
-> Persistent volume claims are specified in GiB but Azure managed disks are billed by SKU for a specific size. These SKUs range from 32 GiB for S4 or P4 disks to 32 TiB for S80 or P80 disks (in preview). The throughput and IOPS performance of a Premium managed disk depends on both the SKU and the instance size of the nodes in the AKS cluster. For more information, see [Pricing and performance of managed disks][managed-disk-pricing-performance].
-
-## Create a persistent volume claim
-
-A persistent volume claim (PVC) is used to automatically provision storage based on a storage class. In this case, a PVC can use one of the pre-created storage classes to create a standard or premium Azure managed disk.
-
-Create a file named `azure-pvc.yaml`, and copy in the following manifest. The claim requests a disk named `azure-managed-disk` that is *5 GiB* in size with *ReadWriteOnce* access. The *managed-csi* storage class is specified as the storage class.
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: azure-managed-disk
-spec:
- accessModes:
- - ReadWriteOnce
- storageClassName: managed-csi
- resources:
- requests:
- storage: 5Gi
-```
-
-> [!TIP]
-> To create a disk that uses premium storage, use `storageClassName: managed-csi-premium` rather than *managed-csi*.
-
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-pvc.yaml* file:
-
-```bash
-kubectl apply -f azure-pvc.yaml
-```
-
-The output of the command resembles the following example:
-
-```bash
-persistentvolumeclaim/azure-managed-disk created
-```
-
-## Use the persistent volume
-
-Once the persistent volume claim has been created and the disk successfully provisioned, a pod can be created with access to the disk. The following manifest creates a basic NGINX pod that uses the persistent volume claim named *azure-managed-disk* to mount the Azure Disk at the path `/mnt/azure`. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
-
-Create a file named `azure-pvc-disk.yaml`, and copy in the following manifest.
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: azure-managed-disk
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
-```console
-kubectl apply -f azure-pvc-disk.yaml
-
-pod/mypod created
-```
-
-You now have a running pod with your Azure Disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod via `kubectl describe pod mypod`, as shown in the following condensed example:
-
-```bash
-kubectl describe pod mypod
-```
-
-The output of the command resembles the following example:
-
-```bash
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: azure-managed-disk
- ReadOnly: false
- default-token-smm2n:
- Type: Secret (a volume populated by a Secret)
- SecretName: default-token-smm2n
- Optional: false
-[...]
-Events:
- Type Reason Age From Message
- ---- ------ ---- ---- -------
- Normal Scheduled 2m default-scheduler Successfully assigned mypod to aks-nodepool1-79590246-0
- Normal SuccessfulMountVolume 2m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "default-token-smm2n"
- Normal SuccessfulMountVolume 1m kubelet, aks-nodepool1-79590246-0 MountVolume.SetUp succeeded for volume "pvc-faf0f176-8b8d-11e8-923b-deb28c58d242"
-[...]
-```
-
-## Use Ultra Disks
-
-To use Ultra Disks, see [Use Ultra Disks on Azure Kubernetes Service (AKS)](use-ultra-disks.md).
-
-## Back up a persistent volume
-
-To back up the data in your persistent volume, take a snapshot of the managed disk for the volume. You can then use this snapshot to create a restored disk and attach to pods as a means of restoring the data.
-
-First, get the volume name with the `kubectl get pvc` command, such as for the PVC named *azure-managed-disk*:
-
-```console
-$ kubectl get pvc azure-managed-disk
-
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-azure-managed-disk Bound pvc-faf0f176-8b8d-11e8-923b-deb28c58d242 5Gi RWO managed-premium 3m
-```
-
-This volume name forms the underlying Azure Disk name. Query for the disk ID with [az disk list][az-disk-list] and provide your PVC volume name, as shown in the following example:
-
-```azurecli
-az disk list --query '[].id | [?contains(@,`pvc-faf0f176-8b8d-11e8-923b-deb28c58d242`)]' -o tsv
-
-/subscriptions/<guid>/resourceGroups/MC_MYRESOURCEGROUP_MYAKSCLUSTER_EASTUS/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
-```
-
-Use the disk ID to create a snapshot disk with [az snapshot create][az-snapshot-create]. The following example creates a snapshot named *pvcSnapshot* in the same resource group as the AKS cluster (*MC_myResourceGroup_myAKSCluster_eastus*). You may encounter permission issues if you create snapshots and restore disks in resource groups that the AKS cluster doesn't have access to.
-
-```azurecli
-az snapshot create \
- --resource-group MC_myResourceGroup_myAKSCluster_eastus \
- --name pvcSnapshot \
- --source /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/kubernetes-dynamic-pvc-faf0f176-8b8d-11e8-923b-deb28c58d242
-```
-
-Depending on the amount of data on your disk, it may take a few minutes to create the snapshot.
-
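-One way to check whether the snapshot is ready is to query its provisioning state; this sketch reuses the names from the example above:
-
-```azurecli
-# Succeeded indicates the snapshot is ready to use as a source
-az snapshot show \
-    --resource-group MC_myResourceGroup_myAKSCluster_eastus \
-    --name pvcSnapshot \
-    --query provisioningState -o tsv
-```
-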
-## Restore and use a snapshot
-
-To restore the disk and use it with a Kubernetes pod, use the snapshot as a source when you create a disk with [az disk create][az-disk-create]. This operation preserves the original resource if you then need to access the original data snapshot. The following example creates a disk named *pvcRestored* from the snapshot named *pvcSnapshot*:
-
-```azurecli
-az disk create --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --source pvcSnapshot
-```
-
-To use the restored disk with a pod, specify the ID of the disk in the manifest. Get the disk ID with the [az disk show][az-disk-show] command. The following example gets the disk ID for *pvcRestored* created in the previous step:
-
-```azurecli
-az disk show --resource-group MC_myResourceGroup_myAKSCluster_eastus --name pvcRestored --query id -o tsv
-```
-
-Create a pod manifest named `azure-restored.yaml` and specify the disk URI obtained in the previous step. The following example creates a basic NGINX web server, with the restored disk mounted as a volume at */mnt/azure*:
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypodrestored
-spec:
- containers:
- - name: mypodrestored
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- azureDisk:
- kind: Managed
- diskName: pvcRestored
- diskURI: /subscriptions/<guid>/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example:
-
-```bash
-$ kubectl apply -f azure-restored.yaml
-```
-
-The output of the command resembles the following example:
-
-```bash
-pod/mypodrestored created
-```
-
-You can use `kubectl describe pod mypodrestored` to view details of the pod, such as the following condensed example that shows the volume information:
-
-```bash
-kubectl describe pod mypodrestored
-```
-
-The output of the command resembles the following example:
-
-```bash
-[...]
-Volumes:
- volume:
- Type: AzureDisk (an Azure Data Disk mount on the host and bind mount to the pod)
- DiskName: pvcRestored
- DiskURI: /subscriptions/19da35d3-9a1a-4f3b-9b9c-3c56ef409565/resourceGroups/MC_myResourceGroup_myAKSCluster_eastus/providers/Microsoft.Compute/disks/pvcRestored
- Kind: Managed
- FSType: ext4
- CachingMode: ReadWrite
- ReadOnly: false
-[...]
-```
-
-## Using Azure tags
-
-For more information on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
-## Next steps
-
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-Learn more about Kubernetes persistent volumes using Azure Disks.
-
-> [!div class="nextstepaction"]
-> [Kubernetes plugin for Azure Disks][azure-disk-volume]
-
-<!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/
-
-<!-- LINKS - internal -->
-[azure-disk-volume]: azure-disk-volume.md
-[azure-files-pvc]: azure-files-dynamic-pv.md
-[premium-storage]: ../virtual-machines/disks-types.md
-[az-disk-list]: /cli/azure/disk#az_disk_list
-[az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
-[az-disk-create]: /cli/azure/disk#az_disk_create
-[az-disk-show]: /cli/azure/disk#az_disk_show
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[storage-class-concepts]: concepts-storage.md#storage-classes
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[az-extension-add]: /cli/azure/extension#az_extension_add
-[az-extension-update]: /cli/azure/extension#az_extension_update
-[az-feature-register]: /cli/azure/feature#az_feature_register
-[az-feature-list]: /cli/azure/feature#az_feature_list
-[az-provider-register]: /cli/azure/provider#az_provider_register
-[use-tags]: use-tags.md
aks Azure Files Csi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-csi.md
Title: Use Container Storage Interface (CSI) driver for Azure Files on Azure Kubernetes Service (AKS) description: Learn how to use the Container Storage Interface (CSI) driver for Azure Files in an Azure Kubernetes Service (AKS) cluster.- Previously updated : 01/03/2023-- Last updated : 01/18/2023 # Use Azure Files Container Storage Interface (CSI) driver in Azure Kubernetes Service (AKS)
In addition to the original in-tree driver features, Azure Files CSI driver supports the following features:
| | | | | |skuName | Azure Files storage account type (alias: `storageAccountType`)| `Standard_LRS`, `Standard_ZRS`, `Standard_GRS`, `Standard_RAGRS`, `Standard_RAGZRS`,`Premium_LRS`, `Premium_ZRS` | No | `Standard_LRS`<br> Minimum file share size for Premium account type is 100 GiB.<br> ZRS account type is supported in limited regions.<br> NFS file share only supports Premium account type.| |fsType | File System Type | `ext4`, `ext3`, `ext2`, `xfs`| Yes | `ext4` for Linux|
-|location | Specify Azure region where Azure storage account will be created. | `eastus`, `westus`, etc. | No | If empty, driver uses the same location name as current AKS cluster.|
-|resourceGroup | Specify the resource group where the Azure Disks will be created | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
+|location | Specify the Azure region where the Azure storage account will be created. | For example, `eastus`. | No | If empty, driver uses the same location name as current AKS cluster.|
+|resourceGroup | Specify the resource group where the Azure file share will be created. | Existing resource group name | No | If empty, driver uses the same resource group name as current AKS cluster.|
|shareName | Specify Azure file share name | Existing or new Azure file share name. | No | If empty, driver generates an Azure file share name. |
-|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, hyphens, and length should be less than 21 characters. | No |
+|shareNamePrefix | Specify Azure file share name prefix created by driver. | Share name can only contain lowercase letters, numbers, and hyphens, and must be fewer than 21 characters long. | No |
|folderName | Specify folder name in Azure file share. | Existing folder name in Azure file share. | No | If folder name does not exist in file share, mount will fail. | |shareAccessTier | [Access tier for file share][storage-tiers] | General purpose v2 account can choose between `TransactionOptimized` (default), `Hot`, and `Cool`. Premium storage account type for file shares only. | No | Empty. Use default setting for different storage account types.| |accountAccessTier | [Access tier for storage account][access-tiers-overview] | Standard account can choose `Hot` or `Cool`, and Premium account can only choose `Premium`. | No | Empty. Use default setting for different storage account types. |
In addition to the original in-tree driver features, Azure Files CSI driver supports the following features:
|allowBlobPublicAccess | Allow or disallow public access to all blobs or containers for storage account created by driver. | `true` or `false` | No | `false` | |requireInfraEncryption | Specify whether or not the service applies a secondary layer of encryption with platform managed keys for data at rest for storage account created by driver. | `true` or `false` | No | `false` | |storageEndpointSuffix | Specify Azure storage endpoint suffix. | `core.windows.net`, `core.chinacloudapi.cn`, etc. | No | If empty, driver uses default storage endpoint suffix according to cloud environment. For example, `core.windows.net`. |
-|tags | [tags][tag-resources] are created in newly created storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
+|tags | [Tags][tag-resources] are created in the new storage account. | Tag format: 'foo=aaa,bar=bbb' | No | "" |
|matchTags | Match tags when driver tries to find a suitable storage account. | `true` or `false` | No | `false` | | | **Following parameters are only for SMB protocol** | | | |subscriptionID | Specify Azure subscription ID where Azure file share is created. | Azure subscription ID | No | If not empty, `resourceGroup` must be provided. |
-|storeAccountKey | Specify whether to store account key to k8s secret. | `true` or `false`<br>`false` means driver leverages kubelet identity to get account key. | No | `true` |
+|storeAccountKey | Specify whether to store the account key in a Kubernetes secret. | `true` or `false`<br>`false` means the driver uses the kubelet identity to get the account key. | No | `true` |
|secretName | Specify the secret name used to store the account key. | | No | |secretNamespace | Specify the namespace of the secret that stores the account key. <br><br> **Note:** <br> If `secretNamespace` isn't specified, the secret is created in the same namespace as the pod. | `default`,`kube-system`, etc | No | Pvc namespace, for example `csi.storage.k8s.io/pvc/namespace` | |useDataPlaneAPI | Specify whether to use [data plane API][data-plane-api] for file share create/delete/resize. This could solve the SRP API throttling issue because the data plane API has almost no limit, but it fails when a firewall or virtual network setting is configured on the storage account. | `true` or `false` | No | `false` |
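
As a minimal sketch of how several of these parameters combine, the following hypothetical StorageClass provisions premium SMB shares with a share name prefix; the class name and prefix are placeholders:

```bash
# Apply a custom StorageClass for the Azure Files CSI driver inline
cat <<EOF | kubectl apply -f -
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: azurefile-csi-custom
provisioner: file.csi.azure.com
parameters:
  skuName: Premium_LRS     # premium file shares have a 100 GiB minimum
  shareNamePrefix: app     # lowercase, fewer than 21 characters
allowVolumeExpansion: true
EOF
```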
In addition to the original in-tree driver features, Azure Files CSI driver supports the following features:
## Use a persistent volume with Azure Files
-A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share][azure-files-pvc-manual].
+A [persistent volume (PV)][persistent-volume] represents a piece of storage that's provisioned for use with Kubernetes pods. A PV can be used by one or many pods and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect by using the [Server Message Block (SMB)][smb-overview] or [NFS protocol][nfs-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an AKS cluster. For static provisioning, see [Manually create and use a volume with an Azure Files share](azure-files-storage-provision.md#statically-provision-a-volume).
With Azure Files shares, there is no limit as to how many can be mounted on a node.
A storage class is used to define how an Azure file share is created. A storage account is automatically created in the node resource group for use with the storage class to hold the Azure file shares.
> [!NOTE] > Azure Files supports Azure Premium Storage. The minimum premium file share capacity is 100 GiB.
-When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
+When you use storage CSI drivers on AKS, there are two more built-in `StorageClasses` that use the Azure Files CSI storage drivers. The other CSI storage classes are created with the cluster alongside the in-tree default storage classes.
- `azurefile-csi`: Uses Azure Standard Storage to create an Azure Files share. - `azurefile-csi-premium`: Uses Azure Premium Storage to create an Azure Files share.
You can request a larger volume for a PVC. Edit the PVC object, and specify a larger size.
> [!NOTE] > A new PV is never created to satisfy the claim. Instead, an existing volume is resized.
-In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100GiB file share. We can confirm that by running:
+In AKS, the built-in `azurefile-csi` storage class already supports expansion, so use the [PVC created earlier with this storage class](#dynamically-create-azure-files-pvs-by-using-the-built-in-storage-classes). The PVC requested a 100 GiB file share. We can confirm that by running:
```bash kubectl exec -it nginx-azurefile -- df -h /mnt/azurefile
The output of the commands resembles the following example:
[azure-disk-csi]: azure-disk-csi.md [azure-blob-csi]: azure-blob-csi.md [persistent-volume-claim-overview]: concepts-storage.md#persistent-volume-claims
+[access-tier-file-share]: ../storage/files/storage-files-planning.md#storage-tiers
+[access-tier-storage-account]: ../storage/blobs/access-tiers-overview.md
+[azure-tags]: ../azure-resource-manager/management/tag-resources.md
[azure-disk-volume]: azure-disk-volume.md [azure-files-pvc]: azure-files-dynamic-pv.md [azure-files-pvc-manual]: azure-files-volume.md
aks Azure Files Dynamic Pv https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-dynamic-pv.md
- Title: Dynamically create Azure Files share-
-description: Learn how to dynamically create a persistent volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 05/31/2022--
-#Customer intent: As a developer, I want to learn how to dynamically create and attach storage using Azure Files to pods in AKS.
--
-# Dynamically create and use a persistent volume with Azure Files in Azure Kubernetes Service (AKS)
-
-A persistent volume represents a piece of storage that has been provisioned for use with Kubernetes pods. A persistent volume can be used by one or many pods, and can be dynamically or statically provisioned. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to dynamically create an Azure Files share for use by multiple pods in an Azure Kubernetes Service (AKS) cluster.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-You also need the Azure CLI version 2.0.59 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
-
-## Create a storage class
-
-A storage class is used to define how an Azure file share is created. A storage account is automatically created in the [node resource group][node-resource-group] for use with the storage class to hold the Azure file shares. Choose one of the following [Azure storage redundancy][storage-skus] options for *skuName*:
-
-* *Standard_LRS* - standard locally redundant storage (LRS)
-* *Standard_GRS* - standard geo-redundant storage (GRS)
-* *Standard_ZRS* - standard zone redundant storage (ZRS)
-* *Standard_RAGRS* - standard read-access geo-redundant storage (RA-GRS)
-* *Premium_LRS* - premium locally redundant storage (LRS)
-* *Premium_ZRS* - premium zone redundant storage (ZRS)
-
-> [!NOTE]
-> The minimum premium file share size is 100 GiB.
-
-For more information on Kubernetes storage classes for Azure Files, see [Kubernetes Storage Classes][kubernetes-storage-classes].
-
-Create a file named `azure-file-sc.yaml` and copy in the following example manifest. For more information on *mountOptions*, see the [Mount options][mount-options] section.
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: my-azurefile
-provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
-allowVolumeExpansion: true
-mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=0
- - gid=0
- - mfsymlinks
- - cache=strict
- - actimeo=30
-parameters:
- skuName: Premium_LRS
-```
-
-Create the storage class with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f azure-file-sc.yaml
-```
-
-## Create a persistent volume claim
-
-A persistent volume claim (PVC) uses the storage class object to dynamically provision an Azure file share. The following YAML can be used to create a persistent volume claim *100 GiB* in size with *ReadWriteMany* access. For more information on access modes, see the [Kubernetes persistent volume][access-modes] documentation.
-
-Now create a file named `azure-file-pvc.yaml` and copy in the following YAML. Make sure that the *storageClassName* matches the storage class created in the last step:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: my-azurefile
-spec:
- accessModes:
- - ReadWriteMany
- storageClassName: my-azurefile
- resources:
- requests:
- storage: 100Gi
-```
-
-> [!NOTE]
-> If using the *Premium_LRS* sku for your storage class, the minimum value for *storage* must be *100Gi*.
-
-Create the persistent volume claim with the [kubectl apply][kubectl-apply] command:
-
-```console
-kubectl apply -f azure-file-pvc.yaml
-```
-
-Once completed, the file share will be created. A Kubernetes secret is also created that includes connection information and credentials. You can use the [kubectl get][kubectl-get] command to view the status of the PVC:
-
-```console
-$ kubectl get pvc my-azurefile
-
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-my-azurefile Bound pvc-8436e62e-a0d9-11e5-8521-5a8664dc0477 100Gi RWX my-azurefile 5m
-```
-
-## Use the persistent volume
-
-The following YAML creates a pod that uses the persistent volume claim *my-azurefile* to mount the Azure file share at the */mnt/azure* path. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
-
-Create a file named `azure-pvc-files.yaml`, and copy in the following YAML. Make sure that the *claimName* matches the PVC created in the last step.
-
-```yaml
-kind: Pod
-apiVersion: v1
-metadata:
- name: mypod
-spec:
- containers:
- - name: mypod
- image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - mountPath: "/mnt/azure"
- name: volume
- volumes:
- - name: volume
- persistentVolumeClaim:
- claimName: my-azurefile
-```
-
-Create the pod with the [kubectl apply][kubectl-apply] command.
-
-```console
-kubectl apply -f azure-pvc-files.yaml
-```
-
-You now have a running pod with your Azure Files share mounted in the */mnt/azure* directory. This configuration can be seen when inspecting your pod via `kubectl describe pod mypod`. The following condensed example output shows the volume mounted in the container:
-
-```
-Containers:
- mypod:
- Container ID: docker://053bc9c0df72232d755aa040bfba8b533fa696b123876108dec400e364d2523e
- Image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- Image ID: docker-pullable://nginx@sha256:d85914d547a6c92faa39ce7058bd7529baacab7e0cd4255442b04577c4d1f424
- State: Running
- Started: Fri, 01 Mar 2019 23:56:16 +0000
- Ready: True
- Mounts:
- /mnt/azure from volume (rw)
- /var/run/secrets/kubernetes.io/serviceaccount from default-token-8rv4z (ro)
-[...]
-Volumes:
- volume:
- Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
- ClaimName: my-azurefile
- ReadOnly: false
-[...]
-```
-
-## Mount options
-
-The default value for *fileMode* and *dirMode* is *0777* for Kubernetes version 1.13.0 and above. If dynamically creating the persistent volume with a storage class, mount options can be specified on the storage class object. The following example sets *0777*:
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: my-azurefile
-provisioner: file.csi.azure.com # replace with "kubernetes.io/azure-file" if aks version is less than 1.21
-allowVolumeExpansion: true
-mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=0
- - gid=0
- - mfsymlinks
- - cache=strict
- - actimeo=30
-parameters:
- skuName: Premium_LRS
-```
-
-## Using Azure tags
-
-For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Service (AKS)][use-tags].
-
-## Next steps
-
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-For storage class parameters, see [Dynamic Provision](https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#dynamic-provision).
-
-Learn more about Kubernetes persistent volumes using Azure Files.
-
-> [!div class="nextstepaction"]
-> [Kubernetes plugin for Azure Files][kubernetes-files]
-
-<!-- LINKS - external -->
-[access-modes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes
-[kubectl-apply]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#apply
-[kubectl-describe]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#describe
-[kubectl-get]: https://kubernetes.io/docs/reference/generated/kubectl/kubectl-commands#get
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[kubernetes-storage-classes]: https://kubernetes.io/docs/concepts/storage/storage-classes/#azure-file
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
-[pv-static]: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#static
-[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
-
-<!-- LINKS - internal -->
-[az-group-create]: /cli/azure/group#az_group_create
-[az-group-list]: /cli/azure/group#az_group_list
-[az-resource-show]: /cli/azure/aks#az_aks_show
-[az-storage-account-create]: /cli/azure/storage/account#az_storage_account_create
-[az-storage-create]: /cli/azure/storage/account#az_storage_account_create
-[az-storage-key-list]: /cli/azure/storage/account/keys#az_storage_account_keys_list
-[az-storage-share-create]: /cli/azure/storage/share#az_storage_share_create
-[mount-options]: #mount-options
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[az-aks-show]: /cli/azure/aks#az_aks_show
-[storage-skus]: ../storage/common/storage-redundancy.md
-[kubernetes-rbac]: concepts-identity.md#role-based-access-controls-rbac
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[node-resource-group]: faq.md#why-are-two-resource-groups-created-with-aks
-[use-tags]: use-tags.md
aks Azure Files Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/azure-files-volume.md
- Title: Manually create Azure Files share-
-description: Learn how to manually create a volume with Azure Files for use with multiple concurrent pods in Azure Kubernetes Service (AKS)
-- Previously updated : 12/26/2022--
-#Customer intent: As a developer, I want to learn how to manually create and attach storage using Azure Files to a pod in AKS.
---
-# Manually create and use a volume with Azure Files share in Azure Kubernetes Service (AKS)
-
-Container-based applications often need to access and persist data in an external data volume. If multiple pods need concurrent access to the same storage volume, you can use Azure Files to connect using the [Server Message Block (SMB) protocol][smb-overview]. This article shows you how to manually create an Azure Files share and attach it to a pod in AKS.
-
-For more information on Kubernetes volumes, see [Storage options for applications in AKS][concepts-storage].
-
-## Before you begin
-
-This article assumes that you have an existing AKS cluster with 1.21 or later version. If you need an AKS cluster, see the AKS quickstart [using the Azure CLI][aks-quickstart-cli], [using Azure PowerShell][aks-quickstart-powershell], or [using the Azure portal][aks-quickstart-portal].
-
-If you want to interact with Azure Files on an AKS cluster with 1.20 or previous version, see the [Kubernetes plugin for Azure Files][kubernetes-files].
-
-## Create an Azure file share
-
-Before you can use Azure Files as a Kubernetes volume, you must create an Azure Storage account and the file share. The following commands create a resource group named *myAKSShare*, a storage account, and a Files share named *aksshare*:
-
-```azurecli-interactive
-# Change these four parameters as needed for your own environment
-AKS_PERS_STORAGE_ACCOUNT_NAME=mystorageaccount$RANDOM
-AKS_PERS_RESOURCE_GROUP=myAKSShare
-AKS_PERS_LOCATION=eastus
-AKS_PERS_SHARE_NAME=aksshare
-
-# Create a resource group
-az group create --name $AKS_PERS_RESOURCE_GROUP --location $AKS_PERS_LOCATION
-
-# Create a storage account
-az storage account create -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -l $AKS_PERS_LOCATION --sku Standard_LRS
-
-# Export the connection string as an environment variable; it's used when creating the Azure file share
-export AZURE_STORAGE_CONNECTION_STRING=$(az storage account show-connection-string -n $AKS_PERS_STORAGE_ACCOUNT_NAME -g $AKS_PERS_RESOURCE_GROUP -o tsv)
-
-# Create the file share
-az storage share create -n $AKS_PERS_SHARE_NAME --connection-string $AZURE_STORAGE_CONNECTION_STRING
-
-# Get storage account key
-STORAGE_KEY=$(az storage account keys list --resource-group $AKS_PERS_RESOURCE_GROUP --account-name $AKS_PERS_STORAGE_ACCOUNT_NAME --query "[0].value" -o tsv)
-
-# Echo storage account name and key
-echo Storage account name: $AKS_PERS_STORAGE_ACCOUNT_NAME
-echo Storage account key: $STORAGE_KEY
-```
-
-Make a note of the storage account name and key shown at the end of the script output. These values are needed when you create the Kubernetes volume in one of the following steps.
-
-## Create a Kubernetes secret
-
-Kubernetes needs credentials to access the file share created in the previous step. These credentials are stored in a [Kubernetes secret][kubernetes-secret], which is referenced when you create a Kubernetes pod.
-
-Use the `kubectl create secret` command to create the secret. The following example creates a secret named *azure-secret* and populates the *azurestorageaccountname* and *azurestorageaccountkey* from the previous step. To use an existing Azure storage account, provide the account name and key.
-
-```console
-kubectl create secret generic azure-secret --from-literal=azurestorageaccountname=$AKS_PERS_STORAGE_ACCOUNT_NAME --from-literal=azurestorageaccountkey=$STORAGE_KEY
-```
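-
-If you want to confirm the secret contains the expected values, you can decode one of its keys. This optional check is only a sketch; it assumes the *azure-secret* name used above:
-
-```console
-kubectl get secret azure-secret -o jsonpath='{.data.azurestorageaccountname}' | base64 --decode
-```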
-
-## Mount file share as an inline volume
-> [!NOTE]
-> An inline volume can only access secrets in the same namespace as the pod. To specify a different secret namespace, use the [persistent volume example][persistent-volume-example] below instead.
-
-To mount the Azure Files share into your pod, configure the volume in the container spec. Create a new file named `azure-files-pod.yaml` with the following contents. If you changed the name of the Files share or secret name, update the *shareName* and *secretName*. If desired, update the `mountPath`, which is the path where the Files share is mounted in the pod. For Windows Server containers, specify a *mountPath* using the Windows path convention, such as *'D:'*.
-
-```yaml
-apiVersion: v1
-kind: Pod
-metadata:
- name: mypod
-spec:
- nodeSelector:
- kubernetes.io/os: linux
- containers:
- - image: mcr.microsoft.com/oss/nginx/nginx:1.15.5-alpine
- name: mypod
- resources:
- requests:
- cpu: 100m
- memory: 128Mi
- limits:
- cpu: 250m
- memory: 256Mi
- volumeMounts:
- - name: azure
- mountPath: /mnt/azure
- volumes:
- - name: azure
- csi:
- driver: file.csi.azure.com
- readOnly: false
- volumeAttributes:
- secretName: azure-secret # required
- shareName: aksshare # required
- mountOptions: "dir_mode=0777,file_mode=0777,cache=strict,actimeo=30,nosharesock" # optional
-```
-
-Use the `kubectl` command to create the pod.
-
-```console
-kubectl apply -f azure-files-pod.yaml
-```
-
-You now have a running pod with an Azure Files share mounted at */mnt/azure*. You can use `kubectl describe pod mypod` to verify the share is mounted successfully.
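-
-For example, the following optional check (a sketch assuming the pod name above) prints the container's volume mounts:
-
-```console
-kubectl describe pod mypod | grep -A 3 "Mounts:"
-```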
-
-## Mount file share as a persistent volume
-> [!NOTE]
-> For SMB mounts, if the `nodeStageSecretRef` field isn't provided in the PV configuration, the Azure Files driver tries to get the `azure-storage-account-{accountname}-secret` secret in the pod namespace. If that secret doesn't exist, the driver gets the account key directly from the Azure Storage account API using the kubelet identity (make sure the kubelet identity has reader access to the storage account).
-> The default value for *fileMode* and *dirMode* is *0777*.
-
-Create a file named *azurefile-mount-options-pv.yaml* with a *PersistentVolume* that references the file share and the secret you created. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolume
-metadata:
- name: azurefile
-spec:
- capacity:
- storage: 5Gi
- accessModes:
- - ReadWriteMany
- persistentVolumeReclaimPolicy: Retain
- storageClassName: azurefile-csi
- csi:
- driver: file.csi.azure.com
- readOnly: false
- volumeHandle: unique-volumeid # make sure volumeid is unique for every identical share in the cluster
- volumeAttributes:
- resourceGroup: EXISTING_RESOURCE_GROUP_NAME # optional, only set this when storage account is not in the same resource group as agent node
- shareName: aksshare
- nodeStageSecretRef:
- name: azure-secret
- namespace: default
- mountOptions:
- - dir_mode=0777
- - file_mode=0777
- - uid=0
- - gid=0
- - mfsymlinks
- - cache=strict
- - nosharesock
- - nobrl
-```
-
-Create an *azurefile-mount-options-pvc.yaml* file with a *PersistentVolumeClaim* that uses the *PersistentVolume*. For example:
-
-```yaml
-apiVersion: v1
-kind: PersistentVolumeClaim
-metadata:
- name: azurefile
-spec:
- accessModes:
- - ReadWriteMany
- storageClassName: azurefile-csi
- volumeName: azurefile
- resources:
- requests:
- storage: 5Gi
-```
-
-Use the `kubectl` commands to create the *PersistentVolume* and *PersistentVolumeClaim*.
-
-```console
-kubectl apply -f azurefile-mount-options-pv.yaml
-kubectl apply -f azurefile-mount-options-pvc.yaml
-```
-
-Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume*.
-
-```console
-$ kubectl get pvc azurefile
-
-NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
-azurefile Bound azurefile 5Gi RWX azurefile 5s
-```
-
-Update your container spec to reference your *PersistentVolumeClaim* and update your pod. For example:
-
-```yaml
-...
- volumes:
- - name: azure
- persistentVolumeClaim:
- claimName: azurefile
-```
-
-Because the pod spec can't be updated in place, use `kubectl` commands to delete and then re-create the pod:
-
-```console
-kubectl delete pod mypod
-
-kubectl apply -f azure-files-pod.yaml
-```
-
-## Next steps
-
-For Azure File CSI driver parameters, see [CSI driver parameters][CSI driver parameters].
-
-For associated best practices, see [Best practices for storage and backups in AKS][operator-best-practices-storage].
-
-<!-- LINKS - external -->
-[kubectl-create]: https://kubernetes.io/docs/user-guide/kubectl/v1.8/#create
-[kubernetes-files]: https://github.com/kubernetes/examples/blob/master/staging/volumes/azure_file/README.md
-[kubernetes-secret]: https://kubernetes.io/docs/concepts/configuration/secret/
-[kubernetes-volumes]: https://kubernetes.io/docs/concepts/storage/volumes/
-[smb-overview]: /windows/desktop/FileIO/microsoft-smb-protocol-and-cifs-protocol-overview
-[kubernetes-security-context]: https://kubernetes.io/docs/tasks/configure-pod-container/security-context/
-[CSI driver parameters]: https://github.com/kubernetes-sigs/azurefile-csi-driver/blob/master/docs/driver-parameters.md#static-provisionbring-your-own-file-share
-
-<!-- LINKS - internal -->
-[aks-quickstart-cli]: ./learn/quick-kubernetes-deploy-cli.md
-[aks-quickstart-portal]: ./learn/quick-kubernetes-deploy-portal.md
-[aks-quickstart-powershell]: ./learn/quick-kubernetes-deploy-powershell.md
-[install-azure-cli]: /cli/azure/install-azure-cli
-[operator-best-practices-storage]: operator-best-practices-storage.md
-[concepts-storage]: concepts-storage.md
-[persistent-volume-example]: #mount-file-share-as-a-persistent-volume
-[use-tags]: use-tags.md
aks Certificate Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/certificate-rotation.md
Title: Certificate Rotation in Azure Kubernetes Service (AKS)
description: Learn certificate rotation in an Azure Kubernetes Service (AKS) cluster. Previously updated : 09/12/2022 Last updated : 01/19/2023 # Certificate rotation in Azure Kubernetes Service (AKS)
This article shows you how certificate rotation works in your AKS cluster.
This article requires that you are running the Azure CLI version 2.0.77 or later. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][azure-cli-install].
+## Limitation
+
+Certificate rotation is not supported for stopped AKS clusters.
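+
+As an optional pre-check, confirm the cluster's power state before rotating certificates. This is a sketch; it assumes the resource group *myResourceGroup* and cluster name *myAKSCluster*:
+
+```azurecli
+# Returns "Running" or "Stopped"
+az aks show --resource-group myResourceGroup --name myAKSCluster --query powerState.code --output tsv
+```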
+ ## AKS certificates, Certificate Authorities, and Service Accounts AKS generates and uses the following certificates, Certificate Authorities, and Service Accounts:
aks Concepts Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/concepts-storage.md
Title: Concepts - Storage in Azure Kubernetes Services (AKS) description: Learn about storage in Azure Kubernetes Service (AKS), including volumes, persistent volumes, storage classes, and claims- Previously updated : 08/10/2022 Last updated : 01/18/2023
Kubernetes typically treats individual pods as ephemeral, disposable resources.
Traditional volumes are created as Kubernetes resources backed by Azure Storage. You can manually create data volumes to be assigned to pods directly, or have Kubernetes automatically create them. Data volumes can use: [Azure Disks][disks-types], [Azure Files][storage-files-planning], [Azure NetApp Files][azure-netapp-files-service-levels], or [Azure Blobs][storage-account-overview]. > [!NOTE]
-> The Azure Disks CSI driver has a limit of 32 volumes per node. Other Azure Storage services don't have an equivalent limit.
+> Depending on the VM SKU that's being used, the Azure Disks CSI driver might have a per-node volume limit. For some powerful VMs (for example, 16 cores), the limit is 64 volumes per node. To identify the limit per VM SKU, review the **Max data disks** column for each VM SKU offered. For a list of VM SKUs offered and their corresponding detailed capacity limits, see [General purpose virtual machine sizes][general-purpose-machine-sizes].
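+
+For example, you can look up the data disk limit for a specific VM size with the Azure CLI; the location and VM size here are illustrative values:
+
+```azurecli
+az vm list-sizes --location eastus --query "[?name=='Standard_D16s_v3'].maxDataDiskCount" --output tsv
+```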
### Azure Disks
For clusters using the [Container Storage Interface (CSI) drivers][csi-storage-d
| `azureblob-nfs-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using the NFS v3 protocol. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. | | `azureblob-fuse-premium` | Uses Azure Premium storage to create an Azure Blob storage container and connect using BlobFuse. The reclaim policy ensures that the underlying Azure Blob storage container is deleted when the persistent volume that used it is deleted. |
-Unless you specify a StorageClass for a persistent volume, the default StorageClass will be used. Ensure volumes use the appropriate storage you need when requesting persistent volumes.
+Unless you specify a StorageClass for a persistent volume, the default StorageClass will be used. Ensure volumes use the appropriate storage you need when requesting persistent volumes.
> [!IMPORTANT]
-> Starting in Kubernetes version 1.21, AKS will use CSI drivers only and by default. The `default` class will be the same as `managed-csi`
+> Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. While existing in-tree persistent volumes continue to function, starting with version 1.26, AKS will no longer support volumes created using the in-tree driver or storage provisioned for Azure Files and Azure Disks.
+>
+> The `default` class will be the same as `managed-csi`.
You can create a StorageClass for additional needs using `kubectl`. The following example uses Premium Managed Disks and specifies that the underlying Azure Disk should be *retained* when you delete the pod:
For more information on core Kubernetes and AKS concepts, see the following arti
[operator-best-practices-storage]: operator-best-practices-storage.md [csi-storage-drivers]: csi-storage-drivers.md [azure-blob-csi]: azure-blob-csi.md
+[general-purpose-machine-sizes]: ../virtual-machines/sizes-general.md
aks Configure Azure Cni Dynamic Ip Allocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni-dynamic-ip-allocation.md
+
+ Title: Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
+description: Learn how to configure Azure CNI (advanced) networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
++ Last updated : 01/09/2023+++
+# Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)
+
+A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, which results in the need to rebuild your entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allocating pod IPs from a subnet separate from the subnet hosting the AKS cluster.
+
+It offers the following benefits:
+
+* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node.
+* **Scalable and flexible**: Node and pod subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same VNet. You can also configure a separate pod subnet for a node pool.
+* **High performance**: Since pods are assigned VNet IPs, they have direct connectivity to other cluster pods and resources in the VNet. The solution supports very large clusters without any degradation in performance.
+* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios, such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pods in a node pool using a VNet network NAT, and using NSGs to filter traffic between node pools.
+* **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this new solution.
+
+This article shows you how to use Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS.
+
+## Prerequisites
+
+> [!NOTE]
+> When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service isn't supported.
+
+* Review the [prerequisites](configure-azure-cni.md#prerequisites) for configuring basic Azure CNI networking in AKS, as the same prerequisites apply to this article.
+* Review the [deployment parameters](configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS, as the same parameters apply.
+* Only Linux node clusters and node pools are supported.
+* AKS Engine and DIY clusters aren't supported.
+* Azure CLI version `2.37.0` or later.
+
+## Plan IP addressing
+
+Planning your IP addressing is much simpler with this feature. Since the nodes and pods scale independently, their address spaces can also be planned separately. Since pod subnets can be configured to the granularity of a node pool, you can always add a new subnet when you add a node pool. The system pods in a cluster/node pool also receive IPs from the pod subnet, so this behavior needs to be accounted for.
+
+IPs are allocated to nodes in batches of 16. Pod subnet IP allocation should be planned with a minimum of 16 IPs per node in the cluster; nodes request 16 IPs at startup and request another batch of 16 whenever fewer than 8 IPs in their allotment remain unallocated.
+
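+As a rough worked example (illustrative numbers only, not official sizing guidance), you can estimate the minimum pod subnet size for a node pool:
+
+```bash
+# 50 nodes x 16 IPs per node = minimum pod IPs to plan for
+nodes=50
+echo $((nodes * 16))   # 800 -> a /22 pod subnet (~1019 usable IPs) leaves headroom
+```
+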
+The planning of IPs for Kubernetes services and the Docker bridge remains unchanged.
+
+## Maximum pods per node in a cluster with dynamic allocation of IPs and enhanced subnet support
+
+The pods per node values when using Azure CNI with dynamic allocation of IPs slightly differ from the traditional CNI behavior:
+
+|CNI|Default|Configurable at deployment|
+|--| :--: |--|
+|Traditional Azure CNI|30|Yes (up to 250)|
+|Azure CNI with dynamic allocation of IPs|250|Yes (up to 250)|
+
+All other guidance related to configuring the maximum pods per node remains the same.
+
+## Deployment parameters
+
+The [deployment parameters](configure-azure-cni.md#deployment-parameters) for configuring basic Azure CNI networking in AKS are all valid, with two exceptions:
+
+* The **subnet** parameter now refers to the subnet related to the cluster's nodes.
+* An additional parameter **pod subnet** is used to specify the subnet whose IP addresses will be dynamically allocated to pods.
+
+## Configure networking with dynamic allocation of IPs and enhanced subnet support - Azure CLI
+
+Using dynamic allocation of IPs and enhanced subnet support in your cluster is similar to the default method for configuring a cluster with Azure CNI. The following example walks through creating a new virtual network with a subnet for nodes and a subnet for pods, and creating a cluster that uses Azure CNI with dynamic allocation of IPs and enhanced subnet support. Be sure to replace variables such as `$subscription` with your own values.
+
+Create the virtual network with two subnets.
+
+```azurecli-interactive
+resourceGroup="myResourceGroup"
+vnet="myVirtualNetwork"
+location="westcentralus"
+
+# Create the resource group
+az group create --name $resourceGroup --location $location
+
+# Create our two subnet network
+az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
+```
+
+Create the cluster, referencing the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`.
+
+```azurecli-interactive
+clusterName="myAKSCluster"
+subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
+
+az aks create -n $clusterName -g $resourceGroup -l $location \
+ --max-pods 250 \
+ --node-count 2 \
+ --network-plugin azure \
+ --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
+ --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet
+```
+
+### Add a node pool
+
+When adding a node pool, reference the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`. The following example creates two new subnets that are then referenced in the creation of a new node pool:
+
+```azurecli-interactive
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name node2subnet --address-prefixes 10.242.0.0/16 -o none
+az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name pod2subnet --address-prefixes 10.243.0.0/16 -o none
+
+az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
+ --max-pods 250 \
+ --node-count 2 \
+ --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/node2subnet \
+ --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/pod2subnet \
+ --no-wait
+```
+
+## Dynamic allocation of IP addresses and enhanced subnet support FAQs
+
+* **Can I assign multiple pod subnets to a cluster/node pool?**
+
+ Only one subnet can be assigned to a cluster or node pool. However, multiple clusters or node pools can share a single subnet.
+
+* **Can I assign Pod subnets from a different VNet altogether?**
+
+ No, the pod subnet should be from the same VNet as the cluster.
+
+* **Can some node pools in a cluster use the traditional CNI while others use the new CNI?**
+
+ The entire cluster should use only one type of CNI.
+
+## Next steps
+
+Learn more about networking in AKS in the following articles:
+
+* [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
+* [Use an internal load balancer with Azure Kubernetes Service (AKS)](internal-lb.md)
+
+* [Create a basic ingress controller with external network connectivity][aks-ingress-basic]
+* [Enable the HTTP application routing add-on][aks-http-app-routing]
+* [Create an ingress controller that uses an internal, private network and IP address][aks-ingress-internal]
+* [Create an ingress controller with a dynamic public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-tls]
+* [Create an ingress controller with a static public IP and configure Let's Encrypt to automatically generate TLS certificates][aks-ingress-static-tls]
+
+<!-- LINKS - Internal -->
+[aks-ingress-basic]: ingress-basic.md
+[aks-ingress-tls]: ingress-tls.md
+[aks-ingress-static-tls]: ingress-static-ip.md
+[aks-http-app-routing]: http-application-routing.md
+[aks-ingress-internal]: ingress-internal-ip.md
aks Configure Azure Cni https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/configure-azure-cni.md
# Configure Azure CNI networking in Azure Kubernetes Service (AKS)
-By default, AKS clusters use [kubenet][kubenet], and a virtual network and subnet are created for you. With *kubenet*, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.
+By default, AKS clusters use [kubenet][kubenet] and create a virtual network and subnet. With *kubenet*, nodes get an IP address from a virtual network subnet. Network address translation (NAT) is then configured on the nodes, and pods receive an IP address "hidden" behind the node IP. This approach reduces the number of IP addresses that you need to reserve in your network space for pods to use.
-With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space, and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
+With [Azure Container Networking Interface (CNI)][cni-networking], every pod gets an IP address from the subnet and can be accessed directly. These IP addresses must be unique across your network space and must be planned in advance. Each node has a configuration parameter for the maximum number of pods that it supports. The equivalent number of IP addresses per node are then reserved up front for that node. This approach requires more planning, and often leads to IP address exhaustion or the need to rebuild clusters in a larger subnet as your application demands grow.
-This article shows you how to use *Azure CNI* networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
+This article shows you how to use Azure CNI networking to create and use a virtual network subnet for an AKS cluster. For more information on network options and considerations, see [Network concepts for Kubernetes and AKS][aks-network-concepts].
## Prerequisites
The following screenshot from the Azure portal shows an example of configuring t
:::image type="content" source="../aks/media/networking-overview/portal-01-networking-advanced.png" alt-text="Screenshot from the Azure portal showing an example of configuring these settings during AKS cluster creation.":::
-## Dynamic allocation of IPs and enhanced subnet support
-
-A drawback with the traditional CNI is the exhaustion of pod IP addresses as the AKS cluster grows, resulting in the need to rebuild the entire cluster in a bigger subnet. The new dynamic IP allocation capability in Azure CNI solves this problem by allocating pod IPs from a subnet separate from the subnet hosting the AKS cluster. It offers the following benefits:
-
-* **Better IP utilization**: IPs are dynamically allocated to cluster Pods from the Pod subnet. This leads to better utilization of IPs in the cluster compared to the traditional CNI solution, which does static allocation of IPs for every node.
-
-* **Scalable and flexible**: Node and pod subnets can be scaled independently. A single pod subnet can be shared across multiple node pools of a cluster or across multiple AKS clusters deployed in the same VNet. You can also configure a separate pod subnet for a node pool.
-
-* **High performance**: Since pod are assigned VNet IPs, they have direct connectivity to other cluster pod and resources in the VNet. The solution supports very large clusters without any degradation in performance.
-
-* **Separate VNet policies for pods**: Since pods have a separate subnet, you can configure separate VNet policies for them that are different from node policies. This enables many useful scenarios such as allowing internet connectivity only for pods and not for nodes, fixing the source IP for pod in a node pool using a VNet Network NAT, and using NSGs to filter traffic between node pools.
-
-* **Kubernetes network policies**: Both the Azure Network Policies and Calico work with this new solution.
-
-### Additional prerequisites
-
-> [!NOTE]
-> When using dynamic allocation of IPs, exposing an application as a Private Link Service using a Kubernetes Load Balancer Service is not supported.
-
-The [prerequisites][prerequisites] already listed for Azure CNI still apply, but there are a few additional limitations:
-
-* AKS Engine and DIY clusters are not supported.
-* Azure CLI version `2.37.0` or later.
-
-### Planning IP addressing
-
-When using this feature, planning is much simpler. Since the nodes and pods scale independently, their address spaces can also be planned separately. Since pod subnets can be configured to the granularity of a node pool, customers can always add a new subnet when they add a node pool. The system pods in a cluster/node pool also receive IPs from the pod subnet, so this behavior needs to be accounted for.
-
-IPs are allocated to nodes in batches of 16. Pod subnet IP allocation should be planned with a minimum of 16 IPs per node in the cluster; nodes will request 16 IPs on startup and will request another batch of 16 any time there are <8 IPs unallocated in their allotment.
-
-The planning of IPs for Kubernetes services and Docker bridge remain unchanged.
-
-### Maximum pods per node in a cluster with dynamic allocation of IPs and enhanced subnet support
-
-The pods per node values when using Azure CNI with dynamic allocation of IPs have changed slightly from the traditional CNI behavior:
-
-|CNI|Default|Configurable at deployment|
-|--| :--: |--|
-|Traditional Azure CNI|30|Yes (up to 250)|
-|Azure CNI with dynamic allocation of IPs|250|Yes (up to 250)|
-
-All other guidance related to configuring the maximum pods per node remains the same.
-
-### Additional deployment parameters
-
-The deployment parameters described above are all still valid, with one exception:
-
-* The **subnet** parameter now refers to the subnet related to the cluster's nodes.
-* An additional parameter **pod subnet** is used to specify the subnet whose IP addresses will be dynamically allocated to pods.
-
-### Configure networking - CLI with dynamic allocation of IPs and enhanced subnet support
-
-Using dynamic allocation of IPs and enhanced subnet support in your cluster is similar to the default method for configuring a cluster Azure CNI. The following example walks through creating a new virtual network with a subnet for nodes and a subnet for pods, and creating a cluster that uses Azure CNI with dynamic allocation of IPs and enhanced subnet support. Be sure to replace variables such as `$subscription` with your own values:
-
-First, create the virtual network with two subnets:
-
-```azurecli-interactive
-resourceGroup="myResourceGroup"
-vnet="myVirtualNetwork"
-location="westcentralus"
-
-# Create the resource group
-az group create --name $resourceGroup --location $location
-
-# Create our two subnet network
-az network vnet create -g $resourceGroup --location $location --name $vnet --address-prefixes 10.0.0.0/8 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name nodesubnet --address-prefixes 10.240.0.0/16 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name podsubnet --address-prefixes 10.241.0.0/16 -o none
-```
-
-Then, create the cluster, referencing the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`:
-
-```azurecli-interactive
-clusterName="myAKSCluster"
-subscription="aaaaaaa-aaaaa-aaaaaa-aaaa"
-
-az aks create -n $clusterName -g $resourceGroup -l $location \
- --max-pods 250 \
- --node-count 2 \
- --network-plugin azure \
- --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/nodesubnet \
- --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/podsubnet
-```
-
-#### Adding node pool
-
-When adding node pool, reference the node subnet using `--vnet-subnet-id` and the pod subnet using `--pod-subnet-id`. The following example creates two new subnets that are then referenced in the creation of a new node pool:
-
-```azurecli-interactive
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name node2subnet --address-prefixes 10.242.0.0/16 -o none
-az network vnet subnet create -g $resourceGroup --vnet-name $vnet --name pod2subnet --address-prefixes 10.243.0.0/16 -o none
-
-az aks nodepool add --cluster-name $clusterName -g $resourceGroup -n newnodepool \
- --max-pods 250 \
- --node-count 2 \
- --vnet-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/node2subnet \
- --pod-subnet-id /subscriptions/$subscription/resourceGroups/$resourceGroup/providers/Microsoft.Network/virtualNetworks/$vnet/subnets/pod2subnet \
- --no-wait
-```
-## Monitor IP subnet usage
+## Monitor IP subnet usage
Azure CNI provides the capability to monitor IP subnet usage. To enable IP subnet usage monitoring, follow the steps below: ### Get the YAML file
-1. Download or grep the file named container-azm-ms-agentconfig.yaml from [GitHub][github].
-2. Find azure_subnet_ip_usage in integrations. Set `enabled` to `true`.
-3. Save the file.
+
+1. Download the file named *container-azm-ms-agentconfig.yaml* from [GitHub][github].
+2. Find `azure_subnet_ip_usage` in `integrations`, and set `enabled` to `true`.
+3. Save the file.
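+
+After you retrieve the cluster credentials in the next step, apply the edited file to the cluster. A minimal sketch, assuming the file is saved in your working directory:
+
+```bash
+kubectl apply -f container-azm-ms-agentconfig.yaml
+```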
### Get the AKS credentials
Set the variables for subscription, resource group and cluster. Consider the fol
## Frequently asked questions
-The following questions and answers apply to the **Azure CNI** networking configuration.
-
-* *Can I deploy VMs in my cluster subnet?*
+* **Can I deploy VMs in my cluster subnet?**
Yes.
-* *What source IP do external systems see for traffic that originates in an Azure CNI-enabled pod?*
+* **What source IP do external systems see for traffic that originates in an Azure CNI-enabled pod?**
Systems in the same virtual network as the AKS cluster see the pod IP as the source address for any traffic from the pod. Systems outside the AKS cluster virtual network see the node IP as the source address for any traffic from the pod.
-* *Can I configure per-pod network policies?*
+* **Can I configure per-pod network policies?**
Yes, Kubernetes network policy is available in AKS. To get started, see [Secure traffic between pods by using network policies in AKS][network-policy].
-* *Is the maximum number of pods deployable to a node configurable?*
+* **Is the maximum number of pods deployable to a node configurable?**
Yes, when you deploy a cluster with the Azure CLI or a Resource Manager template. See [Maximum pods per node](#maximum-pods-per-node). You can't change the maximum number of pods per node on an existing cluster.
-* *How do I configure additional properties for the subnet that I created during AKS cluster creation? For example, service endpoints.*
+* **How do I configure additional properties for the subnet that I created during AKS cluster creation? For example, service endpoints.**
The complete list of properties for the virtual network and subnets that you create during AKS cluster creation can be configured in the standard virtual network configuration page in the Azure portal.
-* *Can I use a different subnet within my cluster virtual network for the* **Kubernetes service address range**?
+* **Can I use a different subnet within my cluster virtual network for the *Kubernetes service address range*?**
It's not recommended, but this configuration is possible. The service address range is a set of virtual IPs (VIPs) that Kubernetes assigns to internal services in your cluster. Azure Networking has no visibility into the service IP range of the Kubernetes cluster. Because of the lack of visibility into the cluster's service address range, it's possible to later create a new subnet in the cluster virtual network that overlaps with the service address range. If such an overlap occurs, Kubernetes could assign a service an IP that's already in use by another resource in the subnet, causing unpredictable behavior or failures. By ensuring you use an address range outside the cluster's virtual network, you can avoid this overlap risk.
-### Dynamic allocation of IP addresses and enhanced subnet support FAQs
-
-The following questions and answers apply to the **Azure CNI network configuration when using Dynamic allocation of IP addresses and enhanced subnet support**.
-
-* *Can I assign multiple pod subnets to a cluster/node pool?*
-
- Only one subnet can be assigned to a cluster or node pool. However, multiple clusters or node pools can share a single subnet.
-
-* *Can I assign Pod subnets from a different VNet altogether?*
-
- No, the pod subnet should be from the same VNet as the cluster.
-
-* *Can some node pools in a cluster use the traditional CNI while others use the new CNI?*
-
- The entire cluster should use only one type of CNI.
- ## Next steps
+To configure Azure CNI networking with dynamic IP allocation and enhanced subnet support, see [Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS](configure-azure-cni-dynamic-ip-allocation.md).
+ Learn more about networking in AKS in the following articles: * [Use a static IP address with the Azure Kubernetes Service (AKS) load balancer](static-ip.md)
aks Csi Migrate In Tree Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-migrate-in-tree-volumes.md
+
+ Title: Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS)
+description: Learn how to migrate from in-tree persistent volume to the Container Storage Interface (CSI) driver in an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 01/18/2023++++
+# Migrate from in-tree storage class to CSI drivers on Azure Kubernetes Service (AKS)
+
+The implementation of the [Container Storage Interface (CSI) driver][csi-driver-overview] was introduced in Azure Kubernetes Service (AKS) starting with version 1.21. By adopting and using CSI as the standard, your existing stateful workloads using in-tree Persistent Volumes (PVs) should be migrated or upgraded to use the CSI driver.
+
+To make this process as simple as possible, and to ensure no data loss, this article provides different migration options. These options include scripts to help ensure a smooth migration from in-tree to Azure Disks and Azure Files CSI drivers.
+
+## Before you begin
+
+* The Azure CLI version 2.37.0 or later. Run `az --version` to find the version, and run `az upgrade` to upgrade the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli].
+* Kubectl and cluster administrators need access to create, get, list, and delete a PVC or PV, volume snapshot, or volume snapshot content. For an Azure Active Directory (Azure AD) RBAC enabled cluster, you're a member of the [Azure Kubernetes Service RBAC Cluster Admin][aks-rbac-cluster-admin-role] role.
+
+## Migrate Disk volumes
+
+Migration from in-tree to CSI is supported using two migration options:
+
+* Create a static volume
+* Create a dynamic volume
+
+### Create a static volume
+
+Using this option, you create a PV by statically assigning `claimRef` to a new PVC that you'll create later, and specifying the `volumeName` for the *PersistentVolumeClaim*.
++
+The benefits of this approach are:
+
+* It's simple and can be automated.
+* No need to clean up original configuration using in-tree storage class.
+* Low risk, as you're only performing a logical deletion of the Kubernetes PV and PVC; the actual physical data isn't deleted.
+* No extra cost, because you don't need to create additional objects such as disks and snapshots.
+
+The following are important considerations to evaluate:
+
+* Transitioning to static volumes from the original dynamically provisioned volumes requires constructing and managing PV objects manually for every volume.
+* Potential application downtime when redeploying the new application with reference to the new PVC object.
+
+#### Migration
+
+1. Update the existing PV `ReclaimPolicy` from **Delete** to **Retain** by running the following command:
+
+ ```bash
+ kubectl patch pv pvName -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ ```
+
+ Replace **pvName** with the name of your selected PersistentVolume. Alternatively, if you want to update the reclaimPolicy for multiple PVs, create a file named **patchReclaimPVs.sh** and copy in the following code.
+
+ ```bash
+    #!/bin/sh
+    # Patch the Persistent Volume in case ReclaimPolicy is Delete
+ namespace=$1
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ echo "Reclaim Policy for Persistent Volume $pv is $reclaimPolicy"
+ if [[ $reclaimPolicy == "Delete" ]]; then
+ echo "Updating ReclaimPolicy for $pv to Retain"
+ kubectl patch pv $pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ fi
+ fi
+ done
+ ```
+
+    Execute the script with the `namespace` parameter to specify the cluster namespace: `./patchReclaimPVs.sh <namespace>`.
+
+2. Get a list of all the PVCs in the namespace sorted by **creationTimestamp** by running the following command. Set the namespace using the `--namespace` argument along with the actual cluster namespace.
+
+ ```bash
+ kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ ```
+
+    This step is helpful if you have a large number of PVs to migrate and want to migrate a few at a time. Running this command enables you to identify which PVCs were created in a given time frame. When you run the *CreatePV.sh* script, two of its parameters, a start time and an end time, let you migrate only the PVCs created during that period.
+
+3. Create a file named **CreatePV.sh** and copy in the following code. For every PersistentVolume in the namespace that uses the storage class you specify, the script:
+
+    * Creates a new PersistentVolume named `<existing-pv-name>-csi`.
+    * Creates a new PVC named `<existing-pvc-name>-csi` that's bound to the new PV through its `volumeName`.
+
+    You then update your application (Deployment or StatefulSet) to reference the new PVC, as described in the final step.
+
+ ```bash
+ #!/bin/sh
+ #kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ # TimeFormat 2022-04-20T13:19:56Z
+ namespace=$1
+ fileName=$(date +%Y%m%d%H%M)-$namespace
+ existingStorageClass=$2
+ storageClassNew=$3
+ starttimestamp=$4
+ endtimestamp=$5
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pvcCreationTime=$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.metadata.creationTimestamp}')
+ if [[ $pvcCreationTime > $starttimestamp ]]; then
+ if [[ $endtimestamp > $pvcCreationTime ]]; then
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ storageClass="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.storageClassName}')"
+ echo $pvc
+ if [[ $reclaimPolicy == "Retain" ]]; then
+ if [[ $storageClass == $existingStorageClass ]]; then
+ storageSize="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.capacity.storage}')"
+              # Read the disk SKU from the storage class parameters (assumes the in-tree class sets storageaccounttype)
+              skuName="$(kubectl get storageclass $storageClass -o jsonpath='{.parameters.storageaccounttype}')"
+ diskURI="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.azureDisk.diskURI}')"
+ persistentVolumeReclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+
+ cat >$pvc-csi.yaml <<EOF
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ annotations:
+ pv.kubernetes.io/provisioned-by: disk.csi.azure.com
+ name: $pv-csi
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ capacity:
+ storage: $storageSize
+ claimRef:
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ name: $pvc-csi
+ namespace: $namespace
+ csi:
+ driver: disk.csi.azure.com
+ volumeAttributes:
+ csi.storage.k8s.io/pv/name: $pv-csi
+ csi.storage.k8s.io/pvc/name: $pvc-csi
+ csi.storage.k8s.io/pvc/namespace: $namespace
+ requestedsizegib: "$storageSize"
+ skuname: $skuName
+ volumeHandle: $diskURI
+ persistentVolumeReclaimPolicy: $persistentVolumeReclaimPolicy
+ storageClassName: $storageClassNew
+    ---
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: $pvc-csi
+ namespace: $namespace
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: $storageClassNew
+ resources:
+ requests:
+ storage: $storageSize
+ volumeName: $pv-csi
+ EOF
+ kubectl apply -f $pvc-csi.yaml
+ line="PVC:$pvc,PV:$pv,StorageClassTarget:$storageClassNew"
+ printf '%s\n' "$line" >>$fileName
+ fi
+ fi
+ fi
+ fi
+ fi
+ done
+ ```
+
+4. To create a new PersistentVolume for all PersistentVolumes in the namespace, execute the script **CreatePV.sh** with the following parameters:
+
+ * `namespace` - The cluster namespace
+ * `sourceStorageClass` - The in-tree storage driver-based StorageClass
+    * `targetCSIStorageClass` - The CSI storage driver-based StorageClass. This can be one of the default storage classes with the provisioner set to **disk.csi.azure.com** or **file.csi.azure.com**, or a custom storage class that uses either of those two provisioners.
+ * `startTimeStamp` - Provide a start time in the format **yyyy-mm-ddthh:mm:ssz**.
+ * `endTimeStamp` - Provide an end time in the format **yyyy-mm-ddthh:mm:ssz**.
+
+ ```bash
+ ./CreatePV.sh <namespace> <sourceIntreeStorageClass> <targetCSIStorageClass> <startTimestamp> <endTimestamp>
+ ```
+
+5. Update your application to use the new PVC.
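+
+    For example, the workload's volume definition would point at the migrated claim. This is only a sketch; it assumes the `<original-pvc-name>-csi` naming used by the script above (shown here with the generic `existing-pvc-csi` name):
+
+    ```yml
+    volumes:
+      - name: data                      # volume name is illustrative
+        persistentVolumeClaim:
+          claimName: existing-pvc-csi   # the PVC created by CreatePV.sh
+    ```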
+
+### Create a dynamic volume
+
+Using this option, you dynamically create a Persistent Volume from a Persistent Volume Claim.
++
+The benefits of this approach are:
+
+* It's less risky because all new objects are created while retaining other copies with snapshots.
+
+* No need to construct PVs separately and add volume name in PVC manifest.
+
+The following are important considerations to evaluate:
+
+* While this approach is less risky, it does create multiple objects that will increase your storage costs.
+
+* During creation of the new volume(s), your application is unavailable.
+
+* Deletion steps should be performed with caution. Temporary [resource locks][azure-resource-locks] can be applied to your resource group until migration is completed and your application is successfully verified.
+
+* Perform data validation/verification as new disks are created from snapshots.
+
+#### Migration
+
+Before proceeding, verify the following:
+
+* For specific workloads where data is written to memory before being written to disk, the application should be stopped to allow in-memory data to be flushed to disk.
+* A `VolumeSnapshotClass` should exist, as shown in the following example YAML:
+
+ ```yml
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshotClass
+ metadata:
+ name: custom-disk-snapshot-sc
+ driver: disk.csi.azure.com
+ deletionPolicy: Delete
+ parameters:
+ incremental: "false"
+ ```
+
+1. Get a list of all the PVCs in a specified namespace sorted by *creationTimestamp* by running the following command. Set the namespace using the `--namespace` argument along with the actual cluster namespace.
+
+ ```bash
+ kubectl get pvc --namespace <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ ```
+
+    This step is helpful if you have a large number of PVs to migrate and want to migrate a few at a time. Running this command enables you to identify which PVCs were created in a given time frame. When you run the *MigrateToCSI.sh* script, two of its parameters, a start time and an end time, let you migrate only the PVCs created during that period.
+
+2. Create a file named **MigrateToCSI.sh** and copy in the following code. The script does the following:
+
+ * Creates a full disk snapshot using the Azure CLI
+ * Creates `VolumesnapshotContent`
+ * Creates `VolumeSnapshot`
+ * Creates a new PVC from `VolumeSnapshot`
+    * Creates a new file named `<namespace>-timestamp`, which contains a list of all the old resources that need to be cleaned up.
+
+ ```bash
+ #!/bin/sh
+ #kubectl get pvc -n <namespace> --sort-by=.metadata.creationTimestamp -o custom-columns=NAME:.metadata.name,CreationTime:.metadata.creationTimestamp,StorageClass:.spec.storageClassName,Size:.spec.resources.requests.storage
+ # TimeFormat 2022-04-20T13:19:56Z
+ namespace=$1
+ fileName=$namespace-$(date +%Y%m%d%H%M)
+ existingStorageClass=$2
+ storageClassNew=$3
+ volumestorageClass=$4
+ starttimestamp=$5
+ endtimestamp=$6
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pvcCreationTime=$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.metadata.creationTimestamp}')
+ if [[ $pvcCreationTime > $starttimestamp ]]; then
+ if [[ $endtimestamp > $pvcCreationTime ]]; then
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ storageClass="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.storageClassName}')"
+ echo $pvc
+ if [[ $storageClass == $existingStorageClass ]]; then
+ storageSize="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.capacity.storage}')"
+          # Read the disk SKU from the storage class parameters (assumes the in-tree class sets storageaccounttype)
+          skuName="$(kubectl get storageclass $storageClass -o jsonpath='{.parameters.storageaccounttype}')"
+ diskURI="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.azureDisk.diskURI}')"
+ targetResourceGroup="$(cut -d'/' -f5 <<<"$diskURI")"
+ echo $diskURI
+ echo $targetResourceGroup
+ persistentVolumeReclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ az snapshot create --resource-group $targetResourceGroup --name $pvc-$fileName --source "$diskURI"
+ snapshotPath=$(az snapshot list --resource-group $targetResourceGroup --query "[?name == '$pvc-$fileName'].id | [0]")
+ snapshotHandle=$(echo "$snapshotPath" | tr -d '"')
+ echo $snapshotHandle
+ sleep 10
+ # Create Restore File
+ cat <<EOF >$pvc-csi.yml
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshotContent
+ metadata:
+ name: $pvc-$fileName
+ spec:
+ deletionPolicy: 'Delete'
+ driver: 'disk.csi.azure.com'
+ volumeSnapshotClassName: $volumestorageClass
+ source:
+ snapshotHandle: $snapshotHandle
+ volumeSnapshotRef:
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshot
+ name: $pvc-$fileName
+ namespace: $1
+    ---
+ apiVersion: snapshot.storage.k8s.io/v1
+ kind: VolumeSnapshot
+ metadata:
+ name: $pvc-$fileName
+ namespace: $1
+ spec:
+ volumeSnapshotClassName: $volumestorageClass
+ source:
+ volumeSnapshotContentName: $pvc-$fileName
+    ---
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: csi-$pvc
+ namespace: $1
+ spec:
+ accessModes:
+ - ReadWriteOnce
+ storageClassName: $storageClassNew
+ resources:
+ requests:
+ storage: $storageSize
+ dataSource:
+ name: $pvc-$fileName
+ kind: VolumeSnapshot
+ apiGroup: snapshot.storage.k8s.io
+
+ EOF
+ kubectl create -f $pvc-csi.yml
+ line="OLDPVC:$pvc,OLDPV:$pv,VolumeSnapshotContent:volumeSnapshotContent-$fileName,VolumeSnapshot:volumesnapshot$fileName,OLDdisk:$diskURI"
+ printf '%s\n' "$line" >>$fileName
+ fi
+ fi
+ fi
+ fi
+ done
+ ```
+
+3. To migrate the disk volumes, execute the script **MigrateToCSI.sh** with the following parameters:
+
+ * `namespace` - The cluster namespace
+ * `sourceStorageClass` - The in-tree storage driver-based StorageClass
+ * `targetCSIStorageClass` - The CSI storage driver-based StorageClass
+ * `volumeSnapshotClass` - Name of the volume snapshot class. For example, `custom-disk-snapshot-sc`.
+ * `startTimeStamp` - Provide a start time in the format **yyyy-mm-ddthh:mm:ssz**.
+ * `endTimeStamp` - Provide an end time in the format **yyyy-mm-ddthh:mm:ssz**.
+
+ ```bash
+ ./MigrateToCSI.sh <namespace> <sourceStorageClass> <TargetCSIstorageClass> <VolumeSnapshotClass> <startTimestamp> <endTimestamp>
+ ```
+
+4. Update your application to use the new PVC.
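+
+    As with the static option, this is only a sketch; the script above names the new claim `csi-<original-pvc-name>`:
+
+    ```yml
+    volumes:
+      - name: data               # volume name is illustrative
+        persistentVolumeClaim:
+          claimName: csi-mypvc   # replace "mypvc" with the original PVC name
+    ```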
+
+5. Manually delete the older resources, including the in-tree PVC/PV, VolumeSnapshot, and VolumeSnapshotContent. Otherwise, maintaining the in-tree PVC/PV and snapshot objects will generate more cost.
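+
+    The following is a sketch of the cleanup with placeholder names; confirm your application is healthy on the new volumes before deleting anything:
+
+    ```bash
+    kubectl delete volumesnapshot <volume-snapshot-name> -n <namespace>
+    kubectl delete volumesnapshotcontent <volume-snapshot-content-name>
+    kubectl delete pvc <old-pvc-name> -n <namespace>
+    kubectl delete pv <old-pv-name>
+    ```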
+
+## Migrate File share volumes
+
+Migration from in-tree to CSI is supported by creating a static volume.
+
+### Migration
+
+1. Update the existing PV `ReclaimPolicy` from **Delete** to **Retain** by running the following command:
+
+ ```bash
+ kubectl patch pv pvName -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ ```
+
+ Replace **pvName** with the name of your selected PersistentVolume. Alternatively, if you want to update the reclaimPolicy for multiple PVs, create a file named **patchReclaimPVs.sh** and copy in the following code.
+
+ ```bash
+    #!/bin/sh
+    # Patch the Persistent Volume in case ReclaimPolicy is Delete
+ namespace=$1
+ i=1
+ for pvc in $(kubectl get pvc -n $namespace | awk '{ print $1}'); do
+ # Ignore first record as it contains header
+ if [ $i -eq 1 ]; then
+ i=$((i + 1))
+ else
+ pv="$(kubectl get pvc $pvc -n $namespace -o jsonpath='{.spec.volumeName}')"
+ reclaimPolicy="$(kubectl get pv $pv -n $namespace -o jsonpath='{.spec.persistentVolumeReclaimPolicy}')"
+ echo "Reclaim Policy for Persistent Volume $pv is $reclaimPolicy"
+ if [[ $reclaimPolicy == "Delete" ]]; then
+ echo "Updating ReclaimPolicy for $pv to Retain"
+ kubectl patch pv $pv -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
+ fi
+ fi
+ done
+ ```
+
+    Execute the script with the `namespace` parameter to specify the cluster namespace: `./patchReclaimPVs.sh <namespace>`.
+
+2. Create a new Storage Class with the provisioner set to `file.csi.azure.com`, or you can use one of the default StorageClasses with the CSI file provisioner.
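+
+    For example, a minimal sketch of such a class (the name *azurefile-csi-retain* is only an illustration):
+
+    ```yml
+    kind: StorageClass
+    apiVersion: storage.k8s.io/v1
+    metadata:
+      name: azurefile-csi-retain
+    provisioner: file.csi.azure.com
+    reclaimPolicy: Retain
+    allowVolumeExpansion: true
+    ```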
+
+3. Get the `secretName` and `shareName` from the existing *PersistentVolume* by running the following command:
+
+ ```bash
+ kubectl describe pv pvName
+ ```
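+
+    Alternatively, you can read both values with a JSONPath query. This is a sketch; it assumes an in-tree Azure Files volume, which exposes them under `spec.azureFile`:
+
+    ```bash
+    kubectl get pv pvName -o jsonpath='{.spec.azureFile.secretName}{"\n"}{.spec.azureFile.shareName}{"\n"}'
+    ```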
+
+4. Create a new PV using the new StorageClass, and the `shareName` and `secretName` from the in-tree PV. Create a file named *azurefile-mount-pv.yaml* and copy in the following code. Under `csi`, update `resourceGroup`, `volumeHandle`, and `shareName`. For mount options, the default value for `fileMode` and `dirMode` is *0777*.
+
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolume
+ metadata:
+ name: azurefile
+ spec:
+ capacity:
+ storage: 5Gi
+ accessModes:
+ - ReadWriteMany
+ persistentVolumeReclaimPolicy: Retain
+ storageClassName: azurefile-csi
+ csi:
+ driver: file.csi.azure.com
+ readOnly: false
+ volumeHandle: unique-volumeid # make sure volumeid is unique for every identical share in the cluster
+ volumeAttributes:
+ resourceGroup: EXISTING_RESOURCE_GROUP_NAME # optional, only set this when storage account is not in the same resource group as the cluster nodes
+ shareName: aksshare
+ nodeStageSecretRef:
+ name: azure-secret
+ namespace: default
+ mountOptions:
+ - dir_mode=0777
+ - file_mode=0777
+ - uid=0
+ - gid=0
+ - mfsymlinks
+ - cache=strict
+ - nosharesock
+ - nobrl
+ ```
+
+5. Create a file named *azurefile-mount-pvc.yaml* containing a *PersistentVolumeClaim* that uses the *PersistentVolume*, using the following code.
+
+ ```yml
+ apiVersion: v1
+ kind: PersistentVolumeClaim
+ metadata:
+ name: azurefile
+ spec:
+ accessModes:
+ - ReadWriteMany
+ storageClassName: azurefile-csi
+ volumeName: azurefile
+ resources:
+ requests:
+ storage: 5Gi
+ ```
+
+6. Use the `kubectl` command to create the *PersistentVolume*.
+
+ ```bash
+ kubectl apply -f azurefile-mount-pv.yaml
+ ```
+
+7. Use the `kubectl` command to create the *PersistentVolumeClaim*.
+
+ ```bash
+ kubectl apply -f azurefile-mount-pvc.yaml
+ ```
+
+8. Verify your *PersistentVolumeClaim* is created and bound to the *PersistentVolume* by running the following command.
+
+ ```bash
+ kubectl get pvc azurefile
+ ```
+
+ The output resembles the following:
+
+ ```output
+ NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
+ azurefile Bound azurefile 5Gi RWX azurefile 5s
+ ```
+
+9. Update your container spec to reference your *PersistentVolumeClaim* and update your pod. For example, copy the following code and create a file named *azure-files-pod.yaml*.
+
+ ```yml
+ ...
+ volumes:
+ - name: azure
+ persistentVolumeClaim:
+ claimName: azurefile
+ ```
+
+10. The pod spec can't be updated in place. Use the following `kubectl` commands to delete and then re-create the pod.
+
+ ```bash
+ kubectl delete pod mypod
+ ```
+
+ ```bash
+ kubectl apply -f azure-files-pod.yaml
+ ```
+
+## Next steps
+
+For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][aks-storage-backups-best-practices].
+
+<!-- LINKS - internal -->
+[install-azure-cli]: /cli/azure/install-azure-cli
+[aks-rbac-cluster-admin-role]: manage-azure-rbac.md#create-role-assignments-for-users-to-access-cluster
+[azure-resource-locks]: /azure/azure-resource-manager/management/lock-resources
+[csi-driver-overview]: csi-storage-drivers.md
+[aks-storage-backups-best-practices]: operator-best-practices-storage.md
aks Csi Storage Drivers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/csi-storage-drivers.md
Title: Container Storage Interface (CSI) drivers on Azure Kubernetes Service (AKS) description: Learn about and deploy the Container Storage Interface (CSI) drivers for Azure Disks and Azure Files in an Azure Kubernetes Service (AKS) cluster- Previously updated : 11/16/2022 Last updated : 01/19/2023
The CSI storage driver support on AKS allows you to natively use:
- [**Azure Blob storage**](azure-blob-csi.md) can be used to mount Blob storage (or object storage) as a file system into a container or pod. Using Blob storage enables your cluster to support applications that work with large unstructured datasets like log file data, images or documents, HPC, and others. Additionally, if you ingest data into [Azure Data Lake storage](../storage/blobs/data-lake-storage-introduction.md), you can directly mount and use it in AKS without configuring another interim filesystem. > [!IMPORTANT]
-> Starting with Kubernetes version 1.21, AKS only uses CSI drivers by default and CSI migration is enabled. Existing in-tree persistent volumes will continue to function. However, internally Kubernetes hands control of all storage management operations (previously targeting in-tree drivers) to CSI drivers.
+> Starting with Kubernetes version 1.26, the in-tree persistent volume types *kubernetes.io/azure-disk* and *kubernetes.io/azure-file* are deprecated and will no longer be supported. Removing these drivers following their deprecation is not planned; however, you should migrate to the corresponding CSI drivers *disks.csi.azure.com* and *file.csi.azure.com*. To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-to-csi-drivers].
> > *In-tree drivers* refers to the storage drivers that are part of the core Kubernetes code opposed to the CSI drivers, which are plug-ins.
The CSI storage driver support on AKS allows you to natively use:
- You need the Azure CLI version 2.42 or later installed and configured. Run `az --version` to find the version. If you need to install or upgrade, see [Install Azure CLI][install-azure-cli]. - If the open-source CSI Blob storage driver is installed on your cluster, uninstall it before enabling the Azure Blob storage driver.
-## Disable CSI storage drivers on a new or existing cluster
-
-To disable CSI storage drivers on a new cluster, include one of the following parameters depending on the storage system:
-
-* `--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi].
-* `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi].
-* `--disable-blob-driver` allows you to disable the [Azure Blob storage CSI driver][azure-blob-csi].
-* `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller].
-
-```azurecli
-az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
-```
-
-To disable CSI storage drivers on an existing cluster, use one of the parameters listed earlier depending on the storage system:
-
-```azurecli
-az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
-```
- ## Enable CSI storage drivers on an existing cluster To enable CSI storage drivers on an existing cluster, include one of the following parameters depending on the storage system:
To enable CSI storage drivers on an existing cluster, include one of the following par
az aks update -n myAKSCluster -g myResourceGroup --enable-disk-driver --enable-file-driver --enable-blob-driver --enable-snapshot-controller ```
-## Migrate custom in-tree storage classes to CSI
-
-If you've created in-tree driver storage classes, those storage classes continue to work since CSI migration is turned on after upgrading your cluster to 1.21.x. If you want to use CSI features you'll need to perform the migration.
+It may take several minutes for this action to complete. Once it's done, the output shows the status of the drivers on your cluster. The following example shows the section of the output indicating that the Blob storage CSI driver is enabled:
-Migrating these storage classes involves deleting the existing ones, and re-creating them with the provisioner set to **disk.csi.azure.com** if using Azure Disks, and **files.csi.azure.com** if using Azure Files.
-
-### Migrate storage class provisioner
+```output
+"storageProfile": {
+ "blobCsiDriver": {
+ "enabled": true
+ },
+```
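+
+You can also confirm the driver status at any point by querying the cluster's storage profile directly, for example:
+
+```azurecli
+az aks show --name myAKSCluster --resource-group myResourceGroup --query storageProfile
+```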
-The following example YAML manifest shows the difference between the in-tree storage class definition configured to use Azure Disks, and the equivalent using a CSI storage class definition. The CSI storage system supports the same features as the in-tree drivers, so the only change needed would be the value for `provisioner`.
+## Disable CSI storage drivers on a new or existing cluster
-#### Original in-tree storage class definition
+To disable CSI storage drivers on a new cluster, include one of the following parameters depending on the storage system:
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: custom-managed-premium
-provisioner: kubernetes.io/azure-disk
-reclaimPolicy: Delete
-parameters:
- storageAccountType: Premium_LRS
-```
+* `--disable-disk-driver` allows you to disable the [Azure Disks CSI driver][azure-disk-csi].
+* `--disable-file-driver` allows you to disable the [Azure Files CSI driver][azure-files-csi].
+* `--disable-blob-driver` allows you to disable the [Azure Blob storage CSI driver][azure-blob-csi].
+* `--disable-snapshot-controller` allows you to disable the [snapshot controller][snapshot-controller].
-#### CSI storage class definition
-
-```yaml
-kind: StorageClass
-apiVersion: storage.k8s.io/v1
-metadata:
- name: custom-managed-premium
-provisioner: disk.csi.azure.com
-reclaimPolicy: Delete
-parameters:
- storageAccountType: Premium_LRS
+```azurecli
+az aks create -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
```
-The CSI storage system supports the same features as the In-tree drivers, so the only change needed would be the provisioner.
-
-## Migrate in-tree persistent volumes
-
-> [!IMPORTANT]
-> If your in-tree persistent volume `reclaimPolicy` is set to **Delete**, you need to change its policy to **Retain** to persist your data. This can be achieved using a [patch operation on the PV](https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/). For example:
->
-> ```bash
-> kubectl patch pv pv-azuredisk --type merge --patch '{"spec": {"persistentVolumeReclaimPolicy": "Retain"}}'
-> ```
+To disable CSI storage drivers on an existing cluster, use one of the parameters listed earlier depending on the storage system:
-### Migrate in-tree Azure Disks persistent volumes
+```azurecli
+az aks update -n myAKSCluster -g myResourceGroup --disable-disk-driver --disable-file-driver --disable-blob-driver --disable-snapshot-controller
+```
-If you have in-tree Azure Disks persistent volumes, get `diskURI` from in-tree persistent volumes and then follow this [guide][azure-disk-static-mount] to set up CSI driver persistent volumes.
+## Migrate custom in-tree storage classes to CSI
-### Migrate in-tree Azure File persistent volumes
+If you've created in-tree driver storage classes, those storage classes continue to work since CSI migration is turned on after upgrading your cluster to 1.21.x. If you want to use CSI features, you'll need to perform the migration.
-If you have in-tree Azure File persistent volumes, get `secretName`, `shareName` from in-tree persistent volumes and then follow this [guide][azure-file-static-mount] to set up CSI driver persistent volumes
+To review the migration options for your storage classes and upgrade your cluster to use Azure Disks and Azure Files CSI drivers, see [Migrate from in-tree to CSI drivers][migrate-from-in-tree-csi-drivers].
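+
+As a sketch of what that migration involves (based on the in-tree and CSI provisioner names above; the linked article covers the supported options in detail), re-creating a custom in-tree Azure Disks storage class as a CSI storage class changes only the `provisioner` value:
+
+```bash
+# Delete the in-tree storage class and re-create it with the CSI provisioner;
+# the rest of the definition stays the same.
+kubectl delete storageclass custom-managed-premium
+kubectl apply -f - <<EOF
+kind: StorageClass
+apiVersion: storage.k8s.io/v1
+metadata:
+  name: custom-managed-premium
+provisioner: disk.csi.azure.com
+reclaimPolicy: Delete
+parameters:
+  storageAccountType: Premium_LRS
+EOF
+```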
## Next steps
If you have in-tree Azure File persistent volumes, get `secretName`, `shareName`
- To use the CSI driver for Azure Files, see [Use Azure Files with CSI drivers][azure-files-csi]. - To use the CSI driver for Azure Blob storage, see [Use Azure Blob storage with CSI drivers][azure-blob-csi] - For more about storage best practices, see [Best practices for storage and backups in Azure Kubernetes Service][operator-best-practices-storage].-- For more information on CSI migration, see [Kubernetes In-Tree to CSI Volume Migration][csi-migration-community].
+- For more information on CSI migration, see [Kubernetes in-tree to CSI Volume Migration][csi-migration-community].
<!-- LINKS - external --> [csi-migration-community]: https://kubernetes.io/blog/2019/12/09/kubernetes-1-17-feature-csi-migration-beta [snapshot-controller]: https://kubernetes-csi.github.io/docs/snapshot-controller.html <!-- LINKS - internal -->
-[azure-disk-static-mount]: azure-disk-volume.md#mount-disk-as-a-volume
-[azure-file-static-mount]: azure-files-volume.md#mount-file-share-as-a-persistent-volume
+[azure-disk-static-mount]: azure-csi-disk-storage-provision.md#mount-disk-as-a-volume
+[azure-file-static-mount]: azure-csi-files-storage-provision.md#mount-file-share-as-a-persistent-volume
[install-azure-cli]: /cli/azure/install-azure-cli [operator-best-practices-storage]: operator-best-practices-storage.md [azure-blob-csi]: azure-blob-csi.md [azure-disk-csi]: azure-disk-csi.md
-[azure-files-csi]: azure-files-csi.md
+[azure-files-csi]: azure-files-csi.md
+[migrate-from-in-tree-csi-drivers]: csi-migrate-in-tree-volumes.md
aks Internal Lb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/internal-lb.md
Learn more about Kubernetes services in the [Kubernetes services documentation][
[install-azure-cli]: /cli/azure/install-azure-cli [aks-sp]: kubernetes-service-principal.md#delegate-access-to-other-azure-resources [different-subnet]: #specify-a-different-subnet
-[aks-vnet-subnet]: /aks/configure-kubenet.md#create-a-virtual-network-and-subnet
-[unique-subnet]: /aks/use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet
+[aks-vnet-subnet]: configure-kubenet.md#create-a-virtual-network-and-subnet
+[unique-subnet]: use-multiple-node-pools.md#add-a-node-pool-with-a-unique-subnet
aks Intro Kubernetes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/intro-kubernetes.md
Learn more about deploying and managing AKS.
[aks-scale]: ./tutorial-kubernetes-scale.md [aks-upgrade]: ./upgrade-cluster.md [azure-devops]: ../devops-project/overview.md
-[azure-disk]: ./azure-disks-dynamic-pv.md
-[azure-files]: ./azure-files-dynamic-pv.md
+[azure-disk]: ./azure-disk-csi.md
+[azure-files]: ./azure-files-csi.md
[container-health]: ../azure-monitor/containers/container-insights-overview.md [aks-master-logs]: monitor-aks-reference.md#resource-logs [aks-supported versions]: supported-kubernetes-versions.md
aks Monitor Aks Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/monitor-aks-reference.md
Title: Monitoring AKS data reference description: Important reference material needed when you monitor AKS -+ Last updated 07/18/2022
aks Operator Best Practices Run At Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-run-at-scale.md
To increase the node limit beyond 1000, you must have the following pre-requisit
## Networking considerations and best practices * Use Managed NAT for cluster egress with at least 2 public IPs on the NAT Gateway. For more information, see [Managed NAT Gateway with AKS][Managed NAT Gateway - Azure Kubernetes Service].
-* Use Azure CNI with Dynamic IP allocation for optimum IP utilization, and scale up to 50k application pods per cluster with one routable IP per pod. For more information, see [Configure Azure CNI networking in AKS][Configure Azure CNI networking in Azure Kubernetes Service (AKS)].
+* Use Azure CNI with Dynamic IP allocation for optimum IP utilization, and scale up to 50k application pods per cluster with one routable IP per pod. For more information, see [Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in AKS][Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)].
* When using internal Kubernetes services behind an internal load balancer, we recommend creating an internal load balancer or internal service below 750 node scale for optimal scaling performance and load balancer elasticity. > [!NOTE]
To increase the node limit beyond 1000, you must have the following pre-requisit
<!-- Links - External --> [Managed NAT Gateway - Azure Kubernetes Service]: nat-gateway.md
-[Configure Azure CNI networking in Azure Kubernetes Service (AKS)]: configure-azure-cni.md#dynamic-allocation-of-ips-and-enhanced-subnet-support
+[Configure Azure CNI networking for dynamic allocation of IPs and enhanced subnet support in Azure Kubernetes Service (AKS)]: configure-azure-cni-dynamic-ip-allocation.md
[max surge]: upgrade-cluster.md?tabs=azure-cli#customize-node-surge-upgrade [Azure portal]: https://portal.azure.com/#create/Microsoft.Support/Parameters/%7B%0D%0A%09%22subId%22%3A+%22%22%2C%0D%0A%09%22pesId%22%3A+%225a3a423f-8667-9095-1770-0a554a934512%22%2C%0D%0A%09%22supportTopicId%22%3A+%2280ea0df7-5108-8e37-2b0e-9737517f0b96%22%2C%0D%0A%09%22contextInfo%22%3A+%22AksLabelDeprecationMarch22%22%2C%0D%0A%09%22caller%22%3A+%22Microsoft_Azure_ContainerService+%2B+AksLabelDeprecationMarch22%22%2C%0D%0A%09%22severity%22%3A+%223%22%0D%0A%7D [uptime SLA]: uptime-sla.md
aks Operator Best Practices Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/operator-best-practices-storage.md
This article focused on storage best practices in AKS. For more information abou
<!-- LINKS - Internal --> [aks-concepts-storage]: concepts-storage.md [vm-sizes]: ../virtual-machines/sizes.md
-[dynamic-disks]: azure-disks-dynamic-pv.md
-[dynamic-files]: azure-files-dynamic-pv.md
+[dynamic-disks]: azure-disk-csi.md
+[dynamic-files]: azure-files-csi.md
[reclaim-policy]: concepts-storage.md#storage-classes [aks-concepts-storage-pvcs]: concepts-storage.md#persistent-volume-claims [aks-concepts-storage-classes]: concepts-storage.md#storage-classes
aks Security Controls Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/security-controls-policy.md
Title: Azure Policy Regulatory Compliance controls for Azure Kubernetes Service
description: Lists Azure Policy Regulatory Compliance controls available for Azure Kubernetes Service (AKS). These built-in policy definitions provide common approaches to managing the compliance of your Azure resources. Last updated 01/05/2023 -+ # Azure Policy Regulatory Compliance controls for Azure Kubernetes Service (AKS)
aks Spot Node Pool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/spot-node-pool.md
Title: Add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster description: Learn how to add an Azure Spot node pool to an Azure Kubernetes Service (AKS) cluster. -+ Last updated 01/21/2022
aks Supported Kubernetes Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/supported-kubernetes-versions.md
The Kubernetes community releases minor versions roughly every three months. Rec
Minor version releases include new features and improvements. Patch releases are more frequent (sometimes weekly) and are intended for critical bug fixes within a minor version. Patch releases include fixes for security vulnerabilities or major bugs.
->[!WARNING]
-> Due to an issue with Calico and AKS. It is highly reccomended that customers using Calico do not upgrade or create new clusters on v1.25.
- ## Kubernetes versions Kubernetes uses the standard [Semantic Versioning](https://semver.org/) versioning scheme for each version:
Get-AzAksVersion -Location eastus
## AKS Kubernetes release calendar
-For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes#History).
- > [!NOTE]
-> The asterisk (*) states that a date has not been finalized; because of this, the timeline below is subject to change. Please continue to check the release calendar for updates.
+> AKS follows 12 months of support for a GA Kubernetes version. For more information about our support policy for Kubernetes versioning, see the [FAQ](https://learn.microsoft.com/azure/aks/supported-kubernetes-versions?tabs=azure-cli#faq).
+
+For the past release history, see [Kubernetes](https://en.wikipedia.org/wiki/Kubernetes#History).
| K8s version | Upstream release | AKS preview | AKS GA | End of life | |--|--|--|--|--|
-| 1.21 | Apr-08-21 | May 2021 | Jul 2021 | 1.24 GA |
-| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | 1.25 GA |
-| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | 1.26 GA |
-| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | 1.27 GA
-| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | 1.28 GA
-| 1.26 | Dec 2022 | Jan 2023 | Feb 2023 | 1.29 GA
-| 1.27 | Apr 2023 | May 2023 | Jun 2023 | 1.30 GA
-| 1.28 | * | * | * | 1.31 GA
+| 1.22 | Aug-04-21 | Sept 2021 | Dec 2021 | Dec 2022 |
+| 1.23 | Dec 2021 | Jan 2022 | Apr 2022 | Apr 2023 |
+| 1.24 | Apr-22-22 | May 2022 | Jul 2022 | Jul 2023 |
+| 1.25 | Aug 2022 | Oct 2022 | Dec 2022 | Dec 2023 |
+| 1.26 | Dec 2022 | Jan 2023 | Feb 2023 | Feb 2024 |
+| 1.27 | Apr 2023 | May 2023 | Jun 2023 | Jun 2024 |
> [!NOTE] > To see real-time updates of region release status and version release notes, visit the [AKS release status webpage][aks-release]. To learn more about the release status webpage, see [AKS release tracker][aks-tracker].
aks Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/upgrade.md
For more information what cluster operations may trigger specific upgrade events
[release-tracker]: ./release-tracker.md [node-image-upgrade]: ./node-image-upgrade.md [gh-actions-upgrade]: ./node-upgrade-github-actions.md
-[operator-guide-patching]: /azure/architecture/operator-guides/aks/aks-upgrade-practices.md#considerations
+[operator-guide-patching]: /azure/architecture/operator-guides/aks/aks-upgrade-practices#considerations
[supported-k8s-versions]: ./supported-kubernetes-versions.md#kubernetes-version-support-policy [ts-nsg]: /troubleshoot/azure/azure-kubernetes/upgrade-fails-because-of-nsg-rules [ts-pod-drain]: /troubleshoot/azure/azure-kubernetes/error-code-poddrainfailure
aks Use Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-azure-policy.md
Title: Use Azure Policy to secure your cluster description: Use Azure Policy to secure an Azure Kubernetes Service (AKS) cluster.-+ Last updated 09/12/2022
aks Use Ultra Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/use-ultra-disks.md
This feature can only be set at cluster creation or node pool creation time.
> Azure ultra disks require nodepools deployed in availability zones and regions that support these disks, and are available only for specific VM series. See the [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations). ### Limitations+ - Ultra disks can't be used with some features and functionality, such as availability sets or Azure Disk Encryption. Review [**Ultra disks GA scope and limitations**](../virtual-machines/disks-enable-ultra-ssd.md#ga-scope-and-limitations) before proceeding.-- The supported size range for a Ultra disks is between 100 and 1500
+- The supported size range for ultra disks is between 100 GiB and 1,500 GiB.
-## Create a new cluster that can use Ultra disks
+## Create a new cluster that can use ultra disks
-Create an AKS cluster that is able to leverage Ultra Disks by using the following CLI commands. Use the `--enable-ultra-ssd` flag to set the `EnableUltraSSD` feature.
+Create an AKS cluster that can use Azure ultra disks by using the following CLI commands. Use the `--enable-ultra-ssd` flag to set the `EnableUltraSSD` feature.
Create an Azure resource group: ```azurecli-interactive
-# Create an Azure resource group
az group create --name myResourceGroup --location westus2 ```
-Create the AKS cluster with support for Ultra Disks.
+Create an AKS cluster with support for ultra disks.
```azurecli-interactive
-# Create an AKS-managed Azure AD cluster
-az aks create -g MyResourceGroup -n MyManagedCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
+az aks create -g myResourceGroup -n myAKSCluster -l westus2 --node-vm-size Standard_D2s_v3 --zones 1 2 --node-count 2 --enable-ultra-ssd
``` If you want to create clusters without ultra disk support, you can do so by omitting the `--enable-ultra-ssd` parameter.
-## Enable Ultra disks on an existing cluster
+## Enable ultra disks on an existing cluster
You can enable ultra disks on existing clusters by adding a new node pool to your cluster that supports ultra disks. Configure a new node pool to use ultra disks by using the `--enable-ultra-ssd` flag, as shown in the example below.
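For example, a node pool with ultra disk support might be added like this (a sketch; the pool name, zones, and VM size are placeholders):

```azurecli
az aks nodepool add --resource-group myResourceGroup --cluster-name myAKSCluster \
    --name ultradisks --node-count 3 --zones 1 2 --node-vm-size Standard_D2s_v3 \
    --enable-ultra-ssd
```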
If you want to create new node pools without support for ultra disks, you can do
## Use ultra disks dynamically with a storage class
-To use ultra disks in our deployments or stateful sets you can use a [storage class for dynamic provisioning](azure-disks-dynamic-pv.md).
+To use ultra disks in your deployments or stateful sets, you can use a [storage class for dynamic provisioning][azure-disk-volume].
### Create the storage class
parameters:
Create the storage class with the [kubectl apply][kubectl-apply] command and specify your *azure-ultra-disk-sc.yaml* file: ```console
-$ kubectl apply -f azure-ultra-disk-sc.yaml
+kubectl apply -f azure-ultra-disk-sc.yaml
+```
+The output from the command resembles the following example:
+```console
storageclass.storage.k8s.io/ultra-disk-sc created ```
spec:
Create the persistent volume claim with the [kubectl apply][kubectl-apply] command and specify your *azure-ultra-disk-pvc.yaml* file: ```console
-$ kubectl apply -f azure-ultra-disk-pvc.yaml
+kubectl apply -f azure-ultra-disk-pvc.yaml
+```
+
+The output from the command resembles the following example:
+```console
persistentvolumeclaim/ultra-disk created ```
spec:
Create the pod with the [kubectl apply][kubectl-apply] command, as shown in the following example: ```console
-$ kubectl apply -f nginx-ultra.yaml
+kubectl apply -f nginx-ultra.yaml
+```
+The output from the command resembles the following example:
+
+```console
pod/nginx-ultra created ``` You now have a running pod with your Azure disk mounted in the `/mnt/azure` directory. This configuration can be seen when inspecting your pod via `kubectl describe pod nginx-ultra`, as shown in the following condensed example: ```console
-$ kubectl describe pod nginx-ultra
+kubectl describe pod nginx-ultra
[...] Volumes:
For more details on using Azure tags, see [Use Azure tags in Azure Kubernetes Se
[managed-disk-pricing-performance]: https://azure.microsoft.com/pricing/details/managed-disks/ <!-- LINKS - internal -->
-[azure-disk-volume]: azure-disk-volume.md
-[azure-files-pvc]: azure-files-dynamic-pv.md
+[azure-disk-volume]: azure-disk-csi.md
+[azure-files-pvc]: azure-files-csi.md
[premium-storage]: ../virtual-machines/disks-types.md [az-disk-list]: /cli/azure/disk#az_disk_list [az-snapshot-create]: /cli/azure/snapshot#az_snapshot_create
aks Vertical Pod Autoscaler https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/vertical-pod-autoscaler.md
Title: Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS) description: Learn how to vertically autoscale your pod on an Azure Kubernetes Service (AKS) cluster.- Previously updated : 09/30/2022 Last updated : 01/12/2023 # Vertical Pod Autoscaling (preview) in Azure Kubernetes Service (AKS)
-This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. This ensures pods are scheduled onto nodes that have the required CPU and memory resources.
+This article provides an overview of Vertical Pod Autoscaler (VPA) (preview) in Azure Kubernetes Service (AKS), which is based on the open source [Kubernetes](https://github.com/kubernetes/autoscaler/tree/master/vertical-pod-autoscaler) version. When configured, it automatically sets resource requests and limits on containers per workload based on past usage. VPA makes sure pods are scheduled onto nodes that have the required CPU and memory resources.
## Benefits
The following steps create a deployment with two pods, each running a single con
The pod has 100 millicpu and 50 mebibytes of memory reserved in this example. For this sample application, the pod needs less than 100 millicpu to run, so there's excess CPU capacity reserved. The pod also reserves much less memory than it needs. The Vertical Pod Autoscaler *vpa-recommender* deployment analyzes the pods hosting the hamster application to see if the CPU and memory requirements are appropriate. If adjustments are needed, the vpa-updater relaunches the pods with updated values.
-1. Wait for the vpa-updater to launch a new hamster pod. This should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
+1. Wait for the vpa-updater to launch a new hamster pod, which should take a few minutes. You can monitor the pods using the [kubectl get][kubectl-get] command.
```bash kubectl get --watch pods -l app=hamster
Vertical Pod autoscaling uses the `VerticalPodAutoscaler` object to automaticall
The Vertical Pod Autoscaler uses the `lowerBound` and `upperBound` attributes to decide whether to delete a Pod and replace it with a new Pod. If a Pod has requests less than the lower bound or greater than the upper bound, the Vertical Pod Autoscaler deletes the Pod and replaces it with a Pod that meets the target attribute.
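As an illustration, a minimal `VerticalPodAutoscaler` manifest targeting the hamster deployment from the earlier example might look like the following sketch (the object name and update mode here are assumptions):

```bash
kubectl apply -f - <<EOF
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: hamster-vpa
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: hamster
  updatePolicy:
    updateMode: "Auto"  # allow the vpa-updater to evict and relaunch pods with new requests
EOF
```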
+## Metrics server VPA throttling
+
+With AKS clusters running Kubernetes version 1.24 and higher, vertical pod autoscaling is enabled for the metrics server. VPA enables you to adjust the resource limits when the metrics server is experiencing consistent CPU and memory resource constraints.
+
+If the metrics server throttling rate is high and the memory usage of its two pods is unbalanced, the metrics server requires more resources than the default values specify.
+
+To update the coefficient values, create a ConfigMap in the overlay *kube-system* namespace to override the values in the metrics server specification. Perform the following steps to update the metrics server.
+
+1. Create a ConfigMap file named *metrics-server-config.yaml* and copy in the following manifest.
+
+ ```yml
+ apiVersion: v1
+ kind: ConfigMap
+ metadata:
+ name: metrics-server-config
+ namespace: kube-system
+ labels:
+ kubernetes.io/cluster-service: "true"
+ addonmanager.kubernetes.io/mode: EnsureExists
+ data:
+ NannyConfiguration: |-
+ apiVersion: nannyconfig/v1alpha1
+ kind: NannyConfiguration
+ baseCPU: 100m
+ cpuPerNode: 1m
+ baseMemory: 100Mi
+ memoryPerNode: 8Mi
+ ```
+
+ This example ConfigMap changes the resource limit and request to the following:
+
+ * cpu: (100+1n) millicore
+ * memory: (100+8n) mebibyte
+
+ Where *n* is the number of nodes.
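+
+ For example, on a 10-node cluster these values request 100 + 1×10 = 110 millicores of CPU and 100 + 8×10 = 180 MiB of memory for the metrics server.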
+
+2. Create the ConfigMap using the [kubectl apply][kubectl-apply] command and specify the name of your YAML manifest:
+
+ ```bash
+ kubectl apply -f metrics-server-config.yaml
+ ```
+
+Be careful with the *baseCPU*, *cpuPerNode*, *baseMemory*, and *memoryPerNode* values, because AKS doesn't validate the ConfigMap. As a recommended practice, increase the values gradually to avoid unnecessary resource consumption, and proactively monitor resource usage when updating or creating the ConfigMap. A large number of resource requests could negatively impact the node.
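+
+For example, one way to watch the metrics server's consumption after applying the ConfigMap is with `kubectl top` (this assumes the standard `k8s-app=metrics-server` label on the metrics server pods):
+
+```bash
+kubectl --namespace kube-system top pods -l k8s-app=metrics-server
+```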
+ ## Next steps This article showed you how to automatically scale resource utilization, such as CPU and memory, of cluster nodes to match application requirements. You can also use the horizontal pod autoscaler to automatically adjust the number of pods that run your application. For steps on using the horizontal pod autoscaler, see [Scale applications in AKS][scale-applications-in-aks].
aks Virtual Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/virtual-nodes.md
Title: Use virtual nodes description: Overview of how using virtual node with Azure Kubernetes Services (AKS)- Previously updated : 09/06/2022 Last updated : 01/18/2023 # Create and configure an Azure Kubernetes Services (AKS) cluster to use virtual nodes
-To rapidly scale application workloads in an AKS cluster, you can use virtual nodes. With virtual nodes, you have quick provisioning of pods, and only pay per second for their execution time. You don't need to wait for Kubernetes cluster autoscaler to deploy VM compute nodes to run the additional pods. Virtual nodes are only supported with Linux pods and nodes.
+To rapidly scale application workloads in an AKS cluster, you can use virtual nodes. With virtual nodes, you have quick provisioning of pods, and only pay per second for their execution time. You don't need to wait for Kubernetes cluster autoscaler to deploy VM compute nodes to run more pods. Virtual nodes are only supported with Linux pods and nodes.
-The virtual nodes add-on for AKS, is based on the open source project [Virtual Kubelet][virtual-kubelet-repo].
+The virtual nodes add-on for AKS is based on the open source project [Virtual Kubelet][virtual-kubelet-repo].
-This article gives you an overview of the region availability and networking requirements for using virtual nodes, as well as the known limitations.
+This article gives you an overview of the region availability and networking requirements for using virtual nodes, and the known limitations.
## Regional availability
-All regions, where ACI supports VNET SKUs, are supported for virtual nodes deployments. For more details, see [Resource availability for Azure Container Instances in Azure regions](../container-instances/container-instances-region-availability.md).
+All regions where ACI supports VNET SKUs are supported for virtual node deployments. For more information, see [Resource availability for Azure Container Instances in Azure regions](../container-instances/container-instances-region-availability.md).
-For available CPU and memory SKUs in each region, please check the [Azure Container Instances Resource availability for Azure Container Instances in Azure regions - Linux container groups](../container-instances/container-instances-region-availability.md#linux-container-groups)
+For available CPU and memory SKUs in each region, review [Azure Container Instances Resource availability for Azure Container Instances in Azure regions - Linux container groups](../container-instances/container-instances-region-availability.md#linux-container-groups).
## Network requirements
-Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To provide this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). By default, AKS clusters are created with *basic* networking (kubenet).
+Virtual nodes enable network communication between pods that run in Azure Container Instances (ACI) and the AKS cluster. To support this communication, a virtual network subnet is created and delegated permissions are assigned. Virtual nodes only work with AKS clusters created using *advanced* networking (Azure CNI). By default, AKS clusters are created with *basic* networking (kubenet).
Pods running in Azure Container Instances (ACI) need access to the AKS API server endpoint, in order to configure networking. ## Known limitations
-Virtual Nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios are not yet supported with Virtual nodes:
+Virtual nodes functionality is heavily dependent on ACI's feature set. In addition to the [quotas and limits for Azure Container Instances](../container-instances/container-instances-quotas.md), the following scenarios aren't supported with virtual nodes:
* Using service principal to pull ACR images. [Workaround](https://github.com/virtual-kubelet/azure-aci/blob/master/README.md#private-registry) is to use [Kubernetes secrets](https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/#create-a-secret-by-providing-credentials-on-the-command-line) * [Virtual Network Limitations](../container-instances/container-instances-vnet.md) including VNet peering, Kubernetes network policies, and outbound traffic to the internet with network security groups. * Init containers * [Host aliases](https://kubernetes.io/docs/concepts/services-networking/add-entries-to-pod-etc-hosts-with-host-aliases/) * [Arguments](../container-instances/container-instances-exec.md#restrictions) for exec in ACI
-* [DaemonSets](concepts-clusters-workloads.md#statefulsets-and-daemonsets) will not deploy pods to the virtual nodes
+* [DaemonSets](concepts-clusters-workloads.md#statefulsets-and-daemonsets) won't deploy pods to the virtual nodes
* Virtual nodes support scheduling Linux pods. You can manually install the open source [Virtual Kubelet ACI](https://github.com/virtual-kubelet/azure-aci) provider to schedule Windows Server containers to ACI. * Virtual nodes require AKS clusters with Azure CNI networking. * Using api server authorized ip ranges for AKS.
-* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-volume.md).
-* Using IPv6 is not supported.
+* Volume mounting Azure Files share support [General-purpose V2](../storage/common/storage-account-overview.md#types-of-storage-accounts) and [General-purpose V1](../storage/common/storage-account-overview.md#types-of-storage-accounts). Follow the instructions for mounting [a volume with Azure Files share](azure-files-csi.md).
+* Using IPv6 isn't supported.
* Virtual nodes don't support the [Container hooks](https://kubernetes.io/docs/concepts/containers/container-lifecycle-hooks/) feature. ## Next steps
Virtual nodes are often one component of a scaling solution in AKS. For more inf
- [Use the Kubernetes horizontal pod autoscaler][aks-hpa] - [Use the Kubernetes cluster autoscaler][aks-cluster-autoscaler]-- [Check out the Autoscale sample for Virtual Nodes][virtual-node-autoscale]
+- [Check out the Autoscale sample for virtual nodes][virtual-node-autoscale]
- [Read more about the Virtual Kubelet open source library][virtual-kubelet-repo] <!-- LINKS - external -->
api-management Api Management Gateways Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-gateways-overview.md
The API Management *gateway* (also called *data plane* or *runtime*) is the serv
[!INCLUDE [api-management-gateway-role](../../includes/api-management-gateway-role.md)] +
+> [!NOTE]
+> All requests to the API Management gateway, including those rejected by policy configurations, count toward configured rate limits, quotas, and billing limits if applied in the service tier.
++ ## Managed and self-hosted API Management offers both managed and self-hosted gateways:
The following table compares features available in the managed gateway versus th
> [!NOTE] > * Some features of managed and self-hosted gateways are supported only in certain [service tiers](api-management-features.md) or with certain [deployment environments](self-hosted-gateway-overview.md#packaging) for self-hosted gateways.
+> * For the current supported features of the self-hosted gateway, ensure that you have upgraded to the latest major version of the self-hosted gateway [container image](self-hosted-gateway-overview.md#container-images).
> * See also self-hosted gateway [limitations](self-hosted-gateway-overview.md#limitations). ### Infrastructure
The following table compares features available in the managed gateway versus th
### Policies
-Managed and self-hosted gateways support all available [policies](api-management-howto-policies.md) in policy definitions with the following exceptions.
+Managed and self-hosted gateways support all available [policies](api-management-policies.md) in policy definitions with the following exceptions.
-| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted |
+| Policy | Managed (Dedicated) | Managed (Consumption) | Self-hosted<sup>1</sup> |
| | -- | -- | - |
-| [Dapr integration](api-management-dapr-policies.md) | ❌ | ❌ | ✔️ |
+| [Dapr integration](api-management-policies.md#dapr-integration-policies) | ❌ | ❌ | ✔️ |
| [Get authorization context](get-authorization-context-policy.md) | ✔️ | ❌ | ❌ |
-| [Quota and rate limit](api-management-access-restriction-policies.md) | ✔️ | ✔️<sup>1</sup> | ✔️<sup>2</sup>
+| [Quota and rate limit](api-management-policies.md#access-restriction-policies) | ✔️ | ✔️<sup>2</sup> | ✔️<sup>3</sup>
| [Set GraphQL resolver](set-graphql-resolver-policy.md) | ✔️ | ❌ | ❌ |
-<sup>1</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
-<sup>2</sup> By default, rate limit counts in self-hosted gateways are per-gateway, per-node.
+<sup>1</sup> Configured policies that aren't supported by the self-hosted gateway are skipped during policy execution.<br/>
+<sup>2</sup> The rate limit by key and quota by key policies aren't available in the Consumption tier.<br/>
+<sup>3</sup> [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
+ ### Monitoring
api-management How To Self Hosted Gateway On Kubernetes In Production https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/how-to-self-hosted-gateway-on-kubernetes-in-production.md
Previously updated : 12/17/2021 Last updated : 01/17/2023 # Guidance for running self-hosted gateway on Kubernetes in production
By default, a self-hosted gateway is deployed with a **RollingUpdate** deploymen
We recommend reducing container logs to warnings (`warn`) to improve performance. Learn more in our [self-hosted gateway configuration reference](self-hosted-gateway-settings-reference.md).
+## Request throttling
+
+Request throttling in a self-hosted gateway can be enabled by using the API Management [rate-limit](rate-limit-policy.md) or [rate-limit-by-key](rate-limit-by-key-policy.md) policy. Configure rate limit counts to synchronize among gateway instances across cluster nodes by exposing the following ports in the Kubernetes deployment for instance discovery:
+
+* Port 4290 (UDP), for the rate limiting synchronization
+* Port 4291 (UDP), for sending heartbeats to other instances
+
+> [!NOTE]
+> [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)]
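+
+As an illustration, the following sketch adds the two UDP ports to an existing gateway deployment with a strategic merge patch. The deployment and container name `apim-gateway` are assumptions; substitute the names from your own deployment:
+
+```bash
+kubectl patch deployment apim-gateway --patch '
+spec:
+  template:
+    spec:
+      containers:
+      - name: apim-gateway
+        ports:
+        - containerPort: 4290   # rate limiting synchronization
+          protocol: UDP
+        - containerPort: 4291   # heartbeats to other instances
+          protocol: UDP
+'
+```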
+ ## Security The self-hosted gateway is able to run as non-root in Kubernetes allowing customers to run the gateway securely.
securityContext:
> [!WARNING] > When using local CA certificates, the self-hosted gateway must run with user ID (UID) `1001` in order to manage the CA certificates; otherwise, the gateway won't start up. + ## Next steps * To learn more about the self-hosted gateway, see [Self-hosted gateway overview](self-hosted-gateway-overview.md).
api-management Include Fragment Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/include-fragment-policy.md
The policy inserts the policy fragment as-is at the location you select in the p
| Attribute | Description | Required | Default | | | -- | -- | - |
-| fragment-id | A string. Policy expression allowed. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
+| fragment-id | A string. Specifies the identifier (name) of a policy fragment created in the API Management instance. | Yes | N/A |
## Usage
In the following example, the policy fragment named *myFragment* is added in the
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Mock Response Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/mock-response-policy.md
The `mock-response` policy, as the name implies, is used to mock APIs and operat
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation - [**Gateways:**](api-management-gateways-overview.md) dedicated, consumption, self-hosted
+### Usage notes
+
+- [Policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy.
+ ## Examples ```xml
The `mock-response` policy, as the name implies, is used to mock APIs and operat
* [API Management advanced policies](api-management-advanced-policies.md)
api-management Rate Limit By Key Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-by-key-policy.md
To understand the difference between rate limits and quotas, [see Rate limits an
- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation - [**Gateways:**](api-management-gateways-overview.md) dedicated, self-hosted
+### Usage notes
+
+* [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
++ ## Example In the following example, the rate limit of 10 calls per 60 seconds is keyed by the caller IP address. After each policy execution, the remaining calls allowed in the time period are stored in the variable `remainingCallsPerIP`.
api-management Rate Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/rate-limit-policy.md
Previously updated : 12/08/2022 Last updated : 01/11/2023
To understand the difference between rate limits and quotas, [see Rate limits an
* This policy can be used only once per policy definition. * Except where noted, [policy expressions](api-management-policy-expressions.md) can't be used in attribute values for this policy. * This policy is only applied when an API is accessed using a subscription key.
+* [!INCLUDE [api-management-self-hosted-gateway-rate-limit](../../includes/api-management-self-hosted-gateway-rate-limit.md)] [Learn more](how-to-self-hosted-gateway-on-kubernetes-in-production.md#request-throttling)
+ ## Example
app-service App Service Web Tutorial Rest Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/app-service-web-tutorial-rest-api.md
description: Learn how Azure App Service helps you host your RESTful APIs with C
ms.assetid: a820e400-06af-4852-8627-12b3db4a8e70 ms.devlang: csharp Previously updated : 04/28/2020 Last updated : 01/31/2023
Next, you enable the built-in CORS support in App Service for your API.
![CORS error in browser client](./media/app-service-web-tutorial-rest-api/azure-app-service-cors-error.png)
- Because of the domain mismatch between the browser app (`http://localhost:5000`) and remote resource (`http://<app_name>.azurewebsites.net`), and the fact that your API in App Service is not sending the `Access-Control-Allow-Origin` header, your browser has prevented cross-domain content from loading in your browser app.
+ The domain mismatch between the browser app (`http://localhost:5000`) and remote resource (`http://<app_name>.azurewebsites.net`) is recognized by your browser as a cross-origin resource request. Also, because your REST API in the App Service app doesn't send the `Access-Control-Allow-Origin` header, the browser has prevented cross-domain content from loading.
In production, your browser app would have a public URL instead of the localhost URL, but the way to enable CORS to a localhost URL is the same as a public URL.
In the Cloud Shell, enable CORS to your client's URL by using the [`az webapp co
az webapp cors add --resource-group myResourceGroup --name <app-name> --allowed-origins 'http://localhost:5000' ```
-You can set more than one client URL in `properties.cors.allowedOrigins` (`"['URL1','URL2',...]"`). You can also enable all client URLs with `"['*']"`.
-
-> [!NOTE]
-> If your app requires credentials such as cookies or authentication tokens to be sent, the browser may require the `ACCESS-CONTROL-ALLOW-CREDENTIALS` header on the response. To enable this in App Service, set `properties.cors.supportCredentials` to `true` in your CORS config. This cannot be enabled when `allowedOrigins` includes `'*'`.
-
-> [!NOTE]
-> Specifying `AllowAnyOrigin` and `AllowCredentials` is an insecure configuration and can result in cross-site request forgery. The CORS service returns an invalid CORS response when an app is configured with both methods.
+You can add multiple allowed origins by running the command multiple times or by adding a comma-separated list in `--allowed-origins`. To allow all origins, use `--allowed-origins '*'`.
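+
+To spot-check the configuration, you can send a request with an `Origin` header and inspect the response headers. This is a sketch; `/api/todo` assumes the sample API route used in this tutorial:
+
+```bash
+curl -i -H "Origin: http://localhost:5000" https://<app-name>.azurewebsites.net/api/todo
+# An allowed origin is echoed back in the Access-Control-Allow-Origin response header.
+```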
### Test CORS again
Refresh the browser app at `http://localhost:5000`. The error message in the **C
Congratulations, you're running an API in Azure App Service with CORS support.
-## App Service CORS vs. your CORS
+## Frequently asked questions
+
+- [App Service CORS vs. your CORS](#app-service-cors-vs-your-cors)
+- [How do I set allowed origins to a wildcard subdomain?](#how-do-i-set-allowed-origins-to-a-wildcard-subdomain)
+- [How do I enable the ACCESS-CONTROL-ALLOW-CREDENTIALS header on the response?](#how-do-i-enable-the-access-control-allow-credentials-header-on-the-response)
+
+#### App Service CORS vs. your CORS
You can use your own CORS utilities instead of App Service CORS for more flexibility. For example, you may want to specify different allowed origins for different routes or methods. Since App Service CORS lets you specify one set of accepted origins for all API routes and methods, you would want to use your own CORS code. See how ASP.NET Core does it at [Enabling Cross-Origin Requests (CORS)](/aspnet/core/security/cors).
The built-in App Service CORS feature does not have options to allow only specif
> >
+#### How do I set allowed origins to a wildcard subdomain?
+
+A wildcard subdomain like `*.contoso.com` is more restrictive than the wildcard origin `*`. However, the app's CORS management page in the Azure portal doesn't let you set a wildcard subdomain as an allowed origin. You can do it using the Azure CLI instead, like so:
+
+```azurecli-interactive
+az webapp cors add --resource-group <group-name> --name <app-name> --allowed-origins 'https://*.contoso.com'
+```
+
+#### How do I enable the ACCESS-CONTROL-ALLOW-CREDENTIALS header on the response?
+
+If your app requires credentials such as cookies or authentication tokens to be sent, the browser may require the `ACCESS-CONTROL-ALLOW-CREDENTIALS` header on the response. To enable this in App Service, set `properties.cors.supportCredentials` to `true`.
+
+```azurecli-interactive
+az resource update --name web --resource-group <group-name> \
+ --namespace Microsoft.Web --resource-type config \
+ --parent sites/<app-name> --set properties.cors.supportCredentials=true
+```
+
+This operation is not allowed when allowed origins include the wildcard origin `'*'`. Specifying `AllowAnyOrigin` and `AllowCredentials` is an insecure configuration and can result in cross-site request forgery. To allow credentials, try replacing the wildcard origin with [wildcard subdomains](#how-do-i-set-allowed-origins-to-a-wildcard-subdomain).
+ [!INCLUDE [cli-samples-clean-up](../../includes/cli-samples-clean-up.md)] <a name="next"></a>
app-service Configure Connect To Azure Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-connect-to-azure-storage.md
zone_pivot_groups: app-service-containers-code
# Mount Azure Storage as a local share in App Service ::: zone pivot="code-windows"
-> [!NOTE]
-> Mounting Azure Storage as a local share for App Service on Windows code (non-container) is currently in preview.
->
+ This guide shows how to mount Azure Storage Files as a network share in Windows code (non-container) in App Service. Only [Azure Files Shares](../storage/files/storage-how-to-use-files-portal.md) and [Premium Files Shares](../storage/files/storage-how-to-create-file-share.md) are supported. The benefits of custom-mounted storage include: - Configure persistent storage for your App Service app and manage the storage separately.
The following features are supported for Linux containers:
| **Share name** | Files share to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. | | **Mount path** | Directory inside your app service that you want to mount. Only `/mounts/pathname` is supported.|
+ | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
::: zone-end ::: zone pivot="container-windows" | Setting | Description |
The following features are supported for Linux containers:
| **Share name** | Files share to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. | | **Mount path** | Directory inside your Windows container that you want to mount. Do not use a root directory (`[C-Z]:\` or `/`) or the `home` directory (`[C-Z]:\home`, or `/home`) as it's not supported.|
+ | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
::: zone-end ::: zone pivot="container-linux" | Setting | Description |
The following features are supported for Linux containers:
| **Storage container** or **Share name** | Files share or Blobs container to mount. | | **Access key** (Advanced only) | [Access key](../storage/common/storage-account-keys-manage.md) for your storage account. | | **Mount path** | Directory inside the Linux container to mount to Azure Storage. Do not use `/` or `/home`.|
+ | **Deployment slot setting** | When checked, the storage mount settings also apply to deployment slots.|
::: zone-end # [Azure CLI](#tab/cli)
To validate that the Azure Storage is mounted successfully for the app:
## Best practices ::: zone pivot="code-windows"+
+- Azure Storage mounts can be configured as a virtual directory to serve static content. To configure the virtual directory, in the left navigation click **Configuration** > **Path Mappings** > **New Virtual Application or Directory**. Set the **Physical path** to the **Mount path** defined on the Azure Storage mount.
- To avoid potential issues related to latency, place the app and the Azure Storage account in the same Azure region. Note, however, if the app and Azure Storage account are in the same Azure region, and if you grant access from App Service IP addresses in the [Azure Storage firewall configuration](../storage/common/storage-network-security.md), then these IP restrictions are not honored. - In the Azure Storage account, avoid [regenerating the access key](../storage/common/storage-account-keys-manage.md) that's used to mount the storage in the app. The storage account contains two different keys. App Service stores the Azure storage account key. Use a stepwise approach to ensure that the storage mount remains available to the app during key regeneration. For example, assuming that you used **key1** to configure storage mount in your app:
app-service Configure Gateway Required Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-gateway-required-vnet-integration.md
+
+ Title: Configure gateway-required virtual network integration for your app
+description: Integrate your app in Azure App Service with Azure virtual networks using gateway-required virtual network integration.
++ Last updated : 01/20/2023+++
+# Configure gateway-required virtual network integration
+
+Gateway-required virtual network integration supports connecting to a virtual network in another region or to a classic virtual network. Gateway-required virtual network integration only works for Windows plans. We recommend using [regional virtual network integration](./overview-vnet-integration.md) to integrate with virtual networks.
+
+Gateway-required virtual network integration:
+
+* Enables an app to connect to only one virtual network at a time.
+* Enables up to five virtual networks to be integrated within an App Service plan.
+* Allows the same virtual network to be used by multiple apps in an App Service plan without affecting the total number that can be used by an App Service plan. If you have six apps using the same virtual network in the same App Service plan, that counts as one virtual network being used.
+* Is subject to the SLA of the gateway, which can affect the overall [SLA](https://azure.microsoft.com/support/legal/sla/).
+* Enables your apps to use the DNS that the virtual network is configured with.
+* Requires a virtual network route-based gateway configured with an SSTP point-to-site VPN before it can be connected to an app.
+
+You can't use gateway-required virtual network integration:
+
+* With a virtual network connected with ExpressRoute.
+* From a Linux app.
+* From a [Windows container](./quickstart-custom-container.md).
+* To access service endpoint-secured resources.
+* To resolve App Settings referencing a network protected Key Vault.
+* With a coexistence gateway that supports both ExpressRoute and point-to-site or site-to-site VPNs.
+
+[Regional virtual network integration](./overview-vnet-integration.md) mitigates the limitations mentioned above.
+
+## Set up a gateway in your Azure virtual network
+
+To create a gateway:
+
+1. [Create the VPN gateway and subnet](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#creategw). Select a route-based VPN type.
+
+1. [Set the point-to-site addresses](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#addresspool). If the gateway isn't in the basic SKU, then IKEV2 must be disabled in the point-to-site configuration and SSTP must be selected. The point-to-site address space must be in the RFC 1918 address blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
+
+If you create the gateway for use with gateway-required virtual network integration, you don't need to upload a certificate. Creating the gateway can take 30 minutes. You won't be able to integrate your app with your virtual network until the gateway is created.
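+
+If you prefer the CLI, a rough equivalent of these two steps looks like the following sketch. All resource names are placeholders, and the virtual network and public IP address are assumed to already exist:
+
+```azurecli
+az network vnet-gateway create --resource-group myResourceGroup --name myVpnGateway \
+    --vnet myVnet --public-ip-address myGatewayPublicIp \
+    --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1 \
+    --address-prefixes 172.16.201.0/24 --client-protocol SSTP
+```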
+
+## How gateway-required virtual network integration works
+
+Gateway-required virtual network integration is built on top of point-to-site VPN technology. Point-to-site VPNs limit network access to the virtual machine that hosts the app. Apps are restricted to send traffic out to the internet only through hybrid connections or through virtual network integration. When your app is configured with the portal to use gateway-required virtual network integration, a complex negotiation is managed on your behalf to create and assign certificates on the gateway and the application side. The result is that the workers used to host your apps can directly connect to the virtual network gateway in the selected virtual network.
++
+## Access on-premises resources
+
+Apps can access on-premises resources by integrating with virtual networks that have site-to-site connections. If you use gateway-required virtual network integration, update your on-premises VPN gateway routes with your point-to-site address blocks. When the site-to-site VPN is first set up, the scripts used to configure it should set up routes properly. If you add the point-to-site addresses after you create your site-to-site VPN, you need to update the routes manually. Details on how to do that vary per gateway and aren't described here.
+
+BGP routes from on-premises won't be propagated automatically into App Service. You need to manually propagate them on the point-to-site configuration using the steps in this document [Advertise custom routes for P2S VPN clients](../vpn-gateway/vpn-gateway-p2s-advertise-custom-routes.md).
+
+> [!NOTE]
+> The gateway-required virtual network integration feature doesn't integrate an app with a virtual network that has an ExpressRoute gateway. Even if the ExpressRoute gateway is configured in [coexistence mode](../expressroute/expressroute-howto-coexist-resource-manager.md), the virtual network integration doesn't work. If you need to access resources through an ExpressRoute connection, use the regional virtual network integration feature or an [App Service Environment](./environment/intro.md), which runs in your virtual network.
+
+## Peering
+
+If you use gateway-required virtual network integration with peering, you need to configure a few more items. To configure peering to work with your app:
+
+1. Add a peering connection on the virtual network your app connects to. When you add the peering connection, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow gateway transit**.
+1. Add a peering connection on the virtual network that's being peered to the virtual network you're connected to. When you add the peering connection on the destination virtual network, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow remote gateways**.
+1. Go to **App Service plan** > **Networking** > **VNet integration** in the portal. Select the virtual network your app connects to. Under the routing section, add the address range of the virtual network that's peered with the virtual network your app is connected to.
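+
+The first two peering steps can also be scripted. The following sketch uses hypothetical names, peering `myVnet` (the network your app integrates with) with `remoteVnet`:
+
+```azurecli
+az network vnet peering create --resource-group myResourceGroup \
+    --name myVnet-to-remoteVnet --vnet-name myVnet --remote-vnet remoteVnet \
+    --allow-vnet-access --allow-forwarded-traffic --allow-gateway-transit
+
+az network vnet peering create --resource-group myResourceGroup \
+    --name remoteVnet-to-myVnet --vnet-name remoteVnet --remote-vnet myVnet \
+    --allow-vnet-access --allow-forwarded-traffic --use-remote-gateways
+```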
+
+## Manage virtual network integration
+
+Connecting and disconnecting with a virtual network is at an app level. Operations that can affect virtual network integration across multiple apps are at the App Service plan level. From the app > **Networking** > **VNet integration** portal, you can get details on your virtual network. You can see similar information at the App Service plan level in the **App Service plan** > **Networking** > **VNet integration** portal.
+
+The only operation you can take in the app view of your virtual network integration instance is to disconnect your app from the virtual network it's currently connected to. To disconnect your app from a virtual network, select **Disconnect**. Your app is restarted when you disconnect from a virtual network. Disconnecting doesn't change your virtual network. The subnet or gateway isn't removed. If you then want to delete your virtual network, first disconnect your app from the virtual network and delete the resources in it, such as gateways.
+
+The App Service plan virtual network integration UI shows you all the virtual network integrations used by the apps in your App Service plan. To see details on each virtual network, select the virtual network you're interested in. There are two actions you can perform here for gateway-required virtual network integration:
+
+* **Sync network**: The sync network operation is used only for the gateway-required virtual network integration feature. Performing a sync network operation ensures that your certificates and network information are in sync. If you add or change the DNS of your virtual network, perform a sync network operation. This operation restarts any apps that use this virtual network. This operation won't work if you're using an app and a virtual network belonging to different subscriptions.
+* **Add routes**: Adding routes drives outbound traffic into your virtual network.
+
+The private IP assigned to the instance is exposed via the environment variable WEBSITE_PRIVATE_IP. The Kudu console UI also shows the list of environment variables available to the web app. This IP is an address from the point-to-site address pool configured on the virtual network gateway. The web app uses this IP to connect to resources through the Azure virtual network.
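+
+For example, you can print the value from your app's console (a minimal sketch, assuming the Kudu PowerShell debug console):
+
+```powershell
+# Prints the point-to-site IP assigned to this instance
+$env:WEBSITE_PRIVATE_IP
+```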
+
+> [!NOTE]
+> The value of WEBSITE_PRIVATE_IP is bound to change. However, it will be an IP within the point-to-site address range, so you'll need to allow access from the entire address range.
+>
+
+## Gateway-required virtual network integration routing
+
+The routes that are defined in your virtual network are used to direct traffic into your virtual network from your app. To send more outbound traffic into the virtual network, add those address blocks here. This capability only works with gateway-required virtual network integration. Route tables don't affect your app traffic when you use gateway-required virtual network integration.
+
+## Gateway-required virtual network integration certificates
+
+When gateway-required virtual network integration is enabled, there's a required exchange of certificates to ensure the security of the connection. Along with the certificates are the DNS configuration, routes, and other similar things that describe the network.
+
+If certificates or network information is changed, select **Sync Network**. When you select **Sync Network**, you cause a brief outage in connectivity between your app and your virtual network. Your app isn't restarted, but the loss of connectivity could cause your site to not function properly.
+
+## Pricing details
+
+Three charges are related to the use of the gateway-required virtual network integration feature:
+
+* **App Service plan pricing tier charges**: Your apps need to be in a Basic, Standard, Premium, Premium v2, or Premium v3 App Service plan. For more information on those costs, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
+* **Data transfer costs**: There's a charge for data egress, even if the virtual network is in the same datacenter. Those charges are described in [Data transfer pricing details](https://azure.microsoft.com/pricing/details/data-transfers/).
+* **VPN gateway costs**: There's a cost to the virtual network gateway that's required for the point-to-site VPN. For more information, see [VPN gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway/).
+
+## Troubleshooting
+
+Many things can prevent your app from reaching a specific host and port. Most of the time it's one of these things:
+
+* **A firewall is in the way.** If you have a firewall in the way, you hit the TCP timeout. The TCP timeout is 21 seconds in this case. Use the **tcpping** tool to test connectivity. TCP timeouts can be caused by many things beyond firewalls, but start there.
+* **DNS isn't accessible.** The DNS timeout is 3 seconds per DNS server. If you have two DNS servers, the timeout is 6 seconds. Use **nameresolver** to see if DNS is working. You can't use nslookup, because it doesn't use the DNS your virtual network is configured with. If DNS is inaccessible, a firewall or NSG might be blocking access to it, or the DNS server might be down.
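+
+For example, from your app's console (the hostname is a placeholder):
+
+```powershell
+# Resolve a host using the DNS servers your virtual network is configured with
+nameresolver <hostname>
+```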
+
+If those items don't resolve your problem, check for issues like the following:
+
+* Is the point-to-site address range in the RFC 1918 ranges (10.0.0.0-10.255.255.255 / 172.16.0.0-172.31.255.255 / 192.168.0.0-192.168.255.255)?
+* Does the gateway show as being up in the portal? If your gateway is down, then bring it back up.
+* Do certificates show as being in sync, or do you suspect that the network configuration was changed? If your certificates are out of sync or you suspect that a change was made to your virtual network configuration that wasn't synced with your App Service plans, select **Sync Network**.
+* If you're going across a VPN, is the on-premises gateway configured to route traffic back up to Azure? If you can reach endpoints in your virtual network but not on-premises, check your routes.
+* Are you trying to use a coexistence gateway that supports both point-to-site and ExpressRoute? Coexistence gateways aren't supported with virtual network integration.
+
+Debugging networking issues is a challenge because you can't see what's blocking access to a specific host:port combination. Some causes include:
+
+* You have a firewall up on your host that prevents access to the application port from your point-to-site IP range. Crossing subnets often requires public access.
+* Your target host is down.
+* Your application is down.
+* You had the wrong IP or hostname.
+* Your application is listening on a different port than what you expected. You can match your process ID with the listening port by using `netstat -aon` on the endpoint host.
+* Your network security groups are configured in such a manner that they prevent access to your application host and port from your point-to-site IP range.
+
+You don't know what address your app actually uses. It could be any address in the point-to-site address range, so you need to allow access from the entire address range.
+
+More debug steps include:
+
+* Connect to a VM in your virtual network and attempt to reach your resource host:port from there. To test for TCP access, use the PowerShell command **Test-NetConnection**. The syntax is:
+
+```powershell
+Test-NetConnection -ComputerName <hostname> [-Port <port>]
+```
+
+* Bring up an application on a VM and test access to that host and port from your app's console by using **tcpping**.
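+
+For example (hostname and port are placeholders):
+
+```powershell
+# Test TCP connectivity from the app to the target host and port
+tcpping <hostname>:<port>
+```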
+
+### On-premises resources
+
+If your app can't reach a resource on-premises, check if you can reach the resource from your virtual network. Use the **Test-NetConnection** PowerShell command to check for TCP access. If your VM can't reach your on-premises resource, your VPN or ExpressRoute connection might not be configured properly.
+
+If your virtual network-hosted VM can reach your on-premises system but your app can't, the cause is likely one of the following reasons:
+
+* Your routes aren't configured with your subnet or point-to-site address ranges in your on-premises gateway.
+* Your network security groups are blocking access for your point-to-site IP range.
+* Your on-premises firewalls are blocking traffic from your point-to-site IP range.
+* You're trying to reach a non-RFC 1918 address by using the regional virtual network integration feature.
+
+For more information, see [virtual network integration troubleshooting guide](/troubleshoot/azure/app-service/troubleshoot-vnet-integration-apps).
app-service Configure Network Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/configure-network-settings.md
ASE_NAME="[myAseName]"
RESOURCE_GROUP_NAME="[myResourceGroup]"
az appservice ase update --name $ASE_NAME -g $RESOURCE_GROUP_NAME --allow-new-private-endpoint-connection true
-az appservice ase list-addresses -n --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query properties.allowNewPrivateEndpointConnections
+az appservice ase list-addresses --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query allowNewPrivateEndpointConnections
```

The setting is also available for configuration through Azure portal at the App Service Environment configuration:
If you want to enable FTP access, you can run the following Azure CLI command:
```azurecli
ASE_NAME="[myAseName]"
RESOURCE_GROUP_NAME="[myResourceGroup]"
-az resource update --name $ASE_NAME/configurations/networking --set properties.ftpEnabled=true -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration"
+az appservice ase update --name $ASE_NAME -g $RESOURCE_GROUP_NAME --allow-incoming-ftp-connections true
-az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.ftpEnabled
+az appservice ase list-addresses --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query ftpEnabled
```

The setting is also available for configuration through Azure portal at the App Service Environment configuration:
Run the following Azure CLI command to enable remote debugging access:
```azurecli
ASE_NAME="[myAseName]"
RESOURCE_GROUP_NAME="[myResourceGroup]"
-az resource update --name $ASE_NAME/configurations/networking --set properties.RemoteDebugEnabled=true -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration"
+az appservice ase update --name $ASE_NAME -g $RESOURCE_GROUP_NAME --allow-remote-debugging true
-az resource show --name $ASE_NAME/configurations/networking -g $RESOURCE_GROUP_NAME --resource-type "Microsoft.Web/hostingEnvironments/networkingConfiguration" --query properties.remoteDebugEnabled
+az appservice ase list-addresses --name $ASE_NAME -g $RESOURCE_GROUP_NAME --query remoteDebugEnabled
```

The setting is also available for configuration through Azure portal at the App Service Environment configuration:
## Next steps

> [!div class="nextstepaction"]
-> [Create an App Service Environment from a template](create-from-template.md)
+> [Create an App Service Environment from a template](./how-to-create-from-template.md)
> [!div class="nextstepaction"]
> [Deploy your app to Azure App Service using FTP](../deploy-ftp.md)
app-service Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/create-from-template.md
Title: Create an ASE with ARM
+ Title: Create an ASE with Azure Resource Manager
description: Learn how to create an external or ILB App Service environment by using an Azure Resource Manager template. ms.assetid: 6eb7d43d-e820-4a47-818c-80ff7d3b6f8e Previously updated : 10/11/2021 Last updated : 01/20/2023 # Create an ASE by using an Azure Resource Manager template ## Overview
-> [!NOTE]
-> This article is about the App Service Environment v2 and App Service Environment v3 which are used with Isolated App Service plans
->
+
+> [!IMPORTANT]
+> This article is about App Service Environment v2 which is used with Isolated App Service plans. [App Service Environment v2 will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-v1-and-v2-retirement-announcement/). There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v2, please follow the steps in [this article](migration-alternatives.md) to migrate to the new version.
+>
Azure App Service environments (ASEs) can be created with an internet-accessible endpoint or an endpoint on an internal address in an Azure Virtual Network. When created with an internal endpoint, that endpoint is provided by an Azure component called an internal load balancer (ILB). The ASE on an internal IP address is called an ILB ASE. The ASE with a public endpoint is called an External ASE.

An ASE can be created by using the Azure portal or an Azure Resource Manager template. This article walks through the steps and syntax you need to create an External ASE or ILB ASE with Resource Manager templates. To learn how to create an ASEv2 in the Azure portal, see [Make an External ASE][MakeExternalASE] or [Make an ILB ASE][MakeILBASE].
-To learn how to create an ASEv3 in Azure portal, see [Create ASEv3][Create ASEv3].
-When you create an ASE in the Azure portal, you can create your virtual network at the same time or choose a preexisting virtual network to deploy into.
+When you create an ASE in the Azure portal, you can create your virtual network at the same time or choose a pre-existing virtual network to deploy into.
When you create an ASE from a template, you must start with:

* An Azure Virtual Network.
* A subnet in that virtual network. We recommend an ASE subnet size of `/24` with 256 addresses to accommodate future growth and scaling needs. After the ASE is created, you can't change the size.
-* When you creating an ASE into preexisting virtual network and subnet, the existing resource group name, virtual network name and subnet name are required.
* The subscription you want to deploy into.
* The location you want to deploy into.
-To automate your ASE creation, follow they guidelines in the sections below. If you are creating an ILB ASEv2 with custom dnsSuffix (for example, `internal-contoso.com`), there are a few more things to do.
+To automate your ASE creation, follow the guidelines in the following sections. If you're creating an ILB ASEv2 with a custom dnsSuffix (for example, `internal-contoso.com`), there are a few more things to do.
1. After your ILB ASE with a custom dnsSuffix is created, a TLS/SSL certificate that matches your ILB ASE domain should be uploaded.
## Create the ASE
-A Resource Manager template that creates an ASE and its associated parameters file is available on GitHub for [ASEv3][asev3quickstarts] and [ASEv2][quickstartasev2create].
-
-If you want to make an ASE, use these Resource Manager template [ASEv3][asev3quickstarts] or [ASEv2][quickstartilbasecreate] example. They cater to that use case. Most of the parameters in the *azuredeploy.parameters.json* file are common to the creation of ILB ASEs and External ASEs. The following list calls out parameters of special note, or that are unique, when you create an ILB ASE with an existing subnet.
-### ASEv3 parameters
-* *aseName*: Required. This parameter defines an unique ASE name.
-* *internalLoadBalancingMode*: Required. In most cases, set this to 3, which means both HTTP/HTTPS traffic on ports 80/443. If this property is set to 0, the HTTP/HTTPS traffic remains on the public VIP.
-* *zoneRedundant*: Required. In most cases, set this to false, which means the ASE will not be deployed into Availability Zones(AZ). Zonal ASEs can be deployed in some regions, you can refer to [this][AZ Support for ASEv3].
-* *dedicatedHostCount*: Required. In most cases, set this to 0, which means the ASE will be deployed as normal without dedicated hosts deployed.
-* *useExistingVnetandSubnet*: Required. Set to true if using an existing virtual network and subnet.
-* *vNetResourceGroupName*: Required if using an existing virtual network and subnet. This parameter defines the resource group name of the existing virtual network and subnet where ASE will reside.
-* *virtualNetworkName*: Required if using an existing virtual network and subnet. This parameter defines the virtual network name of the existing virtual network and subnet where ASE will reside.
-* *subnetName*: Required if using an existing virtual network and subnet. This parameter defines the subnet name of the existing virtual network and subnet where ASE will reside.
-* *createPrivateDNS*: Set to true if you want to create a private DNS zone after ASEv3 created. For an ILB ASE, when set this parameter to true, it will create a private DNS zone as ASE name with *appserviceenvironment.net* DNS suffix.
-### ASEv2 parameters
-* *aseName*: This parameter defines an unique ASE name.
+A Resource Manager template that creates an ASE and its associated parameters file is available on GitHub for [ASEv2][quickstartasev2create].
+
+If you want to make an ASE, use this Resource Manager template [ASEv2][quickstartilbasecreate] example. Most of the parameters in the *azuredeploy.parameters.json* file are common to the creation of ILB ASEs and External ASEs. The following list calls out parameters of special note, or that are unique, when you create an ILB ASE with an existing subnet.
+
+### Parameters
+* *aseName*: This parameter defines a unique ASE name.
* *location*: This parameter defines the location of the App Service Environment.
* *existingVirtualNetworkName*: This parameter defines the virtual network name of the existing virtual network and subnet where ASE will reside.
* *existingVirtualNetworkResourceGroup*: This parameter defines the resource group name of the existing virtual network and subnet where ASE will reside.
Obtain a valid TLS/SSL certificate by using internal certificate authorities, pu
* **Subject**: This attribute must be set to **.your-root-domain-here.com*.
* **Subject Alternative Name**: This attribute must include both **.your-root-domain-here.com* and **.scm.your-root-domain-here.com*. TLS connections to the SCM/Kudu site associated with each app use an address of the form *your-app-name.scm.your-root-domain-here.com*.
-With a valid TLS/SSL certificate in hand, two additional preparatory steps are needed. Convert/save the TLS/SSL certificate as a .pfx file. Remember that the .pfx file must include all intermediate and root certificates. Secure it with a password.
+With a valid TLS/SSL certificate in hand, two more preparatory steps are needed. Convert/save the TLS/SSL certificate as a .pfx file. Remember that the .pfx file must include all intermediate and root certificates. Secure it with a password.
The .pfx file needs to be converted into a base64 string because the TLS/SSL certificate is uploaded by using a Resource Manager template. Because Resource Manager templates are text files, the .pfx file must be converted into a base64 string so that it can be included as a parameter of the template.
The parameters in the *azuredeploy.parameters.json* file are listed here:
* *existingAseLocation*: Text string containing the Azure region where the ILB ASE was deployed. For example: "South Central US".
* *pfxBlobString*: The base64-encoded string representation of the .pfx file. Use the code snippet shown earlier and copy the string contained in "exportedcert.pfx.b64". Paste it in as the value of the *pfxBlobString* attribute.
* *password*: The password used to secure the .pfx file.
-* *certificateThumbprint*: The certificate's thumbprint. If you retrieve this value from PowerShell (for example, *$certificate.Thumbprint* from the earlier code snippet), you can use the value as is. If you copy the value from the Windows certificate dialog box, remember to strip out the extraneous spaces. The *certificateThumbprint* should look something like AF3143EB61D43F6727842115BB7F17BBCECAECAE.
+* *certificateThumbprint*: The certificate's thumbprint. If you retrieve this value from PowerShell (for example, `$certificate.Thumbprint` from the earlier code snippet), you can use the value as is. If you copy the value from the Windows certificate dialog box, remember to strip out the extraneous spaces. The *certificateThumbprint* should look something like AF3143EB61D43F6727842115BB7F17BBCECAECAE.
* *certificateName*: A friendly string identifier of your own choosing used to identify the certificate. The name is used as part of the unique Resource Manager identifier for the *Microsoft.Web/certificates* entity that represents the TLS/SSL certificate. The name *must* end with the following suffix: \_yourASENameHere_InternalLoadBalancingASE. The Azure portal uses this suffix as an indicator that the certificate is used to secure an ILB-enabled ASE.

An abbreviated example of *azuredeploy.parameters.json* is shown here:
$parameterPath="PATH\azuredeploy.parameters.json"
New-AzResourceGroupDeployment -Name "CHANGEME" -ResourceGroupName "YOUR-RG-NAME-HERE" -TemplateFile $templatePath -TemplateParameterFile $parameterPath
```
-It takes roughly 40 minutes per ASE front end to apply the change. For example, for a default-sized ASE that uses two front ends, the template takes around one hour and 20 minutes to complete. While the template is running, the ASE can't scale.
+It takes roughly 40 minutes per ASE front end to apply the change. For example, for a default-sized ASE that uses two front ends, the template takes around 1 hour and 20 minutes to complete. While the template is running, the ASE can't scale.
After the template finishes, apps on the ILB ASE can be accessed over HTTPS. The connections are secured by using the default TLS/SSL certificate. The default TLS/SSL certificate is used when apps on the ILB ASE are addressed by using a combination of the application name plus the default host name. For example, `https://mycustomapp.internal-contoso.com` uses the default TLS/SSL certificate for **.internal-contoso.com*.
However, just like apps that run on the public multitenant service, developers c
[MakeILBASE]: ./create-ilb-ase.md
[ASENetwork]: ./network-info.md
[UsingASE]: ./using-an-ase.md
-[UDRs]: ../../virtual-network/virtual-networks-udr-overview.md
-[NSGs]: ../../virtual-network/network-security-groups-overview.md
[ConfigureASEv1]: app-service-web-configure-an-app-service-environment.md
[ASEv1Intro]: app-service-app-service-environment-intro.md
-[mobileapps]: /previous-versions/azure/app-service-mobile/app-service-mobile-value-prop
-[Functions]: ../../azure-functions/index.yml
[Pricing]: https://azure.microsoft.com/pricing/details/app-service/
[ARMOverview]: ../../azure-resource-manager/management/overview.md
-[ConfigureSSL]: ../../app-service/configure-ssl-certificate.md
-[Kudu]: https://azure.microsoft.com/resources/videos/super-secret-kudu-debug-console-for-azure-web-sites/
-[ASEWAF]: ./integrate-with-application-gateway.md
-[AppGW]: ../../web-application-firewall/ag/ag-overview.md
-[ILBASEv1Template]: app-service-app-service-environment-create-ilb-ase-resourcemanager.md
-[Create ASEv3]: creation.md
-[asev3quickstarts]: https://azure.microsoft.com/resources/templates/web-app-asp-app-on-asev3-create
-[AZ Support for ASEv3]: zone-redundancy.md
+[ConfigureSSL]: ../../app-service/configure-ssl-certificate.md
app-service How To Create From Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-create-from-template.md
+
+ Title: Create an App Service Environment (ASE) v3 with Azure Resource Manager
+description: Learn how to create an external or ILB App Service Environment v3 by using an Azure Resource Manager template.
++ Last updated : 01/20/2023++
+# Create an App Service Environment by using an Azure Resource Manager template
+
+An App Service Environment can be created by using an Azure Resource Manager template, allowing you to do repeatable deployments.
+
+> [!NOTE]
+> This article is about App Service Environment v3, which is used with isolated v2 App Service plans.
+
+## Overview
+
+Azure App Service Environment can be created with an internet-accessible endpoint or an endpoint on an internal address in an Azure Virtual Network. When created with an internal endpoint, that endpoint is provided by an Azure component called an internal load balancer (ILB). The App Service Environment on an internal IP address is called an ILB ASE. The App Service Environment with a public endpoint is called an External ASE.
+
+An ASE can be created by using the Azure portal or an Azure Resource Manager template. This article walks through the steps and syntax you need to create an External ASE or ILB ASE with Resource Manager templates. Learn [how to create an App Service Environment in Azure portal](./creation.md).
+
+When you create an App Service Environment in the Azure portal, you can create your virtual network at the same time or choose a pre-existing virtual network to deploy into.
+
+When you create an App Service Environment from a template, you must start with:
+
+* An Azure Virtual Network.
+* A subnet in that virtual network. We recommend a subnet size of `/24` with 256 addresses to accommodate future growth and scaling needs. After the App Service Environment is created, you can't change the size.
+* The location you want to deploy into.
+
+## Configuring the App Service Environment
+
+The basic Resource Manager template that creates an App Service Environment looks like this:
+
+```json
+{
+ "type": "Microsoft.Web/hostingEnvironments",
+ "apiVersion": "2022-03-01",
+ "name": "[parameters('aseName')]",
+ "location": "[resourceGroup().location]",
+ "kind": "ASEV3",
+ "properties": {
+ "internalLoadBalancingMode": "Web, Publishing",
+ "virtualNetwork": {
+ "id": "[parameters('subnetResourceId')]"
+ },
+ "networkingConfiguration": { },
+ "customDnsSuffixConfiguration": { }
+ },
+ "identity": {
+ "type": "SystemAssigned"
+ }
+}
+```
+
+In addition to the core properties, there are other configuration options that you can use to configure your App Service Environment.
+
+* *name*: Required. This parameter defines a unique App Service Environment name.
+* *virtualNetwork -> id*: Required. Specifies the resource ID of the subnet. The subnet must be empty and delegated to `Microsoft.Web/hostingEnvironments`.
+* *internalLoadBalancingMode*: Required. In most cases, set this property to "Web, Publishing", which means both HTTP/HTTPS traffic and FTP traffic is on an internal VIP (Internal Load Balancer). If this property is set to "None", all traffic remains on the public VIP (External Load Balancer).
+* *zoneRedundant*: Optional. Set to true or false to define whether the App Service Environment is deployed into Availability Zones (AZ). For more information, see [zone redundancy](./zone-redundancy.md).
+* *dedicatedHostCount*: Optional. In most cases, set this property to 0 or leave it out. You can set it to 2 if you want to deploy your App Service Environment with physical hardware isolation on dedicated hosts.
+* *upgradePreference*: Optional. Defines whether the upgrade starts automatically or whether you're given a 15-day window to start the upgrade yourself. Valid values are "None", "Early", "Late", and "Manual". For more information, see [upgrade preference](./how-to-upgrade-preference.md).
+* *clusterSettings*: Optional. For more information, see [cluster settings](./app-service-app-service-environment-custom-settings.md).
+* *networkingConfiguration -> allowNewPrivateEndpointConnections*: Optional. For more information, see [networking configuration](./configure-network-settings.md#allow-new-private-endpoint-connections).
+* *networkingConfiguration -> remoteDebugEnabled*: Optional. For more information, see [networking configuration](./configure-network-settings.md#remote-debugging-access).
+* *networkingConfiguration -> ftpEnabled*: Optional. For more information, see [networking configuration](./configure-network-settings.md#ftp-access).
+* *networkingConfiguration -> inboundIpAddressOverride*: Optional. Allows you to create an App Service Environment with your own Azure public IP address (specify the resource ID) or define a static IP for ILB deployments. This setting can't be changed after the App Service Environment is created.
+* *customDnsSuffixConfiguration*: Optional. Allows you to specify a custom domain suffix for the App Service Environment. Requires a valid certificate from a Key Vault and access using a managed identity. For more information about the specific parameters, see [configure a custom domain suffix](./how-to-custom-domain-suffix.md).
+
+### Deploying the App Service Environment
+
+After you create the ARM template (for example, named *azuredeploy.json*) and optionally a parameters file (for example, named *azuredeploy.parameters.json*), you can create the App Service Environment by using the following Azure CLI snippet. Change the file paths to match the Resource Manager template file locations on your machine. Remember to supply your own value for the resource group name:
+
+```azurecli
+templatePath="PATH/azuredeploy.json"
+parameterPath="PATH/azuredeploy.parameters.json"
+
+az deployment group create --resource-group "YOUR-RG-NAME-HERE" --template-file $templatePath --parameters $parameterPath
+```
+
+It takes about two hours for the App Service Environment to be created.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Using an App Service Environment v3](./using.md)
+
+> [!div class="nextstepaction"]
+> [App Service Environment v3 Networking](./networking.md)
+
+> [!div class="nextstepaction"]
+> [Certificates in App Service Environment v3](./overview-certificates.md)
app-service How To Custom Domain Suffix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/how-to-custom-domain-suffix.md
The custom domain suffix defines a root domain that can be used by the App Servi
The custom domain suffix is for the App Service Environment. This feature is different from a custom domain binding on an App Service. For more information on custom domain bindings, see [Map an existing custom DNS name to Azure App Service](../app-service-web-tutorial-custom-domain.md).
-If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for **.scm.CUSTOM-DOMAIN*, the scm site will then also be reachable from *APP-NAME.scm.CUSTOM-DOMAIN*. You can only access scm over custom domain using basic authentication. Single sign-on is only possible with the default root domain.
+If the certificate used for the custom domain suffix contains a Subject Alternate Name (SAN) entry for **.scm.CUSTOM-DOMAIN*, the scm site will then also be reachable from *APP-NAME.scm.CUSTOM-DOMAIN*. You can only access scm over the custom domain using basic authentication. Single sign-on is only possible with the default root domain.
+
+Unlike earlier versions, the FTPS endpoints for your App Services on your App Service Environment v3 can only be reached using the default domain suffix.
## Prerequisites
app-service Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/overview.md
A few features that were available in earlier versions of App Service Environmen
- Monitor your traffic with Network Watcher or network security group (NSG) flow logs.
- Perform a backup and restore operation on a storage account behind a firewall.
+- Access the FTPS endpoint using a custom domain suffix.
## Pricing
app-service Overview Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-security.md
App Service authentication and authorization support multiple authentication pro
When authenticating against a back-end service, App Service provides two different mechanisms depending on your need:

- **Service identity** - Sign in to the remote resource using the identity of the app itself. App Service lets you easily create a [managed identity](overview-managed-identity.md), which you can use to authenticate with other services, such as [Azure SQL Database](/azure/sql-database/) or [Azure Key Vault](../key-vault/index.yml). For an end-to-end tutorial of this approach, see [Secure Azure SQL Database connection from App Service using a managed identity](tutorial-connect-msi-sql-database.md).
-- **On-behalf-of (OBO)** - Make delegated access to remote resources on behalf of the user. With Azure Active Directory as the authentication provider, your App Service app can perform delegated sign-in to a remote service, such as [Microsoft Graph API](../active-directory/develop/microsoft-graph-intro.md) or a remote API app in App Service. For an end-to-end tutorial of this approach, see [Authenticate and authorize users end-to-end in Azure App Service](tutorial-auth-aad.md).
+- **On-behalf-of (OBO)** - Make delegated access to remote resources on behalf of the user. With Azure Active Directory as the authentication provider, your App Service app can perform delegated sign-in to a remote service, such as [Microsoft Graph](/graph/overview) or a remote API app in App Service. For an end-to-end tutorial of this approach, see [Authenticate and authorize users end-to-end in Azure App Service](tutorial-auth-aad.md).
## Connectivity to remote resources
app-service Overview Vnet Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-vnet-integration.md
Title: Integrate your app with an Azure virtual network
description: Integrate your app in Azure App Service with Azure virtual networks. Previously updated : 10/05/2022 Last updated : 01/20/2023
-# Integrate your app with an Azure virtual network
+# <a name="regional-virtual-network-integration"></a>Integrate your app with an Azure virtual network
-This article describes the Azure App Service virtual network integration feature and how to set it up with apps in [App Service](./overview.md). With [Azure virtual networks](../virtual-network/virtual-networks-overview.md), you can place many of your Azure resources in a non-internet-routable network. The App Service virtual network integration feature enables your apps to access resources in or through a virtual network. Virtual network integration doesn't enable your apps to be accessed privately.
+This article describes the Azure App Service virtual network integration feature and how to set it up with apps in [App Service](./overview.md). With [Azure virtual networks](../virtual-network/virtual-networks-overview.md), you can place many of your Azure resources in a non-internet-routable network. The App Service virtual network integration feature enables your apps to access resources in or through a virtual network.
+
+>[!NOTE]
+> Information about Gateway-required virtual network integration has [moved to a new location](./configure-gateway-required-vnet-integration.md).
App Service has two variations:
+* The dedicated compute pricing tiers, which include the Basic, Standard, Premium, Premium v2, and Premium v3.
+* The App Service Environment, which deploys directly into your virtual network with dedicated supporting infrastructure and uses the Isolated and Isolated v2 pricing tiers.
-Learn [how to enable virtual network integration](./configure-vnet-integration-enable.md).
+The virtual network integration feature is used in Azure App Service dedicated compute pricing tiers. If your app is in an [App Service Environment](./environment/overview.md), it's already integrated with a virtual network and doesn't require you to configure the virtual network integration feature to reach resources in the same virtual network. For more information on all the networking features, see [App Service networking features](./networking-features.md).
+
+Virtual network integration gives your app access to resources in your virtual network, but it doesn't grant inbound private access to your app from the virtual network. Private site access refers to making an app accessible only from a private network, such as from within an Azure virtual network. Virtual network integration is used only to make outbound calls from your app into your virtual network. Refer to [private endpoint](./networking/private-endpoint.md) for inbound private access.
+
+The virtual network integration feature:
-## Regional virtual network integration
+* Requires a [supported Basic or Standard](./overview-vnet-integration.md#limitations), Premium, Premium v2, Premium v3, or Elastic Premium App Service pricing tier.
+* Supports TCP and UDP.
+* Works with App Service apps, function apps, and Logic apps.
-Regional virtual network integration supports connecting to a virtual network in the same region and doesn't require a gateway. Using regional virtual network integration enables your app to access:
+There are some things that virtual network integration doesn't support, like:
+
+* Mounting a drive.
+* Windows Server Active Directory domain join.
+* NetBIOS.
+
+Virtual network integration supports connecting to a virtual network in the same region. Using virtual network integration enables your app to access:
* Resources in the virtual network you're integrated with.
* Resources in virtual networks peered to the virtual network your app is integrated with, including global peering connections.
Regional virtual network integration supports connecting to a virtual network in
* Service endpoint-secured services.
* Private endpoint-enabled services.
-When you use regional virtual network integration, you can use the following Azure networking features:
+When you use virtual network integration, you can use the following Azure networking features:
* **Network security groups (NSGs)**: You can block outbound traffic with an NSG that's placed on your integration subnet. The inbound rules don't apply because you can't use virtual network integration to provide inbound access to your app.
* **Route tables (UDRs)**: You can place a route table on the integration subnet to send outbound traffic where you want.
+* **NAT gateway**: You can use [NAT gateway](./networking/nat-gateway-integration.md) to get a dedicated outbound IP and mitigate SNAT port exhaustion.
-### How regional virtual network integration works
+Learn [how to enable virtual network integration](./configure-vnet-integration-enable.md).
-Apps in App Service are hosted on worker roles. Regional virtual network integration works by mounting virtual interfaces to the worker roles with addresses in the delegated subnet. Because the from address is in your virtual network, it can access most things in or through your virtual network like a VM in your virtual network would. The networking implementation is different than running a VM in your virtual network. That's why some networking features aren't yet available for this feature.
+## <a name="how-regional-virtual-network-integration-works"></a> How virtual network integration works
+Apps in App Service are hosted on worker roles. Virtual network integration works by mounting virtual interfaces to the worker roles with addresses in the delegated subnet. Because the from address is in your virtual network, it can access most things in or through your virtual network like a VM in your virtual network would.
-When regional virtual network integration is enabled, your app makes outbound calls through your virtual network. The outbound addresses that are listed in the app properties portal are the addresses still used by your app. However, if your outbound call is to a virtual machine or private endpoint in the integration virtual network or peered virtual network, the outbound address will be an address from the integration subnet. The private IP assigned to an instance is exposed via the environment variable, WEBSITE_PRIVATE_IP.
-When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network, and outbound traffic to the internet will go through the same channels as normal.
+When virtual network integration is enabled, your app makes outbound calls through your virtual network. The outbound addresses that are listed in the app properties portal are the addresses still used by your app. However, if your outbound call is to a virtual machine or private endpoint in the integration virtual network or peered virtual network, the outbound address will be an address from the integration subnet. The private IP assigned to an instance is exposed via the environment variable, WEBSITE_PRIVATE_IP.
-The feature supports two virtual interface per worker. Two virtual interfaces per worker means two regional virtual network integrations per App Service plan. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet. If you need an app to connect to additional virtual networks or additional subnets in the same virtual network, you need to create another App Service plan. The virtual interfaces used isn't a resource that customers have direct access to.
+When all traffic routing is enabled, all outbound traffic is sent into your virtual network. If all traffic routing isn't enabled, only private traffic (RFC1918) and service endpoints configured on the integration subnet will be sent into the virtual network. Outbound traffic to the internet will be routed directly from the app.
-### Subnet requirements
+The feature supports two virtual interfaces per worker. Two virtual interfaces per worker means two virtual network integrations per App Service plan. The apps in the same App Service plan can only use one of the virtual network integrations to a specific subnet. If you need an app to connect to more virtual networks or more subnets in the same virtual network, you need to create another App Service plan. The virtual interfaces used aren't resources that customers have direct access to.
-Virtual network integration depends on a dedicated subnet. When you create a subnet, the Azure subnet loses five IPs from the start. One address is used from the integration subnet for each plan instance. If you scale your app to four instances, then four addresses are used.
+## Subnet requirements
-When you scale up or down in size, the required address space is doubled for a short period of time. This change affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the effect this has on horizontal scale.
+Virtual network integration depends on a dedicated subnet. When you create a subnet, Azure reserves five IP addresses in the subnet from the start. One address is used from the integration subnet for each plan instance. If you scale your app to four instances, then four addresses are used.
+
+When you scale up or down in size, the required address space is doubled for a short period of time. The scale operation affects the real, available supported instances for a given subnet size. The following table shows both the maximum available addresses per CIDR block and the effect the available addresses has on horizontal scale.
| CIDR block size | Maximum available addresses | Maximum horizontal scale (instances)<sup>*</sup> |
|--|--|--|
<sup>*</sup>Assumes that you'll need to scale up or down in either size or SKU at some point.
-Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal you can use a /28 subnet.
+Because subnet size can't be changed after assignment, use a subnet that's large enough to accommodate whatever scale your app might reach. To avoid any issues with subnet capacity, use a `/26` with 64 addresses. When you're creating subnets in Azure portal as part of integrating with the virtual network, a minimum size of /27 is required. If the subnet already exists before integrating through the portal, you can use a /28 subnet.
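+
+For illustration, a minimal Azure CLI sketch (all names are placeholders) that creates a `/26` integration subnet delegated to App Service:
+
+```azurecli
+# Create an integration subnet delegated to Microsoft.Web/serverFarms
+az network vnet subnet create --resource-group <rg> --vnet-name <vnet-name> --name <integration-subnet> --address-prefixes 10.0.1.0/26 --delegations Microsoft.Web/serverFarms
+```
+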
>[!NOTE]
> Windows Containers uses an additional IP address per app for each App Service plan instance, and you need to size the subnet accordingly. If you have, for example, 10 Windows container App Service plan instances with 4 apps running, you'll need 50 IP addresses, plus additional addresses to support horizontal (up/down) scale.

When you want your apps in your plan to reach a virtual network that's already connected to by apps in another plan, select a different subnet than the one being used by the pre-existing virtual network integration.
-### Permissions
+## Permissions
-You must have at least the following Role-based access control permissions on the subnet or at a higher level to configure regional virtual network integration through Azure portal, CLI or when setting the `virtualNetworkSubnetId` site property directly:
+You must have at least the following Role-based access control permissions on the subnet or at a higher level to configure virtual network integration through the Azure portal, the CLI, or when setting the `virtualNetworkSubnetId` site property directly:
| Action | Description |
|-|-|
If the virtual network is in a different subscription than the app, you must ensure that the subscription with the virtual network is registered for the `Microsoft.Web` resource provider. You can explicitly register the provider [by following this documentation](../azure-resource-manager/management/resource-providers-and-types.md#register-resource-provider), but it will also automatically be registered when creating the first web app in a subscription.
-### Routes
+## Routes
-You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure regional virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and app settings with Key Vault reference. [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out.
+You can control what traffic goes through the virtual network integration. There are three types of routing to consider when you configure virtual network integration. [Application routing](#application-routing) defines what traffic is routed from your app and into the virtual network. [Configuration routing](#configuration-routing) affects operations that happen before or during startup of your app. Examples are container image pull and [app settings with Key Vault reference](./app-service-key-vault-references.md). [Network routing](#network-routing) is the ability to handle how both app and configuration traffic are routed from your virtual network and out.
Through application routing or configuration routing options, you can configure what traffic will be sent through the virtual network integration. Traffic is only subject to [network routing](#network-routing) if it's sent through the virtual network integration.
-#### Application routing
+### Application routing
Application routing applies to traffic that is sent from your app after it has been started. See [configuration routing](#configuration-routing) for traffic during startup. When you configure application routing, you can either route all traffic or only private traffic (also known as [RFC1918](https://datatracker.ietf.org/doc/html/rfc1918#section-3) traffic) into your virtual network. You configure this behavior through the **Route All** setting. If **Route All** is disabled, your app only routes private traffic into your virtual network. If you want to route all your outbound app traffic into your virtual network, make sure that **Route All** is enabled.
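
One way to toggle this setting from the CLI (a minimal sketch; the app and resource group names are placeholders):

```azurecli
# Enable Route All so all outbound app traffic goes into the virtual network
az webapp config set --name <app-name> --resource-group <rg> --vnet-route-all-enabled true
```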
Learn [how to configure application routing](./configure-vnet-integration-routing.md#configure-application-routing).

> [!NOTE]
-> Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before 1. August 2022 you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. An example could be to add a temporary subnet, associate/dissociate an NSG temporarily or configure a service endpoint temporarily. For more information and troubleshooting see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
+> Outbound SMTP connectivity (port 25) is supported for App Service when the SMTP traffic is routed through the virtual network integration. The supportability is determined by a setting on the subscription where the virtual network is deployed. For virtual networks/subnets created before 1 August 2022, you need to initiate a temporary configuration change to the virtual network/subnet for the setting to be synchronized from the subscription. An example could be to add a temporary subnet, associate/dissociate an NSG temporarily, or configure a service endpoint temporarily. For more information, see [Troubleshoot outbound SMTP connectivity problems in Azure](../virtual-network/troubleshoot-outbound-smtp-connectivity.md).
-#### Configuration routing
+### Configuration routing
When you're using virtual network integration, you can configure how parts of the configuration traffic are managed. By default, configuration traffic will go directly over the public route, but for the mentioned individual components, you can actively configure it to be routed through the virtual network integration.
-##### Content share
+#### Content share
Bringing your own storage for content is often used in Functions, where a [content share](./../azure-functions/configure-networking-how-to.md#restrict-your-storage-account-to-a-virtual-network) is configured as part of the Functions app.
To route content share traffic through the virtual network integration, you must
In addition to configuring the routing, you must also ensure that any firewall or network security group configured on traffic from the subnet allows traffic to ports 443 and 445.
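
One possible sketch, assuming the legacy `WEBSITE_CONTENTOVERVNET` app setting (the linked article describes the current routing setting):

```azurecli
# Route the content share traffic through the virtual network integration (legacy app setting)
az webapp config appsettings set --name <app-name> --resource-group <rg> --settings WEBSITE_CONTENTOVERVNET=1
```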
-##### Container image pull
+#### Container image pull
When using custom containers, you can pull the container over the virtual network integration. To route the container pull traffic through the virtual network integration, you must ensure that the routing setting is configured. Learn [how to configure image pull routing](./configure-vnet-integration-routing.md#container-image-pull).
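
A sketch of one way to enable this, assuming the `WEBSITE_PULL_IMAGE_OVER_VNET` app setting (the linked article describes the supported configuration):

```azurecli
# Pull the custom container image through the virtual network integration
az webapp config appsettings set --name <app-name> --resource-group <rg> --settings WEBSITE_PULL_IMAGE_OVER_VNET=true
```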
-##### App settings using Key Vault references
+#### App settings using Key Vault references
App settings using Key Vault references will attempt to get secrets over the public route. If the Key Vault is blocking public traffic and the app is using virtual network integration, an attempt will then be made to get the secrets through the virtual network integration.
> * Configuring SSL/TLS certificates from private Key Vaults is currently not supported.
> * App Service Logs to private storage accounts is currently not supported. We recommend using Diagnostics Logging and allowing Trusted Services for the storage account.
-#### Network routing
+### Network routing
You can use route tables to route outbound traffic from your app without restriction. Common destinations can include firewall devices or gateways. You can also use a [network security group](../virtual-network/network-security-groups-overview.md) (NSG) to block outbound traffic to resources in your virtual network or the internet. An NSG that's applied to your integration subnet is in effect regardless of any route tables applied to your integration subnet.
-Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes won't affect replies to inbound app requests and inbound rules in an NSG don't apply to your app because virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the Access Restrictions feature.
+Route tables and network security groups only apply to traffic routed through the virtual network integration. See [application routing](#application-routing) and [configuration routing](#configuration-routing) for details. Routes won't apply to replies from inbound app requests and inbound rules in an NSG don't apply to your app. Virtual network integration affects only outbound traffic from your app. To control inbound traffic to your app, use the [access restrictions](./overview-access-restrictions.md) feature or [private endpoints](./networking/private-endpoint.md).
-When configuring network security groups or route tables that affect outbound traffic, you must make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, this could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoint, for example Azure Active Directory. If you're using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`.
+When configuring network security groups or route tables that apply to outbound traffic, you must make sure you consider your application dependencies. Application dependencies include endpoints that your app needs during runtime. Besides APIs and services the app is calling, these endpoints could also be derived endpoints like certificate revocation list (CRL) check endpoints and identity/authentication endpoints, for example Azure Active Directory. If you're using [continuous deployment in App Service](./deploy-continuous-deployment.md), you might also need to allow endpoints depending on type and language. Specifically for [Linux continuous deployment](https://github.com/microsoft/Oryx/blob/main/doc/hosts/appservice.md#network-dependencies), you'll need to allow `oryx-cdn.microsoft.io:443`.
When you want to route outbound traffic on-premises, you can use a route table to send outbound traffic to your Azure ExpressRoute gateway. If you do route traffic to a gateway, set routes in the external network to send any replies back. Border Gateway Protocol (BGP) routes also affect your app traffic. If you have BGP routes from something like an ExpressRoute gateway, your app outbound traffic is affected. Similar to user-defined routes, BGP routes affect traffic according to your routing scope setting.
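
For example, a minimal Azure CLI sketch that sends all outbound traffic from the integration subnet to a network virtual appliance (all names and the appliance IP are placeholders):

```azurecli
# Create a route table with a default route to a firewall appliance
az network route-table create --resource-group <rg> --name app-integration-udr
az network route-table route create --resource-group <rg> --route-table-name app-integration-udr --name default-to-firewall --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address <firewall-ip>

# Associate the route table with the integration subnet
az network vnet subnet update --resource-group <rg> --vnet-name <vnet-name> --name <integration-subnet> --route-table app-integration-udr
```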
-### Service endpoints
+## Service endpoints
-Regional virtual network integration enables you to reach Azure services that are secured with service endpoints. To access a service endpoint-secured service, follow these steps:
+Virtual network integration enables you to reach Azure services that are secured with service endpoints. To access a service endpoint-secured service, follow these steps:
-1. Configure regional virtual network integration with your web app to connect to a specific subnet for integration.
+1. Configure virtual network integration with your web app to connect to a specific subnet for integration.
1. Go to the destination service and configure service endpoints against the integration subnet.
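
For example, a minimal Azure CLI sketch of both steps (all names and the target service are placeholders):

```azurecli
# 1. Integrate the app with the virtual network
az webapp vnet-integration add --name <app-name> --resource-group <rg> --vnet <vnet-name> --subnet <integration-subnet>

# 2. Enable a service endpoint for the destination service on the integration subnet
az network vnet subnet update --resource-group <rg> --vnet-name <vnet-name> --name <integration-subnet> --service-endpoints Microsoft.KeyVault
```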
-### Private endpoints
+## Private endpoints
If you want to make calls to [private endpoints](./networking/private-endpoint.md), make sure that your DNS lookups resolve to the private endpoint. You can enforce this behavior in one of the following ways:
* Manage the private endpoint in the DNS server used by your app. To manage the configuration, you must know the private endpoint IP address. Then point the endpoint you're trying to reach to that address by using an A record.
* Configure your own DNS server to forward to Azure DNS private zones.
-### Azure DNS private zones
+## Azure DNS private zones
After your app integrates with your virtual network, it uses the same DNS server that your virtual network is configured with. If no custom DNS is specified, it uses Azure default DNS and any private zones linked to the virtual network.
-### Limitations
+## Limitations
-There are some limitations with using regional virtual network integration:
+There are some limitations with using virtual network integration:
* The feature is available from all App Service deployments in Premium v2 and Premium v3. It's also available in Basic and Standard tier but only from newer App Service deployments. If you're on an older deployment, you can only use the feature from a Premium v2 App Service plan. If you want to make sure you can use the feature in a Basic or Standard App Service plan, create your app in a Premium v3 App Service plan. Those plans are only supported on our newest deployments. You can scale down if you want after the plan is created. * The feature can't be used by Isolated plan apps that are in an App Service Environment.
There are some limitations with using regional virtual network integration:
* The integration subnet can't have [service endpoint policies](../virtual-network/virtual-network-service-endpoint-policies-overview.md) enabled.
* The integration subnet can be used by only one App Service plan.
* You can't delete a virtual network with an integrated app. Remove the integration before you delete the virtual network.
-* You can have two regional virtual network integration per App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration.
-* You can't change the subscription of an app or a plan while there's an app that's using regional virtual network integration.
-
-## Gateway-required virtual network integration
-
-Gateway-required virtual network integration supports connecting to a virtual network in another region or to a classic virtual network. Gateway-required virtual network integration:
-
-* Enables an app to connect to only one virtual network at a time.
-* Enables up to five virtual networks to be integrated within an App Service plan.
-* Allows the same virtual network to be used by multiple apps in an App Service plan without affecting the total number that can be used by an App Service plan. If you have six apps using the same virtual network in the same App Service plan that counts as one virtual network being used.
-* SLA on the gateway can affect the overall [SLA](https://azure.microsoft.com/support/legal/sla/).
-* Enables your apps to use the DNS that the virtual network is configured with.
-* Requires a virtual network route-based gateway configured with an SSTP point-to-site VPN before it can be connected to an app.
-
-You can't use gateway-required virtual network integration:
-
-* With a virtual network connected with ExpressRoute.
-* From a Linux app.
-* From a [Windows container](./quickstart-custom-container.md).
-* To access service endpoint-secured resources.
-* To resolve App Settings referencing a network protected Key Vault.
-* With a coexistence gateway that supports both ExpressRoute and point-to-site or site-to-site VPNs.
-
-### Set up a gateway in your Azure virtual network
-
-To create a gateway:
-
-1. [Create the VPN gateway and subnet](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#creategw). Select a route-based VPN type.
-
-1. [Set the point-to-site addresses](../vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md#addresspool). If the gateway isn't in the basic SKU, then IKEV2 must be disabled in the point-to-site configuration and SSTP must be selected. The point-to-site address space must be in the RFC 1918 address blocks 10.0.0.0/8, 172.16.0.0/12, and 192.168.0.0/16.
-
-If you create the gateway for use with gateway-required virtual network integration, you don't need to upload a certificate. Creating the gateway can take 30 minutes. You won't be able to integrate your app with your virtual network until the gateway is created.
-
-### How gateway-required virtual network integration works
+* You can't have more than two virtual network integrations per App Service plan. Multiple apps in the same App Service plan can use the same virtual network integration. Currently, you can configure only the first integration through the Azure portal. The second integration must be created by using Azure Resource Manager templates or Azure CLI commands (see the sketch after this list).
+* You can't change the subscription of an app or a plan while there's an app that's using virtual network integration.
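For example, a minimal Azure CLI sketch (resource names are hypothetical); `list` shows what's already configured before you add the second integration:

```console
# Sketch only - resource names are hypothetical.
# List the integrations already configured for an app in the plan.
az webapp vnet-integration list --resource-group my-rg --name my-app
# Connect an app to the second virtual network.
az webapp vnet-integration add \
  --resource-group my-rg \
  --name my-other-app \
  --vnet my-second-vnet \
  --subnet second-integration-subnet
```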
-Gateway-required virtual network integration is built on top of point-to-site VPN technology. Point-to-site VPNs limit network access to the virtual machine that hosts the app. Apps are restricted to send traffic out to the internet only through hybrid connections or through virtual network integration. When your app is configured with the portal to use gateway-required virtual network integration, a complex negotiation is managed on your behalf to create and assign certificates on the gateway and the application side. The result is that the workers used to host your apps can directly connect to the virtual network gateway in the selected virtual network.
+## Access on-premises resources
+No extra configuration is required for the virtual network integration feature to reach through your virtual network to on-premises resources. You simply need to connect your virtual network to on-premises resources by using ExpressRoute or a site-to-site VPN.
-### Access on-premises resources
+## Peering
-Apps can access on-premises resources by integrating with virtual networks that have site-to-site connections. If you use gateway-required virtual network integration, update your on-premises VPN gateway routes with your point-to-site address blocks. When the site-to-site VPN is first set up, the scripts used to configure it should set up routes properly. If you add the point-to-site addresses after you create your site-to-site VPN, you need to update the routes manually. Details on how to do that vary per gateway and aren't described here.
-
-BGP routes from on-premises won't be propagated automatically into App Service. You need to manually propagate them on the point-to-site configuration using the steps in this document [Advertise custom routes for P2S VPN clients](../vpn-gateway/vpn-gateway-p2s-advertise-custom-routes.md).
-
-No extra configuration is required for the regional virtual network integration feature to reach through your virtual network to on-premises resources. You simply need to connect your virtual network to on-premises resources by using ExpressRoute or a site-to-site VPN.
-
-> [!NOTE]
-> The gateway-required virtual network integration feature doesn't integrate an app with a virtual network that has an ExpressRoute gateway. Even if the ExpressRoute gateway is configured in [coexistence mode](../expressroute/expressroute-howto-coexist-resource-manager.md), the virtual network integration doesn't work. If you need to access resources through an ExpressRoute connection, use the regional virtual network integration feature or an [App Service Environment](./environment/intro.md), which runs in your virtual network.
-
-### Peering
-
-If you use peering with regional virtual network integration, you don't need to do any more configuration.
-
-If you use gateway-required virtual network integration with peering, you need to configure a few more items. To configure peering to work with your app:
-
-1. Add a peering connection on the virtual network your app connects to. When you add the peering connection, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow gateway transit**.
-1. Add a peering connection on the virtual network that's being peered to the virtual network you're connected to. When you add the peering connection on the destination virtual network, enable **Allow virtual network access** and select **Allow forwarded traffic** and **Allow remote gateways**.
-1. Go to **App Service plan** > **Networking** > **VNet integration** in the portal. Select the virtual network your app connects to. Under the routing section, add the address range of the virtual network that's peered with the virtual network your app is connected to.
+If you use peering with virtual network integration, you don't need to do any more configuration.
## Manage virtual network integration

Connecting and disconnecting with a virtual network is at an app level. Operations that can affect virtual network integration across multiple apps are at the App Service plan level. From the app > **Networking** > **VNet integration** portal, you can get details on your virtual network. You can see similar information at the App Service plan level in the **App Service plan** > **Networking** > **VNet integration** portal.
-The only operation you can take in the app view of your virtual network integration instance is to disconnect your app from the virtual network it's currently connected to. To disconnect your app from a virtual network, select **Disconnect**. Your app is restarted when you disconnect from a virtual network. Disconnecting doesn't change your virtual network. The subnet or gateway isn't removed. If you then want to delete your virtual network, first disconnect your app from the virtual network and delete the resources in it, such as gateways.
-
-The App Service plan virtual network integration UI shows you all the virtual network integrations used by the apps in your App Service plan. To see details on each virtual network, select the virtual network you're interested in. There are two actions you can perform here for gateway-required virtual network integration:
-
-* **Sync network**: The sync network operation is used only for the gateway-required virtual network integration feature. Performing a sync network operation ensures that your certificates and network information are in sync. If you add or change the DNS of your virtual network, perform a sync network operation. This operation restarts any apps that use this virtual network. This operation won't work if you're using an app and a virtual network belonging to different subscriptions.
-* **Add routes**: Adding routes drives outbound traffic into your virtual network.
+In the app view of your virtual network integration instance, you can disconnect your app from the virtual network and you can configure application routing. To disconnect your app from a virtual network, select **Disconnect**. Your app is restarted when you disconnect from a virtual network. Disconnecting doesn't change your virtual network. The subnet isn't removed. If you then want to delete your virtual network, first disconnect your app from the virtual network.
-The private IP assigned to the instance is exposed via the environment variable WEBSITE_PRIVATE_IP. Kudu console UI also shows the list of environment variables available to the web app. This IP is assigned from the address range of the integrated subnet. For regional virtual network integration, the value of WEBSITE_PRIVATE_IP is an IP from the address range of the delegated subnet. For gateway-required virtual network integration, the value is an IP from the address range of the point-to-site address pool configured on the virtual network gateway. This IP will be used by the web app to connect to the resources through the Azure virtual network.
+The private IP assigned to the instance is exposed via the environment variable WEBSITE_PRIVATE_IP. The Kudu console UI also shows the list of environment variables available to the web app. This IP is assigned from the address range of the integrated subnet, and the web app uses it to connect to resources through the Azure virtual network (a quick check follows the note).
> [!NOTE]
-> The value of WEBSITE_PRIVATE_IP is bound to change. However, it will be an IP within the address range of the integration subnet or the point-to-site address range, so you'll need to allow access from the entire address range.
+> The value of WEBSITE_PRIVATE_IP is bound to change. However, it will be an IP within the address range of the integration subnet, so you'll need to allow access from the entire address range.
>
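For example, from an SSH or Kudu console session on a Linux app, a quick way to confirm the assigned address (output is illustrative):

```console
echo $WEBSITE_PRIVATE_IP
# Example output - an address from the integration subnet's range:
# 10.0.1.8
```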
-### Gateway-required virtual network integration routing
-
-The routes that are defined in your virtual network are used to direct traffic into your virtual network from your app. To send more outbound traffic into the virtual network, add those address blocks here. This capability only works with gateway-required virtual network integration. Route tables don't affect your app traffic when you use gateway-required virtual network integration the way that they do with regional virtual network integration.
-
-### Gateway-required virtual network integration certificates
-
-When gateway-required virtual network integration is enabled, there's a required exchange of certificates to ensure the security of the connection. Along with the certificates are the DNS configuration, routes, and other similar things that describe the network.
+## Pricing details
-If certificates or network information is changed, select **Sync Network**. When you select **Sync Network**, you cause a brief outage in connectivity between your app and your virtual network. Your app isn't restarted, but the loss of connectivity could cause your site to not function properly.
+The virtual network integration feature has no extra charge for use beyond the App Service plan pricing tier charges.
-## Pricing details
+## Troubleshooting
-The regional virtual network integration feature has no extra charge for use beyond the App Service plan pricing tier charges.
+The feature is easy to set up, but that doesn't mean your experience will be problem free. If you encounter problems accessing your desired endpoint, there are various steps you can take depending on what you are observing. For more information, see [virtual network integration troubleshooting guide](/troubleshoot/azure/app-service/troubleshoot-vnet-integration-apps).
-Three charges are related to the use of the gateway-required virtual network integration feature:
+> [!NOTE]
+> * Virtual network integration isn't supported for Docker Compose scenarios in App Service.
+> * Access restrictions don't apply to traffic coming through a private endpoint.
-* **App Service plan pricing tier charges**: Your apps need to be in a Basic, Standard, Premium, Premium v2, or Premium v3 App Service plan. For more information on those costs, see [App Service pricing](https://azure.microsoft.com/pricing/details/app-service/).
-* **Data transfer costs**: There's a charge for data egress, even if the virtual network is in the same datacenter. Those charges are described in [Data transfer pricing details](https://azure.microsoft.com/pricing/details/data-transfers/).
-* **VPN gateway costs**: There's a cost to the virtual network gateway that's required for the point-to-site VPN. For more information, see [VPN gateway pricing](https://azure.microsoft.com/pricing/details/vpn-gateway/).
+### Deleting the App Service plan or app before disconnecting the network integration
-## Troubleshooting
+If you deleted the app or the App Service plan without disconnecting the virtual network integration first, you won't be able to do any update/delete operations on the virtual network or subnet that was used for the integration with the deleted resource. A subnet delegation 'Microsoft.Web/serverFarms' will remain assigned to your subnet and will prevent the update/delete operations.
-> [!NOTE]
-> Virtual network integration isn't supported for Docker Compose scenarios in App Service.
-> Access restrictions are ignored if a private endpoint is present.
+To update or delete the subnet or virtual network again, you need to re-create the virtual network integration and then disconnect it (a CLI sketch of the disconnect step follows these steps):
+1. Re-create the App Service plan and app (it's mandatory to use the exact same web app name as before).
+1. Navigate to **Networking** on the app in the Azure portal and configure the virtual network integration.
+1. After the virtual network integration is configured, select **Disconnect**.
+1. Delete the App Service plan or app.
+1. Update/Delete the subnet or virtual network.
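A sketch of the disconnect and verification steps with the Azure CLI (resource names are hypothetical):

```console
# Sketch only - resource names are hypothetical.
# Disconnect the re-created app from the virtual network (step 3).
az webapp vnet-integration remove --resource-group my-rg --name my-app
# Confirm the Microsoft.Web/serverFarms delegation is gone before you
# update or delete the subnet or virtual network.
az network vnet subnet show \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name integration-subnet \
  --query delegations
```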
+If you still encounter issues with the virtual network integration after following these steps, contact Microsoft Support.
app-service Tutorial Php Mysql App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-php-mysql-app.md
description: Learn how to get a PHP app working in Azure, with connection to a M
ms.assetid: 14feb4f3-5095-496e-9a40-690e1414bd73 ms.devlang: php Previously updated : 07/22/2022 Last updated : 01/31/2023
application-gateway Tutorial Protect Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/tutorial-protect-application-gateway.md
This article helps you create an Azure Application Gateway with a DDoS protected virtual network. Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your application gateways from large scale DDoS attacks.

> [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
+> Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges apply only if more than 100 public IPs are protected in the tenant. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you learn how to:
applied-ai-services Concept Receipt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/concept-receipt.md
Last updated 11/14/2022
recommendations: false
-<!-- markdownlint-disable MD033 -->
+<!-- markdownlint-disable MD033 -->
# Azure Form Recognizer receipt model
applied-ai-services Form Recognizer Container Image Tags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-image-tags.md
Release notes for `v2.1`:
| Container | Tags | Retrieve image |
|--|--|--|
-| **Layout**| &bullet; `latest` </br> &bullet; `2.1`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout)`|
-| **Business Card** | &bullet; `latest` </br> &bullet; `2.1` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard` |
-| **ID Document** | &bullet; `latest` </br> &bullet; `2.1`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`|
-| **Receipt**| &bullet; `latest` </br> &bullet; `2.1`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
-| **Invoice**| &bullet; `latest` </br> &bullet; `2.1`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
-| **Custom API** | &bullet; `latest` </br> &bullet; `2.1`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api`|
-| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised` |
+| **Layout**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/layout`|
+| **Business Card** | &bullet; `latest` </br> &bullet; `2.1-preview` |`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/businesscard` |
+| **ID Document** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/id-document`|
+| **Receipt**| &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/receipt` |
+| **Invoice**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice` |
+| **Custom API** | &bullet; `latest` </br> &bullet; `2.1-preview`| `docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-api`|
+| **Custom Supervised**| &bullet; `latest` </br> &bullet; `2.1-preview`|`docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/custom-supervised` |
### [Previous versions](#tab/previous)
applied-ai-services Form Recognizer Container Install Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/containers/form-recognizer-container-install-run.md
The following host machine requirements are applicable to **train and analyze**
### [Layout](#tab/layout)
-Below is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {{FORM_RECOGNIZER_KEY} values for your Layout container instance.
+The following code sample is a self-contained `docker compose` example to run the Form Recognizer Layout container. With `docker compose`, you use a YAML file to configure your application's services. Then, with the `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Layout container instance (see the environment-variable sketch after this example).
```yml
version: "3.9"
docker-compose up
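If you prefer not to hard-code the values, `docker compose` can substitute them from the shell environment or an `.env` file, assuming you parameterize the YAML with `${...}` syntax; a hypothetical sketch:

```console
# Sketch only - assumes the compose file references ${FORM_RECOGNIZER_ENDPOINT_URI}
# and ${FORM_RECOGNIZER_KEY} instead of literal values.
export FORM_RECOGNIZER_ENDPOINT_URI="https://my-resource.cognitiveservices.azure.com/"
export FORM_RECOGNIZER_KEY="<your-key>"
docker-compose up
```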
### [Business Card](#tab/business-card)
-Below is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Business Card container instance. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} for your Computer Vision Read container.
+The following code sample is a self-contained `docker compose` example to run Form Recognizer Business Card and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Business Card container instance. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} for your Computer Vision Read container.
```yml
version: "3.9"
docker-compose up
### [ID Document](#tab/id-document)
-Below is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID document container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
+The following code sample is a self-contained `docker compose` example to run Form Recognizer ID Document and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your ID document container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml
version: "3.9"
docker-compose up
### [Invoice](#tab/invoice)
-Below is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout containers.
+The following code sample is a self-contained `docker compose` example to run Form Recognizer Invoice and Layout containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Invoice and Layout containers.
```yml
version: "3.9"
docker-compose up
```

### [Receipt](#tab/receipt)
-
-Below is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
+The following code sample is a self-contained `docker compose` example to run Form Recognizer Receipt and Read containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration. Enter {FORM_RECOGNIZER_ENDPOINT_URI} and {FORM_RECOGNIZER_KEY} values for your Receipt container. Enter {COMPUTER_VISION_ENDPOINT_URI} and {COMPUTER_VISION_KEY} values for your Computer Vision Read container.
```yml
version: "3.9"
docker-compose up
### [Custom](#tab/custom)
-In addition to the [prerequisites](#prerequisites) mentioned above, you'll need to do the following to process a custom document:
+In addition to the [prerequisites](#prerequisites), you'll need to do the following to process a custom document:
#### &bullet; Create a folder to store the following files:
In addition to the [prerequisites](#prerequisites) mentioned above, you'll need
1. Name this folder **shared**.
1. We'll reference the file path for this folder as **{SHARED_MOUNT_PATH}**.
- 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file, below.
+ 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file.
#### &bullet; Create a folder to store the logs written by the Form Recognizer service on your local machine.

1. Name this folder **output**.
1. We'll reference the file path for this folder as **{OUTPUT_MOUNT_PATH}**.
- 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file, below.
+ 1. Copy the file path in a convenient location, such as *Microsoft Notepad*. You'll need to add it to your **.env** file.
#### &bullet; Create an environment file
http {
}
```
-* Gather a set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*). Download the training files to the **shared** folder you created above.
+* Gather a set of at least six forms of the same type. You'll use this data to train the model and test a form. You can use a [sample data set](https://go.microsoft.com/fwlink/?linkid=2090451) (download and extract *sample_data.zip*). Download the training files to the **shared** folder you created.
* If you want to label your data, download the [Form Recognizer Sample Labeling tool for Windows](https://github.com/microsoft/OCR-Form-Tools/releases). The download will import the labeling tool .exe file that you'll use to label the data present on your local file system. You can ignore any warnings that occur during the download process.
http {
1. Name this file **docker-compose.yml**
-2. Below is a self-contained `docker compose` example to run Form Recognizer Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
+2. The following code sample is a self-contained `docker compose` example to run Form Recognizer Layout, Label Tool, Custom API, and Custom Supervised containers together. With `docker compose`, you use a YAML file to configure your application's services. Then, with `docker-compose up` command, you create and start all the services from your configuration.
```yml
version: '3.3'
$docker-compose up
+## The Sample Labeling tool and Azure Container Instances (ACI)
+
+To learn how to use the Sample Labeling tool with an Azure Container Instance, see [Deploy the Sample Labeling tool](../deploy-label-tool.md#deploy-with-azure-container-instances-aci).
+
## Validate that the service is running

There are several ways to validate that the container is running:

* The container provides a homepage at `\` as a visual validation that the container is running.
-* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the various request URLs below to validate the container is running. The example request URLs listed below are `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
+* You can open your favorite web browser and navigate to the external IP address and exposed port of the container in question. Use the following request URLs to validate that the container is running (an example check follows). The example request URLs use `http://localhost:5000`, but your specific container may vary. Keep in mind that you're navigating to your container's **External IP address** and exposed port.
Request URL | Purpose
-- | --
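For example, a quick check from the host, assuming the default `http://localhost:5000` mapping (the endpoint paths follow the common Cognitive Services container conventions):

```console
# Returns HTTP 200 when the container is ready to accept requests.
curl -i http://localhost:5000/ready
# Returns HTTP 200 and also verifies the billing/key configuration.
curl -i http://localhost:5000/status
```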
The Form Recognizer containers send billing information to Azure by using a Form
Queries to the container are billed at the pricing tier of the Azure resource that's used for the `Key`. You'll be billed for each container instance used to process your documents and images. Thus, if you use the business card feature, you'll be billed for the Form Recognizer `BusinessCard` and `Computer Vision Read` container instances. For the invoice feature, you'll be billed for the Form Recognizer `Invoice` and `Layout` container instances. See [Form Recognizer](https://azure.microsoft.com/pricing/details/form-recognizer/) and Computer Vision [Read feature](https://azure.microsoft.com/pricing/details/cognitive-services/computer-vision/) container pricing.
-Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to communicate billing information with the billing endpoint at all times. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
+Azure Cognitive Services containers aren't licensed to run without being connected to the metering / billing endpoint. Containers must be enabled to always communicate billing information with the billing endpoint. Cognitive Services containers don't send customer data, such as the image or text that's being analyzed, to Microsoft.
### Connect to Azure
applied-ai-services Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/language-support.md
This technology is currently available for US driver licenses and the biographic
::: moniker range="form-recog-2.1.0"

> [!div class="nextstepaction"]
> [Try Form Recognizer Sample Labeling tool](https://aka.ms/fott-2.1-ga)
applied-ai-services Managed Identities Secured Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/form-recognizer/managed-identities-secured-access.md
Previously updated : 10/20/2022 Last updated : 01/18/2023 monikerRange: '>=form-recog-2.1.0' recommendations: false
To get started, you'll need:
* An [**Azure virtual network**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) in the same region as your Form Recognizer resource. You'll create a virtual network to deploy your application resources to train models and analyze documents.
-* An [**Azure data science VM**](https://portal.azure.com/#create/Microsoft.VirtualNetwork-ARM) optionally deploy a data science VM in the virtual network to test the secure connections being established.
+* Optionally, an **Azure data science VM** for [**Windows**](/azure/machine-learning/data-science-virtual-machine/provision-vm) or [**Linux/Ubuntu**](/azure/machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro) deployed in the virtual network to test the secure connections being established.
## Configure resources
applied-ai-services Data Feeds From Different Sources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/data-feeds-from-different-sources.md
Title: Connect different data sources to Metrics Advisor-+ description: Add different data feeds to Metrics Advisor-
applied-ai-services Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/encryption.md
Title: Metrics Advisor service encryption-+ description: Metrics Advisor service encryption of data at rest.
applied-ai-services Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/glossary.md
Title: Metrics Advisor glossary-+ description: Key ideas and concepts for the Metrics Advisor service-
applied-ai-services Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/alerts.md
Title: Configure Metrics Advisor alerts-+ description: How to configure your Metrics Advisor alerts using hooks for email, web and Azure DevOps.-
applied-ai-services Anomaly Feedback https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/anomaly-feedback.md
Title: Provide anomaly feedback to the Metrics Advisor service-+ description: Learn how to send feedback on anomalies found by your Metrics Advisor instance, and tune the results. -
applied-ai-services Configure Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/configure-metrics.md
Title: Configure your Metrics Advisor instance using the web portal-+ description: How to configure your Metrics Advisor instance and fine-tune the anomaly detection results.-
applied-ai-services Credential Entity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/credential-entity.md
Title: Create a credential entity-+ description: How to create a credential entity to manage your credential in secure.-
applied-ai-services Diagnose An Incident https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/diagnose-an-incident.md
Title: Diagnose an incident using Metrics Advisor-+ description: Learn how to diagnose an incident using Metrics Advisor, and get detailed views of anomalies in your data.- -+ Last updated 04/15/2021
applied-ai-services Further Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/further-analysis.md
Title: Further analyze an incident and evaluate impact-+ description: Learn how to leverage analysis tools to further analyze an incident. - -+ Last updated 04/15/2021
applied-ai-services Manage Data Feeds https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/manage-data-feeds.md
Title: Manage data feeds in Metrics Advisor-+ description: Learn how to manage data feeds that you've added to Metrics Advisor.-
applied-ai-services Metrics Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/metrics-graph.md
Title: Metrics Advisor metrics graph-+ description: How to configure your Metrics graph and visualize related anomalies in your data.-
applied-ai-services Onboard Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/how-tos/onboard-your-data.md
Title: Onboard your data feed to Metrics Advisor-+ description: How to get started with onboarding your data feeds to Metrics Advisor.-
applied-ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/overview.md
Title: What is the Azure Metrics Advisor service? description: What is Metrics Advisor?-
applied-ai-services Rest Api And Client Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/rest-api-and-client-library.md
Title: Metrics Advisor client libraries REST API-+ description: Use this quickstart to connect your applications to the Metrics Advisor API from Azure Cognitive Services.-
applied-ai-services Web Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/quickstarts/web-portal.md
Title: 'Quickstart: Metrics Advisor web portal'-+ description: Learn how to start using the Metrics Advisor web portal.-
applied-ai-services Enable Anomaly Notification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/tutorials/enable-anomaly-notification.md
Title: Metrics Advisor anomaly notification e-mails with Azure Logic Apps
description: Learn how to automate sending e-mail alerts in response to Metric Advisor anomalies -++ Last updated 05/20/2021
applied-ai-services Write A Valid Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/tutorials/write-a-valid-query.md
Title: Write a query for Metrics Advisor data ingestion
description: Learn how to onboard your data to Metrics Advisor. -++ Last updated 05/20/2021
applied-ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/applied-ai-services/metrics-advisor/whats-new.md
Title: Metrics Advisor what's new-+ description: Learn about what is new with Metrics Advisor-
automation Automation Alert Metric https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-alert-metric.md
Alerts allow you to define a condition to monitor for and an action to take when
2. The **Configure signal logic** page is where you define the logic that triggers the alert. Under the historical graph you are presented with two dimensions, **Runbook Name** and **Status**. Dimensions are different properties for a metric that can be used to filter results. For **Runbook Name**, select the runbook you want to alert on or leave blank to alert on all runbooks. For **Status**, select the status you want to monitor for from the drop-down. The runbook name and status values that appear in the dropdown are only for jobs that have run in the past week.
- If you want to alert on a status or runbook that isn't shown in the dropdown, click the **Add custom value** option next to the dimension. This action opens a dialog that allows you to specify a custom value, which hasn't emitted for that dimension recently. If you enter a value that doesn't exist for a property your alert won't be triggered.For list of job statuses, see [Job statuses](automation-runbook-execution.md#job-statuses).
+ If you want to alert on a status or runbook that isn't shown in the dropdown, click the **Add custom value** option next to the dimension. This action opens a dialog that allows you to specify a custom value, which hasn't been emitted for that dimension recently. If you enter a value that doesn't exist for a property, your alert won't be triggered. For more information, see [Job statuses](automation-runbook-execution.md#job-statuses).
> [!NOTE]
> If you don't specify a name for the **Runbook Name** dimension, you'll receive an alert for any runbook that meets the status criteria, including hidden system runbooks.
automation Automation Windows Hrw Install https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/automation-windows-hrw-install.md
To check version of agent-based Windows Hybrid Runbook Worker, go to the followi
`C:\Program Files\Microsoft Monitoring Agent\Agent\AzureAutomation\` The *Azure Automation* folder has a sub-folder with the version number as the name of the sub-folder.
+
+## Update Log Analytics agent to latest version
+
+Azure Automation [Agent-based User Hybrid Runbook Worker](automation-hybrid-runbook-worker.md) (V1) requires the [Log Analytics agent](../azure-monitor/agents/log-analytics-agent.md) (also known as the MMA agent) during the installation of the Hybrid Worker. We recommend that you update the Log Analytics agent to the latest version to reduce security vulnerabilities and benefit from bug fixes.
+
+Log Analytics agent versions prior to [10.20.18053 (bundle) and 1.0.18053.0 (extension)](../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version) use an older method of certificate handling, which is **not recommended**. Hybrid Workers on outdated agents won't be able to connect to Azure, and Azure Automation jobs executed by these Hybrid Workers will stop.
+
+You must update the Log Analytics agent to the latest version by following these steps:
+
+1. Check the current version of the Log Analytics agent for your Windows Hybrid Worker: go to the installation path *C:\Program Files\Microsoft Monitoring Agent\Agent* and right-click *HealthService.exe* to check **Properties**. The field **Product version** provides the version number of the Log Analytics agent.
+2. If your Log Analytics agent version is prior to [10.20.18053 (bundle) and 1.0.18053.0 (extension)](../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version), upgrade to the latest version of the Windows Log Analytics agent, following these [guidelines](../azure-monitor/agents/agent-manage.md).
+
+> [!NOTE]
+> Any Azure Automation jobs running on the Hybrid Worker during the upgrade process might stop. Ensure that there aren't any jobs running or scheduled during the Log Analytics agent upgrade.
+ ## Next steps
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/change-tracking/overview.md
A key capability of Change Tracking and Inventory is alerting on changes to the
|ConfigurationChange <br>&#124; where RegistryKey == @"HKEY_LOCAL_MACHINE\\SOFTWARE\\Microsoft\\Windows\\CurrentVersion\\QualityCompat"| Useful for tracking changes to crucial antivirus keys.|
|ConfigurationChange <br>&#124; where RegistryKey contains @"HKEY_LOCAL_MACHINE\\SYSTEM\\CurrentControlSet\\Services\\SharedAccess\\Parameters\\FirewallPolicy"| Useful for tracking changes to firewall settings.|
+
+## Update Log Analytics agent to latest version
+
+For Change Tracking & Inventory, machines use the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to collect data about changes to installed software, Windows services, Windows registry and files, and Linux daemons on monitored servers. Soon, Azure will no longer accept connections from older versions of the Windows Log Analytics (LA) agent, also known as the Windows Microsoft Monitoring Agent (MMA), that use an older method for certificate handling. We recommend that you upgrade your agent to the latest version as soon as possible.
+
+[Agents that are on version 10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version) or newer aren't affected by this change. If you're on an earlier agent, your agent will be unable to connect, and the Change Tracking & Inventory pipeline and downstream activities can stop. You can check the current LA agent version in the Heartbeat table within your LA workspace (a query sketch follows).
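To check agent versions across a workspace, a sketch using the Azure CLI (the workspace GUID is hypothetical, and the command requires the `log-analytics` CLI extension):

```console
# Sketch only - substitute your Log Analytics workspace (customer) ID.
az monitor log-analytics query \
  --workspace 00000000-0000-0000-0000-000000000000 \
  --analytics-query "Heartbeat | summarize arg_max(TimeGenerated, Version) by Computer"
```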
+
+Be sure to upgrade to the latest version of the Windows Log Analytics agent (MMA) by following these [guidelines](../../azure-monitor/agents/agent-manage.md).
+ ## Next steps - To enable from an Automation account, see [Enable Change Tracking and Inventory from an Automation account](enable-from-automation-account.md).
automation Runbook Authoring Extension For Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/how-to/runbook-authoring-extension-for-vscode.md
following these steps:
1. Select **Azure Automation** from the search results, and then select **Install**.
1. Select **Reload** when necessary.
+## Connect to Azure Account
+
+To view all the resources within your Automation account, you must connect to your Azure account. Follow the steps to connect to Azure from Visual Studio Code:
+
+1. You can sign in to Azure from the Azure Automation extension or the Command Palette.
+ - To sign in from the Azure Automation extension: select **Sign in to Azure**.
+
+ Or
+
+ - To sign in from the Command Palette: from the menu bar, go to **View > Command Palette** and enter **Azure:Sign-in**.
+
+1. Follow the sign-in instructions to sign in to Azure.
+ After you're connected, you will find the Azure account name on the status bar of Visual Studio Code.
+
+## Select subscriptions
+
+When you sign in for the first time, the extension loads only the default subscription resources and Automation accounts. To add or remove subscriptions, follow these steps:
+
+1. You can use the Command Palette or the window footer to start the subscription command.
+ - To select subscriptions from the Command Palette: from the menu bar, go to **View > Command Palette** and enter **Azure: Select Subscriptions**.
+
+ Or
+
+ - To select subscriptions from the window footer: in the window footer, select the segment that matches **Azure: your account**.
+
+1. Use the filter to find the subscriptions by name.
+1. Check or uncheck each subscription to add or remove them from the list of subscriptions shown by Azure Automation extension.
+1. Select **OK** after you have completed adding or removing the subscriptions.
+
+
## Using the Azure Automation extension

The extension simplifies the process of creating and editing runbooks. You can now test them locally without logging into the Azure portal. The various actions that you can perform are listed below:
automation Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/update-management/overview.md
Customers who have invested in Microsoft Endpoint Configuration Manager for mana
Update Management relies on the locally configured update repository to update supported Windows systems, either WSUS or Windows Update. Tools such as [System Center Updates Publisher](/configmgr/sum/tools/updates-publisher) allow you to import and publish custom updates with WSUS. This scenario allows Update Management to update machines that use Configuration Manager as their update repository with third-party software. To learn how to configure Updates Publisher, see [Install Updates Publisher](/configmgr/sum/tools/install-updates-publisher). +
+## Update Windows Log Analytics agent to latest version
+
+Update Management requires the [Log Analytics agent](../../azure-monitor/agents/log-analytics-agent.md) to function. We recommend that you update the Windows Log Analytics agent (also known as the Windows Microsoft Monitoring Agent (MMA)) to the latest version to reduce security vulnerabilities and benefit from bug fixes. Log Analytics agent versions prior to [10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version) use an older method of certificate handling, which is not recommended. Older Windows Log Analytics agents won't be able to connect to Azure, and Update Management will stop working on them.
+
+You must update the Log Analytics agent to the latest version by following these steps:
+
+Check the current version of the Log Analytics agent for your machine: go to the installation path *C:\Program Files\Microsoft Monitoring Agent\Agent* and right-click *HealthService.exe* to check **Properties**. In the **Details** tab, the field **Product version** provides the version number of the Log Analytics agent.
+
+If your Log Analytics agent version is prior to [10.20.18053 (bundle) and 1.0.18053.0 (extension)](../../virtual-machines/extensions/oms-windows.md#agent-and-vm-extension-version), upgrade to the latest version of the Windows Log Analytics agent by following these [guidelines](../../azure-monitor/agents/agent-manage.md).
+
+> [!NOTE]
+> During the upgrade process, Update Management schedules might fail. Ensure that no schedule is planned during the upgrade.
+ ## Next steps * Before enabling and using Update Management, review [Plan your Update Management deployment](plan-deployment.md).
azure-arc Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/connectivity.md
Some Azure-attached services are only available when they can be directly reache
## Details on internet addresses, ports, encryption, and proxy server support
-There are three connections required to services available on the Internet. These connections include:
-- [Microsoft Container Registry (MCR)](#microsoft-container-registry-mcr)-- [Helm chart (direct connected mode)](#helm-chart-direct-connected-mode)-- [Azure Resource Manager APIs](#azure-resource-manager-apis)-- [Azure monitor APIs](#azure-monitor-apis)-- [Azure Arc data processing service](#azure-arc-data-processing-service)
+## Additional network requirements
-All HTTPS connections to Azure and the Microsoft Container Registry are encrypted using SSL/TLS using officially signed and verifiable certificates.
-
-The following sections provide details for these connections.
-
-### Microsoft Container Registry (MCR)
-
-The Microsoft Container Registry hosts the Azure Arc-enabled data services container images. You can pull these images from MCR and push them to a private container registry and configure the data controller deployment process to pull the container images from that private container registry.
-
-#### Connection source
-
-The Kubernetes kubelet on each of the Kubernetes nodes pulling the container images.
-
-#### Connection target
-
-`mcr.microsoft.com`
-
-#### Protocol
-
-HTTPS
-
-#### Port
-
-443
-
-#### Can use proxy
-
-Yes
-
-#### Authentication
-
-None
-
-### Helm chart (direct connected mode)
-
-The Helm chart used to provision the Azure Arc data controller bootstrapper and cluster level objects, such as custom resource definitions, cluster roles, and cluster role bindings, is pulled from an Azure Container Registry.
-
-#### Connection source
-
-The Kubernetes kubelet on each of the Kubernetes nodes pulling the container images.
-
-#### Connection target
-
-`arcdataservicesrow1.azurecr.io`
-
-#### Protocol
-
-HTTPS
-
-#### Port
-
-443
-
-#### Can use proxy
-
-Yes
-
-#### Authentication
-
-None
-
-### Azure Resource Manager APIs
-Azure Data Studio, and Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
-
-#### Connection source
-
-A computer running Azure Data Studio, or Azure CLI that is connecting to Azure.
-
-#### Connection target
--- `login.microsoftonline.com`-- `management.azure.com`-
-#### Protocol
-
-HTTPS
-
-#### Port
-
-443
-
-#### Can use proxy
-
-Yes
-
-To use proxy, verify that the agents meet the network requirements. See [Meet network requirements](../kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
-
-#### Authentication
-
-Azure Active Directory
-
-### Azure monitor APIs
-
-Azure Data Studio and Azure CLI connect to the Azure Resource Manager APIs to send and retrieve data to and from Azure for some features.
-
-#### Connection source
-
-A computer running Azure CLI that is uploading monitoring metrics or logs to Azure Monitor.
-
-#### Connection target
--- `login.microsoftonline.com`-- `management.azure.com`-- `*.ods.opinsights.azure.com`-- `*.oms.opinsights.azure.com`-- `*.monitoring.azure.com`-
-For example, to upload usage metrics data services will connect to `https://<azureRegion>.monitoring.azure.com/` where `<azureRegion>` is the region where data services is deployed.
-
-Likewise, data services will connect to the log analytics workspace at `https://<subscription_id>.ods.opinsights.azure.com` where `<subscription_id>` represents your Azure subscription.
-
-#### Protocol
-
-HTTPS
-
-#### Port
-
-443
-
-#### Can use proxy
-
-Yes
-
-#### Authentication
-
-Azure Active Directory
-
-> [!NOTE]
-> For now, all browser HTTPS/443 connections to the data controller for running the command `az arcdata dc export` and Grafana and Kibana dashboards are SSL encrypted using self-signed certificates. A feature will be available in the future that will allow you to provide your own certificates for encryption of these SSL connections.
-
-Connectivity from Azure Data Studio to the Kubernetes API server uses the Kubernetes authentication and encryption that you have established. Each user that is using Azure Data Studio or CLI must have an authenticated connection to the Kubernetes API to perform many of the actions related to Azure Arc-enabled data services.
-
-### Azure Arc data processing service
-
-Points to the data processing service endpoint in connection
-
-#### Connection target
--- `san-af-eastus-prod.azurewebsites.net`-- `san-af-eastus2-prod.azurewebsites.net`-- `san-af-australiaeast-prod.azurewebsites.net`-- `san-af-centralus-prod.azurewebsites.net`-- `san-af-westus2-prod.azurewebsites.net`-- `san-af-westeurope-prod.azurewebsites.net`-- `san-af-southeastasia-prod.azurewebsites.net`-- `san-af-koreacentral-prod.azurewebsites.net`-- `san-af-northeurope-prod.azurewebsites.net`-- `san-af-westeurope-prod.azurewebsites.net`-- `san-af-uksouth-prod.azurewebsites.net`-- `san-af-francecentral-prod.azurewebsites.net`-
-#### Protocol
-
-HTTPS
-
-#### Can use proxy
-
-Yes
-
-To use proxy, verify that the agents meet the network requirements. See [Meet network requirements](../kubernetes/quickstart-connect-cluster.md#meet-network-requirements).
-
-#### Authentication
-
-None
+In addition, resource bridge (preview) requires [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md#azure-arc-enabled-kubernetes-endpoints).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/overview.md
Many of the services such as self-service provisioning, automated backups/restor
To see the regions that currently support Azure Arc-enabled data services, go to [Azure Products by Region - Azure Arc](https://azure.microsoft.com/global-infrastructure/services/?cdn=disable&products=azure-arc).
+
## Next steps

> **Just want to try things out?**
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/data/validation-program.md
To see how all Azure Arc-enabled components are validated, see [Validation progr
|--|--|--|--|--|
| Karbon 2.2<br/>AOS: 5.19.1.5<br/>AHV: 20201105.1021<br/>PC: Version pc.2021.3.02<br/> | 1.19.8-0 | 1.0.0_2021-07-30 | 15.0.2148.140|postgres 12.3 (Ubuntu 12.3-1)|
-### Platform 9
-
-|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version
-|--|--|--|--|--|
-| Platform9 Managed Kubernetes v5.3.0 | 1.20.5 | 1.0.0_2021-07-30| 15.0.2195.191 | PostgreSQL 12.3 (Ubuntu 12.3-1) |
### PureStorage
To see how all Azure Arc-enabled components are validated, see [Validation progr
|Solution and version | Kubernetes version | Azure Arc-enabled data services version | SQL engine version | PostgreSQL server version |
|--|--|--|--|--|
+|Wind River Cloud Platform 22.12 | 1.24.4|1.14.0_2022-12-13 |16.0.816.19223|Postgres 14.5 (Ubuntu 20.04) |
|Wind River Cloud Platform 22.06 | 1.23.1|1.9.0_2022-07-12 |16.0.312.4243|postgres 12.3 (Ubuntu 12.3-1) |

## Data services validation process
azure-arc Cluster Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/cluster-connect.md
Title: "Use the cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters" Previously updated : 08/30/2022
+ Title: "Use cluster connect to securely connect to Azure Arc-enabled Kubernetes clusters."
Last updated : 01/18/2023 description: "With cluster connect, you can securely connect to Azure Arc-enabled Kubernetes clusters without requiring any inbound port to be enabled on the firewall."
Access to the `apiserver` of the Azure Arc-enabled Kubernetes cluster enables th
- Interactive debugging and troubleshooting.
- Cluster access to Azure services for [custom locations](custom-locations.md) and other resources created on top of it.
-A conceptual overview of this feature is available in [Cluster connect - Azure Arc-enabled Kubernetes](conceptual-cluster-connect.md).
+Before you begin, review the [conceptual overview of the cluster connect feature](conceptual-cluster-connect.md).
## Prerequisites
A conceptual overview of this feature is available in [Cluster connect - Azure A
+ ## Azure Active Directory authentication option ### [Azure CLI](#tab/azure-cli)
-1. Get the `objectId` associated with your Azure AD entity.
+1. Get the `objectId` associated with your Azure Active Directory (Azure AD) entity.
- For an Azure AD user account:
A conceptual overview of this feature is available in [Cluster connect - Azure A
### [Azure PowerShell](#tab/azure-powershell)
-1. Get the `objectId` associated with your Azure AD entity.
+1. Get the `objectId` associated with your Azure Active Directory (Azure AD) entity.
- For an Azure AD user account:
A conceptual overview of this feature is available in [Cluster connect - Azure A
```console TOKEN=$(kubectl get secret demo-user-secret -o jsonpath='{$.data.token}' | base64 -d | sed 's/$/\n/g') ```+ 1. Get the token to output to console ```console
A conceptual overview of this feature is available in [Cluster connect - Azure A
kubectl create clusterrolebinding demo-user-binding --clusterrole cluster-admin --serviceaccount default:demo-user ```
-1. Create a service account token by:
-
- ```console
- kubectl apply -f demo-user-secret.yaml
- ```
-
- Contents of `demo-user-secret.yaml`:
+1. Create a service account token. Create a `demo-user-secret.yaml` file with the following content:
```yaml apiVersion: v1
A conceptual overview of this feature is available in [Cluster connect - Azure A
type: kubernetes.io/service-account-token ```
+ Then run these commands:
+
+ ```console
+ kubectl apply -f demo-user-secret.yaml
+ ```
+ ```console $TOKEN = ([System.Text.Encoding]::UTF8.GetString([System.Convert]::FromBase64String((kubectl get secret demo-user-secret -o jsonpath='{$.data.token}')))) ```
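The service account token can then be used to open a channel to the cluster through the cluster connect proxy. A minimal sketch (the cluster and resource group names are placeholders):

```console
az connectedk8s proxy -n AzureArcTest1 -g AzureArcTest --token $TOKEN
```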
azure-arc Quickstart Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/quickstart-connect-cluster.md
For a conceptual look at connecting clusters to Azure Arc, see [Azure Arc-enable
## Meet network requirements
-> [!IMPORTANT]
-> Azure Arc agents require the following outbound URLs on `https://:443` to function.
-> For `*.servicebus.windows.net` (for Azure Cloud) & `*.servicebus.usgovcloudapi.net` (for Azure US Government), websockets need to be enabled for outbound access on firewall and proxy.
-
-| Endpoint (DNS) | Description |
-| -- | - |
-| `https://management.azure.com` (for Azure Cloud), `https://management.usgovcloudapi.net` (for Azure US Government) | Required for the agent to connect to Azure and register the cluster. |
-| `https://<region>.dp.kubernetesconfiguration.azure.com` (for Azure Cloud), `https://<region>.dp.kubernetesconfiguration.azure.us` (for Azure US Government) | Data plane endpoint for the agent to push status and fetch configuration information. |
-| `https://login.microsoftonline.com`, `https://<region>.login.microsoft.com`, `login.windows.net` (for Azure Cloud), `https://login.microsoftonline.us`, `<region>.login.microsoftonline.us` (for Azure US Government) | Required to fetch and update Azure Resource Manager tokens. |
-| `https://mcr.microsoft.com`, `https://*.data.mcr.microsoft.com` | Required to pull container images for Azure Arc agents. |
-| `https://gbl.his.arc.azure.com` (for Azure Cloud), `https://gbl.his.arc.azure.us` (for Azure US Government) | Required to get the regional endpoint for pulling system-assigned Managed Identity certificates. |
-| `https://*.his.arc.azure.com` (for Azure Cloud), `https://usgv.his.arc.azure.us` (for Azure US Government) | Required to pull system-assigned Managed Identity certificates. |
-|`https://k8connecthelm.azureedge.net` | `az connectedk8s connect` uses Helm 3 to deploy Azure Arc agents on the Kubernetes cluster. This endpoint is needed for Helm client download to facilitate deployment of the agent helm chart. |
-|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.net`(for Azure Cloud), `guestnotificationservice.azure.us`, `*.guestnotificationservice.azure.us`, `sts.windows.net`, `https://k8sconnectcsp.azureedge.us` (for Azure US Government) | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
-|`*.servicebus.windows.net`(for Azure Cloud), `*.servicebus.usgovcloudapi.net` (for Azure US Government) | For [Cluster Connect](cluster-connect.md) and for [Custom Location](custom-locations.md) based scenarios. |
-|`https://graph.microsoft.com/` | Required when [Azure RBAC](azure-rbac.md) is configured |
-> [!NOTE]
-> For Azure Cloud to translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. For Azure US Government to translate the `*.servicebus.usgovcloudapi.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.us/urls/allowlist?api-version=2020-01-01&location=<location>`. Within these commands, the region must be specified for the `<location>` placeholder.
-> [!IMPORTANT]
-> To view and manage connected clusters in the Azure portal, be sure that your network allows traffic to `*.arc.azure.net`.
+For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
## Create a resource group
azure-arc Validation Program https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/validation-program.md
The following providers and their corresponding Kubernetes distributions have su
| Nutanix | [Nutanix Kubernetes Engine](https://www.nutanix.com/products/kubernetes-engine) | Version [2.5](https://portal.nutanix.com/page/documents/details?targetId=Nutanix-Kubernetes-Engine-v2_5:Nutanix-Kubernetes-Engine-v2_5); upstream K8s v1.23.11 | | Kublr | [Kublr Managed K8s](https://kublr.com/managed-kubernetes/) Distribution | Upstream K8s Version: 1.22.10 <br> Upstream K8s Version: 1.21.3 | | Mirantis | [Mirantis Kubernetes Engine](https://www.mirantis.com/software/mirantis-kubernetes-engine/) | MKE Version [3.6.0](https://docs.mirantis.com/mke/3.6/release-notes/3-6-0.html) <br> MKE Version [3.5.5](https://docs.mirantis.com/mke/3.5/release-notes/3-5-5.html) <br> MKE Version [3.4.7](https://docs.mirantis.com/mke/3.4/release-notes/3-4-7.html) |
-| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
+| Wind River | [Wind River Cloud Platform](https://www.windriver.com/studio/operator/cloud-platform) | Wind River Cloud Platform 22.12; Upstream K8s version: 1.24.4 <br>Wind River Cloud Platform 22.06; Upstream K8s version: 1.23.1 <br>Wind River Cloud Platform 21.12; Upstream K8s version: 1.21.8 <br>Wind River Cloud Platform 21.05; Upstream K8s version: 1.18.1 |
The Azure Arc team also ran the conformance tests and validated Azure Arc-enabled Kubernetes scenarios on the following public cloud providers:
azure-arc Network Requirements Consolidated https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/network-requirements-consolidated.md
+
+ Title: Azure Arc network requirements
+description: A consolidated list of network requirements for Azure Arc features and Azure Arc-enabled services. Lists endpoints, ports, and protocols.
Last updated : 03/01/2022+++
+# Azure Arc network requirements
+
+This article lists the endpoints, ports, and protocols required for Azure Arc-enabled services and features.
++
+## Azure Arc-enabled Kubernetes endpoints
+
+Connectivity to the Arc Kubernetes-based endpoints is required for all Kubernetes-based Arc offerings, including:
+
+- Azure Arc-enabled Kubernetes
+- Azure Arc-enabled App services
+- Azure Arc-enabled Machine Learning
+- Azure Arc-enabled data services (direct connectivity mode only)
++
+For an example, see [Quickstart: Connect an existing Kubernetes cluster to Azure Arc](kubernetes/quickstart-connect-cluster.md).
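+Once the endpoints are reachable, onboarding a cluster is a single CLI call; a minimal sketch (the cluster and resource group names are placeholders):
+
+```console
+az connectedk8s connect --name AzureArcTest1 --resource-group AzureArcTest
+```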
+
+## Azure Arc-enabled data services
+
+This section describes requirements specific to Azure Arc-enabled data services, which apply in addition to the Arc-enabled Kubernetes endpoints listed above.
++
+For more information, see [Connectivity modes and requirements](dat).
+
+## Azure Arc-enabled servers
+
+Connectivity to Arc-enabled server endpoints is required for:
+
+- Azure Arc-enabled SQL Server
+- Azure Arc-enabled VMware vSphere (preview) <sup>*</sup>
+- Azure Arc-enabled System Center Virtual Machine Manager (preview) <sup>*</sup>
+- Azure Arc-enabled Azure Stack HCI (preview) <sup>*</sup>
+
+ <sup>*</sup>Only required when guest management is enabled.
++
+For examples, see [Connected Machine agent network requirements](servers/network-requirements.md).
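+To verify connectivity from a machine where the Connected Machine agent is installed, newer agent versions include a built-in check command; a sketch, assuming `eastus` is your target region:
+
+```console
+azcmagent check --location eastus
+```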
+
+## Azure Arc resource bridge (preview)
+
+This section describes the networking requirements specific to deploying Azure Arc resource bridge (preview) in your enterprise. These requirements also apply to Azure Arc-enabled VMware vSphere (preview) and Azure Arc-enabled System Center Virtual Machine Manager (preview).
++
+## Azure Arc-enabled System Center Virtual Machine Manager (preview)
+
+Azure Arc-enabled System Center Virtual Machine Manager (SCVMM) requires the connectivity described below:
+
+| **Service** | **Port** | **URL** | **Direction** | **Notes**|
+|---|---|---|---|---|
+| SCVMM management Server | 443 | URL of the SCVMM management server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. |
++
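+Before deployment, you can sanity-check that outbound port 443 to the SCVMM management server is open; a sketch from a Windows machine (the server name is a placeholder):
+
+```powershell
+Test-NetConnection -ComputerName scvmm.contoso.com -Port 443
+```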
+For more information, see [Overview of Arc-enabled System Center Virtual Machine Manager (preview)](system-center-virtual-machine-manager/overview.md).
+## Azure Arc-enabled VMware vSphere (preview)
+
+Azure Arc-enabled VMware vSphere requires the connectivity described below:
+
+| **Service** | **Port** | **URL** | **Direction** | **Notes**|
+|---|---|---|---|---|
+| vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.|
+
+For more information, see [Support matrix for Azure Arc-enabled VMware vSphere (preview)](vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md).
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/resource-bridge/network-requirements.md
Last updated 12/06/2022
This article describes the networking requirements for deploying Azure Arc resource bridge (preview) in your enterprise.
-## Outbound connectivity
-The firewall and proxy URLs below must be allowlisted in order to enable communication from the host machine, Appliance VM, and Control Plane IP to the required Arc resource bridge URLs.
-### Firewall/Proxy URL allowlist
+## Additional network requirements
-|**Service**|**Port**|**URL**|**Direction**|**Notes**|
-|--|--|--|--|--|
-|Microsoft container registry | 443 | `https://mcr.microsoft.com`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images for installation. |
-|Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and Control Plane IP need outbound connection. | Manages identity and access control for Azure resources |
-|Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used for Kubernetes cluster configuration.|
-|Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and Control Plane IP need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-|Guest Notification service| 443 | `https://guestnotificationservice.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used to connect on-premises resources to Azure.|
-|SFS API endpoint | 443 | `msk8s.api.cdp.microsoft.com` | Deployment machine, Appliance VM IP and Control Plane IP need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
-|Resource bridge (appliance) Dataplane service| 443 | `https://*.dp.prod.appliances.azure.com`| Appliance VM IP and Control Plane IP need outbound connection. | Communicate with resource provider in Azure.|
-|Resource bridge (appliance) container image download| 443 | `*.blob.core.windows.net, https://ecpacr.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
-|Resource bridge (appliance) image download| 80 | `msk8s.b.tlu.dl.delivery.mp.microsoft.com`| Deployment machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
-|Resource bridge (appliance) image download| 443 | `msk8s.sf.tlu.dl.delivery.mp.microsoft.com`| Deployment machine, Appliance VM IP and Control Plane IP need outbound connection. | Download the Arc Resource Bridge OS images. |
-|Azure Arc for Kubernetes container image download| 443 | `https://azurearcfork8sdev.azurecr.io`| Appliance VM IP and Control Plane IP need outbound connection. | Required to pull container images. |
-|ADHS telemetry service | 443 | `adhs.events.data.microsoft.com`| Appliance VM IP and Control Plane IP need outbound connection. | Runs inside the appliance/mariner OS. Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any Kubernetes control plane. |
-|Microsoft events data service | 443 |`v20.events.data.microsoft.com`| Appliance VM IP and Control Plane IP need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
-|Secure token service | 443 |`sts.windows.net`| Appliance VM IP and Control Plane IP need outbound connection. | Used for custom locations. |
-
-### Used by other Arc agents
-
-|**Service**|**URL**|
-|--|--|
-|Azure Resource Manager| `https://management.azure.com`|
-|Azure Active Directory| `https://login.microsoftonline.com`|
-
-## SSL proxy configuration
-
-Azure Arc resource bridge must be configured for proxy so that it can connect to the Azure services. This configuration is handled automatically. However, proxy configuration of the client machine isn't configured by the Azure Arc resource bridge.
-
-There are only two certificates that should be relevant when deploying the Arc resource bridge behind an SSL proxy: the SSL certificate for your SSL proxy (so that the host and guest trust your proxy FQDN and can establish an SSL connection to it), and the SSL certificate of the Microsoft download servers. This certificate must be trusted by your proxy server itself, as the proxy is the one establishing the final connection and needs to trust the endpoint. Non-Windows machines may not trust this second certificate by default, so you may need to ensure that it's trusted.
+In addition, resource bridge (preview) requires connectivity to the [Arc-enabled Kubernetes endpoints](../network-requirements-consolidated.md#azure-arc-enabled-kubernetes-endpoints).
## Next steps
azure-arc Network Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/servers/network-requirements.md
This topic describes the networking requirements for using the Connected Machine agent to onboard a physical server or virtual machine to Azure Arc-enabled servers.
-## Networking configuration
+## Details
-The Azure Connected Machine agent for Linux and Windows communicates outbound securely to Azure Arc over TCP port 443. By default, the agent uses the default route to the internet to reach Azure services. You can optionally [configure the agent to use a proxy server](manage-agent.md#update-or-remove-proxy-settings) if your network requires it. Proxy servers don't make the Connected Machine agent more secure because the traffic is already encrypted.
-To further secure your network connectivity to Azure Arc, instead of using public networks and proxy servers, you can implement an [Azure Arc Private Link Scope](private-link-security.md) .
-
-> [!NOTE]
-> Azure Arc-enabled servers does not support using a [Log Analytics gateway](../../azure-monitor/agents/gateway.md) as a proxy for the Connected Machine agent.
-
-If outbound connectivity is restricted by your firewall or proxy server, make sure the URLs and Service Tags listed below are not blocked.
-
-## Service tags
-
-Be sure to allow access to the following Service Tags:
-
-* AzureActiveDirectory
-* AzureTrafficManager
-* AzureResourceManager
-* AzureArcInfrastructure
-* Storage
-* WindowsAdminCenter (if [using Windows Admin Center to manage Arc-enabled servers](/windows-server/manage/windows-admin-center/azure/manage-arc-hybrid-machines))
-
-For a list of IP addresses for each service tag/region, see the JSON file [Azure IP Ranges and Service Tags ΓÇô Public Cloud](https://www.microsoft.com/download/details.aspx?id=56519). Microsoft publishes weekly updates containing each Azure Service and the IP ranges it uses. This information in the JSON file is the current point-in-time list of the IP ranges that correspond to each service tag. The IP addresses are subject to change. If IP address ranges are required for your firewall configuration, then the **AzureCloud** Service Tag should be used to allow access to all Azure services. Do not disable security monitoring or inspection of these URLs, allow them as you would other Internet traffic.
-
-For more information, see [Virtual network service tags](../../virtual-network/service-tags-overview.md).
-
-## URLs
-
-The table below lists the URLs that must be available in order to install and use the Connected Machine agent.
-
-### [Azure Cloud](#tab/azure-cloud)
-
-| Agent resource | Description | When required| Endpoint used with private link |
-|||--||
-|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
-|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
-|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
-|`login.windows.net`|Azure Active Directory|Always| Public |
-|`login.microsoftonline.com`|Azure Active Directory|Always| Public |
-|`pas.windows.net`|Azure Active Directory|Always| Public |
-|`management.azure.com`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
-|`*.his.arc.azure.com`|Metadata and hybrid identity services|Always| Private |
-|`*.guestconfiguration.azure.com`| Extension management and guest configuration services |Always| Private |
-|`guestnotificationservice.azure.com`, `*.guestnotificationservice.azure.com`|Notification service for extension and connectivity scenarios|Always| Public |
-|`azgn*.servicebus.windows.net`|Notification service for extension and connectivity scenarios|Always| Public |
-|`*.servicebus.windows.net`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|Public|
-|`*.waconazure.com`|For Windows Admin Center connectivity|If using Windows Admin Center|Public|
-|`*.blob.core.windows.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
-|`dc.services.visualstudio.com`|Agent telemetry|Optional, not used in agent versions 1.24+| Public |
-
-> [!NOTE]
-> To translate the `*.servicebus.windows.net` wildcard into specific endpoints, use the command `\GET https://guestnotificationservice.azure.com/urls/allowlist?api-version=2020-01-01&location=<location>`. Within this command, the region must be specified for the `<location>` placeholder.
-
-### [Azure Government](#tab/azure-government)
-
-| Agent resource | Description | When required| Endpoint used with private link |
-|||--||
-|`aka.ms`|Used to resolve the download script during installation|At installation time, only| Public |
-|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only| Public |
-|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only| Public |
-|`login.microsoftonline.us`|Azure Active Directory|Always| Public |
-|`pasff.usgovcloudapi.net`|Azure Active Directory|Always| Public |
-|`management.usgovcloudapi.net`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only| Public, unless a [resource management private link](../../azure-resource-manager/management/create-private-link-access-portal.md) is also configured |
-|`*.his.arc.azure.us`|Metadata and hybrid identity services|Always| Private |
-|`*.guestconfiguration.azure.us`| Extension management and guest configuration services |Always| Private |
-|`*.blob.core.usgovcloudapi.net`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints| Not used when private link is configured |
-|`dc.applicationinsights.us`|Agent telemetry|Optional, not used in agent versions 1.24+| Public |
-
-### [Azure China](#tab/azure-china)
-
-> [!NOTE]
-> Private link is not available for Azure Arc-enabled servers in Azure China regions.
-
-| Agent resource | Description | When required|
-|||--|
-|`aka.ms`|Used to resolve the download script during installation|At installation time, only|
-|`download.microsoft.com`|Used to download the Windows installation package|At installation time, only|
-|`packages.microsoft.com`|Used to download the Linux installation package|At installation time, only|
-|`login.chinacloudapi.cn`|Azure Active Directory|Always|
-|`login.partner.chinacloudapi.cn`|Azure Active Directory|Always|
-|`pas.chinacloudapi.cn`|Azure Active Directory|Always|
-|`management.chinacloudapi.cn`|Azure Resource Manager - to create or delete the Arc server resource|When connecting or disconnecting a server, only|
-|`*.his.arc.azure.cn`|Metadata and hybrid identity services|Always|
-|`*.guestconfiguration.azure.cn`| Extension management and guest configuration services |Always|
-|`guestnotificationservice.azure.cn`, `*.guestnotificationservice.azure.cn`|Notification service for extension and connectivity scenarios|Always|
-|`azgn*.servicebus.chinacloudapi.cn`|Notification service for extension and connectivity scenarios|Always|
-|`*.servicebus.chinacloudapi.cn`|For Windows Admin Center and SSH scenarios|If using SSH or Windows Admin Center from Azure|
-|`*.blob.core.chinacloudapi.cn`|Download source for Azure Arc-enabled servers extensions|Always, except when using private endpoints|
-|`dc.applicationinsights.azure.cn`|Agent telemetry|Optional, not used in agent versions 1.24+|
---
-## Transport Layer Security 1.2 protocol
-
-To ensure the security of data in transit to Azure, we strongly encourage you to configure your machines to use Transport Layer Security (TLS) 1.2. Older versions of TLS/Secure Sockets Layer (SSL) have been found to be vulnerable; while they still work for backward compatibility, they are **not recommended**.
-
-|Platform/Language | Support | More Information |
-| | | |
-|Linux | Linux distributions tend to rely on [OpenSSL](https://www.openssl.org) for TLS 1.2 support. | Check the [OpenSSL Changelog](https://www.openssl.org/news/changelog.html) to confirm your version of OpenSSL is supported.|
-| Windows Server 2012 R2 and higher | Supported, and enabled by default. | Confirm that you're still using the [default settings](/windows-server/security/tls/tls-registry-settings).|
## Next steps * Review additional [prerequisites for deploying the Connected Machine agent](prerequisites.md). * Before you deploy the Azure Arc-enabled servers agent and integrate with other Azure management and monitoring services, review the [Planning and deployment guide](plan-at-scale-deployment.md). * To resolve problems, review the [agent connection issues troubleshooting guide](troubleshoot-agent-onboard.md).
+* For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
azure-arc Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/system-center-virtual-machine-manager/overview.md
Azure Arc-enabled SCVMM (preview) is currently supported in the following region
The following firewall URL exceptions are needed for the Azure Arc resource bridge VM: +
+In addition, SCVMM requires the following exception:
+ | **Service** | **Port** | **URL** | **Direction** | **Notes**| | | | | | |
-| Microsoft container registry | 443 | `https://mcr.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images for installation. |
-| Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Manages identity and access control for Azure resources |
-| Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used for Kubernetes cluster configuration. |
-| Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and control plane endpoint need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-| Guest Notification service | 443 | `https://guestnotificationservice.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
-| SFS API endpoint | 443 | `msk8s.api.cdp.microsoft.com` | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
-| Resource bridge (appliance) Data plane service | 443 | `https://*.dp.prod.appliances.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Communicate with resource provider in Azure. |
-| Resource bridge (appliance) container image download | 443 | `*.blob.core.windows.net`, `https://ecpacr.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
-| Resource bridge (appliance) image download | 80 | `*.dl.delivery.mp.microsoft.com` | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Download the Arc resource bridge OS images. |
-| Azure Arc for K8s container image download | 443 | `https://azurearcfork8sdev.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
-| ADHS telemetry service | 443 | `adhs.events.data.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. |
-| Microsoft events data service | 443 | `v20.events.data.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
| SCVMM management Server | 443 | URL of the SCVMM management server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the SCVMM server to communicate with the Appliance VM and the control plane. | +
+For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
+ ## Next steps [See how to create a Azure Arc VM](create-virtual-machine.md)
azure-arc Support Matrix For Arc Enabled Vmware Vsphere https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/vmware-vsphere/support-matrix-for-arc-enabled-vmware-vsphere.md
For Arc-enabled VMware vSphere, resource bridge has the following minimum virtua
### Resource bridge networking requirements + The following firewall URL exceptions are needed for the Azure Arc resource bridge VM: +
+In addition, VMware vSphere requires the following exception:
+ | **Service** | **Port** | **URL** | **Direction** | **Notes**| | | | | | |
-| Microsoft container registry | 443 | `https://mcr.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images for installation. |
-| Azure Arc Identity service | 443 | `https://*.his.arc.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Manages identity and access control for Azure resources |
-| Azure Arc configuration service | 443 | `https://*.dp.kubernetesconfiguration.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used for Kubernetes cluster configuration. |
-| Cluster connect service | 443 | `https://*.servicebus.windows.net` | Appliance VM IP and control plane endpoint need outbound connection. | Provides cloud-enabled communication to connect on-premises resources with the cloud. |
-| Guest Notification service | 443 | `https://guestnotificationservice.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used to connect on-premises resources to Azure. |
-| SFS API endpoint | 443 | `msk8s.api.cdp.microsoft.com` | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Used when downloading product catalog, product bits, and OS images from SFS. |
-| Resource bridge (appliance) Data plane service | 443 | `https://*.dp.prod.appliances.azure.com` | Appliance VM IP and control plane endpoint need outbound connection. | Communicate with resource provider in Azure. |
-| Resource bridge (appliance) container image download | 443 | `*.blob.core.windows.net`, `https://ecpacr.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
-| Resource bridge (appliance) image download | 80 | `*.dl.delivery.mp.microsoft.com` | Host machine, Appliance VM IP and control plane endpoint need outbound connection. | Download the Arc resource bridge OS images. |
-| Azure Arc for K8s container image download | 443 | `https://azurearcfork8sdev.azurecr.io` | Appliance VM IP and control plane endpoint need outbound connection. | Required to pull container images. |
-| ADHS telemetry service | 443 | `adhs.events.data.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. Runs inside the appliance/mariner OS. | Used periodically to send Microsoft required diagnostic data from control plane nodes. Used when telemetry is coming off Mariner, which would mean any K8s control plane. |
-| Microsoft events data service | 443 | `v20.events.data.microsoft.com` | Appliance VM IP and control plane endpoint need outbound connection. | Used periodically to send Microsoft required diagnostic data from the Azure Stack HCI or Windows Server host. Used when telemetry is coming off Windows like Windows Server or HCI. |
| vCenter Server | 443 | URL of the vCenter server | Appliance VM IP and control plane endpoint need outbound connection. | Used by the vCenter server to communicate with the Appliance VM and the control plane.|
+For a complete list of network requirements for Azure Arc features and Azure Arc-enabled services, see [Azure Arc network requirements (Consolidated)](../network-requirements-consolidated.md).
+ ## Azure role/permission requirements The minimum Azure roles required for operations related to Arc-enabled VMware vSphere are as follows:
azure-cache-for-redis Cache How To Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-upgrade.md
# How to upgrade an existing Redis 4 cache to Redis 6 > [!IMPORTANT]
-> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches after January 20, 2023.
+> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023.
Azure Cache for Redis supports upgrading the version of your Azure Cache for Redis from Redis 4 to Redis 6. Upgrading is similar to regular monthly maintenance. Upgrading follows the same pattern as maintenance: First, the Redis version on the replica node is updated, followed by an update to the primary node. Your client application should treat the upgrade operation exactly like a planned maintenance event.
Before you upgrade, check the Redis version of a cache by selecting **Properties
## Upgrade using the Azure portal > [!IMPORTANT]
-> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches after January 20, 2023.
+> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023.
## Upgrade using Azure CLI > [!IMPORTANT]
-> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches after January 20, 2023.
+> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023.
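+Once upgrades are re-enabled, the CLI upgrade is expected to take the following shape; a sketch, assuming a cache named `myCache` in resource group `myGroup`:
+
+```azurecli
+az redis update --name myCache --resource-group myGroup --set redisVersion=6
+```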
## Upgrade using PowerShell > [!IMPORTANT]
-> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches after January 20, 2023.
+> We are improving the upgrade experience and have temporarily disabled the Redis version upgrade. We recommend that you upgrade your caches starting in February 2023.
## Next steps
azure-functions Dotnet Isolated Process Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/dotnet-isolated-process-guide.md
Title: Guide for running C# Azure Functions in an isolated worker process
description: Learn how to use a .NET isolated worker process to run your C# functions in Azure, which supports non-LTS versions of .NET and .NET Framework apps. Previously updated : 11/01/2022 Last updated : 01/16/2023 recommendations: false #Customer intent: As a developer, I need to know how to create functions that run in an isolated worker process so that I can run my function code on current (not LTS) releases of .NET.
The following example performs clean-up actions if a cancellation request has be
You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the effect of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
-ReadyToRun is available in .NET 3.1, .NET 6 (both in-process and isolated worker process), and .NET 7, and it requires [version 3.0 or later](functions-versions.md) of the Azure Functions runtime.
+ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 or later](functions-versions.md) of the Azure Functions runtime.
To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following is the configuration for publishing to a Windows 32-bit function app.
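A minimal sketch of those project file elements (the target framework shown here is an assumption; use your app's own):

```xml
<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <AzureFunctionsVersion>v4</AzureFunctionsVersion>
  <!-- win-x86 targets a Windows 32-bit function app -->
  <RuntimeIdentifier>win-x86</RuntimeIdentifier>
  <PublishReadyToRun>true</PublishReadyToRun>
</PropertyGroup>
```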
azure-functions Durable Functions Storage Providers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/durable/durable-functions-storage-providers.md
The source code for the DTFx components of the Azure Storage storage provider ca
> [!NOTE] > Standard general purpose Azure Storage accounts are required when using the Azure Storage provider. All other storage account types are not supported. We highly recommend using legacy v1 general purpose storage accounts because the newer v2 storage accounts can be significantly more expensive for Durable Functions workloads. For more information on Azure Storage account types, see the [Storage account overview](../../storage/common/storage-account-overview.md) documentation.
-## <a name="netherite"></a>Netherite (preview)
+## <a name="netherite"></a>Netherite
The Netherite storage backend was designed and developed by [Microsoft Research](https://www.microsoft.com/research). It uses [Azure Event Hubs](../../event-hubs/event-hubs-about.md) and the [FASTER](https://www.microsoft.com/research/project/faster/) database technology on top of [Azure Page Blobs](../../storage/blobs/storage-blob-pageblob-overview.md). The design of Netherite enables significantly higher-throughput processing of orchestrations and entities compared to other providers. In some benchmark scenarios, throughput was shown to increase by more than an order of magnitude when compared to the default Azure Storage provider.
You can learn more about the technical details of the Netherite storage provider
> [!NOTE] > The _Netherite_ name originates from the world of [Minecraft](https://minecraft.fandom.com/wiki/Netherite).
-## <a name="mssql"></a>Microsoft SQL Server (MSSQL) (preview)
+## <a name="mssql"></a>Microsoft SQL Server (MSSQL)
The Microsoft SQL Server (MSSQL) storage provider persists all state into a Microsoft SQL Server database. It's compatible with both on-premises and cloud-hosted deployments of SQL Server, including [Azure SQL Database](/azure/azure-sql/database/sql-database-paas-overview).
For more detailed setup instructions, see the [Netherite getting started documen
To use the MSSQL storage provider, you must first add a reference to the [Microsoft.DurableTask.SqlServer.AzureFunctions](https://www.nuget.org/packages/Microsoft.DurableTask.SqlServer.AzureFunctions) NuGet package in your **csproj** file (.NET apps) or your **extensions.proj** file (JavaScript, Python, and PowerShell apps).
-> [!NOTE]
-> The MSSQL storage provider is not yet supported in apps that use [extension bundles](../functions-bindings-register.md#extension-bundles).
- The following example shows the minimum configuration required to enable the MSSQL storage provider. ```json
There are many significant tradeoffs between the various supported storage provi
| Storage provider | Azure Storage | Netherite | MSSQL | |- |- |- |- |
-| Official support status | ✅ Generally available (GA) | ⚠ Public preview | ⚠ Public preview |
+| Official support status | ✅ Generally available (GA) | ✅ Generally available (GA) | ✅ Generally available (GA) |
| External dependencies | Azure Storage account (general purpose v1) | Azure Event Hubs<br/>Azure Storage account (general purpose) | [SQL Server 2019](https://www.microsoft.com/sql-server/sql-server-2019) or Azure SQL Database | | Local development and emulation options | [Azurite v3.12+](../../storage/common/storage-use-azurite.md) (cross platform) | Supports in-memory emulation of task hubs ([more information](https://microsoft.github.io/durabletask-netherite/#/emulation)) | SQL Server Developer Edition (supports [Windows](/sql/database-engine/install-windows/install-sql-server), [Linux](/sql/linux/sql-server-linux-setup), and [Docker containers](/sql/linux/sql-server-linux-docker-container-deployment)) | | Task hub configuration | Explicit | Explicit | Implicit by default ([more information](https://microsoft.github.io/durabletask-mssql/#/taskhubs)) | | Maximum throughput | Moderate | Very high | Moderate | | Maximum orchestration/entity scale-out (nodes) | 16 | 32 | N/A | | Maximum activity scale-out (nodes) | N/A | 32 | N/A |
-| Consumption plan support | ✅ Fully supported | ❌ Not supported | ❌ Not supported |
-| Elastic Premium plan support | ✅ Fully supported | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) | ⚠ Requires [runtime scale monitoring](../functions-networking-options.md#premium-plan-with-virtual-network-triggers) |
| [KEDA 2.0](https://keda.sh/) scaling support<br/>([more information](../functions-kubernetes-keda.md)) | ❌ Not supported | ❌ Not supported | ✅ Supported using the [MSSQL scaler](https://keda.sh/docs/scalers/mssql/) ([more information](https://microsoft.github.io/durabletask-mssql/#/scaling)) |
-| Support for [extension bundles](../functions-bindings-register.md#extension-bundles) (recommended for non-.NET apps) | ✅ Fully supported | ❌ Not supported | ❌ Not supported |
+| Support for [extension bundles](../functions-bindings-register.md#extension-bundles) (recommended for non-.NET apps) | ✅ Fully supported | ✅ Fully supported | ✅ Fully supported |
| Price-performance configurable? | ❌ No | ✅ Yes (Event Hubs TUs and CUs) | ✅ Yes (SQL vCPUs) | | Managed Identity Support | ✅ Fully supported | ❌ Not supported | ⚠️ Requires runtime-driven scaling | | Disconnected environment support | ❌ Azure connectivity required | ❌ Azure connectivity required | ✅ Fully supported |
azure-functions Functions Bindings Azure Sql Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-input.md
For information on setup and configuration details, see the [overview](./functio
::: zone pivot="programming-language-csharp"
-More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
# [In-process](#tab/in-process)
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
+ This section contains the following examples: * [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c)
The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In t
# [Isolated process](#tab/isolated-process)
-Isolated worker process isn't currently supported.
+More samples for the Azure SQL input binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
+
+This section contains the following examples:
+
+* [HTTP trigger, get row by ID from query string](#http-trigger-look-up-id-from-query-string-c-oop)
+* [HTTP trigger, get multiple rows from route data](#http-trigger-get-multiple-items-from-route-data-c-oop)
+* [HTTP trigger, delete rows](#http-trigger-delete-one-or-multiple-rows-c-oop)
+
+The examples refer to a `ToDoItem` class and a corresponding database table:
+++
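+As a sketch, a `ToDoItem` class consistent with the queries in these examples might look like the following (property names are inferred from the samples):
+
+```cs
+public class ToDoItem
+{
+    public Guid Id { get; set; }
+    public int? order { get; set; }
+    public string title { get; set; }
+    public string url { get; set; }
+    public bool? completed { get; set; }
+}
+```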
+<a id="http-trigger-look-up-id-from-query-string-c-oop"></a>
+### HTTP trigger, get row by ID from query string
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves a single record. The function is triggered by an HTTP request that uses a query string to specify the ID. That ID is used to retrieve a `ToDoItem` record with the specified query.
+
+> [!NOTE]
+> The HTTP query string parameter is case-sensitive.
+>
+
+```cs
+using System.Collections.Generic;
+using System.Linq;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.Functions.Worker.Http;
+
+namespace AzureSQLSamples
+{
+ public static class GetToDoItem
+ {
+        [Function("GetToDoItem")]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitem")]
+ HttpRequest req,
+ [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where Id = @Id",
+ CommandType = System.Data.CommandType.Text,
+ Parameters = "@Id={Query.id}",
+ ConnectionStringSetting = "SqlConnectionString")]
+ IEnumerable<ToDoItem> toDoItem)
+ {
+ return new OkObjectResult(toDoItem.FirstOrDefault());
+ }
+ }
+}
+```
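+Assuming local execution on the default Functions port, the function can be invoked with a query string ID (the GUID is a placeholder):
+
+```console
+curl "http://localhost:7071/api/gettodoitem?id=722cab33-b0e4-4a46-b4a9-ee5ed4b2e201"
+```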
+
+<a id="http-trigger-get-multiple-items-from-route-data-c-oop"></a>
+### HTTP trigger, get multiple rows from route parameter
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that retrieves documents returned by the query. The function is triggered by an HTTP request that uses route data to specify the value of a query parameter. That parameter is used to filter the `ToDoItem` records in the specified query.
+
+```cs
+using System.Collections.Generic;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.Functions.Worker.Http;
+
+namespace AzureSQLSamples
+{
+ public static class GetToDoItems
+ {
+        [Function("GetToDoItems")]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "get", Route = "gettodoitems/{priority}")]
+ HttpRequest req,
+ [Sql("select [Id], [order], [title], [url], [completed] from dbo.ToDo where [Priority] > @Priority",
+ CommandType = System.Data.CommandType.Text,
+ Parameters = "@Priority={priority}",
+ ConnectionStringSetting = "SqlConnectionString")]
+ IEnumerable<ToDoItem> toDoItems)
+ {
+ return new OkObjectResult(toDoItems);
+ }
+ }
+}
+```
+
+<a id="http-trigger-delete-one-or-multiple-rows-c-oop"></a>
+### HTTP trigger, delete rows
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that executes a stored procedure with input from the HTTP request query parameter.
+
+The stored procedure `dbo.DeleteToDo` must be created on the SQL database. In this example, the stored procedure deletes a single record or all records depending on the value of the parameter.
++
+```cs
+namespace AzureSQL.ToDo
+{
+ public static class DeleteToDo
+ {
+ // delete all items or a specific item from querystring
+ // returns remaining items
+ // uses input binding with a stored procedure DeleteToDo to delete items and return remaining items
+        [Function("DeleteToDo")]
+ public static IActionResult Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "delete", Route = "DeleteFunction")] HttpRequest req,
+ ILogger log,
+ [Sql("DeleteToDo", CommandType = System.Data.CommandType.StoredProcedure,
+ Parameters = "@Id={Query.id}", ConnectionStringSetting = "SqlConnectionString")]
+ IEnumerable<ToDoItem> toDoItems)
+ {
+ return new OkObjectResult(toDoItems);
+ }
+ }
+}
+```
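+Assuming local execution on the default Functions port, deleting a single item might look like this (the GUID is a placeholder):
+
+```console
+curl -X DELETE "http://localhost:7071/api/DeleteFunction?id=722cab33-b0e4-4a46-b4a9-ee5ed4b2e201"
+```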
<!-- Uncomment to support C# script examples. # [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Azure Sql Output https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql-output.md
For information on setup and configuration details, see the [overview](./functio
::: zone pivot="programming-language-csharp"
-More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
# [In-process](#tab/in-process)
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-csharp).
+ This section contains the following examples: * [HTTP trigger, write one record](#http-trigger-write-one-record-c)
namespace AzureSQLSamples
# [Isolated process](#tab/isolated-process)
-Isolated worker process isn't currently supported.
+More samples for the Azure SQL output binding are available in the [GitHub repository](https://github.com/Azure/azure-functions-sql-extension/tree/main/samples/samples-outofproc).
+
+This section contains the following examples:
+
+* [HTTP trigger, write one record](#http-trigger-write-one-record-c-oop)
+* [HTTP trigger, write to two tables](#http-trigger-write-to-two-tables-c-oop)
+* [HTTP trigger, write records using IAsyncCollector](#http-trigger-write-records-using-iasynccollector-c-oop)
+
+The examples refer to a `ToDoItem` class and a corresponding database table:
++++
+<a id="http-trigger-write-one-record-c-oop"></a>
+
+### HTTP trigger, write one record
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a record to a database, using data provided in an HTTP POST request as a JSON body.
++
+<a id="http-trigger-write-to-two-tables-c-oop"></a>
+
+### HTTP trigger, write to two tables
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds records to a database in two different tables (`dbo.ToDo` and `dbo.RequestLog`), using data provided in an HTTP POST request as a JSON body and multiple output bindings.
+
+```sql
+CREATE TABLE dbo.RequestLog (
+ Id int identity(1,1) primary key,
+ RequestTimeStamp datetime2 not null,
+ ItemCount int not null
+)
+```
++
+```cs
+using System;
+using System.Collections.Generic;
+using System.IO;
+using System.Threading.Tasks;
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Http;
+using Microsoft.Extensions.Logging;
+using Newtonsoft.Json;
+
+namespace AzureSQL.ToDo
+{
+ public static class PostToDo
+ {
+ // create a new ToDoItem from body object
+ // uses output binding to insert new item into ToDo table
+        [Function("PostToDo")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "PostFunction")] HttpRequest req,
+ ILogger log,
+ [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> toDoItems,
+ [Sql("dbo.RequestLog", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<RequestLog> requestLogs)
+ {
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ ToDoItem toDoItem = JsonConvert.DeserializeObject<ToDoItem>(requestBody);
+
+ // generate a new id for the todo item
+ toDoItem.Id = Guid.NewGuid();
+
+ // set Url from env variable ToDoUri
+ toDoItem.url = Environment.GetEnvironmentVariable("ToDoUri")+"?id="+toDoItem.Id.ToString();
+
+ // if completed is not provided, default to false
+ if (toDoItem.completed == null)
+ {
+ toDoItem.completed = false;
+ }
+
+ await toDoItems.AddAsync(toDoItem);
+ await toDoItems.FlushAsync();
+ List<ToDoItem> toDoItemList = new List<ToDoItem> { toDoItem };
+
+            RequestLog requestLog = new RequestLog();
+ requestLog.RequestTimeStamp = DateTime.Now;
+ requestLog.ItemCount = 1;
+ await requestLogs.AddAsync(requestLog);
+ await requestLogs.FlushAsync();
+
+ return new OkObjectResult(toDoItemList);
+ }
+ }
+
+ public class RequestLog {
+ public DateTime RequestTimeStamp { get; set; }
+ public int ItemCount { get; set; }
+ }
+}
+```
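+Assuming local execution on the default Functions port, a request that exercises both output bindings might look like this:
+
+```console
+curl -X POST -H "Content-Type: application/json" \
+     -d '{"order":1,"title":"walk the dog","completed":false}' \
+     http://localhost:7071/api/PostFunction
+```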
+
+<a id="http-trigger-write-records-using-iasynccollector-c-oop"></a>
+
+### HTTP trigger, write records using IAsyncCollector
+
+The following example shows a [C# function](functions-dotnet-class-library.md) that adds a collection of records to a database, using data provided in an HTTP POST body JSON array.
+
+```cs
+using Microsoft.AspNetCore.Http;
+using Microsoft.AspNetCore.Mvc;
+using Microsoft.Azure.Functions.Worker;
+using Microsoft.Azure.Functions.Worker.Extensions.Sql;
+using Microsoft.Azure.Functions.Worker.Http;
+using Newtonsoft.Json;
+using System.IO;
+using System.Threading.Tasks;
+
+namespace AzureSQLSamples
+{
+ public static class WriteRecordsAsync
+ {
+        [Function("WriteRecordsAsync")]
+ public static async Task<IActionResult> Run(
+ [HttpTrigger(AuthorizationLevel.Anonymous, "post", Route = "addtodo-asynccollector")]
+ HttpRequest req,
+ [Sql("dbo.ToDo", ConnectionStringSetting = "SqlConnectionString")] IAsyncCollector<ToDoItem> newItems)
+ {
+ string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
+ var incomingItems = JsonConvert.DeserializeObject<ToDoItem[]>(requestBody);
+ foreach (ToDoItem newItem in incomingItems)
+ {
+ await newItems.AddAsync(newItem);
+ }
+ // Rows are upserted here
+ await newItems.FlushAsync();
+
+ return new CreatedResult($"/api/addtodo-asynccollector", "done");
+ }
+ }
+}
+```
<!-- Uncomment to support C# script examples. # [C# Script](#tab/csharp-script)
azure-functions Functions Bindings Azure Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-azure-sql.md
Functions execute in the same process as the Functions host. To learn more, see
Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Extensions.Sql).
+```bash
+dotnet add package Microsoft.Azure.WebJobs.Extensions.Sql --prerelease
+```
+ # [Isolated process](#tab/isolated-process) Functions execute in an isolated C# worker process. To learn more, see [Guide for running C# Azure Functions in an isolated worker process](dotnet-isolated-process-guide.md).
-> [!NOTE]
-> In the current preview, Azure SQL bindings aren't supported when your function app runs in an isolated worker process.
+Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.Sql/).
-<!--
-Add the extension to your project by installing this [NuGet package](https://www.nuget.org/packages/Microsoft.Azure.Functions.Worker.Extensions.SignalRService/).
>
+```bash
+dotnet add package Microsoft.Azure.Functions.Worker.Extensions.Sql --prerelease
+```
<!-- awaiting bundle support # [C# script](#tab/csharp-script)
azure-functions Functions Bindings Error Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-error-pages.md
Title: Azure Functions error handling and retry guidance
-description: Learn to handle errors and retry events in Azure Functions with links to specific binding errors, including information on retry policies.
-
+description: Learn how to handle errors and retry events in Azure Functions, with links to specific binding errors, including information on retry policies.
Last updated 01/03/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
zone_pivot_groups: programming-languages-set-functions-lang-workers
# Azure Functions error handling and retries
-Handling errors in Azure Functions is important to avoid lost data, missed events, and to monitor the health of your application. It's also important to understand the retry behaviors of event-based triggers.
+Handling errors in Azure Functions is important to help you avoid lost data, avoid missed events, and monitor the health of your application. It's also an important way to help you understand the retry behaviors of event-based triggers.
This article describes general strategies for error handling and the available retry strategies. > [!IMPORTANT]
-> The retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs is being removed after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs will be removed in December 2022. For more information, see the [Retries section below](#retries).
+> We're removing retry policy support in the runtime for triggers other than Timer, Kafka, and Event Hubs after this feature becomes generally available (GA). Preview retry policy support for all triggers other than Timer and Event Hubs was removed in December 2022. For more information, see the [Retries](#retries) section.
## Handling errors
-Errors raised in an Azure Functions can come from any of the following origins:
+Errors that occur in an Azure function can result from any of the following:
-- Use of built-in Azure Functions [triggers and bindings](functions-triggers-bindings.md).-- Calls to APIs of underlying Azure services.-- Calls to REST endpoints.-- Calls to client libraries, packages, or third-party APIs.
+- Use of built-in Azure Functions [triggers and bindings](functions-triggers-bindings.md)
+- Calls to APIs of underlying Azure services
+- Calls to REST endpoints
+- Calls to client libraries, packages, or third-party APIs
-Good error handling practices are important to avoid loss of data or missed messages. This section describes some recommended error handling practices with links to more information.
+To avoid loss of data or missed messages, it's important to practice good error handling. This section describes some recommended error-handling practices and provides links to more information.
### Enable Application Insights
-Azure Functions integrates with Application Insights to collect error data, performance data, and runtime logs. You should use Application Insights to discover and better understand errors occurring in your function executions. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
+Azure Functions integrates with Application Insights to collect error data, performance data, and runtime logs. You should use Application Insights to discover and better understand errors that occur in your function executions. To learn more, see [Monitor Azure Functions](functions-monitoring.md).
### Use structured error handling
Capturing and logging errors is critical to monitoring the health of your applic
### Plan your retry strategy
-Several Functions bindings extensions provide built-in support for retries. In addition, the runtime lets you define retry policies for Timer, Kafka, and Event Hubs triggered functions. To learn more, see [Retries](#retries). For triggers that don't provide retry behaviors, you may want to implement your own retry scheme.
+Several Functions bindings extensions provide built-in support for retries. In addition, the runtime lets you define retry policies for Timer, Kafka, and Event Hubs-triggered functions. To learn more, see [Retries](#retries). For triggers that don't provide retry behaviors, you might want to implement your own retry scheme.
### Design for idempotency
-The occurrence of errors when processing data can be a problem for your functions, especially when processing messages. You need to consider what happens when the error occurs and how to avoid duplicate processing. To learn more, see [Designing Azure Functions for identical input](functions-idempotent.md).
+The occurrence of errors when you're processing data can be a problem for your functions, especially when you're processing messages. It's important to consider what happens when the error occurs and how to avoid duplicate processing. To learn more, see [Designing Azure Functions for identical input](functions-idempotent.md).
## Retries
-There are two kinds of retries available for your functions: built-in retry behaviors of individual trigger extensions and retry policies. The following table indicates which triggers support retries and where the retry behavior is configured. It also links to more information about errors coming from the underlying services.
+There are two kinds of retries available for your functions:
+* Built-in retry behaviors of individual trigger extensions
+* Retry policies provided by the Functions runtime
+
+The following table indicates which triggers support retries and where the retry behavior is configured. It also links to more information about errors that come from the underlying services.
| Trigger/binding | Retry source | Configuration |
| - | - | -- |
| Azure Cosmos DB | [Retry policies](#retry-policies) | Function-level |
-| Blob Storage | [Binding extension](functions-bindings-storage-blob-trigger.md#poison-blobs) | [host.json](functions-bindings-storage-queue.md#host-json) |
-| Event Grid | [Binding extension](../event-grid/delivery-and-retry.md) | Event subscription |
-| Event Hubs | [Retry policies](#retry-policies) | Function-level |
-| Queue Storage | [Binding extension](functions-bindings-storage-queue-trigger.md#poison-messages) | [host.json](functions-bindings-storage-queue.md#host-json) |
+| Azure Blob Storage | [Binding extension](functions-bindings-storage-blob-trigger.md#poison-blobs) | [host.json](functions-bindings-storage-queue.md#host-json) |
+| Azure Event Grid | [Binding extension](../event-grid/delivery-and-retry.md) | Event subscription |
+| Azure Event Hubs | [Retry policies](#retry-policies) | Function-level |
+| Azure Queue Storage | [Binding extension](functions-bindings-storage-queue-trigger.md#poison-messages) | [host.json](functions-bindings-storage-queue.md#host-json) |
| RabbitMQ | [Binding extension](functions-bindings-rabbitmq-trigger.md#dead-letter-queues) | [Dead letter queue](https://www.rabbitmq.com/dlx.html) |
-| Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](../service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) |
+| Azure Service Bus | [Binding extension](../service-bus-messaging/service-bus-dead-letter-queues.md) | [Dead letter queue](../service-bus-messaging/service-bus-dead-letter-queues.md#maximum-delivery-count) |
|Timer | [Retry policies](#retry-policies) | Function-level |
|Kafka | [Retry policies](#retry-policies) | Function-level |

### Retry policies
-Starting with version 3.x of the Azure Functions runtime, you can define a retry policies for Timer, Kafka, and Event Hubs triggers that are enforced by the Functions runtime. The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
+Starting with version 3.x of the Azure Functions runtime, you can define retry policies for Timer, Kafka, and Event Hubs triggers that are enforced by the Functions runtime.
+
+The retry policy tells the runtime to rerun a failed execution until either successful completion occurs or the maximum number of retries is reached.
-A retry policy is evaluated when a Timer, Kafka, or Event Hubs triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. Event Hubs checkpoints won't be written until the retry policy for the execution has completed. Because of this behavior, progress on the specific partition is paused until the current batch has completed.
+A retry policy is evaluated when a Timer, Kafka, or Event Hubs-triggered function raises an uncaught exception. As a best practice, you should catch all exceptions in your code and rethrow any errors that you want to result in a retry. Event Hubs checkpoints won't be written until the retry policy for the execution has finished. Because of this behavior, progress on the specific partition is paused until the current batch has finished.
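To illustrate that catch-and-rethrow practice, here's a minimal C# sketch (not the article's own sample); `ProcessEvent` and `IsTransient` are hypothetical helpers, and the hub and connection names are placeholders:

```csharp
using System;
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class SelectiveRetryExample
{
    [FunctionName("SelectiveRetry")]
    public static void Run(
        [EventHubTrigger("myHub", Connection = "EventHubConnection")] string[] events,
        ILogger log)
    {
        foreach (var ev in events)
        {
            try
            {
                ProcessEvent(ev); // hypothetical business logic
            }
            catch (Exception ex) when (IsTransient(ex)) // hypothetical classifier
            {
                // Rethrow so the retry policy reruns the execution.
                throw;
            }
            catch (Exception ex)
            {
                // Log permanent failures instead of triggering pointless retries.
                log.LogError(ex, "Permanent failure for event {Event}", ev);
            }
        }
    }

    private static void ProcessEvent(string ev) { /* ... */ }
    private static bool IsTransient(Exception ex) => ex is TimeoutException;
}
```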
#### Retry strategies
-There are two retry strategies supported by policy that you can configure:
+You can configure two retry strategies that are supported by policy:
# [Fixed delay](#tab/fixed-delay)
The first retry waits for the minimum delay. On subsequent retries, time is adde
#### Max retry counts
-You can configure the maximum number of times function execution is retried before eventual failure. The current retry count is stored in memory of the instance. It's possible that an instance has a failure between retry attempts. When an instance fails during a retry policy, the retry count is lost. When there are instance failures, the Event Hubs trigger is able to resume processing and retry the batch on a new instance, with the retry count reset to zero. Timer trigger doesn't resume on a new instance. This behavior means that the max retry count is a best effort, and in some rare cases an execution could be retried more than the maximum. For Timer triggers, the retries can be less than the maximum requested.
+You can configure the maximum number of times that a function execution is retried before eventual failure. The current retry count is stored in memory of the instance.
+
+It's possible for an instance to have a failure between retry attempts. When an instance fails during a retry policy, the retry count is lost. When there are instance failures, the Event Hubs trigger is able to resume processing and retry the batch on a new instance, with the retry count reset to zero. The timer trigger doesn't resume on a new instance.
+
+This behavior means that the maximum retry count is a best effort. In some rare cases, an execution could be retried more than the requested maximum number of times. For Timer triggers, the retries can be less than the maximum number requested.
#### Retry examples
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
|Property | Description |
||-|
|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|DelayInterval|The delay that is used between retries. Specify as a string with the format `HH:mm:ss`.|
+|DelayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.|
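As a sketch, an in-process function applies these values through the `FixedDelayRetry` attribute; the trigger, hub, and connection names here are placeholders:

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class FixedDelayRetryExample
{
    [FunctionName("FixedDelayRetryExample")]
    [FixedDelayRetry(5, "00:00:10")] // up to 5 retries, 10 seconds apart
    public static void Run(
        [EventHubTrigger("myHub", Connection = "EventHubConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Processing message: {Message}", message);
    }
}
```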
# [Isolated process](#tab/isolated-process/fixed-delay)
-Retry policies aren't yet supported when running in an isolated worker process.
+Retry policies aren't yet supported when your function app runs in an isolated worker process.
-# [C# Script](#tab/csharp-script/fixed-delay)
+# [C# script](#tab/csharp-script/fixed-delay)
Here's the retry policy in the *function.json* file:
}
```
-|function.json property | Description |
+|*function.json*&nbsp;property | Description |
||-|
|strategy|Use `fixedDelay`.|
|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|delayInterval|The delay that is used between retries. Specify as a string with the format `HH:mm:ss`.|
+|delayInterval|The delay that's used between retries. Specify it as a string with the format `HH:mm:ss`.|
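Pieced together from the properties in this table, the `retry` section of a fixed-delay *function.json* file would look something like this sketch (the binding shown is a placeholder):

```json
{
  "bindings": [
    {
      "type": "eventHubTrigger",
      "direction": "in",
      "name": "message",
      "eventHubName": "myHub",
      "connection": "EventHubConnection"
    }
  ],
  "retry": {
    "strategy": "fixedDelay",
    "maxRetryCount": 5,
    "delayInterval": "00:00:10"
  }
}
```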
# [In-process](#tab/in-process/exponential-backoff)
public static async Task Run([EventHubTrigger("myHub", Connection = "EventHubCon
|Property | Description |
||-|
|MaxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|MinimumInterval|The minimum retry delay. Specify as a string with the format `HH:mm:ss`.|
-|MaximumInterval|The maximum retry delay. Specify as a string with the format `HH:mm:ss`.|
+|MinimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|
+|MaximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.|
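As a sketch, the in-process `ExponentialBackoffRetry` attribute takes the three values from this table (the trigger names are placeholders):

```csharp
using Microsoft.Azure.WebJobs;
using Microsoft.Extensions.Logging;

public static class ExponentialBackoffRetryExample
{
    [FunctionName("ExponentialBackoffRetryExample")]
    [ExponentialBackoffRetry(5, "00:00:04", "00:15:00")] // 5 retries, 4 s minimum delay, 15 min maximum delay
    public static void Run(
        [EventHubTrigger("myHub", Connection = "EventHubConnection")] string message,
        ILogger log)
    {
        log.LogInformation("Processing message: {Message}", message);
    }
}
```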
# [Isolated process](#tab/isolated-process/exponential-backoff)
-Retry policies aren't yet supported when running in an isolated worker process.
+Retry policies aren't yet supported when your function app runs in an isolated worker process.
-# [C# Script](#tab/csharp-script/exponential-backoff)
+# [C# script](#tab/csharp-script/exponential-backoff)
Here's the retry policy in the *function.json* file:
}
```
-|function.json property | Description |
+|*function.json*&nbsp;property | Description |
||-|
|strategy|Use `exponentialBackoff`.|
|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|minimumInterval|The minimum retry delay. Specify as a string with the format `HH:mm:ss`.|
-|maximumInterval|The maximum retry delay. Specify as a string with the format `HH:mm:ss`.|
+|minimumInterval|The minimum retry delay. Specify it as a string with the format `HH:mm:ss`.|
+|maximumInterval|The maximum retry delay. Specify it as a string with the format `HH:mm:ss`.|
::: zone-end
Here's the retry policy in the *function.json* file:
-|function.json property | Description |
+|*function.json* property | Description |
||-|
|strategy|Required. The retry strategy to use. Valid values are `fixedDelay` or `exponentialBackoff`.|
|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|delayInterval|The delay that is used between retries when using a `fixedDelay` strategy. Specify as a string with the format `HH:mm:ss`.|
-|minimumInterval|The minimum retry delay when using an `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
-|maximumInterval|The maximum retry delay when using `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
+|delayInterval|The delay that's used between retries when you're using a `fixedDelay` strategy. Specify it as a string with the format `HH:mm:ss`.|
+|minimumInterval|The minimum retry delay when you're using an `exponentialBackoff` strategy. Specify it as a string with the format `HH:mm:ss`.|
+|maximumInterval|The maximum retry delay when you're using an `exponentialBackoff` strategy. Specify it as a string with the format `HH:mm:ss`.|
::: zone-end

::: zone pivot="programming-language-python"
-Here's a Python sample to use retry context in a function:
+Here's a Python sample that uses the retry context in a function:
```Python
import azure.functions
public void run(
|Element | Description |
||-|
|maxRetryCount|Required. The maximum number of retries allowed per function execution. `-1` means to retry indefinitely.|
-|delayInterval|The delay that is used between retries when using a `fixedDelay` strategy. Specify as a string with the format `HH:mm:ss`.|
-|minimumInterval|The minimum retry delay when using an `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
-|maximumInterval|The maximum retry delay when using `exponentialBackoff` strategy. Specify as a string with the format `HH:mm:ss`.|
+|delayInterval|The delay that's used between retries when you're using a `fixedDelay` strategy. Specify it as a string with the format `HH:mm:ss`.|
+|minimumInterval|The minimum retry delay when you're using an `exponentialBackoff` strategy. Specify it as a string with the format `HH:mm:ss`.|
+|maximumInterval|The maximum retry delay when you're using an `exponentialBackoff` strategy. Specify it as a string with the format `HH:mm:ss`.|
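As an illustration, a Java function could apply these elements through the `@FixedDelayRetry` annotation from `azure-functions-java-library`; this is a sketch with placeholder hub and connection names, not the article's own sample:

```java
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.EventHubTrigger;
import com.microsoft.azure.functions.annotation.FixedDelayRetry;
import com.microsoft.azure.functions.annotation.FunctionName;

public class RetryExample {
    @FunctionName("RetryExample")
    @FixedDelayRetry(maxRetryCount = 4, delayInterval = "00:00:10")
    public void run(
            @EventHubTrigger(name = "message", eventHubName = "myHub", connection = "EventHubConnection") String message,
            final ExecutionContext context) {
        context.getLogger().info("Processing message: " + message);
    }
}
```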
public void run(
## Binding error codes
-When integrating with Azure services, errors may originate from the APIs of the underlying services. Information relating to binding-specific errors is available in the **Exceptions and return codes** section of the following articles:
+When you're integrating with Azure services, errors might originate from the APIs of the underlying services. Information that relates to binding-specific errors is available in the "Exceptions and return codes" sections of the following articles:
+ [Azure Cosmos DB](/rest/api/cosmos-db/http-status-codes-for-cosmosdb)
-+ [Blob storage](functions-bindings-storage-blob-output.md#exceptions-and-return-codes)
++ [Blob Storage](functions-bindings-storage-blob-output.md#exceptions-and-return-codes)
+ [Event Grid](../event-grid/troubleshoot-errors.md)
+ [Event Hubs](functions-bindings-event-hubs-output.md#exceptions-and-return-codes)
-+ [IoT Hubs](functions-bindings-event-iot-output.md#exceptions-and-return-codes)
++ [IoT Hub](functions-bindings-event-iot-output.md#exceptions-and-return-codes)
+ [Notification Hubs](functions-bindings-notification-hubs.md#exceptions-and-return-codes)
-+ [Queue storage](functions-bindings-storage-queue-output.md#exceptions-and-return-codes)
++ [Queue Storage](functions-bindings-storage-queue-output.md#exceptions-and-return-codes)
+ [Service Bus](functions-bindings-service-bus-output.md#exceptions-and-return-codes)
-+ [Table storage](functions-bindings-storage-table-output.md#exceptions-and-return-codes)
++ [Table Storage](functions-bindings-storage-table-output.md#exceptions-and-return-codes)

## Next steps

+ [Azure Functions triggers and bindings concepts](functions-triggers-bindings.md)
-+ [Best practices for reliable Azure Functions](functions-best-practices.md)
++ [Best practices for reliable Azure functions](functions-best-practices.md)
azure-functions Functions Bindings Kafka https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-bindings-kafka.md
Title: Apache Kafka bindings for Azure Functions description: Learn to integrate Azure Functions with an Apache Kafka stream. Previously updated : 05/14/2022 Last updated : 01/12/2023 zone_pivot_groups: programming-languages-set-functions-lang-workers
The following properties, which are inherited from the [Apache Kafka C/C++ clien
|Property | Applies to | librdkafka equivalent |
||||
| AutoCommitIntervalMs | Trigger | `auto.commit.interval.ms` |
+| AutoOffsetReset | Trigger | `auto.offset.reset` |
| FetchMaxBytes | Trigger | `fetch.max.bytes` |
| LibkafkaDebug | Both | `debug` |
| MaxPartitionFetchBytes | Trigger | `max.partition.fetch.bytes` |
azure-functions Functions Dotnet Class Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-dotnet-class-library.md
This article is an introduction to developing Azure Functions by using C# in .NE
As a C# developer, you may also be interested in one of the following articles:

| Getting started | Concepts| Guided learning/samples |
-|--| -- |--|
+|--|--|--|
| <ul><li>[Using Visual Studio](functions-create-your-first-function-visual-studio.md)</li><li>[Using Visual Studio Code](create-first-function-vs-code-csharp.md)</li><li>[Using command line tools](create-first-function-cli-csharp.md)</li></ul> | <ul><li>[Hosting options](functions-scale.md)</li><li>[Performance&nbsp;considerations](functions-best-practices.md)</li><li>[Visual Studio development](functions-develop-vs.md)</li><li>[Dependency injection](functions-dotnet-dependency-injection.md)</li></ul> | <ul><li>[Create serverless applications](/training/paths/create-serverless-applications/)</li><li>[C# samples](/samples/browse/?products=azure-functions&languages=csharp)</li></ul> |

Azure Functions supports C# and C# script programming languages. If you're looking for guidance on [using C# in the Azure portal](functions-create-function-app-portal.md), see [C# script (.csx) developer reference](functions-reference-csharp.md).

[!INCLUDE [functions-dotnet-supported-versions](../../includes/functions-dotnet-supported-versions.md)]
-### Functions v2.x considerations
-
-Function apps that target the latest 2.x version (`~2`) are automatically upgraded to run on .NET Core 3.1. Because of breaking changes between .NET Core versions, not all apps developed and compiled against .NET Core 2.2 can be safely upgraded to .NET Core 3.1. You can opt out of this upgrade by pinning your function app to `~2.0`. Functions also detects incompatible APIs and may pin your app to `~2.0` to prevent incorrect execution on .NET Core 3.1.
-
->[!NOTE]
->If your function app is pinned to `~2.0` and you change this version target to `~2`, your function app may break. If you deploy using ARM templates, check the version in your templates. If this occurs, change your version back to target `~2.0` and fix compatibility issues.
-
-Function apps that target `~2.0` continue to run on .NET Core 2.2. This version of .NET Core no longer receives security and other maintenance updates. To learn more, see [this announcement page](https://github.com/Azure/app-service-announcements/issues/266).
-
-You should work to make your functions compatible with .NET Core 3.1 as soon as possible. After you've resolved these issues, change your version back to `~2` or upgrade to `~3`. To learn more about targeting versions of the Functions runtime, see [How to target Azure Functions runtime versions](set-runtime-version.md).
-
-When running on Linux in a Premium or dedicated (App Service) plan, you pin your version by instead targeting a specific image by setting the `linuxFxVersion` site config setting to `DOCKER|mcr.microsoft.com/azure-functions/dotnet:2.0.14786-appservice` To learn how to set `linuxFxVersion`, see [Manual version updates on Linux](set-runtime-version.md#manual-version-updates-on-linux).
-
## Functions class library project

In Visual Studio, the **Azure Functions** project template creates a C# class library project that contains the following files:
public static class BindingExpressionsExample
## Autogenerated function.json
-The build process creates a *function.json* file in a function folder in the build folder. As noted earlier, this file is not meant to be edited directly. You can't change binding configuration or disable the function by editing this file.
+The build process creates a *function.json* file in a function folder in the build folder. As noted earlier, this file isn't meant to be edited directly. You can't change binding configuration or disable the function by editing this file.
The purpose of this file is to provide information to the scale controller to use for [scaling decisions on the Consumption plan](event-driven-scaling.md). For this reason, the file only has trigger info, not input/output bindings.
The generated *function.json* file includes a `configurationSource` property tha
The *function.json* file generation is performed by the NuGet package [Microsoft\.NET\.Sdk\.Functions](https://www.nuget.org/packages/Microsoft.NET.Sdk.Functions).
-The same package is used for both version 1.x and 2.x of the Functions runtime. The target framework is what differentiates a 1.x project from a 2.x project. Here are the relevant parts of the `.csproj` files, showing different target frameworks with the same `Sdk` package:
+The following example shows the relevant parts of the `.csproj` files that have different target frameworks of the same `Sdk` package:
-# [v2.x+](#tab/v2)
+# [v4.x](#tab/v4)
```xml
<PropertyGroup>
- <TargetFramework>netcoreapp2.1</TargetFramework>
- <AzureFunctionsVersion>v2</AzureFunctionsVersion>
+ <TargetFramework>net6.0</TargetFramework>
+ <AzureFunctionsVersion>v4</AzureFunctionsVersion>
</PropertyGroup>
<ItemGroup>
- <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.8" />
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="4.1.1" />
</ItemGroup>
```
The same package is used for both version 1.x and 2.x of the Functions runtime.
```xml
<PropertyGroup>
- <TargetFramework>net461</TargetFramework>
+ <TargetFramework>net48</TargetFramework>
</PropertyGroup>
<ItemGroup>
- <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.8" />
+ <PackageReference Include="Microsoft.NET.Sdk.Functions" Version="1.0.24" />
</ItemGroup>
```
-Among the `Sdk` package dependencies are triggers and bindings. A 1.x project refers to 1.x triggers and bindings because those triggers and bindings target the .NET Framework, while 2.x triggers and bindings target .NET Core.
+Among the `Sdk` package dependencies are triggers and bindings. A 1.x project refers to 1.x triggers and bindings because those triggers and bindings target the .NET Framework, while 4.x triggers and bindings target .NET Core.
The `Sdk` package also depends on [Newtonsoft.Json](https://www.nuget.org/packages/Newtonsoft.Json), and indirectly on [WindowsAzure.Storage](https://www.nuget.org/packages/WindowsAzure.Storage). These dependencies make sure that your project uses the versions of those packages that work with the Functions runtime version that the project targets. For example, `Newtonsoft.Json` has version 11 for .NET Framework 4.6.1, but the Functions runtime that targets .NET Framework 4.6.1 is only compatible with `Newtonsoft.Json` 9.0.1. So your function code in that project also has to use `Newtonsoft.Json` 9.0.1.
The source code for `Microsoft.NET.Sdk.Functions` is available in the GitHub rep
Visual Studio uses the [Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools) to run Functions projects on your local computer. The Core Tools is a command-line interface for the Functions runtime.
-If you install the Core Tools using the Windows installer (MSI) package or by using npm, that doesn't affect the Core Tools version used by Visual Studio. For the Functions runtime version 1.x, Visual Studio stores Core Tools versions in *%USERPROFILE%\AppData\Local\Azure.Functions.Cli* and uses the latest version stored there. For Functions 2.x, the Core Tools are included in the **Azure Functions and Web Jobs Tools** extension. For both 1.x and 2.x, you can see what version is being used in the console output when you run a Functions project:
+If you install the Core Tools using the Windows installer (MSI) package or by using npm, it doesn't affect the Core Tools version used by Visual Studio. For the Functions runtime version 1.x, Visual Studio stores Core Tools versions in *%USERPROFILE%\AppData\Local\Azure.Functions.Cli* and uses the latest version stored there. For Functions 4.x, the Core Tools are included in the **Azure Functions and Web Jobs Tools** extension. For Functions 1.x, you can see what version is being used in the console output when you run a Functions project:
```terminal
[3/1/2018 9:59:53 AM] Starting Host (HostId=contoso2-1518597420, Version=2.0.11353.0, ProcessId=22020, Debug=False, Attempt=0, FunctionsExtensionVersion=)
If you install the Core Tools using the Windows installer (MSI) package or by us
You can compile your function app as [ReadyToRun binaries](/dotnet/core/deploying/ready-to-run). ReadyToRun is a form of ahead-of-time compilation that can improve startup performance to help reduce the impact of [cold-start](event-driven-scaling.md#cold-start) when running in a [Consumption plan](consumption-plan.md).
-ReadyToRun is available in .NET 3.1 and .NET 6 (in-proc and isolated) and .NET 7 and requires [version 3.0 or 4.0 of the Azure Functions runtime](functions-versions.md).
+ReadyToRun is available in .NET 6 and later versions and requires [version 4.0 of the Azure Functions runtime](functions-versions.md).
To compile your project as ReadyToRun, update your project file by adding the `<PublishReadyToRun>` and `<RuntimeIdentifier>` elements. The following is the configuration for publishing to a Windows 32-bit function app.
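The referenced configuration isn't shown in this changelog; a sketch of a project file targeting a Windows 32-bit (win-x86) function app would look like this:

```xml
<PropertyGroup>
  <TargetFramework>net6.0</TargetFramework>
  <AzureFunctionsVersion>v4</AzureFunctionsVersion>
  <!-- Compile assemblies ahead of time to reduce cold-start cost. -->
  <PublishReadyToRun>true</PublishReadyToRun>
  <!-- win-x86 targets a Windows 32-bit function app. -->
  <RuntimeIdentifier>win-x86</RuntimeIdentifier>
</PropertyGroup>
```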
public static async Task<HttpResponseMessage> Run(HttpRequestMessage req, ILogge
logger.LogInformation("Request for item with key={itemKey}.", id);
```
-To learn more about how Functions implements `ILogger`, see [Collecting telemetry data](functions-monitoring.md#collecting-telemetry-data). Categories prefixed with `Function` assume you are using an `ILogger` instance. If you choose to instead use an `ILogger<T>`, the category name may instead be based on `T`.
+To learn more about how Functions implements `ILogger`, see [Collecting telemetry data](functions-monitoring.md#collecting-telemetry-data). Categories prefixed with `Function` assume you're using an `ILogger` instance. If you choose to instead use an `ILogger<T>`, the category name may instead be based on `T`.
### Structured logging
Here's a sample JSON representation of `customDimensions` data:
### <a name="log-custom-telemetry-in-c-functions"></a>Log custom telemetry
-There is a Functions-specific version of the Application Insights SDK that you can use to send custom telemetry data from your functions to Application Insights: [Microsoft.Azure.WebJobs.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Logging.ApplicationInsights). Use the following command from the command prompt to install this package:
+There's a Functions-specific version of the Application Insights SDK that you can use to send custom telemetry data from your functions to Application Insights: [Microsoft.Azure.WebJobs.Logging.ApplicationInsights](https://www.nuget.org/packages/Microsoft.Azure.WebJobs.Logging.ApplicationInsights). Use the following command from the command prompt to install this package:
# [Command](#tab/cmd)
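The command itself is elided in this changelog; it presumably follows the standard `dotnet add package` form:

```bash
# Replace <VERSION> with a package version that supports your runtime version.
dotnet add package Microsoft.Azure.WebJobs.Logging.ApplicationInsights --version <VERSION>
```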
In this command, replace `<VERSION>` with a version of this package that support
The following C# example uses the [custom telemetry API](../azure-monitor/app/api-custom-events-metrics.md). The example is for a .NET class library, but the Application Insights code is the same for C# script.
-# [v2.x+](#tab/v2)
+# [v4.x](#tab/v4)
Version 2.x and later versions of the runtime use newer features in Application Insights to automatically correlate telemetry with the current operation. There's no need to manually set the operation `Id`, `ParentId`, or `Name` fields.
Define an imperative binding as follows:
}
```
- `BindingTypeAttribute` is the .NET attribute that defines your binding, and `T` is an input or output type that's supported by that binding type. `T` cannot be an `out` parameter type (such as `out JObject`). For example, the Mobile Apps table output binding supports [six output types](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs#L17-L22), but you can only use [ICollector\<T>](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/ICollector.cs) or [IAsyncCollector\<T>](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/IAsyncCollector.cs) with imperative binding.
+ `BindingTypeAttribute` is the .NET attribute that defines your binding, and `T` is an input or output type that's supported by that binding type. `T` can't be an `out` parameter type (such as `out JObject`). For example, the Mobile Apps table output binding supports [six output types](https://github.com/Azure/azure-webjobs-sdk-extensions/blob/master/src/WebJobs.Extensions.MobileApps/MobileTableAttribute.cs#L17-L22), but you can only use [ICollector\<T>](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/ICollector.cs) or [IAsyncCollector\<T>](https://github.com/Azure/azure-webjobs-sdk/blob/master/src/Microsoft.Azure.WebJobs/IAsyncCollector.cs) with imperative binding.
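For example, a runtime (imperative) binding built with `IBinder` might look like this sketch, assuming a queue trigger and a Blob output; the queue, container, and function names are placeholders:

```csharp
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.WebJobs;

public static class ImperativeBindingExample
{
    [FunctionName("ImperativeBindingExample")]
    public static async Task Run(
        [QueueTrigger("myqueue-items")] string blobName,
        IBinder binder)
    {
        // Construct the binding attribute at runtime instead of declaratively.
        var attribute = new BlobAttribute($"samples-output/{blobName}", FileAccess.Write);

        using (var writer = await binder.BindAsync<TextWriter>(attribute))
        {
            await writer.WriteAsync("Written through an imperative binding.");
        }
    }
}
```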
### Single attribute example
azure-functions Functions Networking Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-networking-options.md
Azure Functions supports two kinds of virtual network integration:
Virtual network integration in Azure Functions uses shared infrastructure with App Service web apps. To learn more about the two types of virtual network integration, see:

* [Regional virtual network integration](../app-service/overview-vnet-integration.md#regional-virtual-network-integration)
-* [Gateway-required virtual network integration](../app-service/overview-vnet-integration.md#gateway-required-virtual-network-integration)
+* [Gateway-required virtual network integration](../app-service/configure-gateway-required-vnet-integration.md)
To learn how to set up virtual network integration, see [Enable virtual network integration](#enable-virtual-network-integration).
azure-functions Functions Reference Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-reference-java.md
This guide contains detailed information to help you succeed in developing Azure Functions using Java.
-As a Java developer, if you're new to Azure Functions, please consider first reading one of the following articles:
+As a Java developer, if you're new to Azure Functions, consider first reading one of the following articles:
| Getting started | Concepts| Scenarios/samples |
| -- | -- | -- |
-| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> | <ul><li>[Java samples with different triggers](/samples/azure-samples/azure-functions-samples-java/azure-functions-java/)</li><li>[Event Hub trigger and Azure Cosmos DB output binding](/samples/azure-samples/java-functions-eventhub-cosmosdb/sample/)</li></ul> |
+| <ul><li>[Java function using Visual Studio Code](./create-first-function-vs-code-java.md)</li><li>[Jav)</li></ul> | <ul><li>[Java samples with different triggers](/samples/azure-samples/azure-functions-samples-java/azure-functions-java/)</li><li>[Event Hubs trigger and Azure Cosmos DB output binding](/samples/azure-samples/java-functions-eventhub-cosmosdb/sample/)</li></ul> |
## Java function basics
-A Java function is a `public` method, decorated with the annotation `@FunctionName`. This method defines the entry for a Java function, and must be unique in a particular package. The package can have multiple classes with multiple public methods annotated with `@FunctionName`. A single package is deployed to a function app in Azure. When running in Azure, the function app provides the deployment, execution, and management context for your individual Java functions.
+A Java function is a `public` method, decorated with the annotation `@FunctionName`. This method defines the entry for a Java function, and must be unique in a particular package. The package can have multiple classes with multiple public methods annotated with `@FunctionName`. A single package is deployed to a function app in Azure. In Azure, the function app provides the deployment, execution, and management context for your individual Java functions.
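As a minimal sketch of that shape (the function and class names are placeholders), an HTTP-triggered entry point looks like this:

```java
import java.util.Optional;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.HttpMethod;
import com.microsoft.azure.functions.HttpRequestMessage;
import com.microsoft.azure.functions.HttpResponseMessage;
import com.microsoft.azure.functions.HttpStatus;
import com.microsoft.azure.functions.annotation.AuthorizationLevel;
import com.microsoft.azure.functions.annotation.FunctionName;
import com.microsoft.azure.functions.annotation.HttpTrigger;

public class Function {
    // @FunctionName marks this public method as a function entry point.
    @FunctionName("HttpExample")
    public HttpResponseMessage run(
            @HttpTrigger(name = "req", methods = {HttpMethod.GET}, authLevel = AuthorizationLevel.ANONYMOUS)
            HttpRequestMessage<Optional<String>> request,
            final ExecutionContext context) {
        context.getLogger().info("Java HTTP trigger processed a request.");
        return request.createResponseBuilder(HttpStatus.OK).body("Hello from Java").build();
    }
}
```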
## Programming model
The following developer environments have Azure Functions tooling that lets you
+ [Eclipse](functions-create-maven-eclipse.md)
+ [IntelliJ](functions-create-maven-intellij.md)
-The article links above show you how to create your first functions using your IDE of choice.
+These articles show you how to create your first functions using your IDE of choice.
-### Project Scaffolding
+### Project scaffolding
If you prefer command line development from the Terminal, the simplest way to scaffold Java-based function projects is to use `Apache Maven` archetypes. The Java Maven archetype for Azure Functions is published under the following _groupId_:_artifactId_: [com.microsoft.azure:azure-functions-archetype](https://search.maven.org/artifact/com.microsoft.azure/azure-functions-archetype/).
To get started using this archetype, see the [Java quickstart](./create-first-fu
## Folder structure
-Here is the folder structure of an Azure Functions Java project:
+Here's the folder structure of an Azure Functions Java project:
```
FunctionsProject
public class Function {
}
```
-Here is the generated corresponding `function.json` by the [azure-functions-maven-plugin](https://mvnrepository.com/artifact/com.microsoft.azure/azure-functions-maven-plugin):
+Here's the corresponding `function.json` generated by the [azure-functions-maven-plugin](https://mvnrepository.com/artifact/com.microsoft.azure/azure-functions-maven-plugin):
```json
{
Here is the generated corresponding `function.json` by the [azure-functions-mave
## Java versions
-The version of Java used when creating the function app on which functions runs in Azure is specified in the pom.xml file. The Maven archetype currently generates a pom.xml for Java 8, which you can change before publishing. The Java version in pom.xml should match the version on which you have locally developed and tested your app.
+The version of Java on which your app runs in Azure is specified in the pom.xml file. The Maven archetype currently generates a pom.xml for Java 8, which you can change before publishing. The Java version in pom.xml should match the version on which you've locally developed and tested your app.
### Supported versions
The following example shows the operating system setting in the `runtime` sectio
## JDK runtime availability and support
-Microsoft and [Adoptium](https://adoptium.net/) builds of OpenJDK are provided and supported on Functions for Java 8 (Adoptium), 11 (MSFT) and 17(MSFT). These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and runnning Java SE applications.
+Microsoft and [Adoptium](https://adoptium.net/) builds of OpenJDK are provided and supported on Functions for Java 8 (Adoptium), 11 (MSFT), and 17 (MSFT). These binaries are provided as a no-cost, multi-platform, production-ready distribution of the OpenJDK for Azure. They contain all the components for building and running Java SE applications.
For local development or testing, you can download the [Microsoft build of OpenJDK](/java/openjdk/download) or [Adoptium Temurin](https://adoptium.net/?variant=openjdk8&jvmVariant=hotspot) binaries for free. [Azure support](https://azure.microsoft.com/support/) for issues with the JDKs and function apps is available with a [qualified support plan](https://azure.microsoft.com/support/plans/).
-If you would like to continue using the Zulu for Azure binaries on your Function app, please [configure your app accordingly](https://github.com/Azure/azure-functions-java-worker/wiki/Customize-JVM-to-use-Zulu). You can continue to use the Azul binaries for your site, but any security patches or improvements will only be available in new versions of the OpenJDK, so we recommend that you eventually remove this configuration so that your Function apps use the latest available version of Java.
+If you would like to continue using the Zulu for Azure binaries on your Function app, [configure your app accordingly](https://github.com/Azure/azure-functions-java-worker/wiki/Customize-JVM-to-use-Zulu). You can continue to use the Azul binaries for your site. However, any security patches or improvements are only available in new versions of the OpenJDK. Because of this, you should eventually remove this configuration so that your apps use the latest available version of Java.
## Customize JVM
Functions lets you customize the Java virtual machine (JVM) used to run your Jav
* `-Djava.net.preferIPv4Stack=true`
* `-jar`
-You can provide additional arguments in an app setting named `JAVA_OPTS`. You can add app settings to your function app deployed to Azure in the Azure portal or the Azure CLI.
+You can provide other arguments to the JVM by using one of the following application settings, depending on the plan type:
-> [!IMPORTANT]
-> In the Consumption plan, you must also add the WEBSITE_USE_PLACEHOLDER setting with a value of 0 for the customization to work. This setting does increase the cold start times for Java functions.
+| Plan type | Setting name | Comment |
+| | |
+| [Consumption plan](./consumption-plan.md) | `languageWorkers__java__arguments` | This setting does increase the cold start times for Java functions running in a Consumption plan. |
+| [Premium plan](./functions-premium-plan.md)<br/>[Dedicated plan](./dedicated-plan.md) | `JAVA_OPTS` | |
+
+The following sections show you how to add these settings. To learn more about working with application settings, see the [Work with application settings](./functions-how-to-use-azure-function-app-settings.md#settings) section.
### Azure portal
-In the [Azure portal](https://portal.azure.com), use the [Application Settings tab](functions-how-to-use-azure-function-app-settings.md#settings) to add the `JAVA_OPTS` setting.
+In the [Azure portal](https://portal.azure.com), use the [Application Settings tab](functions-how-to-use-azure-function-app-settings.md#settings) to add either the `languageWorkers__java__arguments` or the `JAVA_OPTS` setting.
### Azure CLI
-You can use the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings) command to set `JAVA_OPTS`, as in the following example:
+You can use the [az functionapp config appsettings set](/cli/azure/functionapp/config/appsettings) command to add these settings, as shown in the following example for the `-Djava.awt.headless=true` option:
# [Consumption plan](#tab/consumption/bash)

```azurecli-interactive
az functionapp config appsettings set \
- --settings "JAVA_OPTS=-Djava.awt.headless=true" \
- "WEBSITE_USE_PLACEHOLDER=0" \
+ --settings "languageWorkers__java__arguments=-Djava.awt.headless=true" \
--name <APP_NAME> --resource-group <RESOURCE_GROUP>
```
az functionapp config appsettings set \
```azurecli-interactive
az functionapp config appsettings set ^
- --settings "JAVA_OPTS=-Djava.awt.headless=true" ^
- "WEBSITE_USE_PLACEHOLDER=0" ^
+ --settings "languageWorkers__java__arguments=-Djava.awt.headless=true" ^
--name <APP_NAME> --resource-group <RESOURCE_GROUP>
```
To receive a batch of inputs, you can bind to `String[]`, `POJO[]`, `List<String
```
-This function gets triggered whenever there is new data in the configured event hub. Because the `cardinality` is set to `MANY`, the function receives a batch of messages from the event hub. `EventData` from event hub gets converted to `TestEventData` for the function execution.
+This function gets triggered whenever there's new data in the configured event hub. Because the `cardinality` is set to `MANY`, the function receives a batch of messages from the event hub. `EventData` from event hub gets converted to `TestEventData` for the function execution.
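A sketch of such a batch-cardinality trigger, binding to `List<String>` with placeholder hub and connection names:

```java
import java.util.List;
import com.microsoft.azure.functions.ExecutionContext;
import com.microsoft.azure.functions.annotation.Cardinality;
import com.microsoft.azure.functions.annotation.EventHubTrigger;
import com.microsoft.azure.functions.annotation.FunctionName;

public class BatchTriggerExample {
    @FunctionName("BatchTriggerExample")
    public void run(
            @EventHubTrigger(name = "messages", eventHubName = "myHub", connection = "EventHubConnection",
                    cardinality = Cardinality.MANY, dataType = "string") List<String> messages,
            final ExecutionContext context) {
        // The function receives the whole batch in a single invocation.
        context.getLogger().info("Received a batch of " + messages.size() + " messages.");
    }
}
```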
### Output binding example
You invoke this function on an `HttpRequest` object. It writes multiple values t
## HttpRequestMessage and HttpResponseMessage
- These are defined in `azure-functions-java-library`. They are helper types to work with HttpTrigger functions.
+ These are defined in `azure-functions-java-library`. They're helper types to work with HttpTrigger functions.
| Specialized type | Target | Typical usage |
| | :--: | |
public class Function {
## View logs and trace
-You can use the Azure CLI to stream Java stdout and stderr logging, as well as other application logging.
+You can use the Azure CLI to stream Java stdout and stderr logging, and other application logging.
Here's how to configure your function app to write application logging by using the Azure CLI:
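The command is elided here; a sketch with the Azure CLI would enable filesystem application logging and then stream it (the `filesystem` option value is an assumption; verify with `az webapp log config --help`):

```azurecli
# Enable application logging to the file system (assumed option value).
az webapp log config --name <APP_NAME> --resource-group <RESOURCE_GROUP> --application-logging filesystem

# Stream the logs, including Java stdout and stderr.
az webapp log tail --name <APP_NAME> --resource-group <RESOURCE_GROUP>
```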
azure-functions Functions Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-versions.md
Title: Azure Functions runtime versions overview
description: Azure Functions supports multiple versions of the runtime. Learn the differences between them and how to choose the one that's right for you. Previously updated : 10/22/2022 Last updated : 01/09/2023 zone_pivot_groups: programming-languages-set-functions
zone_pivot_groups: programming-languages-set-functions
| Version | Support level | Description |
| | | |
| 4.x | GA | **_Recommended runtime version for functions in all languages._** Check out [Supported language versions](#languages). |
-| 3.x | GA | Supported all languages (see [Supported language versions](#languages)). Reached the end of life (EOL) for extended support on December 13, 2022. We highly recommend you [migrating your apps from Azure Functions version 3.x to version 4.x](migrate-version-3-version-4.md) for full support. |
-| 2.x | GA | Supported for [legacy version 2.x apps](#pinning-to-version-20). This version is in maintenance mode, with enhancements provided only in later versions. Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrating your apps from Azure Functions version 3.x to version 4.x](migrate-version-3-version-4.md) for full support. |
-| 1.x | GA | Recommended only for C# apps that must use .NET Framework and only supports development in the Azure portal, Azure Stack Hub portal, or locally on Windows computers. This version is in maintenance mode, with enhancements provided only in later versions. |
+| 3.x | GA<sup>*</sup> | Reached the end of life (EOL) for extended support on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support. |
+| 2.x | GA<sup>*</sup> | Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support. |
+| 1.x | GA | Supported only for C# apps that must use .NET Framework. This version is in maintenance mode, with enhancements provided only in later versions. We highly recommend you migrate your apps to version 4.x, which [supports .NET Framework 4.8](migrate-version-1-version-4.md?tabs=v4&pivots=programming-language-csharp).|
+
+<sup>*</sup>For a detailed support statement about end-of-life versions, see [this migration article](migrate-version-3-version-4.md).
This article details some of the differences between these versions, how you can create each version, and how to change the version on which your functions run.
The following major runtime version values are used:
| Value | Runtime target |
| | -- |
| `~4` | 4.x |
-| `~3` | 3.x |
| `~1` | 1.x |

>[!IMPORTANT]
To resolve issues your function app may have when running on the latest major ve
Older minor versions are periodically removed from Functions. For the latest news about Azure Functions releases, including the removal of specific older minor versions, monitor [Azure App Service announcements](https://github.com/Azure/app-service-announcements/issues).
-### Pinning to version ~2.0
-
-.NET function apps running on version 2.x (`~2`) are automatically upgraded to run on .NET Core 3.1, which is a long-term support version of .NET Core 3. Running your .NET functions on .NET Core 3.1 allows you to take advantage of the latest security updates and product enhancements.
-
-Any function app pinned to `~2.0` continues to run on .NET Core 2.2, which no longer receives security and other updates. To learn more, see [Functions v2.x considerations](functions-dotnet-class-library.md#functions-v2x-considerations).
## Minimum extension versions
You can also choose `net6.0`, `net7.0`, or `net48` as the target framework if yo
# [Version 3.x](#tab/v3)
-```xml
-<TargetFramework>netcoreapp3.1</TargetFramework>
-<AzureFunctionsVersion>v3</AzureFunctionsVersion>
-```
-
-You can also choose `net5.0` as the target framework if you're using [.NET isolated worker process functions](dotnet-isolated-process-guide.md).
-
-> [!NOTE]
-> Azure Functions 3.x and .NET requires the `Microsoft.NET.Sdk.Functions` extension be at least `3.0.0`.
+Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support.
# [Version 2.x](#tab/v2)
-```xml
-<TargetFramework>netcoreapp2.1</TargetFramework>
-<AzureFunctionsVersion>v2</AzureFunctionsVersion>
-```
+Reached the end of life (EOL) on December 13, 2022. We highly recommend you [migrate your apps to version 4.x](migrate-version-3-version-4.md) for full support.
# [Version 1.x](#tab/v1)

```xml
-<TargetFramework>net472</TargetFramework>
+<TargetFramework>net48</TargetFramework>
<AzureFunctionsVersion>v1</AzureFunctionsVersion>
```

### VS Code and Azure Functions Core Tools
-[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. To develop against version 4.x, install version 4.x of the Core Tools. Version 3.x development requires version 3.x of the Core Tools, and so on. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
-
-For Visual Studio Code development, you may also need to update the user setting for the `azureFunctions.projectRuntime` to match the version of the tools installed. This setting also updates the templates and languages used during function app creation. To create apps in `~3`, you update the `azureFunctions.projectRuntime` user setting to `~3`.
+[Azure Functions Core Tools](functions-run-local.md) is used for command-line development and also by the [Azure Functions extension](https://marketplace.visualstudio.com/items?itemName=ms-azuretools.vscode-azurefunctions) for Visual Studio Code. For more information, see [Install the Azure Functions Core Tools](functions-run-local.md#install-the-azure-functions-core-tools).
-![Azure Functions extension runtime setting](./media/functions-versions/vs-code-version-runtime.png)
+For Visual Studio Code development, you may also need to update the user setting for the `azureFunctions.projectRuntime` to match the version of the tools installed. This setting also updates the templates and languages used during function app creation.
## Bindings
azure-functions Set Runtime Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/set-runtime-version.md
When a new version is publicly available, a prompt in the portal gives you the c
The following table shows the `FUNCTIONS_EXTENSION_VERSION` values for each major version to enable automatic updates:
-| Major version | `FUNCTIONS_EXTENSION_VERSION` value | Additional configuration |
-| - | -- | - |
-| 4.x | `~4` | [On Windows, enable .NET 6](./migrate-version-3-version-4.md#upgrade-your-function-app-in-azure) |
-| 3.x | `~3` | |
-| 2.x | `~2` | |
-| 1.x | `~1` | |
+| Major version | `FUNCTIONS_EXTENSION_VERSION` value | Additional configuration |
+| - | -- | - |
+| 4.x | `~4` | [On Windows, enable .NET 6](./migrate-version-3-version-4.md#upgrade-your-function-app-in-azure) |
+| 3.x<sup>*</sup>| `~3` | |
+| 2.x<sup>*</sup>| `~2` | |
+| 1.x | `~1` | |
-A change to the runtime version causes a function app to restart.
+<sup>*</sup>Reached the end of life (EOL) for extended support on December 13, 2022. For a detailed support statement about end-of-life versions, see [this migration article](migrate-version-3-version-4.md).
->[!NOTE]
->.NET Function apps pinned to `~2.0` opt out of the automatic upgrade to .NET Core 3.1. To learn more, see [Functions v2.x considerations](functions-dotnet-class-library.md#functions-v2x-considerations).
+A change to the runtime version causes a function app to restart.
## View and update the current runtime version
azure-government Documentation Government Csp List https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-csp-list.md
Title: Azure Government authorized reseller list
-description: Comprehensive list of Azure Government Cloud Solution Providers, resellers, and distributors.
-
-cloud: gov
+description: Comprehensive list of Azure Government cloud solution providers, resellers, and distributors.
Previously updated : 07/05/2022++ Last updated : 01/18/2023 # Azure Government authorized reseller list
azure-government Documentation Government Get Started Connect With Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-government/documentation-government-get-started-connect-with-ps.md
Title: Connect to Azure Government with PowerShell
-description: Information on connecting to your subscription in Azure Government with PowerShell
+description: Information on connecting to your subscription in Azure Government with PowerShell.
Previously updated : 12/07/2021 Last updated : 01/18/2023 # Quickstart: Connect to Azure Government with PowerShell
This quickstart shows how to use PowerShell to access and start managing resourc
## Prerequisites

-- Review [Guidance for developers](./documentation-government-developer-guide.md).<br/> This article discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
+- Review [Guidance for developers](./documentation-government-developer-guide.md), which discusses Azure Government's unique URLs and endpoints for managing your environment. You must know about these endpoints in order to connect to Azure Government.
- Review [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md) and click on a service of interest to see variations between Azure Government and global Azure.

## Install PowerShell
Install PowerShell on your local machine. For more information, including how to
When you start PowerShell, you have to tell Azure PowerShell to connect to Azure Government by specifying an environment parameter. The parameter ensures that PowerShell is connecting to the correct endpoints. The collection of endpoints is determined when you log in to your account. Different APIs require different versions of the environment switch.

```powershell
-Connect-AzAccount -EnvironmentName AzureUSGovernment
+Connect-AzAccount -Environment AzureUSGovernment
```

</br>
This quickstart showed you how to use PowerShell to connect to Azure Government.
> [!div class="nextstepaction"]
> [Azure documentation](../index.yml)
+
+For more information about Azure Government, see the following resources:
+
+- [Azure Government overview](./documentation-government-welcome.md)
+- [Compare Azure Government and global Azure](./compare-azure-government-global-azure.md)
+- [Azure Government security](./documentation-government-plan-security.md)
+- [Azure Government compliance](./documentation-government-plan-compliance.md)
+- [Azure Government services by audit scope](./compliance/azure-services-in-fedramp-auditscope.md#azure-government-services-by-audit-scope)
+- [Azure Government DoD overview](./documentation-government-overview-dod.md)
+- [FedRAMP – Azure compliance](/azure/compliance/offerings/offering-fedramp)
+- [DoD Impact Level 5 – Azure compliance](/azure/compliance/offerings/offering-dod-il5)
+- [Isolation guidelines for Impact Level 5 workloads](./documentation-government-impact-level-5.md)
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
description: Overview of the Azure Monitor Agent, which collects monitoring data
Previously updated : 1/5/2023 Last updated : 1/18/2023
In addition to the generally available data collection listed above, Azure Monit
| : | : | : | : |
| [Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) | Public preview | <ul><li>Azure Security Agent extension</li><li>SQL Advanced Threat Protection extension</li><li>SQL Vulnerability Assessment extension</li></ul> | [Auto-deployment of Azure Monitor Agent (Preview)](../../defender-for-cloud/auto-deploy-azure-monitoring-agent.md) |
| [Microsoft Sentinel](../../sentinel/overview.md) | <ul><li>Windows Security Events: [Generally available](../../sentinel/connect-windows-security-events.md?tabs=AMA)</li><li>Windows Forwarding Event (WEF): [Public preview](../../sentinel/data-connectors-reference.md#windows-forwarded-events-preview)</li><li>Windows DNS logs: [Public preview](../../sentinel/connect-dns-ama.md)</li><li>Linux Syslog CEF: [Public preview](../../sentinel/connect-cef-ama.md#set-up-the-common-event-format-cef-via-ama-connector)</li></ul> | Sentinel DNS extension, if you're collecting DNS logs. For all other data types, you just need the Azure Monitor Agent extension. | - |
-| [Change Tracking](../../automation/change-tracking/overview.md) | Change Tracking: Preview. | Change Tracking extension | [Sign-up link](https://aka.ms/amadcr-privatepreviews) |
+| [Change Tracking](../../automation/change-tracking/overview.md) | Public preview | Change Tracking extension | [Change Tracking and Inventory using Azure Monitor Agent](../../automation/change-tracking/overview-monitoring-agent.md) |
| [Update Management](../../automation/update-management/overview.md) (available without Azure Monitor Agent) | Use Update Management v2 - Public preview | None | [Update management center (Public preview) documentation](../../update-center/index.yml) |
| [Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md) | Connection Monitor: Public preview | Azure NetworkWatcher extension | [Monitor network connectivity by using Azure Monitor Agent](../../network-watcher/azure-monitor-agent-with-connection-monitor.md) |
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | VM Insights | X (Public preview) | X | |
| | Microsoft Defender for Cloud | X (Public preview) | X | |
| | Update Management | X (Public preview, independent of monitoring agents) | X | |
-| | Change Tracking | | X | |
+| | Change Tracking | X (Public preview) | X | |
### Linux agents
The tables below provide a comparison of Azure Monitor Agent with the legacy the
| | VM Insights | X (Public preview) | X | |
| | Microsoft Defender for Cloud | X (Public preview) | X | |
| | Update Management | X (Public preview, independent of monitoring agents) | X | |
-| | Change Tracking | | X | |
+| | Change Tracking | X (Public preview) | X | |
<sup>1</sup> To review other limitations of using Azure Monitor Metrics, see [quotas and limits](../essentials/metrics-custom-overview.md#quotas-and-limits). On Linux, using Azure Monitor Metrics as the only destination is supported in v.1.10.9.0 or higher.
View [supported operating systems for Azure Arc Connected Machine agent](../../a
## Next steps

- [Install the Azure Monitor Agent](azure-monitor-agent-manage.md) on Windows and Linux virtual machines.
-- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
+- [Create a data collection rule](data-collection-rule-azure-monitor-agent.md) to collect data from the agent and send it to Azure Monitor.
azure-monitor Azure Monitor Agent Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-manage.md
Use the following CLI commands to uninstall Azure Monitor Agent on Azure virtual
- Windows ```azurecli
- az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> -name AzureMonitorWindowsAgent
+ az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorWindowsAgent
``` - Linux ```azurecli
- az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> -name AzureMonitorLinuxAgent
+ az vm extension delete --resource-group <resource-group-name> --vm-name <virtual-machine-name> --name AzureMonitorLinuxAgent
``` ### Uninstall on Azure Arc-enabled servers
azure-monitor Azure Monitor Agent Migration Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-migration-tools.md
Previously updated : 11/14/2022 Last updated : 1/18/2023 # Customer intent: As an Azure account administrator, I want to use the available Azure Monitor tools to migrate from Log Analytics Agent to Azure Monitor Agent and track the status of the migration in my account.
Azure Monitor Agent (AMA) replaces the Log Analytics Agent (MMA/OMS) for Windows
## Using AMA Migration Helper AMA Migration Helper is a workbook-based Azure Monitor solution that helps you **discover what to migrate** and **track progress** as you move from Log Analytics Agent to Azure Monitor Agent. Use this single pane of glass view to expedite and track the status of your agent migration journey.
+The helper now supports multiple subscriptions and includes **automatic migration recommendations** based on your usage.
You can access the workbook **[here](https://portal.azure.com/#view/AppInsightsExtension/UsageNotebookBlade/ComponentId/Azure%20Monitor/ConfigurationId/community-Workbooks%2FAzure%20Monitor%20-%20Agents%2FAgent%20Migration%20Tracker/Type/workbook/WorkbookTemplateName/AMA%20Migration%20Helper)**, or find it on the Azure portal under **Monitor** > **Workbooks** > **Public Templates** > **Azure Monitor essentials** > **AMA Migration Helper**. :::image type="content" source="media/azure-monitor-migration-tools/ama-migration-helper.png" lightbox="media/azure-monitor-migration-tools/ama-migration-helper.png" alt-text="Screenshot of the Azure Monitor Agent Migration Helper workbook. The screenshot highlights the Subscription and Workspace dropdowns and shows the Azure Virtual Machines tab, on which you can track which agent is deployed on each virtual machine.":::
+**Automatic Migration Recommendations**
++ ## Installing and using DCR Config Generator Azure Monitor Agent relies only on [data collection rules (DCRs)](../essentials/data-collection-rule-overview.md) for configuration, whereas Log Analytics Agent inherits its configuration from Log Analytics workspaces.
azure-monitor Data Collection Iis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-iis.md
The [data collection rule](../essentials/data-collection-rule-overview.md) defin
- How Azure Monitor transforms events during ingestion. - The destination Log Analytics workspace and table to which Azure Monitor sends the data.
-You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your Log Analytics workspace.
+You can define a data collection rule to send data from multiple machines to multiple Log Analytics workspaces, including workspaces in a different region or tenant. Create the data collection rule in the *same region* as your VM, Virtual Machine Scale Set, or Azure Arc-enabled server.
> [!NOTE] > To send data across tenants, you must first enable [Azure Lighthouse](../../lighthouse/overview.md).
Learn more about:
- [Azure Monitor Agent](azure-monitor-agent-overview.md). - [Data collection rules](../essentials/data-collection-rule-overview.md).-- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
+- [Best practices for cost management in Azure Monitor](../best-practices-cost.md).
azure-monitor Data Collection Rule Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-rule-azure-monitor-agent.md
When you paste the XPath query into the field on the **Add data source** screen,
[ ![Screenshot that shows the steps to create an XPath query in the Windows Event Viewer.](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png) ](media/data-collection-rule-azure-monitor-agent/data-collection-rule-extract-xpath.png#lightbox)
-For a list of limitations in the XPath supported by Windows event log, see [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations).
> [!TIP] > You can use the PowerShell cmdlet `Get-WinEvent` with the `FilterXPath` parameter to test the validity of an XPath query locally on your machine first. The following script shows an example:
Examples of using a custom XPath to filter events:
| Collect all Critical, Error, Warning, and Information events from the System event log except for Event ID = 6 (Driver loaded) | `System!*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]` | | Collect all success and failure Security events except for Event ID 4624 (Successful logon) | `Security!*[System[(band(Keywords,13510798882111488)) and (EventID != 4624)]]` |
+> [!NOTE]
+> For a list of limitations in the XPath supported by Windows event log, see [XPath 1.0 limitations](/windows/win32/wes/consuming-events#xpath-10-limitations).
+> For instance, you can use the `position`, `band`, and `timediff` functions within the query, but other functions like `starts-with` and `contains` aren't currently supported.
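As the tip above notes, you can sanity-check an XPath expression locally before putting it in a data collection rule. A minimal sketch, assuming you run it in PowerShell on the machine whose log you want to query:

```powershell
# Validate an XPath query locally with Get-WinEvent.
# In a data collection rule the query is written as "System!<xpath>"; locally,
# the channel goes in -LogName and only the <xpath> part goes to -FilterXPath.
$xPath = '*[System[(Level=1 or Level=2 or Level=3) and (EventID != 6)]]'
Get-WinEvent -LogName 'System' -FilterXPath $xPath -MaxEvents 5 |
    Format-Table TimeCreated, Id, LevelDisplayName -AutoSize
```

A malformed query makes `Get-WinEvent` fail immediately, which is faster feedback than deploying the rule and waiting for data.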
## Next steps - [Collect text logs by using Azure Monitor Agent](data-collection-text-log.md). - Learn more about [Azure Monitor Agent](azure-monitor-agent-overview.md).-- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
+- Learn more about [data collection rules](../essentials/data-collection-rule-overview.md).
azure-monitor Data Sources Custom Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-sources-custom-logs.md
The Custom Logs data source for the Log Analytics agent in Azure Monitor allows you to collect events from text files on both Windows and Linux computers. Many applications log information to text files instead of standard logging services, such as Windows Event log or Syslog. After the data is collected, you can either parse it into individual fields in your queries or extract it during collection to individual fields.
+>[!IMPORTANT]
+> This article describes how to collect a text log with the Log Analytics agent. If you're using the Azure Monitor agent, then see [Collect text logs with Azure Monitor Agent](data-collection-text-log.md).
+ [!INCLUDE [Log Analytics agent deprecation](../../../includes/log-analytics-agent-deprecation.md)] ![Diagram that shows custom log collection.](media/data-sources-custom-logs/overview.png)
azure-monitor Resource Manager Data Collection Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/resource-manager-data-collection-rules.md
param associationName string
@description('The resource ID of the data collection rule.') param dataCollectionRuleId string
-resource vm 'Microsoft.Compute/virtualMachines@2021-11-01' existing = {
+resource vm 'Microsoft.HybridCompute/machines@2021-11-01' existing = {
name: vmName }
resource association 'Microsoft.Insights/dataCollectionRuleAssociations@2021-09-
{ "type": "Microsoft.Insights/dataCollectionRuleAssociations", "apiVersion": "2021-09-01-preview",
- "scope": "[format('Microsoft.Compute/virtualMachines/{0}', parameters('vmName'))]",
+ "scope": "[format('Microsoft.HybridCompute/machines/{0}', parameters('vmName'))]",
"name": "[parameters('associationName')]", "properties": { "description": "Association of data collection rule. Deleting this association will break the data collection for this Arc server.",
azure-monitor Java In Process Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-in-process-agent.md
Title: Azure Monitor Application Insights Java description: Application performance monitoring for Java applications running in any environment without requiring code modification. The article also discusses distributed tracing and the application map. Previously updated : 12/14/2022 Last updated : 01/18/2023 ms.devlang: java
This section shows you how to download the auto-instrumentation jar file.
#### Download the jar file
-Download the [applicationinsights-agent-3.4.7.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.7/applicationinsights-agent-3.4.7.jar) file.
+Download the [applicationinsights-agent-3.4.8.jar](https://github.com/microsoft/ApplicationInsights-Java/releases/download/3.4.8/applicationinsights-agent-3.4.8.jar) file.
> [!WARNING] >
Download the [applicationinsights-agent-3.4.7.jar](https://github.com/microsoft/
#### Point the JVM to the jar file
-Add `-javaagent:"path/to/applicationinsights-agent-3.4.7.jar"` to your application's JVM args.
+Add `-javaagent:"path/to/applicationinsights-agent-3.4.8.jar"` to your application's JVM args.
> [!TIP] > For help with configuring your application's JVM args, see [Tips for updating your JVM args](./java-standalone-arguments.md).
If you develop a Spring Boot application, you can replace the JVM argument by a
APPLICATIONINSIGHTS_CONNECTION_STRING=<Copy connection string from Application Insights Resource Overview> ```
- - Create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.7.jar` with the following content:
+ - Create a configuration file named `applicationinsights.json`. Place it in the same directory as `applicationinsights-agent-3.4.8.jar` with the following content:
```json {
Structured logging (attaching custom dimensions to your logs) can be accomplishe
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-core</artifactId>
- <version>3.4.7</version>
+ <version>3.4.8</version>
</dependency> ```
azure-monitor Java Spring Boot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-spring-boot.md
Title: Configure Azure Monitor Application Insights for Spring Boot description: How to configure Azure Monitor Application Insights for Spring Boot applications Previously updated : 12/14/2022 Last updated : 01/18/2023 ms.devlang: java
There are two options for enabling Application Insights Java with Spring Boot: J
## Enabling with JVM argument
-Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.7.jar"` somewhere before `-jar`, for example:
+Add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.8.jar"` somewhere before `-jar`, for example:
```
-java -javaagent:"path/to/applicationinsights-agent-3.4.7.jar" -jar <myapp.jar>
+java -javaagent:"path/to/applicationinsights-agent-3.4.8.jar" -jar <myapp.jar>
``` ### Spring Boot via Docker entry point
-If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.7.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
+If you're using the *exec* form, add the parameter `-javaagent:"path/to/applicationinsights-agent-3.4.8.jar"` to the parameter list somewhere before the `"-jar"` parameter, for example:
```
-ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.7.jar", "-jar", "<myapp.jar>"]
+ENTRYPOINT ["java", "-javaagent:path/to/applicationinsights-agent-3.4.8.jar", "-jar", "<myapp.jar>"]
```
-If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.7.jar"` somewhere before `-jar`, for example:
+If you're using the *shell* form, add the JVM arg `-javaagent:"path/to/applicationinsights-agent-3.4.8.jar"` somewhere before `-jar`, for example:
```
-ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.7.jar" -jar <myapp.jar>
+ENTRYPOINT java -javaagent:"path/to/applicationinsights-agent-3.4.8.jar" -jar <myapp.jar>
``` ### Configuration
To enable Application Insights Java programmatically, you must add the following
<dependency> <groupId>com.microsoft.azure</groupId> <artifactId>applicationinsights-runtime-attach</artifactId>
- <version>3.4.3</version>
+ <version>3.4.8</version>
</dependency> ```
azure-monitor Java Standalone Arguments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-arguments.md
Title: Add the JVM arg - Application Insights for Java description: Learn how to add the JVM arg that enables Application Insights for Java. Previously updated : 12/14/2022 Last updated : 01/18/2023 ms.devlang: java
If you're using a third-party container image that you can't modify, mount the A
If you installed Tomcat via `apt-get` or `yum`, you should have a file `/etc/tomcat8/tomcat8.conf`. Add this line to the end of that file: ```
-JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.7.jar"
+JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.8.jar"
``` ### Tomcat installed via download and unzip
JAVA_OPTS="$JAVA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.7.jar"
If you installed Tomcat via download and unzip from [https://tomcat.apache.org](https://tomcat.apache.org), you should have a file `<tomcat>/bin/catalina.sh`. Create a new file in the same directory named `<tomcat>/bin/setenv.sh` with the following content: ```
-CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.7.jar"
+CATALINA_OPTS="$CATALINA_OPTS -javaagent:path/to/applicationinsights-agent-3.4.8.jar"
```
-If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.8.jar` to `CATALINA_OPTS`.
## Tomcat 8 (Windows)
If the file `<tomcat>/bin/setenv.sh` already exists, modify that file and add `-
Locate the file `<tomcat>/bin/catalina.bat`. Create a new file in the same directory named `<tomcat>/bin/setenv.bat` with the following content: ```
-set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.7.jar
+set CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.8.jar
``` Quotes aren't necessary, but if you want to include them, the proper placement is: ```
-set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.7.jar"
+set "CATALINA_OPTS=%CATALINA_OPTS% -javaagent:path/to/applicationinsights-agent-3.4.8.jar"
```
-If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to `CATALINA_OPTS`.
+If the file `<tomcat>/bin/setenv.bat` already exists, modify that file and add `-javaagent:path/to/applicationinsights-agent-3.4.8.jar` to `CATALINA_OPTS`.
### Run Tomcat as a Windows service
-Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to the `Java Options` under the `Java` tab.
+Locate the file `<tomcat>/bin/tomcat8w.exe`. Run that executable and add `-javaagent:path/to/applicationinsights-agent-3.4.8.jar` to the `Java Options` under the `Java` tab.
## JBoss EAP 7 ### Standalone server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
+Add `-javaagent:path/to/applicationinsights-agent-3.4.8.jar` to the existing `JAVA_OPTS` environment variable in the file `JBOSS_HOME/bin/standalone.conf` (Linux) or `JBOSS_HOME/bin/standalone.conf.bat` (Windows):
```java ...
- JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.7.jar -Xms1303m -Xmx1303m ..."
+ JAVA_OPTS="-javaagent:path/to/applicationinsights-agent-3.4.8.jar -Xms1303m -Xmx1303m ..."
... ``` ### Domain server
-Add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.8.jar` to the existing `jvm-options` in `JBOSS_HOME/domain/configuration/host.xml`:
```xml ...
Add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to the existing `jv
<jvm-options> <option value="-server"/> <!--Add Java agent jar file here-->
- <option value="-javaagent:path/to/applicationinsights-agent-3.4.7.jar"/>
+ <option value="-javaagent:path/to/applicationinsights-agent-3.4.8.jar"/>
<option value="-XX:MetaspaceSize=96m"/> <option value="-XX:MaxMetaspaceSize=256m"/> </jvm-options>
Add these lines to `start.ini`:
``` --exec--javaagent:path/to/applicationinsights-agent-3.4.7.jar
+-javaagent:path/to/applicationinsights-agent-3.4.8.jar
``` ## Payara 5
-Add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
+Add `-javaagent:path/to/applicationinsights-agent-3.4.8.jar` to the existing `jvm-options` in `glassfish/domains/domain1/config/domain.xml`:
```xml ... <java-config ...> <!--Edit the JVM options here--> <jvm-options>
- -javaagent:path/to/applicationinsights-agent-3.4.7.jar>
+ -javaagent:path/to/applicationinsights-agent-3.4.8.jar
</jvm-options> ... </java-config>
Add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to the existing `jv
1. In `Generic JVM arguments`, add the following JVM argument: ```
- -javaagent:path/to/applicationinsights-agent-3.4.7.jar
+ -javaagent:path/to/applicationinsights-agent-3.4.8.jar
``` 1. Save and restart the application server.
Add `-javaagent:path/to/applicationinsights-agent-3.4.7.jar` to the existing `jv
Create a new file `jvm.options` in the server directory (for example, `<openliberty>/usr/servers/defaultServer`), and add this line: ```--javaagent:path/to/applicationinsights-agent-3.4.7.jar
+-javaagent:path/to/applicationinsights-agent-3.4.8.jar
``` ## Others
azure-monitor Java Standalone Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-config.md
Title: Configuration options - Azure Monitor Application Insights for Java description: This article shows you how to configure Azure Monitor Application Insights for Java. Previously updated : 12/14/2022 Last updated : 01/18/2023 ms.devlang: java
You'll find more information and configuration options in the following sections
## Configuration file path
-By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.7.jar`.
+By default, Application Insights Java 3.x expects the configuration file to be named `applicationinsights.json`, and to be located in the same directory as `applicationinsights-agent-3.4.8.jar`.
You can specify your own configuration file path by using one of these two options: * `APPLICATIONINSIGHTS_CONFIGURATION_FILE` environment variable * `applicationinsights.configuration.file` Java system property
-If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.7.jar` is located.
+If you specify a relative path, it will be resolved relative to the directory where `applicationinsights-agent-3.4.8.jar` is located.
Alternatively, instead of using a configuration file, you can specify the entire _content_ of the JSON configuration via the environment variable `APPLICATIONINSIGHTS_CONFIGURATION_CONTENT`.
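For example, a minimal sketch in a POSIX shell, assuming overriding the cloud role name is the only setting you need (the role name is a placeholder):

```
export APPLICATIONINSIGHTS_CONFIGURATION_CONTENT='{"role":{"name":"my-service"}}'
```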
Or you can set the connection string by using the Java system property `applicat
You can also set the connection string by specifying a file to load the connection string from.
-If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.7.jar` is located.
+If you specify a relative path, it's resolved relative to the directory where `applicationinsights-agent-3.4.8.jar` is located.
```json {
Cloud role name overrides allow you to override the [default cloud role name](#c
} ```
+## Connection string configured at runtime
+
+Starting from version 3.4.8, if you need the ability to configure the connection string at runtime,
+add this property to your json configuration:
+
+```json
+{
+ "connectionStringConfiguredAtRuntime": true
+}
+```
+
+and add `applicationinsights-core` to your application:
+
+```xml
+<dependency>
+ <groupId>com.microsoft.azure</groupId>
+ <artifactId>applicationinsights-core</artifactId>
+ <version>3.4.8</version>
+</dependency>
+```
+
+and use the static `configure(String)` method in the class
+`com.microsoft.applicationinsights.connectionstring.ConnectionString`.
+
+> [!NOTE]
+> Any telemetry that is captured prior to configuring the connection string will be dropped,
+> so it is best to configure it as early as possible in your application startup.
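Putting those pieces together, a minimal sketch of configuring the connection string at startup; the surrounding class and the environment variable name are illustrative, and only the `ConnectionString.configure(String)` call comes from the API described above:

```java
import com.microsoft.applicationinsights.connectionstring.ConnectionString;

public class Main {
    public static void main(String[] args) {
        // Resolve the connection string at runtime, for example from an
        // environment variable or a secret store (the name is a placeholder).
        String connectionString = System.getenv("MY_AI_CONNECTION_STRING");

        // Configure as early as possible: telemetry captured before this
        // call is dropped.
        ConnectionString.configure(connectionString);

        // ... start the rest of the application.
    }
}
```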
+ ## Autocollect InProc dependencies (preview) Starting from version 3.2.0, if you want to capture controller "InProc" dependencies, use the following configuration:
In the preceding configuration example:
* `level` can be one of `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, or `TRACE`. * `path` can be an absolute or relative path. Relative paths are resolved against the directory where
-`applicationinsights-agent-3.4.7.jar` is located.
+`applicationinsights-agent-3.4.8.jar` is located.
Starting from version 3.0.2, you can also set the self-diagnostics `level` by using the environment variable `APPLICATIONINSIGHTS_SELF_DIAGNOSTICS_LEVEL`. It then takes precedence over the self-diagnostics level specified in the JSON configuration.
azure-monitor Java Standalone Upgrade From 2X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/java-standalone-upgrade-from-2x.md
Title: Upgrading from 2.x - Azure Monitor Application Insights Java description: Upgrading from Azure Monitor Application Insights Java 2.x Previously updated : 12/14/2022 Last updated : 01/18/2023 ms.devlang: java
auto-instrumentation which is provided by the 3.x Java agent.
Add the 3.x Java agent to your JVM command-line args, for example ```--javaagent:path/to/applicationinsights-agent-3.4.7.jar
+-javaagent:path/to/applicationinsights-agent-3.4.8.jar
``` If you were using the Application Insights 2.x Java agent, just replace your existing `-javaagent:...` with the above.
azure-monitor Best Practices Cost https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/best-practices-cost.md
Title: Cost optimization and Azure Monitor
-description: Guidance and recommendations for reducing your cost for Azure Monitor.
+ Title: Optimize costs in Azure Monitor
+description: Recommendations for reducing costs in Azure Monitor.
Last updated 10/17/2022
-# Cost optimization and Azure Monitor
+# Optimize costs in Azure Monitor
You can significantly reduce your cost for Azure Monitor by understanding your different configuration options and opportunities to reduce the amount of data that it collects. Before you use this article, you should see [Azure Monitor cost and usage](usage-estimated-costs.md) to understand the different ways that Azure Monitor charges and how to view your monthly bill. > [!NOTE]
azure-monitor Metrics Supported https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/metrics-supported.md
> [!NOTE] > This list is largely auto-generated. Any modification made to this list via GitHub might be written over without warning. Contact the author of this article for details on how to make permanent updates.
-Date list was last updated: 2021-10-05.
- Azure Monitor provides several ways to interact with metrics, including charting them in the Azure portal, accessing them through the REST API, or querying them by using PowerShell or the Azure CLI (Command Line Interface). This article is a complete list of all platform (that is, automatically collected) metrics currently available with the consolidated metric pipeline in Azure Monitor. Metrics changed or added after the date at the top of this article might not yet appear in the list. To query for and access the list of metrics programmatically, use the [2018-01-01 api-version](/rest/api/monitor/metricdefinitions). Other metrics not in this list might be available in the portal or through legacy APIs.
azure-monitor Getting Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/getting-started.md
These articles provide detailed information about each of the main steps you'll
| [Configure data collection](best-practices-data-collection.md) | Tasks required to collect monitoring data from your Azure and hybrid applications and resources. | | [Analysis and visualizations](best-practices-analysis.md) | Standard features and additional visualizations that you can create to analyze collected monitoring data. | | [Alerts and automated responses](best-practices-alerts.md) | Configure notifications and processes that are automatically triggered when an alert is created. |
-| [Best practices and cost management](best-practices-cost.md) | Reducing your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. |
+| [Optimize costs](best-practices-cost.md) | Reduce your cloud monitoring costs by implementing and managing Azure Monitor in the most cost-effective manner. |
## Next steps
azure-monitor Data Retention Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/data-retention-archive.md
You can access archived data by [running a search job](search-jobs.md) or [resto
> [!NOTE] > The archive period can only be set at the table level, not at the workspace level.
+When you shorten an existing retention policy, Azure Monitor waits 30 days before removing the data. This delay protects against data loss from a configuration error and gives you time to revert the change. You can [purge data](#purge-retained-data) immediately when required.
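For example, a sketch of shortening a single table's retention with PowerShell, assuming a recent Az.OperationalInsights module that includes `Update-AzOperationalInsightsTable` (resource names are placeholders):

```powershell
# Shorten interactive retention to 30 days while keeping 90 days of total
# retention (interactive plus archive) for one table.
Update-AzOperationalInsightsTable -ResourceGroupName ContosoRG `
    -WorkspaceName ContosoWorkspace `
    -TableName SecurityEvent `
    -RetentionInDays 30 `
    -TotalRetentionInDays 90
```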
+ ## Configure the default workspace retention policy You can set the workspace default retention policy in the Azure portal to 30, 31, 60, 90, 120, 180, 270, 365, 550, and 730 days. You can set a different policy for specific tables by [configuring the retention and archive policy at the table level](#set-retention-and-archive-policy-by-table). If you're on the *free* tier, you'll need to upgrade to the paid tier to change the data retention period.
Get-AzOperationalInsightsTable -ResourceGroupName ContosoRG -WorkspaceName Conto
## Purge retained data
-When you shorten an existing retention policy, it takes several days for Azure Monitor to remove data that you no longer want to keep.
- If you set the data retention policy to 30 days, you can purge older data immediately by using the `immediatePurgeDataOn30Days` parameter in Azure Resource Manager. The purge functionality is useful when you need to remove personal data immediately. The immediate purge functionality isn't available through the Azure portal. Workspaces with a 30-day retention policy might keep data for 31 days if you don't set the `immediatePurgeDataOn30Days` parameter.
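A sketch of what setting that parameter looks like in a Resource Manager template; the API version and parameter names here are illustrative of the pattern, not a definitive deployment:

```json
{
  "type": "Microsoft.OperationalInsights/workspaces",
  "apiVersion": "2021-06-01",
  "name": "[parameters('workspaceName')]",
  "location": "[parameters('location')]",
  "properties": {
    "retentionInDays": 30,
    "features": {
      "immediatePurgeDataOn30Days": true
    }
  }
}
```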
azure-monitor Logs Data Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/logs/logs-data-export.md
Log Analytics workspace data export continuously exports data that's sent to you
- Currently, data export isn't supported in China. ## Data completeness
-Data export is optimized for moving large data volumes to your destinations. In certain retry conditions, it can include a fraction of duplicated records. The export operation might fail when ingress limits are reached. For more information, see [Create or update a data export rule](#create-or-update-a-data-export-rule). In such a case, a retry continues for up to 30 minutes. If the destination is still unavailable, data will be discarded until the destination becomes available.
+Data export is optimized for moving large data volumes to your destinations. The export operation might fail because of destination capacity or availability, in which case a retry process continues for up to 12 hours. For destination limits and recommended alerts, see [Create or update a data export rule](#create-or-update-a-data-export-rule). If the destinations are still unavailable after the retry period, data is discarded. In certain retry conditions, the retry can produce a fraction of duplicated records.
## Pricing model Data export charges are based on the volume of data exported measured in bytes. The size of data exported by Log Analytics Data Export is the number of bytes in the exported JSON-formatted data. Data volume is measured in GB (10^9 bytes).
azure-monitor Monitor Virtual Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-agent.md
This article is part of the guide [Monitor virtual machines and their workloads
Any monitoring tool, like Azure Monitor, requires an agent installed on a machine to collect data from its guest operating system. Azure Monitor uses the [Azure Monitor agent](../agents/agents-overview.md), which supports virtual machines in Azure, other cloud environments, and on-premises.
-## Legacy agents
-The Azure Monitor agent replaces legacy agents that are still available but should only be used if you require particular functionality not yet available with Azure Monitor agent. Most users will be able to use Azure Monitor without the legacy agents.
-
-The legacy agents include the following:
--- [Log Analytics agent](../agents/log-analytics-agent.md): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. This agent is the same agent used for System Center Operations Manager.-- [Azure Diagnostic extension](../agents/diagnostics-extension-overview.md): Supports Azure Monitor virtual machines only. Sends data to Azure Monitor Metrics, Azure Event Hubs, and Azure Storage.-
-See [Supported services and features](../agents/agents-overview.md#supported-services-and-features) for the current features supported by Azure Monitor agent. See [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md) for details on migrating to the Azure Monitor agent if you already have the Log Analytics agent deployed.
- ## Prerequisites- ### Create a Log Analytics workspace You don't need a Log Analytics workspace to deploy the Azure Monitor agent, but you will need one to collect the data that it sends. There's no cost for the workspace, but you do incur ingestion and retention costs when you collect data.
For complete details on logic that you should consider for designing a workspace
### Workspace permissions The access mode of the workspace defines which users can access different sets of data. For details on how to define your access mode and configure permissions, see [Manage access to log data and workspaces in Azure Monitor](../logs/manage-access.md). If you're just getting started with Azure Monitor, consider accepting the defaults when you create your workspace and configure its permissions later.
-## Multihoming agents
-Multihoming refers to a virtual machine that connects to multiple workspaces. There's typically little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces most likely creates duplicate data in each workspace, which increases your overall cost. You can combine data from multiple workspaces by using [cross-workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md).
-
-One reason you might consider multihoming, though, is if you have an environment with Microsoft Defender for Cloud or Microsoft Sentinel stored in a workspace that's separate from Azure Monitor. A machine being monitored by each service needs to send data to each workspace.
+> [!TIP]
+> Multihoming refers to a virtual machine that connects to multiple workspaces. There's typically little reason to multihome agents for Azure Monitor alone. Having an agent send data to multiple workspaces most likely creates duplicate data in each workspace, which increases your overall cost. You can combine data from multiple workspaces by using [cross-workspace queries](../logs/cross-workspace-query.md) and [workbooks](../visualizations/../visualize/workbooks-overview.md). One reason you might consider multihoming is if you have an environment with Microsoft Defender for Cloud or Microsoft Sentinel stored in a workspace that's separate from Azure Monitor. A machine being monitored by each service needs to send data to each workspace.
## Prepare hybrid machines A hybrid machine is any machine not running in Azure. It's a virtual machine running in another cloud or hosted provider or a virtual or physical machine running on-premises in your datacenter. Use [Azure Arc-enabled servers](../../azure-arc/servers/overview.md) on hybrid machines so you can manage them similarly to your Azure virtual machines. You can use VM insights in Azure Monitor to use the same process to enable monitoring for Azure Arc-enabled servers as you do for Azure virtual machines. For a complete guide on preparing your hybrid machines for Azure, see [Plan and deploy Azure Arc-enabled servers](../../azure-arc/servers/plan-at-scale-deployment.md). This task includes enabling individual machines and using [Azure Policy](../../governance/policy/overview.md) to enable your entire hybrid environment at scale.
A hybrid machine is any machine not running in Azure. It's a virtual machine run
There's no additional cost for Azure Arc-enabled servers, but there might be some cost for different options that you enable. For details, see [Azure Arc pricing](https://azure.microsoft.com/pricing/details/azure-arc/). There is a cost for the data collected in the workspace after your hybrid machines are onboarded, but this is the same as for an Azure virtual machine. ### Network requirements
-The Azure Monitor agent for both Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. The Dependency agent uses the Azure Monitor agent for all communication, so it doesn't require any another ports. For details on how to configure your firewall and proxy, see [Network requirements](../agents/log-analytics-agent.md#network-requirements).
+The Azure Monitor agent for both Linux and Windows communicates outbound to the Azure Monitor service over TCP port 443. The Dependency agent uses the Azure Monitor agent for all communication, so it doesn't require any other ports. For details on how to configure your firewall and proxy, see [Network requirements](../agents/azure-monitor-agent-data-collection-endpoint.md).
+There are three different options for connecting your hybrid virtual machines to Azure Monitor:
-### Log Analytics gateway
-With the Log Analytics gateway, you can channel communications from your on-premises machines through a single gateway. Azure Arc doesn't use the gateway, but its Connected Machine agent is required to install Azure Monitor agent. For details on how to configure and use the Log Analytics gateway, see [Log Analytics gateway](../agents/gateway.md).
+- **Public internet**. If your hybrid servers are allowed to communicate with the public internet, then they can connect to a global Azure Monitor endpoint. This is the simplest configuration but also the least secure.
+
+- **Log Analytics gateway**. With the Log Analytics gateway, you can channel communications from your on-premises machines through a single gateway. Azure Arc doesn't use the gateway, but its Connected Machine agent is required to install Azure Monitor agent. For details on how to configure and use the Log Analytics gateway, see [Log Analytics gateway](../agents/gateway.md).
-### Azure Private Link
-By using Azure Private Link, you can create a private endpoint for your Log Analytics workspace. After it's configured, any connections to the workspace must be made through this private endpoint. Private Link works by using DNS overrides, so there's no configuration requirement on individual agents. For details on Private Link, see [Use Azure Private Link to securely connect networks to Azure Monitor](../logs/private-link-security.md). For specific guidance on configuring private link for your virtual machines, see [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md).
+- **Azure Private Link**. By using Azure Private Link, you can create a private endpoint for your Log Analytics workspace. After it's configured, any connections to the workspace must be made through this private endpoint. Private Link works by using DNS overrides, so there's no configuration requirement on individual agents. For details on Private Link, see [Use Azure Private Link to securely connect networks to Azure Monitor](../logs/private-link-security.md). For specific guidance on configuring private link for your virtual machines, see [Enable network isolation for the Azure Monitor agent](../agents/azure-monitor-agent-data-collection-endpoint.md).
+ ## Agent deployment options The Azure Monitor agent is implemented as a [virtual machine extension](../../virtual-machines/extensions/overview.md), so you can install it using a variety of standard methods including PowerShell, CLI, and Resource Manager templates. See [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md) for details on each. Other notable methods for installation are described below.
-### Azure Policy
-If you have a significant number of virtual machines, you should deploy the agent using Azure Policy as described in [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md?tabs=azure-portal#use-azure-policy). This will ensure that the agent is automatically added to existing virtual machines and any new ones that you deploy. See [Enable VM insights by using Azure Policy](vminsights-enable-policy.md) for deploying the agent with VM insights.
+| Method | Scenarios | Details |
+|:|:|:|
+| Azure Policy | Production deployment at scale | If you have a significant number of virtual machines, you should deploy the agent using Azure Policy as described in [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md?tabs=azure-portal#use-azure-policy) or [Enable VM insights by using Azure Policy](vminsights-enable-policy.md). This will ensure that the agent is automatically added to existing virtual machines and any new ones that you deploy. |
+| Data collection rule in Azure portal | Testing and simple deployments | When you create a data collection rule in the Azure portal as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md), you have the option of specifying virtual machines to receive it. The Azure Monitor agent will be automatically installed on any machines that don't already have it. |
+| VM insights in Azure portal | Testing and simple deployments with preconfigured monitoring | VM insights provides [simplified onboarding of agents in the Azure portal](vminsights-enable-portal.md). With a single click for a particular machine, it installs the Azure Monitor agent, connects to a workspace, and starts collecting performance data. You can optionally have it install the dependency agent and collect processes and dependency data to enable the map feature of VM insights. |
+| Windows client installer | Client machines | Use the [Windows client installer](../agents/azure-monitor-agent-windows-client.md) to install the agent on Windows clients such as Windows 11. For different options deploying the agent on a single machine or as part of a script, see [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md?tabs=azure-portal#install). |
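Complementing the table above, a one-off install on a single Azure VM can also be done directly with the Azure CLI; a minimal sketch with placeholder resource names:

```azurecli
az vm extension set \
  --resource-group <resource-group-name> \
  --vm-name <virtual-machine-name> \
  --name AzureMonitorWindowsAgent \
  --publisher Microsoft.Azure.Monitor \
  --enable-auto-upgrade true
```

Use `AzureMonitorLinuxAgent` as the extension name for Linux machines.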
-### Data collection rule in the Azure portal
-When you create a data collection rule in the Azure portal as described in [Collect events and performance counters from virtual machines with Azure Monitor Agent](../agents/data-collection-rule-azure-monitor-agent.md), you have the option of specifying virtual machines to receive it. The Azure Monitor agent will be automatically installed on any machines that don't already have it.
-### VM insights
-VM insights provides simplified onboarding of agents in the Azure portal. With a single click for a particular machine, it installs the Azure Monitor agent, connects to a workspace, and starts collecting performance data. You can optionally have it install the dependency agent and collect processes and dependency data to enable the map feature of VM insights.
+## Legacy agents
+The Azure Monitor agent replaces legacy agents that are still available but should only be used if you require particular functionality not yet available with Azure Monitor agent. Most users will be able to use Azure Monitor without the legacy agents.
-You can enable VM insights on individual machines by using the same methods for Azure virtual machines and Azure Arc-enabled servers. These methods include onboarding individual machines with the Azure portal or Azure Resource Manager templates or enabling machines at scale by using Azure Policy. For different options to enable VM insights for your machines, see [Enable VM insights overview](vminsights-enable-overview.md). To create a policy that automatically enables VM insights on any new machines as they're created, see [Enable VM insights by using Azure Policy](vminsights-enable-policy.md).
+The legacy agents include the following:
+- [Log Analytics agent](../agents/log-analytics-agent.md): Supports virtual machines in Azure, other cloud environments, and on-premises. Sends data to Azure Monitor Logs. This agent is the same agent used for System Center Operations Manager.
+- [Azure Diagnostic extension](../agents/diagnostics-extension-overview.md): Supports Azure Monitor virtual machines only. Sends data to Azure Monitor Metrics, Azure Event Hubs, and Azure Storage.
-### Windows client installer
-Use the [Windows client installer](../agents/azure-monitor-agent-windows-client.md) to install the agent on Windows clients such as Windows 11. For different options deploying the agent on a single machine or as part of a script, see [Manage Azure Monitor Agent](../agents/azure-monitor-agent-manage.md?tabs=azure-portal#install).
+See [Supported services and features](../agents/agents-overview.md#supported-services-and-features) for the current features supported by Azure Monitor agent. See [Migrate to Azure Monitor Agent from Log Analytics agent](../agents/azure-monitor-agent-migration.md) for details on migrating to the Azure Monitor agent if you already have the Log Analytics agent deployed.
## Next steps
azure-monitor Monitor Virtual Machine Data Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine-data-collection.md
See [Monitor virtual machines with Azure Monitor: Analyze monitoring data](monit
### VM insights When you enable VM insights, then it will create a data collection rule, with the **_MSVMI-_** prefix that collects the following information. You can use this same DCR with other machines as opposed to creating a new one for each VM. -- Common performance counters for the client operating system are sent to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table in the Log Analytics workspace. Counter names will be normalized to use the same common name regardless of the operating system type.
+- Common performance counters for the client operating system are sent to the [InsightsMetrics](/azure/azure-monitor/reference/tables/insightsmetrics) table in the Log Analytics workspace. Counter names will be normalized to use the same common name regardless of the operating system type. See [How to query logs from VM insights](vminsights-log-query.md#performance-records) for a list of performance counters that are collected.
- If you specified processes and dependencies to be collected, then the following tables are populated: - [VMBoundPort](/azure/azure-monitor/reference/tables/vmboundport) - Traffic for open server ports on the machine
azure-monitor Monitor Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/vm/monitor-virtual-machine.md
Azure Monitor focuses on operational data like Activity logs, Metrics, and Log A
> [!IMPORTANT] > The security services have their own cost independent of Azure Monitor. Before you configure these services, refer to their pricing information to determine your appropriate investment in their usage. -
-### Integration with Azure Monitor
The following table lists the integration points for Azure Monitor with the security services. All the services use the same Azure Monitor agent, which reduces complexity because there are no other components being deployed to your virtual machines. Defender for Cloud and Microsoft Sentinel store their data in a Log Analytics workspace so that you can use log queries to correlate data collected by the different services. Or you can create a custom workbook that combines security data and availability and performance data in a single view. See [Design a Log Analytics workspace architecture](../logs/workspace-design.md) for guidance on the most effective workspace design for your requirements taking into account all your services that use them. | Integration point | Azure Monitor | Microsoft Defender for Cloud | Microsoft Sentinel | Defender for Endpoint | |:|::|::|::|::|
-| Collects security events | | X | X | X |
+| Collects security events | X<sup>1</sup> | X | X | X |
| Stores data in Log Analytics workspace | X | X | X | | | Uses Azure Monitor agent | X | X | X | X |
+<sup>1</sup> Azure Monitor agent can collect security events but will send them to the [Event table](/azure/azure-monitor/reference/tables/event) with other events. Microsoft Sentinel provides additional features to collect and analyze these events.
+ > [!IMPORTANT] > Azure Monitor agent is in preview for some service features. See [Supported services and features](../agents/agents-overview.md#supported-services-and-features) for current details.
azure-netapp-files Azure Netapp Files Solution Architectures https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-solution-architectures.md
na Previously updated : 01/09/2023 Last updated : 01/20/2023 # Solution architectures using Azure NetApp Files
This section provides references for High Performance Computing (HPC) solutions.
### Generic HPC
+* [Azure HPC OnDemand Platform](https://azure.github.io/az-hop/)
* [Azure NetApp Files: Getting the most out of your cloud storage](https://cloud.netapp.com/hubfs/Resources/ANF%20PERFORMANCE%20TESTING%20IN%20TEMPLATE.pdf) * [Run MPI workloads with Azure Batch and Azure NetApp Files](https://azure.microsoft.com/resources/run-mpi-workloads-with-azure-batch-and-azure-netapp-files/) * [Azure Cycle Cloud: CycleCloud HPC environments on Azure NetApp Files](/azure/cyclecloud/overview)
azure-netapp-files Faq Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-smb.md
Previously updated : 08/24/2022 Last updated : 01/19/2023 # SMB FAQs for Azure NetApp Files
To see when the password was last updated on the Azure NetApp Files SMB machine
> Due to an interoperability issue with the [April 2022 Monthly Windows Update]( https://support.microsoft.com/topic/april-12-2022-kb5012670-monthly-rollup-cae43d16-5b5d-43ea-9c52-9174177c6277), the policy that automatically updates the Active Directory machine account password for SMB volumes has been suspended until a fix is deployed.
+## Does Azure NetApp Files support Alternate Data Streams (ADS)?
+
+Yes, Azure NetApp Files supports [Alternate Data Streams (ADS)](/openspecs/windows_protocols/ms-fscc/e2b19412-a925-4360-b009-86e3b8a020c8) by default on [SMB volumes](azure-netapp-files-create-volumes-smb.md) and [dual-protocol volumes configured with NTFS security style](create-volumes-dual-protocol.md#considerations) when accessed via SMB.
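A quick way to observe ADS behavior from a Windows client, assuming an SMB volume reachable over a UNC path (the path below is a placeholder):

```powershell
$file = '\\<smb-volume-fqdn>\<share>\report.txt'

# Write the primary stream, then attach an alternate data stream to the file.
Set-Content -Path $file -Value 'primary content'
Set-Content -Path $file -Stream 'metadata' -Value 'alternate stream content'

# List all streams on the file (:$DATA is the primary one), then read the ADS.
Get-Item -Path $file -Stream *
Get-Content -Path $file -Stream 'metadata'
```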
+ ## Next steps - [FAQs about SMB performance for Azure NetApp Files](azure-netapp-files-smb-performance.md)
azure-resource-manager Bicep Config Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config-modules.md
Title: Module setting for Bicep config description: Describes how to customize configuration values for modules in Bicep deployments. Previously updated : 01/11/2023 Last updated : 01/18/2023 # Add module settings in the Bicep config file
azure-resource-manager Bicep Config https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-config.md
Title: Bicep config file description: Describes the configuration file for your Bicep deployments Previously updated : 01/09/2023 Last updated : 01/18/2023 # Configure your Bicep environment
Bicep supports a configuration file named `bicepconfig.json`. Within this file,
To customize values, create this file in the directory where you store Bicep files. You can add `bicepconfig.json` files in multiple directories. The configuration file closest to the Bicep file in the directory hierarchy is used.
-To create a `bicepconfig.json` file in Visual Studio Code, open the Command Palette (**[CTRL/CMD]**+**[SHIFT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. For more information, see [Visual Studio Code](./visual-studio-code.md#create-bicep-configuration-file).
+## Create the config file in VSCode
+
+You can use any text editor to create the config file.
+
+To create a `bicepconfig.json` file in Visual Studio Code, open the Command Palette (**[CTRL/CMD]**+**[SHIFT]**+**P**), and then select **Bicep: Create Bicep Configuration File**. For more information, see [Visual Studio Code](./visual-studio-code.md#create-bicep-configuration-file).
:::image type="content" source="./media/bicep-config/vscode-create-bicep-configuration-file.png" alt-text="Screenshot of how to create Bicep configuration file in VSCode.":::
-## Available settings
+The Bicep extension for Visual Studio Code supports intellisense for your `bicepconfig.json` file. Use the intellisense to discover available properties and values.
+
-When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. You can also configure cloud profile and credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function.For more information, see [Add module settings to Bicep config](bicep-config-modules.md).
+## Configure Bicep modules
+
+When working with [modules](modules.md), you can add aliases for module paths. These aliases simplify your Bicep file because you don't have to repeat complicated paths. You can also configure cloud profile and credential precedence for authenticating to Azure from Bicep CLI and Visual Studio Code. The credentials are used to publish modules to registries and to restore external modules to the local cache when using the insert resource function. For more information, see [Add module settings to Bicep config](bicep-config-modules.md).
+
+## Configure Linter rules
The [Bicep linter](linter.md) checks Bicep files for syntax errors and best practice violations. You can override the default settings for the Bicep file validation by modifying `bicepconfig.json`. For more information, see [Add linter settings to Bicep config](bicep-config-linter.md).
-## Intellisense
+## Enable experimental features
-The Bicep extension for Visual Studio Code supports intellisense for your `bicepconfig.json` file. Use the intellisense to discover available properties and values.
+The following sample enables the [user-defined types in Bicep](https://aka.ms/bicepCustomTypes) and `imports` experimental features:
+```json
+{
+ "experimentalFeaturesEnabled": {
+ "imports": true,
+ "userDefineTypes": true
+    "userDefinedTypes": true
+}
+```
## Next steps
azure-resource-manager Resource Name Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/management/resource-name-rules.md
In the following tables, the term alphanumeric refers to:
> | | | | | > | profiles | resource group | 1-260 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. | > | profiles / endpoints | global | 1-50 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. |
+> | profiles / originGroups | global | 1-50 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. |
+> | profiles / originGroups / origins | global | 1-50 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. |
+> | profiles / afdEndpoints / routes | global | 1-50 | Alphanumerics and hyphens.<br><br>Start and end with alphanumeric. |
## Microsoft.CertificateRegistration
azure-video-indexer Live Stream Analysis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-video-indexer/live-stream-analysis.md
- Title: Live stream analysis using Azure Video Indexer
-description: This article shows how to perform a live stream analysis using Azure Video Indexer.
- Previously updated : 11/13/2019--
-# Live stream analysis with Azure Video Indexer
-
-Azure Video Indexer is an Azure service designed to extract deep insights from video and audio files offline, by analyzing a media file that was created in advance. However, for some use cases it's important to get the media insights from a live feed as quickly as possible to unlock operational and other time-sensitive use cases. For example, such rich metadata on a live stream could be used by content producers to automate TV production.
-
-A solution described in this article allows customers to use Azure Video Indexer in near real-time resolutions on live feeds. The delay in indexing can be as low as four minutes with this solution, depending on the chunks of data being indexed, the input resolution, the type of content, and the compute power used for this process.
-
-![The Azure Video Indexer metadata on the live stream](./media/live-stream-analysis/live-stream-analysis01.png)
-
-*Figure 1 – Sample player displaying the Azure Video Indexer metadata on the live stream*
-
-The [stream analysis solution](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/LiveStreamAnalysis/README.MD) at hand uses Azure Functions and two Logic Apps to process a live program from a live channel in Azure Media Services with Azure Video Indexer, and displays the result with Azure Media Player showing the resulting near-real-time stream.
-
-At a high level, the solution comprises two main steps. The first step runs every 60 seconds: it takes a subclip of the last 60 seconds played, creates an asset from it, and indexes it via Azure Video Indexer. The second step is called once indexing is complete. The insights captured are processed, sent to Azure Cosmos DB, and the indexed subclip is deleted.
-
-The sample player plays the live stream and gets the insights from Azure Cosmos DB, using a dedicated Azure Function. It displays the metadata and thumbnails in sync with the live video.
-
-![The two logic apps processing the live stream every minute in the cloud](./media/live-stream-analysis/live-stream-analysis02.png)
-
-*Figure 2 – The two logic apps processing the live stream every minute in the cloud.*
-
-## Step-by-step guide
-
-The full code and a step-by-step guide to deploy the results can be found in [GitHub project for Live media analytics with Azure Video Indexer](https://github.com/Azure-Samples/media-services-video-indexer/blob/master/LiveStreamAnalysis/README.MD).
-
-## Next steps
-
-[Azure Video Indexer overview](video-indexer-overview.md)
azure-vmware Concepts Network Design Considerations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/concepts-network-design-considerations.md
Last updated 1/10/2023
# Azure VMware Solution network design considerations
-Azure VMware Solution offers a VMware private cloud environment accessible for users and applications from on-premises and Azure-based environments or resources. The connectivity is delivered through networking services such as Azure ExpressRoute and VPN connections. There are several networking considerations to review before setting up your Azure VMware Solution environment. This article provides solutions for use cases you may encounter when configuring your networking with Azure VMware Solution.
+Azure VMware Solution offers a VMware private cloud environment that users and applications can access from on-premises and Azure-based environments or resources. Networking services such as Azure ExpressRoute and virtual private network (VPN) connections deliver the connectivity.
+
+There are several networking considerations to review before you set up your Azure VMware Solution environment. This article provides solutions for use cases that you might encounter when you're using Azure VMware Solution to configure your networks.
## Azure VMware Solution compatibility with AS-Path Prepend
-Azure VMware Solution is compatible with AS-Path Prepend for redundant ExpressRoute configurations with the caveat of not honoring the outbound path selection from Azure towards on-premises. If you're running two or more ExpressRoute paths between on-premises and Azure, and the listed [Prerequisites](#prerequisites) are not met, you may experience impaired connectivity or no connectivity between your on-premises networks and Azure VMware Solution. The connectivity issue is caused when Azure VMware Solution doesn't see the AS-Path Prepend and uses equal cost multi-pathing (ECMP) to send traffic towards your environment over both ExpressRoute circuits. That action causes issues with stateful firewall inspection.
+Azure VMware Solution is compatible with AS-Path Prepend for redundant ExpressRoute configurations, with the caveat of not honoring the outbound path selection from Azure toward on-premises. If you're running two or more ExpressRoute paths between on-premises and Azure, and you don't meet the listed [prerequisites](#prerequisites), you might experience impaired connectivity or no connectivity between your on-premises networks and Azure VMware Solution.
+
+The connectivity problem happens when Azure VMware Solution doesn't notice AS-Path Prepend and uses equal-cost multipath (ECMP) routing to send traffic toward your environment over both ExpressRoute circuits. That action causes problems with stateful firewall inspection.
### Prerequisites
-For AS-Path Prepend, you'll need to verify that all of the following listed connections are true:
+For AS-Path Prepend, verify that all of the following connections are true:
> [!div class="checklist"]
-> * Both or all circuits are connected to Azure VMware Solution with ExpressRoute Global Reach.
+> * Both or all circuits are connected to Azure VMware Solution through ExpressRoute Global Reach.
> * The same netblocks are being advertised from two or more circuits.
> * Stateful firewalls are in the network path.
> * You're using AS-Path Prepend to force Azure to prefer one path over others.
-Either 2 or 4 byte Public ASN numbers should be used and be compatible with Azure VMware Solution. If you don't own a Public ASN to use for prepending, open a [Microsoft Customer Support Ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) to view options.
+Use either 2-byte or 4-byte public ASN numbers, and make sure that they're compatible with Azure VMware Solution. If you don't own a public ASN for prepending, open a [Microsoft support ticket](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview) to view options.
## Management VMs and default routes from on-premises

> [!IMPORTANT]
-> Azure VMware Solution Management VMs will not honor a default route from on-premises.
+> Azure VMware Solution management virtual machines (VMs) won't honor a default route from on-premises.
-If you're routing back to your on-premises networks using only a default route advertised towards Azure, the vCenter Server and NSX-T Manager VMs won't be compatible with that route.
+If you're routing back to your on-premises networks by using only a default route advertised toward Azure, vCenter Server and NSX-T Manager VMs won't be compatible with that route.
-**Solution**
+To reach vCenter Server and NSX-T Manager, provide specific routes from on-premises to allow traffic to have a return path to those networks.
-To reach vCenter Server and NSX-T Manager, more specific routes from on-premises need to be provided to allow traffic to have a return path route to those networks.
+## Default route to Azure VMware Solution for internet traffic inspection
-## Use a default route to Azure VMware Solution for internet traffic inspection
+Certain deployments require inspecting all egress traffic from Azure VMware Solution toward the internet. Although it's possible to create network virtual appliances (NVAs) in Azure VMware Solution, there are use cases where these appliances already exist in Azure and can be applied to inspect internet traffic from Azure VMware Solution. In this case, a default route can be injected from the NVA in Azure to attract traffic from Azure VMware Solution and inspect the traffic before it goes out to the public internet.
-Certain deployments require inspecting all egress traffic from Azure VMware Solution towards the Internet. While it's possible to create Network Virtual Appliances (NVAs) in Azure VMware Solution, there are use cases when these appliances already exist in Azure that can be applied to inspect Internet traffic from Azure VMware Solution. In this case, a default route can be injected from the NVA in Azure to attract traffic from Azure VMware Solution and inspect it before sending it out to the public Internet.
+The following diagram describes a basic hub-and-spoke topology connected to an Azure VMware Solution cloud and to an on-premises network through ExpressRoute. The diagram shows how the NVA in Azure originates the default route (`0.0.0.0/0`). Azure Route Server propagates the route to Azure VMware Solution through ExpressRoute.
-The following diagram describes a basic hub and spoke topology connected to an Azure VMware Solution cloud and to an on-premises network through ExpressRoute. The diagram shows how the default route (`0.0.0.0/0`) is originated by the NVA in Azure, and propagated by Azure Route Server to Azure VMware Solution through ExpressRoute.
-
> [!IMPORTANT]
-> The default route advertised by the NVA will be propagated to the on-premises network. Because of that, UDRs will need to be added to ensure traffic from Azure VMware Solution is transiting through the NVA.
+> The default route that the NVA advertises will be propagated to the on-premises network. You need to add user-defined routes (UDRs) to ensure that traffic from Azure VMware Solution is transiting through the NVA.
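For illustration, a user-defined route that steers traffic through the NVA might look like the following Azure Resource Manager sketch. This is a minimal example, not part of the original article: the route table name, the `0.0.0.0/0` prefix, and the NVA IP address (`192.168.0.4`) are assumptions, so substitute the prefixes and next-hop address that match your topology, and associate the table with the affected subnets.

```json
{
  "type": "Microsoft.Network/routeTables",
  "apiVersion": "2022-07-01",
  "name": "udr-inspect-via-nva",
  "location": "[resourceGroup().location]",
  "properties": {
    "routes": [
      {
        "name": "to-nva",
        "properties": {
          "addressPrefix": "0.0.0.0/0",
          "nextHopType": "VirtualAppliance",
          "nextHopIpAddress": "192.168.0.4"
        }
      }
    ]
  }
}
```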
Communication between Azure VMware Solution and the on-premises network usually occurs over ExpressRoute Global Reach, as described in [Peer on-premises environments to Azure VMware Solution](../azure-vmware/tutorial-expressroute-global-reach-private-cloud.md).
-## Connectivity between Azure VMware Solution and on-premises network via a third party network virtual appliance
+## Connectivity between Azure VMware Solution and an on-premises network
-There are two main scenarios for this connectivity pattern:
+There are two main scenarios for connectivity between Azure VMware Solution and an on-premises network via a third-party NVA:
-- Organizations may have the requirement to send traffic between Azure VMware Solution and the on-premises network through an NVA (typically a firewall).
-- ExpressRoute Global Reach might not be available in a particular region to interconnect the ExpressRoute circuits of Azure VMware Solution and the on-premises network.
+- Organizations have a requirement to send traffic between Azure VMware Solution and the on-premises network through an NVA (typically a firewall).
+- ExpressRoute Global Reach isn't available in a particular region to interconnect the ExpressRoute circuits of Azure VMware Solution and the on-premises network.
-There are two topologies you can apply to meet all requirements for these two scenarios. The first is a [Supernet topology](#supernet-design-topology) and the second is a [Transit spoke virtual network topology](#transit-spoke-virtual-network-topology).
+There are two topologies that you can apply to meet all requirements for those scenarios: [supernet](#supernet-design-topology) and [transit spoke virtual network](#transit-spoke-virtual-network-topology).
> [!IMPORTANT]
-> The preferred option to connect Azure VMware Solution and on-premises environments is a direct ExpressRoute Global Reach connection. The patterns described in this document add considerable complexity to the environment.
+> The preferred option to connect Azure VMware Solution and on-premises environments is a direct ExpressRoute Global Reach connection. The patterns described in this article add complexity to the environment.
### Supernet design topology
-If both ExpressRoute circuits (to Azure VMware Solution and to on-premises) are terminated in the same ExpressRoute gateway, you can assume that the gateway is going to route packets across them. However, an ExpressRoute gateway isn't designed to do that. You need to hairpin the traffic to an NVA that can route the traffic. There are two requirements to hairpin network traffic to an NVA:
+If both ExpressRoute circuits (to Azure VMware Solution and to on-premises) are terminated in the same ExpressRoute gateway, you can assume that the gateway is going to route packets across them. However, an ExpressRoute gateway isn't designed to do that. You need to hairpin the traffic to an NVA that can route the traffic.
-- The NVA should advertise a supernet for the Azure VMware Solution and on-premises prefixes.
+There are two requirements to hairpin network traffic to an NVA:
- You could use a supernet that includes both Azure VMware Solution and on-premises prefixes, or individual prefixes for Azure VMware Solution and on-premises (always less specific than the actual prefixes advertised over ExpressRoute). Keep in mind that all supernet prefixes advertised to Route Server are going to be propagated both to Azure VMware Solution and on-premises.
-- UDRs in the GatewaySubnet that exactly match the prefixes advertised from Azure VMware Solution and on-premises will cause hairpin traffic from the GatewaySubnet to the NVA.
+- The NVA should advertise a supernet for the Azure VMware Solution and on-premises prefixes.
-**This topology results in high management overhead for large networks that change over time. Note that there are specific limitations to be considered.**
+ You could use a supernet that includes both Azure VMware Solution and on-premises prefixes. Or you could use individual prefixes for Azure VMware Solution and on-premises (always less specific than the actual prefixes advertised over ExpressRoute). Keep in mind that all supernet prefixes advertised to Route Server will be propagated to both Azure VMware Solution and on-premises.
+- UDRs in the gateway subnet that exactly match the prefixes advertised from Azure VMware Solution and on-premises will cause hairpin traffic from the gateway subnet to the NVA.
-**Limitations**
+This topology results in high management overhead for large networks that change over time. Consider these limitations:
-- Anytime a workload segment is created in Azure VMware Solution, UDRs may need to be added to ensure traffic from Azure VMware Solution is transiting through the NVA.
-- If your on-premises environment has a large number of routes that change, BGP and UDR configuration in the supernet may need to be updated.
-- Since there's a single ExpressRoute Gateway that processes network traffic in both directions, performance may be limited.
+- Anytime a workload segment is created in Azure VMware Solution, UDRs might need to be added to ensure that traffic from Azure VMware Solution is transiting through the NVA.
+- If your on-premises environment has a large number of routes that change, Border Gateway Protocol (BGP) and UDR configuration in the supernet might need to be updated.
+- Because a single ExpressRoute gateway processes network traffic in both directions, performance might be limited.
- There's an Azure Virtual Network limit of 400 UDRs.
-The following diagram demonstrates how the NVA needs to advertise more generic (less specific) prefixes that include the networks from on-premises and Azure VMware Solution. Be careful with this approach as the NVA could potentially attract traffic that it shouldn't (since it's advertising wider ranges, for example: the whole `10.0.0.0/8` network).
+The following diagram demonstrates how the NVA needs to advertise prefixes that are more generic (less specific) and that include the networks from on-premises and Azure VMware Solution. Be careful with this approach. The NVA could potentially attract traffic that it shouldn't, because it's advertising wider ranges (for example, the whole `10.0.0.0/8` network).
:::image type="content" source="media/concepts-network-design/vmware-solution-to-on-premises-hairpin.png" alt-text="Diagram of Azure VMware Solution to on-premises communication with Route Server in a single region." lightbox="media/concepts-network-design/vmware-solution-to-on-premises-hairpin.png":::

### Transit spoke virtual network topology

> [!NOTE]
-> If advertising less specific prefixes is not possible due to the limits previously described, you can implement an alternative design using two separate Virtual Networks.
+> If advertising prefixes that are less specific isn't possible because of the previously described limits, you can implement an alternative design that uses two separate virtual networks.
-In this topology, instead of propagating less specific routes to attract traffic to the ExpressRoute gateway, two different NVAs in separate Virtual Networks can exchange routes between each other. The Virtual Networks can propagate these routes to their respective ExpressRoute circuits via BGP and Azure Route Server, as the following diagram shows. Each NVA has full control on which prefixes are propagated to each ExpressRoute circuit.
+In this topology, instead of propagating routes that are less specific to attract traffic to the ExpressRoute gateway, two different NVAs in separate virtual networks can exchange routes between each other. The virtual networks can propagate these routes to their respective ExpressRoute circuits via BGP and Azure Route Server. Each NVA has full control over which prefixes are propagated to each ExpressRoute circuit.
-The following diagram demonstrates how a single 0.0.0.0/0 is advertised to Azure VMware Solution. It also shows how the individual Azure VMware Solution prefixes are propagated to the on-premises network.
+The following diagram demonstrates how a single `0.0.0.0/0` route is advertised to Azure VMware Solution. It also shows how the individual Azure VMware Solution prefixes are propagated to the on-premises network.
:::image type="content" source="media/concepts-network-design/vmware-solution-to-on-premises.png" alt-text="Diagram of Azure VMware Solution to on-premises communication with Route Server in two regions." lightbox="media/concepts-network-design/vmware-solution-to-on-premises.png":::

> [!IMPORTANT]
-> An encapsulation protocol such as VXLAN or IPsec is required between the NVAs. Encapsulation is needed because the NVA NICs would learn the routes from Azure Route Server with the NVA as next hop and create a routing loop.
+> An encapsulation protocol such as VXLAN or IPsec is required between the NVAs. Encapsulation is needed because the NVA network adapter (NIC) would learn the routes from Azure Route Server with the NVA as the next hop and create a routing loop.
-There's an alternative to using an overlay. Apply secondary NICs in the NVA that won't learn the routes from Azure Route Server and configure UDRs so that Azure can route traffic to the remote environment over those NICs. You can find more details in [Enterprise-scale network topology and connectivity for Azure VMware Solution](/azure/cloud-adoption-framework/scenarios/azure-vmware/eslz-network-topology-connectivity#scenario-2-a-third-party-nva-in-hub-azure-virtual-network-inspects-all-network-traffic).
+There's an alternative to using an overlay. Apply secondary NICs in the NVA that won't learn the routes from Azure Route Server. Then, configure UDRs so that Azure can route traffic to the remote environment over those NICs. You can find more details in [Enterprise-scale network topology and connectivity for Azure VMware Solution](/azure/cloud-adoption-framework/scenarios/azure-vmware/eslz-network-topology-connectivity#scenario-2-a-third-party-nva-in-hub-azure-virtual-network-inspects-all-network-traffic).
-**This topology requires a complex initial set-up. Once the set-up is complete, the topology works as expected with minimal management overhead. See the following list of specific set-up complexities.**
+This topology requires a complex initial setup. The topology then works as expected with minimal management overhead. Setup complexities include:
-- There's an extra cost for an additional transit Virtual Network that includes an Azure Route Server, ExpressRoute Gateway, and another NVA. The NVAs may also need to use large VM sizes to meet throughput requirements.
-- There's IPSec or VxLAN tunneling between the two NVAs required which means that the NVAs are also in the datapath. Depending on the type of NVA you're using, it can result in custom and complex configuration on those NVAs.
+- There's an extra cost for an additional transit virtual network that includes Azure Route Server, an ExpressRoute gateway, and another NVA. The NVAs might also need to use large VM sizes to meet throughput requirements.
+- IPsec or VXLAN tunneling is required between the two NVAs, which means that the NVAs are also in the datapath. Depending on the type of NVA that you're using, it can result in custom and complex configuration on those NVAs.
## Next steps
-Now that you've covered Azure VMware Solution network design considerations, you may want to learn more about:
+Now that you've covered network design considerations for Azure VMware Solution, you might want to learn more about these topics:
-- [Network interconnectivity concepts - Azure VMware Solution](concepts-networking.md)
+- [Azure VMware Solution networking and interconnectivity concepts](concepts-networking.md)
- [Plan the Azure VMware Solution deployment](plan-private-cloud-deployment.md)
-- [Networking planning checklist for Azure VMware Solution](tutorial-network-checklist.md)
-
-## Recommended content
-
-- [Tutorial - Configure networking for your VMware private cloud in Azure - Azure VMware Solution](tutorial-network-checklist.md)
-
+- [Tutorial: Networking planning checklist for Azure VMware Solution](tutorial-network-checklist.md)
azure-vmware Deploy Arc For Azure Vmware Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-vmware/deploy-arc-for-azure-vmware-solution.md
After your Azure VMware Solution vCenter resources have been enabled for access
This section demonstrates how to use custom roles to manage granular access to VMware vSphere resources through Azure.
-#### Arc-enabled VMware vSphere custom roles
+#### Arc-enabled VMware vSphere built-in roles
-Three custom roles are provided to meet your Role-based access control (RBAC) requirements. These roles can be applied to a whole subscription, resource group, or a single resource.
+There are three built-in roles to meet your role-based access control (RBAC) requirements. You can apply these roles to a whole subscription, a resource group, or a single resource.
-- Azure Arc VMware vSphere Administrator role
-- Azure Arc VMware vSphere Private Cloud User role
-- Azure Arc VMware vSphere VM Contributor role
+**Azure Arc VMware Administrator role** - used by administrators.
-The first role is for an Administrator. The other two roles apply to anyone who needs to deploy or manage a VM.
+**Azure Arc VMware Private Cloud User role** - used by anyone who needs to use the Arc-enabled vSphere resources that are made accessible through Azure to deploy and manage VMs.
+
+**Azure Arc VMware VM Contributor role** - used by anyone who needs to perform VM operations, such as deploying and managing VMs.
**Azure Arc Azure VMware Solution Administrator role**
-This custom role gives the user permission to conduct all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. This role should be assigned to users or groups who are administrators that manage Azure Arc-enabled Azure VMware Solution deployment.
+This role provides permissions to perform all possible operations for the `Microsoft.ConnectedVMwarevSphere` resource provider. Assign this role to users or groups that are administrators managing the Azure Arc-enabled VMware vSphere deployment.
**Azure Arc Azure VMware Solution Private Cloud User role**
-This custom role gives the user permission to use the Arc-enabled Azure VMware Solutions vSphere resources that have been made accessible through Azure. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
+This role gives the user permission to use the Arc-enabled Azure VMware Solutions vSphere resources that have been made accessible through Azure. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
We recommend assigning this role at the individual resource pool (host or cluster), virtual network, or template that you want the user to deploy VMs with.

**Azure Arc Azure VMware Solution VM Contributor role**
-This custom role gives the user permission to perform all VMware VM operations. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
+This role gives the user permission to perform all VMware VM operations. This role should be assigned to any users or groups that need to deploy, update, or delete VMs.
We recommend assigning this role at the subscription level or at the resource group that you want the user to deploy VMs in.
For the final step, you'll need to delete the resource bridge VM and the VM temp
**Is Arc supported in all the Azure VMware Solution regions?**
-Arc is supported in EastUS and WestEU regions however we are working to extend the regional support.
+Arc is supported in the EastUS, WestEU, UK South, Australia East, Canada Central, and Southeast Asia regions. However, we're working to extend the regional support.
**How does support work?**
azure-web-pubsub Reference Json Reliable Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-reliable-webpubsub-subprotocol.md
Title: Reference - Azure Web PubSub supported JSON WebSocket subprotocol `json.reliable.webpubsub.azure.v1`
+ Title: Reference - Azure Web PubSub JSON WebSocket subprotocol `json.reliable.webpubsub.azure.v1`
description: The reference describes Azure Web PubSub supported WebSocket subprotocol `json.reliable.webpubsub.azure.v1` Previously updated : 11/06/2021 Last updated : 01/09/2023
-# Azure Web PubSub supported Reliable JSON WebSocket subprotocol
-
-This document describes the subprotocol `json.reliable.webpubsub.azure.v1`.
+# Azure Web PubSub Reliable JSON WebSocket subprotocol
-When the client is using this subprotocol, both outgoing data frame and incoming data frame are expected to be **JSON** payloads.
+The JSON WebSocket subprotocol, `json.reliable.webpubsub.azure.v1`, enables the highly reliable exchange of publish/subscribe messages directly between clients even during network issues.
+
+This document describes the subprotocol `json.reliable.webpubsub.azure.v1`.
> [!NOTE]
-> Reliable protocols are still in preview. Some changes are expected in future.
+> Reliable protocols are still in preview. Some changes are expected in the future.
+
+When WebSocket client connections drop due to intermittent network issues, messages can be lost. In a pub/sub system, publishers are decoupled from subscribers and may not detect a subscriber's dropped connection or message loss.
+
+To overcome intermittent network issues and maintain reliable message delivery, you can use the Azure WebPubSub `json.reliable.webpubsub.azure.v1` subprotocol to create a *Reliable PubSub WebSocket client*.
-## Overview
+A *Reliable PubSub WebSocket client* can:
-Subprotocol `json.reliable.webpubsub.azure.v1` empowers the client to have a highly reliable message delivery experience under network issues and to do publish-subscribe (PubSub) directly instead of doing a round trip to the upstream server. The WebSocket connection with the `json.reliable.webpubsub.azure.v1` subprotocol is called a Reliable PubSub WebSocket client.
+* reconnect a dropped connection.
+* recover from message loss.
+* join a group using [join requests](#join-groups).
+* publish messages directly to a group using [publish requests](#publish-messages).
+* route messages directly to upstream event handlers using [event requests](#send-custom-events).
+
+For example, you can create a *Reliable PubSub WebSocket client* with the following JavaScript code:
-For example, in JS, a Reliable PubSub WebSocket client can be created using:
```js
var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.reliable.webpubsub.azure.v1');
```
-When using `json.reliable.webpubsub.azure.v1` subprotocol, the client must follow the [How to create reliable clients](./howto-develop-reliable-clients.md) to implement reconnection, publisher and subscriber.
+See [How to create reliable clients](./howto-develop-reliable-clients.md) to implement reconnection and message reliability for publisher and subscriber clients.
+
+When the client is using this subprotocol, both outgoing and incoming data frames must contain JSON payloads.
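As an illustration of the reconnection flow, here's a minimal sketch that reuses the `connectionId` and `reconnectionToken` delivered in the `connected` system event. The endpoint URL is illustrative, the query parameter names follow [How to create reliable clients](./howto-develop-reliable-clients.md), and production code would add retry limits and backoff:

```js
// Minimal reconnection sketch (illustrative, not the full guidance).
var endpoint = 'wss://test.webpubsub.azure.com/client/hubs/hub1';
var connectionId = null;
var reconnectionToken = null;

function connect(url) {
  var ws = new WebSocket(url, 'json.reliable.webpubsub.azure.v1');
  ws.onmessage = function (e) {
    var message = JSON.parse(;
    if (message.type === 'system' && message.event === 'connected') {
      // Save the recovery state for later reconnects.
      connectionId = message.connectionId;
      reconnectionToken = message.reconnectionToken;
    }
  };
  ws.onclose = function () {
    if (connectionId && reconnectionToken) {
      // Recover the same session instead of starting a new connection.
      connect(endpoint + '?awps_connection_id=' + encodeURIComponent(connectionId) +
        '&awps_reconnection_token=' + encodeURIComponent(reconnectionToken));
    }
  };
  return ws;
}

var pubsub = connect(endpoint);
```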
[!INCLUDE [reference-permission](includes/reference-permission.md)]
Format:
} ```
-Reliable PubSub WebSocket client must send sequence ack message once it received a message from the service. Find more in [How to create reliable clients](./howto-develop-reliable-clients.md#subscriber)
+A Reliable PubSub WebSocket client must send a sequence ack message once it receives a message from the service. For more information, see [How to create reliable clients](./howto-develop-reliable-clients.md#subscriber).
-* `sequenceId` is a incremental uint64 number from the message received.
+* `sequenceId` is an incremental uint64 number from the message received.
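As a sketch of what this looks like on the wire, a subscriber can track the latest `sequenceId` it has seen and acknowledge every message. This example assumes `pubsub` is the Reliable PubSub WebSocket client created earlier, and that a redelivered message reuses a `sequenceId` the client already processed:

```js
// Ack every sequenceId and drop redelivered duplicates.
var latestSequenceId = 0;

pubsub.onmessage = function (e) {
  var message = JSON.parse(;
  if (typeof message.sequenceId === 'number') {
    // Always ack, even for duplicates, so the service can advance its cursor.
    pubsub.send(JSON.stringify({ type: 'sequenceAck', sequenceId: message.sequenceId }));
    if (message.sequenceId <= latestSequenceId) {
      return; // duplicate delivery; already processed
    }
    latestSequenceId = message.sequenceId;
  }
  // ... handle ack, message, and system responses here ...
};
```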
## Responses
-Messages received by the client can be several types: `ack`, `message`, and `system`. Messages with type `message` have `sequenceId` property. Client must send [Sequence Ack](#sequence-ack) to the service once it receives a message.
+Messages received by the client can be of several types: `ack`, `message`, and `system`. Messages with type `message` have a `sequenceId` property. The client must send a [Sequence Ack](#sequence-ack) to the service once it receives a message.
### Ack response
-If the request contains `ackId`, the service will return an ack response for this request. The client implementation should handle this ack mechanism, including waiting for the ack response for an `async` `await` operation, and having a timeout check when the ack response is not received during a certain period.
+When the request contains `ackId`, the service will return an ack response for this request. The client implementation should handle this ack mechanism, including waiting for the ack response in an `async` `await` operation, and having a timeout handler for when the ack response isn't received within a certain period.
Format: ```json
The client implementation SHOULD always check if the `success` is `true` or `fal
### Message response
-Clients can receive messages published from one group the client joined, or from the server management role that the server sends messages to the specific client or the specific user.
+Clients can receive messages published from a group the client has joined or from the server, which, operating in a server management role, sends messages to specific clients or users.
-1. When the message is from a group
+1. The response message from a group:
```json {
Clients can receive messages published from one group the client joined, or from
} ```
-1. When The message is from the server.
+1. The response message from the server:
```json {
Clients can receive messages published from one group the client joined, or from
```

#### Case 1: Sending data `Hello World` to the connection through REST API with `Content-Type`=`text/plain`
-* What a simple WebSocket client receives is a text WebSocket frame with data: `Hello World`;
-* What a PubSub WebSocket client receives is as follows:
+
+* A simple WebSocket client receives a text WebSocket frame with data: `Hello World`;
+* A PubSub WebSocket client receives the message in JSON:
+ ```json { "sequenceId": 1,
Clients can receive messages published from one group the client joined, or from
```

#### Case 2: Sending data `{ "Hello" : "World"}` to the connection through REST API with `Content-Type`=`application/json`
-* What a simple WebSocket client receives is a text WebSocket frame with stringified data: `{ "Hello" : "World"}`;
-* What a PubSub WebSocket client receives is as follows:
+
+* A simple WebSocket client receives a text WebSocket frame with stringified data: `{ "Hello" : "World"}`;
+* A PubSub WebSocket client receives the message in JSON:
+ ```json { "sequenceId": 1,
Clients can receive messages published from one group the client joined, or from
} ```
-If the REST API is sending a string `Hello World` using `application/json` content type, what the simple WebSocket client receives is a JSON string, which is `"Hello World"` that wraps the string with `"`.
+If the REST API is sending a string `Hello World` using the `application/json` content type, the simple WebSocket client receives the JSON string `"Hello World"`, that is, the string wrapped in double quotes (`"`).
#### Case 3: Sending binary data to the connection through REST API with `Content-Type`=`application/octet-stream`
-* What a simple WebSocket client receives is a binary WebSocket frame with the binary data.
-* What a PubSub WebSocket client receives is as follows:
+
+* A simple WebSocket client receives a binary WebSocket frame with the binary data.
+* A PubSub WebSocket client receives the message in JSON:
+ ```json { "sequenceId": 1,
If the REST API is sending a string `Hello World` using `application/json` conte
### System response
-The Web PubSub service can also send system-related responses to the client.
+The Web PubSub service can return system-related responses to the client.
#### Connected
-When the connection connects to service.
+The response to the client connect request:
```json {
Find more details in [Reconnection](./howto-develop-reliable-clients.md#reconnec
#### Disconnected
-When the server closes the connection, or when the service declines the client.
+The response when the server closes the connection or when the service declines the client connection:
```json {
azure-web-pubsub Reference Json Webpubsub Subprotocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-web-pubsub/reference-json-webpubsub-subprotocol.md
description: The reference describes Azure Web PubSub supported WebSocket subpro
- Previously updated : 11/06/2021+ Last updated : 01/09/2023
-# Azure Web PubSub supported JSON WebSocket subprotocol
-
-This document describes the subprotocol `json.webpubsub.azure.v1`.
+# Azure Web PubSub supported JSON WebSocket subprotocol
+
+The JSON WebSocket subprotocol, `json.webpubsub.azure.v1`, enables the exchange of publish/subscribe messages directly between clients. A WebSocket connection using the `json.webpubsub.azure.v1` subprotocol is called a *PubSub WebSocket client*.
-When the client is using this subprotocol, both outgoing data frame and incoming data frame are expected to be **JSON** payloads.
## Overview
-Subprotocol `json.webpubsub.azure.v1` empowers the clients to do publish/subscribe directly instead of a round trip to the upstream server. We call the WebSocket connection with `json.webpubsub.azure.v1` subprotocol a PubSub WebSocket client.
+In a simple WebSocket client, a *server* role is required to handle events from clients. A simple WebSocket connection triggers a `message` event when it sends messages and relies on the server side to process messages and do other operations.
+
+With the `json.webpubsub.azure.v1` subprotocol, you can create *PubSub WebSocket clients* that can:
+
+* join a group using [join requests](#join-groups).
+* publish messages directly to a group using [publish requests](#publish-messages).
+* route messages directly to upstream event handlers using [event requests](#send-custom-events).
+
+For example, you can create a *PubSub WebSocket client* with the following JavaScript code:
-For example, in JS, a PubSub WebSocket client can be created using:
-```js
+```javascript
// PubSub WebSocket client
var pubsub = new WebSocket('wss://test.webpubsub.azure.com/client/hubs/hub1', 'json.webpubsub.azure.v1');
```
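As a usage sketch, once the socket opens, the client can join a group and publish to it directly. The request shapes follow the join and publish sections of this reference; the group name, payload, and `ackId` values are illustrative:

```js
pubsub.onopen = function () {
  // Join a group (the connection needs the corresponding group permission).
  pubsub.send(JSON.stringify({ type: 'joinGroup', group: 'group1', ackId: 1 }));

  // Publish a JSON message directly to that group.
  pubsub.send(JSON.stringify({
    type: 'sendToGroup',
    group: 'group1',
    dataType: 'json',
    data: { greeting: 'Hello World' },
    ackId: 2
  }));
};
```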
-For a simple WebSocket client, the *server* is a MUST HAVE role to handle the events from clients. A simple WebSocket connection always triggers a `message` event when it sends messages, and always relies on the server-side to process messages and do other operations. With the help of the `json.webpubsub.azure.v1` subprotocol, an authorized client can join a group using [join requests](#join-groups) and publish messages to a group using [publish requests](#publish-messages) directly. It can also route messages to different upstream (event handlers) by customizing the *event* the message belongs using [event requests](#send-custom-events).
+
+This document describes the subprotocol `json.webpubsub.azure.v1` requests and responses. Both incoming and outgoing data frames must contain JSON payloads.
[!INCLUDE [reference-permission](includes/reference-permission.md)]
For a simple WebSocket client, the *server* is a MUST HAVE role to handle the ev
## Responses
-Messages received by the client can be several types: `ack`, `message`, and `system`:
+Message types received by the client can be:
+
+* `ack` - The response to a request containing an `ackId`.
+* `message` - Messages from the group or server.
+* `system` - Responses from the Web PubSub service to system-related client requests.
### Ack response
-If the request contains `ackId`, the service will return an ack response for this request. The client implementation should handle this ack mechanism, including waiting for the ack response for an `async` `await` operation, and having a timeout check when the ack response is not received during a certain period.
+When the client request contains `ackId`, the service will return an ack response for the request. The client should handle the ack mechanism by waiting for the ack response with an `async` `await` operation and using a timeout handler when the ack response isn't received within a certain period (see the sketch after the format below).
Format:+ ```json { "type": "ack",
Format:
} ```
-The client implementation SHOULD always check if the `success` is `true` or `false` first. Only when `success` is `false` the client reads from `error`.
+The client implementation SHOULD always check whether `success` is `true` or `false` first, and read `error` only when `success` is `false`.
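For example, a client might index pending requests by `ackId` and settle them when the matching ack arrives. The following is a minimal sketch of that pattern; the helper names and timeout value are illustrative, not part of the subprotocol:

```js
var nextAckId = 1;
var pending = new Map(); // ackId -> { resolve, reject, timer }

// Send a request and return a promise that settles when the matching ack arrives.
function sendWithAck(ws, request, timeoutMs) {
  return new Promise(function (resolve, reject) {
    var ackId = nextAckId++;
    var timer = setTimeout(function () {
      pending.delete(ackId);
      reject(new Error('Timed out waiting for ack ' + ackId));
    }, timeoutMs);
    pending.set(ackId, { resolve: resolve, reject: reject, timer: timer });
    ws.send(JSON.stringify(Object.assign({}, request, { ackId: ackId })));
  });
}

// Call this from the socket's onmessage handler for messages with type "ack".
function handleAck(ack) {
  var entry = pending.get(ack.ackId);
  if (!entry) return;
  clearTimeout(entry.timer);
  pending.delete(ack.ackId);
  if (ack.success) {
    entry.resolve(ack);
  } else {
    entry.reject(new Error(ack.error && ack.error.name));
  }
}

// Usage, for example: await sendWithAck(pubsub, { type: 'joinGroup', group: 'group1' }, 5000);
```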
### Message response
-Clients can receive messages published from one group the client joined, or from the server management role that the server sends messages to the specific client or the specific user.
+Clients can receive messages published from a group the client has joined or from the server, which, operating in a server management role, sends messages to specific clients or users.
1. When the message is from a group
Clients can receive messages published from one group the client joined, or from
} ```
-1. When The message is from the server.
+1. When the message is from the server.
```json {
Clients can receive messages published from one group the client joined, or from
```

#### Case 1: Sending data `Hello World` to the connection through REST API with `Content-Type`=`text/plain`
-* What a simple WebSocket client receives is a text WebSocket frame with data: `Hello World`;
-* What a PubSub WebSocket client receives is as follows:
+
+* A simple WebSocket client receives a text WebSocket frame with data: `Hello World`;
+* A PubSub WebSocket client receives:
+ ```json { "type": "message",
Clients can receive messages published from one group the client joined, or from
```

#### Case 2: Sending data `{ "Hello" : "World"}` to the connection through REST API with `Content-Type`=`application/json`
-* What a simple WebSocket client receives is a text WebSocket frame with stringified data: `{ "Hello" : "World"}`;
-* What a PubSub WebSocket client receives is as follows:
+
+* A simple WebSocket client receives a text WebSocket frame with stringified data: `{ "Hello" : "World"}`.
+* A PubSub WebSocket client receives:
+ ```json { "type": "message",
Clients can receive messages published from one group the client joined, or from
} ```
-If the REST API is sending a string `Hello World` using `application/json` content type, what the simple WebSocket client receives is a JSON string, which is `"Hello World"` that wraps the string with `"`.
+If the REST API is sending a string `Hello World` using `application/json` content type, the simple WebSocket client receives a JSON string, which is `"Hello World"` wrapped with double quotes (`"`).
#### Case 3: Sending binary data to the connection through REST API with `Content-Type`=`application/octet-stream`
-* What a simple WebSocket client receives is a binary WebSocket frame with the binary data.
-* What a PubSub WebSocket client receives is as follows:
+
+* A simple WebSocket client receives a binary WebSocket frame with the binary data.
+* A PubSub WebSocket client receives:
+ ```json { "type": "message",
If the REST API is sending a string `Hello World` using `application/json` conte
### System response
-The Web PubSub service can also send system-related responses to the client.
+The Web PubSub service sends system-related responses to client requests.
#### Connected
-When the connection connects to service.
+The response to the client connect request:
```json {
When the connection connects to service.
#### Disconnected
-When the server closes the connection, or when the service declines the client.
+The response when the server closes the connection, or when the service declines the client connection.
```json {
backup Backup Azure Backup Server Vmware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-backup-server-vmware.md
Title: Back up VMware VMs with Azure Backup Server description: In this article, learn how to use Azure Backup Server to back up VMware VMs running on a VMware vCenter/ESXi server.- Previously updated : 07/27/2021+ Last updated : 01/18/2023++++ # Back up VMware VMs with Azure Backup Server
-This article explains how to back up VMware VMs running on VMware ESXi hosts/vCenter Server to Azure using Azure Backup Server (MABS).
+This article describes how to back up VMware VMs running on VMware ESXi hosts/vCenter Server to Azure using Azure Backup Server (MABS).
>[!Note] >With MABS v3 Update Rollup 2 release, you can now back up VMware 7.0 VMs as well.
-This article explains how to:
+## VMware VM protection workflow
-- Set up a secure channel so that Azure Backup Server can communicate with VMware servers over HTTPS.
-- Set up a VMware account that Azure Backup Server uses to access the VMware server.
-- Add the account credentials to Azure Backup.
-- Add the vCenter or ESXi server to Azure Backup Server.
-- Set up a protection group that contains the VMware VMs you want to back up, specify backup settings, and schedule the backup.
+To protect a VMware VM by using Azure Backup, you need to:
-## Supported VMware features
+1. Set up a secure channel so that Azure Backup Server can communicate with VMware servers over HTTPS.
+1. Set up a VMware account that Azure Backup Server uses to access the VMware server.
+1. Add the account credentials to Azure Backup.
+1. Add the vCenter or ESXi server to Azure Backup Server.
+1. Set up a protection group that contains the VMware VMs you want to back up, specify backup settings, and schedule the backup.
+
+## Support matrix
+
+This section describes the supported scenarios for protecting VMware VMs.
+
+### Supported VMware features
MABS provides the following features when backing up VMware virtual machines:
-- Agentless backup: MABS doesn't require an agent to be installed on the vCenter or ESXi server, to back up the virtual machine. Instead, just provide the IP address or fully qualified domain name (FQDN), and login credentials used to authenticate the VMware server with MABS.
+- Agentless backup: MABS doesn't require an agent to be installed on the vCenter or ESXi server to back up the virtual machine. Instead, just provide the IP address or fully qualified domain name (FQDN) and the sign-in credentials used to authenticate the VMware server with MABS.
- Cloud Integrated Backup: MABS protects workloads to disk and cloud. MABS's backup and recovery workflow helps you manage long-term retention and offsite backup.
- Detect and protect VMs managed by vCenter: MABS detects and protects VMs deployed on a VMware server (vCenter or ESXi server). As your deployment size grows, use vCenter to manage your VMware environment. MABS also detects VMs managed by vCenter, allowing you to protect large deployments.
-
+- Folder level auto protection: vCenter lets you organize your VMs in VM folders. MABS detects these folders and lets you protect VMs at the folder level and includes all subfolders. When you're protecting folders, MABS not only protects the VMs in that folder, but also protects VMs added later. MABS detects new VMs on a daily basis and protects them automatically. As you organize your VMs in recursive folders, MABS automatically detects and protects the new VMs deployed in the recursive folders.
- MABS protects VMs stored on a local disk, network file system (NFS), or cluster storage.
- MABS protects VMs migrated for load balancing: As VMs are migrated for load balancing, MABS automatically detects and continues VM protection.
- MABS can recover files/folders from a Windows VM without recovering the entire VM, which helps recover necessary files faster.
-## Support matrix
+### Supported MABS versions
| MABS versions | Supported VMware VM versions for backup |
| --- | --- |
By default, Azure Backup Server communicates with VMware servers over HTTPS. To
Set up a secure channel as follows:
-1. In the browser on Azure Backup Server, enter the vSphere Web Client URL. If the login page doesn't appear, verify the connection and browser proxy settings.
+1. In the browser on Azure Backup Server, enter the vSphere Web Client URL. If the sign-in page doesn't appear, verify the connection and browser proxy settings.
- ![vSphere Web Client](./media/backup-azure-backup-server-vmware/vsphere-web-client.png)
+ ![Screenshot showing the vSphere Web Client.](./media/backup-azure-backup-server-vmware/vsphere-web-client.png)
-2. On the vSphere Web Client login page, select **Download trusted root CA certificates**.
+2. On the vSphere Web Client sign-in page, select **Download trusted root CA certificates**.
- ![Download trusted root CA certificate](./media/backup-azure-backup-server-vmware/vmware-download-ca-cert-prompt.png)
+ ![Screenshot shows how to download the trusted root CA certificate.](./media/backup-azure-backup-server-vmware/vmware-download-ca-cert-prompt.png)
3. A file named **download** is downloaded. Depending on your browser, you receive a message that asks whether to open or save the file.
- ![Download CA certificate](./media/backup-azure-backup-server-vmware/download-certs.png)
+ ![Screenshot shows how to download CA certificate.](./media/backup-azure-backup-server-vmware/download-certs.png)
4. Save the file on the Azure Backup Server machine with a .zip extension.
5. Right-click **download.zip** > **Extract All**. The .zip file extracts its contents to the **certs** folder, which contains:
   - The root certificate file with an extension that begins with a numbered sequence like .0 and .1.
- - The CRL file has an extension that begins with a sequence like .r0 or .r1. The CRL file is associated with a certificate.
+ - The CRL file has an extension that begins with a sequence like `.r0` or `.r1`. The CRL file is associated with a certificate.
- ![Downloaded certificates](./media/backup-azure-backup-server-vmware/extracted-files-in-certs-folder.png)
+ ![Screenshot shows how to extract downloaded certificates.](./media/backup-azure-backup-server-vmware/extracted-files-in-certs-folder.png)
6. In the **certs** folder, right-click the root certificate file > **Rename**.
- ![Rename root certificate](./media/backup-azure-backup-server-vmware/rename-cert.png)
+ ![Screenshot shows how to rename the root certificate.](./media/backup-azure-backup-server-vmware/rename-cert.png)
7. Change the root certificate's extension to .crt, and confirm. The file icon changes to one that represents a root certificate.
Set up a secure channel as follows:
9. In **Certificate Import Wizard**, select **Local Machine** as the destination for the certificate, and then select **Next**. Confirm if you're asked if you want to allow changes to the computer.
- ![Wizard Welcome](./media/backup-azure-backup-server-vmware/certificate-import-wizard1.png)
+ ![Screenshot shows the Certificate Import Wizard.](./media/backup-azure-backup-server-vmware/certificate-import-wizard1.png)
10. On the **Certificate Store** page, select **Place all certificates in the following store**, and then select **Browse** to choose the certificate store.
- ![Certificate storage](./media/backup-azure-backup-server-vmware/cert-import-wizard-local-store.png)
+ ![Screenshot shows how to choose the certificate storage.](./media/backup-azure-backup-server-vmware/cert-import-wizard-local-store.png)
-11. In **Select Certificate Store**, select **Trusted Root Certification Authorities** as the destination folder for the certificates, and then select **OK**.
+11. On **Select Certificate Store**, select **Trusted Root Certification Authorities** as the destination folder for the certificates, and then select **OK**.
- ![Certificate destination folder](./media/backup-azure-backup-server-vmware/certificate-store-selected.png)
+ ![Screenshot shows how to select the certificate destination folder.](./media/backup-azure-backup-server-vmware/certificate-store-selected.png)
-12. In **Completing the Certificate Import Wizard**, verify the folder, and then select **Finish**.
+12. On **Completing the Certificate Import Wizard**, verify the folder, and then select **Finish**.
- ![Verify certificate is in the proper folder](./media/backup-azure-backup-server-vmware/cert-wizard-final-screen.png)
+ ![Screenshot shows how to verify if the certificate is in the proper folder.](./media/backup-azure-backup-server-vmware/cert-wizard-final-screen.png)
13. After the certificate import is confirmed, sign in to the vCenter Server to confirm that your connection is secure.
If you have secure boundaries within your organization, and don't want to use th
The Azure Backup Server needs a user account with permissions to access the vCenter Server/ESXi host. Create a VMware role with specific privileges, and then associate a user account with the role.

1. Sign in to the vCenter Server (or ESXi host if you're not using vCenter Server).
-2. In the **Navigator** panel, select **Administration**.
+2. On the **Navigator** pane, select **Administration**.
- ![Administration](./media/backup-azure-backup-server-vmware/vmware-navigator-panel.png)
+ ![Screenshot shows how to select Administration.](./media/backup-azure-backup-server-vmware/vmware-navigator-panel.png)
-3. In **Administration** > **Roles**, select the add role icon (the + symbol).
+3. On **Administration** > **Roles**, select the add role icon (the + symbol).
- ![Add role](./media/backup-azure-backup-server-vmware/vmware-define-new-role.png)
+ ![Screenshot shows how to add roles.](./media/backup-azure-backup-server-vmware/vmware-define-new-role.png)
-4. In **Create Role** > **Role name**, enter *BackupAdminRole*. The role name can be whatever you like, but it should be recognizable for the role's purpose.
+4. On **Create Role** > **Role name**, enter *BackupAdminRole*. The role name can be whatever you like, but it should be recognizable for the role's purpose.
5. Select the privileges as summarized in the table below, and then select **OK**. The new role appears on the list in the **Roles** panel.
   - Select the icon next to the parent label to expand the parent and view the child privileges.
   - To select the VirtualMachine privileges, you need to go several levels into the parent-child hierarchy.
   - You don't need to select all child privileges within a parent privilege.
- ![Parent child privilege hierarchy](./media/backup-azure-backup-server-vmware/cert-add-privilege-expand.png)
+ ![Screenshot shows how to select the parent and child privilege hierarchy.](./media/backup-azure-backup-server-vmware/cert-add-privilege-expand.png)
### Role permissions
The following table captures the privileges that you need to assign to the user
| Privileges for vCenter 6.5 user account | Privileges for vCenter 6.7 (and later) user account |
|-|-|
-| Datastore cluster.Configure a datastore cluster | Datastore cluster.Configure a datastore cluster |
-| Datastore.AllocateSpace | Datastore.AllocateSpace |
-| Datastore.Browse datastore | Datastore.Browse datastore |
-| Datastore.Low-level file operations | Datastore.Low-level file operations |
-| Global.Disable methods | Global.Disable methods |
-| Global.Enable methods | Global.Enable methods |
-| Global.Licenses | Global.Licenses |
-| Global.Log event | Global.Log event |
-| Global.Manage custom attributes | Global.Manage custom attributes |
-| Global.Set custom attribute | Global.Set custom attribute |
-| Host.Local operations.Create virtual machine | Host.Local operations.Create virtual machine |
-| Network.Assign network | Network.Assign network |
-| Resource. Assign virtual machine to resource pool | Resource. Assign virtual machine to resource pool |
-| vApp.Add virtual machine | vApp.Add virtual machine |
-| vApp.Assign resource pool | vApp.Assign resource pool |
-| vApp.Unregister | vApp.Unregister |
-| VirtualMachine.Configuration. Add Or Remove Device | VirtualMachine.Configuration. Add Or Remove Device |
-| Virtual machine.Configuration.Disk lease | Virtual machine.Configuration.Acquire disk lease |
-| Virtual machine.Configuration.Add new disk | Virtual machine.Configuration.Add new disk |
-| Virtual machine.Configuration.Advanced | Virtual machine.Configuration.Advanced configuration |
-| Virtual machine.Configuration.Disk change tracking | Virtual machine.Configuration.Toggle disk change tracking |
-| Virtual machine.Configuration.Host USB device | Virtual machine.Configuration.Configure Host USB device |
-| Virtual machine.Configuration.Extend virtual disk | Virtual machine.Configuration.Extend virtual disk |
-| Virtual machine.Configuration.Query unowned files | Virtual machine.Configuration.Query unowned files |
-| Virtual machine.Configuration.Swapfile placement | Virtual machine.Configuration.Change Swapfile placement |
-| Virtual machine.Guest Operations.Guest Operation Program Execution | Virtual machine.Guest Operations.Guest Operation Program Execution |
-| Virtual machine.Guest Operations.Guest Operation Modifications | Virtual machine.Guest Operations.Guest Operation Modifications |
-| Virtual machine.Guest Operations.Guest Operation Queries | Virtual machine.Guest Operations.Guest Operation Queries |
-| Virtual machine .Interaction .Device connection | Virtual machine .Interaction .Device connection |
-| Virtual machine .Interaction .Guest operating system management by VIX API | Virtual machine .Interaction .Guest operating system management by VIX API |
-| Virtual machine .Interaction .Power Off | Virtual machine .Interaction .Power Off |
-| Virtual machine .Inventory.Create new | Virtual machine .Inventory.Create new |
-| Virtual machine .Inventory.Remove | Virtual machine .Inventory.Remove |
-| Virtual machine .Inventory.Register | Virtual machine .Inventory.Register |
-| Virtual machine .Provisioning.Allow disk access | Virtual machine .Provisioning.Allow disk access |
-| Virtual machine .Provisioning.Allow file access | Virtual machine .Provisioning.Allow file access |
-| Virtual machine .Provisioning.Allow read-only disk access | Virtual machine .Provisioning.Allow read-only disk access |
-| Virtual machine .Provisioning.Allow virtual machine download          | Virtual machine .Provisioning.Allow virtual machine download          |
-| Virtual machine .Snapshot management. Create snapshot | Virtual machine .Snapshot management. Create snapshot |
-| Virtual machine .Snapshot management.Remove Snapshot | Virtual machine .Snapshot management.Remove Snapshot |
-| Virtual machine .Snapshot management.Revert to snapshot | Virtual machine .Snapshot management.Revert to snapshot |
+| `Datastore cluster.Configure a datastore cluster` | `Datastore cluster.Configure a datastore cluster` |
+| `Datastore.AllocateSpace` | `Datastore.AllocateSpace` |
+| `Datastore.Browse datastore` | `Datastore.Browse datastore` |
+| `Datastore.Low-level file operations` | `Datastore.Low-level file operations` |
+| `Global.Disable methods` | `Global.Disable methods` |
+| `Global.Enable methods` | `Global.Enable methods` |
+| `Global.Licenses` | `Global.Licenses` |
+| `Global.Log event` | `Global.Log event` |
+| `Global.Manage custom attributes` | `Global.Manage custom attributes` |
+| `Global.Set custom attribute` | `Global.Set custom attribute` |
+| `Host.Local operations.Create virtual machine` | `Host.Local operations.Create virtual machine` |
+| `Network.Assign network` | `Network.Assign network` |
+| `Resource. Assign virtual machine to resource pool` | `Resource. Assign virtual machine to resource pool` |
+| `vApp.Add virtual machine` | `vApp.Add virtual machine` |
+| `vApp.Assign resource pool` | `vApp.Assign resource pool` |
+| `vApp.Unregister` | `vApp.Unregister` |
+| `VirtualMachine.Configuration. Add Or Remove Device` | `VirtualMachine.Configuration. Add Or Remove Device` |
+| `Virtual machine.Configuration.Disk lease` | `Virtual machine.Configuration.Acquire disk lease` |
+| `Virtual machine.Configuration.Add new disk` | `Virtual machine.Configuration.Add new disk` |
+| `Virtual machine.Configuration.Advanced` | `Virtual machine.Configuration.Advanced configuration` |
+| `Virtual machine.Configuration.Disk change tracking` | `Virtual machine.Configuration.Toggle disk change tracking` |
+| `Virtual machine.Configuration.Host USB device` | `Virtual machine.Configuration.Configure Host USB device` |
+| `Virtual machine.Configuration.Extend virtual disk` | `Virtual machine.Configuration.Extend virtual disk` |
+| `Virtual machine.Configuration.Query unowned files` | `Virtual machine.Configuration.Query unowned files` |
+| `Virtual machine.Configuration.Swapfile placement` | `Virtual machine.Configuration.Change Swapfile placement` |
+| `Virtual machine.Guest Operations.Guest Operation Program Execution` | `Virtual machine.Guest Operations.Guest Operation Program Execution` |
+| `Virtual machine.Guest Operations.Guest Operation Modifications` | `Virtual machine.Guest Operations.Guest Operation Modifications` |
+| `Virtual machine.Guest Operations.Guest Operation Queries` | `Virtual machine.Guest Operations.Guest Operation Queries` |
+| `Virtual machine.Interaction.Device connection` | `Virtual machine.Interaction.Device connection` |
+| `Virtual machine.Interaction.Guest operating system management by VIX API` | `Virtual machine.Interaction.Guest operating system management by VIX API` |
+| `Virtual machine.Interaction.Power Off` | `Virtual machine.Interaction.Power Off` |
+| `Virtual machine.Inventory.Create new` | `Virtual machine.Inventory.Create new` |
+| `Virtual machine.Inventory.Remove` | `Virtual machine.Inventory.Remove` |
+| `Virtual machine.Inventory.Register` | `Virtual machine.Inventory.Register` |
+| `Virtual machine.Provisioning.Allow disk access` | `Virtual machine.Provisioning.Allow disk access` |
+| `Virtual machine.Provisioning.Allow file access` | `Virtual machine.Provisioning.Allow file access` |
+| `Virtual machine.Provisioning.Allow read-only disk access` | `Virtual machine.Provisioning.Allow read-only disk access` |
+| `Virtual machine.Provisioning.Allow virtual machine download` | `Virtual machine.Provisioning.Allow virtual machine download` |
+| `Virtual machine.Snapshot management.Create snapshot` | `Virtual machine.Snapshot management.Create snapshot` |
+| `Virtual machine.Snapshot management.Remove Snapshot` | `Virtual machine.Snapshot management.Remove Snapshot` |
+| `Virtual machine.Snapshot management.Revert to snapshot` | `Virtual machine.Snapshot management.Revert to snapshot` |
> [!NOTE]
> The following table lists the privileges for vCenter 6.0 and vCenter 5.5 user accounts.

| Privileges for vCenter 6.0 user account | Privileges for vCenter 5.5 user account |
| --- | --- |
-| Datastore.AllocateSpace | Network.Assign |
-| Global.Manage custom attributes | Datastore.AllocateSpace |
-| Global.Set custom attribute | VirtualMachine.Config.ChangeTracking |
-| Host.Local operations.Create virtual machine | VirtualMachine.State.RemoveSnapshot |
-| Network. Assign network | VirtualMachine.State.CreateSnapshot |
-| Resource. Assign virtual machine to resource pool | VirtualMachine.Provisioning.DiskRandomRead |
-| Virtual machine.Configuration.Add new disk | VirtualMachine.Interact.PowerOff |
-| Virtual machine.Configuration.Advanced | VirtualMachine.Inventory.Create |
-| Virtual machine.Configuration.Disk change tracking | VirtualMachine.Config.AddNewDisk |
-| Virtual machine.Configuration.Host USB device | VirtualMachine.Config.HostUSBDevice |
-| Virtual machine.Configuration.Query unowned files | VirtualMachine.Config.AdvancedConfig |
-| Virtual machine.Configuration.Swapfile placement | VirtualMachine.Config.SwapPlacement |
-| Virtual machine.Interaction.Power Off | Global.ManageCustomFields |
-| Virtual machine.Inventory. Create new | |
-| Virtual machine.Provisioning.Allow disk access | |
-| Virtual machine.Provisioning. Allow read-only disk access | |
-| Virtual machine.Snapshot management.Create snapshot | |
-| Virtual machine.Snapshot management.Remove Snapshot | |
+| `Datastore.AllocateSpace` | `Network.Assign` |
+| `Global.Manage custom attributes` | `Datastore.AllocateSpace` |
+| `Global.Set custom attribute` | `VirtualMachine.Config.ChangeTracking` |
+| `Host.Local operations.Create virtual machine` | `VirtualMachine.State.RemoveSnapshot` |
+| `Network.Assign network` | `VirtualMachine.State.CreateSnapshot` |
+| `Resource.Assign virtual machine to resource pool` | `VirtualMachine.Provisioning.DiskRandomRead` |
+| `Virtual machine.Configuration.Add new disk` | `VirtualMachine.Interact.PowerOff` |
+| `Virtual machine.Configuration.Advanced` | `VirtualMachine.Inventory.Create` |
+| `Virtual machine.Configuration.Disk change tracking` | `VirtualMachine.Config.AddNewDisk` |
+| `Virtual machine.Configuration.Host USB device` | `VirtualMachine.Config.HostUSBDevice` |
+| `Virtual machine.Configuration.Query unowned files` | `VirtualMachine.Config.AdvancedConfig` |
+| `Virtual machine.Configuration.Swapfile placement` | `VirtualMachine.Config.SwapPlacement` |
+| `Virtual machine.Interaction.Power Off` | `Global.ManageCustomFields` |
+| `Virtual machine.Inventory.Create new` | |
+| `Virtual machine.Provisioning.Allow disk access` | |
+| `Virtual machine.Provisioning.Allow read-only disk access` | |
+| `Virtual machine.Snapshot management.Create snapshot` | |
+| `Virtual machine.Snapshot management.Remove Snapshot` | |
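If you'd rather script the role creation than click through the vSphere client, VMware PowerCLI can build the role from privilege IDs. The following is a minimal sketch, assuming PowerCLI is installed and can reach your vCenter; the privilege IDs shown are only a representative subset of the table above, so extend the list to match the full privilege set for your vCenter version.

```powershell
# Minimal sketch: create a BackupAdminRole from privilege IDs with VMware PowerCLI.
# The ID list below is illustrative, not exhaustive -- map every privilege in the
# table above to its ID before using this in a real environment.
Import-Module VMware.PowerCLI

Connect-VIServer -Server 'vcenter.contoso.com' -Credential (Get-Credential)

$privilegeIds = @(
    'Datastore.AllocateSpace'
    'Datastore.Browse'
    'Global.ManageCustomFields'
    'Global.SetCustomField'
    'Network.Assign'
    'VirtualMachine.Config.AddNewDisk'
    'VirtualMachine.State.CreateSnapshot'
    'VirtualMachine.State.RemoveSnapshot'
)

# Resolve the privilege objects, then create the role from them.
$privileges = Get-VIPrivilege -Id $privilegeIds
New-VIRole -Name 'BackupAdminRole' -Privilege $privileges
```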
## Create a VMware account
-1. In vCenter Server **Navigator** panel, select **Users and Groups**. If you don't use vCenter Server, create the account on the appropriate ESXi host.
+To create a VMware account, follow these steps:
+
+1. On the vCenter Server **Navigator** pane, select **Users and Groups**. If you don't use vCenter Server, create the account on the appropriate ESXi host.
- ![Users and Groups option](./media/backup-azure-backup-server-vmware/vmware-userandgroup-panel.png)
+ ![Screenshot shows how to select the Users and Groups option.](./media/backup-azure-backup-server-vmware/vmware-userandgroup-panel.png)
The **vCenter Users and Groups** pane appears.
-2. In the **vCenter Users and Groups** panel, select the **Users** tab, and then select the add users icon (the + symbol).
+2. On the **vCenter Users and Groups** pane, select the **Users** tab, and then select the add users icon (the + symbol).
+
+ ![Screenshot shows the vCenter Users and Groups pane.](./media/backup-azure-backup-server-vmware/usersandgroups.png)
- ![vCenter Users and Groups panel](./media/backup-azure-backup-server-vmware/usersandgroups.png)
+3. On the **New User** dialog box, add the user information > **OK**. In this procedure, the username is BackupAdmin.
-3. In **New User** dialog box, add the user information > **OK**. In this procedure, the username is BackupAdmin.
+ ![Screenshot shows the New User dialog box.](./media/backup-azure-backup-server-vmware/vmware-new-user-account.png)
- ![New User dialog box](./media/backup-azure-backup-server-vmware/vmware-new-user-account.png)
+4. To associate the user account with the role, in the **Navigator** pane, select **Global Permissions**.
-4. To associate the user account with the role, in the **Navigator** panel, select **Global Permissions**. In the **Global Permissions** panel, select the **Manage** tab, and then select the add icon (the + symbol).
+ On the **Global Permissions** pane, select the **Manage** tab, and then select the add icon (the + symbol).
- ![Global Permissions panel](./media/backup-azure-backup-server-vmware/vmware-add-new-perms.png)
+ ![Screenshot shows the Global Permissions pane.](./media/backup-azure-backup-server-vmware/vmware-add-new-perms.png)
-5. In **Global Permission Root - Add Permission**, select **Add** to choose the user or group.
+5. On **Global Permission Root - Add Permission**, select **Add** to choose the user or group.
- ![Choose user or group](./media/backup-azure-backup-server-vmware/vmware-add-new-global-perm.png)
+ ![Screenshot shows how to choose user or group.](./media/backup-azure-backup-server-vmware/vmware-add-new-global-perm.png)
-6. In **Select Users/Groups**, choose **BackupAdmin** > **Add**. In **Users**, the *domain\username* format is used for the user account. If you want to use a different domain, choose it from the **Domain** list. Select **OK** to add the selected users to the **Add Permission** dialog box.
+6. On **Select Users/Groups**, choose **BackupAdmin** > **Add**. In **Users**, the *domain\username* format is used for the user account. If you want to use a different domain, choose it from the **Domain** list. Select **OK** to add the selected users to the **Add Permission** dialog box.
- ![Add BackupAdmin user](./media/backup-azure-backup-server-vmware/vmware-assign-account-to-role.png)
+ ![Screenshot shows how to add the BackupAdmin user.](./media/backup-azure-backup-server-vmware/vmware-assign-account-to-role.png)
-7. In **Assigned Role**, from the drop-down list, select **BackupAdminRole** > **OK**.
+7. On **Assigned Role**, from the drop-down list, select **BackupAdminRole** > **OK**.
- ![Assign user to role](./media/backup-azure-backup-server-vmware/vmware-choose-role.png)
+ ![Screenshot shows how to assign user to role.](./media/backup-azure-backup-server-vmware/vmware-choose-role.png)
-On the **Manage** tab in the **Global Permissions** panel, the new user account and the associated role appear in the list.
+On the **Manage** tab on the **Global Permissions** pane, the new user account and the associated role appear in the list.
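The same assignment can be scripted. As a sketch with VMware PowerCLI (assuming you're already connected and the role and account above exist), a permission at the inventory root with propagation enabled is a close analogue of the global permission set in the steps above:

```powershell
# Sketch: grant BackupAdmin the BackupAdminRole at the vCenter inventory root.
$rootFolder = Get-Folder -NoRecursion              # top-level inventory object
$role       = Get-VIRole -Name 'BackupAdminRole'

New-VIPermission -Entity $rootFolder -Principal 'contoso\BackupAdmin' -Role $role -Propagate:$true
```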
## Add the account on Azure Backup Server
-1. Open Azure Backup Server. If you can't find the icon on the desktop, open Microsoft Azure Backup from the apps list.
+To add the account on Azure Backup Server, follow these steps:
- ![Azure Backup Server icon](./media/backup-azure-backup-server-vmware/mabs-icon.png)
+1. Open Azure Backup Server.
-2. In the Azure Backup Server console, select **Management** > **Production Servers** > **Manage VMware**.
+ If you can't find the icon on the desktop, open Microsoft Azure Backup from the apps list.
- ![Azure Backup Server console](./media/backup-azure-backup-server-vmware/add-vmware-credentials.png)
+ ![Screenshot shows the Azure Backup Server icon.](./media/backup-azure-backup-server-vmware/mabs-icon.png)
-3. In the **Manage Credentials** dialog box, select **Add**.
+2. On the Azure Backup Server console, select **Management** > **Production Servers** > **Manage VMware**.
- ![Manage Credentials dialog box](./media/backup-azure-backup-server-vmware/mabs-manage-credentials-dialog.png)
+ ![Screenshot shows the Azure Backup Server console.](./media/backup-azure-backup-server-vmware/add-vmware-credentials.png)
-4. In **Add Credential**, enter a name and a description for the new credential, and specify the username and password you defined on the VMware server. The name, *Contoso Vcenter credential* is used to identify the credential in this procedure. If the VMware server and Azure Backup Server aren't in the same domain, specify the domain in the user name.
+3. On the **Manage Credentials** dialog box, select **Add**.
- ![Azure Backup Server Add Credential dialog box](./media/backup-azure-backup-server-vmware/mabs-add-credential-dialog2.png)
+ ![Screenshot shows the Manage Credentials dialog box.](./media/backup-azure-backup-server-vmware/mabs-manage-credentials-dialog.png)
+
+4. On **Add Credential**, enter a name and a description for the new credential, and specify the username and password you defined on the VMware server. The name *Contoso Vcenter credential* is used to identify the credential in this procedure. If the VMware server and Azure Backup Server aren't in the same domain, specify the domain in the user name.
+
+ ![Screenshot shows the Azure Backup Server Add Credential dialog box.](./media/backup-azure-backup-server-vmware/mabs-add-credential-dialog2.png)
5. Select **Add** to add the new credential.
- ![Add new credentials](./media/backup-azure-backup-server-vmware/new-list-of-mabs-creds.png)
+ ![Screenshot shows how to add new credentials.](./media/backup-azure-backup-server-vmware/new-list-of-mabs-creds.png)
## Add the vCenter Server
-Add the vCenter Server to Azure Backup Server.
+To add the vCenter Server to Azure Backup Server, follow these steps:
-1. In the Azure Backup Server console, select **Management** > **Production Servers** > **Add**.
+1. On the Azure Backup Server console, select **Management** > **Production Servers** > **Add**.
- ![Open Production Server Addition Wizard](./media/backup-azure-backup-server-vmware/add-vcenter-to-mabs.png)
+ ![Screenshot shows how to open the Production Server Addition Wizard.](./media/backup-azure-backup-server-vmware/add-vcenter-to-mabs.png)
-2. In **Production Server Addition Wizard** > **Select Production Server type** page, select **VMware Servers**, and then select **Next**.
+2. On the **Production Server Addition Wizard** > **Select Production Server type** page, select **VMware Servers**, and then select **Next**.
- ![Production Server Addition Wizard](./media/backup-azure-backup-server-vmware/production-server-add-wizard.png)
+ ![Screenshot shows the Production Server Addition Wizard.](./media/backup-azure-backup-server-vmware/production-server-add-wizard.png)
-3. In **Select Computers** **Server Name/IP Address**, specify the FQDN or IP address of the VMware server. If all the ESXi servers are managed by the same vCenter, specify the vCenter name. Otherwise, add the ESXi host.
+3. On **Select Computers**, under **Server Name/IP Address**, specify the FQDN or IP address of the VMware server. If all the ESXi servers are managed by the same vCenter, specify the vCenter name. Otherwise, add the ESXi host.
- ![Specify VMware server](./media/backup-azure-backup-server-vmware/add-vmware-server-provide-server-name.png)
+ ![Screenshot shows how to specify the VMware server.](./media/backup-azure-backup-server-vmware/add-vmware-server-provide-server-name.png)
-4. In **SSL Port**, enter the port that's used to communicate with the VMware server. 443 is the default port, but you can change it if your VMware server listens on a different port.
+4. In **SSL Port**, enter the port that's used to communicate with the VMware server. 443 is the default port, but you can change it if your VMware server listens on a different port.
-5. In **Specify Credential**, select the credential that you created earlier.
+5. On **Specify Credential**, select the credential that you created earlier.
- ![Specify credential](./media/backup-azure-backup-server-vmware/identify-creds.png)
+ ![Screenshot shows how to specify the credential.](./media/backup-azure-backup-server-vmware/identify-creds.png)
6. Select **Add** to add the VMware server to the servers list. Then select **Next**.
- ![Add VMware server and credential](./media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png)
+ ![Screenshot shows how to add the VMware server and credential.](./media/backup-azure-backup-server-vmware/add-vmware-server-credentials.png)
-7. In the **Summary** page, select **Add** to add the VMware server to Azure Backup Server. The new server is added immediately, no agent is needed on the VMware server.
+7. On the **Summary** page, select **Add** to add the VMware server to Azure Backup Server. The new server is added immediately; no agent is needed on the VMware server.
- ![Add VMware server to Azure Backup Server](./media/backup-azure-backup-server-vmware/tasks-screen.png)
+ ![Screenshot shows how to add the VMware server to Azure Backup Server.](./media/backup-azure-backup-server-vmware/tasks-screen.png)
8. Verify settings on the **Finish** page.
- ![Finish page](./media/backup-azure-backup-server-vmware/summary-screen.png)
+ ![Screenshot shows the finish page.](./media/backup-azure-backup-server-vmware/summary-screen.png)
If you have multiple ESXi hosts that aren't managed by vCenter Server, or you have multiple instances of vCenter Server, you need to rerun the wizard to add the servers.

## Configure a protection group
-Add VMware VMs for backup. Protection groups gather multiple VMs and apply the same data retention and backup settings to all VMs in the group.
+Protection groups gather multiple VMs and apply the same data retention and backup settings to all VMs in the group. To add VMware VMs for backup, follow these steps:
-1. In the Azure Backup Server console, select **Protection**, > **New**.
+1. On the Azure Backup Server console, select **Protection** > **New**.
- ![Open the Create New Protection Group wizard](./media/backup-azure-backup-server-vmware/open-protection-wizard.png)
+ ![Screenshot shows how to open the Create New Protection Group wizard.](./media/backup-azure-backup-server-vmware/open-protection-wizard.png)
-1. In the **Create New Protection Group** wizard welcome page, select **Next**.
+1. On the **Create New Protection Group** wizard welcome page, select **Next**.
- ![Create New Protection Group wizard dialog box](./media/backup-azure-backup-server-vmware/protection-wizard.png)
+ ![Screenshot shows the Create New Protection Group wizard dialog box.](./media/backup-azure-backup-server-vmware/protection-wizard.png)
1. On the **Select Protection group type** page, select **Servers** and then select **Next**. The **Select group members** page appears.
-1. In **Select group members**, select the VMs (or VM folders) that you want to back up. Then select **Next**.
+1. On **Select group members**, select the VMs (or VM folders) that you want to back up. Then select **Next**.
- When you select a folder, the VMs or folders inside that folder are also selected for backup. You can uncheck folders or VMs you don't want to back up.
- If a VM or folder is already being backed up, you can't select it. This ensures that duplicate recovery points aren't created for a VM.
- ![Select group members](./media/backup-azure-backup-server-vmware/server-add-selected-members.png)
+ ![Screenshot shows how to select group members.](./media/backup-azure-backup-server-vmware/server-add-selected-members.png)
-1. In **Select Data Protection Method** page, enter a name for the protection group, and protection settings. To back up to Azure, set short-term protection to **Disk** and enable online protection. Then select **Next**.
+1. On the **Select Data Protection Method** page, enter a name for the protection group and specify the protection settings. To back up to Azure, set short-term protection to **Disk** and enable online protection. Then select **Next**.
- ![Select data protection method](./media/backup-azure-backup-server-vmware/name-protection-group.png)
+ ![Screenshot shows how to select data protection method.](./media/backup-azure-backup-server-vmware/name-protection-group.png)
-1. In **Specify Short-Term Goals**, specify how long you want to keep data backed up to disk.
- - In **Retention Range**, specify how many days disk recovery points should be kept.
- - In **Synchronization frequency**, specify how often disk recovery points are taken.
+1. On **Specify Short-Term Goals**, specify how long you want to keep data backed up to disk.
+ - In **Retention Range**, specify how many days disk recovery points should be kept.
+ - In **Synchronization frequency**, specify how often disk recovery points are taken.
- If you don't want to set a backup interval, you can check **Just before a recovery point** so that a backup runs just before each recovery point is scheduled.
- Short-term backups are full backups and not incremental.
- Select **Modify** to change the times/dates when short-term backups occur.
- ![Specify short-term goals](./media/backup-azure-backup-server-vmware/short-term-goals.png)
+ ![Screenshot shows how to specify short-term goals.](./media/backup-azure-backup-server-vmware/short-term-goals.png)
-1. In **Review Disk Allocation**, review the disk space provided for the VM backups. for the VMs.
+1. On **Review Disk Allocation**, review the disk space provided for the VM backups.
- - The recommended disk allocations are based on the retention range you specified, the type of workload, and the size of the protected data. Make any changes required, and then select **Next**.
+ - The recommended disk allocations are based on the retention range you specified, the type of workload, and the size of the protected data. Make any required changes, and then select **Next**.
- **Data size:** Size of the data in the protection group.
- **Disk space:** The recommended amount of disk space for the protection group. If you want to modify this setting, you should allocate total space that's slightly larger than the amount that you estimate each data source grows.
- **Colocate data:** If you turn on colocation, multiple data sources in the protection group can map to a single replica and recovery point volume. Colocation isn't supported for all workloads.
- **Automatically grow:** If you turn on this setting, if data in the protected group outgrows the initial allocation, Azure Backup Server tries to increase the disk size by 25 percent.
- **Storage pool details:** Shows the status of the storage pool, including total and remaining disk size.
- ![Review disk allocation](./media/backup-azure-backup-server-vmware/review-disk-allocation.png)
+ ![Screenshot shows how to review disk allocation.](./media/backup-azure-backup-server-vmware/review-disk-allocation.png)
-1. In **Choose Replica Creation Method** page, specify how you want to take the initial backup, and then select **Next**.
+1. On the **Choose Replica Creation Method** page, specify how you want to take the initial backup, and then select **Next**.
- The default is **Automatically over the network** and **Now**.
- If you use the default, we recommend that you specify an off-peak time. Choose **Later** and specify a day and time.
- For large amounts of data or less-than-optimal network conditions, consider replicating the data offline by using removable media.
- ![Choose replica creation method](./media/backup-azure-backup-server-vmware/replica-creation.png)
+ ![Screenshot shows how to choose the Replica creation method.](./media/backup-azure-backup-server-vmware/replica-creation.png)
-1. In **Consistency Check Options**, select how and when to automate the consistency checks. Then select **Next**.
+1. On **Consistency Check Options**, select how and when to automate the consistency checks. Then select **Next**.
- You can run consistency checks when replica data becomes inconsistent, or on a set schedule.
- If you don't want to configure automatic consistency checks, you can run a manual check. To do this, right-click the protection group > **Perform Consistency Check**.
-1. In **Specify Online Protection Data** page, select the VMs or VM folders that you want to back up. You can select the members individually, or select **Select All** to choose all members. Then select **Next**.
+1. On the **Specify Online Protection Data** page, select the VMs or VM folders that you want to back up. You can select the members individually, or select **Select All** to choose all members. Then select **Next**.
- ![Specify online protection data](./media/backup-azure-backup-server-vmware/select-data-to-protect.png)
+ ![Screenshot shows how to specify the online protection data.](./media/backup-azure-backup-server-vmware/select-data-to-protect.png)
1. On the **Specify Online Backup Schedule** page, specify how often you want to back up data from local storage to Azure, and then select **Next**.
   - Cloud recovery points for the data are generated according to the schedule.
   - After the recovery point is generated, it's transferred to the Recovery Services vault in Azure.
- ![Specify online backup schedule](./media/backup-azure-backup-server-vmware/online-backup-schedule.png)
+ ![Screenshot shows how to specify the online backup schedule.](./media/backup-azure-backup-server-vmware/online-backup-schedule.png)
1. On the **Specify Online Retention Policy** page, indicate how long you want to keep the recovery points that are created from the daily/weekly/monthly/yearly backups to Azure. Then select **Next**.
   - There's no time limit for how long you can keep data in Azure.
   - The only limit is that you can't have more than 9999 recovery points per protected instance. In this example, the protected instance is the VMware server.
- ![Specify online retention policy](./media/backup-azure-backup-server-vmware/retention-policy.png)
+ ![Screenshot shows how to specify the online retention policy.](./media/backup-azure-backup-server-vmware/retention-policy.png)
1. On the **Summary** page, review the settings, and then select **Create Group**.
- ![Protection group member and setting summary](./media/backup-azure-backup-server-vmware/protection-group-summary.png)
+ ![Screenshot shows the protection group member and setting summary.](./media/backup-azure-backup-server-vmware/protection-group-summary.png)
## VMware parallel backups
You can modify the number of jobs by using the registry key as shown below (not present by default; you need to add it).
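As a hypothetical sketch only (the exact key path and value name below are assumptions for illustration; confirm them against the MABS documentation before use), the value could be created from PowerShell like this:

```powershell
# Hypothetical sketch: cap the number of parallel VMware backup jobs.
# The key path and value name are assumptions -- verify them before use.
$keyPath = 'HKLM:\SOFTWARE\Microsoft\Microsoft Data Protection Manager\Configuration\MaxParallelBackupJobs\VMWare'

if (-not (Test-Path $keyPath)) {
    New-Item -Path $keyPath -Force | Out-Null   # the key isn't present by default
}
Set-ItemProperty -Path $keyPath -Name 'Value' -Value 4 -Type DWord
```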
## VMware vSphere 6.7 and 7.0
-To back up vSphere 6.7 and 7.0, do the following:
+To back up vSphere 6.7 and 7.0, follow these steps:
- Enable TLS 1.2 on the MABS Server
Windows Registry Editor Version 5.00
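As a sketch of what such a registry change typically contains, TLS 1.2 is conventionally enabled on Windows through the SCHANNEL protocol keys plus the .NET strong-crypto switch (a reboot is needed afterward); in PowerShell form:

```powershell
# Sketch: enable TLS 1.2 for SCHANNEL (client and server) and let .NET
# Framework applications, such as MABS, negotiate TLS 1.2.
$protoBase = 'HKLM:\SYSTEM\CurrentControlSet\Control\SecurityProviders\SCHANNEL\Protocols\TLS 1.2'
foreach ($side in 'Client', 'Server') {
    $path = Join-Path $protoBase $side
    New-Item -Path $path -Force | Out-Null
    Set-ItemProperty -Path $path -Name 'Enabled' -Value 1 -Type DWord
    Set-ItemProperty -Path $path -Name 'DisabledByDefault' -Value 0 -Type DWord
}

foreach ($net in 'HKLM:\SOFTWARE\Microsoft\.NETFramework\v4.0.30319',
                 'HKLM:\SOFTWARE\Wow6432Node\Microsoft\.NETFramework\v4.0.30319') {
    Set-ItemProperty -Path $net -Name 'SchUseStrongCrypto' -Value 1 -Type DWord
}
```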
## Exclude disk from VMware VM backup
+With MABS V3 UR1 (and later), you can exclude a specific disk from a VMware VM backup. The configuration script **ExcludeDisk.ps1** is located in the `C:\Program Files\Microsoft Azure Backup Server\DPM\DPM\bin` folder.
+ > [!NOTE]
+ > This feature is applicable for MABS V3 UR1 (and later).
-With MABS V3 UR1 (and later), you can exclude the specific disk from VMware VM backup. The configuration script **ExcludeDisk.ps1** is located in the `C:\Program Files\Microsoft Azure Backup Server\DPM\DPM\bin folder`.
-
-To configure the disk exclusion, follow the steps below:
+To configure the disk exclusion, follow these steps:
### Identify the VMware VM and disk details to be excluded
To configure the disk exclusion, follow the steps below:
For example, to exclude Hard Disk 2 from TestVM4, the path for Hard Disk 2 is **[datastore1] TestVM4/TestVM4\_1.vmdk**.
- ![Hard disk to be excluded](./media/backup-azure-backup-server-vmware/test-vm.png)
+ ![Screenshot shows the hard disk to be excluded.](./media/backup-azure-backup-server-vmware/test-vm.png)
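With the vmdk path in hand, the script is run from the MABS bin folder. The invocation below is a hypothetical sketch; the parameter names are assumptions for illustration, so check the script's own help before running it:

```powershell
# Hypothetical usage sketch for ExcludeDisk.ps1 -- parameter names are assumed.
Set-Location 'C:\Program Files\Microsoft Azure Backup Server\DPM\DPM\bin'

# Exclude Hard Disk 2 of TestVM4 using the vmdk path identified above.
.\ExcludeDisk.ps1 -Datasource 'TestVM4' -Add '[datastore1] TestVM4/TestVM4_1.vmdk'
```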
### Configure MABS Server
backup Backup Azure System State https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-system-state.md
Title: Back up Windows system state to Azure description: Learn how to back up the system state of Windows Server computers to Azure.- Previously updated : 05/23/2018+ Last updated : 01/20/2023++++
-# Back up Windows system state to Azure
-This article explains how to back up your Windows Server system state to Azure. It's intended to walk you through the basics.
+# Back up Windows system state to Azure
-If you want to know more about Azure Backup, read this [overview](backup-overview.md).
+This article describes how to back up your Windows Server system state to Azure. It's intended to walk you through the basics.
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) that lets you access any Azure service.
+For more information about Azure Backup, see the [overview article](backup-overview.md). If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) that lets you access any Azure service.
[!INCLUDE [How to create a Recovery Services vault](../../includes/backup-create-rs-vault.md)]

## Set storage redundancy for the vault
-When you create a Recovery Services vault, make sure storage redundancy is configured the way you want.
+When you create a Recovery Services vault, ensure that you configure the storage redundancy according to your organization's requirements.
+
+To set the storage redundancy for the vault, follow these steps:
1. From the **Recovery Services vaults** pane, select the new vault.
- ![Select the new vault from the list of Recovery Services vault](./media/backup-try-azure-backup-in-10-mins/rs-vault-list.png)
+ ![Screenshot shows how to select the new vault from the list of Recovery Services vault.](./media/backup-try-azure-backup-in-10-mins/rs-vault-list.png)
When you select the vault, the **Recovery Services vault** pane narrows, and the Settings pane (*which has the name of the vault at the top*) and the vault details pane open.
- ![View the storage configuration for new vault](./media/backup-try-azure-backup-in-10-mins/set-storage-configuration-2.png)
-2. In the new vault's Settings pane, use the vertical slide to scroll down to the Manage section, and select **Backup Infrastructure**.
- The Backup Infrastructure pane opens.
-3. In the Backup Infrastructure pane, select **Backup Configuration** to open the **Backup Configuration** pane.
+ ![Screenshot show how to view the storage configuration for new vault.](./media/backup-try-azure-backup-in-10-mins/set-storage-configuration-2.png)
+2. On the new vault's **Settings** pane, use the vertical slider to scroll down to the **Manage** section, and select **Backup Infrastructure**.
+
+3. On the **Backup Infrastructure** pane, select **Backup Configuration** to open the **Backup Configuration** pane.
+
+ ![Screenshot shows how to set the storage configuration for new vault.](./media/backup-try-azure-backup-in-10-mins/set-storage-configuration.png)
- ![Set the storage configuration for new vault](./media/backup-try-azure-backup-in-10-mins/set-storage-configuration.png)
4. Choose the appropriate storage replication option for your vault.
- ![Storage configuration choices](./media/backup-try-azure-backup-in-10-mins/choose-storage-configuration-for-vault.png)
+ ![Screenshot shows how to select the storage configuration option.](./media/backup-try-azure-backup-in-10-mins/choose-storage-configuration-for-vault.png)
By default, your vault has geo-redundant storage. If you use Azure as a primary backup storage endpoint, continue to use **Geo-redundant**. If you don't use Azure as a primary backup storage endpoint, then choose **Locally-redundant**, which reduces the Azure storage costs. Read more about [geo-redundant](../storage/common/storage-redundancy.md#geo-redundant-storage), [locally redundant](../storage/common/storage-redundancy.md#locally-redundant-storage) and [zone-redundant](../storage/common/storage-redundancy.md#zone-redundant-storage) storage options in this [Storage redundancy overview](../storage/common/storage-redundancy.md).
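The redundancy setting can also be applied from PowerShell. The following is a minimal sketch with the Az.RecoveryServices module (vault and resource group names are placeholders); note that redundancy can only be changed before any items are protected in the vault:

```powershell
# Sketch: set vault storage redundancy with Az.RecoveryServices.
Import-Module Az.RecoveryServices

$vault = Get-AzRecoveryServicesVault -ResourceGroupName 'myResourceGroup' -Name 'myVault'

# Use GeoRedundant when Azure is your primary backup endpoint; otherwise LocallyRedundant.
Set-AzRecoveryServicesBackupProperty -Vault $vault -BackupStorageRedundancy LocallyRedundant
```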
Now that you've created a vault, configure it for backing up Windows System State.
## Configure the vault
+To configure the vault, follow these steps:
+ 1. On the Recovery Services vault pane (for the vault you just created), in the Getting Started section, select **Backup**, then on the **Getting Started with Backup** pane, select **Backup goal**.
- ![Open backup settings](./media/backup-try-azure-backup-in-10-mins/open-backup-settings.png)
+ ![Screenshot shows how to open the backup settings.](./media/backup-try-azure-backup-in-10-mins/open-backup-settings.png)
The **Backup Goal** pane opens.
- ![Open backup goal pane](./media/backup-try-azure-backup-in-10-mins/backup-goal-blade.png)
+ ![Screenshot shows how to open the backup goal pane.](./media/backup-try-azure-backup-in-10-mins/backup-goal-blade.png)
2. From the **Where is your workload running?** drop-down menu, select **On-premises**.
Now that you've created a vault, configure it for backing up Windows System State.
3. From the **What do you want to back up?** menu, select **System State**, and select **OK**.
- ![Configuring files and folders](./media/backup-azure-system-state/backup-goal-system-state.png)
+ ![Screenshot shows how to configure files and folders.](./media/backup-azure-system-state/backup-goal-system-state.png)
- After selecting **OK**, a checkmark appears next to **Backup goal**, and the **Prepare infrastructure** pane opens.
+ After you select **OK**, a checkmark appears next to **Backup goal**, and the **Prepare infrastructure** pane opens.
- ![Backup goal configured, next prepare infrastructure](./media/backup-try-azure-backup-in-10-mins/backup-goal-configed.png)
+ ![Screenshot shows how to prepare infrastructure.](./media/backup-try-azure-backup-in-10-mins/backup-goal-configed.png)
4. On the **Prepare infrastructure** pane, select **Download Agent for Windows Server or Windows Client**.
- ![Prepare infrastructure](./media/backup-try-azure-backup-in-10-mins/choose-agent-for-server-client.png)
+ ![Screenshot shows how to start downloading the agent for Windows client.](./media/backup-try-azure-backup-in-10-mins/choose-agent-for-server-client.png)
If you're using Windows Server Essentials, then choose to download the agent for Windows Server Essentials. A pop-up menu prompts you to run or save MARSAgentInstaller.exe.
- ![MARSAgentInstaller dialog](./media/backup-try-azure-backup-in-10-mins/mars-installer-run-save.png)
+ ![Screenshot shows the MARSAgentInstaller dialog.](./media/backup-try-azure-backup-in-10-mins/mars-installer-run-save.png)
5. In the download pop-up menu, select **Save**. By default, the **MARSagentinstaller.exe** file is saved to your Downloads folder. When the download completes, you'll see a pop-up asking if you want to run the installer or open the folder.
- ![MARS installer is complete](./media/backup-try-azure-backup-in-10-mins/mars-installer-complete.png)
+ ![Screenshot shows that MARS installer is complete.](./media/backup-try-azure-backup-in-10-mins/mars-installer-complete.png)
You don't need to install the agent yet. You can install the agent after you've downloaded the vault credentials.

6. On the **Prepare infrastructure** pane, select **Download**.
- ![download vault credentials](./media/backup-try-azure-backup-in-10-mins/download-vault-credentials.png)
+ ![Screenshot shows how to download vault credentials.](./media/backup-try-azure-backup-in-10-mins/download-vault-credentials.png)
The vault credentials download to your **Downloads** folder. After the vault credentials finish downloading, you'll see a pop-up asking if you want to open or save the credentials. Select **Save**. If you accidentally select **Open**, let the dialog that attempts to open the vault credentials fail. You won't be able to open the vault credentials. Continue to the next step. The vault credentials are in the **Downloads** folder.
- ![vault credentials finished downloading](./media/backup-try-azure-backup-in-10-mins/vault-credentials-downloaded.png)
+ ![Screenshot shows that vault credentials downloading is finished.](./media/backup-try-azure-backup-in-10-mins/vault-credentials-downloaded.png)
+ > [!NOTE]
+ > The vault credentials must be saved only to a location that's local to the Windows Server on which you intend to use the agent.
Now that you've created a vault, configure it for backing up Windows System State.
## Install and register the agent
-> [!NOTE]
-> Enabling backup through the Azure portal isn't available. Use the Microsoft Azure Recovery Services Agent to back up Windows Server System State.
->
+To install and register the agent, follow these steps:
1. Locate and double-click the **MARSagentinstaller.exe** from the Downloads folder (or other saved location). The installer provides a series of messages as it extracts, installs, and registers the Recovery Services agent.
- ![run Recovery Services agent installer credentials](./media/backup-try-azure-backup-in-10-mins/mars-installer-registration.png)
+ ![Screenshot shows how to run Recovery Services agent installer credentials.](./media/backup-try-azure-backup-in-10-mins/mars-installer-registration.png)
2. Complete the Microsoft Azure Recovery Services Agent Setup Wizard. To complete the wizard, you need to:
Now that you've created a vault, configure it for backing up Windows System State.
The agent is now installed and your machine is registered to the vault. You're ready to configure and schedule your backup.
+> [!NOTE]
+> Enabling backup through the Azure portal isn't available. Use the Microsoft Azure Recovery Services Agent to back up Windows Server System State.
+>
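If you're scripting the setup, the MARS installer also supports a quiet mode. The following is a minimal sketch, assuming the installer is still in the default Downloads location; registering to the vault still requires the vault credentials file from the earlier step:

```powershell
# Sketch: install the MARS agent silently from the downloaded installer.
& "$env:USERPROFILE\Downloads\MARSAgentInstaller.exe" /q
```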
+
## Back up Windows Server System State

The initial backup includes two tasks:
To complete the initial backup, use the Microsoft Azure Recovery Services agent.
-### To schedule the backup job
+### Schedule the backup job
+
+To schedule the backup job, follow these steps:
1. Open the Microsoft Azure Recovery Services agent. You can find it by searching your machine for **Microsoft Azure Backup**.
- ![Launch the Azure Recovery Services agent](./media/backup-try-azure-backup-in-10-mins/snap-in-search.png)
+ ![Screenshot shows how to launch the Azure Recovery Services agent.](./media/backup-try-azure-backup-in-10-mins/snap-in-search.png)
-2. In the Recovery Services agent, select **Schedule Backup**.
+2. On the Recovery Services agent, select **Schedule Backup**.
- ![Schedule a Windows Server backup](./media/backup-try-azure-backup-in-10-mins/schedule-first-backup.png)
+ ![Screenshot shows how to schedule a Windows Server backup.](./media/backup-try-azure-backup-in-10-mins/schedule-first-backup.png)
3. On the **Getting started** page of the Schedule Backup Wizard, select **Next**.
To complete the initial backup, use the Microsoft Azure Recovery Services agent.
9. After the wizard finishes creating the backup schedule, select **Close**.
-### To back up Windows Server System State for the first time
+### Back up Windows Server System State for the first time
+
+To back up Windows Server System State for the first time, follow these steps:
-1. Make sure there are no pending updates for Windows Server that require a reboot.
+1. Ensure that there are no pending updates for Windows Server that require a reboot.
-2. In the Recovery Services agent, select **Back Up Now** to complete the initial seeding over the network.
+2. On the Recovery Services agent, select **Back Up Now** to complete the initial seeding over the network.
- ![Windows Server back-up now](./media/backup-try-azure-backup-in-10-mins/backup-now.png)
+ ![Screenshot shows how to start backup of Windows Server.](./media/backup-try-azure-backup-in-10-mins/backup-now.png)
3. On the **Select Backup Item** screen that appears, select **System State**, and then select **Next**.
To complete the initial backup, use the Microsoft Azure Recovery Services agent.
After the initial backup is completed, the **Job completed** status appears in the Backup console.
- ![IR complete](./media/backup-try-azure-backup-in-10-mins/ircomplete.png)
-
-## Questions?
-
-If you have questions, [send us feedback](https://feedback.azure.com/d365community/forum/153aa817-0725-ec11-b6e6-000d3a4f0858).
+ ![Screenshot shows that the initial backup is completed.](./media/backup-try-azure-backup-in-10-mins/ircomplete.png)
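The MARS agent also installs the MSOnlineBackup PowerShell module, so subsequent on-demand backups can be scripted instead of clicked. A minimal sketch, assuming the schedule created by the wizard above is the only policy on the machine:

```powershell
# Sketch: trigger an on-demand backup with the MARS agent's PowerShell module.
Import-Module MSOnlineBackup

$policy = Get-OBPolicy            # the schedule created by the wizard above
Start-OBBackup -Policy $policy    # runs the backup now and reports progress
```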
## Next steps
backup Backup Mabs Sql Azure Stack https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-mabs-sql-azure-stack.md
Title: Back up SQL Server workloads on Azure Stack description: In this article, learn how to configure Microsoft Azure Backup Server (MABS) to protect SQL Server databases on Azure Stack.- Previously updated : 06/08/2018+ Last updated : 01/18/2023++++ + # Back up SQL Server on Azure Stack
-Use this article to configure Microsoft Azure Backup Server (MABS) to protect SQL Server databases on Azure Stack.
+This article describes how to configure Microsoft Azure Backup Server (MABS) to protect SQL Server databases on Azure Stack.
+
+## SQL Server database protection workflow
-The management of SQL Server database backup to Azure and recovery from Azure involves three steps:
+The management of SQL Server database backup to Azure and recovery from Azure involves:
1. Create a backup policy to protect SQL Server databases
2. Create on-demand backup copies
The management of SQL Server database backup to Azure and recovery from Azure in
## Prerequisites and limitations
-* If you have a database with files on a remote file share, protection will fail with Error ID 104. MABS doesn't support protection for SQL Server data on a remote file share.
+* If you have a database with files on a remote file share, protection will fail with Error ID 104. MABS doesn't support protection for SQL Server data on a remote file share.
* MABS can't protect databases that are stored on remote SMB shares. * Ensure that the [availability group replicas are configured as read-only](/sql/database-engine/availability-groups/windows/configure-read-only-access-on-an-availability-replica-sql-server). * You must explicitly add the system account **NTAuthority\System** to the Sysadmin group on SQL Server.
The management of SQL Server database backup to Azure and recovery from Azure in
* If the backup fails on the selected node, then the backup operation fails. * Recovery to the original location isn't supported. * SQL Server 2014 or above backup issues:
- * SQL server 2014 added a new feature to create a [database for on-premises SQL Server in Windows Azure Blob storage](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure). MABS can't be used to protect this configuration.
+ * SQL Server 2014 added a new feature to create a [database for on-premises SQL Server on Microsoft Azure Blob storage](/sql/relational-databases/databases/sql-server-data-files-in-microsoft-azure). MABS can't be used to protect this configuration.
* There are some known issues with the "Prefer secondary" backup preference for the SQL Always On option. MABS always takes a backup from the secondary. If no secondary can be found, then the backup fails.

## Before you start

[Install and prepare Azure Backup Server](backup-mabs-install-azure-stack.md).
-## Create a backup policy to protect SQL Server databases to Azure
+## Create a backup policy
+
+To create a backup policy to protect SQL Server databases to Azure, follow these steps:
1. On the Azure Backup Server UI, select the **Protection** workspace.
2. On the tool ribbon, select **New** to create a new protection group.
- ![Create Protection Group](./media/backup-azure-backup-sql/protection-group.png)
+ ![Screenshot shows how to initiate creating Protection Group.](./media/backup-azure-backup-sql/protection-group.png)
Azure Backup Server starts the Protection Group wizard, which leads you through creating a **Protection Group**. Select **Next**.
-3. In the **Select Protection Group Type** screen, select **Servers**.
+3. On the **Select Protection Group Type** screen, select **Servers**.
- ![Select Protection Group Type - 'Servers'](./media/backup-azure-backup-sql/pg-servers.png)
+ ![Screenshot shows how to select Protection Group Type - Servers.](./media/backup-azure-backup-sql/pg-servers.png)
-4. In the **Select Group Members** screen, the Available members list displays the various data sources. Select **+** to expand a folder and reveal the subfolders. Select the checkbox to select an item.
+4. On the **Select Group Members** screen, the Available members list displays the various data sources. Select **+** to expand a folder and reveal the subfolders. Select the checkbox to select an item.
- ![Select SQL DB](./media/backup-azure-backup-sql/pg-databases.png)
+ ![Screenshot shows how to select a SQL database.](./media/backup-azure-backup-sql/pg-databases.png)
All selected items appear in the Selected members list. After selecting the servers or databases you want to protect, select **Next**.
-5. In the **Select Data Protection Method** screen, provide a name for the protection group and select the **I want online Protection** checkbox.
+5. On the **Select Data Protection Method** screen, provide a name for the protection group and select the **I want online Protection** checkbox.
- ![Data Protection Method - short-term disk & Online Azure](./media/backup-azure-backup-sql/pg-name.png)
+ ![Screenshot shows the Data Protection Method - short-term disk & Online Azure.](./media/backup-azure-backup-sql/pg-name.png)
-6. In the **Specify Short-Term Goals** screen, include the necessary inputs to create backup points to disk, and select **Next**.
+6. On the **Specify Short-Term Goals** screen, include the necessary inputs to create backup points to disk, and select **Next**.
In this example, the **Retention range** is **5 days**, the **Synchronization frequency** is once every **15 minutes** (the backup frequency), and **Express Full Backup** is set to **8:00 PM**.
- ![Short-term goals](./media/backup-azure-backup-sql/pg-shortterm.png)
+ ![Screenshot shows the short-term goals.](./media/backup-azure-backup-sql/pg-shortterm.png)
> [!NOTE]
> In the example shown, a backup point is created at 8:00 PM every day by transferring the modified data from the previous day's 8:00 PM backup point. This process is called **Express Full Backup**. Transaction logs are synchronized every 15 minutes. If you need to recover the database at 9:00 PM, the point is created from the logs from the last express full backup point (8:00 PM in this case).
- >
- >
7. On the **Review disk allocation** screen, verify the overall storage space available and the potential disk space. Select **Next**.
-8. In the **Choose Replica Creation Method**, choose how to create your first recovery point. You can transfer the initial backup manually (off network) to avoid bandwidth congestion or over the network. If you choose to wait to transfer the first backup, you can specify the time for the initial transfer. Select **Next**.
+8. On the **Choose Replica Creation Method** screen, choose how to create your first recovery point. You can transfer the initial backup manually (off network) to avoid bandwidth congestion, or automatically over the network. If you choose to wait to transfer the first backup, you can specify the time for the initial transfer. Select **Next**.
- ![Initial replication method](./media/backup-azure-backup-sql/pg-manual.png)
+ ![Screenshot shows the initial replication method.](./media/backup-azure-backup-sql/pg-manual.png)
The initial backup copy requires transferring the entire data source (SQL Server database) from the production server (SQL Server computer) to Azure Backup Server. This data might be large, and transferring it over the network could exceed the available bandwidth. For this reason, you can choose to transfer the initial backup: **Manually** (using removable media) to avoid bandwidth congestion, or **Automatically over the network** (at a specified time).
The management of SQL Server database backup to Azure and recovery from Azure in
9. Choose when you want the consistency check to run and select **Next**.
- ![Consistency check](./media/backup-azure-backup-sql/pg-consistent.png)
+ ![Screenshot shows how to schedule the consistency check.](./media/backup-azure-backup-sql/pg-consistent.png)
Azure Backup Server performs a consistency check on the integrity of the backup point. Azure Backup Server calculates the checksum of the backup file on the production server (SQL Server computer in this scenario) and the backed-up data for that file. If there's a conflict, it's assumed the backed-up file on Azure Backup Server is corrupt. Azure Backup Server rectifies the backed-up data by sending the blocks corresponding to the checksum mismatch. Because consistency checks are performance-intensive, you can schedule the consistency check or run it automatically. 10. To specify online protection of the datasources, select the databases to be protected to Azure and select **Next**.
- ![Select datasources](./media/backup-azure-backup-sql/pg-sqldatabases.png)
+ ![Screenshot shows how to select data sources.](./media/backup-azure-backup-sql/pg-sqldatabases.png)
11. Choose backup schedules and retention policies that suit the organization policies.
- ![Schedule and Retention](./media/backup-azure-backup-sql/pg-schedule.png)
+ ![Screenshot shows how to choose the backup schedule and retention.](./media/backup-azure-backup-sql/pg-schedule.png)
In this example, backups are taken twice a day, at 12:00 PM and 8:00 PM (bottom part of the screen).

> [!NOTE]
> It's a good practice to have a few short-term recovery points on disk, for quick recovery. These recovery points are used for operational recovery. Azure serves as a good offsite location with higher SLAs and guaranteed availability.
- >
- >
+ **Best Practice**: If you schedule backups to Azure to start after the local disk backups complete, the latest disk backups are always copied to Azure.

12. Choose the retention policy schedule. For details on how the retention policy works, see [Use Azure Backup to replace your tape infrastructure](backup-azure-backup-cloud-as-tape.md).
- ![Retention Policy](./media/backup-azure-backup-sql/pg-retentionschedule.png)
+ ![Screenshot shows how to choose the retention Policy.](./media/backup-azure-backup-sql/pg-retentionschedule.png)
In this example:
The management of SQL Server database backup to Azure and recovery from Azure in
14. Once you review the policy details on the **Summary** screen, select **Create group** to complete the workflow. You can select **Close** and monitor the job progress in the **Monitoring** workspace.
- ![Creation of Protection Group In-Progress](./media/backup-azure-backup-sql/pg-summary.png)
+ ![Screenshot shows the in-progress job state of the Protection Group creation.](./media/backup-azure-backup-sql/pg-summary.png)
+
+## Run an on-demand backup
-## On-demand backup of a SQL Server database
+A *recovery point* is created only when the first backup occurs. After creating a backup policy, you can trigger the creation of a recovery point manually, rather than waiting for the scheduler to take the backup.
-While the previous steps created a backup policy, a "recovery point" is created only when the first backup occurs. Rather than waiting for the scheduler to kick in, the steps below trigger the creation of a recovery point manually.
+To run an on-demand backup of a SQL Server database, follow these steps:
1. Wait until the protection group status shows **OK** for the database before creating the recovery point.
- ![Protection Group Members](./media/backup-azure-backup-sql/sqlbackup-recoverypoint.png)
-2. Right-click on the database and select **Create Recovery Point**.
+ ![Screenshot shows the Protection Group members.](./media/backup-azure-backup-sql/sqlbackup-recoverypoint.png)
+2. Right-click the database and select **Create Recovery Point**.
- ![Create Online Recovery Point](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
+ ![Screenshot shows how to start creating the online Recovery Point.](./media/backup-azure-backup-sql/sqlbackup-createrp.png)
3. Choose **Online Protection** in the drop-down menu and select **OK** to start creation of a recovery point in Azure.
- ![Create recovery point](./media/backup-azure-backup-sql/sqlbackup-azure.png)
+ ![Screenshot shows how to choose the Online Protection option.](./media/backup-azure-backup-sql/sqlbackup-azure.png)
4. View the job progress in the **Monitoring** workspace.
- ![Monitoring console](./media/backup-azure-backup-sql/sqlbackup-monitoring.png)
+ ![Screenshot shows the monitoring console.](./media/backup-azure-backup-sql/sqlbackup-monitoring.png)
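Because MABS is built on DPM, the recovery point can also be created from the MABS management shell. The following is a sketch using the DPM cmdlets, with server, protection group, and database names as placeholders; the online copy still follows the console steps above:

```powershell
# Sketch: create an on-demand disk recovery point from the MABS/DPM shell.
$pg = Get-DPMProtectionGroup -DPMServerName 'MABSSERVER' |
      Where-Object { $_.FriendlyName -eq 'SQL Protection Group' }
$ds = Get-DPMDatasource -ProtectionGroup $pg |
      Where-Object { $_.Name -like '*ReportServer*' }

# Express full backup to disk; trigger the online copy from the console as shown above.
New-DPMRecoveryPoint -Datasource $ds -Disk -BackupType ExpressFull
```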
-## Recover a SQL Server database from Azure
+## Recover the database from Azure
-The following steps are required to recover a protected entity (SQL Server database) from Azure.
+To recover a protected entity (SQL Server database) from Azure, follow these steps:
1. Open the Azure Backup Server Management Console. Navigate to the **Recovery** workspace, where you can see the protected servers. Browse the required database (in this case, ReportServer$MSDPM2012). Select a **Recovery from** time that's specified as an **Online** point.
- ![Select Recovery point](./media/backup-azure-backup-sql/sqlbackup-restorepoint.png)
+ ![Screenshot shows how to select a Recovery point.](./media/backup-azure-backup-sql/sqlbackup-restorepoint.png)
2. Right-click the database name and select **Recover**.
- ![Recover from Azure](./media/backup-azure-backup-sql/sqlbackup-recover.png)
+ ![Screenshot shows how to select a database to recover from Azure.](./media/backup-azure-backup-sql/sqlbackup-recover.png)
3. MABS shows the details of the recovery point. Select **Next**. To overwrite the database, select the recovery type **Recover to original instance of SQL Server**. Select **Next**.
- ![Recover to Original Location](./media/backup-azure-backup-sql/sqlbackup-recoveroriginal.png)
+ ![Screenshot shows how to recover database to original location.](./media/backup-azure-backup-sql/sqlbackup-recoveroriginal.png)
In this example, MABS recovers the database to another SQL Server instance, or to a standalone network folder.
-4. In the **Specify Recovery options** screen, you can select the recovery options like Network bandwidth usage throttling to throttle the bandwidth used by recovery. Select **Next**.
+4. On the **Specify Recovery options** screen, you can select recovery options, such as network bandwidth usage throttling, to limit the bandwidth used by the recovery. Select **Next**.
-5. In the **Summary** screen, you see all the recovery configurations provided so far. Select **Recover**.
+5. On the **Summary** screen, you see all the recovery configurations provided so far. Select **Recover**.
The Recovery status shows the database being recovered. You can select **Close** to close the wizard and view the progress in the **Monitoring** workspace.
- ![Initiate recovery process](./media/backup-azure-backup-sql/sqlbackup-recoverying.png)
+ ![Screenshot shows how to initiate the recovery process.](./media/backup-azure-backup-sql/sqlbackup-recoverying.png)
Once the recovery is completed, the restored database is application consistent.

## Next steps
-See the [Backup files and application](backup-mabs-files-applications-azure-stack.md) article.
-See the [Backup SharePoint on Azure Stack](backup-mabs-sharepoint-azure-stack.md) article.
+- [Back up files and applications](backup-mabs-files-applications-azure-stack.md)
+- [Back up SharePoint on Azure Stack](backup-mabs-sharepoint-azure-stack.md)
backup Manage Azure Managed Disks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/manage-azure-managed-disks.md
Title: Manage Azure Managed Disks description: Learn about managing Azure Managed Disk from the Azure portal.- Previously updated : 09/23/2021+ Last updated : 01/20/2023++++ # Manage Azure Managed Disks This article explains how to manage Azure Managed Disk from the Azure portal.
-## Stop Protection (Preview)
+## Monitor a backup operation
+
+The Azure Backup service creates a job for tracking when a scheduled backup runs or when you trigger an on-demand backup operation. To view the backup job status, follow these steps:
+
+1. Go to the **Backup instance** screen. It shows the jobs dashboard with operation and status for the past seven days.
+
+ ![Screenshot shows the jobs dashboard.](./media/backup-managed-disks/jobs-dashboard.png)
+
+1. To view the status of the backup operation, select **View all** to show ongoing and past jobs of this backup instance.
+
+ ![Screenshot shows how to select the view all option.](./media/backup-managed-disks/view-all.png)
+
+1. Review the list of backup and restore jobs and their status. Select a job from the list of jobs to view job details.
+
+ ![Screenshot shows how to select a job to see details.](./media/backup-managed-disks/select-job.png)
+
+## Monitor a restore operation
+
+After you trigger the restore operation, the backup service creates a job for tracking. Azure Backup displays notifications about the job in the portal. To view the restore job progress:
+
+1. Go to the **Backup instance** screen. It shows the jobs dashboard with operation and status for the past seven days.
+
+ ![Screenshot shows the Jobs dashboard that lists all jobs and the statuses.](./media/restore-managed-disks/jobs-dashboard.png)
+
+1. To view the status of the restore operation, select **View all** to show ongoing and past jobs of this backup instance.
+
+ ![Screenshot shows how to select View all.](./media/restore-managed-disks/view-all.png)
+
+1. Review the list of backup and restore jobs and their status. Select a job from the list of jobs to view job details.
+
+ ![Screenshot shows the list of jobs.](./media/restore-managed-disks/list-of-jobs.png)
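If you'd rather query jobs programmatically than browse the portal, the Az.DataProtection module exposes the same job list. A minimal sketch (vault and resource group names are placeholders):

```powershell
# Sketch: list recent backup/restore jobs for the Backup vault that protects the disks.
Import-Module Az.DataProtection

Get-AzDataProtectionJob -ResourceGroupName 'myResourceGroup' -VaultName 'myBackupVault' |
    Select-Object -First 10
```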
+
+## Manage operations using the Azure portal
+
+This section describes several Azure Backup-supported management operations that make it easy to manage Azure Managed Disks.
+
+### Stop Protection (Preview)
There are three ways by which you can stop protecting an Azure Disk:
There are three ways by which you can stop protecting an Azure Disk:
- **Stop Protection and Delete Data**: This option helps you stop all future backup jobs from protecting your disks and delete all the recovery points. You won't be able to restore the disk or use the **Resume backup** option.
-### Stop Protection and Retain Data
+#### Stop Protection and Retain Data
1. Go to **Backup center** and select **Azure Disks**.
There are three ways by which you can stop protecting an Azure Disk:
:::image type="content" source="./media/manage-azure-managed-disks/confirm-stopping-disk-backup-inline.png" alt-text="Screenshot showing the options for disk backup instance retention to be selected." lightbox="./media/manage-azure-managed-disks/confirm-stopping-disk-backup-expanded.png":::
-### Stop Protection and Delete Data
+#### Stop Protection and Delete Data
1. Go to **Backup center** and select **Azure Disks**.
There are three ways by which you can stop protecting an Azure Disk:
:::image type="content" source="./media/manage-azure-managed-disks/confirm-stopping-disk-backup-inline.png" alt-text="Screenshot showing the options for disk backup instance retention to be selected." lightbox="./media/manage-azure-managed-disks/confirm-stopping-disk-backup-expanded.png":::
-## Resume Protection
+### Resume Protection
If you have selected the **Stop Protection and Retain data** option, you can resume protection for your disks.
Use the following steps:
:::image type="content" source="./media/manage-azure-managed-disks/resume-disk-backup-inline.png" alt-text="Screenshot showing the option to resume disk backup." lightbox="./media/manage-azure-managed-disks/resume-disk-backup-expanded.png":::
-## Delete Backup Instance
+### Delete Backup Instance
If you choose to stop all scheduled backup jobs and delete all existing backups, use **Delete Backup Instance**.
bastion Kerberos Authentication Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/kerberos-authentication-portal.md
This article shows you how to configure Azure Bastion to use Kerberos authentication. Kerberos authentication can be used with both the Basic and the Standard Bastion SKUs. For more information about Kerberos authentication, see the [Kerberos authentication overview](/windows-server/security/kerberos/kerberos-authentication-overview). For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)
-> [!NOTE]
-> During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only.
->
+## Considerations
+
+* During Preview, the Kerberos setting for Azure Bastion can be configured in the Azure portal only.
+* VMs migrated from on-premises to Azure are not currently supported for Kerberos. 
+* Cross-realm authentication is not currently supported for Kerberos. 
## Prerequisites
bastion Tutorial Protect Bastion Host https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/bastion/tutorial-protect-bastion-host.md
In this tutorial, you deploy Bastion using the Standard SKU tier and adjust host
Azure Bastion is a PaaS service that's maintained for you, not a bastion host that you install on one of your VMs and maintain yourself. For more information about Azure Bastion, see [What is Azure Bastion?](bastion-overview.md)

> [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
+> Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges apply only if more than 100 public IPs are protected in the tenant. Ensure you delete the resources in this tutorial if you don't plan to use them in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you'll learn how to:
batch Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/best-practices.md
Title: Best practices description: Learn best practices and useful tips for developing your Azure Batch solutions. Previously updated : 11/15/2022 Last updated : 01/18/2023
Before you recreate or resize your pool, you should download any node agent logs
> [!NOTE]
> For general guidance about security in Azure Batch, see [Batch security and compliance best practices](security-best-practices.md).
+#### Operating system updates
+
+It's recommended that the VM image selected for a Batch pool is up-to-date with the latest publisher-provided security updates.
+Some images may perform automatic updates upon boot (or shortly thereafter), which can interfere with user-directed actions such
+as retrieving package repository updates (for example, `apt update`) or installing packages during actions such as a
+[StartTask](jobs-and-tasks.md#start-task).
+
+Azure Batch doesn't verify or guarantee that images allowed for use with the service have the latest security updates.
+Updates to images are under the purview of the publisher of the image, not Azure Batch. For certain images published
+under `microsoft-azure-batch`, there's no guarantee that these images are kept up-to-date with their upstream derived image.
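If a start task installs packages on such an image, it can help to wait for any automatic update process to release the package manager locks first. The following is a minimal sketch, assuming an Ubuntu/Debian image, a task that runs with admin elevation, and a hypothetical package name:

```bash
#!/bin/bash
# Minimal StartTask sketch for Ubuntu/Debian pool nodes. Waits for automatic
# updates to release the dpkg lock before installing packages; "my-package"
# is a placeholder.
while fuser /var/lib/dpkg/lock-frontend >/dev/null 2>&1; do
    echo "Waiting for another package manager process to finish..."
    sleep 10
done
apt-get update
apt-get install -y my-package
```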
+
### Pool lifetime and billing

Pool lifetime can vary depending upon the method of allocation and options applied to the pool configuration. Pools can have an arbitrary lifetime and a varying number of compute nodes at any point in time. It's your responsibility to manage the compute nodes in the pool either explicitly, or through features provided by the service ([autoscale](nodes-and-pools.md#automatic-scaling-policy) or [autopool](nodes-and-pools.md#autopools)).
chaos-studio Chaos Studio Fault Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/chaos-studio/chaos-studio-fault-library.md
Configuring the shutdown fault:
```

## Key Vault Deny Access

| Property | Value |
|-|-|
| Capability Name | DenyAccess-1.0 |
Configuring the shutdown fault:
 ]
}
```
+## Key Vault Disable Certificate
++
+| Property | Value |
+| - | - |
+| Capability Name | DisableCertificate-1.0 |
+| Target Type | Microsoft-KeyVault |
+| Description | Using certificate properties, the fault disables the certificate for a specific duration (provided by the user), and enables it again after the fault duration ends. |
+| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before attempting to run the fault. |
+| Urn | urn:csci:microsoft:keyvault:disableCertificate/1.0 |
+| Fault Type | Continuous |
+| Parameters (key, value) | |
+| certificateName | Name of the Azure Key Vault certificate on which the fault is executed |
+| version | The certificate version that should be updated; if not specified, the latest version will be updated. |
+
+### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "continuous",
+ "name": "urn:csci:microsoft:keyvault:disableCertificate/1.0",
+ "parameters": [
+ {
+ "key": "certificateName",
+ "value": "<name of AKV certificate>"
+ },
+ {
+ "key": "version",
+ "value": "<certificate version>"
+ }
+      ],
+ "duration": "PT10M",
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
+
+## Key Vault Increment Certificate Version
+
+| Property | Value |
+| - | - |
+| Capability Name | IncrementCertificateVersion-1.0 |
+| Target Type | Microsoft-KeyVault |
+| Description | Generates a new certificate version and thumbprint by using the Key Vault Certificate client library. The current working certificate is upgraded to this version. |
+| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before attempting to run the fault. |
+| Urn | urn:csci:microsoft:keyvault:incrementCertificateVersion/1.0 |
+| Fault Type | Discrete |
+| Parameters (key, value) | |
+| certificateName | Name of the Azure Key Vault certificate on which the fault is executed |
+
+### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "discrete",
+ "name": "urn:csci:microsoft:keyvault:incrementCertificateVersion/1.0",
+ "parameters": [
+ {
+ "key": "certificateName",
+ "value": "<name of AKV certificate>"
+ }
+ ],
+ "duration": "PT10M",
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
+
+## Key Vault Update Certificate Policy
+
+| Property | Value |
+| - | - |
+| Capability Name | UpdateCertificatePolicy-1.0 |
+| Target Type | Microsoft-KeyVault |
+| Description | Certificate policies (examples: certificate validity period, certificate type, key size, or key type) are updated based on the user input and reverted after the fault duration. |
+| Prerequisites | For OneCert certificates, the domain must be registered with OneCert before attempting to run the fault. |
+| Urn | urn:csci:microsoft:keyvault:updateCertificatePolicy/1.0 |
+| Fault Type | Continuous |
+| Parameters (key, value) | |
+| certificateName | Name of the Azure Key Vault certificate on which the fault is executed |
+| version | The certificate version that should be updated; if not specified, the latest version will be updated. |
+| enabled | Boolean. Value indicating whether the new certificate version will be enabled |
+| validityInMonths | The validity period of the certificate in months |
+| certificateTransparency | Indicates whether the certificate should be published to the certificate transparency list when created |
+| certificateType | The certificate type |
+| contentType | The content type of the certificate. For example, Pkcs12 when the certificate contains raw PFX bytes, or Pem when it contains ASCII PEM-encoded bytes. Pkcs12 is the default value assumed |
+| keySize | The size of the RSA key: 2048, 3072, or 4096 |
+| exportable | Boolean. Value indicating whether the certificate key is exportable from the vault or secure certificate store |
+| reuseKey | Boolean. Value indicating whether the certificate key should be reused when rotating the certificate |
+| keyType | The type of backing key to be generated when issuing new certificates: RSA or EC |
+
+### Sample JSON
+
+```json
+{
+ "name": "branchOne",
+ "actions": [
+ {
+ "type": "continuous",
+ "name": "urn:csci:microsoft:keyvault:updateCertificatePolicy/1.0",
+ "parameters": [
+ {
+ "key": "certificateName",
+ "value": "<name of AKV certificate>"
+ },
+ {
+ "key": "version",
+ "value": "<certificate version>"
+ },
+ {
+ "key": "enabled",
+ "value": "True"
+ },
+ {
+ "key": "validityInMonths",
+ "value": "12"
+ },
+ {
+ "key": "certificateTransparency",
+ "value": "True"
+ },
+ {
+ "key": "certificateType",
+ "value": "<certificate type>"
+ },
+ {
+ "key": "contentType",
+ "value": "Pem"
+ },
+ {
+ "key": "keySize",
+ "value": "4096"
+ },
+ {
+ "key": "exportable",
+ "value": "True"
+ },
+ {
+ "key": "reuseKey",
+ "value": "False"
+ },
+ {
+ "key": "keyType",
+ "value": "RSA"
+ }
+      ],
+ "duration": "PT10M",
+ "selectorid": "myResources"
+ }
+ ]
+}
+```
cloud-services Cloud Services Guestos Msrc Releases https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-msrc-releases.md
na Previously updated : 12/14/2022 Last updated : 1/20/2023 # Azure Guest OS The following tables show the Microsoft Security Response Center (MSRC) updates applied to the Azure Guest OS. Search this article to determine if a particular update applies to the Guest OS you are using. Updates always carry forward for the particular [family][family-explain] they were introduced in.
-## December 2022 Guest OS
+
+## January 2023 Guest OS
>[!NOTE]
->The December Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the December Guest OS. This list is subject to change.
-
-| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
-| | | | | |
-| Rel 22-12 | [5021235] | Latest Cumulative Update(LCU) | 5.76 | Dec 13, 2022 |
-| Rel 22-12 | [5019958] | IE Cumulative Updates | 2.132, 3.119, 4.112 | Nov 8, 2022 |
-| Rel 22-12 | [5021249] | Latest Cumulative Update(LCU) | 7.20 | Dec 13, 2022 |
-| Rel 22-12 | [5021237] | Latest Cumulative Update(LCU) | 6.52 | Dec 13, 2022 |
-| Rel 22-12 | [5020861] | .NET Framework 3.5 Security and Quality Rollup   | 2.132 | Dec 13, 2022 |
-| Rel 22-12 | [5020869] | .NET Framework 4.6.2 Security and Quality Rollup  | 2.132 | Dec 13, 2022 |
-| Rel 22-12 | [5020862] | .NET Framework 3.5 Security and Quality Rollup   | 4.112 | Dec 13, 2022 |
-| Rel 22-12 | [5020868] | .NET Framework 4.6.2 Security and Quality Rollup   | 4.112 | Dec 13, 2022 |
-| Rel 22-12 | [5020859] | .NET Framework 3.5 Security and Quality Rollup  | 3.119 | Dec 13, 2022 |
-| Rel 22-12 | [5020867] | .NET Framework 4.6.2 Security and Quality Rollup   | 3.119 | Dec 13, 2022 |
-| Rel 22-12 | [5020874] | . NET Framework 4.8 Security and Quality Rollup  | 6.52 | Dec 13, 2022 |
-| Rel 22-12 | [5020866] | .NET Framework 4.7.2 Cumulative Update  | 6.52 | Dec 13, 2022 |
-| Rel 22-12 | [5020873] | .NET Framework 4.8 Security and Quality Rollup   | 5.76 | Dec 13, 2022 |
-| Rel 22-12 | [5020877] | .NET Framework 4.8 Security and Quality Rollup  | 7.20 | Dec 13, 2022 |
-| Rel 22-12 | [5021291] | Monthly Rollup  | 2.132 | Dec 13, 2022 |
-| Rel 22-12 | [5021285] | Monthly Rollup  | 3.119 | Dec 13, 2022 |
-| Rel 22-12 | [5021294] | Monthly Rollup  | 4.112 | Dec 13, 2022 |
-| Rel 22-12 | [5016263] | Servicing Stack update LKG  | 3.119 | Jul 12, 2022 |
-| Rel 22-12 | [5018922] | Servicing Stack update LKG  | 4.112 | Oct 11, 2022 |
-| Rel 22-12 | [4578013] | OOB Standalone Security Update  | 4.112 | Aug 19, 2020 |
-| Rel 22-12 | [5017396] | Servicing Stack update LKG  | 5.76 | Sep 13, 2022 |
-| Rel 22-12 | 5020374 | Servicing Stack update LKG  | 6.52 | Dec 13, 2022 |
-| Rel 22-12 | [5017397] | Servicing Stack update LKG  | 2.132 | Sep 13, 2022 |
-| Rel 22-12 | 5020373 | Servicing Stack update 11C LKG  | 7.20 | Dec 13, 2020 |
-| Rel 22-12 | [4494175] | Microcode  | 5.76 | Sep 1, 2020 |
-| Rel 22-12 | [4494174] | Microcode  | 6.52 | Sep 1, 2020 |
+>The January Guest OS is currently being rolled out to Cloud Service VMs that are configured for automatic updates. When the rollout is complete, this version will be made available for manual updates through the Azure portal and configuration files. The following patches are included in the January Guest OS. This list is subject to change.
+
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 23-01 | [5022289] | Latest Cumulative Update(LCU) | 5.77 | Jan 10, 2023 |
+| Rel 23-01 | [5019958] | IE Cumulative Updates | 2.133, 3.120, 4.113 | Nov 8, 2022 |
+| Rel 23-01 | [5022291] | Latest Cumulative Update(LCU) | 7.21 | Jan 10, 2023 |
+| Rel 23-01 | [5022286] | Latest Cumulative Update(LCU) | 6.53 | Jan 10, 2023 |
+| Rel 23-01 | [5020861] | .NET Framework 3.5 Security and Quality Rollup LKG | 2.133 | Dec 13, 2022 |
+| Rel 23-01 | [5020869] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 2.133 | Dec 13, 2022 |
+| Rel 23-01 | [5020862] | .NET Framework 3.5 Security and Quality Rollup LKG | 4.113 | Dec 13, 2022 |
+| Rel 23-01 | [5020868] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 4.113 | Dec 13, 2022 |
+| Rel 23-01 | [5020859] | .NET Framework 3.5 Security and Quality Rollup LKG | 3.120 | Dec 13, 2022 |
+| Rel 23-01 | [5020867] | .NET Framework 4.6.2 Security and Quality Rollup LKG | 3.120 | Dec 13, 2022 |
+| Rel 23-01 | [5020866] | .NET Framework 4.7.2 Cumulative Update LKG | 6.53 | Dec 13, 2022 |
+| Rel 23-01 | [5020877] | .NET Framework 4.8 Security and Quality Rollup LKG | 7.21 | Dec 13, 2022 |
+| Rel 23-01 | [5022338] | Monthly Rollup | 2.133 | Jan 10, 2023 |
+| Rel 23-01 | [5022348] | Monthly Rollup | 3.120 | Jan 10, 2023 |
+| Rel 23-01 | [5022352] | Monthly Rollup | 4.113 | Jan 10, 2023 |
+| Rel 23-01 | [5016263] | Servicing Stack update LKG | 3.120 | Jul 12, 2022 |
+| Rel 23-01 | [5018922] | Servicing Stack update LKG | 4.113 | Oct 11, 2022 |
+| Rel 23-01 | [4578013] | OOB Standalone Security Update | 4.113 | Aug 19, 2020 |
+| Rel 23-01 | [5017396] | Servicing Stack update LKG | 5.77 | Sep 13, 2022 |
+| Rel 23-01 | [5017397] | Servicing Stack update LKG | 2.133 | Sep 13, 2022 |
+| Rel 23-01 | [4494175] | Microcode | 5.77 | Sep 1, 2020 |
+| Rel 23-01 | [4494174] | Microcode | 6.53 | Sep 1, 2020 |
+
+[5022289]: https://support.microsoft.com/kb/5022289
+[5019958]: https://support.microsoft.com/kb/5019958
+[5022291]: https://support.microsoft.com/kb/5022291
+[5022286]: https://support.microsoft.com/kb/5022286
+[5020861]: https://support.microsoft.com/kb/5020861
+[5020869]: https://support.microsoft.com/kb/5020869
+[5020862]: https://support.microsoft.com/kb/5020862
+[5020868]: https://support.microsoft.com/kb/5020868
+[5020859]: https://support.microsoft.com/kb/5020859
+[5020867]: https://support.microsoft.com/kb/5020867
+[5020866]: https://support.microsoft.com/kb/5020866
+[5020877]: https://support.microsoft.com/kb/5020877
+[5022338]: https://support.microsoft.com/kb/5022338
+[5022348]: https://support.microsoft.com/kb/5022348
+[5022352]: https://support.microsoft.com/kb/5022352
+[5016263]: https://support.microsoft.com/kb/5016263
+[5018922]: https://support.microsoft.com/kb/5018922
+[4578013]: https://support.microsoft.com/kb/4578013
+[5017396]: https://support.microsoft.com/kb/5017396
+[5017397]: https://support.microsoft.com/kb/5017397
+[4494175]: https://support.microsoft.com/kb/4494175
+[4494174]: https://support.microsoft.com/kb/4494174
++
+## December 2022 Guest OS
++
+| Product Category | Parent KB Article | Vulnerability Description | Guest OS | Date First Introduced |
+| | | | | |
+| Rel 22-12 | [5021235] | Latest Cumulative Update(LCU) | [5.76] | Dec 13, 2022 |
+| Rel 22-12 | [5019958] | IE Cumulative Updates | [2.132], [3.119], [4.112] | Nov 8, 2022 |
+| Rel 22-12 | [5021249] | Latest Cumulative Update(LCU) | [7.20] | Dec 13, 2022 |
+| Rel 22-12 | [5021237] | Latest Cumulative Update(LCU) | [6.52] | Dec 13, 2022 |
+| Rel 22-12 | [5020861] | .NET Framework 3.5 Security and Quality Rollup | [2.132] | Dec 13, 2022 |
+| Rel 22-12 | [5020869] | .NET Framework 4.6.2 Security and Quality Rollup | [2.132] | Dec 13, 2022 |
+| Rel 22-12 | [5020862] | .NET Framework 3.5 Security and Quality Rollup | [4.112] | Dec 13, 2022 |
+| Rel 22-12 | [5020868] | .NET Framework 4.6.2 Security and Quality Rollup | [4.112] | Dec 13, 2022 |
+| Rel 22-12 | [5020859] | .NET Framework 3.5 Security and Quality Rollup | [3.119] | Dec 13, 2022 |
+| Rel 22-12 | [5020867] | .NET Framework 4.6.2 Security and Quality Rollup | [3.119] | Dec 13, 2022 |
+| Rel 22-12 | [5020874] | .NET Framework 4.8 Security and Quality Rollup | [6.52] | Dec 13, 2022 |
+| Rel 22-12 | [5020866] | .NET Framework 4.7.2 Cumulative Update | [6.52] | Dec 13, 2022 |
+| Rel 22-12 | [5020873] | .NET Framework 4.8 Security and Quality Rollup | [5.76] | Dec 13, 2022 |
+| Rel 22-12 | [5020877] | .NET Framework 4.8 Security and Quality Rollup | [7.20] | Dec 13, 2022 |
+| Rel 22-12 | [5021291] | Monthly Rollup | [2.132] | Dec 13, 2022 |
+| Rel 22-12 | [5021285] | Monthly Rollup | [3.119] | Dec 13, 2022 |
+| Rel 22-12 | [5021294] | Monthly Rollup | [4.112] | Dec 13, 2022 |
+| Rel 22-12 | [5016263] | Servicing Stack update LKG | [3.119] | Jul 12, 2022 |
+| Rel 22-12 | [5018922] | Servicing Stack update LKG | [4.112] | Oct 11, 2022 |
+| Rel 22-12 | [4578013] | OOB Standalone Security Update | [4.112] | Aug 19, 2020 |
+| Rel 22-12 | [5017396] | Servicing Stack update LKG | [5.76] | Sep 13, 2022 |
+| Rel 22-12 | 5020374 | Servicing Stack update LKG | [6.52] | Dec 13, 2022 |
+| Rel 22-12 | [5017397] | Servicing Stack update LKG | [2.132] | Sep 13, 2022 |
+| Rel 22-12 | 5020373 | Servicing Stack update 11C LKG | [7.20] | Dec 13, 2020 |
+| Rel 22-12 | [4494175] | Microcode | [5.76] | Sep 1, 2020 |
+| Rel 22-12 | [4494174] | Microcode | [6.52] | Sep 1, 2020 |
[5021235]: https://support.microsoft.com/kb/5021235
[5019958]: https://support.microsoft.com/kb/5019958
The following tables show the Microsoft Security Response Center (MSRC) updates
[5017397]: https://support.microsoft.com/kb/5017397
[4494175]: https://support.microsoft.com/kb/4494175
[4494174]: https://support.microsoft.com/kb/4494174
+[2.132]: ./cloud-services-guestos-update-matrix.md#family-2-releases
+[3.119]: ./cloud-services-guestos-update-matrix.md#family-3-releases
+[4.112]: ./cloud-services-guestos-update-matrix.md#family-4-releases
+[5.76]: ./cloud-services-guestos-update-matrix.md#family-5-releases
+[6.52]: ./cloud-services-guestos-update-matrix.md#family-6-releases
+[7.20]: ./cloud-services-guestos-update-matrix.md#family-7-releases
## November 2022 Guest OS
cloud-services Cloud Services Guestos Update Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cloud-services/cloud-services-guestos-update-matrix.md
na Previously updated : 12/12/2022 Last updated : 1/19/2023 # Azure Guest OS releases and SDK compatibility matrix
Unsure about how to update your Guest OS? Check [this][cloud updates] out.
## News updates
+###### **January 19, 2023**
+The December Guest OS has released.
+
###### **December 12, 2022**
The November Guest OS has released.
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| | | |
+| WA-GUEST-OS-7.20_202212-01 | January 19, 2023 | Post 7.22 |
| WA-GUEST-OS-7.19_202211-01 | December 12, 2022 | Post 7.21 |
-| WA-GUEST-OS-7.18_202210-02 | November 4, 2022 | Post 7.20 |
+|~~WA-GUEST-OS-7.18_202210-02~~| November 4, 2022 | January 19, 2023 |
|~~WA-GUEST-OS-7.16_202209-01~~| September 29, 2022 | December 12, 2022 |
|~~WA-GUEST-OS-7.15_202208-01~~| September 2, 2022 | November 4, 2022 |
|~~WA-GUEST-OS-7.14_202207-01~~| August 3, 2022 | September 29, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| | | |
+| WA-GUEST-OS-6.52_202212-01 | January 19, 2023 | Post 6.54 |
| WA-GUEST-OS-6.51_202211-01 | December 12, 2022 | Post 6.53 |
-| WA-GUEST-OS-6.50_202210-02 | November 4, 2022 | Post 6.52 |
+|~~WA-GUEST-OS-6.50_202210-02~~| November 4, 2022 | January 19, 2023 |
|~~WA-GUEST-OS-6.48_202209-01~~| September 29, 2022 | December 12, 2022 |
|~~WA-GUEST-OS-6.47_202208-01~~| September 2, 2022 | November 4, 2022 |
|~~WA-GUEST-OS-6.46_202207-01~~| August 3, 2022 | September 29, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| | | |
+| WA-GUEST-OS-5.76_202212-01 | January 19, 2023 | Post 5.78 |
| WA-GUEST-OS-5.75_202211-01 | December 12, 2022 | Post 5.77 |
-| WA-GUEST-OS-5.74_202210-02 | November 4, 2022 | Post 5.76 |
+|~~WA-GUEST-OS-5.74_202210-02~~| November 4, 2022 | January 19, 2023 |
|~~WA-GUEST-OS-5.72_202209-01~~| September 29, 2022 | December 12, 2022 |
|~~WA-GUEST-OS-5.71_202208-01~~| September 2, 2022 | November 4, 2022 |
|~~WA-GUEST-OS-5.70_202207-01~~| August 3, 2022 | September 29, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| | | |
+| WA-GUEST-OS-4.112_202212-01 | January 19, 2023 | Post 4.114 |
| WA-GUEST-OS-4.111_202211-01 | December 12, 2022 | Post 4.113 |
-| WA-GUEST-OS-4.110_202210-02 | November 4, 2022 | Post 4.112 |
+|~~WA-GUEST-OS-4.110_202210-02~~| November 4, 2022 | January 19, 2023 |
|~~WA-GUEST-OS-4.108_202209-01~~| September 29, 2022 | December 12, 2022 |
|~~WA-GUEST-OS-4.107_202208-01~~| September 2, 2022 | November 4, 2022 |
|~~WA-GUEST-OS-4.106_202207-02~~| August 3, 2022 | September 29, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| | | |
+| WA-GUEST-OS-3.119_202212-01 | January 19, 2023 | Post 3.121 |
| WA-GUEST-OS-3.118_202211-01 | December 12, 2022 | Post 3.120 |
-| WA-GUEST-OS-3.117_202210-02 | November 4, 2022 | Post 3.119 |
+|~~WA-GUEST-OS-3.117_202210-02~~| November 4, 2022 | January 19, 2023 |
|~~WA-GUEST-OS-3.115_202209-01~~| September 29, 2022 | December 12, 2022 |
|~~WA-GUEST-OS-3.114_202208-01~~| September 2, 2022 | November 4, 2022 |
|~~WA-GUEST-OS-3.113_202207-02~~| August 3, 2022 | September 29, 2022 |
The September Guest OS has released.
| Configuration string | Release date | Disable date |
| | | |
+| WA-GUEST-OS-2.132_202212-01 | January 19, 2023 | Post 2.134 |
| WA-GUEST-OS-2.131_202211-01 | December 12, 2022 | Post 2.133 |
-| WA-GUEST-OS-2.130_202210-02 | November 4, 2022 | Post 2.132 |
+|~~WA-GUEST-OS-2.130_202210-02~~| November 4, 2022 | January 19, 2023 |
|~~WA-GUEST-OS-2.128_202209-01~~| September 29, 2022 | December 12, 2022 |
|~~WA-GUEST-OS-2.127_202208-01~~| September 2, 2022 | November 4, 2022 |
|~~WA-GUEST-OS-2.126_202207-02~~| August 3, 2022 | September 29, 2022 |
cognitive-services Embedded Speech https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/embedded-speech.md
Embedded neural voices only support 24-kHz sample rate.
For embedded speech, you'll need to download the speech recognition models for [speech-to-text](speech-to-text.md) and voices for [text-to-speech](text-to-speech.md). Instructions will be provided upon successful completion of the [limited access review](https://aka.ms/csgate-embedded-speech) process.
+The following [speech-to-text](speech-to-text.md) models are available: de-DE, en-AU, en-CA, en-GB, en-IE, en-IN, en-NZ, en-US, es-ES, es-MX, fr-CA, fr-FR, hi-IN, it-IT, ja-JP, ko-KR, nl-NL, pt-BR, ru-RU, sv-SE, tr-TR, zh-CN, zh-HK, and zh-TW.
+
+The following [text-to-speech](text-to-speech.md) locales and voices are available:
+
+| Locale (BCP-47) | Language | Text-to-speech voices |
+| -- | -- | -- |
+| `de-DE` | German (Germany) | `de-DE-KatjaNeural` (Female)<br/>`de-DE-ConradNeural` (Male)|
+| `en-AU` | English (Australia) | `en-AU-AnnetteNeural` (Female)<br/>`en-AU-WilliamNeural` (Male)|
+| `en-CA` | English (Canada) | `en-CA-ClaraNeural` (Female)<br/>`en-CA-LiamNeural` (Male)|
+| `en-GB` | English (United Kingdom) | `en-GB-LibbyNeural` (Female)<br/>`en-GB-RyanNeural` (Male)|
+| `en-US` | English (United States) | `en-US-AriaNeural` (Female)<br/>`en-US-GuyNeural` (Male)<br/>`en-US-JennyNeural` (Female)|
+| `es-ES` | Spanish (Spain) | `es-ES-ElviraNeural` (Female)<br/>`es-ES-AlvaroNeural` (Male)|
+| `es-MX` | Spanish (Mexico) | `es-MX-DaliaNeural` (Female)<br/>`es-MX-JorgeNeural` (Male)|
+| `fr-CA` | French (Canada) | `fr-CA-SylvieNeural` (Female)<br/>`fr-CA-JeanNeural` (Male)|
+| `fr-FR` | French (France) | `fr-FR-DeniseNeural` (Female)<br/>`fr-FR-HenriNeural` (Male)|
+| `it-IT` | Italian (Italy) | `it-IT-IsabellaNeural` (Female)<br/>`it-IT-DiegoNeural` (Male)|
+| `ja-JP` | Japanese (Japan) | `ja-JP-NanamiNeural` (Female)<br/>`ja-JP-KeitaNeural` (Male)|
+| `ko-KR` | Korean (Korea) | `ko-KR-SunHiNeural` (Female)<br/>`ko-KR-InJoonNeural` (Male)|
+| `pt-BR` | Portuguese (Brazil) | `pt-BR-FranciscaNeural` (Female)<br/>`pt-BR-AntonioNeural` (Male)|
+| `zh-CN` | Chinese (Mandarin, Simplified) | `zh-CN-XiaoxiaoNeural` (Female)<br/>`zh-CN-YunxiNeural` (Male)|
+
## Embedded speech configuration

For cloud connected applications, as shown in most Speech SDK samples, you use the `SpeechConfig` object with a Speech resource key and region. For embedded speech, you don't use a Speech resource. Instead of a cloud resource, you use the [models and voices](#models-and-voices) that you downloaded to your local device.
cognitive-services How To Custom Speech Test And Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/how-to-custom-speech-test-and-train.md
The following table lists accepted data types, when each data type should be use
| [Audio only](#audio-data-for-training-or-testing) | Yes (visual inspection) | 5+ audio files | Yes (Preview for `en-US`) | 1-20 hours of audio |
| [Audio + human-labeled transcripts](#audio--human-labeled-transcript-data-for-training-or-testing) | Yes (evaluation of accuracy) | 0.5-5 hours of audio | Yes | 1-20 hours of audio |
| [Plain text](#plain-text-data-for-training) | No | Not applicable | Yes | 1-200 MB of related text |
-| [Structured text](#structured-text-data-for-training) (public preview) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
+| [Structured text](#structured-text-data-for-training) | No | Not applicable | Yes | Up to 10 classes with up to 4,000 items and up to 50,000 training sentences |
| [Pronunciation](#pronunciation-data-for-training) | No | Not applicable | Yes | 1 KB to 1 MB of pronunciation text |

Training with plain text or structured text usually finishes within a few minutes.
cognitive-services Speech Container Batch Processing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-batch-processing.md
docker pull docker.io/batchkit/speech-batch-kit:latest
## Endpoint configuration
-The batch client takes a yaml configuration file that specifies the on-prem container endpoints. The following example can be written to `/mnt/my_nfs/config.yaml`, which is used in the examples below.
+The batch client takes a yaml configuration file that specifies the on-premises container endpoints. The following example can be written to `/mnt/my_nfs/config.yaml`, which is used in the examples below.
```yaml
-MyContainer1:
-ΓÇ» concurrency: 5
-ΓÇ» host: 192.168.0.100
-ΓÇ» port: 5000
-ΓÇ» rtf: 3
-MyContainer2:
-ΓÇ» concurrency: 5
-ΓÇ» host: BatchVM0.corp.redmond.microsoft.com
-ΓÇ» port: 5000
-ΓÇ» rtf: 2
-MyContainer3:
-ΓÇ» concurrency: 10
-ΓÇ» host: localhost
-ΓÇ» port: 6001
-ΓÇ» rtf: 4
+MyContainer1:
+ concurrency: 5
+ host: 192.168.0.100
+ port: 5000
+ rtf: 3
+MyContainer2:
+ concurrency: 5
+ host: BatchVM0.corp.redmond.microsoft.com
+ port: 5000
+ rtf: 2
+MyContainer3:
+ concurrency: 10
+ host: localhost
+ port: 6001
+ rtf: 4
```
-This yaml example specifies three speech containers on three hosts. The first host is specified by a IPv4 address, the second is running on the same VM as the batch-client, and the third container is specified by the DNS hostname of another VM. The `concurrency` value specifies the maximum concurrent file transcriptions that can run on the same container. The `rtf` (Real-Time Factor) value is optional, and can be used to tune performance.
-
-The batch client can dynamically detect if an endpoint becomes unavailable (for example, due to a container restart or networking issue), and when it becomes available again. Transcription requests will not be sent to containers that are unavailable, and the client will continue using other available containers. You can add, remove, or edit endpoints at any time without interrupting the progress of your batch.
+This yaml example specifies three speech containers on three hosts. The first host is specified by an IPv4 address, the second is running on the same VM as the batch client, and the third container is specified by the DNS hostname of another VM. The `concurrency` value specifies the maximum concurrent file transcriptions that can run on the same container. The `rtf` (Real-Time Factor) value is optional and can be used to tune performance.
+
+The batch client can dynamically detect if an endpoint becomes unavailable (for example, due to a container restart or networking issue), and when it becomes available again. Transcription requests will not be sent to containers that are unavailable, and the client will continue using other available containers. You can add, remove, or edit endpoints at any time without interrupting the progress of your batch.
+## Run the batch processing container
+
+> [!NOTE]
+> * This example uses the same directory (`/my_nfs`) for the configuration file and the inputs, outputs, and logs directories. You can use hosted or NFS-mounted directories for these folders.
+> * Running the client with `-h` lists the available command-line parameters and their default values.
+> * The batch processing container is only supported on Linux.
+
+Use the Docker `run` command to start the container. This will start an interactive shell inside the container.
-## Run the batch processing container
-ΓÇ»
-> [!NOTE]
-> * This example uses the same directory (`/my_nfs`) for the configuration file and the inputs, outputs, and logs directories. You can use hosted or NFS-mounted directories for these folders.
-> * Running the client with `–h` will list the available command-line parameters, and their default values. 
-> * The batch processing container is only supported on Linux.
-Use the Docker `run` command to start the container. This will start an interactive shell inside the container.
```Docker
-docker run --rm -ti -v  /mnt/my_nfs:/my_nfs --entrypoint /bin/bash /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest
+docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs --entrypoint /bin/bash docker.io/batchkit/speech-batch-kit:latest
```

To run the batch client:

```Docker
-run-batch-client -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder  /my_nfs/logs -file_log_level DEBUG -nbest 1 -m ONESHOT -diarization  None -language en-US -strict_config  
+run-batch-client -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs -file_log_level DEBUG -nbest 1 -m ONESHOT -diarization None -language en-US -strict_config
```

To run the batch client and container in a single command:

```Docker
-docker run --rm -ti -v  /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest  -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder  /my_nfs/logs -file_log_level DEBUG -nbest 1 -m ONESHOT -diarization  None -language en-US -strict_config  
+docker run --network host --rm -ti -v /mnt/my_nfs:/my_nfs docker.io/batchkit/speech-batch-kit:latest -config /my_nfs/config.yaml -input_folder /my_nfs/audio_files -output_folder /my_nfs/transcriptions -log_folder /my_nfs/logs
```

The client will start running. If an audio file has already been transcribed in a previous run, the client will automatically skip the file. Files are sent with an automatic retry if transient errors occur, and you can choose which errors you want the client to retry on. On a transcription error, the client will continue transcription, and can retry without losing progress.

## Run modes
The batch processing kit offers three modes, using the `--run-mode` parameter.
#### [REST](#tab/rest)
-`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. Also enables programmatic consumption using a Python module extension, or importing as a submodule.
+`REST` mode is an API server mode that provides a basic set of HTTP endpoints for audio file batch submission, status checking, and long polling. Also enables programmatic consumption using a Python module extension or importing as a submodule.
:::image type="content" source="media/containers/batch-rest-api-mode.png" alt-text="A diagram showing the batch-kit container processing files in REST mode."::: 1. Define the Speech container endpoints that the batch client will use in the `config.yaml` file.
-2. Send an HTTP request request to one of the API server's endpoints.
-
+2. Send an HTTP request to one of the API server's endpoints.
+
|Endpoint |Description |
|||
|`/submit` | Endpoint for creating new batch requests. |
The batch processing kit offers three modes, using the `--run-mode` parameter.
|`/watch` | Endpoint for using HTTP long polling until the batch completes. |

3. Audio files are uploaded from the input directory. If the audio file has already been transcribed in a previous run with the same output directory (same file name and checksum), the client will skip the file.
-4. If a request is sent to the `/submit` endpoint, the files are dispatched to the container endpoints from step 1.
+4. If a request is sent to the `/submit` endpoint, the files are dispatched to the container endpoints from step 1.
5. Logs and the Speech container output are returned to the specified output directory.

## Logging

> [!NOTE]
The output directory specified by `-output_folder` will contain a *run_summary.j
## Next steps * [How to install and run containers](speech-container-howto.md)+++++++
cognitive-services Speech Container Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-container-howto.md
With Speech containers, you can build a speech application architecture that's o
| Container | Features | Latest | Release status |
|--|--|--|--|
-| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.9.0 | Generally available |
-| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.9.0 | Generally available |
+| Speech-to-text | Analyzes sentiment and transcribes continuous real-time speech or batch audio recordings with intermediate results. | 3.10.0 | Generally available |
+| Custom speech-to-text | Using a custom model from the [Custom Speech portal](https://speech.microsoft.com/customspeech), transcribes continuous real-time speech or batch audio recordings into text with intermediate results. | 3.10.0 | Generally available |
| Speech language identification | Detects the language spoken in audio files. | 1.5.0 | Preview |
-| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.8.0 | Generally available |
+| Neural text-to-speech | Converts text to natural-sounding speech by using deep neural network technology, which allows for more natural synthesized speech. | 2.9.0 | Generally available |
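For reference, you can pull one of these images from the Microsoft Container Registry with Docker. The repository path below follows the Speech containers' registry layout; the tag is illustrative, so check the registry for current tags:

```bash
# Pull the speech-to-text container image (tag shown is illustrative).
docker pull mcr.microsoft.com/azure-cognitive-services/speechservices/speech-to-text:latest
```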
## Prerequisites
cognitive-services Speech Services Quotas And Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/Speech-Service/speech-services-quotas-and-limits.md
You aren't able to see the existing value of the concurrent request limit parame
>[!NOTE]
>[Speech containers](speech-container-howto.md) don't require increases of the concurrent request limit, because containers are constrained only by the CPUs of the hardware they are hosted on.
+
#### Prepare the required information
-To create an increase request, you provide your deployment region and the custom endpoint ID. To get it, perform the following actions:
+To create an increase request, you need to provide the following information:
+
+- For the prebuilt voice:
+ - Speech resource ID
+ - Region
+- For the custom voice:
+ - Deployment region
+ - Custom endpoint ID
+
+To get the required information for a prebuilt voice (a CLI alternative follows these steps):
+
+1. Go to the [Azure portal](https://portal.azure.com/).
+1. Select the Speech service resource for which you would like to increase the concurrency request limit.
+1. From the **Resource Management** group, select **Properties**.
+1. Copy and save the values of the following fields:
+ - **Resource ID**
+ - **Location** (your endpoint region)
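As a CLI alternative to the portal steps above, the same values can be read with `az cognitiveservices account show`. A minimal sketch with placeholder names:

```bash
# Read the resource ID and location of a Speech resource.
# <MySpeechResource> and <MyResourceGroup> are placeholders.
az cognitiveservices account show \
    --name <MySpeechResource> \
    --resource-group <MyResourceGroup> \
    --query "{resourceId:id, location:location}" \
    --output table
```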
+
+To get the required information for a custom voice:
1. Go to the [Speech Studio](https://aka.ms/speechstudio/customvoice) portal. 1. Sign in if necessary, and go to **Custom Voice**.
-1. Select your project, and go to **Deployment**.
+1. Select your project, and go to **Deploy model**.
1. Select the required endpoint.
1. Copy and save the values of the following fields:
   - **Service Region** (your endpoint region)
   - **Endpoint ID**
+
#### Create and submit a support request

Initiate the increase of the limit for concurrent requests for your resource, or if necessary check the current limit, by submitting a support request. Here's how:
Initiate the increase of the limit for concurrent requests for your resource, or
1. In **Problem subtype**, select either:
   - **Quota or concurrent requests increase** for an increase request.
   - **Quota or usage validation** to check the existing limit.
-1. Select **Next: Solutions**. Proceed further with the request creation.
-1. On the **Details** tab, in the **Description** field, enter the following:
+1. On the **Recommended solution** tab, select **Next**.
+1. On the **Additional details** tab, fill in all the required items. In the **Details** field, enter the following:
- A note that the request is about the text-to-speech quota.
- - Choose either the base or custom model.
- - The Azure resource information you [collected previously](#have-the-required-information-ready).
+ - Choose either the prebuilt voice or custom voice.
+ - The Azure resource information you [collected previously](#prepare-the-required-information).
   - Any other required information.
1. On the **Review + create** tab, select **Create**.
1. Note the support request number in Azure portal notifications. You'll be contacted shortly about your request.
cognitive-services Disconnected Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/containers/disconnected-containers.md
docker pull mcr.microsoft.com/azure-cognitive-services/form-recognizer/invoice:l
## Configure the container to be run in a disconnected environment
-Now that you've downloaded your container, you'll need to run the container with the `DownloadLicense=True` parameter in your `docker run` command. This parameter will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a speech-to-text container with a form recognizer container.
+Now that you've downloaded your container, you'll need to run the container with the `DownloadLicense=True` parameter in your `docker run` command. This parameter will download a license file that will enable your Docker container to run when it isn't connected to the internet. It also contains an expiration date, after which the license file will be invalid to run the container. You can only use a license file with the appropriate container that you've been approved for. For example, you can't use a license file for a speech-to-text container with a form recognizer container. Please do not rename or modify the license file as this will prevent the container from running successfully.
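As a rough sketch of that command, with the image name, license mount, endpoint, and key all as placeholders (the exact parameters for your container are listed in its documentation):

```bash
# Download a license file for disconnected use. DownloadLicense=True writes
# the license to the mounted license directory. <...> values are placeholders.
docker run --rm -it \
    -v <host-license-directory>:/license \
    <container-image> \
    eula=accept \
    billing=<endpoint> \
    apikey=<api-key> \
    DownloadLicense=True \
    Mounts:License=/license
```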
> [!IMPORTANT]
>
cognitive-services Create Account Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/create-account-bicep.md
Previously updated : 04/29/2022 Last updated : 01/19/2023
Using Bicep to create a Cognitive Service resource lets you create a multi-servi
The Bicep file used in this quickstart is from [Azure Quickstart Templates](https://azure.microsoft.com/resources/templates/cognitive-services-universalkey/).
+> [!NOTE]
+> * If you use a different resource `kind` (listed below), you may need to change the `sku` parameter to match the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) tier you wish to use. For example, the `TextAnalytics` kind uses `S` instead of `S0`.
+> * Many of the Cognitive Services have a free `F0` pricing tier that you can use to try the service.
+
+Be sure to change the `sku` parameter to the [pricing](https://azure.microsoft.com/pricing/details/cognitive-services/) instance you want. The `sku` depends on the resource `kind` that you are using. For example, the `TextAnalytics` kind uses `S` instead of `S0`.
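If the quickstart template exposes `sku` as a parameter, you can override it at deployment time. A minimal sketch, assuming the template is saved locally as `main.bicep` and using a placeholder resource group:

```bash
# Deploy the Bicep file, overriding the sku parameter (for example, S for
# the TextAnalytics kind). <MyResourceGroup> is a placeholder.
az deployment group create \
    --resource-group <MyResourceGroup> \
    --template-file main.bicep \
    --parameters sku=S
```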
+
:::code language="bicep" source="~/quickstart-templates/quickstarts/microsoft.cognitiveservices/cognitive-services-universalkey/main.bicep":::

One Azure resource is defined in the Bicep file: [Microsoft.CognitiveServices/accounts](/azure/templates/microsoft.cognitiveservices/accounts) specifies that it is a Cognitive Services resource. The `kind` field in the Bicep file defines the type of resource.
cognitive-services Data Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/concepts/data-limits.md
The following limit specifies the maximum number of characters that can be in a
| Feature | Value |
|||
| Conversation issue and resolution summarization | 40,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
-| Text Analytics for health | 30,720 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
+| Text Analytics for health | 125,000 characters as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). |
| All other pre-configured features (synchronous) | 5,120 as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements). If you need to submit larger documents, consider using the feature asynchronously (described below). |
| All other pre-configured features ([asynchronous](use-asynchronously.md)) | 125,000 characters across all submitted documents, as measured by [StringInfo.LengthInTextElements](/dotnet/api/system.globalization.stringinfo.lengthintextelements) (maximum of 25 documents). |
Exceeding the following document limits will generate an HTTP 400 error code.
| Personally Identifying Information (PII) detection | 5 |
| Document summarization | 25 |
| Entity Linking | 5 |
-| Text Analytics for health | 10 for the web-based API, 1000 for the container. |
+| Text Analytics for health | 25 for the web-based API, 1000 for the container (125,000 characters in total). |
## Rate limits
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/how-to/call-api.md
The entity linking feature can be used to identify and disambiguate the identity of an entity found in text (for example, determining whether an occurrence of the word "*Mars*" refers to the planet, or to the Roman god of war). It will return the entities in the text with links to [Wikipedia](https://www.wikipedia.org/) as a knowledge base.
-> [!TIP]
-> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+
+## Development options
+
## Determine how to process the data (optional)
When you submit documents to be processed by entity linking, you can specify whi
Entity linking produces a higher-quality result when you give it smaller amounts of text to work on. This is opposite from some features, like key phrase extraction which performs better on larger blocks of text. To get the best results from both operations, consider restructuring the inputs accordingly.
-To send an API request, You will need a Language resource endpoint and key.
+To send an API request, you will need a Language resource endpoint and key.
> [!NOTE]
> You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
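For example, an entity linking request to the Language analyze-text REST endpoint can look like the following sketch. The endpoint, key, and text are placeholders, and the API version shown may differ from the latest available version:

```bash
# Minimal entity linking request against the Language REST API.
# Replace <your-resource-name> and <your-key> with your own values.
curl -X POST "https://<your-resource-name>.cognitiveservices.azure.com/language/:analyze-text?api-version=2022-05-01" \
    -H "Ocp-Apim-Subscription-Key: <your-key>" \
    -H "Content-Type: application/json" \
    -d '{
        "kind": "EntityLinking",
        "analysisInput": {
            "documents": [
                { "id": "1", "language": "en", "text": "Mars is the fourth planet from the Sun." }
            ]
        }
    }'
```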
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/entity-linking/overview.md
Previously updated : 06/15/2022 Last updated : 01/10/2023
This documentation contains the following types of articles:
* [**Quickstarts**](quickstart.md) are getting-started instructions to guide you through making requests to the service. * [**How-to guides**](how-to/call-api.md) contain instructions for using the service in more specific ways.
+## Get started with entity linking
-The result will be a collection of recognized entities in your text, with URLs to Wikipedia as an online knowledge base.
[!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)]
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/how-to/call-api.md
Previously updated : 07/27/2022 Last updated : 01/10/2023
This feature is useful if you need to quickly identify the main points in a coll
> [!TIP] > If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+## Development options
+
## Determine how to process the data (optional)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/key-phrase-extraction/overview.md
Previously updated : 06/15/2022 Last updated : 01/10/2023
This documentation contains the following types of articles:
[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
-## Deploy on premises using Docker containers
+## Get started with key phrase extraction
+
-Use the available Docker container to [deploy this feature on-premises](how-to/use-containers.md). These docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
## Responsible AI
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/how-to/call-api.md
Language detection is useful for content stores that collect arbitrary text, whe
The Language Detection feature can detect a wide range of languages, variants, dialects, and some regional or cultural languages.
-> [!TIP]
-> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
+## Development options
+
## Determine how to process the data (optional)
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-detection/overview.md
This documentation contains the following types of articles:
[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
-## Deploy on premises using Docker containers
-Use the available Docker container to [deploy this feature on-premises](how-to/use-containers.md). These docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
+## Get started with language detection
+ ## Responsible AI
cognitive-services Language Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/language-studio.md
Previously updated : 07/21/2022 Last updated : 01/03/2023
Language Studio provides you with a platform to try several service features, and see what they return in a visual manner. It also provides you with an easy-to-use experience to create custom projects and models to work on your data. Using the Studio, you can get started without needing to write code, and then use the available client libraries and REST APIs in your application.
-## Get started using Language Studio
+## Try Language Studio before signing up
+Language Studio lets you try available features without needing to create an Azure account or an Azure resource. From the main page of the studio, select one of the listed categories to see [available features](overview.md#available-features) you can try.
-## Language Studio pre-configured features
-The Language service offers multiple features that use prebuilt, pre-configured models for performing various tasks such as: entity linking, language detection, and key phrase extraction. See the [Azure Cognitive Service for Language overview](overview.md) to see the list of features offered by the service.
+Once you choose a feature, you'll be able to send several text examples to the service, and see example output.
-Each of these features has a demo-like experience inside Language Studio that lets you input text, and presents the response both visually, and in JSON. These demos help you quickly test these prebuilt features without using code.
-## Language Studio customizable features
+## Use Language Studio with your own text
-The Language service also offers multiple features that let you create, train, and deploy custom models to better fit your data. For example, custom content classification and custom question answering. For features with customization, Language Studio offers workflows that let developers and subject matter experts build models without needing machine learning expertise.
+When you're ready to use Language Studio features on your own text data, you will need an Azure Language resource for authentication and [billing](https://aka.ms/unifiedLanguagePricing). You can also use this resource to call the REST APIs and client libraries programmatically. Follow these steps to get started.
+
+> [!IMPORTANT]
+> The setup process and requirements for custom features are different. If you're using one of the following custom features, we recommend using the quickstart articles linked below to get started more easily.
+> * [Conversational Language Understanding](./conversational-language-understanding/quickstart.md)
+> * [Custom Text Classification](./custom-classification/quickstart.md)
+> * [Custom Named Entity Recognition (NER)](./custom-named-entity-recognition/quickstart.md)
+> * [Orchestration workflow](./orchestration-workflow/quickstart.md)
+
+1. Create an Azure Subscription. You can [create one for free](https://azure.microsoft.com/free/ai/).
+
+2. [Log into Language Studio](https://aka.ms/languageStudio). If it's your first time logging in, you'll see a window appear that lets you choose a language resource.
+
+ :::image type="content" source="./media/language-resource-small.png" alt-text="A screenshot showing the resource selection screen in Language Studio." lightbox="./media/language-resource.png":::
+
+3. Select **Create a new language resource**. Then enter information for your new resource, such as a name, location and resource group.
+
+
+ > [!TIP]
+ > * When selecting a location for your Azure resource, choose one that's closest to you for lower latency.
+ > * We recommend turning the **Managed Identity** option **on**, to authenticate your requests across Azure.
+ > * If you use the free pricing tier, you can keep using the Language service even after your Azure free trial or service credit expires.
+
+ :::image type="content" source="./media/create-new-resource-small.png" alt-text="A screenshot showing the resource creation screen in Language Studio." lightbox="./media/create-new-resource.png":::
+
+4. Select **Done**. Your resource will be created, and you will be able to use the different features offered by the Language service with your own text.
## Clean up resources
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/how-to-call.md
Previously updated : 03/01/2022 Last updated : 01/10/2023
The NER feature can evaluate unstructured text, and extract named entities from text in several pre-defined categories, for example: person, location, event, product, and organization.
+## Development options
+
+
## Determine how to process the data (optional)

### Specify the NER model
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/named-entity-recognition/overview.md
Named Entity Recognition (NER) is one of the features offered by [Azure Cognitiv
[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
+## Get started with named entity recognition
+
+
[!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)]

## Responsible AI
cognitive-services How To Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/how-to-call.md
The PII feature can evaluate unstructured text, extract and redact sensitive information (PII) and health information (PHI) in text across several [pre-defined categories](concepts/entity-categories.md).
+## Development options
+
+
## Determine how to process the data (optional)

### Specify the PII detection model
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/personally-identifiable-information/overview.md
Previously updated : 08/02/2022 Last updated : 01/10/2023
PII comes into two shapes:
[!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
+## Get started with PII detection
+
+
[!INCLUDE [Developer reference](../includes/reference-samples-text-analytics.md)]
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/how-to/call-api.md
Sentiment analysis and opinion mining are two ways of detecting positive and negative sentiment. Using sentiment analysis, you can get sentiment labels (such as "negative", "neutral" and "positive") and confidence scores at the sentence and document-level. Opinion Mining provides granular information about the opinions related to words (such as the attributes of products or services) in the text.
-> [!TIP]
-> If you want to start using this feature, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
-- ## Sentiment Analysis Sentiment Analysis applies sentiment labels to text, which are returned at a sentence and document level, with a confidence score for each.
For example, if a customer leaves feedback about a hotel such as "The room was g
If you're using the REST API, to get Opinion Mining in your results, you must include the `opinionMining=true` flag in a request for sentiment analysis. The Opinion Mining results will be included in the sentiment analysis response. Opinion mining is an extension of Sentiment Analysis and is included in your current [pricing tier](https://azure.microsoft.com/pricing/details/cognitive-services/text-analytics/).
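As a hedged sketch, a v3.1 REST request with the flag set might look like the following; the endpoint and key are placeholder assumptions, and the snippet assumes Node.js 18 or later for the built-in `fetch`:

```javascript
// Placeholder endpoint and key; use the values from your own Language resource.
const endpoint = "<your-language-resource-endpoint>";
const key = "<your-key>";

async function analyzeSentimentWithOpinionMining() {
  // The opinionMining=true query parameter asks for opinion mining results
  // to be included in the sentiment analysis response.
  const response = await fetch(`${endpoint}/text/analytics/v3.1/sentiment?opinionMining=true`, {
    method: "POST",
    headers: {
      "Ocp-Apim-Subscription-Key": key,
      "Content-Type": "application/json"
    },
    body: JSON.stringify({
      documents: [
        { id: "1", language: "en", text: "The room was great, but the staff was unfriendly." }
      ]
    })
  });
  const analysis = await response.json();
  console.log(JSON.stringify(analysis, null, 2));
}

analyzeSentimentWithOpinionMining().catch(console.error);
```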
+## Development options
++ ## Determine how to process the data (optional) ### Specify the sentiment analysis model
When you submit documents to be processed by sentiment analysis, you can specify
Sentiment analysis and opinion mining produce a higher-quality result when you give it smaller amounts of text to work on. This is opposite from some features, like key phrase extraction which performs better on larger blocks of text.
-To send an API request, you will need your Language resource endpoint and key.
+To send an API request, you'll need your Language resource endpoint and key.
> [!NOTE] > You can find the key and endpoint for your Language resource on the Azure portal. They will be located on the resource's **Key and endpoint** page, under **resource management**.
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/sentiment-opinion-mining/overview.md
Previously updated : 07/27/2022 Last updated : 01/12/2023
Both sentiment analysis and opinion mining work with a variety of [written langu
The sentiment analysis feature provides sentiment labels (such as "negative", "neutral", and "positive") based on the highest confidence score found by the service at the sentence and document level. This feature also returns confidence scores between 0 and 1 for each document, and for the sentences within it, for positive, neutral, and negative sentiment.
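As an illustrative sketch (the values here are assumptions, not actual service output), a document result has roughly this shape:

```javascript
// Illustrative shape of a sentiment analysis result; all values are assumed.
const documentResult = {
  sentiment: "positive",
  confidenceScores: { positive: 0.98, neutral: 0.01, negative: 0.01 },
  sentences: [
    {
      text: "The hotel was lovely.",
      sentiment: "positive",
      confidenceScores: { positive: 0.98, neutral: 0.01, negative: 0.01 }
    }
  ]
};
```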
-### Deploy on premises using Docker containers
-
-Use the available Docker container to [deploy sentiment analysis on-premises](how-to/use-containers.md). These docker containers enable you to bring the service closer to your data for compliance, security, or other operational reasons.
- ## Opinion mining Opinion mining is a feature of sentiment analysis. Also known as aspect-based sentiment analysis in Natural Language Processing (NLP), this feature provides more granular information about the opinions related to words (such as the attributes of products or services) in text. [!INCLUDE [Typical workflow for pre-configured language features](../includes/overview-typical-workflow.md)]
+## Get started with sentiment analysis
++ ## Responsible AI An AI system includes not only the technology, but also the people who will use it, the people who will be affected by it, and the environment in which it is deployed. Read the [transparency note for sentiment analysis](/legal/cognitive-services/language-service/transparency-note-sentiment-analysis?context=/azure/cognitive-services/language-service/context/context) to learn about responsible AI use and deployment in your systems. You can also see the following articles for more information:
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/summarization/overview.md
Previously updated : 12/06/2022 Last updated : 01/12/2023
Conversation summarization feature would simplify the text into the following:
## Get started with summarization
-# [Document summarization](#tab/document-summarization)
-
-To use this feature, you submit raw unstructured text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use summarization:
-
-|Development option |Description | Links |
-||||
-| Language Studio | A web-based platform that enables you to try document summarization without needing writing code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/summarization) <br> • [Quickstart: Use Language Studio](../language-studio.md) |
-| REST API or Client library (Azure SDK) | Integrate document summarization into your applications using the REST API, or the client library available in a variety of languages. | • [Quickstart: Use document summarization](quickstart.md) |
-
-# [Conversation summarization](#tab/conversation-summarization)
-
-To use this feature, you submit raw text for analysis and handle the API output in your application. Analysis is performed as-is, with no additional customization to the model used on your data. There are two ways to use conversation summarization:
-|Development option |Description | Links |
-||||
-| REST API | Integrate conversation summarization into your applications using the REST API. | [Quickstart: Use conversation summarization](quickstart.md?tabs=conversation-summarization&pivots=rest-api) |
-- ## Input requirements and service limits
cognitive-services Assertion Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/concepts/assertion-detection.md
Text Analytics for health returns assertion modifiers, which are informative att
**ASSOCIATION** – describes whether the concept is associated with the subject of the text or someone else. * **Subject** [Default]: the concept is associated with the subject of the text, usually the patient.
-* **Someone_Else**: the concept is associated with someone who is not the subject of the text.
+* **Other**: the concept is associated with someone who is not the subject of the text.
Assertion detection represents negated entities as a negative value for the certainty category, for example:
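As an illustrative sketch (the input text, category, and score are assumptions), an entity for "The patient denies chest pain." might carry an assertion like this:

```javascript
// Illustrative entity excerpt; the text, category, and score are assumed values.
// The negation is expressed as a "negative" certainty assertion on the entity.
const entityExcerpt = {
  text: "chest pain",
  category: "SymptomOrSign",
  confidenceScore: 0.9,
  assertion: {
    certainty: "negative"
  }
};
```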
cognitive-services Call Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/how-to/call-api.md
[!INCLUDE [service notice](../includes/service-notice.md)]
-Text Analytics for health can be used to extract and label relevant medical information from unstructured texts such as doctors' notes, discharge summaries, clinical documents, and electronic health records. The service performs [named entity recognition](../concepts/health-entity-categories.md), [relation extraction](../concepts/relation-extraction.md), [entity linking](https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html), and [assertion detection](../concepts/assertion-detection.md) to uncover insights from the input text. For information on the returned confidence scores, see the [transparency note](/legal/cognitive-services/text-analytics/transparency-note#general-guidelines-to-understand-and-improve-performance?context=/azure/cognitive-services/text-analytics/context/context).
+Text Analytics for health can be used to extract and label relevant medical information from unstructured texts such as doctors' notes, discharge summaries, clinical documents, and electronic health records. The service performs [named entity recognition](../concepts/health-entity-categories.md), [relation extraction](../concepts/relation-extraction.md), [entity linking](https://www.nlm.nih.gov/research/umls/sourcereleasedocs/index.html), and [assertion detection](../concepts/assertion-detection.md) to uncover insights from the input text. For information on the returned confidence scores, see the [transparency note](/legal/cognitive-services/text-analytics/transparency-note#general-guidelines-to-understand-and-improve-performance?context=/azure/cognitive-services/text-analytics/context/context).
There are two ways to call the service: * A [Docker container](use-containers.md) (synchronous) * Using the web-based API and client libraries (asynchronous)
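As a minimal sketch of the asynchronous pattern with the `@azure/ai-text-analytics` client library; the endpoint, key, and input text are placeholder assumptions:

```javascript
const { TextAnalyticsClient, AzureKeyCredential } = require("@azure/ai-text-analytics");

// Placeholder endpoint and key; use the values from your own Language resource.
const client = new TextAnalyticsClient(
  "<your-language-resource-endpoint>",
  new AzureKeyCredential("<your-key>")
);

async function main() {
  const documents = ["Patient reports a headache and was prescribed 100mg ibuprofen."];
  // The hosted API is asynchronous: begin the operation, then poll until it completes.
  const poller = await client.beginAnalyzeHealthcareEntities(documents);
  const results = await poller.pollUntilDone();
  for await (const result of results) {
    if (!result.error) {
      for (const entity of result.entities) {
        console.log(`${entity.text} (${entity.category})`);
      }
    }
  }
}

main().catch(console.error);
```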
+## Development options
-> [!TIP]
-> If you want to test out the feature without writing any, you can follow the [quickstart article](../quickstart.md) to get started. You can also make example requests using [Language Studio](../../language-studio.md) without needing to write code.
## Specify the Text Analytics for health model
cognitive-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/language-service/text-analytics-for-health/overview.md
Additionally, Text Analytics for health can return the processed output using th
> [!VIDEO https://learn.microsoft.com/Shows/AI-Show/Introducing-Text-Analytics-for-Health/player] -- ## Usage scenarios Text Analytics for health can be used in multiple scenarios across a variety of industries.
Some common customer motivations for using Text Analytics for health include:
|Review and report medical information|Support solutions for reporting and flagging possible errors in medical information resulting from review processes such as quality assurance.| |Assist with decision support|Enable solutions that provide humans with assistive information relating to patients' medical information for faster and more reliable decisions.|
+## Get started with Text Analytics for health
-## Get started with Text analytics for health
-
-To use this feature, all you need is to submit raw unstructured text for analysis. Analysis is performed as-is, with no additional customization to the model used on your data. There are three ways to get started Text Analytics for health:
--
-|Development option |Description | Links |
-||||
-| Language Studio | A web-based platform that enables you to try Text Analytics for health without needing to write any code. | • [Language Studio website](https://language.cognitive.azure.com/tryout/healthAnalysis) <br> • [Quickstart: Use Language Studio](../language-studio.md) |
-| REST API or Client library (Azure SDK) | Integrate Text Analytics for health into your applications using the REST API or the client library, available in a variety of development languages. | • [Quickstart: Use Text Analytics for health](quickstart.md) |
-| Docker container | Use the available Docker container to deploy this feature on-premises, letting you bring the service closer to your data for compliance, security, or other operational reasons. | • [How to deploy on-premises](how-to/use-containers.md) |
## Input requirements and service limits
cognitive-services Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cognitive-services/openai/concepts/models.md
When using our Embeddings models, keep in mind their limitations and risks.
| | | | | | | text-similarity-ada-001 | No | Yes | East US, South Central US, West Europe | N/A | | text-similarity-babbage-001 | No | Yes | South Central US, West Europe | N/A |
-| text-similarit-curie-001 | No | Yes | East US, South Central US, West Europe | N/A |
+| text-similarity-curie-001 | No | Yes | East US, South Central US, West Europe | N/A |
| text-similarity-davinci-001 | No | Yes | South Central US, West Europe | N/A | | text-search-ada-doc-001 | No | Yes | South Central US, West Europe | N/A | | text-search-ada-query-001 | No | Yes | South Central US, West Europe | N/A |
When using our Embeddings models, keep in mind their limitations and risks.
## Next steps
-[Learn more about Azure OpenAI](../overview.md).
+[Learn more about Azure OpenAI](../overview.md).
communication-services Logic App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/logic-app.md
Title: Quickstart - Send chat message in Power Automate with Azure Communication Services
+ Title: Send a chat message in Power Automate
description: In this quickstart, learn how to send a chat message in Azure Logic Apps workflows by using the Azure Communication Services Chat connector.-+
-# Quickstart: Send chat message in Power Automate with Azure Communication Services
+# Quickstart: Send a chat message in Power Automate
-You can create automated workflows that can send chat messages using the Azure Communication Services Chat connector. This quickstart will show how to create a chat, add a participant, send a message and list messages into an existing workflow.
+You can create automated workflows that send chat messages by using the Azure Communication Services Chat connector. This quickstart shows you how to create a chat, add a participant, send a message, and list messages in an existing workflow.
## Prerequisites
You can create automated workflows that can send chat messages using the Azure C
- An active Azure Communication Services resource, or [create a Communication Services resource](../create-communication-resource.md). -- An active Logic Apps resource (logic app), or [create a blank logic app but with the trigger that you want to use](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). Currently, the Azure Communication Services Chat connector provides only actions, so your logic app requires a trigger, at minimum.-
+- An active Azure Logic Apps resource, or [create a blank logic app with the trigger that you want to use](../../../logic-apps/quickstart-create-first-logic-app-workflow.md). Currently, the Communication Services Chat connector provides only actions, so your logic app requires a trigger, at minimum.
## Create user
-Add a new step in your workflow by using the Azure Communication Services Identity connector, follow these steps in Power Automate with your Power Automate flow open in edit mode.
-
-1. On the designer, under the step where you want to add the new action, select New step. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and select Add an action.
+Complete these steps in Power Automate with your Power Automate flow open in edit mode.
-1. In the Choose an operation search box, enter Communication Services Identity. From the actions list, select Create a user.
+To add a new step in your workflow by using the Communication Services Identity connector:
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action.":::
+1. In the designer, under the step where you want to add the new action, select **New step**. Alternatively, to add the new action between steps, move your pointer over the arrow between those steps, select the plus sign (+), and then select **Add an action**.
-1. Provide the Connection String. This can be found in [Microsoft Azure](https://portal.azure.com/), within your Azure Communication Service Resource, on the Keys option from the left menu > Connection String
+1. In the **Choose an operation** search box, enter **Communication Services Identity**. In the list of actions, select **Create a user**.
- :::image type="content" source="./media/logic-app/azure-portal-connection-string.png" alt-text="Screenshot that shows the Keys page within an Azure Communication Services Resource." lightbox="./media/logic-app/azure-portal-connection-string.png":::
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action.":::
-1. Provide a Connection Name
+1. Enter the connection string. To get the connection string URL in the [Azure portal](https://portal.azure.com/), go to the Azure Communication Services resource. In the resource menu, select **Keys**, and then select **Connection string**. Select the copy icon to copy the connection string.
-1. Click "Show advanced options" and select the Token Scope the action will also output an access token and its expiration time with the specified scope.
+ :::image type="content" source="./media/logic-app/azure-portal-connection-string.png" alt-text="Screenshot that shows the Keys pane for an Azure Communication Services resource." lightbox="./media/logic-app/azure-portal-connection-string.png":::
- This action will output a User ID, which is a Communication Services user identity.
- Additionally, if you click "Show advanced options" and select the Token Scope the action will also output an access token and its expiration time with the specified scope.
+1. Enter a name for the connection.
+1. Select **Show advanced options**, and then select the token scope. The action generates an access token and its expiration time with the specified scope. This action also generates a user ID that's a Communication Services user identity.
+
:::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action.png" alt-text="Screenshot that shows the Azure Communication Services Identity connector Create user action options.":::
-1. Select "chat"
+1. In **Token Scopes Item**, select **chat**.
:::image type="content" source="./media/logic-app/azure-communications-services-connector-create-user-action-advanced.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector advanced options.":::
-1. Click Create. This will output the User ID and an Access Token.
+1. Select **Create**. The user ID and an access token are shown.
## Create a chat thread
-1. Add a new action
+1. Add a new action.
-1. In the Choose an operation search box, enter Communication Services Chat. From the actions list, select Create chat thread.
+1. In the **Choose an operation** search box, enter **Communication Services Chat**. In the list of actions, select **Create chat thread**.
:::image type="content" source="./media/logic-app/azure-communications-services-connector-create-chat-thread.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Create a chat thread action.":::
-
-1. Provide the Azure Communication Services endpoint URL. This can be found in [Microsoft Azure](https://portal.azure.com/), within your Azure Communication Service Resource, on the Keys option from the left menu > Endpoint.
-1. Provide a Connection Name
+1. Enter the Communication Services endpoint URL. To get the endpoint URL in the [Azure portal](https://portal.azure.com/), go to the Azure Communication Services resource. In the resource menu, select **Keys**, and then select **Endpoint**.
+
+1. Enter a name for the connection.
+
+1. Select the access token that was generated in the preceding section, and then add a chat thread topic description. Add the created user and enter a name for the participant.
-1. Select the Access Token from the previous step, add a Chat thread topic description. Additionally, add the created user and add a Name for the participant.
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-chat-thread-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Create chat thread action dialog.":::
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-create-chat-thread-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Create chat thread action input fields.":::
-
## Send a message
-1. Add a new action
+1. Add a new action.
-1. In the Choose an operation search box, enter Communication Services Chat. From the actions list, select Send a Chat message to chat thread.
+1. In the **Choose an operation** search box, enter **Communication Services Chat**. In the list of actions, select **Send message to chat thread**.
:::image type="content" source="./media/logic-app/azure-communications-services-connector-send-chat-message.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action.":::
-
-1. Provide the Access Token, Thread ID, Content, and Name information as shown below.
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-chat-message-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action input fields.":::
+
+1. Enter the access token, thread ID, content, and name.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-send-chat-message-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action dialog.":::
## List chat thread messages
-To verify you have correctly sent a message, we will add one more action to list the chat thread messages.
-1. Add a new action
+To verify that you sent a message correctly:
+
+1. Add a new action.
-1. In the Choose an operation search box, enter Communication Services Chat. From the actions list, select List chat thread messages.
+1. In the **Choose an operation** search box, enter **Communication Services Chat**. In the list of actions, select **List chat thread messages**.
:::image type="content" source="./media/logic-app/azure-communications-services-connector-list-chat-messages.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector List chat messages action.":::
-
-1. Provide the Access token and Thread ID as follows
-
- :::image type="content" source="./media/logic-app/azure-communications-services-connector-list-chat-messages-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector Send chat message action input.":::
+
+1. Enter the access token and thread ID.
+
+ :::image type="content" source="./media/logic-app/azure-communications-services-connector-list-chat-messages-input.png" alt-text="Screenshot that shows the Azure Communication Services Chat connector List chat messages action dialog.":::
## Test your logic app
-To manually start your workflow, on the designer toolbar, select **Run**. The workflow should create a user, issue an access token for that user, then remove it and delete the user. For more information, review [how to run your workflow](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-workflow).
+To manually start your workflow, on the designer toolbar, select **Run**. The workflow creates a user, issues an access token for that user, and then removes the token and deletes the user. For more information, review [How to run your workflow](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#run-workflow).
-Now click on the List chat thread messages and check the output, the message sent will be in the action outputs.
+Now, select **List chat thread messages**. In the action outputs, check for the message that was sent.
## Clean up resources
-To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [how to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
+To remove a Communication Services subscription, delete the Communication Services resource or resource group. Deleting the resource group also deletes any other resources in that group. For more information, review [How to clean up Communication Services resources](../create-communication-resource.md#clean-up-resources).
To clean up your logic app workflow and related resources, review [how to clean up Logic Apps resources](../../../logic-apps/quickstart-create-first-logic-app-workflow.md#clean-up-resources). ## Next steps
-In this quickstart, you learned how to create a user, create a chat thread and send a message using the Azure Communication Services Identity and Azure Communication Services Chat connectors. To learn more check the [Azure Communication Services Chat Connector](/connectors/acschat/) documentation.
-
-To learn more about access tokens check [Create and Manage Azure Communication Services users and access tokens](../chat/logic-app.md).
+In this quickstart, you learned how to create a user, create a chat thread, and send a message by using the Communication Services Identity and Communication Services Chat connectors. To learn more, review [Communication Services Chat connector](/connectors/acschat/).
-To learn more about how to send an email check [Send email message in Power Automate with Azure Communication Services](../email/logic-app.md).
+Learn how to [create and manage Communication Services users and access tokens](../chat/logic-app.md).
+Learn how to [send an email message in Power Automate by using Communication Services](../email/logic-app.md).
communication-services React Native https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/chat/react-native.md
Title: Using Chat SDK with React Native
+ Title: Use the Chat SDK with React Native
-description: In this quickstart, you'll learn how to use the Azure Communication Chat SDK with React Native
-
+description: Learn how to use the Azure Communication Services Chat SDK with React Native.
+ Last updated 11/30/2021
-# Quickstart: Using Chat SDK with React Native
+# Quickstart: Use the Chat SDK with React Native
-In this quickstart, we'll set up the Chat JavaScript SDK with React Native. This is supported for Azure Communication JavaScript Chat SDK v1.1.1 and later.
+In this quickstart, you set up the packages in the Azure Communication Services Chat JavaScript SDK to support chat in your React Native app. The steps described in the quickstart are supported for Azure Communication Services JavaScript Chat SDK 1.1.1 and later.
-## Setting up with React Native
+## Set up the chat packages to work with React Native
-The following steps will be required to run Azure Communication JavaScript Chat SDK with React Native after [initializing your React Native project](https://reactnative.dev/docs/environment-setup#installing-dependencies).
+Currently, Communication Services chat packages are available as Node packages. Because not all Node modules are compatible with React Native, the modules require a React Native port to work.
-ACS chat packages currently available are Node packages. Since not all Node modules are compatible with React Native, they require a React Native port in order to work. To make @azure/communication-chat work with React Native, you will need to install the below mentioned packages that contain React Native ports of the Node core required in @azure/communication-chat.
+After you [initialize your React Native project](https://reactnative.dev/docs/environment-setup#installing-dependencies), complete the following steps to make `@azure/communication-chat` work with React Native. The steps install the packages that contain React Native ports of the Node Core modules that are required in `@azure/communication-chat`.
-1. Install `node-libs-react-native`
- ``` console
+1. Install `node-libs-react-native`:
+
+ ```console
npm install node-libs-react-native --save-dev ```
-2. Install `stream-browserify`
- ``` console
+
+1. Install `stream-browserify`:
+
+ ```console
npm install stream-browserify --save-dev ```
-3. Install `react-native-get-random-values`
- ``` console
+
+1. Install `react-native-get-random-values`:
+
+ ```console
npm install react-native-get-random-values --save-dev ```
-4. Install `react-native-url-polyfill`
- ``` console
+
+1. Install `react-native-url-polyfill`:
+
+ ```console
npm install react-native-url-polyfill --save-dev ```
-5. Update _metro.config.js_ to use React Native compatible Node Core modules
- ```JavaScript
+
+1. Update _metro.config.js_ to use React Native-compatible Node Core modules:
+
+ ```javascript
module.exports = { // ... resolver: {
ACS chat packages currently available are Node packages. Since not all Node modu
tls: require.resolve('node-libs-react-native/mock/tls') } };
+ }
```
-6. Add following _import_ on top of your entry point file
- ```JavaScript
+
+1. Add the following `import` commands at the top of your entry point file:
+
+ ```javascript
import 'node-libs-react-native/globals'; import 'react-native-get-random-values'; import 'react-native-url-polyfill/auto'; ```
-7. Install Communication Service packages
+
+1. Install Communication Services packages:
+ ```console npm install @azure/communication-common@1.1.0 --save
ACS chat packages currently available are Node packages. Since not all Node modu
``` ## Next steps
-In this quickstart you learned:
-> [!div class="checklist"]
-> * Communication Services packages required to add chat to your app
-> * How to set up Chat SDK for use in React Native environment
-For a step by step guide on how to use Chat SDK to start a chat, refer to the [quickstart](./get-started.md?pivots=programming-language-javascript).
+In this quickstart, you learned how to set up the required Communication Services packages to add chat to your app in a React Native environment.
+Learn how to [use the Chat SDK to start a chat](./get-started.md?pivots=programming-language-javascript).
communication-services Get Started Volume Indicator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-volume-indicator.md
+
+ Title: Quickstart - Add volume indicator to your Web calling app
+
+description: In this quickstart, you'll learn how to check call volume within your Web app when using Azure Communication Services.
+++ Last updated : 1/18/2023++++++
+# Accessing call volume level
+As a developer, you can check the microphone volume in JavaScript. This quickstart shows examples of how to do so in the Azure Communication Services Calling Web SDK (WebJS).
+
+## Prerequisites
+
+>[!IMPORTANT]
+> The quickstart examples here are available starting with public preview version [1.9.1-beta.1](https://www.npmjs.com/package/@azure/communication-calling/v/1.9.1-beta.1) of the Calling Web SDK. Make sure to use that SDK version or newer when trying this quickstart.
+
+## Checking the audio stream volume
+As a developer, you might want to check and display the current microphone volume to end users. The Azure Communication Services calling API exposes this information by using `getVolume`. The `getVolume` value is a number ranging from 0 to 100 (where 0 indicates that no audio is detected and 100 is the maximum detectable level). The value is sampled every 200 ms to provide a near real-time volume level.
+
+### Example usage
+The following sample shows how to get the volume of the selected microphone and of a remote incoming audio stream by accessing `getVolume`.
+
+```javascript
+// Get the volume of the local audio source
+const localVolumeIndicator = await new SDK.LocalAudioStream(deviceManager.selectedMicrophone).getVolume();
+localVolumeIndicator.on('levelChanged', () => {
+    console.log(`Volume is ${localVolumeIndicator.level}`);
+});
+
+// Get the volume level of the remote incoming audio source
+const remoteAudioStream = call.remoteAudioStreams[0];
+const remoteVolumeIndicator = await remoteAudioStream.getVolume();
+remoteVolumeIndicator.on('levelChanged', () => {
+    console.log(`Volume is ${remoteVolumeIndicator.level}`);
+});
+
+```
+
container-apps Disaster Recovery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/disaster-recovery.md
Previously updated : 6/17/2022 Last updated : 1/18/2023 # Disaster recovery guidance for Azure Container Apps
Additionally, the following resources can help you create your own disaster reco
## Set up zone redundancy in your Container Apps environment
-To take advantage of availability zones, you must enable zone redundancy when you create the Container Apps environment. The environment must include a virtual network (VNET) with an infrastructure subnet. To ensure proper distribution of replicas, you should configure your app's minimum and maximum replica count with values that are divisible by three. The minimum replica count should be at least three.
+To take advantage of availability zones, you must enable zone redundancy when you create the Container Apps environment. The environment must include a virtual network (VNET) with an available subnet. To ensure proper distribution of replicas, you should configure your app's minimum and maximum replica count with values that are divisible by three. The minimum replica count should be at least three.
+
+### Enable zone redundancy via the Azure portal
-### Enable zone redundancy via the Azure portal
-
To create a container app in an environment with zone redundancy enabled using the Azure portal: 1. Navigate to the Azure portal.
Create a VNET and infrastructure subnet to include with the Container Apps envir
When using these commands, replace the `<PLACEHOLDERS>` with your values.
+>[!NOTE]
+> The subnet associated with a Container App Environment requires a CIDR prefix of `/23` or larger.
+ # [Bash](#tab/bash) ```azurecli
container-apps Firewall Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/firewall-integration.md
Using custom user-defined routes (UDRs) or ExpressRoutes, other than with UDRs o
The following tables describe how to configure a collection of NSG allow rules.
+>[!NOTE]
+> The subnet associated with a Container App Environment requires a CIDR prefix of `/23` or larger.
+ ### Inbound | Protocol | Port | ServiceTag | Description |
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
As you create a custom VNET, keep in mind the following situations:
- If you want your container app to restrict all outside access, create an [internal Container Apps environment](vnet-custom-internal.md). -- When you provide your own VNET, you need to provide a subnet that is dedicated to the Container App Environment you will deploy. This subnet cannot be used by other services.
+- When you provide your own VNET, you need to provide a subnet that is dedicated to the Container App Environment you'll deploy. This subnet can't be used by other services.
- Network addresses are assigned from a subnet range you define as the environment is created.
The second URL grants access to the log streaming service and the console. If ne
## Ports and IP addresses >[!NOTE]
-> The subnet associated with a Container App Environment requires a CIDR prefix of /21 or larger (/20, /19 etc.).
+> The subnet associated with a Container App Environment requires a CIDR prefix of `/23` or larger.
The following ports are exposed for inbound connections.
IP addresses are broken down into the following types:
| Type | Description | |--|--| | Public inbound IP address | Used for app traffic in an external deployment, and management traffic in both internal and external deployments. |
-| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment is not supported. Outbound IPs are not guaranteed and may change over time. |
+| Outbound public IP | Used as the "from" IP for outbound connections that leave the virtual network. These connections aren't routed down a VPN. Using a NAT gateway or other proxy for outbound traffic from a Container App environment isn't supported. Outbound IPs are not guaranteed and may change over time. |
| Internal load balancer IP address | This address only exists in an internal deployment. | | App-assigned IP-based TLS/SSL addresses | These addresses are only possible with an external deployment, and when IP-based TLS/SSL binding is configured. |
If you're using the Azure CLI and the [platformReservedCidr](vnet-custom-interna
There's no forced tunneling in Container Apps routes. ## DNS-- **Custom DNS**: If your VNET uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) uses this IP address to resolve requests. If you do not use the Azure recursive resolvers, the Container Apps environment will not function.
+- **Custom DNS**: If your VNET uses a custom DNS server instead of the default Azure-provided DNS server, configure your DNS server to forward unresolved DNS queries to `168.63.129.16`. [Azure recursive resolvers](../virtual-network/virtual-networks-name-resolution-for-vms-and-role-instances.md#name-resolution-that-uses-your-own-dns-server) uses this IP address to resolve requests. If you don't use the Azure recursive resolvers, the Container Apps environment won't function.
- **VNET-scope ingress**: If you plan to use VNET-scope [ingress](./ingress.md#configuration) in an internal Container Apps environment, configure your domains in one of the following ways:
 1. **Non-custom domains**: If you do not plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App Environment's default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record that points to the static IP address of the Container Apps environment.
+ 1. **Non-custom domains**: If you don't plan to use custom domains, create a private DNS zone that resolves the Container Apps environment's default domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the Container App Environment's default domain (`<UNIQUE_IDENTIFIER>.<REGION_NAME>.azurecontainerapps.io`), with an `A` record that points to the static IP address of the Container Apps environment.
1. **Custom domains**: If you plan to use custom domains, use a publicly resolvable domain to [add a custom domain and certificate](./custom-domains-certificates.md#add-a-custom-domain-and-certificate) to the container app. Additionally, create a private DNS zone that resolves the apex domain to the static IP address of the Container Apps environment. You can use [Azure Private DNS](../dns/private-dns-overview.md) or your own DNS server. If you use Azure Private DNS, create a Private DNS Zone named as the apex domain, with an `A` record that points to the static IP address of the Container Apps environment. ## Managed resources
-When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you are billed for the following:
+When you deploy an internal or an external environment into your own network, a new resource group prefixed with `MC_` is created in the Azure subscription where your environment is hosted. This resource group contains infrastructure components managed by the Azure Container Apps platform, and shouldn't be modified. The resource group contains Public IP addresses used specifically for outbound connectivity from your environment and a load balancer. In addition to the [Azure Container Apps billing](./billing.md), you're billed for:
- Two standard static [public IPs](https://azure.microsoft.com/pricing/details/ip-addresses/), one for ingress and one for egress. If you need more IPs for egress due to SNAT issues, [open a support ticket to request an override](https://azure.microsoft.com/support/create-ticket/). -- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has less than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
+- Two standard [Load Balancers](https://azure.microsoft.com/pricing/details/load-balancer/) if using an internal environment, or one standard [Load Balancer](https://azure.microsoft.com/pricing/details/load-balancer/) if using an external environment. Each load balancer has fewer than six rules. The cost of data processed (GB) includes both ingress and egress for management operations.
## Next steps
container-apps Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/policy-reference.md
the link in the **Version** column to view the source on the
## Policy definitions ## Next steps
container-apps Scale App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/scale-app.md
Previously updated : 09/27/2022 Last updated : 12/08/2022 -
+zone_pivot_groups: arm-azure-cli-portal
# Set scaling rules in Azure Container Apps
-Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app scales out, new instances of the container app are created on-demand. These instances are known as replicas. When you first create a container app, the scale rule is set to zero. No usage charges are incurred when an application scales to zero. For more pricing information, see [Billing in Azure Container Apps](billing.md).
+Azure Container Apps manages automatic horizontal scaling through a set of declarative scaling rules. As a container app revision scales out, new instances of the revision are created on-demand. These instances are known as replicas.
-Scaling rules are defined in `resources.properties.template.scale` section of the JSON configuration file. When you add or edit existing scaling rules, a new revision of your container is automatically created with the new configuration. A revision is an immutable snapshot of your container app and it gets created automatically when certain aspects of your application are updated (scaling rules, Dapr settings, template configuration etc.). See the [Change types](./revisions.md#change-types) section to learn about the type of changes that do or don't trigger a new revision.
+Adding or editing scaling rules creates a new revision of your container app. A revision is an immutable snapshot of your container app. See revision [change types](./revisions.md#change-types) to review which types of changes trigger a new revision.
-There are two scale properties that apply to all rules in your container app:
+## Scale definition
-| Scale property | Description | Default value | Min value | Max value |
-||||||
-| `minReplicas` | Minimum number of replicas running for your container app. | 0 | 0 | 30 |
-| `maxReplicas` | Maximum number of replicas running for your container app. | 10 | 1 | 30 |
+Scaling is defined by the combination of limits and rules.
+
+- **Limits** are the minimum and maximum possible number of replicas per revision as your container app scales.
+
+ | Scale limit | Default value | Min value | Max value |
+ |||||
+ | Minimum number of replicas per revision | 0 | 0 | 30 |
+ | Maximum number of replicas per revision | 10 | 1 | 30 |
+
+ To request an increase in maximum replica amounts for your container app, [submit a support ticket](https://azure.microsoft.com/support/create-ticket/).
+
+- **Rules** are the criteria used by Container Apps to decide when to add or remove replicas.
-- If your container app scales to zero, then you aren't billed usage charges.-- Individual scale rules are defined in the `rules` array.-- If you want to ensure that an instance of your application is always running, set `minReplicas` to 1 or higher.-- Replicas not processing, but that remain in memory are billed in the "idle charge" category.-- Changes to scaling rules are a [revision-scope](revisions.md#revision-scope-changes) change.-- It's recommended to set the `properties.configuration.activeRevisionsMode` property of the container app to `single`, when using non-HTTP event scale rules.-- Container Apps implements the KEDA [ScaledObject](https://keda.sh/docs/concepts/scaling-deployments/#details) and HTTP scaler with the following default settings.
- - pollingInterval: 30 seconds
- - cooldownPeriod: 300 seconds
+ [Scale rules](#scale-rules) are implemented as HTTP, TCP, or custom.
-## Scale triggers
+As you define your scaling rules, keep in mind the following items:
-Azure Container Apps supports the following scale triggers:
+- You aren't billed usage charges if your container app scales to zero.
+- Replicas that aren't processing but remain in memory may be billed at a lower "idle" rate. For more information, see [Billing](./billing.md).
+- If you want to ensure that an instance of your revision is always running, set the minimum number of replicas to 1 or higher.
-- [HTTP traffic](#http): Scaling based on the number of concurrent HTTP requests to your revision.-- [TCP traffic](#tcp): Scaling based on the number of concurrent TCP requests to your revision.-- [Event-driven](#event-driven): Event-based triggers such as messages in an Azure Service Bus.-- [CPU](#cpu) or [Memory](#memory) usage: Scaling based on the amount of CPU or memory consumed by a replica.
+## Scale rules
+
+Scaling is driven by three different categories of triggers:
+
+- [HTTP](#http): Based on the number of concurrent HTTP requests to your revision.
+- [TCP](#tcp): Based on the number of concurrent TCP connections to your revision.
+- [Custom](#custom): Based on CPU, memory, or supported event-driven data sources such as:
+ - Azure Service Bus
+ - Azure Event Hubs
+ - Apache Kafka
+ - Redis
## HTTP
-With an HTTP scaling rule, you have control over the threshold that determines when to scale out.
+With an HTTP scaling rule, you have control over the threshold of concurrent HTTP requests that determines how your container app revision scales.
+
+In the following example, the revision scales out up to five replicas and can scale in to zero. The scaling threshold is set to 100 concurrent requests.
+
+### Example
++
+The `http` section defines an HTTP scale rule.
| Scale property | Description | Default value | Min value | Max value | ||||||
-| `concurrentRequests`| When the number of requests exceeds this value, then another replica is added. Replicas will continue to be added up to the `maxReplicas` amount as the number of concurrent requests increase. | 10 | 1 | n/a |
-
-In the following example, the container app scales out up to five replicas and can scale down to zero. The scaling threshold is set to 100 concurrent requests per second.
+| `concurrentRequests`| When the number of concurrent HTTP requests exceeds this value, then another replica is added. Replicas continue to be added to the pool up to the `maxReplicas` amount. | 10 | 1 | n/a |
```json {
In the following example, the container app scales out up to five replicas and c
"name": "http-rule", "http": { "metadata": {
- "concurrentRequests": "100"
+ "concurrentRequests": "100"
} } }]
In the following example, the container app scales out up to five replicas and c
} ```
-### Add an HTTP scale trigger to a Container App in single-revision mode
- > [!NOTE]
-> Revisions are immutable. Changing scale rules automatically generates a new revision.
+> Set the `properties.configuration.activeRevisionsMode` property of the container app to `single` when using non-HTTP event scale rules.
-1. Open Azure portal, and navigate to your container app.
-1. Select **Scale**, then select your revision from the dropdown menu.
- :::image type="content" source="media/scalers/scale-revisions.png" alt-text="A screenshot showing revisions scale.":::
+Define an HTTP scale rule using the `--scale-rule-http-concurrency` parameter in the [`create`](/cli/azure/containerapp#az-containerapp-create) or [`update`](/cli/azure/containerapp#az-containerapp-update) commands.
+
+| CLI parameter | Description | Default value | Min value | Max value |
+||||||
+| `--scale-rule-http-concurrency`| When the number of concurrent HTTP requests exceeds this value, then another replica is added. Replicas continue to be added to the pool up to the `max-replicas` amount. | 10 | 1 | n/a |
+
+```bash
+az containerapp create \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --environment <ENVIRONMENT_NAME> \
+ --image <CONTAINER_IMAGE_LOCATION> \
+ --min-replicas 0 \
+ --max-replicas 5 \
+ --scale-rule-name azure-http-rule \
+ --scale-rule-type http \
+ --scale-rule-http-concurrency 100
+```
+++
+1. Go to your container app in the Azure portal.
+
+1. Select **Scale**.
1. Select **Edit and deploy**.
-1. Select **Scale**, and then select **Add**.
+1. Select the **Scale** tab.
+
+1. Select the minimum and maximum replica range.
- :::image type="content" source="media/scalers/add-scale-rule.png" alt-text="A screenshot showing how to add a scale rule.":::
+ :::image type="content" source="media/scale-app/azure-container-apps-scale-slide.png" alt-text="Screenshot of Azure Container Apps scale range slider.":::
-1. Select **HTTP scaling** and enter a **Rule name** and the number of **Concurrent requests** for your scale rule and then select **Add**.
+1. Select **Add**.
- :::image type="content" source="media/scalers/http-scale-rule.png" alt-text="A screenshot showing how to add an h t t p scale rule.":::
+1. In the *Rule name* box, enter a rule name.
-1. Select **Create** when you're done.
+1. From the *Type* dropdown, select **HTTP Scaling**.
- :::image type="content" source="media/scalers/create-http-scale-rule.png" alt-text="A screenshot showing the newly created http scale rule.":::
+1. In the *Concurrent requests* box, enter your desired number of concurrent requests for your container app.
+ ## TCP
-With a TCP scaling rule, you have control over the threshold that determines when to scale out.
+With a TCP scaling rule, you have control over the threshold of concurrent TCP connections that determines how your app scales.
+
+In the following example, the container app revision scales out up to five replicas and can scale in to zero. The scaling threshold is set to 100 concurrent connections.
+
+### Example
++
+The `tcp` section defines a TCP scale rule.
| Scale property | Description | Default value | Min value | Max value | ||||||
-| `concurrentRequests`| When the number of requests exceeds this value, then another replica is added. Replicas will continue to be added up to the `maxReplicas` amount as the number of concurrent requests increase. | 10 | 1 | n/a |
-
-In the following example, the container app scales out up to five replicas and can scale down to zero. The scaling threshold is set to 100 concurrent requests per second.
+| `concurrentConnections`| When the number of concurrent TCP connections exceeds this value, then another replica is added. Replicas will continue to be added up to the `maxReplicas` amount as the number of concurrent connections increases. | 10 | 1 | n/a |
```json {
In the following example, the container app scales out up to five replicas and c
"name": "tcp-rule", "tcp": { "metadata": {
- "concurrentRequests": "100"
+ "concurrentConnections": "100"
} } }]
In the following example, the container app scales out up to five replicas and c
} ```
-## Event-driven
++
+Define a TCP scale rule using the `--scale-rule-tcp-concurrency` parameter in the [`create`](/cli/azure/containerapp#az-containerapp-create) or [`update`](/cli/azure/containerapp#az-containerapp-update) commands.
+
+| CLI parameter | Description | Default value | Min value | Max value |
+||||||
+| `--scale-rule-tcp-concurrency`| When the number of concurrent TCP connections exceeds this value, then another replica is added. Replicas will continue to be added up to the `max-replicas` amount as the number of concurrent connections increases. | 10 | 1 | n/a |
+
+```bash
+az containerapp create \
+ --name <CONTAINER_APP_NAME> \
+ --resource-group <RESOURCE_GROUP> \
+ --environment <ENVIRONMENT_NAME> \
+ --image <CONTAINER_IMAGE_LOCATION> \
+ --min-replicas 0 \
+ --max-replicas 5 \
+ --scale-rule-name azure-tcp-rule \
+ --scale-rule-type tcp \
+ --scale-rule-tcp-concurrency 100
+```
+++
+Not supported in the Azure portal. Use the [Azure CLI](scale-app.md?pivots=azure-cli#tcp) or [Azure Resource Manager](scale-app.md?pivots=azure-resource-manager#tcp) to configure a TCP scale rule.
++
+## Custom
+
+You can create a custom Container Apps scaling rule based on any [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/)-based [KEDA scaler](https://keda.sh/docs/latest/scalers/) with these defaults:
-Container Apps can scale based of a wide variety of event types. Any event supported by [KEDA](https://keda.sh/docs/scalers/) is supported in Container Apps.
+| Defaults | Seconds |
+|--|--|
+| Polling interval | 30 |
+| Cool down period | 300 |
-Each event type features different properties in the `metadata` section of the KEDA definition. Use these properties to define a scale rule in Container Apps.
+The following example demonstrates how to create a custom scale rule.
-The following example shows how to create a scale rule based on an [Azure Service Bus](https://keda.sh/docs/scalers/azure-service-bus/) trigger.
+### Example
-The container app scales according to the following behavior:
+This example shows how to convert an [Azure Service Bus scaler](https://keda.sh/docs/latest/scalers/azure-service-bus/) to a Container Apps scale rule, but you use the same process for any other [ScaledObject](https://keda.sh/docs/latest/concepts/scaling-deployments/)-based [KEDA scaler](https://keda.sh/docs/latest/scalers/) specification.
-- For every 20 messages placed in the queue, a new replica is created.-- The connection string to the queue is provided as a parameter to the configuration file and referenced via the `secretRef` property.
+For authentication, KEDA scaler authentication parameters convert into [Container Apps secrets](manage-secrets.md).
++
+The following procedure shows you how to convert a KEDA scaler to a Container Apps scale rule. This snippet is an excerpt of an ARM template that shows you where each section fits in the context of the overall template.
```json {
The container app scales according to the following behavior:
"resources": { ... "properties": {
+ ...
"configuration": {
- "secrets": [{
- "name": "servicebusconnectionstring",
- "value": "<MY-CONNECTION-STRING-VALUE>"
- }],
+ ...
+ "secrets": [
+ {
+ "name": "<NAME>",
+ "value": "<VALUE>"
+ }
+ ]
}, "template": { ... "scale": {
- "minReplicas": "0",
- "maxReplicas": "30",
+ "minReplicas": 0,
+ "maxReplicas": 5,
"rules": [
- {
- "name": "queue-based-autoscaling",
- "custom": {
- "type": "azure-servicebus",
- "metadata": {
- "queueName": "myServiceBusQueue",
- "messageCount": "20"
- },
- "auth": [{
- "secretRef": "servicebusconnectionstring",
- "triggerParameter": "connection"
- }]
+ {
+ "name": "<RULE_NAME>",
+ "custom": {
+ "metadata": {
+ ...
+ },
+ "auth": [
+ {
+ "secretRef": "<NAME>",
+ "triggerParameter": "<PARAMETER>"
+ }
+ ]
+ }
+ }
+ ]
}
- }]
+ }
+ }
+ }
} ```
+Refer to this excerpt for context on how the following examples fit in the ARM template.
+
+First, you'll define the type and metadata of the scale rule.
+
+1. From the KEDA scaler specification, find the `type` value.
+
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-trigger.yml" highlight="2":::
+
+1. In the ARM template, enter the scaler `type` value into the `custom.type` property of the scale rule.
+
+ :::code language="json" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-rule-0.json" highlight="6":::
+
+1. From the KEDA scaler specification, find the `metadata` values.
+
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-trigger.yml" highlight="4,5,6":::
+
+1. In the ARM template, add all metadata values to the `custom.metadata` section of the scale rule.
+
+ :::code language="json" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-rule-0.json" highlight="8,9,10":::
+
+### Authentication
+
+A KEDA scaler may support using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the `authenticationRef` property. You can map the TriggerAuthentication object to the Container Apps scale rule.
+> [!NOTE]
-> Upstream KEDA scale rules are defined using Kubernetes YAML, while Azure Container Apps supports ARM templates, Bicep Templates and Container Apps specific YAML. The following example uses an ARM template and therefore the rules need to switch property names from [kebab](https://en.wikipedia.org/wiki/Naming_convention_(programming)#Delimiter-separated_words) case to [camel](https://en.wikipedia.org/wiki/Naming_convention_(programming)#Letter_case-separated_words) when translating from existing KEDA manifests.
+> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported.
-### Set up a connection string secret
+1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification.
-To create a custom scale trigger, first create a connection string secret to authenticate with the different custom scalers.
+1. From the KEDA specification, find each `secretTargetRef` of the `TriggerAuthentication` object and its associated secret.
-1. In Azure portal, navigate to your container app and then select **Secrets**.
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-auth.yml" highlight="8,16,17,18":::
-1. Select **Add**, and then enter your secret key/value information.
+1. In the ARM template, add all entries to the `auth` array of the scale rule.
-1. Select **Add** when you're done.
+ 1. Add a [secret](./manage-secrets.md) to the container app's `secrets` array containing the secret value.
- :::image type="content" source="media/scalers/connection-string.png" alt-text="A screenshot showing how to create a connection string.":::
+ 1. Set the value of the `triggerParameter` property to the value of the `TriggerAuthentication`'s `key` property.
-### Add a custom scale trigger
+ 1. Set the value of the `secretRef` property to the name of the Container Apps secret.
-1. In Azure portal, select **Scale** and then select your revision from the dropdown menu.
+ :::code language="json" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-rule-1.json" highlight="10,11,12,13,32,33,34,35":::
- :::image type="content" source="media/scalers/scale-revisions.png" alt-text="A screenshot showing the revisions scale page.":::
+ Some scalers support metadata with the `FromEnv` suffix to reference a value in an environment variable. Container Apps looks at the first container listed in the ARM template for the environment variable.
-1. Select **Edit and deploy**.
+ Refer to the [considerations section](#considerations) for more security-related information.
-1. Select **Scale**, and then select **Add**.
- :::image type="content" source="media/scalers/add-scale-rule.png" alt-text="A screenshot showing how to add a scale rule.":::
-1. Enter a **Rule name**, select **Custom** and enter a **Custom rule type**. Enter your **Secret reference** and **Trigger parameter** and then add your **Metadata** parameters. select **Add** when you're done.
+1. From the KEDA scaler specification, find the `type` value.
- :::image type="content" source="media/scalers/custom-scaler.png" alt-text="A screenshot showing how to configure a custom scale rule.":::
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-trigger.yml" highlight="2":::
-1. Select **Create** when you're done.
+1. In the CLI command, set the `--scale-rule-type` parameter to the specification `type` value.
+
+ :::code language="bash" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-cli.bash" highlight="10":::
+
+1. From the KEDA scaler specification, find the `metadata` values.
+
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-trigger.yml" highlight="4,5,6":::
+
+1. In the CLI command, set the `--scale-rule-metadata` parameter to the metadata values.
+
+ You'll need to transform the values from YAML format to key/value pairs for use on the command line. Separate each key/value pair with a space.
+
+ :::code language="bash" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-cli.bash" highlight="11,12,13":::
+
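Because the referenced CLI snippet isn't rendered in this digest, here's a minimal sketch of what the converted command might look like, assuming the `az containerapp create` command's `--scale-rule-type` and `--scale-rule-metadata` parameters; the app, environment, queue, and namespace names are illustrative placeholders, not values from the source.

```bash
# A sketch of a KEDA Azure Service Bus trigger converted to a Container Apps
# scale rule with the Azure CLI. All names and metadata values are illustrative.
az containerapp create \
  --name my-container-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image myregistry.azurecr.io/my-app:latest \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name azure-servicebus-queue-rule \
  --scale-rule-type azure-servicebus \
  --scale-rule-metadata "queueName=my-queue" "namespace=service-bus-namespace" "messageCount=5"
```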
+### Authentication
+
+A KEDA scaler may support using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the `authenticationRef` property. You can map the TriggerAuthentication object to the Container Apps scale rule.
> [!NOTE]
-> In multiple revision mode, adding a new scale trigger creates a new revision of your application but your old revision remains available with the old scale rules. Use the **Revision management** page to manage their traffic allocations.
+> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported.
-### KEDA scalers conversion
+1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification. Identify each `secretTargetRef` of the `TriggerAuthentication` object.
-Azure Container Apps supports KEDA ScaledObjects and all of the available [KEDA scalers](https://keda.sh/docs/scalers/). To convert KEDA templates, it's easier to start with a custom JSON template and add the parameters you need based on the scenario and the scale trigger you want to set up.
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-auth.yml" highlight="8,16,17,18":::
-```json
-{
- ...
- "resources": {
- ...
- "properties": {
- "configuration": {
- "secrets": [{
- "name": "<YOUR_CONNECTION_STRING_NAME>",
- "value": "<YOUR-CONNECTION-STRING>"
- }],
- },
- "template": {
- ...
- "scale": {
- "minReplicas": "0",
- "maxReplicas": "30",
- "rules": [
- {
- "name": "<YOUR_TRIGGER_NAME>",
- "custom": {
- "type": "<TRIGGER_TYPE>",
- "metadata": {
- },
- "auth": [{
- "secretRef": "<YOUR_CONNECTION_STRING_NAME>",
- "triggerParameter": "<TRIGGER_PARAMETER>"
- }]
- }
- }]
-}
-```
+1. In your container app, create the [secrets](./manage-secrets.md) that match the `secretTargetRef` properties.
-The following YAML is an example of setting up an [Azure Storage Queue](https://keda.sh/docs/scalers/azure-storage-queue/) scaler that you can configure to auto scale based on Azure Storage Queues.
+1. In the CLI command, set parameters for each `secretTargetRef` entry.
-Below is the KEDA trigger specification for an Azure Storage Queue. To set up a scale rule in Azure Container Apps, you'll need the trigger `type` and any other required parameters. You can also add other optional parameters, which vary based on the scaler you're using.
+ 1. Create a secret entry with the `--secrets` parameter. If there are multiple secrets, separate them with a space.
-In this example, you need the `accountName` and the name of the cloud environment that the queue belongs to `cloud` to set up your scaler in Azure Container Apps.
+ 1. Create an authentication entry with the `--scale-rule-auth` parameter. If there are multiple entries, separate them with a space.
-```yml
-triggers:
-- type: azure-queue
- metadata:
- queueName: orders
- queueLength: '5'
- connectionFromEnv: STORAGE_CONNECTIONSTRING_ENV_NAME
- accountName: storage-account-name
- cloud: AzureUSGovernmentCloud
-```
+ :::code language="bash" source="~/azure-docs-snippets-pr/container-apps/container-apps-azure-service-bus-cli.bash" highlight="8,14":::
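Since the referenced snippet isn't visible here, the following is a hedged sketch of how the secret and authentication entries might come together on the command line, assuming the `--secrets` and `--scale-rule-auth` parameters of `az containerapp create`; every name and value below is an illustrative placeholder.

```bash
# Sketch: the TriggerAuthentication secret becomes a Container Apps secret
# (--secrets), and the scale rule references it by name (--scale-rule-auth).
az containerapp create \
  --name my-container-app \
  --resource-group my-resource-group \
  --environment my-environment \
  --image myregistry.azurecr.io/my-app:latest \
  --secrets "connection-string-secret=<SERVICE_BUS_CONNECTION_STRING>" \
  --min-replicas 0 \
  --max-replicas 5 \
  --scale-rule-name azure-servicebus-queue-rule \
  --scale-rule-type azure-servicebus \
  --scale-rule-metadata "queueName=my-queue" "namespace=service-bus-namespace" "messageCount=5" \
  --scale-rule-auth "connection=connection-string-secret"
```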
-Now your JSON config file should look like this:
-```json
-{
- ...
- "resources": {
- ...
- "properties": {
- "configuration": {
- "secrets": [{
- "name": "my-connection-string",
- "value": "*********"
- }],
- },
- "template": {
- ...
- "scale": {
- "minReplicas": "0",
- "maxReplicas": "30",
- "rules": [
- {
- "name": "queue-trigger",
- "custom": {
- "type": "azure-queue",
- "metadata": {
- "accountName": "my-storage-account-name",
- "cloud": "AzurePublicCloud"
- },
- "auth": [{
- "secretRef": "my-connection-string",
- "triggerParameter": "connection"
- }]
- }
- }]
-}
-```
-> [!NOTE]
-> KEDA ScaledJobs are not supported. For more information, see [KEDA Scaling Jobs](https://keda.sh/docs/concepts/scaling-jobs/#overview).
+1. Go to your container app in the Azure portal.
-## CPU
+1. Select **Scale**.
-CPU scaling allows your app to scale in or out depending on how much the CPU is being used. CPU scaling doesn't allow your container app to scale to 0. For more information about this trigger, see [KEDA CPU scale trigger](https://keda.sh/docs/scalers/cpu/).
+1. Select **Edit and deploy**.
-The following example shows how to create a CPU scaling rule.
+1. Select the **Scale** tab.
-```json
-{
- ...
- "resources": {
- ...
- "properties": {
- ...
- "template": {
- ...
- "scale": {
- "minReplicas": "1",
- "maxReplicas": "10",
- "rules": [{
- "name": "cpu-scaling-rule",
- "custom": {
- "type": "cpu",
- "metadata": {
- "type": "Utilization",
- "value": "50"
- }
- }
- }]
- }
- }
- }
- }
-}
-```
+1. Select the minimum and maximum replica range.
-- In this example, the container app scales when CPU usage exceeds 50%.
-- At a minimum, a single replica remains in memory for apps that scale based on CPU utilization.
+ :::image type="content" source="media/scale-app/azure-container-apps-scale-slide.png" alt-text="Screenshot of Azure Container Apps scale range slider.":::
-## Memory
+1. Select **Add**.
-Memory scaling allows your app to scale in or out depending on how much of the memory is being used. Memory scaling doesn't allow your container app to scale to 0. For more information regarding this scaler, see [KEDA Memory scaler](https://keda.sh/docs/scalers/memory/).
+1. In the *Rule name* box, enter a rule name.
-The following example shows how to create a memory scaling rule.
+1. From the *Type* dropdown, select **Custom**.
-```json
-{
- ...
- "resources": {
- ...
- "properties": {
- ...
- "template": {
- ...
- "scale": {
- "minReplicas": "1",
- "maxReplicas": "10",
- "rules": [{
- "name": "memory-scaling-rule",
- "custom": {
- "type": "memory",
- "metadata": {
- "type": "Utilization",
- "value": "50"
- }
- }
- }]
- }
- }
- }
- }
-}
-```
+1. From the KEDA scaler specification, find the `type` value.
+
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-trigger.yml" highlight="2":::
+
+1. In the *Custom rule type* box, enter the scaler `type` value.
-- In this example, the container app scales when memory usage exceeds 50%.
-- At a minimum, a single replica remains in memory for apps that scale based on memory utilization.
+1. From the KEDA scaler specification, find the `metadata` values.
+
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-trigger.yml" highlight="4,5,6":::
+
+1. In the portal, find the *Metadata* section and select **Add**. Enter the name and value for each item in the KEDA `ScaledObject` specification metadata section.
+
+### Authentication
+
+A KEDA scaler may support using secrets in a [TriggerAuthentication](https://keda.sh/docs/latest/concepts/authentication/) that is referenced by the `authenticationRef` property. You can map the TriggerAuthentication object to the Container Apps scale rule.
+
+> [!NOTE]
+> Container Apps scale rules only support secret references. Other authentication types such as pod identity are not supported.
+
+1. In your container app, create the [secrets](./manage-secrets.md) that you want to reference.
+
+1. Find the `TriggerAuthentication` object referenced by the KEDA `ScaledObject` specification. Identify each `secretTargetRef` of the `TriggerAuthentication` object.
+
+ :::code language="yml" source="~/azure-docs-snippets-pr/container-apps/keda-azure-service-bus-auth.yml" highlight="16,17,18":::
+
+1. In the *Authentication* section, select **Add** to create an entry for each KEDA `secretTargetRef` parameter.
++
+## Default scale rule
+
+If you don't create a scale rule, the default scale rule is applied to your container app.
+
+| Trigger | Min replicas | Max replicas |
+|--|--|--|
+| HTTP | 0 | 10 |
+
+> [!IMPORTANT]
+> Make sure you create a scale rule or set `minReplicas` to 1 or more if you don't enable ingress. If ingress is disabled and all you have is the default limits and rule, then your container app will scale to zero and have no way of starting back up.
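As a hedged sketch of one way to honor the warning above (assuming the `az containerapp update` command's `--min-replicas` parameter), with illustrative resource names:

```bash
# Sketch: keep at least one replica alive when ingress is disabled, so the
# container app can't scale to zero with no way of starting back up.
az containerapp update \
  --name my-container-app \
  --resource-group my-resource-group \
  --min-replicas 1 \
  --max-replicas 10
```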
## Considerations
+- In "multiple revision" mode, adding a new scale trigger creates a new revision of your application but your old revision remains available with the old scale rules. Use the **Revision management** page to manage traffic allocations.
+
+- No usage charges are incurred when an application scales to zero. For more pricing information, see [Billing in Azure Container Apps](billing.md).
+
+### Unsupported KEDA capabilities
+
+- KEDA ScaledJobs aren't supported. For more information, see [KEDA Scaling Jobs](https://keda.sh/docs/concepts/scaling-jobs/#overview).
+
+### Known limitations
- Vertical scaling isn't supported.
- Replica quantities are a target amount, not a guarantee.
-
- If you're using [Dapr actors](https://docs.dapr.io/developing-applications/building-blocks/actors/actors-overview/) to manage state, keep in mind that scaling to zero isn't supported. Dapr uses virtual actors to manage asynchronous calls, which means their in-memory representation isn't tied to their identity or lifetime.

## Next steps
container-apps Vnet Custom Internal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom-internal.md
$VnetName = 'my-custom-vnet'
Now create an instance of the virtual network to associate with the Container Apps environment. The virtual network must have two subnets available for the container app instance.

> [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/21` or larger is required for use with Container Apps.
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps.
# [Bash](#tab/bash)
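As a hedged sketch of the Bash path (assuming the `az network vnet create` and `az network vnet subnet create` commands), with illustrative names and address ranges:

```bash
# Sketch: create a virtual network with a dedicated /23 subnet
# for the Container Apps environment. Names and ranges are illustrative.
az network vnet create \
  --resource-group my-resource-group \
  --name my-custom-vnet \
  --location eastus \
  --address-prefixes 10.0.0.0/16

az network vnet subnet create \
  --resource-group my-resource-group \
  --vnet-name my-custom-vnet \
  --name infrastructure-subnet \
  --address-prefixes 10.0.0.0/23
```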
$vnet = New-AzVirtualNetwork @VnetArgs
> [!NOTE]
-> Network subnet address prefix requires a CIDR range of `/21`.
+> Network subnet address prefix requires a minimum CIDR range of `/23`.
With the VNET established, you can now query for the infrastructure subnet ID.
container-apps Vnet Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/vnet-custom.md
The following example shows you how to create a Container Apps environment in an
[!INCLUDE [container-apps-create-portal-steps.md](../../includes/container-apps-create-portal-steps.md)]

> [!NOTE]
-> Network address prefixes requires a CIDR range of `/21` or larger (`/20`, `/19` etc.).
+> Network address prefixes require a CIDR range of `/23` or larger.
7. Select the **Networking** tab to create a VNET.
8. Select **Yes** next to *Use your own virtual network*.
$VnetName = 'my-custom-vnet'
Now create an Azure virtual network to associate with the Container Apps environment. The virtual network must have a subnet available for the environment deployment.

> [!NOTE]
-> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/21` or larger is required for use with Container Apps.
+> You can use an existing virtual network, but a dedicated subnet with a CIDR range of `/23` or larger is required for use with Container Apps.
# [Bash](#tab/bash)
You must either provide values for all three of these properties, or none of the
| Parameter | Description |
|--|--|
-| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/21` and `/12`. |
+| `VnetConfigurationPlatformReservedCidr` | The address range used internally for environment infrastructure services. Must have a size between `/23` and `/12`. |
| `VnetConfigurationPlatformReservedDnsIP` | An IP address from the `VnetConfigurationPlatformReservedCidr` range that is used for the internal DNS server. The address can't be the first address in the range, or the network address. For example, if `VnetConfigurationPlatformReservedCidr` is set to `10.2.0.0/16`, then `VnetConfigurationPlatformReservedDnsIP` can't be `10.2.0.0` (the network address), or `10.2.0.1` (infrastructure reserves use of this IP). In this case, the first usable IP for the DNS would be `10.2.0.2`. |
| `VnetConfigurationDockerBridgeCidr` | The address range assigned to the Docker bridge network. This range must have a size between `/28` and `/12`. |
container-registry Container Registry Auth Service Principal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-registry/container-registry-auth-service-principal.md
You can find the preceding sample scripts for Azure CLI on GitHub, as well as ve
Once you have a service principal that you've granted access to your container registry, you can configure its credentials for access to "headless" services and applications, or enter them using the `docker login` command. Use the following values:
-* **User name** - service principal's **application (client) ID**
+* **Username** - service principal's **application (client) ID**
* **Password** - service principal's **password (client secret)**
-Each value has the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
+The **Username** value has the format `xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx`.
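A minimal sketch of the `docker login` call with these values; the registry name, application (client) ID, and client secret below are placeholders:

```bash
# Sketch: sign in to a container registry with service principal credentials.
docker login myregistry.azurecr.io \
  --username "xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx" \
  --password "<service-principal-client-secret>"
```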
> [!TIP] > You can regenerate the password (client secret) of a service principal by running the [az ad sp credential reset](/cli/azure/ad/sp/credential#az-ad-sp-credential-reset) command.
cosmos-db How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys.md
Using customer-managed keys with Azure Cosmos DB requires you to set two propert
- [How to use soft-delete with PowerShell](../key-vault/general/key-vault-recovery.md)
- [How to use soft-delete with Azure CLI](../key-vault/general/key-vault-recovery.md)
-1. Once these settings have been enabled, on the access policy tab, you can choose your preferred permission model to use. Access policies are set by default, but Azure role-based access control is supported as well.
+
+### Choosing the preferred security model 
+
+Once purge protection and soft-delete have been enabled, on the access policy tab, you can choose your preferred permission model to use. Access policies are set by default, but Azure role-based access control is supported as well.
The necessary permissions must be given for allowing Cosmos DB to use your encryption key. This step varies depending on whether the Azure Key Vault is using either Access policies or role-based access control.
+> [!NOTE]
+> Only one security model can be active at a time, so there's no need to set up role-based access control if the Azure Key Vault is set to use access policies, and vice versa.
+
### Add an access policy

In this variation, use the Azure Cosmos DB principal to create an access policy with the appropriate permissions.
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-rbac.md
The actual metadata requests allowed by the `Microsoft.DocumentDB/databaseAccoun
Azure Cosmos DB exposes two built-in role definitions:
+> [!IMPORTANT]
+> The term **role definitions** here refers to Azure Cosmos DB-specific role definitions. These are distinct from Azure role-based access control role definitions.
+
| ID | Name | Included actions |
|--|--|--|
| 00000000-0000-0000-0000-000000000001 | Cosmos DB Built-in Data Reader | `Microsoft.DocumentDB/databaseAccounts/readMetadata`<br>`Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/items/read`<br>`Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/executeQuery`<br>`Microsoft.DocumentDB/databaseAccounts/sqlDatabases/containers/readChangeFeed` |
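As a hedged illustration (assuming the `az cosmosdb sql role assignment create` command), a built-in role definition from the table could be assigned like this; the account name, resource group, and principal object ID are placeholders:

```bash
# Sketch: assign the Cosmos DB Built-in Data Reader role definition (ID from
# the table above) to an Azure AD identity at the account-level scope.
az cosmosdb sql role assignment create \
  --account-name my-cosmos-account \
  --resource-group my-resource-group \
  --role-definition-id "00000000-0000-0000-0000-000000000001" \
  --principal-id "<AAD_PRINCIPAL_OBJECT_ID>" \
  --scope "/"
```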
cosmos-db Change Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/change-log.md
Previously updated : 10/12/2022 Last updated : 01/18/2023
The Change log for the API for MongoDB is meant to inform you about our feature
## Azure Cosmos DB for MongoDB updates
+### Azure Cosmos DB for MongoDB 5.0 (Limited Preview)
+
+The 5.0 version supports many new features such as distributed ACID transactions, higher limits for unsharded collections and for shards themselves, improved performance for aggregation pipelines and complex queries, and more.
+
+[Request access today to try out this Limited Preview!](../access-previews.md)
+ ### Role-based access control (RBAC) (GA) Azure Cosmos DB for MongoDB now offers a built-in role-based access control (RBAC) that allows you to authorize your data requests with a fine-grained, role-based permission model. Using this role-based access control (RBAC) allows you access with more options for control, security, and auditability of your database account data.
cosmos-db Feature Support 32 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-32.md
Unique indexes are available for all Azure Cosmos DB accounts using Azure Cosmos
## Time-to-live (TTL)
-Azure Cosmos DB supports a time-to-live (TTL) based on the timestamp of the document. TTL can be enabled for collections by going to the [Azure portal](https://portal.azure.com).
+Azure Cosmos DB only supports a time-to-live (TTL) at the collection level (`_ts`) in version 3.2. Upgrade to version 3.6+ to take advantage of other forms of [TTL](https://learn.microsoft.com/azure/cosmos-db/mongodb/time-to-live).
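For illustration, a hedged sketch of what a collection-level TTL index on the internal `_ts` field might look like in versions 3.6 and later, assuming a `mongosh` client; the collection name, expiry, and connection string are placeholders:

```bash
# Sketch: create a TTL index on the _ts field so documents expire after
# one hour. Connection string and collection name are placeholders.
mongosh "<COSMOS_MONGO_CONNECTION_STRING>" --eval \
  'db.coll.createIndex({ "_ts": 1 }, { "expireAfterSeconds": 3600 })'
```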
## User and role management
cosmos-db Feature Support 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-40.md
For example, with a sharded collection, sharded on key "country": To delete
- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success**
- `db.coll.deleteMany({"city": "NYC"})` - Fails with error **ShardKeyNotFound(61)**
+> [!NOTE]
+> Retryable writes does not support bulk unordered writes at this time. If you would like to perform bulk writes with retryable writes enabled, perform bulk ordered writes.
To enable the feature, [add the EnableMongoRetryableWrites capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.

## Sharding
cosmos-db Feature Support 42 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/feature-support-42.md
For example, with a sharded collection, sharded on key "country": To delete
- `db.coll.deleteMany({"country": "USA", "city": "NYC"})` - **Success**
- `db.coll.deleteMany({"city": "NYC"})` - Fails with error **ShardKeyNotFound(61)**
+> [!NOTE]
+> Retryable writes does not support bulk unordered writes at this time. If you would like to perform bulk writes with retryable writes enabled, perform bulk ordered writes.
To enable the feature, [add the `EnableMongoRetryableWrites` capability](how-to-configure-capabilities.md) to your database account. This feature can also be enabled in the features tab in the Azure portal.

## Sharding
cosmos-db Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/introduction.md
Previously updated : 11/30/2022 Last updated : 01/18/2023
The API for MongoDB implements the wire protocol for MongoDB. This implementatio
The API for MongoDB is compatible with the following MongoDB server versions:
+- [Version 5.0 (limited preview)](../access-previews.md)
- [Version 4.2](feature-support-42.md)
- [Version 4.0](feature-support-40.md)
- [Version 3.6](feature-support-36.md)
cosmos-db Quickstart Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/quickstart-python.md
ms.devlang: python Previously updated : 11/03/2022 Last updated : 1/17/2023
For this sample code, the container will use the category as a logical partition
From the project directory, open the *app.py* file. In your editor, import the `os` and `json` modules. Then, import the `CosmosClient` and `PartitionKey` classes from the `azure.cosmos` module.
+#### [Sync](#tab/sync)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="imports":::
-Create variables for the `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variables using `os.environ`.
+#### [Async](#tab/async)
++++
+Create constants for the `COSMOS_ENDPOINT` and `COSMOS_KEY` environment variables using `os.environ`.
+
+#### [Sync / Async](#tab/sync+async)
:::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="environment_variables"::: ++
+Create constants for the database and container names.
+
+#### [Sync / Async](#tab/sync+async)
++++

Create a new client instance using the [`CosmosClient`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient) class constructor and the two constants you created as parameters.
+#### [Sync](#tab/sync)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="create_client":::
+#### [Async](#tab/async)
+
+> [!IMPORTANT]
+> Place the client instance in a coroutine function named `manage_cosmos`. Within the coroutine function, define the new client with the `async with` keywords. Outside of the coroutine function, use the `asyncio.run` function to execute the coroutine asynchronously.
+++++

### Create a database

Use the [`CosmosClient.create_database_if_not_exists`](/python/api/azure-cosmos/azure.cosmos.cosmos_client.cosmosclient#azure-cosmos-cosmos-client-cosmosclient-create-database-if-not-exists) method to create a new database if it doesn't already exist. This method will return a [`DatabaseProxy`](/python/api/azure-cosmos/azure.cosmos.databaseproxy) reference to the existing or newly created database.
+#### [Sync](#tab/sync)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="create_database":::
+#### [Async](#tab/async)
++++

### Create a container

The [`PartitionKey`](/python/api/azure-cosmos/azure.cosmos.partitionkey) class defines a partition key path that you can use when creating a container.
+#### [Sync](#tab/sync)
-The [`Databaseproxy.create_container_if_not_exists`](/python/api/azure-cosmos/azure.cosmos.databaseproxy#azure-cosmos-databaseproxy-create-container-if-not-exists) method will create a new container if it doesn't already exist. This method will also return a [`ContainerProxy`](/python/api/azure-cosmos/azure.cosmos.containerproxy) reference to the container.
:::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="create_container":::
+#### [Async](#tab/async)
+++++
+The [`DatabaseProxy.create_container_if_not_exists`](/python/api/azure-cosmos/azure.cosmos.databaseproxy#azure-cosmos-databaseproxy-create-container-if-not-exists) method will create a new container if it doesn't already exist. This method will also return a [`ContainerProxy`](/python/api/azure-cosmos/azure.cosmos.containerproxy) reference to the container.
+ ### Create an item
-Create a new item in the container by first creating a new variable (`newItem`) with a sample item defined. In this example, the unique identifier of this item is `70b63682-b93a-4c77-aad2-65501347265f`. The partition key value is derived from the `/categoryId` path, so it would be `61dba35b-4f02-45c5-b648-c6badc0cbd79`.
+Create a new item in the container by first creating a new variable (`new_item`) with a sample item defined. In this example, the unique identifier of this item is `70b63682-b93a-4c77-aad2-65501347265f`. The partition key value is derived from the `/categoryId` path, so it would be `61dba35b-4f02-45c5-b648-c6badc0cbd79`.
+
+#### [Sync / Async](#tab/sync+async)
:::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="new_item"::: ++ > [!TIP] > The remaining fields are flexible and you can define as many or as few as you want. You can even combine different item schemas in the same container. Create an item in the container by using the [`ContainerProxy.create_item`](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-create-item) method passing in the variable you already created.
+#### [Sync](#tab/sync)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="create_item":::
+#### [Async](#tab/async)
++++

### Get an item

In Azure Cosmos DB, you can perform a point read operation by using both the unique identifier (``id``) and partition key fields. In the SDK, call [`ContainerProxy.read_item`](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-read-item) passing in both values to return an item as a dictionary of strings and values (`dict[str, Any]`).
+#### [Sync](#tab/sync)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="read_item":::
-In this example, the dictionary result is saved to a variable named `existingItem`.
+#### [Async](#tab/async)
++++
+In this example, the dictionary result is saved to a variable named `existing_item`.
### Query items

After you insert an item, you can run a query to get all items that match a specific filter. This example runs the SQL query: ``SELECT * FROM products p WHERE p.categoryId = "61dba35b-4f02-45c5-b648-c6badc0cbd79"``. This example uses query parameterization to construct the query. The query uses a string of the SQL query, and a dictionary of query parameters.
+#### [Sync / Async](#tab/sync+async)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="build_query"::: ++ This example dictionary included the `@categoryId` query parameter and the corresponding value `61dba35b-4f02-45c5-b648-c6badc0cbd79`. Once the query is defined, call [`ContainerProxy.query_items`](/python/api/azure-cosmos/azure.cosmos.containerproxy#azure-cosmos-containerproxy-query-items) to run the query and return the results as a paged set of items (`ItemPage[Dict[str, Any]]`).
+#### [Sync / Async](#tab/sync+async)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="query_items"::: ++ Finally, use a for loop to iterate over the results in each page and perform various actions.
+#### [Sync](#tab/sync)
+ :::code language="python" source="~/cosmos-db-nosql-python-samples/001-quickstart/app.py" id="iterate_query_results":::
+#### [Async](#tab/async)
++++

In this example, `json.dumps` is used to print the item to the console in a human-readable way.

## Run the code
python app.py
The output of the app should be similar to this example:

```output
-{
+Database cosmicworks
+Container products
+Point read Yamba Surfboard
+Result list [
+ {
"id": "70b63682-b93a-4c77-aad2-65501347265f", "categoryId": "61dba35b-4f02-45c5-b648-c6badc0cbd79", "categoryName": "gear-surf-surfboards", "name": "Yamba Surfboard", "quantity": 12, "sale": false,
- "_rid": "yzN6AIfJxe0BAAAAAAAAAA==",
- "_self": "dbs/yzN6AA==/colls/yzN6AIfJxe0=/docs/yzN6AIfJxe0BAAAAAAAAAA==/",
- "_etag": "\"2a00ccd4-0000-0200-0000-63650e420000\"",
+ "_rid": "KSsMAPI2fH0BAAAAAAAAAA==",
+ "_self": "dbs/KSsMAA==/colls/KSsMAPI2fH0=/docs/KSsMAPI2fH0BAAAAAAAAAA==/",
+ "_etag": "\"48002b76-0000-0200-0000-63c85f9d0000\"",
"_attachments": "attachments/",
- "_ts": 16457527130
-}
+ "_ts": 1674076061
+ }
+]
```

> [!NOTE]
cosmos-db Tutorial Dotnet Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/tutorial-dotnet-web-app.md
First, you'll create a database and container in the existing API for NoSQL acco
Now, you'll create a new ASP.NET web application using a sample project template. You'll then explore the source code and run the sample to get acquainted with the application before adding Azure Cosmos DB connectivity using the Azure SDK for .NET.
+> [!IMPORTANT]
+> This tutorial transparently pulls packages from [NuGet](https://nuget.org). You can use [`dotnet nuget list source`](/dotnet/core/tools/dotnet-nuget-list-source#examples) to verify your package sources. If you don't have NuGet as a package source, use [`dotnet nuget add source`](/dotnet/core/tools/dotnet-nuget-add-source#examples) to add the site as a source.
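A short sketch of those two commands together; the nuget.org URL shown is the standard v3 feed, but verify it against your own configuration:

```bash
# Sketch: verify the configured NuGet package sources, and add nuget.org
# as a source if it's missing.
dotnet nuget list source
dotnet nuget add source https://api.nuget.org/v3/index.json --name nuget.org
```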
+
1. Open a terminal in an empty directory.

1. Install the `cosmicworks.template.web` project template package from NuGet.
cosmos-db Reference Extensions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-extensions.md
Previously updated : 10/26/2022 Last updated : 01/17/2023 # PostgreSQL extensions in Azure Cosmos DB for PostgreSQL
The versions of each extension installed in a cluster sometimes differ based on
> [!div class="mx-tableFixed"] > | **Extension** | **Description** | **PG 11** | **PG 12** | **PG 13** | **PG 14** | **PG 15** | > ||||||
-> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.6 | 11.0.4 | 11.1.3 |
+> | [citus](https://github.com/citusdata/citus) | Citus distributed database. | 9.5.11 | 10.0.7 | 10.2.8 | 11.1.4 | 11.1.4 |
### Data types extensions
The versions of each extension installed in a cluster sometimes differ based on
> | [dblink](https://www.postgresql.org/docs/current/dblink.html) | A module that supports connections to other PostgreSQL databases from within a database session. See the "dblink and postgres_fdw" section for information about this extension. | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 | > | [old\_snapshot](https://www.postgresql.org/docs/current/oldsnapshot.html) | Allows inspection of the server state that is used to implement old_snapshot_threshold. | | | | 1.0 | 1.0 | > | [pageinspect](https://www.postgresql.org/docs/current/pageinspect.html) | Inspect the contents of database pages at a low level. | 1.7 | 1.7 | 1.8 | 1.9 | 1.10 |
+> | [pg\_azure\_storage](howto-ingest-azure-blob-storage.md) | Azure integration for PostgreSQL. | | | 1.0 | 1.0 | 1.0 |
> | [pg\_buffercache](https://www.postgresql.org/docs/current/static/pgbuffercache.html) | Provides a means for examining what's happening in the shared buffer cache in real time. | 1.3 | 1.3 | 1.3 | 1.3 | 1.3 | > | [pg\_cron](https://github.com/citusdata/pg_cron) | Job scheduler for PostgreSQL. | 1.4 | 1.4 | 1.4 | 1.4 | 1.4 | > | [pg\_freespacemap](https://www.postgresql.org/docs/current/pgfreespacemap.html) | Examine the free space map (FSM). | 1.2 | 1.2 | 1.2 | 1.2 | 1.2 |
cosmos-db Reference Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/postgresql/reference-versions.md
PostgreSQL database version:
Depending on which version of PostgreSQL is running in a cluster, different [versions of PostgreSQL extensions](reference-extensions.md)
-will be installed as well. In particular, PostgreSQL 14 comes with Citus 11, PostgreSQL versions 12 and 13 come with
+will be installed as well. In particular, PostgreSQL 14 and PostgreSQL 15 come with Citus 11, PostgreSQL versions 12 and 13 come with
Citus 10, and earlier PostgreSQL versions come with Citus 9.5. ## Next steps
cost-management-billing Assign Roles Azure Service Principals https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/assign-roles-azure-service-principals.md
tags: billing
Previously updated : 09/20/2022 Last updated : 01/18/2023
You can manage your Enterprise Agreement (EA) enrollment in the [Azure Enterprise portal](https://ea.azure.com/). Direct Enterprise customer can now manage Enterprise Agreement(EA) enrollment in [Azure portal](https://portal.azure.com/). You can create different roles to manage your organization, view costs, and create subscriptions. This article helps you automate some of those tasks by using Azure PowerShell and REST APIs with Azure service principal names (SPNs).
+> [!NOTE]
+> If you have multiple EA billing accounts in your organization, you must grant the EA roles to Azure SPNs individually in each EA billing account.
Before you begin, ensure that you're familiar with the following articles:

- [Enterprise agreement roles](understand-ea-roles.md)
Here's an example of the application registration page.
### Find your SPN and tenant ID
-You also need the object ID of the SPN and the tenant ID of the app. You need this information for permission assignment operations later in this article.
+You also need the object ID of the SPN and the tenant ID of the app. You need this information for permission assignment operations later in this article. All applications are registered in Azure AD in the tenant. Two types of objects get created when the app registration is completed:
+
+- Application object - The application object ID is what you see under App Registrations in Azure AD. The object ID should *not* be used to grant any EA roles.
+
+- Service Principal object - The Service Principal object is what you see in the Enterprise Registration window in Azure AD. The object ID is used to grant EA roles to the SPN.
1. Open Azure Active Directory, and then select **Enterprise applications**. 1. Find your app in the list.
Later in this article, you'll give permission to the Azure AD app to act by usin
| DepartmentReader | Download the usage details for the department they administer. Can view the usage and charges associated with their department. | db609904-a47f-4794-9be8-9bd86fbffd8a |
| SubscriptionCreator | Create new subscriptions in the given scope of Account. | a0bcee42-bf30-4d1b-926a-48d21664ef71 |

-- An EnrollmentReader role can be assigned to an SPN only by a user who has an enrollment writer role.
+- An EnrollmentReader role can be assigned to an SPN only by a user who has an enrollment writer role. The EnrollmentReader role assigned to an SPN isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
- A DepartmentReader role can be assigned to an SPN only by a user who has an enrollment writer or department writer role.
- A SubscriptionCreator role can be assigned to an SPN only by a user who is the owner of the enrollment account (EA administrator). The role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
- The EA purchaser role isn't shown in the EA portal. It's created by programmatic means and is only for programmatic use.
+When you grant an EA role to an SPN, you must use the `billingRoleAssignmentName` required property. The parameter is a unique GUID that you must provide. You can generate a GUID using the [New-Guid](/powershell/module/microsoft.powershell.utility/new-guid) PowerShell command. You can also use the [Online GUID / UUID Generator](https://guidgenerator.com/) website to generate a unique GUID.
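On Linux and macOS shells, a minimal sketch of generating that GUID is shown below; `uuidgen` is one common option alongside the `New-Guid` PowerShell command mentioned above:

```bash
# Sketch: generate a unique GUID to use as the billingRoleAssignmentName
# property when granting an EA role to an SPN.
billingRoleAssignmentName=$(uuidgen)
echo "$billingRoleAssignmentName"
```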
+ An SPN can have only one role. ## Assign enrollment account role permission to the SPN
cost-management-billing Cancel Azure Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/cancel-azure-subscription.md
tags: billing
Previously updated : 01/09/2023 Last updated : 01/17/2023
If you cancel an Azure Support plan, you're billed for the rest of the month. Ca
## Who can cancel a subscription?
-The table below describes the permission required to cancel a subscription.
+The following table describes the permission required to cancel a subscription.
|Subscription type |Who can cancel | |||
Depending on your subscription type, you may not be able to delete a subscriptio
- If **Delete resources** doesn't display a green check mark, then you have resources that must be deleted in order to delete the subscription. You can select **View resources** to navigate to the Resources page to manually delete the resources. After resource deletion, you might need to wait 10 minutes for resource deletion status to update in order to delete the subscription.
- If **Manual deletion date** doesn't display a green check mark, you must wait the required period before you can delete the subscription.

->[!NOTE]
-> 90 days after you cancel a subscription, the subscription is automatically deleted.
+> - 90 days after you cancel a subscription, the subscription is automatically deleted.
+> - If you have deleted all resources but the Delete your subscription page shows that you still have active resources, you might have active *hidden resources*. You can't delete a subscription if you have active hidden resources. To delete them, navigate to **Subscriptions** > select the subscription > **Resources**. At the top of the page, select **Manage view** and then select **Show hidden types**. Then, delete the resources.
++ ## Reactivate a subscription
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
tags: billing
Previously updated : 11/08/2022 Last updated : 01/09/2023
If you're not automatically approved, you can submit a request to Azure support
- Add your billing contact information in the Azure portal before the credit limit can be approved. The contact details should be related to the company's Accounts Payable or Finance department.

1. Verify your contact information and preferred contact method, and then select **Create**.
-¹ If you don't know your Commerce Account ID, it's the GUID ID shown on the Properties page for your billing account. To view your Commerce Account ID in the Azure portal, navigate to **Cost Management** > select a billing scope > in the left menu, select **Properties**. On the billing scope Properties page, notice the GUID ID value. It's your Commerce Account ID.
+¹ If you don't know your Commerce Account ID, it's the GUID ID shown on the Properties page for your billing account. To view it in the Azure portal:
+
+ 1. Go to the Azure home page. Search for **Cost Management** and select it (not Cost Management + Billing). It's a green hexagon-shaped symbol.
+ 1. You should see the overview page. If you don't see Properties in the left menu, at the top of the page under Scope, select **Go to billing account**.
+ 1. In the left menu, select **Properties**. On the properties page you should see your billing account ID shown as a GUID ID value. It's your Commerce Account ID.
If we need to run a credit check because of the amount of credit that you need, we'll send you a credit check application. We might ask you to provide your company's audited financial statements. If no financial information is provided or if the information isn't strong enough to support the amount of credit limit required, we might ask for a security deposit or a standby letter of credit to approve your credit check request.
cost-management-billing Reservation Amortization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/reservation-amortization.md
Previously updated : 12/06/2022 Last updated : 01/20/2023
In Cost analysis, you view costs with a metric. They include Actual cost and Amo
**Actual cost** - Shows the purchase as it appears on your bill. For example, if you bought a one-year reservation for $1200 in January 2022, cost analysis shows a $1200 cost in the month of January for the reservation. It doesn't show a reservation cost for other months of the year. If you group your actual costs by VM, then a VM that received the reservation benefit for a given month would have zero cost for the month.
-**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. With the same example above, cost analysis shows a different amount for each month depending on the number of days in the month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. However, _used reservation_ costs are attributed to the subscription used to buy the reservation because the unused portion isn't attributable to any specific resource or subscription.
+**Amortized cost** - Shows a reservation purchase split as an amortized cost over the duration of the reservation term. With the same example above, cost analysis shows a different amount for each month depending on the number of days in the month. If you group costs by VM in this example, you'd see cost attributed to each VM that received the reservation benefit. However, _unused reservation_ costs are attributed to the subscription used to buy the reservation because the unused portion isn't attributable to any specific resource or subscription.
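To make the month-to-month difference concrete with the same figures: a $1200 one-year reservation amortizes to roughly $3.29 per day ($1200 / 365), so cost analysis would show about $101.92 for a 31-day month like January and about $92.05 for a 28-day February.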
## View amortized costs
data-factory Author Management Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/author-management-hub.md
Previously updated : 04/27/2021 Last updated : 01/18/2023 # Management hub in Azure Data Factory
data-factory Azure Ssis Integration Runtime Package Store https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/azure-ssis-integration-runtime-package-store.md
Previously updated : 10/22/2021 Last updated : 01/20/2023 # Manage packages with Azure-SSIS Integration Runtime package store
data-factory Concepts Data Flow Manage Graph https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/concepts-data-flow-manage-graph.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Managing the mapping data flow graph
data-factory Connector Amazon Marketplace Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-amazon-marketplace-web-service.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Amazon Marketplace Web Service using Azure Data Factory or Synapse Analytics
data-factory Connector Concur https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-concur.md
Title: Copy data from Concur (Preview)
description: Learn how to copy data from Concur to supported sink data stores using a copy activity in an Azure Data Factory or Synapse Analytics pipeline. - Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Concur using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Couchbase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-couchbase.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Couchbase using Azure Data Factory (Preview)
data-factory Connector Hubspot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-hubspot.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from HubSpot using Azure Data Factory or Synapse Analytics
data-factory Connector Informix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-informix.md
Previously updated : 11/09/2021 Last updated : 01/18/2023
data-factory Connector Jira https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-jira.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from Jira using Azure Data Factory or Synapse Analytics
data-factory Connector Magento https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-magento.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Magento using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Marketo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-marketo.md
Previously updated : 09/09/2021 Last updated : 01/20/2023
data-factory Connector Microsoft Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-microsoft-access.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from and to Microsoft Access using Azure Data Factory or Synapse Analytics
data-factory Connector Mongodb Atlas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-mongodb-atlas.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from or to MongoDB Atlas using Azure Data Factory or Synapse Analytics
data-factory Connector Netezza https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-netezza.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Netezza by using Azure Data Factory or Synapse Analytics
data-factory Connector Oracle Cloud Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-cloud-storage.md
Previously updated : 12/13/2021 Last updated : 01/18/2023
data-factory Connector Oracle Eloqua https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-eloqua.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Oracle Eloqua using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Oracle Service Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-oracle-service-cloud.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from Oracle Service Cloud using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Presto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-presto.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Presto using Azure Data Factory or Synapse Analytics
data-factory Connector Quickbooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-quickbooks.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from QuickBooks Online using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Shopify https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-shopify.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from Shopify using Azure Data Factory or Synapse Analytics (Preview)
data-factory Connector Spark https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-spark.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from Spark using Azure Data Factory or Synapse Analytics
data-factory Connector Sybase https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-sybase.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Sybase using Azure Data Factory or Synapse Analytics
data-factory Connector Teradata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-teradata.md
Previously updated : 09/09/2021 Last updated : 01/18/2023
data-factory Connector Troubleshoot Db2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-db2.md
Previously updated : 10/01/2021 Last updated : 01/20/2023
data-factory Connector Troubleshoot Delimited Text https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-delimited-text.md
Previously updated : 10/01/2021 Last updated : 01/18/2023
data-factory Connector Troubleshoot Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-oracle.md
Previously updated : 10/01/2021 Last updated : 01/18/2023
data-factory Connector Troubleshoot Postgresql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-postgresql.md
Previously updated : 10/01/2021 Last updated : 01/20/2023
data-factory Connector Troubleshoot Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-rest.md
Previously updated : 10/01/2021 Last updated : 01/18/2023
data-factory Connector Web Table https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-web-table.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from Web table by using Azure Data Factory or Synapse Analytics
data-factory Connector Xero https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-xero.md
Previously updated : 09/09/2021 Last updated : 01/18/2023 # Copy data from Xero using Azure Data Factory or Synapse Analytics
data-factory Connector Zoho https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-zoho.md
Previously updated : 09/09/2021 Last updated : 01/20/2023 # Copy data from Zoho using Azure Data Factory or Synapse Analytics (Preview)
data-factory Control Flow Execute Pipeline Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/control-flow-execute-pipeline-activity.md
The master pipeline forwards these values to the invoked pipeline as shown in th
} ```+
+> [!WARNING]
+> The Execute Pipeline activity passes array parameters as strings to the child pipeline. This is because the payload is passed from the parent pipeline to the child as a string. You can see it when you check the input passed to the child pipeline. For more details, see this [section](./data-factory-troubleshoot-guide.md#execute-pipeline-passes-array-parameter-as-string-to-the-child-pipeline).
+ ## Next steps See other supported control flow activities:
data-factory Create Azure Ssis Integration Runtime Deploy Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/create-azure-ssis-integration-runtime-deploy-packages.md
description: Learn how to deploy and run SSIS packages in Azure Data Factory wit
Previously updated : 10/22/2021 Last updated : 01/20/2023
data-factory Memory Optimized Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/memory-optimized-compute.md
Previously updated : 11/12/2021 Last updated : 01/18/2023 # Memory optimized compute type for Data Flows in Azure Data Factory and Azure Synapse
data-factory Monitor Logs Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-logs-rest.md
Previously updated : 09/02/2021 Last updated : 01/18/2023 # Set up diagnostic logs via the Azure Monitor REST API
data-factory Monitor Ssis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/monitor-ssis.md
Previously updated : 09/02/2021 Last updated : 01/18/2023 # Monitor SSIS operations with Azure Monitor
data-factory Transform Data Synapse Spark Job Definition https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/transform-data-synapse-spark-job-definition.md
+
+ Title: Transform data with Synapse Spark job definition
+
+description: Learn how to process or transform data by running a Synapse Spark job definition in Azure Data Factory and Synapse Analytics pipelines.
++++++ Last updated : 07/12/2022++
+# Transform data by running a Synapse Spark job definition
+
+The Azure Synapse Spark job definition Activity in a [pipeline](concepts-pipelines-activities.md) runs a Synapse Spark job definition in your Azure Synapse Analytics workspace. This article builds on the [data transformation activities](transform-data.md) article, which presents a general overview of data transformation and the supported transformation activities.
+
+## Set Apache Spark job definition canvas
+
+To use a Spark job definition activity for Synapse in a pipeline, complete the following steps:
+
+## General settings
+
+1. Search for _Spark job definition_ in the pipeline Activities pane, and drag a Spark job definition activity from the _Synapse_ section to the pipeline canvas.
+
+2. Select the new Spark job definition activity on the canvas if it isn't already selected.
+
+3. In the **General** tab, enter sample for Name.
+
+4. (Option) You can also enter a description.
+
+5. Timeout: Maximum amount of time an activity can run. Default is seven days, which is also the maximum amount of time allowed. Format is in D.HH:MM:SS.
+
+6. Retry: Maximum number of retry attempts.
+
+7. Retry interval: The number of seconds between each retry attempt.
+
+8. Secure output: When checked, output from the activity won't be captured in logging.
+
+9. Secure input: When checked, input from the activity won't be captured in logging.
+
+## Azure Synapse Analytics (Artifacts) settings
+
+1. Select the new Spark job definition activity on the canvas if it isn't already selected.
+
+2. Select the **Azure Synapse Analytics (Artifacts)** tab to select or create a new Azure Synapse Analytics linked service that will execute the Spark job definition activity.
+
+
+ :::image type="content" source="./media/transform-data-synapse-spark-job-definition/spark-job-definition-activity.png" alt-text="Screenshot that shows the UI for the linked service tab for a spark job definition activity.":::
+
+## Settings tab
+
+1. Select the new Spark job definition activity on the canvas if it isn't already selected.
+
+2. Select the **Settings** tab.
+
+3. Expand the Spark job definition list, and select an existing Apache Spark job definition in the linked Azure Synapse Analytics workspace.
+
+4. (Optional) You can fill in information for the Apache Spark job definition. If the following settings are left empty, the settings of the Spark job definition itself are used to run the job; if they aren't empty, they replace the settings of the Spark job definition itself.
+
+ | Property | Description |
+ | -- | -- |
+ |Main definition file| The main file used for the job. Select a PY/JAR/ZIP file from your storage. You can select **Upload file** to upload the file to a storage account. <br> Sample: `abfss://…/path/to/wordcount.jar`|
+ | References from subfolders | Subfolders under the root folder of the main definition file are scanned, and the files found are added as reference files. The folders named "jars", "pyFiles", "files" or "archives" are scanned, and the folder names are case-sensitive. |
+ |Main class name| The fully qualified identifier of the main class that is in the main definition file. <br> Sample: `WordCount`|
+ |Command-line arguments| You can add command-line arguments by selecting the **New** button. Note that adding command-line arguments overrides the command-line arguments defined by the Spark job definition. <br> *Sample: `abfss://…/path/to/shakespeare.txt` `abfss://…/path/to/result`* <br> |
+ |Apache Spark pool| You can select Apache Spark pool from the list.|
+ |Python code reference| Additional Python code files used for reference in the main definition file. <br> It supports passing files (.py, .py3, .zip) to the "pyFiles" property, and overrides the "pyFiles" property defined in the Spark job definition. <br>|
+ |Reference files | Additional files used for reference in the main definition file. |
+ |Dynamically allocate executors| This setting maps to the dynamic allocation property in Spark configuration for Spark Application executors allocation.|
+ |Min executors| Min number of executors to be allocated in the specified Spark pool for the job.|
+ |Max executors| Max number of executors to be allocated in the specified Spark pool for the job.|
+ |Driver size| Number of cores and memory to be used for driver given in the specified Apache Spark pool for the job.|
+ |Spark configuration| Specify values for the Spark configuration properties listed in the topic: Spark Configuration - Application properties. You can use the default configuration or a customized configuration. |
+
+ :::image type="content" source="./media/transform-data-synapse-spark-job-definition/spark-job-definition-activity-settings.png" alt-text="Screenshot that shows the UI for the spark job definition activity.":::
+
+5. You can add dynamic content by clicking the **Add Dynamic Content** button or by pressing the shortcut key <kbd>Alt</kbd>+<kbd>Shift</kbd>+<kbd>D</kbd>. In the **Add Dynamic Content** page, you can use any combination of expressions, functions, and system variables to add to dynamic content.
+
+ :::image type="content" source="./media/transform-data-synapse-spark-job-definition/spark-job-definition-activity-add-dynamic-content.png" alt-text="Screenshot that displays the UI for adding dynamic content to Spark job definition activities.":::
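+
+ For example, a command-line argument can be computed at run time from pipeline parameters. A minimal sketch of how such an expression might appear in the authored JSON, assuming the arguments serialize to an `args` property and assuming an illustrative `storageAccount` pipeline parameter that isn't from this article:
+
+ ```json
+ "args": [
+     "@concat('abfss://data@', pipeline().parameters.storageAccount, '.dfs.core.windows.net/input/shakespeare.txt')"
+ ]
+ ```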
+
+## User properties tab
+
+You can add properties for the Apache Spark job definition activity in this panel.
++
+## Azure Synapse spark job definition activity definition
+
+Here's a sample JSON definition of an Azure Synapse Spark job definition activity:
+
+```json
+ {
+ "activities": [
+ {
+ "name": "Spark job definition1",
+ "type": "SparkJob",
+ "dependsOn": [],
+ "policy": {
+ "timeout": "7.00:00:00",
+ "retry": 0,
+ "retryIntervalInSeconds": 30,
+ "secureOutput": false,
+ "secureInput": false
+ },
+ "typeProperties": {
+ "sparkJob": {
+ "referenceName": {
+ "value": "Spark job definition 1",
+ "type": "Expression"
+ },
+ "type": "SparkJobDefinitionReference"
+ }
+ },
+ "linkedServiceName": {
+ "referenceName": "AzureSynapseArtifacts1",
+ "type": "LinkedServiceReference"
+ }
+ }
+ ]
+ }
+```
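+
+The sample above only references the Spark job definition. When you override settings from the **Settings** tab, the overrides are serialized into `typeProperties` as well. The following is a hedged sketch of what that can look like; the property names (`file`, `className`, `args`, `targetBigDataPool`, `numExecutors`, `conf`) and all values are illustrative assumptions mapped from the settings table above, not copied from this article:
+
+```json
+"typeProperties": {
+    "sparkJob": {
+        "referenceName": "Spark job definition 1",
+        "type": "SparkJobDefinitionReference"
+    },
+    "file": "abfss://jobs@<storage-account>.dfs.core.windows.net/path/to/wordcount.jar",
+    "className": "WordCount",
+    "args": [
+        "abfss://jobs@<storage-account>.dfs.core.windows.net/path/to/shakespeare.txt"
+    ],
+    "targetBigDataPool": {
+        "referenceName": "SparkPool1",
+        "type": "BigDataPoolReference"
+    },
+    "numExecutors": 2,
+    "conf": {
+        "spark.dynamicAllocation.enabled": "false"
+    }
+}
+```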
+
+## Azure Synapse Spark job definition properties
+
+The following table describes the JSON properties used in the JSON
+definition:
+
+|Property|Description|Required|
+||||
+|name|Name of the activity in the pipeline.|Yes|
+|description|Text describing what the activity does.|No|
+|type|For the Azure Synapse Spark job definition activity, the activity type is SparkJob.|Yes|
+
+## See Azure Synapse Spark job definition activity run history
+
+Go to **Pipeline runs** under the **Monitor** tab to see the pipeline you've triggered. Open the pipeline that contains the Azure Synapse Spark job definition activity to see its run history.
++
+You can see the activity's **input** or **output** by selecting the **Input** or **Output** button. If your pipeline failed with a user error, select **Output** and check the **result** field to see the detailed user error traceback.
++
data-factory Tutorial Transform Data Hive Virtual Network Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/tutorial-transform-data-hive-virtual-network-portal.md
Previously updated : 06/07/2021 Last updated : 01/20/2023 # Transform data in Azure Virtual Network using Hive activity in Azure Data Factory using the Azure portal
If you don't have an Azure subscription, create a [free](https://azure.microsoft
## Create a data factory
-1. Launch **Microsoft Edge** or **Google Chrome** web browser. Currently, Data Factory UI is supported only in Microsoft Edge and Google Chrome web browsers.
-1. Log in to the [Azure portal](https://portal.azure.com/).
-2. Click **New** on the left menu, click **Data + Analytics**, and click **Data Factory**.
-
- :::image type="content" source="./media/tutorial-transform-data-using-hive-in-vnet-portal/new-data-factory-menu.png" alt-text="New->DataFactory":::
-3. In the **New data factory** page, enter **ADFTutorialHiveFactory** for the **name**.
-
- :::image type="content" source="./media/tutorial-transform-data-using-hive-in-vnet-portal/new-azure-data-factory.png" alt-text="New data factory page":::
-
- The name of the Azure data factory must be **globally unique**. If you receive the following error, change the name of the data factory (for example, yournameMyAzureSsisDataFactory) and try creating again. See [Data Factory - Naming Rules](naming-rules.md) article for naming rules for Data Factory artifacts.
-
- *Data factory name "MyAzureSsisDataFactory" is not available*
-3. Select your Azure **subscription** in which you want to create the data factory.
-4. For the **Resource Group**, do one of the following steps:
-
- - Select **Use existing**, and select an existing resource group from the drop-down list.
- - Select **Create new**, and enter the name of a resource group.
-
- To learn about resource groups, see [Using resource groups to manage your Azure resources](../azure-resource-manager/management/overview.md).
-4. Select **V2** for the **version**.
-5. Select the **location** for the data factory. Only locations that are supported for creation of data factories are shown in the list.
-6. Select **Pin to dashboard**.
-7. Click **Create**.
-8. On the dashboard, you see the following tile with status: **Deploying data factory**.
-
- :::image type="content" source="media/tutorial-transform-data-using-hive-in-vnet-portal/deploying-data-factory.png" alt-text="deploying data factory tile":::
-9. After the creation is complete, you see the **Data Factory** page as shown in the image.
-
- :::image type="content" source="./media/tutorial-transform-data-using-hive-in-vnet-portal/data-factory-home-page.png" alt-text="Data factory home page":::
-10. Click **Author & Monitor** to launch the Data Factory User Interface (UI) in a separate tab.
-11. In the home page, switch to the **Manage** tab in the left panel as shown in the following image:
+1. If you have not created your data factory yet, follow the steps in [Quickstart: Create a data factory by using the Azure portal and Azure Data Factory Studio](quickstart-create-data-factory-portal.md) to create one. After creating it, browse to the data factory in the Azure portal.
+
+ :::image type="content" source="./media/doc-common-process/data-factory-home-page.png" alt-text="Screenshot of home page for the Azure Data Factory, with the Open Azure Data Factory Studio tile.":::
- :::image type="content" source="media/doc-common-process/get-started-page-manage-button.png" alt-text="Screenshot that shows the Manage tab.":::
+1. Select **Open** on the **Open Azure Data Factory Studio** tile to launch the Data Integration application in a separate tab.
## Create a self-hosted integration runtime As the Hadoop cluster is inside a virtual network, you need to install a self-hosted integration runtime (IR) in the same virtual network. In this section, you create a new VM, join it to the same virtual network, and install self-hosted IR on it. The self-hosted IR allows Data Factory service to dispatch processing requests to a compute service such as HDInsight inside a virtual network. It also allows you to move data to/from data stores inside a virtual network to Azure. You use a self-hosted IR when the data store or compute is in an on-premises environment as well.
You performed the following steps in this tutorial:
Advance to the following tutorial to learn about transforming data by using a Spark cluster on Azure: > [!div class="nextstepaction"]
->[Branching and chaining Data Factory control flow](tutorial-control-flow-portal.md)
+>[Branching and chaining Data Factory control flow](tutorial-control-flow-portal.md)
data-factory Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/whats-new.md
This page is updated monthly, so revisit it regularly. For older months' update
Check out our [What's New video archive](https://www.youtube.com/playlist?list=PLt4mCx89QIGS1rQlNt2-7iuHHAKSomVLv) for all of our monthly update videos.
+## December 2022
+
+### Data flow
+
+SQL change data capture (CDC) incremental extract - supports numeric columns in mapping dataflow
+
+### Data movement
+
+Express virtual network injection for SSIS in Azure Data Factory is generally available [Learn more](https://techcommunity.microsoft.com/t5/sql-server-integration-services/general-availability-of-express-virtual-network-injection-for/ba-p/3699993)
+
+### Region expansion
+
+Continued region expansion - Azure Data Factory is now available in China North 3 [Learn more](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=data-factory)
+ ## November 2022
ddos-protection Ddos Protection Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ddos-protection/ddos-protection-overview.md
Azure DDoS Protection, combined with application design best practices, provides
:::image type="content" source="./media/ddos-best-practices/ddos-protection-overview-architecture.png" alt-text="Diagram of the reference architecture for a DDoS protected PaaS web application."::: ## Region Availability
-DDoS IP Protection is currently available in the following regions.
-
-| Americas | Europe | Middle East | Africa | Asia Pacific |
-||-||--||
-| West Central US | France Central | UAE Central | South Africa North | Australia Central |
-| North Central US | Germany West Central | Qatar Central | | Korea Central |
-| West US | Switzerland North | | | Japan East |
-| West US 3 | France South | | | West India |
-| | Norway East | | | Jio India Central |
-| | Sweden Central | | | Australia Central 2 |
-| | Germany North | | | |
--
+DDoS IP Protection is currently not available in the East US 2 and West Europe regions.
## Key benefits
defender-for-cloud Continuous Export https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/continuous-export.md
description: Learn how to configure continuous export of security alerts and rec
Previously updated : 11/30/2022 Last updated : 01/19/2023 # Continuously export Microsoft Defender for Cloud data
To export data to an Azure Event hub or Log Analytics workspace in a different t
You can also configure export to another tenant through the REST API. For more information, see the automations [REST API](/rest/api/defenderforcloud/automations/create-or-update?tabs=HTTP).
+## Continuously export to an Event Hub behind a firewall
+
+You can enable continuous export as a trusted service, so that you can send data to an Event Hub that has an Azure Firewall enabled.
+
+**To grant access to continuous export as a trusted service**:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. Navigate to **Microsoft Defender for Cloud** > **Environmental settings**.
+
+1. Select the relevant resource.
+
+1. Select **Continuous export**.
+
+1. Select **Export as a trusted service**.
+
+ :::image type="content" source="media/continuous-export/export-as-trusted.png" alt-text="Screenshot that shows where the checkbox is located to select export as trusted service.":::
+
+You'll now need to add the relevant role assignment on the destination Event Hub.
+
+**To add the relevant role assignment on the destination Event Hub**:
+
+1. Navigate to the selected Event Hub.
+
+1. Select **Access Control** > **Add role assignment**.
+
+ :::image type="content" source="media/continuous-export/add-role-assignment.png" alt-text="Screenshot that shows where the add role assignment button is found." lightbox="media/continuous-export/add-role-assignment.png":::
+
+1. Select **Azure Event Hubs Data Sender**.
+
+1. Select the **Members** tab.
+
+1. Select **+ Select members**.
+
+1. Search for and select **Windows Azure Security Resource Provider**.
+
+ :::image type="content" source="media/continuous-export/windows-security-resource.png" alt-text="Screenshot that shows you where to enter and search for Windows Azure Security Resource Provider." lightbox="media/continuous-export/windows-security-resource.png":::
+
+1. Select **Review + assign**.
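+
+If you automate this grant instead of using the portal, it can be expressed as an ARM role assignment. A minimal sketch with placeholders for the **Azure Event Hubs Data Sender** role definition ID and for the object ID of the **Windows Azure Security Resource Provider** service principal in your tenant — both are placeholders you must look up, as neither value comes from this article:
+
+```json
+{
+    "type": "Microsoft.Authorization/roleAssignments",
+    "apiVersion": "2022-04-01",
+    "name": "[guid(resourceGroup().id, 'export-as-trusted-service')]",
+    "scope": "[resourceId('Microsoft.EventHub/namespaces', parameters('eventHubNamespace'))]",
+    "properties": {
+        "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '<data-sender-role-definition-id>')]",
+        "principalId": "<security-resource-provider-object-id>"
+    }
+}
+```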
+ ## View exported alerts and recommendations in Azure Monitor You might also choose to view exported Security Alerts and/or recommendations in [Azure Monitor](../azure-monitor/alerts/alerts-overview.md).
defender-for-cloud Episode Twenty Five https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-five.md
+
+ Title: AWS ECR coverage in Defender for Containers | Defender for Cloud in the field
+
+description: Learn about AWS ECR coverage in Defender for Containers
+ Last updated : 01/18/2023++
+# AWS ECR Coverage in Defender for Containers | Defender for Cloud in the field
+
+**Episode description**: In this episode of Defender for Cloud in the Field, Tomer Spivak joins Yuri Diogenes to talk about the new AWS ECR coverage in Defender for Containers. Tomer explains how Defender for Containers performs vulnerability assessment for ECR workloads in AWS and how to enable this capability. Tomer demonstrates the user experience in Defender for Cloud, showing the vulnerability findings in the dashboard and the onboarding process.
+<br>
+<br>
+<iframe src="https://aka.ms/docs/player?id=919f847f-4b19-4440-aede-a0917e1d7019" width="1080" height="530" allowFullScreen="true" frameBorder="0"></iframe>
+
+- [00:00](/shows/mdc-in-the-field/aws-ecr#time=00m00s) - Intro
+- [01:44](/shows/mdc-in-the-field/aws-ecr#time=01m44s) - Introducing AWS ECR coverage
+- [03:38](/shows/mdc-in-the-field/aws-ecr#time=03m38s) - How new repos or images are discovered after the initial assessment
+- [04:22](/shows/mdc-in-the-field/aws-ecr#time=04m22s) - Scanning frequency
+- [07:33](/shows/mdc-in-the-field/aws-ecr#time=07m33s) - Demonstration
++
+## Recommended resources
+ - [Learn more](defender-for-containers-vulnerability-assessment-elastic.md) about AWS ECR Coverage in Defender for Containers.
+ - Subscribe to [Microsoft Security on YouTube](https://www.youtube.com/playlist?list=PL3ZTgFEc7LysiX4PfHhdJPR7S8mGO14YS)
+ - Join our [Tech Community](https://aka.ms/SecurityTechCommunity)
+ - For more about [Microsoft Security](https://msft.it/6002T9HQY)
+
+- Follow us on social media:
+
+ - [LinkedIn](https://www.linkedin.com/showcase/microsoft-security/)
+ - [Twitter](https://twitter.com/msftsecurity)
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
defender-for-cloud Episode Twenty Four https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/episode-twenty-four.md
Title: Enhancements in Defender for SQL vulnerability assessment | Defender for
description: Learn about Enhancements in Defender for SQL Vulnerability Assessment Previously updated : 01/05/2023 Last updated : 01/18/2023 # Enhancements in Defender for SQL vulnerability assessment | Defender for Cloud in the field
Last updated 01/05/2023
## Next steps > [!div class="nextstepaction"]
-> [New AWS Connector in Microsoft Defender for Cloud](episode-one.md)
+> [AWS ECR Coverage in Defender for Containers](episode-twenty-five.md)
defender-for-cloud Integration Defender For Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/integration-defender-for-endpoint.md
If you've never enabled the integration for Windows, the **Allow Microsoft Defen
In addition, in the Azure portal you'll see a new Azure extension on your machines called `MDE.Linux`.
-## Enable the MDE unified solution at scale
+### Enable the MDE unified solution at scale
You can also enable the MDE unified solution at scale through the supplied REST API version 2022-05-01. For full details, see the [API documentation](/rest/api/defenderforcloud/settings/update?tabs=HTTP).
URI: `https://management.azure.com/subscriptions/<subscriptionId>/providers/Micr
} ```
+## Track MDE deployment status
+
+You can use the [Defender for Endpoint deployment status workbook](https://github.com/Azure/Microsoft-Defender-for-Cloud/tree/main/Workbooks/Defender%20for%20Endpoint%20Deployment%20Status) to track the MDE deployment status on your Azure VMs and non-Azure machines that are connected via Azure Arc. The interactive workbook provides an overview of machines in your environment showing their Microsoft Defender for Endpoint extension deployment status.
+ ## Access the Microsoft Defender for Endpoint portal 1. Ensure the user account has the necessary permissions. Learn more in [Assign user access to Microsoft Defender Security Center](/microsoft-365/security/defender-endpoint/assign-portal-access).
defender-for-cloud Powershell Sample Vulnerability Assessment Baselines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/powershell-sample-vulnerability-assessment-baselines.md
$APIVersion = "2022-05-01-preview"
function GetExpressConfigurationStatus($SubscriptionId, $ResourceGroupName, $ServerName){
- $Uri = "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/Defualt?api-version=" + $APIVersion
+ $Uri = "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/sqlVulnerabilityAssessments/Default?api-version=" + $APIVersion
SendRestRequest -Method "GET" -Uri $Uri }
function SetLastScanAsBaselineOnSystemDatabase($SubscriptionId, $ResourceGroupNa
} function SetLastScanAsBaselineOnUserDatabase($SubscriptionId, $ResourceGroupName, $ServerName, $DatabaseName){
- $Uri = "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/defualt/baselines/default?api-version=" + $APIVersion
+ $Uri = "https://management.azure.com/subscriptions/$SubscriptionId/resourceGroups/$ResourceGroupName/providers/Microsoft.Sql/servers/$ServerName/databases/$DatabaseName/sqlVulnerabilityAssessments/default/baselines/default?api-version=" + $APIVersion
$Body = "{properties: {latestScan: true,results: {}}}" SendRestRequest -Method "PUT" -Uri $Uri -Body $Body }
defender-for-cloud Quickstart Onboard Aws https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-aws.md
Title: Connect your AWS account to Microsoft Defender for Cloud description: Defend your AWS resources with Microsoft Defender for Cloud Previously updated : 11/07/2022 Last updated : 01/10/2023 zone_pivot_groups: connect-aws-accounts
To protect your AWS-based resources, you can connect an AWS account with either:
- [**Microsoft Defender for Containers**](defender-for-containers-introduction.md) brings threat detection and advanced defenses to [supported Amazon EKS clusters](supported-machines-endpoint-solutions-clouds-containers.md). - [**Microsoft Defender for SQL**](defender-for-sql-introduction.md) brings threat detection and advanced defenses to your SQL Servers running on AWS EC2, AWS RDS Custom for SQL Server. -- **Classic cloud connector** - Requires configuration in your AWS account to create a user that Defender for Cloud can use to connect to your AWS environment. If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors), and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations.
+- **Classic cloud connector** - Requires configuration in your AWS account to create a user that Defender for Cloud can use to connect to your AWS environment.
+
+> [!NOTE]
+> The option to select the classic connector is only available if you previously onboarded an AWS account using the classic connector.
+>
+> If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors), and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations.
For a reference list of all the recommendations Defender for Cloud can provide for AWS resources, see [Security recommendations for AWS resources - a reference guide](recommendations-reference-aws.md).
defender-for-cloud Quickstart Onboard Devops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-devops.md
API calls performed by Defender for Cloud count against the [Azure DevOps Global
|--|--| | Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability. | | Pricing: | For pricing, see the Defender for Cloud [pricing page](https://azure.microsoft.com/pricing/details/defender-for-cloud/?v=17.23h#pricing). |
-| Required permissions: | **- Azure account:** with permissions to sign into Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in Azure DevOps <br> - In Azure DevOps, configure: Third-party applications gain access via OAuth, which must be set to `On` . [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops)|
+| Required permissions: | **- Azure account:** with permissions to sign into Azure portal <br> **- Contributor:** on the Azure subscription where the connector will be created <br> **- Security Admin Role:** in Defender for Cloud <br> **- Organization Administrator:** in Azure DevOps <br> **- Basic or Basic + Test Plans Access Level:** in Azure DevOps. <br> - In Azure DevOps, configure: Third-party applications gain access via OAuth, which must be set to `On` . [Learn more about OAuth](/azure/devops/organizations/accounts/change-application-access-policies?view=azure-devops)|
| Regions: | Central US | | Clouds: | :::image type="icon" source="media/quickstart-onboard-github/check-yes.png" border="false"::: Commercial clouds <br> :::image type="icon" source="media/quickstart-onboard-github/x-no.png" border="false"::: National (Azure Government, Azure China 21Vianet) |
defender-for-cloud Quickstart Onboard Gcp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/quickstart-onboard-gcp.md
Title: Connect your GCP project to Microsoft Defender for Cloud description: Monitoring your GCP resources from Microsoft Defender for Cloud Previously updated : 09/20/2022 Last updated : 01/10/2023 zone_pivot_groups: connect-gcp-accounts
To protect your GCP-based resources, you can connect a GCP project with either:
- **Classic cloud connector** - Requires configuration in your GCP project to create a user that Defender for Cloud can use to connect to your GCP environment. If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors) and use the native connector to reconnect to the project. Using both the classic and native connectors can produce duplicate recommendations.
+> [!NOTE]
+> The option to select the classic connector is only available if you previously onboarded a GCP project using the classic connector.
+>
+> If you have classic cloud connectors, we recommend that you [delete these connectors](#remove-classic-connectors), and use the native connector to reconnect to the account. Using both the classic and native connectors can produce duplicate recommendations.
+ :::image type="content" source="./media/quickstart-onboard-gcp/gcp-account-in-overview.png" alt-text="Screenshot of GCP projects shown in Microsoft Defender for Cloud's overview dashboard." lightbox="media/quickstart-onboard-gcp/gcp-account-in-overview.png"::: ::: zone pivot="env-settings"
To protect your GCP-based resources, you can connect a GCP project with either:
|Aspect|Details| |-|:-|
-| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. |
+| Release state: | Preview <br> The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include other legal terms that apply to the Azure features that are in beta, preview, or otherwise not yet released into general availability. |
|Pricing:|The **[Defender for SQL](defender-for-sql-introduction.md)** plan is billed at the same price as Azure resources.<br> The **Defender for Servers** plan is billed at the same price as the [Microsoft Defender for Servers](defender-for-servers-introduction.md) plan for Azure machines. If a GCP VM instance doesn't have the Azure Arc agent deployed, you won't be charged for that machine. <br>The **[Defender for Containers](defender-for-containers-introduction.md)** plan is free during the preview. After which, it will be billed for GCP at the same price as for Azure resources.| |Required roles and permissions:| **Contributor** on the relevant Azure Subscription <br> **Owner** on the GCP organization or project| |Clouds:|:::image type="icon" source="./media/icons/yes-icon.png"::: Commercial clouds<br>:::image type="icon" source="./media/icons/no-icon.png"::: National (Azure Government, Azure China 21Vianet, Other Gov)|
Follow the steps below to create your GCP cloud connector.
1. Toggle the plans you want to connect to **On**. By default all necessary prerequisites and components will be provisioned. (Optional) Learn how to [configure each plan](#optional-configure-selected-plans).
- 1. (**Containers only**) Ensure you have fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan.
+ 1. (**Containers only**) Ensure you've fulfilled the [network requirements](defender-for-containers-enable.md?tabs=defender-for-container-gcp#network-requirements) for the Defender for Containers plan.
1. Select the **Next: Configure access**.
To have full visibility to Microsoft Defender for Servers security content, ensu
> <br><br> Microsoft Defender for Servers does not install the OS config agent to a VM that does not have it installed. However, Microsoft Defender for Servers will enable communication between the OS config agent and the OS config service if the agent is already installed but not communicating with the service. > <br><br> This can change the OS config agent from `inactive` to `active` and will lead to additional costs.
- - **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with Defender for Servers plan enabled that are not connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines.
+ - **Manual installation** - You can manually connect your VM instances to Azure Arc for servers. Instances in projects with Defender for Servers plan enabled that aren't connected to Arc will be surfaced by the recommendation "GCP VM instances should be connected to Azure Arc". Use the "Fix" option offered in this recommendation to install Azure Arc on the selected machines.
- Ensure you've fulfilled the [network requirements for Azure Arc](../azure-arc/servers/network-requirements.md?tabs=azure-cloud). -- Additional extensions should be enabled on the Arc-connected machines.
+- Other extensions should be enabled on the Arc-connected machines.
- Microsoft Defender for Endpoint - VA solution (TVM/ Qualys) - Log Analytics (LA) agent on Arc machines or Azure Monitor agent (AMA). Ensure the selected workspace has security solution installed.
- The LA agent and AMA are currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings with regard to the LA agent and AMA.
+ The LA agent and AMA are currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings regarding the LA agent and AMA.
Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
To have full visibility to Microsoft Defender for Servers security content, ensu
1. On the Select plans screen select **View configuration**.
- :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to click to configure the Servers plan.":::
+ :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to select to configure the Servers plan.":::
1. On the Auto provisioning screen, toggle the switches on or off depending on your need.
To have full visibility to Microsoft Defender for SQL security content, ensure y
> The Arc auto-provisioning process leverages the VM manager on your Google Cloud Platform to enforce policies on your VMs through the OS config agent. A VM with an [Active OS agent](https://cloud.google.com/compute/docs/manage-os#agent-state) will incur a cost according to GCP. Refer to [GCP's technical documentation](https://cloud.google.com/compute/docs/vm-manager#pricing) to see how this may affect your account. > <br><br> Microsoft Defender for Servers does not install the OS config agent to a VM that does not have it installed. However, Microsoft Defender for Servers will enable communication between the OS config agent and the OS config service if the agent is already installed but not communicating with the service. > <br><br> This can change the OS config agent from `inactive` to `active` and will lead to additional costs. -- Additional extensions should be enabled on the Arc-connected machines.
+- Other extensions should be enabled on the Arc-connected machines.
- SQL servers on machines. Ensure the plan is enabled on your subscription. - Log Analytics (LA) agent on Arc machines. Ensure the selected workspace has security solution installed.
- The LA agent and SQL servers on machines plan are currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings and may result in additional charges.
+ The LA agent and SQL servers on machines plan are currently configured in the subscription level, such that all the multicloud accounts and projects (from both AWS and GCP) under the same subscription will inherit the subscription settings and may result in extra charges.
Learn more about [monitoring components](monitoring-components.md) for Defender for Cloud.
To have full visibility to Microsoft Defender for SQL security content, ensure y
1. On the Select plans screen select **Configure**.
- :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to click to configure the Databases plan.":::
+ :::image type="content" source="media/quickstart-onboard-gcp/view-configuration.png" alt-text="Screenshot showing where to select to configure the Databases plan.":::
1. On the Auto provisioning screen, toggle the switches on or off depending on your need.
To have full visibility to Microsoft Defender for SQL security content, ensure y
Microsoft Defender for Containers brings threat detection and advanced defenses to your GCP GKE Standard clusters. To get the full security value out of Defender for Containers and to fully protect GCP clusters, ensure you have the following requirements configured: - **Kubernetes audit logs to Defender for Cloud** - Enabled by default. This configuration is available at a GCP project level only. This provides agentless collection of the audit log data through [GCP Cloud Logging](https://cloud.google.com/logging/) to the Microsoft Defender for Cloud backend for further analysis.-- **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension** - Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in 3 different ways:
+- **Azure Arc-enabled Kubernetes, the Defender extension, and the Azure Policy extension** - Enabled by default. You can install Azure Arc-enabled Kubernetes and its extensions on your GKE clusters in three different ways:
- **(Recommended)** Enable the Defender for Container auto-provisioning at the project level as explained in the instructions below. - Defender for Cloud recommendations, for per cluster installation, which will appear on the Microsoft Defender for Cloud's Recommendations page. Learn how to [deploy the solution to specific clusters](defender-for-containers-enable.md?tabs=defender-for-container-gke#deploy-the-solution-to-specific-clusters). - Manual installation for [Arc-enabled Kubernetes](../azure-arc/kubernetes/quickstart-connect-cluster.md) and [extensions](../azure-arc/kubernetes/extensions.md).
Microsoft Defender for Containers brings threat detection and advanced defenses
1. On the Select plans screen select **Configure**.
- :::image type="content" source="media/quickstart-onboard-gcp/containers-configure.png" alt-text="Screenshot showing where to click to configure the Containers plan.":::
+ :::image type="content" source="media/quickstart-onboard-gcp/containers-configure.png" alt-text="Screenshot showing where to select to configure the Containers plan.":::
1. On the Auto provisioning screen, toggle the switches **On**.
For all the GCP projects in your organization, you must also:
1. Set up **GCP Security Command Center** using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/quickstart-scc-setup). 1. Enable **Security Health Analytics** using [these instructions from the GCP documentation](https://cloud.google.com/security-command-center/docs/how-to-use-security-health-analytics).
-1. Verify that there is data flowing to the Security Command Center.
+1. Verify that there's data flowing to the Security Command Center.
-The instructions for connecting your GCP environment for security configuration follow Google's recommendations for consuming security configuration recommendations. The integration leverages Google Security Command Center and will consume additional resources that might impact your billing.
+The instructions for connecting your GCP environment for security configuration follow Google's recommendations for consuming security configuration recommendations. The integration uses Google Security Command Center and will consume other resources that might impact your billing.
When you first enable Security Health Analytics, it might take several hours for data to be available.
defender-for-cloud Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes-archive.md
The effect for the Key Vault recommendations listed here was changed to "audit":
| Key Vault secrets should have an expiration date | 14257785-9437-97fa-11ae-898cfb24302b | | Key Vault keys should have an expiration date | 1aabfa0d-7585-f9f5-1d92-ecb40291d9f2 | - ### Deprecate API App policies for App Service We deprecated the following policies to corresponding policies that already exist to include API apps:
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
Title: Release notes for Microsoft Defender for Cloud description: A description of what's new and changed in Microsoft Defender for Cloud Previously updated : 01/10/2023 Last updated : 01/19/2023 # What's new in Microsoft Defender for Cloud?
This page is updated frequently, so revisit it often.
To learn about *planned* changes that are coming soon to Defender for Cloud, see [Important upcoming changes to Microsoft Defender for Cloud](upcoming-changes.md). > [!TIP]
-> If you're looking for items older than six months, you'll find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
+> If you're looking for items older than six months, you can find them in the [Archive for What's new in Microsoft Defender for Cloud](release-notes-archive.md).
## January 2023
Updates in January include:
- [New version of the recommendation to find missing system updates (Preview)](#new-version-of-the-recommendation-to-find-missing-system-updates-preview) - [Cleanup of deleted Azure Arc machines in connected AWS and GCP accounts](#cleanup-of-deleted-azure-arc-machines-in-connected-aws-and-gcp-accounts)
+- [Allow continuous export to Event Hub behind a firewall](#allow-continuous-export-to-event-hubs-behind-a-firewall)
### New version of the recommendation to find missing system updates (Preview)
A machine connected to an AWS and GCP account and covered by Defender for Server
Defender for Cloud will now automatically delete Azure Arc machines when those machines are deleted in connected AWS or GCP account.
+### Allow continuous export to Event Hubs behind a firewall
+
+You can now enable the continuous export of alerts and recommendations, as a trusted service to Event Hubs that are protected by an Azure firewall.
+
+You can enable this as the alerts or recommendations are generated or you can define a schedule to send periodic snapshots of all of the new data.
+
+Learn how to enable [continuous export to an Event Hub behind an Azure firewall](continuous-export.md#continuously-export-to-an-event-hub-behind-a-firewall).
+ ## December 2022 Updates in December include:
defender-for-cloud Supported Machines Endpoint Solutions Clouds Containers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/supported-machines-endpoint-solutions-clouds-containers.md
description: Learn about the availability of Microsoft Defender for Cloud contai
Previously updated : 10/24/2022 Last updated : 12/29/2022
The **tabs** below show the features that are available, by environment, for Mic
<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+ > [!NOTE]
-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+> Adding the Defender agent on a cluster with the [ARM64 node pool](../aks/use-multiple-node-pools.md) (or adding ARM64 node pool to a cluster with Defender agent installed) is currently not supported.
### Network restrictions
Outbound proxy without authentication and outbound proxy with basic authenticati
<sup><a name="footnote2"></a>2</sup> To get [Microsoft Defender for Containers](../defender-for-cloud/defender-for-containers-introduction.md) protection for your environments, you'll need to onboard [Azure Arc-enabled Kubernetes](../azure-arc/kubernetes/overview.md) and enable Defender for Containers as an Arc extension.
+For additional requirements for Kubernetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+ > [!NOTE]
-> For additional requirements for Kuberenetes workload protection, see [existing limitations](../governance/policy/concepts/policy-for-kubernetes.md#limitations).
+> Adding the Defender agent on a cluster with [ARM64 node pool](../aks/use-multiple-node-pools.md) (or adding ARM64 node pool to a cluster with Defender agent installed) is currently not supported.
### Supported host operating systems
defender-for-cloud Upcoming Changes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/upcoming-changes.md
Title: Important changes coming to Microsoft Defender for Cloud description: Upcoming changes to Microsoft Defender for Cloud that you might need to be aware of and for which you might need to plan Previously updated : 01/16/2023 Last updated : 01/18/2023
You can learn more about [Microsoft Defender for Endpoint onboarding options](in
You can also view the [full list of alerts](alerts-reference.md#defender-for-servers-alerts-to-be-deprecated) that are set to be deprecated. -
+Read the [Microsoft Defender for Cloud blog](https://techcommunity.microsoft.com/t5/microsoft-defender-for-cloud/defender-for-servers-security-alerts-improvements/ba-p/3714175).
## Next steps For all recent changes to Defender for Cloud, see [What's new in Microsoft Defender for Cloud?](release-notes.md).
defender-for-iot Alerts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/alerts.md
# Microsoft Defender for IoT alerts
-Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. Alerts are triggered when OT or Enterprise IoT network sensors detect changes or suspicious activity in network traffic that needs your attention.
+Microsoft Defender for IoT alerts enhance your network security and operations with real-time details about events logged in your network. Alerts are messages that a Defender for IoT engine triggers when OT or Enterprise IoT network sensors detect changes or suspicious activity in network traffic that needs your attention.
For example:
defender-for-iot Architecture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/architecture.md
Defender for IoT provides hybrid network support using the following management
:::image type="content" source="media/release-notes/new-interface.png" alt-text="Screenshot that shows the updated interface." lightbox="media/release-notes/new-interface.png"::: -- **The on-premises management console**. In air-gapped environments, you can get a central view of data from all of your sensors from an on-premises management console. The on-premises management console also lets you organize your network into separate sites and zones to support a [Zero Trust](/security/zero-trust/) mindset, and provides extra maintenance tools and reporting features.
+- **The on-premises management console**. In air-gapped environments, the on-premises management console provides a centralized view and management options for devices and threats detected by connected OT network sensors. The on-premises management console also lets you organize your network into separate sites and zones to support a [Zero Trust](/security/zero-trust/) mindset, and provides extra maintenance tools and reporting features.
## Next steps
defender-for-iot Cli Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/cli-ot-sensor.md
This article lists the CLI commands available from Defender for IoT OT network sensors. + ## Prerequisites Before you can run any of the following CLI commands, you'll need access to the CLI on your OT network sensor as a privileged user.
defender-for-iot Concept Supported Protocols https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/concept-supported-protocols.md
Defender for IoT can detect the following protocols when identifying assets and
|**Microsoft** | Horizon community dissectors<br> Horizon proprietary dissectors (developed by customers) | |**Mitsubishi** | Melsoft / Melsec (Mitsubishi Electric) | |**Omron** | FINS |
+|**OPC** | UA |
|**Oracle** | TDS<br> TNS | |**Rockwell Automation** | ENIP<br> EtherNet/IP CIP (including Rockwell extension)<br> EtherNet/IP CIP FW version 27 and above | |**Schneider Electric** | Modbus/TCP<br> Modbus TCPΓÇôSchneider Unity Extensions<br> OASYS (Schneider Electric Telvant)<br> Schneider TSAA |
defender-for-iot How To Create Data Mining Queries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-create-data-mining-queries.md
Data mining query data is continuously saved until a device is deleted, and is a
To create data mining reports, you must be able to access the OT network sensor you want to generate data for as an **Admin** or **Security Analyst** user.
-For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md)
+For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
## View an OT sensor predefined data mining report
Create your own custom data mining report if you have reporting needs not covere
| **Choose category** | Select the categories to include in your report. | | **Order by** | Select to sort your data by category or by activity. | | **Filter by** | Define a filter for your report using any of the following parameters: <br><br> - **Results within the last**: Enter a number and then select **Minutes**, **Hours**, or **Days** <br> - **IP address / MAC address / Port**: Enter one or more IP addresses, MAC addresses, and ports to filter into your report. Enter a value and then select + to add it to the list.<br> - **Device group**: Select one or mode device groups to filter into your report. |
- | **Add filter type** | Select to add any of the following filter types into your report. <br><br> - Transport (GENERIC) <br> - Protocol (GENERIC) <br> - TAG (GENERIC) <br> - Maximum value (GENERIC) <br> - State (GENERIC) <br> - Minimum value (GENERIC) <br><br> Enter a value in the relevant field and then select + to add it to the list. |
+ | **Add filter type** | Select to add any of the following filter types into your report. <br><br> - Transport (GENERIC) <br> - Protocol (GENERIC) <br> - TAG (GENERIC) <br> - Maximum value (GENERIC) <br> - State (GENERIC) <br> - Minimum value (GENERIC) <br><br> Enter a value in the relevant field and then select + to add it to the list. |
1. Select **Save**. Your data mining report is shown in the **My reports** area. For example:
Sign into an on-premises management console to view [out-of-the-box data mining
**To view a data mining report from an on-premises management console**:
-Sign into your on-premises management console and select
-
-1. **Reports** on the left.
+1. Sign into your on-premises management console and select **Reports** on the left.
1. From the **Sensors** drop-down list, select the sensor for which you want to generate the report.
The page lists the current report data. Select :::image type="icon" source="medi
- Continue creating other reports for more security data from your OT sensor. For more information, see:
- - [Risk assessment reporting](how-to-create-risk-assessment-reports.md)
-
- - [Attack vector reporting](how-to-create-attack-vector-reports.md)
-
- - [Create trends and statistics dashboards](how-to-create-trends-and-statistics-reports.md)
+ - [Risk assessment reporting](how-to-create-risk-assessment-reports.md)
+
+ - [Attack vector reporting](how-to-create-attack-vector-reports.md)
+
+ - [Create trends and statistics dashboards](how-to-create-trends-and-statistics-reports.md)
defender-for-iot How To Enhance Port And Vlan Name Resolution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-enhance-port-and-vlan-name-resolution.md
Title: Enhance port and VLAN name resolution in Defender for IoT
-description: Customize port and VLAN names on your sensors
Previously updated : 01/02/2022
+ Title: Customize port and VLAN names on OT network sensors - Microsoft Defender for IoT
+description: Learn how to customize port and VLAN names on Microsoft Defender for IoT OT network sensors.
Last updated : 01/12/2023
-# Customize port and VLAN names
+# Customize port and VLAN names on OT network sensors
-You can customize port and VLAN names on your sensors to enrich device resolution.
+Enrich device data shown in Defender for IoT by customizing port and VLAN names on your OT network sensors.
-## Customize a port name
+For example, you might want to assign a name to a non-reserved port that shows unusually high activity in order to call it out, or assign a name to a VLAN number to identify it more quickly.
-Microsoft Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. You can customize port names for other ports that Defender for IoT detects. For example, you might assign a name to a non-reserved port because that port shows unusually high activity. Names appear when you view device groups from the device map, or when you create reports that provide port information.
+## Prerequisites
-Customize a name as follows:
+To customize port and VLAN names, you must be able to access the OT network sensor as an **Admin** user.
-1. Select **System Settings**. Under **Network monitoring**, select **Port Naming**.
-2. Select **Add port**.
-3. Enter the port number, select the protocol (TCP, UDP, both) and type in a name.
-4. Select **Save**.
+For more information, see [On-premises users and roles for OT monitoring with Defender for IoT](roles-on-premises.md).
+
+## Customize names of detected ports
+
+Defender for IoT automatically assigns names to most universally reserved ports, such as DHCP or HTTP. However, you might want to customize the name of a specific port to highlight it, such as when you're watching a port with unusually high detected activity.
+
+Port names are shown in Defender for IoT when [viewing device groups from the OT sensor's device map](how-to-work-with-the-sensor-device-map.md#group-highlight-and-filters-tools), or when you create OT sensor reports that include port information.
+
+**To customize a port name:**
+
+1. Sign into your OT sensor as an **Admin** user.
+
+1. Select **System settings** on the left and then, under **Network monitoring**, select **Port Naming**.
+
+1. In the **Port naming** pane that appears, enter the port number you want to name, the port's protocol, and a meaningful name. Supported protocol values include: **TCP**, **UDP**, and **BOTH**.
+
+1. Select **+ Add port** to customize an additional port, and **Save** when you're done.
## Customize a VLAN name
-You can enrich device inventory data with device VLAN numbers and tags.
+VLANs are either discovered automatically by the OT network sensor or added manually. Automatically discovered VLANs can't be edited or deleted, while manually added VLANs can be, and each manually added VLAN requires a unique name. If a VLAN isn't explicitly named, the VLAN's number is shown instead.
-- VLANs support is based on 802.1q (up to VLAN ID 4094). VLANS can be discovered automatically by the sensor or added manually.-- Automatically discovered VLANs can't be edited or deleted. You should add a name to each VLAN, if you don't add a name, the VLAN number will appear when VLAN information is reported.-- When you add a manual VLN, you must add a unique name. These VLANs can be edited and deleted.-- VLAN names can contain up to 50 ASCII characters.
+VLAN support is based on 802.1q (up to VLAN ID 4094).
-## Before you start
-> [!NOTE]
-> VLAN names are not synchronized between the sensor and the management console. You need to define the name on the management console as well.
-For Cisco switches, add the following line to the span configuration: `monitor session 1 destination interface XX/XX encapsulation dot1q`. In that command, *XX/XX* is the name and number of the port.
+VLAN names aren't synchronized between the OT network sensor and the on-premises management console. If you want to view customized VLAN names on the on-premises management console, [define the VLAN names](how-to-manage-the-on-premises-management-console.md#define-vlan-names) there as well.
-To configure VLAN names:
+**To configure VLAN names on an OT network sensor:**
-1. On the side menu, select **System Settings**.
+1. Sign in to your OT sensor as an **Admin** user.
-2. In the **System Settings** window, select **VLAN**.
+1. Select **System Settings** on the left and then, under **Network monitoring**, select **VLAN Naming**.
- :::image type="content" source="media/how-to-enrich-asset-information/edit-vlan.png" alt-text="Use the system settings to edit your VLANs.":::
+1. In the **VLAN naming** pane that appears, enter a VLAN ID and unique VLAN name. VLAN names can contain up to 50 ASCII characters.
-3. Add a unique name next to each VLAN ID.
+1. Select **+ Add VLAN** to customize an additional VLAN, and **Save** when you're done.
+1. **For Cisco switches**: Add the `monitor session 1 destination interface XX/XX encapsulation dot1q` command to the SPAN port configuration, where *XX/XX* is the name and number of the port.
## Next steps
-View enriched device information in various reports:
+> [!div class="nextstepaction"]
+> [Investigate detected devices from the OT sensor device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)
+
+> [!div class="nextstepaction"]
+> [Create sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md)
-- [Investigate sensor detections in a device inventory](how-to-investigate-sensor-detections-in-a-device-inventory.md)-- [Sensor trends and statistics reports](how-to-create-trends-and-statistics-reports.md)-- [Sensor data mining queries](how-to-create-data-mining-queries.md)
+> [!div class="nextstepaction"]
+> [Create sensor data mining queries](how-to-create-data-mining-queries.md)
defender-for-iot How To Manage Individual Sensors https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/how-to-manage-individual-sensors.md
This article describes how to manage individual sensors, such as managing activa
You can also perform some management tasks for multiple sensors simultaneously from the Azure portal or an on-premises management console. For more information, see [Next steps](#next-steps).
+
## View overall sensor status

When you sign in to your sensor, the first page shown is the **Overview** page.
This feature is supported for the following sensor versions:
## Retrieve forensics data stored on the sensor
-Use Defender for IoT data mining reports on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data is stored locally on OT sensors, for devices detected by that sensor:
+Use Defender for IoT data mining reports on an OT network sensor to retrieve forensic data from that sensor's storage. The following types of forensic data are stored locally on OT sensors, for devices detected by that sensor:
- Device data
- Alert data
Clearing data deletes all detected or learned data on the sensor. After clearing
1. In the confirmation dialog box, select **Yes** to confirm that you do want to clear all data from the sensor and reset it. For example:
- :::image type="content" source="media/how-to-manage-individual-sensors/clear-system-data.png" alt-text="Screenshot of clearing system data on the support page in the sensor console.":::
+ :::image type="content" source="media/how-to-manage-individual-sensors/clear-system-data.png" alt-text="Screenshot of clearing system data on the support page in the sensor console." lightbox="media/how-to-manage-individual-sensors/clear-system-data.png":::
A confirmation message appears that the action was successful. All learned data, allowlists, policies, and configuration settings are cleared from the sensor.
For more information, see:
- [Manage sensors with Defender for IoT in the Azure portal](how-to-manage-sensors-on-the-cloud.md)
- [Threat intelligence research and packages](how-to-work-with-threat-intelligence-packages.md)
- [Manage sensors from the management console](how-to-manage-sensors-from-the-on-premises-management-console.md)
-- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
+- [Troubleshoot the sensor and on-premises management console](how-to-troubleshoot-the-sensor-and-on-premises-management-console.md)
defender-for-iot Install Software On Premises Management Console https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-on-premises-management-console.md
sudo ethtool -p <port value> <time-in-seconds>
This command causes the light on the port to flash for the specified time period. For example, entering `sudo ethtool -p eno1 120` will have port eno1 flash for 2 minutes, allowing you to find the port on the back of your appliance.
+
## Next steps

> [!div class="nextstepaction"]
defender-for-iot Install Software Ot Sensor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/ot-deploy/install-software-ot-sensor.md
This procedure describes how to install OT monitoring software on a sensor.
Make sure that your sensor is connected to your network, and then sign in to your sensor via a network-connected browser. For more information, see [Activate and set up your sensor](../how-to-activate-and-set-up-your-sensor.md#activate-and-set-up-your-sensor).
+
## Next steps

> [!div class="nextstepaction"]
defender-for-iot References Defender For Iot Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-defender-for-iot-glossary.md
- Title: Defender for IoT glossary for organizations
-description: This glossary provides a brief description of important Defender for IoT platform terms and concepts.
Previously updated : 01/01/2023---
-# Defender for IoT glossary for organizations
-
-This glossary provides a brief description of important terms and concepts for the Microsoft Defender for IoT platform. Select the **Learn more** links to go to related terms in the glossary. This will help you more quickly learn and use product tools.
-
-<a name="glossary-a"></a>
-
-## A
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Access group** | Support user access requirements for large organizations by creating access group rules.<br /><br />Rules let you control view and configuration access to the Defender for IoT on-premises management console for specific user roles at relevant business units, regions, sites, and zones.<br /><br />For example, allow security analysts from an Active Directory group to access West European automotive data but prevent access to data in Africa. | **[On-premises management console](#o)** <br /><br />**[Business unit](#b)** |
-| **Access tokens** | Generate access tokens to access the Defender for IoT REST API. | **[API](#glossary-a)** |
-| **Acknowledge alert event** | Instruct Defender for IoT to hide the alert once for the detected event. The alert will be triggered again if the event is detected again. | **[Alert](#glossary-a)<br /><br />[Learn alert event](#l)<br /><br />[Mute alert event](#m)** |
-| **Alert** | A message that a Defender for IoT engine triggers regarding deviations from authorized network behavior, network anomalies, or suspicious network activity and traffic. | **[Forwarding rule](#f)<br /><br />[Exclusion rule](#e)<br /><br />[System notifications](#s)** |
-| **Alert comment** | Comments that security analysts and administrators make in alert messages. For example, an alert comment might give instructions about mitigation actions to take, or names of individuals to contact regarding the event.<br /><br />Users who are reviewing alerts can choose the comment or comments that best reflect the event status, or steps taken to investigate the alert. | **[Alert](#glossary-a)** |
-| **Anomaly engine** | A Defender for IoT engine that detects unusual machine-to-machine (M2M) communication and behavior. For example, the engine might detect excessive SMB sign in attempts. Anomaly alerts are triggered when these events are detected. | **[Defender for IoT engines](#d)** |
-| **API** | Allows external systems to access data discovered by Defender for IoT and perform actions by using the external REST API over SSL connections. | **[Access tokens](#glossary-a)** |
-| **Attack vector report** | A real-time graphical representation of vulnerability chains of exploitable endpoints.<br /><br />Reports let you evaluate the effect of mitigation activities in the attack sequence. For example, you can evaluate whether a system upgrade disrupts the attacker's path by breaking the attack chain, or whether an alternate attack path remains. This prioritizes remediation and mitigation activities. | **[Risk assessment report](#r)** |
-
-## B
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Business unit** | A logical organization of your business according to specific industries.<br /><br />For example, a global company that contains glass factories and plastic factories can be managed as two different business units. You can control access of Defender for IoT users to specific business units. | **[On-premises management console](#o)<br /><br />[Access group](#glossary-a)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
-| **Baseline** | Approved network traffic, protocols, commands, and devices. Defender for IoT identifies deviations from the network baseline. View approved baseline traffic by generating data-mining reports. | **[Data mining](#d)<br /><br />[Learning mode](#l)** |
-
-## C
-
-| Term | Description | Learn more |
-|--|--|--|
-| **CLI commands** | Command-line interface (CLI) options for Defender for IoT administrator users. CLI commands are available for features that can't be accessed from the Defender for IoT consoles. | - |
--
-## D
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Data mining** | Generate comprehensive and granular reports about your network devices:<br /><br />- **SOC incident response**: Reports in real time to help deal with immediate incident response. For example, a report can list devices that might need patching.<br /><br />- **Forensics**: Reports based on historical data for investigative reports.<br /><br />- **IT network integrity**: Reports that help improve overall network security. For example, a report can list devices with weak authentication credentials.<br /><br />- **Visibility**: Reports that cover all query items to view all baseline parameters of your network.<br /><br />Save data-mining reports for read-only users to view. | **[Baseline](#b)<br /><br />[Reports](#r)** |
-| **Defender for IoT platform** | The Defender for IoT solution installed on Defender for IoT sensors and the on-premises management console. | **[Sensor](#s)<br /><br />[On-premises management console](#o)** |
-| **Device inventories** | Device inventory data is available from Defender for IoT in the Azure portal, the OT sensor, and the on-premises management console. For more information, see [What is a Defender for IoT committed device?](architecture.md#what-is-a-defender-for-iot-committed-device)| [**Device map**](#d)|
-| **Device map** | A graphical representation of network devices that Defender for IoT detects. It shows the connections between devices and information about each device. Use the map to:<br /><br />- Retrieve and control critical device information.<br /><br />- Analyze network slices.<br /><br />- Export device details and summaries. | **[Purdue layer group](#p)** |
-
-## E
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Engines** | The self-learning analytics engines in Defender for IoT eliminate the need for updating signatures or defining rules. The engines use ICS-specific behavioral analytics and data science to continuously analyze OT network traffic for anomalies, malware, operational problems, protocol violations, and deviations from baseline network activity.<br /><br />When an engine detects a deviation, an alert is triggered. Alerts can be viewed and managed from the **Alerts** screen or from a SIEM. | **[Alert](#glossary-a)** |
-| **Enterprise view** | A global map that presents business units, sites, and zones where Defender for IoT sensors are installed. View geographical locations of malicious alerts, operational alerts, and more. | **[Business unit](#b)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
-| **Event timeline** | A timeline of activity detected on your network, including:<br /><br />- Alerts triggered.<br /><br />- Network events (informational).<br /><br />- User operations such as sign in, user deletion, and user creation, and alert management operations such as mute, learn, and acknowledge. Available in the sensor consoles. | - |
-| **Exclusion rule** | Instruct Defender for IoT to ignore alert triggers based on time period, device address, and alert name, or by a specific sensor.<br /><br />For example, if you know that all the OT devices monitored by a specific sensor will go through a maintenance procedure between 6:30 and 10:15 in the morning, you can set an exclusion rule that states that this sensor should send no alerts in the predefined period. | **[Alert](#glossary-a)<br /><br />[Mute alert event](#m)** |
-
-## F
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Forwarding rule** | Forwarding rules instruct Defender for IoT to send alert information to partner vendors or systems.<br /><br />For example, send alert information to a Splunk server or a syslog server. | **[Alert](#glossary-a)** |
-
-## G
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Group** | Predefined or custom groups of devices that contain specific attributes, such as devices that carried out programming activity or devices that are located on a specific subnet. Use groups to help you view devices and analyze devices in Defender for IoT.<br /><br />Groups can be viewed in and created from the device map and device inventory. | **[Device map](#d)<br /><br />[Device inventory](#d)** |
-
-## H
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Horizon open development environment** | Secure IoT and ICS devices running proprietary and custom protocols or protocols that deviate from any standard. Use the Horizon Open Development Environment (ODE) SDK to develop dissector plug-ins that decode network traffic based on defined protocols. Defender for IoT services analyze traffic to provide complete monitoring, alerting, and reporting.<br /><br />Use Horizon to:<br /><br />- **Expand** visibility and control without the need to upgrade Defender for IoT platform versions.<br /><br />- **Secure** proprietary information by developing on-site as an external plug-in.<br /><br />- **Localize** text for alerts, events, and protocol parameters.<br /><br />Contact your customer success representative for details. | **[Protocol support](#p)<br /><br />[Localization](#l)** |
-| **Horizon custom alert** | Enhance alert management in your enterprise by triggering custom alerts for any protocol (based on Horizon Framework traffic dissectors).<br /><br />These alerts can be used to communicate information:<br /><br />- About traffic detections based on protocols and underlying protocols in a proprietary Horizon plug-in.<br /><br />- About a combination of protocol fields from all protocol layers. | **[Protocol support](#p)** |
-
-## I
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Integrations** | Expand Defender for IoT capabilities by sharing device information with partner systems. Organizations can bridge previously siloed security, NAC, incident management, and device management solutions to accelerate system-wide responses and more rapidly mitigate risks. | **[Forwarding rule](#f)** |
-| **Internal subnet** | Subnet configurations defined by Defender for IoT. In some cases, such as environments that use public ranges as internal ranges, you can instruct Defender for IoT to resolve all subnets as internal subnets. Subnets are displayed in the map and in various Defender for IoT reports. | **[Subnets](#s)** |
-
-## L
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Learn alert event** | Instruct Defender for IoT to authorize the traffic detected in an alert event. | **[Alert](#glossary-a)<br /><br />[Acknowledge alert event](#glossary-a)<br /><br />[Mute alert event](#m)** |
-| **Learning mode** | The mode used when Defender for IoT learns your network activity. This activity becomes your network baseline. Defender for IoT remains in the mode for a predefined period after installation. Activity that deviates from learned activity after this period will trigger Defender for IoT alerts. | **[Smart IT learning](#s)<br /><br />[Baseline](#b)** |
-| **Localization** | Localize text for alerts, events, and protocol parameters for dissector plug-ins developed by Horizon. | **[Horizon open development environment](#h)** |
-
-## M
--
-| Term | Description | Learn more |
-|--|--|--|
-| **Mute Alert Event** | Instruct Defender for IoT to continuously ignore activity with identical devices and comparable traffic. | **[Alert](#glossary-a)<br /><br />[Exclusion rule](#e)<br /><br />[Acknowledge alert event](#glossary-a)<br /><br />[Learn alert event](#l)** |
-
-## N
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Notifications** | Information about network changes or unresolved device properties. Options are available to update device and network information with new data detected. Responding to notifications enriches the device inventory, map, and various reports. Available on sensor consoles. | **[Alert](#glossary-a)<br /><br />[System notifications](#s)** |
-
-## O
-
-| Term | Description | Learn more |
-|--|--|--|
-| **On-premises management console** | The on-premises management console provides a centralized view and management of devices and threats that Defender for IoT sensor deployments detect in your organization. | **[Defender for IoT platform](#d)<br /><br />[Sensor](#s)** |
-| **Operational alert** | Alerts that deal with operational network issues, such as a device that's suspected to be disconnected from the network. | **[Alert](#glossary-a)<br /><br />[Security alert](#s)** |
-
-## P
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Purdue layer** | Shows the interconnections and interdependencies of main components of a typical ICS on the map. | |
-| **Protocol support** | In addition to embedded protocol support, you can secure IoT and ICS devices running proprietary and custom protocols, or protocols that deviate from any standard, by using the Horizon Open Development Environment SDK. | **[Horizon open development environment](#h)** |
-
-## R
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Region** | A logical division of a global organization into geographical regions. Examples are North America, Western Europe, and Eastern Europe.<br /><br />North America might have factories from various business units. | **[Access group](#glossary-a)<br /><br />[Business unit](#b)<br /><br />[On-premises management console](#o)<br /><br />[Site](#s)<br /><br />[Zone](#z)** |
-| **Reports** | Reports reflect information generated by data-mining query results. This includes default data-mining results, which are available in the **Reports** view. Admins and security analysts can also generate custom data-mining queries and save them as reports. These reports will also be available for read-only users. | **[Data mining](#d)** |
-| **Risk assessment report** | Risk assessment reporting lets you generate a security score for each network device, along with an overall network security score. The overall score represents your network's security posture as a percentage, where 100 percent is fully secure. The report provides mitigation recommendations that will help you improve your current security score. | - |
-
-## S
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Security alert** | Alerts that deal with security issues, such as excessive SMB sign in attempts or malware detections. | **[Alert](#glossary-a)<br /><br />[Operational alert](#o)** |
-| **Selective probing** | Defender for IoT passively inspects IT and OT traffic and detects relevant information on devices, their attributes, their behavior, and more. In certain cases, some information might not be visible in passive network analyses.<br /><br />When this happens, you can use the safe, granular probing tools in Defender for IoT to discover important information on previously unreachable devices. | - |
-| **Sensor** | The physical or virtual machine on which the Defender for IoT platform is installed. | **[On-premises management console](#o)** |
-| **Site** | A location such as a factory or other entity. The site should contain a zone or several zones in which a sensor is installed. | **[Zone](#z)** |
-| **Site Management** | The on-premises management console option that lets you manage enterprise sensors. | - |
-| **Smart IT learning** | After the learning period is complete and the learning mode is disabled, Defender for IoT might detect an unusually high level of baseline changes that are the result of normal IT activity, such as DNS and HTTP requests. This traffic might trigger unnecessary policy violation alerts and system notifications. To reduce these alerts and notifications, you can enable Smart IT Learning. | **[Learning mode](#l)<br /><br />[Baseline](#b)** |
-| **Subnets** | To enable focus on the OT devices, IT devices are automatically aggregated by subnet in the device map. Each subnet is presented as a single entity on the map, including an interactive collapsing or expanding capability to focus in to an IT subnet and back. | **[Device map](#d)** |
-| **System notifications** | Notifications from the on-premises management console regarding:<br /><br />- Sensor connection status.<br /><br />- Remote backup failures. | **[Notifications](#n)<br /><br />[Alert](#glossary-a)** |
-
-## Z
-
-| Term | Description | Learn more |
-|--|--|--|
-| **Zone** | An area within a site in which a sensor, or sensors are installed. | **[Site](#s)<br /><br />[Business unit](#b)<br /><br />[Region](#r)** |
defender-for-iot References Work With Defender For Iot Cli Commands https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/references-work-with-defender-for-iot-cli-commands.md
To access the Defender for IoT CLI, you'll need access to the sensor or on-premi
- For OT sensors or the on-premises management console, you'll need to sign in as a [privileged user](#privileged-user-access-for-ot-monitoring).
- For Enterprise IoT sensors, you can sign in as any user.
+
## Privileged user access for OT monitoring

Privileged users for OT monitoring are pre-defined together with the [OT monitoring software installation](../how-to-install-software.md), as part of the hardened operating system.
defender-for-iot Roles Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-iot/organizations/roles-azure.md
Roles for management actions are applied to user roles across an entire Azure su
| **Push OT threat intelligence updates** <br>Apply per subscription only | - | ✔ | ✔ | ✔ |
| **Onboard an Enterprise IoT plan from Microsoft 365 Defender** [*](#enterprise-iot-security)<br>Apply per subscription only | - | ✔ | - | - |
| **View Azure alerts** <br>Apply per subscription or site | ✔ | ✔ | ✔ | ✔ |
-| **Modify Azure alerts (write access)** <br>Apply per subscription or site | - | ✔ | ✔ | ✔ |
+| **Modify Azure alerts (write access - change status, learn, download PCAP)** <br>Apply per subscription or site | - | ✔ | ✔ | ✔ |
| **View Azure device inventory** <br>Apply per subscription or site | ✔ | ✔ | ✔ | ✔ |
| **Manage Azure device inventory (write access)** <br>Apply per subscription or site | - | ✔ | ✔ | ✔ |
| **View Azure workbooks**<br>Apply per subscription or site | ✔ | ✔ | ✔ | ✔ |
For more information, see:
- [Manage OT monitoring users on the Azure portal](manage-users-portal.md)
- [On-premises user roles for OT monitoring with Defender for IoT](roles-on-premises.md)
- [Create and manage users on an OT network sensor](manage-users-sensor.md)
-- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
+- [Create and manage users on an on-premises management console](manage-users-on-premises-management-console.md)
dev-box How To Manage Dev Box Definitions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/how-to-manage-dev-box-definitions.md
If you don't have an available dev center, follow the steps in [Quickstart: Conf
|Name|Value|
|-|-|
- |**Name**|Enter a descriptive name for your dev box definition. Note that you can't change the dev box definition name after it's created. |
- |**Image**|Select the base operating system for the dev box. You can select an image from the Marketplace or from an Azure Compute Gallery.|
+ |**Name**|Enter a descriptive name for your dev box definition. You can't change the dev box definition name after it's created. |
+ |**Image**|Select the base operating system for the dev box. You can select an image from the Marketplace or from an Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image or the **Visual Studio 2022 Pro on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image.|
|**Image version**|Select a specific, numbered version to ensure all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure new dev boxes use the latest image available.|
|**Compute**|Select the compute combination for your dev box definition.|
|**Storage**|Select the amount of storage for your dev box definition.|
- :::image type="content" source="./media/how-to-manage-dev-box-definitions/create-dev-box-definition-page.png" alt-text="Screenshot showing the Create dev box definition page.":::
+ :::image type="content" source="./media/how-to-manage-dev-box-definitions/recommended-test-image.png" alt-text="Screenshot showing the Create dev box definition page.":::
1. To create the dev box definition, select **Create**.
dev-box Quickstart Configure Dev Box Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dev-box/quickstart-configure-dev-box-service.md
The following steps show you how to create and configure a dev box definition. Y
|Name|Value|Note|
|-|-|-|
|**Name**|Enter a descriptive name for your dev box definition.|
- |**Image**|Select the base operating system for the dev box. You can select an image from the Azure Marketplace or from an Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To use custom images while creating a dev box definition, you can attach an Azure Compute Gallery that has the custom images. Learn [How to configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).|
+ |**Image**|Select the base operating system for the dev box. You can select an image from the Azure Marketplace or from an Azure Compute Gallery. </br> If you're creating a dev box definition for testing purposes, consider using the **Visual Studio 2022 Enterprise on Windows 11 Enterprise + Microsoft 365 Apps 22H2** image. |To use custom images while creating a dev box definition, you can attach an Azure Compute Gallery that has the custom images. Learn [How to configure an Azure Compute Gallery](./how-to-configure-azure-compute-gallery.md).|
|**Image version**|Select a specific, numbered version to ensure all the dev boxes in the pool always use the same version of the image. Select **Latest** to ensure new dev boxes use the latest image available.|Selecting the Latest image version enables the dev box pool to use the most recent image version for your chosen image from the gallery. This way, the dev boxes created will stay up to date with the latest tools and code on your image. Existing dev boxes won't be modified when an image version is updated.|
|**Compute**|Select the compute combination for your dev box definition.||
|**Storage**|Select the amount of storage for your dev box definition.||
- :::image type="content" source="./media/quickstart-configure-dev-box-service/create-dev-box-definition-page.png" alt-text="Screenshot showing the Create dev box definition page.":::
+ :::image type="content" source="./media/quickstart-configure-dev-box-service/recommended-test-image.png" alt-text="Screenshot showing the Create dev box definition page.":::
1. Select **Create**.
devtest-labs Add Artifact Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/add-artifact-vm.md
$artifactParameters = @()
# Fill the artifact parameter with the additional -param_ data and strip off the -param_
$Params | ForEach-Object {
    if ($_ -match '^-param_(.*)') {
- $name = $_.TrimStart('^-param_')
+ $name = $_ -replace '^-param_'
    } elseif ( $name ) {
        $artifactParameters += @{ "name" = "$name"; "value" = "$_" }
        $name = $null # reset name variable
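The switch from `TrimStart` to `-replace` above fixes a subtle bug: `String.TrimStart` treats its argument as a set of characters to trim, not as a prefix or pattern, so it can also strip the first characters of the parameter name itself. A minimal illustration with a hypothetical parameter value:

```powershell
# TrimStart trims any leading characters found in the set {'^','-','p','a','r','m','_'},
# so the leading 'a' of the name gets trimmed too:
'-param_aValue'.TrimStart('^-param_')   # returns 'Value'

# -replace with an anchored regex removes only the literal '-param_' prefix:
'-param_aValue' -replace '^-param_'     # returns 'aValue'
```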
devtest-labs Devtest Lab Upload Vhd Using Azcopy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-azcopy.md
Title: Upload VHD file to Azure DevTest Labs using AzCopy
-description: This article provides a walkthrough to use the AzCopy command-line utility to upload a VHD file to a lab's storage account in Azure DevTest Labs.
+ Title: Upload a VHD file to Azure DevTest Labs by using AzCopy
+description: Walk through the steps to use the AzCopy command-line utility to upload a VHD file to a lab storage account in Azure DevTest Labs.
Previously updated : 06/26/2020 Last updated : 12/22/2022
-# Upload VHD file to lab's storage account using AzCopy
+# Upload a VHD file to a lab storage account by using AzCopy
[!INCLUDE [devtest-lab-upload-vhd-selector](../../includes/devtest-lab-upload-vhd-selector.md)]
-In Azure DevTest Labs, VHD files can be used to create custom images, which are used to provision virtual machines.
-The following steps walk you through using the AzCopy command-line utility to upload a VHD file to a lab's storage account. Once you've uploaded your VHD file, the [Next steps section](#next-steps) lists some articles that illustrate how to create a custom image from the uploaded VHD file. For more information about disks and VHDs in Azure, see [Introduction to managed disks](../virtual-machines/managed-disks-overview.md)
+In this article, learn how to use the AzCopy command-line utility to upload a VHD file to a lab storage account in Azure DevTest Labs. After you upload your VHD file, you can create a custom image from the uploaded VHD file and use the image to provision a virtual machine.
-> [!NOTE]
+For more information about disks and VHDs in Azure, see [Introduction to managed disks](../virtual-machines/managed-disks-overview.md).
+
+> [!NOTE]
> > AzCopy is a Windows-only command-line utility.
-## Step-by-step instructions
+## Prerequisites
+
+- Download and install the [latest version of AzCopy](https://aka.ms/downloadazcopy).
+
+To upload a VHD file to a lab storage account by using AzCopy, first, get the lab storage account name via the Azure portal. Then, use AzCopy to upload the file.
-The following steps walk you through uploading a VHD file to Azure DevTest Labs using [AzCopy](https://aka.ms/downloadazcopy).
+## Get the lab storage account name
-1. Get the name of the lab's storage account using the Azure portal:
+To get the name of the lab storage account:
1. Sign in to the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
-1. Select **All services**, and then select **DevTest Labs** from the list.
+1. Select **All resources**, and then select your lab.
+
+1. In the lab menu under **Settings**, select **Configuration and policies**.
+
+1. In **Activity log**, in the resource menu under **Virtual machine bases**, select **Custom images**.
+
+1. In **Custom images**, select **Add**.
-1. From the list of labs, select the desired lab.
+1. In **Custom image**, under **VHD**, select the **Upload an image using PowerShell** link.
-1. On the lab's blade, select **Configuration**.
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-azcopy/upload-image-powershell.png" alt-text="Screenshot that shows settings to upload a VHD by using PowerShell on the Custom image pane.":::
-1. On the lab **Configuration** blade, select **Custom images (VHDs)**.
+1. In **Upload an image using PowerShell**, scroll right to see a call to the Add-AzureRmVhd cmdlet.
-1. On the **Custom images** blade, Select **+Add**.
+ The `-Destination` parameter contains the URI for a blob container in the following format:
-1. On the **Custom image** blade, select **VHD**.
+ `https://<storageAccountName>.blob.core.windows.net/uploads/...`
-1. On the **VHD** blade, select **Upload a VHD using PowerShell**.
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-azcopy/destination-parameter.png" alt-text="Screenshot that shows an example of a URI in the Add VHD box.":::
- ![Upload VHD using PowerShell](./media/devtest-lab-upload-vhd-using-azcopy/upload-image-using-psh.png)
+1. Copy the storage account URI to use in the next section.
-1. The **Upload an image using PowerShell** blade displays a call to the **Add-AzureVhd** cmdlet.
-The first parameter (*Destination*) contains the URI for a blob container (*uploads*) in the following format:
+## Upload a VHD file
- ```
- https://<STORAGE-ACCOUNT-NAME>.blob.core.windows.net/uploads/...
- ```
+To upload a VHD file by using AzCopy:
-1. Make note of the full URI as it is used in later steps.
+1. In Windows, open a Command Prompt window and go to the AzCopy installation directory.
-1. Upload the VHD file using AzCopy:
-
-1. [Download and install the latest version of AzCopy](https://aka.ms/downloadazcopy).
+ By default, AzCopy is installed in *ProgramFiles(x86)\Microsoft SDKs\Azure\AzCopy*.
-1. Open a command window and navigate to the AzCopy installation directory. Optionally, you can add the AzCopy installation location to your system path. By default, AzCopy is installed to the following directory:
+ Optionally, you can add the AzCopy installation location to your system path.
- ```command-line
- %ProgramFiles(x86)%\Microsoft SDKs\Azure\AzCopy
- ```
+1. At the command prompt, run the following command. Use the storage account key and blob container URI you copied from the Azure portal. The value for `vhdFileName` must be in quotes.
-1. Using the storage account key and blob container URI, run the following command at the command prompt. The *vhdFileName* value needs to be in quotes. The process of uploading a VHD file can be lengthy depending on the size of the VHD file and your connection speed.
+ ```cmd
+ AzCopy /Source:<sourceDirectory> /Dest:<blobContainerUri> /DestKey:<storageAccountKey> /Pattern:"<vhdFileName>" /BlobType:page
+ ```
- ```command-line
- AzCopy /Source:<sourceDirectory> /Dest:<blobContainerUri> /DestKey:<storageAccountKey> /Pattern:"<vhdFileName>" /BlobType:page
- ```
+The process of uploading a VHD file might be lengthy depending on the size of the VHD file and your connection speed.
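As an illustration, a hypothetical invocation that uploads *myimage.vhd* from *C:\vhds* might look like the following (the storage account name and file names are made up, and the key placeholder is left as-is):

```cmd
AzCopy /Source:C:\vhds /Dest:https://contosolabstorage.blob.core.windows.net/uploads /DestKey:<storageAccountKey> /Pattern:"myimage.vhd" /BlobType:page
```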
## Next steps

-- [Create a custom image in Azure DevTest Labs from a VHD file using the Azure portal](devtest-lab-create-template.md)
-- [Create a custom image in Azure DevTest Labs from a VHD file using PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md)
+- Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using the Azure portal](devtest-lab-create-template.md).
+- Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md).
devtest-labs Devtest Lab Upload Vhd Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-powershell.md
Title: Upload VHD file to Azure DevTest Labs using PowerShell
-description: This article provides a walkthrough that shows you how to upload a VHD file to Azure DevTest Labs using PowerShell.
+ Title: Upload a VHD file to Azure DevTest Labs by using PowerShell
+description: Walk through the steps to use PowerShell to upload a VHD file to a lab storage account in Azure DevTest Labs.
Previously updated : 06/26/2020 Last updated : 12/22/2022
-# Upload VHD file to lab's storage account using PowerShell
+# Upload a VHD file to a lab storage account by using PowerShell
[!INCLUDE [devtest-lab-upload-vhd-selector](../../includes/devtest-lab-upload-vhd-selector.md)]
-In Azure DevTest Labs, VHD files can be used to create custom images, which are used to provision virtual machines.
-The following steps walk you through using PowerShell to upload a VHD file to a lab's storage account. Once you've uploaded your VHD file, the [Next steps section](#next-steps) lists some articles that illustrate how to create a custom image from the uploaded VHD file. For more information about disks and VHDs in Azure, see [Introduction to managed disks](../virtual-machines/managed-disks-overview.md)
+In this article, learn how to use PowerShell to upload a VHD file to a lab storage account in Azure DevTest Labs. After you upload your VHD file, you can create a custom image from the uploaded VHD file and use the image to provision a virtual machine.
-## Step-by-step instructions
+For more information about disks and VHDs in Azure, see [Introduction to managed disks](../virtual-machines/managed-disks-overview.md).
-The following steps walk you through uploading a VHD file to Azure DevTest Labs using PowerShell.
+## Prerequisites
+
+- Download and install the [latest version of PowerShell](/powershell/scripting/install/installing-powershell?).
+
+To upload a VHD file to a lab storage account by using PowerShell, first, get the lab storage account name via the Azure portal. Then, use a PowerShell cmdlet to upload the file.
+
+## Get the lab storage account name
+
+To get the name of the lab storage account:
1. Sign in to the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
-1. Select **All services**, and then select **DevTest Labs** from the list.
+1. Select **All resources**, and then select your lab.
+
+1. In the lab menu under **Settings**, select **Configuration and policies**.
-1. From the list of labs, select the desired lab.
+1. In **Activity log**, in the resource menu under **Virtual machine bases**, select **Custom images**.
-1. On the lab's blade, select **Configuration**.
+1. In **Custom images**, select **Add**.
-1. On the lab **Configuration** blade, select **Custom images (VHDs)**.
+1. In **Custom image**, under **VHD**, select the **Upload an image using PowerShell** link.
-1. On the **Custom images** blade, Select **+Add**.
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-powershell/upload-image-powershell.png" alt-text="Screenshot that shows the link to upload a VHD by using PowerShell on the Custom image pane.":::
-1. On the **Custom image** blade, select **VHD**.
+1. In **Upload an image using PowerShell**, select and copy the generated PowerShell script to use in the next section.
-1. On the **VHD** blade, select **Upload a VHD using PowerShell**.
+## Upload a VHD file
- ![Upload VHD using PowerShell](./media/devtest-lab-upload-vhd-using-powershell/upload-image-using-psh.png)
+To upload a VHD file by using PowerShell:
-1. On the **Upload an image using PowerShell** blade, copy the generated PowerShell script to a text editor.
+1. In a text editor, paste the generated PowerShell script you copied from the Azure portal.
-1. Modify the **LocalFilePath** parameter of the **Add-AzureVhd** cmdlet to point to the location of the VHD file you want to upload.
+1. Modify the `-LocalFilePath` parameter of the Add-AzureRmVhd cmdlet to point to the location of the VHD file you want to upload.
-1. At a PowerShell prompt, run the **Add-AzureVhd** cmdlet (with the modified **LocalFilePath** parameter).
+1. At a PowerShell command prompt, run the Add-AzureRmVhd cmdlet with the modified `-LocalFilePath` parameter.
-> [!WARNING]
->
-> The process of uploading a VHD file can be lengthy depending on the size of the VHD file and your connection speed.
+The process of uploading a VHD file might be lengthy depending on the size of the VHD file and your connection speed.
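For reference, a minimal sketch of the modified call might look like the following; the resource group name, destination URI, and local path are illustrative placeholders, and the script generated by the portal supplies the correct destination for your lab:

```powershell
Add-AzureRmVhd -ResourceGroupName 'MyLabRG' `
    -Destination 'https://contosolabstorage.blob.core.windows.net/uploads/myimage.vhd' `
    -LocalFilePath 'C:\vhds\myimage.vhd'
```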
## Next steps

-- [Create a custom image in Azure DevTest Labs from a VHD file using the Azure portal](devtest-lab-create-template.md)
-- [Create a custom image in Azure DevTest Labs from a VHD file using PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md)
+- Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using the Azure portal](devtest-lab-create-template.md).
+- Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md).
devtest-labs Devtest Lab Upload Vhd Using Storage Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/devtest-lab-upload-vhd-using-storage-explorer.md
Title: Upload a VHD file to by using Storage Explorer
-description: Upload a VHD file to a DevTest Labs lab storage account by using Microsoft Azure Storage Explorer.
+ Title: Upload a VHD file to lab storage by using Storage Explorer
+description: Walk through the steps to upload a VHD file to a DevTest Labs lab storage account by using Azure Storage Explorer.
Previously updated : 11/05/2021 Last updated : 12/23/2022
-# Upload a VHD file to a lab's storage account by using Storage Explorer
+# Upload a VHD file to a lab storage account by using Storage Explorer
[!INCLUDE [devtest-lab-upload-vhd-selector](../../includes/devtest-lab-upload-vhd-selector.md)]
-In Azure DevTest Labs, you can use VHD files to create custom images for provisioning virtual machines. This article describes how to use [Microsoft Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload a VHD file to a lab's storage account. Once you upload the VHD file to DevTest Labs, you can [create a custom image](devtest-lab-create-custom-image-from-vhd-using-powershell.md) from the uploaded file. For more information about disks and VHDs in Azure, see [Introduction to managed disks](../virtual-machines/managed-disks-overview.md).
+In this article, learn how to use [Azure Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md) to upload a VHD file to a lab storage account in Azure DevTest Labs. After you upload your VHD file, you can create a custom image from the uploaded VHD file and use the image to provision a virtual machine.
-Storage Explorer supports several connection options. This article describes connecting to a storage account associated with your Azure subscription. For information about other Storage Explorer connection options, see [Getting started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
+For more information about disks and VHDs in Azure, see [Introduction to managed disks](../virtual-machines/managed-disks-overview.md).
+
+Storage Explorer supports several connection options. This article describes how to connect to a storage account that's associated with your Azure subscription. For information about other Storage Explorer connection options, see [Get started with Storage Explorer](../vs-azure-tools-storage-manage-with-storage-explorer.md).
## Prerequisites

-- [Download and install the latest version of Microsoft Azure Storage Explorer](https://www.storageexplorer.com).
+- Download and install the [latest version of Storage Explorer](https://www.storageexplorer.com).
-- Get the name of the lab's storage account by using the Azure portal:
+To upload a VHD file to a lab storage account by using Storage Explorer, first, get the lab storage account name via the Azure portal. Then, use Storage Explorer to upload the file.
- 1. In the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040), search for and select **DevTest Labs**, and then select your lab from the list.
- 1. On the lab page, select **Configuration and policies** from the left navigation.
- 1. On the **Configuration and policies** page, under **Virtual machine bases**, select **Custom images**.
- 1. On the **Custom images** page, select **Add**.
- 1. On the **Custom image** page, under **VHD**, select the **Upload a VHD using PowerShell** link.
- ![Screenshot that shows the Upload VHD using PowerShell link.](media/devtest-lab-upload-vhd-using-storage-explorer/upload-image-using-psh.png)
- 1. On the **Upload an image using PowerShell** page, in the call to the **Add-AzureVhd** cmdlet, the `Destination` parameter shows the lab's storage account name in the following format:
- `https://<STORAGE-ACCOUNT-NAME>.blob.core.windows.net/uploads/`.
- 1. Copy the storage account name to use in the following steps.
+## Get the lab storage account name
-## Step-by-step instructions
+To get the name of the lab storage account:
-1. When you open Storage Explorer, the left **Explorer** pane shows all the Azure subscriptions you're signed in to.
+1. Sign in to the [Azure portal](https://go.microsoft.com/fwlink/p/?LinkID=525040).
- If you need to add a different account, select the **Account Management** icon, and in the **Account Management** pane, select **Add an account**.
+1. Select **All resources**, and then select your lab.
- ![Screenshot that shows Add an account in the Account Management pane.](media/devtest-lab-upload-vhd-using-storage-explorer/add-account-link.png)
+1. In the lab menu under **Settings**, select **Configuration and policies**.
- Follow the prompts to sign in with the Microsoft account associated with your Azure subscription.
+1. In **Activity log**, in the resource menu under **Virtual machine bases**, select **Custom images**.
+
+1. In **Custom images**, select **Add**.
+
+1. In **Custom image**, under **VHD**, select the **Upload an image using PowerShell** link.
+
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/upload-image-powershell.png" alt-text="Screenshot that shows settings to upload a VHD by using PowerShell on the Custom image pane.":::
+
+1. In **Upload an image using PowerShell**, scroll right to see a call to the Add-AzureRmVhd cmdlet.
+
+ The `-Destination` parameter contains the URI for a blob container in the following format:
+
+ `https://<storageAccountName>.blob.core.windows.net/uploads/...`
+
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/destination-parameter.png" alt-text="Screenshot that shows an example of a storage account name in the Add VHD box.":::
-1. After you sign in, the **Explorer** pane shows the Azure subscriptions associated with your account. Select the dropdown arrow next to the Azure subscription you want to use. The left pane shows the storage accounts associated with the selected Azure subscription.
+1. Copy the storage account name to use in the next section.
- ![Screenshot that shows the storage accounts for a selected Azure subscription.](media/devtest-lab-upload-vhd-using-storage-explorer/storage-accounts-list.png)
+## Upload a VHD file
+To upload a VHD file by using Storage Explorer:
+
+1. When you open Storage Explorer, the Explorer pane shows all the Azure subscriptions you're signed in to.
+
+ If you need to add a different account, select the **Account Management** icon. In **Account Management**, select **Add an account**.
+
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/add-account-link.png" alt-text="Screenshot that shows Add an account in the Account Management pane.":::
+
+ Follow the prompts to sign in with the Microsoft account associated with your Azure subscription.
+
+1. After you sign in, the Explorer pane shows the Azure subscriptions that are associated with your account. Select the dropdown arrow next to the Azure subscription you want to use. The left pane shows the storage accounts that are associated with the selected Azure subscription.
+
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/storage-accounts-list.png" alt-text="Screenshot that shows the storage accounts for a selected Azure subscription.":::
+
1. Select the dropdown arrow next to the lab storage account name you saved earlier.
-1. Expand the **Blob Containers** node, and then select **uploads**.
+1. Expand **Blob Containers**, and then select **uploads**.
- ![Screenshot that shows the expanded Blob Containers node with the uploads directory.](media/devtest-lab-upload-vhd-using-storage-explorer/upload-dir.png)
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/upload-dir.png" alt-text="Screenshot that shows the expanded Blob Containers node with the uploads directory.":::
-1. In the Storage Explorer right pane, on the blob editor toolbar, select **Upload**, and then select **Upload Files**.
+1. In the Storage Explorer right pane, on the blob editor toolbar, select **Upload**, and then select **Upload Files**.
- ![Screenshot that shows the Upload button and Upload Files.](media/devtest-lab-upload-vhd-using-storage-explorer/upload-button.png)
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/upload-button.png" alt-text="Screenshot that shows the Upload button and Upload Files.":::
-1. In the **Upload Files** dialog box, select **...** next to the **Selected files** field, browse to the VHD file on your machine, select it, and then select **Open**.
+1. In the **Upload Files** dialog:
-1. Under **Blob type**, change **Block Blob** to **Page Blob**.
+ 1. Select **...** next to **Selected files**. Go to the VHD file on your computer, select the file, and then select **Open**.
-1. Select **Upload**.
+ 1. For **Blob type**, select **Page Blob**.
- ![Screenshot that shows the Upload Files dialog box.](media/devtest-lab-upload-vhd-using-storage-explorer/upload-file.png)
+ 1. Select **Upload**.
-The **Activities** pane at the bottom shows upload status. Uploading the VHD file can take a long time, depending on the size of the VHD file and your connection speed.
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/upload-file.png" alt-text="Screenshot that shows the Upload Files dialog box.":::
-![Screenshot that shows the Activities pane with upload status.](media/devtest-lab-upload-vhd-using-storage-explorer/upload-status.png)
+1. Check the **Activities** pane at the bottom of Storage Explorer to see the upload status. Uploading the VHD file might take a long time, depending on the size of the VHD file and your connection speed.
-## Next steps
+ :::image type="content" source="media/devtest-lab-upload-vhd-using-storage-explorer/upload-status.png" alt-text="Screenshot that shows the Activities pane with upload status.":::
-- [Create a custom image in Azure DevTest Labs from a VHD file using the Azure portal](devtest-lab-create-template.md)-- [Create a custom image in Azure DevTest Labs from a VHD file using PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md)
+## Next steps
+- Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using the Azure portal](devtest-lab-create-template.md).
+- Learn how to [create a custom image in Azure DevTest Labs from a VHD file by using PowerShell](devtest-lab-create-custom-image-from-vhd-using-powershell.md).
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/digital-twins/reference-service-limits.md
To manage the throttling, here are some recommendations for working with limits.
* Use retry logic. The [Azure Digital Twins SDKs](concepts-apis-sdks.md) implement retry logic for failed requests, so if you're working with a provided SDK, this functionality is already built-in. Otherwise, consider implementing retry logic in your own application (see the sketch after this list). The service sends back a `Retry-After` header in the failure response, which you can use to determine how long to wait before retrying.
* Use thresholds and notifications to warn about approaching limits. Some of the service limits for Azure Digital Twins have corresponding [metrics](../azure-monitor/essentials/data-platform-metrics.md) that can be used to track usage in these areas. To configure thresholds and set up an alert on any metric when a threshold is approached, see the instructions in [Create a new alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md?tabs=metric). To set up notifications for other limits where metrics aren't provided, consider implementing this logic in your own application code.
* Deploy at scale across multiple instances. Avoid having a single point of failure. Instead of one large graph for your entire deployment, consider sectioning out subsets of twins logically (like by region or tenant) across multiple instances.
+* For modeling recommendations to help you operate within the functional limits, see [Modeling tools and best practices](concepts-models.md#modeling-tools-and-best-practices).
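As a rough sketch of the retry pattern from the first bullet, the following Windows PowerShell snippet retries a throttled request after waiting for the interval the service returns in the `Retry-After` header. The instance URI and token values are hypothetical placeholders:

```powershell
$uri   = 'https://<your-instance-hostname>/digitaltwins/<twin-id>?api-version=2020-10-31'
$token = '<bearer-token>'
$maxRetries = 5

for ($attempt = 1; $attempt -le $maxRetries; $attempt++) {
    try {
        # Query the twin; a throttled call (HTTP 429) throws and is handled below.
        $response = Invoke-WebRequest -Uri $uri -Headers @{ Authorization = "Bearer $token" }
        break   # success; stop retrying
    }
    catch {
        $retryAfter = $_.Exception.Response.Headers['Retry-After']
        if (-not $retryAfter -or $attempt -eq $maxRetries) { throw }
        Start-Sleep -Seconds ([int]$retryAfter)
    }
}
```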
>[!NOTE]
>Azure Digital Twins will automatically scale resources to meet the rate limits described in this article. You may experience throttling before these limits are reached due to internal scaling to adapt to the incoming load. Internal scaling can take anywhere from 5 to 30 minutes, during which time your application may encounter 429 errors.
digital-twins Reference Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/dotnet-develop-multitenant-applications.md
- Title: Multi-Tenant Web Application Pattern | Microsoft Docs
-description: Find architectural overviews and design patterns that describe how to implement a multi-tenant web application on Azure.
------- Previously updated : 06/05/2015---
-# Multitenant Applications in Azure
-A multitenant application is a shared resource that allows "users in separate tenants" to view the application as though it was their own. A typical scenario that lends itself to a multitenant application is one in which all users of the application from different tenants may wish to customize the user experience but otherwise have the same basic business requirements. Examples of large multitenant applications are Microsoft 365, Outlook.com, and visualstudio.com.
-
-From an application provider's perspective, the benefits of multitenancy mostly relate to operational and cost efficiencies. One version of your application can meet the needs of many tenants/customers, allowing consolidation of system administration tasks such as monitoring, performance tuning, software maintenance, and data backups.
-
-The following provides a list of the most significant goals and requirements from a provider's perspective.
-
-* **Provisioning**: You must be able to provision new tenants for the application. For multitenant applications with a large number of tenants, it is usually necessary to automate this process by enabling self-service provisioning.
-* **Maintainability**: You must be able to upgrade the application and perform other maintenance tasks while multiple tenants are using it.
-* **Monitoring**: You must be able to monitor the application at all times to identify any problems and to troubleshoot them. This includes monitoring how each tenant is using the application.
-
-A properly implemented multitenant application provides the following benefits to users.
-
-* **Isolation**: The activities of individual tenants do not affect the use of the application by other tenants. Tenants cannot access each other's data. It appears to the tenant as though they have exclusive use of the application.
-* **Availability**: Individual tenants want the application to be constantly available, perhaps with guarantees defined in an SLA. Again, the activities of other tenants should not affect the availability of the application.
-* **Scalability**: The application scales to meet the demand of individual tenants. The presence and actions of other tenants should not affect the performance of the application.
-* **Costs**: Costs are lower than running a dedicated, single-tenant application because multi-tenancy enables the sharing of resources.
-* **Customizability**. The ability to customize the application for an individual tenant in various ways such as adding or removing features, changing colors and logos, or even adding their own code or script.
-
-In short, while there are many considerations that you must take into account to provide a highly scalable service, there are also multiple goals and requirements that are common to many multitenant applications. Some may not be relevant in specific scenarios, and the importance of individual goals and requirements will differ in each scenario. As a provider of the multitenant application, you'll also have goals and requirements, such as meeting the tenant's needs, profitability, billing, multiple service levels, provisioning, maintainability, monitoring, and automation.
-
-For more information on additional design considerations of a multitenant application, see [Hosting a Multi-Tenant Application on Azure][Hosting a Multi-Tenant Application on Azure] and [Cross-Tenant Communication using Azure Service Bus Sample][Cross-Tenant Communication using Azure Service Bus Sample]. For information on common data architecture patterns of multi-tenant software-as-a-service (SaaS) database applications, see [Design Patterns for Multi-tenant SaaS Applications with Azure SQL Database](/azure/azure-sql/database/saas-tenancy-app-design-patterns).
-
-## Cross-Tenant Communication using Azure Service Bus Sample
-The [Cross-Tenant Communication using Azure Service Bus Sample][Cross-Tenant Communication using Azure Service Bus Sample] demonstrates a multi-tenanted solution handling cross-tenant communication between a provider and one or more of its customers using Service Bus message queues. The provider communicates securely with each of its customers, and each customer communicates securely with the provider. To download the complete sample with instructions, see [Cross-Tenant Communication using Azure Service Bus Sample][Cross-Tenant Communication using Azure Service Bus Sample].
-
-## Azure features for Multitenant Applications
-Azure provides many features that allow you to address the key problems encountered when designing a multitenant system.
-
-**Isolation**
-
-* Segment Website Tenants by Host Headers with or without TLS communication
-* Segment Website Tenants by Query Parameters
-* Web Services in Worker Roles
- * Worker roles, which typically process data on the back end of an application.
- * Web roles, which typically act as the front end for applications.
-
-**Storage**
-
-Data management options include Azure SQL Database, and Azure Storage services such as the Table service, which stores large amounts of structured, non-relational data, and the Blob service, which stores large amounts of unstructured text or binary data such as video, audio, and images.
-
-* Securing multitenant data in SQL Database by using per-tenant SQL Server logins.
-* Using Azure Tables for application resources: by specifying a container-level access policy, you can adjust permissions without having to issue new URLs for the resources protected with shared access signatures.
-* Using Azure queues for application resources: Azure queues are commonly used to drive processing on behalf of tenants, but they can also be used to distribute work required for provisioning or management.
-* Using Service Bus queues for application resources: when pushing work to a shared service, you can use a single queue where each tenant sender has permission (as derived from claims issued from ACS) only to push to that queue, while only the receivers from the service have permission to pull the data coming from multiple tenants from the queue.
-
-**Connection and Security Services**
-
-* Azure Service Bus, a messaging infrastructure that sits between applications allowing them to exchange messages in a loosely coupled way for improved scale and resiliency.
-
-**Networking Services**
-
-Azure provides several networking services that support authentication, and improve manageability of your hosted applications. These services include the following:
-
-* Azure Virtual Network lets you provision and manage virtual private networks (VPNs) in Azure as well as securely link these with on-premises IT infrastructure.
-* Virtual Network Traffic Manager allows you to load balance incoming traffic across multiple hosted Azure services whether they're running in the same datacenter or across different datacenters around the world.
-* Azure Active Directory (Azure AD) is a modern, REST-based service that provides identity management and access control capabilities for your cloud applications. Using Azure AD for Application Resources provides an easy way of authenticating and authorizing users to gain access to your web applications and services while allowing the features of authentication and authorization to be factored out of your code.
-* Azure Service Bus provides a secure messaging and data flow capability for distributed and hybrid applications, such as communication between Azure hosted applications and on-premises applications and services, without requiring complex firewall and security infrastructures. You can use Service Bus Relay for application resources to access services that are exposed as endpoints. These endpoints may belong to the tenant (for example, hosted outside of the system, such as on-premises), or they may be services provisioned specifically for the tenant (because sensitive, tenant-specific data travels across them).
-
-**Provisioning Resources**
-
-Azure provides a number of ways to provision new tenants for the application. For multitenant applications with a large number of tenants, it is usually necessary to automate this process by enabling self-service provisioning.
-
-* Worker roles allow you to provision and de-provision per-tenant resources (such as when a new tenant signs up or cancels), collect metrics for metering use, and manage scaling on a schedule or in response to key performance indicator thresholds being crossed. This same role may also be used to push out updates and upgrades to the solution.
-* Azure Blobs can be used to provision compute or pre-initialized storage resources for new tenants while providing container-level access policies to protect the compute service packages, VHD images, and other resources.
-* Options for provisioning SQL Database resources for a tenant include:
-
- * DDL in scripts or embedded as resources within assemblies.
- * SQL Server 2008 R2 DAC Packages deployed programmatically.
- * Copying from a master reference database.
- * Using database Import and Export to provision new databases from a file.
-
-<!--links-->
-
-[Hosting a Multi-Tenant Application on Azure]: /previous-versions/msp-n-p/hh534480(v=pandp.10)
-[Designing Multitenant Applications on Azure]: https://msdn.microsoft.com/library/windowsazure/hh689716
-[Cross-Tenant Communication using Azure Service Bus Sample]: https://github.com/Azure-Samples/Cross-Tenant-Communication-Using-Azure-Service-Bus
energy-data-services How To Use Managed Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-use-managed-identity.md
Title: Use managed identities for Microsoft Energy Data Services on Azure
-description: Learn how to use Managed Identity to access Microsoft Energy Data Services from other Azure services.
+description: Learn how to use a managed identity to access Microsoft Energy Data Services from other Azure services.
Last updated 01/04/2023
-#Customer intent: As a developer, I want to use managed identity to access Microsoft Energy Data Services from other Azure services such as Azure Functions.
+#Customer intent: As a developer, I want to use a managed identity to access Microsoft Energy Data Services from other Azure services, such as Azure Functions.
-# Use managed identity to access Microsoft Energy Data Services from other Azure services
+# Use a managed identity to access Microsoft Energy Data Services from other Azure services
-This article provides an overview on how to access data plane or control plane of Microsoft Energy Data Services from other Microsoft Azure Services using *managed identity*.
+This article describes how to access the data plane or control plane of Microsoft Energy Data Services from other Microsoft Azure services by using a *managed identity*.
-There's a need for services such as Azure Functions etc. to be able to consume Microsoft Energy Data Services APIs. This interoperability will allow you to use the best of multiple Azure services, for example, you can write a script in Azure Function to ingest data in Microsoft Energy Data Services. Here, we should assume that Azure Functions is the source service while Microsoft Energy Data Services is the target service. To understand how this scenario works, it's important to understand the concept of managed identity.
+There's a need for services such as Azure Functions to be able to consume Microsoft Energy Data Services APIs. This interoperability allows you to use the best capabilities of multiple Azure services.
-## Managed Identity
+For example, you can write a script in Azure Functions to ingest data in Microsoft Energy Data Services. In that scenario, you should assume that Azure Functions is the source service and Microsoft Energy Data Services is the target service.
-A managed identity from Azure Active Directory (Azure AD) allows your application to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to create or rotate any secrets. Any Azure service that wants to access Microsoft Energy Data Services control plane or data plane for any operation can use managed identity to do so.
+This article walks you through the five main steps for configuring Azure Functions to access Microsoft Energy Data Services.
-Managed identity is of two types. It could be a system assigned managed identity or user assigned managed identity. System-assigned managed identities have their lifecycle tied to the resource that created them. User-assigned managed identities can be used on multiple resources. To learn more about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md)
+## Overview of managed identities
-Currently, other services can connect to Microsoft Energy Data Services using system or user assigned managed identity. However, Microsoft Energy Data Services doesn't support system assigned managed identity.
+A managed identity from Azure Active Directory (Azure AD) allows your application to easily access other Azure AD-protected resources. The identity is managed by the Azure platform and doesn't require you to create or rotate any secrets. Any Azure service that wants to access Microsoft Energy Data Services control plane or data plane for any operation can use a managed identity to do so.
-For this scenario, we'll use a user assigned managed identity in Azure Function to call a data plane API in Microsoft Energy Data Services.
+There are two types of managed identities:
-## Pre-requisites
+- *System-assigned* managed identities have their lifecycle tied to the resource that created them.
+- *User-assigned* managed identities can be used on multiple resources.
-Before you begin, make sure:
+To learn more about managed identities, see [What are managed identities for Azure resources?](../active-directory/managed-identities-azure-resources/overview.md).
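
For illustration, here's a minimal Python sketch of acquiring a token with a user-assigned managed identity by using the `azure-identity` library. The client ID and resource scope values are placeholders, not values from this article:

```python
# A minimal sketch, assuming the azure-identity package is installed and the
# code runs on an Azure resource that has a user-assigned managed identity attached.
from azure.identity import ManagedIdentityCredential

# For a user-assigned identity, pass its client ID.
credential = ManagedIdentityCredential(client_id="<client-id-of-managed-identity>")

# Request a token for the target resource. The scope is a placeholder; for
# Microsoft Energy Data Services, you'd use its application ID URI instead.
token = credential.get_token("<target-resource-app-id>/.default")
print(token.token[:20] + "...")  # Bearer token used to call the protected API
```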
-* You've created a [Microsoft Energy Data Services instance](quickstart-create-microsoft-energy-data-services-instance.md).
+Currently, other services can connect to Microsoft Energy Data Services by using a system-assigned or user-assigned managed identity. However, Microsoft Energy Data Services doesn't support system-assigned managed identities.
-* You've created a [Azure Function App](../azure-functions/functions-create-function-app-portal.md).
+For the scenario in this article, you'll use a user-assigned managed identity in Azure Functions to call a data plane API in Microsoft Energy Data Services.
-* You've created a [Python Azure Function using portal](../azure-functions/create-first-function-vs-code-python.md) or using [command line.](../azure-functions/create-first-function-cli-python.md)
+## Prerequisites
-* You've created [user assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md). You can create a system assigned identity as well however, this document will explain the flow using user assigned managed identity.
+Before you begin, create the following resources:
+* [Microsoft Energy Data Services instance](quickstart-create-microsoft-energy-data-services-instance.md)
-## Steps for Azure Functions to access Microsoft Energy Data Services using Managed Identity
+* [Azure function app](../azure-functions/functions-create-function-app-portal.md)
-There are five important steps to configure Azure Functions to access Microsoft Energy Data Services.
+* Python-based Azure function, by using the [Azure portal](../azure-functions/create-first-function-vs-code-python.md) or the [command line](../azure-functions/create-first-function-cli-python.md)
+* [User-assigned managed identity](../active-directory/managed-identities-azure-resources/how-manage-user-assigned-managed-identities.md)
-### Step 1: Retrieve the Object ID of system or user-assigned identity that wants to access the Microsoft Energy Data Services APIs.
-1. You can get the *Object ID* of system assigned identity associated with Azure Functions by navigating to *Identity* screen of the Azure Function.
+## Step 1: Retrieve the object ID
-[![Screenshot of object id for system assigned identity.](media/how-to-use-managed-identity/1-object-id-system-assigned-identity.png)](media/how-to-use-managed-identity/1-object-id-system-assigned-identity.png#lightbox)
-
-2. Similarly, navigate to the *Overview* tab of the user assigned identity to find its *Object ID*.
+To retrieve the object ID for the user-assigned identity that will access the Microsoft Energy Data Services APIs:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+2. Go to the managed identity, and then select **Overview**.
+3. Under **Essentials**, note the **Object (principal) ID** value.
-[![Screenshot of object id for user assigned identity.](media/how-to-use-managed-identity/2-object-id-user-assigned-identity.png)](media/how-to-use-managed-identity/2-object-id-user-assigned-identity.png#lightbox)
+[![Screenshot of the object ID for a user-assigned identity.](media/how-to-use-managed-identity/2-object-id-user-assigned-identity.png)](media/how-to-use-managed-identity/2-object-id-user-assigned-identity.png#lightbox)
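
If you prefer to script this step, the following Python sketch reads the same values from Azure Resource Manager. The subscription, resource group, and identity names are placeholders (assumptions), and `2018-11-30` is one commonly used API version for user-assigned identities:

```python
# A minimal sketch, assuming azure-identity and requests are installed and the
# caller has Reader access to the identity. All names below are placeholders.
import requests
from azure.identity import DefaultAzureCredential

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
identity_name = "<user-assigned-identity-name>"

credential = DefaultAzureCredential()
arm_token = credential.get_token("https://management.azure.com/.default").token

url = (
    f"https://management.azure.com/subscriptions/{subscription_id}"
    f"/resourceGroups/{resource_group}"
    f"/providers/Microsoft.ManagedIdentity/userAssignedIdentities/{identity_name}"
    "?api-version=2018-11-30"
)
response = requests.get(url, headers={"Authorization": f"Bearer {arm_token}"})
response.raise_for_status()

properties = response.json()["properties"]
print("Object (principal) ID:", properties["principalId"])
print("Client ID:", properties["clientId"])
```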
-### Step 2. Retrieve the *Application ID* of system or user-assigned identity using the Object ID.
+## Step 2: Retrieve the application ID
-1. Navigate to *Azure Active Directory (Azure AD)* in Azure
-2. Navigate to *Enterprise Application* tab.
-3. Search for the *Object ID* of the user assigned identity or system assigned identity in the *Search by application name or Object ID* search box.
-4. Copy the *Application ID* from Enterprise Application section of Azure Active Directory.
-
-[![Screenshot of Application Id for user assigned identity.](media/how-to-use-managed-identity/3-object-id-application-id-user-assigned-identity.png)](media/how-to-use-managed-identity/3-object-id-application-id-user-assigned-identity.png#lightbox)
+Retrieve the application ID of the user-assigned identity by using the object ID:
-### Step 3: Add the user assigned managed identity to Azure Functions
-
-1. Sign in to the Azure portal.
-2. In the Azure portal, navigate to your Azure Function.
-3. Under Account Settings, select Identity.
-4. Select the User assigned tab, and then select Add.
-5. Select your existing user-assigned managed identity and then select Add. You'll then be returned to the User assigned tab.
+1. In the Azure portal, go to **Azure Active Directory**.
+2. On the left menu, select **Enterprise applications**.
+3. In the **Search by application name or object ID** box, enter the object ID.
+4. For the application that appears in the results, note the **Application ID** value.
-[![Screenshot of adding user assigned identity to Azure Function.](media/how-to-use-managed-identity/4-user-assigned-identity-azure-function.png)](media/how-to-use-managed-identity/4-user-assigned-identity-azure-function.png#lightbox)
-
-### Step 4: Add the application ID to entitlement groups to access Microsoft Energy Data Services APIs
-Next, you need to add this Application ID to appropriate groups using the entitlement service to access Microsoft Energy Data Services APIs. You need to perform the following actions:
[![Screenshot of the application ID for a user-assigned identity.](media/how-to-use-managed-identity/3-object-id-application-id-user-assigned-identity.png)](media/how-to-use-managed-identity/3-object-id-application-id-user-assigned-identity.png#lightbox)
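
You can also look up the application ID programmatically through Microsoft Graph, because the enterprise application is a service principal whose `appId` property is the application ID. Here's a hedged Python sketch; the object ID value is a placeholder:

```python
# A minimal sketch, assuming azure-identity and requests are installed and the
# caller has permission to read service principals in Microsoft Graph.
import requests
from azure.identity import DefaultAzureCredential

object_id = "<object-id-of-the-managed-identity>"  # Placeholder from step 1

credential = DefaultAzureCredential()
graph_token = credential.get_token("https://graph.microsoft.com/.default").token

response = requests.get(
    f"https://graph.microsoft.com/v1.0/servicePrincipals/{object_id}",
    headers={"Authorization": f"Bearer {graph_token}"},
)
response.raise_for_status()
print("Application ID:", response.json()["appId"])
```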
-1. Find the tenant-id, client-id, client-secret, Microsoft Energy Data Services url, and data partition-id and generate the [access token](how-to-manage-users.md#prerequisites). You should have the following information handy with you:
+## Step 3: Add the user-assigned managed identity to Azure Functions
-* tenant-id
-* client-id
-* client-secret
-* microsoft energy data services uri
-* data-partition-id
-* access token
-* Application ID of the managed identity
+1. In the Azure portal, go to your Azure function.
+2. Under **Account Settings**, select **Identity**.
+3. Select the **User assigned** tab, and then select **Add**.
+4. Select your existing user-assigned managed identity, and then select **Add**. You're then returned to the **User assigned** tab.
+
+[![Screenshot of a newly added user-assigned identity to an Azure function.](media/how-to-use-managed-identity/4-user-assigned-identity-azure-function.png)](media/how-to-use-managed-identity/4-user-assigned-identity-azure-function.png#lightbox)
+## Step 4: Add the application ID to entitlement groups
-2. Next, use the [add-member-api](https://microsoft.github.io/meds-samples/rest-apis/https://docsupdatetracker.net/index.html?page=/meds-samples/rest-apis/entitlements_openapi.yaml#/add-member-api/addMemberUsingPOST) to add the Application ID of the user managed identity to appropriate entitlement groups. For example, in this case, we'll add the Application ID to two groups:
+Next, add the application ID to the appropriate groups that will use the entitlement service to access Microsoft Energy Data Services APIs. The following example adds the application ID to two groups:
* users@[partition ID].dataservices.energy
* users.datalake.editors@[partition ID].dataservices.energy
-> [!NOTE]
-> In the below commands use the Application ID of the managed identity and not the Object Id of the managed identity in the below command.
-
-* Adding Application ID of the managed identity to users@[partition ID].dataservices.energy
-
-3. Run the following CURL command on Azure bash:
-
-```bash
- curl --location --request POST 'https://<microsoft energy data services uri>/api/entitlements/v2/groups/users@ <data-partition-id>.dataservices.energy/members' \
- --header 'data-partition-id: <data-partition-id>' \
- --header 'Authorization: Bearer \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "email": "<Application ID of the managed identity>",
- "role": "MEMBER"
- }'
-```
-
-Sample response:
-```JSON
-{
- "email": "<Application ID of the managed identity>",
- "role": "MEMBER"
- }
-```
-* Adding Application ID of the managed identity to users.datalake.editors@[partition ID].dataservices.energy
-
-4. Run the following CURL command on Azure bash:
-
-```bash
- curl --location --request POST 'https://<microsoft energy data services uri>/api/entitlements/v2/groups/ users.datalake.editors@ <data-partition-id>.dataservices.energy/members' \
- --header 'data-partition-id: <data-partition-id>' \
- --header 'Authorization: Bearer \
- --header 'Content-Type: application/json' \
- --data-raw '{
- "email": "<Application ID of the managed identity>",
- "role": "MEMBER"
- }'
-```
-
-Sample response:
-```JSON
-{
- "email": "<Application ID of the managed identity>",
- "role": "MEMBER"
- }
-```
-
-### Step 5: Generate token for accessing Microsoft Energy Data Services from Azure Function
+To add the application ID:
+
+1. Gather the following information:
+
+ * Tenant ID
+ * Client ID
+ * Client secret
+ * Microsoft Energy Data Services URI
+ * Data partition ID
+ * [Access token](how-to-manage-users.md#prerequisites)
+ * Application ID of the managed identity
+
+2. Use the [Add Member API](https://microsoft.github.io/meds-samples/rest-apis/https://docsupdatetracker.net/index.html?page=/meds-samples/rest-apis/entitlements_openapi.yaml#/add-member-api/addMemberUsingPOST) to add the application ID of the user-assigned managed identity to the appropriate entitlement groups.
+
+ > [!NOTE]
+ > In the following commands, be sure to use the application ID of the managed identity and not the object ID.
+
+ 1. To add the application ID to the users@[partition ID].dataservices.energy group, run the following cURL command via Bash in Azure:
+
+ ```bash
+        curl --location --request POST 'https://<Microsoft Energy Data Services URI>/api/entitlements/v2/groups/users@<data-partition-id>.dataservices.energy/members' \
+        --header 'data-partition-id: <data-partition-id>' \
+        --header 'Authorization: Bearer <access token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "<application ID of the managed identity>",
+ "role": "MEMBER"
+ }'
+ ```
+
+ Here's a sample response:
+
+ ```json
+ {
+ "email": "<application ID of the managed identity>",
+ "role": "MEMBER"
+ }
+ ```
+
+ 1. To add the application ID to the users.datalake.editors@[partition ID].dataservices.energy group, run the following cURL command via Bash in Azure:
+
+ ```bash
+        curl --location --request POST 'https://<Microsoft Energy Data Services URI>/api/entitlements/v2/groups/users.datalake.editors@<data-partition-id>.dataservices.energy/members' \
+        --header 'data-partition-id: <data-partition-id>' \
+        --header 'Authorization: Bearer <access token>' \
+ --header 'Content-Type: application/json' \
+ --data-raw '{
+ "email": "<application ID of the managed identity>",
+ "role": "MEMBER"
+ }'
+ ```
+
+ Here's a sample response:
+
+ ```json
+ {
+ "email": "<application ID of the managed identity>",
+ "role": "MEMBER"
+ }
+ ```
+
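The two cURL calls follow the same pattern, so you might wrap them in a small helper. Here's a hedged Python sketch of the same Add Member request by using the `requests` library; the URI, partition ID, token, and application ID are placeholders corresponding to the values gathered earlier:

```python
# A minimal sketch of the same Add Member call, assuming the requests package
# is installed. All angle-bracket values are placeholders gathered in step 4.
import requests

meds_uri = "<Microsoft Energy Data Services URI>"
data_partition_id = "<data-partition-id>"
access_token = "<access token>"
app_id = "<application ID of the managed identity>"

def add_member(group: str) -> dict:
    """Add the managed identity's application ID to an entitlement group."""
    response = requests.post(
        f"https://{meds_uri}/api/entitlements/v2/groups/{group}/members",
        headers={
            "data-partition-id": data_partition_id,
            "Authorization": f"Bearer {access_token}",
            "Content-Type": "application/json",
        },
        json={"email": app_id, "role": "MEMBER"},
    )
    response.raise_for_status()
    return response.json()

for group in (
    f"users@{data_partition_id}.dataservices.energy",
    f"users.datalake.editors@{data_partition_id}.dataservices.energy",
):
    print(add_member(group))
```
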
+## Step 5: Generate a token
Now Azure Functions is ready to access Microsoft Energy Data Services APIs.
-In this case, Azure function generates a token using User Assigned identity. The Azure function uses the Application ID present in the Microsoft Energy Data Services instance, while generating the token.
-Sample Azure function code.
+The Azure function generates a token by using the user-assigned identity. The function uses the application ID that's present in the Microsoft Energy Data Services instance while generating the token.
+
+Here's an example of the Azure function code:
```python
import logging
import azure.functions as func
from msrestazure.azure_active_directory import MSIAuthentication

def main(req: func.HttpRequest) -> str:
    logging.info('Python HTTP trigger function processed a request.')
-    //To Authenticate using Managed Identity, we need to pass the Microsoft Energy Data Services Application ID as the resource.
-    //If we want to use a user-assigned identity, we should also include the
-    //Client ID as an additional parameter.
-    //Managed Identity using System Assigned Identity: MSIAuthentication(resource)
-    //Managed Identity using user Assigned Identity: MSIAuthentication(client_id, resource)
+    # To authenticate by using a managed identity, you need to pass the Microsoft Energy Data Services application ID as the resource.
+    # To use a user-assigned identity, you should include the client ID as an additional parameter.
+    # Managed identity using a user-assigned identity: MSIAuthentication(client_id, resource)
    creds = MSIAuthentication(client_id="<client_id_of_managed_identity>", resource="<meds_app_id>")
    url = "https://<meds-uri>/api/entitlements/v2/groups"
    payload = {}
-    // Passing data partition ID of Microsoft Energy Data Services in headers along with the token received using MI.
+    # Pass the data partition ID of Microsoft Energy Data Services in the headers, along with the token received by using the managed identity.
    headers = {
        'data-partition-id': '<data partition id>',
        'Authorization': 'Bearer ' + creds.token["access_token"]
    }
```
-You should get the following successful response from Azure Function:
+You should get the following successful response from Azure Functions:
-[![Screenshot of success message from Azure Function.](media/how-to-use-managed-identity/5-azure-function-success.png)](media/how-to-use-managed-identity/5-azure-function-success.png#lightbox)
+[![Screenshot of a success message from Azure Functions.](media/how-to-use-managed-identity/5-azure-function-success.png)](media/how-to-use-managed-identity/5-azure-function-success.png#lightbox)
-With the following steps completed, you're now able to use Azure Functions to access Microsoft Energy Data Services APIs with appropriate use of managed identities.
+With the preceding steps completed, you can now use Azure Functions to access Microsoft Energy Data Services APIs with appropriate use of managed identities.
## Next steps
-<!-- Add a context sentence for the following links -->
-To learn more about Lockbox in Microsoft Energy Data Services
+
+Learn about Lockbox:
> [!div class="nextstepaction"]
> [Lockbox in Microsoft Energy Data Services](how-to-create-lockbox.md)
event-grid Event Schema Api Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/event-schema-api-management.md
API Management emits the following event types:
| Microsoft.ApiManagement.SubscriptionCreated | Raised when a subscription is created. |
| Microsoft.ApiManagement.SubscriptionUpdated | Raised when a subscription is updated. |
| Microsoft.ApiManagement.SubscriptionDeleted | Raised when a subscription is deleted. |
+| Microsoft.ApiManagement.GatewayCreated | Raised when a self-hosted gateway is created. |
+| Microsoft.ApiManagement.GatewayDeleted | Raised when a self-hosted gateway is deleted. |
+| Microsoft.ApiManagement.GatewayUpdated | Raised when a self-hosted gateway is updated. |
+| Microsoft.ApiManagement.GatewayAPIAdded | Raised when an API was added to a self-hosted gateway. |
+| Microsoft.ApiManagement.GatewayAPIRemoved | Raised when an API was removed from a self-hosted gateway. |
+| Microsoft.ApiManagement.GatewayCertificateAuthorityCreated | Raised when a certificate authority was created for a self-hosted gateway. |
+| Microsoft.ApiManagement.GatewayCertificateAuthorityDeleted | Raised when a certificate authority was deleted for a self-hosted gateway. |
+| Microsoft.ApiManagement.GatewayCertificateAuthorityUpdated | Raised when a certificate authority was updated for a self-hosted gateway. |
+| Microsoft.ApiManagement.GatewayHostnameConfigurationCreated | Raised when a hostname configuration was created for a self-hosted gateway. |
+| Microsoft.ApiManagement.GatewayHostnameConfigurationDeleted | Raised when a hostname configuration was deleted for a self-hosted gateway. |
+| Microsoft.ApiManagement.GatewayHostnameConfigurationUpdated | Raised when a hostname configuration was updated for a self-hosted gateway. |
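
A subscriber such as an Event Grid-triggered Azure function can branch on these event types. The following Python sketch is illustrative only; the function and its binding configuration are assumptions, not part of this article:

```python
# A minimal sketch of an Event Grid-triggered Azure function that reacts to the
# self-hosted gateway events listed above. The Event Grid binding is assumed to
# be configured in function.json.
import logging
import azure.functions as func

def main(event: func.EventGridEvent):
    # event_type carries values such as Microsoft.ApiManagement.GatewayCreated.
    if event.event_type.startswith("Microsoft.ApiManagement.Gateway"):
        logging.info("Gateway event %s for subject %s", event.event_type, event.subject)
```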
## Example event
firewall Integrate With Nat Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/firewall/integrate-with-nat-gateway.md
There's no double NAT with this architecture. Azure Firewall instances send th
> [!NOTE]
> Using Azure Virtual Network NAT is currently incompatible with Azure Firewall if you have deployed your [Azure Firewall across multiple availability zones](deploy-availability-zone-powershell.md).
>
-> In addition, Azure Virtual Network NAT integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
+> In addition, Azure Virtual Network NAT integration is not currently supported in secured virtual hub network architectures. You must deploy using a hub virtual network architecture. For detailed guidance on integrating NAT gateway with Azure Firewall in a hub and spoke network architecture, see the [NAT gateway and Azure Firewall integration tutorial](/azure/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall). For more information about Azure Firewall architecture options, see [What are the Azure Firewall Manager architecture options?](../firewall-manager/vhubs-and-vnets.md).
## Associate a NAT gateway with an Azure Firewall subnet - Azure PowerShell
az network vnet subnet update --name AzureFirewallSubnet --vnet-name nat-vnet --
## Next steps

- [Design virtual networks with NAT gateway](../virtual-network/nat-gateway/nat-gateway-resource.md)
+- [Integrate NAT gateway with Azure Firewall in a hub and spoke network](/azure/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall)
governance Assign Policy Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/assign-policy-terraform.md
This quickstart steps you through the process of creating a policy assignment to
machines that aren't using managed disks. At the end of this process, you'll successfully identify virtual machines that aren't using managed
-disks. They're _non-compliant_ with the policy assignment.
+disks across your subscription. They're _non-compliant_ with the policy assignment.
## Prerequisites
for Azure Policy use the
1. Create a new folder named `policy-assignment` and change directories into it.
-1. Create `main.tf` with the following code:
-
- ```hcl
- provider "azurerm" {
- features {}
- }
-
- terraform {
- required_providers {
- azurerm = {
- source = "hashicorp/azurerm"
- version = ">= 2.96.0"
- }
- }
- }
+2. Create `main.tf` with the following code:
+
+ > [!NOTE]
+   > To create a policy assignment at a management group, use the [azurerm_management_group_policy_assignment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/management_group_policy_assignment) resource. For a resource group, use [azurerm_resource_group_policy_assignment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/resource_group_policy_assignment), and for a subscription, use [azurerm_subscription_policy_assignment](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs/resources/subscription_policy_assignment).
+
+
+ ```terraform
+ provider "azurerm" {
+ features {}
+ }
+
+ terraform {
+ required_providers {
+ azurerm = {
+ source = "hashicorp/azurerm"
+ version = ">= 2.96.0"
+ }
+ }
+ }
+
+ resource "azurerm_subscription_policy_assignment" "auditvms" {
+ name = "audit-vm-manageddisks"
+ subscription_id = var.cust_scope
+ policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d"
+ description = "Shows all virtual machines not using managed disks"
+ display_name = "Audit VMs without managed disks assignment"
+ }
+ ```
+3. Create `variables.tf` with the following code:
- resource "azurerm_resource_policy_assignment" "auditvms" {
- name = "audit-vm-manageddisks"
- resource_id = var.cust_scope
- policy_definition_id = "/providers/Microsoft.Authorization/policyDefinitions/06a78e20-9358-41c9-923c-fb736d382a4d"
- description = "Shows all virtual machines not using managed disks"
- display_name = "Audit VMs without managed disks assignment"
+ ```terraform
+ variable "cust_scope" {
+ default = "{scope}"
} ```
-1. Create `variables.tf` with the following code:
-
- ```hcl
- variable "cust_scope" {
- default = "{scope}"
- }
- ```
-
- A scope determines what resources or grouping of resources the policy assignment gets enforced
- on. It could range from a management group to an individual resource. Be sure to replace
- `{scope}` with one of the following patterns:
+ A scope determines what resources or grouping of resources the policy assignment gets enforced on. It could range from a management group to an individual resource. Be sure to replace `{scope}` with one of the following patterns based on the declared resource:
- Subscription: `/subscriptions/{subscriptionId}`
- Resource group: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}`
- Resource: `/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/{resourceProviderNamespace}/[{parentResourcePath}/]`
-1. Create `output.tf` with the following code:
+4. Create `output.tf` with the following code:
- ```hcl
- output "assignment_id" {
- value = azurerm_resource_policy_assignment.auditvms.id
- }
- ```
+ ```terraform
+ output "assignment_id" {
+      value = azurerm_subscription_policy_assignment.auditvms.id
+ }
+ ```
## Initialize Terraform and create plan
returned.
## Identify non-compliant resources To view the resources that aren't compliant under this new assignment, use the _assignment\_id_
-returned by `terraform apply`. With it, run the following command to get the resource IDs of the
+returned by `terraform apply`. With it, run the following command to get the resource IDs of the
non-compliant resources that are output into a JSON file:

```console
-armclient post "/subscriptions/<subscriptionID>/resourceGroups/<rgName>/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$filter=IsCompliant eq false and PolicyAssignmentId eq '<policyAssignmentID>'&$apply=groupby((ResourceId))" > <json file to direct the output with the resource IDs into>
+armclient post "/subscriptions/<subscriptionID>/providers/Microsoft.PolicyInsights/policyStates/latest/queryResults?api-version=2019-10-01&$filter=IsCompliant eq false and PolicyAssignmentId eq '<policyAssignmentID>'&$apply=groupby((ResourceId))" > <json file to direct the output with the resource IDs into>
```

Your results resemble the following example:
governance Cis Azure 1 4 0 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/samples/cis-azure-1-4-0.md
+
+ Title: Regulatory Compliance details for CIS Microsoft Azure Foundations Benchmark 1.4.0
+description: This article describes the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative. Each control is mapped to one or more Azure Policy definitions that assist with assessment.
Last updated : 01/18/2023++++
+# Details of the CIS Microsoft Azure Foundations Benchmark 1.4.0 Regulatory Compliance built-in initiative
+
+The following article details how the Azure Policy Regulatory Compliance built-in initiative
+definition maps to **compliance domains** and **controls** in CIS Microsoft Azure Foundations Benchmark 1.4.0.
+For more information about this compliance standard, see
+[CIS Microsoft Azure Foundations Benchmark 1.4.0](https://www.cisecurity.org/benchmark/azure/). To understand
+_Ownership_, see [Azure Policy policy definition](../concepts/definition-structure.md#type) and
+[Shared responsibility in the cloud](../../../security/fundamentals/shared-responsibility.md).
+
+The following mappings are to the **CIS Microsoft Azure Foundations Benchmark 1.4.0** controls. Use the
+navigation on the right to jump directly to a specific **compliance domain**. Many of the controls
+are implemented with an [Azure Policy](../overview.md) initiative definition. To review the complete
+initiative definition, open **Policy** in the Azure portal and select the **Definitions** page.
+Then, find and select the **CIS Microsoft Azure Foundations Benchmark v1.4.0** Regulatory Compliance built-in
+initiative definition.
+
+> [!IMPORTANT]
+> Each control below is associated with one or more [Azure Policy](../overview.md) definitions.
+> These policies may help you [assess compliance](../how-to/get-compliance-data.md) with the
+> control; however, there often is not a one-to-one or complete match between a control and one or
+> more policies. As such, **Compliant** in Azure Policy refers only to the policy definitions
+> themselves; this doesn't ensure you're fully compliant with all requirements of a control. In
+> addition, the compliance standard includes controls that aren't addressed by any Azure Policy
+> definitions at this time. Therefore, compliance in Azure Policy is only a partial view of your
+> overall compliance status. The associations between compliance domains, controls, and Azure Policy
+> definitions for this compliance standard may change over time. To view the change history, see the
+> [GitHub Commit History](https://github.com/Azure/azure-policy/commits/master/built-in-policies/policySetDefinitions/Regulatory%20Compliance/CISv1_4_0.json).
+
+## 1 Identity and Access Management
+
+### Ensure that 'Multi-Factor Auth Status' is 'Enabled' for all Privileged Users
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adopt biometric authentication mechanisms](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7a8356-5c34-9a95-3118-1424cfaf192a) |CMA_0005 - Adopt biometric authentication mechanisms |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0005.json) |
+|[MFA should be enabled for accounts with write permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9297c21d-2ed6-4474-b48f-163f75654ce3) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with write privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForWritePermissions_Audit.json) |
+|[MFA should be enabled on accounts with owner permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faa633080-8b72-40c4-a2d7-d00c03e80bed) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with owner permissions to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForOwnerPermissions_Audit.json) |
+
+### Ensure that 'Users can add gallery apps to My Apps' is set to 'No'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.10
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+
+### Ensure that 'Users can register applications' is set to 'No'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.11
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+
+### Ensure That 'Guest users access restrictions' is set to 'Guest user access is restricted to properties and memberships of their own directory objects'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.12
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Design an access control model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F03b6427e-6072-4226-4bd9-a410ab65317e) |CMA_0129 - Design an access control model |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0129.json) |
+|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) |
+|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
+|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
+
+### Ensure that 'Guest invite restrictions' is set to "Only users assigned to specific admin roles can invite guest users"
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.13
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Design an access control model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F03b6427e-6072-4226-4bd9-a410ab65317e) |CMA_0129 - Design an access control model |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0129.json) |
+|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) |
+|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
+|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
+
+### Ensure That 'Restrict access to Azure AD administration portal' is Set to "Yes"
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.14
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
+|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
+
+### Ensure that 'Restrict user ability to access groups features in the Access Pane' is Set to 'Yes'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.15
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) |
+
+### Ensure that 'Users can create security groups in Azure portals, API or PowerShell' is set to 'No'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.16
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) |
+
+### Ensure that 'Owners can manage group membership requests in the Access Panel' is set to 'No'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.17
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) |
+
+### Ensure that 'Users can create Microsoft 365 groups in Azure portals, API or PowerShell' is set to 'No'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.18
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) |
+
+### Ensure that 'Require Multi-Factor Authentication to register or join devices with Azure AD' is set to 'Yes'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.19
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adopt biometric authentication mechanisms](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7a8356-5c34-9a95-3118-1424cfaf192a) |CMA_0005 - Adopt biometric authentication mechanisms |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0005.json) |
+|[Authorize remote access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdad8a2e9-6f27-4fc2-8933-7e99fe700c9c) |CMA_0024 - Authorize remote access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0024.json) |
+|[Document mobility training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83dfb2b8-678b-20a0-4c44-5c75ada023e6) |CMA_0191 - Document mobility training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0191.json) |
+|[Document remote access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d492600-27ba-62cc-a1c3-66eb919f6a0d) |CMA_0196 - Document remote access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0196.json) |
+|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) |
+|[Implement controls to secure alternate work sites](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcd36eeec-67e7-205a-4b64-dbfe3b4e3e4e) |CMA_0315 - Implement controls to secure alternate work sites |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0315.json) |
+|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) |
+|[Satisfy token quality requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F056a723b-4946-9d2a-5243-3aa27c4d31a1) |CMA_0487 - Satisfy token quality requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0487.json) |
+
+### Ensure that 'Multi-Factor Auth Status' is 'Enabled' for all Non-Privileged Users
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adopt biometric authentication mechanisms](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7a8356-5c34-9a95-3118-1424cfaf192a) |CMA_0005 - Adopt biometric authentication mechanisms |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0005.json) |
+|[MFA should be enabled on accounts with read permissions on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3576e28-8b17-4677-84c3-db2990658d64) |Multi-Factor Authentication (MFA) should be enabled for all subscription accounts with read privileges to prevent a breach of accounts or resources. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableMFAForReadPermissions_Audit.json) |
+
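+Beyond the manual control mappings, the one auditable definition in this table can be assigned directly. A minimal Azure CLI sketch, assuming a placeholder subscription ID; the definition GUID is taken from the table above:
+
+```azurecli
+# Assign the built-in "MFA should be enabled on accounts with read permissions
+# on your subscription" policy at subscription scope.
+az policy assignment create \
+  --name 'audit-mfa-read-permissions' \
+  --policy 'e3576e28-8b17-4677-84c3-db2990658d64' \
+  --scope "/subscriptions/<subscription-id>"
+```
+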
+### Ensure that no custom subscription owner roles are created
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.20
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Design an access control model](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F03b6427e-6072-4226-4bd9-a410ab65317e) |CMA_0129 - Design an access control model |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0129.json) |
+|[Employ least privilege access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1bc7fd64-291f-028e-4ed6-6e07886e163f) |CMA_0212 - Employ least privilege access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0212.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) |
+
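+The controls above are manual, but you can approximate the check from the Azure CLI by listing custom roles whose actions include the wildcard `*`. A sketch; the JMESPath query is illustrative and inspects only the first permissions block of each role:
+
+```azurecli
+# List custom roles that grant all actions ('*'), which is what makes a
+# custom role owner-equivalent. Roles with multiple permissions blocks
+# would need a deeper inspection than this simplified query.
+az role definition list --custom-role-only true \
+  --query "[?contains(permissions[0].actions, '*')].roleName" -o tsv
+```
+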
+### Ensure Security Defaults is enabled on Azure Active Directory
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.21
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adopt biometric authentication mechanisms](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7a8356-5c34-9a95-3118-1424cfaf192a) |CMA_0005 - Adopt biometric authentication mechanisms |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0005.json) |
+|[Authenticate to cryptographic module](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6f1de470-79f3-1572-866e-db0771352fc8) |CMA_0021 - Authenticate to cryptographic module |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0021.json) |
+|[Authorize remote access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdad8a2e9-6f27-4fc2-8933-7e99fe700c9c) |CMA_0024 - Authorize remote access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0024.json) |
+|[Document mobility training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83dfb2b8-678b-20a0-4c44-5c75ada023e6) |CMA_0191 - Document mobility training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0191.json) |
+|[Document remote access guidelines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d492600-27ba-62cc-a1c3-66eb919f6a0d) |CMA_0196 - Document remote access guidelines |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0196.json) |
+|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) |
+|[Implement controls to secure alternate work sites](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcd36eeec-67e7-205a-4b64-dbfe3b4e3e4e) |CMA_0315 - Implement controls to secure alternate work sites |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0315.json) |
+|[Provide privacy training](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F518eafdd-08e5-37a9-795b-15a8d798056d) |CMA_0415 - Provide privacy training |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0415.json) |
+|[Satisfy token quality requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F056a723b-4946-9d2a-5243-3aa27c4d31a1) |CMA_0487 - Satisfy token quality requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0487.json) |
+
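+Security defaults has no Azure Policy surface, but the tenant setting can be read through Microsoft Graph. A sketch using `az rest`, assuming the signed-in account has permission to read tenant policies:
+
+```azurecli
+# Returns true when security defaults are enabled for the tenant.
+az rest --method GET \
+  --url "https://graph.microsoft.com/v1.0/policies/identitySecurityDefaultsEnforcementPolicy" \
+  --query "isEnabled"
+```
+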
+### Ensure a Custom Role is Assigned Permissions for Administering Resource Locks
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.22
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) |
+
+### Ensure guest users are reviewed on a monthly basis
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[External accounts with owner permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8456c1c-aa66-4dfb-861a-25d127b775c9) |External accounts with owner permissions should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWithOwnerPermissions_Audit.json) |
+|[External accounts with read permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5f76cf89-fbf2-47fd-a3f4-b891fa780b60) |External accounts with read privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsReadPermissions_Audit.json) |
+|[External accounts with write permissions should be removed from your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5c607a2e-c700-4744-8254-d77e7c9eb5e4) |External accounts with write privileges should be removed from your subscription in order to prevent unmonitored access. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_RemoveExternalAccountsWritePermissions_Audit.json) |
+|[Reassign or remove user privileges as needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7805a343-275c-41be-9d62-7215b96212d8) |CMA_C1040 - Reassign or remove user privileges as needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1040.json) |
+|[Review account provisioning logs](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa830fe9e-08c9-a4fb-420c-6f6bf1702395) |CMA_0460 - Review account provisioning logs |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0460.json) |
+|[Review user accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79f081c7-1634-01a1-708e-376197999289) |CMA_0480 - Review user accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0480.json) |
+|[Review user privileges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff96d2186-79df-262d-3f76-f371e3b71798) |CMA_C1039 - Review user privileges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1039.json) |
+
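+For the monthly review itself, a practical starting point is to enumerate guest accounts with the Azure CLI. A sketch; the output columns are chosen for illustration:
+
+```azurecli
+# List guest (external) users for a periodic access review.
+az ad user list --filter "userType eq 'Guest'" \
+  --query "[].{name:displayName, upn:userPrincipalName}" -o table
+```
+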
+### Ensure that 'Restore multi-factor authentication on all remembered devices' is Enabled
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adopt biometric authentication mechanisms](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7d7a8356-5c34-9a95-3118-1424cfaf192a) |CMA_0005 - Adopt biometric authentication mechanisms |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0005.json) |
+|[Identify and authenticate network devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fae5345d5-8dab-086a-7290-db43a3272198) |CMA_0296 - Identify and authenticate network devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0296.json) |
+|[Satisfy token quality requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F056a723b-4946-9d2a-5243-3aa27c4d31a1) |CMA_0487 - Satisfy token quality requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0487.json) |
+
+### Ensure that 'Number of days before users are asked to re-confirm their authentication information' is not set to '0'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Automate account management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2cc9c165-46bd-9762-5739-d2aae5ba90a1) |CMA_0026 - Automate account management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0026.json) |
+|[Manage system and admin accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34d38ea7-6754-1838-7031-d7fd07099821) |CMA_0368 - Manage system and admin accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0368.json) |
+|[Monitor access across the organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48c816c5-2190-61fc-8806-25d6f3df162f) |CMA_0376 - Monitor access across the organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0376.json) |
+|[Notify when account is not needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8489ff90-8d29-61df-2d84-f9ab0f4c5e84) |CMA_0383 - Notify when account is not needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0383.json) |
+
+### Ensure that 'Notify users on password resets?' is set to 'Yes'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Automate account management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2cc9c165-46bd-9762-5739-d2aae5ba90a1) |CMA_0026 - Automate account management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0026.json) |
+|[Implement training for protecting authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe4b00788-7e1c-33ec-0418-d048508e095b) |CMA_0329 - Implement training for protecting authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0329.json) |
+|[Manage system and admin accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34d38ea7-6754-1838-7031-d7fd07099821) |CMA_0368 - Manage system and admin accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0368.json) |
+|[Monitor access across the organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48c816c5-2190-61fc-8806-25d6f3df162f) |CMA_0376 - Monitor access across the organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0376.json) |
+|[Notify when account is not needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8489ff90-8d29-61df-2d84-f9ab0f4c5e84) |CMA_0383 - Notify when account is not needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0383.json) |
+
+### Ensure That 'Notify all admins when other admins reset their password?' is set to 'Yes'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.8
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Automate account management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2cc9c165-46bd-9762-5739-d2aae5ba90a1) |CMA_0026 - Automate account management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0026.json) |
+|[Implement training for protecting authenticators](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe4b00788-7e1c-33ec-0418-d048508e095b) |CMA_0329 - Implement training for protecting authenticators |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0329.json) |
+|[Manage system and admin accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34d38ea7-6754-1838-7031-d7fd07099821) |CMA_0368 - Manage system and admin accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0368.json) |
+|[Monitor access across the organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48c816c5-2190-61fc-8806-25d6f3df162f) |CMA_0376 - Monitor access across the organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0376.json) |
+|[Monitor privileged role assignment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fed87d27a-9abf-7c71-714c-61d881889da4) |CMA_0378 - Monitor privileged role assignment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0378.json) |
+|[Notify when account is not needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8489ff90-8d29-61df-2d84-f9ab0f4c5e84) |CMA_0383 - Notify when account is not needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0383.json) |
+|[Restrict access to privileged accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F873895e8-0e3a-6492-42e9-22cd030e9fcd) |CMA_0446 - Restrict access to privileged accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0446.json) |
+|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) |
+|[Use privileged identity management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe714b481-8fac-64a2-14a9-6f079b2501a4) |CMA_0533 - Use privileged identity management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0533.json) |
+
+### Ensure that 'Users can consent to apps accessing company data on their behalf' is set to 'No'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 1.9
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+
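+The user-consent setting is exposed through the Microsoft Graph authorization policy rather than Azure Policy. A read-only sketch with `az rest`; an empty `permissionGrantPoliciesAssigned` list corresponds to user consent being disabled ('No'):
+
+```azurecli
+# Inspect which permission-grant policies apply to ordinary users;
+# an empty list means users cannot consent to apps on their own.
+az rest --method GET \
+  --url "https://graph.microsoft.com/v1.0/policies/authorizationPolicy" \
+  --query "defaultUserRolePermissions.permissionGrantPoliciesAssigned"
+```
+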
+## 2 Microsoft Defender for Cloud
+
+### Ensure that Microsoft Defender for Servers is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4da35fc9-c9e7-4960-aec9-797fe7d9051d) |Azure Defender for servers provides real-time threat protection for server workloads and generates hardening recommendations as well as alerts about suspicious activities. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnVM_Audit.json) |
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
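+Where the tables in this section call for a Defender plan to be 'On', the corresponding remediation is to set that plan's pricing tier to `standard`. A sketch covering the plans behind recommendations 2.1 through 2.5; the plan names are the `az security pricing` identifiers, which differ from the portal labels:
+
+```azurecli
+# Enable the Defender (standard) tier for the plans audited in 2.1-2.5.
+for plan in VirtualMachines AppServices SqlServers SqlServerVirtualMachines StorageAccounts; do
+  az security pricing create --name "$plan" --tier 'standard'
+done
+
+# Confirm the resulting tiers.
+az security pricing list -o table
+```
+
+The same command applies to the App Service, SQL, and Storage recommendations later in this section, so it isn't repeated there.
+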
+### Ensure that Microsoft Defender for Cloud Apps (MCAS) integration with Microsoft Defender for Cloud is selected
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.10
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
+### Ensure that 'Auto provisioning of Log Analytics agent for Azure VMs' is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.11
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Auto provisioning of the Log Analytics agent should be enabled on your subscription](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F475aae12-b88a-4572-8b36-9b712b2b3a17) |To monitor for security vulnerabilities and threats, Azure Security Center collects data from your Azure virtual machines. Data is collected by the Log Analytics agent, formerly known as the Microsoft Monitoring Agent (MMA), which reads various security-related configurations and event logs from the machine and copies the data to your Log Analytics workspace for analysis. We recommend enabling auto provisioning to automatically deploy the agent to all supported Azure VMs and any new ones that are created. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Automatic_provisioning_log_analytics_monitoring_agent.json) |
+|[Document security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c6bee3a-2180-2430-440d-db3c7a849870) |CMA_0202 - Document security operations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0202.json) |
+|[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) |
+
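+The audited setting can also be turned on from the command line. A sketch; `default` is the built-in auto-provisioning setting name for the subscription:
+
+```azurecli
+# Turn on auto provisioning of the Log Analytics agent for the subscription.
+az security auto-provisioning-setting update --name 'default' --auto-provision 'On'
+```
+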
+### Ensure ASC Default Policy settings are not set to 'Disabled'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.12
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Configure actions for noncompliant devices](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb53aa659-513e-032c-52e6-1ce0ba46582f) |CMA_0062 - Configure actions for noncompliant devices |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0062.json) |
+|[Develop and maintain baseline configurations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f20840e-7925-221c-725d-757442753e7c) |CMA_0153 - Develop and maintain baseline configurations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0153.json) |
+|[Enforce security configuration settings](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F058e9719-1ff9-3653-4230-23f76b6492e0) |CMA_0249 - Enforce security configuration settings |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0249.json) |
+|[Establish a configuration control board](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7380631c-5bf5-0e3a-4509-0873becd8a63) |CMA_0254 - Establish a configuration control board |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0254.json) |
+|[Establish and document a configuration management plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F526ed90e-890f-69e7-0386-ba5c0f1f784f) |CMA_0264 - Establish and document a configuration management plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0264.json) |
+|[Implement an automated configuration management tool](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F33832848-42ab-63f3-1a55-c0ad309d44cd) |CMA_0311 - Implement an automated configuration management tool |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0311.json) |
+
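+To spot 'Disabled' effects in the default Microsoft Defender for Cloud initiative, inspect the parameters of its standard assignment. A sketch assuming the default assignment name `SecurityCenterBuiltIn` and a placeholder subscription ID:
+
+```azurecli
+# Any parameter ending in "Effect" whose value is "Disabled" indicates a
+# policy in the ASC default initiative that has been turned off.
+az policy assignment show --name 'SecurityCenterBuiltIn' \
+  --scope "/subscriptions/<subscription-id>" \
+  --query "parameters"
+```
+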
+### Ensure 'Additional email addresses' is Configured with a Security Contact Email
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.13
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Subscriptions should have a contact email address for security issues](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4f4f78b8-e367-4b10-a341-d9a4ad5cf1c7) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, set a security contact to receive email notifications from Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Security_contact_email.json) |
+
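+The security contact can be set from the CLI as well. A sketch; parameter names have changed across `az security contact` versions, so treat these as illustrative:
+
+```azurecli
+# Register a security contact email that receives Defender for Cloud notifications.
+az security contact create --name 'default1' \
+  --email 'security@contoso.com' \
+  --alert-notifications 'on' \
+  --alerts-admins 'on'
+```
+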
+### Ensure That 'Notify about alerts with the following severity' is Set to 'High'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.14
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Email notification for high severity alerts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6e2593d9-add6-4083-9c9b-4b7d2188c899) |To ensure the relevant people in your organization are notified when there is a potential security breach in one of your subscriptions, enable email notifications for high severity alerts in Security Center. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_Email_notification.json) |
+
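+Compliance with this (and any other) definition in these tables can be queried through Azure Policy's state API. A sketch using the definition GUID from the table above:
+
+```azurecli
+# List resources evaluated by the high-severity email notification policy
+# and their compliance state.
+az policy state list \
+  --filter "policyDefinitionName eq '6e2593d9-add6-4083-9c9b-4b7d2188c899'" \
+  --query "[].{resource:resourceId, state:complianceState}" -o table
+```
+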
+### Ensure that Microsoft Defender for App Service is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for App Service should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2913021d-f2fd-4f3d-b958-22354e2bdbcb) |Azure Defender for App Service leverages the scale of the cloud, and the visibility that Azure has as a cloud provider, to monitor for common web app attacks. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnAppServices_Audit.json) |
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
+### Ensure that Microsoft Defender for Azure SQL Databases is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for Azure SQL Database servers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7fe3b40f-802b-4cdd-8bd4-fd799c948cc2) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServers_Audit.json) |
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
+### Ensure that Microsoft Defender for SQL servers on machines is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for SQL servers on machines should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6581d072-105e-4418-827f-bd446d56421b) |Azure Defender for SQL provides functionality for surfacing and mitigating potential database vulnerabilities, detecting anomalous activities that could indicate threats to SQL databases, and discovering and classifying sensitive data. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedDataSecurityOnSqlServerVirtualMachines_Audit.json) |
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
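+
+The built-in policy above only audits the Defender plan's state. To remediate, the corresponding Microsoft Defender plan must have its pricing tier set to `Standard`. Below is a minimal Python sketch against the ARM REST API; the subscription ID and bearer token are placeholders, and the plan names and `api-version` are assumptions to verify against the current `Microsoft.Security/pricings` reference. The same call, with the plan name swapped, covers the Defender recommendations that follow (2.5 through 2.8).
+
+```python
+import requests
+
+SUB = "<subscription-id>"      # placeholder
+TOKEN = "<aad-bearer-token>"   # placeholder; obtain via azure-identity in practice
+HEADERS = {"Authorization": f"Bearer {TOKEN}"}
+
+# Assumed plan names: SQL servers on machines, Storage, Containers, Key Vault.
+for plan in ("SqlServerVirtualMachines", "StorageAccounts", "Containers", "KeyVaults"):
+    url = (f"https://management.azure.com/subscriptions/{SUB}"
+           f"/providers/Microsoft.Security/pricings/{plan}?api-version=2022-03-01")
+    requests.put(url, json={"properties": {"pricingTier": "Standard"}},
+                 headers=HEADERS).raise_for_status()
+```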
+
+### Ensure that Microsoft Defender for Storage is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for Storage should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F308fbb08-4ab8-4e67-9b29-592e93fb94fa) |Azure Defender for Storage provides detections of unusual and potentially harmful attempts to access or exploit storage accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnStorageAccounts_Audit.json) |
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
+### Ensure that Microsoft Defender for Kubernetes is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
+### Ensure that Microsoft Defender for Container Registries is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Microsoft Defender for Containers should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1c988dd6-ade4-430f-a608-2a3e5b0a6d38) |Microsoft Defender for Containers provides hardening, vulnerability assessment and run-time protections for your Azure, hybrid, and multi-cloud Kubernetes environments. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnContainers_Audit.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
+### Ensure that Microsoft Defender for Key Vault is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.8
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e6763cc-5078-4e64-889d-ff4d9a839047) |Azure Defender for Key Vault provides an additional layer of protection and security intelligence by detecting unusual and potentially harmful attempts to access or exploit key vault accounts. |AuditIfNotExists, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableAdvancedThreatProtectionOnKeyVaults_Audit.json) |
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+
+### Ensure that Microsoft Defender for Endpoint (WDATP) integration with Microsoft Defender for Cloud is selected
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 2.9
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Detect network services that have not been authorized or approved](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86ecd378-a3a0-5d5b-207c-05e6aaca43fc) |CMA_C1700 - Detect network services that have not been authorized or approved |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1700.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
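+
+For this recommendation the integration is toggled through a subscription-level setting rather than a pricing tier. A hedged sketch follows, assuming the `WDATP` entry of kind `DataExportSettings` under `Microsoft.Security/settings` controls the Defender for Endpoint integration; the token and `api-version` are placeholders to verify.
+
+```python
+import requests
+
+SUB, TOKEN = "<subscription-id>", "<aad-bearer-token>"  # placeholders
+url = (f"https://management.azure.com/subscriptions/{SUB}"
+       f"/providers/Microsoft.Security/settings/WDATP?api-version=2021-06-01")
+body = {"kind": "DataExportSettings", "properties": {"enabled": True}}
+requests.put(url, json=body,
+             headers={"Authorization": f"Bearer {TOKEN}"}).raise_for_status()
+```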
+
+## 3 Storage Accounts
+
+### Ensure that 'Secure transfer required' is set to 'Enabled'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
+|[Secure transfer to storage accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F404c3081-a854-4457-ae30-26a93ef643f9) |Audits the requirement of secure transfer in your storage account. Secure transfer is an option that forces your storage account to accept requests only from secure connections (HTTPS). Use of HTTPS ensures authentication between the server and the service and protects data in transit from network layer attacks such as man-in-the-middle, eavesdropping, and session hijacking. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_AuditForHTTPSEnabled_Audit.json) |
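+
+Remediation amounts to setting one property on the storage account. A minimal sketch via the ARM REST API is shown below; the resource names, token, and `api-version` are placeholders, and in practice you would assign the built-in policy with a `Deny` effect rather than patch accounts one by one.
+
+```python
+import requests
+
+SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<storage-account>"  # placeholders
+TOKEN = "<aad-bearer-token>"  # placeholder
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Storage/storageAccounts/{ACCOUNT}?api-version=2022-09-01")
+body = {"properties": {"supportsHttpsTrafficOnly": True}}  # force HTTPS-only traffic
+requests.patch(url, json=body,
+               headers={"Authorization": f"Bearer {TOKEN}"}).raise_for_status()
+```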
+
+### Ensure Storage logging is Enabled for Blob Service for 'Read', 'Write', and 'Delete' requests
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.10
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Configure Azure Audit capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3e98638-51d4-4e28-910a-60e98c1a756f) |CMA_C1108 - Configure Azure Audit capabilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1108.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
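+
+This control maps to classic Storage Analytics logging on the blob service. A sketch with the `azure-storage-blob` (v12) SDK follows; the connection string is a placeholder and the retention period is illustrative.
+
+```python
+from azure.storage.blob import BlobAnalyticsLogging, BlobServiceClient, RetentionPolicy
+
+svc = BlobServiceClient.from_connection_string("<connection-string>")  # placeholder
+svc.set_service_properties(
+    analytics_logging=BlobAnalyticsLogging(
+        version="1.0", read=True, write=True, delete=True,        # log R/W/D requests
+        retention_policy=RetentionPolicy(enabled=True, days=90),  # illustrative
+    )
+)
+```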
+
+### Ensure Storage Logging is Enabled for Table Service for 'Read', 'Write', and 'Delete' Requests
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.11
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Configure Azure Audit capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3e98638-51d4-4e28-910a-60e98c1a756f) |CMA_C1108 - Configure Azure Audit capabilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1108.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
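+
+The table service takes the same shape of configuration. A sketch is below, assuming the `azure-data-tables` SDK's `TableAnalyticsLogging` and `TableRetentionPolicy` names; verify them against the SDK version you use.
+
+```python
+from azure.data.tables import TableAnalyticsLogging, TableRetentionPolicy, TableServiceClient
+
+svc = TableServiceClient.from_connection_string("<connection-string>")  # placeholder
+svc.set_service_properties(
+    analytics_logging=TableAnalyticsLogging(
+        version="1.0", read=True, write=True, delete=True,
+        retention_policy=TableRetentionPolicy(enabled=True, days=90),  # illustrative
+    )
+)
+```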
+
+### Ensure the "Minimum TLS version" is set to "Version 1.2"
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.12
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
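+
+Like secure transfer, the minimum TLS version is a single account property; a sketch of the request body follows (the endpoint and auth are the same as the 3.1 sketch above).
+
+```python
+# PATCH this body to the storage account resource, as in the 3.1 sketch above.
+body = {"properties": {"minimumTlsVersion": "TLS1_2"}}  # reject TLS 1.0/1.1 handshakes
+```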
+
+### Ensure That Storage Account Access Keys are Periodically Regenerated
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
+|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
+|[Define organizational requirements for cryptographic key management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd661e9eb-4e15-5ba1-6f02-cdc467db0d6c) |CMA_0123 - Define organizational requirements for cryptographic key management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0123.json) |
+|[Determine assertion requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a0ecd94-3699-5273-76a5-edb8499f655a) |CMA_0136 - Determine assertion requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0136.json) |
+|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) |
+|[Manage symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c276cf3-596f-581a-7fbd-f5e46edaa0f4) |CMA_0367 - Manage symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0367.json) |
+|[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) |
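+
+Key rotation itself is an ARM action on the account. A hedged sketch follows; rotate one key at a time so clients can fail over to the other, and treat the `api-version` as indicative.
+
+```python
+import requests
+
+SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<storage-account>"  # placeholders
+TOKEN = "<aad-bearer-token>"  # placeholder
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Storage/storageAccounts/{ACCOUNT}"
+       f"/regenerateKey?api-version=2022-09-01")
+resp = requests.post(url, json={"keyName": "key1"},  # then repeat for "key2"
+                     headers={"Authorization": f"Bearer {TOKEN}"})
+resp.raise_for_status()
+```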
+
+### Ensure Storage Logging is Enabled for Queue Service for 'Read', 'Write', and 'Delete' requests
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Configure Azure Audit capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3e98638-51d4-4e28-910a-60e98c1a756f) |CMA_C1108 - Configure Azure Audit capabilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1108.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
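+
+The queue service follows the same pattern again; a sketch with the `azure-storage-queue` SDK (names assumed from the v12 package):
+
+```python
+from azure.storage.queue import QueueAnalyticsLogging, QueueServiceClient, RetentionPolicy
+
+svc = QueueServiceClient.from_connection_string("<connection-string>")  # placeholder
+svc.set_service_properties(
+    analytics_logging=QueueAnalyticsLogging(
+        version="1.0", read=True, write=True, delete=True,
+        retention_policy=RetentionPolicy(enabled=True, days=90),  # illustrative
+    )
+)
+```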
+
+### Ensure that Shared Access Signature Tokens Expire Within an Hour
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Disable authenticators upon termination](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd9d48ffb-0d8c-0bd5-5f31-5a5826d19f10) |CMA_0169 - Disable authenticators upon termination |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0169.json) |
+|[Revoke privileged roles as appropriate](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F32f22cfa-770b-057c-965b-450898425519) |CMA_0483 - Revoke privileged roles as appropriate |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0483.json) |
+|[Terminate user session automatically](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4502e506-5f35-0df4-684f-b326e3cc7093) |CMA_C1054 - Terminate user session automatically |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1054.json) |
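+
+There is no service-side switch that caps SAS lifetimes (short of a SAS expiration policy on the account), so this control is mostly about how tokens are issued. A sketch of issuing a one-hour blob SAS with `azure-storage-blob` follows; all names and the key are placeholders.
+
+```python
+from datetime import datetime, timedelta
+from azure.storage.blob import BlobSasPermissions, generate_blob_sas
+
+sas = generate_blob_sas(
+    account_name="<account>", container_name="<container>", blob_name="<blob>",
+    account_key="<account-key>",                    # placeholder
+    permission=BlobSasPermissions(read=True),       # least privilege
+    expiry=datetime.utcnow() + timedelta(hours=1),  # CIS: expire within an hour
+)
+```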
+
+### Ensure that 'Public access level' is set to Private for blob containers
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
+|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
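+
+The strictest remediation is to disallow public blob access at the account level, which overrides any container-level 'Public access level'; a sketch of the body follows.
+
+```python
+# PATCH this body to the storage account resource, as in the 3.1 sketch above.
+body = {"properties": {"allowBlobPublicAccess": False}}  # containers behave as Private
+```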
+
+### Ensure Default Network Access Rule for Storage Accounts is Set to Deny
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Storage accounts should restrict network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34c877ad-507e-4c82-993e-3452a6e0ad3c) |Network access to storage accounts should be restricted. Configure network rules so only applications from allowed networks can access the storage account. To allow connections from specific internet or on-premises clients, access can be granted to traffic from specific Azure virtual networks or to public internet IP address ranges. |Audit, Deny, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/Storage_NetworkAcls_Audit.json) |
+|[Storage accounts should restrict network access using virtual network rules](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2a1a9cdf-e04d-429a-8416-3bfb72a1b26f) |Protect your storage accounts from potential threats using virtual network rules as a preferred method instead of IP-based filtering. Disabling IP-based filtering prevents public IPs from accessing your storage accounts. |Audit, Deny, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountOnlyVnetRulesEnabled_Audit.json) |
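+
+Setting the default network rule to deny is again one property, under `networkAcls`:
+
+```python
+# PATCH this body to the storage account resource, as in the 3.1 sketch above.
+body = {"properties": {"networkAcls": {"defaultAction": "Deny"}}}  # allow-list model
+```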
+
+### Ensure 'Trusted Microsoft Services' are Enabled for Storage Account Access
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) |
+|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) |
+|[Establish firewall and router configuration standards](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F398fdbd8-56fd-274d-35c6-fa2d3b2755a1) |CMA_0272 - Establish firewall and router configuration standards |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0272.json) |
+|[Establish network segmentation for card holder data environment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff476f3b0-4152-526e-a209-44e5f8c968d7) |CMA_0273 - Establish network segmentation for card holder data environment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0273.json) |
+|[Identify and manage downstream information exchanges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc7fddb0e-3f44-8635-2b35-dc6b8e740b7c) |CMA_0298 - Identify and manage downstream information exchanges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0298.json) |
+|[Storage accounts should allow access from trusted Microsoft services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc9d007d0-c057-4772-b18c-01e546713bcd) |Some Microsoft services that interact with storage accounts operate from networks that can't be granted access through network rules. To help this type of service work as intended, allow the set of trusted Microsoft services to bypass the network rules. These services will then use strong authentication to access the storage account. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccess_TrustedMicrosoftServices_Audit.json) |
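+
+The trusted-services exception is the `bypass` field on the same `networkAcls` object, so it pairs naturally with the default-deny rule above:
+
+```python
+# PATCH this body to the storage account resource, as in the 3.1 sketch above.
+body = {"properties": {"networkAcls": {"defaultAction": "Deny",
+                                       "bypass": "AzureServices"}}}  # let trusted services through
+```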
+
+### Ensure Storage for Critical Data are Encrypted with Customer Managed Keys
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 3.9
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
+|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
+|[Storage accounts should use customer-managed key for encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6fac406b-40ca-413b-bf8e-0bf964659c25) |Secure your blob and file storage account with greater flexibility using customer-managed keys. When you specify a customer-managed key, that key is used to protect and control access to the key that encrypts your data. Using customer-managed keys provides additional capabilities to control rotation of the key encryption key or cryptographically erase data. |Audit, Disabled |[1.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/StorageAccountCustomerManagedKeyEnabled_Audit.json) |
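+
+Switching the account to a customer-managed key is one more account-properties patch, sketched below; the key name and vault URI are placeholders, and the account is assumed to already have a managed identity with wrap/unwrap access to the vault.
+
+```python
+# PATCH this body to the storage account resource, as in the 3.1 sketch above.
+body = {"properties": {"encryption": {
+    "keySource": "Microsoft.Keyvault",
+    "keyvaultproperties": {"keyname": "<key-name>",                       # placeholder
+                           "keyvaulturi": "https://<vault>.vault.azure.net"},
+    "services": {"blob": {"enabled": True}, "file": {"enabled": True}},
+}}}
+```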
+
+## 4 Database Services
+
+### Ensure that 'Auditing' is set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.1.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Auditing on SQL server should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6fb4358-5bf4-4ad7-ba82-2cd2f41ce5e9) |Auditing on your SQL Server should be enabled to track database activities across all databases on the server and save them in an audit log. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditing_Audit.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
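+
+Server-level auditing is configured through the `auditingSettings` child resource. A minimal sketch follows; the child resource name `default`, the `api-version`, and all names are assumptions or placeholders to verify, and the retention knob is covered under recommendation 4.1.3 below.
+
+```python
+import requests
+
+SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<sql-server>"  # placeholders
+TOKEN = "<aad-bearer-token>"  # placeholder
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Sql/servers/{SERVER}/auditingSettings/default"
+       f"?api-version=2021-11-01")
+body = {"properties": {"state": "Enabled",
+                       "storageEndpoint": "https://<account>.blob.core.windows.net",  # placeholder
+                       "storageAccountAccessKey": "<storage-key>"}}                   # placeholder
+requests.put(url, json=body,
+             headers={"Authorization": f"Bearer {TOKEN}"}).raise_for_status()
+```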
+
+### Ensure that 'Data encryption' is set to 'On' on a SQL Database
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.1.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
+|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
+|[Transparent Data Encryption on SQL databases should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F17k78e20-9358-41c9-923c-fb736d382a12) |Transparent data encryption should be enabled to protect data at rest and meet compliance requirements. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlDBEncryption_Audit.json) |
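+
+Transparent Data Encryption is toggled per database through the `transparentDataEncryption/current` child resource. In older API versions the property is named `status` rather than `state`, so treat the sketch below as indicative.
+
+```python
+import requests
+
+SUB, RG = "<subscription-id>", "<resource-group>"  # placeholders
+SERVER, DB = "<sql-server>", "<database>"          # placeholders
+TOKEN = "<aad-bearer-token>"                       # placeholder
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Sql/servers/{SERVER}/databases/{DB}"
+       f"/transparentDataEncryption/current?api-version=2021-11-01")
+requests.put(url, json={"properties": {"state": "Enabled"}},
+             headers={"Authorization": f"Bearer {TOKEN}"}).raise_for_status()
+```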
+
+### Ensure that 'Auditing' Retention is 'greater than 90 days'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.1.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adhere to retention periods defined](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ecb79d7-1a06-9a3b-3be8-f434d04d1ec1) |CMA_0004 - Adhere to retention periods defined |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0004.json) |
+|[Govern and monitor audit processing activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F333b4ada-4a02-0648-3d4d-d812974f1bb2) |CMA_0289 - Govern and monitor audit processing activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0289.json) |
+|[Retain security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefef28d0-3226-966a-a1e8-70e89c1b30bc) |CMA_0454 - Retain security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0454.json) |
+|[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
+|[SQL servers with auditing to storage account destination should be configured with 90 days retention or higher](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F89099bee-89e0-4b26-a5f4-165451757743) |For incident investigation purposes, we recommend setting the data retention for your SQL Server's auditing to storage account destination to at least 90 days. Confirm that you are meeting the necessary retention rules for the regions in which you are operating. This is sometimes required for compliance with regulatory standards. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServerAuditingRetentionDays_Audit.json) |
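+
+A minimal sketch of the 90-day retention check, again via the REST API with placeholder identifiers. For storage-account destinations, a `retentionDays` of 0 means unlimited retention, which the built-in policy linked above also treats as compliant (an assumption worth verifying against the policy JSON for your audit).
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<sql-server>"
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Sql/servers/{SERVER}/auditingSettings/default"
+       "?api-version=2021-11-01")
+props = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()["properties"]
+
+days = props.get("retentionDays", 0)
+compliant = props["state"] == "Enabled" and (days == 0 or days >= 90)
+print("auditing retention compliant:", compliant)
+```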
+
+### Ensure that Advanced Threat Protection (ATP) on a SQL Server is Set to 'Enabled'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.2.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Azure Defender for SQL should be enabled for unprotected Azure SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb4388-5bf4-4ad7-ba82-2cd2f41ceae9) |Audit SQL servers without Advanced Data Security |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_AdvancedDataSecurity_Audit.json) |
+|[Azure Defender for SQL should be enabled for unprotected SQL Managed Instances](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fabfb7388-5bf4-4ad7-ba99-2cd2f41cebb9) |Audit each SQL Managed Instance without advanced data security. |AuditIfNotExists, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_AdvancedDataSecurity_Audit.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
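+
+The server-level Defender (ATP) state can be read from the server's security alert policy resource. A minimal sketch with placeholder identifiers; the `securityAlertPolicies/Default` path and `api-version` follow the documented `Microsoft.Sql` API.
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<sql-server>"
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Sql/servers/{SERVER}/securityAlertPolicies/Default"
+       "?api-version=2021-11-01")
+policy = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
+print("Defender for SQL state:", policy["properties"]["state"])  # expect 'Enabled'
+```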
+
+### Ensure that Vulnerability Assessment (VA) is enabled on a SQL server by setting a Storage Account
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.2.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+|[Vulnerability assessment should be enabled on SQL Managed Instance](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1b7aa243-30e4-4c9e-bca8-d0d3022b634a) |Audit each SQL Managed Instance which doesn't have recurring vulnerability assessment scans enabled. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnManagedInstance_Audit.json) |
+|[Vulnerability assessment should be enabled on your SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fef2a8f2a-b3d9-49cd-a8a8-9a3aaaf647d9) |Audit Azure SQL servers which do not have vulnerability assessment properly configured. Vulnerability assessment can discover, track, and help you remediate potential database vulnerabilities. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/VulnerabilityAssessmentOnServer_Audit.json) |
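+
+A server's vulnerability assessment settings live on a single `vulnerabilityAssessments/default` resource, whose `recurringScans` block also backs the next three recommendations (4.2.3 through 4.2.5). A minimal sketch with placeholder identifiers:
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<sql-server>"
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Sql/servers/{SERVER}/vulnerabilityAssessments/default"
+       "?api-version=2021-11-01")
+props = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()["properties"]
+
+scans = props.get("recurringScans", {})
+print("storage container set:", bool(props.get("storageContainerPath")))  # 4.2.2
+print("recurring scans on:", scans.get("isEnabled"))                      # 4.2.3
+print("scan report recipients:", scans.get("emails"))                     # 4.2.4
+print("notify admins/owners:", scans.get("emailSubscriptionAdmins"))      # 4.2.5
+```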
+
+### Ensure that VA setting 'Periodic recurring scans' is set to 'on' for each SQL server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.2.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+
+### Ensure that VA setting 'Send scan reports to' is configured for a SQL server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.2.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Correlate Vulnerability scan information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3905a3c-97e7-0b4f-15fb-465c0927536f) |CMA_C1558 - Correlate Vulnerability scan information |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1558.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+|[Vulnerability Assessment settings for SQL server should contain an email address to receive scan reports](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057d6cfe-9c4f-4a6d-bc60-14420ea1f1a9) |Ensure that an email address is provided for the 'Send scan reports to' field in the Vulnerability Assessment settings. This email address receives scan result summary after a periodic scan runs on SQL servers. |AuditIfNotExists, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_VulnerabilityAssessmentEmails_Audit.json) |
+
+### Ensure that Vulnerability Assessment Setting 'Also send email notifications to admins and subscription owners' is Set for Each SQL Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.2.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Correlate Vulnerability scan information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe3905a3c-97e7-0b4f-15fb-465c0927536f) |CMA_C1558 - Correlate Vulnerability scan information |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1558.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+
+### Ensure 'Enforce SSL connection' is set to 'ENABLED' for PostgreSQL Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Enforce SSL connection should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd158790f-bfb0-486c-8631-2dc6b4e8e6af) |Azure Database for PostgreSQL supports connecting your Azure Database for PostgreSQL server to client applications using Secure Sockets Layer (SSL). Enforcing SSL connections between your database server and your client applications helps protect against 'man in the middle' attacks by encrypting the data stream between the server and your application. This configuration enforces that SSL is always enabled for accessing your database server. |Audit, Disabled |[1.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableSSL_Audit.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
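+
+The SSL enforcement flag is a top-level property of the PostgreSQL server resource itself. A minimal sketch with placeholder identifiers; the single-server `Microsoft.DBforPostgreSQL` API uses `api-version=2017-12-01`.
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<pg-server>"
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.DBforPostgreSQL/servers/{SERVER}"
+       "?api-version=2017-12-01")
+server = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
+print("sslEnforcement:", server["properties"]["sslEnforcement"])  # expect 'Enabled'
+```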
+
+### Ensure Server Parameter 'log_checkpoints' is set to 'ON' for PostgreSQL Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Log checkpoints should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e43d) |This policy helps audit any PostgreSQL databases in your environment without log_checkpoints setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogCheckpoint_Audit.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
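+
+Server parameters such as `log_checkpoints` are exposed as child `configurations` resources, so one pattern covers this recommendation and the four that follow (4.3.3 through 4.3.6). A minimal sketch with placeholder identifiers:
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<pg-server>"
+PARAM = "log_checkpoints"  # also: log_connections, log_disconnections,
+                           # connection_throttling, log_retention_days
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.DBforPostgreSQL/servers/{SERVER}"
+       f"/configurations/{PARAM}?api-version=2017-12-01")
+cfg = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
+print(PARAM, "=", cfg["properties"]["value"])  # expect 'on' (> 3 for log_retention_days)
+```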
+
+### Ensure server parameter 'log_connections' is set to 'ON' for PostgreSQL Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Log connections should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e442) |This policy helps audit any PostgreSQL databases in your environment without log_connections setting enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogConnections_Audit.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
+
+### Ensure server parameter 'log_disconnections' is set to 'ON' for PostgreSQL Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Disconnections should be logged for PostgreSQL database servers.](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb6f77b9-bd53-4e35-a23d-7f65d5f0e446) |This policy helps audit any PostgreSQL databases in your environment without log_disconnections enabled. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_EnableLogDisconnections_Audit.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
+
+### Ensure server parameter 'connection_throttling' is set to 'ON' for PostgreSQL Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Connection throttling should be enabled for PostgreSQL database servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5345bb39-67dc-4960-a1bf-427e16b9a0bd) |This policy helps audit any PostgreSQL databases in your environment without Connection throttling enabled. This setting enables temporary connection throttling per IP for too many invalid password login failures. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/PostgreSQL_ConnectionThrottling_Enabled_Audit.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
+
+### Ensure server parameter 'log_retention_days' is greater than 3 days for PostgreSQL Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adhere to retention periods defined](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ecb79d7-1a06-9a3b-3be8-f434d04d1ec1) |CMA_0004 - Adhere to retention periods defined |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0004.json) |
+|[Govern and monitor audit processing activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F333b4ada-4a02-0648-3d4d-d812974f1bb2) |CMA_0289 - Govern and monitor audit processing activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0289.json) |
+|[Retain security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefef28d0-3226-966a-a1e8-70e89c1b30bc) |CMA_0454 - Retain security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0454.json) |
+|[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
+
+### Ensure 'Allow access to Azure services' for PostgreSQL Database Server is disabled
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) |
+|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) |
+|[Establish firewall and router configuration standards](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F398fdbd8-56fd-274d-35c6-fa2d3b2755a1) |CMA_0272 - Establish firewall and router configuration standards |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0272.json) |
+|[Establish network segmentation for card holder data environment](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff476f3b0-4152-526e-a209-44e5f8c968d7) |CMA_0273 - Establish network segmentation for card holder data environment |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0273.json) |
+|[Identify and manage downstream information exchanges](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc7fddb0e-3f44-8635-2b35-dc6b8e740b7c) |CMA_0298 - Identify and manage downstream information exchanges |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0298.json) |
+
+### Ensure 'Infrastructure double encryption' for PostgreSQL Database Server is 'Enabled'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.3.8
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
+|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
+
+### Ensure 'Enforce SSL connection' is set to 'Enabled' for Standard MySQL Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.4.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
+
+### Ensure 'TLS Version' is set to 'TLSV1.2' for MySQL flexible Database Server
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.4.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
+
+### Ensure that Azure Active Directory Admin is configured
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[An Azure Active Directory administrator should be provisioned for SQL servers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1f314764-cb73-4fc9-b863-8eca98ac36e9) |Audit provisioning of an Azure Active Directory administrator for your SQL server to enable Azure AD authentication. Azure AD authentication enables simplified permission management and centralized identity management of database users and other Microsoft services. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SQL_DB_AuditServerADAdmins_Audit.json) |
+|[Automate account management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2cc9c165-46bd-9762-5739-d2aae5ba90a1) |CMA_0026 - Automate account management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0026.json) |
+|[Manage system and admin accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34d38ea7-6754-1838-7031-d7fd07099821) |CMA_0368 - Manage system and admin accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0368.json) |
+|[Monitor access across the organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48c816c5-2190-61fc-8806-25d6f3df162f) |CMA_0376 - Monitor access across the organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0376.json) |
+|[Notify when account is not needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8489ff90-8d29-61df-2d84-f9ab0f4c5e84) |CMA_0383 - Notify when account is not needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0383.json) |
+
+### Ensure SQL server's TDE protector is encrypted with Customer-managed key
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 4.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
+|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
+|[SQL managed instances should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac01ad65-10e5-46df-bdd9-6b0cad13e1d2) |Implementing Transparent Data Encryption (TDE) with your own key provides you with increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlManagedInstance_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
+|[SQL servers should use customer-managed keys to encrypt data at rest](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a370ff3-6cab-4e85-8995-295fd854c5b8) |Implementing Transparent Data Encryption (TDE) with your own key provides increased transparency and control over the TDE Protector, increased security with an HSM-backed external service, and promotion of separation of duties. This recommendation applies to organizations with a related compliance requirement. |Audit, Deny, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/SQL/SqlServer_EnsureServerTDEisEncryptedWithYourOwnKey_Deny.json) |
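+
+Whether the TDE protector is customer-managed shows up as the `serverKeyType` of the server's `encryptionProtector/current` resource. A minimal sketch with placeholder identifiers:
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, SERVER = "<subscription-id>", "<resource-group>", "<sql-server>"
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Sql/servers/{SERVER}/encryptionProtector/current"
+       "?api-version=2021-11-01")
+protector = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
+# 'AzureKeyVault' indicates a customer-managed key; 'ServiceManaged' does not.
+print("TDE protector key type:", protector["properties"]["serverKeyType"])
+```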
+
+## 5 Logging and Monitoring
+
+### Ensure that a 'Diagnostics Setting' exists
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.1.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+
+### Ensure Diagnostic Setting captures appropriate categories
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.1.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Configure Azure Audit capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3e98638-51d4-4e28-910a-60e98c1a756f) |CMA_C1108 - Configure Azure Audit capabilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1108.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
+
+### Ensure the storage container storing the activity logs is not publicly accessible
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.1.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[\[Preview\]: Storage account public access should be disallowed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4fa4b6c0-31ca-4c0d-b10d-24b96f62a751) |Anonymous public read access to containers and blobs in Azure Storage is a convenient way to share data but might present security risks. To prevent data breaches caused by undesired anonymous access, Microsoft recommends preventing public access to a storage account unless your scenario requires it. |audit, Audit, deny, Deny, disabled, Disabled |[3.1.0-preview](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Storage/ASC_Storage_DisallowPublicBlobAccess_Audit.json) |
+|[Enable dual or joint authorization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c843d78-8f64-92b5-6a9b-e8186c0e7eb6) |CMA_0226 - Enable dual or joint authorization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0226.json) |
+|[Protect audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e696f5a-451f-5c15-5532-044136538491) |CMA_0401 - Protect audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0401.json) |
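+
+For the storage account holding the activity logs, the relevant switch is `allowBlobPublicAccess`. A minimal sketch with placeholder identifiers:
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<storage-account>"
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Storage/storageAccounts/{ACCOUNT}"
+       "?api-version=2021-09-01")
+account = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
+# False blocks anonymous access; the property may be absent on older accounts.
+print("allowBlobPublicAccess:", account["properties"].get("allowBlobPublicAccess"))
+```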
+
+### Ensure the storage account containing the container with activity logs is encrypted with BYOK (Bring Your Own Key)
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.1.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Enable dual or joint authorization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c843d78-8f64-92b5-6a9b-e8186c0e7eb6) |CMA_0226 - Enable dual or joint authorization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0226.json) |
+|[Maintain integrity of audit system](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc0559109-6a27-a217-6821-5a6d44f92897) |CMA_C1133 - Maintain integrity of audit system |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1133.json) |
+|[Protect audit information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0e696f5a-451f-5c15-5532-044136538491) |CMA_0401 - Protect audit information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0401.json) |
+|[Storage account containing the container with activity logs must be encrypted with BYOK](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffbb99e8e-e444-4da0-9ff1-75c92f5a85b2) |This policy audits if the Storage account containing the container with activity logs is encrypted with BYOK. The policy works only if the storage account lies on the same subscription as activity logs by design. More information on Azure Storage encryption at rest can be found here [https://aka.ms/azurestoragebyok](https://aka.ms/azurestoragebyok). |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_StorageAccountBYOK_Audit.json) |
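+
+The same storage-account GET also reveals whether encryption uses a customer-managed key: `encryption.keySource` is `Microsoft.Keyvault` for BYOK and `Microsoft.Storage` for service-managed keys. A minimal sketch with placeholder identifiers:
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, ACCOUNT = "<subscription-id>", "<resource-group>", "<log-storage-account>"
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/resourceGroups/{RG}"
+       f"/providers/Microsoft.Storage/storageAccounts/{ACCOUNT}"
+       "?api-version=2021-09-01")
+account = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()
+print("keySource:", account["properties"]["encryption"]["keySource"])  # expect 'Microsoft.Keyvault'
+```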
+
+### Ensure that logging for Azure KeyVault is 'Enabled'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.1.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
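+
+Key Vault logging amounts to a diagnostic setting with the `AuditEvent` log category enabled. A minimal sketch that lists a vault's diagnostic settings and checks for it; identifiers are placeholders and the `api-version` reflects the Monitor diagnostic settings API.
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB, RG, VAULT = "<subscription-id>", "<resource-group>", "<key-vault>"
+
+vault_id = (f"/subscriptions/{SUB}/resourceGroups/{RG}"
+            f"/providers/Microsoft.KeyVault/vaults/{VAULT}")
+url = (f"https://management.azure.com{vault_id}/providers/Microsoft.Insights"
+       "/diagnosticSettings?api-version=2021-05-01-preview")
+settings = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()["value"]
+
+audit_on = any(log.get("enabled") and log.get("category") == "AuditEvent"
+               for s in settings for log in s["properties"].get("logs", []))
+print("AuditEvent logging enabled:", audit_on)
+```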
+
+### Ensure that Activity Log Alert exists for Create Policy Assignment
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Policy operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
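+
+Whether an activity log alert exists for a given operation can be verified by listing `Microsoft.Insights/activityLogAlerts` and scanning each alert's condition. The sketch below uses placeholder identifiers and checks for the policy-assignment write operation; swapping `OPERATION` covers the delete and network security group recommendations that follow.
+
+```python
+import requests
+
+TOKEN = "<bearer-token>"  # assumed placeholder
+SUB = "<subscription-id>"
+OPERATION = "Microsoft.Authorization/policyAssignments/write"
+# For the following sections use .../policyAssignments/delete and
+# Microsoft.Network/networkSecurityGroups/write or /delete.
+
+url = (f"https://management.azure.com/subscriptions/{SUB}/providers"
+       "/Microsoft.Insights/activityLogAlerts?api-version=2020-10-01")
+alerts = requests.get(url, headers={"Authorization": f"Bearer {TOKEN}"}).json()["value"]
+
+covered = any(cond.get("equals") == OPERATION
+              for alert in alerts
+              for cond in alert["properties"]["condition"].get("allOf", []))
+print("alert exists for", OPERATION, ":", covered)
+```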
+
+### Ensure that Activity Log Alert exists for Delete Policy Assignment
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Policy operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc5447c04-a4d7-4ba8-a263-c9ee321a6858) |This policy audits specific Policy operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_PolicyOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Activity Log Alert exists for Create or Update Network Security Group
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Administrative operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Activity Log Alert exists for Delete Network Security Group
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Administrative operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Activity Log Alert exists for Create or Update Network Security Group Rule
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Administrative operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Activity Log Alert exists for Delete Network Security Group Rule
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Administrative operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Activity Log Alert exists for Create or Update Security Solution
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b980d31-7904-4bb7-8575-5665739a8052) |This policy audits specific Security operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_SecurityOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Activity Log Alert exists for Delete Security Solution
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.8
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3b980d31-7904-4bb7-8575-5665739a8052) |This policy audits specific Security operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_SecurityOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Activity Log Alert exists for Create or Update or Delete SQL Server Firewall Rule
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.2.9
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Alert personnel of information spillage](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9622aaa9-5c49-40e2-5bf8-660b7cd23deb) |CMA_0007 - Alert personnel of information spillage |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0007.json) |
+|[An activity log alert should exist for specific Administrative operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb954148f-4c11-4c38-8221-be76711e194a) |This policy audits specific Administrative operations with no activity log alerts configured. |AuditIfNotExists, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Monitoring/ActivityLog_AdministrativeOperations_Audit.json) |
+|[Develop an incident response plan](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b4e134f-1e4c-2bff-573e-082d85479b6e) |CMA_0145 - Develop an incident response plan |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0145.json) |
+|[Set automated notifications for new and trending cloud applications in your organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf38215f-70c4-0cd6-40c2-c52d86690a45) |CMA_0495 - Set automated notifications for new and trending cloud applications in your organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0495.json) |
+
+### Ensure that Diagnostic Logs Are Enabled for All Services That Support It
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 5.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adhere to retention periods defined](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ecb79d7-1a06-9a3b-3be8-f434d04d1ec1) |CMA_0004 - Adhere to retention periods defined |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0004.json) |
+|[App Service apps should have resource logs enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F91a78b24-f231-4a8a-8da9-02c35b2b6510) |Audit enabling of resource logs on the app. This enables you to recreate activity trails for investigation purposes if a security incident occurs or your network is compromised. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_ResourceLoggingMonitoring_Audit.json) |
+|[Audit privileged functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff26af0b1-65b6-689a-a03f-352ad2d00f98) |CMA_0019 - Audit privileged functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0019.json) |
+|[Audit user account status](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49c23d9b-02b0-0e42-4f94-e8cef1b8381b) |CMA_0020 - Audit user account status |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0020.json) |
+|[Configure Azure Audit capabilities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa3e98638-51d4-4e28-910a-60e98c1a756f) |CMA_C1108 - Configure Azure Audit capabilities |Manual, Disabled |[1.1.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1108.json) |
+|[Determine auditable events](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2f67e567-03db-9d1f-67dc-b6ffb91312f4) |CMA_0137 - Determine auditable events |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0137.json) |
+|[Govern and monitor audit processing activities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F333b4ada-4a02-0648-3d4d-d812974f1bb2) |CMA_0289 - Govern and monitor audit processing activities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0289.json) |
+|[Resource logs in Azure Data Lake Store should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F057ef27e-665e-4328-8ea3-04b3122bd9fb) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeStore_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Azure Stream Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9be5368-9bf5-4b84-9e0a-7850da98bb46) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Stream%20Analytics/StreamAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Batch accounts should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F428256e6-1fac-4f48-a757-df34c2b3336d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Batch/Batch_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Data Lake Analytics should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc95c74d9-38fe-4f0d-af86-0c7d626a315c) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Data%20Lake/DataLakeAnalytics_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Event Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F83a214f7-d01a-484b-91a9-ed54470c9a6a) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Event%20Hub/EventHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in IoT Hub should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F383856f8-de7f-44a2-81fc-e5135b5c2aa4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[3.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Internet%20of%20Things/IoTHub_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Key Vault should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Logic Apps should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34f95f76-5386-4de7-b824-0d8478470c9d) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Logic%20Apps/LogicApps_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Search services should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb4330a05-a843-4bc8-bf9a-cacce50c67f4) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Search/Search_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Service Bus should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff8d36e2f-389b-4ee4-898d-21aeb69a0f45) |Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised. |AuditIfNotExists, Disabled |[5.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Service%20Bus/ServiceBus_AuditDiagnosticLog_Audit.json) |
+|[Resource logs in Virtual Machine Scale Sets should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c1b1214-f927-48bf-8882-84f0af6588b1) |It is recommended to enable logs so that the activity trail can be recreated when investigations are required in the event of an incident or a compromise. |AuditIfNotExists, Disabled |[2.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/ServiceFabric_and_VMSS_AuditVMSSDiagnostics.json) |
+|[Retain security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefef28d0-3226-966a-a1e8-70e89c1b30bc) |CMA_0454 - Retain security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0454.json) |
+|[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
+|[Review audit data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6625638f-3ba1-7404-5983-0ea33d719d34) |CMA_0466 - Review audit data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0466.json) |
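+
+To remediate for an individual resource, you can create a diagnostic setting that routes resource logs to a Log Analytics workspace. A minimal Azure CLI sketch for a key vault follows (resource IDs are placeholders; log categories vary by service, so list them first with `az monitor diagnostic-settings categories list`):
+
+```azurecli
+# Send Key Vault audit logs to a Log Analytics workspace (IDs are placeholders).
+az monitor diagnostic-settings create \
+  --name send-to-workspace \
+  --resource "/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.KeyVault/vaults/mykeyvault" \
+  --workspace "/subscriptions/<sub-id>/resourceGroups/MyRG/providers/Microsoft.OperationalInsights/workspaces/myworkspace" \
+  --logs '[{"category":"AuditEvent","enabled":true}]'
+```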
+
+## 6 Networking
+
+### Ensure no SQL Databases allow ingress 0.0.0.0/0 (ANY IP)
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 6.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Control information flow](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59bedbdc-0ba9-39b9-66bb-1d1c192384e6) |CMA_0079 - Control information flow |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0079.json) |
+|[Employ flow control mechanisms of encrypted information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F79365f13-8ba4-1f6c-2ac4-aa39929f56d0) |CMA_0211 - Employ flow control mechanisms of encrypted information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0211.json) |
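+
+To check for this condition directly, you can list each SQL server's firewall rules and look for rules that start at 0.0.0.0 (server and rule names below are placeholders):
+
+```azurecli
+# Flag rules that open ingress from 0.0.0.0.
+az sql server firewall-rule list \
+  --resource-group MyRG \
+  --server mysqlserver \
+  --query "[?startIpAddress=='0.0.0.0'].{name:name, start:startIpAddress, end:endIpAddress}" \
+  --output table
+
+# Remove an offending rule.
+az sql server firewall-rule delete --resource-group MyRG --server mysqlserver --name AllowAll
+```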
+
+### Ensure that Network Security Group Flow Log retention period is 'greater than 90 days'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 6.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Adhere to retention periods defined](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1ecb79d7-1a06-9a3b-3be8-f434d04d1ec1) |CMA_0004 - Adhere to retention periods defined |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0004.json) |
+|[Retain security policies and procedures](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fefef28d0-3226-966a-a1e8-70e89c1b30bc) |CMA_0454 - Retain security policies and procedures |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0454.json) |
+|[Retain terminated user data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7c7032fe-9ce6-9092-5890-87a1a3755db1) |CMA_0455 - Retain terminated user data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0455.json) |
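+
+One way to satisfy this recommendation is to enable NSG flow logs with a retention window above 90 days, for example with Azure CLI (all names are placeholders; the storage account must be in the same region as the NSG):
+
+```azurecli
+# Enable flow logs on an NSG and retain them for 91 days.
+az network watcher flow-log create \
+  --location eastus \
+  --name MyFlowLog \
+  --resource-group MyRG \
+  --nsg MyNSG \
+  --storage-account mystorageaccount \
+  --retention 91 \
+  --enabled true
+```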
+
+### Ensure that Network Watcher is 'Enabled'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 6.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Network Watcher should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb6e2945c-0b7b-40f5-9233-7a5323b5cdc6) |Network Watcher is a regional service that enables you to monitor and diagnose conditions at a network scenario level in, to, and from Azure. Scenario-level monitoring enables you to diagnose problems with an end-to-end view of the network. A network watcher resource group must be created in every region where a virtual network is present. An alert is raised if a network watcher resource group is not available in a particular region. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Network/NetworkWatcher_Enabled_Audit.json) |
+|[Verify security functions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fece8bb17-4080-5127-915f-dc7267ee8549) |CMA_C1708 - Verify security functions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1708.json) |
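+
+If the audit flags a region, Network Watcher can be enabled there with Azure CLI; the resource group and region list below are illustrative:
+
+```azurecli
+# Enable Network Watcher in every region where virtual networks are deployed.
+az network watcher configure \
+  --resource-group NetworkWatcherRG \
+  --locations eastus westus2 \
+  --enabled true
+```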
+
+## 7 Virtual Machines
+
+### Ensure Virtual Machines are utilizing Managed Disks
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Audit VMs that do not use managed disks](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F06a78e20-9358-41c9-923c-fb736d382a4d) |This policy audits VMs that do not use managed disks. |audit |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VMRequireManagedDisk_Audit.json) |
+|[Control physical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55a7f9a0-6397-7589-05ef-5ed59a8149e7) |CMA_0081 - Control physical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0081.json) |
+|[Manage the input, output, processing, and storage of data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe603da3a-8af7-4f8a-94cb-1bcc0e0333d2) |CMA_0369 - Manage the input, output, processing, and storage of data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0369.json) |
+|[Review label activity and analytics](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe23444b9-9662-40f3-289e-6d25c02b48fa) |CMA_0474 - Review label activity and analytics |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0474.json) |
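+
+A quick way to find and remediate non-compliant VMs with Azure CLI (names are placeholders; conversion requires downtime because the VM must be deallocated first):
+
+```azurecli
+# List VMs whose OS disk is an unmanaged VHD (managedDisk is null).
+az vm list \
+  --query '[?storageProfile.osDisk.managedDisk==`null`].{name:name, resourceGroup:resourceGroup}' \
+  --output table
+
+# Convert one of them to managed disks.
+az vm deallocate --resource-group MyRG --name MyVM
+az vm convert --resource-group MyRG --name MyVM
+az vm start --resource-group MyRG --name MyVM
+```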
+
+### Ensure that 'OS and Data' disks are encrypted with Customer Managed Key (CMK)
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
+|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
+|[Virtual machines should encrypt temp disks, caches, and data flows between Compute and Storage resources](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0961003e-5a0a-4549-abde-af6a37f2724d) |By default, a virtual machine's OS and data disks are encrypted-at-rest using platform-managed keys. Temp disks, data caches, and data flowing between compute and storage aren't encrypted. Disregard this recommendation if you are using encryption at host, or if server-side encryption on Managed Disks meets your security requirements. Learn more in Server-side encryption of Azure Disk Storage ([https://aka.ms/disksse](https://aka.ms/disksse)) and Different disk encryption offerings ([https://aka.ms/diskencryptioncomparison](https://aka.ms/diskencryptioncomparison)). |AuditIfNotExists, Disabled |[2.0.3](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_UnencryptedVMDisks_Audit.json) |
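+
+Remediation typically goes through a disk encryption set that references a key in your key vault. A sketch with Azure CLI follows (vault, key, and disk names are placeholders; the vault needs purge protection, and the encryption set's managed identity must be granted access to the key before disks can use it):
+
+```azurecli
+# Create a disk encryption set backed by a customer-managed key.
+az disk-encryption-set create \
+  --name MyDES \
+  --resource-group MyRG \
+  --source-vault MyKeyVault \
+  --key-url "https://mykeyvault.vault.azure.net/keys/mykey/<key-version>"
+
+# Point an existing disk at the encryption set.
+az disk update \
+  --name MyOSDisk \
+  --resource-group MyRG \
+  --disk-encryption-set MyDES \
+  --encryption-type EncryptionAtRestWithCustomerKey
+```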
+
+### Ensure that 'Unattached disks' are encrypted with CMK
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
+|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
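+
+Unattached disks are those whose `managedBy` property is null; you can list them together with their current encryption type to spot disks that still use platform-managed keys (the query below is a sketch):
+
+```azurecli
+# Show unattached disks and how they are encrypted today.
+az disk list \
+  --query '[?managedBy==`null`].{name:name, resourceGroup:resourceGroup, encryptionType:encryption.type}' \
+  --output table
+```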
+
+### Ensure that Only Approved Extensions Are Installed
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Only approved VM extensions should be installed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc0e996f8-39cf-4af9-9f45-83fbde810432) |This policy governs the virtual machine extensions that are not approved. |Audit, Deny, Disabled |[1.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Compute/VirtualMachines_ApprovedExtensions_Audit.json) |
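+
+This policy takes the list of approved extensions as an assignment parameter. A hedged Azure CLI sketch follows; the `approvedExtensions` parameter name and the extension list are assumptions, so check the definition's parameters before assigning:
+
+```azurecli
+# Assign the built-in policy at subscription scope (IDs and values are placeholders).
+az policy assignment create \
+  --name only-approved-vm-extensions \
+  --policy c0e996f8-39cf-4af9-9f45-83fbde810432 \
+  --scope "/subscriptions/<sub-id>" \
+  --params '{"approvedExtensions": {"value": ["AzureMonitorLinuxAgent", "CustomScript"]}}'
+```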
+
+### Ensure that the latest OS Patches for all Virtual Machines are applied
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+|[System updates should be installed on your machines](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86b3d65f-7626-441e-b690-81a8b71cff60) |Missing security system updates on your servers will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingSystemUpdates_Audit.json) |
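+
+For an on-demand check and remediation of a single VM, Azure CLI exposes patch assessment and installation operations (VM names are placeholders; support depends on the OS image, so treat this as a sketch):
+
+```azurecli
+# Assess which patches are missing, then install critical and security updates.
+az vm assess-patches --resource-group MyRG --name MyVM
+
+az vm install-patches \
+  --resource-group MyRG \
+  --name MyVM \
+  --maximum-duration PT2H \
+  --reboot-setting IfRequired \
+  --classifications-to-include-win Critical Security
+```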
+
+### Ensure that the endpoint protection for all Virtual Machines is installed
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Block untrusted and unsigned processes that run from USB](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3d399cf3-8fc6-0efc-6ab0-1412f1198517) |CMA_0050 - Block untrusted and unsigned processes that run from USB |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0050.json) |
+|[Document security operations](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2c6bee3a-2180-2430-440d-db3c7a849870) |CMA_0202 - Document security operations |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0202.json) |
+|[Manage gateways](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F63f63e71-6c3f-9add-4c43-64de23e554a7) |CMA_0363 - Manage gateways |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0363.json) |
+|[Monitor missing Endpoint Protection in Azure Security Center](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faf6cd1bd-1635-48cb-bde7-5b15693900b9) |Servers without an installed Endpoint Protection agent will be monitored by Azure Security Center as recommendations. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_MissingEndpointProtection_Audit.json) |
+|[Perform a trend analysis on threats](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e81644-923d-33fc-6ebb-9733bc8d1a06) |CMA_0389 - Perform a trend analysis on threats |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0389.json) |
+|[Perform vulnerability scans](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c5e0e1a-216f-8f49-0a15-76ed0d8b8e1f) |CMA_0393 - Perform vulnerability scans |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0393.json) |
+|[Review malware detections report weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4a6f5cbd-6c6b-006f-2bb1-091af1441bce) |CMA_0475 - Review malware detections report weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0475.json) |
+|[Review threat protection status weekly](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ffad161f5-5261-401a-22dd-e037bae011bd) |CMA_0479 - Review threat protection status weekly |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0479.json) |
+|[Turn on sensors for endpoint security solution](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5fc24b95-53f7-0ed1-2330-701b539b97fe) |CMA_0514 - Turn on sensors for endpoint security solution |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0514.json) |
+|[Update antivirus definitions](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fea9d7c95-2f10-8a4d-61d8-7469bd2e8d65) |CMA_0517 - Update antivirus definitions |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0517.json) |
+|[Verify software, firmware and information integrity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fdb28735f-518f-870e-15b4-49623cbe3aa0) |CMA_0542 - Verify software, firmware and information integrity |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0542.json) |
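+
+On Windows VMs, one way to remediate is to install the Microsoft Antimalware extension (names are placeholders, and the settings JSON is a minimal assumption; Linux VMs typically need a different endpoint protection solution):
+
+```azurecli
+# Install the Microsoft Antimalware extension with real-time protection enabled.
+az vm extension set \
+  --resource-group MyRG \
+  --vm-name MyWindowsVM \
+  --name IaaSAntimalware \
+  --publisher Microsoft.Azure.Security \
+  --settings '{"AntimalwareEnabled": true, "RealtimeProtectionEnabled": "true"}'
+```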
+
+### Ensure that VHDs are Encrypted
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 7.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Establish a data leakage management procedure](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3c9aa856-6b86-35dc-83f4-bc72cec74dea) |CMA_0255 - Establish a data leakage management procedure |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0255.json) |
+|[Implement controls to secure all media](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe435f7e3-0dd9-58c9-451f-9b44b96c0232) |CMA_0314 - Implement controls to secure all media |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0314.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect special information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa315c657-4a00-8eba-15ac-44692ad24423) |CMA_0409 - Protect special information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0409.json) |
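+
+Azure Disk Encryption encrypts the VHDs of a running VM with keys held in your key vault. A minimal sketch follows (names are placeholders; the vault must have `--enabled-for-disk-encryption` set):
+
+```azurecli
+# Encrypt both OS and data volumes, then verify the status.
+az vm encryption enable \
+  --resource-group MyRG \
+  --name MyVM \
+  --disk-encryption-keyvault MyKeyVault \
+  --volume-type All
+
+az vm encryption show --resource-group MyRG --name MyVM
+```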
+
+## 8 Other Security Considerations
+
+### Ensure that the Expiration Date is set for all Keys in RBAC Key Vaults
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 8.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
+|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
+|[Define organizational requirements for cryptographic key management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd661e9eb-4e15-5ba1-6f02-cdc467db0d6c) |CMA_0123 - Define organizational requirements for cryptographic key management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0123.json) |
+|[Determine assertion requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a0ecd94-3699-5273-76a5-edb8499f655a) |CMA_0136 - Determine assertion requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0136.json) |
+|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) |
+|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) |
+|[Manage symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c276cf3-596f-581a-7fbd-f5e46edaa0f4) |CMA_0367 - Manage symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0367.json) |
+|[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) |
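+
+To remediate an existing key, set an expiration date on it; the same command works for both RBAC and non-RBAC vaults, so it also covers recommendation 8.2 (vault, key, and date below are placeholders):
+
+```azurecli
+# Give an existing key an expiration date; new keys accept --expires at creation too.
+az keyvault key set-attributes \
+  --vault-name MyKeyVault \
+  --name MyKey \
+  --expires "2026-12-31T23:59:59Z"
+```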
+
+### Ensure that the Expiration Date is set for all Keys in Non-RBAC Key Vaults
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 8.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
+|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
+|[Define organizational requirements for cryptographic key management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd661e9eb-4e15-5ba1-6f02-cdc467db0d6c) |CMA_0123 - Define organizational requirements for cryptographic key management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0123.json) |
+|[Determine assertion requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a0ecd94-3699-5273-76a5-edb8499f655a) |CMA_0136 - Determine assertion requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0136.json) |
+|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) |
+|[Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) |Cryptographic keys should have a defined expiration date and not be permanent. Keys that are valid forever provide a potential attacker with more time to compromise the key. It is a recommended security practice to set expiration dates on cryptographic keys. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Keys_ExpirationSet.json) |
+|[Manage symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c276cf3-596f-581a-7fbd-f5e46edaa0f4) |CMA_0367 - Manage symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0367.json) |
+|[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) |
+
+### Ensure that the Expiration Date is set for all Secrets in RBAC Key Vaults
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 8.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|||||
+|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
+|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
+|[Define organizational requirements for cryptographic key management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd661e9eb-4e15-5ba1-6f02-cdc467db0d6c) |CMA_0123 - Define organizational requirements for cryptographic key management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0123.json) |
+|[Determine assertion requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a0ecd94-3699-5273-76a5-edb8499f655a) |CMA_0136 - Determine assertion requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0136.json) |
+|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) |
+|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) |
+|[Manage symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c276cf3-596f-581a-7fbd-f5e46edaa0f4) |CMA_0367 - Manage symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0367.json) |
+|[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) |
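+
+Secrets are remediated the same way as keys; the command below applies to RBAC and non-RBAC vaults alike, so it also covers recommendation 8.4 (names and date are placeholders):
+
+```azurecli
+# Give an existing secret an expiration date.
+az keyvault secret set-attributes \
+  --vault-name MyKeyVault \
+  --name MySecret \
+  --expires "2026-12-31T23:59:59Z"
+```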
+
+### Ensure that the Expiration Date is set for all Secrets in Non-RBAC Key Vaults
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 8.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
+|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
+|[Define organizational requirements for cryptographic key management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd661e9eb-4e15-5ba1-6f02-cdc467db0d6c) |CMA_0123 - Define organizational requirements for cryptographic key management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0123.json) |
+|[Determine assertion requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a0ecd94-3699-5273-76a5-edb8499f655a) |CMA_0136 - Determine assertion requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0136.json) |
+|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) |
+|[Key Vault secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F98728c90-32c7-4049-8429-847dc0f4fe37) |Secrets should have a defined expiration date and not be permanent. Secrets that are valid forever provide a potential attacker with more time to compromise them. It is a recommended security practice to set expiration dates on secrets. |Audit, Deny, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/Secrets_ExpirationSet.json) |
+|[Manage symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c276cf3-596f-581a-7fbd-f5e46edaa0f4) |CMA_0367 - Manage symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0367.json) |
+|[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) |
+
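+To remediate findings from the secret expiration policy above, set `ExpiresOn` when you create or update a secret. The following is a minimal sketch, assuming the Azure.Identity and Azure.Security.KeyVault.Secrets NuGet packages; the vault URI and secret name are placeholders.
+
+```csharp
+using System;
+using Azure.Identity;
+using Azure.Security.KeyVault.Secrets;
+
+// Placeholder vault URI; replace with your own key vault.
+var client = new SecretClient(
+    new Uri("https://contoso-vault.vault.azure.net/"),
+    new DefaultAzureCredential());
+
+// Setting ExpiresOn keeps the secret compliant with the
+// "Key Vault secrets should have an expiration date" policy.
+var secret = new KeyVaultSecret("example-secret", "example-value");
+secret.Properties.ExpiresOn = DateTimeOffset.UtcNow.AddDays(90);
+
+await client.SetSecretAsync(secret);
+```
+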
+### Ensure that Resource Locks are set for Mission Critical Azure Resources
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 8.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Establish and document change control processes](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd4dc286-2f30-5b95-777c-681f3a7913d3) |CMA_0265 - Establish and document change control processes |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0265.json) |
+
+### Ensure the key vault is recoverable
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 8.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Key vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) |Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period. |Audit, Deny, Disabled |[2.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Key%20Vault/KeyVault_Recoverable_Audit.json) |
+|[Maintain availability of information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ad7f0bc-3d03-0585-4d24-529779bb02c2) |CMA_C1644 - Maintain availability of information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1644.json) |
+
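+Outside of Azure Policy, you can spot-check a single vault's recoverability settings with the untyped `GenericResource` client. This is a sketch, assuming the Azure.Identity and Azure.ResourceManager NuGet packages; the subscription, resource group, and vault names are placeholders.
+
+```csharp
+using System;
+using System.Text.Json;
+using Azure.Core;
+using Azure.Identity;
+using Azure.ResourceManager;
+using Azure.ResourceManager.Resources;
+
+var client = new ArmClient(new DefaultAzureCredential());
+
+// Placeholder resource ID; replace with your own key vault.
+var vaultId = new ResourceIdentifier(
+    "/subscriptions/00000000-0000-0000-0000-000000000000" +
+    "/resourceGroups/example-rg/providers/Microsoft.KeyVault/vaults/contoso-vault");
+
+GenericResource vault = await client.GetGenericResource(vaultId).GetAsync();
+JsonElement properties = vault.Data.Properties.ToObjectFromJson<JsonElement>();
+
+// A vault is recoverable when both soft delete and purge protection are enabled.
+bool softDelete = properties.TryGetProperty("enableSoftDelete", out var sd) && sd.GetBoolean();
+bool purgeProtection = properties.TryGetProperty("enablePurgeProtection", out var pp) && pp.GetBoolean();
+Console.WriteLine($"Soft delete: {softDelete}, purge protection: {purgeProtection}");
+```
+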
+### Enable role-based access control (RBAC) within Azure Kubernetes Services
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 8.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Authorize access to security functions and information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Faeed863a-0f56-429f-945d-8bb66bd06841) |CMA_0022 - Authorize access to security functions and information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0022.json) |
+|[Authorize and manage access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F50e9324a-7410-0539-0662-2c1e775538b7) |CMA_0023 - Authorize and manage access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0023.json) |
+|[Enforce logical access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F10c4210b-3ec9-9603-050d-77e4d26c7ebb) |CMA_0245 - Enforce logical access |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0245.json) |
+|[Enforce mandatory and discretionary access control policies](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb1666a13-8f67-9c47-155e-69e027ff6823) |CMA_0246 - Enforce mandatory and discretionary access control policies |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0246.json) |
+|[Require approval for account creation](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fde770ba6-50dd-a316-2932-e0d972eaa734) |CMA_0431 - Require approval for account creation |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0431.json) |
+|[Review user groups and applications with access to sensitive data](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feb1c944e-0e94-647b-9b7e-fdb8d2af0838) |CMA_0481 - Review user groups and applications with access to sensitive data |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0481.json) |
+|[Role-Based Access Control (RBAC) should be used on Kubernetes Services](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac4a19c2-fa67-49b4-8ae5-0b2e78c49457) |To provide granular filtering on the actions that users can perform, use Role-Based Access Control (RBAC) to manage permissions in Kubernetes Service Clusters and configure relevant authorization policies. |Audit, Disabled |[1.0.2](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Security%20Center/ASC_EnableRBAC_KubernetesService_Audit.json) |
+
+## 9 AppService
+
+### Ensure App Service Authentication is set up for apps in Azure App Service
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.1
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F95bccee9-a7f8-4bec-9ee9-62c3473701fc) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the web app, or authenticate those that have tokens before they reach the web app. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_WebApp_Audit.json) |
+|[Authenticate to cryptographic module](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6f1de470-79f3-1572-866e-db0771352fc8) |CMA_0021 - Authenticate to cryptographic module |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0021.json) |
+|[Enforce user uniqueness](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe336d5f4-4d8f-0059-759c-ae10f63d1747) |CMA_0250 - Enforce user uniqueness |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0250.json) |
+|[Function apps should have authentication enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc75248c1-ea1d-4a9c-8fc9-29a6aabd5da8) |Azure App Service Authentication is a feature that can prevent anonymous HTTP requests from reaching the Function app, or authenticate those that have tokens before they reach the Function app. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Authentication_functionapp_Audit.json) |
+|[Support personal verification credentials issued by legal authorities](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d39b5d9-0392-8954-8359-575ce1957d1a) |CMA_0507 - Support personal verification credentials issued by legal authorities |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0507.json) |
+
+### Ensure FTP deployments are Disabled
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.10
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F4d24b6d4-5e53-4a4f-a7f4-618fa573ee4b) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_WebApp_Audit.json) |
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Function apps should require FTPS only](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F399b2637-a50f-4f95-96f8-3a145476eb15) |Enable FTPS enforcement for enhanced security. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_AuditFTPS_FunctionApp_Audit.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
+
+### Ensure Azure Keyvaults are Used to Store Secrets
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.11
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[Define a physical key management process](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F51e4b233-8ee3-8bdc-8f5f-f33bd0d229b7) |CMA_0115 - Define a physical key management process |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0115.json) |
+|[Define cryptographic use](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc4ccd607-702b-8ae6-8eeb-fc3339cd4b42) |CMA_0120 - Define cryptographic use |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0120.json) |
+|[Define organizational requirements for cryptographic key management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd661e9eb-4e15-5ba1-6f02-cdc467db0d6c) |CMA_0123 - Define organizational requirements for cryptographic key management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0123.json) |
+|[Determine assertion requirements](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7a0ecd94-3699-5273-76a5-edb8499f655a) |CMA_0136 - Determine assertion requirements |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0136.json) |
+|[Ensure cryptographic mechanisms are under configuration management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb8dad106-6444-5f55-307e-1e1cc9723e39) |CMA_C1199 - Ensure cryptographic mechanisms are under configuration management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1199.json) |
+|[Issue public key certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F97d91b33-7050-237b-3e23-a77d57d84e13) |CMA_0347 - Issue public key certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0347.json) |
+|[Maintain availability of information](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F3ad7f0bc-3d03-0585-4d24-529779bb02c2) |CMA_C1644 - Maintain availability of information |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_C1644.json) |
+|[Manage symmetric cryptographic keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9c276cf3-596f-581a-7fbd-f5e46edaa0f4) |CMA_0367 - Manage symmetric cryptographic keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0367.json) |
+|[Restrict access to private keys](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8d140e8b-76c7-77de-1d46-ed1b2e112444) |CMA_0445 - Restrict access to private keys |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0445.json) |
+
+### Ensure Web App Redirects All HTTP traffic to HTTPS in Azure App Service
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.2
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps should only be accessible over HTTPS](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa4af4a39-4135-47fb-b175-47fbdf85311d) |Use of HTTPS ensures server/service authentication and protects data in transit from network layer eavesdropping attacks. |Audit, Disabled, Deny |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppServiceWebapp_AuditHTTP_Audit.json) |
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
+
+### Ensure Web App is using the latest version of TLS encryption
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.3
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff0e6e85b-9b9f-4a4b-b67b-f730d42f1b0b) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for App Service apps to take advantage of security fixes and new functionality in the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_WebApp_Audit.json) |
+|[Configure workstations to check for digital certificates](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F26daf649-22d1-97e9-2a8a-01b182194d59) |CMA_0073 - Configure workstations to check for digital certificates |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0073.json) |
+|[Function apps should use the latest TLS version](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff9d614c5-c173-4d56-95a7-b4437057d193) |Periodically, newer versions of TLS are released to address security flaws, add functionality, and improve speed. Upgrade to the latest TLS version for Function apps to take advantage of security fixes and new functionality in the latest version. |AuditIfNotExists, Disabled |[2.0.1](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_RequireLatestTls_FunctionApp_Audit.json) |
+|[Protect data in transit using encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb11697e8-9515-16f1-7a35-477d5c8a1344) |CMA_0403 - Protect data in transit using encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0403.json) |
+|[Protect passwords with encryption](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb2d3e5a2-97ab-5497-565a-71172a729d93) |CMA_0408 - Protect passwords with encryption |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0408.json) |
+
+### Ensure the web app has 'Client Certificates (Incoming client certificates)' set to 'On'
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.4
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5bb220d9-2698-4ee4-8404-b9c30c9df609) |Client certificates allow for the app to request a certificate for incoming requests. Only clients that have a valid certificate will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_ClientCert.json) |
+|[Authenticate to cryptographic module](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F6f1de470-79f3-1572-866e-db0771352fc8) |CMA_0021 - Authenticate to cryptographic module |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0021.json) |
+|[Function apps should have 'Client Certificates (Incoming client certificates)' enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Feaebaea7-8013-4ceb-9d14-7eb32271373c) |Client certificates allow for the app to request a certificate for incoming requests. Only clients with valid certificates will be able to reach the app. |Audit, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_ClientCert.json) |
+
+### Ensure that Register with Azure Active Directory is enabled on App Service
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.5
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2b9ad585-36bc-4615-b300-fd4435808332) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_WebApp_Audit.json) |
+|[Automate account management](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F2cc9c165-46bd-9762-5739-d2aae5ba90a1) |CMA_0026 - Automate account management |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0026.json) |
+|[Function apps should use managed identity](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0da106f2-4ca3-48e8-bc85-c638fe6aea8f) |Use a managed identity for enhanced authentication security |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_UseManagedIdentity_FunctionApp_Audit.json) |
+|[Manage system and admin accounts](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F34d38ea7-6754-1838-7031-d7fd07099821) |CMA_0368 - Manage system and admin accounts |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0368.json) |
+|[Monitor access across the organization](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F48c816c5-2190-61fc-8806-25d6f3df162f) |CMA_0376 - Monitor access across the organization |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0376.json) |
+|[Notify when account is not needed](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8489ff90-8d29-61df-2d84-f9ab0f4c5e84) |CMA_0383 - Notify when account is not needed |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0383.json) |
+
+### Ensure That 'PHP version' is the Latest, If Used to Run the Web App
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.6
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps that use PHP should use the latest 'PHP version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7261b898-8a84-4db8-9e04-18527132abb3) |Periodically, newer versions are released for PHP software either due to security flaws or to include additional functionality. Using the latest PHP version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_Webapp_Audit_PHP_Latest.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+
+### Ensure that 'Python version' is the Latest Stable Version, if Used to Run the Web App
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.7
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7008174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for App Service apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_python_Latest.json) |
+|[Function apps that use Python should use the latest 'Python version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F7238174a-fd10-4ef0-817e-fc820a951d73) |Periodically, newer versions are released for Python software either due to security flaws or to include additional functionality. Using the latest Python version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. This policy only applies to Linux apps since Python is not supported on Windows apps. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_python_Latest.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+
+### Ensure that 'Java version' is the latest, if used to run the Web App
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.8
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F496223c3-ad65-4ecd-878a-bae78737e9ed) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for web apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_java_Latest.json) |
+|[Function apps that use Java should use the latest 'Java version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d0b6ea4-93e2-4578-bf2f-6bb17d22b4bc) |Periodically, newer versions are released for Java software either due to security flaws or to include additional functionality. Using the latest Java version for Function apps is recommended in order to take advantage of security fixes, if any, and/or new functionalities of the latest version. Currently, this policy only applies to Linux apps. |AuditIfNotExists, Disabled |[3.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_java_Latest.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+
+### Ensure that 'HTTP Version' is the Latest, if Used to Run the Web App
+
+**ID**: CIS Microsoft Azure Foundations Benchmark recommendation 9.9
+**Ownership**: Shared
+
+|Name<br /><sub>(Azure portal)</sub> |Description |Effect(s) |Version<br /><sub>(GitHub)</sub> |
+|---|---|---|---|
+|[App Service apps should use latest 'HTTP Version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8c122334-9d20-4eb8-89ea-ac9a705b74ae) |Periodically, newer versions of HTTP are released to address security flaws or add functionality. Use the latest HTTP version for web apps to take advantage of security fixes and new functionality in the newer version. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_WebApp_Audit_HTTP_Latest.json) |
+|[Function apps should use latest 'HTTP Version'](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe2c1c086-2d84-4019-bff3-c44ccd95113c) |Periodically, newer versions of HTTP are released to address security flaws or add functionality. Use the latest HTTP version for web apps to take advantage of security fixes and new functionality in the newer version. |AuditIfNotExists, Disabled |[4.0.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/App%20Service/AppService_FunctionApp_Audit_HTTP_Latest.json) |
+|[Remediate information system flaws](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbe38a620-000b-21cf-3cb3-ea151b704c3b) |CMA_0427 - Remediate information system flaws |Manual, Disabled |[1.1.0](https://github.com/Azure/azure-policy/blob/master/built-in-policies/policyDefinitions/Regulatory%20Compliance/CMA_0427.json) |
+
+## Next steps
+
+Additional articles about Azure Policy:
+
+- [Regulatory Compliance](../concepts/regulatory-compliance.md) overview.
+- See the [initiative definition structure](../concepts/initiative-definition-structure.md).
+- Review other examples at [Azure Policy samples](./index.md).
+- Review [Understanding policy effects](../concepts/effects.md).
+- Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
governance First Query Dotnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/resource-graph/first-query-dotnet.md
Title: "Quickstart: Your first .NET Core query" description: In this quickstart, you follow the steps to enable the Resource Graph NuGet packages for .NET Core and run your first query. Previously updated : 01/06/2023 Last updated : 01/19/2023 # Quickstart: Run your first Resource Graph query using .NET Core
+> [!NOTE]
+> Special thanks to [Glenn Block](https://github.com/glennblock) for contributing
+> the code used in this quickstart.
+ The first step to using Azure Resource Graph is to check that the required packages for .NET Core are installed. This quickstart walks you through the process of adding the packages to your .NET Core installation.
required packages.
dotnet add package Microsoft.Azure.Services.AppAuthentication --version 1.5.0 ```
-1. Replace the default `program.cs` with the following code and save the updated file:
-
- ```csharp
- using System;
- using System.Collections.Generic;
- using System.Threading.Tasks;
- using Microsoft.IdentityModel.Clients.ActiveDirectory;
- using Microsoft.Rest;
- using Azure.ResourceManager.ResourceGraph;
- using Azure.ResourceManager.ResourceGraph.Models;
-
- namespace argQuery
- {
- class Program
- {
- static async Task Main(string[] args)
- {
- string strTenant = args[0];
- string strClientId = args[1];
- string strClientSecret = args[2];
- string strQuery = args[3];
-
- AuthenticationContext authContext = new AuthenticationContext("https://login.microsoftonline.com/" + strTenant);
- AuthenticationResult authResult = await authContext.AcquireTokenAsync("https://management.core.windows.net", new ClientCredential(strClientId, strClientSecret));
- ServiceClientCredentials serviceClientCreds = new TokenCredentials(authResult.AccessToken);
-
- ResourceGraphClient argClient = new ResourceGraphClient(serviceClientCreds);
- QueryRequest request = new QueryRequest();
- request.Query = strQuery;
-
- QueryResponse response = argClient.Resources(request);
- Console.WriteLine("Records: " + response.Count);
- Console.WriteLine("Data:\n" + response.Data);
- }
- }
- }
- ```
+1. Replace the default `Program.cs` with the following code and save the updated file:
+
+   ```csharp
+   using System;
+   using System.Linq;
+   using System.Threading.Tasks;
+   using Azure.Identity;
+   using Azure.ResourceManager;
+   using Azure.ResourceManager.ResourceGraph;
+   using Azure.ResourceManager.ResourceGraph.Models;
+
+   namespace argQuery
+   {
+       class Program
+       {
+           static async Task Main(string[] args)
+           {
+               string strTenant = args[0];
+               string strClientId = args[1];
+               string strClientSecret = args[2];
+               string strQuery = args[3];
+
+               // Authenticate as the service principal and create the Azure Resource Manager client.
+               var client = new ArmClient(new ClientSecretCredential(strTenant, strClientId, strClientSecret));
+               var tenant = client.GetTenants().First();
+
+               // Run the Resource Graph query against the tenant.
+               var queryContent = new ResourceQueryContent(strQuery);
+               var response = await tenant.GetResourcesAsync(queryContent);
+               var result = response.Value;
+
+               Console.WriteLine($"Records: {result.Count}");
+               Console.WriteLine($"Data:\n{result.Data}");
+           }
+       }
+   }
+   ```
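+
+   The sample authenticates with `ClientSecretCredential` from the Azure.Identity library and queries through the `ArmClient` entry point, so it assumes the Azure.Identity, Azure.ResourceManager, and Azure.ResourceManager.ResourceGraph NuGet packages have been added to the project with `dotnet add package`.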
> [!NOTE] > This code creates a tenant-based query. To limit the query to a
With the .NET Core console application built and published, it's time to try out
tenant-based Resource Graph query. The query returns the first five Azure resources with the **Name** and **Resource Type** of each resource.
-In each call to `argQuery`, there are variables that are used that you need to replace with your own
+In each call to `argQuery`, replace the variables with your own
values: - `{tenantId}` - Replace with your tenant ID - `{clientId}` - Replace with the client ID of your service principal - `{clientSecret}` - Replace with the client secret of your service principal
-1. Change directories to the `{run-folder}` you defined with the previous `dotnet publish` command.
+1. Change directories to the `{run-folder}` you defined with the earlier `dotnet publish` command.
1. Run your first Azure Resource Graph query using the compiled .NET Core console application:
values:
> [!NOTE] > As this query example does not provide a sort modifier such as `order by`, running this query
- > multiple times is likely to yield a different set of resources per request.
+ > many times is likely to yield a different set of resources per request.
1. Change the final parameter to `argQuery.exe` and change the query to `order by` the **Name** property:
hdinsight Apache Ambari Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/apache-ambari-email.md
Last updated 04/11/2022
# Tutorial: Configure Apache Ambari email notifications in Azure HDInsight
-In this tutorial, you'll configure Apache Ambari email notifications using SendGrid. [Apache Ambari](./hdinsight-hadoop-manage-ambari.md) simplifies the management and monitoring of an HDInsight cluster by providing an easy to use web UI and REST API. Ambari is included on HDInsight clusters, and is used to monitor the cluster and make configuration changes. [SendGrid](https://sendgrid.com/solutions/) is a free cloud-based email service that provides reliable transactional email delivery, scalability, and real-time analytics along with flexible APIs that make custom integration easy. Azure customers can unlock 25,000 free emails each month.
+In this tutorial, you'll configure Apache Ambari email notifications using SendGrid as an example. [Apache Ambari](./hdinsight-hadoop-manage-ambari.md) simplifies the management and monitoring of an HDInsight cluster by providing an easy-to-use web UI and REST API. Ambari is included on HDInsight clusters, and is used to monitor the cluster and make configuration changes. [SendGrid](https://sendgrid.com/solutions/) is a free cloud-based email service that provides reliable transactional email delivery, scalability, and real-time analytics along with flexible APIs that make custom integration easy. Azure customers can unlock 25,000 free emails each month.
+
+> [!NOTE]
+> SendGrid isn't required to configure Apache Ambari email notifications. You can also use another third-party email service, such as Outlook or Gmail.
In this tutorial, you learn how to:
hdinsight Interactive Query Troubleshoot Migrate 36 To 40 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/interactive-query/interactive-query-troubleshoot-migrate-36-to-40.md
Steps to disable ACID on HDInsight 4.0:
hive.create.as.insert.only=false; metastore.create.as.acid=false; ```-
+> [!Note]
+> If `hive.strict.managed.tables` is set to `true` (the default value), creating a managed, non-transactional table fails with the following error:
+```
+java.lang.Exception: java.sql.SQLException: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. Unable to alter table. Table <Table name> failed strict managed table checks due to the following reason: Table is marked as a managed table but is not transactional.
+```
2. Restart hive service. > [!IMPORTANT]
healthcare-apis Copy To Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/fhir/copy-to-synapse.md
In this article, you'll learn three ways to copy data from the FHIR service in Azure Health Data Services to [Azure Synapse Analytics](https://azure.microsoft.com/services/synapse-analytics/), which is a limitless analytics service that brings together data integration, enterprise data warehousing, and big data analytics.
-* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) OSS tool
+* Use the [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) OSS tool
* Use the [FHIR to CDM pipeline generator](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToCdm/docs/fhir-to-cdm.md) OSS tool * Use $export and load data to Synapse using T-SQL ## Using the FHIR to Synapse Sync Agent OSS tool > [!Note]
-> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deployment.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
+> [FHIR to Synapse Sync Agent](https://github.com/microsoft/FHIR-Analytics-Pipelines/blob/main/FhirToDataLake/docs/Deploy-FhirToDatalake.md) is an open source tool released under MIT license, and is not covered by the Microsoft SLA for Azure services.
The **FHIR to Synapse Sync Agent** is a Microsoft OSS project released under MIT License. It's an Azure function that extracts data from a FHIR server using FHIR Resource APIs, converts it to hierarchical Parquet files, and writes it to Azure Data Lake in near real time. This also contains a script to create external tables and views in [Synapse Serverless SQL pool](../../synapse-analytics/sql/on-demand-workspace-overview.md) pointing to the Parquet files.
Next, you can learn about how you can de-identify your FHIR data while exporting
>[!div class="nextstepaction"] >[Exporting de-identified data](./de-identified-export.md)
-FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
+FHIR&#174; is a registered trademark of [HL7](https://hl7.org/fhir/) and is used with the permission of HL7.
healthcare-apis Data Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/data-flow.md
Previously updated : 12/27/2022 Last updated : 01/18/2023
This article provides an overview of the MedTech service data flow. You'll learn about the different data processing stages within the MedTech service that transforms device data into Fast Healthcare Interoperability Resources (FHIR&#174;)-based [Observation](https://www.hl7.org/fhir/observation.html) resources.
-Data from health-related devices or medical devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The health data path follows these steps in this order: ingest, normalize, group, transform, and persist. Health data is retrieved from the device in the first step of ingestion. After the data is received, it's processed, or normalized per a user-selected/user-created schema template called the device mapping. Normalized health data is simpler to process and can be grouped. In the next step, health data is grouped into three Operate parameters. After the health data is normalized and grouped, it can be processed or transformed through a FHIR destination mapping, and then saved or persisted on the FHIR service.
+Data from devices flows through a path in which the MedTech service transforms data into FHIR, and then data is stored on and accessed from the FHIR service. The data path follows these steps in this order: ingest, normalize, group, transform, and persist. Data is retrieved from the device in the first step, ingestion. After the data is received, it's normalized according to a user-selected or user-created schema template called the device mapping. Normalized data is simpler to process and can be grouped. In the next step, data is grouped by using three Operate parameters. After the data is normalized and grouped, it's transformed through a FHIR destination mapping and then persisted on the FHIR service.
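+
+As a mental model only (this isn't the MedTech service's implementation, which is driven entirely by the mappings), the stages can be sketched in a few lines of C#: a raw device payload is normalized into flat readings, grouped by device and measurement type, and projected into an Observation-shaped record. All type and field names below are illustrative.
+
+```csharp
+using System;
+using System.Collections.Generic;
+using System.Linq;
+using System.Text.Json;
+
+// Illustrative shapes only; the real pipeline is configured with device
+// and FHIR destination mappings, not hard-coded types.
+record Reading(string DeviceId, string Type, double Value, DateTimeOffset Time);
+record Observation(string DeviceId, string Type, IReadOnlyList<double> Values);
+
+class Pipeline
+{
+    // Normalize: project a raw JSON payload into flat readings.
+    static IEnumerable<Reading> Normalize(string json)
+    {
+        var root = JsonDocument.Parse(json).RootElement;
+        yield return new Reading(
+            root.GetProperty("deviceId").GetString()!,
+            "heartRate",
+            root.GetProperty("heartRate").GetDouble(),
+            root.GetProperty("time").GetDateTimeOffset());
+    }
+
+    static void Main()
+    {
+        string payload = "{\"deviceId\":\"dev-01\",\"heartRate\":72,\"time\":\"2023-01-18T10:00:00Z\"}";
+
+        // Group by device identity and measurement type, then transform
+        // each group into an Observation-shaped record to be persisted.
+        var observations = Normalize(payload)
+            .GroupBy(r => (r.DeviceId, r.Type))
+            .Select(g => new Observation(g.Key.DeviceId, g.Key.Type,
+                g.Select(r => r.Value).ToList()));
+
+        foreach (var o in observations)
+            Console.WriteLine($"{o.DeviceId}/{o.Type}: {string.Join(", ", o.Values)}");
+    }
+}
+```
+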
This article goes into more depth about each step in the data flow. The next steps are [Choose a deployment method for the MedTech service](deploy-new-choose.md) by using a device mapping (the normalization step) and a FHIR destination mapping (the transformation step).
-This next section of the article describes the stages that IoMT (Internet of Medical Things) device data goes through as it processed through the MedTech service.
+This next section of the article describes the stages that IoT (Internet of Things) device data goes through as it's processed through the MedTech service.
:::image type="content" source="media/data-flow/iot-data-flow.png" alt-text="Screenshot of IoMT data as it flows from IoT devices into an Azure event hub. IoMT data is ingested by the MedTech service as it is normalized, grouped, transformed, and persisted in the FHIR service." lightbox="media/data-flow/iot-data-flow.png":::
At this point, [Device](https://www.hl7.org/fhir/device.html) resource, along wi
> [!NOTE] > All identity lookups are cached once resolved to decrease load on the FHIR service. If you plan on reusing devices with multiple patients, it's advised that you create a virtual device resource that's specific to the patient and send the virtual device identifier in the message payload. The virtual device can be linked to the actual device resource as a parent.
-If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of `Resolution Type` set at the time of creation. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming messages. If set to `Create`, the MedTech service will create a bare-bones Device and Patient resources on the FHIR service.
+If no Device resource for a given device identifier exists in the FHIR service, the outcome depends upon the value of `Resolution Type` set at the time of creation. When set to `Lookup`, the specific message is ignored, and the pipeline will continue to process other incoming messages. If set to `Create`, the MedTech service will create bare-bones Device and Consumer resources on the FHIR service.
## Persist Once the Observation FHIR resource is generated in the Transform stage, the resource is saved into the FHIR service. If the Observation FHIR resource is new, it's created on the FHIR service. If the Observation FHIR resource already exists, it's updated.
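
The create-versus-update behavior follows standard FHIR upsert semantics: a `PUT` against `Observation/{id}` creates the resource when the ID is new and updates it when it already exists. The MedTech service performs this step for you; the sketch below only illustrates the semantics with `HttpClient`, and the service URL, token, and resource content are placeholders.

```csharp
using System;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;

using var http = new HttpClient { BaseAddress = new Uri("https://example.fhir.azurehealthcareapis.com/") };
// Placeholder token; acquire a real token for your FHIR service.
http.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Bearer", "{access-token}");

string observationJson =
    "{\"resourceType\":\"Observation\",\"id\":\"hr-dev01\",\"status\":\"final\"," +
    "\"code\":{\"text\":\"Heart rate\"},\"valueQuantity\":{\"value\":72,\"unit\":\"beats/min\"}}";

// PUT is an upsert in FHIR: 201 Created on first write, 200 OK on later updates.
var response = await http.PutAsync(
    "Observation/hr-dev01",
    new StringContent(observationJson, Encoding.UTF8, "application/fhir+json"));
Console.WriteLine((int)response.StatusCode);
```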
healthcare-apis Deploy New Choose https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/deploy-new-choose.md
Previously updated : 1/10/2023 Last updated : 1/18/2023
In this quickstart, you'll learn about these deployment methods:
[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors-with-iothub%2Fazuredeploy.json)
-To learn more about deploying the MedTech service including an Azure IoT Hub using an ARM template and the **Deploy to Azure** button, see [Receive device messages through Azure IoT Hub](device-data-through-iot-hub.md).
+To learn more about deploying the MedTech service including an Azure IoT Hub using an ARM template and the **Deploy to Azure** button, see [Receive device messages through Azure IoT Hub](device-messages-through-iot-hub.md).
## ARM template using the Deploy to Azure button
healthcare-apis Device Messages Through Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/device-messages-through-iot-hub.md
+
+ Title: Receive device messages through Azure IoT Hub - Azure Health Data Services
+description: Learn how to deploy Azure IoT Hub with message routing to send device messages to the MedTech service in Azure Health Data Services. The tutorial uses an Azure Resource Manager template (ARM template) in the Azure portal and Visual Studio Code with the Azure IoT Hub extension.
+++++ Last updated : 1/18/2023+++
+# Tutorial: Receive device messages through Azure IoT Hub
+
+For enhanced workflows and ease of use, you can use the MedTech service to receive messages from devices you create and manage through an IoT hub in [Azure IoT Hub](../../iot-hub/iot-concepts-and-iot-hub.md). This tutorial uses an Azure Resource Manager template (ARM template) and a **Deploy to Azure** button to deploy a MedTech service. The template creates an IoT hub to create and manage devices, and then routes device messages to an event hub in Azure Event Hubs for the MedTech service to pick up.
++
+> [!TIP]
+> To learn more about how the MedTech service transforms and persists device messages into the Fast Healthcare Interoperability Resources (FHIR&#174;) service as FHIR Observations, see [The MedTech service data flow](data-flow.md).
++
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> - Open an ARM template in the Azure portal.
+> - Configure the template for your deployment.
+> - Create a device.
+> - Send a test message.
+> - Review metrics for the test message.
+
+> [!TIP]
+> To learn more about ARM templates, see [What are ARM templates?](./../../azure-resource-manager/templates/overview.md)
+
+## Prerequisites
+
+To begin your deployment and complete the tutorial, you must have the following prerequisites:
+
+- An active Azure subscription account. If you don't have an Azure subscription, see the [subscription decision guide](/azure/cloud-adoption-framework/decision-guides/subscriptions/).
+
+- Owner or Contributor and User Access Administrator role assignments in the Azure subscription. For more information, see [What is Azure role-based access control (Azure RBAC)?](../../role-based-access-control/overview.md)
+
+- The Microsoft.HealthcareApis, Microsoft.EventHub, and Microsoft.Devices resource providers registered with your Azure subscription. To learn more, see [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md).
+
+- [Visual Studio Code](https://code.visualstudio.com/Download) installed locally.
+
+- [Azure IoT Tools](https://marketplace.visualstudio.com/items?itemName=vsciot-vscode.azure-iot-tools) installed in Visual Studio Code. Azure IoT Tools is a collection of extensions that makes it easy to connect to IoT hubs, create devices, and send messages. In this tutorial, you use the Azure IoT Hub extension in Visual Studio Code to connect to your deployed IoT hub, create a device, and send a test message from the device to your IoT hub.
+
+When you have these prerequisites, you're ready to configure the ARM template by using the **Deploy to Azure** button.
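+
+If you prefer code to the Visual Studio Code extension, you can also send a device-to-cloud test message with the Microsoft.Azure.Devices.Client NuGet package. This is an optional sketch, not a tutorial step; the device connection string is a placeholder (copy the real one from the device you create in your IoT hub), and the payload shape must match the device mapping deployed by the template.
+
+```csharp
+using System;
+using System.Text;
+using Microsoft.Azure.Devices.Client;
+
+// Placeholder connection string; copy the real one from your device in the IoT hub.
+using var device = DeviceClient.CreateFromConnectionString(
+    "HostName={iot-hub-name}.azure-devices.net;DeviceId={device-id};SharedAccessKey={key}",
+    TransportType.Mqtt);
+
+// Example payload; the property names must match your device mapping.
+string payload = "{\"HeartRate\":78,\"HeartRateTimestamp\":\"2023-01-18T10:00:00Z\"}";
+using var message = new Message(Encoding.UTF8.GetBytes(payload))
+{
+    ContentType = "application/json",
+    ContentEncoding = "utf-8",
+};
+
+await device.SendEventAsync(message);
+Console.WriteLine("Test message sent.");
+```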
+
+## Review the ARM template - Optional
+
+The ARM template used to deploy the resources in this tutorial is available at [Azure Quickstart Templates](/samples/azure/azure-quickstart-templates/iotconnectors-with-iothub/) by using the *azuredeploy.json* file on [GitHub](https://github.com/azure/azure-quickstart-templates/tree/master/quickstarts/microsoft.healthcareapis/workspaces/iotconnectors-with-iothub).
+
+## Use the Deploy to Azure button
+
+To begin deployment in the Azure portal, select the **Deploy to Azure** button:
+
+[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fquickstarts%2Fmicrosoft.healthcareapis%2Fworkspaces%2Fiotconnectors-with-iothub%2Fazuredeploy.json)
+
+## Configure the deployment
+
+1. In the Azure portal, on the **Basics** tab of the Azure Quickstart Template, select or enter the following information for your deployment:
+
+ - **Subscription**: The Azure subscription to use for the deployment.
+
+ - **Resource group**: An existing resource group, or you can create a new resource group.
+
+ - **Region**: The Azure region of the resource group that's used for the deployment. **Region** auto-fills by using the resource group region.
+
+ - **Basename**: A value that's appended to the name of the Azure resources and services that are deployed. The examples in this tutorial use the basename *azuredocsdemo*. You can choose your own basename value.
+
+ - **Location**: A supported Azure region for Azure Health Data Services (the value can be the same as or different from the region your resource group is in). For a list of Azure regions where Health Data Services is available, see [Products available by regions](https://azure.microsoft.com/explore/global-infrastructure/products-by-region/?products=health-data-services).
+
+   - **Fhir Contributor Principal Id** (optional): An Azure Active Directory (Azure AD) user object ID to provide read/write permissions in the FHIR service.
+
+    You can use this account to give access to the FHIR service to view the device messages that are generated in this tutorial. We recommend that you use your own Azure AD user object ID so that you can access the messages in the FHIR service. If you choose not to use the **Fhir Contributor Principal Id** option, clear the text box.
+
+ To learn how to get an Azure AD user object ID, see [Find the user object ID](/partner-center/find-ids-and-domain-names#find-the-user-object-id). The user object ID that's used in this tutorial is only an example. If you use this option, use your own user object ID or the object ID of another person who you want to be able to access the FHIR service.
+
+ - **Device Mapping**: Don't change the default values for this tutorial. The mappings are set in the template to send a device message to your IoT hub later in the tutorial.
+
+ - **Destination Mapping**: Don't change the default values for this tutorial. The mappings are set in the template to send a device message to your IoT hub later in the tutorial.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\deploy-template-options.png" alt-text="Screenshot that shows deployment options for the MedTech service for Health Data Services in the Azure portal." lightbox="media\device-messages-through-iot-hub\deploy-template-options.png":::
+
+2. To validate your configuration, select **Review + create**.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\review-and-create-button.png" alt-text="Screenshot that shows the Review + create button selected in the Azure portal.":::
+
+3. In **Review + create**, check the template validation status. If validation is successful, the template displays **Validation Passed**. If validation fails, fix the detail that's indicated in the error message, and then select **Review + create** again.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\validation-complete.png" alt-text="Screenshot that shows the Review + create pane displaying the Validation Passed message.":::
+
+4. After a successful validation, to begin the deployment, select **Create**.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\create-button.png" alt-text="Screenshot that shows the highlighted Create button.":::
+
+5. In a few minutes, the Azure portal displays the message that your deployment is complete.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\deployment-complete-banner.png" alt-text="Screenshot that shows a green checkmark and the message Your deployment is complete.":::
+
+ > [!IMPORTANT]
+    > If you plan to allow multiple services to access the device message event hub, it's highly recommended that each service have its own event hub consumer group.
+ >
+ > Consumer groups enable multiple consuming applications to have a separate view of the event stream, and to read the stream independently at their own pace and with their own offsets. For more information, see [Consumer groups](../../event-hubs/event-hubs-features.md#consumer-groups).
+ >
+ > Examples:
+ >
+ > - Two MedTech services accessing the same device message event hub.
+ >
+ > - A MedTech service and a storage writer application accessing the same device message event hub.
+
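+As a sketch, you could create a dedicated consumer group for a second consuming service with the Azure CLI. The consumer group name *storagewriter* is a hypothetical example, and the Event Hubs namespace placeholder depends on your deployment:
+
+```azurecli
+az eventhubs eventhub consumer-group create \
+    --resource-group MyResourceGroup \
+    --namespace-name {your Event Hubs namespace} \
+    --eventhub-name devicedata \
+    --name storagewriter
+```
+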
+## Review deployed resources and access permissions
+
+When deployment is completed, the following resources and access roles are created in the template deployment:
+
+- An Azure Event Hubs namespace and a device message event hub. In this deployment, the event hub is named *devicedata*.
+
+ - An event hub consumer group. In this deployment, the consumer group is named *$Default*.
+
+  - An Azure Event Hubs Data Sender role. In this deployment, the sender role is named *devicedatasender* and can be used to provide access to the device event hub by using a shared access signature (SAS). To learn more about authorizing access by using a SAS, see [Authorizing access to Event Hubs resources using Shared Access Signatures](../../event-hubs/authorize-access-shared-access-signature.md). The Azure Event Hubs Data Sender role isn't used in this tutorial. A sketch for retrieving this rule's connection string with the Azure CLI follows this list.
+
+- An Azure IoT Hub with [message routing](../../iot-hub/iot-hub-devguide-messages-d2c.md) configured to send device messages to the device message event hub.
+
+- A [user-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) that provides send access from the IoT hub to the device message event hub. The managed identity has the Azure Event Hubs Data Sender role in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+
+- A Health Data Services workspace.
+
+- A Health Data Services FHIR service.
+
+- A Health Data Services MedTech service with the required [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) roles:
+
+  - For the device message event hub, the Azure Event Hubs Data Receiver role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the device message event hub.
+
+ - For the FHIR service, the FHIR Data Writer role is assigned in the [Access control section (IAM)](../../role-based-access-control/overview.md) of the FHIR service.
+
+- Conforming and valid MedTech service [device](how-to-configure-device-mappings.md) and [FHIR destination mappings](how-to-configure-fhir-mappings.md).
+
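+If you later need the SAS connection string for the *devicedatasender* authorization rule, you can retrieve it with the Azure CLI. A sketch, assuming the Event Hubs namespace name from your deployment:
+
+```azurecli
+az eventhubs eventhub authorization-rule keys list \
+    --resource-group MyResourceGroup \
+    --namespace-name {your Event Hubs namespace} \
+    --eventhub-name devicedata \
+    --name devicedatasender \
+    --query primaryConnectionString --output tsv
+```
+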
+> [!IMPORTANT]
+> In this tutorial, the ARM template configures the MedTech service to operate in Create mode. A patient resource and a device resource are created for each device that sends data to your FHIR service.
+>
+> To learn more about the MedTech service resolution types Create and Lookup, see [Destination properties](deploy-new-config.md#destination-properties).
+
+## Create a device and send a test message
+
+With your resources successfully deployed, you next connect to your IoT hub, create a device, and send a test message to the IoT hub. After you complete these steps, your MedTech service can:
+
+- Pick up the IoT hub-routed test message from the device message event hub.
+- Transform the test message into five FHIR observations.
+- Persist the FHIR observations to your FHIR service.
+
+You complete the steps by using Visual Studio Code with the Azure IoT Hub extension:
+
+1. Open Visual Studio Code with Azure IoT Tools installed.
+
+2. In Explorer, in **Azure IoT Hub**, select **…** and choose **Select IoT Hub**.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\select-iot-hub.png" alt-text="Screenshot of Visual Studio Code with the Azure IoT Hub extension with the deployed IoT hub selected." lightbox="media\device-messages-through-iot-hub\select-iot-hub.png":::
+
+3. Select the Azure subscription where your IoT hub was provisioned.
+
+4. Select your IoT hub. The name of your IoT hub is the *basename* you provided when you provisioned the resources, prefixed with **ih-**. An example hub name is *ih-azuredocsdemo*.
+
+5. In Explorer, in **Azure IoT Hub**, select **…** and choose **Create Device**. An example device name is *iot-001*.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\create-device.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension with Create device selected." lightbox="media\device-messages-through-iot-hub\create-device.png":::
+
+6. To send a test message from the device to your IoT hub, right-click the device and select **Send D2C Message to IoT Hub**.
+
+ > [!NOTE]
+    > In this device-to-cloud (D2C) example, *cloud* is the IoT hub in Azure IoT Hub that receives the device message. Azure IoT Hub supports two-way communications. To set up a cloud-to-device (C2D) scenario, select **Send C2D Message to Device Cloud**.
+
+ :::image type="content" source="media\device-messages-through-iot-hub\select-device-to-cloud-message.png" alt-text="Screenshot that shows Visual Studio Code with the Azure IoT Hub extension and the Send D2C Message to IoT Hub option selected." lightbox="media\device-messages-through-iot-hub\select-device-to-cloud-message.png":::
+
+7. In **Send D2C Messages**, select or enter the following values:
+
+ - **Device(s) to send messages from**: The name of the device you created.
+
+ - **Message(s) per device**: **1**.
+
+ - **Interval between two messages**: **1 second(s)**.
+
+ - **Message**: **Plain Text**.
+
+ - **Edit**: Clear any existing text, and then paste the following JSON.
+
+ > [!TIP]
+    > You can use the **Copy** option in the right corner of the test message below, and then paste it into the **Edit** box.
+
+ ```json
+ {
+ "HeartRate": 78,
+ "RespiratoryRate": 12,
+ "HeartRateVariability": 30,
+ "BodyTemperature": 98.6,
+ "BloodPressure": {
+ "Systolic": 120,
+ "Diastolic": 80
+ }
+ }
+ ```
+
+8. To send the test message to your IoT hub, select **Send**. (An Azure CLI alternative for sending the test message appears after these steps.)
+
+ :::image type="content" source="media\device-messages-through-iot-hub\select-device-to-cloud-message-options.png" alt-text="Screenshot that shows Visual Studio code with the Azure IoT Hub extension with the device message options selected." lightbox="media\device-messages-through-iot-hub\select-device-to-cloud-message-options.png":::
+
+ After you select **Send**, it might take up to five minutes for the FHIR resources to be available in the FHIR service.
+
+ > [!IMPORTANT]
+ > To avoid device spoofing in D2C messages, Azure IoT Hub enriches all messages with additional properties. For more information, see [Anti-spoofing properties](../../iot-hub/iot-hub-devguide-messages-construct.md#anti-spoofing-properties) and [How to use IotJsonPathContentTemplate mappings](how-to-use-iot-jsonpath-content-mappings.md).
+
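+If you prefer a command-line workflow, you can send an equivalent test message with the *azure-iot* Azure CLI extension instead of Visual Studio Code. A sketch, assuming the example device name *iot-001*:
+
+```azurecli
+az iot device send-d2c-message \
+    --hub-name {your IoT hub name} \
+    --device-id iot-001 \
+    --data '{"HeartRate": 78, "RespiratoryRate": 12, "HeartRateVariability": 30, "BodyTemperature": 98.6, "BloodPressure": {"Systolic": 120, "Diastolic": 80}}'
+```
+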
+## Review metrics from the test message
+
+Now that you've successfully sent a test message to your IoT hub, review your MedTech service metrics. You review metrics to verify that your MedTech service received, grouped, transformed, and persisted the test message to your FHIR service. To learn more, see [How to display the MedTech service monitoring tab metrics](how-to-use-monitoring-tab.md).
+
+In your MedTech service metrics, you can see that the service completed the following steps for the test message (a sketch for querying these metrics with the Azure CLI follows the list):
+
+- **Number of Incoming Messages**: Received the incoming test message from the device message event hub.
+- **Number of Normalized Messages**: Created five normalized messages.
+- **Number of Measurements**: Created five measurements.
+- **Number of FHIR resources**: Created five FHIR resources that are persisted in your FHIR service.
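+
+The same metrics can also be queried from the Azure CLI. A sketch, assuming the resource ID of your deployed MedTech service; the metric name shown here is an assumption and might differ in your environment:
+
+```azurecli
+# The metric name "Number of Incoming Messages" is illustrative; check your service's available metrics
+az monitor metrics list \
+    --resource {your MedTech service resource ID} \
+    --metric "Number of Incoming Messages"
+```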
+++
+## View test data in the FHIR service
+
+If you provided your own Azure AD user object ID as the optional value for **Fhir Contributor Principal ID** in the deployment template, you can query FHIR resources in your FHIR service.
+
+To learn how to get an Azure AD access token and view FHIR resources in your FHIR service, see [Access by using Postman](../fhir/use-postman.md).
+
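+For example, once you have access, you could request a token and list Observation resources with the Azure CLI and curl. A sketch, assuming the workspace and FHIR service names from your deployment:
+
+```azurecli
+# Request an Azure AD access token scoped to your FHIR service
+token=$(az account get-access-token \
+    --resource https://{your workspace name}-{your FHIR service name}.fhir.azurehealthcareapis.com \
+    --query accessToken --output tsv)
+
+# List Observation resources persisted by the MedTech service
+curl --header "Authorization: Bearer $token" \
+    "https://{your workspace name}-{your FHIR service name}.fhir.azurehealthcareapis.com/Observation"
+```
+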
+## Next steps
+
+In this tutorial, you deployed an ARM template in the Azure portal, connected to your IoT hub, created a device, sent a test message, and reviewed your MedTech service metrics.
+
+To learn about other methods for deploying the MedTech service, see
+
+> [!div class="nextstepaction"]
+> [Choose a deployment method for the MedTech service](deploy-new-choose.md)
+
+FHIR&#174; is a registered trademark of Health Level Seven International, registered in the U.S. Trademark Office and is used with their permission.
healthcare-apis Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/get-started.md
Previously updated : 1/5/2023 Last updated : 1/18/2023 # Get started with the MedTech service in the Azure Health Data Services
-This article will show you how to get started with the Azure MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). There are six steps you need to follow to be able to deploy and process MedTech service to ingest health data from a medical device using Azure Event Hubs service, persist the data to Azure Fast Healthcare Interoperability Resources (FHIR&#174;) service as Observation resources, and link FHIR service Observations to patient and device resources. This article provides an architecture overview to help you follow the six steps of the implementation process.
+This article shows you how to get started with the MedTech service in the [Azure Health Data Services](../healthcare-apis-overview.md). There are six steps you follow to deploy the MedTech service, ingest data from a device by using the Azure Event Hubs service, persist the data to the Azure Fast Healthcare Interoperability Resources (FHIR&#174;) service as Observation resources, and link FHIR service Observations to user and device resources. This article also provides an architecture overview to help you follow the six steps of the implementation process.
## Architecture overview of the MedTech service
-The following diagram outlines the basic architectural path that enables the MedTech service to receive data from a medical device and send it to the FHIR service. This diagram shows how the six-step implementation process is divided into three key development stages: deployment, post-deployment, and data processing.
+The following diagram outlines the basic architectural path that enables the MedTech service to receive data from a device and send it to the FHIR service. This diagram shows how the six-step implementation process is divided into three key development stages: deployment, post-deployment, and data processing.
:::image type="content" source="media/get-started/get-started-with-iot.png" alt-text="Diagram showing MedTech service architectural overview." lightbox="media/get-started/get-started-with-iot.png":::
-### Deployment
-
-- Step 1 introduces the subscription and permissions prerequisites required.
-
-- Step 2 shows how Azure services are provisioned for the MedTech services.
-
-- Step 3 presents the configuration process.
-
-### Post-deployment
-
-- Step 4 outlines how to connect to other services.
-
-### Data processing
-
-- Step 5 represents the data flow from a device to an event hub and the way it's processed through the five parts of the MedTech service.
-
-- Step 6 demonstrates the path to verify processed data sent from MedTech service to the FHIR service.
-
-## Get started implementing the MedTech service
-
Follow these six steps to set up and start using the MedTech service.

## Step 1: Prerequisites for deployment
healthcare-apis Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/healthcare-apis/iot/overview.md
Previously updated : 12/27/2022 Last updated : 1/20/2023 # What is the MedTech service?
-## Overview
+This article provides an introductory overview of the MedTech service. The MedTech service in Azure Health Data Services is a Platform as a service (PaaS) that enables you to gather data from diverse devices and convert it into a Fast Healthcare Interoperability Resources (FHIR&#174;) service format. The MedTech service's device data translation capabilities make it possible to transform a wide variety of data into a unified FHIR format that provides secure data management in a cloud environment.
-The MedTech service in Azure Health Data Services is a Platform as a service (PaaS) that enables you to gather data from diverse medical devices and convert it into a Fast Healthcare Interoperability Resources (FHIR&#174;) service format. The MedTech service's device data translation capabilities make it possible to transform a wide variety of data into a unified FHIR format that provides secure health data management in a cloud environment.
-
-The MedTech service is important because healthcare data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If medical information isn't easy to access, it may have a negative effect on gaining clinical insights and a patient's health and wellness. The ability to transform many types of medical device data into a unified FHIR format enables the MedTech service to successfully link devices, health data, labs, and remote in-person care to support the clinician, care team, patient, and family. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
+The MedTech service is important because data can be difficult to access or lost when it comes from diverse or incompatible devices, systems, or formats. If this information isn't easy to access, it may have a negative effect on gaining key insights and capturing trends. The ability to transform many types of device data into a unified FHIR format enables the MedTech service to successfully link device data with other datasets to support the end user. As a result, this capability can facilitate the discovery of important clinical insights and trend capture. It can also help make connections to new device applications and enable advanced research projects.
## How the MedTech service works
-The following diagram outlines the basic elements of how the MedTech service transforms medical device data into a standardized FHIR resource in the cloud.
+The following diagram outlines the basic elements of how the MedTech service transforms device data into a standardized FHIR resource in the cloud.
:::image type="content" source="media/overview/what-is-simple-diagram.png" alt-text="Simple diagram showing the MedTech service." lightbox="media/overview/what-is-simple-diagram.png":::
These elements are:
### Deployment
-In order to implement the MedTech service, you need to have an Azure subscription, set up a workspace, and set up a namespace to deploy three Azure
+In order to implement the MedTech service, you need to have an Azure subscription, set up a workspace, and set up a namespace to deploy three Azure
### Devices
-After the PaaS deployment is completed, high-velocity and low-velocity patient medical data can be collected from a wide range of JSON-compatible IoMT devices, systems, and formats.
+After the PaaS deployment is completed, high-velocity and low-velocity data can be collected from a wide range of JSON-compatible IoMT devices, systems, and formats.
### Event Hubs service
- IoMT data is then sent from a device over the Internet to Event Hubs service to hold it temporarily in the cloud. The event hub can asynchronously process millions of data points per second, eliminating data traffic jams, making it possible to easily handle huge amounts of information in real time.
+ IoT data is then sent from a device over the Internet to Event Hubs service to hold it temporarily in the cloud. The event hub can asynchronously process millions of data points per second, eliminating data traffic jams, making it possible to easily handle huge amounts of information in real time.
### The MedTech service
These stages are:
### FHIR service
-The MedTech service data processing is complete when the new FHIR Observation resource is successfully persisted and saved into the FHIR service. Now it's ready for use by the care team, clinician, laboratory, or research facility.
+The MedTech service data processing is complete when the new FHIR Observation resource is successfully persisted, saved into the FHIR service, and ready for use.
## Key features of the MedTech service
The MedTech service has many features that make it secure, configurable, scalab
### Secure
-The MedTech service delivers your data to FHIR service in Azure Health Data Services, ensuring that your Protected Personal Health Information (PHI) has unparalleled security and advanced threat protection. The FHIR service isolates your data in a unique database per API instance and protects it with multi-region failover. In addition, the MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets.
+The MedTech service delivers your data to FHIR service in Azure Health Data Services, ensuring that your data has unparalleled security and advanced threat protection. The FHIR service isolates your data in a unique database per API instance and protects it with multi-region failover. In addition, the MedTech service uses [Azure role-based access control (Azure RBAC)](../../role-based-access-control/overview.md) and a [system-assigned managed identity](../../active-directory/managed-identities-azure-resources/overview.md) for extra security and control of your MedTech service assets.
### Configurable
The MedTech service can be customized and configured by using [device](how-to-co
Useful options could include:

-- Linking Devices and health care consumers together for enhanced insights, trend capture, interoperability between systems, and proactive and remote monitoring.
+- Linking devices and consumers together for enhanced insights, trend capture, interoperability between systems, and proactive and remote monitoring.
- FHIR observation resources that can be created or updated according to existing or new templates.

-- Being able to choose Health data terms that work best for your organization and provide consistency in device data ingestion. For example, you could have either "hr" or "heart rate" or "Heart Rate" to define heart rate information.
+- Being able to choose data terms that work best for your organization and provide consistency in device data ingestion.
- Facilitating customization, editing, testing, and troubleshooting of MedTech service Device and FHIR destination mappings with the [IoMT Connector Data Mapper](https://github.com/microsoft/iomt-fhir/tree/master/tools/data-mapper) open-source tool.
iot-develop Quickstart Devkit Espressif Esp32 Freertos Iot Hub https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-develop/quickstart-devkit-espressif-esp32-freertos-iot-hub.md
+
+ Title: Connect an ESPRESSIF ESP-32 to Azure IoT Hub quickstart
+description: Use Azure IoT middleware for FreeRTOS to connect an ESPRESSIF ESP32-Azure IoT Kit device to Azure IoT Hub and send telemetry.
+++
+ms.devlang: c
+ Last updated : 01/20/2023+
+#Customer intent: As a device builder, I want to see a working IoT device sample using FreeRTOS to connect to Azure IoT Hub. The device should be able to send telemetry and respond to commands. As a solution builder, I want to use a tool to view the properties, commands, and telemetry an IoT Plug and Play device reports to the IoT hub it connects to.
++
+# Quickstart: Connect an ESPRESSIF ESP32-Azure IoT Kit to IoT Hub
+
+**Applies to**: [Embedded device development](about-iot-develop.md#embedded-device-development)<br>
+**Total completion time**: 45 minutes
+
+In this quickstart, you use the Azure IoT middleware for FreeRTOS to connect the ESPRESSIF ESP32-Azure IoT Kit (from now on, the ESP32 DevKit) to Azure IoT.
+
+You'll complete the following tasks:
+
+* Install a set of embedded development tools for programming an ESP32 DevKit
+* Build an image and flash it onto the ESP32 DevKit
+* Use Azure CLI to create and manage an Azure IoT hub that the ESP32 DevKit will securely connect to
+* Use Azure IoT Explorer to register a device with your IoT hub, view device properties, view device telemetry, and call direct commands on the device
+
+## Prerequisites
+
+* A PC running Windows 10 or Windows 11
+* [Git](https://git-scm.com/downloads) for cloning the repository
+* Hardware
+ * ESPRESSIF [ESP32-Azure IoT Kit](https://www.espressif.com/products/devkits/esp32-azure-kit/overview)
+ * USB 2.0 A male to Micro USB male cable
+ * Wi-Fi 2.4 GHz
+* An active Azure subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prepare the development environment
+
+### Install the tools
+To set up your development environment, first you install the ESPRESSIF ESP-IDF build environment. The installer includes all the tools required to clone, build, flash, and monitor your device.
+
+To install the ESP-IDF tools:
+1. Download and launch the [ESP-IDF v5.0 Offline-installer](https://dl.espressif.com/dl/esp-idf).
+1. When the installer lists components to install, select all components and complete the installation.
++
+### Clone the repo
+
+Clone the following repo to download all sample device code, setup scripts, and SDK documentation. If you previously cloned this repo, you don't need to do it again.
+
+To clone the repo, run the following command:
+
+```shell
+git clone --recursive https://github.com/Azure-Samples/iot-middleware-freertos-samples.git
+```
+
+For Windows 10 and 11, make sure long paths are enabled.
+
+1. To enable long paths, see [Enable long paths in Windows 10](/windows/win32/fileio/maximum-file-path-limitation?tabs=registry).
+1. In git, run the following command in a terminal with administrator permissions:
+
+ ```shell
+ git config --system core.longpaths true
+ ```
+
+## Create the cloud components
+
+### Create an IoT hub
+
+You can use Azure CLI to create an IoT hub that handles events and messaging for your device.
+
+To create an IoT hub:
+
+1. Launch your CLI app. To run the CLI commands in the rest of this quickstart, copy the command syntax, paste it into your CLI app, edit variable values, and press Enter.
+ - If you're using Cloud Shell, right-click the link for [Cloud Shell](https://shell.azure.com/bash), and select the option to open in a new tab.
+ - If you're using Azure CLI locally, start your CLI console app and sign in to Azure CLI.
+
+1. Run [az extension add](/cli/azure/extension#az-extension-add) to install or upgrade the *azure-iot* extension to the current version.
+
+ ```azurecli-interactive
+ az extension add --upgrade --name azure-iot
+ ```
+
+1. Run the [az group create](/cli/azure/group#az-group-create) command to create a resource group. The following command creates a resource group named *MyResourceGroup* in the *centralus* region.
+
+ > [!NOTE]
+ > You can optionally set an alternate `location`. To see available locations, run [az account list-locations](/cli/azure/account#az-account-list-locations).
+
+ ```azurecli
+ az group create --name MyResourceGroup --location centralus
+ ```
+
+1. Run the [az iot hub create](/cli/azure/iot/hub#az-iot-hub-create) command to create an IoT hub. It might take a few minutes to create an IoT hub.
+
+    *YourIoTHubName*. Replace this placeholder in the code with the name you chose for your IoT hub. An IoT hub name must be globally unique in Azure. This placeholder is used in the rest of this quickstart to represent your unique IoT hub name.
+
+ The `--sku F1` parameter creates the IoT hub in the Free tier. Free tier hubs have a limited feature set and are used for proof of concept applications. For more information on IoT Hub tiers, features, and pricing, see [Azure IoT Hub pricing](https://azure.microsoft.com/pricing/details/iot-hub).
+
+ ```azurecli
+ az iot hub create --resource-group MyResourceGroup --name {YourIoTHubName} --sku F1 --partition-count 2
+ ```
+
+1. After the IoT hub is created, view the JSON output in the console, and copy the `hostName` value to use in a later step. The `hostName` value looks like the following example:
+
+ `{Your IoT hub name}.azure-devices.net`
+
+### Configure IoT Explorer
+
+In the rest of this quickstart, you'll use IoT Explorer to register a device to your IoT hub, to view the device properties and telemetry, and to send commands to your device. In this section, you configure IoT Explorer to connect to the IoT hub you created, and to read plug and play models from the public model repository.
+
+To add a connection to your IoT hub:
+
+1. In your CLI app, run the [az iot hub connection-string show](/cli/azure/iot/hub/connection-string#az-iot-hub-connection-string-show) command to get the connection string for your IoT hub.
+
+ ```azurecli
+ az iot hub connection-string show --hub-name {YourIoTHubName}
+ ```
+
+1. Copy the connection string without the surrounding quotation characters.
+1. In Azure IoT Explorer, select **IoT hubs** on the left menu.
+1. Select **+ Add connection**.
+1. Paste the connection string into the **Connection string** box.
+1. Select **Save**.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-add-connection.png" alt-text="Screenshot of adding a connection in IoT Explorer.":::
+
+If the connection succeeds, IoT Explorer switches to the **Devices** view.
+
+To add the public model repository:
+
+1. In IoT Explorer, select **Home** to return to the home view.
+1. On the left menu, select **IoT Plug and Play Settings**, then select **+Add** and select **Public repository** from the drop-down menu.
+1. An entry appears for the public model repository at `https://devicemodels.azure.com`.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-add-public-repository.png" alt-text="Screenshot of adding the public model repository in IoT Explorer.":::
+
+1. Select **Save**.
+
+### Register a device
+
+In this section, you create a new device instance and register it with the IoT hub you created. You'll use the connection information for the newly registered device to securely connect your physical device in a later section.
+
+To register a device:
+
+1. From the home view in IoT Explorer, select **IoT hubs**.
+1. The connection you previously added should appear. Select **View devices in this hub** below the connection properties.
+1. Select **+ New** and enter a device ID for your device; for example, `mydevice`. Leave all other properties the same.
+1. Select **Create**.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-device-created.png" alt-text="Screenshot of Azure IoT Explorer device identity.":::
+
+1. Use the copy buttons to copy the **Device ID** and **Primary key** fields.
+
+Before continuing to the next section, save each of the following values retrieved from earlier steps to a safe location. You use these values in the next section to configure your device. (An Azure CLI alternative for registering the device and retrieving its key appears after this list.)
+
+* `hostName`
+* `deviceId`
+* `primaryKey`
++
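+If you prefer the Azure CLI to IoT Explorer, you can register the device and retrieve its primary key from the command line instead. A minimal sketch:
+
+```azurecli
+# Register the device
+az iot hub device-identity create --device-id mydevice --hub-name {YourIoTHubName}
+
+# Retrieve the device's primary symmetric key
+az iot hub device-identity show \
+    --device-id mydevice \
+    --hub-name {YourIoTHubName} \
+    --query authentication.symmetricKey.primaryKey --output tsv
+```
+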
+## Prepare the device
+To connect the ESP32 DevKit to Azure, you'll modify configuration settings, build the image, and flash the image to the device.
+
+### Set up the environment
+To launch the ESP-IDF environment:
+1. Select Windows **Start**, find **ESP-IDF 5.0 CMD** and run it.
+1. In **ESP-IDF 5.0 CMD**, navigate to the *iot-middleware-freertos-samples* directory that you cloned previously.
+1. Navigate to the ESP32-Azure IoT Kit project directory *demos\projects\ESPRESSIF\aziotkit*.
+1. Run the following command to launch the configuration menu:
+
+ ```shell
+ idf.py menuconfig
+ ```
+
+### Add configuration
+
+To add wireless network configuration:
+1. In **ESP-IDF 5.0 CMD**, select **Azure IoT middleware for FreeRTOS Sample Configuration >**, and press <kbd>Enter</kbd>.
+1. Set the following configuration settings using your local wireless network credentials.
+
+ |Setting|Value|
+ |-|--|
+ |**WiFi SSID** |{*Your Wi-Fi SSID*}|
+ |**WiFi Password** |{*Your Wi-Fi password*}|
+
+1. Select <kbd>Esc</kbd> to return to the previous menu.
+
+To add configuration to connect to Azure IoT Hub:
+1. Select **Azure IoT middleware for FreeRTOS Main Task Configuration >**, and press <kbd>Enter</kbd>.
+1. Set the following Azure IoT configuration settings to the values that you saved after you created Azure resources.
+
+ |Setting|Value|
+ |-|--|
+ |**Azure IoT Hub FQDN** |{*Your host name*}|
+ |**Azure IoT Device ID** |{*Your Device ID*}|
+ |**Azure IoT Device Symmetric Key** |{*Your primary key*}|
+
+ > [!NOTE]
+ > In the setting **Azure IoT Authentication Method**, confirm that the default value of *Symmetric Key* is selected.
+
+1. Select <kbd>Esc</kbd> to return to the previous menu.
++
+To save the configuration:
+1. Select <kbd>Shift</kbd>+<kbd>S</kbd> to open the save options. This lets you save the configuration to a file named *sdkconfig* in the current *.\aziotkit* directory.
+1. Select <kbd>Enter</kbd> to save the configuration.
+1. Select <kbd>Enter</kbd> to dismiss the acknowledgment message.
+1. Select <kbd>Q</kbd> to quit the configuration menu.
++
+### Build and flash the image
+In this section, you use the ESP-IDF tools to build, flash, and monitor the ESP32 DevKit as it connects to Azure IoT.
+
+> [!NOTE]
+> In the following commands in this section, use a short build output path near your root directory. Specify the build path after the `-B` parameter in each command that requires it. The short path helps to avoid a current issue in the ESPRESSIF ESP-IDF tools that can cause errors with long build path names. The following commands use a local path *C:\espbuild* as an example.
+
+To build the image:
+1. In **ESP-IDF 5.0 CMD**, from the *iot-middleware-freertos-samples\demos\projects\ESPRESSIF\aziotkit* directory, run the following command to build the image.
+
+ ```shell
+ idf.py --no-ccache -B "C:\espbuild" build
+ ```
+
+1. After the build completes, confirm that the binary image file was created in the build path that you specified previously.
+
+ *C:\espbuild\azure_iot_freertos_esp32.bin*
+
+To flash the image:
+1. On the ESP32 DevKit, locate the Micro USB port, which is highlighted in the following image:
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/esp-azure-iot-kit.png" alt-text="Photo of the ESP32-Azure IoT Kit board.":::
+
+1. Connect the Micro USB cable to the Micro USB port on the ESP32 DevKit, and then connect it to your computer.
+1. Open Windows **Device Manager**, and view **Ports** to find out which COM port the ESP32 DevKit is connected to.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/esp-device-manager.png" alt-text="Screenshot of Windows Device Manager displaying COM port for a connected device.":::
+
+1. In **ESP-IDF 5.0 CMD**, run the following command, replacing the *\<Your-COM-port\>* placeholder and brackets with the correct COM port from the previous step. For example, replace the placeholder with `COM3`.
+
+ ```shell
+ idf.py --no-ccache -B "C:\espbuild" -p <Your-COM-port> flash
+ ```
+
+1. Confirm that the output completes with the following text for a successful flash:
+
+ ```output
+ Hash of data verified
+
+ Leaving...
+ Hard resetting via RTS pin...
+ Done
+ ```
+
+To confirm that the device connects to Azure IoT Hub:
+1. In **ESP-IDF 5.0 CMD**, run the following command to start the monitoring tool. As you did in a previous command, replace the *\<Your-COM-port\>* placeholder and brackets with the COM port that the device is connected to.
+
+ ```shell
+ idf.py -B "C:\espbuild" -p <Your-COM-port> monitor
+ ```
+
+1. Check for repeating blocks of output similar to the following example. This output confirms that the device connects to Azure IoT and sends telemetry.
+
+ ```output
+ I (50807) AZ IOT: Successfully sent telemetry message
+ I (50807) AZ IOT: Attempt to receive publish message from IoT Hub.
+
+ I (51057) MQTT: Packet received. ReceivedBytes=2.
+ I (51057) MQTT: Ack packet deserialized with result: MQTTSuccess.
+ I (51057) MQTT: State record updated. New state=MQTTPublishDone.
+ I (51067) AZ IOT: Puback received for packet id: 0x00000008
+ I (53067) AZ IOT: Keeping Connection Idle...
+ ```
+
+## View device properties
+
+You can use Azure IoT Explorer to view and manage the properties of your devices. In the following sections, you'll use the Plug and Play capabilities that are visible in IoT Explorer to manage and interact with the ESP32 DevKit. These capabilities rely on the device model published for the ESP32 DevKit in the public model repository. You configured IoT Explorer to search this repository for device models earlier in this quickstart. In many cases, you can perform the same action without using plug and play by selecting IoT Explorer menu options. However, using plug and play often provides an enhanced experience. IoT Explorer can read the device model specified by a plug and play device and present information specific to that device.
+
+To access IoT Plug and Play components for the device in IoT Explorer:
+
+1. From the home view in IoT Explorer, select **IoT hubs**, then select **View devices in this hub**.
+1. Select your device.
+1. Select **IoT Plug and Play components**.
+1. Select **Default component**. IoT Explorer displays the IoT Plug and Play components that are implemented on your device.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-default-component-view.png" alt-text="Screenshot of the device's default component in IoT Explorer.":::
+
+1. On the **Interface** tab, view the JSON content in the device model **Description**. The JSON contains configuration details for each of the IoT Plug and Play components in the device model.
+
+ Each tab in IoT Explorer corresponds to one of the IoT Plug and Play components in the device model.
+
+ | Tab | Type | Name | Description |
+ |||||
+ | **Interface** | Interface | `Espressif ESP32 Azure IoT Kit` | Example device model for the ESP32 DevKit |
+ | **Properties (writable)** | Property | `telemetryFrequencySecs` | The interval that the device sends telemetry |
+ | **Commands** | Command | `ToggleLed1` | Turn the LED on or off |
+ | **Commands** | Command | `ToggleLed2` | Turn the LED on or off |
+ | **Commands** | Command | `DisplayText` | Displays sent text on the device screen |
+
+To view and edit device properties using Azure IoT Explorer:
+
+1. Select the **Properties (writable)** tab. It displays the interval that telemetry is sent.
+1. Change the `telemetryFrequencySecs` value to *5*, and then select **Update desired value**. Your device now uses this interval to send telemetry.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-set-telemetry-interval.png" alt-text="Screenshot of setting telemetry interval on the device in IoT Explorer.":::
+
+1. IoT Explorer responds with a notification.
+
+To use Azure CLI to view device properties:
+
+1. In your CLI console, run the [az iot hub device-twin show](/cli/azure/iot/hub/device-twin#az-iot-hub-device-twin-show) command.
+
+ ```azurecli
+ az iot hub device-twin show --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. Inspect the properties for your device in the console output.
+
+> [!TIP]
+> You can also use Azure IoT Explorer to view device properties. In the left navigation, select **Device twin**.
+
+## View telemetry
+
+With Azure IoT Explorer, you can view the flow of telemetry from your device to the cloud. Optionally, you can do the same task using Azure CLI.
+
+To view telemetry in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Telemetry** tab. Confirm that **Use built-in event hub** is set to *Yes*.
+1. Select **Start**.
+1. View the telemetry as the device sends messages to the cloud.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-device-telemetry.png" alt-text="Screenshot of device telemetry in IoT Explorer.":::
+
+1. Select the **Show modeled events** checkbox to view the events in the data format specified by the device model.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-show-modeled-events.png" alt-text="Screenshot of modeled telemetry events in IoT Explorer.":::
+
+1. Select **Stop** to end receiving events.
+
+To use Azure CLI to view device telemetry:
+
+1. Run the [az iot hub monitor-events](/cli/azure/iot/hub#az-iot-hub-monitor-events) command. Use the names that you created previously in Azure IoT for your device and IoT hub.
+
+ ```azurecli
+ az iot hub monitor-events --device-id mydevice --hub-name {YourIoTHubName}
+ ```
+
+1. View the JSON output in the console.
+
+ ```json
+ {
+ "event": {
+ "origin": "mydevice",
+ "module": "",
+ "interface": "dtmi:azureiot:devkit:freertos:Esp32AzureIotKit;1",
+ "component": "",
+ "payload": "{\"temperature\":28.6,\"humidity\":25.1,\"light\":116.66,\"pressure\":-33.69,\"altitude\":8764.9,\"magnetometerX\":1627,\"magnetometerY\":28373,\"magnetometerZ\":4232,\"pitch\":6,\"roll\":0,\"accelerometerX\":-1,\"accelerometerY\":0,\"accelerometerZ\":9}"
+ }
+ }
+ ```
+
+1. Press CTRL+C to end monitoring.
++
+## Call a direct method on the device
+
+You can also use Azure IoT Explorer to call a direct method that you've implemented on your device. Direct methods have a name, and can optionally have a JSON payload, configurable connection, and method timeout. In this section, you call a method that turns an LED on or off. Optionally, you can do the same task using Azure CLI.
+
+To call a method in Azure IoT Explorer:
+
+1. From the **IoT Plug and Play components** (Default Component) pane for your device in IoT Explorer, select the **Commands** tab.
+1. For the **ToggleLed1** command, select **Send command**. The LED on the ESP32 DevKit toggles on or off. You should also see a notification in IoT Explorer.
+
+ :::image type="content" source="media/quickstart-devkit-espressif-esp32-iot-hub/iot-explorer-invoke-method.png" alt-text="Screenshot of calling a method in IoT Explorer.":::
+
+1. For the **DisplayText** command, enter some text in the **content** field.
+1. Select **Send command**. The text displays on the ESP32 DevKit screen.
++
+To use Azure CLI to call a method:
+
+1. Run the [az iot hub invoke-device-method](/cli/azure/iot/hub#az-iot-hub-invoke-device-method) command, and specify the method name and payload. For this method, setting `method-payload` to `true` means the LED toggles to the opposite of its current state.
++
+ ```azurecli
+ az iot hub invoke-device-method --device-id mydevice --method-name ToggleLed2 --method-payload true --hub-name {YourIoTHubName}
+ ```
+
+ The CLI console shows the status of your method call on the device, where `200` indicates success.
+
+ ```json
+ {
+ "payload": {},
+ "status": 200
+ }
+ ```
+
+1. Check your device to confirm the LED state.
+
+## Troubleshoot and debug
+
+If you experience issues building the device code, flashing the device, or connecting, see [Troubleshooting](troubleshoot-embedded-device-quickstarts.md).
+
+For debugging the application, see [Debugging with Visual Studio Code](https://github.com/azure-rtos/getting-started/blob/master/docs/debugging.md).
+
+## Clean up resources
+
+If you no longer need the Azure resources created in this quickstart, you can use the Azure CLI to delete the resource group and all of its resources.
+
+> [!IMPORTANT]
+> Deleting a resource group is irreversible. The resource group and all the resources contained in it are permanently deleted. Make sure that you do not accidentally delete the wrong resource group or resources.
+
+To delete a resource group by name:
+
+1. Run the [az group delete](/cli/azure/group#az-group-delete) command. This command removes the resource group, the IoT Hub, and the device registration you created.
+
+ ```azurecli-interactive
+ az group delete --name MyResourceGroup
+ ```
+
+1. Run the [az group list](/cli/azure/group#az-group-list) command to confirm the resource group is deleted.
+
+ ```azurecli-interactive
+ az group list
+ ```
+
+## Next steps
+
+In this quickstart, you built a custom image that contains the Azure IoT middleware for FreeRTOS sample code, and then you flashed the image to the ESP32 DevKit device. You connected the ESP32 DevKit to Azure IoT Hub, and carried out tasks such as viewing telemetry and calling methods on the device.
+
+As a next step, explore the following articles to learn more about using the IoT device SDKs to connect devices to Azure IoT.
+
+> [!div class="nextstepaction"]
+> [Connect a simulated general device to IoT Hub](quickstart-send-telemetry-iot-hub.md)
+> [!div class="nextstepaction"]
+> [Learn more about connecting embedded devices using C SDK and Embedded C SDK](concepts-using-c-sdk-and-embedded-c-sdk.md)
iot-dps Concepts Control Access Dps Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps-azure-ad.md
Title: Access control and security for DPS by using Azure Active Directory | Mi
description: Concepts - how to control access to Azure IoT Hub Device Provisioning Service (DPS) for back-end apps. Includes information about Azure Active Directory and RBAC. -++ Last updated 02/07/2022 - + # Control access to Azure IoT Hub Device Provisioning Service (DPS) by using Azure Active Directory (preview) You can use Azure Active Directory (Azure AD) to authenticate requests to Azure IoT Hub Device Provisioning Service (DPS) APIs, like create device identity and invoke direct method. You can also use Azure role-based access control (Azure RBAC) to authorize those same service APIs. By using these technologies together, you can grant permissions to access Azure IoT Hub Device Provisioning Service (DPS) APIs to an Azure AD security principal. This security principal could be a user, group, or application service principal.
iot-dps Concepts Control Access Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-control-access-dps.md
Title: Access control and security for Azure IoT Hub Device Provisioning Servic
description: Overview on how to control access to Azure IoT Hub Device Provisioning Service (DPS), includes links to in-depth articles on Azure Active Directory integration (Public Preview) and SAS options. -++ Last updated 04/20/2022 - # Control access to Azure IoT Hub Device Provisioning Service (DPS)
iot-dps Concepts Device Oem Security Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/concepts-device-oem-security-practices.md
Last updated 3/02/2020 -++ -- + # Security practices for Azure IoT device manufacturers As more manufacturers release IoT devices, it's helpful to identify guidance around common practices. This article summarizes recommended security practices to consider when you manufacture devices for use with Azure IoT Device Provisioning Service (DPS).
iot-dps How To Manage Linked Iot Hubs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-manage-linked-iot-hubs.md
Title: How to manage linked IoT hubs with Device Provisioning Service (DPS)
description: This article shows how to link and manage IoT hubs with the Device Provisioning Service (DPS). Previously updated : 10/24/2022 Last updated : 01/18/2023
When you link an IoT hub to your DPS instance, it becomes available to participa
* For enrollments that explicitly set the IoT hubs to apply allocation policy to, you'll need to manually or programmatically add the new IoT hub to the enrollment settings for it to participate in allocation.
+### Limitations
+
+* There are some limitations when working with linked IoT hubs and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
+
+* The linked IoT Hub must have [Connect using shared access policies](../iot-hub/iot-hub-dev-guide-azure-ad-rbac.md#azure-ad-access-and-shared-access-policies) set to **Allow**.
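+
+A sketch for checking this setting with the Azure CLI, assuming the portal setting corresponds to the IoT hub's `disableLocalAuth` property (where `false` means **Allow**):
+
+```azurecli
+az iot hub show --name {your IoT hub name} --query properties.disableLocalAuth
+```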
+
### Use the Azure portal to link an IoT hub

In the Azure portal, you can link an IoT hub either from the left menu of your DPS instance or from the enrollment when creating or updating an enrollment. In both cases, the IoT hub is scoped to the DPS instance (not just the enrollment).
To update symmetric keys for a linked IoT hub with the Azure CLI:
az iot dps update --name MyExampleDps --set properties.iotHubs[0].connectionString="HostName=MyExampleHub-2.azure-devices.net;SharedAccessKeyName=iothubowner;SharedAccessKey=NewTokenValue" ```
-## Limitations
-
-There are some limitations when working with linked IoT hubs and private endpoints. For more information, see [Private endpoint limitations](virtual-network-support.md#private-endpoint-limitations).
- ## Next steps * To learn more about allocation policies, see [Manage allocation policies](how-to-use-allocation-policies.md).
iot-dps Iot Dps Customer Data Requests https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-customer-data-requests.md
Title: Customer data request features for Azure DPS devices description: For devices managed in Azure Device Provisioning Service (DPS) that are personal, this article shows admins how to export or delete personal data.--++ Last updated 05/16/2018 -+ - # Summary of customer data request features
iot-dps Iot Dps Mqtt Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/iot-dps-mqtt-support.md
Title: Understand Azure IoT Device Provisioning Service MQTT support | Microsoft Docs description: Developer guide - support for devices connecting to the Azure IoT Device Provisioning Service (DPS) device-facing endpoint using the MQTT protocol. -+ Last updated 02/25/2022 - + # Communicate with your DPS using the MQTT protocol DPS enables devices to communicate with the DPS device endpoint using:
iot-dps Quick Setup Auto Provision Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/quick-setup-auto-provision-bicep.md
description: Azure quickstart - Learn how to create an Azure IoT Hub Device Prov
Last updated 08/17/2022 -+ - # Quickstart: Set up the IoT Hub Device Provisioning Service (DPS) with Bicep
iot-edge How To Connect Downstream Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-device.md
Title: Connect downstream devices - Azure IoT Edge | Microsoft Docs
+ Title: Connect a downstream device to an Azure IoT Edge gateway
description: How to configure downstream devices to connect to Azure IoT Edge gateway devices. Previously updated : 06/02/2022 Last updated : 01/09/2023
iot-edge How To Connect Downstream Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-connect-downstream-iot-edge-device.md
description: Step by step adaptable manual instructions on how to create a hiera
Previously updated : 10/5/2022 Last updated : 01/17/2023
To configure your parent device, open a local or remote command shell.
To enable secure connections, every IoT Edge parent device in a gateway scenario needs to be configured with a unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
-01. Transfer the **root CA certificate**, **parent device CA certificate**, and **parent private key** to the parent device. The examples in this article use the preferred directory `/var/aziot` for the certificates and keys.
+01. Check that your certificates meet the [format requirements](how-to-manage-device-certificates.md#format-requirements).
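+
+    A quick way to inspect a PEM-formatted certificate (issuer, subject, validity) is with OpenSSL; for example, using the gateway certificate name from the later steps:
+
+    ```bash
+    # Print the certificate's details in human-readable form
+    openssl x509 -in iot-edge-device-ca-gateway-full-chain.cert.pem -noout -text
+    ```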
-01. Install the **root CA certificate** on the parent IoT Edge device. First, copy the root certificate into the certificate directory and add `.crt` to the end of the file name. Next, update the certificate store on the device using the platform-specific command.
+01. Transfer the **root CA certificate**, **parent device CA certificate**, and **parent private key** to the parent device.
- **Debian or Ubuntu:**
+01. Copy the certificates and keys to the correct directories. The preferred directories for device certificates are `/var/aziot/certs` for the certificates and `/var/aziot/secrets` for keys.
```bash
- sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+ ### Copy device certificate ###
- sudo update-ca-certificates
+ # If the device certificate and keys directories don't exist, create, set ownership, and set permissions
+ sudo mkdir -p /var/aziot/certs
+ sudo chown aziotcs:aziotcs /var/aziot/certs
+ sudo chmod 755 /var/aziot/certs
+
+ sudo mkdir -p /var/aziot/secrets
+ sudo chown aziotks:aziotks /var/aziot/secrets
+ sudo chmod 700 /var/aziot/secrets
+
+ # Copy full-chain device certificate and private key into the correct directory
+ sudo cp iot-edge-device-ca-gateway-full-chain.cert.pem /var/aziot/certs
+ sudo cp iot-edge-device-ca-gateway.key.pem /var/aziot/secrets
+
+ ### Root certificate ###
+
+ # Copy root certificate into the /certs directory
+ sudo cp azure-iot-test-only.root.ca.cert.pem /var/aziot/certs
+
+ # Copy root certificate into the CA certificate directory and add .crt extension.
+ # The root certificate must be in the CA certificate directory to install it in the certificate store.
+ # Use the appropriate copy command for your device OS or if using EFLOW.
+
+ # For Ubuntu and Debian, use /usr/local/share/ca-certificates/
+    sudo cp azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+ # For EFLOW, use /etc/pki/ca-trust/source/anchors/
+ sudo cp azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.pem.crt
```
- **IoT Edge for Linux on Windows (EFLOW):**
+01. Change the ownership and permissions of the certificates and keys.
```bash
- sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
+ # Give aziotcs ownership to certificates
+ # Read and write for aziotcs, read-only for others
+ sudo chown -R aziotcs:aziotcs /var/aziot/certs
+ sudo find /var/aziot/certs -type f -name "*.*" -exec chmod 644 {} \;
+
+ # Give aziotks ownership to private keys
+ # Read and write for aziotks, no permission for others
+ sudo chown -R aziotks:aziotks /var/aziot/secrets
+ sudo find /var/aziot/secrets -type f -name "*.*" -exec chmod 600 {} \;
+
+ # Verify permissions of directories and files
+ sudo ls -Rla /var/aziot
+ ```
+    The output of the list command, with correct ownership and permissions, is similar to the following:
+
+ ```Output
+ azureUser@vm-h2hnm5j5uxk2a:/var/aziot$ sudo ls -Rla /var/aziot
+ /var/aziot:
+ total 16
+ drwxr-xr-x 4 root root 4096 Dec 14 00:16 .
+ drwxr-xr-x 15 root root 4096 Dec 14 00:15 ..
+ drw-r--r-- 2 aziotcs aziotcs 4096 Jan 14 00:31 certs
+    drwx------ 2 aziotks aziotks 4096 Jan 14 00:35 secrets
+
+ /var/aziot/certs:
+ total 20
+ drw-r--r-- 2 aziotcs aziotcs 4096 Jan 14 00:31 .
+ drwxr-xr-x 4 root root 4096 Dec 14 00:16 ..
+ -rw-r--r-- 1 aziotcs aziotcs 1984 Jan 14 00:24 azure-iot-test-only.root.ca.cert.pem
+ -rw-r--r-- 1 aziotcs aziotcs 5887 Jan 14 00:27 iot-edge-device-ca-gateway-full-chain.cert.pem
+
+ /var/aziot/secrets:
+ total 20
+    drwx------ 2 aziotks aziotks 4096 Jan 14 00:35 .
+ drwxr-xr-x 4 root root 4096 Dec 14 00:16 ..
+    -rw------- 1 aziotks aziotks 3326 Jan 14 00:29 azure-iot-test-only.root.ca.key.pem
+    -rw------- 1 aziotks aziotks 3243 Jan 14 00:28 iot-edge-device-ca-gateway.key.pem
+ ```
+
+
+01. Install the **root CA certificate** on the parent IoT Edge device by updating the certificate store on the device using the platform-specific command.
+
+ ```bash
+ # Update the certificate store
+
+ # For Ubuntu and Debian, use update-ca-certificates command
+ sudo update-ca-certificates
+ # For EFLOW, use update-ca-trust
sudo update-ca-trust ```
- For more information about using `update-ca-trust`, see [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
+
+ For more information about using `update-ca-trust` in EFLOW, see [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
The command reports one certificate was added to `/etc/ssl/certs`.
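To confirm the installation, you can list the certificate store. A quick check (a sketch, assuming the demo certificate name used in this article and an Ubuntu or Debian store):
```bash
# The root CA should now appear in the system certificate store
ls /etc/ssl/certs | grep -i azure-iot-test-only
```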
To configure your downstream device, open a local or remote command shell.
To enable secure connections, every IoT Edge downstream device in a gateway scenario needs to be configured with a unique device CA certificate and a copy of the root CA certificate shared by all devices in the gateway hierarchy.
-01. Transfer the **root CA certificate**, **child device CA certificate**, and **child private key** to the downstream device. The examples in this article use the directory `/var/aziot` for the certificates and keys directory.
+01. Check that your certificates meet the [format requirements](how-to-manage-device-certificates.md#format-requirements).
-01. Install the **root CA certificate** on the downstream IoT Edge device. First, copy the root certificate into the certificate directory and add `.crt` to the end of the file name. Next, update the certificate store on the device using the platform-specific command.
+01. Transfer the **root CA certificate**, **child device CA certificate**, and **child private key** to the downstream device.
- **Debian or Ubuntu:**
+01. Copy the certificates and keys to the correct directories. The preferred directories are `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
```bash
- sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+ ### Copy device certificate ###
- sudo update-ca-certificates
+ # If the device certificate and keys directories don't exist, create, set ownership, and set permissions
+ sudo mkdir -p /var/aziot/certs
+ sudo chown aziotcs:aziotcs /var/aziot/certs
+ sudo chmod 755 /var/aziot/certs
+
+ sudo mkdir -p /var/aziot/secrets
+ sudo chown aziotks:aziotks /var/aziot/secrets
+ sudo chmod 700 /var/aziot/secrets
+
+ # Copy device full-chain certificate and private key into the correct directory
+ sudo cp iot-device-downstream-full-chain.cert.pem /var/aziot/certs
+ sudo cp iot-device-downstream.key.pem /var/aziot/secrets
+
+ ### Root certificate ###
+
+ # Copy root certificate into the /certs directory
+ sudo cp azure-iot-test-only.root.ca.cert.pem /var/aziot/certs
+
+ # Copy root certificate into the CA certificate directory and add .crt extension.
+ # The root certificate must be in the CA certificate directory to install it in the certificate store.
+ # Use the appropriate copy command for your device OS or if using EFLOW.
+
+ # For Ubuntu and Debian, use /usr/local/share/ca-certificates/
+ sudo cp azure-iot-test-only.root.ca.cert.pem /usr/local/share/ca-certificates/azure-iot-test-only.root.ca.cert.pem.crt
+ # For EFLOW, use /etc/pki/ca-trust/source/anchors/
+ sudo cp azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.pem.crt
```
- **IoT Edge for Linux on Windows (EFLOW):**
+01. Change the ownership and permissions of the certificates and keys.
```bash
- sudo cp /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem /etc/pki/ca-trust/source/anchors/azure-iot-test-only.root.ca.cert.pem.crt
+ # Give aziotcs ownership to certificates
+ # Read and write for aziotcs, read-only for others
+ sudo chown -R aziotcs:aziotcs /var/aziot/certs
+ sudo find /var/aziot/certs -type f -name "*.*" -exec chmod 644 {} \;
+
+ # Give aziotks ownership to private keys
+ # Read and write for aziotks, no permission for others
+ sudo chown -R aziotks:aziotks /var/aziot/secrets
+ sudo find /var/aziot/secrets -type f -name "*.*" -exec chmod 600 {} \;
+ ```
+01. Install the **root CA certificate** on the downstream IoT Edge device by updating the certificate store on the device using the platform-specific command.
+
+ ```bash
+ # Update the certificate store
+
+ # For Ubuntu and Debian, use update-ca-certificates command
+ sudo update-ca-certificates
+ # For EFLOW, use update-ca-trust
sudo update-ca-trust
```
- For more information about using `update-ca-trust`, see [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
+
+ For more information about using `update-ca-trust` in EFLOW, see [CBL-Mariner SSL CA certificates management](https://github.com/microsoft/CBL-Mariner/blob/1.0/toolkit/docs/security/ca-certificates.md).
The command reports one certificate was added to `/etc/ssl/certs`.
You should already have IoT Edge installed on your device. If not, follow the st
```toml
[edge_ca]
- cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream-full-chain.cert.pem"
- pk = "file:///var/aziot/secrets/iot-edge-device-ca-downstream.key.pem"
+ cert = "file:///var/aziot/certs/iot-device-downstream-full-chain.cert.pem"
+ pk = "file:///var/aziot/secrets/iot-device-downstream.key.pem"
```
01. Verify your IoT Edge device uses the correct version of the IoT Edge agent when it starts. Find the **Default Edge Agent** section and set the image value for IoT Edge to version 1.4. For example:
You should already have IoT Edge installed on your device. If not, follow the st
trust_bundle_cert = "file:///var/aziot/certs/azure-iot-test-only.root.ca.cert.pem"

[edge_ca]
- cert = "file:///var/aziot/certs/iot-edge-device-ca-downstream-full-chain.cert.pem"
- pk = "file:///var/aziot/secrets/iot-edge-device-ca-downstream.key.pem"
+ cert = "file:///var/aziot/certs/iot-device-downstream-full-chain.cert.pem"
+ pk = "file:///var/aziot/secrets/iot-device-downstream.key.pem"
```
01. Save and close the `config.toml` configuration file. For example, if you're using the **nano** editor, select **Ctrl+O** - *Write Out*, **Enter**, and **Ctrl+X** - *Exit*.
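After you save the file, apply the configuration so the IoT Edge services pick up the new settings:
```bash
# Restart the IoT Edge services with the updated config.toml
sudo iotedge config apply
```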
You should already have IoT Edge installed on your device. If not, follow the st
```Output azureUser@child-vm:~$ openssl s_client -connect <parent hostname>:8883 < 2>&1 >- Can't use SSL_get_servername depth=3 CN = Azure_IoT_Hub_CA_Cert_Test_Only verify return:1
You should already have IoT Edge installed on your device. If not, follow the st
If the command times out, there may be blocked ports between the child and parent devices. Review the network configuration and settings for the devices. > [!WARNING]
- > A previous version of this document directed users to copy the `iot-edge-device-ca-gateway.cert.pem` certificate for use in the gateway `[edge_ca]` section. This was incorrect, and results in certificate validation errors from the downstream device. For example, the `openssl s_client ...` command above will produce:
+ > Not using a full-chain certificate in the gateway's `[edge_ca]` section results in certificate validation errors from the downstream device. For example, the `openssl s_client ...` command above will produce:
> > ``` > Can't use SSL_get_servername
You should already have IoT Edge installed on your device. If not, follow the st
> DONE > ``` >
- > The same issue will appear for TLS-enabled devices connecting to the downstream Edge device if `iot-edge-device-ca-downstream.cert.pem` is copied to the device instead of `iot-edge-device-ca-downstream-full-chain.cert.pem`.
+ > The same issue occurs for TLS-enabled devices that connect to the downstream IoT Edge device if the full-chain device certificate isn't used and configured on the downstream device.
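To check a certificate file before deploying it, one option is to list every certificate it contains. A minimal sketch, assuming the demo gateway file name used in this article:
```bash
# Print the subject and issuer of each certificate in the file. A full-chain
# file should list the device CA certificate plus every issuing CA up to the root.
sudo openssl crl2pkcs7 -nocrl -certfile /var/aziot/certs/iot-edge-device-ca-gateway-full-chain.cert.pem \
    | openssl pkcs7 -print_certs -noout
```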
## Network isolate downstream devices
iot-edge How To Create Transparent Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-create-transparent-gateway.md
Title: Create transparent gateway device - Azure IoT Edge | Microsoft Docs
+ Title: Create transparent gateway device using Azure IoT Edge
description: Use an Azure IoT Edge device as a transparent gateway that can process information from downstream devices Previously updated : 11/1/2022 Last updated : 01/17/2023
If you don't have your own certificate authority and want to use demo certificat
# [IoT Edge](#tab/iotedge)
+1. Check that the certificate meets the [format requirements](how-to-manage-device-certificates.md#format-requirements).
1. If you created the certificates on a different machine, copy them over to your IoT Edge device. You can use a USB drive, a service like [Azure Key Vault](../key-vault/general/overview.md), or a function like [Secure file copy](https://www.ssh.com/ssh/scp/). 1. Move the files to the preferred directory for certificates and keys. Use `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
-1. Change the ownership and permissions of the certificates and keys.
+1. Create the certificates and keys directories and set permissions. You should store your certificates and keys in the preferred `/var/aziot` directory. Use `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
```bash
+ # If the certificate and keys directories don't exist, create, set ownership, and set permissions
+ sudo mkdir -p /var/aziot/certs
sudo chown aziotcs:aziotcs /var/aziot/certs
- sudo chown -R iotedge /var/aziot/certs
- sudo chmod 644 /var/aziot/secrets/
+ sudo chmod 755 /var/aziot/certs
+
+ sudo mkdir -p /var/aziot/secrets
+ sudo chown aziotks:aziotks /var/aziot/secrets
+ sudo chmod 700 /var/aziot/secrets
+ ```
+1. Change the ownership and permissions of the certificates and keys.
+
+ ```bash
+ # Give aziotcs ownership to certificates
+ # Read and write for aziotcs, read-only for others
+ sudo chown -R aziotcs:aziotcs /var/aziot/certs
+ sudo find /var/aziot/certs -type f -name "*.*" -exec chmod 644 {} \;
+
+ # Give aziotks ownership to private keys
+ # Read and write for aziotks, no permission for others
+ sudo chown -R aziotks:aziotks /var/aziot/secrets
+ sudo find /var/aziot/secrets -type f -name "*.*" -exec chmod 600 {} \;
```
# [IoT Edge for Linux on Windows](#tab/eflow)
Now, you need to copy the certificates to the Azure IoT Edge for Linux on Windows virtual machine.
+1. Check that the certificate meets the [format requirements](how-to-manage-device-certificates.md#format-requirements).
+ 1. Copy the certificates to a directory on the EFLOW virtual machine where you have write access. For example, the `/home/iotedge-user` home directory.
```powershell
Now, you need to copy the certificates to the Azure IoT Edge for Linux on Window
Connect-EflowVm
```
-1. Create the certificates directory. You should store your certificates and keys to the preferred `/var/aziot` directory. Use `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
+1. Create the certificates and keys directories and set permissions. You should store your certificates and keys in the preferred `/var/aziot` directory. Use `/var/aziot/certs` for certificates and `/var/aziot/secrets` for keys.
```bash
+ # If the certificate and keys directories don't exist, create, set ownership, and set permissions
sudo mkdir -p /var/aziot/certs
+ sudo chown aziotcs:aziotcs /var/aziot/certs
+ sudo chmod 755 /var/aziot/certs
+ sudo mkdir -p /var/aziot/secrets
+ sudo chown aziotks:aziotks /var/aziot/secrets
+ sudo chmod 700 /var/aziot/secrets
```
1. Move the certificates and keys to the preferred `/var/aziot` directory.
```bash
# Move the IoT Edge device CA certificate and key to preferred location
+ sudo mv ~/azure-iot-test-only.root.ca.cert.pem /var/aziot/certs
sudo mv ~/iot-edge-device-ca-<cert name>-full-chain.cert.pem /var/aziot/certs sudo mv ~/iot-edge-device-ca-<cert name>.key.pem /var/aziot/secrets
- sudo mv ~/azure-iot-test-only.root.ca.cert.pem /var/aziot/certs
```
1. Change the ownership and permissions of the certificates and keys.
```bash
- sudo chown -R iotedge /var/aziot/certs
- sudo chmod 644 /var/aziot/secrets/iot-edge-device-ca-<cert name>.key.pem
+ # Give aziotcs ownership to certificate
+ # Read and write for aziotcs, read-only for others
+ sudo chown aziotcs:aziotcs /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem
+ sudo chown aziotcs:aziotcs /var/aziot/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem
+ sudo chmod 644 /var/aziot/certs/azure-iot-test-only.root.ca.cert.pem
+ sudo chmod 644 /var/aziot/certs/iot-edge-device-ca-<cert name>-full-chain.cert.pem
+
+ # Give aziotks ownership to private key
+ # Read and write for aziotks, no permission for others
+ sudo chown aziotks:aziotks /var/aziot/secrets/iot-edge-device-ca-<cert name>.key.pem
+ sudo chmod 600 /var/aziot/secrets/iot-edge-device-ca-<cert name>.key.pem
```
1. Exit the EFLOW VM connection.
Downstream devices send telemetry and messages to the gateway device, where the
* The IoT Edge hub module is deployed to the device.
- When you first install IoT Edge on a device, only one system module starts automatically: the IoT Edge agent. Once you create the first deployment for a device, the second system module, the IoT Edge hub, starts as well. If the **edgeHub** module isn't running on your device, create a deployment for your device.
+ When you first install IoT Edge on a device, only one system module starts automatically: the IoT Edge agent. Once you create the first deployment for a device, the second system module, the IoT Edge hub, starts as well. If the **edgeHub** module isn't running on your device, create a deployment for your device.
* The IoT Edge hub module has routes set up to handle incoming messages from downstream devices.
iot-edge How To Manage Device Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/how-to-manage-device-certificates.md
description: How to install and manage certificates on an Azure IoT Edge device
Previously updated : 10/20/2022 Last updated : 1/17/2023
If your PKI provider provides a `.cer` file, it may contain the same certificate
* If it's in DER (binary) format, convert it to PEM with `openssl x509 -in cert.cer -out cert.pem`. * Use the PEM file as the trust bundle. For more information about the trust bundle, see the next section.
+## Permission requirements
+
+The following table lists the file and directory permissions required for the IoT Edge certificates. The preferred directories are `/var/aziot/certs/` for certificates and `/var/aziot/secrets/` for keys.
+
+| File or directory | Permissions | Owner |
+|-|-|-|
+| `/var/aziot/certs/` certificates directory | drwxr-xr-x (755) | aziotcs |
+| Certificate files in `/var/aziot/certs/` | -rw-r--r-- (644) | aziotcs |
+| `/var/aziot/secrets/` keys directory | drwx------ (700) | aziotks |
+| Key files in `/var/aziot/secrets/` | -rw------- (600) | aziotks |
+
+To create the directories, set the permissions, and set the owner, run the following commands:
+
+```bash
+# If the certificate and keys directories don't exist, create, set ownership, and set permissions
+sudo mkdir -p /var/aziot/certs
+sudo chown aziotcs:aziotcs /var/aziot/certs
+sudo chmod 755 /var/aziot/certs
+
+sudo mkdir -p /var/aziot/secrets
+sudo chown aziotks:aziotks /var/aziot/secrets
+sudo chmod 700 /var/aziot/secrets
+
+# Give aziotcs ownership to certificates
+# Read and write for aziotcs, read-only for others
+sudo chown -R aziotcs:aziotcs /var/aziot/certs
+sudo find /var/aziot/certs -type f -name "*.*" -exec chmod 644 {} \;
+
+# Give aziotks ownership to private keys
+# Read and write for aziotks, no permission for others
+sudo chown -R aziotks:aziotks /var/aziot/secrets
+sudo find /var/aziot/secrets -type f -name "*.*" -exec chmod 600 {} \;
+
+# Verify permissions of directories and files
+sudo ls -Rla /var/aziot
+```
+
+The output of the list command with correct ownership and permissions is similar to the following:
+
+```Output
+azureUser@vm-h2hnm5j5uxk2a:/var/aziot$ sudo ls -Rla /var/aziot
+/var/aziot:
+total 16
+drwxr-xr-x 4 root root 4096 Dec 14 00:16 .
+drwxr-xr-x 15 root root 4096 Dec 14 00:15 ..
+drwxr-xr-x 2 aziotcs aziotcs 4096 Jan 14 00:31 certs
+drwx------ 2 aziotks aziotks 4096 Jan 14 00:35 secrets
+
+/var/aziot/certs:
+total 20
+drwxr-xr-x 2 aziotcs aziotcs 4096 Jan 14 00:31 .
+drwxr-xr-x 4 root root 4096 Dec 14 00:16 ..
+-rw-r--r-- 1 aziotcs aziotcs 1984 Jan 14 00:24 azure-iot-test-only.root.ca.cert.pem
+-rw-r--r-- 1 aziotcs aziotcs 5887 Jan 14 00:27 iot-device-devicename-full-chain.cert.pem
+
+/var/aziot/secrets:
+total 20
+drwx------ 2 aziotks aziotks 4096 Jan 14 00:35 .
+drwxr-xr-x 4 root root 4096 Dec 14 00:16 ..
+-rw------- 1 aziotks aziotks 3326 Jan 14 00:29 azure-iot-test-only.root.ca.key.pem
+-rw------- 1 aziotks aziotks 3243 Jan 14 00:28 iot-device-devicename.key.pem
+```
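As an alternative spot check, `stat` prints the octal permissions directly. A sketch, assuming the directories above already contain files:
```bash
# Expect 644 and aziotcs for certificates, 600 and aziotks for keys
sudo stat -c '%a %U:%G %n' /var/aziot/certs/* /var/aziot/secrets/*
```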
+ ## Manage trusted root CA (trust bundle)

Using a self-signed certificate authority (CA) certificate as a root of trust with IoT Edge and modules is known as a *trust bundle*. The trust bundle is available for IoT Edge and modules to communicate with servers. To configure the trust bundle, specify its file path in the IoT Edge configuration file.
-1. Get a publicly-trusted root CA certificate from a PKI provider.
+1. Get a publicly trusted root CA certificate from a PKI provider.
-1. Check the certificate [meets format requirements](#format-requirements).
+1. Check the certificate meets [format requirements](#format-requirements).
-1. Copy the PEM file and give IoT Edge's certificate access. For example, with `/var/aziot/certs` directory:
+1. Copy the PEM file and give IoT Edge's certificate service access. For example, with `/var/aziot/certs` directory:
```bash
- # Make the directory as root if doesn't exist
+ # Make the directory if it doesn't exist
sudo mkdir /var/aziot/certs -p
- # Copy certificate over
+
+ # Change cert directory user and group ownership to aziotcs and set permissions
+ sudo chown aziotcs:aziotcs /var/aziot/certs
+ sudo chmod 755 /var/aziot/certs
+
+ # Copy certificate into certs directory
sudo cp root-ca.pem /var/aziot/certs
- # Give aziotcs ownership to certificate
- # Read and write for aziotcs, read-only for others
+ # Give aziotcs ownership to certificate and set read and write permission for aziotcs, read-only for others
sudo chown aziotcs:aziotcs /var/aziot/certs/root-ca.pem
sudo chmod 644 /var/aziot/certs/root-ca.pem
```
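With the file in place, point IoT Edge at it and apply the configuration. A minimal sketch, assuming the `root-ca.pem` path from the previous step; editing `/etc/aziot/config.toml` by hand works just as well:
```bash
# Remove any existing trust_bundle_cert line, append the new path, then apply
sudo sed -i '/^trust_bundle_cert/d' /etc/aziot/config.toml
echo 'trust_bundle_cert = "file:///var/aziot/certs/root-ca.pem"' | sudo tee -a /etc/aziot/config.toml
sudo iotedge config apply
```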
IoT Edge can use existing certificate and private key files to authenticate or a
1. Copy the PEM file to the IoT Edge device where IoT Edge modules can have access. For example, the `/var/aziot/` directory.
```bash
- # Make the directory if doesn't exist
- sudo mkdir /var/aziot/certs -p
- sudo mkdir /var/aziot/secrets -p
+ # If the certificate and keys directories don't exist, create, set ownership, and set permissions
+ sudo mkdir -p /var/aziot/certs
+ sudo chown aziotcs:aziotcs /var/aziot/certs
+ sudo chmod 755 /var/aziot/certs
+
+ sudo mkdir -p /var/aziot/secrets
+ sudo chown aziotks:aziotks /var/aziot/secrets
+ sudo chmod 700 /var/aziot/secrets
- # Copy certificate and private key over
+ # Copy certificate and private key into the correct directory
sudo cp my-cert.pem /var/aziot/certs
sudo cp my-private-key.pem /var/aziot/secrets
```
IoT Edge can use existing certificate and private key files to authenticate or a
sudo chmod 644 /var/aziot/certs/my-cert.pem

# Give aziotks ownership to private key
- # Read and write for aziotks, no permission for other
+ # Read and write for aziotks, no permission for others
sudo chown aziotks:aziotks /var/aziot/secrets/my-private-key.pem
sudo chmod 600 /var/aziot/secrets/my-private-key.pem
```
In this scenario, the bootstrap certificate and private key are expected to be l
* Microsoft partners with GlobalSign to [provide a demo account](https://www.globalsign.com/lp/globalsign-and-microsoft-azure-iot-edge-enroll-demo).
-1. In the IoT Edge device configuration file `config.toml`, configure the path to a trusted root certificate that IoT Edge uses to validate the EST server's TLS certificate. This step is optional if the EST server has a publicly-trusted root TLS certificate.
+1. In the IoT Edge device configuration file `config.toml`, configure the path to a trusted root certificate that IoT Edge uses to validate the EST server's TLS certificate. This step is optional if the EST server has a publicly trusted root TLS certificate.
```toml
[cert_issuance.est]
iot-edge Tutorial Configure Est Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/tutorial-configure-est-server.md
The Dockerfile uses Ubuntu 18.04, a [Cisco library called `libest`](https://gith
Each device requires the Certificate Authority (CA) certificate that is associated to a device identity certificate.
-1. On the IoT Edge device, create the `/var/aziot` directory if it doesn't exist then change directory to it.
+1. On the IoT Edge device, create the `/var/aziot/certs` directory if it doesn't exist, and then change to that directory.
```bash
- # Create the /var/aziot/certs directory if it doesn't exist
- sudo mkdir -p /var/aziot/certs
+ # If the certificate directory doesn't exist, create it, set ownership, and set permissions
+ sudo mkdir -p /var/aziot/certs
+ sudo chown aziotcs:aziotcs /var/aziot/certs
+ sudo chmod 755 /var/aziot/certs
# Change directory to /var/aziot/certs
cd /var/aziot/certs
Each device requires the Certificate Authority (CA) certificate that is associat
openssl s_client -showcerts -verify 5 -connect localhost:8085 </dev/null | sudo awk '/BEGIN/,/END/{ if(/BEGIN/){a++}; out="cert"a".pem"; print >out}' && sudo cp cert2.pem cacert.crt.pem
```
-1. Certificates should be owned by the key service user **aziotks**. Set the ownership to **aziotks** for all the certificate files.
+1. Certificates should be owned by the certificate service user **aziotcs**. Set the ownership to **aziotcs** for all the certificate files and set permissions. For more information about certificate ownership and permissions, see [Permission requirements](how-to-manage-device-certificates.md#permission-requirements).
- ```bash
- sudo chown aziotks:aziotks /var/aziot/certs/*.pem
- ```
+ ```bash
+ # Give aziotcs ownership to certificates
+ sudo chown -R aziotcs:aziotcs /var/aziot/certs
+ # Read and write for aziotcs, read-only for others
+ sudo find /var/aziot/certs -type f -name "*.*" -exec chmod 644 {} \;
+ ```
## Provision IoT Edge device using DPS
iot-hub-device-update Device Update Agent Check https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-agent-check.md
Title: Device Update for Azure IoT Hub agent check | Microsoft Docs description: Device Update for IoT Hub uses Agent Check to find and diagnose missing devices.-+ Last updated 10/31/2022
iot-hub-device-update Device Update Data Privacy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-data-privacy.md
Title: Data privacy for Device Update for Azure IoT Hub | Microsoft Docs
+ Title: Data privacy for Device Update for Azure IoT Hub
description: Understand how Device Update for IoT Hub protects data privacy.-+ Previously updated : 09/12/2022 Last updated : 01/19/2023 # Device Update telemetry collection
-Device Update for IoT Hub is a REST API-based cloud service targeted at enterprise customers that enables secure, over-the-air updating of millions of devices via a partitioned Azure service.
+In order to maintain the quality and availability of the Device Update service, Microsoft collects certain telemetry from your customer data that may be stored and processed outside of your Azure region. The following list contains the data points that Microsoft collects about the Device Update service:
-In order to maintain the quality and availability of the Device Update service, Microsoft collects certain telemetry from your Customer Data which may be stored and processed outside of your Azure region. Below is a list of the data points that Microsoft collects about the Device Update service.
-* Device Manufacturer, Model*
-* Device Interface Version*
-* DU Agent Version, DO Agent Version*
-* Update Namespace, Name, Version*
-* IoT Hub Device ID
-* DU Account ID, Instance ID
+* Device manufacturer, model*
+* Device Interface version*
+* Device Update agent version, Delivery Optimization (DO) agent version*
+* Update namespace, name, version*
+* IoT Hub device ID
+* Device Update account ID, instance ID
* Import ErrorCode, ExtendedErrorCode
* Deployment ResultCode, ExtendedResultCode
* Log collection ResultCode, Extended ResultCode
-*For fields marked with asterisk, do not put any personal or sensitive data.
+*For fields marked with an asterisk, don't put any personal or sensitive data.
-Microsoft maintains no information and has no access to data that would allow correlation of these telemetry data points with an individualΓÇÖs identity. These system-generated data points are not accessible or exportable by tenant administrators. These data constitute factual actions conducted within the service and diagnostic data related to individual devices.
+Microsoft maintains no information and has no access to data that would allow correlation of these telemetry data points with an individual's identity. These system-generated data points aren't accessible or exportable by tenant administrators. These data constitute factual actions conducted within the service and diagnostic data related to individual devices.
-For further information on Microsoft's privacy commitments, please read the "Enterprise and developer products" section of the Microsoft Privacy Statement.
+For more information on Microsoft's privacy commitments, see the "Enterprise and developer products" section of the [Microsoft Privacy Statement](https://privacy.microsoft.com/en-us/privacystatement).
+
+For more information about data residency with Device Update, see [Regional mapping for disaster recovery for Device Update](device-update-region-mapping.md).
iot-hub-device-update Device Update Log Collection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-log-collection.md
Title: Device Update for Azure IoT Hub log collection | Microsoft Docs description: Device Update for IoT Hub enables remote collection of diagnostic logs from connected IoT devices.-+ Last updated 10/26/2022
iot-hub-device-update Device Update Region Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub-device-update/device-update-region-mapping.md
Title: Region mapping for BCDR for Device Update for Azure IoT Hub | Microsoft Docs
-description: Regional mapping for BCDR for Device Update for IoT Hub.
+ Title: Region failover mapping - Device Update for Azure IoT Hub
+description: Regional mapping for business continuity and disaster recovery (BCDR) for Device Update for IoT Hub.
Last updated 08/31/2022
-# Regional mapping for BCDR for Device Update for IoT Hub
+# Regional failover mapping for Device Update for IoT Hub
-In cases where an Azure region is unavailable due to an outage, data contained in the update files submitted to the Device Update for IoT Hub service may be sent to a specific Azure region for the duration of the outage, for the purpose of anti-malware scanning and making the updates available on the Device Update service endpoints.
+In cases where an Azure region is unavailable due to an outage, Device Update for IoT Hub supports business continuity and disaster recovery (BCDR) efforts with regional failover pairings. During an outage, data contained in the update files submitted to the Device Update service may be sent to a secondary Azure region. This failover enables Device Update to continue scanning update files for malware and making the updates available on the service endpoints.
## Failover region mapping
In cases where an Azure region is unavailable due to an outage, data contained i
| Sweden Central | North Europe |
| East US | West US 2 |
| East US 2 | West US 2 |
-| West US 2 | East US |
-| West US 3 | East US |
+| West US 2 | East US |
+| West US 3 | East US |
| South Central US | East US |
| East US 2 (EUAP) | West US 2 |
| Australia East | Southeast Asia |
| Southeast Asia | Australia East |

## Next steps

[Learn about importing updates](.\import-update.md)
iot-hub Iot Hub Dev Guide Azure Ad Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/iot-hub-dev-guide-azure-ad-rbac.md
Previously updated : 10/20/2021 Last updated : 01/18/2023
The following table describes the permissions available for IoT Hub service API
## Azure AD access and shared access policies
-By default, IoT Hub supports service API access through both Azure AD and [shared access policies and security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, disable access with shared access policies:
+By default, IoT Hub supports service API access through both Azure AD and [shared access policies and security tokens](iot-hub-dev-guide-sas.md). To minimize potential security vulnerabilities inherent in security tokens, disable access with shared access policies.
1. Ensure that your service clients and users have [sufficient access](#manage-access-to-iot-hub-by-using-azure-rbac-role-assignment) to your IoT hub. Follow the [principle of least privilege](../security/fundamentals/identity-management-best-practices.md). 1. In the [Azure portal](https://portal.azure.com), go to your IoT hub. 1. On the left pane, select **Shared access policies**.
-1. Under **Connect using shared access policies**, select **Deny**.
+1. Under **Connect using shared access policies**, select **Deny**, and review the warning.
:::image type="content" source="media/iot-hub-dev-guide-azure-ad-rbac/disable-local-auth.png" alt-text="Screenshot that shows how to turn off IoT Hub shared access policies." border="true":::
-1. Review the warning, and then select **Save**.
+
+ > [!WARNING]
+ > By denying connections using shared access policies, all users and services that connect using this method lose access immediately. Notably, since Device Provisioning Service (DPS) only supports linking IoT hubs using shared access policies, all device provisioning flows will fail with an "unauthorized" error. Proceed carefully and plan to replace access with Azure AD role-based access. **Do not proceed if you use DPS**.
Your IoT Hub service APIs can now be accessed only through Azure AD and RBAC.
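The portal steps map to the hub's `disableLocalAuth` resource property. If you prefer scripting, a sketch using the generic resource update command; the hub name and resource group are placeholders:
```bash
# Deny shared access policy connections (equivalent to selecting Deny in the portal)
az resource update --name <hub-name> --resource-group <resource-group> \
    --resource-type Microsoft.Devices/IotHubs \
    --set properties.disableLocalAuth=true
```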
iot-hub Migrate Tls Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/migrate-tls-certificate.md
You should start planning now for the effects of migrating your IoT hubs to the
* Any device that doesn't have the DigiCert Global Root G2 in its certificate store won't be able to connect to Azure. * The IP address of the IoT hub will change.
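To check whether a Linux device already trusts the new root, one option is to scan the local CA bundle. A sketch, assuming a Debian or Ubuntu style certificate store:
```bash
# Print the subject of every certificate in the system bundle and search
# for the DigiCert Global Root G2; no output means the root is missing
awk -v cmd='openssl x509 -noout -subject' '/BEGIN/{close(cmd)}{print | cmd}' \
    /etc/ssl/certs/ca-certificates.crt | grep -i "DigiCert Global Root G2"
```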
+> [!VIDEO 8f4fe09a-3065-4941-9b4d-d9267e817aad]
+ ## Timeline

The IoT Hub team will begin migrating IoT hubs by region on **February 15, 2023**, and complete the migration by October 15, 2023. After all IoT hubs have migrated, DPS will perform its migration between January 15 and February 15, 2024.
key-vault Certificate Access Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/certificate-access-control.md
tags: azure-resource-manager
Previously updated : 10/12/2020 Last updated : 01/20/2023
- **import**: Import certificate material into a Key Vault certificate - **delete**: Delete a certificate, its policy, and all of its versions - **recover**: Recover a deleted certificate
- - **backup**: Backup a certificate in a key vault
+ - **backup**: Back up a certificate in a key vault
- **restore**: Restore a backed-up certificate to a key vault - **managecontacts**: Manage Key Vault certificate contacts - **manageissuers**: Manage Key Vault certificate authorities/issuers
key-vault Create Certificate Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate-scenarios.md
The scenarios / operations outlined in this article are:
|Method|Request URI|
||--|
|POST|`https://mykeyvault.vault.azure.net/certificates/mycert1/create?api-version={api-version}`|
-The following examples require an object named "mydigicert" to already be available in your key vault with the issuer provider as DigiCert. The certificate issuer is an entity represented in Azure Key Vault (KV) as a CertificateIssuer resource. It is used to provide information about the source of a KV certificate; issuer name, provider, credentials, and other administrative details.
+The following examples require an object named "mydigicert" to already be available in your key vault with the issuer provider as DigiCert. The certificate issuer is an entity represented in Azure Key Vault (KV) as a CertificateIssuer resource. It's used to provide information about the source of a KV certificate: issuer name, provider, credentials, and other administrative details.
### Request
StatusCode: 200, ReasonPhrase: 'OK'
> The value of the *errorcode* can be "Certificate issuer error" or "Request rejected" based on issuer or user error respectively. ## Get pending request - pending request status is "deleted" or "overwritten"
-A pending object can be deleted or overwritten by a create/import operation when its status is not "inProgress."
+A pending object can be deleted or overwritten by a create/import operation when its status isn't `inProgress`.
|Method|Request URI|
||--|
StatusCode: 409, ReasonPhrase: 'Conflict'
``` ## Merge when pending request is created with an issuer
-Merge is not allowed when a pending object is created with an issuer but is allowed when its state is "inProgress."
+Merge isn't allowed when a pending object is created with an issuer but is allowed when its state is `inProgress`.
If the request to create the x509 certificate fails or cancels for some reason, and if an x509 certificate can be retrieved by out-of-band means, a merge operation can be done to complete the KV certificate.
StatusCode: 403, ReasonPhrase: 'Forbidden'
``` ## Request a cancellation while the pending request status is "inProgress"
-A cancellation can only be requested. A request may or may not be canceled. If a request is not "inProgress", an http status of 400 (Bad Request) is returned.
+A cancellation can only be requested. A request may or may not be canceled. If a request isn't `inProgress`, an http status of 400 (Bad Request) is returned.
|Method|Request URI|
||--|
StatusCode: 200, ReasonPhrase: 'OK'
``` ## Create a KV certificate manually
-You can create a certificate issued with a CA of your choice through a manual creation process. Set the name of the issuer to ΓÇ£UnknownΓÇ¥ or do not specify the issuer field.
+You can create a certificate issued with a CA of your choice through a manual creation process. Set the name of the issuer to "Unknown" or don't specify the issuer field.
|Method|Request URI|
||--|
key-vault Create Certificate Signing Request https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate-signing-request.md
tags: azure-resource-manager
Previously updated : 06/17/2020 Last updated : 01/20/2023
key-vault Create Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/create-certificate.md
# Certificate creation methods
- A Key Vault (KV) certificate can be either created or imported into a key vault. When a KV certificate is created the private key is created inside the key vault and never exposed to certificate owner. The following are ways to create a certificate in Key Vault:
+ A Key Vault (KV) certificate can be either created or imported into a key vault. When a KV certificate is created, the private key is created inside the key vault and never exposed to the certificate owner. The following are ways to create a certificate in Key Vault:
-- **Create a self-signed certificate:** This will create a public-private key pair and associate it with a certificate. The certificate will be signed by its own key.
+- **Create a self-signed certificate:** Create a public-private key pair and associate it with a certificate. The certificate will be signed by its own key.
-- **Create a new certificate manually:** This will create a public-private key pair and generate an X.509 certificate signing request. The signing request can be signed by your registration authority or certification authority. The signed x509 certificate can be merged with the pending key pair to complete the KV certificate in Key Vault. Although this method requires more steps, it does provide you with greater security because the private key is created in and restricted to Key Vault. This is explained in the diagram below.
+- **Create a new certificate manually:** Create a public-private key pair and generate an X.509 certificate signing request. The signing request can be signed by your registration authority or certification authority. The signed x509 certificate can be merged with the pending key pair to complete the KV certificate in Key Vault. Although this method requires more steps, it does provide you with greater security because the private key is created in and restricted to Key Vault.
![Create a certificate with your own certificate authority](../media/certificate-authority-1.png) The following descriptions correspond to the green lettered steps in the preceding diagram.
-1. In the diagram above, your application is creating a certificate which internally begins by creating a key in your key vault.
+1. In the diagram, your application is creating a certificate, which internally begins by creating a key in your key vault.
2. Key Vault returns to your application a Certificate Signing Request (CSR) 3. Your application passes the CSR to your chosen CA. 4. Your chosen CA responds with an X509 Certificate. 5. Your application completes the new certificate creation with a merger of the X509 Certificate from your CA. -- **Create a certificate with a known issuer provider:** This method requires you to do a one-time task of creating an issuer object. Once an issuer object is created in you key vault, its name can be referenced in the policy of the KV certificate. A request to create such a KV certificate will create a key pair in the vault and communicate with the issuer provider service using the information in the referenced issuer object to get an x509 certificate. The x509 certificate is retrieved from the issuer service and is merged with the key pair to complete the KV certificate creation.
+- **Create a certificate with a known issuer provider:** This method requires you to do a one-time task of creating an issuer object. Once an issuer object is created in your key vault, its name can be referenced in the policy of the KV certificate. A request to create such a KV certificate will create a key pair in the vault and communicate with the issuer provider service using the information in the referenced issuer object to get an x509 certificate. The x509 certificate is retrieved from the issuer service and is merged with the key pair to complete the KV certificate creation.
![Create a certificate with a Key Vault partnered certificate authority](../media/certificate-authority-2.png) The following descriptions correspond to the green lettered steps in the preceding diagram.
-1. In the diagram above, your application is creating a certificate which internally begins by creating a key in your key vault.
+1. In the diagram, your application is creating a certificate, which internally begins by creating a key in your key vault.
2. Key Vault sends a TLS/SSL Certificate Request to the CA.
3. Your application polls, in a loop and wait process, your Key Vault for certificate completion. The certificate creation is complete when Key Vault receives the CA's response with the x509 certificate.
4. The CA responds to Key Vault's TLS/SSL Certificate Request with a TLS/SSL X.509 certificate.
5. Your new certificate creation completes with the merger of the TLS/SSL X.509 certificate from the CA.

## Asynchronous process

KV certificate creation is an asynchronous process. This operation will create a KV certificate request and return an http status code of 202 (Accepted). The status of the request can be tracked by polling the pending object created by this operation. The full URI of the pending object is returned in the LOCATION header. When a request to create a KV certificate completes, the status of the pending object changes from "in progress" to "completed", and a new version of the KV certificate is created. This becomes the current version.

## First creation
- When a KV certificate is created for the first time, an addressable key and secret is also created with the same name as that of the certificate. If the name is already in use, then the operation will fail with an http status code of 409 (conflict).
+ When a KV certificate is created for the first time, an addressable key and secret is also created with the same name as the certificate. If the name is already in use, then the operation will fail with an http status code of 409 (conflict).
The addressable key and secret get their attributes from the KV certificate attributes. The addressable key and secret created this way are marked as managed keys and secrets, whose lifetime is managed by Key Vault. Managed keys and secrets are read-only. Note: If a KV certificate expires or is disabled, the corresponding key and secret will become inoperable.
- If this is the first operation to create a KV certificate then a policy is required. A policy can also be supplied with successive create operations to replace the policy resource. If a policy is not supplied, then the policy resource on the service is used to create a next version of KV certificate. Note that while a request to create a next version is in progress, the current KV certificate, and corresponding addressable key and secret, remain unchanged.
+ If this is the first operation to create a KV certificate, a policy is required. A policy can also be supplied with successive create operations to replace the policy resource. If a policy isn't supplied, then the policy resource on the service is used to create a next version of KV certificate. While a request to create a next version is in progress, the current KV certificate, and corresponding addressable key and secret, remain unchanged.
## Self-issued certificate

To create a self-issued certificate, set the issuer name as "Self" in the certificate policy, as shown in the following snippet from a certificate policy.
When a request to create a KV certificate completes, the status of the pending o
```
- If the issuer name is not specified, then the issuer name is set to "Unknown". When issuer is "Unknown", the certificate owner will have to manually get a x509 certificate from the issuer of his/her choice, then merge the public x509 certificate with the key vault certificate pending object to complete the certificate creation.
+ If the issuer name isn't specified, then the issuer name is set to "Unknown". When the issuer is "Unknown", the certificate owner will have to manually get an x509 certificate from the issuer of their choice, then merge the public x509 certificate with the key vault certificate pending object to complete the certificate creation.
``` "issuer": {
Certificate creation can be completed manually or using a "Self" issuer. Key Vau
A certificate issuer is an entity represented in Azure Key Vault (KV) as a CertificateIssuer resource. It is used to provide information about the source of a KV certificate: issuer name, provider, credentials, and other administrative details.
-Note that when an order is placed with the issuer provider, it may honor or override the x509 certificate extensions and certificate validity period based on the type of certificate.
+When an order is placed with the issuer provider, it may honor or override the x509 certificate extensions and certificate validity period based on the type of certificate.
Authorization: Requires the certificates/create permission.
key-vault Overview Renew Certificate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/overview-renew-certificate.md
tags: azure-resource-manager
Previously updated : 07/20/2020- Last updated : 01/20/2023+ # Renew your Azure Key Vault certificates
This article discusses how to renew your Azure Key Vault certificates.
## Get notified about certificate expiration

To get notified about certificate life events, add a certificate contact. Certificate contacts contain contact information for notifications triggered by certificate lifetime events. The contact information is shared by all the certificates in the key vault. A notification is sent to all the specified contacts for an event for any certificate in the key vault.
-### Steps to set certificate notifications:
+### Steps to set certificate notifications
+ First, add a certificate contact to your key vault. You can add one using the Azure portal or the PowerShell cmdlet [Add-AzKeyVaultCertificateContact](/powershell/module/az.keyvault/add-azkeyvaultcertificatecontact). Second, configure when you want to be notified about the certificate expiration. To configure the lifecycle attributes of the certificate, see [Configure certificate autorotation in Key Vault](./tutorial-rotate-certificates.md#update-lifecycle-attributes-of-a-stored-certificate).
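If you prefer the Azure CLI, a minimal sketch with placeholder values:
```bash
# Register an email contact that receives notifications for certificate
# lifetime events in this vault
az keyvault certificate contact add \
    --vault-name <your-vault-name> \
    --email admin@contoso.com
```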
-If a certificate's policy is set to auto renewal, then a notification is sent on the following events.
+If a certificate's policy is set to auto renewal, then a notification is sent on the following events:
- Before certificate renewal - After certificate renewal, stating if the certificate was successfully renewed, or if there was an error, requiring manual renewal of the certificate.
- When a certificate policy that is set to be manually renewed (email only), a notification is sent when it's time to renew the certificate.
+When a certificate policy is set to be manually renewed (email only), a notification is sent when it's time to renew the certificate.
In Key Vault, there are three categories of certificates:-- Certificates that are created with an integrated certificate authority (CA), such as DigiCert or GlobalSign-- Certificates that are created with a nonintegrated CA-- Self-signed certificates
+- Certificates that are created with an integrated certificate authority (CA), such as DigiCert or GlobalSign.
+- Certificates that are created with a nonintegrated CA.
+- Self-signed certificates.
+
+## Renew an integrated CA certificate
+
+Azure Key Vault handles the end-to-end maintenance of certificates that are issued by trusted Microsoft certificate authorities DigiCert and GlobalSign. Learn how to [integrate a trusted CA with Key Vault](./how-to-integrate-certificate-authority.md). When a certificate is renewed, a new secret version is created with a new Key Vault identifier.
-## Renew an integrated CA certificate
-Azure Key Vault handles the end-to-end maintenance of certificates that are issued by trusted Microsoft certificate authorities DigiCert and GlobalSign. Learn how to [integrate a trusted CA with Key Vault](./how-to-integrate-certificate-authority.md). When a certificate is renewed a new secret version is created with a new Key Vault identifier.
+## Renew a nonintegrated CA certificate
-## Renew a nonintegrated CA certificate
By using Azure Key Vault, you can import certificates from any CA, a benefit that lets you integrate with several Azure resources and make deployment easy. If you're worried about losing track of your certificate expiration dates or, worse, you've discovered that a certificate has already expired, your key vault can help keep you up to date. For nonintegrated CA certificates, the key vault lets you set up near-expiration email notifications. Such notifications can be set for multiple users as well. > [!IMPORTANT] > A certificate is a versioned object. If the current version is expiring, you need to create a new version. Conceptually, each new version is a new certificate that's composed of a key and a blob that ties that key to an identity. When you use a nonpartnered CA, the key vault generates a key/value pair and returns a certificate signing request (CSR).
-To renew a nonintegrated CA certificate, do the following:
+To renew a nonintegrated CA certificate:
1. Sign in to the Azure portal, and then open the certificate you want to renew. 1. On the certificate pane, select **New Version**.
-3. On the **Create a certificate** page make sure the **Generate** option is selected under **Method of Certificate Creation**.
-4. Verify the **Subject** and other details about the certificate and then click **Create**.
+3. On the **Create a certificate** page, make sure the **Generate** option is selected under **Method of Certificate Creation**.
+4. Verify the **Subject** and other details about the certificate and then select **Create**.
5. You should now see the message **The creation of certificate << certificate name >> is currently pending. Click here to go to its Certificate Operation to monitor the progress**
-6. Click on the message and a new pane should be shown. The pane should show the status as "In Progress". At this point key vault has generated a CSR that you can download using the **Download CSR** option.
+1. Select the message, and a new pane appears. The pane should show the status as "In Progress". At this point, Key Vault has generated a CSR that you can download using the **Download CSR** option.
1. Select **Download CSR** to download a CSR file to your local drive. 1. Send the CSR to your choice of CA to sign the request. 1. Bring back the signed request, and select **Merge Signed Request** on the same certificate operation pane.
key-vault Tutorial Rotate Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/certificates/tutorial-rotate-certificates.md
Previously updated : 04/16/2020- Last updated : 01/20/2023+ #Customer intent: As a security admin who is new to Azure, I want to use Key Vault to securely store certificates in Azure. # Tutorial: Configure certificate auto-rotation in Key Vault
-You can easily provision, manage, and deploy digital certificates by using Azure Key Vault. The certificates can be public and private Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificates signed by a certificate authority (CA), or a self-signed certificate. Key Vault can also request and renew certificates through partnerships with CAs, providing a robust solution for certificate lifecycle management. In this tutorial, you will update a certificate's validity period, auto-rotation frequency, and CA attributes.
+You can easily provision, manage, and deploy digital certificates by using Azure Key Vault. The certificates can be public and private Secure Sockets Layer (SSL)/Transport Layer Security (TLS) certificates signed by a certificate authority (CA), or a self-signed certificate. Key Vault can also request and renew certificates through partnerships with CAs, providing a robust solution for certificate lifecycle management. In this tutorial, you'll update a certificate's validity period, auto-rotation frequency, and CA attributes.
The tutorial shows you how to:
The following CAs are currently partnered providers with Key Vault:
- DigiCert: Key Vault offers OV TLS/SSL certificates. - GlobalSign: Key Vault offers OV TLS/SSL certificates.
-Key Vault auto-rotates certificates through established partnerships with CAs. Because Key Vault automatically requests and renews certificates through the partnership, auto-rotation capability is not applicable for certificates created with CAs that are not partnered with Key Vault.
+Key Vault auto-rotates certificates through established partnerships with CAs. Because Key Vault automatically requests and renews certificates through the partnership, auto-rotation capability isn't applicable for certificates created with CAs that aren't partnered with Key Vault.
> [!NOTE] > An account admin for a CA provider creates credentials that Key Vault uses to create, renew, and use TLS/SSL certificates.
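For reference, auto-rotation is driven by the `lifetimeActions` section of the certificate policy. A sketch with placeholder values; the default policy already includes an `AutoRenew` action that you can adjust before creating the certificate:
```bash
# Start from the default policy, then edit policy.json so lifetimeActions
# triggers AutoRenew at the threshold you want, for example:
#   "lifetimeActions": [{"action": {"actionType": "AutoRenew"},
#                        "trigger": {"daysBeforeExpiry": 90}}]
az keyvault certificate get-default-policy > policy.json
az keyvault certificate create --vault-name <your-vault-name> \
    --name <certificate-name> --policy @policy.json
```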
key-vault Authentication Requests And Responses https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/authentication-requests-and-responses.md
tags: azure-resource-manager
Previously updated : 09/15/2020 Last updated : 01/20/2023
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/azure-policy.md
Title: Integrate Azure Key Vault with Azure Policy
description: Learn how to integrate Azure Key Vault with Azure Policy Previously updated : 03/31/2021 Last updated : 01/10/2023
Example Usage Scenarios:
- You want to improve the security posture of your company by implementing requirements around minimum key sizes and maximum validity periods of certificates in your company's key vaults but you don't know which teams will be compliant and which are not. - You currently don't have a solution to perform an audit across your organization, or you are conducting manual audits of your environment by asking individual teams within your organization to report their compliance. You are looking for a way to automate this task, perform audits in real time, and guarantee the accuracy of the audit.-- You want to enforce your company security policies and stop individuals from creating self-signed certificates, but you don't have an automated way to block their creation.
+- You want to enforce your company security policies and stop individuals from creating self-signed certificates, but you don't have an automated way to block their creation.
- You want to relax some requirements for your test teams, but you want to maintain tight controls over your production environment. You need a simple automated way to separate enforcement of your resources.-- You want to be sure that you can roll-back enforcement of new policies in the event of a live-site issue. You need a one-click solution to turn off enforcement of the policy.
+- You want to be sure that you can roll-back enforcement of new policies in the event of a live-site issue. You need a one-click solution to turn off enforcement of the policy.
- You are relying on a 3rd party solution for auditing your environment and you want to use an internal Microsoft offering. ## Types of policy effects and guidance
-**Audit**: When the effect of a policy is set to audit, the policy will not cause any breaking changes to your environment. It will only alert you to components such as certificates that do not comply with the policy definitions within a specified scope, by marking these components as non-compliant in the policy compliance dashboard. Audit is default if no policy effect is selected.
+When enforcing a policy, you can determine its effect on the resulting evaluation. Each policy definition allows you to choose one of multiple effects. Therefore, policy enforcement may behave differently depending on the type of operation you're evaluating. In general, the effects for policies that integrate with Key Vault include the following (a sample assignment follows this list):
-**Deny**: When the effect of a policy is set to deny, the policy will block the creation of new components such as certificates as well as block new versions of existing components that do not comply with the policy definition. Existing non-compliant resources within a key vault are not affected. The 'audit' capabilities will continue to operate.
+- [**Audit**](../../governance/policy/concepts/effects.md#audit): when the effect of a policy is set to `Audit`, the policy will not cause any breaking changes to your environment. It will only alert you to components such as certificates that do not comply with the policy definitions within a specified scope, by marking these components as non-compliant in the policy compliance dashboard. Audit is default if no policy effect is selected.
-## Available "Built-In" Policy Definitions
+- [**Deny**](../../governance/policy/concepts/effects.md#deny): when the effect of a policy is set to `Deny`, the policy will block the creation of new components such as certificates as well as block new versions of existing components that do not comply with the policy definition. Existing non-compliant resources within a key vault are not affected. The 'audit' capabilities will continue to operate.
-Key Vault has created a set of policies, which can be used to manage key vaults and its key, certificate, and secret objects. These policies are 'Built-In', which means they don't require you to write any custom JSON to enable them and they are available in the Azure portal for you to assign. You can still customize certain parameters to fit your organization's needs.
+- [**Disabled**](../../governance/policy/concepts/effects.md#disabled): when the effect of a policy is set to `Disabled`, the policy is still evaluated, but enforcement doesn't take effect, so the resource is compliant for the condition with the `Disabled` effect. This effect is useful for disabling the policy for a specific condition, as opposed to all conditions.
+
+- [**Modify**](../../governance/policy/concepts/effects.md#modify): when the effect of a policy is set to `Modify`, you can add resource tags, such as adding the `Deny` tag to a network. This effect is useful for disabling access to a public network for Azure Key Vault managed HSM. It's necessary to [configure a managed identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `Modify` effect.
-# [Certificate Policies](#tab/certificates)
+- [**DeployIfNotExists**](../../governance/policy/concepts/effects.md#deployifnotexists): when the effect of a policy is set to `DeployIfNotExists`, a deployment template is executed when the condition is met. This can be used to configure diagnostic settings for Key Vault to a Log Analytics workspace. It is necessary to [configure a managed identity](../../governance/policy/how-to/remediate-resources.md?tabs=azure-portal#configure-the-managed-identity) for the policy definition via the `roleDefinitionIds` parameter to utilize the `DeployIfNotExists` effect.
-### Manage certificates that are within a specified number of days of expiration
+- [**AuditIfNotExists**](../../governance/policy/concepts/effects.md#auditifnotexists): when the effect of a policy is set to `AuditIfNotExists`, you can identify resources that lack the properties specified in the details of the policy condition. This is useful to identify key vaults that have no resource logs enabled.
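For illustration, here is a minimal Azure CLI sketch of assigning a built-in definition and choosing its effect at assignment time. The assignment name, scope, and definition ID are placeholders, and the `effect` parameter name can vary per definition, so check the definition's parameters before relying on it:

```azurecli
# Assign a built-in Key Vault policy and choose its effect at assignment time.
# <subscription-id>, <resource-group>, and <definition-name-or-id> are placeholders.
az policy assignment create \
  --name "kv-audit-assignment" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>" \
  --policy "<definition-name-or-id>" \
  --params '{ "effect": { "value": "Audit" } }'
```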
-Your service can experience an outage if a certificate that is not being adequately monitored is not rotated prior to its expiration. This policy is critical to making sure that your certificates stored in key vault are being monitored. It is recommended that you apply this policy multiple times with different expiration thresholds, for example, at 180, 90, 60, and 30-day thresholds. This policy can be used to monitor and triage certificate expiration in your organization.
+## Available Built-In Policy Definitions
+Predetermined policies, referred to as 'built-ins', facilitate governance over your key vaults so you don't have to write custom policies in JSON format to enforce commonly used rules associated with best security practices. Even though built-ins are predetermined, certain policies require you to define parameters. For example, by defining the effect of the policy, you can audit the key vault and its objects before enforcing a deny operation to prevent outages. Current built-ins for Azure Key Vault are categorized into four major groups: key vault, certificates, keys, and secrets management. Within each category, policies are grouped to drive specific security goals.
-### Certificates should have the specified lifetime action triggers
+### Key Vaults
-This policy allows you to manage the lifetime action specified for certificates that are either within a certain number of days of their expiration or have reached a certain percentage of their usable life.
+#### Network Access
-### Certificates should use allowed key types
+Reduce the risk of data leakage by restricting public network access, enabling [Azure Private Link](https://azure.microsoft.com/products/private-link/) connections, creating private DNS zones to override DNS resolution for a private endpoint, and enabling [firewall protection](network-security.md) so that the key vault is not accessible by default to any public IP.
-This policy allows you to restrict the type of certificates that can be in your key vault. You can use this policy to make sure that your certificate private keys are RSA, ECC, or are HSM backed. You can choose from the following list which certificate types are allowed.
+| Policy | Effects |
+|--|--|
+| [Azure Key Vault should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F405c5871-3e91-4644-8a63-58e19d68ff5b) | Audit _(Default)_, Deny, Disabled |
+| [**[Preview]** Azure Key Vault Managed HSM should disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F19ea9d63-adee-4431-a95e-1913c6c1c75f) | Audit _(Default)_, Deny, Disabled
+| [**[Preview]**: Configure Key Vault Managed HSMs to disable public network access](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F84d327c3-164a-4685-b453-900478614456) | Modify _(Default)_, Disabled
+| [**[Preview]**: Azure Key Vaults should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6abeaec-4d90-4a02-805f-6b26c4d3fbe9) | Audit _(Default)_, Deny, Disabled
+| [**[Preview]**: Azure Key Vault Managed HSMs should use private link](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F59fee2f4-d439-4f1b-9b9a-982e1474bfd8) | Audit _(Default)_, Disabled
+| [**[Preview]**: Configure Azure Key Vaults with private endpoints](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F9d4fad1f-5189-4a42-b29e-cf7929c6b6df) | DeployIfNotExists _(Default)_, Disabled
+| [**[Preview]**: Configure Azure Key Vault Managed HSMs with private endpoints](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fd1d6d8bb-cc7c-420f-8c7d-6f6f5279a844) | DeployIfNotExists _(Default)_, Disabled
+| [**[Preview]**: Configure Azure Key Vaults to use private DNS zones](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac673a9a-f77d-4846-b2d8-a57f8e1c01d4) | DeployIfNotExists _(Default)_, Disabled
+| [Key Vaults should have firewall enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F55615ac9-af46-4a59-874e-391cc3dfb490) | Audit _(Default)_, Deny, Disabled
+| [Configure Key Vaults to enable firewall](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fac673a9a-f77d-4846-b2d8-a57f8e1c01dc) | Modify _(Default)_, Disabled
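If you need to remediate a vault manually rather than through policy remediation, here is a hedged CLI sketch; the vault and resource group names are placeholders:

```azurecli
# Disable public network access on an existing vault entirely.
az keyvault update --name <vault-name> --resource-group <resource-group> \
  --public-network-access Disabled

# Or keep the endpoint reachable but have the firewall deny traffic by default.
az keyvault update --name <vault-name> --resource-group <resource-group> \
  --default-action Deny
```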
-- RSA
-- RSA - HSM
-- ECC
-- ECC - HSM
+#### Deletion Protection
-### Certificates should be issued by the specified integrated certificate authority
+Prevent permanent data loss of your key vault and its objects by enabling [soft-delete and purge protection](soft-delete-overview.md). While soft-delete allows you to recover an accidentally deleted key vault for a configurable retention period, purge protection protects you from insider attacks by enforcing a mandatory retention period for soft-deleted key vaults. Purge protection can only be enabled once soft-delete is enabled. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period.
-If you use a Key Vault integrated certificate authority (Digicert or GlobalSign) and you want users to use one or either of these providers, you can use this policy to audit or enforce your selection. This policy will evaluate the CA selected in the issuance policy of the cert and the CA provider defined in the key vault. This policy can also be used to audit or deny the creation of self-signed certificates in key vault.
+| Policy | Effects |
+|--|--|
+| [Key Vaults should have soft delete enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1e66c121-a66a-4b1f-9b83-0fd99bf0fc2d) | Audit _(Default)_, Deny, Disabled
+| [Key Vaults should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0b60c0b2-2dc2-4e1c-b5c9-abbed971de53) | Audit _(Default)_, Deny, Disabled
+| [Azure Key Vault Managed HSMs should have purge protection enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc39ba22d-4428-4149-b981-70acb31fc383) | Audit _(Default)_, Deny, Disabled
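As a sketch of opting in at creation time (names and region are placeholders; soft-delete is enabled by default for new vaults, while purge protection must be requested explicitly and cannot be turned off once enabled):

```azurecli
# Create a vault with purge protection enabled from the start.
az keyvault create --name <vault-name> --resource-group <resource-group> \
  --location <region> --enable-purge-protection true
```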
-### Certificates should be issued by the specified non-integrated certificate authority
+#### Diagnostics
-If you use an internal certificate authority or a certificate authority not integrated with key vault and you want users to use a certificate authority from a list you provide, you can use this policy to create an allowed list of certificate authorities by issuer name. This policy can also be used to audit or deny the creation of self-signed certificates in key vault.
+Enable resource logs so that you can recreate activity trails for investigation purposes when a security incident occurs or when your network is compromised.
-### Certificates using elliptic curve cryptography should have allowed curve names
+| Policy | Effects |
+|--|--|
+| [Deploy diagnostic settings for Key Vaults to an Event Hub](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fed7c8c13-51e7-49d1-8a43-8490431a0da2) | DeployIfNotExists _(Default)_
+| [Deploy - Configure diagnostic settings for Key Vault managed HSMs to an Event Hub](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa6d2c800-5230-4a40-bff3-8268b4987d42) | DeployIfNotExists _(Default)_, Disabled
+| [Deploy - Configure diagnostic settings for Key Vaults to Log Analytics workspace](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F951af2fa-529b-416e-ab6e-066fd85ac459) | DeployIfNotExists _(Default)_, Disabled
+| [Resource logs in Key Vaults should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcf820ca0-f99e-4f3e-84fb-66e913812d21) | AuditIfNotExists _(Default)_, Disabled
+| [Resource logs in Key Vault managed HSMs should be enabled](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa2a5b911-5617-447e-a49e-59dbe0e0434b) | AuditIfNotExists _(Default)_, Disabled
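For example, a minimal CLI sketch that wires a vault's `AuditEvent` logs to a Log Analytics workspace; the resource IDs are placeholders:

```azurecli
# Send Key Vault AuditEvent logs to a Log Analytics workspace.
az monitor diagnostic-settings create \
  --name "kv-diagnostics" \
  --resource "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.KeyVault/vaults/<vault-name>" \
  --workspace "<log-analytics-workspace-resource-id>" \
  --logs '[{"category": "AuditEvent", "enabled": true}]'
```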
-If you use elliptic curve cryptography or ECC certificates, you can customize an allowed list of curve names from the list below. The default option allows all the following curve names.
+### Certificates
-- P-256
-- P-256K
-- P-384
-- P-521
+#### Lifecycle of Certificates
-### Certificates using RSA cryptography Manage minimum key size for RSA certificates
+Promote the use of short-lived certificates to mitigate undetected attacks by minimizing the time frame of ongoing damage and reducing the value of the certificate to attackers. When implementing short-lived certificates, it is recommended to regularly monitor their expiration date to avoid outages, so that they can be rotated adequately before expiration. You can also control the lifetime action specified for certificates that are either within a certain number of days of their expiration or have reached a certain percentage of their usable life.
-If you use RSA certificates, you can choose a minimum key size that your certificates must have. You may select one option from the list below.
+| Policy | Effects |
+|--|--|
+| [**[Preview]**: Certificates should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F0a075868-4c26-42ef-914c-5bc007359560) | Audit (_Default_), Deny, Disabled
+| [**[Preview]**: Certificates should not expire within the specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff772fb64-8e40-40ad-87bc-7706e1949427) | Audit (_Default_), Deny, Disabled
+| [Certificates should have the specified lifetime action triggers](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F12ef42cb-9903-4e39-9c26-422d29570417) | Audit (_Default_), Deny, Disabled
-- 2048 bit
-- 3072 bit
-- 4096 bit
-
-### Certificates should have the specified maximum validity period (preview)
-
-This policy allows you to manage the maximum validity period of your certificates stored in key vault. It is a good security practice to limit the maximum validity period of your certificates. If a private key of your certificate were to become compromised without detection, using short lived certificates minimizes the time frame for ongoing damage and reduces the value of the certificate to an attacker.
-
-# [Key Policies](#tab/keys)
-
-### Keys should not be active for longer than the specified number of days
-
-If you want to make sure that your keys have not been active for longer than a specified number of days, you can use this policy to audit how long your key has been active.
-
-**If your key has an activation date set**, this policy will calculate the number of days that have elapsed from the **activation date** of the key to the current date. If the number of days exceeds the threshold you set, the key will be marked as non-compliant with the policy.
-
-**If your key does not have an activation date set**, this policy will calculate the number of days that have elapsed from the **creation date** of the key to the current date. If the number of days exceeds the threshold you set, the key will be marked as non-compliant with the policy.
-
-### Keys should be the specified cryptographic type RSA or EC
-
-This policy allows you to restrict the type of keys that can be in your key vault. You can use this policy to make sure that your keys are RSA, ECC, or are HSM backed. You can choose from the following list which certificate types are allowed.
-- RSA
-- RSA - HSM
-- ECC
-- ECC - HSM
-
-### Keys using elliptic curve cryptography should have the specified curve names
-
-If you use elliptic curve cryptography or ECC keys, you can customize an allowed list of curve names from the list below. The default option allows all the following curve names.
-- P-256
-- P-256K
-- P-384
-- P-521
-
-### Keys should have expirations dates set
-
-This policy audits all keys in your key vaults and flags keys that do not have an expiration date set as non-compliant. You can also use this policy to block the creation of keys that do not have an expiration date set.
-
-### Keys should have more than the specified number of days before expiration
-
-If a key is too close to expiration, an organizational delay to rotate the key may result in an outage. Keys should be rotated at a specified number of days prior to expiration to provide sufficient time to react to a failure. This policy will audit keys that are too close to their expiration date and allows you to set this threshold in days. You can also use this policy to prevent the creation of new keys that are too close to their expiration date.
-
-### Keys should be backed by a hardware security module
-
-An HSM is a hardware security module that stores keys. An HSM provides a physical layer of protection for cryptographic keys. The cryptographic key cannot leave a physical HSM which provides a greater level of security than a software key. Some organizations have compliance requirements that mandate the use of HSM keys. Use this policy to audit any keys stored in your key vault that is not HSM backed. You can also use this policy to block the creation of new keys that are not HSM backed. This policy will apply to all key types, RSA and ECC.
+> [!NOTE]
+> It is recommended to apply [the certificate expiration policy](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Ff772fb64-8e40-40ad-87bc-7706e1949427) multiple times with different expiration thresholds, for example, at 180, 90, 60, and 30-day thresholds.
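A hedged CLI sketch of that layered approach, assuming the built-in exposes its threshold as a `daysToExpire` parameter (verify the parameter name on the definition before relying on it; scope values are placeholders):

```azurecli
# One assignment per expiration threshold, reusing the definition ID above.
for days in 180 90 60 30; do
  az policy assignment create \
    --name "kv-cert-expiry-${days}d" \
    --scope "/subscriptions/<sub-id>/resourceGroups/<rg>" \
    --policy "f772fb64-8e40-40ad-87bc-7706e1949427" \
    --params "{ \"daysToExpire\": { \"value\": ${days} } }"
done
```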
-### Keys using RSA cryptography should have a specified minimum key size
+#### Certificate Authority
-Using RSA keys with smaller key sizes is not a secure design practice. You may be subject to audit and certification standards that mandate the use of a minimum key size. The following policy allows you to set a minimum key size requirement on your key vault. You can audit keys that do not meet this minimum requirement. This policy can also be used to block the creation of new keys that do not meet the minimum key size requirement.
+Audit or enforce the selection of a specific certificate authority to issue your certificates either relying on one of Azure Key Vault's integrated certificate authorities (Digicert or GlobalSign), or a non-integrated certificate authority of your preference. You can also audit or deny the creation of self-signed certificates.
-### Keys should have the specified maximum validity period
+| Policy | Effects |
+|--|--|
+| [Certificates should be issued by the specified integrated certificate authority](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F8e826246-c976-48f6-b03e-619bb92b3d82) | Audit (_Default_), Deny, Disabled
+| [Certificates should be issued by the specified non-integrated certificate authority](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fa22f4a40-01d3-4c7d-8071-da157eeff341) | Audit (_Default_), Deny, Disabled
-Manage your organizational compliance requirements by specifying the maximum amount of time in days that a key can be valid within your key vault. Keys that are valid longer than the threshold you set will be marked as non-compliant. You can also use this policy to block the creation of new keys that have an expiration date set longer than the maximum validity period you specify.
+#### Certificate Attributes
-# [Secret Policies](#tab/secrets)
+Restrict the type of your key vault's certificates to be RSA, ECC, or HSM-backed. If you use elliptic curve cryptography or ECC certificates, you can customize and select curve names such as P-256, P-256K, P-384, and P-521. If you use RSA certificates, you can choose a minimum key size for your certificates to be 2048 bits, 3072 bits, or 4096 bits.
-### Secrets should not be active for longer than the specified number of days
+| Policy | Effects |
+|--|--|
+| [Certificates should use allowed key types](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1151cede-290b-4ba0-8b38-0ad145ac888f) | Audit (_Default_), Deny, Disabled
+| [Certificates using elliptic curve cryptography should have allowed curve names](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fbd78111f-4953-4367-9fd5-7e08808b54bf) | Audit (_Default_), Deny, Disabled
+| [Certificates using RSA cryptography should have the specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fcee51871-e572-4576-855c-047c820360f0) | Audit (_Default_), Deny, Disabled
-If you want to make sure that your secrets have not been active for longer than a specified number of days, you can use this policy to audit how long your secret has been active.
+### Keys
-**If your secret has an activation date set**, this policy will calculate the number of days that have elapsed from the **activation date** of the secret to the current date. If the number of days exceeds the threshold you set, the secret will be marked as non-compliant with the policy.
+#### HSM-backed keys
-**If your secret does not have an activation date set**, this policy will calculate the number of days that have elapsed from the **creation date** of the secret to the current date. If the number of days exceeds the threshold you set, the secret will be marked as non-compliant with the policy.
+An HSM is a hardware security module that stores keys. An HSM provides a physical layer of protection for cryptographic keys. The cryptographic key cannot leave a physical HSM, which provides a greater level of security than a software key. Some organizations have compliance requirements that mandate the use of HSM keys. You can use this policy to audit any keys stored in your key vault that are not HSM-backed. You can also use this policy to block the creation of new keys that are not HSM-backed. This policy applies to all key types, including RSA and ECC.
-### Secrets should have content type set
+| Policy | Effects |
+|--|--|
+| [Keys should be backed by a hardware security module (HSM)](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F587c79fe-dd04-4a5e-9d0b-f89598c7261b) | Audit (_Default_), Deny, Disabled
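To satisfy this policy for new keys, you can request an HSM-backed key type at creation time. A minimal sketch with placeholder names (HSM-backed keys require the Key Vault Premium SKU):

```azurecli
# Create an HSM-backed RSA key; use EC-HSM for an HSM-backed elliptic curve key.
az keyvault key create --vault-name <vault-name> --name <key-name> --kty RSA-HSM
```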
-Any plain text or encoded file can be stored as a key vault secret. However, your organization may want to set different rotation policies and restrictions on passwords, connection strings, or certificates stored as keys. A content type tag can help a user see what is stored in a secret object without reading the value of the secret. You can use this policy to audit secrets that don't have a content type tag set. You can also use this policy to prevent new secrets from being created if they don't have a content type tag set.
+#### Lifecycle of Keys
-### Secrets should have expiration date set
+With lifecycle management built-ins you can flag or block keys that do not have an expiration date, get alerts whenever delays in key rotation may result in an outage, prevent the creation of new keys that are close to their expiration date, limit the lifetime and active status of keys to drive key rotation, and prevent keys from being active for more than a specified number of days.
-This policy audits all secrets in your key vault and flags secrets that do not have an expiration date set as non-compliant. You can also use this policy to block the creation of secrets that do not have an expiration date set.
+| Policy | Effects |
+|--|--|
+| [Key Vault keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F152b15f7-8e1f-4c1f-ab71-8c010ba5dbc0) | Audit (_Default_), Deny, Disabled
+| [**[Preview]**: Managed HSM keys should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F1d478a74-21ba-4b9f-9d8f-8e6fced0eec5) | Audit (_Default_), Deny, Disabled
+| [Keys should have more than the specified number of days before expiration](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F5ff38825-c5d8-47c5-b70e-069a21955146) | Audit (_Default_), Deny, Disabled
+| [**[Preview]**: Azure Key Vault Managed HSM Keys should have more than the specified number of days before expiration](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fad27588c-0198-4c84-81ef-08efd0274653) | Audit (_Default_), Deny, Disabled
+| [Keys should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F49a22571-d204-4c91-a7b6-09b1a586fbc9) | Audit (_Default_), Deny, Disabled
+| [Keys should not be active for longer than the specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26e4b24-cf98-4c67-b48b-5a25c4c69eb9) | Audit (_Default_), Deny, Disabled
-### Secrets should have more than the specified number of days before expiration
+> [!IMPORTANT]
+> **If your key has an activation date set**, [the policy above](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fc26e4b24-cf98-4c67-b48b-5a25c4c69eb9) will calculate the number of days that have elapsed from the **activation date** of the key to the current date. If the number of days exceeds the threshold you set, the key will be marked as non-compliant with the policy. **If your key does not have an activation date set**, the policy will calculate the number of days that have elapsed from the **creation date** of the key to the current date. If the number of days exceeds the threshold you set, the key will be marked as non-compliant with the policy.
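Here is a small CLI sketch that sets both dates explicitly at key creation, so the policy evaluates against the activation date you intend; names and dates are placeholders:

```azurecli
# Create a key with an explicit activation (not-before) and expiration date.
az keyvault key create --vault-name <vault-name> --name <key-name> \
  --not-before "2023-02-01T00:00:00Z" --expires "2024-02-01T00:00:00Z"
```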
-If a secret is too close to expiration, an organizational delay to rotate the secret may result in an outage. Secrets should be rotated at a specified number of days prior to expiration to provide sufficient time to react to a failure. This policy will audit secrets that are too close to their expiration date and allows you to set this threshold in days. You can also use this policy to prevent the creation of new secrets that are too close to their expiration date.
+#### Key Attributes
-### Secrets should have the specified maximum validity period
+Restrict the type of your key vault's keys to be RSA, ECC, or HSM-backed. If you use elliptic curve cryptography or ECC keys, you can customize and select curve names such as P-256, P-256K, P-384, and P-521. If you use RSA keys, you can mandate the use of a minimum key size for current and new keys to be 2048 bits, 3072 bits, or 4096 bits. Keep in mind that using RSA keys with smaller key sizes is not a secure design practice, so it is recommended to block the creation of new keys that do not meet the minimum size requirement.
-Manage your organizational compliance requirements by specifying the maximum amount of time in days that a secret can be valid within your key vault. Secrets that are valid longer than the threshold you set will be marked as non-compliant. You can also use this policy to block the creation of new secrets that have an expiration date set longer than the maximum validity period you specify.
+| Policy | Effects |
+|--|--|
+| [Keys should be the specified cryptographic type RSA or EC](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F75c4f823-d65c-4f29-a733-01d0077fdbcb) | Audit (_Default_), Deny, Disabled
+| [Keys using elliptic curve cryptography should have the specified curve names](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fff25f3c8-b739-4538-9d07-3d6d25cfb255) | Audit (_Default_), Deny, Disabled
+| [**[Preview]**: Azure Key Vault Managed HSM keys using elliptic curve cryptography should have the specified curve names](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe58fd0c1-feac-4d12-92db-0a7e9421f53e) | Audit (_Default_), Deny, Disabled
+| [Keys using RSA cryptography should have a specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F82067dbb-e53b-4e06-b631-546d197452d9) | Audit (_Default_), Deny, Disabled
+| [**[Preview]**: Azure Key Vault Managed HSM keys using RSA cryptography should have a specified minimum key size](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F86810a98-8e91-4a44-8386-ec66d0de5d57) | Audit (_Default_), Deny, Disabled
-# [Key Vault Policies](#tab/keyvault)
+### Secrets
-### Key Vault should use a virtual network service endpoint
+#### Lifecycle of Secrets
-This policy audits any Key Vault not configured to use a virtual network service endpoint.
+With lifecycle management built-ins you can flag or block secrets that do not have an expiration date, get alerts whenever delays in secret rotation may result in an outage, prevent the creation of new secrets that are close to their expiration date, limit the lifetime and active status of secrets to drive secret rotation, and prevent secrets from being active for more than a specified number of days.
-### Resource logs in Key Vault should be enabled
+| Policy | Effects |
+|--|--|
+| [Secrets should have an expiration date](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F75262d3e-ba4a-4f43-85f8-9f72c090e5e3) | Audit (_Default_), Deny, Disabled
+| [Secrets should have more than the specified number of days before expiration](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fb0eb591a-5e70-4534-a8bf-04b9c489584a) | Audit (_Default_), Deny, Disabled
+| [Secrets should have the specified maximum validity period](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F342e8053-e12e-4c44-be01-c3c2f318400f) | Audit (_Default_), Deny, Disabled
+| [Secrets should not be active for longer than the specified number of days](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8d99835-8a06-45ae-a8e0-87a91941ccfe) | Audit (_Default_), Deny, Disabled
-Audit enabling of resource logs. This enables you to recreate activity trails to use for investigation purposes when a security incident occurs or when your network is compromised
+> [!IMPORTANT]
+> **If your secret has an activation date set**, [the policy above](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2Fe8d99835-8a06-45ae-a8e0-87a91941ccfe) will calculate the number of days that have elapsed from the **activation date** of the secret to the current date. If the number of days exceeds the threshold you set, the secret will be marked as non-compliant with the policy. **If your secret does not have an activation date set**, this policy will calculate the number of days that have elapsed from the **creation date** of the secret to the current date. If the number of days exceeds the threshold you set, the secret will be marked as non-compliant with the policy.
-### Key vaults should have purge protection enabled
+#### Secret Attributes
-Malicious deletion of a key vault can lead to permanent data loss. A malicious insider in your organization can potentially delete and purge key vaults. Purge protection protects you from insider attacks by enforcing a mandatory retention period for soft deleted key vaults. No one inside your organization or Microsoft will be able to purge your key vaults during the soft delete retention period.
+Any plain text or encoded file can be stored as an Azure key vault secret. However, your organization may want to set different rotation policies and restrictions on passwords, connection strings, or certificates stored as secrets. A content type tag can help a user see what is stored in a secret object without reading the value of the secret. You can audit secrets that don't have a content type tag set or prevent new secrets from being created if they don't have a content type tag set.
-
+| Policy | Effects |
+|--|--|
+| [Secrets should have content type set](https://portal.azure.com/#blade/Microsoft_Azure_Policy/PolicyDetailBlade/definitionId/%2Fproviders%2FMicrosoft.Authorization%2FpolicyDefinitions%2F75262d3e-ba4a-4f43-85f8-9f72c090e5e3) | Audit (_Default_), Deny, Disabled
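Here is a hedged sketch that sets both attributes when creating a secret; in the Azure CLI the secret's content type is supplied through `--description`, and all names and values below are placeholders:

```azurecli
# Store a secret with a content-type hint and an expiration date.
az keyvault secret set --vault-name <vault-name> --name <secret-name> \
  --value "<secret-value>" \
  --description "connection-string" \
  --expires "2024-02-01T00:00:00Z"
```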
## Example Scenario
You manage a key vault used by multiple teams that contains 100 certificates, and you want to make sure that none of the certificates in the key vault are valid for longer than 2 years.
1. You contact the owners of these certificates and communicate the new security requirement that certificates cannot be valid for longer than 2 years. Some teams respond and 15 of the certificates were renewed with a maximum validity period of 2 years or less. Other teams do not respond, and you still have 5 non-compliant certificates in your key vault.
1. You change the effect of the policy you assigned to "deny". The 5 non-compliant certificates are not revoked, and they continue to function. However, they cannot be renewed with a validity period that is greater than 2 years.
-## Enabling and managing a Key Vault policy through the Azure portal
+## Enabling and managing a key vault policy through the Azure portal
### Select a Policy Definition
-1. Log in to the Azure portal.
+1. Log in to the Azure portal.
1. Search "Policy" in the Search Bar and Select **Policy**. ![Screenshot that shows the Search Bar.](../media/policy-img1.png)
You manage a key vault used by multiple teams that contains 100 certificates, and you want to make sure that none of the certificates in the key vault are valid for longer than 2 years.
1. From this page you can filter results by compliant or non-compliant vaults. Here you can see a list of non-compliant key vaults within the scope of the policy assignment. A vault is considered non-compliant if any of the components (certificates) in the vault are non-compliant. You can select an individual vault to view the individual non-compliant components (certificates).
- ![Screenshot that shows a list of non-compliant key vaults within the scope of the policy assignment.](../media/policy-img9.png)
+ ![Screenshot that shows a list of non-compliant Key Vaults within the scope of the policy assignment.](../media/policy-img9.png)
1. View the name of the components within a vault that are non-compliant.

   ![Screenshot that shows where you can view the name of the components within a vault that are non-compliant.](../media/policy-img10.png)
-1. If you need to check whether users are being denied the ability to create resources within key vault, you can click on the **Component Events (preview)** tab to view a summary of denied certificate operations with the requestor and timestamps of requests.
-
+1. If you need to check whether users are being denied the ability to create resources within the key vault, you can click on the **Component Events (preview)** tab to view a summary of denied certificate operations with the requestor and timestamps of requests.
![Overview of how Azure Key Vault works](../media/policy-img11.png)

## Feature Limitations

Assigning a policy with a "deny" effect may take up to 30 mins (average case) and 1 hour (worst case) to start denying the creation of non-compliant resources. The delay refers to the following scenarios:
-1. A new policy is assigned
-2. An existing policy assignment is modified
-3. A new KeyVault (resource) is created in a scope with existing policies.
+1. A new policy is assigned.
+2. An existing policy assignment is modified.
+3. A new KeyVault (resource) is created in a scope with existing policies.
+
+The policy evaluation of existing components in a vault may take up to 1 hour (average case) and 2 hours (worst case) before compliance results are viewable in the portal UI.
-The policy evaluation of existing components in a vault may take up to 1 hour (average case) and 2 hours (worst case) before compliance results are viewable in the portal UI.
If the compliance results show up as "Not Started" it may be due to the following reasons:
-- The policy valuation has not completed yet. Initial evaluation latency can take up to 2 hours in the worst-case scenario. 
+
+- The policy evaluation has not completed yet. Initial evaluation latency can take up to 2 hours in the worst-case scenario.
- There are no key vaults in the scope of the policy assignment.
- There are no key vaults with certificates within the scope of the policy assignment.

---

> [!NOTE]
-> Azure Policy
-> [Resource Provider modes](../../governance/policy/concepts/definition-structure.md#resource-provider-modes),
-> such as those for Azure Key Vault, provide information about compliance on the
-> [Component Compliance](../../governance/policy/how-to/get-compliance-data.md#component-compliance)
+> Azure Policy [Resource Provider modes](../../governance/policy/concepts/definition-structure.md#resource-provider-modes), such as those for Azure Key Vault, provide information about compliance on the [Component Compliance](../../governance/policy/how-to/get-compliance-data.md#component-compliance)
> page.

## Next Steps

-- [Logging and frequently asked questions for Azure policy for key vault](../general/troubleshoot-azure-policy-for-key-vault.md)
+- [Logging and frequently asked questions for Azure policy for Key Vault](troubleshoot-azure-policy-for-key-vault.md)
- Learn more about the [Azure Policy service](../../governance/policy/overview.md)
- See Key Vault samples: [Key Vault built-in policy definitions](../../governance/policy/samples/built-in-policies.md#key-vault)
-- Learn about [Microsoft cloud securiy benchmark on Key vault](/security/benchmark/azure/baselines/key-vault-security-baseline)
+- Learn about [Microsoft cloud security benchmark on Key Vault](/security/benchmark/azure/baselines/key-vault-security-baseline)
key-vault Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/backup.md
Title: Back up a secret, key, or certificate stored in Azure Key Vault | Microsoft Docs description: Use this document to help back up a secret, key, or certificate stored in Azure Key Vault. --+ tags: azure-resource-manager Previously updated : 3/18/2021-- Last updated : 01/17/2023++ #Customer intent: As an Azure Key Vault administrator, I want to back up a secret, key, or certificate in my key vault. # Azure Key Vault backup and restore
key-vault Client Libraries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/client-libraries.md
tags: azure-resource-manager
Previously updated : 08/14/2020 Last updated : 01/20/2023
# Client Libraries for Azure Key Vault
-The client libraries for Azure Key Vault allow programmatic access to Key Vault functionality from a variety of languages, including .NET, Python, Java, and JavaScript.
+The client libraries for Azure Key Vault allow programmatic access to Key Vault functionality from several languages, including .NET, Python, Java, and JavaScript.
## Client libraries per language and object
-Each SDK has separate client libraries for key vault, secrets, keys, and certificates, per the table below.
+Each SDK has separate client libraries for key vault, secrets, keys, and certificates.
| Language | Secrets | Keys | Certificates | Key Vault (Management plane) | |--|--|--|--|--|
key-vault Developers Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/developers-guide.md
Previously updated : 10/05/2020 Last updated : 01/17/2023 # Azure Key Vault developer's guide Azure Key Vault allows you to securely access sensitive information from within your applications: -- Keys, secrets, and certificates are protected without your having to write the code yourself, and you can easily use them from your applications.
+- Keys, secrets, and certificates are protected without you having to write the code yourself, and you can easily use them from your applications.
- You allow customers to own and manage their own keys, secrets, and certificates so you can concentrate on providing the core software features. In this way, your applications won't own the responsibility or potential liability for your customers' tenant keys, secrets, and certificates. - Your application can use keys for signing and encryption yet keep the key management external from your application. For more information, see [About keys](../keys/about-keys.md). - You can manage credentials like passwords, access keys, and SAS tokens by storing them in Key Vault as secrets. For more information, see [About secrets](../secrets/about-secrets.md).
You can use the predefined Key Vault Contributor role to grant management access
| Azure CLI | PowerShell | REST API | Resource Manager | .NET | Python | Java | JavaScript | |--|--|--|--|--|--|--|--|
-|[Reference](/cli/azure/keyvault)<br>[Quickstart](quick-create-cli.md)|[Reference](/powershell/module/az.keyvault)<br>[Quickstart](quick-create-powershell.md)|[Reference](/rest/api/keyvault/)|[Reference](/azure/templates/microsoft.keyvault/vaults)<br>[Quickstart](./vault-create-template.md)|[Reference](/dotnet/api/microsoft.azure.management.keyvault)|[Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)|[Reference](/java/api/overview/azure/resourcemanager-keyvault-readme?view=azure-java-stable)|[Reference](/javascript/api/@azure/arm-keyvault)|
+|[Reference](/cli/azure/keyvault)<br>[Quickstart](quick-create-cli.md)|[Reference](/powershell/module/az.keyvault)<br>[Quickstart](quick-create-powershell.md)|[Reference](/rest/api/keyvault/)|[Reference](/azure/templates/microsoft.keyvault/vaults)<br>[Quickstart](./vault-create-template.md)|[Reference](/dotnet/api/microsoft.azure.management.keyvault)|[Reference](/python/api/azure-mgmt-keyvault/azure.mgmt.keyvault)|[Reference](/java/api/overview/azure/resourcemanager-keyvault-readme?view=azure-java-stable&preserve-view=true)|[Reference](/javascript/api/@azure/arm-keyvault)|
For installation packages and source code, see [Client libraries](client-libraries.md).
key-vault Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/disaster-recovery-guidance.md
Previously updated : 03/31/2021 Last updated : 01/17/2023 + # Azure Key Vault availability and redundancy
Azure Key Vault features multiple layers of redundancy to make sure that your keys and secrets remain available to your application even if individual components of the service fail. > [!NOTE]
-> This guide applies to vaults. Managed HSM pools use a different high availability and disaster recovery model. See [Managed HSM Disaster Recovery Guide](../managed-hsm/disaster-recovery-guide.md) for more information.
+> This guide applies to vaults. Managed HSM pools use a different high availability and disaster recovery model; for more information, see the [Managed HSM Disaster Recovery Guide](../managed-hsm/disaster-recovery-guide.md).
-The contents of your key vault are replicated within the region and to a secondary region at least 150 miles away, but within the same geography to maintain high durability of your keys and secrets. For details about specific region pairs, see [Azure paired regions](../../availability-zones/cross-region-replication-azure.md). The exception to the paired regions model is single region geo, for example Brazil South, Qatar Central. Such regions allow only the option to keep data resident within the same region. Both Brazil South and Qatar Central use zone redundant storage (ZRS) to replicate your data three times within the single location/region. For AKV Premium, only 2 of the 3 regions are used to replicate data from the HSM's.
+The contents of your key vault are replicated within the region and to a secondary region at least 150 miles away, but within the same geography, to maintain high durability of your keys and secrets. For details about specific region pairs, see [Azure paired regions](../../availability-zones/cross-region-replication-azure.md). The exception to the paired-regions model is single-region geographies, for example Brazil South and Qatar Central. Such regions allow only the option to keep data resident within the same region. Both Brazil South and Qatar Central use zone redundant storage (ZRS) to replicate your data three times within the single location/region. For AKV Premium, only two of the three regions are used to replicate data from the HSMs.
-If individual components within the key vault service fail, alternate components within the region step in to serve your request to make sure that there is no degradation of functionality. You don't need to take any action to start this process, it happens automatically and will be transparent to you.
+If individual components within the key vault service fail, alternate components within the region step in to serve your request to make sure that there's no degradation of functionality. You don't need to take any action; the process happens automatically and will be transparent to you.
-In the rare event that an entire Azure region is unavailable, the requests that you make of Azure Key Vault in that region are automatically routed (*failed over*) to a secondary region except in the case of the Brazil South and Qatar Central region. When the primary region is available again, requests are routed back (*failed back*) to the primary region. Again, you don't need to take any action because this happens automatically.
+## Failover
-In the Brazil South and Qatar Central region, you must plan for the recovery of your Azure key vaults in a region failure scenario. To back up and restore your Azure key vault to a region of your choice, complete the steps that are detailed in [Azure Key Vault backup](backup.md).
+In the rare event that an entire Azure region is unavailable, the requests that you make of Azure Key Vault in that region are automatically routed (*failed over*) to a secondary region (except as noted). When the primary region is available again, requests are routed back (*failed back*) to the primary region. Again, you don't need to take any action because this happens automatically.
+
+> [!IMPORTANT]
+> Failover is not supported in:
+>
+> - Brazil South
+> - Brazil Southeast
+> - Qatar Central (no paired region)
+> - Poland Central (no paired region)
+> - West US 3
+>
+> All other regions use read-access geo-redundant storage (RA-GRS). For more information, see [Azure Storage redundancy: Redundancy in a secondary region](../../storage/common/storage-redundancy.md#redundancy-in-a-secondary-region).
+
+In the Brazil South and Qatar Central region, you must plan for the recovery of your Azure key vaults in a region failure scenario. To back up and restore your Azure key vault to a region of your choice, complete the steps that are detailed in [Azure Key Vault backup](backup.md).
Through this high availability design, Azure Key Vault requires no downtime for maintenance activities. There are a few caveats to be aware of:
-* In the event of a region failover, it may take a few minutes for the service to fail over. Requests that are made during this time before failover may fail.
-* If you are using private link to connect to your key vault, it may take up to 20 minutes for the connection to be re-established in the event of a failover.
-* During failover, your key vault is in read-only mode. Requests that are supported in this mode are:
+* In the event of a region failover, it may take a few minutes for the service to fail over. Requests made during this time before failover may fail.
+* If you're using private link to connect to your key vault, it may take up to 20 minutes for the connection to be re-established in the event of a failover.
+* During failover, your key vault is in read-only mode. Requests supported in this mode:
+ * List certificates
+ * Get certificates
+ * List secrets
There are a few caveats to be aware of:
* Sign
* Backup
-* During failover, you will not be able to make changes to key vault properties. You will not be able to change access policy or firewall configurations and settings.
+During failover, you won't be able to make changes to key vault properties. You won't be able to change access policy or firewall configurations and settings.
+
+After a failover is failed back, all request types (including read *and* write requests) are available.
+
+## Next steps
-* After a failover is failed back, all request types (including read *and* write requests) are available.
+- [Azure Key Vault backup](backup.md)
+- [Managed HSM disaster recovery guide](../managed-hsm/disaster-recovery-guide.md)
+- [Azure paired regions](../../availability-zones/cross-region-replication-azure.md)
key-vault Integrate Databricks Blob Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/integrate-databricks-blob-storage.md
Previously updated : 06/16/2020 Last updated : 01/20/2023 # Tutorial: Access Azure Blob Storage using Azure Databricks and Azure Key Vault
key-vault Logging https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/logging.md
tags: azure-resource-manager
Previously updated : 12/18/2020 Last updated : 01/20/2023 #Customer intent: As an Azure Key Vault administrator, I want to enable logging so I can monitor how my key vaults are accessed. # Azure Key Vault logging
-After you create one or more key vaults, you'll likely want to monitor how and when your key vaults are accessed, and by whom. You can do this by enabling logging for Azure Key Vault, which saves information in an Azure storage account that you provide. For step by step guidance on setting this up, see [How to enable Key Vault logging](howto-logging.md).
+After you create one or more key vaults, you'll likely want to monitor how and when your key vaults are accessed, and by whom. Enabling logging for Azure Key Vault saves this information in an Azure storage account that you provide. For step-by-step guidance, see [How to enable Key Vault logging](howto-logging.md).
-You can access your logging information 10 minutes (at most) after the key vault operation. In most cases, it will be quicker than this. It's up to you to manage your logs in your storage account:
+You can access your logging information 10 minutes (at most) after the key vault operation. In most cases, it will be quicker. It's up to you to manage your logs in your storage account:
* Use standard Azure access control methods in your storage account to secure your logs by restricting who can access them.
* Delete logs that you no longer want to keep in your storage account.
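For example, diagnostic logs for vaults are written to a storage container conventionally named `insights-logs-auditevent`; here is a minimal sketch for listing them so you can manage retention yourself (the account name is a placeholder):

```azurecli
# List Key Vault AuditEvent log blobs in the configured storage account.
az storage blob list --account-name <storage-account> \
  --container-name insights-logs-auditevent --output table
```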
The following table lists the field names and descriptions:
| Field name | Description | | | | | **time** |Date and time in UTC. |
-| **resourceId** |Azure Resource Manager resource ID. For Key Vault logs, this is always the Key Vault resource ID. |
+| **resourceId** |Azure Resource Manager resource ID. For Key Vault logs, it is always the Key Vault resource ID. |
| **operationName** |Name of the operation, as documented in the next table. | | **operationVersion** |REST API version requested by the client. | | **category** |Type of result. For Key Vault logs, `AuditEvent` is the single, available value. | | **resultType** |Result of the REST API request. | | **resultSignature** |HTTP status. |
-| **resultDescription** |Additional description about the result, when available. |
-| **durationMs** |Time it took to service the REST API request, in milliseconds. This does not include the network latency, so the time you measure on the client side might not match this time. |
+| **resultDescription** |More description about the result, when available. |
+| **durationMs** |Time it took to service the REST API request, in milliseconds. The time does not include the network latency, so the time you measure on the client side might not match this time. |
| **callerIpAddress** |IP address of the client that made the request. | | **correlationId** |An optional GUID that the client can pass to correlate client-side logs with service-side (Key Vault) logs. |
-| **identity** |Identity from the token that was presented in the REST API request. This is usually a "user," a "service principal," or the combination "user+appId," as in the case of a request that results from an Azure PowerShell cmdlet. |
+| **identity** | Identity from the token that was presented in the REST API request. Usually a "user," a "service principal," or the combination "user+appId", for instance when the request comes from an Azure PowerShell cmdlet. |
| **properties** |Information that varies based on the operation (**operationName**). In most cases, this field contains client information (the user agent string passed by the client), the exact REST API request URI, and the HTTP status code. In addition, when an object is returned as a result of a request (for example, **KeyCreate** or **VaultGet**), it also contains the key URI (as `id`), vault URI, or secret URI. | The **operationName** field values are in *ObjectVerb* format. For example:
The following table lists the **operationName** values and corresponding REST AP
You can use the Key Vault solution in Azure Monitor logs to review Key Vault `AuditEvent` logs. In Azure Monitor logs, you use log queries to analyze data and get the information you need.
-For more information, including how to set this up, see [Azure Key Vault in Azure Monitor](../key-vault-insights-overview.md).
+For more information, including how to set it up, see [Azure Key Vault in Azure Monitor](../key-vault-insights-overview.md).
-For understanding how to analyze logs, see [Sample kusto log queries](./monitor-key-vault.md#analyzing-logs)
+To learn how to analyze logs, see [Sample Kusto log queries](./monitor-key-vault.md#analyzing-logs).
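As one hedged way to run such a query from the command line (requires the `log-analytics` CLI extension; the workspace GUID is a placeholder):

```azurecli
# Fetch a sample of Key Vault AuditEvent entries from the workspace.
az monitor log-analytics query \
  --workspace "<workspace-guid>" \
  --analytics-query 'AzureDiagnostics
    | where ResourceProvider == "MICROSOFT.KEYVAULT" and Category == "AuditEvent"
    | take 10'
```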
## Next steps
key-vault Move Subscription https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/move-subscription.md
tags: azure-resource-manager
Previously updated : 05/05/2020 Last updated : 01/20/2023 # Customer intent: As a key vault administrator, I want to move my vault to another subscription.
For more information about Azure Key Vault and Azure Active Directory, see
> **Key Vaults used for disk encryption cannot be moved** > If you are using key vault with disk encryption for a VM, the key vault cannot be moved to a different resource group or a subscription while disk encryption is enabled. You must disable disk encryption prior to moving the key vault to a new resource group or subscription.
-Some service principals (users and applications) are bound to a specific tenant. If you move your key vault to a subscription in another tenant, there is a chance that you will not be able to restore access to a specific service principal. Check to make sure that all essential service principals exist in the tenant where you are moving your key vault.
+Some service principals (users and applications) are bound to a specific tenant. If you move your key vault to a subscription in another tenant, there's a chance that you won't be able to restore access to a specific service principal. Check to make sure that all essential service principals exist in the tenant where you are moving your key vault.
## Prerequisites
You can check existing roles using [Azure portal](../../role-based-access-contro
1. Sign in to the Azure portal at https://portal.azure.com. 2. Navigate to your [key vault](overview.md)
-3. Click on the "Overview" tab
3. Select the "Overview" tab
4. Select the "Move" button 5. Select "Move to another subscription" from the dropdown options 6. Select the resource group where you want to move your key vault
You can check existing roles using [Azure portal](../../role-based-access-contro
## Additional steps when subscription is in a new tenant
-If you moved your key vault to a subscription in a new tenant, you need to manually update the tenant ID and remove old access policies and role assignments. Here are tutorials for these steps in PowerShell and Azure CLI. If you are using PowerShell, you may need to run the Clear-AzContext command documented below to allow you to see resources outside your current selected scope.
+If you moved your key vault to a subscription in a new tenant, you need to manually update the tenant ID and remove old access policies and role assignments. Here are tutorials for these steps in PowerShell and Azure CLI. If you are using PowerShell, you may need to run the Clear-AzContext command to allow you to see resources outside your current selected scope.
### Update tenant ID in a key vault
key-vault Private Link Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-diagnostics.md
Title: Diagnose private links configuration issues on Azure Key Vault
description: Resolve common private links issues with Key Vault and deep dive into the configuration Previously updated : 09/30/2020 Last updated : 01/17/2023
If the application, script or portal is running on an arbitrary Internet-connect
### If you use a managed solution, refer to specific documentation
-This guide is NOT applicable to solutions that are managed by Microsoft, where the key vault is accessed by an Azure product that exists independently from the customer Virtual Network. Examples of such scenarios are Azure Storage or Azure SQL configured for encryption at rest, Azure Event Hub encrypting data with customer-provided keys, Azure Data Factory accessing service credentials stored in key vault, Azure Pipelines retrieving secrets from key vault, and other similar scenarios. In these cases, *you must check if the product supports key vaults with the firewall enabled*. This support is typically performed with the [Trusted Services](overview-vnet-service-endpoints.md#trusted-services) feature of Key Vault firewall. However, many products are not included in the list of trusted services, for a variety of reasons. In that case, reach the product-specific support.
+This guide is NOT applicable to solutions that are managed by Microsoft, where the key vault is accessed by an Azure product that exists independently from the customer Virtual Network. Examples of such scenarios are Azure Storage or Azure SQL configured for encryption at rest, Azure Event Hubs encrypting data with customer-provided keys, Azure Data Factory accessing service credentials stored in key vault, Azure Pipelines retrieving secrets from key vault, and other similar scenarios. In these cases, *you must check if the product supports key vaults with the firewall enabled*. This support is typically provided through the [Trusted Services](overview-vnet-service-endpoints.md#trusted-services) feature of Key Vault firewall. However, many products are not included in the list of trusted services, for various reasons. In that case, contact the product-specific support channel.
-A small number of Azure products supports the concept of *vnet injection*. In simple terms, the product adds a network device into the customer Virtual Network, allowing it to send requests as if was deployed to the Virtual Network. A notable example is [Azure Databricks](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject). Products like this can make requests to the key vault using the private links, and this troubleshooting guide may help.
+A few Azure products support the concept of *VNet injection*. In simple terms, the product adds a network device into the customer Virtual Network, allowing it to send requests as if it were deployed to the Virtual Network. A notable example is [Azure Databricks](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject). Products like this can make requests to the key vault using private links, and this troubleshooting guide may help.
## 2. Confirm that the connection is approved and succeeded
The following steps validate that the private endpoint connection is approved a
1. Open the Azure portal and open your key vault resource. 2. In the left menu, select **Networking**.
-3. Click the **Private endpoint connections** tab. This will show all private endpoint connections and their respective states. If there are no connections, or if the connection for your Virtual Network is missing, you have to create a new Private Endpoint. This will be covered later.
+3. Select the **Private endpoint connections** tab. This will show all private endpoint connections and their respective states. If there are no connections, or if the connection for your Virtual Network is missing, you have to create a new Private Endpoint. This will be covered later.
4. Still in **Private endpoint connections**, find the one you are diagnosing and confirm that "Connection state" is **Approved** and "Provisioning state" is **Succeeded**.
   - If the connection is in the "Pending" state, you might be able to just approve it.
   - If the connection is in a "Rejected", "Failed", "Error", "Disconnected", or other state, it's not effective at all; you have to create a new Private Endpoint resource.
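These checks can also be scripted; the sketch below uses placeholder resource names and assumes the `az keyvault private-endpoint-connection` commands are available in your CLI version:

```azurecli
# Inspect a vault's private endpoint connection, then approve it if it's pending
az keyvault private-endpoint-connection show --vault-name <vault-name> \
  --resource-group <resource-group> --name <connection-name>
az keyvault private-endpoint-connection approve --vault-name <vault-name> \
  --resource-group <resource-group> --name <connection-name> --description "Approved"
```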
You will need to diagnose hostname resolution, and for that you must know the ex
1. Open the Azure portal and open your key vault resource. 2. In the left menu, select **Networking**.
-3. Click the **Private endpoint connections** tab. This will show all private endpoint connections and their respective states.
+3. Select the **Private endpoint connections** tab. This will show all private endpoint connections and their respective states.
4. Find the one you are diagnosing and confirm that "Connection state" is **Approved** and "Provisioning state" is **Succeeded**. If you are not seeing this, go back to previous sections of this document.
5. When you find the right item, select the link in the **Private endpoint** column. This opens the Private Endpoint resource.
6. The Overview page may show a section called **Custom DNS settings**. Confirm that there is only one entry that matches the key vault hostname. That entry shows the key vault private IP address.
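The same DNS entries can be read from the CLI; the following is a hedged sketch with placeholder names:

```azurecli
# Show the FQDN-to-private-IP mappings recorded on the private endpoint
az network private-endpoint show --name <private-endpoint-name> \
  --resource-group <resource-group> --query customDnsConfigs
```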
key-vault Private Link Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/private-link-service.md
Title: Integrate Key Vault with Azure Private Link
description: Learn how to integrate Azure Key Vault with Azure Private Link Service Previously updated : 03/31/2021 Last updated : 01/17/2023
For more information, see [What is Azure Private Link?](../../private-link/priva
## Prerequisites
-To integrate a key vault with Azure Private Link, you will need the following:
+To integrate a key vault with Azure Private Link, you'll need:
- A key vault. - An Azure virtual network.
You can create a new key vault with the [Azure portal](../general/quick-create-p
After configuring the key vault basics, select the Networking tab and follow these steps: 1. Select the Private Endpoint radio button in the Networking tab.
-1. Click the "+ Add" Button to add a private endpoint.
+1. Select the "+ Add" button to add a private endpoint.
![Screenshot that shows the 'Networking' tab on the 'Create key vault' page.](../media/private-link-service-1.png)
After configuring the key vault basics, select the Networking tab and follow the
![Screenshot that shows the 'Create private endpoint' page with settings selected.](../media/private-link-service-8.png)
-You will now be able to see the configured private endpoint. You now have the option to delete and edit this private endpoint.
-Select the "Review + Create" button and create the key vault. It will take 5-10 minutes for the deployment to complete.
+You'll now be able to see the configured private endpoint. You can now delete and edit this private endpoint.
+Select the "Review + Create" button and create the key vault. It will take 5-10 minutes for the deployment to complete.
### Establish a private link connection to an existing key vault
If you already have a key vault, you can create a private link connection by fol
1. Advance through the "DNS" and "Tags" blades, accepting the defaults. 1. On the "Review + Create" blade, select "Create".
-When you create a private endpoint, the connection must be approved. If the resource for which you are creating a private endpoint is in your directory, you will be able to approve the connection request provided you have sufficient permissions; if you are connecting to an Azure resource in another directory, you must wait for the owner of that resource to approve your connection request.
+When you create a private endpoint, the connection must be approved. If the resource for which you're creating a private endpoint is in your directory, you'll be able to approve the connection request provided you have sufficient permissions; if you're connecting to an Azure resource in another directory, you must wait for the owner of that resource to approve your connection request.
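For an existing vault, the private endpoint can also be created from the CLI; this sketch uses placeholder names and the Key Vault sub-resource (group ID) `vault`:

```azurecli
# Create a private endpoint that targets an existing key vault
vaultId=$(az keyvault show --name <vault-name> --query id --output tsv)
az network private-endpoint create --name <private-endpoint-name> \
  --resource-group <resource-group> \
  --vnet-name <vnet-name> --subnet <subnet-name> \
  --private-connection-resource-id $vaultId \
  --group-id vault --connection-name <connection-name>
```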
There are four provisioning states:
There are four provisioning states:
1. If there are any connections that are pending, you will see a connection listed with "Pending" in the provisioning state.
1. Select the private endpoint you wish to approve.
1. Select the approve button.
-1. If there are any private endpoint connections you want to reject, whether it is a pending request or existing connection, select the connection and click the "Reject" button.
+1. If there are any private endpoint connections you want to reject, whether it's a pending request or existing connection, select the connection and select the "Reject" button.
![Image](../media/private-link-service-7.png)
Aliases: <your-key-vault-name>.vault.azure.net
## Troubleshooting Guide

* Check to make sure the private endpoint is in the approved state.
- 1. You can check and fix this in Azure portal. Open the Key Vault resource, and click the Networking option.
+ 1. You can check and fix this in Azure portal. Open the Key Vault resource, and select the Networking option.
 2. Then select the Private endpoint connections tab.
 3. Make sure connection state is Approved and provisioning state is Succeeded.
 4. You may also navigate to the private endpoint resource and review the same properties there, and double-check that the virtual network matches the one you are using.
Aliases: <your-key-vault-name>.vault.azure.net
* Check to make sure the Private DNS Zone is linked to the Virtual Network. This may be the issue if you are still getting the public IP address returned.
 1. If the Private DNS Zone is not linked to the virtual network, the DNS query originating from the virtual network will return the public IP address of the key vault.
- 2. Navigate to the Private DNS Zone resource in the Azure portal and click the virtual network links option.
+ 2. Navigate to the Private DNS Zone resource in the Azure portal and select the virtual network links option.
 3. The virtual network that will perform calls to the key vault must be listed.
 4. If it's not there, add it.
 5. For detailed steps, see [Link Virtual Network to Private DNS Zone](../../dns/private-dns-getstarted-portal.md#link-the-virtual-network).
* Check to make sure the Private DNS Zone is not missing an A record for the key vault.
 1. Navigate to the Private DNS Zone page.
- 2. Click Overview and check if there is an A record with the simple name of your key vault (i.e. fabrikam). Do not specify any suffix.
+ 2. Select Overview and check if there is an A record with the simple name of your key vault (for example, `fabrikam`). Do not specify any suffix.
 3. Make sure you check the spelling, and either create or fix the A record. You can use a TTL of 600 (10 minutes).
 4. Make sure you specify the correct private IP address.
* Check to make sure the A record has the correct IP address.
 1. You can confirm the IP address by opening the Private Endpoint resource in the Azure portal.
 2. Navigate to the Microsoft.Network/privateEndpoints resource in the Azure portal (not the Key Vault resource).
- 3. In the overview page look for Network interface and click that link.
+ 3. On the Overview page, look for Network interface and select that link.
 4. The link will show the Overview of the NIC resource, which contains the property Private IP address.
 5. Verify that this is the correct IP address that is specified in the A record. A CLI sketch for cross-checking the zone's A records follows.
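The zone's A records can be listed from the CLI as well; the sketch below assumes the standard `privatelink.vaultcore.azure.net` zone name and a placeholder resource group:

```azurecli
# List A records in the Key Vault private DNS zone with their private IPs
az network private-dns record-set a list \
  --resource-group <resource-group> \
  --zone-name privatelink.vaultcore.azure.net \
  --query "[].{name:name, ip:aRecords[0].ipv4Address}" --output table
```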
key-vault Rbac Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/rbac-migration.md
Previously updated : 8/30/2020 Last updated : 01/20/2023
key-vault Troubleshoot Azure Policy For Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/troubleshoot-azure-policy-for-key-vault.md
Title: Troubleshoot issues with implementing Azure policy on Key Vault
description: Troubleshooting issues with implementing Azure policy on Key Vault Previously updated : 08/17/2020 Last updated : 01/17/2023
This article guides you how to troubleshoot general errors that might occur when
## About Azure policy for Key Vault
-[Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy provides the ability to place guardrails on Azure resources to ensure they are compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they will be able to see a drill down of which resources and components are compliant and which are not.
+[Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy allows you to place guardrails on Azure resources to ensure they're compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they will be able to see a drill-down of which resources and components are compliant and which are not.
### Logging
-In order to monitor how policy evaluations are conducted, you can review the Key Vault logs. You can do this by enabling logging for Azure Key Vault, which saves information in an Azure storage account that you provide. For step by step guidance on setting this up, see [How to enable Key Vault logging](howto-logging.md).
+To monitor how policy evaluations are conducted, you can review the Key Vault logs. Enable logging for Azure Key Vault, which saves information in an Azure storage account that you provide. For step-by-step guidance, see [How to enable Key Vault logging](howto-logging.md).
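A hedged CLI sketch of enabling that logging is shown here; the vault name and storage account resource ID are placeholders:

```azurecli
# Send Key Vault AuditEvent logs to a storage account via a diagnostic setting
vaultId=$(az keyvault show --name <vault-name> --query id --output tsv)
az monitor diagnostic-settings create --name KeyVaultLogs \
  --resource $vaultId \
  --storage-account <storage-account-resource-id> \
  --logs '[{"category":"AuditEvent","enabled":true}]'
```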
-When you enable logging, a new container called **AzurePolicyEvaluationDetails** will be automatically created to collect policy related logging information in your specified storage account.
+When you enable logging, a new container called **AzurePolicyEvaluationDetails** will be automatically created to collect policy related logging information in your specified storage account.
> [!NOTE] > You should strictly regulate access to monitoring data, particularly log files, as they can contain sensitive information. Learn about applying [built-in monitoring Azure role](../../azure-monitor/roles-permissions-security.md) and limiting access.
->
->
-Individual blobs are stored as text, formatted as a JSON blob.
-Let's look at an example log entry for a Key policy : [Keys should have expiration date set](azure-policy.md?tabs=keys#secrets-should-have-expiration-date-set). This policy evaluates all keys in your key vaults and flags keys that do not have an expiration date set as non-compliant.
+Individual blobs are stored as text, formatted as a JSON blob.
+
+Let's look at an example log entry for a key policy: [Keys should have expiration date set](azure-policy.md). This policy evaluates all keys in your key vaults and flags as non-compliant any keys that don't have an expiration date set.
```json {
The following table lists the field names and descriptions:
| Field name | Description |
| --- | --- |
| **ObjectName** | Name of the object |
-| **ObjectType** |Type of key vault object, ie. certificate, secret or key |
+| **ObjectType** |Type of key vault object: certificate, secret or key |
| **IsComplianceCheck** | True if evaluation occurred during nightly audit, false if evaluation occurred during resource creation or update |
| **AssignmentId** | The ID of the policy assignment |
| **AssignmentDisplayName** | Friendly name of the policy assignment |
The following table lists the field names and descriptions:
#### Key Vault recovery blocked by Azure policy
-One of the reasons could be that your subscription (or management group) has a policy that is blocking the recovery. The fix is to adjust the policy so that it does not apply when a vault is being recovered.
+One of the reasons could be that your subscription (or management group) has a policy that is blocking the recovery. The fix is to adjust the policy so that it doesn't apply when a vault is being recovered.
-If you see the error type ```RequestDisallowedByPolicy``` for recovery due to **built-in** policy, ensure that you are using the **most updated version**.
+If you see the error type ```RequestDisallowedByPolicy``` for recovery due to **built-in** policy, ensure that you're using the **most updated version**.
If you created a **custom policy** with your own logic, here is an example of a portion of a policy that can be used to require soft delete. The recovery of a soft-deleted vault uses the same API as creating or updating a vault. However, instead of specifying the properties of the vault, it has a single "createMode" property with the value "recover". The vault will be restored with whatever properties it had when it was deleted. Policies that block requests unless they have specific properties configured will also block the recovery of soft-deleted vaults. The fix is to include a clause that will cause the policy to ignore requests where "createMode" is "recover":
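The snippet below is a hypothetical reconstruction of such a clause, wrapped in an `az policy definition create` call; the definition name, alias spellings, and conditions are illustrative and should be verified against your environment before use:

```azurecli
# Hypothetical custom policy: deny vaults without soft delete, but let
# recovery requests ("createMode": "recover") pass through untouched
az policy definition create --name deny-vault-without-soft-delete --rules '{
  "if": {
    "allOf": [
      { "field": "type", "equals": "Microsoft.KeyVault/vaults" },
      { "not": { "field": "Microsoft.KeyVault/vaults/createMode", "equals": "recover" } },
      { "field": "Microsoft.KeyVault/vaults/enableSoftDelete", "notEquals": "true" }
    ]
  },
  "then": { "effect": "deny" }
}'
```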
Microsoft.KeyVault.Data: a deleted policy assignment can take up to 24 hours to
Mitigation: update the policy assignment's effect to 'Disabled'.

#### Secret creation via ARM template misses policy evaluation
-Data plane policies that evaluate secret creation would not be applicable on [secrets created via ARM template](../secrets/quick-create-template.md?tabs=CLI) at the time of secret creation. After 24 hours, when the automated compliance check would occur, and the compliance results can be reviewed.
-
+Data plane policies that evaluate secret creation aren't applied to [secrets created via ARM template](../secrets/quick-create-template.md?tabs=CLI) at the time of secret creation. The automated compliance check occurs after 24 hours, and the compliance results can then be reviewed.
## Next steps
key-vault Troubleshooting Access Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/troubleshooting-access-issues.md
Title: Troubleshooting Azure key vault access policy issues
description: Troubleshooting Azure key vault access policy issues Previously updated : 08/10/2020 Last updated : 01/17/2023
## Frequently asked questions
-### I am not able to list or get secrets/keys/certificate. I am seeing "something went wrong.." Error.
+### I'm not able to list or get secrets/keys/certificates. I'm seeing a "something went wrong" error
+If you're having problems listing, getting, creating, or accessing a secret, make sure that you have an access policy defined for that operation: [Key Vault Access Policies](./assign-access-policy-cli.md). A minimal example is sketched below.
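One hedged way to grant a user read access to secrets from the CLI, with placeholder values:

```azurecli
# Grant a user permission to read and enumerate secrets in the vault
az keyvault set-policy --name <vault-name> --upn <user-principal-name> \
  --secret-permissions get list
```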
### How can I identify how and when key vaults are accessed?

After you create one or more key vaults, you'll likely want to monitor how and w
As you start to scale your service, the number of requests sent to your key vault will rise. Such demand has a potential to increase the latency of your requests and in extreme cases, cause your requests to be throttled which will impact the performance of your service. You can monitor key vault performance metrics and get alerted for specific thresholds, for step-by-step guide to configure monitoring, [read more](./alert.md).
-### I am not able to modify access policy, how can it be enabled?
-The user needs to have sufficient AAD permissions to modify access policy. In this case, the user would need to have higher contributor role.
+### I'm not able to modify access policy, how can it be enabled?
+
+The user needs sufficient Azure AD permissions to modify the access policy. In this case, the user needs a role with Contributor access or higher on the key vault.
### I am seeing 'Unknown Policy' error. What does that mean?
-There are two different possibilities of seeing access policy in Unknown section:
-* There might be a previous user who had access and for some reason that user does not exist.
-* If access policy is added via powershell and the access policy is added for the application objectid instead of the service principal.
-### How can I assign access control per key vault object?
+There are two reasons why you may see an access policy in the Unknown section:
-Key Vault RBAC permission model allows per object permission. Individual keys, secrets, and certificates permissions should be used
-only for specific scenarios:
+* A previous user had access but that user no longer exists.
+* The access policy was added through PowerShell using the application object ID instead of the object ID of the service principal.
-- Multi-layer applications that need to separate access control
- between layers
+### How can I assign access control per key vault object?
-- Sharing individual secret between multiple applications
+The Key Vault RBAC permission model allows per-object permissions. Individual keys, secrets, and certificates permissions should be used only for specific scenarios (a per-object role assignment is sketched after this list):
+- Multi-layer applications that need to separate access control between layers
+- Sharing an individual secret between multiple applications
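A hedged example of such a per-object assignment, scoping a built-in role to a single secret; the IDs and names are placeholders:

```azurecli
# Grant one principal read access to a single secret rather than the whole vault
az role assignment create --assignee <principal-object-id> \
  --role "Key Vault Secrets User" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.KeyVault/vaults/<vault-name>/secrets/<secret-name>"
```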
### How can I provide key vault authenticate using access control policy? The simplest way to authenticate a cloud-based application to Key Vault is with a managed identity; see [Authenticate to Azure Key Vault](authentication.md) for details.
-If you are creating an on-prem application, doing local development, or otherwise unable to use a managed identity, you can instead register a service principal manually and provide access to your key vault using an access control policy. See [Assign an access control policy](assign-access-policy-portal.md).
+If you're creating an on-premises application, doing local development, or otherwise unable to use a managed identity, you can instead register a service principal manually and provide access to your key vault using an access control policy. See [Assign an access control policy](assign-access-policy-portal.md).
### How can I give the AD group access to the key vault?
The application also needs at least one Identity and Access Management (IAM) rol
### How can I redeploy Key Vault with ARM template without deleting existing access policies?
-Currently Key Vault redeployment deletes any access policy in Key Vault and replace them with access policy in ARM template. There is no incremental option for Key Vault access policies. To preserve access policies in Key Vault, you need to read existing access policies in Key Vault and populate ARM template with those policies to avoid any access outages.
+Currently, Key Vault redeployment deletes any access policies in the key vault and replaces them with the access policies in the ARM template. There is no incremental option for Key Vault access policies. To preserve access policies, read the existing access policies in the key vault and populate the ARM template with those policies to avoid any access outages.
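One hedged way to capture the existing policies before a redeploy, with a placeholder vault name:

```azurecli
# Save the vault's current access policies so they can be merged into the template
az keyvault show --name <vault-name> --query properties.accessPolicies > access-policies.json
```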
-Another option that can help for this scenario is using Azure RBAC and roles as an alternative to access policies. With Azure RBAC, you can re-deploy the key vault without specifying the policy again. You can read more this solution [here](./rbac-guide.md).
+Another option that can help in this scenario is using Azure RBAC and roles as an alternative to access policies. With Azure RBAC, you can redeploy the key vault without specifying the policy again. You can read more about this solution [here](./rbac-guide.md).
### Recommended troubleshooting steps for the following error types

* HTTP 401: Unauthenticated Request - [Troubleshooting steps](rest-error-codes.md#http-401-unauthenticated-request)
* HTTP 403: Insufficient Permissions - [Troubleshooting steps](rest-error-codes.md#http-403-insufficient-permissions)
* HTTP 429: Too Many Requests - [Troubleshooting steps](rest-error-codes.md#http-429-too-many-requests)
-* Check if you have delete access permission to key vault: See [Assign an access policy - CLI](assign-access-policy-cli.md), [Assign an access policy - PowerShell](assign-access-policy-powershell.md), or [Assign an access policy - Portal](assign-access-policy-portal.md).
+* Check if you have the *delete* permission for the key vault: see [Assign an access policy - CLI](assign-access-policy-cli.md), [Assign an access policy - PowerShell](assign-access-policy-powershell.md), or [Assign an access policy - Portal](assign-access-policy-portal.md).
* If you have a problem authenticating to the key vault in code, use the [Authentication SDK](https://azure.github.io/azure-sdk/posts/2020-02-25/defaultazurecredentials.html).

### What are the best practices I should implement when key vault is getting throttled?
key-vault Tutorial Net Create Vault Azure Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-net-create-vault-azure-web-app.md
Previously updated : 05/06/2020 Last updated : 01/17/2023 ms.devlang: csharp
using Azure.Security.KeyVault.Secrets;
using Azure.Core; ```
-Add the following lines before the `app.UseEndpoints` call (.NET 5.0 or earlier) or `app.MapGet` call (.NET 6.0) , updating the URI to reflect the `vaultUri` of your key vault. This code uses [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential) to authenticate to Key Vault, which uses a token from managed identity to authenticate. For more information about authenticating to Key Vault, see the [Developer's Guide](./developers-guide.md#authenticate-to-key-vault-in-code). The code also uses exponential backoff for retries in case Key Vault is being throttled. For more information about Key Vault transaction limits, see [Azure Key Vault throttling guidance](./overview-throttling.md).
+Add the following lines before the `app.UseEndpoints` call (.NET 5.0 or earlier) or `app.MapGet` call (.NET 6.0), updating the URI to reflect the `vaultUri` of your key vault. This code uses [DefaultAzureCredential()](/dotnet/api/azure.identity.defaultazurecredential) to authenticate to Key Vault, which uses a token from managed identity to authenticate. For more information about authenticating to Key Vault, see the [Developer's Guide](./developers-guide.md#authenticate-to-key-vault-in-code). The code also uses exponential backoff for retries in case Key Vault is being throttled. For more information about Key Vault transaction limits, see [Azure Key Vault throttling guidance](./overview-throttling.md).
```csharp SecretClientOptions options = new SecretClientOptions()
key-vault Tutorial Python Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/tutorial-python-virtual-machine.md
Previously updated : 07/20/2020 Last updated : 01/17/2023 ms.devlang: python
The value of secret 'mySecret' in '<your-unique-keyvault-name>' is: 'Success!'
## Clean up resources
-When they are no longer needed, delete the virtual machine and your key vault. You can do this quickly by simply deleting the resource group to which they belong:
+When they're no longer needed, delete the virtual machine and your key vault. You can do both quickly by deleting the resource group to which they belong:
```azurecli az group delete -g myResourceGroup
key-vault Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/whats-new.md
tags: azure-resource-manager
Previously updated : 01/12/2020 Last updated : 01/17/2023 #Customer intent: As an Azure Key Vault administrator, I want to react to soft-delete being turned on for all key vaults.
For more information, see [Configure key auto-rotation in Key Vault](../keys/how
## January 2022
-Azure Key Vault service throughput limits have been increased to serve double its previous quota for each vault to help ensure high performance for applications. That is, for secret GET and RSA 2,048-bit software keys, you'll receive 4,000 GET transactions per 10 seconds vs 2,000 per 10 seconds previously. The service quotas are specific to operation type and the entire list can be accessed in [Azure Key Vault Service Limits](./service-limits.md).
+Azure Key Vault service throughput limits have been increased to serve double its previous quota for each vault to help ensure high performance for applications. That is, for secret GET and RSA 2,048-bit software keys, you'll receive 4,000 GET transactions per 10 seconds versus 2,000 per 10 seconds previously. The service quotas are specific to operation type and the entire list can be accessed in [Azure Key Vault Service Limits](./service-limits.md).
For the Azure update announcement, see [General availability: Azure Key Vault increased service limits for all its customers](https://azure.microsoft.com/updates/azurekeyvaultincreasedservicelimits/).
For more information, see [Configure key auto-rotation in Key Vault](../keys/how
## October 2021
-Integration of Azure Key Vault with Azure Policy has reached general availability and is now ready for production use. This capability is a step towards our commitment to simplifying secure secrets management in Azure, while also enhancing policy enforcements that you can define on Key Vault, keys, secrets and certificates. Azure Policy provides the ability to place guardrails on Key Vault and its objects to ensure they are compliant with your organizations security recommendations and compliance regulations. It allows you to perform real time policy-based enforcement and on-demand compliance assessment of existing secrets in your Azure environment. The results of audits performed by policy will be available to you in a compliance dashboard where you will be able to see a drill down of which resources and components are compliant and which are not. Azure policy for Key Vault will provide you with a full suite of built-in policies offering governance of your keys, secrets, and certificates.
+Integration of Azure Key Vault with Azure Policy has reached general availability and is now ready for production use. This capability is a step towards our commitment to simplifying secure secrets management in Azure, while also enhancing policy enforcement that you can define on Key Vault, keys, secrets, and certificates. Azure Policy allows you to place guardrails on Key Vault and its objects to ensure they're compliant with your organization's security recommendations and compliance regulations. It allows you to perform real-time policy-based enforcement and on-demand compliance assessment of existing secrets in your Azure environment. The results of audits performed by policy will be available to you in a compliance dashboard where you'll be able to see a drill-down of which resources and components are compliant and which aren't. Azure Policy for Key Vault will provide you with a full suite of built-in policies offering governance of your keys, secrets, and certificates.
You can learn more about how to [Integrate Azure Key Vault with Azure Policy](./azure-policy.md?tabs=certificates) and assign a new policy. Announcement is linked [here](https://azure.microsoft.com/updates/gaazurepolicyforkeyvault).
To support [soft delete now on by default](#soft-delete-on-by-default), two chan
### Soft delete on by default
-**Soft-delete is required to be enabled for all key vaults**, both new and pre-existing. Over the next few months the ability to opt out of soft delete will be deprecated. For full details on this potentially breaking change, as well as steps to find affected key vaults and update them beforehand, see the article [Soft-delete will be enabled on all key vaults](soft-delete-change.md).
+**Soft-delete is required to be enabled for all key vaults**, both new and pre-existing. Over the next few months the ability to opt out of soft delete will be deprecated. For full details on this potentially breaking change, and steps to find affected key vaults and update them beforehand, see the article [Soft-delete will be enabled on all key vaults](soft-delete-change.md).
### Azure TLS certificate changes
-Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates do not comply with one of the C).
+Microsoft is updating Azure services to use TLS certificates from a different set of Root Certificate Authorities (CAs). This change is being made because the current CA certificates don't comply with one of the CA/Browser Forum Baseline requirements.
## June 2020
New features and integrations released this year:
New features released this year: -- Managed storage account keys. Storage Account Keys feature added easier integration with Azure Storage. See the overview topic for more information, [Managed Storage Account Keys overview](../secrets/overview-storage-keys.md).-- Soft delete. Soft-delete feature improves data protection of your key vaults and key vault objects. See the overview topic for more information, [Soft-delete overview](./soft-delete-overview.md).
+- Managed storage account keys. Storage Account Keys feature added easier integration with Azure Storage. For more information, see [Managed Storage Account Keys overview](../secrets/overview-storage-keys.md).
+- Soft delete. Soft-delete feature improves data protection of your key vaults and key vault objects. For more information, see [Soft-delete overview](./soft-delete-overview.md).
## 2015
First preview version (version 2014-12-08-preview) was announced on January 8, 2
## Next steps
-If you have additional questions, please contact us through [support](https://azure.microsoft.com/support/options/).
+If you have questions, contact us through [support](https://azure.microsoft.com/support/options/).
key-vault Authorize Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/authorize-azure-resource-manager.md
+
+ Title: Allow key management operations through Azure Resource Manager
+description: Learn how to allow key management operations through ARM
++
+tags: azure-resource-manager
++++ Last updated : 11/14/2022++
+# Customer intent: As a managed HSM administrator, I want to authorize Azure Resource Manager to perform key management operations via Azure Managed HSM
++
+# Allow key management operations through Azure Resource Manager
+
+For many asynchronous operations in the Azure portal and template deployments, Azure Resource Manager must be trusted to act on behalf of users. Azure Key Vault trusts Azure Resource Manager, but for many higher-assurance environments, such trust in the Azure portal and Azure Resource Manager may be considered a risk.
+
+Azure Managed HSM doesn't trust Azure Resource Manager by default. However, for environments where such risk is an acceptable tradeoff for the ease of use of the Azure portal and template deployments, Managed HSM offers a way for an administrator to opt in to this trust.
+
+For the Azure portal or Azure Resource Manager to interact with Azure Managed HSM in the same way as Azure Key Vault Standard and Premium, an authorized Managed HSM administrator must allow Azure Resource Manager to act on behalf of the user. To change this behavior and allow users to use Azure portal or Azure Resource Manager to create new keys or list keys, make the following Azure Managed HSM setting update:
+
+```azurecli-interactive
+az rest --method PATCH --url "https://<managed-hsm-url>/settings/AllowKeyManagementOperationsThroughARM" --body "{\"value\":\"true\"}" --headers "Content-Type=application/json" --resource "https://managedhsm.azure.net"
+```
+
+To disable this trust and revert to the default behavior of Managed HSM:
+
+```azurecli-interactive
+az rest --method PATCH --url "https://<managed-hsm-url>/settings/AllowKeyManagementOperationsThroughARM" --body "{\"value\":\"false\"}" --headers "Content-Type=application/json" --resource "https://managedhsm.azure.net"
+```
+
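To confirm the current value, reading the setting back may be possible with a GET against the same endpoint; this assumes the Managed HSM settings API supports GET in your service version, so treat it as a sketch:

```azurecli
az rest --method GET --url "https://<managed-hsm-url>/settings/AllowKeyManagementOperationsThroughARM" --resource "https://managedhsm.azure.net"
```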
+## Next steps
+
+- [Control your data with Managed HSM](mhsm-control-data.md)
+- [Azure Managed HSM access control](access-control.md)
key-vault Azure Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/managed-hsm/azure-policy.md
-# Integrate Azure Managed HSM with Azure Policy
+# Integrate Azure Managed HSM with Azure Policy (preview)
[Azure Policy](../../governance/policy/index.yml) is a governance tool that gives users the ability to audit and manage their Azure environment at scale. Azure Policy provides the ability to place guardrails on Azure resources to ensure they're compliant with assigned policy rules. It allows users to perform audit, real-time enforcement, and remediation of their Azure environment. The results of audits performed by policy will be available to users in a compliance dashboard where they'll be able to see a drill-down of which resources and components are compliant and which aren't. For more information, see the [Overview of the Azure Policy service](../../governance/policy/overview.md).
Example Usage Scenarios:
**Deny**: When the effect of a policy is set to deny, the policy will block the creation of new components such as weaker keys, and will block new versions of existing keys that do not comply with the policy definition. Existing non-compliant resources within a Managed HSM are not affected. The 'audit' capabilities will continue to operate.

### Keys using elliptic curve cryptography should have the specified curve names

If you use elliptic curve cryptography or ECC keys, you can customize an allowed list of curve names from the list below. The default option allows all the following curve names.
If there are no compliance results of a pool after one day. Check if the role as
- [Logging and frequently asked questions for Azure policy for key vault](../general/troubleshoot-azure-policy-for-key-vault.md) - Learn more about the [Azure Policy service](../../governance/policy/overview.md) - See Key Vault samples: [Key Vault built-in policy definitions](../../governance/policy/samples/built-in-policies.md#key-vault)-- Learn about [Microsoft cloud security benchmark on Key vault](/security/benchmark/azure/baselines/key-vault-security-baseline)
+- Learn about [Microsoft cloud security benchmark on Key vault](/security/benchmark/azure/baselines/key-vault-security-baseline)
key-vault Policy Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/policy-reference.md
the link in the **Version** column to view the source on the
- See the built-ins on the [Azure Policy GitHub repo](https://github.com/Azure/azure-policy). - Review the [Azure Policy definition structure](../governance/policy/concepts/definition-structure.md).-- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
+- Review [Understanding policy effects](../governance/policy/concepts/effects.md).
key-vault About Secrets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/about-secrets.md
tags: azure-resource-manager
Previously updated : 09/04/2019 Last updated : 01/17/2023
[Key Vault](../general/overview.md) provides secure storage of generic secrets, such as passwords and database connection strings.
-From a developer's perspective, Key Vault APIs accept and return secret values as strings. Internally, Key Vault stores and manages secrets as sequences of octets (8-bit bytes), with a maximum size of 25k bytes each. The Key Vault service doesn't provide semantics for secrets. It merely accepts the data, encrypts it, stores it, and returns a secret identifier ("id"). The identifier can be used to retrieve the secret at a later time.
+From a developer's perspective, Key Vault APIs accept and return secret values as strings. Internally, Key Vault stores and manages secrets as sequences of octets (8-bit bytes), with a maximum size of 25k bytes each. The Key Vault service doesn't provide semantics for secrets. It merely accepts the data, encrypts it, stores it, and returns a secret identifier (`id`). The identifier can be used to retrieve the secret at a later time.
-For highly sensitive data, clients should consider additional layers of protection for data. Encrypting data using a separate protection key prior to storage in Key Vault is one example.
+For highly sensitive data, clients should consider extra layers of protection for data. Encrypting data using a separate protection key prior to storage in Key Vault is one example.
-Key Vault also supports a contentType field for secrets. Clients may specify the content type of a secret to assist in interpreting the secret data when it's retrieved. The maximum length of this field is 255 characters. The suggested usage is as a hint for interpreting the secret data. For instance, an implementation may store both passwords and certificates as secrets, then use this field to differentiate. There are no predefined values.
+Key Vault also supports a contentType field for secrets. Clients may specify the content type of a secret to help interpret the secret data when it's retrieved. The maximum length of this field is 255 characters. The suggested usage is as a hint for interpreting the secret data. For instance, an implementation may store both passwords and certificates as secrets, then use this field to differentiate. There are no predefined values.
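As a hedged illustration, the CLI's `--description` option records this hint when a secret is set; the names and values here are placeholders:

```azurecli
# Store a secret with a content-type hint describing what the value holds
az keyvault secret set --vault-name <vault-name> --name SqlPassword \
  --value "<secret-value>" --description "password"
```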
## Encryption
In addition to the secret data, the following attributes may be specified:
- *exp*: IntDate, optional, default is **forever**. The *exp* (expiration time) attribute identifies the expiration time on or after which the secret data SHOULD NOT be retrieved, except in [particular situations](#date-time-controlled-operations). This field is for **informational** purposes only as it informs users of key vault service that a particular secret may not be used. Its value MUST be a number containing an IntDate value. - *nbf*: IntDate, optional, default is **now**. The *nbf* (not before) attribute identifies the time before which the secret data SHOULD NOT be retrieved, except in [particular situations](#date-time-controlled-operations). This field is for **informational** purposes only. Its value MUST be a number containing an IntDate value. -- *enabled*: boolean, optional, default is **true**. This attribute specifies whether the secret data can be retrieved. The enabled attribute is used in conjunction with *nbf* and *exp* when an operation occurs between *nbf* and *exp*, it will only be permitted if enabled is set to **true**. Operations outside the *nbf* and *exp* window are automatically disallowed, except in [particular situations](#date-time-controlled-operations).
+- *enabled*: boolean, optional, default is **true**. This attribute specifies whether the secret data can be retrieved. The *enabled* attribute is used with *nbf* and *exp*: when an operation occurs between *nbf* and *exp*, it's permitted only if *enabled* is set to **true**. Operations outside the *nbf* and *exp* window are automatically disallowed, except in [particular situations](#date-time-controlled-operations).
-There are additional read-only attributes that are included in any response that includes secret attributes:
+There are more read-only attributes that are included in any response that includes secret attributes:
- *created*: IntDate, optional. The created attribute indicates when this version of the secret was created. This value is null for secrets created prior to the addition of this attribute. Its value must be a number containing an IntDate value.
- *updated*: IntDate, optional. The updated attribute indicates when this version of the secret was updated. This value is null for secrets that were last updated prior to the addition of this attribute. Its value must be a number containing an IntDate value.
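These attributes can be set from the CLI; a hedged sketch with placeholder values:

```azurecli
# Set an informational validity window and keep the secret retrievable
az keyvault secret set-attributes --vault-name <vault-name> --name SqlPassword \
  --not-before "2023-02-01T00:00:00Z" --expires "2024-02-01T00:00:00Z" --enabled true
```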
A secret's **get** operation will work for not-yet-valid and expired secrets, ou
## Secret access control
-Access Control for secrets managed in Key Vault, is provided at the level of the Key Vault that contains those secrets. The access control policy for secrets, is distinct from the access control policy for keys in the same Key Vault. Users may create one or more vaults to hold secrets, and are required to maintain scenario appropriate segmentation and management of secrets.
+Access control for secrets managed in Key Vault is provided at the level of the key vault that contains those secrets. The access control policy for secrets is distinct from the access control policy for keys in the same key vault. Users may create one or more vaults to hold secrets, and are required to maintain scenario-appropriate segmentation and management of secrets.
The following permissions can be used, on a per-principal basis, in the secrets access control entry on a vault, and closely mirror the operations allowed on a secret object:
How-to guides to control access in Key Vault:
- [Provide access to Key Vault keys, certificates, and secrets with an Azure role-based access control](../general/rbac-guide.md) ## Secret tags
-You can specify additional application-specific metadata in the form of tags. Key Vault supports up to 15 tags, each of which can have a 256 character name and a 256 character value.
+You can specify more application-specific metadata in the form of tags. Key Vault supports up to 15 tags, each of which can have a 256-character name and a 256-character value. A tagging sketch follows the note below.
>[!Note] >Tags are readable by a caller if they have the *list* or *get* permission.
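A hedged example of setting tags when storing a secret; the tag names are illustrative:

```azurecli
az keyvault secret set --vault-name <vault-name> --name SqlPassword \
  --value "<secret-value>" --tags team=payments env=prod
```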
key-vault Quick Create Net https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/quick-create-net.md
Title: Quickstart - Azure Key Vault secrets client library for .NET
description: Learn how to create, retrieve, and delete secrets from an Azure key vault using the .NET client library Previously updated : 09/23/2020 Last updated : 01/20/2023
await client.SetSecretAsync(secretName, secretValue);
```

> [!NOTE]
-> If secret name exists, above code will create new version of that secret.
+> If the secret name already exists, the code creates a new version of that secret.
### Retrieve a secret
key-vault Tutorial Rotation Dual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation-dual.md
- Title: Rotation tutorial for resources with two sets of credentials
-description: Use this tutorial to learn how to automate the rotation of a secret for resources that use two sets of authentication credentials.
---
-tags: 'rotation'
--- Previously updated : 06/22/2020---
-# Automate the rotation of a secret for resources that have two sets of authentication credentials
-
-The best way to authenticate to Azure services is by using a [managed identity](../general/authentication.md), but there are some scenarios where that isn't an option. In those cases, access keys or passwords are used. You should rotate access keys and passwords frequently.
-
-This tutorial shows how to automate the periodic rotation of secrets for databases and services that use two sets of authentication credentials. Specifically, this tutorial shows how to rotate Azure Storage account keys stored in Azure Key Vault as secrets. You'll use a function triggered by Azure Event Grid notification.
-
-> [!NOTE]
-> For Storage account services, using Azure Active Directory to authorize requests is recommended. For more information, see [Authorize access to blobs using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md). There are services that require storage account connection strings with access keys. For that scenario, we recommend this solution.
-
-Here's the rotation solution described in this tutorial:
-
-![Diagram that shows the rotation solution.](../media/secrets/rotation-dual/rotation-diagram.png)
-
-In this solution, Azure Key Vault stores storage account individual access keys as versions of the same secret, alternating between the primary and secondary key in subsequent versions. When one access key is stored in the latest version of the secret, the alternate key is regenerated and added to Key Vault as the new latest version of the secret. The solution provides the application's entire rotation cycle to refresh to the newest regenerated key.
-
-1. Thirty days before the expiration date of a secret, Key Vault publishes the near expiry event to Event Grid.
-1. Event Grid checks the event subscriptions and uses HTTP POST to call the function app endpoint that's subscribed to the event.
-1. The function app identifies the alternate key (not the latest one) and calls the storage account to regenerate it.
-1. The function app adds the new regenerated key to Azure Key Vault as the new version of the secret.
-
-## Prerequisites
-* An Azure subscription. [Create one for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
-* Azure [Cloud Shell](https://shell.azure.com/). This tutorial is using portal Cloud Shell with PowerShell env
-* Azure Key Vault.
-* Two Azure storage accounts.
-
-> [!NOTE]
-> Rotation of shared storage account key revokes account level shared access signature (SAS) generated based on that key. After storage account key rotation, you must regenerate account-level SAS tokens to avoid disruptions to applications.
-
-You can use this deployment link if you don't have an existing key vault and existing storage accounts:
-
-[![Link that's labelled Deploy to Azure.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)
-
-1. Under **Resource group**, select **Create new**. Name the group **vaultrotation** and then select **OK**.
-1. Select **Review + create**.
-1. Select **Create**.
-
- ![Screenshot that shows how to create a resource group.](../media/secrets/rotation-dual/dual-rotation-1.png)
-
-You'll now have a key vault and two storage accounts. You can verify this setup in the Azure CLI or Azure PowerShell by running this command:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az resource list -o table -g vaultrotation
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzResource -Name 'vaultrotation*' | Format-Table
-```
--
-The result will look something like this output:
-
-```console
-Name ResourceGroup Location Type Status
-- - --
-vaultrotation-kv vaultrotation westus Microsoft.KeyVault/vaults
-vaultrotationstorage vaultrotation westus Microsoft.Storage/storageAccounts
-vaultrotationstorage2 vaultrotation westus Microsoft.Storage/storageAccounts
-```
-
-## Create and deploy the key rotation function
-
-Next, you'll create a function app with a system-managed identity, in addition to other required components. You'll also deploy the rotation function for the storage account keys.
-
-The function app rotation function requires the following components and configuration:
-- An Azure App Service plan-- A storage account to manage function app triggers-- An access policy to access secrets in Key Vault-- The Storage Account Key Operator Service role assigned to the function app so it can access storage account access keys-- A key rotation function with an event trigger and an HTTP trigger (on-demand rotation)-- An Event Grid event subscription for the **SecretNearExpiry** event-
-1. Select the Azure template deployment link:
-
- [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FFunction%2Fazuredeploy.json)
-
-1. In the **Resource group** list, select **vaultrotation**.
-1. In the **Storage Account RG** box, enter the name of the resource group in which your storage account is located. Keep the default value **[resourceGroup().name]** if your storage account is already located in the same resource group where you'll deploy the key rotation function.
-1. In the **Storage Account Name** box, enter the name of the storage account that contains the access keys to rotate. Keep the default value **[concat(resourceGroup().name, 'storage')]** if you use storage account created in [Prerequisites](#prerequisites).
-1. In the **Key Vault RG** box, enter the name of resource group in which your key vault is located. Keep the default value **[resourceGroup().name]** if your key vault already exists in the same resource group where you'll deploy the key rotation function.
-1. In the **Key Vault Name** box, enter the name of the key vault. Keep the default value **[concat(resourceGroup().name, '-kv')]** if you use key vault created in [Prerequisites](#prerequisites).
-1. In the **App Service Plan Type** box, select hosting plan. **Premium Plan** is needed only when your key vault is behind firewall.
-1. In the **Function App Name** box, enter the name of the function app.
-1. In the **Secret Name** box, enter the name of the secret where you'll store access keys.
-1. In the **Repo URL** box, enter the GitHub location of the function code. In this tutorial you can use **https://github.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell.git** .
-1. Select **Review + create**.
-1. Select **Create**.
-
- ![Screenshot that shows how to create and deploy function.](../media/secrets/rotation-dual/dual-rotation-2.png)
-
-After you complete the preceding steps, you'll have a storage account, a server farm, a function app, and Application Insights. When the deployment is complete, you'll see this page:
-
- ![Screenshot that shows the Your deployment is complete page.](../media/secrets/rotation-dual/dual-rotation-3.png)
-> [!NOTE]
-> If you encounter a failure, you can select **Redeploy** to finish the deployment of the components.
--
-You can find deployment templates and code for the rotation function in [Azure Samples](https://github.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell).
-
-### Add the storage account access keys to Key Vault secrets
-
-First, set your access policy to grant **manage secrets** permissions to your user principal:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az keyvault set-policy --upn <email-address-of-user> --name vaultrotation-kv --secret-permissions set delete get list
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Set-AzKeyVaultAccessPolicy -UserPrincipalName <email-address-of-user> --name vaultrotation-kv -PermissionsToSecrets set,delete,get,list
-```
--
-You can now create a new secret with a storage account access key as its value. You'll also need the storage account resource ID, secret validity period, and key ID to add to the secret so the rotation function can regenerate the key in the storage account.
-
-Determine the storage account resource ID. You can find this value in the `id` property.
-
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az storage account show -n vaultrotationstorage
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzStorageAccount -Name vaultrotationstorage -ResourceGroupName vaultrotation | Select-Object -Property *
-```
--
-List the storage account access keys so you can get the key values:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az storage account keys list -n vaultrotationstorage
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzStorageAccountKey -Name vaultrotationstorage -ResourceGroupName vaultrotation
-```
--
-Add a secret to the key vault with a validity period of 60 days and the storage account resource ID. For demonstration purposes, to trigger rotation immediately, set the expiration date to tomorrow. Run this command, using your retrieved values for `key1Value` and `storageAccountResourceId`:
-
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-$tomorrowDate = (get-date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
-az keyvault secret set --name storageKey --vault-name vaultrotation-kv --value <key1Value> --tags "CredentialId=key1" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-$tomorrowDate = (Get-Date).AddDays(+1).ToString('yyyy-MM-ddTHH:mm:ssZ')
-$secretValue = ConvertTo-SecureString -String '<key1Value>' -AsPlainText -Force
-$tags = @{
- CredentialId='key1'
- ProviderAddress='<storageAccountResourceId>'
- ValidityPeriodDays='60'
-}
-Set-AzKeyVaultSecret -Name storageKey -VaultName vaultrotation-kv -SecretValue $secretValue -Tag $tags -Expires $tomorrowDate
-```
--
-The secret above will trigger a `SecretNearExpiry` event within several minutes. This event will in turn trigger the function to rotate the secret with expiration set to 60 days. In that configuration, the 'SecretNearExpiry' event would be triggered every 30 days (30 days before expiry) and the rotation function would alternate rotation between key1 and key2.
-
-You can verify that the access keys have been regenerated by retrieving the storage account key and the Key Vault secret and comparing them.
-
-Use this command to get the secret information:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az keyvault secret show --vault-name vaultrotation-kv --name storageKey
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzKeyVaultSecret -VaultName vaultrotation-kv -Name storageKey -AsPlainText
-```
--
-Notice that `CredentialId` is updated to the alternate `keyName` and that `value` is regenerated:
-
-![Screenshot that shows the output of the A Z keyvault secret show command for the first storage account.](../media/secrets/rotation-dual/dual-rotation-4.png)
-
-Retrieve the access keys to compare the values:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az storage account keys list -n vaultrotationstorage
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzStorageAccountKey -Name vaultrotationstorage -ResourceGroupName vaultrotation
-```
--
-Notice that the `value` of the key is the same as the secret in the key vault:
-
-![Screenshot that shows the output of the A Z storage account keys list command for the first storage account.](../media/secrets/rotation-dual/dual-rotation-5.png)
-
-## Use existing rotation function for multiple storage accounts
-
-You can reuse the same function app to rotate keys for multiple storage accounts.
-
-To add storage account keys to an existing function for rotation, you need:
-- The Storage Account Key Operator Service role assigned to the function app so it can access storage account access keys.
-- An Event Grid event subscription for the **SecretNearExpiry** event.
-
-1. Select the Azure template deployment link:
-
- [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FAdd-Event-Subscriptions%2Fazuredeploy.json)
-
-1. In the **Resource group** list, select **vaultrotation**.
-1. In the **Storage Account RG** box, enter the name of the resource group in which your storage account is located. Keep the default value **[resourceGroup().name]** if your storage account is already located in the same resource group where you'll deploy the key rotation function.
-1. In the **Storage Account Name** box, enter the name of the storage account that contains the access keys to rotate.
-1. In the **Key Vault RG** box, enter the name of the resource group in which your key vault is located. Keep the default value **[resourceGroup().name]** if your key vault already exists in the same resource group where you'll deploy the key rotation function.
-1. In the **Key Vault Name** box, enter the name of the key vault.
-1. In the **Function App Name** box, enter the name of the function app.
-1. In the **Secret Name** box, enter the name of the secret where you'll store access keys.
-1. Select **Review + create**.
-1. Select **Create**.
-
- ![Screenshot that shows how to create an additional storage account.](../media/secrets/rotation-dual/dual-rotation-7.png)
-
-### Add storage account access key to Key Vault secrets
-
-Determine the storage account resource ID. You can find this value in the `id` property.
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az storage account show -n vaultrotationstorage2
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzStorageAccount -Name vaultrotationstorage2 -ResourceGroupName vaultrotation | Select-Object -Property *
-```
--
-List the storage account access keys so you can get the key2 value:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az storage account keys list -n vaultrotationstorage2
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzStorageAccountKey -Name vaultrotationstorage2 -ResourceGroupName vaultrotation
-```
--
-Add a secret to the key vault with a validity period of 60 days and the storage account resource ID. To trigger rotation immediately for demonstration purposes, set the expiration date to tomorrow. Run this command, using your retrieved values for `key2Value` and `storageAccountResourceId`:
-
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-$tomorrowDate = (Get-Date).AddDays(+1).ToString('yyyy-MM-ddTHH:mm:ssZ')
-az keyvault secret set --name storageKey2 --vault-name vaultrotation-kv --value <key2Value> --tags "CredentialId=key2" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-$tomorrowDate = (get-date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
-$secretValue = ConvertTo-SecureString -String '<key2Value>' -AsPlainText -Force
-$tags = @{
- CredentialId='key2';
- ProviderAddress='<storageAccountResourceId>';
- ValidityPeriodDays='60'
-}
-Set-AzKeyVaultSecret -Name storageKey2 -VaultName vaultrotation-kv -SecretValue $secretValue -Tag $tags -Expires $tomorrowDate
-```
--
-Use this command to get the secret information:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az keyvault secret show --vault-name vaultrotation-kv --name storageKey2
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzKeyVaultSecret -VaultName vaultrotation-kv -Name storageKey2 -AsPlainText
-```
--
-Notice that `CredentialId` is updated to the alternate `keyName` and that `value` is regenerated:
-
-![Screenshot that shows the output of the A Z keyvault secret show command for the second storage account.](../media/secrets/rotation-dual/dual-rotation-8.png)
-
-Retrieve the access keys to compare the values:
-# [Azure CLI](#tab/azure-cli)
-```azurecli
-az storage account keys list -n vaultrotationstorage2
-```
-# [Azure PowerShell](#tab/azurepowershell)
-
-```azurepowershell
-Get-AzStorageAccountKey -Name vaultrotationstorage2 -ResourceGroupName vaultrotation
-```
--
-Notice that the `value` of the key is the same as the secret in the key vault:
-
-![Screenshot that shows the output of the A Z storage account keys list command for the second storage account.](../media/secrets/rotation-dual/dual-rotation-9.png)
-
-## Disable rotation for secret
-
-You can disable rotation of a secret by deleting the Event Grid subscription for that secret. Use the Azure PowerShell [Remove-AzEventGridSubscription](/powershell/module/az.eventgrid/remove-azeventgridsubscription) cmdlet or the Azure CLI [az eventgrid event-subscription delete](/cli/azure/eventgrid/event-subscription?#az-eventgrid-event-subscription-delete) command.
--
-## Key Vault rotation functions for two sets of credentials
-
-Rotation functions template for two sets of credentials and several ready to use functions:
-
-- [Project template](https://serverlesslibrary.net/sample/bc72c6c3-bd8f-4b08-89fb-c5720c1f997f)
-- [Redis Cache](https://serverlesslibrary.net/sample/0d42ac45-3db2-4383-86d7-3b92d09bc978)
-- [Storage Account](https://serverlesslibrary.net/sample/0e4e6618-a96e-4026-9e3a-74b8412213a4)
-- [Azure Cosmos DB](https://serverlesslibrary.net/sample/bcfaee79-4ced-4a5c-969b-0cc3997f47cc)
-
-> [!NOTE]
-> The above rotation functions were created by a member of the community, not by Microsoft. Community Azure Functions are not supported under any Microsoft support program or service, and are made available AS IS without warranty of any kind.
-
-## Next steps
-
-- Tutorial: [Secrets rotation for one set of credentials](./tutorial-rotation.md)
-- Overview: [Monitoring Key Vault with Azure Event Grid](../general/event-grid-overview.md)
-- How to: [Create your first function in the Azure portal](../../azure-functions/functions-get-started.md)
-- How to: [Receive email when a Key Vault secret changes](../general/event-grid-logicapps.md)
-- Reference: [Azure Event Grid event schema for Azure Key Vault](../../event-grid/event-schema-key-vault.md)
+
+ Title: Rotation tutorial for resources with two sets of credentials
+description: Use this tutorial to learn how to automate the rotation of a secret for resources that use two sets of authentication credentials.
++
+tags: 'rotation'
+++ Last updated : 01/20/2023+++
+# Automate the rotation of a secret for resources that have two sets of authentication credentials
+
+The best way to authenticate to Azure services is by using a [managed identity](../general/authentication.md), but there are some scenarios where that isn't an option. In those cases, access keys or passwords are used. You should rotate access keys and passwords frequently.
+
+This tutorial shows how to automate the periodic rotation of secrets for databases and services that use two sets of authentication credentials. Specifically, this tutorial shows how to rotate Azure Storage account keys stored in Azure Key Vault as secrets. You'll use a function triggered by Azure Event Grid notification.
+
+> [!NOTE]
+> For storage account services, we recommend using Azure Active Directory to authorize requests. For more information, see [Authorize access to blobs using Azure Active Directory](../../storage/blobs/authorize-access-azure-active-directory.md). However, some services require storage account connection strings with access keys. In that scenario, we recommend this solution.
+
+Here's the rotation solution described in this tutorial:
+
+![Diagram that shows the rotation solution.](../media/secrets/rotation-dual/rotation-diagram.png)
+
+In this solution, Azure Key Vault stores the storage account's individual access keys as versions of the same secret, alternating between the primary and secondary key in subsequent versions. When one access key is stored in the latest version of the secret, the alternate key is regenerated and added to Key Vault as the new latest version of the secret. This approach gives applications an entire rotation cycle in which to refresh to the newest regenerated key.
+
+1. Thirty days before the expiration date of a secret, Key Vault publishes the near expiry event to Event Grid.
+1. Event Grid checks the event subscriptions and uses HTTP POST to call the function app endpoint that's subscribed to the event.
+1. The function app identifies the alternate key (not the latest one) and calls the storage account to regenerate it.
+1. The function app adds the new regenerated key to Azure Key Vault as the new version of the secret.
+
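+To make the alternating-key flow concrete, here's a minimal PowerShell sketch of the core logic such a rotation function implements. It's an illustration of the steps above that assumes this tutorial's resource names; it isn't the sample's actual code:
+
+```azurepowershell
+# Sketch only: rotate the alternate storage account key into Key Vault.
+$vaultName          = 'vaultrotation-kv'
+$secretName         = 'storageKey'
+$resourceGroupName  = 'vaultrotation'
+$storageAccountName = 'vaultrotationstorage'
+
+# Read the latest secret version to see which key it currently holds.
+$secret       = Get-AzKeyVaultSecret -VaultName $vaultName -Name $secretName
+$currentKey   = $secret.Tags['CredentialId']
+$alternateKey = if ($currentKey -eq 'key1') { 'key2' } else { 'key1' }
+
+# Regenerate the alternate (not the latest) key in the storage account.
+New-AzStorageAccountKey -ResourceGroupName $resourceGroupName `
+    -Name $storageAccountName -KeyName $alternateKey | Out-Null
+
+# Add the regenerated key to Key Vault as the new latest version of the secret.
+$newKeyValue = (Get-AzStorageAccountKey -ResourceGroupName $resourceGroupName `
+    -Name $storageAccountName | Where-Object KeyName -eq $alternateKey).Value
+Set-AzKeyVaultSecret -VaultName $vaultName -Name $secretName `
+    -SecretValue (ConvertTo-SecureString $newKeyValue -AsPlainText -Force) `
+    -Tag @{ CredentialId = $alternateKey; ProviderAddress = $secret.Tags['ProviderAddress']; ValidityPeriodDays = '60' } `
+    -Expires (Get-Date).AddDays(60).ToUniversalTime()
+```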
+## Prerequisites
+* An Azure subscription. [Create one for free.](https://azure.microsoft.com/free/?WT.mc_id=A261C142F)
+* Azure [Cloud Shell](https://shell.azure.com/). This tutorial uses the portal Cloud Shell with the PowerShell environment.
+* Azure Key Vault.
+* Two Azure storage accounts.
+
+> [!NOTE]
+> Rotating a shared storage account key revokes any account-level shared access signatures (SAS) generated from that key. After a storage account key rotation, you must regenerate account-level SAS tokens to avoid disruptions to applications.
+
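+For example, after a rotation completes, an affected application could mint a fresh account-level SAS token from the newly rotated key. The following is a minimal sketch assuming the Az.Storage module; `<new-key-value>` is a placeholder for the freshly rotated key:
+
+```azurepowershell
+# Sketch only: regenerate an account-level SAS after a key rotation.
+$ctx = New-AzStorageContext -StorageAccountName 'vaultrotationstorage' `
+    -StorageAccountKey '<new-key-value>'
+New-AzStorageAccountSASToken -Service Blob -ResourceType Service,Container,Object `
+    -Permission 'rl' -ExpiryTime (Get-Date).AddDays(7) -Context $ctx
+```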
+You can use this deployment link if you don't have an existing key vault and existing storage accounts:
+
+[![Link that's labelled Deploy to Azure.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)
+
+1. Under **Resource group**, select **Create new**. Name the group **vaultrotation** and then select **OK**.
+1. Select **Review + create**.
+1. Select **Create**.
+
+ ![Screenshot that shows how to create a resource group.](../media/secrets/rotation-dual/dual-rotation-1.png)
+
+You'll now have a key vault and two storage accounts. You can verify this setup in the Azure CLI or Azure PowerShell by running this command:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az resource list -o table -g vaultrotation
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzResource -Name 'vaultrotation*' | Format-Table
+```
++
+The result will look something like this output:
+
+```console
+Name ResourceGroup Location Type Status
+--------------------  ----------------  ----------  --------------------------------  ------
+vaultrotation-kv vaultrotation westus Microsoft.KeyVault/vaults
+vaultrotationstorage vaultrotation westus Microsoft.Storage/storageAccounts
+vaultrotationstorage2 vaultrotation westus Microsoft.Storage/storageAccounts
+```
+
+## Create and deploy the key rotation function
+
+Next, you'll create a function app with a system-managed identity, in addition to other required components. You'll also deploy the rotation function for the storage account keys.
+
+The function app rotation function requires the following components and configuration:
+- An Azure App Service plan
+- A storage account to manage function app triggers
+- An access policy to access secrets in Key Vault
+- The Storage Account Key Operator Service role assigned to the function app so it can access storage account access keys
+- A key rotation function with an event trigger and an HTTP trigger (on-demand rotation)
+- An Event Grid event subscription for the **SecretNearExpiry** event
+
+1. Select the Azure template deployment link:
+
+ [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FFunction%2Fazuredeploy.json)
+
+1. In the **Resource group** list, select **vaultrotation**.
+1. In the **Storage Account RG** box, enter the name of the resource group in which your storage account is located. Keep the default value **[resourceGroup().name]** if your storage account is already located in the same resource group where you'll deploy the key rotation function.
+1. In the **Storage Account Name** box, enter the name of the storage account that contains the access keys to rotate. Keep the default value **[concat(resourceGroup().name, 'storage')]** if you use the storage account created in [Prerequisites](#prerequisites).
+1. In the **Key Vault RG** box, enter the name of the resource group in which your key vault is located. Keep the default value **[resourceGroup().name]** if your key vault already exists in the same resource group where you'll deploy the key rotation function.
+1. In the **Key Vault Name** box, enter the name of the key vault. Keep the default value **[concat(resourceGroup().name, '-kv')]** if you use the key vault created in [Prerequisites](#prerequisites).
+1. In the **App Service Plan Type** box, select a hosting plan. **Premium Plan** is needed only when your key vault is behind a firewall.
+1. In the **Function App Name** box, enter the name of the function app.
+1. In the **Secret Name** box, enter the name of the secret where you'll store access keys.
+1. In the **Repo URL** box, enter the GitHub location of the function code. In this tutorial, you can use **https://github.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell.git**.
+1. Select **Review + create**.
+1. Select **Create**.
+
+ ![Screenshot that shows how to create and deploy function.](../media/secrets/rotation-dual/dual-rotation-2.png)
+
+After you complete the preceding steps, you'll have a storage account, a server farm, a function app, and Application Insights. When the deployment is complete, you'll see this page:
+
+ ![Screenshot that shows the Your deployment is complete page.](../media/secrets/rotation-dual/dual-rotation-3.png)
+> [!NOTE]
+> If you encounter a failure, you can select **Redeploy** to finish the deployment of the components.
+
+You can find deployment templates and code for the rotation function in [Azure Samples](https://github.com/Azure-Samples/KeyVault-Rotation-StorageAccountKey-PowerShell).
+
+### Add the storage account access keys to Key Vault secrets
+
+First, set your access policy to grant **manage secrets** permissions to your user principal:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az keyvault set-policy --upn <email-address-of-user> --name vaultrotation-kv --secret-permissions set delete get list
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Set-AzKeyVaultAccessPolicy -UserPrincipalName <email-address-of-user> -VaultName vaultrotation-kv -PermissionsToSecrets set,delete,get,list
+```
++
+You can now create a new secret with a storage account access key as its value. You'll also need the storage account resource ID, secret validity period, and key ID to add to the secret so the rotation function can regenerate the key in the storage account.
+
+Determine the storage account resource ID. You can find this value in the `id` property.
+
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az storage account show -n vaultrotationstorage
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzStorageAccount -Name vaultrotationstorage -ResourceGroupName vaultrotation | Select-Object -Property *
+```
++
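+If you prefer to avoid copying values from the output, this optional one-liner captures the resource ID in a variable (assuming the resource names used in this tutorial):
+
+```azurepowershell
+# Optional: capture the storage account resource ID for later commands.
+$storageAccountResourceId = (Get-AzStorageAccount -Name vaultrotationstorage `
+    -ResourceGroupName vaultrotation).Id
+```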
+List the storage account access keys so you can get the key values:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az storage account keys list -n vaultrotationstorage
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzStorageAccountKey -Name vaultrotationstorage -ResourceGroupName vaultrotation
+```
++
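+Similarly, you can capture the key1 value in a variable instead of copying it manually (an optional convenience, not part of the original sample):
+
+```azurepowershell
+# Optional: capture the key1 value for the next step.
+$key1Value = (Get-AzStorageAccountKey -Name vaultrotationstorage `
+    -ResourceGroupName vaultrotation | Where-Object KeyName -eq 'key1').Value
+```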
+Add a secret to the key vault with a validity period of 60 days and the storage account resource ID. To trigger rotation immediately for demonstration purposes, set the expiration date to tomorrow. Run this command, using your retrieved values for `key1Value` and `storageAccountResourceId`:
+
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+$tomorrowDate = (Get-Date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
+az keyvault secret set --name storageKey --vault-name vaultrotation-kv --value <key1Value> --tags "CredentialId=key1" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+$tomorrowDate = (Get-Date).AddDays(+1).ToString('yyyy-MM-ddTHH:mm:ssZ')
+$secretValue = ConvertTo-SecureString -String '<key1Value>' -AsPlainText -Force
+$tags = @{
+ CredentialId='key1'
+ ProviderAddress='<storageAccountResourceId>'
+ ValidityPeriodDays='60'
+}
+Set-AzKeyVaultSecret -Name storageKey -VaultName vaultrotation-kv -SecretValue $secretValue -Tag $tags -Expires $tomorrowDate
+```
++
+This secret triggers a `SecretNearExpiry` event within several minutes. The event in turn triggers the function to rotate the secret, setting its expiration to 60 days. In that configuration, the `SecretNearExpiry` event is triggered every 30 days (30 days before expiry), and the rotation function alternates between key1 and key2.
+
+You can verify that the access keys have been regenerated by retrieving the storage account key and the Key Vault secret and comparing them.
+
+Use this command to get the secret information:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az keyvault secret show --vault-name vaultrotation-kv --name storageKey
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzKeyVaultSecret -VaultName vaultrotation-kv -Name storageKey -AsPlainText
+```
++
+Notice that `CredentialId` is updated to the alternate `keyName` and that `value` is regenerated:
+
+![Screenshot that shows the output of the A Z keyvault secret show command for the first storage account.](../media/secrets/rotation-dual/dual-rotation-4.png)
+
+Retrieve the access keys to compare the values:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az storage account keys list -n vaultrotationstorage
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzStorageAccountKey -Name vaultrotationstorage -ResourceGroupName vaultrotation
+```
++
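+If you'd rather script the comparison than inspect the two outputs manually, a short sketch like the following works (assuming the Az modules and this tutorial's resource names):
+
+```azurepowershell
+# Compare the Key Vault secret with the matching storage account key.
+$secret      = Get-AzKeyVaultSecret -VaultName vaultrotation-kv -Name storageKey
+$secretValue = Get-AzKeyVaultSecret -VaultName vaultrotation-kv -Name storageKey -AsPlainText
+$keyName     = $secret.Tags['CredentialId']
+$keyValue    = (Get-AzStorageAccountKey -Name vaultrotationstorage `
+    -ResourceGroupName vaultrotation | Where-Object KeyName -eq $keyName).Value
+if ($secretValue -eq $keyValue) { "Secret matches storage account $keyName" }
+else { "Secret does not match storage account $keyName" }
+```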
+Notice that the `value` of the key is the same as the secret in the key vault:
+
+![Screenshot that shows the output of the A Z storage account keys list command for the first storage account.](../media/secrets/rotation-dual/dual-rotation-5.png)
+
+## Use existing rotation function for multiple storage accounts
+
+You can reuse the same function app to rotate keys for multiple storage accounts.
+
+To add storage account keys to an existing function for rotation, you need:
+- The Storage Account Key Operator Service role assigned to the function app so it can access storage account access keys.
+- An Event Grid event subscription for the **SecretNearExpiry** event.
+
+1. Select the Azure template deployment link:
+
+ [![Azure template deployment link.](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-StorageAccountKey-PowerShell%2Fmaster%2FARM-Templates%2FAdd-Event-Subscriptions%2Fazuredeploy.json)
+
+1. In the **Resource group** list, select **vaultrotation**.
+1. In the **Storage Account RG** box, enter the name of the resource group in which your storage account is located. Keep the default value **[resourceGroup().name]** if your storage account is already located in the same resource group where you'll deploy the key rotation function.
+1. In the **Storage Account Name** box, enter the name of the storage account that contains the access keys to rotate.
+1. In the **Key Vault RG** box, enter the name of the resource group in which your key vault is located. Keep the default value **[resourceGroup().name]** if your key vault already exists in the same resource group where you'll deploy the key rotation function.
+1. In the **Key Vault Name** box, enter the name of the key vault.
+1. In the **Function App Name** box, enter the name of the function app.
+1. In the **Secret Name** box, enter the name of the secret where you'll store access keys.
+1. Select **Review + create**.
+1. Select **Create**.
+
+ ![Screenshot that shows how to create an additional storage account.](../media/secrets/rotation-dual/dual-rotation-7.png)
+
+### Add storage account access key to Key Vault secrets
+
+Determine the storage account resource ID. You can find this value in the `id` property.
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az storage account show -n vaultrotationstorage2
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzStorageAccount -Name vaultrotationstorage2 -ResourceGroupName vaultrotation | Select-Object -Property *
+```
++
+List the storage account access keys so you can get the key2 value:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az storage account keys list -n vaultrotationstorage2
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzStorageAccountKey -Name vaultrotationstorage2 -ResourceGroupName vaultrotation
+```
++
+Add a secret to the key vault with a validity period of 60 days and the storage account resource ID. To trigger rotation immediately for demonstration purposes, set the expiration date to tomorrow. Run this command, using your retrieved values for `key2Value` and `storageAccountResourceId`:
+
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+$tomorrowDate = (Get-Date).AddDays(+1).ToString('yyyy-MM-ddTHH:mm:ssZ')
+az keyvault secret set --name storageKey2 --vault-name vaultrotation-kv --value <key2Value> --tags "CredentialId=key2" "ProviderAddress=<storageAccountResourceId>" "ValidityPeriodDays=60" --expires $tomorrowDate
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+$tomorrowDate = (Get-Date).AddDays(+1).ToString("yyyy-MM-ddTHH:mm:ssZ")
+$secretValue = ConvertTo-SecureString -String '<key2Value>' -AsPlainText -Force
+$tags = @{
+ CredentialId='key2';
+ ProviderAddress='<storageAccountResourceId>';
+ ValidityPeriodDays='60'
+}
+Set-AzKeyVaultSecret -Name storageKey2 -VaultName vaultrotation-kv -SecretValue $secretValue -Tag $tags -Expires $tomorrowDate
+```
++
+Use this command to get the secret information:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az keyvault secret show --vault-name vaultrotation-kv --name storageKey2
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzKeyVaultSecret -VaultName vaultrotation-kv -Name storageKey2 -AsPlainText
+```
++
+Notice that `CredentialId` is updated to the alternate `keyName` and that `value` is regenerated:
+
+![Screenshot that shows the output of the A Z keyvault secret show command for the second storage account.](../media/secrets/rotation-dual/dual-rotation-8.png)
+
+Retrieve the access keys to compare the values:
+# [Azure CLI](#tab/azure-cli)
+```azurecli
+az storage account keys list -n vaultrotationstorage2
+```
+# [Azure PowerShell](#tab/azurepowershell)
+
+```azurepowershell
+Get-AzStorageAccountKey -Name vaultrotationstorage2 -ResourceGroupName vaultrotation
+```
++
+Notice that the `value` of the key is the same as the secret in the key vault:
+
+![Screenshot that shows the output of the A Z storage account keys list command for the second storage account.](../media/secrets/rotation-dual/dual-rotation-9.png)
+
+## Disable rotation for secret
+
+You can disable rotation of a secret by deleting the Event Grid subscription for that secret. Use the Azure PowerShell [Remove-AzEventGridSubscription](/powershell/module/az.eventgrid/remove-azeventgridsubscription) cmdlet or the Azure CLI [az eventgrid event-subscription delete](/cli/azure/eventgrid/event-subscription?#az-eventgrid-event-subscription-delete) command.
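+For example, the following sketch deletes the subscription scoped to the key vault. The subscription name is a placeholder; use the name that the deployment template created:
+
+```azurepowershell
+# Sketch only: delete the Event Grid subscription that triggers rotation.
+# '<subscription-name>' is a placeholder for the template-created name.
+$vaultResourceId = (Get-AzKeyVault -VaultName vaultrotation-kv).ResourceId
+Remove-AzEventGridSubscription -ResourceId $vaultResourceId `
+    -EventSubscriptionName '<subscription-name>'
+```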
++
+## Key Vault rotation functions for two sets of credentials
+
+A rotation function template for two sets of credentials, along with several ready-to-use functions, is available:
+
+- [Project template](https://serverlesslibrary.net/sample/bc72c6c3-bd8f-4b08-89fb-c5720c1f997f)
+- [Redis Cache](https://serverlesslibrary.net/sample/0d42ac45-3db2-4383-86d7-3b92d09bc978)
+- [Storage Account](https://serverlesslibrary.net/sample/0e4e6618-a96e-4026-9e3a-74b8412213a4)
+- [Azure Cosmos DB](https://serverlesslibrary.net/sample/bcfaee79-4ced-4a5c-969b-0cc3997f47cc)
+
+> [!NOTE]
+> These rotation functions are created by a member of the community and not by Microsoft. Community functions are not supported under any Microsoft support program or service, and are made available AS IS without warranty of any kind.
+
+## Next steps
+
+- Tutorial: [Secrets rotation for one set of credentials](./tutorial-rotation.md)
+- Overview: [Monitoring Key Vault with Azure Event Grid](../general/event-grid-overview.md)
+- How to: [Create your first function in the Azure portal](../../azure-functions/functions-get-started.md)
+- How to: [Receive email when a Key Vault secret changes](../general/event-grid-logicapps.md)
+- Reference: [Azure Event Grid event schema for Azure Key Vault](../../event-grid/event-schema-key-vault.md)
key-vault Tutorial Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/secrets/tutorial-rotation.md
tags: 'rotation'
Previously updated : 01/26/2020 Last updated : 01/20/2023 ms.devlang: csharp
The best way to authenticate to Azure services is by using a [managed identity](
This tutorial shows how to automate the periodic rotation of secrets for databases and services that use one set of authentication credentials. Specifically, this tutorial rotates SQL Server passwords stored in Azure Key Vault by using a function triggered by Azure Event Grid notification: - :::image type="content" source="../media/rotate-1.png" alt-text="Diagram of rotation solution"::: 1. Thirty days before the expiration date of a secret, Key Vault publishes the "near expiry" event to Event Grid.
This tutorial shows how to automate the periodic rotation of secrets for databas
* Azure Key Vault * SQL Server
-Below deployment link can be used, if you don't have existing Key Vault and SQL Server:
+If you don't have existing Key Vault and SQL Server, you can use this deployment link:
[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-SQLPassword-Csharp%2Fmain%2FARM-Templates%2FInitial-Setup%2Fazuredeploy.json)

1. Under **Resource group**, select **Create new**. Give the group a name; we use **akvrotation** in this tutorial.
-1. Under **Sql Admin Login**, type Sql administrator login name.
+1. Under **SQL Admin Login**, type SQL administrator login name.
1. Select **Review + create**.
1. Select **Create**
akvrotation-sql2 akvrotation eastus Microsoft.Sql/servers
akvrotation-sql2/master akvrotation eastus Microsoft.Sql/servers/databases ```
-## Create and deploy sql server password rotation function
+## Create and deploy SQL server password rotation function
+ > [!IMPORTANT]
-> Below template requires Key Vault, SQL server and Azure Function to be in the same resource group
+> This template requires the key vault, SQL server and Azure Function to be in the same resource group.
-Next, create a function app with a system-managed identity, in addition to the other required components, and deploy sql server password rotation functions
+Next, create a function app with a system-managed identity, in addition to the other required components, and deploy SQL Server password rotation functions.
The function app requires these components:
- An Azure App Service plan
-- A Function App with Sql password rotation functions with event trigger and http trigger
+- A Function App with SQL password rotation functions with event trigger and http trigger
- A storage account required for function app trigger management
- An access policy for Function App identity to access secrets in Key Vault
-- An EventGrid event subscription for **SecretNearExpiry** event
+- An Event Grid event subscription for **SecretNearExpiry** event
-1. Select the Azure template deployment link:
+1. Select the Azure template deployment link:
[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-SQLPassword-Csharp%2Fmain%2FARM-Templates%2FFunction%2Fazuredeploy.json) 1. In the **Resource group** list, select **akvrotation**.
-1. In the **Sql Server Name**, type the Sql Server name with password to rotate
+1. In the **SQL Server Name**, type the SQL Server name with password to rotate
1. In the **Key Vault Name**, type the key vault name
1. In the **Function App Name**, type the function app name
1. In the **Secret Name**, type the secret name where the password will be stored
This rotation method reads database information from the secret, creates a new v
You can find the complete code on [GitHub](https://github.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp). ## Add the secret to Key Vault+ Set your access policy to grant *manage secrets* permissions to users: ```azurecli
The web app requires these components:
[![Image showing a button labeled "Deploy to Azure".](https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/1-CONTRIBUTION-GUIDE/images/deploytoazure.png)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2FKeyVault-Rotation-SQLPassword-Csharp-WebApp%2Fmain%2FARM-Templates%2FWeb-App%2Fazuredeploy.json) 1. Select the **akvrotation** resource group.
-1. In the **Sql Server Name**, type the Sql Server name with password to rotate
+1. In the **SQL Server Name**, type the SQL Server name with password to rotate
1. In the **Key Vault Name**, type the key vault name
1. In the **Secret Name**, type the secret name where the password is stored
1. In the **Repo Url**, type the web app code GitHub location (**https://github.com/Azure-Samples/KeyVault-Rotation-SQLPassword-Csharp-WebApp.git**)
When the application opens in the browser, you will see the **Generated Secret V
- Overview: [Monitoring Key Vault with Azure Event Grid](../general/event-grid-overview.md) - How to: [Receive email when a key vault secret changes](../general/event-grid-logicapps.md) - [Azure Event Grid event schema for Azure Key Vault](../../event-grid/event-schema-key-vault.md)+
lab-services Classroom Labs Scenarios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/classroom-labs-scenarios.md
Title: Use labs for trainings - Azure Lab Services
+ Title: Use labs for trainings
+ description: This article describes how to use Azure DevTest Labs for creating labs on Azure for training scenarios.- Previously updated : 01/04/2022+++++ Last updated : 01/17/2023 # Use labs for trainings
+In this article, you learn about the different features and steps for using Azure Lab Services for conducting classes. Azure Lab Services allows educators (teachers, professors, trainers, or teaching assistants, etc.) to quickly and easily create an online lab to provision pre-configured learning environments for the trainees. Each trainee can use identical and isolated environments for the training. Apply policies to ensure that the training environments are available to each trainee only when they need them, and contain enough resources - such as virtual machines - required for the training.
-Azure Lab Services allows educators (teachers, professors, trainers, or teaching assistants, etc.) to quickly and easily create an online lab to provision pre-configured learning environments for the trainees. Each trainee would be able to use identical and isolated environments for the training. Policies can be applied to ensure that the training environments are available to each trainee only when they need them and contain enough resources - such as virtual machines - required for the training.
-![Lab](./media/classroom-labs-scenarios/classroom.png)
-
-Labs meet the following requirements that are required to conduct training in any virtual environment:
+Labs meet the following requirements for conducting training in any virtual environment:
- Trainees can quickly provision their training environments
- Every training machine should be identical
- Trainees can't see VMs created by other trainees
-- Control cost by ensuring that trainees cannot get more VMs than they need for the training and also shutdown VMs when they are not using them
+- Control cost by ensuring that trainees can't get more VMs than they need for the training and also shutdown VMs when they aren't using them
- Easily share the training lab with each trainee
- Reuse the training lab again and again
-In this article, you learn about various Azure Lab Services features that can be used to meet the previously described training requirements and detailed steps that you can follow to set up a lab for training.
## Create the lab plan as a lab plan administrator
-The first step in using Azure Lab Services is to create a lab plan in the Azure portal. After a lab plan administrator creates the lab plan, the admin adds users who want to create labs to the **Lab Creator** role. The educators create labs with virtual machines for students to do exercises for the course they are teaching. For details, see [Create and manage lab plan](how-to-manage-lab-plans.md).
+The first step in using Azure Lab Services is to create a lab plan in the Azure portal. After a lab plan administrator creates the lab plan, the admin adds the Lab Creator role to users who want to create labs, such as educators.
+
+The lab creator can then create labs with virtual machines for students to do exercises for the course they're teaching. For details, see [Create and manage lab plan](how-to-manage-lab-plans.md).
## Create and manage labs
-An educator, who is a member of the Lab Creator role in a lab plan, can create one or more labs in the lab plan. You create and configure a template VM with all the required software for doing exercises in your course. You pick a ready-made image from the available images for creating a lab and then optionally customize it by installing the software required for the lab. For details, see [Create and manage labs](how-to-manage-labs.md).
+If you have the Lab Creator role for a lab plan, you can create one or more labs in the lab plan. You create and configure a template VM with all the required software for doing exercises in your course. You select a ready-made image from the available images for creating a lab and then optionally customize it by installing the software required for the lab. For details, see [Create and manage labs](how-to-manage-labs.md).
## Set up and publish a template VM
-A template in a lab is a base virtual machine image from which all users' virtual machines are created. Set up the template VM so that it is configured with exactly what you want to provide to the training attendees. You can provide a name and description of the template that the lab users see. Then, you publish the template to make instances of the template VM available to your lab users. When you publish a template, Azure Lab Services creates VMs in the lab by using the template. The number of VMs created in this process is same as the maximum number of users allowed into the lab, which you can set in the usage policy of the lab. All virtual machines have the same configuration as the template. For details, see [Set up and publish template virtual machines](how-to-create-manage-template.md).
+A template VM in a lab is a base virtual machine image from which all users' VMs are created. Set up the template VM so that it's configured with exactly what you want to provide to the training attendees. You can provide a name and description of the template that the lab users see.
+
+Then, you publish the template to make instances of the template VM available to your lab users. When you publish a template, Azure Lab Services creates VMs in the lab by using the template. The number of VMs created in this process is the same as the maximum number of users allowed into the lab, which you can set in the usage policy of the lab. All virtual machines have the same configuration as the template. For details, see [Set up and publish template virtual machines](how-to-create-manage-template.md).
## Configure usage settings and policies
-The lab creator can add or remove users to the lab, get registration link to send to lab users, set up policies such as setting individual quotas per user, update the number of VMs available in the lab, and more. For details, see [Configure usage settings and policies](how-to-configure-student-usage.md).
+The lab creator can add or remove users to the lab, get a registration link to invite lab users, set up policies such as setting individual quotas per user, update the number of VMs available in the lab, and more. For details, see [Configure usage settings and policies](how-to-configure-student-usage.md).
## Create and manage schedules
Schedules allow you to configure a lab such that VMs in the lab automatically st
## Use VMs in the lab
-A student or training attendee registers to the lab, and connects to the VM to do exercises for the course. For details, see [How to access a lab](how-to-use-lab.md).
+A student or training attendee registers to the lab by using the registration link they received from the lab creator. They can then connect to the VM to do the exercises for the course. For details, see [How to access a lab](how-to-use-lab.md).
## Next steps
-Start with creating a lab plan in labs by following instructions in the article: [Tutorial: Setup a lab plan with Azure Lab Services](tutorial-setup-lab-plan.md).
+- To get started by creating a lab plan, follow the steps in [Tutorial: Setup a lab plan with Azure Lab Services](tutorial-setup-lab-plan.md).
lab-services Connect Virtual Machine Chromebook Remote Desktop https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-chromebook-remote-desktop.md
Title: How to connect to an Azure Lab Services VM from Chromebook | Microsoft Docs
+ Title: Connect to a lab VM from Chromebook
+ description: Learn how to connect from a Chromebook to a virtual machine in Azure Lab Services. -- Previously updated : 01/27/2022 +++ Last updated : 01/18/2023 # Connect to a VM using Remote Desktop Protocol on a Chromebook
-This section shows how a student can connect to a lab VM from a Chromebook by using Remote Desktop Protocol (RDP).
+In this article, you learn how to connect to a lab VM in Azure Lab Services from a Chromebook by using Remote Desktop Protocol (RDP).
## Install Microsoft Remote Desktop on a Chromebook
-1. Open the App Store on your Chromebook, and search for **Microsoft Remote Desktop**.
+To connect to the lab VM via RDP, you use the Microsoft Remote Desktop app.
+
+To install the Microsoft Remote Desktop app:
- :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/install-remote-desktop-chromebook.png" alt-text="Screenshot of Microsoft Remote Desktop app.":::
+1. Open the app store on your Chromebook, and search for **Microsoft Remote Desktop**.
-1. Install the latest version of **Remote Desktop** by Microsoft Corporation.
+ :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/install-remote-desktop-chromebook.png" alt-text="Screenshot of the Microsoft Remote Desktop app in the app store.":::
+
+1. Select **Install** to install the latest version of the Remote Desktop application by Microsoft Corporation.
## Access the VM from your Chromebook using RDP
+Next, you connect to the lab VM by using the remote desktop application. You can retrieve the connection information for the lab VM from the Azure Lab Services website.
+
+1. Navigate to the Azure Lab Services website (https://labs.azure.com), and sign in with your credentials.
+ 1. On the tile for your VM, ensure the [VM is running](how-to-use-lab.md#start-or-stop-the-vm) and select the **Connect** icon. :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/connect-vm.png" alt-text="Screenshot of My virtual machines page for Azure Lab Services. The connect icon button on the VM tile is highlighted.":::
-1. If you're connecting *to a Linux VM*, you'll see two options to connect to the VM: SSH and RDP. Select the **Connect via RDP** option. If you're connecting *to a Windows VM*, you don't need to choose a connection option. The RDP file will automatically start downloading.
+
+1. If you're connecting to a Linux VM, you'll see two options to connect to the VM: SSH and RDP. Select the **Connect via RDP** option. If you're connecting to a Windows VM, you don't need to choose a connection option. The RDP file will automatically start downloading.
:::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/student-vm-connect-options.png" alt-text="Screenshot that shows V M tile for student. The R D P and S S H connection options are highlighted.":::+ 1. Open the **RDP** file that's downloaded on your computer with **Microsoft Remote Desktop** installed. It should start connecting to the VM. :::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/connect-vm-chromebook.png" alt-text="Screenshot of the Microsoft Remote Desktop app connecting to V M.":::
-1. When prompted, enter your password.
+
+1. When prompted, enter your username and password.
:::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/password-chromebook.png" alt-text="Screenshot that shows the Logon screen where you enter your username and password.":::
-1. Select **Continue** if you receive a warning about the certificate not being verified.
+
+1. If you receive a certificate warning, you can select **Continue**.
:::image type="content" source="./media/connect-virtual-machine-chromebook-remote-desktop/certificate-error-chromebook.png" alt-text="Screenshot that shows certificate warning when connecting to lab V M.":::
-1. Once the connection is complete you'll see the desktop of your lab VM.
+1. After the connection is established, you see the desktop of your lab VM.
## Next steps
lab-services Connect Virtual Machine Linux X2go https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine-linux-x2go.md
Install the [X2Go client](https://wiki.x2go.org/doku.php/doc:installation:x2gocl
## Next steps -- [As an educator, configure X2Go on a template VM](how-to-enable-remote-desktop-linux.md#x2go-setup)
+- [As an educator, configure X2Go on a template VM](how-to-enable-remote-desktop-linux.md#setting-up-x2go)
- [As a student, stop the VM](how-to-use-lab.md#start-or-stop-the-vm)
lab-services Connect Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/connect-virtual-machine.md
To connect *to a Linux VM using RDP*, follow the instructions based on the type
### Connect to a Linux lab VM Using X2Go
-Linux VMs can have X2Go enabled and a graphical desktop installed. For more information, see [X2Go Setup](how-to-enable-remote-desktop-linux.md#x2go-setup) and [Using GNOME or MATE graphical desktops](how-to-enable-remote-desktop-linux.md#using-gnome-or-mate-graphical-desktops).
+Linux VMs can have X2Go enabled and a graphical desktop installed. For more information, see [X2Go Setup](how-to-enable-remote-desktop-linux.md#setting-up-x2go) and [Using GNOME or MATE graphical desktops](how-to-enable-remote-desktop-linux.md#using-gnome-or-mate-graphical-desktops).
For instructions to connect *to a Linux VM using X2Go*, see [Connect to a VM using X2Go](connect-virtual-machine-linux-x2go.md).
lab-services Hackathon Labs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/hackathon-labs.md
Last updated 11/19/2021
# Use Azure Lab Services for your next hackathon

Azure Lab Services is designed to be lightweight and easy to use so that you can quickly spin up a new lab of virtual machines (VMs) for your hackathon. Use the following checklist to ensure that your hackathon goes as smoothly as possible. This checklist should be completed by your IT department or faculty who are responsible for creating and managing your hackathon lab. To use Lab Services for your hackathon, ensure that both your lab plan and your lab are created at least a few days before the start of your hackathon. Also, follow the guidance below:
lab-services How To Enable Remote Desktop Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/how-to-enable-remote-desktop-linux.md
Title: Enable graphical remote desktop for Linux in Azure Lab Services | Microsoft Docs
+ Title: Enable graphical remote desktop for Linux labs
+ description: Learn how to enable remote desktop for Linux virtual machines in a lab in Azure Lab Services. ++++ Previously updated : 01/04/2022 Last updated : 01/17/2023 # Enable graphical remote desktop for Linux virtual machines in Azure Lab Services
-When a lab is created from a **Linux** image, **SSH** (Secure Shell) access is automatically configured so that the educator can connect to the template VM from the command line. When the template VM is published, students can also connect to their VMs using SSH.
-
-You can also connect to a Linux VM using a **GUI** (graphical user interface). This article shows the steps to set up GUI connections using **Remote Desktop Protocol (RDP)** and **X2Go** .
+When you create a lab from a Linux image, Azure Lab Services automatically configures SSH (Secure Shell) access to let a lab creator, such as an educator, connect to the template VM from the command line. After you publish the template VM, lab users can then also connect to their VMs using SSH. You can also connect to a Linux VM using a GUI (graphical user interface). This article shows the steps to set up GUI connections using Remote Desktop Protocol (RDP) and X2Go.
> [!NOTE]
-> Linux uses an open-source version of RDP called, [Xrdp](https://en.wikipedia.org/wiki/Xrdp). For simplicity, we use the term RDP throughout this article.
+> Linux uses an open-source version of RDP called [Xrdp](https://en.wikipedia.org/wiki/Xrdp). For simplicity, we use the term RDP throughout this article.
-In some cases, such as with Ubuntu LTS 18.04, X2Go provides better performance. If you use RDP and notice latency when interacting with the graphical desktop environment, consider trying X2Go since it may improve performance.
+In some cases, such as with Ubuntu LTS 18.04, X2Go provides better performance. If you use RDP and notice latency when interacting with the graphical desktop environment, consider trying X2Go for improved performance.
> [!IMPORTANT]
-> Some marketplace images already have a graphical desktop environment and remote desktop server installed. For example, the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) already has [XFCE and X2Go Server installed and configured to accept client connections](../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md#x2go).
+> Some marketplace images already have a graphical desktop environment and remote desktop server installed. For example, the [Data Science Virtual Machine for Linux (Ubuntu)](https://azuremarketplace.microsoft.com/en-us/marketplace/apps?search=Data%20science%20Virtual%20machine&page=1&filters=microsoft%3Blinux) already has [XFCE and X2Go Server installed and configured to accept client connections](../machine-learning/data-science-virtual-machine/dsvm-ubuntu-intro.md#x2go).
> [!WARNING]
-> If you need to use [GNOME](https://www.gnome.org/) or [MATE](https://mate-desktop.org/), ensure your lab VM is properly configured. There is a known networking conflict that can occur with the Azure Linux Agent which is needed for the VMs to work properly in Azure Lab Services. Instead, we recommend using a different graphical desktop environment, such as [XFCE](https://www.xfce.org/).
+> If you need to use [GNOME](https://www.gnome.org/) or [MATE](https://mate-desktop.org/), ensure you have properly configured your lab VM. There is a known networking conflict that can occur with the Azure Linux Agent, which is needed for the VMs to work properly in Azure Lab Services. Instead, we recommend using a different graphical desktop environment, such as [XFCE](https://www.xfce.org/).
-## X2Go Setup
+## Setting up X2Go
-To use X2Go, the educator must:
+To use X2Go, the lab creator must perform the following steps on the lab template VM:
- Install the X2Go remote desktop server. - Install a Linux graphical desktop environment.
-X2Go uses the same port that is already enabled for SSH. As a result, no extra configuration is required during lab creation.
+X2Go uses the same port as SSH, which is already enabled by Azure Lab Services. As a result, no extra configuration is required during lab creation.
> [!NOTE]
-> In some cases, such as with Ubuntu LTS 18.04, X2Go provides better performance. If you use RDP and notice latency when interacting with the graphical desktop environment, consider trying X2Go since it may improve performance.
+> In some cases, such as with Ubuntu LTS 18.04, X2Go provides better performance. If you use RDP and notice latency when interacting with the graphical desktop environment, consider trying X2Go instead.
### Install X2Go Server on the template VM
-To set up X2Go on a template VM, first follow instructions to [update the template VM](how-to-create-manage-template.md#update-a-template-vm).
+For optimal performance, we recommend using the XFCE graphical desktop and connecting to it by using X2Go.
+
+1. Follow these instructions to [prepare for updating the template VM](how-to-create-manage-template.md#update-a-template-vm).
-For optimal performance, we typically recommend using the XFCE graphical desktop and for users to connect to the desktop using X2Go. To set up XFCE with X2Go on Ubuntu, see [Install and configure X2Go](https://aka.ms/azlabs/scripts/LinuxDesktop-Xfce).
+1. Install X2Go Server in either of two ways:
-To manually install X2Go Server, see [X2Go Server Installation](https://wiki.x2go.org/doku.php/doc:installation:x2goserver). There are many graphical desktop environments available for Linux. Some options include [GNOME](https://www.gnome.org/), [MATE](https://mate-desktop.org/), [XFCE](https://www.xfce.org/), and [Xubuntu](https://xubuntu.org/).
+ - Follow these steps to [configure X2Go on Ubuntu by using a script](https://aka.ms/azlabs/scripts/LinuxDesktop-Xfce).
+
+ - Alternately, [manually install X2Go Server](https://wiki.x2go.org/doku.php/doc:installation:x2goserver).
+
+ There are many graphical desktop environments available for Linux. Some options include [GNOME](https://www.gnome.org/), [MATE](https://mate-desktop.org/), [XFCE](https://www.xfce.org/), and [Xubuntu](https://xubuntu.org/).
## Connect using X2Go client
-Educators and students use the X2Go client to connect to a VM that has X2Go configured. Using the VM's SSH connection information, follow the steps in the how-to article [Connect to a VM using X2Go](connect-virtual-machine-linux-x2go.md).
+To connect to a VM that has X2Go configured, you can use the X2Go client software and the SSH information of the VM. Follow these steps to [connect to a VM by using the X2Go client](connect-virtual-machine-linux-x2go.md).
## RDP Setup
-To use RDP, the educator must:
+To use RDP to connect to a lab VM, the lab creator must:
- Enable remote desktop connection in Azure Lab Services - Install the RDP remote desktop server.
To use RDP, the educator must:
### Enable RDP connection in a lab
-This step is needed so Azure Lab Services opens port 3389 for RDP to the Linux VMs. By default, Linux VMs only have the SSH port opened.
+RDP uses network port 3389 for connecting to a VM. By default, Linux VMs only have the SSH port opened.
+
+Follow these steps to open port 3389 for connecting to Linux VMs with RDP:
-1. During lab creation, the educator can **Enable Remote Desktop Connection**. The educator must **enable** this option to open the port on the Linux VM that is needed for an RDP remote desktop session. Otherwise, if this option is left **disabled**, only the port for SSH is opened.
+1. During lab creation, enable the **Enable Remote Desktop Connection** setting.
+
+ The educator must enable this option to open the port on the Linux VM that is needed for an RDP remote desktop session. If this option is left disabled, only the port for SSH is opened.
:::image type="content" source="./media/how-to-enable-remote-desktop-linux/enable-rdp-option.png" alt-text="Screenshot that shows the New lab window with the Enable Remote Desktop Connection option.":::+ 1. On the **Enabling Remote Desktop Connection** message box, select **Continue with Remote Desktop**. :::image type="content" source="./media/how-to-enable-remote-desktop-linux/enabling-remote-desktop-connection-dialog.png" alt-text="Screenshot that shows the Enable Remote Desktop Connection confirmation window.":::
This step is needed so Azure Lab Services opens port 3389 for RDP to the Linux V
If you want to set up GNOME with RDP on Ubuntu, see [Install and configure GNOME/RDP](https://aka.ms/azlabs/scripts/LinuxDesktop-GnomeMate). These instructions handle known issues with that configuration.
-To install the RDP package on the template VM, see [Install and configure RDP](../virtual-machines/linux/use-remote-desktop.md). There are many graphical desktop environments available for Linux. Some options include [GNOME](https://www.gnome.org/), [MATE](https://mate-desktop.org/), [XFCE](https://www.xfce.org/), and [Xubuntu](https://xubuntu.org/).
+To install the RDP package on the template VM, see [Install and configure RDP](../virtual-machines/linux/use-remote-desktop.md). There are many graphical desktop environments available for Linux. Some options include [GNOME](https://www.gnome.org/), [MATE](https://mate-desktop.org/), [XFCE](https://www.xfce.org/), and [Xubuntu](https://xubuntu.org/).
## Connect using RDP client
-The Microsoft RDP client is used to connect to a template VM that has RDP configured. The Remote Desktop client can be used on Windows, Chromebooks, Macs and more. For more information, see [Remote Desktop clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
+You can use the Microsoft RDP client to connect to a template VM that has RDP configured. The Remote Desktop client can be used on Windows, Chromebooks, Macs and more. For more information, see [Remote Desktop clients](/windows-server/remote/remote-desktop-services/clients/remote-desktop-clients).
For OS-specific instructions for connecting to a lab VM using RDP, see [Connect to a Linux lab VM using RDP](connect-virtual-machine.md#connect-to-a-linux-lab-vm-using-rdp).
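Before you launch the Remote Desktop client, you can optionally confirm that the RDP port is reachable from your machine. This is a minimal sketch that assumes the netcat (`nc`) utility is installed; the host and port are placeholders to replace with the values from your lab VM's connection information.

```bash
# Test TCP reachability of the lab VM's RDP endpoint (placeholder values)
nc -zv <vm-endpoint> 3389
```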
For OS-specific instructions for connecting to a lab VM using RDP, see [Connect
### Using GNOME or MATE graphical desktops
-For the GNOME or MATE graphical desktop environments, you may come across a networking conflict with the Azure Linux Agent. The Azure Linux Agent is needed for the VMs to work properly in Azure Lab Services. This networking conflict causes the following side effects when Ubuntu 18.04 LTS is used with either GNOME or MATE installed:
+For the GNOME or MATE graphical desktop environments, you may come across a networking conflict with the Azure Linux Agent. The Azure Linux Agent is needed for the VMs to work properly in Azure Lab Services. This networking conflict causes the following side effects when Ubuntu 18.04 LTS is used with either GNOME or MATE installed:
- Lab creation using the image will fail with the error message, **Communication could not be established with the VM agent. Please verify that the VM agent is enabled and functioning.**
- Publishing student VMs will stop responding if the auto-shutdown settings are enabled.
- Resetting the student VM password will stop responding.
-To set up the GNOME or MATE graphical desktops on Ubuntu, see [Install and configure GNOME/RDP and MATE/X2go](https://aka.ms/azlabs/scripts/LinuxDesktop-GnomeMate). These instructions include a fix for the networking conflict that exists with Ubuntu 18.04 LTS. The scripts also support installing GNOME and MATE on Ubuntu 20.04 LTS and 21.04 LTS:
+To set up the GNOME or MATE graphical desktops on Ubuntu, see [Install and configure GNOME/RDP and MATE/X2go](https://aka.ms/azlabs/scripts/LinuxDesktop-GnomeMate). These instructions include a fix for the networking conflict that exists with Ubuntu 18.04 LTS. The scripts also support installing GNOME and MATE on Ubuntu 20.04 LTS and 21.04 LTS.
### Using RDP with Ubuntu
-In some cases, such as with Ubuntu LTS 18.04, X2Go provides better performance. If you use RDP and notice latency when interacting with the graphical desktop environment, consider trying [X2Go](#x2go-setup) since it may improve performance.
+In some cases, such as with Ubuntu LTS 18.04, X2Go provides better performance. If you use RDP and notice latency when interacting with the graphical desktop environment, consider trying [X2Go](#setting-up-x2go) for improved performance.
## Next steps
-After an educator configures either RDP or X2Go on their template VM, they can [publish the template VM](how-to-create-manage-template.md#publish-the-template-vm).
+You've now successfully configured RDP or X2Go for a Linux-based template VM.
+
+- Learn how you can [publish the template VM](how-to-create-manage-template.md#publish-the-template-vm) to create student lab VMs based on this template.
lab-services Quick Create Lab Plan Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/quick-create-lab-plan-portal.md
Title: Azure Lab Services Quickstart - Create a lab plan using the Azure portal
+ Title: Create a lab plan using the Azure portal
+ description: In this quickstart, you learn how to create an Azure Lab Services lab plan using the Azure portal.++++ Previously updated : 1/18/2022 Last updated : 01/18/2023 # Quickstart: Create a lab plan using the Azure portal
-A lab plan for Azure Lab Services can be created through the Azure portal. This quickstart shows you, as the admin, how to use the Azure portal to create a lab plan. Lab plans are used when creating labs for Azure Lab Services. You'll also add a role assignment so an educator can create labs based on the lab plan. For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
+In Azure Lab Services, a lab plan serves as a collection of configurations and settings that apply to the labs created from it. In this article, you learn how to create a lab plan by using the Azure portal. Next, you grant permissions for others to create labs on the lab plan by assigning the Lab Creator Azure Active Directory (Azure AD) role.
-## Prerequisites
+For an overview of Azure Lab Services, see [An introduction to Azure Lab Services](lab-services-overview.md).
-To complete this quick start, make sure that you have:
+## Prerequisites
-- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/).
+- An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create a lab plan

The following steps show how to use the Azure portal to create a lab plan.
-1. In the [Azure portal](https://portal.azure.com), select **Create a resource** at the top left of the screen.
-1. Select **All services** in the left menu. Search for **Lab plans**.
-1. Select the **Lab plans** tile, select **Create**.
+1. Sign in to the [Azure portal](https://portal.azure.com) by using the credentials for your Azure subscription.
+1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+
+ :::image type="content" source="./media/quick-create-lab-plan-portal/azure-portal-create-resource.png" alt-text="Screenshot that shows the Azure portal home page, highlighting the Create a resource button.":::
+
+1. Search for **lab plan**.
+1. On the **Lab plan** tile, select the **Create** dropdown and choose **Lab plan**.
+
+ :::image type="content" source="./media/quick-create-lab-plan-portal/select-lab-plans-service.png" alt-text="Screenshot of how to search for and create a lab plan by using the Azure Marketplace.":::
- :::image type="content" source="./media/quick-create-lab-plan-portal/select-lab-plans-service.png" alt-text="Screenshot that shows the Lab plan tile for Azure Marketplace.":::
+1. On the **Basics** tab of the **Create a lab plan** page, provide the following information:
-1. On the **Basics** tab of the **Create a lab plan** page:
- 1. For the **Subscription**, select the Azure subscription in which you want to create the lab plan.
- 1. For **Resource Group**, select **Create New** and enter *MyResourceGroup*.
- 1. For **Name**, enter a *MyLabPlan*.
- 1. For **Region**, select the Azure region you want to create the lab plan. (Region for the lab plan is also the default region where your labs will be created.)
- 1. Select **Review + create**.
+ | Field | Description |
+ | | -- |
+ | **Subscription** | Select the Azure subscription that you want to use for this Lab Plan resource. |
+ | **Resource group** | Select **Create New** and enter *MyResourceGroup*. |
+ | **Name** | Enter *MyLabPlan* as the lab plan name. |
+ | **Region** | Select a geographic location to host your Lab Plan resource. |
+
+ :::image type="content" source="./media/quick-create-lab-plan-portal/Create-lab-plan-basics-tab.png" alt-text="Screenshot that shows the Basics tab of the Create a new lab plan experience.":::
- :::image type="content" source="./media/quick-create-lab-plan-portal/Create-lab-plan-basics-tab.png" alt-text="Screenshot that shows the Basics tab of the Create a new lab plan experience.":::
+1. After you're finished configuring the resource, select **Review + Create**.
-1. Review the summary and select **Create**.
+1. Review all the configuration settings and select **Create** to start the deployment of the lab plan.
:::image type="content" source="./media/quick-create-lab-plan-portal/Create-lab-plan-review-create-tab.png" alt-text="Screenshot that shows the Review and Create tab of the Create a new lab plan experience.":::
-1. When the deployment is complete, expand **Next steps**, and select **Go to resource**.
+1. To view the new resource, select **Go to resource**.
:::image type="content" source="./media/quick-create-lab-plan-portal/Create-lab-plan-deployment-complete.png" alt-text="Screenshot that the deployment of the lab plan resource is complete.":::
-1. Confirm that you see the **Overview** page for *MyLabPlan*.
+1. Confirm that you see the Lab Plan **Overview** page for *MyLabPlan*.
## Add a user to the Lab Creator role
The following steps show how to use the Azure portal to create a lab plan.
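To grant a user the Lab Creator role from a script instead of the portal, you can use the Azure CLI. This is a sketch with placeholder values; replace the user and the lab plan resource ID with your own.

```azurecli
# Assign the Lab Creator built-in role at the lab plan scope (placeholder values)
az role assignment create \
    --assignee "educator@contoso.com" \
    --role "Lab Creator" \
    --scope "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.LabServices/labPlans/MyLabPlan"
```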
When no longer needed, you can delete the resource group, lab plan, and all related resources.

1. On the **Overview** page for the lab plan, select the **Resource group** link.
+
1. At the top of the page for the resource group, select **Delete resource group**.
-1. A page will open warning you that you're about to delete resources. Type the name of the resource group and select **Delete** to finish deleting the resources and the resource group.
+
+1. Enter the resource group name. Then select **Delete**.
+
+To delete resources by using the Azure CLI, enter the following command:
+
+```azurecli
+az group delete --name <yourresourcegroup>
+```
+
+Remember, deleting the resource group deletes all of the resources within it.
## Troubleshooting
lab-services Tutorial Setup Lab Plan https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab-plan.md
Title: Create a lab plan with Azure Lab Services | Microsoft Docs
-description: Learn how to set up a lab plan with Azure Lab Services, add a lab creator, and specify Marketplace images to be used by labs in the lab plan.
+ Title: Create a lab plan with Azure Lab Services
+
+description: Learn how to set up a lab plan with Azure Lab Services and assign lab creation permissions to a user by using the Azure portal.
Previously updated : 01/06/2022++++ Last updated : 01/17/2023 # Tutorial: Create a lab plan with Azure Lab Services
-In Azure Lab Services, the lab plan serves as a collection of configurations and settings that apply to the labs created from it. In your lab plan, give permission to others to create labs, and set policies that apply to newly created labs. In this tutorial, learn how to create a lab plan.
+In Azure Lab Services, the lab plan serves as a collection of configurations and settings that apply to the labs you create from it. In your lab plan, give permission to others to create labs, and set policies that apply to newly created labs. In this tutorial, learn how to create a lab plan by using the Azure portal.
In this tutorial, you do the following actions:
> * Create a lab plan
> * Assign a user to the Lab Creator role
-If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/) before you begin.
+## Prerequisites
+
+* An Azure account with an active subscription. If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
## Create a lab plan
The following steps illustrate how to use the Azure portal to create a lab plan
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Select **Create a resource** in the upper left-hand corner of the Azure portal.
+
+ :::image type="content" source="./media/tutorial-setup-lab-plan/azure-portal-create-resource.png" alt-text="Screenshot that shows the Azure portal home page, highlighting the Create a resource button.":::
+
1. Search for **lab plan**. (**Lab plan** can also be found under the **DevOps** category.)
1. On the **Lab plan** tile, select the **Create** dropdown and choose **Lab plan**.
- :::image type="content" source="./media/tutorial-setup-lab-plan/select-lab-plans-service.png" alt-text="All Services -> Lab Services":::
-1. On the **Basics** tab of the **Create a lab plan** page, do the following actions:
- 1. Select the **Azure subscription** in which you want to create the lab plan.
- 2. For **Resource group**, select an existing resource group or select **Create new**, and enter a name for the new resource group.
- 3. For **Name**, enter a lab plan name. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices).
- 4. For **Region**, select a location/region in which you want to create the lab plan.
+ :::image type="content" source="./media/tutorial-setup-lab-plan/select-lab-plans-service.png" alt-text="Screenshot of how to search for and create a lab plan by using the Azure Marketplace.":::
+
+1. On the **Basics** tab of the **Create a lab plan** page, provide the following information:
+
+ | Field | Description |
+ | | -- |
+ | **Subscription** | Select the Azure subscription that you want to use to create the lab plan. |
+ | **Resource group** | Select an existing resource group or select **Create new**, and enter a name for the new resource group. |
+ | **Name** | Enter a unique lab plan name. <br/>For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices). |
+ | **Region** | Select a geographic location to host your lab plan. |
+
+1. After you're finished configuring the resource, select **Review + Create**.
+
+ :::image type="content" source="./media/tutorial-setup-lab-plan/lab-plan-basics-page.png" alt-text="Screenshot that shows the Basics tab to create a new lab plan in the Azure portal.":::
+
+1. Review all the configuration settings and select **Create** to start the deployment of the lab plan.
+
+1. To view the new resource, select **Go to resource**.
- :::image type="content" source="./media/tutorial-setup-lab-plan/lab-plan-basics-page.png" alt-text="Lab plan - basics page":::
- 5. If you would like to enable advanced networking, see [Connect to your virtual network in Azure Lab Services](how-to-connect-vnet-injection.md).
- 6. Select **Review + Create**. When the validation succeeds, select **Create**.
- 7. Review the summary, and select **Create**.
+ :::image type="content" source="./media/tutorial-setup-lab-plan/go-to-lab-plan.png" alt-text="Screenshot that shows the resource deployment completion page in the Azure portal.":::
- :::image type="content" source="./media/tutorial-setup-lab-plan/create-button.png" alt-text="Review + create -> Create":::
-1. When the deployment is complete, select **Go to resource** under **Next steps**.
+1. Confirm that you see the lab plan **Overview** page.
- :::image type="content" source="./media/tutorial-setup-lab-plan/go-to-lab-plan.png" alt-text="Go to lab plan page":::
-1. Confirm that you see the **Lab Plan** page.
+ :::image type="content" source="./media/tutorial-setup-lab-plan/lab-plan-page.png" alt-text="Screenshot that shows the lab plan overview page in the Azure portal.":::
- :::image type="content" source="./media/tutorial-setup-lab-plan/lab-plan-page.png" alt-text="Lab plan page":::
+You've now successfully created a lab plan by using the Azure portal. To let others create labs in the lab plan, you assign them the Lab Creator role.
## Add a user to the Lab Creator role
The following steps illustrate how to use the Azure portal to create a lab plan
## Next steps
-In this tutorial, you created a lab plan and gave lab creation permissions to an educator. To learn about how to create a lab as an educator, advance to the next tutorial:
+In this tutorial, you created a lab plan and assigned lab creation permissions to another user. To learn about how to create a lab, advance to the next tutorial:
> [!div class="nextstepaction"]
-> [Create a lab](tutorial-setup-lab.md)
+> [Create a lab](./tutorial-setup-lab.md)
lab-services Tutorial Setup Lab https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/lab-services/tutorial-setup-lab.md
Title: Create a lab using Azure Lab Services | Microsoft Docs
+ Title: Create and publish a lab
+ description: In this tutorial, you use Azure Lab Services to set up a lab with virtual machines that are used by students in your class. Previously updated : 1/21/2022++++ Last updated : 1/18/2023
-# Tutorial: Create and publish a lab
+# Tutorial: Create and publish a lab in Azure Lab Services
[!INCLUDE [preview note](./includes/lab-services-new-update-focused-article.md)]
-In this tutorial, you set up a lab with virtual machines that are used by students in the classroom by doing the following actions:
+In this tutorial, you use Azure Lab Services to set up a lab with virtual machines that are used by students in the classroom. You use the Azure Lab Services website to create a lab from a virtual machine image and configure a schedule for automatically starting and stopping the lab VMs. Finally, you add and invite users to the lab to let them access the lab VMs.
+
+In this article, you learn how to:
> [!div class="checklist"]
> * Create a lab
In this tutorial, you set up a lab with virtual machines that are used by studen
## Prerequisites
-* A lab plan. To create a lab plan, see [Tutorial: Create a lab plan with Azure Lab Services](tutorial-setup-lab-plan.md).
-* Permission to create a lab. You must be a member of one of these roles in the lab plan: Owner, Lab Creator, or Contributor. For more information, see [Azure Lab Services built-in roles](administrator-guide.md#rbac-roles). The user account used to create a lab plan will already have the required permissions to create a lab.
+* A lab plan. To create a lab plan, see [Tutorial: Create a lab plan with Azure Lab Services](tutorial-setup-lab-plan.md).
+* Permission to create a lab. You must be a member of one of these Azure RBAC roles on the lab plan: Owner, Lab Creator, or Contributor. For more information, see [Azure Lab Services built-in roles](administrator-guide.md#rbac-roles). The user account that created the lab plan already has the required permissions to create a lab.
Here's the typical workflow when using Azure Lab
## Create a lab
-In this step, you create a lab for your class in Azure Lab Services portal.
+As a lab creator, you can create a lab for your class by using the Azure Lab Services website.
+
+1. Navigate to the Azure Lab Services website (https://labs.azure.com).
+1. Select **Sign in** and enter your credentials. Azure Lab Services supports organizational accounts and Microsoft accounts.
+1. Select **New lab**.
+
+ :::image type="content" source="./media/tutorial-setup-lab/new-lab-button.png" alt-text="Screenshot of Azure Lab Services portal. New lab button is highlighted.":::
+
+1. In the **New Lab** page, do the following actions:
-1. Navigate to Lab Services web portal: [https://labs.azure.com](https://labs.azure.com).
-2. Select **Sign in** and enter your credentials. Azure Lab Services supports organizational accounts and Microsoft accounts.
-3. Select **New lab**.
- <br>:::image type="content" source="./media/tutorial-setup-lab/new-lab-button.png" alt-text="Screenshot of Azure Lab Services portal. New lab button is highlighted.":::
-4. In the **New Lab** window, do the following actions:
1. Specify a **name**, **virtual machine image**, **size**, and **region** for your lab, and select **Next**. For more information about naming restrictions, see [Microsoft.LabServices resource name rules](../azure-resource-manager/management/resource-name-rules.md#microsoftlabservices). You might also need to choose a **lab plan**: if there's more than one lab plan in the resource group, you'll see a dropdown to choose one. If there's only one lab plan in the resource group, this option is hidden.
In this step, you create a lab for your class in Azure Lab Services portal.
> [!IMPORTANT]
> Make a note of the user name and password. They won't be shown again.
+
1. This step is **optional** for the tutorial. Select **Give lab user a non-admin account on their virtual machines** to give the student a non-administrator account rather than the default administrator account.
+
> [!IMPORTANT]
> Make a note of the non-admin user name and password. They won't be shown again.
+
1. If you would like students to set their own password the first time they sign in to their VM, uncheck **Use same password for all virtual machines**. If this setting is unchecked, students will have to wait for the password set operation to complete before the connect button is available for their VM. Select **Next**.

   :::image type="content" source="./media/tutorial-setup-lab/virtual-machine-credentials.png" alt-text="Screenshot that shows the Virtual machine credentials window when creating a new Azure Lab Services lab.":::
In this step, you create a lab for your class in Azure Lab Services portal.
:::image type="content" source="./media/tutorial-setup-lab/template-virtual-machine-settings.png" alt-text="Screenshot of the Template virtual machine settings windows when creating a new Azure Lab Services lab.":::
-5. You should see the following screen that shows the status of the template VM creation.
+1. You should see the following screen that shows the status of the template VM creation.
:::image type="content" source="./media/tutorial-setup-lab/create-template-vm-progress.png" alt-text="Screenshot of status of the template VM creation.":::
-6. If **Use a virtual machine image without customization** was selected on the **Template virtual machine settings** window when creating the lab, skip this step. On the **Template** page, optionally do the following steps:
+
+1. If **Use a virtual machine image without customization** was selected on the **Template virtual machine settings** window when creating the lab, skip this step. On the **Template** page, optionally do the following steps:
1. Connect to the template VM by selecting **Start**. If it's a Linux template VM, you choose whether you want to connect using SSH or RDP (if RDP is enabled).
+
   :::image type="content" source="./media/tutorial-setup-lab/start-template-vm.png" alt-text="Screenshot of the template page of an Azure Lab Services lab. Start template button is highlighted.":::
+
2. Install and configure software required for your class on the template VM.
3. **Stop** the template VM.
In this step, you publish the lab. When you publish the template VM, Azure Lab S
> [!WARNING]
> Publishing is an irreversible action! It can't be undone.
-2. On the **Publish template** page, select **Publish**. Select **OK** when warned that publishing is a permanent action.
+1. On the **Publish template** page, select **Publish**. Select **OK** when warned that publishing is a permanent action.
+ :::image type="content" source="./media/tutorial-setup-lab/publish-template-number-vms.png" alt-text="Screenshot of confirmation window for publish action of Azure.":::
-3. You see the **status of publishing** the template on page.
+1. Wait until the publishing finishes. You can track the publishing status on the **Template** page.
:::image type="content" source="./media/tutorial-setup-lab/publish-template-progress.png" alt-text="Screenshot of Azure Lab Services template page. The publishing in progress message is highlighted.":::
-4. Wait until the publishing is complete.
-5. Select **Virtual machine pool** on the left menu or select **Virtual machines** tile on the dashboard page to see the list of available machines. Confirm that you see virtual machines that are in **Unassigned** state. These VMs aren't assigned to students yet. They should be in **Stopped** state. For more information about managing the virtual machine pool, see [Manage a VM pool in Lab Services](how-to-manage-vm-pool.md).
+1. On the **Virtual machine pool** page, confirm that the virtual machines are flagged as **Unassigned** and are in a **Stopped** state.
+
+ Unassigned VMs aren't assigned to students yet. For more information about managing the virtual machine pool, see [Manage a VM pool in Lab Services](how-to-manage-vm-pool.md).
:::image type="content" source="./media/tutorial-setup-lab/virtual-machines-stopped.png" alt-text="Screenshot of virtual machines stopped. The virtual machine pool menu is highlighted.":::
In this step, you publish the lab. When you publish the template VM, Azure Lab S
## Set a schedule for the lab
-Create a scheduled event for the lab so that VMs in the lab are automatically started and stopped at specific times. The user quota (default: 10 hours) you specified earlier is the extra time assigned to each student outside this scheduled time.
+You can create a scheduled event for the lab so that VMs in the lab are automatically started and stopped at specific times. For example, you might create a scheduled event that matches the class hours. You can create one-time events or recurring events. For more information about creating and managing schedules for a class, see [Create and manage schedule for labs](how-to-create-schedules.md).
+
+The user quota (default: 10 hours) you specified earlier is the extra time assigned to each student outside this scheduled time.
+
+To create a scheduled event for a lab:
-1. Switch to the **Schedules** page, and select **Add scheduled event** on the toolbar. **Add scheduled event** will be disabled if the lab is actively being published.
+1. Switch to the **Schedules** page, and select **Add scheduled event** on the toolbar.
+
+ If the lab hasn't finished publishing, **Add scheduled event** will be disabled.
:::image type="content" source="./media/how-to-create-schedules/add-schedule-button.png" alt-text="Screenshot of the Add scheduled event button on the Schedules page. The Schedules menu and Add scheduled event button are highlighted."::: 1. On the **Add scheduled event** page, do the following steps:+ 1. Confirm that **Standard** is selected the **Event type**.
- 2. Select the **start date** for the class.
- 3. Select the **start time** at which you want the VMs to be started.
- 4. Select the **stop time** at which the VMs are to be shut down.
- 5. Select the **time zone** for the start and stop times you specified.
-1. On the same **Add scheduled event** page, select the current schedule in the **Repeat** section.
+ 1. Select the **start date** for the class.
+ 1. Select the **start time** at which you want the VMs to be started.
+ 1. Select the **stop time** at which the VMs are to be shut down.
+ 1. Select the **time zone** for the start and stop times you specified.
+
+1. On the same **Add scheduled event** page, select the current schedule in the **Repeat** section.
+ :::image type="content" source="./media/how-to-create-schedules/select-current-schedule.png" alt-text="Screenshot of the Add scheduled event window. The Repeat description of the scheduled event is highlighted.":::+ 1. On the **Repeat** dialog box, do the following steps:+ 1. Confirm that **every week** is set for the **Repeat** field.
- 2. Select the days on which you want the schedule to take effect. In the following example, Monday-Friday is selected.
- 3. Select an **end date** for the schedule.
- 4. Select **Save**.
+ 1. Select the days on which you want the schedule to take effect. In the following example, Monday-Friday is selected.
+ 1. Select an **end date** for the schedule.
+ 1. Select **Save**.
+ :::image type="content" source="./media/how-to-create-schedules/set-repeat-schedule.png" alt-text="Screenshot of the Repeat windows for scheduled events. Event repeats every week, Monday through Friday.":::+ 1. On the **Add scheduled event** page, for **Notes (optional)**, enter any description or notes for the schedule.+ 1. On the **Add scheduled event** page, select **Save**.+ :::image type="content" source="./media/how-to-create-schedules/add-schedule-page-weekly.png" alt-text="Screenshot of the Add scheduled event window.":::
-1. Navigate to the start date in the calendar to verify that the schedule is set.
- :::image type="content" source="./media/how-to-create-schedules/schedule-calendar.png" alt-text="Screenshot of the Schedule page for Azure Lab Services. Repeating schedule, Monday through Friday shown in the calendar.":::
-For more information about creating and managing schedules for a class, see [Create and manage schedule for labs](how-to-create-schedules.md).
+1. In the calendar view, confirm that the scheduled event is present.
+
+ :::image type="content" source="./media/how-to-create-schedules/schedule-calendar.png" alt-text="Screenshot of the Schedule page for Azure Lab Services. Repeating schedule, Monday through Friday shown in the calendar.":::
## Add users to the lab
-In this section, you add students to the lab. Students can be added to a lab several ways including [manually by entering an email address](how-to-configure-student-usage.md#add-users-by-email-address), [uploading a CSV file with student information](how-to-configure-student-usage.md#add-users-by-uploading-a-csv-file), or [syncing to an Azure AD group](how-to-configure-student-usage.md#sync-users-with-azure-ad-group).
+Now that you've created and configured the lab, you can add lab users. Azure Lab Services supports multiple ways to add users to a lab:
+
+- [Manually by entering an email address](how-to-configure-student-usage.md#add-users-by-email-address)
+- [Upload a CSV file with student information](how-to-configure-student-usage.md#add-users-by-uploading-a-csv-file)
+- [Sync to an Azure Active Directory group](how-to-configure-student-usage.md#sync-users-with-azure-ad-group)
+
+By default, access to a lab is restricted, which means that only listed users can register with the lab. You can turn off restricted access, which allows students to register with the lab as long as they have the registration link. Configure restricted access by using the **Restrict access** setting on the **Users** page.
-By default, the **Restrict access** option, found on the **Users** page, is turned on for a lab. *Only* listed users can register with the lab by using the registration link you send. You can turn off restricted access, which allows students to register with the lab as long as they have the registration link.
+Manually add users to the lab by providing their email address:
1. Select the **Users** page.
1. Select **Add users manually**.
- :::image type="content" source="./media/tutorial-setup-lab/add-users-manually.png" alt-text="Add users manually.":::
+ :::image type="content" source="./media/tutorial-setup-lab/add-users-manually.png" alt-text="screenshot that shows the Users page, highlighting Add users manually.":::
+ 1. Select **Add by email address** (default), enter the students' email addresses on separate lines or on a single line separated by semicolons.
- :::image type="content" source="./media/tutorial-setup-lab/add-users-email-addresses.png" alt-text="Add users' email addresses":::
+ :::image type="content" source="./media/tutorial-setup-lab/add-users-email-addresses.png" alt-text="Screenshot that shows the Add users page, enabling you to enter user email addresses.":::
+ 1. Select **Save**. The list displays the email addresses and statuses of the current users, whether they're registered with the lab or not.
- :::image type="content" source="./media/tutorial-setup-lab/list-of-added-users.png" alt-text="Users list.":::
+ :::image type="content" source="./media/tutorial-setup-lab/list-of-added-users.png" alt-text="Screenshot that shows the Users page, showing the list of user email addresses.":::
> [!NOTE]
- > After the students are registered with the lab, the list displays their names. The name that's shown in the list is constructed by using the first and last names of the student's information from Azure AD or their Microsoft Account. For more information on supported account types, see [Student accounts](how-to-configure-student-usage.md#student-accounts).
+ > After a student registers for the lab using the registration link, the user list also displays their name. The name that's shown in the list is constructed by using the first and last names of the student's information from Azure Active Directory or their Microsoft Account. For more information about supported account types, see [Student accounts](how-to-configure-student-usage.md#student-accounts).
## Send invitation emails to users
+After adding users to the lab, you can send email invitations to let them register for the lab:
+ 1. Switch to the **Users** view if you aren't on the page already, and select **Invite all** on the toolbar.
- :::image type="content" source="./media/tutorial-setup-lab/invite-all-button.png" alt-text="Screenshot of User page in Azure Lab Services. Invite all button highlighted.":::
-1. On the **Send invitation by email** page, enter an optional message, and then select **Send**. The email automatically includes the registration link. You can get this registration link by selecting **... (ellipsis)** on the toolbar, and **Registration link**.
- :::image type="content" source="./media/tutorial-setup-lab/send-email.png" alt-text="Screenshot of Send invitation by email windows for Azure Lab Services.":::
-1. You see the status of **invitation** in the **Users** list. The status should change to **Sending** and then to **Sent on &lt;date&gt;**.
+
+ :::image type="content" source="./media/tutorial-setup-lab/invite-all-button.png" alt-text="Screenshot of the User page in Azure Lab Services, highlighting the Invite all button.":::
+
+1. On the **Send invitation by email** page, enter an optional message, and then select **Send**.
+
+ The email automatically includes the registration link. You can also get this registration link by selecting **... (ellipsis)** > **Registration link** on the toolbar.
+
+ :::image type="content" source="./media/tutorial-setup-lab/send-email.png" alt-text="Screenshot that shows the Send invitation by email page in the Azure Lab Services website.":::
+
+1. You can track the status of the invitation in the **Users** list.
+
+ The status should change to **Sending** and then to **Sent on &lt;date&gt;**.
For more information about managing usage of student VMs, see [How to configure student usage](how-to-configure-student-usage.md).
load-balancer Load Balancer Distribution Mode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-distribution-mode.md
Azure Load Balancer supports two distribution modes for distributing traffic to
* Hash-based
* Source IP affinity
+To learn more about the different distribution modes supported by Azure Load Balancer, see [Azure Load Balancer distribution modes](distribution-mode-concepts.md).
+ In this article, you learn how to configure the distribution mode for your Azure Load Balancer.
You can change the configuration of the distribution mode by modifying the load-
The following options are available:

* **None (hash-based)** - Specifies that successive requests from the same client may be handled by any virtual machine.
-* **Client IP (source IP affinity two-tuple)** - Specifies that successive requests from the same client IP address will be handled by the same virtual machine.
-* **Client IP and protocol (source IP affinity three-tuple)** - Specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.
+* **Client IP (two-tuple: source IP and destination IP)** - Specifies that successive requests from the same client IP address will be handled by the same virtual machine.
+* **Client IP and protocol (three-tuple: source IP, destination IP, and protocol type)** - Specifies that successive requests from the same client IP address and protocol combination will be handled by the same virtual machine.
5. Choose the distribution mode and then select **Save**.
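You can also change the distribution mode on an existing rule with the Azure CLI. The sketch below uses placeholder resource names; the `--load-distribution` values map to the portal options: `Default` (hash-based), `SourceIP` (client IP), and `SourceIPProtocol` (client IP and protocol).

```azurecli
# Switch an existing load-balancing rule to source IP affinity (placeholder names)
az network lb rule update \
    --resource-group MyResourceGroup \
    --lb-name myLoadBalancer \
    --name myHTTPRule \
    --load-distribution SourceIP
```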
load-balancer Load Balancer Ipv6 For Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/load-balancer-ipv6-for-linux.md
This document describes how to enable DHCPv6 so that your Linux virtual machine
> [!WARNING]
> By improperly editing network configuration files, you can lose network access to your VM. We recommend that you test your configuration changes on non-production systems. The instructions in this article have been tested on the latest versions of the Linux images in the Azure Marketplace. For more detailed instructions, consult the documentation for your own version of Linux.
-## Ubuntu (17.10)
+## Ubuntu (17.10 or higher)
-1. Edit the */etc/dhcp/dhclient6.conf* file, and add the following line:
+1. Edit the **`/etc/dhcp/dhclient.conf`** file, and add the following line:
   ```config
   timeout 10;
   ```
-2. Create a new file in the cloud.cfg.d folder that will retain your configuration through reboots. The information in this file will override the default [NETPLAN]( https://netplan.io) config (in YAML configuration files at this location: /{lib,etc,run}/netplan/*.yaml).
+2. Create a new file in the cloud.cfg.d folder that will retain your configuration through reboots. **The information in this file will override the default [NETPLAN]( https://netplan.io) config (in YAML configuration files at this location: /etc/netplan/*.yaml)**.
- For example, create a */etc/cloud/cloud.config.d/91-azure-network.cfg* file. Ensure that "dhcp6: true" is reflected under the required interface, as shown by the sample below:
+    Create a */etc/cloud/cloud.cfg.d/91-azure-network.cfg* file. Ensure that **`dhcp6: true`** is reflected under the required interface, as shown in the sample below:
```config
- network:
+ network:
version: 2 ethernets:
- eth0:
- addresses: 172.16.0.30/24
- dhcp4: true
- dhcp6: true
- match:
- driver: hv_netvsc
- macaddress: 00:00:00:00:00:00
- set-name: eth0
+ eth0:
+ dhcp4: true
+ dhcp6: true
+ match:
+ driver: hv_netvsc
+ set-name: eth0
```
- The IP address range and MAC address would be specific to your configuration and should be replaced with the appropriate values.
+3. Save the file and reboot.
-3. Save the file and reboot.
+4. Use **`ifconfig`** to verify that the virtual machine received an IPv6 address.
-4. Renew the IPv6 address:
+ If **`ifconfig`** isn't installed, run the following commands:
```bash
- sudo ifdown eth0 && sudo ifup eth0
+ sudo apt update
+ sudo apt install net-tools
```
-
+
+ :::image type="content" source="./media/load-balancer-ipv6-for-linux/ipv6-ip-address-ifconfig.png" alt-text="Screenshot of ifconfig showing IPv6 IP address.":::
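Alternatively, on images where you'd rather not install the legacy net-tools package, the iproute2 `ip` command (installed by default on current Ubuntu releases) shows the same information:

```bash
# List the IPv6 addresses assigned to the eth0 interface
ip -6 address show dev eth0
```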
+
## Debian

1. Edit the */etc/dhcp/dhclient6.conf* file, and add the following line:
load-balancer Monitor Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/monitor-load-balancer.md
The following table lists common and recommended alert rules for Load Balancer.
| Alert type | Condition | Description |
|:---|:---|:---|
-| Load balancing rule unavailable due to unavailable VMs | If data path availability split by Frontend IP address and Frontend Port (all known and future values) is equal to zero and health probe status is equal to zero, then fire alerts | This alert determines if the data path availability for any configured load balancing rules isn't servicing traffic due to all VMs in the associated backend pool being probed down by the configured health probe. Review load balancer [troubleshooting guide](load-balancer-troubleshoot.md) to investigate the potential root cause. |
+| Load balancing rule unavailable due to unavailable VMs | If data path availability split by Frontend IP address and Frontend Port (all known and future values) is equal to zero, and in a secondary alert, if health probe status is equal to zero, then fire alerts | These alerts help determine if the data path availability for any configured load balancing rules isn't servicing traffic due to all VMs in the associated backend pool being probed down by the configured health probe. Review load balancer [troubleshooting guide](load-balancer-troubleshoot.md) to investigate the potential root cause. |
| VM availability significantly low | If health probe status split by Backend IP and Backend Port is equal to user defined probed-up percentage of total pool size (that is, 25% are probed up), then fire alert | This alert determines if there are fewer than needed VMs available to serve traffic. |
| Outbound connections to internet endpoint failing | If SNAT Connection Count filtered to Connection State = Failed is greater than zero, then fire alert | This alert fires when SNAT ports are exhausted and VMs are failing to initiate outbound connections. |
| Approaching SNAT exhaustion | If Used SNAT Ports is greater than user defined number, then fire alert | This alert requires a static outbound configuration where the same number of ports are always allocated. It then fires when a percentage of the allocated ports is used. |
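As an illustration, an alert like the first row can also be created with the Azure CLI. This is a sketch with placeholder names; it assumes `VipAvailability` as the metric ID behind the data path availability metric, and the threshold and windows are example values to tune for your environment.

```azurecli
# Alert when average data path availability drops below 90 percent (placeholder values)
az monitor metrics alert create \
    --name myDataPathAlert \
    --resource-group MyResourceGroup \
    --scopes "/subscriptions/<subscription-id>/resourceGroups/MyResourceGroup/providers/Microsoft.Network/loadBalancers/myLoadBalancer" \
    --condition "avg VipAvailability < 90" \
    --window-size 5m \
    --evaluation-frequency 1m \
    --description "Data path availability dropped below 90 percent"
```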
load-balancer Tutorial Protect Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-balancer/tutorial-protect-load-balancer.md
+
+ Title: "Tutorial: Protect your public load balancer with Azure DDoS Protection Standard"
+
+description: Learn how to set up a public load balancer and protect it with Azure DDoS protection.
+++ Last updated : 12/21/2022+++
+# Tutorial: Protect your public load balancer with Azure DDoS Protection Standard
+
+Azure DDoS Protection Standard enables enhanced DDoS mitigation capabilities such as adaptive tuning, attack alert notifications, and monitoring to protect your public load balancers from large scale DDoS attacks.
+
+> [!IMPORTANT]
+> Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges only apply if more than 100 public IPs are protected in the tenant. Ensure that you delete the resources in this tutorial if you aren't using them in the future. For information about pricing, see [Azure DDoS Protection Pricing]( https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a DDoS Protection plan.
+> * Create a virtual network with DDoS Protection and Bastion service enabled.
+> * Create a standard SKU public load balancer with frontend IP, health probe, backend configuration, and load-balancing rule.
+> * Create a NAT gateway for outbound internet access for the backend pool.
+> * Create virtual machine, then install and configure IIS on the VMs to demonstrate the port forwarding and load-balancing rules.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
+
+## Prerequisites
+
+- An Azure account with an active subscription.
+
+## Create a DDoS protection plan
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. In the search box at the top of the portal, enter **DDoS protection**. Select **DDoS protection plans** in the search results and then select **+ Create**.
+
+1. In the **Basics** tab of **Create a DDoS protection plan** page, enter or select the following information:
+
+ :::image type="content" source="./media/protect-load-balancer-with-ddos-standard/create-ddos-plan.png" alt-text="Screenshot of creating a DDoS protection plan.":::
+
+ | Setting | Value |
+ |--|--|
+ | **Project details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource group | Select **Create new**. </br> Enter **TutorLoadBalancer-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myDDoSProtectionPlan**. |
+ | Region | Select **(US) East US**. |
+
+1. Select **Review + create** and then select **Create** to deploy the DDoS protection plan.
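For reference, the equivalent plan can be created from the Azure CLI. This sketch reuses the tutorial's resource names:

```azurecli
# Create the DDoS protection plan used in this tutorial
az network ddos-protection create \
    --resource-group TutorLoadBalancer-rg \
    --name myDDoSProtectionPlan \
    --location eastus
```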
+
+## Create the virtual network
+
+In this section, you'll create a virtual network, a subnet, and an Azure Bastion host, and associate the DDoS Protection Standard plan. The virtual network and subnet contain the load balancer and virtual machines. The bastion host is used to securely manage the virtual machines and install IIS to test the load balancer. The DDoS Protection plan will protect all public IP resources in the virtual network.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual Networks** in the search results.
+
+2. In **Virtual networks**, select **+ Create**.
+
+3. In **Create virtual network**, enter or select the following information in the **Basics** tab:
+
+ | **Setting** | **Value** |
+ |||
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription. |
+ | Resource Group | Select **TutorLoadBalancer-rg** |
+ | **Instance details** | |
+ | Name | Enter **myVNet** |
+ | Region | Select **East US** |
+
+4. Select the **IP Addresses** tab or select **Next: IP Addresses** at the bottom of the page.
+
+5. In the **IP Addresses** tab, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | IPv4 address space | Enter **10.1.0.0/16** |
+
+6. Under **Subnet name**, select the word **default**. If a subnet isn't present, select **+ Add subnet**.
+
+7. In **Edit subnet**, enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Subnet name | Enter **myBackendSubnet** |
+ | Subnet address range | Enter **10.1.0.0/24** |
+
+8. Select **Save** or **Add**.
+
+9. Select the **Security** tab.
+
+10. Under **BastionHost**, select **Enable**. Enter this information:
+
+ | Setting | Value |
+ |--|-|
+ | Bastion name | Enter **myBastionHost** |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26** |
+ | Public IP Address | Select **Create new**. </br> For **Name**, enter **myBastionIP**. </br> Select **OK**. |
+
+11. Under **DDoS Protection Standard**, select **Enable**. Then from the drop-down menu, select **myDDoSProtectionPlan**.
+
+ :::image type="content" source="./media/protect-load-balancer-with-ddos-standard/enable-ddos.png" alt-text="Screenshot of enabling DDoS during virtual network creation.":::
+
+12. Select the **Review + create** tab or select the **Review + create** button.
+
+13. Select **Create**.
+
+ > [!NOTE]
+ > The virtual network and subnet are created immediately. The Bastion host creation is submitted as a job and will complete within 10 minutes. You can proceed to the next steps while the Bastion host is created.
+
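For reference, a virtual network with DDoS Protection Standard enabled can also be created from the Azure CLI. This sketch uses the tutorial's names; the DDoS flag names are assumptions for recent CLI versions, so verify them with `az network vnet create --help`.

```azurecli
# Create the virtual network and backend subnet with the DDoS plan attached (flag names assumed)
az network vnet create \
    --resource-group TutorLoadBalancer-rg \
    --name myVNet \
    --address-prefixes 10.1.0.0/16 \
    --subnet-name myBackendSubnet \
    --subnet-prefixes 10.1.0.0/24 \
    --ddos-protection true \
    --ddos-protection-plan myDDoSProtectionPlan
```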
+## Create load balancer
+
+In this section, you'll create a zone redundant load balancer that load balances virtual machines. With zone-redundancy, one or more availability zones can fail and the data path survives as long as one zone in the region remains healthy.
+
+During the creation of the load balancer, you'll configure:
+
+* Frontend IP address
+* Backend pool
+* Inbound load-balancing rules
+* Health probe
+
+1. In the search box at the top of the portal, enter **Load balancer**. Select **Load balancers** in the search results.
+
+2. In the **Load balancer** page, select **+ Create**.
+
+3. In the **Basics** tab of the **Create load balancer** page, enter or select the following information:
+
+ | Setting | Value |
+ | | |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorLoadBalancer-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myLoadBalancer** |
+ | Region | Select **East US**. |
+ | SKU | Leave the default **Standard**. |
+ | Type | Select **Public**. |
+ | Tier | Leave the default **Regional**. |
+
+ :::image type="content" source="./media/protect-load-balancer-with-ddos-standard/create-standard-load-balancer.png" alt-text="Screenshot of create standard load balancer basics tab." border="true":::
+
+4. Select **Next: Frontend IP configuration** at the bottom of the page.
+
+5. In **Frontend IP configuration**, select **+ Add a frontend IP configuration**.
+
+6. Enter **myFrontend** in **Name**.
+
+7. Select **IPv4** for the **IP version**.
+
+8. Select **IP address** for the **IP type**.
+
+ > [!NOTE]
+ > For more information on IP prefixes, see [Azure Public IP address prefix](../virtual-network/ip-services/public-ip-address-prefix.md).
+
+9. Select **Create new** in **Public IP address**.
+
+10. In **Add a public IP address**, enter **myPublicIP** for **Name**.
+
+11. Select **Zone-redundant** in **Availability zone**.
+
+ > [!NOTE]
+ > In regions with [Availability Zones](../availability-zones/az-overview.md?toc=%2fazure%2fvirtual-network%2ftoc.json#availability-zones), you have the option to select no-zone (default option), a specific zone, or zone-redundant. The choice will depend on your specific domain failure requirements. In regions without Availability Zones, this field won't appear. </br> For more information on availability zones, see [Availability zones overview](../availability-zones/az-overview.md).
+
+12. Leave the default of **Microsoft Network** for **Routing preference**.
+
+13. Select **OK**.
+
+14. Select **Add**.
+
+15. Select **Next: Backend pools** at the bottom of the page.
+
+16. In the **Backend pools** tab, select **+ Add a backend pool**.
+
+17. Enter **myBackendPool** for **Name** in **Add backend pool**.
+
+18. Select **myVNet** in **Virtual network**.
+
+19. Select **IP Address** for **Backend Pool Configuration**.
+
+20. Select **Save**.
+
+21. Select **Next: Inbound rules** at the bottom of the page.
+
+22. Under **Load balancing rule** in the **Inbound rules** tab, select **+ Add a load balancing rule**.
+
+23. In **Add load balancing rule**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **myHTTPRule** |
+ | IP Version | Select **IPv4** or **IPv6** depending on your requirements. |
+ | Frontend IP address | Select **myFrontend (To be created)**. |
+ | Backend pool | Select **myBackendPool**. |
+ | Protocol | Select **TCP**. |
+ | Port | Enter **80**. |
+ | Backend port | Enter **80**. |
+ | Health probe | Select **Create new**. </br> In **Name**, enter **myHealthProbe**. </br> Select **TCP** in **Protocol**. </br> Leave the rest of the defaults, and select **OK**. |
+ | Session persistence | Select **None**. |
+ | Idle timeout (minutes) | Enter or select **15**. |
+ | TCP reset | Select **Enabled**. |
+ | Floating IP | Select **Disabled**. |
+ | Outbound source network address translation (SNAT) | Leave the default of **(Recommended) Use outbound rules to provide backend pool members access to the internet.** |
+
+24. Select **Add**.
+
+25. Select the blue **Review + create** button at the bottom of the page.
+
+26. Select **Create**.
+
+ > [!NOTE]
+ > In this example we'll create a NAT gateway to provide outbound Internet access. The outbound rules tab in the configuration is bypassed as it's optional and isn't needed with the NAT gateway. For more information on Azure NAT gateway, see [What is Azure Virtual Network NAT?](../virtual-network/nat-gateway/nat-overview.md)
+ > For more information about outbound connections in Azure, see [Source Network Address Translation (SNAT) for outbound connections](../load-balancer/load-balancer-outbound-connections.md)
+
+## Create NAT gateway
+
+In this section, you'll create a NAT gateway for outbound internet access for resources in the virtual network. For other options for outbound rules, check out [Network Address Translation (SNAT) for outbound connections](load-balancer-outbound-connections.md).
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+2. In **NAT gateways**, select **+ Create**.
+
+3. In **Create network address translation (NAT) gateway**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorLoadBalancer-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **East US**. |
+ | Availability zone | Select **None**. |
+ | Idle timeout (minutes) | Enter **15**. |
+
+4. Select the **Outbound IP** tab or select **Next: Outbound IP** at the bottom of the page.
+
+5. In **Outbound IP**, select **Create a new public IP address** next to **Public IP addresses**.
+
+6. Enter **myNATgatewayIP** in **Name**.
+
+7. Select **OK**.
+
+8. Select the **Subnet** tab or select the **Next: Subnet** button at the bottom of the page.
+
+9. In **Virtual network** in the **Subnet** tab, select **myVNet**.
+
+10. Select **myBackendSubnet** under **Subnet name**.
+
+11. Select the blue **Review + create** button at the bottom of the page, or select the **Review + create** tab.
+
+12. Select **Create**.
+
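The same NAT gateway setup can be scripted with the Azure CLI. This is a sketch using the tutorial's resource names; the public IP is created first because the gateway references it.

```azurecli
# Create the outbound public IP, the NAT gateway, and the subnet association
az network public-ip create \
    --resource-group TutorLoadBalancer-rg \
    --name myNATgatewayIP \
    --sku Standard

az network nat gateway create \
    --resource-group TutorLoadBalancer-rg \
    --name myNATgateway \
    --public-ip-addresses myNATgatewayIP \
    --idle-timeout 15

az network vnet subnet update \
    --resource-group TutorLoadBalancer-rg \
    --vnet-name myVNet \
    --name myBackendSubnet \
    --nat-gateway myNATgateway
```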
+## Create virtual machines
+
+In this section, you'll create two VMs (**myVM1** and **myVM2**) in two different zones (**Zone 1** and **Zone 2**).
+
+These VMs are added to the backend pool of the load balancer that was created earlier.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. In **Virtual machines**, select **+ Create** > **Azure virtual machine**.
+
+3. In **Create a virtual machine**, enter or select the following values in the **Basics** tab:
+
+ | Setting | Value |
+ |--|-|
+ | **Project Details** | |
+ | Subscription | Select your Azure subscription |
+ | Resource Group | Select **TutorLoadBalancer-rg** |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM1** |
| Region | Select **(US) East US** |
+ | Availability Options | Select **Availability zones** |
+ | Availability zone | Select **Zone 1** |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter: Azure Edition - Gen2** |
+ | Azure Spot instance | Leave the default of unchecked. |
+ | Size | Choose VM size or take default setting |
+ | **Administrator account** | |
+ | Username | Enter a username |
+ | Password | Enter a password |
+ | Confirm password | Reenter password |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None** |
+
+4. Select the **Networking** tab, or select **Next: Disks**, then **Next: Networking**.
+
+5. In the Networking tab, select or enter the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet** |
+ | Subnet | Select **myBackendSubnet** |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Advanced** |
+ | Configure network security group | Skip this setting until the rest of the settings are completed. Complete after **Select a backend pool**.|
+ | Delete NIC when VM is deleted | Leave the default of **unselected**. |
+ | Accelerated networking | Leave the default of **selected**. |
| **Load balancing** | |
+ | Load-balancing options | Select **Azure load balancer** |
+ | Select a load balancer | Select **myLoadBalancer** |
+ | Select a backend pool | Select **myBackendPool** |
+ | Configure network security group | Select **Create new**. </br> In the **Create network security group**, enter **myNSG** in **Name**. </br> Under **Inbound rules**, select **+Add an inbound rule**. </br> Under **Service**, select **HTTP**. </br> Under **Priority**, enter **100**. </br> In **Name**, enter **myNSGRule** </br> Select **Add** </br> Select **OK** |
+
+6. Select **Review + create**.
+
+7. Review the settings, and then select **Create**.
+
+8. Follow the steps 1 through 7 to create another VM with the following values and all the other settings the same as **myVM1**:
+
+ | Setting | VM 2
+ | - | -- |
+ | Name | **myVM2** |
+ | Availability zone | **Zone 2** |
+ | Network security group | Select the existing **myNSG** |
++
+## Install IIS
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM1**.
+
+3. On the **Overview** page, select **Connect**, then **Bastion**.
+
+4. Enter the username and password entered during VM creation.
+
+5. Select **Connect**.
+
+6. On the server desktop, navigate to **Start** > **Windows PowerShell** > **Windows PowerShell**.
+
+7. In the PowerShell window, run the following commands to:
+
+ * Install the IIS server
+ * Remove the default iisstart.htm file
+ * Add a new iisstart.htm file that displays the name of the VM:
+
+ ```powershell
+ # Install IIS server role
+ Install-WindowsFeature -name Web-Server -IncludeManagementTools
+
+ # Remove default htm file
+ Remove-Item C:\inetpub\wwwroot\iisstart.htm
+
+ # Add a new htm file that displays server name
+ Add-Content -Path "C:\inetpub\wwwroot\iisstart.htm" -Value $("Hello World from " + $env:computername)
+
+ ```
+
+8. Close the Bastion session with **myVM1**.
+
+9. Repeat steps 1 to 8 to install IIS and the updated iisstart.htm file on **myVM2**.
+
+## Test the load balancer
+
+1. In the search box at the top of the page, enter **Public IP**. Select **Public IP addresses** in the search results.
+
+2. In **Public IP addresses**, select **myPublicIP**.
+
+3. Copy the item in **IP address**. Paste the public IP into the address bar of your browser. The custom VM page of the IIS Web server is displayed in the browser.
+
+ :::image type="content" source="./media/quickstart-load-balancer-standard-public-portal/load-balancer-test.png" alt-text="Screenshot of load balancer test.":::
+
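+Optionally, you can test the load balancer from the command line as well. The following is a minimal sketch using Python's `requests` library (an assumption; any HTTP client works); the IP address is a placeholder for your own. Because the load balancer distributes traffic per connection, you may need several requests before you see responses from both VMs.
+
+```python
+import requests
+
+# Hypothetical placeholder; use the public IP you copied from the portal.
+url = "http://<your-public-ip>"
+
+# Each requests.get call opens a new connection, so responses can come
+# from either backend VM (myVM1 or myVM2).
+for i in range(5):
+    response = requests.get(url, timeout=10)
+    print(f"Request {i + 1}: {response.text.strip()}")
+```
+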
+## Clean up resources
+
+When no longer needed, delete the resource group, load balancer, and all related resources. To do so, select the resource group **TutorLoadBalancer-rg** that contains the resources and then select **Delete**.
+
+## Next steps
+
+Advance to the next article to learn how to:
+
+> [!div class="nextstepaction"]
+> [Create a public load balancer with an IP-based backend](tutorial-load-balancer-ip-backend-portal.md)
load-testing How To Compare Multiple Test Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-compare-multiple-test-runs.md
Previously updated : 02/16/2022 Last updated : 01/18/2023
load-testing How To Monitor Server Side Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/how-to-monitor-server-side-metrics.md
Previously updated : 02/08/2022 Last updated : 01/18/2023
load-testing Quickstart Create And Run Load Test https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/quickstart-create-and-run-load-test.md
Previously updated : 02/15/2022 Last updated : 01/18/2023 adobe-target: true
load-testing Tutorial Identify Bottlenecks Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/load-testing/tutorial-identify-bottlenecks-azure-portal.md
Previously updated : 02/15/2022 Last updated : 01/18/2023 #Customer intent: As an Azure user, I want to learn how to identify and fix bottlenecks in a web app so that I can improve the performance of the web apps that I'm running in Azure.
machine-learning Algorithm Cheat Sheet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/algorithm-cheat-sheet.md
-+ Last updated 11/04/2022
machine-learning Azure Machine Learning Release Notes Cli V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-release-notes-cli-v2.md
description: Learn about the latest updates to Azure Machine Learning CLI (v2)
-+ Previously updated : 04/12/2022 Last updated : 11/08/2022 # Azure Machine Learning CLI (v2) release notes
machine-learning Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/classification.md
+
+ Title: "AutoML Classification"
+
+description: Learn how to use the AutoML Classification component in Azure Machine Learning to create a classifier using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Classification
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML classification.
++
+## How to configure
+
+This component creates a classification model on tabular data.
+
+This model requires a training dataset. Validation and test datasets are optional.
+
+AutoML creates a number of pipelines in parallel that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. You choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria for the experiment, such as a target training score, and AutoML stops once those criteria are met. At the end of the run, this component outputs the best model generated for your dataset.
++
+1. Add the **AutoML Classification** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. For **classification**, you can also enable deep learning.
+
+If deep learning is enabled, validation is limited to a *train-validation split*. [Learn more about validation options](/how-to-configure-cross-validation-data-splits.md).
++
+1. (Optional) View additional configuration settings: settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
+
+ |Additional configurations|Description|
+ |---|---|
+ |Primary metric| Main metric used for scoring your model. [Learn more about model metrics](/how-to-configure-auto-train.md#primary-metric).|
+ |Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).|
+ |Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](/how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).|
+ |Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.|
+ |Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](/how-to-configure-auto-train.md#multiple-child-runs-on-clusters).|
++
+1. The **[Optional] Validate and test** form allows you to do the following.
+
+ 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](/how-to-configure-cross-validation-data-splits.md#prerequisites).
+
+ 1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model recommended by automated ML.
+
+ >[!IMPORTANT]
+ > Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+
+ * Test data is considered separate from training and validation, so as not to bias the results of the test job of the recommended model. [Learn more about bias during model validation](/concept-automated-ml.md#training-validation-and-test-data).
+ * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](../v1/how-to-create-register-datasets.md#tabulardataset).
+ * The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated, no test metrics are calculated.
+ * The test dataset should not be the same as the training dataset or the validation dataset.
+
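+For readers configuring the same training job in code rather than the designer, the following is a minimal sketch using the Azure Machine Learning Python SDK v2 (`azure-ai-ml`). The compute name, data paths, and target column are hypothetical placeholders.
+
+```python
+from azure.ai.ml import automl, Input
+
+# Training data must be an MLTable; the paths below are placeholders.
+classification_job = automl.classification(
+    compute="cpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-classification",
+    training_data=Input(type="mltable", path="./data/train"),
+    target_column_name="label",              # hypothetical target column
+    primary_metric="accuracy",
+    n_cross_validations=5,
+    enable_model_explainability=True,        # "Explain best model"
+)
+
+# Exit criteria and concurrency, mirroring the table above.
+classification_job.set_limits(
+    timeout_minutes=60,
+    max_trials=10,
+    max_concurrent_trials=2,
+)
+```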
+
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Component Reference V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/component-reference-v2.md
+
+ Title: "Algorithm & component reference (v2)"
+description: Learn about the Azure Machine Learning designer components that you can use to create your own machine learning projects. (v2)
++++++++ Last updated : 01/17/2023+
+# Algorithm & component reference for Azure Machine Learning designer (v2)
+
+Azure Machine Learning designer components allow users to create machine learning projects using a drag-and-drop interface. You can open the designer from Azure Machine Learning studio; follow this link to [learn more about the designer](../concept-designer.md).
++
+This reference content provides background on each of the custom components (v2) available in Azure Machine Learning designer.
+
+You can navigate to custom components in Azure Machine Learning studio.
+++
+Each component represents a set of code that can run independently and perform a machine learning task, given the required inputs. A component might contain a particular algorithm, or perform a task that is important in machine learning, such as missing value replacement, or statistical analysis.
+
+For help with choosing algorithms, see
+* [How to select algorithms](../how-to-select-algorithms.md)
+
+> [!TIP]
+> In any pipeline in the designer, you can get information about a specific component. Select the **Learn more** link in the component card when hovering over the component in the component list, or in the right pane of the component.
++
+## AutoML Algorithms
+
+| Functionality | Description | Component |
+| | | |
+| Classification | Component that kicks off an AutoML job to train a classification model within an Azure Machine Learning pipeline | [AutoML Classification](classification.md) |
+| Regression | Component that kicks off an AutoML job to train a regression model within an Azure Machine Learning pipeline. | [AutoML Regression](regression.md) |
+| Forecasting | Component that kicks off an AutoML job to train a forecasting model within an Azure Machine Learning pipeline. | [AutoML Forecasting](forecasting.md) |
+| Image Classification |Component that kicks off an AutoML job to train an image classification model within an Azure Machine Learning pipeline |[Image Classification](image-classification.md)|
+| Multilabel Image Classification |Component that kicks off an AutoML job to train a multilabel image classification model within an Azure Machine Learning pipeline |[Image Classification Multilabel](image-classification-multilabel.md) |
+| Image Object Detection | Component that kicks off an AutoML job to train an image object detection model within an Azure Machine Learning pipeline | [Image Object Detection](image-object-detection.md) |
+| Image Instance Segmentation | Component that kicks off an AutoML job to train an image instance segmentation model within an Azure Machine Learning pipeline | [Image Instance Segmentation](image-instance-segmentation.md)|
+| Multilabel Text Classification | Component that kicks off an AutoML job to train a multilabel NLP text classification model within an Azure Machine Learning pipeline. | [AutoML Multilabel Text Classification](text-classification-multilabel.md)|
+| Text Classification | Component that kicks off an AutoML job to train an NLP text classification model within an Azure Machine Learning pipeline. | [AutoML Text Classification](text-classification.md)|
+| Text NER | Component that kicks off an AutoML job to train an NLP named entity recognition (NER) model within an Azure Machine Learning pipeline. | [AutoML Text NER](text-ner.md)|
+
+## Next steps
+
+* [Tutorial: Build a model in designer to predict auto prices](../tutorial-designer-automobile-price-train-score.md)
machine-learning Forecasting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/forecasting.md
+
+ Title: "AutoML Forecasting Component in Microsoft Azure Machine Learning Designer"
+
+description: Learn how to use the AutoML Forecasting component in Azure Machine Learning to create a forecasting model using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Forecasting
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML forecasting.
++
+## How to configure
+
+This component creates a forecasting model. Because forecasting is a supervised learning method, you need a *labeled dataset* that includes a label column with a value for all rows. Follow this link for more information on [how to prepare your dataset](/how-to-prepare-datasets-for-automl-images).
+
+This model requires a training dataset. Validation and test datasets are optional.
+
+AutoML creates a number of pipelines in parallel that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. You choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria for the experiment, such as a target training score, and AutoML stops once those criteria are met. At the end of the run, this component outputs the best model generated for your dataset.
++
+1. Add the **AutoML Forecasting** component to your pipeline.
+
+1. Specify the **training_data** you want the model to use.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success.
++
+1. Specify the **Target Column** you want the model to output.
+
+1. On the **Task type and settings** form, select the task type: forecasting. See [supported task types](/concept-automated-ml.md#when-to-use-automl-classification-regression-forecasting-computer-vision--nlp) for more information.
+
+ 1. For **forecasting**, you can:
+
+ 1. Enable deep learning.
+
+ 1. Select *time column*: This column contains the time data to be used.
+
+ 1. Select *forecast horizon*: Indicate how many time units (minutes/hours/days/weeks/months/years) the model should predict into the future. The further into the future the model must predict, the less accurate it becomes. [Learn more about forecasting and forecast horizon](/how-to-auto-train-forecast.md).
+
+1. (Optional) View additional configuration settings: settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
+
+ |Additional configurations|Description|
+ |---|---|
+ |Primary metric| Main metric used for scoring your model. [Learn more about model metrics](/how-to-configure-auto-train.md#primary-metric).|
+ |Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](/how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).|
+ |Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](../how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).|
+ |Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.|
+ |Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](/how-to-configure-auto-train.md#multiple-child-runs-on-clusters).|
++++
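+As a rough sketch, the equivalent configuration in the Azure Machine Learning Python SDK v2 (`azure-ai-ml`) looks like the following; the compute name, paths, column names, and horizon are hypothetical placeholders.
+
+```python
+from azure.ai.ml import automl, Input
+
+forecasting_job = automl.forecasting(
+    compute="cpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-forecasting",
+    training_data=Input(type="mltable", path="./data/train"),
+    target_column_name="demand",             # hypothetical target column
+    primary_metric="NormalizedRootMeanSquaredError",
+)
+
+# Time column and forecast horizon, mirroring the steps above.
+forecasting_job.set_forecast_settings(
+    time_column_name="timeStamp",
+    forecast_horizon=14,
+)
+```
+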
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Image Classification Multilabel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-classification-multilabel.md
+
+ Title: "AutoML Image Classification Multi-label"
+
+description: Learn how to use the AutoML Image Classification Multi-label component in Azure Machine Learning to create a classifier using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Image Classification Multi-label
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML multi-label image classification.
+
+Multi-label image classification is a computer vision task where the goal is to predict a set of labels associated with each individual image. You may consider using multi-label classification where you need to determine several properties of a given image.
+
+## How to configure
+
+[Follow this link](/machine-learning/reference-automl-images-cli-multilabel-classification) for a full list of configurable parameters of this component.
+
+This component creates a classification model. Because classification is a supervised learning method, you need a *labeled dataset* that includes a label column with a value for all rows.
++
+This model requires a training dataset. Validation and test datasets are optional.
+
+Follow this link for more information on [how to prepare your dataset](/how-to-prepare-datasets-for-automl-images).
++
+AutoML runs a number of trials (specified in `max_trials`) in parallel (specified in `max_concurrent_trials`) that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with hyperparameter selections, and each trial produces a model with a training score. You choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria (a termination policy) for the experiment, such as a target training score, and AutoML stops once those criteria are met. At the end of the run, this component outputs the best model generated for your dataset. Visit this link for more information on [exit criteria (termination policy)](/how-to-auto-train-image-models#early-termination-policies).
+++
+1. Add the **AutoML Image Classification Multi-label** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success. Visit this link for an [explanation of each primary metric for computer vision](/how-to-auto-train-image-models.md#primary-metric).
+
+1. (Optional) You can configure algorithm settings. Visit this link for a [list of supported algorithms for computer vision](/how-to-auto-train-image-models.md#supported-model-algorithms).
+
+1. (Optional) To configure job limits, visit [this link for more explanation.](/how-to-auto-train-image-models.md#job-limits)
+
+1. (Optional) Visit this link for a [list of configurations for sampling and early termination for your job sweep](/how-to-auto-train-image-models.md#sampling-methods-for-the-sweep). You can also find more information on each of the policies and sampling methods there. A brief SDK sketch follows this list.
+
+
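+The following is a minimal sketch of the equivalent job in the Azure Machine Learning Python SDK v2 (`azure-ai-ml`); the compute name, paths, and target column are hypothetical placeholders.
+
+```python
+from azure.ai.ml import automl, Input
+
+# Training and validation data must be MLTables; paths are placeholders.
+image_job = automl.image_classification_multilabel(
+    compute="gpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-image-multilabel",
+    training_data=Input(type="mltable", path="./data/train"),
+    validation_data=Input(type="mltable", path="./data/valid"),
+    target_column_name="label",              # hypothetical target column
+)
+```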
+
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Image Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-classification.md
+
+ Title: "AutoML Image Classification"
+
+description: Learn how to use the AutoML Image Classification component in Azure Machine Learning to create a classifier using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Image Classification
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML image classification.
++
+## How to configure
+
+[Follow this link](/machine-learning/reference-automl-images-cli-classification) for a full list of configurable parameters of this component.
++
+This model requires a training dataset. Validation and test datasets are optional.
+
+Follow this link for more information on [how to prepare your dataset](/how-to-prepare-datasets-for-automl-images). The dataset must be a *labeled dataset* that includes a label column with a value for all rows.
++
+AutoML runs a number of trials (specified in `max_trials`) in parallel (specified in `max_concurrent_trials`) that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with hyperparameter selections, and each trial produces a model with a training score. You choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria (a termination policy) for the experiment, such as a target training score, and AutoML stops once those criteria are met. At the end of the run, this component outputs the best model generated for your dataset.
+++
+1. Add the **AutoML Image Classification** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success. Visit this link for an [explanation of each primary metric for computer vision](/how-to-auto-train-image-models.md#primary-metric).
+
+1. (Optional) You can configure algorithm settings. Visit this link for a [list of supported algorithms for computer vision](/how-to-auto-train-image-models.md#supported-model-algorithms).
+
+1. (Optional) To configure job limits, visit [this link for more explanation.](/how-to-auto-train-image-models.md#job-limits)
+
+1. (Optional) Visit this link for a [list of configurations for sampling and early termination for your job sweep](/how-to-auto-train-image-models.md#sampling-methods-for-the-sweep). You can also find more information on each of the policies and sampling methods there. A brief SDK sketch follows this list.
+
+
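+As a rough sketch, the job and its limits can be configured in the Python SDK v2 as follows; all names and values are hypothetical.
+
+```python
+from azure.ai.ml import automl, Input
+
+image_job = automl.image_classification(
+    compute="gpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-image-classification",
+    training_data=Input(type="mltable", path="./data/train"),
+    target_column_name="label",              # hypothetical target column
+)
+
+# Job limits: cap total runtime, trial count, and parallelism.
+image_job.set_limits(
+    timeout_minutes=120,
+    max_trials=10,
+    max_concurrent_trials=2,
+)
+```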
+
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Image Instance Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-instance-segmentation.md
+
+ Title: "AutoML Image Instance Segmentation Component in Microsoft Azure Machine Learning Designer"
+
+description: Learn how to use the AutoML Image Instance Segmentation component in Azure Machine Learning to create an instance segmentation model using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Image Instance Segmentation
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML image instance segmentation.
+
+## How to configure
+
+[Follow this link](/machine-learning/reference-automl-images-instance-segmentation) for a full list of configurable parameters of this component.
+
+This model requires a training dataset. Validation and test datasets are optional.
+
+Follow this link for more information on [how to prepare your dataset](/how-to-prepare-datasets-for-automl-images). The dataset must be a *labeled dataset* that includes a label column with a value for all rows.
+
+AutoML runs a number of trials (specified in `max_trials`) in parallel (specified in `max_concurrent_trials`) that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with hyperparameter selections, and each trial produces a model with a training score. You choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria (a termination policy) for the experiment, such as a target training score, and AutoML stops once those criteria are met. At the end of the run, this component outputs the best model generated for your dataset. Visit this link for more information on [exit criteria (termination policy)](/how-to-auto-train-image-models#early-termination-policies).
++++
+1. Add the **AutoML Image Instance Segmentation** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success. Visit this link for an [explanation of each primary metric for computer vision](/how-to-auto-train-image-models.md#primary-metric).
+
+1. (Optional) You can configure algorithm settings. Visit this link for a [list of supported algorithms for computer vision](/how-to-auto-train-image-models.md#supported-model-algorithms).
+
+1. (Optional) To configure job limits, visit [this link for more explanation.](/how-to-auto-train-image-models.md#job-limits)
+
+1. (Optional) Visit this link for a [list of configurations for sampling and early termination for your job sweep](/how-to-auto-train-image-models.md#sampling-methods-for-the-sweep). You can also find more information on each of the policies and sampling methods there. A brief SDK sketch follows this list.
+
+
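+The sampling method and early termination policy correspond to the sweep settings in the Python SDK v2. The following is a minimal sketch, assuming random sampling and a bandit early-termination policy are wanted; all names and values are illustrative.
+
+```python
+from azure.ai.ml import automl, Input
+from azure.ai.ml.sweep import BanditPolicy
+
+segmentation_job = automl.image_instance_segmentation(
+    compute="gpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-instance-segmentation",
+    training_data=Input(type="mltable", path="./data/train"),
+    target_column_name="label",              # hypothetical target column
+)
+
+# Sweep settings: random sampling plus bandit-based early termination.
+segmentation_job.set_sweep(
+    sampling_algorithm="Random",
+    early_termination=BanditPolicy(
+        evaluation_interval=2,
+        slack_factor=0.2,
+        delay_evaluation=6,
+    ),
+)
+```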
+
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Image Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/image-object-detection.md
+
+ Title: "AutoML Image Object Detection"
+
+description: Learn how to use the AutoML Image Object Detection component in Azure Machine Learning to create an object detection model using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Image Object Detection
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML image object detection.
+
+An image object detection model locates and categorizes objects within images. Object detection models are commonly trained using deep learning and neural networks.
+
+## How to configure
+
+[Follow this link](/machine-learning/reference-automl-images-cli-multilabel-classification) for a full list of configurable parameters of this component.
+
+This model requires a training dataset. Validation and test datasets are optional.
+
+Follow this link for more information on [how to prepare your dataset](/how-to-prepare-datasets-for-automl-images). The dataset must be a *labeled dataset* that includes a label column with a value for all rows.
+
+AutoML runs a number of trials (specified in `max_trials`) in parallel (specified in `max_concurrent_trials`) that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with hyperparameter selections, and each trial produces a model with a training score. You choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria (a termination policy) for the experiment, such as a target training score, and AutoML stops once those criteria are met. At the end of the run, this component outputs the best model generated for your dataset. Visit this link for more information on [exit criteria (termination policy)](/how-to-auto-train-image-models#early-termination-policies).
++++
+1. Add the **AutoML Image Object Detection** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success. Visit this link for an [explanation of each primary metric for computer vision](/how-to-auto-train-image-models.md#primary-metric).
+
+1. (Optional) You can configure algorithm settings. Visit this link for a [list of supported algorithms for computer vision](/how-to-auto-train-image-models.md#supported-model-algorithms).
+
+1. (Optional) To configure job limits, visit [this link for more explanation.](/how-to-auto-train-image-models.md#job-limits)
+
+1. (Optional) Visit this link for a [list of configurations for sampling and early termination for your job sweep](/how-to-auto-train-image-models.md#sampling-methods-for-the-sweep). You can also find more information on each of the policies and sampling methods there. A brief SDK sketch follows this list.
+
+
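+A minimal sketch of the core job configuration in the Python SDK v2 follows; the compute name, paths, and target column are hypothetical placeholders, and the primary metric defaults to mean average precision when not set.
+
+```python
+from azure.ai.ml import automl, Input
+
+detection_job = automl.image_object_detection(
+    compute="gpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-object-detection",
+    training_data=Input(type="mltable", path="./data/train"),
+    validation_data=Input(type="mltable", path="./data/valid"),
+    target_column_name="label",              # hypothetical target column
+)
+```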
+
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Regression https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/regression.md
+
+ Title: "AutoML Regression"
+
+description: Learn how to use the AutoML Regression component in Azure Machine Learning to create a regression model using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Regression
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML regression.
++
+## How to configure
++
+This model requires a training dataset. Validation and test datasets are optional.
+
+AutoML creates a number of pipelines in parallel that try different algorithms and parameters for your model. The service iterates through ML algorithms paired with feature selections, where each iteration produces a model with a training score. You choose the metric you want the model to optimize for; the better the score for the chosen metric, the better the model is considered to "fit" your data. You can also define exit criteria for the experiment, such as a target training score, and AutoML stops once those criteria are met. At the end of the run, this component outputs the best model generated for your dataset. Visit this link for more information on [exit criteria (termination policy)](/how-to-auto-train-image-models#early-termination-policies).
+++
+1. Add the **AutoML Regression** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. (Optional) View additional configuration settings: settings you can use to better control the training job. Otherwise, defaults are applied based on experiment selection and data.
+
+ |Additional configurations|Description|
+ |---|---|
+ |Primary metric| Main metric used for scoring your model. [Learn more about model metrics](../how-to-configure-auto-train.md#primary-metric).|
+ |Explain best model | Select to enable or disable, in order to show explanations for the recommended best model. <br> This functionality is not currently available for [certain forecasting algorithms](../how-to-machine-learning-interpretability-automl.md#interpretability-during-training-for-the-best-model).|
+ |Blocked algorithm| Select algorithms you want to exclude from the training job. <br><br> Allowing algorithms is only available for [SDK experiments](../how-to-configure-auto-train.md#supported-algorithms). <br> See the [supported algorithms for each task type](/python/api/azureml-automl-core/azureml.automl.core.shared.constants.supportedmodels).|
+ |Exit criterion| When any of these criteria are met, the training job is stopped. <br> *Training job time (hours)*: How long to allow the training job to run. <br> *Metric score threshold*: Minimum metric score for all pipelines. This ensures that if you have a defined target metric you want to reach, you do not spend more time on the training job than necessary.|
+ |Concurrency| *Max concurrent iterations*: Maximum number of pipelines (iterations) to test in the training job. The job will not run more than the specified number of iterations. Learn more about how automated ML performs [multiple child jobs on clusters](/how-to-configure-auto-train.md#multiple-child-runs-on-clusters).|
+++
+1. The **[Optional] Validate and test** form allows you to do the following.
+
+ 1. Specify the type of validation to be used for your training job. [Learn more about cross validation](/how-to-configure-cross-validation-data-splits.md#prerequisites).
+
+
+ 1. Provide a test dataset (preview) to evaluate the recommended model that automated ML generates for you at the end of your experiment. When you provide test data, a test job is automatically triggered at the end of your experiment. This test job runs only on the best model recommended by automated ML.
+
+ >[!IMPORTANT]
+ > Providing a test dataset to evaluate generated models is a preview feature. This capability is an [experimental](/python/api/overview/azure/ml/#stable-vs-experimental) preview feature, and may change at any time.
+
+ * Test data is considered separate from training and validation, so as not to bias the results of the test job of the recommended model. [Learn more about bias during model validation](../concept-automated-ml.md#training-validation-and-test-data).
+ * You can either provide your own test dataset or opt to use a percentage of your training dataset. Test data must be in the form of an [Azure Machine Learning TabularDataset](../v1/how-to-create-register-datasets.md#tabulardataset).
+ * The schema of the test dataset should match the training dataset. The target column is optional, but if no target column is indicated, no test metrics are calculated.
+ * The test dataset should not be the same as the training dataset or the validation dataset.
+ * Forecasting jobs do not support train/test split.
+
+
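+For readers configuring the same training job in code rather than the designer, the following is a minimal sketch using the Python SDK v2 (`azure-ai-ml`); names, paths, and values are hypothetical placeholders.
+
+```python
+from azure.ai.ml import automl, Input
+
+regression_job = automl.regression(
+    compute="cpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-regression",
+    training_data=Input(type="mltable", path="./data/train"),
+    target_column_name="price",              # hypothetical target column
+    primary_metric="NormalizedRootMeanSquaredError",
+    n_cross_validations=5,
+)
+
+# Exit criteria and concurrency, mirroring the table above.
+regression_job.set_limits(
+    timeout_minutes=60,
+    max_trials=10,
+    max_concurrent_trials=2,
+)
+```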
+
+
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Text Classification Multilabel https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/text-classification-multilabel.md
+
+ Title: "AutoML Text Multi-label Classification"
+
+description: Learn how to use the AutoML Text Multi-label Classification component in Azure Machine Learning to create a classifier using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Text Multi-label Classification
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML multi-label text classification.
+
+Multi-label text classification is for use cases where each example may be assigned more than one label, as opposed to single-label multiclass text classification where every example is labeled with the single most probable class.
+
+## How to configure
+
+This component trains an NLP classification model on text data. Text classification is a supervised learning task and requires a *labeled dataset* that includes a label column with a value for all rows.
+
+This model requires a training and a validation dataset. The datasets must be in ML Table format.
++
+1. Add the **AutoML Text Multi-label Classification** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success.
+
+1. (Optional) Select the language your dataset consists of. Visit this link for a [full list of supported languages](/how-to-auto-train-nlp-models.md#language-settings).
+
+1. (Optional) You can configure hyperparameters. Visit this link for a [full list of configurable hyperparameters](/how-to-auto-train-nlp-models.md#supported-hyperparameters).
+
+1. (Optional) Job Sweep settings are configurable. Visit this link to learn more about [each configurable parameter.](/how-to-auto-train-nlp-models.md#sampling-methods-for-the-sweep)
+
+1. (Optional) Job Limit settings are configurable. Visit this link to learn more about [these settings](/how-to-auto-train-nlp-models.md#resources-for-the-sweep). A brief SDK sketch follows this list.
++++
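+A minimal sketch of the equivalent job in the Python SDK v2 (`azure-ai-ml`) follows; names and paths are hypothetical placeholders. Both training and validation MLTables are passed, since this component requires them.
+
+```python
+from azure.ai.ml import automl, Input
+
+text_job = automl.text_classification_multilabel(
+    compute="gpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-text-multilabel",
+    training_data=Input(type="mltable", path="./data/train"),
+    validation_data=Input(type="mltable", path="./data/valid"),
+    target_column_name="labels",             # hypothetical target column
+    primary_metric="accuracy",
+)
+```
+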
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Text Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/text-classification.md
+
+ Title: "AutoML Text Classification"
+
+description: Learn how to use the AutoML Text Classification component in Azure Machine Learning to create a classifier using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Text Classification
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML text classification.
+
+A text classification model will allow you to classify or categorize texts into predefined groups. Your dataset should be a labeled set of texts with their relevant tags that categorize each piece of text into a predefined group.
++
+## How to configure
+
+This component trains an NLP classification model on text data. Text classification is a supervised learning task and requires a *labeled dataset* that includes a label column with a value for all rows.
++
+This model requires a training and a validation dataset. The datasets must be in ML Table format.
+++
+1. Add the **AutoML Text Classification** component to your pipeline.
+
+1. Specify the **Target Column** you want the model to output.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success.
+
+1. (Optional) Select the language your dataset consists of. Visit this link for a [full list of supported languages](/how-to-auto-train-nlp-models.md#language-settings).
+
+1. (Optional) You can configure hyperparameters. Visit this link for a [full list of configurable hyperparameters](/how-to-auto-train-nlp-models.md#supported-hyperparameters).
+
+1. (Optional) Job Sweep settings are configurable. Visit this link to learn more about [each configurable parameter.](/how-to-auto-train-nlp-models.md#sampling-methods-for-the-sweep)
+
+1. (Optional) Job Limit settings are configurable. Visit this link to learn more about [these settings](/how-to-auto-train-nlp-models.md#resources-for-the-sweep). A brief SDK sketch follows this list.
+++
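+As a rough sketch in the Python SDK v2 (names, paths, and values are hypothetical placeholders), assuming the NLP job exposes `set_limits` for the sweep's resources:
+
+```python
+from azure.ai.ml import automl, Input
+
+text_job = automl.text_classification(
+    compute="gpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-text-classification",
+    training_data=Input(type="mltable", path="./data/train"),
+    validation_data=Input(type="mltable", path="./data/valid"),
+    target_column_name="category",           # hypothetical target column
+    primary_metric="accuracy",
+)
+
+# Job limits bound the sweep's total runtime and parallelism.
+text_job.set_limits(
+    timeout_minutes=120,
+    max_trials=4,
+    max_concurrent_trials=2,
+)
+```
+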
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Text Ner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/component-reference-v2/text-ner.md
+
+ Title: "AutoML Text NER (Named Entry Recognition)"
+
+description: Learn how to use the AutoML Text NER component in Azure Machine Learning to create a named entity recognition model using ML Table data.
+++++++ Last updated : 12/1/2022++
+# AutoML Text NER (Named Entity Recognition)
+
+This article describes a component in Azure Machine Learning designer.
+
+Use this component to create a machine learning model based on AutoML text NER.
+
+Named Entity Recognition (NER) is one of the features offered by Azure Cognitive Service for Language. The NER feature can identify and categorize entities in unstructured text. For more information, see the [NER overview](/cognitive-services/language-service/named-entity-recognition/overview).
+
+## How to configure
+
+This component trains an NLP named entity recognition (NER) model on text data. NER is a supervised learning task and requires a *labeled dataset* that includes a label column with a value for all rows.
++
+This model requires a training and a validation dataset. The datasets must be in ML Table format.
+++
+1. Add the **AutoML Text NER** component to your pipeline.
+
+1. Specify the **Primary Metric** you want AutoML to use to measure your model's success.
+
+1. (Optional) Select the language your dataset consists of. Visit this link for a [full list of supported languages](/how-to-auto-train-nlp-models.md#language-settings).
+
+1. (Optional) You can configure hyperparameters. Visit this link for a [full list of configurable hyperparameters](/how-to-auto-train-nlp-models.md#supported-hyperparameters).
+
+1. (Optional) Job Sweep settings are configurable. Visit this link to learn more about [each configurable parameter.](/how-to-auto-train-nlp-models.md#sampling-methods-for-the-sweep)
+
+1. (Optional) Job Limit settings are configurable. Visit this link to learn more about [these settings](/how-to-auto-train-nlp-models.md#resources-for-the-sweep). A brief SDK sketch follows this list.
++++
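+A minimal sketch of the equivalent job in the Python SDK v2 follows; the compute name and paths are hypothetical placeholders. Note there's no target column parameter: the entity labels come from the dataset itself.
+
+```python
+from azure.ai.ml import automl, Input
+
+ner_job = automl.text_ner(
+    compute="gpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-text-ner",
+    training_data=Input(type="mltable", path="./data/train"),
+    validation_data=Input(type="mltable", path="./data/valid"),
+)
+```
+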
+## Next steps
+
+See the [set of components available](/component-reference.md) to Azure Machine Learning.
machine-learning Concept Automl Forecasting Calendar Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-calendar-features.md
+
+ Title: Calendar features for time series forecasting in AutoML
+
+description: Learn how Azure Machine Learning's AutoML creates calendar and holiday features
++++++++ Last updated : 12/15/2022++
+# Calendar features for time series forecasting in AutoML
+
+This article focuses on the calendar-based features that AutoML creates to increase the accuracy of forecasting regression models. Since holidays can have a strong influence on how the modeled system behaves, the time before, during, and after a holiday can bias the series' patterns. Each holiday generates a window over your existing dataset that the learner can assign an effect to. This can be especially useful in scenarios such as holidays that generate high demand for specific products. See the [methods overview article](./concept-automl-forecasting-methods.md) for more general information about forecasting methodology in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
+
+As a part of feature engineering, AutoML transforms datetime type columns provided in the training data into new columns of calendar-based features. These features can help regression models learn seasonal patterns at several cadences. AutoML can always create calendar features from the time index of the time series since this is a required column in the training data. Calendar features are also made from other columns with datetime type, if any are present. See the [how AutoML uses your data](./concept-automl-forecasting-methods.md#how-automl-uses-your-data) guide for more information on data requirements.
+
+AutoML considers two categories of calendar features: standard features that are based entirely on date and time values, and holiday features, which are specific to a country or region of the world. We'll go over these features in the remainder of the article.
+
+## Standard calendar features
+
+The following table shows the full set of AutoML's standard calendar features along with an example output. The example uses the `%Y-%m-%d %H:%M:%S` format for datetime representation.
+
+| Feature name | Description | Example output for 2011-01-01 00:25:30 |
+| | -- | -- |
+|`year`|Numeric feature representing the calendar year |2011|
+|`year_iso`|Represents ISO year as defined in ISO 8601. ISO years start on the first week of year that has a Thursday. For example, if January 1 is a Friday, the ISO year begins on January 4. ISO years may differ from calendar years.|2010|
+|`half`| Feature indicating whether the date is in the first or second half of the year. It is 1 if the date is prior to July 1 and 2 otherwise.|1|
+|`quarter`|Numeric feature representing the quarter of the given date. It takes values 1, 2, 3, or 4 representing first, second, third, fourth quarter of calendar year.|1|
+|`month`|Numeric feature representing the calendar month. It takes values 1 through 12.|1|
+|`month_lbl`|String feature representing the name of month.|'January'|
+|`day`|Numeric feature representing the day of the month. It takes values from 1 through 31.|1|
+|`hour`|Numeric feature representing the hour of the day. It takes values 0 through 23.|0|
+|`minute`|Numeric feature representing the minute within the hour. It takes values 0 through 59.|25|
+|`second`|Numeric feature representing the second of the given datetime. In the case where only date format is provided, then it is assumed as 0. It takes values 0 through 59.|30|
+|`am_pm`|Numeric feature indicating whether the time is in the morning or evening. It is 0 for times before 12PM and 1 for times after 12PM. |0|
+|`am_pm_lbl`|String feature indicating whether the time is in the morning or evening.|'am'|
+|`hour12`|Numeric feature representing the hour of the day on a 12 hour clock. It takes values 0 through 12 for first half of the day and 1 through 11 for second half.|0|
+|`wday`|Numeric feature representing the day of the week. It takes values 0 through 6, where 0 corresponds to Monday. |5|
+|`wday_lbl`|String feature representing name of the day of the week.|'Saturday'|
+|`qday`|Numeric feature representing the day within the quarter. It takes values 1 through 92.|1|
+|`yday`|Numeric feature representing the day of the year. It takes values 1 through 365, or 1 through 366 in the case of leap year.|1|
+|`week`|Numeric feature representing [ISO week](https://en.wikipedia.org/wiki/ISO_week_date) as defined in ISO 8601. ISO weeks always start on Monday and end on Sunday. It takes values 1 through 52, or 53 for years having 1st January falling on Thursday or for leap years having 1st January falling on Wednesday.|52|
+
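+As a purely illustrative sketch (not AutoML's internal code), features like these can be derived from a timestamp with pandas:
+
+```python
+import pandas as pd
+
+df = pd.DataFrame({"timeStamp": pd.to_datetime(["2011-01-01 00:25:30"])})
+
+# A few of the standard calendar features from the table above.
+df["year"] = df["timeStamp"].dt.year                # 2011
+df["quarter"] = df["timeStamp"].dt.quarter          # 1
+df["month"] = df["timeStamp"].dt.month              # 1
+df["wday"] = df["timeStamp"].dt.dayofweek           # 5 (0 = Monday)
+df["week"] = df["timeStamp"].dt.isocalendar().week  # ISO week: 52
+print(df)
+```
+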
+The full set of standard calendar features may not be created in all cases. The generated set depends on the frequency of the time series and whether the training data contains datetime features in addition to the time index. The following table shows the features created for different column types:
+
+| Column purpose | Calendar features |
+| --- | --- |
+| Time index | The full set, minus calendar features that have high correlation with other features. For example, if the time series frequency is daily, then any features with a more granular frequency than daily will be removed since they don't provide useful information. |
+| Other datetime column | A reduced set consisting of `Year`, `Month`, `Day`, `DayOfWeek`, `DayOfYear`, `QuarterOfYear`, `WeekOfMonth`, `Hour`, `Minute`, and `Second`. If the column is a date with no time, `Hour`, `Minute`, and `Second` will be 0. |
+
+## Holiday features
+
+AutoML can optionally create features representing holidays from a specific country or region. These features are configured in AutoML using the `country_or_region_for_holidays` parameter which accepts an [ISO country code](https://en.wikipedia.org/wiki/List_of_ISO_3166_country_codes).
+
+> [!NOTE]
+> Holiday features can only be made for time series with daily frequency.
+
+The following table summarizes the holiday features:
+
+| Feature name | Description |
+| --- | --- |
+| `Holiday` | String feature that specifies whether a date is a regional or national holiday. Days within some range of a holiday are also marked. |
+| `isPaidTimeOff` | Binary feature that takes value 1 if the day is a "paid time-off holiday" in the given country or region. |
+
+AutoML uses Azure Open Datasets as a source for holiday information. For more information, see the [PublicHolidays](/python/api/azureml-opendatasets/azureml.opendatasets.publicholidays) documentation.
+
+To better understand the holiday feature generation, consider the following example data:
+
+<img src='./media/concept-automl-forecasting-calendar-features/load_forecasting_sample_data_daily.png' alt='sample_data' width=50%></img>
+
+To make American holiday features for this data, we set the `country_or_region_for_holidays` parameter to 'US' in the [forecast settings](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) as shown in the following code sample:
+```python
+from azure.ai.ml import automl
+
+# create a forecasting job
+forecasting_job = automl.forecasting(
+ compute='test_cluster', # Name of single or multinode AML compute infrastructure created by user
+ experiment_name=exp_name, # name of experiment
+ training_data=sample_data,
+ target_column_name='demand',
+ primary_metric='NormalizedRootMeanSquaredError',
+ n_cross_validations=3,
+ enable_model_explainability=True
+)
+
+# set custom forecast settings
+forecasting_job.set_forecast_settings(
+ time_column_name='timeStamp',
+ country_or_region_for_holidays='US'
+)
+```
+The generated holiday features look like the following:
+
+<a name='output'><img src='./media/concept-automl-forecasting-calendar-features/sample_dataset_holiday_feature_generated.png' alt='sample_data_output' width=75%></img></a>
+
+Note that generated features have the prefix `_automl_` prepended to their column names. AutoML generally uses this prefix to distinguish input features from engineered features.
+
+## Next steps
+* Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
+* Browse [AutoML Forecasting Frequently Asked Questions](./how-to-automl-forecasting-faq.md).
+* Learn about [AutoML Forecasting Lagged Features](./concept-automl-forecasting-lags.md).
+* Learn about [how AutoML uses machine learning to build forecasting models](./concept-automl-forecasting-methods.md).
machine-learning Concept Automl Forecasting Lags https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-lags.md
+
+ Title: Lagged features for time series forecasting in AutoML
+
+description: Learn how Azure Machine Learning's AutoML forms lag based features for time series forecasting
++++++++ Last updated : 12/15/2022
+show_latex: true
++
+# Lagged features for time series forecasting in AutoML
+This article focuses on AutoML's methods for creating lag and rolling window aggregation features for forecasting regression models. Features like these that use past information can significantly increase accuracy by helping the model to learn correlational patterns in time. See the [methods overview article](./concept-automl-forecasting-methods.md) for general information about forecasting methodology in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
+
+## Lag feature example
+AutoML generates lags with respect to the forecast horizon. The example in this section illustrates this concept. Here, we use a forecast horizon of three and target lag order of one. Consider the following monthly time series:
+
+Table 1: Original time series <a name="tab:original-ts"></a>
+
+| Date | $y_t$ |
+|: |: |
+| 1/1/2001 | 0 |
+| 2/1/2001 | 10 |
+| 3/1/2001 | 20 |
+| 4/1/2001 | 30 |
+| 5/1/2001 | 40 |
+| 6/1/2001 | 50 |
+
+First, we generate the lag feature for the horizon $h=1$ only. As you continue reading, it will become clear why we use individual horizons in each table.
+
+Table 2: Lag featurization for $h=1$ <a name="tbl:classic-lag-1"></a>
+
+| Date | $y_t$ | Origin | $y_{t-1}$ | $h$ |
+|: |: |: |: |: |
+| 1/1/2001 | 0 | 12/1/2000 | - | 1 |
+| 2/1/2001 | 10 | 1/1/2001 | 0 | 1 |
+| 3/1/2001 | 20 | 2/1/2001 | 10 | 1 |
+| 4/1/2001 | 30 | 3/1/2001 | 20 | 1 |
+| 5/1/2001 | 40 | 4/1/2001 | 30 | 1 |
+| 6/1/2001 | 50 | 5/1/2001 | 40 | 1 |
+
+Table 2 is generated from Table 1 by shifting the $y_t$ column down by a single observation. We've added a column named `Origin` that has the dates that the lag features originate from. Next, we generate the lag feature for the forecast horizon $h=2$ only.
+
+Table 3: Lag featurization for $h=2$ <a name="tbl:classic-lag-2"></a>
+
+| Date | $y_t$ | Origin | $y_{t-2}$ | $h$ |
+|: |: |: |: |: |
+| 1/1/2001 | 0 | 11/1/2000 | - | 2 |
+| 2/1/2001 | 10 | 12/1/2000 | - | 2 |
+| 3/1/2001 | 20 | 1/1/2001 | 0 | 2 |
+| 4/1/2001 | 30 | 2/1/2001 | 10 | 2 |
+| 5/1/2001 | 40 | 3/1/2001 | 20 | 2 |
+| 6/1/2001 | 50 | 4/1/2001 | 30 | 2 |
+
+Table 3 is generated from Table 1 by shifting the $y_t$ column down by two observations. Finally, we generate the lag feature for the forecast horizon $h=3$ only.
+
+Table 4: Lag featurization for $h=3$ <a name="tbl:classic-lag-3"></a>
+
+| Date | $y_t$ | Origin | $y_{t-3}$ | $h$ |
+|: |: |: |: |: |
+| 1/1/2001 | 0 | 10/1/2000 | - | 3 |
+| 2/1/2001 | 10 | 11/1/2000 | - | 3 |
+| 3/1/2001 | 20 | 12/1/2000 | - | 3 |
+| 4/1/2001 | 30 | 1/1/2001 | 0 | 3 |
+| 5/1/2001 | 40 | 2/1/2001 | 10 | 3 |
+| 6/1/2001 | 50 | 3/1/2001 | 20 | 3 |
+
+Next, we concatenate Tables 2, 3, and 4 and rearrange the rows. The result is in the following table:
+
+Table 5: Lag featurization complete <a name="tbl:automl-lag-complete"></a>
+
+| Date | $y_t$ | Origin | $y_{t-1}^{(h)}$ | $h$ |
+|: |: |: |: |: |
+| 1/1/2001 | 0 | 12/1/2000 | - | 1 |
+| 1/1/2001 | 0 | 11/1/2000 | - | 2 |
+| 1/1/2001 | 0 | 10/1/2000 | - | 3 |
+| 2/1/2001 | 10 | 1/1/2001 | 0 | 1 |
+| 2/1/2001 | 10 | 12/1/2000 | - | 2 |
+| 2/1/2001 | 10 | 11/1/2000 | - | 3 |
+| 3/1/2001 | 20 | 2/1/2001 | 10 | 1 |
+| 3/1/2001 | 20 | 1/1/2001 | 0 | 2 |
+| 3/1/2001 | 20 | 12/1/2000 | - | 3 |
+| 4/1/2001 | 30 | 3/1/2001 | 20 | 1 |
+| 4/1/2001 | 30 | 2/1/2001 | 10 | 2 |
+| 4/1/2001 | 30 | 1/1/2001 | 0 | 3 |
+| 5/1/2001 | 40 | 4/1/2001 | 30 | 1 |
+| 5/1/2001 | 40 | 3/1/2001 | 20 | 2 |
+| 5/1/2001 | 40 | 2/1/2001 | 10 | 3 |
+| 6/1/2001 | 50 | 5/1/2001 | 40 | 1 |
+| 6/1/2001 | 50 | 4/1/2001 | 30 | 2 |
+| 6/1/2001 | 50 | 3/1/2001 | 20 | 3 |
++
+In the final table, we've changed the name of the lag column to $y_{t-1}^{(h)}$ to reflect that the lag is generated with respect to a specific horizon. The table shows that the lags we generated with respect to the horizon can be mapped to the conventional ways of generating lags in the previous tables.
+
+Table 5 is an example of the data augmentation that AutoML applies to training data to enable direct forecasting from regression models. When the configuration includes lag features, AutoML creates horizon dependent lags along with an integer-valued horizon feature. This enables AutoML's forecasting regression models to make a prediction at horizon $h$ without regard to the prediction at $h-1$, in contrast to recursively defined models like ARIMA.
+
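+The augmentation in Table 5 is straightforward to reproduce outside of AutoML. The following is an illustrative pandas sketch (not AutoML's internal code) for a forecast horizon of three and a target lag order of one:
+
+```python
+import pandas as pd
+
+# Original monthly series from Table 1.
+ts = pd.DataFrame({
+    "Date": pd.date_range("2001-01-01", periods=6, freq="MS"),
+    "y": [0, 10, 20, 30, 40, 50],
+})
+
+horizon, lag_order = 3, 1
+
+# For each horizon h, shift the target by (h + lag_order - 1) periods,
+# then stack the copies: one row per (date, horizon) pair, as in Table 5.
+frames = []
+for h in range(1, horizon + 1):
+    shift = h + lag_order - 1
+    frame = ts.copy()
+    frame["Origin"] = frame["Date"] - pd.DateOffset(months=shift)
+    frame["y_lag"] = frame["y"].shift(shift)
+    frame["h"] = h
+    frames.append(frame)
+
+augmented = pd.concat(frames).sort_values(["Date", "h"])
+print(augmented)
+```
+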
+> [!NOTE]
+> Generation of horizon dependent lag features adds new _rows_ to the dataset. The number of new rows is proportional to the forecast horizon. This dataset size growth can lead to out-of-memory errors on smaller compute nodes or when the dataset size is already large. See the [frequently asked questions](./how-to-automl-forecasting-faq.md#how-do-i-fix-an-out-of-memory-error) article for solutions to this problem.
+
+Another consequence of this lagging strategy is that lag order and forecast horizon are decoupled. If, for example, your forecast horizon is seven and you want AutoML to use lag features, you don't have to set the lag order to seven to ensure prediction over the full forecast horizon. Since AutoML generates lags with respect to horizon, you can set the lag order to one, and AutoML augments the data so that lags of any order are valid up to the forecast horizon. A configuration sketch follows.
+
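+In the Python SDK v2, this is part of the forecast settings. The following is a minimal sketch, assuming `set_forecast_settings` accepts `target_lags`; names, paths, and values are hypothetical placeholders.
+
+```python
+from azure.ai.ml import automl, Input
+
+forecasting_job = automl.forecasting(
+    compute="cpu-cluster",                   # hypothetical compute target
+    experiment_name="automl-forecast-lags",
+    training_data=Input(type="mltable", path="./data/train"),
+    target_column_name="demand",             # hypothetical target column
+    primary_metric="NormalizedRootMeanSquaredError",
+)
+
+# Lag order one is enough; AutoML generates horizon-dependent lags itself.
+forecasting_job.set_forecast_settings(
+    time_column_name="timeStamp",
+    forecast_horizon=7,
+    target_lags=[1],
+)
+```
+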
+## Next steps
+* Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
+* Browse [AutoML Forecasting Frequently Asked Questions](./how-to-automl-forecasting-faq.md).
+* Learn about [calendar features for time series forecasting in AutoML](./concept-automl-forecasting-calendar-features.md).
+* Learn about [how AutoML uses machine learning to build forecasting models](./concept-automl-forecasting-methods.md).
machine-learning Concept Automl Forecasting Methods https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-methods.md
+
+ Title: Overview of forecasting methods in AutoML
+
+description: Learn how Azure Machine Learning's AutoML uses machine learning to build forecasting models
++++++++ Last updated : 12/15/2022
+show_latex: true
++
+# Overview of forecasting methods in AutoML
+This article focuses on the methods that AutoML uses to prepare time series data and build forecasting models. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
+
+AutoML uses several methods to forecast time series values. These methods can be roughly assigned to two categories:
+
+1. Time series models that use historical values of the target quantity to make predictions into the future.
+2. Regression, or explanatory, models that use predictor variables to forecast values of the target.
+
+As an example, consider the problem of forecasting daily demand for a particular brand of orange juice from a grocery store. Let $y_t$ represent the demand for this brand on day $t$. A **time series model** predicts demand at $t+1$ using some function of historical demand,
+
+$y_{t+1} = f(y_t, y_{t-1}, \cdots, y_{t-s})$.
+
+The function $f$ often has parameters that we tune using observed demand from the past. The amount of history that $f$ uses to make predictions, $s$, can also be considered a parameter of the model.
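+
+As a minimal, illustrative instance of such an $f$, consider a moving average forecaster where the window length plays the role of $s$:
+
+```python
+import numpy as np
+
+def moving_average_forecast(history, s):
+    """Predict the next value as the mean of the last s observations."""
+    return float(np.mean(history[-s:]))
+
+daily_demand = [100, 97, 106, 103, 110]
+print(moving_average_forecast(daily_demand, s=3))  # mean of the last 3 days
+```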
+
+The time series model in the orange juice demand example may not be accurate enough since it only uses information about past demand. There are many other factors that likely influence future demand such as price, day of the week, and whether it's a holiday or not. Consider a **regression model** that uses these predictor variables,
+
+$y = g(\text{price}, \text{day of week}, \text{holiday})$.
+
+Again, $g$ generally has a set of parameters, including those governing regularization, that AutoML tunes using past values of the demand and the predictors. We omit $t$ from the expression to emphasize that the regression model uses correlational patterns between _contemporaneously_ defined variables to make predictions. That is, to predict $y_{t+1}$ from $g$, we must know which day of the week $t+1$ falls on, whether it's a holiday, and the orange juice price on day $t+1$. The first two pieces of information are always easily found by consulting a calendar. A retail price is usually set in advance, so the price of orange juice is likely also known one day ahead. However, the price may not be known 10 days into the future! It's important to understand that the utility of this regression is limited by how far into the future we need forecasts, also called the **forecast horizon**, and to what degree we know the future values of the predictors.
+
+> [!IMPORTANT]
+> AutoML's forecasting regression models assume that all features provided by the user are known into the future, at least up to the forecast horizon.
+
+AutoML's forecasting regression models can also be augmented to use historical values of the target and predictors. The result is a hybrid model with characteristics of a time series model and a pure regression model. Historical quantities are additional predictor variables in the regression and we refer to them as **lagged quantities**. The _order_ of the lag refers to how far back the value is known. For example, the current value of an order two lag of the target for our orange juice demand example is the observed juice demand from two days ago.
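+
+For instance, an order-two lag of the target can be computed in pandas with a simple shift; the column names here are illustrative:
+
+```python
+import pandas as pd
+
+df = pd.DataFrame({
+    "date": pd.date_range("2023-01-01", periods=5, freq="D"),
+    "demand": [100, 97, 106, 103, 110],
+})
+# Order-two lag: the demand observed two days earlier.
+df["demand_lag2"] = df["demand"].shift(2)
+print(df)
+```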
+
+Another notable difference between the time series models and the regression models is in the way they generate forecasts. Time series models are generally defined by recursion relations and produce forecasts one at a time. To forecast many periods into the future, they iterate up to the forecast horizon, feeding previous forecasts back into the model to generate the next one-period-ahead forecast as needed. In contrast, the regression models are so-called **direct forecasters** that generate _all_ forecasts up to the horizon in one go. Direct forecasters can be preferable to recursive ones because recursive models compound prediction error when they feed previous forecasts back into the model. When lag features are included, AutoML makes some important modifications to the training data so that the regression models can function as direct forecasters. See the [lag features article](./concept-automl-forecasting-lags.md) for more details.
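+
+The following schematic sketch contrasts the two strategies; `one_step_model` and `models_by_horizon` are hypothetical callables standing in for fitted models:
+
+```python
+def recursive_forecast(one_step_model, history, horizon):
+    """Recursive strategy: each forecast is fed back in to produce the next."""
+    values = list(history)
+    forecasts = []
+    for _ in range(horizon):
+        y_hat = one_step_model(values)  # one-period-ahead prediction
+        forecasts.append(y_hat)
+        values.append(y_hat)            # previous forecast becomes an input
+    return forecasts
+
+def direct_forecast(models_by_horizon, features_by_horizon):
+    """Direct strategy: one prediction per horizon, none depends on the others."""
+    return [model(x) for model, x in zip(models_by_horizon, features_by_horizon)]
+
+# Example with a trivial one-step model: repeat the last value (naive).
+print(recursive_forecast(lambda h: h[-1], [100, 97, 106], horizon=3))
+```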
+
+## Forecasting models in AutoML
+The following table lists the forecasting models implemented in AutoML and what category they belong to:
+
+Time Series Models | Regression Models
+-| --
+[Naive, Seasonal Naive, Average, Seasonal Average](https://otexts.com/fpp3/simple-methods.html), [ARIMA(X)](https://www.statsmodels.org/dev/generated/statsmodels.tsa.statespace.sarimax.SARIMAX.html), [Exponential Smoothing](https://www.statsmodels.org/dev/generated/statsmodels.tsa.holtwinters.ExponentialSmoothing.html) | [Linear SGD](https://scikit-learn.org/stable/modules/linear_model.html#stochastic-gradient-descent-sgd), [LARS LASSO](https://scikit-learn.org/stable/modules/linear_model.html#lars-lasso), [Elastic Net](https://scikit-learn.org/stable/modules/linear_model.html#elastic-net), [Prophet](https://facebook.github.io/prophet/), [K Nearest Neighbors](https://scikit-learn.org/stable/modules/neighbors.html#nearest-neighbors-regression), [Decision Tree](https://scikit-learn.org/stable/modules/tree.html#regression), [Random Forest](https://scikit-learn.org/stable/modules/ensemble.html#random-forests), [Extremely Randomized Trees](https://scikit-learn.org/stable/modules/ensemble.html#extremely-randomized-trees), [Gradient Boosted Trees](https://scikit-learn.org/stable/modules/ensemble.html#regression), [LightGBM](https://lightgbm.readthedocs.io/en/latest/index.html), [XGBoost](https://xgboost.readthedocs.io/en/latest/parameter.html), Temporal Convolutional Network
+
+The models in each category are listed roughly in order of the complexity of patterns they're able to incorporate, also known as the **model capacity**. A Naive model, which simply forecasts the last observed value, has low capacity while the Temporal Convolutional Network (TCN), a deep neural network with potentially millions of tunable parameters, has high capacity.
+
+Importantly, AutoML also includes **ensemble** models that create weighted combinations of the best performing models to further improve accuracy. For forecasting, we use a [soft voting ensemble](https://scikit-learn.org/stable/modules/ensemble.html#voting-regressor) where composition and weights are found via the [Caruana Ensemble Selection Algorithm](http://www.niculescu-mizil.org/papers/shotgun.icml04.revised.rev2.pdf).
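+
+The following is a schematic sketch of greedy forward selection with replacement, in the spirit of the Caruana algorithm; `val_preds` and `y_val` are hypothetical validation predictions and targets:
+
+```python
+import numpy as np
+
+def greedy_ensemble_selection(val_preds, y_val, n_iters=10):
+    """Forward selection with replacement, in the style of Caruana et al."""
+    def rmse(p):
+        return float(np.sqrt(np.mean((p - y_val) ** 2)))
+
+    selected, running_sum = [], np.zeros_like(y_val, dtype=float)
+    for _ in range(n_iters):
+        # Add the model that most improves the ensemble average.
+        best = min(
+            val_preds,
+            key=lambda name: rmse((running_sum + val_preds[name]) / (len(selected) + 1)),
+        )
+        selected.append(best)
+        running_sum += val_preds[best]
+    # Ensemble weight of a model = how often it was selected.
+    return {name: selected.count(name) / n_iters for name in set(selected)}
+
+preds = {"a": np.array([1.0, 2.0]), "b": np.array([2.0, 3.0])}
+print(greedy_ensemble_selection(preds, y_val=np.array([1.5, 2.5]), n_iters=4))
+```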
+
+> [!NOTE]
+> There are two important caveats for forecast model ensembles:
+> 1. The TCN cannot currently be included in ensembles.
+> 2. AutoML by default disables another ensemble method, the **stack ensemble**, which is included with default regression and classification tasks in AutoML. The stack ensemble fits a meta-model on the best model forecasts to find ensemble weights. We've found in internal benchmarking that this strategy has an increased tendency to overfit time series data. This can result in poor generalization, so the stack ensemble is disabled by default. However, it can be enabled if desired in the AutoML configuration.
+
+## How AutoML uses your data
+
+AutoML accepts time series data in tabular, "wide" format; that is, each variable must have its own corresponding column. AutoML requires that one of the columns be the time axis for the forecasting problem, with values that are parsable into a datetime type. The simplest time series dataset consists of a **time column** and a numeric **target column**. The target is the variable one intends to predict into the future. An example of the format in this simple case follows:
+
+timestamp | quantity
+ | --
+2012-01-01 | 100
+2012-01-02 | 97
+2012-01-03 | 106
+... | ...
+2013-12-31 | 347
+
+In more complex cases, the data may contain other columns aligned with the time index.
+
+timestamp | SKU | price | advertised | quantity
+ | | -- | - | --
+2012-01-01 | JUICE1 | 3.5 | 0 | 100
+2012-01-01 | BREAD3 | 5.76 | 0 | 47
+2012-01-02 | JUICE1 | 3.5 | 0 | 97
+2012-01-02 | BREAD3 | 5.5 | 1 | 68
+... | ... | ... | ... | ...
+2013-12-31 | JUICE1 | 3.75 | 0 | 347
+2013-12-31 | BREAD3 | 5.7 | 0 | 94
+
+In this example, the data contains a SKU, a retail price, and a flag indicating whether an item was advertised, in addition to the timestamp and target quantity. There are evidently two series in this dataset - one for the JUICE1 SKU and one for the BREAD3 SKU; the `SKU` column is a **time series ID column**, since grouping by it gives two groups containing a single series each. Before sweeping over models, AutoML does basic validation of the input configuration and data, and adds engineered features.
+
+### Missing data handling
+AutoML's time series models generally require data with regularly spaced observations in time. Regularly spaced, here, includes cases like monthly or yearly observations where the number of days between observations may vary. Prior to modeling, AutoML must ensure that series values aren't missing _and_ that the observations are regular. Hence, there are two missing data cases:
+
+* A value is missing for some cell in the tabular data
+* A _row_ is missing which corresponds with an expected observation given the time series frequency
+
+In the first case, AutoML imputes missing values using common, configurable techniques.
+
+An example of a missing, expected row is shown in the following table:
+
+timestamp | quantity
+ | --
+2012-01-01 | 100
+2012-01-03 | 106
+2012-01-04 | 103
+... | ...
+2013-12-31 | 347
+
+This series ostensibly has a daily frequency, but there's no observation for 2012-01-02. In this case, AutoML will attempt to fill in the data by adding a new row for 2012-01-02. The new value for the `quantity` column, and any other columns in the data, will then be imputed like other missing values. Clearly, AutoML must know the series frequency in order to fill in observation gaps like this. AutoML automatically detects this frequency, or, optionally, the user can provide it in the configuration.
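+
+The gap filling behavior described above can be pictured with a small pandas sketch; the imputation shown, forward fill, is AutoML's default for the target, as listed in the next table:
+
+```python
+import pandas as pd
+
+df = pd.DataFrame(
+    {"quantity": [100, 106, 103]},
+    index=pd.to_datetime(["2012-01-01", "2012-01-03", "2012-01-04"]),
+)
+# Reindex to the daily frequency; 2012-01-02 appears as a new NaN row.
+regular = df.asfreq("D")
+# Impute the target with forward fill (last observation carried forward).
+regular["quantity"] = regular["quantity"].ffill()
+print(regular)
+```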
+
+The imputation method for filling missing values can be configured in the input. The default methods are listed in the following table:
+
+Column Type | Default Imputation Method
+-- |
+Target | Forward fill (last observation carried forward)
+Numeric Feature | Median value
+
+Missing values for categorical features are handled during numerical encoding by including an additional category corresponding to a missing value. Imputation is implicit in this case.
+
+### Automated feature engineering
+AutoML generally adds new columns to user data in an effort to increase modeling accuracy. Engineered features can include the following:
+
+Feature Group | Default/Optional
+ | -
+Calendar features derived from the time index (for example, day of week) | Default
+Encoding categorical types to numeric type | Default
+Indicator features for holidays associated with a given country or region | Optional
+Lags of target quantity | Optional
+Lags of feature columns | Optional
+Rolling window aggregations (for example, rolling average) of target quantity | Optional
+Seasonal decomposition (STL) | Optional
+
+The user can configure featurization from the AutoML SDK via the [ForecastingJob](/python/api/azure-ai-ml/azure.ai.ml.automl.forecastingjob#azure-ai-ml-automl-forecastingjob-set-forecast-settings) class or from the [AzureML Studio web interface](how-to-use-automated-ml-for-ml-models.md#customize-featurization).
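+
+A sketch of such a configuration with the v2 `azure-ai-ml` SDK is shown below; the compute name, paths, column names, and values are hypothetical:
+
+```python
+from azure.ai.ml import automl, Input
+
+job = automl.forecasting(
+    compute="cpu-cluster",
+    experiment_name="sales-forecasting",
+    training_data=Input(type="mltable", path="./training-mltable-folder"),
+    target_column_name="quantity",
+    primary_metric="normalized_root_mean_squared_error",
+    n_cross_validations=5,
+)
+job.set_forecast_settings(
+    time_column_name="timestamp",
+    forecast_horizon=14,
+    time_series_id_column_names=["SKU"],
+    frequency="D",                        # daily observations
+    target_lags="auto",                   # lags of the target quantity
+    target_rolling_window_size=7,         # rolling window aggregations
+    country_or_region_for_holidays="US",  # holiday indicator features
+    use_stl="season",                     # seasonal (STL) decomposition
+)
+```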
+
+### Model sweeping
+After data has been prepared with missing data handling and feature engineering, AutoML sweeps over a set of models and hyper-parameters using a [model recommendation service](https://www.microsoft.com/research/publication/probabilistic-matrix-factorization-for-automated-machine-learning/). The models are ranked based on validation or cross-validation metrics and then, optionally, the top models may be used in an ensemble model. The best model, or any of the trained models, can be inspected, downloaded, or deployed to produce forecasts as needed. See the [model sweeping and selection](./concept-automl-forecasting-sweeping.md) article for more details.
++
+### Model grouping
+When a dataset contains more than one time series, as in the given data example, there are multiple ways to model that data. For instance, we may simply group by the **time series ID column(s)** and train independent models for each series. A more general approach is to partition the data into groups that may each contain multiple, likely related series and train a model per group. By default, AutoML forecasting uses a mixed approach to model grouping. Time series models, along with ARIMAX and Prophet, assign each series to its own group, while the other regression models assign all series to a single group. The following table summarizes the model groupings in two categories, one-to-one and many-to-one:
+
+Each Series in Own Group (1:1) | All Series in Single Group (N:1)
+-| --
+Naive, Seasonal Naive, Average, Seasonal Average, Exponential Smoothing, ARIMA, ARIMAX, Prophet | Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost, Temporal Convolutional Network
+
+More general model groupings are possible via AutoML's Many-Models solution; see our [Many Models- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb) and [Hierarchical time series- Automated ML notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-hierarchical-timeseries/auto-ml-forecasting-hierarchical-timeseries.ipynb).
+
+## Next steps
+
+* Learn more about [model sweeping and selection](./concept-automl-forecasting-sweeping.md) for forecasting in AutoML.
+* Learn about how AutoML creates [features from the calendar](./concept-automl-forecasting-calendar-features.md).
+* Learn about how AutoML creates [lag features](./concept-automl-forecasting-lags.md).
+* Read answers to [frequently asked questions](./how-to-automl-forecasting-faq.md) about forecasting in AutoML.
machine-learning Concept Automl Forecasting Sweeping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-automl-forecasting-sweeping.md
+
+ Title: Model sweeping and selection for forecasting in AutoML
+
+description: Learn how Azure Machine Learning's AutoML searches for and selects forecasting models
++++++++ Last updated : 12/15/2022++
+# Model sweeping and selection for forecasting in AutoML
+This article focuses on how AutoML searches for and selects forecasting models. See the [methods overview article](./concept-automl-forecasting-methods.md) for more general information about forecasting methodology in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
+
+## Model sweeping
+The central task for AutoML is to train and evaluate several models and choose the best one with respect to the given primary metric. The word "model" here refers to both the model class - such as ARIMA or Random Forest - and the specific hyper-parameter settings which distinguish models within a class. For instance, ARIMA refers to a class of models that share a mathematical template and a set of statistical assumptions. Training, or fitting, an ARIMA model requires a list of positive integers that specify the precise mathematical form of the model; these are the hyper-parameters. ARIMA(1, 0, 1) and ARIMA(2, 1, 2) have the same class, but different hyper-parameters and, so, can be separately fit with the training data and evaluated against each other. AutoML searches, or _sweeps_, over different model classes and within classes by varying hyper-parameters.
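+
+For example, two members of the ARIMA class with different hyper-parameters can be fit and compared with `statsmodels`; the toy series is purely illustrative:
+
+```python
+from statsmodels.tsa.arima.model import ARIMA
+
+y = [0.0, 10, 20, 30, 40, 50, 45, 55, 60, 70, 65, 75]  # toy series
+
+# Same model class, different hyper-parameters (p, d, q).
+fit_101 = ARIMA(y, order=(1, 0, 1)).fit()
+fit_212 = ARIMA(y, order=(2, 1, 2)).fit()
+
+# Penalized likelihood criteria, such as AIC, compare fits within the class.
+print(fit_101.aic, fit_212.aic)
+```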
+
+The following table shows the different hyper-parameter sweeping methods that AutoML uses for different model classes:
+
+Model class group | Model type | Hyper-parameter sweeping method
+- | - | -
+Naive, Seasonal Naive, Average, Seasonal Average | Time series | No sweeping within class due to model simplicity
+Exponential Smoothing, ARIMA(X) | Time series | Grid search for within-class sweeping
+Prophet | Regression | No sweeping within class
+Linear SGD, LARS LASSO, Elastic Net, K Nearest Neighbors, Decision Tree, Random Forest, Extremely Randomized Trees, Gradient Boosted Trees, LightGBM, XGBoost | Regression | AutoML's [model recommendation service](https://www.microsoft.com/research/publication/probabilistic-matrix-factorization-for-automated-machine-learning/) dynamically explores hyper-parameter spaces
+Temporal Convolutional Network | Regression | Static list of models followed by random search over network size, dropout ratio, and learning rate.
+
+For a description of the different model types, see the [forecasting models](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section of the methods overview article.
+
+The amount of sweeping that AutoML does depends on the forecasting job configuration. You can specify the stopping criteria as a time limit or a limit on the number of trials or, equivalently, the number of models. Early termination logic can be used in both cases to stop sweeping if the primary metric isn't improving.
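+
+As an illustration, these stopping criteria map to the `set_limits` method in the v2 `azure-ai-ml` SDK; the values below are hypothetical:
+
+```python
+# Assumes `job` is a ForecastingJob created with automl.forecasting(...).
+job.set_limits(
+    timeout_minutes=120,            # total time budget for the sweep
+    trial_timeout_minutes=30,       # time budget per trial (model)
+    max_trials=40,                  # cap on the number of models swept
+    enable_early_termination=True,  # stop if the primary metric stops improving
+)
+```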
+
+## Model selection
+AutoML forecasting model search and selection proceeds in the following three phases:
+
+1. Sweep over time series models and select the best model from _each class_ using [penalized likelihood methods](https://otexts.com/fpp3/arima-estimation.html#information-criteria).
+2. Sweep over regression models and rank them, along with the best time series models from phase 1, according to their primary metric values from validation sets.
+3. Build an ensemble model from the top ranked models, calculate its validation metric, and rank it with the other models.
+
+The model with the top ranked metric value at the end of phase 3 is designated the best model.
+
+> [!IMPORTANT]
+> AutoML's final phase of model selection always calculates metrics on **out-of-sample** data, that is, data that wasn't used to fit the models. This helps protect against overfitting.
+
+AutoML has two validation configurations - cross-validation and explicit validation data. In the cross-validation case, AutoML uses the input configuration to create data splits into training and validation folds. Time order must be preserved in these splits, so AutoML uses so-called **Rolling Origin Cross Validation**, which divides the series into training and validation data using an origin time point. Sliding the origin in time generates the cross-validation folds. Each validation fold contains the next horizon of observations immediately following the position of the origin for the given fold. This strategy preserves the time series data integrity and mitigates the risk of information leakage.
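+
+A schematic of how rolling origin folds can be generated from index positions follows; the fold layout is a simplified illustration, not AutoML's internal implementation:
+
+```python
+def rolling_origin_folds(n_obs, horizon, n_folds, step=1):
+    """Yield (train_indices, validation_indices) pairs.
+
+    The origin slides back `step` periods per fold; each validation
+    set holds the `horizon` observations right after its origin.
+    """
+    for k in range(n_folds):
+        origin = n_obs - horizon - k * step
+        yield list(range(origin)), list(range(origin, origin + horizon))
+
+# Example: 100 observations, horizon 7, 3 folds, origins 7 periods apart.
+for train, val in rolling_origin_folds(100, 7, 3, step=7):
+    print(f"train size={len(train)}, validation={val[0]}..{val[-1]}")
+```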
++
+AutoML follows the usual cross-validation procedure, training a separate model on each fold and averaging validation metrics from all folds.
+
+Cross-validation for forecasting jobs is configured by setting the number of cross-validation folds and, optionally, the number of time periods between two consecutive cross-validation folds. See the [training and validation data](./how-to-auto-train-forecast.md#training-and-validation-data) guide for more information and an example of configuring cross-validation for forecasting.
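+
+In the v2 `azure-ai-ml` SDK, this corresponds to the `n_cross_validations` argument and the `cv_step_size` forecast setting; the paths, column names, and values below are hypothetical:
+
+```python
+from azure.ai.ml import automl, Input
+
+job = automl.forecasting(
+    training_data=Input(type="mltable", path="./training-mltable-folder"),
+    target_column_name="quantity",
+    primary_metric="normalized_root_mean_squared_error",
+    n_cross_validations=5,  # number of rolling origin folds
+)
+job.set_forecast_settings(
+    time_column_name="timestamp",
+    forecast_horizon=7,
+    cv_step_size=7,         # periods between consecutive fold origins
+)
+```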
+
+You can also bring your own validation data. Learn more in the [configure data splits and cross-validation in AutoML](how-to-configure-cross-validation-data-splits.md#provide-validation-data) article.
+
+## Next steps
+* Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
+* Browse [AutoML Forecasting Frequently Asked Questions](./how-to-automl-forecasting-faq.md).
+* Learn about [calendar features for time series forecasting in AutoML](./concept-automl-forecasting-calendar-features.md).
+* Learn about [how AutoML uses machine learning to build forecasting models](./concept-automl-forecasting-methods.md).
machine-learning Concept Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-compute-instance.md
Last updated 10/19/2022
An Azure Machine Learning compute instance is a managed cloud-based workstation for data scientists. Each compute instance has only one owner, although you can share files between multiple compute instances.
-Compute instances make it easy to get started with Azure Machine Learning development as well as provide management and enterprise readiness capabilities for IT administrators.
+Compute instances make it easy to get started with Azure Machine Learning development and provide management and enterprise readiness capabilities for IT administrators.
Use a compute instance as your fully configured and managed development environment in the cloud for machine learning. They can also be used as a compute target for training and inferencing for development and testing purposes.
-For compute instance Jupyter functionality to work, ensure that web socket communication is not disabled. Please ensure your network allows websocket connections to *.instances.azureml.net and *.instances.azureml.ms.
+For compute instance Jupyter functionality to work, ensure that web socket communication isn't disabled. Ensure your network allows websocket connections to *.instances.azureml.net and *.instances.azureml.ms.
> [!IMPORTANT] > Items marked (preview) in this article are currently in public preview.
A compute instance is a fully managed cloud-based workstation optimized for your
|Key benefits|Description| |-|-| |Productivity|You can build and deploy models using integrated notebooks and the following tools in Azure Machine Learning studio:<br/>- Jupyter<br/>- JupyterLab<br/>- VS Code (preview)<br/>Compute instance is fully integrated with Azure Machine Learning workspace and studio. You can share notebooks and data with other data scientists in the workspace.<br/>
-|Managed & secure|Reduce your security footprint and add compliance with enterprise security requirements. Compute instances provide robust management policies and secure networking configurations such as:<br/><br/>- Autoprovisioning from Resource Manager templates or Azure Machine Learning SDK<br/>- [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)<br/>- [Virtual network support](./how-to-secure-training-vnet.md#compute-cluster)<br/> - Azure policy to disable SSH access<br/> - Azure policy to enforce creation in a virtual network <br/> - Auto-shutdown/auto-start based on schedule <br/>- TLS 1.2 enabled |
+|Managed & secure|Reduce your security footprint and add compliance with enterprise security requirements. Compute instances provide robust management policies and secure networking configurations such as:<br/><br/>- Autoprovisioning from Resource Manager templates or Azure Machine Learning SDK<br/>- [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md)<br/>- [Virtual network support](./how-to-secure-training-vnet.md)<br/> - Azure policy to disable SSH access<br/> - Azure policy to enforce creation in a virtual network <br/> - Auto-shutdown/auto-start based on schedule <br/>- TLS 1.2 enabled |
|Preconfigured&nbsp;for&nbsp;ML|Save time on setup tasks with pre-configured and up-to-date ML packages, deep learning frameworks, GPU drivers.| |Fully customizable|Broad support for Azure VM types including GPUs and persisted low-level customization such as installing packages and drivers makes advanced scenarios a breeze. You can also use setup scripts to automate customization |
-* Secure your compute instance with **[No public IP (preview)](./how-to-secure-training-vnet.md)**.
-* The compute instance is also a secure training compute target similar to [compute clusters](how-to-create-attach-compute-cluster.md), but it is single node.
+* Secure your compute instance with **[No public IP](./how-to-secure-training-vnet.md)**.
+* The compute instance is also a secure training compute target similar to [compute clusters](how-to-create-attach-compute-cluster.md), but it's single node.
* You can [create a compute instance](how-to-create-manage-compute-instance.md?tabs=python#create) yourself, or an administrator can **[create a compute instance on your behalf](how-to-create-manage-compute-instance.md?tabs=python#create-on-behalf-of-preview)**. * You can also **[use a setup script (preview)](how-to-customize-compute-instance.md)** for an automated way to customize and configure the compute instance as per your needs. * To save on costs, **[create a schedule](how-to-create-manage-compute-instance.md#schedule-automatic-start-and-stop)** to automatically start and stop the compute instance, or [enable idle shutdown](how-to-create-manage-compute-instance.md#enable-idle-shutdown-preview)
The files in the file share are accessible from all compute instances in the sam
You can also clone the latest Azure Machine Learning samples to your folder under the user files directory in the workspace file share.
-Writing small files can be slower on network drives than writing to the compute instance local disk itself. If you are writing many small files, try using a directory directly on the compute instance, such as a `/tmp` directory. Note these files will not be accessible from other compute instances.
+Writing small files can be slower on network drives than writing to the compute instance local disk itself. If you're writing many small files, try using a directory directly on the compute instance, such as a `/tmp` directory. Note these files won't be accessible from other compute instances.
-Do not store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, do not write very large files of data on the OS disk of the compute instance. OS disk on compute instance has 128 GB capacity. You can also store temporary training data on temporary disk mounted on /mnt. Temporary disk size is configurable based on the VM size chosen and can store larger amounts of data if a higher size VM is chosen. You can also mount [datastores and datasets](v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores). Any software packages you install are saved on the OS disk of compute instance. Please note customer managed key encryption is currently not supported for OS disk. The OS disk for compute instance is encrypted with Microsoft-managed keys.
+Don't store training data on the notebooks file share. You can use the `/tmp` directory on the compute instance for your temporary data. However, don't write large files of data on the OS disk of the compute instance. The OS disk on the compute instance has 128-GB capacity. You can also store temporary training data on the temporary disk mounted on /mnt. The temporary disk size is based on the VM size chosen and can store larger amounts of data if a larger VM size is chosen. You can also mount [datastores and datasets](v1/concept-azure-machine-learning-architecture.md#datasets-and-datastores). Any software packages you install are saved on the OS disk of the compute instance. Customer-managed key encryption isn't currently supported for the OS disk; the OS disk for compute instance is encrypted with Microsoft-managed keys.
## Create
Other ways to create a compute instance:
* With [Azure Machine Learning SDK](how-to-create-manage-compute-instance.md?tabs=python#create) * From the [CLI extension for Azure Machine Learning](how-to-create-manage-compute-instance.md?tabs=azure-cli#create)
-The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with Azure Machine Learning training compute cluster quota. Stopping the compute instance does not release quota to ensure you will be able to restart the compute instance. Please do not stop the compute instance through the OS terminal by doing a sudo shutdown.
+The dedicated cores per region per VM family quota and total regional quota, which applies to compute instance creation, is unified and shared with the Azure Machine Learning training compute cluster quota. Stopping the compute instance doesn't release quota, which ensures you'll be able to restart the compute instance. Don't stop the compute instance through the OS terminal with a sudo shutdown.
-Compute instance comes with P10 OS disk. Temp disk type depends on the VM size chosen. Currently, it is not possible to change the OS disk type.
+Compute instance comes with P10 OS disk. Temp disk type depends on the VM size chosen. Currently, it isn't possible to change the OS disk type.
## Compute target
machine-learning Concept Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-customer-managed-keys.md
Previously updated : 03/17/2022 Last updated : 01/19/2023+ # Customer-managed keys for Azure Machine Learning
machine-learning Concept Deep Learning Vs Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-deep-learning-vs-machine-learning.md
+ Last updated 11/04/2022
machine-learning Concept Designer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-designer.md
+ Last updated 08/03/2022
machine-learning Concept Ml Pipelines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-ml-pipelines.md
+ Last updated 05/10/2022
machine-learning Concept Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow-models.md
description: Learn about how MLflow uses the concept of models instead of artifa
+ Last updated 11/04/2022
machine-learning Concept Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-mlflow.md
description: Learn about how Azure Machine Learning uses MLflow to log metrics a
+ Last updated 08/15/2022
machine-learning Concept Secure Network Traffic Flow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-secure-network-traffic-flow.md
When you create a compute instance or compute cluster, the following resources a
* A Network Security Group with required outbound rules. These rules allow __inbound__ access from the Azure Machine Learning (TCP on port 44224) and Azure Batch service (TCP on ports 29876-29877). > [!IMPORTANT]
- > If you use a firewall to block internet access into the VNet, you must configure the firewall to allow this traffic. For example, with Azure Firewall you can create user-defined routes. For more information, see [How to use Azure Machine Learning with a firewall](how-to-access-azureml-behind-firewall.md#inbound-configuration).
+ > If you use a firewall to block internet access into the VNet, you must configure the firewall to allow this traffic. For example, with Azure Firewall you can create user-defined routes. For more information, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
* A load balancer with a public IP.
Also allow __outbound__ access to the following service tags. For each tag, repl
Data access from your compute instance or cluster goes through the private endpoint of the Storage Account for your VNet.
-If you use Visual Studio Code on a compute instance, you must allow other outbound traffic. For more information, see [How to use Azure Machine Learning with a firewall](how-to-access-azureml-behind-firewall.md).
+If you use Visual Studio Code on a compute instance, you must allow other outbound traffic. For more information, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
:::image type="content" source="./media/concept-secure-network-traffic-flow/compute-instance-and-cluster.png" alt-text="Diagram of traffic flow when using compute instance or cluster":::
machine-learning How To Access Azureml Behind Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-azureml-behind-firewall.md
Previously updated : 09/06/2022 Last updated : 01/10/2023 ms.devlang: azurecli # Configure inbound and outbound network traffic
-In this article, learn about the network communication requirements when securing Azure Machine Learning workspace in a virtual network (VNet). Including how to configure Azure Firewall to control access to your Azure Machine Learning workspace and the public internet. To learn more about securing Azure Machine Learning, see [Enterprise security for Azure Machine Learning](concept-enterprise-security.md).
+Azure Machine Learning requires access to servers and services on the public internet. When implementing network isolation, you need to understand what access is required and how to enable it.
> [!NOTE] > The information in this article applies to Azure Machine Learning workspace configured with a private endpoint.
-> [!TIP]
-> This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
->
-> * [Virtual network overview](how-to-network-security-overview.md)
-> * [Secure the workspace resources](how-to-secure-workspace-vnet.md)
-> * [Secure the training environment](how-to-secure-training-vnet.md)
-> * [Secure the inference environment](how-to-secure-inferencing-vnet.md)
-> * [Enable studio functionality](how-to-enable-studio-virtual-network.md)
-> * [Use custom DNS](how-to-custom-dns.md)
+## Common terms and information
+
+The following terms and information are used throughout this article:
+
+* __Azure service tags__: A service tag is an easy way to specify the IP ranges used by an Azure service. For example, the `AzureMachineLearning` tag represents the IP addresses used by the Azure Machine Learning service.
+
+ > [!IMPORTANT]
+ > Azure service tags are only supported by some Azure services. If you are using a non-Azure solution such as a 3rd party firewall, download a list of [Azure IP Ranges and Service Tags](https://www.microsoft.com/download/details.aspx?id=56519). Extract the file and search for the service tag within the file. The IP addresses may change periodically.
+
+* __Region__: Some service tags allow you to specify an Azure region. This limits access to the service IP addresses in a specific region, usually the one that your service is in. In this article, when you see `<region>`, substitute your Azure region instead. For example, `BatchNodeManagement.<region>` would be `BatchNodeManagement.westus` if your Azure Machine Learning workspace is in the US West region.
+
+* __Azure Batch__: Azure Machine Learning compute clusters and compute instances rely on a back-end Azure Batch instance. This back-end service is hosted in a Microsoft subscription.
-## Well-known ports
+* __Ports__: The following ports are used in this article. If a port range isn't listed in this table, it's specific to the service and may not have any published information on what it's used for:
-The following are well-known ports used by services listed in this article. If a port range is used in this article and isn't listed in this section, it's specific to the service and may not have published information on what it's used for:
+ | Port | Description |
+ | -- | -- |
+ | 80 | Unsecured web traffic (HTTP) |
+ | 443 | Secured web traffic (HTTPS) |
+ | 445 | SMB traffic used to access file shares in Azure File storage |
+ | 8787 | Used when connecting to RStudio on a compute instance |
+ | 18881 | Used to connect to the language server to enable IntelliSense for notebooks on a compute instance. |
+* __Protocol__: Unless noted otherwise, all network traffic mentioned in this article uses __TCP__.
-| Port | Description |
-| -- | -- |
-| 80 | Unsecured web traffic (HTTP) |
-| 443 | Secured web traffic (HTTPS) |
-| 445 | SMB traffic used to access file shares in Azure File storage |
-| 8787 | Used when connecting to RStudio or Posit Workbench (formerly RStudio Workbench) on a compute instance |
-| 18881 | Used to connect to the language server to enable IntelliSense for notebooks on a compute instance. |
+## Basic configuration
-## Required public internet access
+This configuration makes the following assumptions:
+* You're using docker images from a container registry that you provide, and won't be using images provided by Microsoft.
+* You're using a private Python package repository, and won't be accessing public package repositories such as `pypi.org`, `*.anaconda.com`, or `*.anaconda.org`.
+* The private endpoints can communicate directly with each other within the VNet. For example, all services have a private endpoint in the same VNet:
+ * Azure Machine Learning workspace
+ * Azure Storage Account (blob, file, table, queue)
-## Azure Firewall
+__Inbound traffic__
+
+| Source | Source<br>ports | Destination | Destination<br>ports | Purpose |
+| -- |:--:| -- |:--:| -- |
+| `AzureLoadBalancer` | Any | `VirtualNetwork` | 44224 | Inbound to compute instance/cluster. __Only needed if the instance/cluster is configured to use a public IP address__. |
+
+> [!TIP]
+> A network security group (NSG) is created by default for this traffic. For more information, see [Default security rules](/azure/virtual-network/network-security-groups-overview#inbound).
+
+__Outbound traffic__
+
+| Service tag(s) | Ports | Purpose |
+| -- |:--:| -- |
+| `AzureActiveDirectory` | 80, 443 | Authentication using Azure AD. |
+| `AzureMachineLearning` | 443, 8787, 18881<br>UDP: 5831 | Using Azure Machine Learning services. |
+| `BatchNodeManagement.<region>` | 443 | Communication with Azure Batch. |
+| `AzureResourceManager` | 443 | Creation of Azure resources with Azure Machine Learning. |
+| `Storage.<region>` | 443 | Access data stored in the Azure Storage Account for compute cluster and compute instance. This outbound can be used to exfiltrate data. For more information, see [Data exfiltration protection](how-to-prevent-data-loss-exfiltration.md). |
+| `AzureFrontDoor.FrontEnd`<br>* Not needed in Azure China. | 443 | Global entry point for [Azure Machine Learning studio](https://ml.azure.com). Store images and environments for AutoML. |
+| `MicrosoftContainerRegistry.<region>` | 443 | Access docker images provided by Microsoft. |
+| `AzureFrontDoor.FirstParty` | 443 | Access docker images provided by Microsoft. |
+| `AzureMonitor` | 443 | Used to log monitoring and metrics to Azure Monitor. |
> [!IMPORTANT]
-> Azure Firewall provides security _for Azure Virtual Network resources_. Some Azure Services, such as Azure Storage Accounts, have their own firewall settings that _apply to the public endpoint for that specific service instance_. The information in this document is specific to Azure Firewall.
->
-> For information on service instance firewall settings, see [Use studio in a virtual network](how-to-enable-studio-virtual-network.md#firewall-settings).
+ > If a compute instance or compute cluster is configured for no public IP, it can't access the public internet by default. However, it still needs to communicate with the resources listed above. To enable outbound communication, you have two possible options:
+ >
+ > * __User-defined route and firewall__: Create a user-defined route in the subnet that contains the compute. The __Next hop__ for the route should reference the private IP address of the firewall, with an address prefix of 0.0.0.0/0.
+ > * __Azure Virtual Network NAT with a public IP__: For more information on using Virtual Network NAT, see the [Virtual Network NAT](/azure/virtual-network/nat-gateway/nat-overview) documentation.
+
+### Recommended configuration for training and deploying models
+
+__Outbound traffic__
+
+| Service tag(s) | Ports | Purpose |
+| -- |:--:| -- |
+| `MicrosoftContainerRegistry.<region>` and `AzureFrontDoor.FirstParty` | 443 | Allows use of Docker images that Microsoft provides for training and inference. Also sets up the Azure Machine Learning router for Azure Kubernetes Service. |
+
+__To allow installation of Python packages for training and deployment__, allow __outbound__ traffic to the following host names:
-* For __inbound__ traffic to Azure Machine Learning compute cluster and compute instance, use [user-defined routes (UDRs)](../virtual-network/virtual-networks-udr-overview.md) to skip the firewall.
+> [!NOTE]
+> This is not a complete list of the hosts required for all Python resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
+
+| __Host name__ | __Purpose__ |
+| - | - |
+| `anaconda.com`<br>`*.anaconda.com` | Used to install default packages. |
+| `*.anaconda.org` | Used to get repo data. |
+| `pypi.org` | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow `*.pythonhosted.org`. |
+| `*pytorch.org` | Used by some examples based on PyTorch. |
+| `*.tensorflow.org` | Used by some examples based on Tensorflow. |
-* For __outbound__ traffic, create __network__ and __application__ rules.
+## Scenario: Install RStudio on compute instance
-These rule collections are described in more detail in [What are some Azure Firewall concepts](../firewall/firewall-faq.yml#what-are-some-azure-firewall-concepts).
+To allow installation of RStudio on a compute instance, the firewall needs to allow outbound access to the sites that host the Docker image. Add the following Application rule to your Azure Firewall policy:
+
+* __Name__: AllowRStudioInstall
+* __Source Type__: IP Address
+* __Source IP Addresses__: The IP address range of the subnet where you will create the compute instance. For example, `172.16.0.0/24`.
+* __Destination Type__: FQDN
+* __Target FQDN__: `ghcr.io`, `pkg-containers.githubusercontent.com`
+* __Protocol__: `Https:443`
+
+To allow the installation of R packages, allow __outbound__ traffic to `cloud.r-project.org`. This host is used for installing CRAN packages.
+
+> [!NOTE]
+> If you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
-### Inbound configuration
+## Scenario: Using compute cluster or compute instance with a public IP
[!INCLUDE [udr info for computes](../../includes/machine-learning-compute-user-defined-routes.md)]
-### Outbound configuration
-
-1. Add __Network rules__, allowing traffic __to__ and __from__ the following service tags:
-
- | Service tag | Protocol | Port |
- | -- |:--:|:--:|
- | AzureActiveDirectory | TCP | 80, 443 |
- | AzureMachineLearning | TCP | 443, 8787, 18881 |
- | AzureResourceManager | TCP | 443 |
- | Storage.region | TCP | 443 |
- | AzureFrontDoor.FrontEnd</br>* Not needed in Azure China. | TCP | 443 |
- | AzureContainerRegistry.region | TCP | 443 |
- | MicrosoftContainerRegistry.region</br>**Note** that this tag has a dependency on the **AzureFrontDoor.FirstParty** tag | TCP | 443 |
- | AzureKeyVault.region | TCP | 443 |
-
- > [!TIP]
- > * AzureContainerRegistry.region is only needed for custom Docker images. Including small modifications (such as additional packages) to base images provided by Microsoft.
- > * MicrosoftContainerRegistry.region is only needed if you plan on using the _default Docker images provided by Microsoft_, and _enabling user-managed dependencies_.
- > * AzureKeyVault.region is only needed if your workspace was created with the [hbi_workspace](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace) flag enabled.
- > * For entries that contain `region`, replace with the Azure region that you're using. For example, `AzureContainerRegistry.westus`.
-
-1. Add __Application rules__ for the following hosts:
-
- > [!NOTE]
- > This is not a complete list of the hosts required for all hosts you may need to communicate with, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
-
- | **Host name** | **Purpose** |
- | - | - |
- | **anaconda.com**</br>**\*.anaconda.com** | Used to install default packages. |
- | **\*.anaconda.org** | Used to get repo data. |
- | **pypi.org** | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. |
- | **cloud.r-project.org** | Used when installing CRAN packages for R development. |
- | **ghcr.io**</br>**pkg-containers.githubusercontent.com** | Used by the Custom Applications feature on a compute instance to pull images from GitHub Container Repository (ghcr.io). For example, the RStudio or Posit Workbench image is hosted here. |
- | **\*pytorch.org** | Used by some examples based on PyTorch. |
- | **\*.tensorflow.org** | Used by some examples based on Tensorflow. |
- | **\*vscode.dev**</br>**\*vscode-unpkg.net**</br>**\*vscode-cdn.net**</br>**\*vscodeexperiments.azureedge.net**</br>**default.exp-tas.com** | Required to access vscode.dev (Visual Studio Code for the Web) |
- | **code.visualstudio.com** | Required to download and install VS Code desktop. This is not required for VS Code Web. |
- | **update.code.visualstudio.com**</br>**\*.vo.msecnd.net** | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
- | **marketplace.visualstudio.com**</br>**vscode.blob.core.windows.net**</br>**\*.gallerycdn.vsassets.io** | Required to download and install VS Code extensions. These enable the remote connection to Compute Instances provided by the Azure ML extension for VS Code, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) for more information. |
- | **raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/\*** | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance.|
- | **dc.applicationinsights.azure.com** | Used to collect metrics and diagnostics information when working with Microsoft support. |
- | **dc.applicationinsights.microsoft.com** | Used to collect metrics and diagnostics information when working with Microsoft support. |
- | **dc.services.visualstudio.com** | Used to collect metrics and diagnostics information when working with Microsoft support. |
-
-
- For __Protocol:Port__, select use __http, https__.
-
- For more information on configuring application rules, see [Deploy and configure Azure Firewall](../firewall/tutorial-firewall-deploy-portal.md#configure-an-application-rule).
-
-1. To restrict outbound traffic for models deployed to Azure Kubernetes Service (AKS), see the [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md) and [Secure AKS inference environment](how-to-secure-kubernetes-inferencing-environment.md) articles.
-
-## Kubernetes Compute
+## Scenario: Firewall between Azure Machine Learning and Azure Storage endpoints
+
+You must also allow __outbound__ access to `Storage.<region>` on __port 445__.
+
+## Scenario: Workspace created with the `hbi_workspace` flag enabled
+
+You must also allow __outbound__ access to `AzureKeyVault.<region>`. This outbound traffic is used to access the key vault instance for the back-end Azure Batch service.
+
+For more information on the `hbi_workspace` flag, see the [data encryption](concept-data-encryption.md) article.
+
+## Scenario: Use Kubernetes compute
[Kubernetes Cluster](./how-to-attach-kubernetes-anywhere.md) running behind an outbound proxy server or firewall needs extra egress network configuration.
These rule collections are described in more detail in [What are some Azure Fire
Besides the above requirements, the following outbound URLs are also required for Azure Machine Learning:

| Outbound Endpoint| Port | Description|Training |Inference |
-|--|--|--|--|--|
-| __\*.kusto.windows.net__<br>__\*.table.core.windows.net__<br>__\*.queue.core.windows.net__ | https:443 | Required to upload system logs to Kusto. |**&check;**|**&check;**|
-| __\<your ACR name\>.azurecr.io__<br>__\<your ACR name>\.\<region name>\.data.azurecr.io__ | https:443 | Azure container registry, required to pull docker images used for machine learning workloads.|**&check;**|**&check;**|
-| __\<your storage account name\>.blob.core.windows.net__ | https:443 | Azure blob storage, required to fetch machine learning project scripts,data or models, and upload job logs/outputs.|**&check;**|**&check;**|
-| __\<your AzureML workspace ID>.workspace.\<region\>.api.azureml.ms__<br>__\<region\>.experiments.azureml.net__<br>__\<region\>.api.azureml.ms__ | https:443 | Azure Machine Learning service API.|**&check;**|**&check;**|
-| __pypi.org__ | https:443 | Python package index, to install pip packages used for training job environment initialization.|**&check;**|N/A|
-| __archive.ubuntu.com__<br>__security.ubuntu.com__<br>__ppa.launchpad.net__ | http:80 | Required to download the necessary security patches. |**&check;**|N/A|
+|--|--|--|:--:|:--:|
+| `*.kusto.windows.net`<br>`*.table.core.windows.net`<br>`*.queue.core.windows.net` | 443 | Required to upload system logs to Kusto. |__&check;__|__&check;__|
+| `<your ACR name>.azurecr.io`<br>`<your ACR name>.<region>.data.azurecr.io` | 443 | Azure container registry, required to pull docker images used for machine learning workloads.|__&check;__|__&check;__|
+| `<your storage account name>.blob.core.windows.net` | 443 | Azure blob storage, required to fetch machine learning project scripts, data or models, and upload job logs/outputs.|__&check;__|__&check;__|
+| `<your workspace ID>.workspace.<region>.api.azureml.ms`<br>`<region>.experiments.azureml.net`<br>`<region>.api.azureml.ms` | 443 | Azure Machine Learning service API.|__&check;__|__&check;__|
+| `pypi.org` | 443 | Python package index, to install pip packages used for training job environment initialization.|__&check;__|N/A|
+| `archive.ubuntu.com`<br>`security.ubuntu.com`<br>`ppa.launchpad.net` | 80 | Required to download the necessary security patches. |__&check;__|N/A|
> [!NOTE]
-> `<region>` is the lowcase full spelling of Azure Region, for example, eastus, southeastasia.
->
-> `<your AML workspace ID>` can be found in Azure portal - your Machine Learning resource page - Properties - Workspace ID.
+> * Replace `<your workspace ID>` with your workspace ID. The ID can be found in Azure portal - your Machine Learning resource page - Properties - Workspace ID.
+> * Replace `<your storage account name>` with the storage account name.
+> * Replace `<your ACR name>` with the name of the Azure Container Registry for your workspace.
+> * Replace `<region>` with the region of your workspace.
### In-cluster communication requirements
-To install AzureMl extension at Kubernetes compute, all AzureML related components are deployed in `azureml` namespace. Following in-cluster communication are needed to ensure the ML workloads work well in cluster.
+To install the Azure Machine Learning extension on Kubernetes compute, all Azure Machine Learning related components are deployed in the `azureml` namespace. The following in-cluster communication is needed to ensure the ML workloads work well in the AKS cluster.
- The components in `azureml` namespace should be able to communicate with Kubernetes API server. - The components in `azureml` namespace should be able to communicate with each other. - The components in `azureml` namespace should be able to communicate with `kube-dns` and `konnectivity-agent` in `kube-system` namespace.
To install AzureMl extension at Kubernetes compute, all AzureML related componen
- If the cluster is used for real-time inferencing, the deployed model pods should be able to communicate with `amlarc-identity-proxy-xxx` pods on port 9999.
+## Scenario: Visual Studio Code
-## Other firewalls
+The hosts in this section are used to install Visual Studio Code packages to establish a remote connection between Visual Studio Code and compute instances in your Azure Machine Learning workspace.
+
+> [!NOTE]
+> This is not a complete list of the hosts required for all Visual Studio Code resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
+
+| __Host name__ | __Purpose__ |
+| - | - |
+| `*.vscode.dev`<br>`*.vscode-unpkg.net`<br>`*.vscode-cdn.net`<br>`*.vscodeexperiments.azureedge.net`<br>`default.exp-tas.com` | Required to access vscode.dev (Visual Studio Code for the Web) |
+| `code.visualstudio.com` | Required to download and install VS Code desktop. This host isn't required for VS Code Web. |
+| `update.code.visualstudio.com`<br>`*.vo.msecnd.net` | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
+| `marketplace.visualstudio.com`<br>`vscode.blob.core.windows.net`<br>`*.gallerycdn.vsassets.io` | Required to download and install VS Code extensions. These hosts enable the remote connection to compute instances using the Azure ML extension for VS Code. For more information, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) |
+| `raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/*` | Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |
+
+## Scenario: Third party firewall
The guidance in this section is generic, as each firewall has its own terminology and specific configurations. If you have questions, check the documentation for the firewall you're using.
The hosts in the following tables are owned by Microsoft, and provide services r
> * __Your storage__: The Azure Storage Account(s) in your subscription, which is used to store your data and artifacts such as models, training data, training logs, and Python scripts.
> * __Microsoft storage__: The Azure Machine Learning compute instance and compute clusters rely on Azure Batch, and must access storage located in a Microsoft subscription. This storage is used only for the management of the compute instances. None of your data is stored here.
-**General Azure hosts**
+__General Azure hosts__
# [Azure public](#tab/public)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | - |
-| Azure Active Directory | login.microsoftonline.com | TCP | 80, 443 |
-| Azure portal | management.azure.com | TCP | 443 |
-| Azure Resource Manager | management.azure.com | TCP | 443 |
+| Azure Active Directory | `login.microsoftonline.com` | TCP | 80, 443 |
+| Azure portal | `management.azure.com` | TCP | 443 |
+| Azure Resource Manager | `management.azure.com` | TCP | 443 |
# [Azure Government](#tab/gov)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | - |
-| Azure Active Directory | login.microsoftonline.us | TCP | 80, 443 |
-| Azure portal | management.azure.us | TCP | 443 |
-| Azure Resource Manager | management.usgovcloudapi.net | TCP | 443 |
+| Azure Active Directory | `login.microsoftonline.us` | TCP | 80, 443 |
+| Azure portal | `management.azure.us` | TCP | 443 |
+| Azure Resource Manager | `management.usgovcloudapi.net` | TCP | 443 |
# [Azure China 21Vianet](#tab/china)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
-| Azure Active Directory | login.chinacloudapi.cn | TCP | 80, 443 |
-| Azure portal | management.azure.cn | TCP | 443 |
-| Azure Resource Manager | management.chinacloudapi.cn | TCP | 443 |
+| Azure Active Directory | `login.chinacloudapi.cn` | TCP | 80, 443 |
+| Azure portal | `management.azure.cn` | TCP | 443 |
+| Azure Resource Manager | `management.chinacloudapi.cn` | TCP | 443 |
-**Azure Machine Learning hosts**
+__Azure Machine Learning hosts__
> [!IMPORTANT] > In the following table, replace `<storage>` with the name of the default storage account for your Azure Machine Learning workspace. Replace `<region>` with the region of your workspace. # [Azure public](#tab/public)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
-| Azure Machine Learning studio | ml.azure.com | TCP | 443 |
-| API |\*.azureml.ms | TCP | 443 |
-| API | \*.azureml.net | TCP | 443 |
-| Model management | \*.modelmanagement.azureml.net | TCP | 443 |
-| Integrated notebook | \*.\<region\>.notebooks.azure.net | TCP | 443 |
-| Integrated notebook | \<storage\>.file.core.windows.net | TCP | 443, 445 |
-| Integrated notebook | \<storage\>.dfs.core.windows.net | TCP | 443 |
-| Integrated notebook | \<storage\>.blob.core.windows.net | TCP | 443 |
-| Integrated notebook | graph.microsoft.com | TCP | 443 |
-| Integrated notebook | \*.aznbcontent.net | TCP | 443 |
-| AutoML NLP, Vision | automlresources-prod.azureedge.net | TCP | 443 |
-| AutoML NLP, Vision | aka.ms | TCP | 443 |
+| Azure Machine Learning studio | `ml.azure.com` | TCP | 443 |
+| API | `*.azureml.ms` | TCP | 443 |
+| API | `*.azureml.net` | TCP | 443 |
+| Model management | `*.modelmanagement.azureml.net` | TCP | 443 |
+| Integrated notebook | `*.notebooks.azure.net` | TCP | 443 |
+| Integrated notebook | `<storage>.file.core.windows.net` | TCP | 443, 445 |
+| Integrated notebook | `<storage>.dfs.core.windows.net` | TCP | 443 |
+| Integrated notebook | `<storage>.blob.core.windows.net` | TCP | 443 |
+| Integrated notebook | `graph.microsoft.com` | TCP | 443 |
+| Integrated notebook | `*.aznbcontent.net` | TCP | 443 |
+| AutoML NLP, Vision | `automlresources-prod.azureedge.net` | TCP | 443 |
+| AutoML NLP, Vision | `aka.ms` | TCP | 443 |
> [!NOTE]
> AutoML NLP, Vision are currently only supported in Azure public regions.

# [Azure Government](#tab/gov)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
-| Azure Machine Learning studio | ml.azure.us | TCP | 443 |
-| API | \*.ml.azure.us | TCP | 443 |
-| Model management | \*.modelmanagement.azureml.us | TCP | 443 |
-| Integrated notebook | \*.notebooks.usgovcloudapi.net | TCP | 443 |
-| Integrated notebook | \<storage\>.file.core.usgovcloudapi.net | TCP | 443, 445 |
-| Integrated notebook | \<storage\>.dfs.core.usgovcloudapi.net | TCP | 443 |
-| Integrated notebook | \<storage\>.blob.core.usgovcloudapi.net | TCP | 443 |
-| Integrated notebook | graph.microsoft.us | TCP | 443 |
-| Integrated notebook | \*.aznbcontent.net | TCP | 443 |
+| Azure Machine Learning studio | `ml.azure.us` | TCP | 443 |
+| API | `*.ml.azure.us` | TCP | 443 |
+| Model management | `*.modelmanagement.azureml.us` | TCP | 443 |
+| Integrated notebook | `*.notebooks.usgovcloudapi.net` | TCP | 443 |
+| Integrated notebook | `<storage>.file.core.usgovcloudapi.net` | TCP | 443, 445 |
+| Integrated notebook | `<storage>.dfs.core.usgovcloudapi.net` | TCP | 443 |
+| Integrated notebook | `<storage>.blob.core.usgovcloudapi.net` | TCP | 443 |
+| Integrated notebook | `graph.microsoft.us` | TCP | 443 |
+| Integrated notebook | `*.aznbcontent.net` | TCP | 443 |
# [Azure China 21Vianet](#tab/china)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
-| Azure Machine Learning studio | studio.ml.azure.cn | TCP | 443 |
-| API | \*.ml.azure.cn | TCP | 443 |
-| API | \*.azureml.cn | TCP | 443 |
-| Model management | \*.modelmanagement.ml.azure.cn | TCP | 443 |
-| Integrated notebook | \*.notebooks.chinacloudapi.cn | TCP | 443 |
-| Integrated notebook | \<storage\>.file.core.chinacloudapi.cn | TCP | 443, 445 |
-| Integrated notebook | \<storage\>.dfs.core.chinacloudapi.cn | TCP | 443 |
-| Integrated notebook | \<storage\>.blob.core.chinacloudapi.cn | TCP | 443 |
-| Integrated notebook | graph.chinacloudapi.cn | TCP | 443 |
-| Integrated notebook | \*.aznbcontent.net | TCP | 443 |
+| Azure Machine Learning studio | `studio.ml.azure.cn` | TCP | 443 |
+| API | `*.ml.azure.cn` | TCP | 443 |
+| API | `*.azureml.cn` | TCP | 443 |
+| Model management | `*.modelmanagement.ml.azure.cn` | TCP | 443 |
+| Integrated notebook | `*.notebooks.chinacloudapi.cn` | TCP | 443 |
+| Integrated notebook | `<storage>.file.core.chinacloudapi.cn` | TCP | 443, 445 |
+| Integrated notebook | `<storage>.dfs.core.chinacloudapi.cn` | TCP | 443 |
+| Integrated notebook | `<storage>.blob.core.chinacloudapi.cn` | TCP | 443 |
+| Integrated notebook | `graph.chinacloudapi.cn` | TCP | 443 |
+| Integrated notebook | `*.aznbcontent.net` | TCP | 443 |
-**Azure Machine Learning compute instance and compute cluster hosts**
+__Azure Machine Learning compute instance and compute cluster hosts__
> [!TIP]
> * The host for __Azure Key Vault__ is only needed if your workspace was created with the [hbi_workspace](/python/api/azure-ai-ml/azure.ai.ml.entities.workspace) flag enabled.
The hosts in the following tables are owned by Microsoft, and provide services r
# [Azure public](#tab/public)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
-| Compute cluster/instance | graph.windows.net | TCP | 443 |
-| Compute instance | \*.instances.azureml.net | TCP | 443 |
-| Compute instance | \*.instances.azureml.ms | TCP | 443, 8787, 18881 |
-| Microsoft storage access | \*.blob.core.windows.net | TCP | 443 |
-| Microsoft storage access | \*.table.core.windows.net | TCP | 443 |
-| Microsoft storage access | \*.queue.core.windows.net | TCP | 443 |
-| Your storage account | \<storage\>.file.core.windows.net | TCP | 443, 445 |
-| Your storage account | \<storage\>.blob.core.windows.net | TCP | 443 |
+| Compute cluster/instance | `graph.windows.net` | TCP | 443 |
+| Compute instance | `*.instances.azureml.net` | TCP | 443 |
+| Compute instance | `*.instances.azureml.ms` | TCP | 443, 8787, 18881 |
+| Compute instance | `*.tundra.azureml.ms` | UDP | 5831 |
+| Compute instance | `*.batch.azure.com` | ANY | 443 |
+| Compute instance | `*.service.batch.com` | ANY | 443 |
+| Microsoft storage access | `*.blob.core.windows.net` | TCP | 443 |
+| Microsoft storage access | `*.table.core.windows.net` | TCP | 443 |
+| Microsoft storage access | `*.queue.core.windows.net` | TCP | 443 |
+| Your storage account | `<storage>.file.core.windows.net` | TCP | 443, 445 |
+| Your storage account | `<storage>.blob.core.windows.net` | TCP | 443 |
| Azure Key Vault | `*.vault.azure.net` | TCP | 443 |

# [Azure Government](#tab/gov)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
-| Compute cluster/instance | graph.windows.net | TCP | 443 |
-| Compute instance | \*.instances.azureml.us | TCP | 443 |
-| Compute instance | \*.instances.azureml.ms | TCP | 443, 8787, 18881 |
-| Microsoft storage access | \*.blob.core.usgovcloudapi.net | TCP | 443 |
-| Microsoft storage access | \*.table.core.usgovcloudapi.net | TCP | 443 |
-| Microsoft storage access | \*.queue.core.usgovcloudapi.net | TCP | 443 |
-| Your storage account | \<storage\>.file.core.usgovcloudapi.net | TCP | 443, 445 |
-| Your storage account | \<storage\>.blob.core.usgovcloudapi.net | TCP | 443 |
-| Azure Key Vault | \*.vault.usgovcloudapi.net | TCP | 443 |
+| Compute cluster/instance | `graph.windows.net` | TCP | 443 |
+| Compute instance | `*.instances.azureml.us` | TCP | 443 |
+| Compute instance | `*.instances.azureml.ms` | TCP | 443, 8787, 18881 |
+| Microsoft storage access | `*.blob.core.usgovcloudapi.net` | TCP | 443 |
+| Microsoft storage access | `*.table.core.usgovcloudapi.net` | TCP | 443 |
+| Microsoft storage access | `*.queue.core.usgovcloudapi.net` | TCP | 443 |
+| Your storage account | `<storage>.file.core.usgovcloudapi.net` | TCP | 443, 445 |
+| Your storage account | `<storage>.blob.core.usgovcloudapi.net` | TCP | 443 |
+| Azure Key Vault | `*.vault.usgovcloudapi.net` | TCP | 443 |
# [Azure China 21Vianet](#tab/china)
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
-| Compute cluster/instance | graph.chinacloudapi.cn | TCP | 443 |
-| Compute instance | \*.instances.azureml.cn | TCP | 443 |
-| Compute instance | \*.instances.azureml.ms | TCP | 443, 8787, 18881 |
-| Microsoft storage access | \*.blob.core.chinacloudapi.cn | TCP | 443 |
-| Microsoft storage access | \*.table.core.chinacloudapi.cn | TCP | 443 |
-| Microsoft storage access | \*.queue.core.chinacloudapi.cn | TCP | 443 |
-| Your storage account | \<storage\>.file.core.chinacloudapi.cn | TCP | 443, 445 |
-| Your storage account | \<storage\>.blob.core.chinacloudapi.cn | TCP | 443 |
-| Azure Key Vault | \*.vault.azure.cn | TCP | 443 |
+| Compute cluster/instance | `graph.chinacloudapi.cn` | TCP | 443 |
+| Compute instance | `*.instances.azureml.cn` | TCP | 443 |
+| Compute instance | `*.instances.azureml.ms` | TCP | 443, 8787, 18881 |
+| Microsoft storage access | `*.blob.core.chinacloudapi.cn` | TCP | 443 |
+| Microsoft storage access | `*.table.core.chinacloudapi.cn` | TCP | 443 |
+| Microsoft storage access | `*.queue.core.chinacloudapi.cn` | TCP | 443 |
+| Your storage account | `<storage>.file.core.chinacloudapi.cn` | TCP | 443, 445 |
+| Your storage account | `<storage>.blob.core.chinacloudapi.cn` | TCP | 443 |
+| Azure Key Vault | `*.vault.azure.cn` | TCP | 443 |
-**Docker images maintained by by Azure Machine Learning**
+__Docker images maintained by Azure Machine Learning__
-| **Required for** | **Hosts** | **Protocol** | **Ports** |
+| __Required for__ | __Hosts__ | __Protocol__ | __Ports__ |
| -- | -- | -- | -- |
| Microsoft Container Registry | `mcr.microsoft.com`</br>`*.data.mcr.microsoft.com` | TCP | 443 |
-| Azure Machine Learning pre-built images | viennaglobal.azurecr.io | TCP | 443 |
> [!TIP]
-> * __Azure Container Registry__ is required for any custom Docker image. This includes small modifications (such as additional packages) to base images provided by Microsoft.
+> * __Azure Container Registry__ is required for any custom Docker image. This includes small modifications (such as additional packages) to base images provided by Microsoft. It is also required by the internal training job submission process of Azure Machine Learning.
> * __Microsoft Container Registry__ is only needed if you plan on using the _default Docker images provided by Microsoft_, and _enabling user-managed dependencies_.
> * If you plan on using federated identity, follow the [Best practices for securing Active Directory Federation Services](/windows-server/identity/ad-fs/deployment/best-practices-securing-ad-fs) article.
-Also, use the information in the [inbound configuration](#inbound-configuration) section to add IP addresses for `BatchNodeManagement` and `AzureMachineLearning`.
+Also, use the information in the [compute with public IP](#scenario-using-compute-cluster-or-compute-instance-with-a-public-ip) section to add IP addresses for `BatchNodeManagement` and `AzureMachineLearning`.
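If you manage the firewall configuration with scripts, you can look up the current IP ranges behind these service tags programmatically. The following is a minimal sketch using the `azure-mgmt-network` Python package; the subscription ID and region are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.network import NetworkManagementClient

# List the address prefixes behind the AzureMachineLearning and
# BatchNodeManagement service tags for one region.
client = NetworkManagementClient(DefaultAzureCredential(), "<subscription-id>")
result = client.service_tags.list("<region>")
for tag in result.values:
    if tag.name.startswith(("AzureMachineLearning", "BatchNodeManagement")):
        print(tag.name, tag.properties.address_prefixes)
```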
For information on restricting access to models deployed to AKS, see [Restrict egress traffic in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
-**Monitoring, metrics, and diagnostics**
+__Monitoring, metrics, and diagnostics__
To support logging of metrics and other monitoring information to Azure Monitor and Application Insights, allow outbound traffic to the following hosts:

> [!NOTE]
> The information logged to these hosts is also used by Microsoft Support to be able to diagnose any problems you run into with your workspace.
-* **dc.applicationinsights.azure.com**
-* **dc.applicationinsights.microsoft.com**
-* **dc.services.visualstudio.com**
-* ***.in.applicationinsights.azure.com**
+* `dc.applicationinsights.azure.com`
+* `dc.applicationinsights.microsoft.com`
+* `dc.services.visualstudio.com`
+* `*.in.applicationinsights.azure.com`
For a list of IP addresses for these hosts, see [IP addresses used by Azure Monitor](../azure-monitor/app/ip-addresses.md).
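To spot-check that a machine behind the firewall can reach the hosts you've allowed, a simple TCP probe can help. The following Python sketch tests a small, illustrative subset of the Azure public cloud hosts from the tables above; adapt the list to your cloud and region:

```python
import socket

# Illustrative subset of required hosts; all use TCP port 443.
hosts = [
    "login.microsoftonline.com",
    "management.azure.com",
    "ml.azure.com",
    "dc.applicationinsights.azure.com",
]

for host in hosts:
    try:
        with socket.create_connection((host, 443), timeout=5):
            print(f"OK    {host}:443")
    except OSError as err:
        print(f"FAIL  {host}:443 ({err})")
```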
-### Python hosts
-
-The hosts in this section are used to install Python packages, and are required during development, training, and deployment.
-
-> [!NOTE]
-> This is not a complete list of the hosts required for all Python resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
-
-| **Host name** | **Purpose** |
-| - | - |
-| **anaconda.com**</br>**\*.anaconda.com** | Used to install default packages. |
-| **\*.anaconda.org** | Used to get repo data. |
-| **pypi.org** | Used to list dependencies from the default index, if any, and the index isn't overwritten by user settings. If the index is overwritten, you must also allow **\*.pythonhosted.org**. |
-| **\*pytorch.org** | Used by some examples based on PyTorch. |
-| **\*.tensorflow.org** | Used by some examples based on Tensorflow. |
-
-### R hosts
-
-The hosts in this section are used to install R packages, and are required during development, training, and deployment.
-
-> [!NOTE]
-> This is not a complete list of the hosts required for all R resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
-
-| **Host name** | **Purpose** |
-| - | - |
-| **cloud.r-project.org** | Used when installing CRAN packages. |
-
-### Visual Studio Code hosts
-
-The hosts in this section are used to install Visual Studio Code packages to establish a remote connection between Visual Studio Code and compute instances in your Azure Machine Learning workspace.
-
-> [!NOTE]
-> This is not a complete list of the hosts required for all Visual Studio Code resources on the internet, only the most commonly used. For example, if you need access to a GitHub repository or other host, you must identify and add the required hosts for that scenario.
-
-| **Host name** | **Purpose** |
-| - | - |
-| **\*vscode.dev**</br>**\*vscode-unpkg.net**</br>**\*vscode-cdn.net**</br>**\*vscodeexperiments.azureedge.net**</br>**default.exp-tas.com** | Required to access vscode.dev (Visual Studio Code for the Web) |
-| **code.visualstudio.com** | Required to download and install VS Code desktop. This is not required for VS Code Web. |
-| **update.code.visualstudio.com**</br>**\*.vo.msecnd.net** | Used to retrieve VS Code server bits that are installed on the compute instance through a setup script. |
-| **marketplace.visualstudio.com**</br>**vscode.blob.core.windows.net**</br>**\*.gallerycdn.vsassets.io** | Required to download and install VS Code extensions. These enable the remote connection to Compute Instances provided by the Azure ML extension for VS Code, see [Connect to an Azure Machine Learning compute instance in Visual Studio Code](./how-to-set-up-vs-code-remote.md) for more information. |
-| **raw.githubusercontent.com/microsoft/vscode-tools-for-ai/master/azureml_remote_websocket_server/\*** |Used to retrieve websocket server bits that are installed on the compute instance. The websocket server is used to transmit requests from Visual Studio Code client (desktop application) to Visual Studio Code server running on the compute instance. |
## Next steps

This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
This article is part of a series on securing an Azure Machine Learning workflow.
* [Enable studio functionality](how-to-enable-studio-virtual-network.md)
* [Use custom DNS](how-to-custom-dns.md)
+For more information on configuring Azure Firewall, see [Tutorial: Deploy and configure Azure Firewall using the Azure portal](../firewall/tutorial-firewall-deploy-portal.md).
machine-learning How To Access Data Batch Endpoints Jobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-data-batch-endpoints-jobs.md
Batch endpoints ensure that only authorized users are able to invoke batch deplo
| Azure Data Lake Storage Gen1 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX |
| Azure Data Lake Storage Gen2 | Not applicable | Identity of the job + Managed identity of the compute cluster | POSIX and RBAC |
-The managed identity of the compute cluster is used for mounting and configuring the data store. That means that in order to successfully read data from external storage services, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
+The managed identity of the compute cluster is used for mounting and configuring external data storage accounts. However, the identity of the job is still used to read the underlying data, allowing you to achieve granular access control. That means that in order to successfully read data from external storage services, the managed identity of the compute cluster where the deployment is running must have at least [Storage Blob Data Reader](../role-based-access-control/built-in-roles.md#storage-blob-data-reader) access to the storage account. Only storage account owners can [change your access level via the Azure portal](../storage/blobs/assign-azure-role-data-access.md).
> [!NOTE]
> To assign an identity to the compute used by a batch deployment, follow the instructions at [Set up authentication between Azure ML and other services](how-to-identity-based-service-authentication.md#compute-cluster). Configure the identity on the compute cluster associated with the deployment. Notice that all the jobs running on such compute are affected by this change. However, different deployments (even under the same endpoint) can be configured to run under different clusters so you can administer the permissions accordingly depending on your requirements.
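As a sketch of that setup with the v2 Python SDK, the following creates a compute cluster with a user-assigned managed identity; the subscription, workspace, cluster, and identity values are placeholders, and the role assignment on the storage account is done separately:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import (
    AmlCompute,
    IdentityConfiguration,
    ManagedIdentityConfiguration,
)
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    DefaultAzureCredential(), "<subscription-id>", "<resource-group>", "<workspace-name>"
)

# Compute cluster whose user-assigned identity can then be granted
# Storage Blob Data Reader on the external storage account.
compute = AmlCompute(
    name="batch-cluster",
    size="STANDARD_DS3_V2",
    min_instances=0,
    max_instances=2,
    identity=IdentityConfiguration(
        type="user_assigned",
        user_assigned_identities=[
            ManagedIdentityConfiguration(resource_id="<identity-resource-id>")
        ],
    ),
)
ml_client.compute.begin_create_or_update(compute).result()
```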
machine-learning How To Access Terminal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-access-terminal.md
Access all Git operations from the terminal. All Git files and folders will be s
To integrate Git with your Azure Machine Learning workspace, see [Git integration for Azure Machine Learning](concept-train-model-git-integration.md).

## Install packages

Install packages from a terminal window. Install Python packages into the **Python 3.8 - AzureML** environment. Install R packages into the **R** environment.
Or you can install packages directly in Jupyter Notebook, RStudio, or Posit Work
## Add new kernels

> [!WARNING]
+> While customizing the compute instance, make sure you do not delete the **azureml_py36** or **azureml_py38** conda environments. Also do not delete **Python 3.6 - AzureML** or **Python 3.8 - AzureML** kernels. These are needed for Jupyter/JupyterLab functionality.
To add a new Jupyter kernel to the compute instance:
To add a new Jupyter kernel to the compute instance:
Any of the [available Jupyter Kernels](https://github.com/jupyter/jupyter/wiki/Jupyter-kernels) can be installed.

### Remove added kernels

> [!WARNING]
+> While customizing the compute instance, make sure you do not delete the **azureml_py36** or **azureml_py38** conda environments. Also do not delete **Python 3.6 - AzureML** or **Python 3.8 - AzureML** kernels. These are needed for Jupyter/JupyterLab functionality.
To remove an added Jupyter kernel from the compute instance, you must remove the kernelspec, and (optionally) the conda environment. You can also choose to keep the conda environment. You must remove the kernelspec, or your kernel will still be selectable and cause unexpected behavior.

To remove the kernelspec:

1. Use the terminal window to list and find the kernelspec:

   ```shell
To remove the kernelspec:
   ```

To also remove the conda environment:

1. Use the terminal window to list and find the conda environment:

   ```shell
   conda env list
   ```
-3. Remove the conda environment, replacing ENV_NAME with the conda environment you'd like to remove:
+1. Remove the conda environment, replacing ENV_NAME with the conda environment you'd like to remove:
   ```shell
   conda env remove -n ENV_NAME
   ```
Upon refresh, the kernel list in your notebooks view should reflect the changes
## Manage terminal sessions
- Select **View active sessions** in the terminal toolbar to see a list of all active terminal sessions. When there are no active sessions, this tab will be disabled.
+Terminal sessions can stay active if terminal tabs are not properly closed. Too many active terminal sessions can impact the performance of your compute instance.
+
+Select **Manage active sessions** in the terminal toolbar to see a list of all active terminal sessions and shut down the sessions you no longer need.
+
+Learn more about how to manage sessions running on your compute at [Managing notebook and terminal sessions](how-to-manage-compute-sessions.md).
> [!WARNING]
-> Make sure you close any unused sessions to preserve your compute instance's resources. Idle terminals may impact performance of compute instances.
+> Make sure you close any sessions you no longer need to preserve your compute instance's resources and optimize your performance.
machine-learning How To Administrate Data Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-administrate-data-authentication.md
Previously updated : 05/24/2022 Last updated : 01/20/2023+ # Customer intent: As an administrator, I need to administrate data access and set up authentication method for data scientists.
machine-learning How To Auto Train Forecast https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-forecast.md
Last updated 11/18/2021
+show_latex: true
# Set up AutoML to train a time-series forecasting model with Python
A time series whose moments (mean and variance) change over time is called a **n
:::image type="content" source="media/how-to-auto-train-forecast/non-stationary-retail-sales.png" alt-text="Diagram showing retail sales for a non-stationary time series.":::
-Next, let's examine the image below, which plots the the original series in first differences `($x_{t} = y_{t} - y_{t-1}$)` where `$x_t$` is the change in retail sales and $y_{t}$ and $y_{t-1}$ represent the original series and its first lag, respectively. The mean of the series is roughly constant regardless the time frame one is looking at. This is an example of a (first order) stationary times series. The reason we added the `first order` term is because the first moment (mean) is time invariant (does not change with time interval), the same cannot be said about the variance, which is a second moment.
+Next, let's examine the image below, which plots the original series in first differences, $x_t = y_t - y_{t-1}$, where $x_t$ is the change in retail sales and $y_t$ and $y_{t-1}$ represent the original series and its first lag, respectively. The mean of the series is roughly constant regardless of the time frame one is looking at. This is an example of a first order stationary time series. We use the term first order because the first moment (mean) does not change with the time interval; the same cannot be said about the variance, which is a second moment.
:::image type="content" source="media/how-to-auto-train-forecast/weakly-stationary-retail-sales.png" alt-text="Diagram showing retail sales for a weakly stationary time series.":::

Machine learning models in AutoML cannot inherently deal with stochastic trends, or other well-known problems associated with non-stationary time series. As a result, their out-of-sample forecast accuracy will be poor if such trends are present.
-Automated ML automatically analyzes time series dataset to check whether it is stationary or not. When non-stationary time series are detected, they are automatically first differenced to mitigate the impact of non-stationary time series.
+AutoML automatically analyzes the time series dataset to check whether it is stationary. When non-stationary time series are detected, AutoML applies a differencing transform automatically to mitigate the impact of non-stationarity.
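To see what this transform does, here's a small illustrative sketch (not AutoML's internal implementation): it first-differences a synthetic random walk and checks the stationarity of both series with an augmented Dickey-Fuller test:

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.stattools import adfuller

# Synthetic non-stationary series: a random walk with drift, a stand-in for retail sales.
rng = np.random.default_rng(0)
dates = pd.date_range("2020-01-01", periods=200, freq="D")
y = pd.Series(np.cumsum(rng.normal(loc=0.5, scale=1.0, size=200)), index=dates)

# First differences x_t = y_t - y_{t-1} remove the stochastic trend.
x = y.diff().dropna()

# Augmented Dickey-Fuller test: a small p-value is evidence of stationarity.
print("p-value, original:   ", adfuller(y)[1])
print("p-value, differenced:", adfuller(x)[1])
```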
## Run the experiment
mse = mean_squared_error(
    rolling_forecast_df[fitted_model.actual_column_name], rolling_forecast_df[fitted_model.forecast_column_name])
```
-In the above sample, the step size for the rolling forecast is set to 1 which means that the forecaster is advanced 1 period, or 1 day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
+In this sample, the step size for the rolling forecast is set to one, which means that the forecaster is advanced one period, or one day in our demand prediction example, at each iteration. The total number of forecasts returned by `rolling_forecast` thus depends on the length of the test set and this step size. For more details and examples, see the [rolling_forecast() documentation](/python/api/azureml-training-tabular/azureml.training.tabular.models.forecasting_pipeline_wrapper_base.forecastingpipelinewrapperbase#azureml-training-tabular-models-forecasting-pipeline-wrapper-base-forecastingpipelinewrapperbase-rolling-forecast) and the [Forecasting away from training data notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb).
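For reference, the call that produces `rolling_forecast_df` in the snippet above looks roughly like the following sketch; `X_test` and `y_test` are illustrative names for the test features and actuals:

```python
# Advance the forecaster one period at a time over the test set.
rolling_forecast_df = fitted_model.rolling_forecast(X_test, y_test, step=1)
```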
### Prediction into the future
See the [forecasting sample notebooks](https://github.com/Azure/azureml-examples
## Next steps

* Learn more about [How to deploy an AutoML model to an online endpoint](how-to-deploy-automl-endpoint.md).
+* Learn about [Interpretability: model explanations in automated machine learning (preview)](how-to-machine-learning-interpretability-automl.md).
+* Learn about [how AutoML builds forecasting models](./concept-automl-forecasting-methods.md).
machine-learning How To Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-image-models.md
description: Set up Azure Machine Learning automated ML to train computer vision
+ Last updated 07/13/2022
-#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model algorithm, hyperparameters, and training and deployment environments.
+#Customer intent: I'm a data scientist with ML knowledge in the computer vision space, looking to build ML models using image data in Azure Machine Learning with full control of the model architecture, hyperparameters, and training and deployment environments.
# Set up AutoML to train computer vision models
Last updated 07/13/2022
In this article, you learn how to train computer vision models on image data with automated ML with the Azure Machine Learning CLI extension v2 or the Azure Machine Learning Python SDK v2.
-Automated ML supports model training for computer vision tasks like image classification, object detection, and instance segmentation. Authoring AutoML models for computer vision tasks is currently supported via the Azure Machine Learning Python SDK. The resulting experimentation runs, models, and outputs are accessible from the Azure Machine Learning studio UI. [Learn more about automated ml for computer vision tasks on image data](concept-automated-ml.md).
+Automated ML supports model training for computer vision tasks like image classification, object detection, and instance segmentation. Authoring AutoML models for computer vision tasks is currently supported via the Azure Machine Learning Python SDK. The resulting experimentation trials, models, and outputs are accessible from the Azure Machine Learning studio UI. [Learn more about automated ml for computer vision tasks on image data](concept-automated-ml.md).
## Prerequisites
In order to generate computer vision models, you need to bring labeled image dat
If your training data is in a different format (like Pascal VOC or COCO), you can apply the helper scripts included with the sample notebooks to convert the data to JSONL. Learn more about how to [prepare data for computer vision tasks with automated ML](how-to-prepare-datasets-for-automl-images.md).

> [!Note]
-> The training data needs to have at least 10 images in order to be able to submit an AutoML run.
+> The training data needs to have at least 10 images in order to be able to submit an AutoML job.
> [!Warning]
> For this capability, creating an `MLTable` from data in JSONL format is supported using the SDK and CLI only. Creating the `MLTable` via UI is not supported at this time.
image_object_detection_job = automl.image_object_detection(
## Configure experiments
-For computer vision tasks, you can launch either [individual runs](#individual-runs), [manual sweeps](#manually-sweeping-model-hyperparameters) or [automatic sweeps](#automatically-sweeping-model-hyperparameters-automode). We recommend starting with an automatic sweep to get a first baseline model. Then, you can try out individual runs with certain models and hyperparameter configurations. Finally, with manual sweeps you can explore multiple hyperparameter values near the more promising models and hyperparameter configurations. This three step workflow (automatic sweep, individual runs, manual sweeps) avoids searching the entirety of the hyperparameter space, which grows exponentially in the number of hyperparameters.
+For computer vision tasks, you can launch either [individual trials](#individual-trials), [manual sweeps](#manually-sweeping-model-hyperparameters) or [automatic sweeps](#automatically-sweeping-model-hyperparameters-automode). We recommend starting with an automatic sweep to get a first baseline model. Then, you can try out individual trials with certain models and hyperparameter configurations. Finally, with manual sweeps you can explore multiple hyperparameter values near the more promising models and hyperparameter configurations. This three step workflow (automatic sweep, individual trials, manual sweeps) avoids searching the entirety of the hyperparameter space, which grows exponentially in the number of hyperparameters.
Automatic sweeps can yield competitive results for many datasets. Additionally, they do not require advanced knowledge of model architectures, they take into account hyperparameter correlations, and they work seamlessly across different hardware setups. All these reasons make them a strong option for the early stage of your experimentation process.
Automatic sweeps can yield competitive results for many datasets. Additionally,
An AutoML training job uses a primary metric for model optimization and hyperparameter tuning. The primary metric depends on the task type as shown below; other primary metric values are currently not supported.
-* `accuracy` for IMAGE_CLASSIFICATION
-* `iou` for IMAGE_CLASSIFICATION_MULTILABEL
-* `mean_average_precision` for IMAGE_OBJECT_DETECTION
-* `mean_average_precision` for IMAGE_INSTANCE_SEGMENTATION
+* [Accuracy](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) for image classification
+* [Intersection over union](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.jaccard_score.html) for image classification multilabel
+* [Mean average precision](how-to-understand-automated-ml.md#object-detection-and-instance-segmentation-metrics) for image object detection
+* [Mean average precision](how-to-understand-automated-ml.md#object-detection-and-instance-segmentation-metrics) for image instance segmentation
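For example, with the v2 Python SDK the primary metric is set when the job is created. The following is a sketch; the compute name, experiment name, and MLTable inputs (`my_training_data_input`, `my_validation_data_input`) are placeholders defined elsewhere:

```python
from azure.ai.ml import automl

# Sketch: an image object detection job with its supported primary metric.
image_object_detection_job = automl.image_object_detection(
    compute="<gpu-cluster-name>",
    experiment_name="<experiment-name>",
    training_data=my_training_data_input,
    validation_data=my_validation_data_input,
    target_column_name="label",
    primary_metric="mean_average_precision",
)
```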
### Job limits
You can control the resources spent on your AutoML Image training job by specify
Parameter | Detail
--|-
-`max_trials` | Parameter for maximum number of configurations to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model algorithm, set this parameter to 1. The default value is 1.
-`max_concurrent_trials`| Maximum number of runs that can run concurrently. If specified, must be an integer between 1 and 100. The default value is 1. <br><br> **NOTE:** <li> The number of concurrent runs is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. <li> `max_concurrent_trials` is capped at `max_trials` internally. For example, if user sets `max_concurrent_trials=4`, `max_trials=2`, values would be internally updated as `max_concurrent_trials=2`, `max_trials=2`.
+`max_trials` | Parameter for maximum number of trials to sweep. Must be an integer between 1 and 1000. When exploring just the default hyperparameters for a given model architecture, set this parameter to 1. The default value is 1.
+`max_concurrent_trials`| Maximum number of trials that can run concurrently. If specified, must be an integer between 1 and 100. The default value is 1. <br><br> **NOTE:** <li> The number of concurrent trials is gated on the resources available in the specified compute target. Ensure that the compute target has the available resources for the desired concurrency. <li> `max_concurrent_trials` is capped at `max_trials` internally. For example, if user sets `max_concurrent_trials=4`, `max_trials=2`, values would be internally updated as `max_concurrent_trials=2`, `max_trials=2`.
`timeout_minutes`| The amount of time in minutes before the experiment terminates. If none specified, default experiment timeout_minutes is seven days (maximum 60 days).

# [Azure CLI](#tab/cli)
limits:
> [!IMPORTANT]
> This feature is currently in public preview. This preview version is provided without a service-level agreement. Certain features might not be supported or might have constrained capabilities. For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-It is generally hard to predict the best model architecture and hyperparameters for a dataset. Also, in some cases the human time allocated to tuning hyperparameters may be limited. For computer vision tasks, you can specify a number of runs and the system will automatically determine the region of the hyperparameter space to sweep. You do not have to define a hyperparameter search space, a sampling method or an early termination policy.
+It is generally hard to predict the best model architecture and hyperparameters for a dataset. Also, in some cases the human time allocated to tuning hyperparameters may be limited. For computer vision tasks, you can specify a number of trials and the system will automatically determine the region of the hyperparameter space to sweep. You do not have to define a hyperparameter search space, a sampling method or an early termination policy.
#### Triggering AutoMode
image_object_detection_job.set_limits(max_trials=10, max_concurrent_trials=2)
```
-A number of runs between 10 and 20 will likely work well on many datasets. The [time budget](#job-limits) for the AutoML job can still be set, but we recommend doing this only if each trial may take a long time.
+A number of trials between 10 and 20 will likely work well on many datasets. The [time budget](#job-limits) for the AutoML job can still be set, but we recommend doing this only if each trial may take a long time.
> [!Warning]
> Launching automatic sweeps via the UI is not supported at this time.
-### Individual runs
+### Individual trials
-In individual runs, you directly control the model algorithm and hyperparameters. The model algorithm is passed via the `model_name` parameter.
+In individual trials, you directly control the model architecture and hyperparameters. The model architecture is passed via the `model_name` parameter.
-#### Supported model algorithms
+#### Supported model architectures
The following table summarizes the supported models for each computer vision task.
-Task | Model algorithms | String literal syntax<br> ***`default_model`\**** denoted with \*
+Task | Model architectures | String literal syntax<br> ***`default_model`\**** denoted with \*
--|--|--
Image classification<br> (multi-class and multi-label)| **MobileNet**: Light-weighted models for mobile applications <br> **ResNet**: Residual networks<br> **ResNeSt**: Split attention networks<br> **SE-ResNeXt50**: Squeeze-and-Excitation networks<br> **ViT**: Vision transformer networks| `mobilenetv2` <br>`resnet18` <br>`resnet34` <br> `resnet50` <br> `resnet101` <br> `resnet152` <br> `resnest50` <br> `resnest101` <br> `seresnext` <br> `vits16r224` (small) <br> ***`vitb16r224`\**** (base) <br>`vitl16r224` (large)
Object detection | **YOLOv5**: One stage object detection model <br> **Faster RCNN ResNet FPN**: Two stage object detection models <br> **RetinaNet ResNet FPN**: addresses class imbalance with Focal Loss <br> <br>*Note: Refer to [`model_size` hyperparameter](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for YOLOv5 model sizes.*| ***`yolov5`\**** <br> `fasterrcnn_resnet18_fpn` <br> `fasterrcnn_resnet34_fpn` <br> `fasterrcnn_resnet50_fpn` <br> `fasterrcnn_resnet101_fpn` <br> `fasterrcnn_resnet152_fpn` <br> `retinanet_resnet50_fpn`
Instance segmentation | **MaskRCNN ResNet FPN**| `maskrcnn_resnet18_fpn` <br> `maskrcnn_resnet34_fpn` <br> ***`maskrcnn_resnet50_fpn`\**** <br> `maskrcnn_resnet101_fpn` <br> `maskrcnn_resnet152_fpn`
-In addition to controlling the model algorithm, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
+In addition to controlling the model architecture, you can also tune hyperparameters used for model training. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are task-specific or model-specific. [Learn more about the available hyperparameters for these instances](reference-automl-images-hyperparameters.md).
# [Azure CLI](#tab/cli)

[!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the model_name key in the training_parameters section. For example,
+If you wish to use the default hyperparameter values for a given architecture (say yolov5), you can specify it using the model_name key in the training_parameters section. For example,
```yaml
training_parameters:
training_parameters:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-If you wish to use the default hyperparameter values for a given algorithm (say yolov5), you can specify it using the model_name parameter in the set_training_parameters method of the task specific `automl` job. For example,
+If you wish to use the default hyperparameter values for a given architecture (say yolov5), you can specify it using the model_name parameter in the set_training_parameters method of the task specific `automl` job. For example,
```python
image_object_detection_job.set_training_parameters(model_name="yolov5")
```
search_space:
#### Define the parameter search space
-You can define the model algorithms and hyperparameters to sweep in the parameter space. You can either specify a single model algorithm or multiple ones.
+You can define the model architectures and hyperparameters to sweep in the parameter space. You can either specify a single model architecture or multiple ones.
-* See [Individual runs](#individual-runs) for the list of supported model algorithms for each task type.
+* See [Individual trials](#individual-trials) for the list of supported model architectures for each task type.
* See [Hyperparameters for computer vision tasks](reference-automl-images-hyperparameters.md) for the hyperparameters available for each computer vision task type.
* See [details on supported distributions for discrete and continuous hyperparameters](how-to-tune-hyperparameters.md#define-the-search-space).
When sweeping hyperparameters, you need to specify the sampling method to use fo
#### Early termination policies
-You can automatically end poorly performing runs with an early termination policy. Early termination improves computational efficiency, saving compute resources that would have been otherwise spent on less promising configurations. Automated ML for images supports the following early termination policies using the `early_termination` parameter. If no termination policy is specified, all configurations are run to completion.
+You can automatically end poorly performing trials with an early termination policy. Early termination improves computational efficiency, saving compute resources that would have been otherwise spent on less promising trials. Automated ML for images supports the following early termination policies using the `early_termination` parameter. If no termination policy is specified, all trials are run to completion.
| Early termination policy | AutoML Job syntax |
In our experiments, we found that these augmentations help the model to generali
## Incremental training (optional)
-Once the training run is done, you have the option to further train the model by loading the trained model checkpoint. You can either use the same dataset or a different one for incremental training.
+Once the training job is done, you have the option to further train the model by loading the trained model checkpoint. You can either use the same dataset or a different one for incremental training.
-### Pass the checkpoint via run ID
+### Pass the checkpoint via job ID
-You can pass the run ID that you want to load the checkpoint from.
+You can pass the job ID that you want to load the checkpoint from.
# [Azure CLI](#tab/cli)
training_parameters:
[!INCLUDE [sdk v2](../../includes/machine-learning-sdk-v2.md)]
-To find the run ID from the desired model, you can use the following code.
+To find the job ID from the desired model, you can use the following code.
```python
-# find a run id to get a model checkpoint from
+# find a job id to get a model checkpoint from
import mlflow

# Obtain the tracking URL from MLClient
from mlflow.tracking.client import MlflowClient
mlflow_client = MlflowClient()
mlflow_parent_run = mlflow_client.get_run(automl_job.name)
-# Fetch the id of the best automl child run.
+# Fetch the id of the best automl child trial.
target_checkpoint_run_id = mlflow_parent_run.data.tags["automl_best_child_run_id"]
```
-To pass a checkpoint via the run ID, you need to use the `checkpoint_run_id` parameter in `set_training_parameters` function.
+To pass a checkpoint via the job ID, you need to use the `checkpoint_run_id` parameter in `set_training_parameters` function.
```python
image_object_detection_job = automl.image_object_detection(
When you've configured your AutoML Job to the desired settings, you can submit t
## Outputs and evaluation metrics
-The automated ML training runs generates output model files, evaluation metrics, logs and deployment artifacts like the scoring file and the environment file which can be viewed from the outputs and logs and metrics tab of the child runs.
+The automated ML training jobs generate output model files, evaluation metrics, logs, and deployment artifacts like the scoring file and the environment file, which can be viewed from the outputs, logs, and metrics tabs of the child jobs.
> [!TIP]
-> Check how to navigate to the run results from the [View run results](how-to-understand-automated-ml.md#view-job-results) section.
+> Check how to navigate to the job results from the [View job results](how-to-understand-automated-ml.md#view-job-results) section.
-For definitions and examples of the performance charts and metrics provided for each run, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview).
+For definitions and examples of the performance charts and metrics provided for each job, see [Evaluate automated machine learning experiment results](how-to-understand-automated-ml.md#metrics-for-image-models-preview).
## Register and deploy model
-Once the run completes, you can register the model that was created from the best run (configuration that resulted in the best primary metric). You can either register the model after downloading or by specifying the azureml path with corresponding jobid. Note: If you want to change the inference settings that are described below you need to download the model and change settings.json and register using the updated model folder.
+Once the job completes, you can register the model that was created from the best trial (the configuration that resulted in the best primary metric). You can register the model either after downloading it or by specifying the azureml path with the corresponding job ID. Note: If you want to change the inference settings described below, you need to download the model, change settings.json, and register using the updated model folder.
-### Get the best run
+### Get the best trial
# [Azure CLI](#tab/cli)
az ml online-endpoint update --name 'od-fridge-items-endpoint' --traffic 'od-fri
Alternatively, you can deploy the model from the [Azure Machine Learning studio UI](https://ml.azure.com/).
-Navigate to the model you wish to deploy in the **Models** tab of the automated ML run and select on **Deploy** and select **Deploy to real-time endpoint** .
+Navigate to the model you wish to deploy in the **Models** tab of the automated ML job, select **Deploy**, and then select **Deploy to real-time endpoint**.
![Screenshot of the Deployment page after selecting the Deploy option.](./media/how-to-auto-train-image-models/deploy-end-point.png)
image_object_detection_job = automl.image_object_detection(
### Streaming image files from storage
-By default, all image files are downloaded to disk prior to model training. If the size of the image files is greater than available disk space, the run will fail. Instead of downloading all images to disk, you can select to stream image files from Azure storage as they're needed during training. Image files are streamed from Azure storage directly to system memory, bypassing disk. At the same time, as many files as possible from storage are cached on disk to minimize the number of requests to storage.
+By default, all image files are downloaded to disk prior to model training. If the size of the image files is greater than available disk space, the job will fail. Instead of downloading all images to disk, you can select to stream image files from Azure storage as they're needed during training. Image files are streamed from Azure storage directly to system memory, bypassing disk. At the same time, as many files as possible from storage are cached on disk to minimize the number of requests to storage.
> [!NOTE]
> If streaming is enabled, ensure the Azure storage account is located in the same region as compute to minimize cost and latency.
machine-learning How To Auto Train Nlp Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-auto-train-nlp-models.md
+
machine-learning How To Automl Forecasting Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-automl-forecasting-faq.md
+
+ Title: Frequently asked questions about forecasting in AutoML
+
+description: Read answers to frequently asked questions about forecasting in AutoML
++++++++ Last updated : 12/15/2022++
+# Frequently asked questions about forecasting in AutoML
+This article answers common questions about forecasting in AutoML. See the [methods overview article](./concept-automl-forecasting-methods.md) for more general information about forecasting methodology in AutoML. Instructions and examples for training forecasting models in AutoML can be found in our [set up AutoML for time series forecasting](./how-to-auto-train-forecast.md) article.
+
+## How do I start building forecasting models in AutoML?
+You can start by reading our guide on [setting up AutoML to train a time-series forecasting model with Python](./how-to-auto-train-forecast.md). We've also provided hands-on examples in several Jupyter notebooks:
+1. [Bike share example](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-bike-share/auto-ml-forecasting-bike-share.ipynb)
+2. [Forecasting using deep learning](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-github-dau/auto-ml-forecasting-github-dau.ipynb)
+3. [Many models](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-many-models/auto-ml-forecasting-many-models.ipynb)
+4. [Forecasting Recipes](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb)
+5. [Advanced forecasting scenarios](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-forecast-function/auto-ml-forecasting-function.ipynb)
+
+## Why is AutoML slow on my data?
+
+We're always working to make it faster and more scalable! To work as a general forecasting platform, AutoML does extensive data validations, complex feature engineering, and searches over a large model space. This complexity can require a lot of time, depending on the data and the configuration.
+
+One common source of slow runtime is training AutoML with default settings on data containing numerous time series. The cost of many forecasting methods scales with the number of series. For example, methods like Exponential Smoothing and Prophet [train a model for each time series](./concept-automl-forecasting-methods.md#model-grouping) in the training data. **The Many Models feature of AutoML scales to these scenarios** by distributing training jobs across a compute cluster and has been successfully applied to data with millions of time series. For more information, see the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) article. You can also read about [the success of Many Models](https://techcommunity.microsoft.com/t5/ai-machine-learning-blog/automated-machine-learning-on-the-m5-forecasting-competition/ba-p/2933391) on a high-profile competition data set.
+
+## How can I make AutoML faster?
+See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer to understand why it may be slow in your case.
+Consider the following configuration changes that may speed up your job (a configuration sketch follows the list):
+- Block time series models like ARIMA and Prophet
+- Turn off look-back features like lags and rolling windows
+- Reduce
+ - number of trials/iterations
+ - trial/iteration timeout
+ - experiment timeout
+ - number of cross validation folds.
+- Ensure that early termination is enabled.
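+
+The following is a minimal sketch of these settings with the v2 Python SDK. It assumes `forecasting_job` was created with `automl.forecasting(...)`; the blocked algorithm names, column names, and limit values are illustrative, not recommendations:
+
+```python
+# Block slow per-series models.
+forecasting_job.set_training(blocked_training_algorithms=["AutoArima", "Prophet"])
+
+# Leave look-back features (lags, rolling windows) off.
+forecasting_job.set_forecast_settings(
+    time_column_name="date",         # illustrative column name
+    forecast_horizon=14,             # illustrative horizon
+    target_lags=None,                # no lag features
+    target_rolling_window_size=None, # no rolling-window features
+)
+
+# Tighten trial counts and timeouts, and keep early termination enabled.
+forecasting_job.set_limits(
+    max_trials=15,
+    max_concurrent_trials=4,
+    trial_timeout_minutes=30,
+    timeout_minutes=240,
+    enable_early_termination=True,
+)
+```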
+
+## What modeling configuration should I use?
+
+There are four basic configurations supported by AutoML forecasting:
+
+1. **Default AutoML** is recommended if the dataset has a small number of time series that have roughly similar historic behavior.
+
+ Advantages:
+ - Simple to configure from code/SDK or AzureML Studio
+ - AutoML has the chance to cross-learn across different time series since the regression models pool all series together in training. See the [model grouping](./concept-automl-forecasting-methods.md#model-grouping) section for more information.
++
+ Disadvantages:
+
+ - Regression models may be less accurate if the time series in the training data have divergent behavior
+ - Time series models may take a long time to train if there are a large number of series in the training data. See the ["why is AutoML slow on my data"](#why-is-automl-slow-on-my-data) answer for more information.
+
+2. **AutoML with deep learning** is recommended for datasets with more than 1000 observations and, potentially, numerous time series exhibiting complex patterns. When enabled, AutoML will sweep over temporal convolutional neural network (TCN) models during training. See the [enable deep learning](./how-to-auto-train-forecast.md#enable-deep-learning) section for more information.
+
+ Advantages
+ - Simple to configure from code/SDK or AzureML Studio
+ - Cross-learning opportunities since the TCN pools data over all series
+ - Potentially higher accuracy due to the large capacity of DNN models. See the [forecasting models in AutoML](./concept-automl-forecasting-methods.md#forecasting-models-in-automl) section for more information.
+
+ Disadvantages
+ - Training can take much longer due to the complexity of DNN models
+
+ > [!NOTE]
+ > We recommend using compute nodes with GPUs when deep learning is enabled to best take advantage of high DNN capacity. Training time can be much faster in comparison to nodes with only CPUs. See the [GPU optimized compute](../virtual-machines/sizes-gpu.md) article for more information.
+
+3. **Many Models** is recommended if you need to train and manage a large number of forecasting models in a scalable way. See the [forecasting at scale](./how-to-auto-train-forecast.md#forecasting-at-scale) section for more information.
+
+ Advantages:
+ - Scalable
+ - Potentially higher accuracy when time series have divergent behavior from one another.
+
+ Disadvantages:
+ - No cross-learning across time series
+ - You can't configure or launch Many Models jobs from AzureML Studio; only the code/SDK experience is currently available.
+
+4. **Hierarchical Time Series**, or HTS, is recommended if the series in your data have nested, hierarchical structure and you need to train or make forecasts at aggregated levels of the hierarchy. See the [hierarchical time series forecasting](how-to-auto-train-forecast.md#hierarchical-time-series-forecasting) section for more information.
+
+ Advantages
+ - Training at aggregated levels can reduce noise in the leaf node time series and potentially lead to higher accuracy models
+ - Forecasts can be retrieved for any level of the hierarchy by aggregating or disaggregating forecasts from the training level.
+
+ Disadvantages
+ - You need to provide the aggregation level for training. AutoML doesn't currently have an algorithm to find an optimal level.
+
+ > [!NOTE]
+ > HTS is designed for tasks where training or prediction is required at aggregated levels in the hierarchy. For hierarchical data requiring only leaf node training and prediction, use [Many Models](./how-to-auto-train-forecast.md#many-models) instead.
+
+## How can I prevent over-fitting and data leakage?
+
+AutoML uses machine learning best practices, such as cross-validated model selection, that mitigate many over-fitting issues. However, there are other potential sources of over-fitting:
+
+- The input data contains **feature columns that are derived from the target with a simple formula**. For example, a feature that is an exact multiple of the target can result in a nearly perfect training score. The model, however, will likely not generalize to out-of-sample data. We advise you to explore the data prior to model training and to drop columns that "leak" the target information; a simple check is sketched after this list.
+- The training data uses **features that are not known into the future**, up to the forecast horizon. AutoML's regression models currently assume all features are known to the forecast horizon. We advise you to explore your data prior to training and remove any feature columns that are only known historically.
+- There are **significant structural differences - regime changes - between the training, validation, or test portions of the data**. For example, consider the effect of the COVID-19 pandemic on demand for almost any good during 2020 and 2021; this is a classic example of a regime change. Over-fitting due to regime change is the most challenging issue to address because it's highly scenario dependent and can require deep knowledge to identify. As a first line of defense, try to reserve 10 - 20% of the total history for validation, or cross-validation, data. This is not always possible if the training history is short, but is generally a best practice. See our guide on [configuring validation](./how-to-auto-train-forecast.md#training-and-validation-data) for more information.
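+
+As an illustration of that exploration step, the following sketch flags numeric columns that are suspiciously correlated with the target; the file name and the target column `demand` are hypothetical:
+
+```python
+import pandas as pd
+
+# Load training data and correlate every numeric feature with the target.
+df = pd.read_csv("training_data.csv")  # hypothetical file
+correlations = df.corr(numeric_only=True)["demand"].drop("demand")
+
+# Near-perfect correlation is a red flag for target leakage.
+suspicious = correlations[correlations.abs() > 0.99]
+print("Columns that may leak the target:", list(suspicious.index))
+```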
+
+## What if my time series data doesn't have regularly spaced observations?
+
+AutoML's forecasting models all require that training data have regularly spaced observations with respect to the calendar. This requirement includes cases like monthly or yearly observations where the number of days between observations may vary. There are two cases where time dependent data may not meet this requirement:
+
+- The data has a well defined frequency, but **there are missing observations that create gaps in the series**. In this case, AutoML attempts to detect the frequency, fill in new observations for the gaps, and impute the missing target and feature values. You can optionally configure the imputation methods via SDK settings or through the Web UI. See the [custom featurization](./how-to-auto-train-forecast.md#customize-featurization) guide for more information on configuring imputation; the first sketch after this list illustrates the idea.
+
+- **The data doesn't have a well defined frequency**. That is, the duration between observations doesn't have a discernible pattern. Transactional data, like that from a point-of-sale system, is one example. In this case, you can set AutoML to aggregate your data to a chosen frequency that best suits the data and the modeling objectives. See the [data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation) section for more information; the second sketch after this list illustrates the idea.
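+
+For the imputation case, the following is a hedged SDK v1 sketch of custom imputation settings; the column names are hypothetical:
+
+```python
+from azureml.automl.core.featurization import FeaturizationConfig
+
+featurization_config = FeaturizationConfig()
+
+# Fill gaps in the hypothetical target column "demand" with zeros.
+featurization_config.add_transformer_params(
+    "Imputer", ["demand"], {"strategy": "constant", "fill_value": 0}
+)
+
+# Fill the hypothetical feature column "price" with its median value.
+featurization_config.add_transformer_params(
+    "Imputer", ["price"], {"strategy": "median"}
+)
+```
+
+You can then pass this object to `AutoMLConfig` through its `featurization` parameter. For the aggregation case, a hedged SDK v1 sketch might set a daily frequency and sum the target within each day; the column names are again hypothetical:
+
+```python
+from azureml.automl.core.forecasting_parameters import ForecastingParameters
+
+# Aggregate irregular data to a daily frequency; "D" is a pandas offset alias.
+forecasting_parameters = ForecastingParameters(
+    time_column_name="timestamp",       # hypothetical time column
+    forecast_horizon=14,
+    freq="D",
+    target_aggregation_function="sum",  # sum the target within each day
+)
+```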
+
+## How do I choose the primary metric?
+
+The primary metric is important because its value on validation data determines the best model during [sweeping and selection](./concept-automl-forecasting-sweeping.md). **Normalized root mean squared error (NRMSE) or normalized mean absolute error (NMAE) are usually the best choices for the primary metric** in forecasting tasks. To choose between them, note that RMSE penalizes outliers in the training data more than MAE because it uses the square of the error. NMAE may be a better choice if you want the model to be less sensitive to outliers. See the [regression and forecasting metrics](./how-to-understand-automated-ml.md#regressionforecasting-metrics) guide for more information.
+
+> [!NOTE]
+> We do not recommend using the R2 score, or _R_<sup>2</sup>, as a primary metric for forecasting.
+
+> [!NOTE]
+> AutoML doesn't support custom, user-provided functions for the primary metric. You must choose one of the predefined primary metrics that AutoML supports.
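+
+For illustration, a minimal, hedged SDK v1 configuration that selects NMAE as the primary metric might look like the following sketch; the file and column names are assumptions:
+
+```python
+import pandas as pd
+from azureml.automl.core.forecasting_parameters import ForecastingParameters
+from azureml.train.automl import AutoMLConfig
+
+train = pd.read_csv("train.csv")  # hypothetical training data
+
+forecasting_parameters = ForecastingParameters(
+    time_column_name="timestamp",  # hypothetical time column
+    forecast_horizon=14,
+)
+
+automl_config = AutoMLConfig(
+    task="forecasting",
+    primary_metric="normalized_mean_absolute_error",  # NRMSE is the other usual choice
+    training_data=train,
+    label_column_name="demand",    # hypothetical target column
+    n_cross_validations=5,
+    forecasting_parameters=forecasting_parameters,
+)
+```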
+
+## How can I improve the accuracy of my model?
+
+- Ensure that you're configuring AutoML the best way for your data. See the [model configuration](#what-modeling-configuration-should-i-use) answer for more information.
+- Check out the [forecasting recipes notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-recipes-univariate/auto-ml-forecasting-univariate-recipe-experiment-settings.ipynb) for step-by-step guides on how to build and improve forecast models.
+- Evaluate the model using back-tests over several forecasting cycles. This procedure gives a more robust estimate of forecasting error and gives you a baseline to measure improvements against. See our [back-testing notebook](https://github.com/Azure/azureml-examples/blob/main/v1/python-sdk/tutorials/automl-with-azureml/forecasting-backtest-single-model/auto-ml-forecasting-backtest-single-model.ipynb) for an example; a minimal sketch of the idea appears after this list.
+- If the data is noisy, consider aggregating it to a coarser frequency to increase the signal-to-noise ratio. See the [data aggregation](./how-to-auto-train-forecast.md#frequency--target-data-aggregation) guide for more information.
+- Add new features that may help predict the target. Subject matter expertise can help greatly when selecting training data.
+- Compare validation and test metric values and determine if the selected model is under-fitting or over-fitting the data. This knowledge can guide you to a better training configuration. For example, you might determine that you need to use more cross-validation folds in response to over-fitting.
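+
+The back-testing procedure mentioned in this list can be sketched without any AzureML dependency; the naive last-value forecaster below is a hypothetical stand-in for any trained model:
+
+```python
+import numpy as np
+import pandas as pd
+
+def backtest_mae(series: pd.Series, forecast_fn, horizon: int = 7, cycles: int = 4) -> float:
+    """Average MAE over several rolling forecast origins."""
+    errors = []
+    for i in range(cycles, 0, -1):
+        split = len(series) - i * horizon
+        history = series.iloc[:split]
+        actual = series.iloc[split : split + horizon]
+        predicted = np.asarray(forecast_fn(history, horizon))
+        errors.append(np.mean(np.abs(predicted - actual.to_numpy())))
+    return float(np.mean(errors))
+
+# A naive "repeat the last value" baseline stands in for a real model.
+naive = lambda history, h: np.repeat(history.iloc[-1], h)
+series = pd.Series(np.random.default_rng(0).normal(100.0, 5.0, 200))
+print(backtest_mae(series, naive))
+```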
+
+## How do I fix an Out-Of-Memory error?
+
+There are two types of memory issues:
+- RAM Out-of-Memory
+- Disk Out-of-Memory
+
+First, ensure that you're configuring AutoML in the best way for your data. See the [model configuration](#what-modeling-configuration-should-i-use) answer for more information.
+
+For default AutoML settings, RAM Out-of-Memory may be fixed by using compute nodes with more RAM. A useful rule-of-thumb is that the amount of free RAM should be at least 10 times larger than the raw data size to run AutoML with default settings.
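+
+As a rough check of this rule of thumb, you can compare your in-memory data size against available RAM; `psutil` is a third-party package assumed to be installed:
+
+```python
+import pandas as pd
+import psutil  # assumed installed: pip install psutil
+
+df = pd.read_csv("train.csv")  # hypothetical data file
+
+data_bytes = df.memory_usage(deep=True).sum()
+free_bytes = psutil.virtual_memory().available
+
+# Rule of thumb: free RAM should be at least ~10x the raw data size.
+if free_bytes < 10 * data_bytes:
+    print("Consider a compute SKU with more RAM for default AutoML settings.")
+```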
+
+Disk Out-of-Memory errors may be resolved by deleting the compute cluster and creating a new one.
+
+## What advanced forecasting scenarios are supported by AutoML?
+
+We support the following advanced prediction scenarios:
+- Quantile forecasts
+- Robust model evaluation via [rolling forecasts](./how-to-auto-train-forecast.md#evaluating-model-accuracy-with-a-rolling-forecast)
+- Forecasting beyond the forecast horizon
+- Forecasting when there's a gap in time between training and forecasting periods
+
+See the [advanced forecasting scenarios notebook](https://github.com/Azure/MachineLearningNotebooks/blob/master/how-to-use-azureml/automated-machine-learning/forecasting-forecast-function/auto-ml-forecasting-function.ipynb) for examples and details.
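+
+For instance, quantile forecasts can be retrieved from a fitted model with a sketch like the following; `remote_run` (a completed AutoML run) and `X_test` (a frame of future time stamps and features) are assumptions here:
+
+```python
+# Assumes remote_run is a completed AutoMLRun and X_test holds future rows.
+best_run, fitted_model = remote_run.get_output()
+
+fitted_model.quantiles = [0.05, 0.5, 0.95]  # request 5th/50th/95th percentiles
+quantile_forecast = fitted_model.forecast_quantiles(X_test)
+print(quantile_forecast.head())
+```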
+
+## How do I view metrics from forecasting training jobs?
+
+See our [metrics in studio UI](./v1/how-to-log-view-metrics.md#view-run-metrics-in-the-studio-ui) guide for finding training and validation metric values. You can view metrics for any forecasting model trained in AutoML by navigating to a model from the AutoML job UI in the studio and selecting the **Metrics** tab.
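+
+Programmatically, a hedged SDK v1 sketch for retrieving the same metric values might look like this; `remote_run` is assumed to be a completed AutoML run object:
+
+```python
+# Assumes remote_run is a completed AutoMLRun from the v1 SDK.
+best_run, fitted_model = remote_run.get_output()
+print(best_run.get_metrics())  # training and validation metric values
+```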
++
+## How do I debug failures with forecasting training jobs?
+
+If your AutoML forecasting job fails, you'll see an error message in the studio UI that may help you diagnose and fix the problem. The best source of information about the failure, beyond the error message, is the driver log for the job. See the [run logs](./v1/how-to-log-view-metrics.md#view-and-download-log-files-for-a-run) guide for instructions on finding driver logs.
+
+> [!NOTE]
+> For Many Models or HTS jobs, training usually runs on multi-node compute clusters. Logs for these jobs are present for each node IP address, so you need to search the error logs for each node. The error logs, along with the driver logs, are in the `user_logs` folder for each node IP.
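+
+As a hedged illustration, SDK v1 lets you download all logs for a run to inspect locally; `remote_run` is assumed to be the failed job's run object:
+
+```python
+# Assumes remote_run is the failed job's Run object from the v1 SDK.
+remote_run.get_all_logs(destination="./job_logs")  # includes driver and user logs
+```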
+
+## What is a workspace / environment / experiment / compute instance / compute target?
+
+If you aren't familiar with Azure Machine Learning concepts, start with the ["What is AzureML"](overview-what-is-azure-machine-learning.md) article and the [workspaces](./concept-workspace.md) article.
+
+## Next steps
+* Learn more about [how to set up AutoML to train a time-series forecasting model](./how-to-auto-train-forecast.md).
+* Learn about [calendar features for time series forecasting in AutoML](./concept-automl-forecasting-calendar-features.md).
+* Learn about [how AutoML uses machine learning to build forecasting models](./concept-automl-forecasting-methods.md).
+* Learn about [AutoML Forecasting Lagged Features](./concept-automl-forecasting-lags.md).
machine-learning How To Configure Auto Train https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-auto-train.md
Classification | Regression | Time Series Forecasting
With additional algorithms below.
-* [Image Classification Multi-class Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
-* [Image Classification Multi-label Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
-* [Image Object Detection Algorithms](how-to-auto-train-image-models.md#supported-model-algorithms)
+* [Image Classification Multi-class Algorithms](how-to-auto-train-image-models.md#supported-model-architectures)
+* [Image Classification Multi-label Algorithms](how-to-auto-train-image-models.md#supported-model-architectures)
+* [Image Object Detection Algorithms](how-to-auto-train-image-models.md#supported-model-architectures)
* [NLP Text Classification Multi-label Algorithms](how-to-auto-train-nlp-models.md#language-settings)
* [NLP Text Named Entity Recognition (NER) Algorithms](how-to-auto-train-nlp-models.md#language-settings)
machine-learning How To Configure Cross Validation Data Splits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-configure-cross-validation-data-splits.md
-+ Last updated 11/15/2021
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-attach-compute-cluster.md
In this article, learn how to:
## What is a compute cluster?
-Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single or multi-node compute. The compute cluster is a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be put in an Azure Virtual Network. Compute cluster supports **no public IP (preview)** deployment as well in virtual network. The compute executes in a containerized environment and packages your model dependencies in a [Docker container](https://www.docker.com/why-docker).
+Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single or multi-node compute. The compute cluster is a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be put in an Azure Virtual Network. Compute cluster also supports **no public IP** deployment in a virtual network. The compute executes in a containerized environment and packages your model dependencies in a [Docker container](https://www.docker.com/why-docker).
Compute clusters can run jobs securely in a [virtual network environment](how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
machine-learning How To Create Manage Compute Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-create-manage-compute-instance.md
Where the file *create-instance.yml* is:
* Enable idle shutdown (preview). Configure a compute instance to automatically shut down if it's inactive. For more information, see [enable idle shutdown](#enable-idle-shutdown-preview).
* Add schedule. Schedule times for the compute instance to automatically start and/or shut down. See [schedule details](#schedule-automatic-start-and-stop) below.
* Enable SSH access. Follow the [detailed SSH access instructions](#enable-ssh-access) below.
- * Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ (preview) to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
+ * Enable virtual network. Specify the **Resource group**, **Virtual network**, and **Subnet** to create the compute instance inside an Azure Virtual Network (vnet). You can also select __No public IP__ to prevent the creation of a public IP address, which requires a private link workspace. You must also satisfy these [network requirements](./how-to-secure-training-vnet.md) for virtual network setup.
* Assign the compute instance to another user. For more about assigning to other users, see [Create on behalf of](#create-on-behalf-of-preview).
* Provision with a setup script (preview) - for more information about how to create and use a setup script, see [Customize the compute instance with a script](how-to-customize-compute-instance.md).
machine-learning How To Deploy Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-kubernetes-extension.md
In this article, you can learn:
* Or an Arc Kubernetes cluster is up and running. Follow instructions in [connect existing Kubernetes cluster to Azure Arc](../azure-arc/kubernetes/quickstart-connect-cluster.md). * If the cluster is an Azure RedHat OpenShift Service (ARO) cluster or OpenShift Container Platform (OCP) cluster, you must satisfy other prerequisite steps as documented in the [Reference for configuring Kubernetes cluster](./reference-kubernetes.md#prerequisites-for-aro-or-ocp-clusters) article. * For production purposes, the Kubernetes cluster must have a minimum of **4 vCPU cores and 14-GB memory**. For more information on resource detail and cluster size recommendations, see [Recommended resource planning](./reference-kubernetes.md).
-* Cluster running behind an outbound proxy server or firewall needs extra [network configurations](./how-to-access-azureml-behind-firewall.md#kubernetes-compute)
+* A cluster running behind an outbound proxy server or firewall needs extra [network configurations](./how-to-access-azureml-behind-firewall.md).
* Install or upgrade Azure CLI to version 2.24.0 or higher. * Install or upgrade Azure CLI extension `k8s-extension` to version 1.2.3 or higher.
machine-learning How To Deploy Model Cognitive Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-model-cognitive-search.md
+ Last updated 03/11/2021
machine-learning How To Devops Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-devops-machine-learning.md
+ Last updated 11/11/2022
machine-learning How To Export Delete Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-export-delete-data.md
+ Last updated 10/21/2021
machine-learning How To Generate Automl Training Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-generate-automl-training-code.md
description: How to view model training code for an automated ML trained model a
+
machine-learning How To Github Actions Machine Learning https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-github-actions-machine-learning.md
+ Last updated 09/13/2022
machine-learning How To Inference Onnx Automl Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-inference-onnx-automl-image-models.md
description: Use ONNX with Azure Machine Learning automated ML to make predictions on computer vision models for classification, object detection, and instance segmentation. +
env = Environment(
) ```
-Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](reference-automl-images-hyperparameters.md#model-specific-hyperparameters) and for supported object detection model names refer to the [supported model algorithm section](how-to-auto-train-image-models.md#supported-model-algorithms).
+Use the following model specific arguments to submit the script. For more details on arguments, refer to [model specific hyperparameters](how-to-auto-train-image-models.md#configure-experiments) and for supported object detection model names refer to the [supported model architecture section](how-to-auto-train-image-models.md#supported-model-architectures).
To get the argument values needed to create the batch scoring model, refer to the scoring scripts generated under the outputs folder of the AutoML training runs. Use the hyperparameter values available in the model settings variable inside the scoring file for the best child run.
assert batch_size == img_data.shape[0]
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
-For object detection with the Faster R-CNN algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.
+For object detection with the Faster R-CNN architecture, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`. You can get the expected input height and width with the following code.
```python batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
assert batch_size == img_data.shape[0]
# [Object detection with YOLO](#tab/object-detect-yolo)
-For object detection with the YOLO algorithm, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`, and get the expected input height and width with the following code.
+For object detection with the YOLO architecture, follow the same preprocessing steps as image classification, except for image cropping. You can resize the image with height `600` and width `800`, and get the expected input height and width with the following code.
```python batch, channel, height_onnx, width_onnx = session.get_inputs()[0].shape
for image_idx, class_idx in zip(image_wise_preds[0], image_wise_preds[1]):
print('image: {}, class_index: {}, class_name: {}'.format(image_files[image_idx], class_idx, classes[class_idx])) ```
-For multi-class and multi-label classification, you can follow the same steps mentioned earlier for all the supported algorithms in AutoML.
+For multi-class and multi-label classification, you can follow the same steps mentioned earlier for all the supported model architectures in AutoML.
# [Object detection with Faster R-CNN or RetinaNet](#tab/object-detect-cnn)
machine-learning How To Kubernetes Inference Routing Azureml Fe https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-kubernetes-inference-routing-azureml-fe.md
The following diagram shows the connectivity requirements for AKS inferencing. B
For general AKS connectivity requirements, see [Control egress traffic for cluster nodes in Azure Kubernetes Service](../aks/limit-egress-traffic.md).
-For accessing Azure ML services behind a firewall, see [How to access azureml behind firewall](./how-to-access-azureml-behind-firewall.md#kubernetes-compute).
+For accessing Azure ML services behind a firewall, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
### Overall DNS resolution requirements
machine-learning How To Log Mlflow Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-log-mlflow-models.md
description: Learn how to start logging MLflow models instead of artifacts using
+ Last updated 07/8/2022
machine-learning How To Manage Compute Sessions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-compute-sessions.md
+
+ Title: How to manage compute sessions
+
+description: Use the session management panel to manage the active notebook and terminal sessions running on a compute instance.
+++++++ Last updated : 1/18/2023
+# Customer intent: As a data scientist, I want to manage the notebook and terminal sessions on my compute instance for optimal performance.
++
+# Manage notebook and terminal sessions
+
+Notebook and terminal sessions run on the compute and maintain your current working state.
+
+When you reopen a notebook, or reconnect to a terminal session, you can reconnect to the previous session state (including command history, execution history, and defined variables). However, too many active sessions may slow down the performance of your compute. With too many active sessions, you may find that typing in a terminal or notebook cell lags, or that command execution feels slower than expected.
+
+Use the session management panel in Azure Machine Learning studio to help you manage your active sessions and optimize the performance of your compute instance. Navigate to this session management panel from the compute toolbar of either a terminal tab or a notebook tab.
+
+> [!NOTE]
+> For optimal performance, we recommend you don't keep more than six active sessions - and the fewer the better.
++
+## Notebook sessions
+
+In the session management panel, select a linked notebook name in the notebook sessions section to reopen a notebook with its previous state.
+
+Notebook sessions are kept active when you close a notebook tab in the Azure Machine Learning studio. So, when you reopen a notebook you'll have access to previously defined variables and execution state - in this case, you're benefiting from the active notebook session.
+
+However, keeping too many active notebook sessions can slow down the performance of your compute. So, you should use the session management panel to shut down any notebook sessions you no longer need.
+
+Select **Manage active sessions** in the notebook toolbar to open the session management panel and shut down the sessions you no longer need. In the following image, you can see that the tooltip shows the count of active notebook sessions.
++
+## Terminal sessions
+
+In the session management panel, you can select a terminal link to reopen a terminal tab connected to that previous terminal session.
+
+In contrast to notebook sessions, terminal sessions are terminated when you close a terminal tab. However, if you navigate away from the Azure Machine Learning studio without closing a terminal tab, the session may remain open. You should shut down any terminal sessions you no longer need by using the session management panel.
+
+Select **Manage active sessions** in the terminal toolbar to open the session management panel and shut down the sessions you no longer need. In the following image, you can see that the tooltip shows the count of active terminal sessions.
++
+## Next steps
+
+* [How to create and manage files in your workspace](how-to-manage-files.md)
+* [Run Jupyter notebooks in your workspace](how-to-run-jupyter-notebooks.md)
+* [Access a compute instance terminal in your workspace](how-to-access-terminal.md)
machine-learning How To Manage Models Mlflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-models-mlflow.md
description: Explains how to use MLflow for managing models in Azure Machine Lea
+ Last updated 06/08/2022
machine-learning How To Manage Workspace Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-manage-workspace-terraform.md
+ Last updated 01/05/2022 ms.tool: terraform
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-network-security-overview.md
In this section, you learn how to secure the training environment in Azure Machi
To secure the training environment, use the following steps:
-1. Create an Azure Machine Learning [compute instance and computer cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster) to run the training job.
-1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+1. Create an Azure Machine Learning [compute instance and compute cluster in the virtual network](how-to-secure-training-vnet.md) to run the training job.
+1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md) so that management services can submit jobs to your compute resources.
> [!TIP] > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
machine-learning How To Prepare Datasets For Automl Images https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prepare-datasets-for-automl-images.md
+ Last updated 05/26/2022
machine-learning How To Prevent Data Loss Exfiltration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-prevent-data-loss-exfiltration.md
Previously updated : 08/26/2022 Last updated : 01/20/2023+ # Azure Machine Learning data exfiltration prevention
Azure Machine Learning has several inbound and outbound dependencies. Some of th
* __Storage Outbound__: This requirement comes from compute instance and compute cluster. A malicious agent can use this outbound rule to exfiltrate data by provisioning and saving data in their own storage account. You can remove data exfiltration risk by using an Azure Service Endpoint Policy and Azure Batch's simplified node communication architecture.
- * __AzureFrontDoor.frontend outbound__: Azure Front Door is used by the Azure Machine Learning studio UI and AutoML. Instead of allowing outbound to the service tag (AzureFrontDoor.frontend), switch to the following fully qulified domain names (FQDN). Switching to these FQDNs removes unnecessary outbound traffic included in the service tag and allows only what is needed for Azure Machine Learning studio UI and AutoML.
+ * __AzureFrontDoor.frontend outbound__: Azure Front Door is used by the Azure Machine Learning studio UI and AutoML. Instead of allowing outbound to the service tag (AzureFrontDoor.frontend), switch to the following fully qualified domain names (FQDN). Switching to these FQDNs removes unnecessary outbound traffic included in the service tag and allows only what is needed for Azure Machine Learning studio UI and AutoML.
- `ml.azure.com` - `automlresources-prod.azureedge.net`
Service endpoint policies allow you to filter egress virtual network traffic to
### Inbound > [!IMPORTANT]
-> The following information __modifies__ the guidance provided in the [Inbound traffic](how-to-secure-training-vnet.md#inbound-traffic) section of the "Secure training environment with virtual networks" article.
+> The following information __modifies__ the guidance provided in the [How to secure training environment](how-to-secure-training-vnet.md) article.
-When using Azure Machine Learning __compute instance__ _with a public IP address_, allow inbound traffic from Azure Batch management (service tag `BatchNodeManagement.<region>`). A compute instance _with no public IP_ (preview) __doesn't__ require this inbound communication.
+When using Azure Machine Learning __compute instance__ _with a public IP address_, allow inbound traffic from Azure Batch management (service tag `BatchNodeManagement.<region>`). A compute instance _with no public IP_ __doesn't__ require this inbound communication.
### Outbound
When using Azure ML curated environments, make sure to use the latest environmen
-## Limitations
-
-If you want to have data exfiltration with **No Public IP option**, you need to opt in to this Azure Machine Learning preview. Microsoft will contact you once your subscription has been allowlisted to the preview. It may take one to two weeks to allowlist your subscription. Use the form at [https://forms.office.com/r/0Rw6mXTT07](https://forms.office.com/r/0Rw6mXTT07) to opt in to this Azure Machine Learning preview.
- ## Next steps For more information, see the following articles:
machine-learning How To Run Jupyter Notebooks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-run-jupyter-notebooks.md
Use the **Notebooks** section of your workspace to edit and run Jupyter notebook
## Edit a notebook
-To edit a notebook, open any notebook located in the **User files** section of your workspace. Click on the cell you wish to edit. If you don't have any notebooks in this section, see [Create and manage files in your workspace](how-to-manage-files.md).
+To edit a notebook, open any notebook located in the **User files** section of your workspace. Select the cell you wish to edit. If you don't have any notebooks in this section, see [Create and manage files in your workspace](how-to-manage-files.md).
You can edit the notebook without connecting to a compute instance. When you want to run the cells in the notebook, select or create a compute instance. If you select a stopped compute instance, it will automatically start when you run the first cell. When a compute instance is running, you can also use code completion, powered by [Intellisense](https://code.visualstudio.com/docs/editor/intellisense), in any Python notebook.
-You can also launch Jupyter or JupyterLab from the notebook toolbar. Azure Machine Learning does not provide updates and fix bugs from Jupyter or JupyterLab as they are Open Source products outside of the boundary of Microsoft Support.
+You can also launch Jupyter or JupyterLab from the notebook toolbar. Azure Machine Learning doesn't provide updates or bug fixes for Jupyter or JupyterLab, as they're open-source products outside the boundary of Microsoft Support.
## Focus mode
To run a notebook or a Python script, you first connect to a running [compute in
:::image type="content" source="media/how-to-run-jupyter-notebooks/start-compute.png" alt-text="Start compute instance":::
-Once you are connected to a compute instance, use the toolbar to run all cells in the notebook, or Control + Enter to run a single selected cell.
+Once you're connected to a compute instance, use the toolbar to run all cells in the notebook, or Control + Enter to run a single selected cell.
Only you can see and use the compute instances you create. Your **User files** are stored separately from the VM and are shared among all compute instances in the workspace.
Select the tool to show the variable explorer window.
## Navigate with a TOC
-On the notebook toolbar, use the **Table of contents** tool to display or hide the table of contents. Start a markdown cell with a heading to add it to the table of contents. Click on an entry in the table to scroll to that cell in the notebook.
+On the notebook toolbar, use the **Table of contents** tool to display or hide the table of contents. Start a markdown cell with a heading to add it to the table of contents. Select an entry in the table to scroll to that cell in the notebook.
:::image type="content" source="media/how-to-run-jupyter-notebooks/table-of-contents.png" alt-text="Screenshot: Table of contents in the notebook":::
On the notebook toolbar, use the **Table of contents** tool to display or hide
The notebook toolbar allows you to change the environment on which your notebook runs.
-These actions will not change the notebook state or the values of any variables in the notebook:
+These actions won't change the notebook state or the values of any variables in the notebook:
|Action |Result |
|---|---|
Similar to Jupyter Notebooks, Azure Machine Learning studio notebooks have a mod
### Command mode shortcuts
-A cell is in command mode when there is no text cursor prompting you to type. When a cell is in Command mode, you can edit the notebook as a whole but not type into individual cells. Enter command mode by pressing `ESC` or using the mouse to select outside of a cell's editor area. The left border of the active cell is blue and solid, and its **Run** button is blue.
+A cell is in command mode when there's no text cursor prompting you to type. When a cell is in Command mode, you can edit the notebook as a whole but not type into individual cells. Enter command mode by pressing `ESC` or using the mouse to select outside of a cell's editor area. The left border of the active cell is blue and solid, and its **Run** button is blue.
:::image type="content" source="media/how-to-run-jupyter-notebooks/command-mode.png" alt-text="Notebook cell in command mode ":::
A cell is in command mode when there is no text cursor prompting you to type. Wh
### Edit mode shortcuts
-Edit mode is indicated by a text cursor prompting you to type in the editor area. When a cell is in edit mode, you can type into the cell. Enter edit mode by pressing `Enter` or using the mouse to select on a cell's editor area. The left border of the active cell is green and hatched, and its **Run** button is green. You also see the cursor prompt in the cell in Edit mode.
+Edit mode is indicated by a text cursor prompting you to type in the editor area. When a cell is in edit mode, you can type into the cell. Enter edit mode by pressing `Enter` or select a cell's editor area. The left border of the active cell is green and hatched, and its **Run** button is green. You also see the cursor prompt in the cell in Edit mode.
:::image type="content" source="media/how-to-run-jupyter-notebooks/edit-mode.png" alt-text="Notebook cell in edit mode":::
Using the following keystroke shortcuts, you can more easily navigate and run co
* **Connecting to a notebook**: If you can't connect to a notebook, ensure that web socket communication is **not** disabled. For compute instance Jupyter functionality to work, web socket communication must be enabled. Ensure your [network allows websocket connections](how-to-access-azureml-behind-firewall.md?tabs=ipaddress#microsoft-hosts) to *.instances.azureml.net and *.instances.azureml.ms.
-* **Private endpoint**: When a compute instance is deployed in a workspace with a private endpoint, it can be only be [accessed from within virtual network](./how-to-secure-training-vnet.md). If you are using custom DNS or hosts file, add an entry for < instance-name >.< region >.instances.azureml.ms with the private IP address of your workspace private endpoint. For more information see the [custom DNS](./how-to-custom-dns.md?tabs=azure-cli) article.
+* **Private endpoint**: When a compute instance is deployed in a workspace with a private endpoint, it can be only be [accessed from within virtual network](./how-to-secure-training-vnet.md). If you're using custom DNS or hosts file, add an entry for < instance-name >.< region >.instances.azureml.ms with the private IP address of your workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md?tabs=azure-cli) article.
-* **Kernel crash**: If your kernel crashed and was restarted, you can run the following command to look at jupyter log and find out more details: `sudo journalctl -u jupyter`. If kernel issues persist, consider using a compute instance with more memory.
+* **Kernel crash**: If your kernel crashed and was restarted, you can run the following command to look at the Jupyter log and find out more details: `sudo journalctl -u jupyter`. If kernel issues persist, consider using a compute instance with more memory.
* **Kernel not found** or **Kernel operations were disabled**: When using the default Python 3.8 kernel on a compute instance, you may get an error such as "Kernel not found" or "Kernel operations were disabled". To fix, use one of the following methods: * Create a new compute instance. This will use a new image where this problem has been resolved.
Using the following keystroke shortcuts, you can more easily navigate and run co
* **Expired token**: If you run into an expired token issue, sign out of your Azure ML studio, sign back in, and then restart the notebook kernel.
-* **File upload limit**: When uploading a file through the notebook's file explorer, you are limited files that are smaller than 5TB. If you need to upload a file larger than this, we recommend that you use the SDK to upload the data to a datastore. For more information, see [Create data assets](how-to-create-data-assets.md?tabs=Python-SDK).
+* **File upload limit**: When uploading a file through the notebook's file explorer, you're limited to files smaller than 5 TB. If you need to upload a larger file, we recommend that you use the SDK to upload the data to a datastore. For more information, see [Create data assets](how-to-create-data-assets.md?tabs=Python-SDK).
## Next steps * [Run your first experiment](tutorial-1st-experiment-sdk-train.md) * [Backup your file storage with snapshots](../storage/files/storage-snapshots-files.md)
-* [Working in secure environments](./how-to-secure-training-vnet.md#compute-cluster)
+* [Working in secure environments](./how-to-secure-training-vnet.md)
machine-learning How To Secure Batch Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-batch-endpoint.md
In order to enable the jump host VM (or self-hosted agent VMs if using [Azure Ba
Azure Machine Learning batch deployments run on compute clusters. To secure batch deployment jobs, those compute clusters have to be deployed in a virtual network too.
-1. Create an Azure Machine Learning [computer cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster).
+1. Create an Azure Machine Learning [compute cluster in the virtual network](how-to-secure-training-vnet.md).
2. Ensure all related services have private endpoints configured in the network. Private endpoints are used for not only Azure Machine Learning workspace, but also its associated resources such as Azure Storage, Azure Key Vault, or Azure Container Registry. Azure Container Registry is a required service. While securing the Azure Machine Learning workspace with virtual networks, please note that there are [some prerequisites about Azure Container Registry](how-to-secure-workspace-vnet.md#prerequisites).
-4. If your compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+4. If your compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#compute-instancecluster-with-public-ip) so that management services can submit jobs to your compute resources.
> [!TIP] > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
-1. Extra NSG may be required depending on your case. Please see [Limitations for Azure Machine Learning compute cluster](how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
+1. Extra NSG may be required depending on your case. For more information, see [How to secure your training environment](how-to-secure-training-vnet.md).
-For more details about how to configure compute clusters networking read [Secure an Azure Machine Learning training environment with virtual networks](how-to-secure-training-vnet.md#azure-machine-learning-compute-clusterinstance-1).
+For more information, see the [Secure an Azure Machine Learning training environment with virtual networks](how-to-secure-training-vnet.md) article.
## Using two-networks architecture
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-training-vnet.md
Previously updated : 11/16/2022 Last updated : 01/09/2023 ms.devlang: azurecli
ms.devlang: azurecli
> * [SDK v1](./v1/how-to-secure-training-vnet.md) > * [SDK v2 (current version)](how-to-secure-training-vnet.md)
-In this article, you learn how to secure training environments with a virtual network in Azure Machine Learning. You'll learn how to secure training environments through the Azure Machine Learning __studio__ and Python SDK __v2__.
+Azure Machine Learning compute instance and compute cluster can be used to securely train models in a virtual network. When planning your environment, you can configure the compute instance/cluster with or without a public IP address. The general differences between the two are:
+
+* **No public IP**: Reduces costs as it doesn't have the same networking resource requirements. Improves security by removing the requirement for inbound traffic from the internet. However, there are additional configuration changes required to enable outbound access to required resources (Azure Active Directory, Azure Resource Manager, etc.).
+* **Public IP**: Works by default, but costs more due to additional Azure networking resources. Requires inbound communication from the Azure Machine Learning service over the public internet.
+
+The following table contains the differences between these configurations:
+
+| Configuration | With public IP | Without public IP |
+| -- | -- | -- |
+| Inbound traffic | AzureMachineLearning | None |
+| Outbound traffic | By default, can access the public internet with no restrictions.<br>You can restrict what it accesses using a Network Security Group or firewall. | By default, it can't access the public internet since there's no public IP resource.<br>You need a Virtual Network NAT gateway or Firewall to route outbound traffic to required resources on the internet. |
+| Azure networking resources | Public IP address, load balancer, network interface | None |
+
+You can also use Azure Databricks or HDInsight to train models in a virtual network.
> [!TIP] > This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
In this article you learn how to secure the following training compute resources
+ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.
-+ An existing virtual network and subnet to use with your compute resources.
-
-+ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
-
- - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission isn't needed for Azure Resource Manager (ARM) template deployments.
- - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.
-
- For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking)
-
-### Azure Machine Learning compute cluster/instance
++ An existing virtual network and subnet to use with your compute resources. This VNet must be in the same subscription as your Azure Machine Learning workspace.
-* Compute clusters and instances create the following resources. If they're unable to create these resources (for example, if there's a resource lock on the resource group) then creation, scale out, or scale in, may fail.
+ - We recommend putting the storage accounts used by your workspace and training jobs in the same Azure region that you plan to use for your compute instances and clusters. If they aren't in the same Azure region, you may incur data transfer costs and increased network latency.
+ - Make sure that **WebSocket** communication is allowed to `*.instances.azureml.net` and `*.instances.azureml.ms` in your VNet. WebSockets are used by Jupyter on compute instances.
- * IP address.
- * Network Security Group (NSG).
- * Load balancer.
++ An existing subnet in the virtual network. This subnet is used when creating compute instances and clusters.
-* The virtual network must be in the same subscription as the Azure Machine Learning workspace.
-* The subnet used for the compute instance or cluster must have enough unassigned IP addresses.
+ - Make sure that the subnet isn't delegated to other Azure services.
+ - Make sure that the subnet contains enough free IP addresses. Each compute instance requires one IP address. Each *node* within a compute cluster requires one IP address.
- * A compute cluster can dynamically scale. If there aren't enough unassigned IP addresses, the cluster will be partially allocated.
- * A compute instance only requires one IP address.
++ If you have your own DNS server, we recommend using DNS forwarding to resolve the fully qualified domain names (FQDN) of compute instances and clusters. For more information, see [Use a custom DNS with Azure Machine Learning](how-to-custom-dns.md).
-* To create a compute cluster or instance without a public IP address (a preview feature), your workspace must use a private endpoint to connect to the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
-* If you plan to secure the virtual network by restricting traffic, see the [Required public internet access](#required-public-internet-access) section.
-* The subnet used to deploy compute cluster/instance shouldn't be delegated to any other service. For example, it shouldn't be delegated to ACI.
-* Compute cluster/instance deployment in virtual network is not supported with Azure Lighthouse
++ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
-### Azure Databricks
+ - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission isn't needed for Azure Resource Manager (ARM) template deployments.
+ - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.
-* The virtual network must be in the same subscription and region as the Azure Machine Learning workspace.
-* If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster.
+ For more information on Azure RBAC with networking, see the [Networking built-in roles](../role-based-access-control/built-in-roles.md#networking).
## Limitations
-### Azure Machine Learning compute cluster/instance
-
-* If put multiple compute instances or clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
-
- * One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
-
- > [!IMPORTANT]
- > Compute instance and compute cluster automatically create an NSG with the required rules.
- >
- > If you have another NSG at the subnet level, the rules in the subnet level NSG mustn't conflict with the rules in the automatically created NSG.
- >
- > To learn how the NSGs filter your network traffic, see [How network security groups filter network traffic](../virtual-network/network-security-group-how-it-works.md).
-
- * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
- * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
-
- The following screenshot shows an example of these rules:
-
- :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of NSG":::
--
- > [!TIP]
- > If your compute cluster or instance does not use a public IP address (a preview feature), these inbound NSG rules are not required.
-
- * For compute cluster or instance, it's now possible to remove the public IP address (a preview feature). If you have Azure Policy assignments prohibiting Public IP creation, then deployment of the compute cluster or instance will succeed.
-
- * One load balancer
-
- For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
-
- For a compute instance, these resources are kept until the instance is deleted. Stopping the instance doesn't remove the resources.
-
- > [!IMPORTANT]
- > These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure Policy assignment which prohibits creation of network security groups.
-
-* If you create a compute instance and plan to use the no public IP address configuration, your Azure Machine Learning workspace's managed identity must be assigned the __Reader__ role for the virtual network that contains the workspace. For more information on assigning roles, see [Steps to assign an Azure role](../role-based-access-control/role-assignments-steps.md).
-
- > [!IMPORTANT]
- > Using the __no public IP__ configuration requires you to opt-in to this preview. Before opting in, you must have created a workspace and a compute instance on the subscription you plan to use. You can delete the compute instance and/or workspace after creating them.
- >
- > Use the form at [https://forms.office.com/r/0Rw6mXTT07](https://forms.office.com/r/0Rw6mXTT07) to opt in to this Azure Machine Learning preview. Microsoft will contact you once your subscription has been allowlisted to the preview. It may take one to two weeks to allowlist your subscription. Opting-in provides the following benefits:
- > - Additional regions are available for use with no public IP configuration
- > - [Data exfiltration protection](how-to-prevent-data-loss-exfiltration.md).
- > - No networking costs from load balancer, public IP, or private link service.
- >
- > If you have been using compute instances configured for no public IP without opting-in to the preview using the form, you will need to delete and recreate them after your subscription has been allowlisted to take advantage of the new architecture and region availability. For existing compute clusters configured for no public IP, once the cluster has been reduced to 0 nodes (requires the minimum nodes to be configured as 0), it will take advantage of the new architecture the next time nodes are allocated after the subscription is allowlisted.
-
- [!INCLUDE [no-public-ip-info](../../includes/machine-learning-no-public-ip-availibility.md)]
-
-* If you have configured Azure Container Registry for your workspace behind the virtual network, you must use a compute cluster to build Docker images. If you use a compute cluster configured for no public IP address, you must provide some method for the cluster to access the public internet. Internet access is required when accessing images stored on the Microsoft Container Registry, packages installed on Pypi, Conda, etc. For more information, see [Enable Azure Container Registry](how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
-
-* If the Azure Storage Accounts for the workspace are also in the virtual network, use the following guidance on subnet limitations:
-
- * If you plan to use Azure Machine Learning __studio__ to visualize data or use designer, the storage account must be __in the same subnet as the compute instance or cluster__.
- * If you plan to use the __SDK__, the storage account can be in a different subnet.
-
- > [!NOTE]
- > Adding a resource instance for your workspace or selecting the checkbox for "Allow trusted Microsoft services to access this account" is not sufficient to allow communication from the compute.
-
-* When your workspace uses a private endpoint, the compute instance can only be accessed from inside the virtual network. If you use a custom DNS or hosts file, add an entry for `<instance-name>.<region>.instances.azureml.ms`. Map this entry to the private IP address of the workspace private endpoint. For more information, see the [custom DNS](./how-to-custom-dns.md) article.
-* Virtual network service endpoint policies don't work for compute cluster/instance system storage accounts.
-* If storage and compute instance are in different regions, you may see intermittent timeouts.
-* If the Azure Container Registry for your workspace uses a private endpoint to connect to the virtual network, you canΓÇÖt use a managed identity for the compute instance. To use a managed identity with the compute instance, don't put the container registry in the VNet.
-* If you want to use Jupyter Notebooks on a compute instance:
-
- * Don't disable websocket communication. Make sure your network allows websocket communication to `*.instances.azureml.net` and `*.instances.azureml.ms`.
- * Make sure that your notebook is running on a compute resource behind the same virtual network and subnet as your data. When creating the compute instance, use **Advanced settings** > **Configure virtual network** to select the network and subnet.
- * __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply: * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](../virtual-network/virtual-network-peering-overview.md).
In this article you learn how to secure the following training compute resources
> [!WARNING] > If you are using a __private endpoint-enabled workspace__, creating the cluster in a different region is __not supported__.
-* An Azure Machine Learning workspace requires outbound access to `storage.<region>/*.blob.core.windows.net` on the public internet, where `<region>` is the Azure region of the workspace. This outbound access is required by Azure Machine Learning compute cluster and compute instance. Both are based on Azure Batch, and need to access a storage account provided by Azure Batch on the public network.
+* Compute cluster/instance deployment in virtual network isn't supported with Azure Lighthouse.
- By using a Service Endpoint Policy, you can mitigate this vulnerability. This feature is currently in preview. For more information, see the [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md) article.
+## Compute instance/cluster with no public IP
-### Azure Databricks
+> [!IMPORTANT]
+> If you have been using compute instances configured for no public IP without opting-in to the preview, you will need to delete and recreate them after January 20 (when the feature is generally available).
+>
+> For existing compute clusters configured for no public IP, once the cluster has been reduced to 0 nodes (requires the minimum nodes to be configured as 0), it will take advantage of the new architecture the next time nodes are allocated after the subscription is allowlisted.
-* In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
-* Azure Databricks doesn't use a private endpoint to communicate with the virtual network.
+The following configurations are in addition to those listed in the [Prerequisites](#prerequisites) section, and are specific to **creating** compute instances/clusters configured for no public IP:
-For more information on using Azure Databricks in a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
++ Your workspace must use a private endpoint to connect to the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
-### Azure HDInsight or virtual machine
++ In your VNet, allow **outbound** traffic to the following service tags or fully qualified domain names (FQDN):
-* Azure Machine Learning supports only virtual machines that are running Ubuntu.
+ | Service tag | Protocol | Port | Notes |
+ | -- |:--:|:--:| -- |
+ | `AzureMachineLearning` | TCP<br>UDP | 443/8787/18881<br>5831 | Communication with the Azure Machine Learning service.|
+ | `BatchNodeManagement.<region>` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. Compute instance and compute cluster are implemented using the Azure Batch service.|
+ | `Storage.<region>` | TCP | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. This service tag is used to communicate with the Azure Storage account used by Azure Batch. |
-## Required public internet access
+ > [!IMPORTANT]
+ > The outbound access to `Storage.<region>` could potentially be used to exfiltrate data from your workspace. By using a Service Endpoint Policy, you can mitigate this vulnerability. For more information, see the [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md) article.
+ | FQDN | Protocol | Port | Notes |
+ | - |:-:|:-:| - |
+ | `<region>.tundra.azureml.ms` | UDP | 5831 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. |
+ | `graph.windows.net` | TCP | 443 | Communication with the Microsoft Graph API.|
+ | `*.instances.azureml.ms` | TCP | 443/8787/18881 | Communication with Azure Machine Learning. |
+ | `<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.blob.core.windows.net` | TCP | 443 | Communication with Azure Blob storage. |
+ | `*.queue.core.windows.net` | TCP | 443 | Communication with Azure Queue storage. |
+ | `*.table.core.windows.net` | TCP | 443 | Communication with Azure Table storage. |
-For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](how-to-access-azureml-behind-firewall.md).
-## Compute cluster
++ Create either a firewall and outbound rules or a NAT gateway and network security groups to allow outbound traffic. Since the compute has no public IP address, it can't communicate with resources on the public internet without this configuration. For example, it wouldn't be able to communicate with Azure Active Directory or Azure Resource Manager. Installing Python packages from public sources would also require this configuration. (A NAT gateway sketch appears after this list.)
-Use the following steps to create a compute cluster in the Azure Machine Learning studio:
+ For more information on the outbound traffic that is used by Azure Machine Learning, see the following articles:
+ - [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md).
+ - [Azure's outbound connectivity methods](/azure/load-balancer/load-balancer-outbound-connections#scenarios).
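+
+As a hedged illustration of the NAT gateway option named above, the following sketch attaches a NAT gateway to the compute subnet. All resource names here are hypothetical placeholders:
+
+```azurecli
+# Sketch: give the no-public-IP subnet outbound access through a NAT gateway
+# (names such as nat-ip, aml-nat, yourvnet, and yoursubnet are placeholders)
+az network public-ip create --resource-group rg --name nat-ip --sku Standard
+az network nat gateway create --resource-group rg --name aml-nat --public-ip-addresses nat-ip
+az network vnet subnet update --resource-group rg --vnet-name yourvnet --name yoursubnet --nat-gateway aml-nat
+```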
-1. Sign in to [Azure Machine Learning studio](https://ml.azure.com/), and then select your subscription and workspace.
-1. Select __Compute__ on the left, __Compute clusters__ from the center, and then select __+ New__.
+Use the following information to create a compute instance or cluster with no public IP address:
- :::image type="content" source="./media/how-to-enable-virtual-network/create-compute-cluster.png" alt-text="Screenshot of creating a cluster":::
+# [Azure CLI](#tab/cli)
-1. In the __Create compute cluster__ dialog, select the VM size and configuration you need and then select __Next__.
+In the `az ml compute create` command, replace the following values:
- :::image type="content" source="./media/how-to-enable-virtual-network/create-compute-cluster-vm.png" alt-text="Screenshot of setting VM config":::
+* `rg`: The resource group that the compute will be created in.
+* `ws`: The Azure Machine Learning workspace name.
+* `yourvnet`: The Azure Virtual Network.
+* `yoursubnet`: The subnet to use for the compute.
+* `AmlCompute` or `ComputeInstance`: Specifying `AmlCompute` creates a *compute cluster*. `ComputeInstance` creates a *compute instance*.
-1. From the __Configure Settings__ section, set the __Compute name__, __Virtual network__, and __Subnet__.
+```azurecli
+# Use --type ComputeInstance instead of AmlCompute to create a compute instance
+az ml compute create --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type AmlCompute --enable-node-public-ip false
+```
- :::image type="content" source="media/how-to-enable-virtual-network/create-compute-cluster-config.png" alt-text="Screenshot shows setting compute name, virtual network, and subnet.":::
+# [Python](#tab/python)
- > [!TIP]
- > If your workspace uses a private endpoint to connect to the virtual network, the __Virtual network__ selection field is greyed out.
- >
+> [!IMPORTANT]
+> The following code snippet assumes that `ml_client` points to an Azure Machine Learning workspace that uses a private endpoint to participate in a VNet. For more information on using `ml_client`, see the tutorial [Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
-1. Select __Create__ to create the compute cluster.
+```python
+from azure.ai.ml.entities import AmlCompute
+# specify aml compute name.
+cpu_compute_target = "cpu-cluster"
-When the creation process finishes, you train your model by using the cluster in an experiment.
+try:
+ ml_client.compute.get(cpu_compute_target)
+except Exception:
+ print("Creating a new cpu compute target...")
+ compute = AmlCompute(
+ name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4,
+ vnet_name="yourvnet", subnet_name="yoursubnet", enable_node_public_ip=False
+ )
+ ml_client.compute.begin_create_or_update(compute).result()
+```
+# [Studio](#tab/azure-studio)
-### No public IP for compute clusters (preview)
+1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com), and then select your subscription and workspace.
+1. Select the **Compute** page from the left navigation bar.
+1. Select **+ New** from either the compute instance or compute cluster tab.
+1. Configure the VM size and configuration you need, then select **Next**.
+1. From **Advanced Settings**, select **Enable virtual network**, choose your virtual network and subnet, and then select the **No Public IP** option under the VNet/subnet section.
-When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet thus eliminating a significant threat vector. **No public IP** clusters help comply with no public IP policies many enterprises have.
+ :::image type="content" source="./media/how-to-secure-training-vnet/no-public-ip.png" alt-text="A screenshot of how to configure no public IP for compute instance and compute cluster." lightbox="./media/how-to-secure-training-vnet/no-public-ip.png":::
-A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877** and inbound from source **AzureLoadBalancer** and any port source to destination **VirtualNetwork** and port **44224** destination.
+
-> [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Cluster. This prevents *outbound* access to required resources such as Azure Active Directory, Azure Resource Manager, Microsoft Container Registry, and other outbound resources as listed in the [Required public internet access](#required-public-internet-access) section. Or to non-Microsoft resources such as Pypi or Conda repositories. To resolve this problem, you need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP.
+## Compute instance/cluster with public IP
-**No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
-A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
+The following configurations are in addition to those listed in the [Prerequisites](#prerequisites) section, and are specific to **creating** compute instances/clusters that have a public IP:
-For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute cluster is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
++ If you put multiple compute instances/clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
-You can use a service endpoint or private endpoint for your Azure container registry and Azure storage in the subnet in which cluster is deployed.
+ * A network security group (NSG) is automatically created. This NSG allows inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
-To create a no public IP address compute cluster (a preview feature) in studio, set **No public IP** checkbox in the virtual network section.
-You can also create no public IP compute cluster through an ARM template. In the ARM template set enableNodePublicIP parameter to false.
+ > [!IMPORTANT]
+ > Compute instance and compute cluster automatically create an NSG with the required rules.
+ >
+ > If you have another NSG at the subnet level, the rules in the subnet level NSG mustn't conflict with the rules in the automatically created NSG.
+ >
+ > To learn how the NSGs filter your network traffic, see [How network security groups filter network traffic](../virtual-network/network-security-group-how-it-works.md).
-**Troubleshooting**
+ * One load balancer
-* If you get this error message during creation of cluster `The specified subnet has PrivateLinkServiceNetworkPolicies or PrivateEndpointNetworkEndpoints enabled`, follow the instructions from [Disable network policies for Private Link service](../private-link/disable-private-link-service-network-policy.md) and [Disable network policies for Private Endpoint](../private-link/disable-private-endpoint-network-policy.md).
+ For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
-* If job execution fails with connection issues to ACR or Azure Storage, verify that customer has added ACR and Azure Storage service endpoint/private endpoints to subnet and ACR/Azure Storage allows the access from the subnet.
+ For a compute instance, these resources are kept until the instance is deleted. Stopping the instance doesn't remove the resources.
-* To ensure that you've created a no public IP cluster, in Studio when looking at cluster details you'll see **No Public IP** property is set to **true** under resource properties.
+ > [!IMPORTANT]
+ > These resources are limited by the subscription's [resource quotas](../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked, deletion of the compute cluster/instance fails. The load balancer can't be deleted until the compute cluster/instance is deleted. Also ensure there's no Azure Policy assignment that prohibits creation of network security groups.
-## Compute instance
++ In your VNet, allow **inbound** TCP traffic on port **44224** from the `AzureMachineLearning` service tag. (CLI sketches of this rule and of the user-defined route described in the following note appear after this list.)
+ > [!IMPORTANT]
+ > The compute instance/cluster is dynamically assigned an IP address when it's created. Since the address isn't known before creation, and inbound access is required as part of the creation process, you can't statically allow it on your firewall. Instead, if you're using a firewall with the VNet, you must create a user-defined route to allow this inbound traffic.
++ In your VNet, allow **outbound** traffic to the following service tags:
-For steps on how to create a compute instance deployed in a virtual network, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
+ | Service tag | Protocol | Port | Notes |
+ | -- |:--:|:--:| -- |
+ | `AzureMachineLearning` | TCP<br>UDP | 443/8787/18881<br>5831 | Communication with the Azure Machine Learning service.|
+ | `BatchNodeManagement.<region>` | ANY | 443| Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. Compute instance and compute cluster are implemented using the Azure Batch service.|
+ | `Storage.<region>` | TCP | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. This service tag is used to communicate with the Azure Storage account used by Azure Batch. |
-### No public IP for compute instances (preview)
+ > [!IMPORTANT]
+ > The outbound access to `Storage.<region>` could potentially be used to exfiltrate data from your workspace. By using a Service Endpoint Policy, you can mitigate this vulnerability. For more information, see the [Azure Machine Learning data exfiltration prevention](how-to-prevent-data-loss-exfiltration.md) article.
-When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute instance node from the internet thus eliminating a significant threat vector. Compute instances will also do packet filtering to reject any traffic from outside virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
+ | FQDN | Protocol | Port | Notes |
+ | - |:-:|:-:| - |
+ | `<region>.tundra.azureml.ms` | UDP | 5831 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. |
+ | `graph.windows.net` | TCP | 443 | Communication with the Microsoft Graph API.|
+ | `*.instances.azureml.ms` | TCP | 443/8787/18881 | Communication with Azure Machine Learning. |
+ | `<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine Learning workspace. Communication with Azure Batch. |
+ | `*.blob.core.windows.net` | TCP | 443 | Communication with Azure Blob storage. |
+ | `*.queue.core.windows.net` | TCP | 443 | Communication with Azure Queue storage. |
+ | `*.table.core.windows.net` | TCP | 443 | Communication with Azure Table storage. |
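+
+As a hedged illustration of the inbound requirement above, the following sketch adds the rule to a subnet-level NSG you manage, plus the user-defined route described in the earlier note. All resource names are hypothetical placeholders:
+
+```azurecli
+# Sketch: allow inbound TCP 44224 from the AzureMachineLearning service tag
+# (aml-nsg is a placeholder for an NSG you manage at the subnet level)
+az network nsg rule create --resource-group rg --nsg-name aml-nsg --name AllowAzureMLInbound \
+    --priority 1000 --direction Inbound --access Allow --protocol Tcp \
+    --source-address-prefixes AzureMachineLearning --source-port-ranges '*' \
+    --destination-address-prefixes '*' --destination-port-ranges 44224
+
+# Sketch: when a firewall is used, route AzureMachineLearning traffic back over
+# the internet instead of through the firewall (route table name is a placeholder)
+az network route-table create --resource-group rg --name aml-rt
+az network route-table route create --resource-group rg --route-table-name aml-rt \
+    --name AzureMLInbound --address-prefix AzureMachineLearning --next-hop-type Internet
+az network vnet subnet update --resource-group rg --vnet-name yourvnet --name yoursubnet --route-table aml-rt
+```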
-> [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Instance. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../virtual-network/nat-gateway/nat-overview.md) with a public IP. Specifically, you need access to Azure Active Directory, Azure Resource Manager, Microsoft Container Registry, and other outbound resources as listed in the [Required public internet access](#required-public-internet-access) section. You may also need outbound access to non-Microsoft resources such as Pypi or Conda repositories.
+Use the following information to create a compute instance or cluster with a public IP address in the VNet:
-For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
+# [Azure CLI](#tab/cli)
-A compute instance with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork**, any port source, destination of **VirtualNetwork**, and destination port of **29876, 29877, 44224**.
+In the `az ml compute create` command, replace the following values:
-A compute instance with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service source IP](../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
+* `rg`: The resource group that the compute will be created in.
+* `ws`: The Azure Machine Learning workspace name.
+* `yourvnet`: The Azure Virtual Network.
+* `yoursubnet`: The subnet to use for the compute.
+* `AmlCompute` or `ComputeInstance`: Specifying `AmlCompute` creates a *compute cluster*. `ComputeInstance` creates a *compute instance*.
-To create a no public IP address compute instance (a preview feature) in studio, set **No public IP** checkbox in the virtual network section.
-You can also create no public IP compute instance through an ARM template. In the ARM template set enableNodePublicIP parameter to false.
+```azurecli
+# Use --type ComputeInstance instead of AmlCompute to create a compute instance
+az ml compute create --resource-group rg --workspace-name ws --vnet-name yourvnet --subnet yoursubnet --type AmlCompute
+```
-Next steps:
-* [Use custom DNS](how-to-custom-dns.md)
-* [Use a firewall](how-to-access-azureml-behind-firewall.md)
+# [Python](#tab/python)
+
+> [!IMPORTANT]
+> The following code snippet assumes that `ml_client` points to an Azure Machine Learning workspace that uses a private endpoint to participate in a VNet. For more information on using `ml_client`, see the tutorial [Azure Machine Learning in a day](tutorial-azure-ml-in-a-day.md).
+
+```python
+from azure.ai.ml.entities import AmlCompute
-## Inbound traffic
+# specify aml compute name.
+cpu_compute_target = "cpu-cluster"
+try:
+ ml_client.compute.get(cpu_compute_target)
+except Exception:
+ print("Creating a new cpu compute target...")
+ # Replace "yourvnet" and "yoursubnet" with your VNet and subnet.
+ compute = AmlCompute(
+ name=cpu_compute_target, size="STANDARD_D2_V2", min_instances=0, max_instances=4,
+ vnet_name="yourvnet", subnet_name="yoursubnet"
+ )
+ ml_client.compute.begin_create_or_update(compute).result()
+```
-For more information on input and output traffic requirements for Azure Machine Learning, see [Use a workspace behind a firewall](how-to-access-azureml-behind-firewall.md).
+# [Studio](#tab/azure-studio)
+
+1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com), and then select your subscription and workspace.
+1. Select the **Compute** page from the left navigation bar.
+1. Select **+ New** from either the compute instance or compute cluster tab.
+1. Configure the VM size and configuration you need, then select **Next**.
+1. From **Advanced Settings**, select **Enable virtual network**, and then select your virtual network and subnet.
+
+ :::image type="content" source="./media/how-to-secure-training-vnet/with-public-ip.png" alt-text="A screenshot of how to configure a compute instance/cluster in a VNet with a public IP." lightbox="./media/how-to-secure-training-vnet/with-public-ip.png":::
+
+## Azure Databricks
-For specific information on using Azure Databricks with a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
+* The virtual network must be in the same subscription and region as the Azure Machine Learning workspace.
+* If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster.
+* In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
+* Azure Databricks doesn't use a private endpoint to communicate with the virtual network.
-<a id="vmorhdi"></a>
+For specific information on using Azure Databricks with a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
## Virtual machine or HDInsight cluster
In this section, you learn how to use a virtual machine or Azure HDInsight clust
### Create the VM or HDInsight cluster
+> [!IMPORTANT]
+> Azure Machine Learning supports only virtual machines that are running Ubuntu.
+ Create a VM or HDInsight cluster by using the Azure portal or the Azure CLI, and put the cluster in an Azure virtual network. For more information, see the following articles: * [Create and manage Azure virtual networks for Linux VMs](../virtual-machines/linux/tutorial-virtual-network.md)
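For example, a minimal hedged sketch that creates an Ubuntu VM in an existing virtual network (the resource, VNet, and subnet names are placeholders):

```azurecli
# Sketch: create an Ubuntu VM in an existing VNet/subnet (placeholder names)
az vm create --resource-group rg --name attached-vm --image Ubuntu2204 \
    --vnet-name yourvnet --subnet yoursubnet \
    --admin-username azureuser --generate-ssh-keys
```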
Allow Azure Machine Learning to communicate with the SSH port on the VM or clust
Keep the default outbound rules for the network security group. For more information, see the default security rules in [Security groups](../virtual-network/network-security-groups-overview.md#default-security-rules).
-If you don't want to use the default outbound rules and you do want to limit the outbound access of your virtual network, see the [required public internet access](#required-public-internet-access) section.
+If you don't want to use the default outbound rules and you do want to limit the outbound access of your virtual network, see the [required public internet access](#required-public-internet-access-to-train-models) section.
### Attach the VM or HDInsight cluster Attach the VM or HDInsight cluster to your Azure Machine Learning workspace. For more information, see [Manage compute resources for model training and deployment in studio](how-to-create-attach-compute-studio.md).
+## Required public internet access to train models
+
+> [!IMPORTANT]
+> While previous sections of this article describe configurations required to **create** compute resources, the configuration information in this section is required to **use** these resources to train models.
++
+For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](how-to-access-azureml-behind-firewall.md).
## Next steps This article is part of a series on securing an Azure Machine Learning workflow. See the other articles in this series:
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Previously updated : 06/17/2022 Last updated : 01/19/2023 -+ # Secure an Azure Machine Learning workspace with virtual networks
When ACR is behind a virtual network, Azure Machine Learning can't use it to d
[!INCLUDE [machine-learning-required-public-internet-access](../../includes/machine-learning-public-internet-access.md)]
-For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](how-to-access-azureml-behind-firewall.md).
+For information on using a firewall solution, see [Configure required input and output communication](how-to-access-azureml-behind-firewall.md).
## Secure the workspace with private endpoint
Azure Machine Learning supports storage accounts configured to use either a priv
* **Blob** * **File**
- * **Queue** - Only needed if you plan to use [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
- * **Table** - Only needed if you plan to use [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
+ * **Queue** - Only needed if you plan to use [Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) or the [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
+ * **Table** - Only needed if you plan to use [Batch endpoints](concept-endpoints.md#what-are-batch-endpoints) or the [ParallelRunStep](./tutorial-pipeline-batch-scoring-classification.md) in an Azure Machine Learning pipeline.
:::image type="content" source="./media/how-to-enable-studio-virtual-network/configure-storage-private-endpoint.png" alt-text="Screenshot showing private endpoint configuration page with blob and file options":::
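As a hedged illustration, the following sketch creates a private endpoint for the storage account's **Blob** sub-resource from the CLI. The names are hypothetical; repeat with `--group-id file`, `queue`, or `table` as needed:

```azurecli
# Sketch: private endpoint for the blob sub-resource (hypothetical names)
storage_id=$(az storage account show --name mystorage --resource-group rg --query id --output tsv)
az network private-endpoint create --resource-group rg --name mystorage-blob-pe \
    --vnet-name yourvnet --subnet yoursubnet \
    --private-connection-resource-id $storage_id \
    --group-id blob --connection-name mystorage-blob-connection
```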
machine-learning How To Set Up Vs Code Remote https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-set-up-vs-code-remote.md
To configure a remote compute instance for development, you'll need a few prereq
* Azure Machine Learning compute instance. [Use the Azure Machine Learning Visual Studio Code extension to create a new compute instance](how-to-manage-resources-vscode.md#create-compute-instance) if you don't have one. > [!IMPORTANT]
-> To connect to a compute instance behind a firewall, see [use workspace behind a Firewall for Azure Machine Learning](how-to-access-azureml-behind-firewall.md#visual-studio-code-hosts).
+> To connect to a compute instance behind a firewall, see [Configure inbound and outbound network traffic](how-to-access-azureml-behind-firewall.md#scenario-visual-studio-code).
To connect to your remote compute instance:
machine-learning How To Setup Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-customer-managed-keys.md
description: 'Learn how to improve data security with Azure Machine Learning by
-+ Previously updated : 06/24/2022 Last updated : 01/20/2023 # Use customer-managed keys with Azure Machine Learning
In the [customer-managed keys concepts article](concept-customer-managed-keys.md
* Resources managed by Microsoft in your subscription can't transfer ownership to you. * You can't delete Microsoft-managed resources used for customer-managed keys without also deleting your workspace. * The key vault that contains your customer-managed key must be in the same Azure subscription as the Azure Machine Learning workspace.
-* Workspace with customer-managed key does not currently support v2 online endpoint and batch endpoint.
+* Workspace with customer-managed key doesn't currently support v2 online endpoint and batch endpoint.
> [!IMPORTANT] > When using a customer-managed key, the costs for your subscription will be higher because of the additional resources in your subscription. To estimate the cost, use the [Azure pricing calculator](https://azure.microsoft.com/pricing/calculator/).
machine-learning How To Setup Mlops Azureml https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-setup-mlops-azureml.md
+
+ Title: Set up MLOps with Azure DevOps
+
+description: Learn how to set up a sample MLOps environment in AzureML
+++++ Last updated : 11/29/2022++++
+# Set up MLOps with Azure DevOps
++
+Azure Machine Learning allows you to integrate with [Azure DevOps pipelines](/azure/devops/pipelines/) to automate the machine learning lifecycle. Some of the operations you can automate are:
+
+* Deployment of AzureML infrastructure
+* Data preparation (extract, transform, load operations)
+* Training machine learning models with on-demand scale-out and scale-up
+* Deployment of machine learning models as public or private web services
+* Monitoring deployed machine learning models (such as for performance analysis)
+
+In this article, you learn about using Azure Machine Learning to set up an end-to-end MLOps pipeline that runs a linear regression to predict taxi fares in NYC. The pipeline is made up of components, each serving different functions, which can be registered with the workspace, versioned, and reused with various inputs and outputs. You'll use the [recommended Azure architecture for MLOps](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) and [Azure MLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) to quickly set up an MLOps project in AzureML.
+
+> [!TIP]
+> We recommend you understand some of the [recommended Azure architectures](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) for MLOps before implementing any solution. You'll need to pick the best architecture for your given machine learning project.
+
+## Prerequisites
+
+- An Azure subscription. If you don't have an Azure subscription, create a free account before you begin. Try the [free or paid version of Azure Machine Learning](https://azure.microsoft.com/free/).
+- An Azure Machine Learning workspace.
+- The Azure Machine Learning [SDK v2 for Python](https://aka.ms/sdk-v2-install).
+- The Azure Machine Learning [CLI v2](how-to-configure-cli.md).
+- Git running on your local machine.
+- An [organization](/azure/devops/organizations/accounts/create-organization) in Azure DevOps.
+- [Azure DevOps project](how-to-devops-machine-learning.md) that will host the source repositories and pipelines.
+- The [Terraform extension for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks) if you're using Azure DevOps + Terraform to spin up infrastructure.
+
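+For example, a minimal sketch that covers the CLI v2 and Git prerequisites (assumes the Azure CLI itself is already installed):
+
+```azurecli
+# Sketch: install the ML CLI v2 extension and check the Git version
+az extension add --name ml
+git --version   # should report 2.27 or newer
+```
+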
+> [!NOTE]
+>
+>Git version 2.27 or newer is required. For more information on installing the Git command, see https://git-scm.com/downloads and select your operating system.
+
+> [!IMPORTANT]
+>The CLI commands in this article were tested using Bash. If you use a different shell, you may encounter errors.
+
+## Set up authentication with Azure and DevOps
+
+Before you can set up an MLOps project with AzureML, you need to set up authentication for Azure DevOps.
+
+### Create service principal
+ This demo requires the creation of one or two service principals, depending on how many environments you want to work on (Dev, Prod, or both). These service principals can be created using one of the following methods:
+
+# [Create from Azure Cloud Shell](#tab/azure-shell)
+
+1. Launch the [Azure Cloud Shell](https://shell.azure.com).
+
+ > [!TIP]
+ > The first time you launch the Cloud Shell, you'll be prompted to create a storage account for the Cloud Shell.
+
+1. If prompted, choose **Bash** as the environment used in the Cloud Shell. You can also change environments in the drop-down on the top navigation bar.
+
+ ![Screenshot of the cloud shell environment dropdown.](./media/how-to-setup-mlops-azureml/PS_CLI1_1.png)
+
+1. Copy the bash commands below to your computer and update the **projectName**, **subscriptionId**, and **environment** variables with the values for your project. If you are creating both a Dev and Prod environment, you'll need to run this script once for each environment, creating a service principal for each. This command will also grant the **Contributor** role to the service principal in the subscription provided. This is required for Azure DevOps to properly use resources in that subscription.
+
+ ``` bash
+ projectName="<your project name>"
+ roleName="Contributor"
+ subscriptionId="<subscription Id>"
+ environment="<Dev|Prod>" #First letter should be capitalized
+ servicePrincipalName="Azure-ARM-${environment}-${projectName}"
+ # Verify the ID of the active subscription
+ echo "Using subscription ID $subscriptionID"
+ echo "Creating SP for RBAC with name $servicePrincipalName, with role $roleName and in scopes /subscriptions/$subscriptionId"
+ az ad sp create-for-rbac --name $servicePrincipalName --role $roleName --scopes /subscriptions/$subscriptionId
+ echo "Please ensure that the information created here is properly save for future use."
+ ```
+
+1. Copy your edited commands into the Azure Shell and run them (**Ctrl** + **Shift** + **v**).
+
+1. After running these commands, you'll be presented with information related to the service principal. Save this information to a safe location; you'll use it later in the demo to configure Azure DevOps.
+
+ ```json
+ {
+ "appId": "<application id>",
+ "displayName": "Azure-ARM-dev-Sample_Project_Name",
+ "password": "<password>",
+ "tenant": "<tenant id>"
+ }
+ ```
+
+1. Repeat **Step 3.** if you're creating service principals for Dev and Prod environments.
+
+1. Close the Cloud Shell once the service principals are created.
+
+
+# [Create from Azure portal](#tab/azure-portal)
+
+1. Navigate to [Azure App Registrations](https://entra.microsoft.com/#view/Microsoft_AAD_RegisteredApps/ApplicationsListBlade/quickStartType~/null/sourceTypeMicrosoft_AAD_IAM)
+
+1. Select **New Registration**.
+
+ ![Screenshot of service principal setup.](./media/how-to-setup-mlops-azureml/SP-setup-ownership-tab.png)
+
+1. Go through the process of creating a service principal (SP), selecting **Accounts in any organizational directory (Any Azure AD directory - Multitenant)**, and name it **Azure-ARM-Dev-ProjectName**. Once created, repeat and create a new SP named **Azure-ARM-Prod-ProjectName**. Replace **ProjectName** with the name of your project so that the service principal can be uniquely identified.
+
+1. Go to **Certificates & Secrets** and, for each SP, add a **New client secret**; then store the value and secret separately.
+
+1. To assign the necessary permissions to these principals, select your respective [subscription](https://portal.azure.com/#view/Microsoft_Azure_BillingSubscriptionsBlade?) and go to **Access control (IAM)**. Select **+ Add**, then select **Add role assignment**.
+
+ ![Screenshot of the add role assignment page.](./media/how-to-setup-mlops-azureml/SP-setup-iam-tab.png)
+
+1. Select **Contributor** and add members by selecting **+ Select members**. Add the **Azure-ARM-Dev-ProjectName** member that you created before.
+
+ ![Screenshot of the add role assignment selection.](./media/how-to-setup-mlops-azureml/SP-setup-role-assignment.png)
+
+1. Repeat this step if you deploy Dev and Prod into the same subscription; otherwise, change to the Prod subscription and repeat with **Azure-ARM-Prod-ProjectName**. The basic SP setup is now finished. (A CLI sketch of the role assignment appears below.)
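+
+Alternatively, a hedged sketch of the same role assignment from the Azure CLI, using placeholder values and the `appId` of the service principal you created:
+
+```azurecli
+# Sketch: grant Contributor on the subscription to the service principal (placeholders)
+az role assignment create --assignee "<appId>" --role Contributor --scope /subscriptions/<subscriptionId>
+```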
+++
+### Set up Azure DevOps
+
+1. Navigate to [Azure DevOps](https://go.microsoft.com/fwlink/?LinkId=2014676&githubsi=true&clcid=0x409&WebUserId=2ecdcbf9a1ae497d934540f4edce2b7d).
+
+2. Select **create a new project** (Name the project `mlopsv2` for this tutorial).
+
+ ![Screenshot of ADO Project.](./media/how-to-setup-mlops-azureml/ado-create-project.png)
+
+3. In the project under **Project Settings** (at the bottom left of the project page) select **Service Connections**.
+
+4. Select **New Service Connection**.
+
+ ![Screenshot of ADO New Service connection button.](./media/how-to-setup-mlops-azureml/create_first_service_connection.png)
+
+5. Select **Azure Resource Manager**, select **Next**, select **Service principal (manual)**, select **Next** and select the Scope Level **Subscription**.
+
+ - **Subscription Name** - Use the name of the subscription where your service principal is stored.
+ - **Subscription Id** - Use the `subscriptionId` you used in **Step 1.** input as the Subscription ID
+ - **Service Principal Id** - Use the `appId` from **Step 1.** output as the Service Principal ID
+ - **Service principal key** - Use the `password` from **Step 1.** output as the Service Principal Key
+ - **Tenant ID** - Use the `tenant` from **Step 1.** output as the Tenant ID
++
+6. Name the service connection **Azure-ARM-Dev**.
+
+7. Select **Grant access permission to all pipelines**, then select **Verify and Save**. Repeat this step to create another service connection **Azure-ARM-Prod** using the details of the Prod service principal created in **Step 1.**
+
+The Azure DevOps setup is successfully finished.
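+
+If you prefer to script this step, a hedged sketch using the Azure DevOps CLI extension follows. The values are placeholders, and the service principal secret is read from an environment variable:
+
+```azurecli
+# Sketch: create the Azure-ARM-Dev service connection from the CLI
+# (assumes the azure-devops extension; repeat with Prod values for Azure-ARM-Prod)
+az extension add --name azure-devops
+export AZURE_DEVOPS_EXT_AZURE_RM_SERVICE_PRINCIPAL_KEY="<password>"
+az devops service-endpoint azurerm create --name Azure-ARM-Dev \
+    --azure-rm-service-principal-id "<appId>" \
+    --azure-rm-subscription-id "<subscriptionId>" \
+    --azure-rm-subscription-name "<subscription name>" \
+    --azure-rm-tenant-id "<tenant>" \
+    --organization https://dev.azure.com/<your-organization> --project mlopsv2
+```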
+
+### Set up source repository with Azure DevOps
+
+1. Open the project you created in [Azure DevOps](https://dev.azure.com/)
+
+1. Open the Repos section and select **Import Repository**
+
+ ![Screenshot of ADO import repo first time.](./media/how-to-setup-mlops-azureml/import_repo_first_time.png)
+
+1. Enter https://github.com/Azure/mlops-v2-ado-demo into the Clone URL field. Select **Import** at the bottom of the page. (A CLI sketch of this import appears after this list.)
+
+ ![Screenshot of ADO import MLOps demo repo.](./media/how-to-setup-mlops-azureml/import_repo_Git_template.png)
++
+1. Open the Repos section. Select the default repo name at the top of the screen, and then select **Import repository**.
+
+ ![Screenshot of ADO import repo.](./media/how-to-setup-mlops-azureml/ado-import-repo.png)
+
+1. Enter https://github.com/Azure/mlops-templates into the Clone URL field. Select **Import** at the bottom of the page.
+
+ ![Screenshot of ADO import MLOps template repo.](./media/how-to-setup-mlops-azureml/ado-import-mlops-templates.png)
+
+ > [!TIP]
+ > Learn more about the MLOps v2 accelerator structure and the MLOps [template](https://github.com/Azure/mlops-v2/)
+
+1. Open the **Project settings** at the bottom of the left hand navigation pane
+
+1. Under the Repos section, click **Repositories**. Select the repository you created in **Step 6.** Select the **Security** tab
+
+1. Under the User permissions section, select the **mlopsv2 Build Service** user. Change the **Contribute** permission to **Allow** and the **Create branch** permission to **Allow**.
+ ![Screenshot of ADO permissions.](./media/how-to-setup-mlops-azureml/ado-permissions-repo.png)
+
+1. Open the **Pipelines** section in the left hand navigation pane and click on the 3 vertical dots next to the **Create Pipelines** button. Select **Manage Security**
+
+ ![Screenshot of Pipeline security.](./media/how-to-setup-mlops-azureml/ado-open-pipelinesSecurity.png)
+
+1. Select the **mlopsv2 Build Service** account for your project under the Users section. Change the permission **Edit build pipeline** to **Allow**
+
+ ![Screenshot of Add security.](./media/how-to-setup-mlops-azureml/ado-add-pipelinesSecurity.png)
+
+> [!NOTE]
+> This finishes the prerequisite section and the deployment of the solution accelerator can happen accordingly.
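+
+If you'd rather script the repository import from earlier in this section, a hedged CLI sketch follows (assumes the azure-devops extension; the organization URL is a placeholder):
+
+```azurecli
+# Sketch: import the demo repository into the project's default repo
+az repos import create --git-source-url https://github.com/Azure/mlops-v2-ado-demo \
+    --repository mlopsv2 --organization https://dev.azure.com/<your-organization> --project mlopsv2
+```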
++
+## Deploying infrastructure via Azure DevOps
+This step deploys the training pipeline to the Azure Machine Learning workspace created in the previous steps.
+
+> [!TIP]
+> Make sure you understand the [Architectural Patterns](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2) of the solution accelerator before you check out the MLOps v2 repo and deploy the infrastructure. In these examples, you'll use the [classical ML project type](/azure/architecture/data-guide/technology-choices/machine-learning-operations-v2#classical-machine-learning-architecture).
+
+### Run Azure infrastructure pipeline
+1. Go to the first repo you imported in the previous section, `mlops-v2-ado-demo`, and select the **config-infra-dev.yml** file.
+
+ ![Screenshot of Repo in ADO.](./media/how-to-setup-mlops-azureml/ADO-repo.png)
+
+ This config file uses the namespace and postfix values in the names of the artifacts to ensure uniqueness. Update the following section in the config to your liking.
+
+ ```
+ namespace: [5 max random new letters]
+ postfix: [4 max random new digits]
+ location: eastus
+ ```
+ > [!NOTE]
+ > If you are running a Deep Learning workload such as CV or NLP, ensure your GPU compute is available in your deployment zone.
+
+1. Click Commit and push code to get these values into the pipeline.
+
+1. Repeat this step for **config-infra-prod.yml** file.
+
+1. Go to Pipelines section
+
+ ![Screenshot of ADO Pipelines.](./media/how-to-setup-mlops-azureml/ADO-pipelines.png)
+
+1. Select **New Pipeline**.
+
+ ![Screenshot of ADO New Pipeline button for infra.](./media/how-to-setup-mlops-azureml/ADO-new-pipeline.png)
+
+1. Select **Azure Repos Git**.
+
+ ![Screenshot of ADO Where's your code.](./media/how-to-setup-mlops-azureml/ado-wheresyourcode.png)
+
+1. Select the repository that you imported in the previous section, `mlops-v2-ado-demo`.
+
+1. Select **Existing Azure Pipeline YAML File**
+
+ ![Screenshot of ADO Pipeline page on configure step.](./media/how-to-setup-mlops-azureml/ADO-configure-pipelines.png)
+
+
+1. Select `main` as the branch, and choose the YAML path that matches your deployment method:
+ - For a terraform scenario, choose `infrastructure/pipelines/tf-ado-deploy-infra.yml`, then select **Continue**.
+ - For a bicep scenario, choose `infrastructure/pipelines/bicep-ado-deploy-infra.yml`, then select **Continue**.
+
+> [!CAUTION]
+> For this example, make sure the [Terraform extension for Azure DevOps](https://marketplace.visualstudio.com/items?itemName=ms-devlabs.custom-terraform-tasks) is installed.
+
+1. Run the pipeline; it will take a few minutes to finish. The pipeline should create the following artifacts:
+ * A resource group for your workspace, including a storage account, container registry, Application Insights, key vault, and the Azure Machine Learning workspace itself.
+ * A compute cluster is also created in the workspace.
+
+1. Now the Operationalizing Loop of the MLOps Architecture is deployed.
+ ![Screenshot of ADO Infra Pipeline screen.](./media/how-to-setup-mlops-azureml/ADO-infra-pipeline.png)
+
+ > [!NOTE]
+ > The **Unable move and reuse existing repository to required location** warnings may be ignored.
+
+## Deploying model training pipeline and moving to test environment
+
+1. Go to ADO pipelines
+
+ ![Screenshot of ADO Pipelines.](./media/how-to-setup-mlops-azureml/ADO-pipelines.png)
+
+1. Select **New Pipeline**.
+
+ ![Screenshot of ADO New Pipeline button.](./media/how-to-setup-mlops-azureml/ADO-new-pipeline.png)
+
+1. Select **Azure Repos Git**.
+
+ ![Screenshot of ADO Where's your code.](./media/how-to-setup-mlops-azureml/ado-wheresyourcode.png)
+
+1. Select the repository that you imported in the previous section, `mlopsv2`.
+
+1. Select **Existing Azure Pipeline YAML File**
+
+ ![Screenshot of ADO Pipeline page on configure step.](./media/how-to-setup-mlops-azureml/ADO-configure-pipelines.png)
+
+1. Select `main` as a branch and choose `/mlops/devops-pipelines/deploy-model-training-pipeline.yml`, then select **Continue**.
+
+1. **Save and Run** the pipeline
+
+> [!NOTE]
+> At this point, the infrastructure is configured and the Prototyping Loop of the MLOps Architecture is deployed. You're ready to move your trained model to production.
+
+## Moving to production environment and deploying model
+
+**Prepare Data**
+ - This component takes multiple taxi datasets (yellow and green), merges and filters the data, and prepares the train/val and evaluation datasets.
+ - Input: Local data under ./data/ (multiple .csv files)
+ - Output: Single prepared dataset (.csv) and train/val/test datasets.
+
+**Train Model**
+ - This component trains a Linear Regressor with the training set.
+ - Input: Training dataset
+ - Output: Trained model (pickle format)
+
+**Evaluate Model**
+ - This component uses the trained model to predict taxi fares on the test set.
+ - Input: ML model and Test dataset
+ - Output: Performance of the model and a deploy flag indicating whether to deploy it.
+ - This component compares the performance of the model with all previously deployed models on the new test dataset and decides whether to promote the model to production. The model is promoted to production by registering it in the AML workspace.
+
+**Register Model**
+ - This component scores the model based on how accurate the predictions are in the test set.
+ - Input: Trained model and the deploy flag.
+ - Output: Registered model in Azure Machine Learning.
+
+### Deploy ML model endpoint
+1. Go to ADO pipelines
+
+ ![Screenshot of ADO Pipelines.](./media/how-to-setup-mlops-azureml/ADO-pipelines.png)
+
+1. Select **New Pipeline**.
+
+ ![Screenshot of ADO New Pipeline button for endpoint.](./media/how-to-setup-mlops-azureml/ADO-new-pipeline.png)
+
+1. Select **Azure Repos Git**.
+
+ ![Screenshot of ADO Where's your code.](./media/how-to-setup-mlops-azureml/ado-wheresyourcode.png)
+
+1. Select the repository that you imported in the previous section, `mlopsv2`.
+
+1. Select **Existing Azure Pipeline YAML File**
+
+ ![Screenshot of ADO Pipeline page on configure step.](./media/how-to-setup-mlops-azureml/ADO-configure-pipelines.png)
+
+1. Select `main` as a branch and choose:
+
+ - For Managed Batch Endpoint `/mlops/devops-pipelines/deploy-batch-endpoint-pipeline.yml`
+
+ - For Managed Online Endpoint `/mlops/devops-pipelines/deploy-online-endpoint-pipeline.yml`
+
+ Then select **Continue**.
+
+1. Batch/Online endpoint names need to be unique, so change **[your endpoint-name]** to another unique name and then select **Run**.
+
+ ![Screenshot of ADO batch deploy script.](./media/how-to-setup-mlops-azureml/ADO-batch-pipeline.png)
+
+> [!IMPORTANT]
+> If the run fails due to an existing online endpoint name, recreate the pipeline as described previously and change **[your endpoint-name]** to **[your endpoint-name (random number)]**.
+
+1. When the run completes, you'll see output similar to the following image:
+
+ ![Screenshot of ADO Pipeline batch run result page.](./media/how-to-setup-mlops-azureml/ADO-batch-pipeline-run.png)
+
+ Now the Prototyping Loop is connected to the Operationalizing Loop of the MLOps Architecture and inference has been run.
+
+## Clean up resources
+
+1. If you're not going to continue to use your pipeline, delete your Azure DevOps project.
+1. In Azure portal, delete your resource group and Azure Machine Learning instance.
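+
+   For example, a hedged one-line sketch from the CLI (replace the placeholder; this permanently deletes everything in the group):
+
+   ```azurecli
+   # Sketch: delete the resource group and all resources it contains
+   az group delete --name <resource-group-name> --yes --no-wait
+   ```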
+
+## Next steps
+
+* [Install and set up Python SDK v2](https://aka.ms/sdk-v2-install)
+* [Install and set up Python CLI v2](how-to-configure-cli.md)
+* [Azure MLOps (v2) solution accelerator](https://github.com/Azure/mlops-v2) on GitHub
+* Learn more about [Azure Pipelines with Azure Machine Learning](how-to-devops-machine-learning.md)
+* Learn more about [GitHub Actions with Azure Machine Learning](how-to-github-actions-machine-learning.md)
+* Deploy MLOps on Azure in Less Than an Hour - [Community MLOps V2 Accelerator video](https://www.youtube.com/watch?v=5yPDkWCMmtk)
machine-learning How To Troubleshoot Environments https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-environments.md
ml_client.environments.create_or_update(env_docker_image)
* [Environment class v1](https://aka.ms/azureml/environment/environment-class-v1) ### Container registry credentials missing either username or password-- To access the base image in the container registry specified, you must provide both a username and password. One is missing.-- Providing credentials in this way is deprecated. For the current method of providing credentials, see the *secrets in base image registry* section.
+<!--issueDescription-->
+
+**Potential causes:**
+
+* You've specified either a username or a password for your container registry in your environment definition, but not both
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK azureml V1*
+
+Add the missing username or password to your environment definition to fix the issue.
+
+```
+myEnv.docker.base_image_registry.username = "username"
+```
+
+Alternatively, provide authentication via [workspace connections](https://aka.ms/azureml/environment/set-connection-v1)
+
+```
+from azureml.core import Workspace
+ws = Workspace.from_config()
+ws.set_connection("connection1", "ACR", "<URL>", "Basic", "{'Username': '<username>', 'Password': '<password>'}")
+```
+
+*Applies to: Azure CLI extensions V1 & V2*
+
+Create a workspace connection from a YAML specification file
+
+```
+az ml connection create --file connection.yml --resource-group my-resource-group --workspace-name my-workspace
+```
+
+> [!NOTE]
+> * Providing credentials in your environment definition is deprecated. Use workspace connections instead.
+
+**Resources**
+* [Python SDK AzureML v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)
+* [Python SDK AzureML v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection)
+* [Azure CLI workspace connections](/cli/azure/ml/connection)
### Multiple credentials for base image registry-- When specifying credentials for a base image registry, you must specify only one set of credentials. -- The following authentication types are currently supported:
- - Basic (username/password)
- - Registry identity (clientId/resourceId)
-- If you're using workspace connections to specify credentials, [delete one of the connections](https://aka.ms/azureml/environment/delete-connection-v1)-- If you've specified credentials directly in your environment definition, choose either username/password or registry identity
-to use, and set the other credentials you won't use to `null`
- - Specifying credentials in this way is deprecated. It's recommended that you use workspace connections. See
- *secrets in base image registry* below
+<!--issueDescription-->
+
+**Potential causes:**
+
+* You've specified more than one set of credentials for your base image registry
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK azureml V1*
+
+If you're using workspace connections, view the connections you have set, and delete whichever one(s) you don't want to use
+
+```
+from azureml.core import Workspace
+ws = Workspace.from_config()
+ws.list_connections()
+ws.delete_connection("myConnection2")
+```
+
+If you've specified credentials in your environment definition, choose one set of credentials to use, and set all others to null
+
+```
+myEnv.docker.base_image_registry.registry_identity = None
+```
+
+> [!NOTE]
+> * Providing credentials in your environment definition is deprecated. Use workspace connections instead.
+
+**Resources**
+* [Delete a workspace connection v1](https://aka.ms/azureml/environment/delete-connection-v1)
+* [Python SDK AzureML v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)
+* [Python SDK AzureML v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection)
+* [Azure CLI workspace connections](/cli/azure/ml/connection)
### Secrets in base image registry-- If you specify a base image in your `DockerSection`, you must specify the registry address from which the image will be pulled,
-and credentials to authenticate to the registry, if needed.
-- Historically, credentials have been specified in the environment definition. However, this method isn't secure and should be
-avoided.
-- Users should set credentials using workspace connections. For instructions, see [set_connection](https://aka.ms/azureml/environment/set-connection-v1)
+<!--issueDescription-->
+
+**Potential causes:**
+
+* You've specified credentials in your environment definition
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Specifying credentials in your environment definition is deprecated. Delete credentials from your environment definition and use workspace connections instead.
+
+*Applies to: Python SDK azureml V1*
+
+Set a workspace connection on your workspace
+
+```
+from azureml.core import Workspace
+ws = Workspace.from_config()
+ws.set_connection("connection1", "ACR", "<URL>", "Basic", "{'Username': '<username>', 'Password': '<password>'}")
+```
+
+*Applies to: Azure CLI extensions V1 & V2*
+
+Create a workspace connection from a YAML specification file
+
+```
+az ml connection create --file connection.yml --resource-group my-resource-group --workspace-name my-workspace
+```
+
+**Resources**
+* [Python SDK AzureML v1 workspace connections](https://aka.ms/azureml/environment/set-connection-v1)
+* [Python SDK AzureML v2 workspace connections](/python/api/azure-ai-ml/azure.ai.ml.entities.workspaceconnection)
+* [Azure CLI workspace connections](/cli/azure/ml/connection)
### Deprecated Docker attribute-- The following `DockerSection` attributes are deprecated:
- - `enabled`
- - `arguments`
- - `shared_volumes`
- - `gpu_support`
- - Azure Machine Learning now automatically detects and uses NVIDIA Docker extension when available.
- - `smh_size`
-- Use [DockerConfiguration](https://aka.ms/azureml/environment/docker-configuration-class) instead-- See [DockerSection deprecated variables](https://aka.ms/azureml/environment/docker-section-class)
+<!--issueDescription-->
+
+**Potential causes:**
+
+* You've specified Docker attributes in your environment definition that are now deprecated
+* The following are deprecated:
+ * `enabled`
+ * `arguments`
+ * `shared_volumes`
+ * `gpu_support`
+ * AzureML now automatically detects and uses NVIDIA Docker extension when available
+ * `smh_size`
+
+**Affected areas (symptoms):**
+* Failure in registering your environment
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+*Applies to: Python SDK azureml V1*
+
+Instead of specifying these attributes in the `DockerSection` of your environment definition, use [DockerConfiguration](https://aka.ms/azureml/environment/docker-configuration-class)
+
+**Resources**
+* See `DockerSection` [deprecated variables](https://aka.ms/azureml/environment/docker-section-class)
### Dockerfile length over limit - The specified Dockerfile can't exceed the maximum Dockerfile size of 100 KB
conda_dep.add_conda_package("python==3.8")
- See [Python versions](https://aka.ms/azureml/environment/python-versions) and [Python end-of-life dates](https://aka.ms/azureml/environment/python-end-of-life) ### Python version not recommended-- The Python version used in the environment definition is deprecated, and its use should be avoided
+- The Python version used in the environment definition is at or near its end of life, and should be avoided
- Consider using a newer version of Python as the specified version will eventually be unsupported - See [Python versions](https://aka.ms/azureml/environment/python-versions) and [Python end-of-life dates](https://aka.ms/azureml/environment/python-end-of-life)
environment definition
version of a package on subsequent builds of an environment. This behavior can lead to unexpected errors - See [conda package pinning](https://aka.ms/azureml/environment/how-to-pin-conda-packages)
+### UTF-8 decoding error
+<!--issueDescription-->
+This issue can happen when there's a failure decoding a character in your conda specification.
+
+**Potential causes:**
+* Your conda YAML file contains characters that aren't compatible with UTF-8.
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because it will implicitly build the environment in the first step.
+<!--/issueDescription-->
+ ### *Pip issues* ### Pip not specified - For reproducibility, pip should be specified as a dependency in your conda specification, and it should be pinned
If you suspect that the path name to your container registry is incorrect
* For a registry `my-registry.io` and image `test/image` with tag `3.2`, a valid image path would be `my-registry.io/test/image:3.2` * See [registry path documentation](https://aka.ms/azureml/environment/docker-registries)
-If your container registry is behind a virtual network and is using a private endpoint in an [unsupported region](https://aka.ms/azureml/environment/private-link-availability)
+If your container registry is behind a virtual network or is using a private endpoint in an [unsupported region](https://aka.ms/azureml/environment/private-link-availability)
* Configure the container registry by using the service endpoint (public access) from the portal and retry * After you put the container registry behind a virtual network, run the [Azure Resource Manager template](https://aka.ms/azureml/environment/secure-resources-using-vnet) so the workspace can communicate with the container registry instance
Many issues could cause a horovod failure, and there's a comprehensive list of t
* [horovod installation](https://aka.ms/azureml/environment/install-horovod) ### Conda command not found-- Failed to create or update the conda environment because the conda command is missing -- For system-managed environments, conda should be in the path in order to create the user's environment
-from the provided conda specification
+<!--issueDescription-->
+This issue can happen when the conda command isn't recognized during conda environment creation or update.
+
+**Potential causes:**
+* conda isn't installed in the base image you're using
+* conda isn't installed via your Dockerfile before you try to execute the conda command
+* conda isn't included in or wasn't added to your path
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because the job implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Ensure that you have a conda installation step in your Dockerfile before trying to execute any conda commands
+* Review this [list of conda installers](https://docs.conda.io/en/latest/miniconda.html) to determine what you need for your scenario
+
+If you've tried installing conda and are experiencing this issue, ensure that you've added conda to your path
+* Review this [example](https://stackoverflow.com/questions/58269375/how-to-install-packages-with-miniconda-in-dockerfile) for guidance
+* Review how to set [environment variables in a Dockerfile](https://docs.docker.com/engine/reference/builder/#env)
+
+**Resources**
+* All available conda distributions are found in the [conda repository](https://repo.anaconda.com/miniconda/)
### Incompatible Python version-- Failed to create or update the conda environment because a package specified in the conda environment isn't compatible with the specified python version-- Update the Python version or use a different version of the package
+<!--issueDescription-->
+This issue can happen when there's a package specified in your conda environment that isn't compatible with your specified Python version.
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because the job implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Use a different version of the package that's compatible with your specified Python version
+
+Alternatively, use a different version of Python that's compatible with the package you've specified
+* If you're changing your Python version, use a version that's supported and that isn't approaching its end of life
+* See Python [end-of-life dates](https://aka.ms/azureml/environment/python-end-of-life)
+
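+For example, with the v1 Python SDK you can pin a compatible pair in your conda dependencies. This is a minimal sketch; the package and version numbers are illustrative.
+
+```python
+from azureml.core.conda_dependencies import CondaDependencies
+
+conda_dep = CondaDependencies()
+# Pin a supported Python version and a package version that's compatible with it.
+conda_dep.add_conda_package("python=3.10")
+conda_dep.add_pip_package("numpy==1.26.4")
+```
+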
+**Resources**
+* [Python documentation by version](https://aka.ms/azureml/environment/python-versions)
### Conda bare redirection-- Failed to create or update the conda environment because a package was specified on the command line using ">" or "<"
-without using quotes. Consider adding quotes around the package specification
+<!--issueDescription-->
+This issue can happen when a package is specified on the command line using "<" or ">" without using quotes, causing conda environment creation or update to fail.
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because the job implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Add quotes around the package specification
+* For example, change `conda install -y pip<=20.1.1` to `conda install -y "pip<=20.1.1"`
### *Pip issues during build* ### Failed to install packages-- Failed to install Python packages-- Review the image build log for more information on this error
+<!--issueDescription-->
+This issue can happen when your image build fails during Python package installation.
+
+**Potential causes:**
+* There are many issues that could cause this error
+* This is a generic message that's surfaced when the error you're encountering isn't yet covered by AzureML analysis
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because the job implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Review your Build log for more information on your image build failure
+
+Leave feedback for the AzureML team to analyze the error you're experiencing
+* [File a problem or suggestion](https://github.com/Azure/azureml-assets/issues/new?assignees=&labels=environmentLogs&template=environmentLogsFeedback.yml)
### Can't uninstall package-- Pip failed to uninstall a Python package that was installed via the OS's package manager-- Consider creating a separate environment using conda instead
+<!--issueDescription-->
+This issue can happen when pip fails to uninstall a Python package that was installed via the operating system's package manager.
+
+**Potential causes:**
+* An existing pip problem or a problematic pip version
+* An issue arising from not using an isolated environment
+
+**Affected areas (symptoms):**
+* Failure in building environments from UI, SDK, and CLI.
+* Failure in running jobs because the job implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Read the following and determine if your failure is caused by an existing pip problem
+* [Cannot uninstall while creating Docker image](https://stackoverflow.com/questions/63383400/error-cannot-uninstall-ruamel-yaml-while-creating-docker-image-for-azure-ml-a)
+* [pip 10 distutils partial uninstall issue](https://github.com/pypa/pip/issues/5247)
+* [pip 10 no longer uninstalls distutils packages](https://github.com/pypa/pip/issues/4805)
+
+Try the following
+
+```bash
+pip install --ignore-installed [package]
+```
+
+Try creating a separate environment using conda
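+* As a minimal sketch with the v1 Python SDK, install the problematic package through conda inside a fresh environment (the environment and package names are illustrative):
+
+```python
+from azureml.core import Environment
+from azureml.core.conda_dependencies import CondaDependencies
+
+# Build an isolated environment and install the package via conda rather than pip.
+conda_dep = CondaDependencies()
+conda_dep.add_conda_package("ruamel.yaml")
+
+env = Environment(name="isolated-env")
+env.python.conda_dependencies = conda_dep
+```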
+
+### *Docker push issues*
+### Failed to store Docker image
+<!--issueDescription-->
+This issue can happen when a Docker image fails to be stored (pushed) to a container registry.
+
+**Potential causes:**
+* A transient issue has occurred with the Azure Container Registry (ACR) associated with the workspace
+* A container registry behind a virtual network is using a private endpoint in an [unsupported region](https://aka.ms/azureml/environment/private-link-availability)
+
+**Affected areas (symptoms):**
+* Failure in building environments from the UI, SDK, and CLI.
+* Failure in running jobs because the job implicitly builds the environment in the first step.
+<!--/issueDescription-->
+
+**Troubleshooting steps**
+
+Retry the environment build if you suspect the failure is a transient issue with the workspace's ACR
+
+If your container registry is behind a virtual network or is using a private endpoint in an [unsupported region](https://aka.ms/azureml/environment/private-link-availability)
+* Configure the container registry by using the service endpoint (public access) from the portal and retry
+* After you put the container registry behind a virtual network, run the [Azure Resource Manager template](https://aka.ms/azureml/environment/secure-resources-using-vnet) so the workspace can communicate with the container registry instance
+
+If you aren't using a virtual network, or if you've configured it correctly, test that your credentials are correct for your ACR by attempting a simple local build and push
+* Get credentials for your workspace ACR from the Azure portal
+* Log in to your ACR using `docker login <myregistry.azurecr.io> -u "username" -p "password"`
+* For an image "helloworld", tag it with your registry's login server and test pushing to your ACR by running `docker tag helloworld <myregistry.azurecr.io>/helloworld` and then `docker push <myregistry.azurecr.io>/helloworld`
+* See [Quickstart: Build and run a container image using Azure Container Registry Tasks](../container-registry/container-registry-quickstart-task-cli.md)
machine-learning How To Troubleshoot Kubernetes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-compute.md
If the error message is:
AzureML Kubernetes job failed. 137:PodPattern matched: {"containers":[{"name":"training-identity-sidecar","message":"Updating certificates in /etc/ssl/certs...\n1 added, 0 removed; done.\nRunning hooks in /etc/ca-certificates/update.d...\ndone.\n * Serving Flask app 'msi-endpoint-server' (lazy loading)\n * Environment: production\n WARNING: This is a development server. Do not use it in a production deployment.\n Use a production WSGI server instead.\n * Debug mode: off\n * Running on http://127.0.0.1:12342/ (Press CTRL+C to quit)\n","code":137}]} ```
-Check your proxy setting and check whether 127.0.0.1 was added to proxy-skip-range when using `az connectedk8s connect` by following this [network configuring](how-to-access-azureml-behind-firewall.md#kubernetes-compute).
+Check your proxy settings and check whether 127.0.0.1 was added to proxy-skip-range when using `az connectedk8s connect` by following this [network configuration guidance](how-to-access-azureml-behind-firewall.md#scenario-use-kubernetes-compute).
## Private link issue
machine-learning How To Troubleshoot Kubernetes Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-extension.md
kubectl get events -n azureml --sort-by='.lastTimestamp'
## Troubleshoot AzureML extension deployment error
-### Error: cannot reuse a name that is still in use
-This means the extension name you specified already exists. If the name is used by Azureml extension, you need to wait for about an hour and try again. If the name is used by other helm charts, you need to use another name. Run ```helm list -Aa``` to list all helm charts in your cluster.
+### Error: can't reuse a name that is still in use
+This error means the extension name you specified already exists. If the name is used by the AzureML extension, wait about an hour and try again. If the name is used by another helm chart, use a different name. Run ```helm list -Aa``` to list all helm charts in your cluster.
### Error: earlier operation for the helm chart is still in progress You need to wait for about an hour and try again after the unknown operation is completed.
-### Error: unable to create new content in namespace azureml because it is being terminated
-This happens when an uninstallation operation isn't finished and another installation operation is triggered. You can run ```az k8s-extension show``` command to check the provisioning status of the extension and make sure the extension has been uninstalled before taking other actions.
+### Error: unable to create new content in namespace azureml because it's being terminated
+This error happens when an uninstallation operation isn't finished and another installation operation is triggered. You can run the ```az k8s-extension show``` command to check the provisioning status of the extension and make sure it has been uninstalled before taking other actions.
### Error: failed in download the Chart path not found
-This happens when you specify a wrong extension version. You need to make sure the specified version exists. If you want to use the latest version, you don't need to specify ```--version``` .
+This error happens when you specify an incorrect extension version. Make sure the specified version exists. If you want to use the latest version, you don't need to specify ```--version```.
-### Error: cannot be imported into the current release: invalid ownership metadata
-This error means there is a conflict between existing cluster resources and AzureML extension. A full error message could be like this:
+### Error: can't be imported into the current release: invalid ownership metadata
+This error means there's a conflict between existing cluster resources and the AzureML extension. A full error message looks like the following text:
``` CustomResourceDefinition "jobs.batch.volcano.sh" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "amlarc-extension"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "azureml" ```
-Follow the steps below to mitigate the issue.
+Use the following steps to mitigate the issue.
* Check who owns the problematic resources and whether the resource can be deleted or modified. * If the resource is used only by the AzureML extension and can be deleted, you can manually add labels to mitigate the issue. Taking the previous error message as an example, you can run commands as follows,
Follow the steps below to mitigate the issue.
kubectl annotate crd jobs.batch.volcano.sh "meta.helm.sh/release-namespace=azureml" "meta.helm.sh/release-name=<extension-name>" ``` Setting these labels and annotations marks the resource as managed by helm and owned by the AzureML extension.
-* If the resource is also used by other components in your cluster and can't be modified. Refer to [deploy AzureML extension](./how-to-deploy-kubernetes-extension.md#review-azureml-extension-configuration-settings) to see if there is a configuration setting to disable the conflict resource.
+* If the resource is also used by other components in your cluster and can't be modified, refer to [deploy AzureML extension](./how-to-deploy-kubernetes-extension.md#review-azureml-extension-configuration-settings) to see if there's a configuration setting to disable the conflicting resource.
## HealthCheck of extension If the installation fails and doesn't match any of the above error messages, you can use the built-in health check job to run a comprehensive check on the extension. The AzureML extension contains a `HealthCheck` job to pre-check your cluster readiness when you try to install, update or delete the extension. The HealthCheck job outputs a report, which is saved in a configmap named `arcml-healthcheck` in the `azureml` namespace. The error codes and possible solutions for the report are listed in [Error Code of HealthCheck](#error-code-of-healthcheck).
The health check is triggered whenever you install, update or delete the extensi
- If the extension update failed, you should look into `pre-upgrade` and `pre-rollback`. - If the extension deletion failed, you should look into `pre-delete`.
-When you request support, we recommend that you run the following command below and send the```healthcheck.logs``` file to us, as it can facilitate us to better locate the problem.
+When you request support, we recommend that you run the following command and send the ```healthcheck.logs``` file to us, as it helps us locate the problem.
```bash kubectl logs healthcheck -n azureml ```
This table shows how to troubleshoot the error codes returned by the HealthCheck
|Error Code |Error Message | Description | |--|--|--| |E40001 | LOAD_BALANCER_NOT_SUPPORT | Load balancer isn't supported in your cluster. You need to configure the load balancer in your cluster or consider setting `inferenceRouterServiceType` to `nodePort` or `clusterIP`. |
-|E40002 | INSUFFICIENT_NODE | You have enabled `inferenceRouterHA` that requires at least three nodes in your cluster. Disable the HA if you have fewer than three nodes. |
+|E40002 | INSUFFICIENT_NODE | You have enabled `inferenceRouterHA`, which requires at least three nodes in your cluster. Disable the HA if you have fewer than three nodes. |
|E40003 | INTERNAL_LOAD_BALANCER_NOT_SUPPORT | Currently, internal load balancer is only supported by AKS. Don't set `internalLoadBalancerProvider` if you don't have an AKS cluster.| |E40007 | INVALID_SSL_SETTING | The SSL key or certificate isn't valid. The CNAME should be compatible with the certificate. |
-|E45002 | PROMETHEUS_CONFLICT | The Prometheus Operator installed is conflict with your existing Prometheus Operator. For more information, refer to [Prometheus operator](#prometheus-operator) |
-|E45003 | BAD_NETWORK_CONNECTIVITY | You need to meet [network-requirements](./how-to-access-azureml-behind-firewall.md#kubernetes-compute).|
+|E45002 | PROMETHEUS_CONFLICT | The installed Prometheus Operator conflicts with your existing Prometheus Operator. For more information, see [Prometheus operator](#prometheus-operator). |
+|E45003 | BAD_NETWORK_CONNECTIVITY | You need to meet the [network requirements](./how-to-access-azureml-behind-firewall.md#scenario-use-kubernetes-compute).|
|E45004 | AZUREML_FE_ROLE_CONFLICT |AzureML extension isn't supported in the [legacy AKS](./how-to-attach-kubernetes-anywhere.md#kubernetescompute-and-legacy-akscompute). To install AzureML extension, you need to [delete the legacy azureml-fe components](v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources).| |E45005 | AZUREML_FE_DEPLOYMENT_CONFLICT | AzureML extension isn't supported in the [legacy AKS](./how-to-attach-kubernetes-anywhere.md#kubernetescompute-and-legacy-akscompute). To install AzureML extension, you need to [delete the legacy azureml-fe components](v1/how-to-create-attach-kubernetes.md#delete-azureml-fe-related-resources).|
In this case, all Prometheus instances will be managed by the existing prometheu
``` ### DCGM exporter
-[Dcgm-exporter](https://github.com/NVIDIA/dcgm-exporter) is the official tool recommended by NVIDIA for collecting GPU metrics. We have integrated it into Azureml extension. But, by default, dcgm-exporter is not enabled, and no GPU metrics are collected. You can specify ```installDcgmExporter``` flag to ```true``` to enable it. As it's NVIDIA's official tool, you may already have it installed in your GPU cluster. If so, you can set ```installDcgmExporter``` to ```false``` and follow the steps below to integrate your dcgm-exporter into Azureml extension. Another thing to note is that dcgm-exporter allows user to config which metrics to expose. For Azureml extension, make sure ```DCGM_FI_DEV_GPU_UTIL```, ```DCGM_FI_DEV_FB_FREE``` and ```DCGM_FI_DEV_FB_USED``` metrics are exposed.
+[Dcgm-exporter](https://github.com/NVIDIA/dcgm-exporter) is the official tool recommended by NVIDIA for collecting GPU metrics. We've integrated it into the AzureML extension, but by default dcgm-exporter isn't enabled, and no GPU metrics are collected. You can set the ```installDcgmExporter``` flag to ```true``` to enable it. As it's NVIDIA's official tool, you may already have it installed in your GPU cluster. If so, you can set ```installDcgmExporter``` to ```false``` and follow the steps below to integrate your dcgm-exporter into the AzureML extension. Another thing to note is that dcgm-exporter lets you configure which metrics to expose. For the AzureML extension, make sure the ```DCGM_FI_DEV_GPU_UTIL```, ```DCGM_FI_DEV_FB_FREE```, and ```DCGM_FI_DEV_FB_USED``` metrics are exposed.
1. Make sure you have the AzureML extension and dcgm-exporter installed successfully. Dcgm-exporter can be installed by [Dcgm-exporter helm chart](https://github.com/NVIDIA/dcgm-exporter) or [Gpu-operator helm chart](https://github.com/NVIDIA/gpu-operator)
-1. Check if there is a service for dcgm-exporter. If it doesn't exist or you don't know how to check, run the command below to create one.
+1. Check if there's a service for dcgm-exporter. If it doesn't exist or you don't know how to check, run the following command to create one.
```bash cat << EOF | kubectl apply -f - apiVersion: v1
machine-learning How To Use Automl Small Object Detect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-automl-small-object-detect.md
description: Set up Azure Machine Learning automated ML to train small object detection models with the CLI v2 and Python SDK v2. +
machine-learning How To Use Mlflow Azure Synapse https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-mlflow-azure-synapse.md
-+ Last updated 07/06/2022
machine-learning Migrate Execute R Script https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-execute-r-script.md
-+ Last updated 03/08/2021
machine-learning Migrate Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-overview.md
-+ Last updated 11/30/2022
machine-learning Migrate Rebuild Experiment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-experiment.md
-+ Last updated 10/21/2021
machine-learning Migrate Rebuild Web Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/migrate-rebuild-web-service.md
-+ Last updated 03/08/2021
machine-learning Monitor Resource Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/monitor-resource-reference.md
- Previously updated : 10/21/2021+ Last updated : 01/19/2023 # Monitoring Azure machine learning data reference
machine-learning Reference Automl Images Cli Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-cli-classification.md
The source JSON schema can be found at https://azuremlsdk2.blob.core.windows.net
| `validation_data` | object | The validation data to be used within the job. It should contain both training features and label column (optionally a sample weights column). If `validation_data` is specified, then `training_data` and `target_column_name` parameters must be specified. For more information on keys and their descriptions, see [Training or validation data](#training-or-validation-data) section. For an example, see [Consume data](./how-to-auto-train-image-models.md?tabs=cli#consume-data) section| | | | `validation_data_size` | float | What fraction of the data to hold out for validation when user validation data isn't specified. | A value in range (0.0, 1.0) | | | `limits` | object | Dictionary of limit configurations of the job. The key is name for the limit within the context of the job and the value is limit value. For more information, see [Configure your experiment settings](./how-to-auto-train-image-models.md?tabs=cli#job-limits) section. | | |
-| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Image classification (multi-class and multi-label) specific hyperparameters](./reference-automl-images-hyperparameters.md#image-classification-multi-class-and-multi-label-specific-hyperparameters). <br> <br> For an example, see [Supported model algorithms](./how-to-auto-train-image-models.md?tabs=cli#supported-model-algorithms) section. | | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Image classification (multi-class and multi-label) specific hyperparameters](./reference-automl-images-hyperparameters.md#image-classification-multi-class-and-multi-label-specific-hyperparameters). <br> <br> For an example, see [Supported model architectures](./how-to-auto-train-image-models.md?tabs=cli#supported-model-architectures) section. | | |
| `sweep` | object | Dictionary containing sweep parameters for the job. It has two keys - `sampling_algorithm` (**required**) and `early_termination`. For more information and an example, see [Sampling methods for the sweep](./how-to-auto-train-image-models.md?tabs=cli#sampling-methods-for-the-sweep), [Early termination policies](./how-to-auto-train-image-models.md?tabs=cli#early-termination-policies) sections. | | | | `search_space` | object | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression. The user can find the possible hyperparameters from parameters specified for `training_parameters` key. For an example, see [Sweeping hyperparameters for your model](./how-to-auto-train-image-models.md?tabs=cli#manually-sweeping-model-hyperparameters) section. | | | | `search_space.<hyperparameter>` | object | There are two types of hyperparameters: <br> - **Discrete Hyperparameters**: Discrete hyperparameters are specified as a [`choice`](./reference-yaml-job-sweep.md#choice) among discrete values. `choice` can be one or more comma-separated values, a `range` object, or any arbitrary `list` object. Advanced discrete hyperparameters can also be specified using a distribution - [`randint`](./reference-yaml-job-sweep.md#randint), [`qlognormal`, `qnormal`](./reference-yaml-job-sweep.md#qlognormal-qnormal), [`qloguniform`, `quniform`](./reference-yaml-job-sweep.md#qloguniform-quniform). For more information, see this [section](./how-to-tune-hyperparameters.md#discrete-hyperparameters). <br> - **Continuous hyperparameters**: Continuous hyperparameters are specified as a distribution over a continuous range of values. Currently supported distributions are - [`lognormal`, `normal`](./reference-yaml-job-sweep.md#lognormal-normal), [`loguniform`](./reference-yaml-job-sweep.md#loguniform), [`uniform`](./reference-yaml-job-sweep.md#uniform). For more information, see this [section](./how-to-tune-hyperparameters.md#continuous-hyperparameters). <br> <br> See [Parameter expressions](./reference-yaml-job-sweep.md#parameter-expressions) for the set of possible expressions to use. | | |
machine-learning Reference Automl Images Cli Instance Segmentation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-cli-instance-segmentation.md
For information on all the keys in Yaml syntax, see [Yaml syntax](./reference-au
| | - | -- | -- | - | | `task` | const | **Required.** The type of AutoML task. | `image_instance_segmentation` | `image_instance_segmentation` | | `primary_metric` | string | The metric that AutoML will optimize for model selection. |`mean_average_precision` | `mean_average_precision` |
-| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model specific hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for maskrcnn_* (if you're using maskrcnn_* for instance segmentation) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model algorithms](./how-to-auto-train-image-models.md?tabs=cli#supported-model-algorithms) section.| | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model specific hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for maskrcnn_* (if you're using maskrcnn_* for instance segmentation) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model architectures](./how-to-auto-train-image-models.md?tabs=cli#supported-model-architectures) section.| | |
## Remarks
machine-learning Reference Automl Images Cli Object Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-cli-object-detection.md
For information on all the keys in Yaml syntax, see [Yaml syntax](./reference-au
| | - | -- | -- | - | | `task` | const | **Required.** The type of AutoML task. | `image_object_detection` | `image_object_detection` | | `primary_metric` | string | The metric that AutoML will optimize for model selection. |`mean_average_precision` | `mean_average_precision` |
-| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model Specific Hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for yolov5 (if you're using yolov5 for object detection) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model algorithms](./how-to-auto-train-image-models.md?tabs=cli#supported-model-algorithms) section.| | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. Provide an object that has keys as listed in following sections. <br> - [Model Specific Hyperparameters](./reference-automl-images-hyperparameters.md#model-specific-hyperparameters) for yolov5 (if you're using yolov5 for object detection) <br> - [Model agnostic hyperparameters](./reference-automl-images-hyperparameters.md#model-agnostic-hyperparameters) <br> - [Object detection and instance segmentation task specific hyperparameters](./reference-automl-images-hyperparameters.md#object-detection-and-instance-segmentation-task-specific-hyperparameters). <br> <br> For an example, see [Supported model architectures](./how-to-auto-train-image-models.md?tabs=cli#supported-model-architectures) section.| | |
## Remarks
machine-learning Reference Automl Images Hyperparameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-hyperparameters.md
Last updated 01/18/2022
Learn which hyperparameters are available specifically for computer vision tasks in automated ML experiments.
-With support for computer vision tasks, you can control the model algorithm and sweep hyperparameters. These model algorithms and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.
+With support for computer vision tasks, you can control the model architecture and sweep hyperparameters. These model architectures and hyperparameters are passed in as the parameter space for the sweep. While many of the hyperparameters exposed are model-agnostic, there are instances where hyperparameters are model-specific or task-specific.
## Model-specific hyperparameters
-This table summarizes hyperparameters specific to the `yolov5` algorithm.
+This table summarizes hyperparameters specific to the `yolov5` architecture.
| Parameter name | Description | Default | | - |-|-|
The following table summarizes hyperparameters for image classification (multi-cl
The following hyperparameters are for object detection and instance segmentation tasks. > [!WARNING]
-> These parameters are not supported with the `yolov5` algorithm. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparmeters.
+> These parameters are not supported with the `yolov5` architecture. See the [model specific hyperparameters](#model-specific-hyperparameters) section for `yolov5` supported hyperparameters.
| Parameter name | Description | Default | | - |-|--|
machine-learning Reference Automl Images Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-images-schema.md
-+ Last updated 09/09/2022
machine-learning Reference Automl Nlp Cli Multilabel Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-nlp-cli-multilabel-classification.md
+
+ Title: 'CLI (v2) Automated ML NLP text classification multilabel job YAML schema'
+
+description: Reference documentation for the CLI (v2) automated ML NLP text classification multilabel job YAML schema.
++++++++ Last updated : 12/22/2022+++
+# CLI (v2) Automated ML text classification multilabel job YAML schema
+++
+Every Azure Machine Learning entity has a schematized YAML representation. You can create a new entity from a YAML configuration file with a `.yml` or `.yaml` extension.
+
+This article provides a reference for some syntax concepts you will encounter while configuring these YAML files for NLP text classification multilabel jobs.
+
+The source JSON schema can be found at
+https://azuremlsdk2.blob.core.windows.net/preview/0.0.1/autoMLNLPTextClassificationMultilabelJob.schema.json
+
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | Represents the location/url to load the YAML schema. If the user uses the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of the file enables the user to invoke schema and resource completions. | | |
+| `type` | const | **Required.** The type of job. | `automl` | `automl` |
+| `task` | const | **Required.** The type of AutoML task. <br> Task description for multilabel classification: <br> There are multiple possible classes and each sample can be assigned any number of classes. The task is to predict all the classes for each sample. For example, classifying a movie script as "Comedy", "Romantic", or "Comedy and Romantic".| `text_classification_multilabel` | |
+| `name` | string | Name of the job. Must be unique across all jobs in the workspace. If omitted, Azure ML will autogenerate a GUID for the name. | | |
+| `display_name` | string | Display name of the job in the studio UI. Can be non-unique within the workspace. If omitted, Azure ML will autogenerate a human-readable adjective-noun identifier for the display name. | | |
+| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, Azure ML will default it to the name of the working directory where the job was created. | | |
+| `description` | string | Description of the job. | | |
+| `tags` | object | Dictionary of tags for the job. | | |
+| `compute` | string | Name of the compute target to execute the job on. To reference an existing compute in the workspace, we use syntax: `azureml:<compute_name>` | | |
+| `log_verbosity` | number | Different levels of log verbosity. |`not_set`, `debug`, `info`, `warning`, `error`, `critical` | `info` |
+| `primary_metric` | string | The metric that AutoML will optimize for model selection. |`accuracy` | `accuracy` |
+| `target_column_name` | string | **Required.** The name of the column to target for predictions. It must always be specified. This parameter is applicable to `training_data` and `validation_data`. | | |
+| `training_data` | object | **Required.** The data to be used within the job. See [multi label](./how-to-auto-train-nlp-models.md?tabs=cli#multi-label) section for more detail. | | |
+| `validation_data` | object | **Required.** The validation data to be used within the job. It should be consistent with the training data in terms of the set of columns, data type for each column, order of columns from left to right and at least two unique labels. <br> *Note*: the column names within each dataset should be unique. See [data validation](./how-to-auto-train-nlp-models.md?tabs=cli#data-validation) section for more information.| | |
+| `limits` | object | Dictionary of limit configurations of the job. Parameters in this section: `max_concurrent_trials`, `max_nodes`, `max_trials`, `timeout_minutes`, `trial_timeout_minutes`. See [limits](#limits) for detail.| | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. <br> See [supported hyperparameters](#supported-hyperparameters) for detail. <br> *Note*: Hyperparameters set in the `training_parameters` are fixed across all sweeping runs and thus don't need to be included in the search space. | | |
+| `sweep` | object | Dictionary containing sweep parameters for the job. It has two keys - `sampling_algorithm` (**required**) and `early_termination`. For more information, see [model sweeping and hyperparameter tuning](./how-to-auto-train-nlp-models.md?tabs=cli#model-sweeping-and-hyperparameter-tuning-preview) sections. | | |
+| `search_space` | object | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression. All parameters that are fixable via `training_parameters` are supported here (to be instead swept over). See [supported hyperparameters](#supported-hyperparameters) for more detail. <br> There are two types of hyperparameters: <br> - **Discrete Hyperparameters**: Discrete hyperparameters are specified as a [`choice`](./reference-yaml-job-sweep.md#choice) among discrete values. `choice` can be one or more comma-separated values, a `range` object, or any arbitrary `list` object. Advanced discrete hyperparameters can also be specified using a distribution - [`randint`](./reference-yaml-job-sweep.md#randint), [`qlognormal`, `qnormal`](./reference-yaml-job-sweep.md#qlognormal-qnormal), [`qloguniform`, `quniform`](./reference-yaml-job-sweep.md#qloguniform-quniform). For more information, see this [section](./how-to-tune-hyperparameters.md#discrete-hyperparameters). <br> - **Continuous hyperparameters**: Continuous hyperparameters are specified as a distribution over a continuous range of values. Currently supported distributions are - [`lognormal`, `normal`](./reference-yaml-job-sweep.md#lognormal-normal), [`loguniform`](./reference-yaml-job-sweep.md#loguniform), [`uniform`](./reference-yaml-job-sweep.md#uniform). For more information, see this [section](./how-to-tune-hyperparameters.md#continuous-hyperparameters). <br> <br> See [parameter expressions](./reference-yaml-job-sweep.md#parameter-expressions) section for the set of possible expressions to use. | | |
+| `outputs` | object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration. | | |
+| `outputs.best_model` | object | Dictionary of output configurations for best model. For more information, see [Best model output configuration](#best-model-output-configuration). | | |
+
+Other syntax used in configurations:
+
+### Limits
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `max_concurrent_trials` | integer | Represents the maximum number of trials (children jobs) that would be executed in parallel. | | `1` |
+| `max_trials` | integer | Represents the maximum number of trials an AutoML NLP job can try to run a training algorithm with different combinations of hyperparameters. | | `1` |
+| `timeout_minutes` | integer | Represents the maximum amount of time in minutes that the submitted AutoML job can take to run. After this, the job will get terminated. The default timeout in AutoML NLP jobs is 7 days. | | `10080`|
+| `trial_timeout_minutes` | integer | Represents the maximum amount of time in minutes that each trial (child job) in the submitted AutoML job can take to run. After this, the child job will get terminated. | | |
+|`max_nodes`| integer | The maximum number of nodes from the backing compute cluster to leverage for the job.| | `1` |
+
+### Supported hyperparameters
+
+The following table describes the hyperparameters that AutoML NLP supports.
+
+| Parameter name | Description | Syntax |
+|-|||
+| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is leveraged to use an effective batch size that is gradient_accumulation_steps times larger than the maximum size that fits the GPU; see the worked example after this table. | Must be a positive integer.
+| learning_rate | Initial learning rate. | Must be a float in the range (0, 1). |
+| learning_rate_scheduler |Type of learning rate scheduler. | Must choose from `linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup`. |
+| model_name | Name of one of the supported models. | Must choose from `bert_base_cased, bert_base_uncased, bert_base_multilingual_cased, bert_base_german_cased, bert_large_cased, bert_large_uncased, distilbert_base_cased, distilbert_base_uncased, roberta_base, roberta_large, distilroberta_base, xlm_roberta_base, xlm_roberta_large, xlnet_base_cased, xlnet_large_cased`. |
+| number_of_epochs | Number of training epochs. | Must be a positive integer. |
+| training_batch_size | Training batch size. | Must be a positive integer. |
+| validation_batch_size | Validation batch size. | Must be a positive integer. |
+| warmup_ratio | Ratio of total training steps used for a linear warmup from 0 to learning_rate. | Must be a float in the range [0, 1]. |
+| weight_decay | Value of weight decay when optimizer is sgd, adam, or adamw. | Must be a float in the range [0, 1]. |
+
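+As a worked example of the effective batch size described for `gradient_accumulation_steps` above (the values are illustrative):
+
+```python
+# Effective batch size with gradient accumulation (illustrative values).
+training_batch_size = 8          # largest per-step batch that fits on the GPU
+gradient_accumulation_steps = 4  # backward passes summed before each optimizer step
+effective_batch_size = training_batch_size * gradient_accumulation_steps
+print(effective_batch_size)  # 32
+```
+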
+### Training or validation data
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `description` | string | The detailed information that describes this input data. | | |
+| `path` | string | The path from where data should be loaded. Path can be a `file` path, `folder` path or `pattern` for paths. `pattern` specifies a search pattern to allow globbing (`*` and `**`) of files and folders containing data. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. For more information on how to use the `azureml://` URI format, see [core yaml syntax](./reference-yaml-core-syntax.md). URI of the location of the artifact file. If this URI doesn't have a scheme (for example, http:, azureml: etc.), then it's considered a local reference and the file it points to is uploaded to the default workspace blob-storage as the entity is created. | | |
+| `mode` | string | Dataset delivery mechanism. | `direct` | `direct` |
+| `type` | const | In order to generate NLP models, the user needs to bring training data in the form of an MLTable. For more information, see [preparing data](./how-to-auto-train-nlp-models.md#preparing-data) | mltable | mltable|
+
+### Best model output configuration
+
+| Key | Type | Description | Allowed values |Default value |
+| | - | -- | -- | |
+| `type` | string | **Required.** Type of best model. AutoML allows only mlflow models. | `mlflow_model` | `mlflow_model` |
+| `path` | string | **Required.** URI of the location where the model-artifact file(s) are stored. If this URI doesn't have a scheme (for example, http:, azureml: etc.), then it's considered a local reference and the file it points to is uploaded to the default workspace blob-storage as the entity is created. | | |
+| `storage_uri` | string | The HTTP URL of the Model. Use this URL with `az storage copy -s THIS_URL -d DESTINATION_PATH --recursive` to download the data. | | |
+
+## Remarks
+
+The `az ml job` command can be used for managing Azure Machine Learning jobs.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs). Examples relevant to NLP text classification multilabel jobs are linked below.
+
+## YAML: AutoML text classification multilabel job
++
+## YAML: AutoML text classification multilabel pipeline job
++
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Automl Nlp Cli Ner https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-nlp-cli-ner.md
+
+ Title: 'CLI (v2) Automated ML NLP text NER job YAML schema'
+
+description: Reference documentation for the CLI (v2) automated ML NLP text NER job YAML schema.
++++++++ Last updated : 12/22/2022+++
+# CLI (v2) Automated ML text NER job YAML schema
+++
+Every Azure Machine Learning entity has a schematized YAML representation. You can create a new entity from a YAML configuration file with a `.yml` or `.yaml` extension.
+
+This article provides a reference for some syntax concepts you will encounter while configuring these YAML files for NLP text NER jobs.
+
+The source JSON schema can be found at https://azuremlsdk2.blob.core.windows.net/preview/0.0.1/autoMLNLPTextNERJob.schema.json
+
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | Represents the location/url to load the YAML schema. If the user uses the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of the file enables the user to invoke schema and resource completions. | | |
+| `type` | const | **Required.** The type of job. | `automl` | `automl` |
+| `task` | const | **Required.** The type of AutoML task. <br> Task description for NER: <br> There are multiple possible tags for tokens in sequences. The task is to predict the tags for all the tokens for each sequence. For example, extracting domain-specific entities from unstructured text, such as contracts or financial documents. | `text_ner` | |
+| `name` | string | Name of the job. Must be unique across all jobs in the workspace. If omitted, Azure ML will autogenerate a GUID for the name. | | |
+| `display_name` | string | Display name of the job in the studio UI. Can be non-unique within the workspace. If omitted, Azure ML will autogenerate a human-readable adjective-noun identifier for the display name. | | |
+| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, Azure ML will default it to the name of the working directory where the job was created. | | |
+| `description` | string | Description of the job. | | |
+| `tags` | object | Dictionary of tags for the job. | | |
+| `compute` | string | Name of the compute target to execute the job on. To reference an existing compute in the workspace, we use syntax: `azureml:<compute_name>` | | |
+| `log_verbosity` | number | Different levels of log verbosity. |`not_set`, `debug`, `info`, `warning`, `error`, `critical` | `info` |
+| `primary_metric` | string | The metric that AutoML will optimize for model selection. |`accuracy`| `accuracy` |
+| `training_data` | object | **Required.** The data to be used within the job. Unlike multi-class or multi-label, which takes .csv format datasets, named entity recognition requires CoNLL format. The file must contain exactly two columns and in each row, the token and the label is separated by a single space. See [NER](./how-to-auto-train-nlp-models.md?tabs=cli#named-entity-recognition-ner) section for more detail.| | |
+| `validation_data` | object | **Required.** The validation data to be used within the job. <br> - The file should not start with an empty line <br> - Each line must be an empty line, or follow format `{token}` `{label}`, where there is exactly one space between the token and the label and no white space after the label <br> - All labels must start with I-, B-, or be exactly O. Case sensitive <br> - Exactly one empty line between two samples <br> - Exactly one empty line at the end of the file <br> See [data validation](./how-to-auto-train-nlp-models.md?tabs=cli#data-validation) section for more detail. | | |
+| `limits` | object | Dictionary of limit configurations of the job. Parameters in this section: `max_concurrent_trials`, `max_nodes`, `max_trials`, `timeout_minutes`, `trial_timeout_minutes`. See [limits](#limits) for detail.| | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. <br> See [supported hyperparameters](#supported-hyperparameters) for detail. <br> *Note*: Hyperparameters set in the `training_parameters` are fixed across all sweeping runs and thus don't need to be included in the search space. | | |
+| `search_space` | object | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression. All parameters that are fixable via `training_parameters` are supported here (to be instead swept over). See [supported hyperparameters](#supported-hyperparameters) for more detail. <br> There are two types of hyperparameters: <br> - **Discrete Hyperparameters**: Discrete hyperparameters are specified as a [`choice`](./reference-yaml-job-sweep.md#choice) among discrete values. `choice` can be one or more comma-separated values, a `range` object, or any arbitrary `list` object. Advanced discrete hyperparameters can also be specified using a distribution - [`randint`](./reference-yaml-job-sweep.md#randint), [`qlognormal`, `qnormal`](./reference-yaml-job-sweep.md#qlognormal-qnormal), [`qloguniform`, `quniform`](./reference-yaml-job-sweep.md#qloguniform-quniform). For more information, see this [section](./how-to-tune-hyperparameters.md#discrete-hyperparameters). <br> - **Continuous hyperparameters**: Continuous hyperparameters are specified as a distribution over a continuous range of values. Currently supported distributions are - [`lognormal`, `normal`](./reference-yaml-job-sweep.md#lognormal-normal), [`loguniform`](./reference-yaml-job-sweep.md#loguniform), [`uniform`](./reference-yaml-job-sweep.md#uniform). For more information, see this [section](./how-to-tune-hyperparameters.md#continuous-hyperparameters). <br> <br> See [parameter expressions](./reference-yaml-job-sweep.md#parameter-expressions) for the set of possible expressions to use. | | |
+| `outputs` | object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration. | | |
+| `outputs.best_model` | object | Dictionary of output configurations for best model. For more information, see [Best model output configuration](#best-model-output-configuration). | | |
++
+Other syntax used in configurations:
+### Limits
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `max_concurrent_trials` | integer | Represents the maximum number of trials (children jobs) that would be executed in parallel. | | `1` |
+| `max_trials` | integer | Represents the maximum number of trials an AutoML NLP job can try to run a training algorithm with different combinations of hyperparameters. | | `1` |
+| `timeout_minutes` | integer | Represents the maximum amount of time in minutes that the submitted AutoML job can take to run. After this, the job will get terminated. The default timeout in AutoML NLP jobs is 7 days. | | `10080`|
+| `trial_timeout_minutes` | integer | Represents the maximum amount of time in minutes that each trial (child job) in the submitted AutoML job can take to run. After this, the child job will get terminated. | | |
+|`max_nodes`| integer | The maximum number of nodes from the backing compute cluster to leverage for the job.| | `1` |
+
+### Supported hyperparameters
+
+The following table describes the hyperparameters that AutoML NLP supports.
+
+| Parameter name | Description | Syntax |
+|-|||
+| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is leveraged to use an effective batch size which is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer.
+| learning_rate | Initial learning rate. | Must be a float in the range (0, 1). |
+| learning_rate_scheduler |Type of learning rate scheduler. | Must choose from `linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup`. |
+| model_name | Name of one of the supported models. | Must choose from `bert_base_cased, bert_base_uncased, bert_base_multilingual_cased, bert_base_german_cased, bert_large_cased, bert_large_uncased, distilbert_base_cased, distilbert_base_uncased, roberta_base, roberta_large, distilroberta_base, xlm_roberta_base, xlm_roberta_large, xlnet_base_cased, xlnet_large_cased`. |
+| number_of_epochs | Number of training epochs. | Must be a positive integer. |
+| training_batch_size | Training batch size. | Must be a positive integer. |
+| validation_batch_size | Validation batch size. | Must be a positive integer. |
+| warmup_ratio | Ratio of total training steps used for a linear warmup from 0 to learning_rate. | Must be a float in the range [0, 1]. |
+| weight_decay | Value of weight decay when optimizer is sgd, adam, or adamw. | Must be a float in the range [0, 1]. |
+
+### Training or validation data
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `description` | string | The detailed information that describes this input data. | | |
+| `path` | string | The path from where data should be loaded. Path can be a `file` path, `folder` path or `pattern` for paths. `pattern` specifies a search pattern to allow globbing (`*` and `**`) of files and folders containing data. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. For more information on how to use the `azureml://` URI format, see [core yaml syntax](./reference-yaml-core-syntax.md). URI of the location of the artifact file. If this URI doesn't have a scheme (for example, http:, azureml: etc.), then it's considered a local reference and the file it points to is uploaded to the default workspace blob-storage as the entity is created. | | |
+| `mode` | string | Dataset delivery mechanism. | `direct` | `direct` |
+| `type` | const | In order to generate NLP models, the user needs to bring training data in the form of an MLTable. For more information, see [preparing data](./how-to-auto-train-nlp-models.md#preparing-data) | mltable | mltable|
+
+### Best model output configuration
+
+| Key | Type | Description | Allowed values |Default value |
+| | - | -- | -- | |
+| `type` | string | **Required.** Type of best model. AutoML allows only mlflow models. | `mlflow_model` | `mlflow_model` |
+| `path` | string | **Required.** URI of the location where the model-artifact file(s) are stored. If this URI doesn't have a scheme (for example, http:, azureml: etc.), then it's considered a local reference and the file it points to is uploaded to the default workspace blob-storage as the entity is created. | | |
+| `storage_uri` | string | The HTTP URL of the Model. Use this URL with `az storage copy -s THIS_URL -d DESTINATION_PATH --recursive` to download the data. | | |
+
+## Remarks
+
+The `az ml job` command can be used for managing Azure Machine Learning jobs.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs). Examples relevant to text NER job are linked below.
+
+## YAML: AutoML text NER job
++
+## YAML: AutoML text NER sweeping job
++
+## YAML: AutoML text NER pipeline job
++
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Automl Nlp Cli Text Classification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-automl-nlp-cli-text-classification.md
+
+ Title: 'CLI (v2) Automated ML text classification job YAML schema'
+
+description: Reference documentation for the CLI (v2) automated ML text classification job YAML schema.
++++++++ Last updated : 12/22/2022+++
+# CLI (v2) Automated ML text classification job YAML schema
+++
+Every Azure Machine Learning entity has a schematized YAML representation. You can create a new entity from a YAML configuration file with a `.yml` or `.yaml` extension.
+
+This article provides a reference for some syntax concepts you will encounter while configuring these YAML files for NLP text classification jobs.
+
+The source JSON schema can be found at https://azuremlsdk2.blob.core.windows.net/preview/0.0.1/autoMLNLPTextClassificationJob.schema.json
+++
+## YAML syntax
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `$schema` | string | Represents the location/url to load the YAML schema. If the user uses the Azure Machine Learning VS Code extension to author the YAML file, including `$schema` at the top of the file enables the user to invoke schema and resource completions. | | |
+| `type` | const | **Required.** The type of job. | `automl` | `automl` |
+| `task` | const | **Required.** The type of AutoML task. <br> Task description of text classification: <br> There are multiple possible classes and each sample can be classified as exactly one class. The task is to predict the correct class for each sample. For example, classifying a movie script as "Comedy" or "Romantic". | `text_classification` | |
+| `name` | string | Name of the job. Must be unique across all jobs in the workspace. If omitted, Azure ML will autogenerate a GUID for the name. | | |
+| `display_name` | string | Display name of the job in the studio UI. Can be non-unique within the workspace. If omitted, Azure ML will autogenerate a human-readable adjective-noun identifier for the display name. | | |
+| `experiment_name` | string | Experiment name to organize the job under. Each job's run record will be organized under the corresponding experiment in the studio's "Experiments" tab. If omitted, Azure ML will default it to the name of the working directory where the job was created. | | |
+| `description` | string | Description of the job. | | |
+| `tags` | object | Dictionary of tags for the job. | | |
+| `compute` | string | Name of the compute target to execute the job on. To reference an existing compute in the workspace, we use syntax: `azureml:<compute_name>` | | |
+| `log_verbosity` | number | Different levels of log verbosity. |`not_set`, `debug`, `info`, `warning`, `error`, `critical` | `info` |
+| `primary_metric` | string | The metric that AutoML will optimize for model selection. |`accuracy`,<br> `auc_weighted`, <br> `precision_score_weighted` | `accuracy` |
+| `target_column_name` | string | **Required.** The name of the column to target for predictions. It must always be specified. This parameter is applicable to `training_data` and `validation_data`. | | |
+| `training_data` | object | **Required.** The data to be used within the job. For multi-class classification, the dataset can contain several text columns and exactly one label column. | | |
+| `validation_data` | object | **Required.** The validation data to be used within the job. It should be consistent with the training data in terms of the set of columns, the data type for each column, and the order of columns from left to right, and it should contain at least two unique labels. <br> *Note*: the column names within each dataset should be unique.| | |
+| `limits` | object | Dictionary of limit configurations of the job. Parameters in this section: `max_concurrent_trials`, `max_nodes`, `max_trials`, `timeout_minutes`, `trial_timeout_minutes`. See [limits](#limits) for detail.| | |
+| `training_parameters` | object | Dictionary containing training parameters for the job. <br> See [supported hyperparameters](#supported-hyperparameters) for detail. <br> *Note*: Hyperparameters set in the `training_parameters` are fixed across all sweeping runs and thus don't need to be included in the search space. | | |
+| `sweep` | object | Dictionary containing sweep parameters for the job. It has two keys - `sampling_algorithm` (**required**) and `early_termination`. For more information, see [model sweeping and hyperparameter tuning](./how-to-auto-train-nlp-models.md?tabs=cli#model-sweeping-and-hyperparameter-tuning-preview) sections. | | |
+| `search_space` | object | Dictionary of the hyperparameter search space. The key is the name of the hyperparameter and the value is the parameter expression. All parameters that are fixable via `training_parameters` are supported here (to be instead swept over). See [supported hyperparameters](#supported-hyperparameters) for more detail. <br> There are two types of hyperparameters: <br> - **Discrete Hyperparameters**: Discrete hyperparameters are specified as a [`choice`](./reference-yaml-job-sweep.md#choice) among discrete values. `choice` can be one or more comma-separated values, a `range` object, or any arbitrary `list` object. Advanced discrete hyperparameters can also be specified using a distribution - [`randint`](./reference-yaml-job-sweep.md#randint), [`qlognormal`, `qnormal`](./reference-yaml-job-sweep.md#qlognormal-qnormal), [`qloguniform`, `quniform`](./reference-yaml-job-sweep.md#qloguniform-quniform). For more information, see this [section](./how-to-tune-hyperparameters.md#discrete-hyperparameters). <br> - **Continuous hyperparameters**: Continuous hyperparameters are specified as a distribution over a continuous range of values. Currently supported distributions are - [`lognormal`, `normal`](./reference-yaml-job-sweep.md#lognormal-normal), [`loguniform`](./reference-yaml-job-sweep.md#loguniform), [`uniform`](./reference-yaml-job-sweep.md#uniform). For more information, see this [section](./how-to-tune-hyperparameters.md#continuous-hyperparameters). <br> <br> See [parameter expressions](./reference-yaml-job-sweep.md#parameter-expressions) for the set of possible expressions to use. | | |
+| `outputs` | object | Dictionary of output configurations of the job. The key is a name for the output within the context of the job and the value is the output configuration. | | |
+| `outputs.best_model` | object | Dictionary of output configurations for best model. For more information, see [Best model output configuration](#best-model-output-configuration). | | |
+
+Other syntax used in configurations:
+
+### Limits
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `max_concurrent_trials` | integer | Represents the maximum number of trials (child jobs) that can execute in parallel. | | `1` |
+| `max_trials` | integer | Represents the maximum number of trials an AutoML NLP job can try to run a training algorithm with different combinations of hyperparameters. | | `1` |
+| `timeout_minutes` | integer | Represents the maximum amount of time in minutes that the submitted AutoML NLP job can take to run. After this limit is reached, the job is terminated. The default timeout in AutoML NLP jobs is 7 days. | | `10080`|
+| `trial_timeout_minutes` | integer | Represents the maximum amount of time in minutes that each trial (child job) in the submitted AutoML job can take to run. After this limit is reached, the child job is terminated. | | |
+|`max_nodes`| integer | The maximum number of nodes from the backing compute cluster to use for the job.| | `1` |
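+
+For example, a hypothetical `limits` section combining these keys might look like:
+
+```yaml
+limits:
+  max_concurrent_trials: 2
+  max_trials: 4
+  max_nodes: 4
+  timeout_minutes: 120
+  trial_timeout_minutes: 60
+```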
+
+### Supported hyperparameters
+
+The following table describes the hyperparameters that AutoML NLP supports.
+
+| Parameter name | Description | Syntax |
+|-|||
+| gradient_accumulation_steps | The number of backward operations whose gradients are to be summed up before performing one step of gradient descent by calling the optimizer's step function. <br><br> This is used to achieve an effective batch size that is gradient_accumulation_steps times larger than the maximum size that fits the GPU. | Must be a positive integer. |
+| learning_rate | Initial learning rate. | Must be a float in the range (0, 1). |
+| learning_rate_scheduler |Type of learning rate scheduler. | Must choose from `linear, cosine, cosine_with_restarts, polynomial, constant, constant_with_warmup`. |
+| model_name | Name of one of the supported models. | Must choose from `bert_base_cased, bert_base_uncased, bert_base_multilingual_cased, bert_base_german_cased, bert_large_cased, bert_large_uncased, distilbert_base_cased, distilbert_base_uncased, roberta_base, roberta_large, distilroberta_base, xlm_roberta_base, xlm_roberta_large, xlnet_base_cased, xlnet_large_cased`. |
+| number_of_epochs | Number of training epochs. | Must be a positive integer. |
+| training_batch_size | Training batch size. | Must be a positive integer. |
+| validation_batch_size | Validation batch size. | Must be a positive integer. |
+| warmup_ratio | Ratio of total training steps used for a linear warmup from 0 to learning_rate. | Must be a float in the range [0, 1]. |
+| weight_decay | Value of weight decay when optimizer is sgd, adam, or adamw. | Must be a float in the range [0, 1]. |
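+
+For example, a hypothetical `sweep` and `search_space` configuration using these hyperparameters might look like the following sketch (values are illustrative placeholders):
+
+```yaml
+sweep:
+  sampling_algorithm: grid
+  early_termination:
+    type: bandit
+    evaluation_interval: 2
+    slack_factor: 0.05
+    delay_evaluation: 6
+search_space:
+  - model_name:
+      type: choice
+      values: [bert_base_cased, bert_large_cased]
+    number_of_epochs:
+      type: choice
+      values: [3, 4]
+```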
+
+### Training or validation data
+
+| Key | Type | Description | Allowed values | Default value |
+| | - | -- | -- | - |
+| `description` | string | The detailed information that describes this input data. | | |
+| `path` | string | The path from where data should be loaded. The path can be a `file` path, a `folder` path, or a `pattern` for paths. `pattern` specifies a search pattern to allow globbing (`*` and `**`) of files and folders containing data. Supported URI types are `azureml`, `https`, `wasbs`, `abfss`, and `adl`. For more information on how to use the `azureml://` URI format, see [core yaml syntax](./reference-yaml-core-syntax.md). If the URI doesn't have a scheme (for example, `http:` or `azureml:`), it's considered a local reference, and the file it points to is uploaded to the default workspace blob storage as the entity is created. | | |
+| `mode` | string | Dataset delivery mechanism. | `direct` | `direct` |
+| `type` | const | To generate NLP models, you must provide training data in the form of an MLTable. For more information, see [preparing data](./how-to-auto-train-nlp-models.md#preparing-data). | `mltable` | `mltable` |
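+
+For example, hypothetical `training_data` and `validation_data` sections using these keys might look like the following (the MLTable folder paths are placeholders):
+
+```yaml
+training_data:
+  path: ./training-mltable-folder
+  type: mltable
+  mode: direct
+validation_data:
+  path: ./validation-mltable-folder
+  type: mltable
+  mode: direct
+```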
+
+### Best model output configuration
+
+| Key | Type | Description | Allowed values |Default value |
+| | - | -- | -- | |
+| `type` | string | **Required.** Type of best model. AutoML allows only mlflow models. | `mlflow_model` | `mlflow_model` |
+| `path` | string | **Required.** URI of the location where the model-artifact file(s) are stored. If this URI doesn't have a scheme (for example, `http:` or `azureml:`), it's considered a local reference, and the file it points to is uploaded to the default workspace blob storage as the entity is created. | | |
+| `storage_uri` | string | The HTTP URL of the model. Use this URL with `az storage copy -s THIS_URL -d DESTINATION_PATH --recursive` to download the data. | | |
+
+## Remarks
+
+The `az ml job` command can be used for managing Azure Machine Learning jobs. For example, `az ml job create --file <your-job>.yml` submits a job defined in a YAML file.
+
+## Examples
+
+Examples are available in the [examples GitHub repository](https://github.com/Azure/azureml-examples/tree/main/cli/jobs). Examples relevant to the text classification job are shown below.
+
+## YAML: AutoML text classification job
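+
+A minimal sketch of a text classification job, assuming a compute cluster named `gpu-cluster`, a label column named `Sentiment`, and local MLTable folders (all placeholders); the official samples in the repository linked above are the authoritative versions:
+
+```yaml
+$schema: https://azuremlsdk2.blob.core.windows.net/preview/0.0.1/autoMLJob.schema.json
+type: automl
+task: text_classification
+experiment_name: automl-text-classification
+compute: azureml:gpu-cluster
+primary_metric: accuracy
+target_column_name: Sentiment
+training_data:
+  path: ./training-mltable-folder
+  type: mltable
+validation_data:
+  path: ./validation-mltable-folder
+  type: mltable
+limits:
+  timeout_minutes: 120
+```
+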
+## YAML: AutoML text classification pipeline job
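+
+A minimal sketch of a pipeline job containing an AutoML text classification node (node, compute, column, and path names are placeholders):
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
+type: pipeline
+experiment_name: pipeline-automl-text-classification
+settings:
+  default_compute: azureml:gpu-cluster
+jobs:
+  text_classification_node:
+    type: automl
+    task: text_classification
+    primary_metric: accuracy
+    target_column_name: Sentiment
+    training_data:
+      path: ./training-mltable-folder
+      type: mltable
+    validation_data:
+      path: ./validation-mltable-folder
+      type: mltable
+```
+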
+## Next steps
+
+- [Install and use the CLI (v2)](how-to-configure-cli.md)
machine-learning Reference Machine Learning Cloud Parity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-machine-learning-cloud-parity.md
The information in the rest of this document provides information on what featur
* For Azure Machine Learning compute instances, the ability to refresh a token lasting more than 24 hours is not available in Azure Government.
* Model Profiling does not support 4 CPUs in the US-Arizona region.
* Sample notebooks may not work in Azure Government if they need access to public data.
-* IP addresses: The CLI command used in the [required public internet access](how-to-secure-training-vnet.md#required-public-internet-access) instructions does not return IP ranges. Use the [Azure IP ranges and service tags for Azure Government](https://www.microsoft.com/download/details.aspx?id=57063) instead.
+* IP addresses: The CLI command used in the [required public internet access](how-to-secure-training-vnet.md#required-public-internet-access-to-train-models) instructions does not return IP ranges. Use the [Azure IP ranges and service tags for Azure Government](https://www.microsoft.com/download/details.aspx?id=57063) instead.
* For scheduled pipelines, we also provide a blob-based trigger mechanism. This mechanism is not supported for CMK workspaces. For enabling a blob-based trigger for CMK workspaces, you have to do extra setup. For more information, see [Trigger a run of a machine learning pipeline from a Logic App (SDK/CLI v1)](v1/how-to-trigger-published-pipeline.md). * Firewalls: When using an Azure Government region, add the following hosts to your firewall setting:
The information in the rest of this document provides information on what featur
| Azure Active Directory | `https://login.microsoftonline.com` | `https://login.chinacloudapi.cn` |
* Sample notebooks may not work if they need access to public data.
-* IP address ranges: The CLI command used in the [required public internet access](how-to-secure-training-vnet.md#required-public-internet-access) instructions does not return IP ranges. Use the [Azure IP ranges and service tags for Azure China](https://www.microsoft.com//download/details.aspx?id=57062) instead.
+* IP address ranges: The CLI command used in the [required public internet access](how-to-secure-training-vnet.md#required-public-internet-access-to-train-models) instructions does not return IP ranges. Use the [Azure IP ranges and service tags for Azure China](https://www.microsoft.com//download/details.aspx?id=57062) instead.
* Azure Machine Learning compute instance preview is not currently supported in a workspace where Private Endpoint is enabled, but compute instance will be supported in the next deployment for the service expansion to all Azure Machine Learning regions.
* Searching for assets in the web UI with Chinese characters will not work correctly.
machine-learning Reference Yaml Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-yaml-data.md
Last updated 03/31/2022-+ # CLI (v2) data YAML schema
machine-learning Tutorial Auto Train Image Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-auto-train-image-models.md
-+ Last updated 05/26/2022
This compute is used later while creating the task specific `automl` job.
## Experiment setup
-You can use an Experiment to track your model training runs.
+You can use an Experiment to track your model training jobs.
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
You can create data inputs from training and validation MLTable with the followi
## Configure your object detection experiment
-To configure automated ML runs for image-related tasks, create a task specific AutoML job.
+To configure automated ML jobs for image-related tasks, create a task specific AutoML job.
# [Azure CLI](#tab/cli) [!INCLUDE [cli v2](../../includes/machine-learning-cli-v2.md)]
When you've configured your AutoML Job to the desired settings, you can submit t
### Manual hyperparameter sweeping for image tasks
-In your AutoML job, you can specify the model algorithms by using `model_name` parameter and configure the settings to perform a hyperparameter sweep over a defined search space to find the optimal model.
+In your AutoML job, you can specify the model architectures by using `model_name` parameter and configure the settings to perform a hyperparameter sweep over a defined search space to find the optimal model.
In this example, we will train an object detection model with `yolov5` and `fasterrcnn_resnet50_fpn`, both of which are pretrained on COCO, a large-scale object detection, segmentation, and captioning dataset that contains thousands of labeled images with over 80 label categories.
limits:
-The following code defines the search space in preparation for the hyperparameter sweep for each defined algorithm, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each algorithm.
+The following code defines the search space in preparation for the hyperparameter sweep for each defined architecture, `yolov5` and `fasterrcnn_resnet50_fpn`. In the search space, specify the range of values for `learning_rate`, `optimizer`, `lr_scheduler`, etc., for AutoML to choose from as it attempts to generate a model with the optimal primary metric. If hyperparameter values are not specified, then default values are used for each architecture.
For the tuning settings, use random sampling to pick samples from this parameter space by using the `random` sampling_algorithm. The job limits configured above tell automated ML to try a total of 10 trials with these different samples, running two trials at a time on our compute target, which was set up using four nodes. The more parameters the search space has, the more trials you need to find optimal models.
-The Bandit early termination policy is also used. This policy terminates poor performing configurations; that is, those configurations that are not within 20% slack of the best performing configuration, which significantly saves compute resources.
+The Bandit early termination policy is also used. This policy terminates poor performing trials; that is, those trials that are not within 20% slack of the best performing trial, which significantly saves compute resources.
# [Azure CLI](#tab/cli)
When you've configured your AutoML Job to the desired settings, you can submit t
-When doing a hyperparameter sweep, it can be useful to visualize the different configurations that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child runs' tab in the UI of the main automl_image_run from above, which is the HyperDrive parent run. Then you can go into the 'Child runs' tab of this one.
+When doing a hyperparameter sweep, it can be useful to visualize the different trials that were tried using the HyperDrive UI. You can navigate to this UI by going to the 'Child jobs' tab in the UI of the main automl_image_job from above, which is the HyperDrive parent job. Then you can go into the 'Child jobs' tab of this one.
-Alternatively, here below you can see directly the HyperDrive parent run and navigate to its 'Child runs' tab:
+Alternatively, here below you can see directly the HyperDrive parent job and navigate to its 'Child jobs' tab:
# [Azure CLI](#tab/cli)
hd_job
## Register and deploy model
-Once the run completes, you can register the model that was created from the best run (configuration that resulted in the best primary metric). You can either register the model after downloading or by specifying the azureml path with corresponding jobid.
+Once the job completes, you can register the model that was created from the best trial (configuration that resulted in the best primary metric). You can either register the model after downloading or by specifying the azureml path with corresponding jobid.
-### Get the best run
+### Get the best trial
# [Azure CLI](#tab/cli)
machine-learning Tutorial Train Deploy Image Classification Model Vscode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-train-deploy-image-classification-model-vscode.md
+ Last updated 05/25/2021 #Customer intent: As a professional data scientist, I want to learn how to train an image classification model using TensorFlow and the Azure Machine Learning Visual Studio Code Extension.
machine-learning How To Access Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-access-data.md
Azure Machine Learning requires extra configuration steps to communicate with a
Azure Machine Learning can receive requests from clients outside of the virtual network. To ensure that the entity requesting data from the service is safe and to enable data being displayed in your workspace, [use a private endpoint with your workspace](../how-to-configure-private-link.md).
-**For Python SDK users**, to access your data via your training script on a compute target, the compute target needs to be inside the same virtual network and subnet of the storage. You can [use a compute cluster in the same virtual network](../how-to-secure-training-vnet.md#compute-cluster) or [use a compute instance in the same virtual network](../how-to-secure-training-vnet.md#compute-instance).
+**For Python SDK users**, to access your data via your training script on a compute target, the compute target needs to be inside the same virtual network and subnet as the storage. You can [use a compute instance/cluster in the same virtual network](how-to-secure-training-vnet.md).
**For Azure Machine Learning studio users**, several features rely on the ability to read data from a dataset, such as dataset previews, profiles, and automated machine learning. For these features to work with storage behind virtual networks, use a [workspace managed identity in the studio](../how-to-enable-studio-virtual-network.md) to allow Azure Machine Learning to access the storage account from outside the virtual network.
machine-learning How To Create Attach Compute Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-create-attach-compute-cluster.md
In this article, learn how to:
## What is a compute cluster?
-Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single or multi-node compute. The compute cluster is a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be put in an Azure Virtual Network. Compute cluster supports **no public IP (preview)** deployment as well in virtual network. The compute executes in a containerized environment and packages your model dependencies in a [Docker container](https://www.docker.com/why-docker).
+Azure Machine Learning compute cluster is a managed-compute infrastructure that allows you to easily create a single or multi-node compute. The compute cluster is a resource that can be shared with other users in your workspace. The compute scales up automatically when a job is submitted, and can be put in an Azure Virtual Network. Compute cluster also supports **no public IP** deployment in a virtual network. The compute executes in a containerized environment and packages your model dependencies in a [Docker container](https://www.docker.com/why-docker).
Compute clusters can run jobs securely in a [virtual network environment](../how-to-secure-training-vnet.md), without requiring enterprises to open up SSH ports. The job executes in a containerized environment and packages your model dependencies in a Docker container.
machine-learning How To Network Security Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-network-security-overview.md
In this section, you learn how to secure the training environment in Azure Machi
To secure the training environment, use the following steps:
-1. Create an Azure Machine Learning [compute instance and computer cluster in the virtual network](how-to-secure-training-vnet.md#compute-cluster) to run the training job.
-1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access) so that management services can submit jobs to your compute resources.
+1. Create an Azure Machine Learning [compute instance and computer cluster in the virtual network](how-to-secure-training-vnet.md) to run the training job.
+1. If your compute cluster or compute instance uses a public IP address, you must [Allow inbound communication](how-to-secure-training-vnet.md#required-public-internet-access-to-train-models) so that management services can submit jobs to your compute resources.
> [!TIP] > Compute cluster and compute instance can be created with or without a public IP address. If created with a public IP address, you get a load balancer with a public IP to accept the inbound access from Azure batch service and Azure Machine Learning service. You need to configure User Defined Routing (UDR) if you use a firewall. If created without a public IP, you get a private link service to accept the inbound access from Azure batch service and Azure Machine Learning service without a public IP.
machine-learning How To Secure Training Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-secure-training-vnet.md
In this article, you learn how to secure training environments with a virtual network in Azure Machine Learning using the Python SDK v1.
+Azure Machine Learning compute instance and compute cluster can be used to securely train models in a virtual network. When planning your environment, you can configure the compute instance/cluster with or without a public IP address. The general differences between the two are:
+
+* **No public IP**: Reduces costs as it doesn't have the same networking resource requirements. Improves security by removing the requirement for inbound traffic from the internet. However, there are additional configuration changes required to enable outbound access to required resources (Azure Active Directory, Azure Resource Manager, etc.).
+* **Public IP**: Works by default, but costs more due to additional Azure networking resources. Requires inbound communication from the Azure Machine Learning service over the public internet.
+
+The following table contains the differences between these configurations:
+
+| Configuration | With public IP | Without public IP |
+| -- | -- | -- |
+| Inbound traffic | AzureMachineLearning | None |
+| Outbound traffic | By default, can access the public internet with no restrictions.<br>You can restrict what it accesses using a Network Security Group or firewall. | By default, it cannot access the public internet since there is no public IP resource.<br>You need a Virtual Network NAT gateway or Firewall to route outbound traffic to required resources on the internet. |
+| Azure networking resources | Public IP address, load balancer, network interface | None |
+
+You can also use Azure Databricks or HDInsight to train models in a virtual network.
+ > [!TIP] > For information on using the Azure Machine Learning __studio__ and the Python SDK __v2__, see [Secure training environment (v2)](../how-to-secure-training-vnet.md). >
In this article you learn how to secure the following training compute resources
+ Read the [Network security overview](how-to-network-security-overview.md) article to understand common virtual network scenarios and overall virtual network architecture.
-+ An existing virtual network and subnet to use with your compute resources.
++ An existing virtual network and subnet to use with your compute resources. This VNet must be in the same subscription as your Azure Machine Learning workspace.
+
+ - We recommend putting the storage accounts used by your workspace and training jobs in the same Azure region that you plan to use for your compute instances and clusters. If they aren't in the same Azure region, you may incur data transfer costs and increased network latency.
+ - Make sure that **WebSocket** communication is allowed to `*.instances.azureml.net` and `*.instances.azureml.ms` in your VNet. WebSockets are used by Jupyter on compute instances.
+
++ An existing subnet in the virtual network. This subnet is used when creating compute instances and clusters.
+
+ - Make sure that the subnet isn't delegated to other Azure services.
+ - Make sure that the subnet contains enough free IP addresses. Each compute instance requires one IP address. Each *node* within a compute cluster requires one IP address.
+
++ If you have your own DNS server, we recommend using DNS forwarding to resolve the fully qualified domain names (FQDN) of compute instances and clusters. For more information, see [Use a custom DNS with Azure Machine Learning](../how-to-custom-dns.md).
+
++ To deploy resources into a virtual network or subnet, your user account must have permissions to the following actions in Azure role-based access control (Azure RBAC):
+
+ - "Microsoft.Network/virtualNetworks/*/read" on the virtual network resource. This permission isn't needed for Azure Resource Manager (ARM) template deployments.
+ - "Microsoft.Network/virtualNetworks/subnet/join/action" on the subnet resource.
- For more information on Azure RBAC with networking, see the [Networking built-in roles](../../role-based-access-control/built-in-roles.md#networking)
+ For more information on Azure RBAC with networking, see the [Networking built-in roles](/azure/role-based-access-control/built-in-roles#networking).
+
+## Limitations
### Azure Machine Learning compute cluster/instance
-* Compute clusters and instances create the following resources. If they're unable to create these resources (for example, if there's a resource lock on the resource group) then creation, scale out, or scale in, may fail.
+* __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply:
- * IP address.
- * Network Security Group (NSG).
- * Load balancer.
+ * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](/azure/virtual-network/virtual-network-peering-overview).
+ * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
-* The virtual network must be in the same subscription as the Azure Machine Learning workspace.
-* The subnet used for the compute instance or cluster must have enough unassigned IP addresses.
+ Guidance such as using NSG rules, user-defined routes, and input/output requirements applies as normal when using a different region than the workspace.
- * A compute cluster can dynamically scale. If there aren't enough unassigned IP addresses, the cluster will be partially allocated.
- * A compute instance only requires one IP address.
+ > [!WARNING]
+ > If you are using a __private endpoint-enabled workspace__, creating the cluster in a different region is __not supported__.
-* To create a compute cluster or instance [without a public IP address](#no-public-ip-for-compute-clusters-preview) (a preview feature), your workspace must use a private endpoint to connect to the VNet. For more information, see [Configure a private endpoint for Azure Machine Learning workspace](how-to-configure-private-link.md).
-* If you plan to secure the virtual network by restricting traffic, see the [Required public internet access](#required-public-internet-access) section.
-* The subnet used to deploy compute cluster/instance shouldn't be delegated to any other service. For example, it shouldn't be delegated to ACI.
+* Compute cluster/instance deployment in virtual network isn't supported with Azure Lighthouse.
### Azure Databricks

* The virtual network must be in the same subscription and region as the Azure Machine Learning workspace.
* If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster.
+* In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
+* Azure Databricks doesn't use a private endpoint to communicate with the virtual network.
-## Limitations
-
-### Azure Machine Learning compute cluster/instance
+For more information on using Azure Databricks in a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
-* If put multiple compute instances or clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
+### Azure HDInsight or virtual machine
- * One network security group (NSG). This NSG contains the following rules, which are specific to compute cluster and compute instance:
+* Azure Machine Learning supports only virtual machines that are running Ubuntu.
- * Allow inbound TCP traffic on ports 29876-29877 from the `BatchNodeManagement` service tag.
- * Allow inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
- The following screenshot shows an example of these rules:
+## Compute instance/cluster with no public IP
- :::image type="content" source="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" lightbox="./media/how-to-secure-training-vnet/compute-instance-cluster-network-security-group.png" alt-text="Screenshot of the network security group.":::
+To create a compute instance or compute cluster with no public IP, use the Azure Machine Learning studio UI to create the resource:
+1. Sign in to the [Azure Machine Learning studio](https://ml.azure.com), and then select your subscription and workspace.
+1. Select the **Compute** page from the left navigation bar.
+1. Select **+ New** from the navigation bar of compute instance or compute cluster.
+1. Configure the VM size and configuration you need, then select **Next**.
+1. From the **Advanced Settings**, select **Enable virtual network**, select your virtual network and subnet, and finally select the **No Public IP** option under the VNet/subnet section.
- > [!TIP]
- > If your compute cluster or instance does not use a public IP address (a preview feature), these inbound NSG rules are not required.
-
- * For compute cluster or instance, it's now possible to remove the public IP address (a preview feature). If you have Azure Policy assignments prohibiting Public IP creation, then deployment of the compute cluster or instance will succeed.
+ :::image type="content" source="../media/how-to-secure-training-vnet/no-public-ip.png" alt-text="A screenshot of how to configure no public IP for compute instance and compute cluster." lightbox="../media/how-to-secure-training-vnet/no-public-ip.png":::
- * One load balancer
+> [!TIP]
+> You can also use the Azure Machine Learning SDK v2 or Azure CLI extension for ML v2. For information on creating a compute instance or cluster with no public IP, see the v2 version of [Secure an Azure Machine Learning training environment](../how-to-secure-training-vnet.md) article.
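+
+As a hedged sketch of the CLI v2 approach, a compute cluster YAML with no public IP might look like the following (the cluster, VNet, and subnet names are placeholders); it could then be created with `az ml compute create --file compute.yml`:
+
+```yaml
+$schema: https://azuremlschemas.azureedge.net/latest/amlCompute.schema.json
+name: cpu-cluster-no-public-ip
+type: amlcompute
+size: STANDARD_DS3_V2
+min_instances: 0
+max_instances: 2
+network_settings:
+  vnet_name: my-vnet
+  subnet: my-subnet
+enable_node_public_ip: false
+```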
- For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
- For a compute instance, these resources are kept until the instance is deleted. Stopping the instance doesn't remove the resources.
+## Compute instance/cluster with public IP
- > [!IMPORTANT]
- > These resources are limited by the subscription's [resource quotas](../../azure-resource-manager/management/azure-subscription-service-limits.md). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure Policy assignment which prohibits creation of network security groups.
+The following configurations are in addition to those listed in the [Prerequisites](#prerequisites) section, and are specific to **creating** compute instances/clusters that have a public IP:
-* If you create a compute instance and plan to use the no public IP address configuration, your Azure Machine Learning workspace's managed identity must be assigned the __Reader__ role for the virtual network that contains the workspace. For more information on assigning roles, see [Steps to assign an Azure role](../../role-based-access-control/role-assignments-steps.md).
++ If you put multiple compute instances/clusters in one virtual network, you may need to request a quota increase for one or more of your resources. The Machine Learning compute instance or cluster automatically allocates networking resources __in the resource group that contains the virtual network__. For each compute instance or cluster, the service allocates the following resources:
-* If you have configured Azure Container Registry for your workspace behind the virtual network, you must use a compute cluster to build Docker images. You can't use a compute cluster with the no public IP address configuration. For more information, see [Enable Azure Container Registry](../how-to-secure-workspace-vnet.md#enable-azure-container-registry-acr).
+ * A network security group (NSG) is automatically created. This NSG allows inbound TCP traffic on port 44224 from the `AzureMachineLearning` service tag.
-* If the Azure Storage Accounts for the workspace are also in the virtual network, use the following guidance on subnet limitations:
+ > [!IMPORTANT]
+ > Compute instance and compute cluster automatically create an NSG with the required rules.
+ >
+ > If you have another NSG at the subnet level, the rules in the subnet level NSG mustn't conflict with the rules in the automatically created NSG.
+ >
+ > To learn how the NSGs filter your network traffic, see [How network security groups filter network traffic](/azure/virtual-network/network-security-group-how-it-works).
- * If you plan to use Azure Machine Learning __studio__ to visualize data or use designer, the storage account must be __in the same subnet as the compute instance or cluster__.
- * If you plan to use the __SDK__, the storage account can be in a different subnet.
+ * One load balancer
- > [!NOTE]
- > Adding a resource instance for your workspace or selecting the checkbox for "Allow trusted Microsoft services to access this account" is not sufficient to allow communication from the compute.
+ For compute clusters, these resources are deleted every time the cluster scales down to 0 nodes and created when scaling up.
-* When your workspace uses a private endpoint, the compute instance can only be accessed from inside the virtual network. If you use a custom DNS or hosts file, add an entry for `<instance-name>.<region>.instances.azureml.ms`. Map this entry to the private IP address of the workspace private endpoint. For more information, see the [custom DNS](../how-to-custom-dns.md) article.
-* Virtual network service endpoint policies don't work for compute cluster/instance system storage accounts.
-* If storage and compute instance are in different regions, you may see intermittent timeouts.
-* If the Azure Container Registry for your workspace uses a private endpoint to connect to the virtual network, you canΓÇÖt use a managed identity for the compute instance. To use a managed identity with the compute instance, don't put the container registry in the VNet.
-* If you want to use Jupyter Notebooks on a compute instance:
+ For a compute instance, these resources are kept until the instance is deleted. Stopping the instance doesn't remove the resources.
- * Don't disable websocket communication. Make sure your network allows websocket communication to `*.instances.azureml.net` and `*.instances.azureml.ms`.
- * Make sure that your notebook is running on a compute resource behind the same virtual network and subnet as your data. When creating the compute instance, use **Advanced settings** > **Configure virtual network** to select the network and subnet.
+ > [!IMPORTANT]
+ > These resources are limited by the subscription's [resource quotas](/azure/azure-resource-manager/management/azure-subscription-service-limits). If the virtual network resource group is locked then deletion of compute cluster/instance will fail. Load balancer cannot be deleted until the compute cluster/instance is deleted. Also please ensure there is no Azure Policy assignment which prohibits creation of network security groups.
-* __Compute clusters__ can be created in a different region than your workspace. This functionality is in __preview__, and is only available for __compute clusters__, not compute instances. When using a different region for the cluster, the following limitations apply:
++ In your VNet, allow **inbound** TCP traffic on port **44224** from the `AzureMachineLearning` service tag.
+ > [!IMPORTANT]
+ > The compute instance/cluster is dynamically assigned an IP address when it is created. Since the address is not known before creation, and inbound access is required as part of the creation process, you cannot statically assign it on your firewall. Instead, if you are using a firewall with the VNet you must create a user-defined route to allow this inbound traffic.
++ In your VNet, allow **outbound** traffic to the following service tags:
- * If your workspace associated resources, such as storage, are in a different virtual network than the cluster, set up global virtual network peering between the networks. For more information, see [Virtual network peering](../../virtual-network/virtual-network-peering-overview.md).
- * You may see increased network latency and data transfer costs. The latency and costs can occur when creating the cluster, and when running jobs on it.
+ | Service tag | Protocol | Port | Notes |
+ | -- |:--:|:--:| -- |
+ | `AzureMachineLearning` | TCP<br>UDP | 443/8787/18881<br>5831 | Communication with the Azure Machine Learning service.|
+ | `BatchNodeManagement.<region>` | ANY | 443| Replace `<region>` with the Azure region that contains your Azure Machine learning workspace. Communication with Azure Batch. Compute instance and compute cluster are implemented using the Azure Batch service.|
+ | `Storage.<region>` | TCP | 443 | Replace `<region>` with the Azure region that contains your Azure Machine learning workspace. This service tag is used to communicate with the Azure Storage account used by Azure Batch. |
- Guidance such as using NSG rules, user-defined routes, and input/output requirements, apply as normal when using a different region than the workspace.
+ > [!IMPORTANT]
+ > The outbound access to `Storage.<region>` could potentially be used to exfiltrate data from your workspace. By using a Service Endpoint Policy, you can mitigate this vulnerability. For more information, see the [Azure Machine Learning data exfiltration prevention](../how-to-prevent-data-loss-exfiltration.md) article.
- > [!WARNING]
- > If you are using a __private endpoint-enabled workspace__, creating the cluster in a different region is __not supported__.
+ | FQDN | Protocol | Port | Notes |
+ | - |:-:|:-:| - |
+ | `<region>.tundra.azureml.ms` | UDP | 5831 | Replace `<region>` with the Azure region that contains your Azure Machine learning workspace. |
+ | `graph.windows.net` | TCP | 443 | Communication with the Microsoft Graph API.|
+ | `*.instances.azureml.ms` | TCP | 443/8787/18881 | Communication with Azure Machine Learning. |
+ | `<region>.batch.azure.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine learning workspace. Communication with Azure Batch. |
+ | `<region>.service.batch.com` | ANY | 443 | Replace `<region>` with the Azure region that contains your Azure Machine learning workspace. Communication with Azure Batch. |
+ | `*.blob.core.windows.net` | TCP | 443 | Communication with Azure Blob storage. |
+ | `*.queue.core.windows.net` | TCP | 443 | Communication with Azure Queue storage. |
+ | `*.table.core.windows.net` | TCP | 443 | Communication with Azure Table storage. |
-### Azure Databricks
+# [Compute instance](#tab/instance)
-* In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
-* Azure Databricks doesn't use a private endpoint to communicate with the virtual network.
-
-For more information on using Azure Databricks in a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
-
-### Azure HDInsight or virtual machine
-* Azure Machine Learning supports only virtual machines that are running Ubuntu.
+```python
+import datetime
+import time
-## Required public internet access
+from azureml.core.compute import ComputeTarget, ComputeInstance
+from azureml.core.compute_target import ComputeTargetException
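+
+# 'ws' is assumed to be an existing azureml.core.Workspace object,
+# for example one loaded earlier via Workspace.from_config().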
+# Choose a name for your instance
+# Compute instance name should be unique across the azure region
+compute_name = "ci{}".format(ws._workspace_id)[:10]
-For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](../how-to-access-azureml-behind-firewall.md).
+# Verify that instance does not exist already
+try:
+ instance = ComputeInstance(workspace=ws, name=compute_name)
+ print('Found existing instance, use it.')
+except ComputeTargetException:
+ compute_config = ComputeInstance.provisioning_configuration(
+ vm_size='STANDARD_D3_V2',
+ ssh_public_access=False,
+ vnet_resourcegroup_name='vnet_resourcegroup_name',
+ vnet_name='vnet_name',
+ subnet_name='subnet_name',
+ # admin_user_ssh_public_key='<my-sshkey>'
+ )
+ instance = ComputeInstance.create(ws, compute_name, compute_config)
+ instance.wait_for_completion(show_output=True)
+```
-## Compute cluster
+# [Compute cluster](#tab/cluster)
[!INCLUDE [sdk v1](../../../includes/machine-learning-sdk-v1.md)]
-The following code creates a new Machine Learning Compute cluster in the `default` subnet of a virtual network named `mynetwork`:
- ```python from azureml.core.compute import ComputeTarget, AmlCompute from azureml.core.compute_target import ComputeTargetException
except ComputeTargetException:
# Wait for the cluster to be completed, show the output log cpu_cluster.wait_for_completion(show_output=True) ```+
-When the creation process finishes, you train your model by using the cluster in an experiment. For more information, see [Select and use a compute target for training](../how-to-set-up-training-targets.md).
+When the creation process finishes, you train your model. For more information, see [Select and use a compute target for training](how-to-set-up-training-targets.md).
[!INCLUDE [low-pri-note](../../../includes/machine-learning-low-pri-vm.md)]
-### No public IP for compute clusters (preview)
-
-When you enable **No public IP**, your compute cluster doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute cluster nodes from the internet thus eliminating a significant threat vector. **No public IP** clusters help comply with no public IP policies many enterprises have.
-
-> [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Cluster. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) with a public IP.
-
-A compute cluster with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork** and any port source, to destination of **VirtualNetwork**, and destination port of **29876, 29877** and inbound from source **AzureLoadBalancer** and any port source to destination **VirtualNetwork** and port **44224** destination.
-
-**No public IP** clusters are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
-A compute cluster with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service](../../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
-
-For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](../how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute cluster is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
-
-You can use a service endpoint or private endpoint for your Azure container registry and Azure storage in the subnet in which cluster is deployed.
+## Azure Databricks
-To create a no public IP address compute cluster (a preview feature) in studio, set **No public IP** checkbox in the virtual network section.
-You can also create no public IP compute cluster through an ARM template. In the ARM template set enableNodePublicIP parameter to false.
--
-**Troubleshooting**
-
-* If you get this error message during creation of cluster `The specified subnet has PrivateLinkServiceNetworkPolicies or PrivateEndpointNetworkEndpoints enabled`, follow the instructions from [Disable network policies for Private Link service](../../private-link/disable-private-link-service-network-policy.md) and [Disable network policies for Private Endpoint](../../private-link/disable-private-endpoint-network-policy.md).
-
-* If job execution fails with connection issues to ACR or Azure Storage, verify that customer has added ACR and Azure Storage service endpoint/private endpoints to subnet and ACR/Azure Storage allows the access from the subnet.
-
-* To ensure that you've created a no public IP cluster, in Studio when looking at cluster details you'll see **No Public IP** property is set to **true** under resource properties.
-
-## Compute instance
-
-For steps on how to create a compute instance deployed in a virtual network, see [Create and manage an Azure Machine Learning compute instance](how-to-create-manage-compute-instance.md).
-
-### No public IP for compute instances (preview)
-
-When you enable **No public IP**, your compute instance doesn't use a public IP for communication with any dependencies. Instead, it communicates solely within the virtual network using Azure Private Link ecosystem and service/private endpoints, eliminating the need for a public IP entirely. No public IP removes access and discoverability of compute instance node from the internet thus eliminating a significant threat vector. Compute instances will also do packet filtering to reject any traffic from outside virtual network. **No public IP** instances are dependent on [Azure Private Link](how-to-configure-private-link.md) for Azure Machine Learning workspace.
-
-> [!WARNING]
-> By default, you do not have public internet access from No Public IP Compute Instance. You need to configure User Defined Routing (UDR) to reach to a public IP to access the internet. For example, you can use a public IP of your firewall, or you can use [Virtual Network NAT](../../virtual-network/nat-gateway/nat-overview.md) with a public IP.
-
-For **outbound connections** to work, you need to set up an egress firewall such as Azure firewall with user defined routes. For instance, you can use a firewall set up with [inbound/outbound configuration](../how-to-access-azureml-behind-firewall.md) and route traffic there by defining a route table on the subnet in which the compute instance is deployed. The route table entry can set up the next hop of the private IP address of the firewall with the address prefix of 0.0.0.0/0.
-
-A compute instance with **No public IP** enabled has **no inbound communication requirements** from public internet. Specifically, neither inbound NSG rule (`BatchNodeManagement`, `AzureMachineLearning`) is required. You still need to allow inbound from source of **VirtualNetwork**, any port source, destination of **VirtualNetwork**, and destination port of **29876, 29877, 44224**.
-
-A compute instance with **No public IP** also requires you to disable private endpoint network policies and private link service network policies. These requirements come from Azure private link service and private endpoints and aren't Azure Machine Learning specific. Follow instruction from [Disable network policies for Private Link service source IP](../../private-link/disable-private-link-service-network-policy.md) to set the parameters `disable-private-endpoint-network-policies` and `disable-private-link-service-network-policies` on the virtual network subnet.
-
-To create a no public IP address compute instance (a preview feature) in studio, set **No public IP** checkbox in the virtual network section.
-You can also create no public IP compute instance through an ARM template. In the ARM template set enableNodePublicIP parameter to false.
+* The virtual network must be in the same subscription and region as the Azure Machine Learning workspace.
+* If the Azure Storage Account(s) for the workspace are also secured in a virtual network, they must be in the same virtual network as the Azure Databricks cluster.
+* In addition to the __databricks-private__ and __databricks-public__ subnets used by Azure Databricks, the __default__ subnet created for the virtual network is also required.
+* Azure Databricks doesn't use a private endpoint to communicate with the virtual network.
-Next steps:
-* [Use custom DNS](../how-to-custom-dns.md)
-* [Use a firewall](../how-to-access-azureml-behind-firewall.md)
+For specific information on using Azure Databricks with a virtual network, see [Deploy Azure Databricks in your Azure Virtual Network](/azure/databricks/administration-guide/cloud-configurations/azure/vnet-inject).
+## Required public internet access to train models
-## Inbound traffic
+> [!IMPORTANT]
+> While previous sections of this article describe configurations required to **create** compute resources, the configuration information in this section is required to **use** these resources to train models.
-For more information on input and output traffic requirements for Azure Machine Learning, see [Use a workspace behind a firewall](../how-to-access-azureml-behind-firewall.md).
+For information on using a firewall solution, see [Use a firewall with Azure Machine Learning](../how-to-access-azureml-behind-firewall.md).
## Next steps
managed-grafana Concept Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/concept-whats-new.md
+
+ Title: What's new in Azure Managed Grafana
+description: Recent updates for Azure Managed Grafana
++++ Last updated : 01/18/2023+++
+# What's new in Azure Managed Grafana
+
+## January 2023
+
+### Support for Grafana Enterprise
+
+Grafana Enterprise is now supported.
+
+For more information, go to [Subscribe to Grafana Enterprise](how-to-grafana-enterprise.md).
+
+### Support for service accounts
+
+Service accounts are now supported.
+
+For more information, go to [How to use service accounts](how-to-service-accounts.md).
+
+## Next steps
+
+If you have more questions, contact us through [support](https://azure.microsoft.com/support/options/).
marketplace Plan Saas Offer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/marketplace/plan-saas-offer.md
These additional technical requirements apply to the _Sell through Microsoft_ (t
- You must use the [SaaS Fulfillment APIs](./partner-center-portal/pc-saas-fulfillment-apis.md) to integrate with Azure Marketplace and Microsoft AppSource. You must expose a service that can interact with the SaaS subscription to create, update, and delete a user account and service plan. Critical API changes must be supported within 24 hours. Non-critical API changes will be released periodically. Diagrams and detailed explanations describing the usage of the collected fields are available in documentation for the [APIs](./partner-center-portal/pc-saas-fulfillment-apis.md). - You must create at least one plan for your offer. Your plan is priced based on the pricing model you select before publishing: _flat rate_ or _per-user_. More details about [plans](#plans) are provided later in this article. - The customer can cancel your offer at any time.
+> [!NOTE]
+> 2-year and 3-year multi-year SaaS plans with pending payments aren't eligible for cancellation after the standard 72-hour cancellation period has passed. Because multi-year SaaS subscriptions have future payments due, cancellation isn't possible until the current billing term is complete. To request cancellation of a 2-year or 3-year plan beyond the standard 72-hour period, contact [Marketplace support](/marketplace/get-support).
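To make the fulfillment-API requirement above concrete, here's a minimal sketch of the *resolve* call, which exchanges the purchase identification token passed to your landing page for the subscription details. The endpoint and headers follow the published SaaS Fulfillment API v2; the token values are placeholders:

```bash
# Resolve a marketplace purchase token into SaaS subscription details.
# <purchase-identification-token> comes from the landing-page URL;
# <access-token> is an Azure AD access token for your registered app.
curl -X POST \
  "https://marketplaceapi.microsoft.com/api/saas/subscriptions/resolve?api-version=2018-08-31" \
  -H "x-ms-marketplace-token: <purchase-identification-token>" \
  -H "authorization: Bearer <access-token>" \
  -H "content-type: application/json"
```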
### Technical information
You can choose to opt into Microsoft-supported marketing and sales channels. Whe
- [Invoking Metered Billing with the SaaS Accelerator](https://go.microsoft.com/fwlink/?linkid=2196161) - [Configuring Email in the SaaS Accelerator](https://go.microsoft.com/fwlink/?linkid=2196165) - [Custom Landing Page Fields with the SaaS Accelerator](https://go.microsoft.com/fwlink/?linkid=2196166)+
network-watcher Azure Monitor Agent With Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/azure-monitor-agent-with-connection-monitor.md
Title: Monitor network connectivity by using Azure Monitor Agent description: This article describes how to monitor network connectivity in Connection Monitor by using Azure Monitor Agent. -+ Last updated 10/27/2022-+ #Customer intent: I need to monitor a connection by using Azure Monitor Agent.
network-watcher Connection Monitor Connected Machine Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-connected-machine-agent.md
Title: Install the Azure Connected Machine agent for Connection Monitor description: This article describes how to install Azure Connected Machine agent -+ Last updated 10/27/2022-+ #Customer intent: I need to monitor a connection by using Azure Monitor Agent.
network-watcher Connection Monitor Create Using Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-portal.md
Title: Create a connection monitor - Azure portal
description: This article describes how to create a monitor in Connection Monitor by using the Azure portal. -+ Last updated 11/05/2022-+ #Customer intent: I need to create a connection monitor to monitor communication between one VM and another. # Create a monitor in Connection Monitor by using the Azure portal
network-watcher Connection Monitor Create Using Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-powershell.md
description: Learn how to create a connection monitor by using PowerShell. documentationcenter: na-+ na Last updated 01/07/2021-+ #Customer intent: I need to create a connection monitor by using PowerShell to monitor communication between one VM and another.
network-watcher Connection Monitor Create Using Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-create-using-template.md
description: Learn how to create Connection Monitor using the ARMClient. documentationcenter: na-+ na Last updated 02/08/2021 -+ #Customer intent: I need to create a connection monitor to monitor communication between one VM and another. # Create a Connection Monitor using the ARM template
network-watcher Connection Monitor Install Azure Monitor Agent https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-install-azure-monitor-agent.md
Title: Install Azure Monitor Agent for Connection Monitor description: This article describes how to install Azure Monitor Agent. -+ Last updated 10/25/2022-+ #Customer intent: I need to monitor a connection by using Azure Monitor Agent.
network-watcher Connection Monitor Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-overview.md
Title: Connection Monitor in Azure | Microsoft Docs
description: Learn how to use Connection Monitor to monitor network communication in a distributed environment. documentationcenter: na--+ tags: azure-resource-manager na Last updated 10/04/2022-+ #Customer intent: I need to monitor communication between one VM and another. If the communication fails, I need to know why so that I can resolve the problem.
network-watcher Connection Monitor Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-schema.md
Title: Azure Network Watcher Connection Monitor schemas | Microsoft Docs
description: Understand the Tests data schema and the Path data schema of Azure Network Watcher Connection Monitor. documentationcenter: na--+ na Last updated 08/14/2021-+
network-watcher Connection Monitor Virtual Machine Scale Set https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor-virtual-machine-scale-set.md
Title: Tutorial - Monitor network communication between two Virtual Machine Scal
description: In this tutorial, you'll learn how to monitor network communication between two Virtual Machine Scale Sets by using the Azure Network Watcher connection monitor capability. documentationcenter: na-+ tags: azure-resource-manager # Customer intent: I need to monitor communication between a virtual machine scale set and another VM. If the communication fails, I need to know why, so that I can resolve the problem.
na Last updated 10/17/2022-+
network-watcher Connection Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/connection-monitor.md
Title: 'Tutorial: Monitor network communication between two virtual machines using the Azure portal' description: In this tutorial, you learn how to monitor network communication between two virtual machines with Azure Network Watcher's connection monitor capability. -+ tags: azure-resource-manager Last updated 10/28/2022-+ # Customer intent: I need to monitor communication between a VM and another VM. If the communication fails, I need to know why, so that I can resolve the problem.
network-watcher Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/data-residency.md
Title: Data residency for Azure Network Watcher | Microsoft Docs
description: This article will help you understand data residency for the Azure Network Watcher service. documentationcenter: na--+ na Last updated 06/16/2021-+ -- # Data residency for Azure Network Watcher
network-watcher Diagnose Communication Problem Between Networks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-communication-problem-between-networks.md
description: In this tutorial, learn how to diagnose a communication problem between an Azure virtual network connected to an on-premises, or other virtual network, through an Azure virtual network gateway, using Network Watcher's VPN diagnostics capability. documentationcenter: na-+ # Customer intent: I need to determine why resources in a virtual network can't communicate with resources in a different network.
na Last updated 01/07/2021-+
network-watcher Diagnose Vm Network Routing Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-cli.md
description: In this article, you learn how to use Azure CLI to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher. documentationcenter: network-watcher-+ tags: azure-resource-manager # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations. ms.assetid:
network-watcher Last updated 03/18/2022-+
network-watcher Diagnose Vm Network Routing Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem-powershell.md
description: In this article, you learn how to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher. documentationcenter: network-watcher-+ tags: azure-resource-manager # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations. network-watcher Last updated 01/07/2021-+
network-watcher Diagnose Vm Network Routing Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-routing-problem.md
description: In this tutorial, you learn how to diagnose a virtual machine network routing problem using the next hop capability of Azure Network Watcher. documentationcenter: network-watcher-+ tags: azure-resource-manager # Customer intent: I need to diagnose virtual machine (VM) network routing problem that prevents communication to different destinations. network-watcher Last updated 01/07/2021-+
network-watcher Diagnose Vm Network Traffic Filtering Problem Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-cli.md
description: Learn how to use Azure CLI to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher. documentationcenter: network-watcher--+ tags: azure-resource-manager network-watcher Last updated 11/02/2022-+ #Customer intent: I need to diagnose a virtual machine (VM) network traffic filter problem that prevents communication to and from a VM.
network-watcher Diagnose Vm Network Traffic Filtering Problem Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem-powershell.md
description: Learn how to use Azure PowerShell to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher. documentationcenter: network-watcher--++ Last updated 10/12/2022
network-watcher Diagnose Vm Network Traffic Filtering Problem https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/diagnose-vm-network-traffic-filtering-problem.md
description: In this quickstart, you learn how to diagnose a virtual machine network traffic filter problem using the IP flow verify capability of Azure Network Watcher. documentationcenter: network-watcher--++ Last updated 11/18/2022
network-watcher Enable Network Watcher Flow Log Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/enable-network-watcher-flow-log-settings.md
Title: Enable Azure Network Watcher | Microsoft Docs
+ Title: Enable Azure Network Watcher
description: Learn how to enable Network Watcher. documentationcenter: na-+ na Last updated 05/30/2022-+ # Enable Azure Network Watcher
network-watcher Migrate To Connection Monitor From Connection Monitor Classic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-connection-monitor-classic.md
description: Learn how to migrate to Connection Monitor from Connection Monitor (classic). documentationcenter: na-+ na Last updated 06/30/2021-+ #Customer intent: I need to migrate from Connection Monitor (classic) to Connection Monitor.
network-watcher Migrate To Connection Monitor From Network Performance Monitor https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/migrate-to-connection-monitor-from-network-performance-monitor.md
Title: Migrate to Connection Monitor from Network Performance Monitor description: Learn how to migrate to Connection Monitor from Network Performance Monitor.-+ Last updated 06/30/2021-+ #Customer intent: I need to migrate from Network Performance Monitor to Connection Monitor.
network-watcher Network Insights Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-overview.md
Title: Azure Monitor Network Insights
description: An overview of Azure Monitor Network Insights, which provides a comprehensive view of health and metrics for all deployed network resources without any configuration. --++ Last updated 09/28/2022
network-watcher Network Insights Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-topology.md
Title: Network Insights topology
description: An overview of topology, which provides a pictorial representation of the resources. --++ Last updated 11/16/2022
network-watcher Network Insights Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-insights-troubleshooting.md
description: Troubleshooting steps for issues that may arise while using Network
--++ Last updated 09/29/2022
network-watcher Network Watcher Alert Triggered Packet Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-alert-triggered-packet-capture.md
description: This article describes how to create an alert triggered packet capture with Azure Network Watcher documentationcenter: na-+ ms.assetid: 75e6e7c4-b3ba-4173-8815-b00d7d824e11 na Last updated 01/20/2021-+
network-watcher Network Watcher Analyze Nsg Flow Logs Graylog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-analyze-nsg-flow-logs-graylog.md
Title: Analyze Azure network security group flow logs - Graylog | Microsoft Docs
description: Learn how to manage and analyze network security group flow logs in Azure using Network Watcher and Graylog. documentationcenter: na--+ tags: azure-resource-manager- na Last updated 07/03/2021-+
network-watcher Network Watcher Connectivity Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-cli.md
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure CLI. documentationcenter: na-+ na Last updated 01/07/2021-+ # Troubleshoot connections with Azure Network Watcher using the Azure CLI
network-watcher Network Watcher Connectivity Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-overview.md
Title: Introduction to Azure Network Watcher Connection Troubleshoot | Microsoft
description: This page provides an overview of the Network Watcher connection troubleshooting capability documentationcenter: na-+ na Last updated 11/10/2022-+ # Introduction to connection troubleshoot in Azure Network Watcher
network-watcher Network Watcher Connectivity Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-portal.md
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure portal. documentationcenter: na-+ na Last updated 01/04/2021-+ # Troubleshoot connections with Azure Network Watcher using the Azure portal
network-watcher Network Watcher Connectivity Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-powershell.md
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using PowerShell. documentationcenter: na-+ na Last updated 01/07/2021-+
network-watcher Network Watcher Connectivity Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-connectivity-rest.md
description: Learn how to use the connection troubleshoot capability of Azure Network Watcher using the Azure REST API. documentationcenter: na-+ na Last updated 01/07/2021-+ # Troubleshoot connections with Azure Network Watcher using the Azure REST API
network-watcher Network Watcher Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-create.md
Title: Create an Azure Network Watcher instance description: Learn how to create or delete an Azure Network Watcher using the Azure portal, PowerShell, the Azure CLI or the REST API. -+ ms.assetid: b1314119-0b87-4f4d-b44c-2c4d0547fb76 Last updated 12/30/2022-+ ms.devlang: azurecli
network-watcher Network Watcher Deep Packet Inspection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-deep-packet-inspection.md
Title: Packet inspection with Azure Network Watcher | Microsoft Docs
description: This article describes how to use Network Watcher to perform deep packet inspection collected from a VM documentationcenter: na-+ ms.assetid: 7b907d00-9c35-40f5-a61e-beb7b782276f na Last updated 01/07/2021-+ # Packet inspection with Azure Network Watcher
network-watcher Network Watcher Delete Nsg Flow Log Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-delete-nsg-flow-log-blobs.md
Title: Delete storage blobs for network security group flow logs in Azure Networ
description: This article explains how to delete the network security group flow log storage blobs that are outside their retention policy period in Azure Network Watcher. documentationcenter: na--+ na Last updated 01/07/2021-+
network-watcher Network Watcher Diagnose On Premises Connectivity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-diagnose-on-premises-connectivity.md
description: This article describes how to diagnose on-premises connectivity via VPN gateway with Azure Network Watcher resource troubleshooting. documentationcenter: na-+ ms.assetid: aeffbf3d-fd19-4d61-831d-a7114f7534f9 na Last updated 01/20/2021-+
network-watcher Network Watcher Intrusion Detection Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-intrusion-detection-open-source-tools.md
description: This article describes how to use Azure Network Watcher and open source tools to perform network intrusion detection documentationcenter: na-+ ms.assetid: 0f043f08-19e1-4125-98b0-3e335ba69681 na Last updated 09/15/2022-+
network-watcher Network Watcher Ip Flow Verify Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-ip-flow-verify-overview.md
Title: Introduction to IP flow verify in Azure Network Watcher | Microsoft Docs
description: This page provides an overview of the Network Watcher IP flow verify capability documentationcenter: na-+ na Last updated 10/04/2022-+ # Introduction to IP flow verify in Azure Network Watcher
network-watcher Network Watcher Monitor With Azure Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitor-with-azure-automation.md
description: This article describes how to diagnose on-premises connectivity with Azure Automation and Network Watcher documentationcenter: na-+ na Last updated 11/20/2020 -+
network-watcher Network Watcher Monitoring Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-monitoring-overview.md
Title: Azure Network Watcher | Microsoft Docs
description: Learn about Azure Network Watcher's monitoring, diagnostics, metrics, and logging capabilities for resources in a virtual network. documentationcenter: na-+ # Customer intent: As someone with basic Azure network experience, I want to understand how Azure Network Watcher can help me resolve some of the network-related problems I've encountered and provide insight into how I use Azure networking.
na Last updated 10/11/2022-+
network-watcher Network Watcher Network Configuration Diagnostics Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-network-configuration-diagnostics-overview.md
Title: Introduction to Network Configuration Diagnostics in Azure Network Watche
description: This page provides an overview of the Network Watcher - NSG Diagnostics documentationcenter: na-+ na Last updated 01/04/2023 -+ # Introduction to NSG Diagnostics in Azure Network Watcher
network-watcher Network Watcher Next Hop Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-next-hop-overview.md
Title: Introduction to next hop in Azure Network Watcher | Microsoft Docs
description: This article provides an overview of the Network Watcher next hop capability. documentationcenter: na-+ ms.assetid: febf7bca-e0b7-41d5-838f-a5a40ebc5aac na Last updated 01/29/2020-+
network-watcher Network Watcher Nsg Auditing Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-auditing-powershell.md
description: This page provides instructions on how to configure auditing of a Network Security Group documentationcenter: na-+ na Last updated 03/01/2022-+
network-watcher Network Watcher Nsg Flow Logging Azure Resource Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-azure-resource-manager.md
Title: Network Watcher - Create NSG flow logs using an Azure Resource Manager te
description: Use an Azure Resource Manager template and PowerShell to easily set up NSG Flow Logs. documentationcenter: na--+ tags: azure-resource-manager- na Last updated 02/09/2022-+
network-watcher Network Watcher Nsg Flow Logging Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-cli.md
Title: Manage NSG Flow logs - Azure CLI
description: This page explains how to manage Network Security Group Flow logs in Azure Network Watcher with Azure CLI -+ Last updated 12/09/2021-+
network-watcher Network Watcher Nsg Flow Logging Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-overview.md
description: This article explains how to use the NSG flow logs feature of Azure Network Watcher. documentationcenter: na--+ na Last updated 10/06/2022 --+ # Introduction to flow logging for network security groups
network-watcher Network Watcher Nsg Flow Logging Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-portal.md
Title: 'Tutorial: Log network traffic flow to and from a virtual machine - Azure portal' description: Learn how to log network traffic flow to and from a virtual machine using Network Watcher's NSG flow logs capability. -+ Last updated 10/28/2022-+ # Customer intent: I need to log the network traffic to and from a VM so I can analyze it for anomalies.
network-watcher Network Watcher Nsg Flow Logging Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-powershell.md
Title: Manage NSG Flow logs - Azure PowerShell description: This page explains how to manage Network Security Group Flow logs in Azure Network Watcher with Azure PowerShell-+ Last updated 12/24/2021-+
network-watcher Network Watcher Nsg Flow Logging Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-flow-logging-rest.md
description: This page explains how to manage Network Security Group flow logs in Azure Network Watcher with REST API documentationcenter: na-+ na Last updated 07/13/2021-+
network-watcher Network Watcher Nsg Grafana https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-nsg-grafana.md
description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Grafana. documentationcenter: na--+ tags: azure-resource-manager- ms.assetid: na Last updated 09/15/2022-+ # Manage and analyze Network Security Group flow logs using Network Watcher and Grafana
network-watcher Network Watcher Packet Capture Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-cli.md
Title: Manage packet captures with Azure Network Watcher - Azure CLI | Microsoft
description: This page explains how to manage the packet capture feature of Network Watcher using the Azure CLI documentationcenter: na-+ ms.assetid: cb0c1d10-f7f2-4c34-b08c-f73452430be8 na Last updated 12/09/2021-+
network-watcher Network Watcher Packet Capture Manage Portal Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal-vmss.md
description: Learn how to manage the packet capture feature of Network Watcher in virtual machine scale set using the Azure portal. documentationcenter: na-+ na Last updated 06/07/2022 -+ # Manage packet captures in Virtual machine scale sets with Azure Network Watcher using the portal
network-watcher Network Watcher Packet Capture Manage Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-portal.md
Title: Manage packet captures in VMs with Network Watcher - Azure portal
description: Learn how to manage packet captures in virtual machines with the packet capture feature of Network Watcher using the Azure portal. -+ Last updated 01/04/2023-+
network-watcher Network Watcher Packet Capture Manage Powershell Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell-vmss.md
description: This page explains how to manage the packet capture feature of Network Watcher in virtual machine scale set using PowerShell documentationcenter: na-+ na Last updated 06/07/2022-+
network-watcher Network Watcher Packet Capture Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-powershell.md
description: This page explains how to manage the packet capture feature of Network Watcher using PowerShell documentationcenter: na-+ na Last updated 02/01/2021-+
network-watcher Network Watcher Packet Capture Manage Rest Vmss https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest-vmss.md
Title: Manage packet captures in Virtual machine scale sets with Azure Network W
description: This page explains how to manage the packet capture feature of virtual machine scale set in Network Watcher using Azure REST API documentationcenter: na-+ na Last updated 10/04/2022-+
network-watcher Network Watcher Packet Capture Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-manage-rest.md
Title: Manage packet captures with Azure Network Watcher - REST API | Microsoft
description: This page explains how to manage the packet capture feature of Network Watcher using Azure REST API documentationcenter: na-+ na Last updated 05/28/2021-+
network-watcher Network Watcher Packet Capture Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-packet-capture-overview.md
Title: Introduction to Packet capture in Azure Network Watcher | Microsoft Docs
description: This page provides an overview of the Network Watcher packet capture's capability documentationcenter: na-+ na Last updated 06/07/2022-+
network-watcher Network Watcher Read Nsg Flow Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-read-nsg-flow-logs.md
Title: Read NSG flow logs | Microsoft Docs
description: Learn how to use Azure PowerShell to parse Network Security Group flow logs, which are created hourly and updated every few minutes in Azure Network Watcher. documentationcenter: na-+ na Last updated 02/09/2021-+
network-watcher Network Watcher Security Group View Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-cli.md
description: This article will describe how to use Azure CLI to analyze a virtual machine's security with Security Group View. documentationcenter: na-+ na Last updated 12/09/2021-+
network-watcher Network Watcher Security Group View Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-overview.md
Title: Introduction to Effective security rules view in Azure Network Watcher |
description: This page provides an overview of the Network Watcher - Effective security rules view capability documentationcenter: na-+ na Last updated 03/18/2022-+
network-watcher Network Watcher Security Group View Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-powershell.md
description: This article will describe how to use PowerShell to analyze a virtual machine's security with Security Group View. documentationcenter: na-+ na Last updated 11/20/2020-+
network-watcher Network Watcher Security Group View Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-security-group-view-rest.md
description: This article will describe how to use the Azure REST API to analyze a virtual machine's security with Security Group View. documentationcenter: na-+ na Last updated 03/01/2022-+
network-watcher Network Watcher Troubleshoot Manage Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-cli.md
description: This page explains how to use Azure Network Watcher troubleshooting with the Azure CLI documentationcenter: na-+ na Last updated 07/25/2022-+
network-watcher Network Watcher Troubleshoot Manage Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-powershell.md
Title: Troubleshoot Azure VNet gateway and connections - Azure PowerShell
description: This page explains how to use the Azure Network Watcher troubleshoot PowerShell cmdlet -+ Last updated 11/22/2022-+
network-watcher Network Watcher Troubleshoot Manage Rest https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-manage-rest.md
description: This page explains how to troubleshoot Virtual Network Gateways and Connections with Azure Network Watcher using REST documentationcenter: na-+ na Last updated 01/07/2021-+
network-watcher Network Watcher Troubleshoot Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-troubleshoot-overview.md
description: This page provides an overview of the Network Watcher resource troubleshooting capabilities documentationcenter: na-+ na Last updated 03/31/2022-+
network-watcher Network Watcher Using Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-using-open-source-tools.md
description: This page describes how to use Network Watcher packet capture with Capanalysis to visualize traffic patterns to and from your VMs. documentationcenter: na-+ na Last updated 02/25/2021 -+ # Visualize network traffic patterns to and from your VMs using open-source tools
network-watcher Network Watcher Visualize Nsg Flow Logs Open Source Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-open-source-tools.md
description: Manage and analyze Network Security Group Flow Logs in Azure using Network Watcher and Elastic Stack. documentationcenter: na-+ na Last updated 09/15/2022-+
network-watcher Network Watcher Visualize Nsg Flow Logs Power Bi https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/network-watcher-visualize-nsg-flow-logs-power-bi.md
description: Learn how to use Power BI to visualize Network Security Group flow logs to allow you to view information about IP traffic in Azure Network Watcher. documentationcenter: na-+ na Last updated 06/23/2021-+
network-watcher Nsg Flow Logs Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/nsg-flow-logs-policy-portal.md
- Title: QuickStart - Deploy and manage NSG Flow Logs using Azure Policy description: This article explains how to use the built-in policies to manage the deployment of NSG flow logs documentationcenter: na-+ na Last updated 02/09/2022-+
network-watcher Quickstart Configure Network Security Group Flow Logs From Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-arm-template.md
Title: 'Quickstart: Configure Network Watcher network security group flow logs by using an Azure Resource Manager template (ARM template)' description: Learn how to enable network security group (NSG) flow logs programmatically by using an Azure Resource Manager template (ARM template) and Azure PowerShell. --++ Last updated 09/01/2022
network-watcher Quickstart Configure Network Security Group Flow Logs From Bicep https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/quickstart-configure-network-security-group-flow-logs-from-bicep.md
Title: 'Quickstart: Configure Network Watcher network security group flow logs by using a Bicep file' description: Learn how to enable network security group (NSG) flow logs programmatically by using Bicep and Azure PowerShell. --++ Last updated 08/26/2022
network-watcher Required Rbac Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/required-rbac-permissions.md
Title: Azure RBAC permissions required to use capabilities
description: Learn which Azure role-based access control permissions are required to work with Network Watcher capabilities. -+ - na Last updated 10/07/2022-+
network-watcher Resource Move https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/resource-move.md
Title: Move Azure Network Watcher resources | Microsoft Docs
description: Move Azure Network Watcher resources across regions documentationcenter: na--+ na Last updated 06/10/2021-+ - # Moving Azure Network Watcher resources across regions
network-watcher Supported Region Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/supported-region-traffic-analytics.md
Title: Azure Traffic Analytics supported regions | Microsoft Docs
+ Title: Azure Traffic Analytics supported regions
description: This article provides the list of Traffic Analytics supported regions. documentationcenter: na-+ na Last updated 06/15/2022-
-ms.custon: references_regions
--++ # Supported regions: NSG
network-watcher Traffic Analytics Policy Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-policy-portal.md
Title: Deploy and manage Traffic Analytics using Azure Policy
description: This article explains how to use the built-in policies to manage the deployment of Traffic Analytics -+ Last updated 02/09/2022-+
network-watcher Traffic Analytics Schema Update https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema-update.md
Title: Azure Traffic Analytics schema update - March 2020
description: Sample queries with new fields in the Traffic Analytics schema. Use these three examples to replace the deprecated fields with the new ones. documentationcenter: na--+ na Last updated 06/20/2022--+ + # Sample queries with new fields in the Traffic Analytics schema (March 2020 schema update) The [Traffic Analytics log schema](./traffic-analytics-schema.md) includes the following new fields:
network-watcher Traffic Analytics Schema https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics-schema.md
Title: Azure traffic analytics schema | Microsoft Docs
+ Title: Azure traffic analytics schema
description: Understand schema of Traffic Analytics to analyze Azure network security group flow logs.--+ -+ Last updated 03/29/2022 --+ # Schema and data aggregation in Traffic Analytics
network-watcher Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/traffic-analytics.md
Title: Azure traffic analytics description: Learn what traffic analytics is, and how to use traffic analytics for viewing network activity, securing networks, and optimizing performance. -+ Last updated 01/06/2023-+
network-watcher Usage Scenarios Traffic Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/usage-scenarios-traffic-analytics.md
Title: Usage scenarios of Azure Traffic Analytics | Microsoft Docs
+ Title: Usage scenarios of Azure Traffic Analytics
description: This article describes the usage scenarios of Traffic Analytics. documentationcenter: na-+ na Last updated 05/30/2022-+ # Usage scenarios
network-watcher View Network Topology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-network-topology.md
Title: View Azure virtual network topology | Microsoft Docs description: Learn how to view the resources in a virtual network, and the relationships between the resources. --++ na -+ Last updated 11/11/2022
network-watcher View Relative Latencies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/network-watcher/view-relative-latencies.md
Title: View relative latencies to Azure regions from specific locations description: Learn how to view relative latencies across Internet providers to Azure regions from specific locations. -+ na Last updated 04/20/2022-+
notification-hubs Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/notification-hubs/availability-zones.md
Previously updated : 11/19/2021 Last updated : 01/17/2023
Azure Notification Hubs now supports [availability zones](../availability-zones/
## Feature availability
-Availability zones support will be included as part of an upcoming Azure Notification Hubs Premium SKU. It will only be available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
+Availability zones support will be included as part of an upcoming Azure Notification Hubs Premium SKU and as an add-on feature to the other SKUs. It will only be available in [Azure regions](../availability-zones/az-region.md) where availability zones are present.
> [!NOTE]
-> Until Azure Notification Hubs Premium is released, availability zones is by invitation only. If you are interested in using this feature, contact your customer success manager at Microsoft, or create an Azure support ticket which will be triaged by the support team.
+> Until the feature is broadly released, availability zones support is by invitation only. If you are interested in using this feature, contact your customer success manager at Microsoft, or create an Azure support ticket, which will be triaged by the support team.
## Enable availability zones
openshift Cluster Administration Cluster Admin Role https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/cluster-administration-cluster-admin-role.md
Title: Azure Red Hat OpenShift cluster administrator role | Microsoft Docs description: Assignment and usage of the Azure Red Hat OpenShift cluster administrator role --++ Last updated 09/25/2019
openshift Cluster Administration Security Context Constraints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/cluster-administration-security-context-constraints.md
Title: Manage security context constraints in Azure Red Hat OpenShift | Microsoft Docs description: Security context constraints for Azure Red Hat OpenShift cluster administrators --++ Last updated 09/25/2019
openshift Dns Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/dns-forwarding.md
Title: Configure DNS Forwarding for Azure Red Hat OpenShift 4 description: Configure DNS Forwarding for Azure Red Hat OpenShift 4--++ Last updated 04/24/2020
openshift Howto Aad App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-aad-app-configuration.md
Title: Azure Active Directory integration for Azure Red Hat OpenShift description: Learn how to create an Azure AD security group and user for testing apps on your Microsoft Azure Red Hat OpenShift cluster.--++ Last updated 05/13/2019
openshift Howto Add Update Pull Secret https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-add-update-pull-secret.md
Title: Add or update your Red Hat pull secret on an Azure Red Hat OpenShift 4 cluster description: Add or update your Red Hat pull secret on existing 4.x ARO clusters--++ Last updated 05/21/2020
openshift Howto Create Private Cluster 3X https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-private-cluster-3x.md
Title: Create a private cluster with Azure Red Hat OpenShift 3.11 description: Learn how to create a private cluster with Azure Red Hat OpenShift 3.11 and about the benefits of private clusters.--++ Last updated 06/02/2022
openshift Howto Create Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-create-tenant.md
Title: Create an Azure AD tenant for Azure Red Hat OpenShift description: Here's how to create an Azure Active Directory (Azure AD) tenant to host your Microsoft Azure Red Hat OpenShift cluster.--++ Last updated 05/13/2019
openshift Howto Custom Dns https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-custom-dns.md
Title: Configure custom DNS resources in an Azure Red Hat OpenShift (ARO) cluster description: Discover how to add a custom DNS server on all of your nodes in Azure Red Hat OpenShift (ARO).--++ Last updated 06/02/2021
openshift Howto Manage Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-manage-projects.md
description: Manage projects, templates, image-streams in an Azure Red Hat OpenS
keywords: red hat openshift projects requests self-provisioner -+ Last updated 07/19/2019
openshift Howto Setup Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-setup-environment.md
Title: Set up your Azure Red Hat OpenShift development environment description: Here are the prerequisites for working with Microsoft Azure Red Hat OpenShift. keywords: red hat openshift setup set up--++ Last updated 11/04/2019
openshift Howto Spot Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-spot-nodes.md
Title: Use Azure Spot Virtual Machines in an Azure Red Hat OpenShift (ARO) cluster description: Discover how to utilize Azure Spot Virtual Machines in Azure Red Hat OpenShift (ARO)--++ keywords: spot, nodes, aro, deploy, openshift, red hat
openshift Howto Use Acr With Aro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/howto-use-acr-with-aro.md
Title: Use Azure Container Registry with Azure Red Hat OpenShift description: Learn how to pull and run a container from Azure Container Registry in your Azure Red Hat OpenShift cluster.--++ Last updated 01/10/2021
openshift Intro Openshift https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/intro-openshift.md
Title: Introduction to Azure Red Hat OpenShift description: Learn the features and benefits of Microsoft Azure Red Hat OpenShift to deploy and manage container-based applications.--++ Last updated 11/13/2020
openshift Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/migration.md
Title: Migrate from an Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4 description: Migrate from an Azure Red Hat OpenShift 3.11 to Azure Red Hat OpenShift 4--++ Last updated 08/13/2020
openshift Responsibility Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/responsibility-matrix.md
description: Learn about the ownership of responsibilities for the operation of
Last updated 4/12/2021--++ keywords: aro, openshift, az aro, red hat, cli, RACI, support
openshift Supported Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/supported-resources.md
Title: Supported resources for Azure Red Hat OpenShift 3.11 description: Understand which Azure regions and virtual machine sizes are supported by Microsoft Azure Red Hat OpenShift.--++ Last updated 05/15/2019
openshift Tutorial Connect Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-connect-cluster.md
Title: Tutorial - Connect to an Azure Red Hat OpenShift 4 cluster description: Learn how to connect a Microsoft Azure Red Hat OpenShift cluster--++ Last updated 04/24/2020
openshift Tutorial Delete Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/openshift/tutorial-delete-cluster.md
Title: Tutorial - Delete an Azure Red Hat OpenShift cluster description: In this tutorial, learn how to delete an Azure Red Hat OpenShift cluster using the Azure CLI-+ -+ Last updated 04/24/2020
orbital License Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/license-spacecraft.md
This page provides an overview of how to register or license your spacecraft with Azure Orbital.
+ > [!NOTE]
+ > This process is for the ground station license only. Microsoft manages the ground station licenses in our network and ensures customer satellites are added and authorized.
+ > The customer is responsible for acquiring a license for their spacecraft. Microsoft can provide the technical information needed to complete the federal regulatory and ITU processes.
+ ## Prerequisites To initiate the spacecraft licensing process, you'll need: - A spacecraft object that corresponds to the spacecraft in orbit or slated for launch. The links in this object must match all current and planned filings.-- List of ground stations that you wish to use
+- A list of ground stations that you wish to use to communicate with your satellite.
## Step 1 - Initiate the request
The process starts by initiating the licensing request via the Azure portal.
1. Navigate to the spacecraft object and select New Support Request under the Support + troubleshooting category to the left. 1. Complete the following fields:
- 1. Summary: Provide a relevant ticket title.
- 1. Issue type: Technical.
- 1. Subscription: Choose your current subscription.
- 1. Service: My Service
- 1. Service Type: Azure Orbital
- 1. Problem type: Spacecraft Management and Setup
- 1. Problem subtype: Spacecraft Registration
-1. Click next to Solutions
-1. Click next to Details
-1. Enter the desired ground stations in the Description field
-1. Enable advanced diagnostic information
-1. Click next to Review + Create
+
+ | **Field** | **Value** |
+ | | |
+ | Summary | Provide a relevant ticket title. |
+ | Issue type | Technical |
+ | Subscription | Choose your current subscription. |
+ | Service | My Service |
+ | Service Type | Azure Orbital |
+ | Problem type | Spacecraft Management and Setup |
+ | Problem subtype | Spacecraft Registration |
+
+1. Click next to Solutions.
+1. Click next to Details.
+1. Enter the desired ground stations in the Description field.
+1. Enable advanced diagnostic information.
+1. Click next to Review + Create.
1. Click Create. ## Step 2 - Provide more details
Once the determination is made, we'll confirm the cost with you and ask you to a
## Step 4 - Azure Orbital requests the relevant licensing
-Upon authorization, you'll be billed and our regulatory team will seek the relevant licenses to enable your spacecraft with the desired ground stations. This step will take 2 to 6 months to execute.
+Upon authorization, you will be billed the fees associated with each relevant ground station. Our regulatory team will seek the relevant licenses to enable your spacecraft to communicate with the desired ground stations. Refer to the following table for an estimated timeline for execution:
+
+| **Station** | **Quincy** | **Chile** | **Sweden** | **South Africa** | **Singapore** |
+| -- | - | | - | - | - |
+| Onboarding Timeframe | 3-6 months | 3-6 months | 3-6 months | <1 month | 3-6 months |
## Step 5 - Spacecraft is authorized
-Once the licenses are in place, the spacecraft object will be updated by Azure Orbital to represent the licenses held at the specified ground stations. Refer to (to add link to spacecraft concept) to understand how the authorizations are applied.
+Once the licenses are in place, the spacecraft object will be updated by Azure Orbital to represent the licenses held at the specified ground stations. To understand how the authorizations are applied, see [Spacecraft Object](./spacecraft-object.md).
## FAQ
-Q. Are third party ground stations such as KSAT included in this process?
-A. No, the process on this page applies to Microsoft sites only. For more information, see [Integrate partner network ground stations](./partner-network-integration.md).
+**Q.** Are third party ground stations such as KSAT included in this process?
+**A.** No, the process on this page applies to Microsoft sites only. For more information, see [Integrate partner network ground stations](./partner-network-integration.md).
+
+**Q.** Do public satellites require licensing?
+**A.** The Azure Orbital Ground Station service supports several public satellites that do not require licensing. These include Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra.
+ ## Next steps - [Integrate partner network ground stations](./partner-network-integration.md)
partner-solutions New Relic Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-overview.md
Azure Native New Relic Service provides the following capabilities:
## New Relic links
-For more help with using Azure Native New Relic Service, see the [New Relic documentation](https://docs.newrelic.com/).
+For more help with using Azure Native New Relic Service, see the [New Relic documentation](https://docs.newrelic.com/docs/infrastructure/microsoft-azure-integrations/get-started/azure-native).
## Next steps - [Quickstart: Get started with New Relic](new-relic-create.md) - [Quickstart: Link to an existing New Relic account](new-relic-link-to-existing.md)+
partner-solutions Nginx Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-create.md
Previously updated : 01/11/2023 Last updated : 01/18/2023
partner-solutions Nginx Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-manage.md
Previously updated : 01/11/2023 Last updated : 01/18/2023
partner-solutions Nginx Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-overview.md
Previously updated : 01/11/2023 Last updated : 01/18/2023
partner-solutions Nginx Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/nginx/nginx-troubleshoot.md
Previously updated : 01/11/2023 Last updated : 01/18/2023
partner-solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/overview.md
Last updated 10/10/2022 - # Azure Native ISV Services overview
-An Azure Native ISV Service enables users to easily provision, manage, and tightly integrate ISV software and services on Azure. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
+An Azure Native ISV Service enables users to easily provision, manage, and tightly integrate *independent software vendor* (ISV) software and services on Azure. Currently, several services are publicly available across these areas: observability, data, networking, and storage. For a list of all our current ISV partner services, see [Extend Azure with Azure Native ISV Services](partners.md).
## Features of Azure Native ISV Services
The features of any Azure Native ISV Service are listed below.
- Logs and metrics: Seamlessly direct logs and metrics from Azure Monitor to the Azure Native ISV Service using just a few gestures. You can configure auto-discovery of resources to monitor, and set up automatic log forwarding and metrics shipping. You can easily do the setup in Azure, without needing to create additional infrastructure or write custom code.
- VNet injection: Provides private data plane access to Azure Native ISV services from customers' virtual networks.
- Unified billing: Engage with a single entity, Microsoft Azure Marketplace, for billing. No separate license purchase is required to use Azure Native ISV Services.
partner-solutions Partners https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/partners.md
Title: Partner services description: Learn about services offered by partners on Azure. ++ Previously updated : 01/16/2023 Last updated : 01/18/2023
Azure Native ISV Services are available through the Marketplace.
|Partner |Description | ||-|
-|[Datadog](datadog/overview.md) | Monitoring and analytics platform for large scale applications. |
+|[Datadog - An Azure Native ISV Service](datadog/overview.md) | Monitoring and analytics platform for large scale applications. |
|[Elastic](elastic/overview.md) | Build modern search experiences and maximize visibility into health, performance, and security of your infrastructure, applications, and data. | |[Logz.io](logzio/overview.md) | Observability platform that centralizes log, metric, and tracing analytics. | |[Azure Native Dynatrace Service](dynatrace/dynatrace-overview.md) | Provides deep cloud observability, advanced AIOps, and continuous runtime application security. |
-|[New Relic Preview](new-relic/new-relic-overview.md) | A cloud-based end-to-end observability platform for analyzing and troubleshooting the performance of applications, infrastructure, logs, real-user monitoring, and more. |
+|[Azure Native New Relic Service Preview](new-relic/new-relic-overview.md) | A cloud-based end-to-end observability platform for analyzing and troubleshooting the performance of applications, infrastructure, logs, real-user monitoring, and more. |
## Data and storage |Partner |Description |
-|||
-| [Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event-streaming platform powered by Apache Kafka. |
+||-|
+|[Apache Kafka for Confluent Cloud](apache-kafka-confluent-cloud/overview.md) | Fully managed event streaming platform powered by Apache Kafka. |
+|[Azure Native Qumulo Scalable File Service Preview](qumulo/qumulo-overview.md) | Multi-petabyte scale, single namespace, multi-protocol file data platform with the performance, security, and simplicity to meet the most demanding enterprise workloads. |
## Networking and security |Partner |Description |
-|||
-|[NGINXaaS ](nginx/nginx-overview.md) | Use NGINXaaS as a reverse proxy within your Azure environment. |
+||-|
+|[NGINXaaS - Azure Native ISV Service](nginx/nginx-overview.md) | Use NGINXaaS as a reverse proxy within your Azure environment. |
partner-solutions Qumulo Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-create.md
+
+ Title: Get started with Azure Native Qumulo Scalable File Service Preview
+description: In this quickstart, learn how to create an instance of Azure Native Qumulo Scalable File Service.
+++ Last updated : 01/18/2023+++
+# Quickstart: Get started with Azure Native Qumulo Scalable File Service Preview
+
+In this quickstart, you create an instance of Azure Native Qumulo Scalable File Service Preview. When you create the service instance, the following entities are also created and mapped to a Qumulo file system namespace:
+
+- A delegated subnet that enables the Qumulo service to inject service endpoints (eNICs) into your virtual network.
+- A managed resource group that has internal networking and other resources required for the Qumulo service.
+- A Qumulo resource in the region of your choosing. This entity stores and manages your data.
+- A software as a service (SaaS) resource, based on the plan that you select in the Azure Marketplace offer for Qumulo. This resource is used for billing.
+
+## Prerequisites
+
+1. Make sure that you have **Owner** or **Contributor** access to the Azure subscription. For custom roles, you also need write access to:
+
+ - The resource group where your delegated subnet is created.
+ - The resource group where your Qumulo file system namespace is created.
+
+   For more information about permissions and how to check access, see [Troubleshoot Azure Native Qumulo Service](qumulo-troubleshoot.md). A quick check with the Azure CLI is sketched below.
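A minimal sketch of that access check using the Azure CLI; the principal and resource group names are placeholders:

```bash
# List role assignments on the resource group that will hold the Qumulo
# resources; look for Owner, Contributor, or your custom role in the output.
az role assignment list \
  --assignee <user-or-service-principal> \
  --resource-group <qumulo-resource-group> \
  --output table
```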
+
+1. Create a [delegated subnet](/azure/virtual-network/subnet-delegation-overview) to the Qumulo service:
+
+ 1. Identify the region where you want to subscribe to the Qumulo service.
+ 1. Create a new virtual network, or select an existing virtual network in the same region where you want to create the Qumulo service.
+ 1. Create a subnet in the newly created virtual network. Use the default configuration, or update the subnet network configuration based on your network policy.
+ 1. Delegate the newly created subnet as a Qumulo-only subnet.
+
+ > [!NOTE]
+ > The selected subnet address range should have at least 256 IP addresses: 251 free and 5 Azure reserved addresses.
+ >
> Your Qumulo subnet should be in the same region as the Qumulo service. The subnet must be delegated to `Qumulo.Storage/fileSystems`.
+
+ :::image type="content" source="media/qumulo-create/qumulo-vnet-properties.png" alt-text="Screenshot that shows virtual network properties in the Azure portal.":::
+
+## Subscribe to Azure Native Qumulo Scalable File Service
+
+1. Go to the Azure portal and sign in.
+
+1. If you've visited Azure Marketplace in a recent session, select the **Marketplace** icon from the available options. Otherwise, search for **marketplace** and select the **Marketplace** result under **Services**.
+
+1. In Azure Marketplace, search for **Azure Native Qumulo Scalable File Service**.
+
+1. Select **Subscribe**.
++
+## Create an Azure Native Qumulo Scalable File Service resource
+
+1. The **Basics** tab provides a form to create an Azure Native Qumulo Scalable File Service resource on the working pane. Provide the following values:
+
+ | **Property** | **Description** |
+ |--|--|
+ |**Subscription** | From the dropdown list, select the Azure subscription where you have **Owner** access. |
|**Resource group** | Specify whether you want to create a new resource group or use an existing one. A resource group is a container that holds related resources for an Azure solution. For more information, see [Azure resource group overview](/azure/azure-resource-manager/management/overview). |
+ |**Resource name** | Enter the name of the Qumulo file system. The resource name should have fewer than 15 characters, and it can contain only alphanumeric characters and the hyphen symbol.|
+ |**Region** | Select one of the available regions from the dropdown list. |
+ |**Availability Zone** | Select an availability zone to pin the Qumulo file system resources to that zone in a region. |
+ |**Password** | Create an initial password to set the Qumulo administrator access. |
+ |**Storage** | Choose either **Standard** or **Performance** for your storage configuration, based on your workload requirements.|
+ |**Capacity (TB)** | Specify the size of the file system that needs to be created.|
+ |**Pricing Plan** | A pay-as-you-go plan is selected by default. For upfront pricing plans or free trials, contact azure@qumulo.com. |
+
+ :::image type="content" source="media/qumulo-create/qumulo-create.png" alt-text="Screenshot of the Basics tab for creating a Qumulo resource on the working pane.":::
+
+1. On the **Networking** tab, provide the following values:
+
+ |**Property** |**Description** |
+ |--|--|
+ | **Virtual network** | Select the appropriate virtual network from your subscription where the Qumulo file system should be hosted.|
+ | **Subnet** | Select a subnet from a list of pre-created delegated subnets in the virtual network. One delegated subnet can be associated with only one Qumulo file system.|
+
+ :::image type="content" source="media/qumulo-create/qumulo-networking.png" alt-text="Screenshot of the Networking tab for creating a Qumulo resource on the working pane.":::
+
+ Only virtual networks in the specified region with subnets delegated to `Qumulo.Storage/fileSystems` appear on this page. If an expected virtual network is not listed, verify that it's in the chosen region and that the virtual network includes a subnet delegated to Qumulo.
+
+1. Select **Review + Create** to create the resource.
partner-solutions Qumulo How To Manage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-how-to-manage.md
+
+ Title: Manage Azure Native Qumulo Scalable File Service Preview
+description: This article describes how to manage Azure Native Qumulo Scalable File Service in the Azure portal.
+++ Last updated : 01/18/2023+++
+# Manage Azure Native Qumulo Scalable File Service Preview
+
+This article describes how to manage your instance of Azure Native Qumulo Scalable File Service Preview.
+
+## Manage the Qumulo resource
+
+1. In the Azure portal, browse to your instance of Azure Native Qumulo Scalable File Service.
+
+1. On the **Resource** menu, select **Overview** to see some of the settings for your Qumulo resource.
+
+ :::image type="content" source="media/qumulo-how-to-manage/qumulo-overview.png" alt-text="Screenshot that shows selections for getting details about a Qumulo resource.":::
+
+1. The **Resource** menu has other settings that you can examine and change. For example, selecting **IP addresses** displays the IP addresses that you can use to manage the file system.
+
+ :::image type="content" source="media/qumulo-how-to-manage/qumulo-ip-addresses.png" alt-text="Screenshot that shows selections for displaying IP addresses associated with a file system.":::
+
+## Configure and use the Qumulo file system
+
+For help with configuring and using your file system, see the [Qumulo documentation hub](https://docs.qumulo.com/cloud-guide/).
+
+## Delete the Qumulo file system
+
+To delete your Qumulo file system, you delete your deployment of Azure Native Qumulo Scalable File Service:
+
+1. In the Azure portal, select your deployment of Azure Native Qumulo Scalable File Service.
+1. On the **Resource** menu, select **Overview**.
+1. Select **Delete**.
+1. Confirm that you want to delete Azure Native Qumulo Scalable File Service, along with associated data and other resources attached to the service.
+1. Select **Delete**. This action is not reversible. The data contained in the file system is permanently deleted.
++
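If you script cleanup instead, the generic Azure CLI resource commands can remove the deployment. This is a sketch only; the resource ID is a placeholder, and the portal flow above is the documented path.

```azurecli-interactive
# Irreversible: permanently deletes the file system and the data it contains.
az resource delete \
  --ids "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Qumulo.Storage/fileSystems/<file-system-name>"
```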
+## Next steps
+- [Quickstart: Get started with Azure Native Qumulo Scalable File Service](qumulo-create.md)
+- [Troubleshoot Azure Native Qumulo Scalable File Service](qumulo-troubleshoot.md)
partner-solutions Qumulo Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-overview.md
+
+ Title: Azure Native Qumulo Scalable File Service Preview overview
+description: Learn about what Azure Native Qumulo Scalable File Service offers you.
+++ Last updated : 01/18/2023+++
+# What is Azure Native Qumulo Scalable File Service Preview?
+
+Qumulo is an industry leader in distributed file systems and object storage. Qumulo provides a scalable, performant, and simple-to-use cloud-native file system that can support a wide variety of data workloads. The file system uses standard file-sharing protocols, such as NFS, SMB, FTP, and S3.
+
+The Azure Native Qumulo Scalable File Service offering on Azure Marketplace enables you to create and manage a Qumulo file system by using the Azure portal, with a seamlessly integrated experience. You can also create and manage Qumulo resources through the resource provider `Qumulo.Storage/fileSystems`. Qumulo manages the service while giving you full admin rights to configure details like file system shares, exports, quotas, snapshots, and Active Directory users.
+
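Because the file system is a first-class Azure resource, you can, for example, list deployed instances with the generic resource commands. A sketch, assuming the Azure CLI and the provider type named above:

```azurecli-interactive
# List Qumulo file system resources in the current subscription.
az resource list --resource-type "Qumulo.Storage/fileSystems" --output table
```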
+> [!NOTE]
+> Azure Native Qumulo Scalable File Service stores and processes data only in the region where the service was deployed. No data is stored outside that region.
+
+## Capabilities
+
+Azure Native Qumulo Scalable File Service provides:
+
+- Seamless onboarding: Easily include Qumulo as a natively integrated service on Azure.
+
+- Unified billing: Get a single bill for all resources that you consume on Azure for the Qumulo service.
+
+- Private access: The service is directly connected to your own virtual network (sometimes called *VNet injection*).
+
+## Next steps
+
+- For more help with using Azure Native Qumulo Scalable File Service, see the [Qumulo documentation](https://docs.qumulo.com/cloud-guide/azure/).
+- To create an instance of the service, see the [quickstart](qumulo-create.md).
partner-solutions Qumulo Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/qumulo/qumulo-troubleshoot.md
+
+ Title: Troubleshoot Azure Native Qumulo Scalable File Service Preview
+description: This article provides information about troubleshooting Azure Native Qumulo Scalable File Service.
++ Last updated : 01/18/2023+++
+# Troubleshoot Azure Native Qumulo Scalable File Service Preview
+
+This article describes how to fix common problems when you're working with Azure Native Qumulo Scalable File Service Preview.
+
+Try the troubleshooting information in this article first. If that doesn't work, you can use one of the following methods to open a request form for Qumulo support:
+
+- Go to the [Qumulo support page](https://aka.ms/partners/Qumulo/Support) and select **Open a case**.
+- Go to the Azure portal and select **New Support request** on the left pane.
++
+## You got a purchase error related to a payment method
+
+A purchase can fail because a valid credit card is not connected to the Azure subscription, or because a payment method is not associated with the subscription.
+
+Try using a different Azure subscription. Or, add or update the credit card or payment method for the subscription. For more information, see [Update the credit and payment method](/azure/cost-management-billing/manage/change-credit-card).
+
+## You got a purchase error related to an Enterprise Agreement
+
+Some Microsoft Enterprise Agreement (EA) subscriptions don't allow Azure Marketplace purchases.
+
+Try using a different subscription, or [enable your subscription for Azure Marketplace purchases](/azure/cost-management-billing/manage/ea-azure-marketplace#enabling-azure-marketplace-purchases).
+
+## You can't create a resource
+
+To set up Azure Native Qumulo Scalable File Service integration, you must have **Owner** or **Contributor** access on the Azure subscription. Ensure that you have the proper access on both the subnet resource group and the Qumulo service resource group before you start the setup.
+
+For successful creation of a Qumulo service, custom role-based access control (RBAC) roles need to have the following permissions in the subnet and Qumulo service resource groups:
+
+ - Qumulo.Storage/\*
+
+ - Microsoft.Network/virtualNetworks/subnets/join/action
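
As an illustration, a custom role carrying these two actions could be defined with the Azure CLI. This is a sketch; the role name is hypothetical, and a real custom role typically needs additional read permissions for day-to-day work.

```azurecli-interactive
# Create a custom role with the two actions required for Qumulo service creation.
az role definition create --role-definition '{
  "Name": "Qumulo Service Deployer (example)",
  "Description": "Minimum actions for creating a Qumulo file system.",
  "Actions": [
    "Qumulo.Storage/*",
    "Microsoft.Network/virtualNetworks/subnets/join/action"
  ],
  "AssignableScopes": [ "/subscriptions/<subscription-id>" ]
}'
```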
+
+## Next steps
+
+- [Manage Azure Native Qumulo Scalable File Service Preview](qumulo-how-to-manage.md)
peering-service About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/about.md
Previously updated : 01/15/2023 Last updated : 01/19/2023
Azure Peering Service is a networking service that enhances the connectivity to
With Peering Service, customers can select a well-connected partner service provider in a given region. Public connectivity is optimized for high reliability and minimal latency from cloud services to the end-user location. Customers can also opt for Peering Service telemetry such as user latency measures to the Microsoft network, BGP route monitoring, and alerts against leaks and hijacks by registering the Peering Service connection in the Azure portal.
Microsoft 365, Dynamics 365, and any other Microsoft SaaS services are hosted in
Microsoft and partner service providers ensure that the traffic for the prefixes registered with a Peering Service connection enters and exits the nearest Microsoft Edge PoP locations on the Microsoft global network. Microsoft ensures that the networking traffic egressing from the prefixes registered with Peering Service connections takes the nearest Microsoft Edge PoP locations on the Microsoft global network. > [!NOTE] > For more information about the Microsoft global network, see [Microsoft global network](../networking/microsoft-global-network.md).
Peering Service uses two types of redundancy:
This type of redundancy uses the shortest routing path by always choosing the nearest Microsoft Edge PoP to the end user and ensures that the customer is one network hop (AS hops) away from Microsoft.
- :::image type="content" source="./media/peering-service-about/peering-service-geo-shortest.png" alt-text="Diagram showing geo-redundancy.":::
+ :::image type="content" source="./media/about/peering-service-geo-shortest.png" alt-text="Diagram showing geo-redundancy.":::
### Optimal routing
The following routing technique is preferred:
Routing that doesn't use the cold-potato technique is referred to as hot-potato routing. With hot-potato routing, traffic that originates from the Microsoft cloud then goes over the internet.
- :::image type="content" source="./media/peering-service-about/peering-service-cold-potato.png" alt-text="Diagram showing cold-potato routing.":::
+ :::image type="content" source="./media/about/peering-service-cold-potato.png" alt-text="Diagram showing cold-potato routing.":::
### Monitoring platform
The following routing technique is preferred:
Monitoring captures the events if there's any service degradation.
- :::image type="content" source="./media/peering-service-about/peering-service-latency-report.png" alt-text="Diagram showing monitoring platform for Peering Service.":::
+ :::image type="content" source="./media/about/peering-service-latency-report.png" alt-text="Diagram showing monitoring platform for Peering Service.":::
### Traffic protection
peering-service Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/azure-portal.md
Previously updated : 01/13/2023 Last updated : 01/19/2023
Sign in to the [Azure portal](https://portal.azure.com).
- Prefix received with longer AS path (>3), contact Peering Service provider.
- Prefix received with private AS in the path, contact Peering Service provider.
-### Add or remove a prefix
+## Add or remove a prefix
1. In the search box at the top of the portal, enter *Peering Service*. Select **Peering Services** in the search results.
Sign in to the [Azure portal](https://portal.azure.com).
> [!NOTE] > You can't modify an existing prefix.
-### Delete a Peering Service connection
+## Delete a Peering Service connection
1. In the search box at the top of the portal, enter *Peering Service*. Select **Peering Services** in the search results.
Sign in to the [Azure portal](https://portal.azure.com).
- To learn more about Peering Service connection, see [Peering Service connection](connection.md).
- To learn more about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
- To measure Peering Service connection telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
+
peering-service Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/cli.md
Title: Register a Peering Service Preview connection by using the Azure CLI
-description: Learn how to register a Peering Service connection by using the Azure CLI
+ Title: Create, change, or delete a Peering Service connection - Azure CLI
+description: Learn how to create, change, or delete a Peering Service connection using the Azure CLI
Previously updated : 05/2/2020 Last updated : 01/19/2023 +
-# Register a Peering Service connection by using the Azure CLI
+# Create, change, or delete a Peering Service connection using the Azure CLI
-Azure Peering Service is a networking service that enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet. In this article, you'll learn how to register a Peering Service connection by using the Azure CLI.
+> [!div class="op_single_selector"]
+> * [Portal](azure-portal.md)
+> * [PowerShell](powershell.md)
+> * [Azure CLI](cli.md)
-- This article requires version 2.0.28 or later of the Azure CLI. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade).
+Azure Peering Service is a networking service that enhances customer connectivity to Microsoft cloud services such as Microsoft 365, Dynamics 365, software as a service (SaaS) services, Azure, or any Microsoft services accessible via the public internet.
+
+In this article, you'll learn how to create, change, and delete a Peering Service connection using the Azure CLI.
+
+If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) before you begin.
++
+If you decide to install and use Azure CLI locally, this article requires you to use version 2.0.28 or later of the Azure CLI. Run [az version](/cli/azure/reference-index#az-version) to find the version and dependent libraries that are installed. To upgrade to the latest version, run [az upgrade](/cli/azure/reference-index#az-upgrade). If using Azure Cloud Shell, the latest version is already installed.
## Prerequisites
-You must have the following:
+- An Azure subscription.
+
+- A connectivity provider. For more information, see [Peering Service partners](./location-partners.md).
-### Azure account
+## Register your subscription with the resource provider and feature flag
-You must have a valid and active Microsoft Azure account. This account is required to set up the Peering Service connection. Peering Service is a resource within Azure subscriptions.
+Before you proceed to the steps of creating the Peering Service connection, register your subscription with the resource provider and feature flag using [az feature register](/cli/azure/feature#az-feature-register) and [az provider register](/cli/azure/provider#az-provider-register):
+
+```azurecli-interactive
+az feature register --namespace Microsoft.Peering --name AllowPeeringService
+az provider register --name Microsoft.Peering
+```
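
Feature and provider registration can take a few minutes to complete. A quick way to confirm both before continuing (a sketch using the standard state queries):

```azurecli-interactive
# Both commands should eventually report "Registered".
az feature show --namespace Microsoft.Peering --name AllowPeeringService --query properties.state --output tsv
az provider show --namespace Microsoft.Peering --query registrationState --output tsv
```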
-### Connectivity provider
+## List Peering Service locations and service providers
-You can work with an internet service provider or internet exchange partner to obtain Peering Service to connect your network with the Microsoft network.
+Use [az peering service country list](/cli/azure/peering/service/country#az-peering-service-country-list) to list the countries where Peering Service is available and [az peering service location list](/cli/azure/peering/service/location#az-peering-service-location-list) to list the available metro locations in a specific country where you can get the Peering Service:
-Make sure that the connectivity providers are partnered with Microsoft.
+```azurecli-interactive
+# List the countries available for Peering Service.
+az peering service country list --out table
+# List metro locations serviced in a country
+az peering service location list --country "united states" --output table
+```
+Use [az peering service provider list](/cli/azure/peering/service/provider#az-peering-service-provider-list) to get a list of available [Peering Service providers](location-partners.md):
-- This article requires version 2.0.28 or later of the Azure CLI. If using Azure Cloud Shell, the latest version is already installed.
+```azurecli-interactive
+az peering service provider list --output table
+```
-### 1. Select your subscription
+## Create a Peering Service connection
-Select the subscription for which you want to register the Peering Service connection.
+Create a Peering Service connection using [az peering service create](/cli/azure/peering/service#az-peering-service-create):
```azurecli-interactive
-az account set --subscription "<subscription ID>"
+az peering service create --location "eastus" --peering-service-name "myPeeringService" --resource-group "myResourceGroup" --peering-service-location "Virginia" --peering-service-provider "Contoso"
```
-If you don't already have a resource group, you must create one before you register your Peering Service connection. You can create a resource group by running the following command:
+## Add the Peering Service prefix
+
+Use [az peering service prefix create](/cli/azure/peering/service/prefix#az-peering-service-prefix-create) to add the prefix provided to you by the connectivity provider:
```azurecli-interactive
-az group create -n MyResourceGroup -l "West US"
+az peering service prefix create --peering-service-name "myPeeringService" --prefix-name "myPrefix" --resource-group "myResourceGroup" --peering-service-prefix-key "00000000-0000-0000-0000-000000000000" --prefix "240.0.0.0/32"
```
-### 2. Register your subscription with the resource provider and feature flag
+## List all Peering Services connections
-Before you proceed to the steps of registering the Peering Service connection by using the Azure CLI, register your subscription with the resource provider and feature flag by using the Azure CLI. The Azure CLI commands are specified here:
+To view the list of all Peering Service connections, use [az peering service list](/cli/azure/peering/service#az-peering-service-list):
```azurecli-interactive
+az peering service list --resource-group "myResourceGroup" --output table
+```
-az feature register --namespace Microsoft.Peering --name AllowPeeringService
+## List all Peering Service prefixes
+To view the list of all Peering Service prefixes, use [az peering service prefix list](/cli/azure/peering/service/prefix#az-peering-service-prefix-list):
+
+```azurecli-interactive
+az peering service prefix list --peering-service-name "myPeeringService" --resource-group "myResourceGroup"
```
-### 3. Register the Peering Service connection
+## Remove the Peering Service prefix
-Register the Peering Service connection by using the following set of commands via the Azure CLI. This example registers the Peering Service connection named myPeeringService.
+To remove a Peering Service prefix, use [az peering service prefix delete](/cli/azure/peering/service/prefix#az-peering-service-prefix-delete):
```azurecli-interactive
-az peering service create : Create peering service\
- --location -l \
- --name myPeeringService\
- --resource-group -g MyResourceGroup\
- --peering-service-location\
- --peering-service-provider\
- --tags
+az peering service prefix delete --peering-service-name "myPeeringService" --prefix-name "myPrefix" --resource-group "myResourceGroup"
```
-### 4. Register the prefix
+## Delete a Peering Service connection
-Register the prefix that's provided by the connectivity provider by executing the following commands via the Azure CLI. This example registers the prefix named myPrefix.
+To delete a Peering Service connection, use [az peering service delete](/cli/azure/peering/service#az-peering-service-delete):
```azurecli-interactive
-az peering service prefix create \
- --name myPrefix\
- --peering-service-name myPeeringService\
- --resource-group -g myResourceGroup\
+az peering service delete --peering-service-name "myPeeringService" --resource-group "myResourceGroup"
```

## Next steps

- To learn more about Peering Service connection, see [Peering Service connection](connection.md).
-- To learn about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
-- To measure telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
-- To register the connection by using Azure PowerShell, see [Register a Peering Service connection - Azure PowerShell](powershell.md).
-- To register the connection by using the Azure portal, see [Register a Peering Service connection - Azure portal](azure-portal.md).
+- To learn more about Peering Service connection telemetry, see [Peering Service connection telemetry](connection-telemetry.md).
+- To measure Peering Service connection telemetry, see [Measure connection telemetry](measure-connection-telemetry.md).
peering-service Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/peering-service/powershell.md
Previously updated : 01/13/2022 Last updated : 01/19/2023
If you don't have an Azure subscription, create a [free account](https://azure.m
[!INCLUDE [cloud-shell-try-it.md](../../includes/cloud-shell-try-it.md)]
-If you decide to install and use PowerShell locally instead, this quickstart requires you to use Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. For installation and upgrade information, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
+If you decide to install and use PowerShell locally instead, this article requires you to use Azure PowerShell module version 1.0.0 or later. To find the installed version, run `Get-Module -ListAvailable Az`. For installation and upgrade information, see [Install Azure PowerShell module](/powershell/azure/install-az-ps).
Finally, if you're running PowerShell locally, you'll also need to run `Connect-AzAccount`. That command creates a connection with Azure.
Get-AzPeeringServiceLocation -Country "United States"
```

Use [Get-AzPeeringServiceProvider](/powershell/module/az.peering/get-azpeeringserviceprovider) to get a list of available [Peering Service providers](location-partners.md):
-
```azurepowershell-interactive
Get-AzPeeringServiceProvider
```
postgresql Concepts Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-azure-ad-authentication.md
Once you've authenticated against the Active Directory, you then retrieve a toke
## Next steps

- To learn how to create and populate Azure AD, and then configure Azure AD with Azure Database for PostgreSQL, see [Configure and sign in with Azure AD for Azure Database for PostgreSQL](how-to-configure-sign-in-azure-ad-authentication.md).
-- For an overview of logins, users, and database roles Azure Database for PostgreSQL, see [Create users in Azure Database for PostgreSQL - Flexible Server](how-to-create-users.md).
- To learn how to manage Azure AD users for Flexible Server, see [Manage Azure Active Directory users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).

<!--Image references-->
postgresql Concepts Logical https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-logical.md
Here is an example of configuring pglogical at the provider database server and
1. Install pglogical extension in the database in both the provider and the subscriber database servers.

   ```SQL
- \C myDB
+ \c myDB
   CREATE EXTENSION pglogical;
   ```

2. If the replication user is other than the server administration user (who created the server), make sure that you grant membership in a role `azure_pg_admin` to the user and assign REPLICATION and LOGIN attributes to the user. See [pglogical documentation](https://github.com/2ndQuadrant/pglogical#limitations-and-restrictions) for details.
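
For example, the grants in step 2 could be run through psql as follows. This is a sketch; the server, database, and `replication_user` names are placeholders.

```bash
# Grant role membership and the REPLICATION and LOGIN attributes (run where the user exists).
psql "host=myserver.postgres.database.azure.com dbname=myDB user=myadmin sslmode=require" \
  -c "GRANT azure_pg_admin TO replication_user;" \
  -c "ALTER ROLE replication_user WITH REPLICATION LOGIN;"
```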
postgresql Connect Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/connect-java.md
az postgres flexible-server db create \
Next, create a non-admin user and grant all permissions to the database.

> [!NOTE]
-> You can read more detailed information about creating PostgreSQL users in [Create users in Azure Database for PostgreSQL](./how-to-create-users.md).
+> You can read more detailed information about managing PostgreSQL users in [Manage Azure Active Directory users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).
#### [Passwordless (Recommended)](#tab/passwordless)
postgresql How To Configure Sign In Azure Ad Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-configure-sign-in-azure-ad-authentication.md
description: Learn how to set up Azure Active Directory (Azure AD) for authentic
Previously updated : 11/04/2022 Last updated : 01/18/2023
In this article, you'll configure Azure Active Directory (Azure AD) access for a
> [!NOTE] > Azure Active Directory authentication for Azure Database for PostgreSQL - Flexible Server is currently in preview.
-You can configure Azure AD authentication for Azure Database for PostgreSQL - Flexible Server either during server provisioning or later. Only Azure AD administrator users can create or enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations, because that role has elevated user permissions (for example, CREATEDB).
+You can configure Azure AD authentication for Azure Database for PostgreSQL - Flexible Server either during server provisioning or later. Only Azure AD administrator users can create or enable users for Azure AD-based authentication. We recommend not using the Azure AD administrator for regular database operations because that role has elevated user permissions (for example, CREATEDB).
-You can have multiple Azure AD admin users with Azure Database for PostgreSQL - Flexible Server. Azure AD admin users can be a user, a group, or a service principal.
+You can have multiple Azure AD admin users with Azure Database for PostgreSQL - Flexible Server. Azure AD admin users can be a user, a group, or service principal.
## Prerequisites

- An Azure account with an active subscription. If you don't already have one, [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-
+- One of the following roles: **Global Administrator**, **Privileged Role Administrator**, **Tenant Creator**.
- Installation of the [Azure CLI](/cli/azure/install-azure-cli).

## Install the Azure AD PowerShell module
The following steps are mandatory to use Azure AD authentication with Azure Data
```powershell
Connect-AzureAD -TenantId <customer tenant id>
```
-A successful output will look similar to the following.
-```
-Account Environment TenantId TenantDomain AccountType
-- -- -- --
-passwordless-user@contoso.com AzureCloud 456e5515-431d-4a70-874d-bdae2ba97c1d <your tenant name>.onmicrosoft.com User
-```
+A successful output looks similar to the following.
-Ensure that your Azure tenant has the service principal for the Azure Database for PostgreSQL Flexible Server. This only needs to be done once per Azure tenant. First, check for the existence of the service principal in your tenant with this command. The specific ObjectId value is for the Azure Database for PostgreSQL Flexible Server service principal.
+```output
+Account Environment TenantId TenantDomain AccountType
+-------------- ----------- -------- ------------ -----------
+<your account> AzureCloud <your tenant Id> <your tenant name>.onmicrosoft.com User
```
-Get-AzureADServicePrincipal -ObjectId 0049e2e2-fcea-4bc4-af90-bdb29a9bbe98
+
+Ensure that your Azure tenant has the service principal for the Azure Database for PostgreSQL Flexible Server. This only needs to be done once per Azure tenant. First, check for the existence of the service principal in your tenant with this command. The ObjectId value is for the Azure Database for PostgreSQL Flexible Server service principal.
+
+> [!NOTE]
+> The following script uses an example Azure app registration for testing. If you want to use your own IDs, substitute your own app registration's object ID and application ID.
+
+```powershell
+Get-AzureADServicePrincipal -ObjectId 97deb67a-332c-456a-9ef4-3a95eb59c74b
```
+
If the service principal exists, you'll see the following output.
-```
+
+```output
ObjectId                             AppId                                DisplayName
--------                             -----                                -----------
0049e2e2-fcea-4bc4-af90-bdb29a9bbe98 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9 Azure OSSRDBMS PostgreSQL Flexible Server
```
+> [!IMPORTANT]
+> If you're not a **Global Administrator**, **Privileged Role Administrator**, or **Tenant Creator**, you can't proceed past this step.
+ ### Grant read access
-Grant Azure Database for PostgreSQL - Flexible Server Service Principal read access to a customer tenant, to request Graph API tokens for Azure AD validation tasks:
+Grant Azure Database for PostgreSQL - Flexible Server Service Principal read access to a customer tenant to request Graph API tokens for Azure AD validation tasks:
```powershell New-AzureADServicePrincipal -AppId 5657e26c-cc92-45d9-bc47-9da6cfdb4ed9
Azure AD is a multitenant application. It requires outbound connectivity to perf
- **Public access (allowed IP addresses)**: No extra network rules are required. - **Private access (virtual network integration)**:
- - You need an outbound network security group (NSG) rule to allow virtual network traffic to reach the `AzureActiveDirectory` service tag only.
- - Optionally, if you're using a proxy, you can add a new firewall rule to allow HTTP/S traffic to reach the `AzureActiveDirectory` service tag only.
+ - You need an outbound network security group (NSG) rule to allow virtual network traffic to only reach the `AzureActiveDirectory` service tag.
+ - Optionally, if you're using a proxy, you can add a new firewall rule to allow HTTP/S traffic to reach only the `AzureActiveDirectory` service tag.
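
For the private access case, such an outbound rule could look like the following Azure CLI sketch. The resource group, NSG name, rule name, and priority are placeholders.

```azurecli-interactive
# Allow outbound HTTPS from the virtual network to the AzureActiveDirectory service tag only.
az network nsg rule create \
  --resource-group "myResourceGroup" \
  --nsg-name "myNsg" \
  --name "AllowAzureADOutbound" \
  --priority 100 \
  --direction Outbound \
  --access Allow \
  --protocol Tcp \
  --destination-address-prefixes AzureActiveDirectory \
  --destination-port-ranges 443
```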
To set the Azure AD admin during server provisioning, follow these steps:

1. In the Azure portal, during server provisioning, select either **PostgreSQL and Azure Active Directory authentication** or **Azure Active Directory authentication only** as the authentication method.
1. On the **Set admin** tab, select a valid Azure AD user, group, service principal, or managed identity in the customer tenant to be the Azure AD administrator.
-
- You can optionally add a local PostgreSQL admin account if you prefer using the **PostgreSQL and Azure Active Directory authentication** method.
- > [!NOTE]
- > You can add only one Azure admin user during server provisioning. You can add multiple Azure AD admin users after the server is created.
+ You can optionally add a local PostgreSQL admin account if you prefer using the **PostgreSQL and Azure Active Directory authentication** method.
+
+ > [!NOTE]
> You can add only one Azure AD admin user during server provisioning. You can add multiple Azure AD admin users after the server is created.
++
+ :::image type="content" source="media/how-to-configure-sign-in-Azure-ad-authentication/set-Azure-ad-admin-server-creation.png" alt-text="Screenshot that shows selections for setting an Azure AD admin during server provisioning.]":::
-![Screenshot that shows selections for setting an Azure AD admin during server provisioning.][3]
To set the Azure AD administrator after server creation, follow these steps:
1. Select **Add Azure AD Admins**. Then select a valid Azure AD user, group, service principal, or managed identity in the customer tenant to be an Azure AD administrator.
1. Select **Save**.
-![Screenshot that shows selections for setting an Azure AD admin after server creation.][2]
+ :::image type="content" source="media/how-to-configure-sign-in-Azure-ad-authentication/set-Azure-ad-admin.png" alt-text="Screenshot that shows selections for setting an Azure AD admin after server creation.":::
> [!IMPORTANT]
-> When you're setting the administrator, a new user is added to Azure Database for PostgreSQL - Flexible Server with full administrator permissions.
+> When setting the administrator, a new user is added to Azure Database for PostgreSQL - Flexible Server with full administrator permissions.
## Connect to Azure Database for PostgreSQL by using Azure AD

The following high-level diagram summarizes the workflow of using Azure AD authentication with Azure Database for PostgreSQL:
-![Diagram of authentication flow between Azure Active Directory, the user's computer, and the server.][1]
+ :::image type="content" source="media/how-to-configure-sign-in-Azure-ad-authentication/authentication-flow.png" alt-text="Diagram of authentication flow between Azure Active Directory, the user's computer, and the server.":::
-Azure AD integration works with standard PostgreSQL tools like psql, which aren't Azure AD aware and support only specifying the username and password when you're connecting to PostgreSQL. The Azure AD token is passed as the password, as shown in the preceding diagram.
+Azure AD integration works with standard PostgreSQL tools like psql, which aren't Azure AD aware and support only specifying the username and password when you're connecting to PostgreSQL. As shown in the preceding diagram, the Azure AD token is passed as the password.
We've tested the following clients:
We've tested the following clients:
## Authenticate with Azure AD
-Use the following procedures to authenticate with Azure AD as an Azure Database for PostgreSQL - Flexible Server user. You can follow along in Azure Cloud Shell, on an Azure virtual machine, or on your local machine.
+Use the following procedures to authenticate with Azure AD as an Azure Database for PostgreSQL - Flexible Server user. You can follow along in Azure Cloud Shell, on an Azure virtual machine, or on your local machine.
### Sign in to the user's Azure subscription
The command opens a browser window to the Azure AD authentication page. It requi
### Retrieve the Azure AD access token
-Use the Azure CLI to acquire an access token for the Azure AD authenticated user to access Azure Database for PostgreSQL. Here's an example for the public cloud:
+Use the Azure CLI to acquire an access token for the Azure AD authenticated user to access Azure Database for PostgreSQL. Here's an example for the public cloud:
```azurecli-interactive
az account get-access-token --resource https://ossrdbms-aad.database.windows.net
The token is a Base64 string. It encodes all the information about the authentic
### Use a token as a password for signing in with client psql
-When you're connecting, it's best to use the access token as the PostgreSQL user password.
+When connecting, it's best to use the access token as the PostgreSQL user password.
-While you're using the psql command-line client, the access token needs to be passed through the `PGPASSWORD` environment variable. The reason is that the access token exceeds the password length that psql can accept directly.
+While using the psql command-line client, the access token needs to be passed through the `PGPASSWORD` environment variable. The reason is that the access token exceeds the password length that psql can accept directly.
Here's a Windows example:
$env:PGPASSWORD='<copy/pasted TOKEN value from step 2>'
Here's a Linux/macOS example:
-```shell
+```bash
export PGPASSWORD=<copy/pasted TOKEN value from step 2>
```

You can also combine steps 2 and 3 using command substitution. The token retrieval can be encapsulated into a variable and passed directly as the value of the `PGPASSWORD` environment variable:
-```shell
+```bash
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query "[accessToken]" -o tsv) ```
+Now you can initiate a connection with Azure Database for PostgreSQL as you usually would:
-Now you can initiate a connection with Azure Database for PostgreSQL as you normally would:
-
-```shell
+```bash
psql "host=mydb.postgres... user=user@tenant.onmicrosoft.com dbname=postgres sslmode=require" ```
To connect by using an Azure AD token with PgAdmin, follow these steps:
Here are some essential considerations when you're connecting:
-* `user@tenant.onmicrosoft.com` is the name of the Azure AD user.
-* Be sure to use the exact way that the Azure user is spelled. Azure AD user and group names are case-sensitive.
-* If the name contains spaces, use a backslash (`\`) before each space to escape it.
-* The access token's validity is 5 minutes to 60 minutes. We recommend that you get the access token just before you initiate the sign-in to Azure Database for PostgreSQL.
+- `user@tenant.onmicrosoft.com` is the name of the Azure AD user.
+- Be sure to use the exact way the Azure user is spelled. Azure AD user and group names are case-sensitive.
+- If the name contains spaces, use a backslash (`\`) before each space to escape it.
+- The access token's validity is 5 minutes to 60 minutes. You should get the access token before initiating the sign-in to Azure Database for PostgreSQL.
You're now authenticated to your Azure Database for PostgreSQL server through Azure AD authentication.
You're now authenticated to your Azure Database for PostgreSQL server through Az
### Create Azure AD groups in Azure Database for PostgreSQL - Flexible Server
-To enable an Azure AD group for access to your database, use the same mechanism that you used for users, but instead specify the group name. For example:
+To enable an Azure AD group to access your database, use the same mechanism you used for users, but specify the group name instead. For example:
-```
+```sql
select * from pgaadauth_create_principal('Prod DB Readonly', false, false);
```
-When group members sign in, they use their personal access tokens but specify the group name as the username.
+When group members sign in, they use their access tokens but specify the group name as the username.
> [!NOTE]
> Azure Database for PostgreSQL - Flexible Server supports managed identities as group members.
When group members sign in, they use their personal access tokens but specify th
Authenticate with Azure AD by using the Azure CLI. This step isn't required in Azure Cloud Shell. The user needs to be a member of the Azure AD group.
-```
+```azurecli-interactive
az login ``` ### Retrieve the Azure AD access token
-Use the Azure CLI to acquire an access token for the Azure AD authenticated user to access Azure Database for PostgreSQL. Here's an example for the public cloud:
+Use the Azure CLI to acquire an access token for the Azure AD authenticated user to access Azure Database for PostgreSQL. Here's an example for the public cloud:
```azurecli-interactive az account get-access-token --resource https://ossrdbms-aad.database.windows.net ```
-You must specify the preceding resource value exactly as shown. For other clouds, you can look up the resource value by using the following command:
+You must specify the preceding resource value exactly as shown. For other clouds, you can look up the resource value by using the following command:
```azurecli-interactive
az cloud show
After authentication is successful, Azure AD returns an access token:
### Use a token as a password for signing in with psql or PgAdmin
-These considerations are important when you're connecting as a group member:
+These considerations are essential when you're connecting as a group member:
-- The group name is the name of the Azure AD group that you're trying to connect as.
-- Be sure to use the exact way that the Azure AD group name is spelled. Azure AD user and group names are case-sensitive.
+- The group name is the name of the Azure AD group that you're trying to connect as.
+- Be sure to use the exact way the Azure AD group name is spelled. Azure AD user and group names are case-sensitive.
- When you're connecting as a group, use only the group name and not the alias of a group member.
- If the name contains spaces, use a backslash (`\`) before each space to escape it.
-- The access token's validity is 5 minutes to 60 minutes. We recommend that you get the access token just before you initiate the sign-in to Azure Database for PostgreSQL.
+- The access token's validity is 5 minutes to 60 minutes. We recommend you get the access token before initiating the sign-in to Azure Database for PostgreSQL.
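
Putting these considerations together, a group sign-in could look like this sketch. The server name and the `Prod DB Readonly` group are hypothetical; note the escaped spaces in the group name.

```bash
# Fetch a fresh token and use it as the password; connect as the group, not a member.
export PGPASSWORD=$(az account get-access-token --resource-type oss-rdbms --query "[accessToken]" -o tsv)
psql "host=mydb.postgres.database.azure.com user=Prod\ DB\ Readonly dbname=postgres sslmode=require"
```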
You're now authenticated to your PostgreSQL server through Azure AD authentication.

## Next steps

- Review the overall concepts for [Azure AD authentication with Azure Database for PostgreSQL - Flexible Server](concepts-azure-ad-authentication.md).
-- Learn how to [manage Azure AD roles in Azure Database for PostgreSQL - Flexible Server](how-to-create-users.md).
-
-<!--Image references-->
-
-[1]: ./media/concepts-azure-ad-authentication/authentication-flow.png
-[2]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin.png
-[3]: ./media/concepts-azure-ad-authentication/set-azure-ad-admin-server-creation.png
+- Learn how to [Manage Azure Active Directory users - Azure Database for PostgreSQL - Flexible Server](how-to-manage-azure-ad-users.md).
postgresql Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/release-notes.md
This page provides latest news and updates regarding feature additions, engine v
## Release: November 2022
+* Public preview of [Enhanced Metrics](./concepts-monitoring.md) for Azure Database for PostgreSQL - Flexible Server
* Support for [minor versions](./concepts-supported-versions.md) 14.5, 13.8, 12.12, 11.17. <sup>$</sup> * General availability of Azure Database for PostgreSQL - Flexible Server in China North 3 & China East 3 Regions.
private-5g-core Modify Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/modify-packet-core.md
The following list contains the data that will be lost over a packet core reinst
1. If you want to keep using the same credentials when signing in to [distributed tracing](distributed-tracing.md), save a copy of the current password to a secure location. 1. If you want to keep using the same credentials when signing in to the [packet core dashboards](packet-core-dashboards.md), save a copy of the current password to a secure location.
-1. Any customizations made to the packet core dashboards won't be carried over the upgrade. Refer to [Exporting a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#exporting-a-dashboard) in the Grafana documentation to save a backed-up copy of your dashboards.
-1. Most UEs will automatically re-register and recreate any sessions after the upgrade completes. If you have any special devices that require manual operations to recover from a packet core outage, gather a list of these UEs and their recovery steps.
+1. Any customizations made to the packet core dashboards won't be carried over the reinstall. Refer to [Exporting a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#exporting-a-dashboard) in the Grafana documentation to save a backed-up copy of your dashboards.
+1. Most UEs will automatically re-register and recreate any sessions after the reinstall completes. If you have any special devices that require manual operations to recover from a packet core outage, gather a list of these UEs and their recovery steps.
## Select the packet core instance to modify
private-5g-core Provision Sims Arm Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-arm-template.md
To begin, collect the values in the following table for each SIM you want to pro
| Value |Parameter name | |--|--| | SIM name. The SIM name must only contain alphanumeric characters, dashes, and underscores. | `simName` |
-| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. The ICCID is a unique numerical value between 19 and 20 digits in length, beginning with 89. | `integratedCircuitCardIdentifier` |
+| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. The ICCID is optional and is a unique numerical value between 19 and 20 digits in length, beginning with 89. | `integratedCircuitCardIdentifier` |
| The international mobile subscriber identity (IMSI). The IMSI is a unique number (usually 15 digits) identifying a device or user in a mobile network. | `internationalMobileSubscriberIdentity` | | The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. It must be a 32-character string, containing hexadecimal characters only. | `authenticationKey` | | The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | `operatorKeyCode` | | The type of device using this SIM. This value is an optional free-form string. You can use it as required to easily identify device types using the enterprise's private mobile network. | `deviceType` |
+| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. You'll need to assign a SIM policy if you want to assign static IP addresses to the SIM during provisioning. | `simPolicyId` |
+
+### Collect the required information for assigning static IP addresses
+
+You only need to complete this step if you've configured static IP address allocation for your packet core instance(s) and you want to assign static IP addresses to the SIMs during SIM provisioning.
+
+Collect the values in the following table for each SIM you want to provision. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to this SIM, collect the values for each IP address.
+
+Each IP address must come from the pool you assigned for static IP address allocation when creating the relevant data network, as described in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values). For more information, see [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools).
+
+| Value | Parameter name |
+|--|--|
+| The data network that the SIM will use. | `staticIpConfiguration.attachedDataNetworkId` |
+| The network slice that the SIM will use. | `staticIpConfiguration.sliceId` |
+| The static IP address to assign to the SIM. | `staticIpConfiguration.staticIpAddress` |
## Prepare an array for your SIMs
-Use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to create an array containing properties for each of the SIMs you want to provision. The following is an example of an array containing properties for two SIMs. If you don't want to assign a SIM policy to a SIM, you can delete the `simPolicyId` parameter for that SIM.
+Use the information you collected in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims) to create a JSON array containing properties for each of the SIMs you want to provision. The following is an example of an array containing properties for two SIMs (`SIM1` and `SIM2`).
+
+If you don't want to configure static IP addresses for a SIM, delete the `staticIpConfiguration` parameter for that SIM. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to the same SIM, you can include additional `attachedDataNetworkId`, `sliceId` and `staticIpAddress` parameters for each IP address under `staticIpConfiguration`.
```json
[
Use the information you collected in [Collect the required information for your
"internationalMobileSubscriberIdentity": "001019990010001", "authenticationKey": "00112233445566778899AABBCCDDEEFF", "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
- "deviceType": "Cellphone"
+ "deviceType": "Cellphone",
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy1",
+ "staticIpConfiguration" :[
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn1",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.54"
+ },
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn2",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.55"
+ }
+ ]
}, { "simName": "SIM2",
Use the information you collected in [Collect the required information for your
"internationalMobileSubscriberIdentity": "001019990010002", "authenticationKey": "11112233445566778899AABBCCDDEEFF", "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88738d",
- "deviceType": "Sensor"
+ "deviceType": "Sensor",
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy2",
+ "staticIpConfiguration" :[
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn1",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.54"
+ },
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn2",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.55"
+ }
+ ]
}
]
```
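
Once the array is ready, you could pass it to the template at deployment time, for example with the Azure CLI. This is a sketch only; the file names are hypothetical, and the parameter name is assumed to match the template's **Sim Resources** parameter described below.

```azurecli-interactive
# Deploy the ARM template, supplying the SIM array from a local file.
az deployment group create \
  --resource-group "contoso-rg" \
  --template-file "provision-sims.json" \
  --parameters simResources=@sims-array.json
```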
The following Azure resources are defined in the template.
- **Existing Mobile Network Name:** enter the name of the Mobile Network resource representing your private mobile network. - **Existing Sim Policy Name:** enter the name of the SIM policy you want to assign to the SIMs. - **Sim Group Name:** enter the name for the new SIM group.
- - **Sim Resources:** paste in the array you prepared in [Prepare an array for your SIMs](#prepare-an-array-for-your-sims).
+ - **Sim Resources:** paste in the JSON array you prepared in [Prepare an array for your SIMs](#prepare-an-array-for-your-sims).
:::image type="content" source="media/provision-sims-arm-template/sims-arm-template-configuration-fields.png" alt-text="Screenshot of the Azure portal showing the configuration fields for the SIMs ARM template.":::
The following Azure resources are defined in the template.
## Next steps
-If you've configured static IP address allocation for your packet core instance(s), you may want to [assign static IP addresses to the SIMs you've provisioned](manage-existing-sims.md#assign-static-ip-addresses).
+If you've configured static IP address allocation for your packet core instance(s) and you haven't already assigned static IP addresses to the SIMs you've provisioned, you can do so by following the steps in [Assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses).
private-5g-core Provision Sims Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/provision-sims-azure-portal.md
To begin, collect the values in the following table for each SIM you want to pro
| Value | Field name in Azure portal | JSON file parameter name | |--|--|--| | SIM name. The SIM name must only contain alphanumeric characters, dashes, and underscores. | **SIM name** | `simName` |
-| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. The ICCID is a unique numerical value between 19 and 20 digits in length, beginning with 89. | **ICCID** | `integratedCircuitCardIdentifier` |
+| The Integrated Circuit Card Identification Number (ICCID). The ICCID identifies a specific physical SIM or eSIM, and includes information on the SIM's country and issuer. The ICCID is optional and is a unique numerical value between 19 and 20 digits in length, beginning with 89. | **ICCID** | `integratedCircuitCardIdentifier` |
| The international mobile subscriber identity (IMSI). The IMSI is a unique number (usually 15 digits) identifying a device or user in a mobile network. | **IMSI** | `internationalMobileSubscriberIdentity` | | The Authentication Key (Ki). The Ki is a unique 128-bit value assigned to the SIM by an operator, and is used with the derived operator code (OPc) to authenticate a user. It must be a 32-character string, containing hexadecimal characters only. | **Ki** | `authenticationKey` | | The derived operator code (OPc). The OPc is taken from the SIM's Ki and the network's operator code (OP). The packet core instance uses it to authenticate a user using a standards-based algorithm. The OPc must be a 32-character string, containing hexadecimal characters only. | **Opc** | `operatorKeyCode` | | The type of device using this SIM. This value is an optional free-form string. You can use it as required to easily identify device types using the enterprise's private mobile network. | **Device type** | `deviceType` |
-| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. | **SIM policy** | `simPolicyId` |
+| The SIM policy to assign to the SIM. This is optional, but your SIMs won't be able to use the private mobile network without an assigned SIM policy. You'll also need to assign a SIM policy if you want to assign static IP addresses to the SIM during provisioning. | **SIM policy** | `simPolicyId` |
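For reference, the following sketch shows how the values in this table map to the JSON parameter names for a single SIM entry. The ICCID, IMSI, and key values here are format-conforming placeholders only; the full file format, including static IP configuration, appears in [Create the JSON file](#create-the-json-file) below.

```json
{
  "simName": "SIM1",
  "integratedCircuitCardIdentifier": "8912345678901234566",
  "internationalMobileSubscriberIdentity": "001019990010001",
  "authenticationKey": "00112233445566778899AABBCCDDEEFF",
  "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d",
  "deviceType": "Cellphone",
  "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy1"
}
```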
+
+### Collect the required information for assigning static IP addresses
+
+You only need to complete this step if all of the following apply:
+
+- You're using a JSON file to provision your SIMs.
+- You've configured static IP address allocation for your packet core instance(s).
+- You want to assign static IP addresses to the SIMs during SIM provisioning.
+
+Collect the values in the following table for each SIM you want to provision. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to this SIM, collect the values for each IP address.
+
+Each IP address must come from the pool you assigned for static IP address allocation when creating the relevant data network, as described in [Collect data network values](collect-required-information-for-a-site.md#collect-data-network-values). For more information, see [Allocate User Equipment (UE) IP address pools](complete-private-mobile-network-prerequisites.md#allocate-user-equipment-ue-ip-address-pools).
+
+| Value | Field name in Azure portal | JSON file parameter name |
+|--|--|--|
+| The data network that the SIM will use. | Not applicable. | `staticIpConfiguration.attachedDataNetworkId` |
+| The network slice that the SIM will use. | Not applicable. | `staticIpConfiguration.sliceId` |
+| The static IP address to assign to the SIM. | Not applicable. | `staticIpConfiguration.staticIpAddress` |
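Taken together, each set of values you collect becomes one entry in the SIM's `staticIpConfiguration` array, as in this minimal sketch; the resource IDs and IP address are placeholders, and the full file format appears in [Create the JSON file](#create-the-json-file) below.

```json
"staticIpConfiguration": [
  {
    "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn1",
    "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
    "staticIpAddress": "10.132.124.54"
  }
]
```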
## Create the JSON file

Only carry out this step if you decided in [Prerequisites](#prerequisites) to use a JSON file to provision your SIMs. Otherwise, you can skip to [Begin provisioning the SIMs in the Azure portal](#begin-provisioning-the-sims-in-the-azure-portal).
-Prepare the JSON file using the information you collected for your SIMs in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims). This example file shows the required format. It contains the parameters required to provision two SIMs (`SIM1` and `SIM2`). If you don't want to assign a SIM policy to a SIM, you can delete the `simPolicyId` parameter for that SIM.
+Prepare the JSON file using the information you collected for your SIMs in [Collect the required information for your SIMs](#collect-the-required-information-for-your-sims). The example file below shows the required format. It contains the parameters required to provision two SIMs (`SIM1` and `SIM2`).
+
+If you don't want to configure static IP addresses for a SIM, delete the `staticIpConfiguration` parameter for that SIM. If your private mobile network has multiple data networks and you want to assign a different static IP address for each data network to the same SIM, include an additional entry under `staticIpConfiguration` for each IP address, each with its own `attachedDataNetworkId`, `sliceId`, and `staticIpAddress` parameters.
```json
[
Prepare the JSON file using the information you collected for your SIMs in [Coll
"authenticationKey": "00112233445566778899AABBCCDDEEFF", "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88737d", "deviceType": "Cellphone",
- "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy1"
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy1",
+ "staticIpConfiguration" :[
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn1",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.54"
+ },
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn2",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.55"
+ }
+ ]
}, { "simName": "SIM2",
Prepare the JSON file using the information you collected for your SIMs in [Coll
"authenticationKey": "11112233445566778899AABBCCDDEEFF", "operatorKeyCode": "63bfa50ee6523365ff14c1f45f88738d", "deviceType": "Sensor",
- "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy2"
+ "simPolicyId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/simPolicies/SimPolicy2",
+ "staticIpConfiguration" :[
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn1",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.54"
+ },
+ {
+ "attachedDataNetworkId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/packetCoreControlPlanes/site-1/packetCoreDataPlanes/site-1/attachedDataNetworks/adn2",
+ "sliceId": "/subscriptions/subid/resourceGroups/contoso-rg/providers/Microsoft.MobileNetwork/mobileNetworks/contoso-network/slices/slice-1",
+ "staticIpAddress": "10.132.124.55"
+ }
+ ]
}
]
```
In this step, you'll provision SIMs using a JSON file.
## Next steps
-If you've configured static IP address allocation for your packet core instance(s), you may want to [assign static IP addresses to the SIMs you've provisioned](manage-existing-sims.md#assign-static-ip-addresses).
+If you've configured static IP address allocation for your packet core instance(s) and you haven't already assigned static IP addresses to the SIMs you've provisioned, you can do so by following the steps in [Assign static IP addresses](manage-existing-sims.md#assign-static-ip-addresses).
private-5g-core Reinstall Packet Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/private-5g-core/reinstall-packet-core.md
The following list contains the data that will be lost over a packet core reinst
1. If you want to keep using the same credentials when signing in to [distributed tracing](distributed-tracing.md), save a copy of the current password to a secure location.
1. If you want to keep using the same credentials when signing in to the [packet core dashboards](packet-core-dashboards.md), save a copy of the current password to a secure location.
-1. Any customizations made to the packet core dashboards won't be carried over the upgrade. Refer to [Exporting a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#exporting-a-dashboard) in the Grafana documentation to save a backed-up copy of your dashboards.
-1. Most UEs will automatically re-register and recreate any sessions after the upgrade completes. If you have any special devices that require manual operations to recover from a packet core outage, gather a list of these UEs and their recovery steps.
+1. Any customizations made to the packet core dashboards won't be carried over during the reinstall. Refer to [Exporting a dashboard](https://grafana.com/docs/grafana/v6.1/reference/export_import/#exporting-a-dashboard) in the Grafana documentation to save a backed-up copy of your dashboards.
+1. Most UEs will automatically re-register and recreate any sessions after the reinstall completes. If you have any special devices that require manual operations to recover from a packet core outage, gather a list of these UEs and their recovery steps.
## Reinstall the packet core instance
purview Create Sensitivity Label https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/create-sensitivity-label.md
Sensitivity labels are supported in the Microsoft Purview Data Map for the follo
## Labeling for SQL databases
-In addition to labeling for schematized data assets, the Microsoft Purview Data Map also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Microsoft Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
+In addition to labeling for schematized data assets in the Microsoft Purview Data Map, Microsoft also supports labeling for SQL database columns using the SQL data classification in [SQL Server Management Studio (SSMS)](/sql/ssms/sql-server-management-studio-ssms). While Microsoft Purview uses the global [sensitivity labels](/microsoft-365/compliance/sensitivity-labels), SSMS only uses labels defined locally.
Labeling in Microsoft Purview and labeling in SSMS are separate processes that don't currently interact with each other. Therefore, **labels applied in SSMS are not shown in Microsoft Purview, and vice versa**. We recommend Microsoft Purview for labeling SQL databases, because the labels can be applied globally, across multiple platforms.
purview Register Scan Adls Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-adls-gen2.md
It's important to register the data source in Microsoft Purview prior to setting
> [!TIP]
> To troubleshoot any issues with scanning:
-> 1. Confirm you have followed all [**prerequisites for scanning**](#prerequisites-for-scan).
-> 1. Confirm you have followed all [**prerequisites for scanning**](#prerequisites-for-scan).
+> 1. Confirm you have properly set up [**authentication for scanning**](#authentication-for-a-scan).
> 1. Review our [**scan troubleshooting documentation**](troubleshoot-connections.md).
-### Prerequisites for scan
+### Authentication for a scan
-In order to have access to scan the data source, an authentication method in the ADLS Gen2 Storage account needs to be configured.
-The following options are supported:
+Your Azure network may allow communication between your Azure resources, but if you've set up firewalls, private endpoints, or virtual networks within Azure, you'll need to use one of the configurations in the following table.
-> [!Note]
-> If you have firewall enabled for the storage account, you must use managed identity authentication method when setting up a scan.
+|Networking constraints |Integration runtime type |Available credential types |
+|--|--|--|
+|No private endpoints or firewalls | Azure IR | Managed identity (Recommended), service principal, or account key|
+|Firewall enabled but no private endpoints| Azure IR | Managed identity |
+|Private endpoints enabled | *Self-Hosted IR | Service principal, account key|
-* **System-assigned managed identity (Recommended)** - As soon as the Microsoft Purview Account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant. Depending on the type of resource, specific RBAC role assignments are required for the Microsoft Purview system-assigned managed identity (SAMI) to perform the scans.
-
-* **User-assigned managed identity** (preview) - Similar to a system managed identity, a user-assigned managed identity (UAMI) is a credential resource that can be used to allow Microsoft Purview to authenticate against Azure Active Directory. For more information, you can see our [User-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
+*To use a self-hosted integration runtime, you'll first need to [create one](manage-integration-runtimes.md) and confirm your [network settings for Microsoft Purview](catalog-private-link.md).
-* **Account Key** - Secrets can be created inside an Azure Key Vault to store credentials in order to enable access for Microsoft Purview to scan data sources securely using the secrets. A secret can be a storage account key, SQL login password, or a password.
-
- > [!Note]
- > If you use this option, you need to deploy an _Azure key vault_ resource in your subscription and assign _Microsoft Purview account's_ SAMI with required access permission to secrets inside _Azure key vault_.
+# [System or user assigned managed identity](#tab/MI)
-* **Service Principal** - In this method, you can create a new or use an existing service principal in your Azure Active Directory tenant.
+#### Using a system or user assigned managed identity for scanning
-### Authentication for a scan
+There are two types of managed identity you can use:
-# [System or user assigned managed identity](#tab/MI)
+* **System-assigned managed identity (Recommended)** - As soon as the Microsoft Purview Account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant. Depending on the type of resource, specific RBAC role assignments are required for the Microsoft Purview system-assigned managed identity (SAMI) to perform the scans.
-#### Using a system or user assigned managed identity for scanning
+* **User-assigned managed identity** (preview) - Similar to a system managed identity, a user-assigned managed identity (UAMI) is a credential resource that can be used to allow Microsoft Purview to authenticate against Azure Active Directory. For more information, you can see our [User-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
It's important to give your Microsoft Purview account or user-assigned managed identity (UAMI) the permission to scan the ADLS Gen2 data source. You can add your Microsoft Purview account's system-assigned managed identity (which has the same name as your Microsoft Purview account) or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permissions are needed.
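If you prefer to script that role assignment rather than use the portal, the following is a minimal sketch of an ARM template resource granting **Storage Blob Data Reader** (built-in role GUID `2a2b9908-6ea1-4ae2-8e65-a410df84e7d1`) on a storage account. The `storageAccountName` and `purviewPrincipalId` parameters are hypothetical placeholders for your account name and the object ID of your Purview SAMI or UAMI; ARM templates tolerate the `//` comments shown.

```json
{
  "type": "Microsoft.Authorization/roleAssignments",
  "apiVersion": "2022-04-01",
  // Role assignment names must be globally unique GUIDs, so derive one deterministically.
  "name": "[guid(parameters('storageAccountName'), parameters('purviewPrincipalId'), 'blob-data-reader')]",
  // Scope the assignment to the storage account rather than the whole resource group.
  "scope": "[format('Microsoft.Storage/storageAccounts/{0}', parameters('storageAccountName'))]",
  "properties": {
    "roleDefinitionId": "[subscriptionResourceId('Microsoft.Authorization/roleDefinitions', '2a2b9908-6ea1-4ae2-8e65-a410df84e7d1')]",
    "principalId": "[parameters('purviewPrincipalId')]",
    "principalType": "ServicePrincipal"
  }
}
```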
It's important to give your Microsoft Purview account or user-assigned managed i
#### Using Account Key for scanning
+> [!Note]
+> If you use this option, you need to deploy an _Azure key vault_ resource in your subscription and [assign _Microsoft Purview account's_ System Assigned Managed Identity (SAMI) the required access permission to secrets inside _Azure key vault_.](manage-credentials.md#microsoft-purview-permissions-on-the-azure-key-vault)
+
When the selected authentication method is **Account Key**, you need to get your access key and store it in the key vault:
1. Navigate to your ADLS Gen2 storage account
It's important to give your service principal the permission to scan the ADLS Ge
#### If using Account Key
-1. Provide a **Name** for the scan, choose the appropriate collection for the scan, and select **Authentication method** as _Account Key_
+1. Provide a **Name** for the scan, select the Azure IR or your Self-Hosted IR depending on your configuration, choose the appropriate collection for the scan, and select **+ New** under **Credential**.
+
+1. Select **Account Key** as the authentication method, then select the appropriate **Key vault connection**, and provide the name of the secret you used to store the account key. Then select **Create**.
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-acct-key.png" alt-text="Screenshot that shows the Account Key option for scanning":::
+1. Select **Test connection**. On a successful connection, select **Continue**.
+
# [Service Principal](#tab/SP)

#### If using Service Principal
-1. Provide a **Name** for the scan, choose the appropriate collection for the scan, and select the **+ New** under **Credential**
+1. Provide a **Name** for the scan, select the Azure IR or your Self-Hosted IR depending on your configuration, choose the appropriate collection for the scan, and select the **+ New** under **Credential**
:::image type="content" source="media/register-scan-adls-gen2/register-adls-gen2-sp-option.png" alt-text="Screenshot that shows the option for service principal to enable scanning":::
purview Register Scan Azure Blob Storage Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/purview/register-scan-azure-blob-storage-source.md
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* First row values are non-empty
* First row values are unique
-* First row values are not a date or a number
+* First row values aren't a date or a number
## Prerequisites
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* An active [Microsoft Purview account](create-catalog-portal.md).
-* You will need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Administrator and Data Reader to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
\** Lineage is supported if the dataset is used as a source/sink in [Data Factory Copy activity](how-to-link-azure-data-factory.md)
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
This section enables you to register the Azure Blob storage account for scanning and data sharing in Purview.

### Prerequisites for register
-* You will need to be a Data Source Admin and one of the other Purview roles (e.g. Data Reader or Data Share Contributor) to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
+* You'll need to be a Data Source Admin and one of the other Purview roles (for example, Data Reader or Data Share Contributor) to register a source and manage it in the Microsoft Purview governance portal. See our [Microsoft Purview Permissions page](catalog-permissions.md) for details.
### Steps to register
For file types such as csv, tsv, psv, ssv, the schema is extracted when the foll
* First row values are non-empty
* First row values are unique
-* First row values are neither a date nor a number
+* First row values aren't a date or a number
### Authentication for a scan
-In order to have access to scan the data source, an authentication method in the Azure Blob Storage account needs to be configured.
-
-The following options are supported:
-
-> [!Note]
-> If you have firewall enabled for the storage account, you must use managed identity authentication method when setting up a scan.
-
-- **System-assigned managed identity (Recommended)** - As soon as the Microsoft Purview Account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant. Depending on the type of resource, specific RBAC role assignments are required for the Microsoft Purview SAMI to perform the scans.
+Your Azure network may allow communication between your Azure resources, but if you've set up firewalls, private endpoints, or virtual networks within Azure, you'll need to use one of the configurations in the following table.
-- **User-assigned managed identity** (preview) - Similar to a system-managed identity, a user-assigned managed identity (UAMI) is a credential resource that can be used to allow Microsoft Purview to authenticate against Azure Active Directory. For more information, you can see our [User-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
+|Networking constraints |Integration runtime type |Available credential types |
+|--|--|--|
+|No private endpoints or firewalls | Azure IR | Managed identity (Recommended), service principal, or account key|
+|Firewall enabled but no private endpoints| Azure IR | Managed identity |
+|Private endpoints enabled | *Self-Hosted IR | Service principal, account key|
-- **Account Key** - Secrets can be created inside an Azure Key Vault to store credentials in order to enable access for Microsoft Purview to scan data sources securely using the secrets. A secret can be a storage account key, SQL login password, or a password.
+*To use a self-hosted integration runtime, you'll first need to [create one](manage-integration-runtimes.md) and confirm your [network settings for Microsoft Purview](catalog-private-link.md).
- > [!Note]
- > If you use this option, you need to deploy an _Azure key vault_ resource in your subscription and assign _Microsoft Purview account's_ SAMI with required access permission to secrets inside _Azure key vault_.
+#### Using a system or user assigned managed identity for scanning
-- **Service Principal** - In this method, you can create a new or use an existing service principal in your Azure Active Directory tenant.
+There are two types of managed identity you can use:
-#### Using a system or user assigned managed identity for scanning
+* **System-assigned managed identity (Recommended)** - As soon as the Microsoft Purview Account is created, a system-assigned managed identity (SAMI) is created automatically in Azure AD tenant. Depending on the type of resource, specific RBAC role assignments are required for the Microsoft Purview system-assigned managed identity (SAMI) to perform the scans.
-It is important to give your Microsoft Purview account the permission to scan the Azure Blob data source. You can add access for the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permission is needed.
+* **User-assigned managed identity** (preview) - Similar to a system managed identity, a user-assigned managed identity (UAMI) is a credential resource that can be used to allow Microsoft Purview to authenticate against Azure Active Directory. For more information, you can see our [User-assigned managed identity guide](manage-credentials.md#create-a-user-assigned-managed-identity).
+It's important to give your Microsoft Purview account the permission to scan the Azure Blob data source. You can add access for the SAMI or UAMI at the Subscription, Resource Group, or Resource level, depending on what level scan permission is needed.
> [!NOTE]
> If you have firewall enabled for the storage account, you must use **managed identity** authentication method when setting up a scan.
When authentication method selected is **Account Key**, you need to get your acc
1. Select **Create** to complete
-1. If your key vault is not connected to Microsoft Purview yet, you will need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
+1. If your key vault isn't connected to Microsoft Purview yet, you'll need to [create a new key vault connection](manage-credentials.md#create-azure-key-vaults-connections-in-your-microsoft-purview-account)
1. Finally, [create a new credential](manage-credentials.md#create-a-new-credential) using the key to set up your scan.
#### Using Service Principal for scanning
##### Creating a new service principal
-If you need to [Create a new service principal](./create-service-principal-azure.md), it is required to register an application in your Azure AD tenant and provide access to Service Principal in your data sources. Your Azure AD Global Administrator or other roles such as Application Administrator can perform this operation.
+If you need to [Create a new service principal](./create-service-principal-azure.md), you must register an application in your Azure AD tenant and grant the service principal access to your data sources. Your Azure AD Global Administrator or other roles such as Application Administrator can perform this operation.
##### Getting the Service Principal's Application ID
If you need to [Create a new service principal](./create-service-principal-azure
##### Granting the Service Principal access to your Azure Blob account
-It is important to give your service principal the permission to scan the Azure Blob data source. You can add access for the service principal at the Subscription, Resource Group, or Resource level, depending on what level scan access is needed.
+It's important to give your service principal the permission to scan the Azure Blob data source. You can add access for the service principal at the Subscription, Resource Group, or Resource level, depending on what level scan access is needed.
> [!Note]
> You need to be an owner of the subscription to be able to add a service principal on an Azure resource.
Provide a **Name** for the scan, select the Microsoft Purview accounts SAMI or U
#### If using Account Key
-Provide a **Name** for the scan, choose the appropriate collection for the scan, and select **Authentication method** as _Account Key_ and select **Create**
+Provide a **Name** for the scan, select the Azure IR or your Self-Hosted IR depending on your configuration, choose the appropriate collection for the scan, set **Authentication method** to _Account Key_, and then select **Create**.
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-acct-key.png" alt-text="Screenshot that shows the Account Key option for scanning"::: #### If using Service Principal
-1. Provide a **Name** for the scan, choose the appropriate collection for the scan, and select the **+ New** under **Credential**
+1. Provide a **Name** for the scan, select the Azure IR or your Self-Hosted IR depending on your configuration, choose the appropriate collection for the scan, and select the **+ New** under **Credential**
:::image type="content" source="media/register-scan-azure-blob-storage-source/register-blob-sp-option.png" alt-text="Screenshot that shows the option for service principal to enable scanning":::
To map a storage account asset in a received share, you need ONE of the followin
* **Microsoft.Storage/storageAccounts/blobServices/containers/write** - This permission is available in the *Contributor*, *Owner*, *Storage Blob Data Contributor*, and *Storage Blob Data Owner* roles.

### Update shared data in source storage account
-Updates you make to shared files or data in the shared folder from source storage account will be made available to recipient in target storage account in near real time. When you delete subfolder or files within the shared folder, they will disappear for recipient. To delete the shared folder, file or parent folders or containers, you need to first revoke access to all your shares from the source storage account.
+Updates you make to shared files or data in the shared folder from the source storage account will be made available to the recipient in the target storage account in near real time. When you delete subfolders or files within the shared folder, they'll disappear for the recipient. To delete the shared folder, file, or parent folders or containers, you need to first revoke access to all your shares from the source storage account.
### Access shared data in target storage account

The target storage account enables the recipient to access the shared data read-only in near real time. You can connect analytics tools such as Synapse Workspace and Databricks to the shared data to perform analytics. The cost of accessing the shared data is charged to the target storage account.

### Service limit
-Source storage account can support up to 20 targets, and target storage account can support up to 100 sources. If you require an increase in limit, please contact Support.
+A source storage account can support up to 20 targets, and a target storage account can support up to 100 sources. If you require an increase in limit, contact Support.
## Access policy
reliability Availability Zones Migration Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/availability-zones-migration-overview.md
Title: Availability zone migration guidance overview for Microsoft Azure products and services description: Availability zone migration guidance overview for Microsoft Azure products and services -++ Last updated 11/08/2022 -+ # Availability zone migration guidance overview
reliability Migrate Api Mgt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-api-mgt.md
Last updated 07/07/2022 -+
reliability Migrate App Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-configuration.md
Last updated 09/10/2022 -+ # Migrate App Configuration to a region with availability zone support
reliability Migrate App Gateway V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-gateway-v2.md
Last updated 07/28/2022 -+ # Migrate Application Gateway and WAF deployments to availability zone support
reliability Migrate App Service Environment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service-environment.md
Last updated 06/08/2022 -+ # Migrate App Service Environment to availability zone support
reliability Migrate App Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-app-service.md
Last updated 12/14/2022 -+
reliability Migrate Cache Redis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-cache-redis.md
Last updated 06/23/2022 -+ # Migrate an Azure Cache for Redis instance to availability zone support
reliability Migrate Container Instances https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-container-instances.md
Last updated 07/22/2022 -+ # Migrate Azure Container Instances to availability zone support
reliability Migrate Database Mysql Flex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-database-mysql-flex.md
Last updated 12/13/2022 -+ # Migrate MySQL – Flexible Server to availability zone support
reliability Migrate Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-functions.md
Last updated 08/29/2022 -+
reliability Migrate Load Balancer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-load-balancer.md
Last updated 05/09/2022 -+ CustomerIntent: As a cloud architect/engineer, I need general guidance on migrating load balancers to using availability zones. <!-- CHANGE AUTHOR BEFORE PUBLISH -->
reliability Migrate Monitor Log Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-monitor-log-analytics.md
Last updated 07/21/2022 -+ # Migrate Log Analytics workspaces to availability zone support
reliability Migrate Recovery Services Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-recovery-services-vault.md
Last updated 06/24/2022 -+ # Migrate Azure Recovery Services vault to availability zone support
reliability Migrate Search Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-search-service.md
Last updated 08/01/2022 -+
reliability Migrate Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-storage.md
Last updated 09/27/2022 -+ # Migrate Azure Storage accounts to availability zone support
reliability Migrate Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-vm.md
Last updated 04/21/2022 -+ # Migrate Virtual Machines and Virtual Machine Scale Sets to availability zone support
reliability Migrate Workload Aks Mysql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/migrate-workload-aks-mysql.md
Last updated 08/29/2022 -+ # Migrate Azure Kubernetes Service (AKS) and MySQL Flexible Server workloads to availability zone support
reliability Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/overview.md
Title: Azure reliability documentation description: Azure reliability documentation for availability zones, cross-regional disaster recovery, availability of services for sovereign clouds, regions, and category. - Last updated 07/20/2022 -+++ # Azure reliability documentation
reliability Sovereign Cloud China https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/reliability/sovereign-cloud-china.md
Last updated 10/27/2022 -+ # Availability of services for Microsoft Azure operated by 21Vianet
role-based-access-control Conditions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/role-based-access-control/conditions-overview.md
Previously updated : 10/24/2022 Last updated : 01/19/2023 #Customer intent: As a dev, devops, or it admin, I want to learn how to constrain access within a role assignment by using conditions.
Some features of conditions are still in preview. The following table lists the
| Add conditions using [Azure PowerShell](conditions-role-assignments-powershell.md), [Azure CLI](conditions-role-assignments-cli.md), or [REST API](conditions-role-assignments-rest.md) | GA | October 2022 |
| Use [resource and request attributes](conditions-format.md#attributes) for specific combinations of Azure storage resources, access attribute types, and storage account performance tiers. For more information, see [Status of condition features in Azure Storage](../storage/common/authorize-data-access.md#status-of-condition-features-in-azure-storage). | GA | October 2022 |
| Use [custom security attributes on a principal in a condition](conditions-format.md#principal-attributes) | Preview | November 2021 |
-| Use resource and request attributes in a condition | Preview | May 2021 |
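For context on what these features configure, the condition itself travels on the role assignment, next to the role and principal. The following is a hedged sketch of a REST request body for creating a role assignment with a condition; the subscription ID, principal object ID, and container name are placeholders, and the condition shown restricts a **Storage Blob Data Reader** assignment to reading blobs in a single container.

```json
{
  "properties": {
    "roleDefinitionId": "/subscriptions/{subscriptionId}/providers/Microsoft.Authorization/roleDefinitions/2a2b9908-6ea1-4ae2-8e65-a410df84e7d1",
    "principalId": "{principalObjectId}",
    "conditionVersion": "2.0",
    "condition": "((!(ActionMatches{'Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read'})) OR (@Resource[Microsoft.Storage/storageAccounts/blobServices/containers:name] StringEquals 'blobs-example-container'))"
  }
}
```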
## Conditions and Azure AD PIM
route-server Tutorial Protect Route Server https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/route-server/tutorial-protect-route-server.md
This article helps you create an Azure Route Server with a DDoS protected virtual network. Azure DDoS protection protects your publicly accessible route server from Distributed Denial of Service attacks.

> [!IMPORTANT]
-> Azure DDoS protection Standard incurs a cost per public IP address in the virtual network where you enable the service. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
+> Azure DDoS Protection incurs a cost when you use the Standard SKU. Overage charges only apply if more than 100 public IPs are protected in the tenant. Ensure you delete the resources in this tutorial if you aren't using the resources in the future. For information about pricing, see [Azure DDoS Protection Pricing](https://azure.microsoft.com/pricing/details/ddos-protection/). For more information about Azure DDoS protection, see [What is Azure DDoS Protection?](../ddos-protection/ddos-protection-overview.md).
In this tutorial, you learn how to:
search Resource Partners Knowledge Mining https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-partners-knowledge-mining.md
Previously updated : 08/15/2022 Last updated : 01/18/2023 # Partner spotlight
search Resource Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-tools.md
Title: Tools for search indexing
+ Title: Productivity tools
description: Use existing code samples or build your own tools for working with a search index in Azure Cognitive Search.
Previously updated : 09/20/2022 Last updated : 01/18/2023
-# Tools - Azure Cognitive Search
+# Productivity tools - Azure Cognitive Search
-Tools are provided as source code that you can download, modify, and build to create an app that helps you develop or maintain a search solution.
-
-The following tools are built by engineers at Microsoft, but aren't part of the Azure Cognitive Search service and aren't under Service Level Agreement (SLA).
+Productivity tools are built by engineers at Microsoft, but aren't part of the Azure Cognitive Search service and aren't covered by a Service Level Agreement (SLA). These tools are provided as source code that you can download, modify, and build to create an app that helps you develop or maintain a search solution.
| Tool name | Description | Source code |
|--|--|--|
| [Azure Cognitive Search Lab readme](https://github.com/Azure-Samples/azure-search-lab/blob/main/README.md) | Connects to your search service with a Web UI that exercises the full REST API, including the ability to edit a live search index. | [https://github.com/Azure-Samples/azure-search-lab](https://github.com/Azure-Samples/azure-search-lab) |
-| [Knowledge Mining Accelerator readme](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) |
| [Back up and Restore readme](https://github.com/liamc) | Download a populated search index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
+| [Knowledge Mining Accelerator readme](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) |
| [Performance testing readme](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This solution helps you load test Azure Cognitive Search. It uses Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
+| [Visual Studio Code extension](https://github.com/microsoft/vscode-azurecognitivesearch) | Although the extension is no longer available in the Visual Studio Code Marketplace, the code is open source. You can clone and modify the tool for your own use. | [https://github.com/microsoft/vscode-azurecognitivesearch](https://github.com/microsoft/vscode-azurecognitivesearch) |
search Search Api Preview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-api-preview.md
Previously updated : 06/15/2022 Last updated : 01/18/2023 # Preview features in Azure Cognitive Search
-This article is a comprehensive list of all features that are in public preview. Preview functionality is provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), without a service level agreement, and isn't recommended for production workloads.
+This article provides a complete list of all features that are in public preview. This list is helpful if you're checking feature status.
-Preview features that transition to general availability are removed from this list. If a feature isn't listed below, you can assume it's generally available or retired. For announcements regarding general availability, see [Service Updates](https://azure.microsoft.com/updates/?product=search) or [What's New](whats-new.md).
+Preview functionality is provided under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/), without a service level agreement, and isn't recommended for production workloads.
+
+Preview features that transition to general availability are removed from this list. If a feature isn't listed below, you can assume it's generally available or retired. For announcements regarding general availability and retirement, see [Service Updates](https://azure.microsoft.com/updates/?product=search) or [What's New](whats-new.md).
|Feature&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; | Category | Description | Availability |
|---|---|---|---|
| [**Azure Files indexer**](search-file-storage-integration.md) | Indexer data source | Adds REST API support for creating indexers for [Azure Files](https://azure.microsoft.com/services/storage/files/) | Public preview, [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview). Announced in November 2021. |
-| [**Azure RBAC support**](search-security-rbac.md) | Security | Use new built-in roles to control access to indexes and indexing, eliminating or reducing the dependency on API keys. | Public preview ([registration required](./search-security-rbac.md?tabs=config-svc-portal%2croles-portal%2ctest-portal#step-1-preview-sign-up)). After you are registered, use the Azure portal or the Management REST API version 2021-04-01-Preview to configure a search service for data plane authentication. Announced in July 2021. |
+| [**Azure RBAC support (data plane)**](search-security-rbac.md) | Security | Use new built-in roles to control access to indexes and indexing, eliminating or reducing the dependency on API keys. | Public preview ([registration required](./search-security-rbac.md?tabs=config-svc-portal%2croles-portal%2ctest-portal#step-1-preview-sign-up)). After you're registered, use the Azure portal or the Management REST API version 2021-04-01-Preview to configure a search service for data plane authentication. Announced in July 2021. |
| [**Search REST API 2021-04-30-Preview**](/rest/api/searchservice/index-preview) | Security | Modifies [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source) to support managed identities under Azure Active Directory, for indexers that connect to external data sources. | Public preview, [Search REST API 2021-04-30-Preview](/rest/api/searchservice/index-preview). Announced in May 2021. |
| [**Management REST API 2021-04-01-Preview**](/rest/api/searchmanagement/) | Security | Modifies [Create or Update Service](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update) to support new [DataPlaneAuthOptions](/rest/api/searchmanagement/2021-04-01-preview/services/create-or-update#dataplaneauthoptions). | Public preview, [Management REST API](/rest/api/searchmanagement/), API version 2021-04-01-Preview. Announced in May 2021. |
| [**Reset Documents**](search-howto-run-reset-indexers.md) | Indexer | Reprocesses individually selected search documents in indexer workloads. | Use the [Reset Documents REST API](/rest/api/searchservice/preview-api/reset-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**SharePoint Indexer**](search-howto-index-sharepoint-online.md) | Indexer data source | New data source for indexer-based indexing of SharePoint content. | [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, or the Azure portal. |
| [**MySQL indexer data source**](search-howto-index-mysql.md) | Indexer data source | Index content and metadata from Azure MySQL data sources.| [Sign up](https://aka.ms/azure-cognitive-search/indexer-preview) is required so that support can be enabled for your subscription on the backend. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview, [.NET SDK 11.2.1](/dotnet/api/azure.search.documents.indexes.models.searchindexerdatasourcetype.mysql), and Azure portal. |
-| [**Azure Cosmos DB indexer: Azure Cosmos DB for MongoDB, Azure Cosmos DB for Apache Gremlin**](search-howto-index-cosmosdb.md) | Indexer data source | For Azure Cosmos DB, SQL API is generally available, but Azure CosmosDB for MongoDB and Azure CosmosDB for Apache Gremlin are in preview. | For MongoDB and Gremlin, [sign up first](https://aka.ms/azure-cognitive-search/indexer-preview) so that support can be enabled for your subscription on the backend. MongoDB data sources can be configured in the portal. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
+| [**Azure Cosmos DB indexer: Azure Cosmos DB for MongoDB, Azure Cosmos DB for Apache Gremlin**](search-howto-index-cosmosdb.md) | Indexer data source | For Azure Cosmos DB, SQL API is generally available, but Azure Cosmos DB for MongoDB and Azure Cosmos DB for Apache Gremlin are in preview. | For MongoDB and Gremlin, [sign up first](https://aka.ms/azure-cognitive-search/indexer-preview) so that support can be enabled for your subscription on the backend. MongoDB data sources can be configured in the portal. Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**Native blob soft delete**](search-howto-index-changed-deleted-blobs.md) | Indexer data source | The Azure Blob Storage indexer in Azure Cognitive Search will recognize blobs that are in a soft deleted state, and remove the corresponding search document during indexing. | Configure this data source using [Create or Update Data Source](/rest/api/searchservice/preview-api/create-or-update-data-source), API versions 2021-04-30-Preview or 2020-06-30-Preview. |
| [**Semantic search**](semantic-search-overview.md) | Relevance (scoring) | Semantic ranking of results, captions, and answers. | Configure semantic search using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). |
| [**speller**](cognitive-search-aml-skill.md) | Query | Optional spelling correction on query term inputs for simple, full, and semantic queries. | [Search Preview REST API](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview, and Search Explorer (portal). |
| [**Normalizers**](search-normalizers.md) | Query | Normalizers provide simple text pre-processing: consistent casing, accent removal, and ASCII folding, without invoking the full text analysis chain.| Use [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview or 2020-06-30-Preview.|
| [**featuresMode parameter**](/rest/api/searchservice/preview-api/search-documents#query-parameters) | Relevance (scoring) | Relevance score expansion to include details: per field similarity score, per field term frequency, and per field number of unique tokens matched. You can consume these data points in [custom scoring solutions](https://github.com/Azure-Samples/search-ranking-tutorial). | Add this query parameter using [Search Documents](/rest/api/searchservice/preview-api/search-documents), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. |
| [**Azure Machine Learning (AML) skill**](cognitive-search-aml-skill.md) | AI enrichment (skills) | A new skill type to integrate an inferencing endpoint from Azure Machine Learning. Get started with [this tutorial](cognitive-search-tutorial-aml-custom-skill.md). | Use [Search Preview REST API](/rest/api/searchservice/), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. Also available in the portal, in skillset design, assuming Cognitive Search and Azure ML services are deployed in the same subscription. |
-| [**Incremental enrichment**](cognitive-search-incremental-indexing-conceptual.md) | AI enrichment (skills) | Adds caching to an enrichment pipeline, allowing you to reuse existing output if a targeted modification, such as an update to a skillset or another object, does not change the content. Caching applies only to enriched documents produced by a skillset.| Add this configuration setting using [Create or Update Indexer Preview REST API](/rest/api/searchservice/create-indexer), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. |
+| [**Incremental enrichment**](cognitive-search-incremental-indexing-conceptual.md) | AI enrichment (skills) | Adds caching to an enrichment pipeline, allowing you to reuse existing output if a targeted modification, such as an update to a skillset or another object, doesn't change the content. Caching applies only to enriched documents produced by a skillset.| Add this configuration setting using [Create or Update Indexer Preview REST API](/rest/api/searchservice/create-indexer), API versions 2021-04-30-Preview, 2020-06-30-Preview, or 2019-05-06-Preview. |
| [**moreLikeThis**](search-more-like-this.md) | Query | Finds documents that are relevant to a specific document. This feature has been in earlier previews. | Add this query parameter in [Search Documents Preview REST API](/rest/api/searchservice/search-documents) calls, with API versions 2021-04-30-Preview, 2020-06-30-Preview, 2019-05-06-Preview, 2016-09-01-Preview, or 2017-11-11-Preview. |

## How to call a preview REST API

Azure Cognitive Search always pre-releases experimental features through the REST API first, then through prerelease versions of the .NET SDK.
-Preview features are available for testing and experimentation, with the goal of gathering feedback on feature design and implementation. For this reason, preview features can change over time, possibly in ways that break backwards compatibility. This is in contrast to features in a GA version, which are stable and unlikely to change with the exception of small backward-compatible fixes and enhancements. Also, preview features do not always make it into a GA release.
+Preview features are available for testing and experimentation, with the goal of gathering feedback on feature design and implementation. For this reason, preview features can change over time, possibly in ways that break backwards compatibility. This is in contrast to features in a GA version, which are stable and unlikely to change with the exception of small backward-compatible fixes and enhancements. Also, preview features don't always make it into a GA release.
While some preview features might be available in the portal and .NET SDK, the REST API always has preview features.
While some preview features might be available in the portal and .NET SDK, the R
+ For management operations, [**`2021-04-01-Preview`**](/rest/api/searchmanagement/management-api-versions) is the current preview version.
-Older previews are still operational but become stale over time. If your code calls `api-version=2019-05-06-Preview` or `api-version=2016-09-01-Preview` or `api-version=2017-11-11-Preview`, those calls are still valid, but those versions will not include new features and bug fixes are not guaranteed.
+Older previews are still operational but become stale over time. If your code calls `api-version=2019-05-06-Preview` or `api-version=2016-09-01-Preview` or `api-version=2017-11-11-Preview`, those calls are still valid, but those versions won't include new features and bug fixes aren't guaranteed.
The following example syntax illustrates a call to the preview API version.
search Search Get Started Vs Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-get-started-vs-code.md
- Title: 'Use Visual Studio Code with Search'-
-description: This article provides documentation for the Visual Studio Code extension for Azure Cognitive Search.
---- Previously updated : 10/31/2022---
-# Work with Azure Cognitive Search using the Visual Studio Code extension (preview - retired)
-
-> [!IMPORTANT]
-> The Visual Studio Code Extension for Azure Cognitive Search was introduced as a **public preview feature** under [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). It's now discontinued.
-
-If you have an existing installation of Visual Studio Code Extension for Azure Cognitive Search, you can continue to use it, but it will no longer be updated, and it isn't guaranteed to work with future versions of Azure Cognitive Search.
-
-This article is for current users of the extension.
-
-## Prerequisites
-
-+ [Visual Studio Code](https://code.visualstudio.com/download)
-
-+ Although the extension is no longer available in the Visual Studio Code Marketplace, the code is open sourced at [https://github.com/microsoft/vscode-azurecognitivesearch](https://github.com/microsoft/vscode-azurecognitivesearch). You can clone and modify the tool for your own use.
-
-+ [Azure Cognitive Search service](search-create-service-portal.md)
-
-## Connect to your subscription
-
-Select **Sign in to Azure...** and log into your Azure Account.
-
-You should see your subscriptions. In the following screenshot, the subscription name is "Visual Studio Enterprise" and it contains one search service named "azsearch-service".
-
-![VS Code Azure subscriptions](media/search-get-started-rest/subscriptions.png "Subscriptions in VS Code")
-
-To limit the subscriptions displayed, open the command palette (Ctrl+Shift+P or Cmd+Shift+P) and search for *Azure* or *Select Subscriptions*. There are also commands available for signing in and out of your Azure account.
-
-When you expand the search service, you'll see tree items for each Cognitive Search item: indexes, data sources, indexers, skillsets, synonym maps, and aliases.
-
-![VS Code Azure search tree](media/search-get-started-rest/search-tree.png "VS Code Azure search tree")
-
-These tree items can be expanded to show any resources you have in your search service.
-
-## 1 - Create an index
-
-To get started with Azure Cognitive Search, you first need to create a search index. This is done using the [Create Index REST API](/rest/api/searchservice/create-index).
-
-With the VS Code extension, you only need to worry about the body of the request. For this quickstart, we provide a sample index definition and corresponding documents.
-
-### Index definition
-
-The index definition below is a sample schema for fictitious hotels.
-
-The `fields` collection defines the structure of documents in the search index. Each field has a data type and a number of additional attributes that determine how the field can be used.
-
-```json
-{
- "name": "hotels-quickstart",
- "fields": [
- {
- "name": "HotelId",
- "type": "Edm.String",
- "key": true,
- "filterable": true
- },
- {
- "name": "HotelName",
- "type": "Edm.String",
- "searchable": true,
- "filterable": false,
- "sortable": true,
- "facetable": false
- },
- {
- "name": "Description",
- "type": "Edm.String",
- "searchable": true,
- "filterable": false,
- "sortable": false,
- "facetable": false,
- "analyzer": "en.lucene"
- },
- {
- "name": "Description_fr",
- "type": "Edm.String",
- "searchable": true,
- "filterable": false,
- "sortable": false,
- "facetable": false,
- "analyzer": "fr.lucene"
- },
- {
- "name": "Category",
- "type": "Edm.String",
- "searchable": true,
- "filterable": true,
- "sortable": true,
- "facetable": true
- },
- {
- "name": "Tags",
- "type": "Collection(Edm.String)",
- "searchable": true,
- "filterable": true,
- "sortable": false,
- "facetable": true
- },
- {
- "name": "ParkingIncluded",
- "type": "Edm.Boolean",
- "filterable": true,
- "sortable": true,
- "facetable": true
- },
- {
- "name": "LastRenovationDate",
- "type": "Edm.DateTimeOffset",
- "filterable": true,
- "sortable": true,
- "facetable": true
- },
- {
- "name": "Rating",
- "type": "Edm.Double",
- "filterable": true,
- "sortable": true,
- "facetable": true
- },
- {
- "name": "Address",
- "type": "Edm.ComplexType",
- "fields": [
- {
- "name": "StreetAddress",
- "type": "Edm.String",
- "filterable": false,
- "sortable": false,
- "facetable": false,
- "searchable": true
- },
- {
- "name": "City",
- "type": "Edm.String",
- "searchable": true,
- "filterable": true,
- "sortable": true,
- "facetable": true
- },
- {
- "name": "StateProvince",
- "type": "Edm.String",
- "searchable": true,
- "filterable": true,
- "sortable": true,
- "facetable": true
- },
- {
- "name": "PostalCode",
- "type": "Edm.String",
- "searchable": true,
- "filterable": true,
- "sortable": true,
- "facetable": true
- },
- {
- "name": "Country",
- "type": "Edm.String",
- "searchable": true,
- "filterable": true,
- "sortable": true,
- "facetable": true
- }
- ]
- }
- ],
- "suggesters": [
- {
- "name": "sg",
- "searchMode": "analyzingInfixMatching",
- "sourceFields": [
- "HotelName"
- ]
- }
- ]
-}
-```
-
-To create a new index, right-click on **Indexes** and then select **Create new index**. An editor with a name similar to `indexes-new-28c972f661.azsindex` will pop up.
-
-Paste the index definition from above into the window. Save the file and, when prompted to update the index, select **Upload**. This step creates the index and adds it to the tree view on the left.
-
-![Gif of creating an index](media/search-get-started-rest/create-index.gif)
-
-If there's a problem with your index definition, you should see an error message similar to the one below.
-
-![Create index error message](media/search-get-started-rest/create-index-error.png)
-
-If an error occurs, fix the issue and resave the file.
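-
-Behind the scenes, the extension submits this definition as a Create Index REST request. Purely as an illustration of what the extension does for you (not a quickstart step; the service name, admin key, and local file name below are placeholders), a minimal Python sketch:
-
-```python
-# Illustrative sketch of the raw Create Index REST call that the extension wraps.
-# <your-service> and <admin-key> are placeholders for your own values.
-import json
-import requests
-
-endpoint = "https://<your-service>.search.windows.net"
-headers = {"Content-Type": "application/json", "api-key": "<admin-key>"}
-
-# Assumes the index definition above was saved locally as hotels-quickstart.json.
-with open("hotels-quickstart.json") as f:
-    index_definition = json.load(f)
-
-response = requests.post(f"{endpoint}/indexes?api-version=2020-06-30",
-                         headers=headers, json=index_definition)
-response.raise_for_status()  # expect 201 Created on success
-print(response.json()["name"])
-```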
-
-## 2 - Load documents
-
-In the REST API, creating the index and populating the index are separate steps. In Azure Cognitive Search, the index contains all searchable data. In this quickstart, the data is provided as JSON documents. The [Add, Update, or Delete Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) is used for this task.
-
-To add new documents to the index:
-
-1. Expand the `hotels-quickstart` index you created. Right-click on **Documents** and select **Create new document**.
-
- ![Create a document](media/search-get-started-rest/create-document.png)
-
-2. You should see a JSON editor that has inferred the schema of your index.
-
- ![Create a document json](media/search-get-started-rest/create-document-2.png)
-
-3. Paste in the JSON below and then save the file. A prompt asks you to confirm your changes. Select **Upload** to save the changes.
-
- ```json
- {
- "HotelId": "1",
- "HotelName": "Secret Point Motel",
- "Description": "The hotel is ideally located on the main commercial artery of the city in the heart of New York. A few minutes away is Time's Square and the historic centre of the city, as well as other places of interest that make New York one of America's most attractive and cosmopolitan cities.",
- "Category": "Boutique",
- "Tags": [ "pool", "air conditioning", "concierge" ],
- "ParkingIncluded": false,
- "LastRenovationDate": "1970-01-18T00:00:00Z",
- "Rating": 3.60,
- "Address": {
- "StreetAddress": "677 5th Ave",
- "City": "New York",
- "StateProvince": "NY",
- "PostalCode": "10022",
- "Country": "USA"
- }
- }
- ```
-
-4. Repeat this process for the three remaining documents:
-
- Document 2:
- ```json
- {
- "HotelId": "2",
- "HotelName": "Twin Dome Motel",
- "Description": "The hotel is situated in a nineteenth century plaza, which has been expanded and renovated to the highest architectural standards to create a modern, functional and first-class hotel in which art and unique historical elements coexist with the most modern comforts.",
- "Category": "Boutique",
- "Tags": [ "pool", "free wifi", "concierge" ],
- "ParkingIncluded": false,
- "LastRenovationDate": "1979-02-18T00:00:00Z",
- "Rating": 3.60,
- "Address": {
- "StreetAddress": "140 University Town Center Dr",
- "City": "Sarasota",
- "StateProvince": "FL",
- "PostalCode": "34243",
- "Country": "USA"
- }
- }
- ```
-
- Document 3:
- ```json
- {
- "HotelId": "3",
- "HotelName": "Triple Landscape Hotel",
- "Description": "The Hotel stands out for its gastronomic excellence under the management of William Dough, who advises on and oversees all of the HotelΓÇÖs restaurant services.",
- "Category": "Resort and Spa",
- "Tags": [ "air conditioning", "bar", "continental breakfast" ],
- "ParkingIncluded": true,
- "LastRenovationDate": "2015-09-20T00:00:00Z",
- "Rating": 4.80,
- "Address": {
- "StreetAddress": "3393 Peachtree Rd",
- "City": "Atlanta",
- "StateProvince": "GA",
- "PostalCode": "30326",
- "Country": "USA"
- }
- }
- ```
-
- Document 4:
- ```json
- {
- "HotelId": "4",
- "HotelName": "Sublime Cliff Hotel",
- "Description": "Sublime Cliff Hotel is located in the heart of the historic center of Sublime in an extremely vibrant and lively area within short walking distance to the sites and landmarks of the city and is surrounded by the extraordinary beauty of churches, buildings, shops and monuments. Sublime Cliff is part of a lovingly restored 1800 palace.",
- "Category": "Boutique",
- "Tags": [ "concierge", "view", "24-hour front desk service" ],
- "ParkingIncluded": true,
- "LastRenovationDate": "1960-02-06T00:00:00Z",
- "Rating": 4.60,
- "Address": {
- "StreetAddress": "7400 San Pedro Ave",
- "City": "San Antonio",
- "StateProvince": "TX",
- "PostalCode": "78216",
- "Country": "USA"
- }
- }
- ```
-
-At this point, you should see all four documents available in the documents section.
-
-![status after uploading all documents](media/search-get-started-rest/create-document-finish.png)
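-
-If you'd rather script this step than use the editor, the same upload maps to a single Add Documents REST call. A minimal sketch, assuming the four documents above are saved locally as a JSON array in `hotel-docs.json` (service name and key are placeholders):
-
-```python
-# Illustrative sketch of the Add Documents REST call behind this step.
-import json
-import requests
-
-endpoint = "https://<your-service>.search.windows.net"   # placeholder
-headers = {"Content-Type": "application/json", "api-key": "<admin-key>"}  # placeholder
-
-with open("hotel-docs.json") as f:
-    docs = json.load(f)
-
-# Each document in the batch carries an action; "upload" adds or replaces it.
-payload = {"value": [{"@search.action": "upload", **doc} for doc in docs]}
-
-response = requests.post(
-    f"{endpoint}/indexes/hotels-quickstart/docs/index?api-version=2020-06-30",
-    headers=headers, json=payload)
-response.raise_for_status()
-print([(item["key"], item["status"]) for item in response.json()["value"]])
-```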
-
-## 3 - Search an index
-
-Now that the index contains content, you can issue queries using [Search Documents REST API](/rest/api/searchservice/search-documents):
-
-1. Right-click the index you want to search and select **Search**. This step opens an editor with a name similar to `sandbox-b946dcda48.azs`.
-
- ![search view of extension](media/search-get-started-rest/search-vscode.png)
-
-2. A simple query is autopopulated. Press **Ctrl+Alt+R** or **Cmd+Alt+R** to submit the query. You'll see the results pop up in a window to the left.
-
- ![search results in extension](media/search-get-started-rest/search-results.png)
-
-### Example queries
-
-Try a few other query examples to get a feel for the syntax. There are four additional queries below for you to try. You can add multiple queries to the same editor. When you press **Ctrl+Alt+R** or **Cmd+Alt+R**, the line your cursor is on determines which query is submitted.
-
-![queries and results side-by-side](media/search-get-started-rest/all-searches.png)
-
-In the first query, we search for `boutique` and `select` only certain fields. It's a best practice to only `select` the fields you need because pulling back unnecessary data can add latency to your queries. The query also sets `$count=true` to return the total number of results with the search results.
-
-```
-// Query example 1 - Search `boutique` with select and return count
-search=boutique&$count=true&$select=HotelId,HotelName,Rating,Category
-```
-
-In the next query, we specify the search term `wifi` and also include a filter to only return results where the state is equal to `'FL'`. Results are also ordered by the Hotel's `Rating`.
-
-```
-// Query example 2 - Search with filter, orderBy, select, and count
-search=wifi&$filter=Address/StateProvince eq 'FL'&$select=HotelId,HotelName,Rating,Address/StateProvince&$orderby=Rating desc
-```
-
-Next, the search is limited to a single searchable field using the `searchFields` parameter. This is a great option to make your query more efficient if you know you're only interested in matches in certain fields.
-
-```
-// Query example 3 - Limit searchFields
-search=sublime cliff&$select=HotelId,HotelName,Rating&searchFields=HotelName
-```
-
-Another common option to include in a query is `facets`. Facets allow you to build out filters on your app to make it easy for users to know what values they can filter down to.
-
-```
-// Query example 4 - Search everything and return facets for Category
-search=*&$select=HotelId,HotelName,Rating&searchFields=HotelName&facet=Category
-```
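-
-These queries map directly onto the Search Documents REST API, so you can also issue them outside the extension. A minimal sketch of query example 2 in Python (service name and key are placeholders):
-
-```python
-# Illustrative sketch: query example 2 issued directly against the
-# Search Documents REST API.
-import requests
-
-endpoint = "https://<your-service>.search.windows.net"
-params = {
-    "api-version": "2020-06-30",
-    "search": "wifi",
-    "$filter": "Address/StateProvince eq 'FL'",
-    "$select": "HotelId,HotelName,Rating,Address/StateProvince",
-    "$orderby": "Rating desc",
-}
-response = requests.get(f"{endpoint}/indexes/hotels-quickstart/docs",
-                        headers={"api-key": "<query-or-admin-key>"}, params=params)
-response.raise_for_status()
-for hit in response.json()["value"]:
-    print(hit["HotelName"], hit["Rating"])
-```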
-
-## Open index in the portal
-
-If you'd like to view your search service in the portal, right-click the name of the search service and select **Open in Portal**.
-
-## Clean up resources
-
-When you're working in your own subscription, it's a good idea at the end of a project to identify whether you still need the resources you created. Resources left running can cost you money. You can delete resources individually or delete the resource group to delete the entire set of resources.
-
-You can find and manage resources in the portal, using the **All resources** or **Resource groups** link in the left-navigation pane.
-
-If you are using a free service, remember that you are limited to three indexes, indexers, and data sources. You can delete individual items in the portal to stay under the limit.
-
-## Next steps
-
-Now that you know how to perform core tasks, you can move forward with additional REST API calls for more advanced features, such as indexers or [setting up an enrichment pipeline](cognitive-search-tutorial-blob.md) that adds content transformations to indexing. For your next step, we recommend the following link:
-
-> [!div class="nextstepaction"]
-> [Tutorial: Use REST and AI to generate searchable content from Azure blobs](cognitive-search-tutorial-blob.md)
search Search Howto Connecting Azure Sql Database To Azure Search Using Indexers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-connecting-azure-sql-database-to-azure-search-using-indexers.md
This article also provides:
+ Read permissions. Azure Cognitive Search supports SQL Server authentication, where the user name and password are provided on the connection string. Alternatively, you can [set up a managed identity and use Azure roles](search-howto-managed-identities-sql.md).
-To work through the examples in this article, you'll need a REST client, such as [Postman](search-get-started-rest.md) or [Visual Studio Code with the extension for Azure Cognitive Search](search-get-started-vs-code.md).
+To work through the examples in this article, you'll need a REST client, such as [Postman](search-get-started-rest.md).
Other approaches for creating an Azure SQL indexer include Azure SDKs or [Import data wizard](search-get-started-portal.md) in the Azure portal. If you're using Azure portal, make sure that access to all public networks is enabled in the Azure SQL firewall and that the client has access via an inbound rule.
search Search Howto Index Cosmosdb Gremlin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-gremlin.md
Previously updated : 09/08/2022 Last updated : 01/18/2023
-# Index data in Azure Cosmos DB for Apache Gremlin
+# Import data from Azure Cosmos DB for Apache Gremlin for queries in Azure Cognitive Search
> [!IMPORTANT]
> The Azure Cosmos DB for Apache Gremlin indexer is currently in public preview under [Supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content via Azure Cosmos DB for Apache Gremlin
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from [Azure Cosmos DB for Apache Gremlin](/azure/cosmos-db/gremlin/introduction) and makes it searchable in Azure Cognitive Search.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to [Azure Cosmos DB for Apache Gremlin](../cosmos-db/choose-api.md#gremlin-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
search Search Howto Index Cosmosdb Mongodb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb-mongodb.md
Previously updated : 10/20/2022 Last updated : 01/18/2023
-# Indexing with Azure Cosmos DB for MongoDB
+# Import data from Azure Cosmos DB for MongoDB for queries in Azure Cognitive Search
> [!IMPORTANT]
> MongoDB API support is currently in public preview under [supplemental Terms of Use](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Currently, there is no SDK support.
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content using the MongoDB API provided by Azure Cosmos DB for MongoDB.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from [Azure Cosmos DB for MongoDB](/azure/cosmos-db/mongodb/introduction) and makes it searchable in Azure Cognitive Search.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to [Azure Cosmos DB for MongoDB](../cosmos-db/choose-api.md#api-for-mongodb). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
search Search Howto Index Cosmosdb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-index-cosmosdb.md
Title: Azure Cosmos DB SQL indexer
+ Title: Azure Cosmos DB NoSQL indexer
-description: Set up a search indexer to index data stored in Azure Cosmos DB for full text search in Azure Cognitive Search. This article explains how index data using the SQL API protocol.
+description: Set up a search indexer to index data stored in Azure Cosmos DB for full text search in Azure Cognitive Search. This article explains how to index data using the NoSQL API protocol.
Previously updated : 07/12/2022 Last updated : 01/18/2023
-# Index data from Azure Cosmos DB using the SQL API
+# Import data from Azure Cosmos DB for NoSQL for queries in Azure Cognitive Search
-In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content using the SQL API from Azure Cosmos DB.
+In this article, learn how to configure an [**indexer**](search-indexer-overview.md) that imports content from [Azure Cosmos DB for NoSQL](/azure/cosmos-db/nosql/) and makes it searchable in Azure Cognitive Search.
-This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to [Azure Cosmos DB for NoSQL](../cosmos-db/choose-api.md#coresql-api). It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
+
+This article supplements [**Create an indexer**](search-howto-create-indexers.md) with information that's specific to Cosmos DB. It uses the REST APIs to demonstrate a three-part workflow common to all indexers: create a data source, create an index, create an indexer. Data extraction occurs when you submit the Create Indexer request.
Because terminology can be confusing, it's worth noting that [Azure Cosmos DB indexing](../cosmos-db/index-overview.md) and [Cognitive Search indexing](search-what-is-an-index.md) are different operations. Indexing in Cognitive Search creates and loads a search index on your search service.
search Search Howto Large Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-large-index.md
Title: Index large data set using built-in indexers
+ Title: Index large data sets for full text search
-description: Strategies for large data indexing or computationally intensive indexing through batch mode, resourcing, and techniques for scheduled, parallel, and distributed indexing.
+description: Strategies for large data indexing or computationally intensive indexing through batch mode, resourcing, and scheduled, parallel, and distributed indexing.
Previously updated : 12/10/2022 Last updated : 01/17/2023 # Index large data sets in Azure Cognitive Search
-Azure Cognitive Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index. You can *push* your data into the index programmatically, or point an [Azure Cognitive Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
+If your search solution requirements include indexing big data or complex data, this article describes the strategies for accommodating long-running processes on Azure Cognitive Search.
-As data volumes grow or processing needs change, you might find that simple indexing strategies are no longer practical. For Azure Cognitive Search, there are several approaches for accommodating larger data sets, ranging from how you structure a data upload request, to using a source-specific indexer for scheduled and distributed workloads.
-
-The same techniques also apply to long-running processes. In particular, the steps outlined in [parallel indexing](#run-indexers-in-parallel) are helpful for computationally intensive indexing, such as image analysis or natural language processing in an [AI enrichment pipeline](cognitive-search-concept-intro.md).
+This article assumes familiarity with the [two basic approaches for importing data](search-what-is-data-import.md): pushing data into an index, or pulling in data from a supported data source using a [search indexer](search-indexer-overview.md). The strategy you choose will be determined by the indexing approach you're already using. If your scenario involves computationally intensive [AI enrichment](cognitive-search-concept-intro.md), then your strategy must include indexers, given the skillset dependency on indexers.
-The following sections explain techniques for indexing large amounts of data for both push and pull approaches. You should also review [Tips for improving performance](search-performance-tips.md) for more best practices.
+This article complements [Tips for better performance](search-performance-tips.md), which offers best practices on index and query design. A well-designed index that includes only the fields and attributes you need is an important prerequisite for large-scale indexing.
-For C# tutorials, code samples, and alternative strategies, see:
+> [!NOTE]
+> The strategies described in this article assume a single large data source. If your solution requires indexing from multiple data sources, see [Index multiple data sources in Azure Cognitive Search](https://github.com/Azure-Samples/azure-cognitive-search-multiple-containers-indexer/blob/main/README.md) for a recommended approach.
-+ [Tutorial: Optimize indexing speeds](tutorial-optimize-indexing-push-api.md)
-+ [Tutorial: Index at scale using SynapseML and Apache Spark](search-synapseml-cognitive-services.md)
+## Index large data using the push APIs
-## Indexing large datasets with the "push" API
+"Push" APIs, such as [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments), are the most prevalent form of indexing in Cognitive Search. For solutions that use a push API, the strategy for long-running indexing will have one or both of the following components:
-When pushing large data volumes into an index using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocuments), batching documents and managing threads are two techniques that improve indexing speed.
++ Batching documents
++ Managing threads

### Batch multiple documents per request
-One of the simplest mechanisms for indexing a larger data set is to submit multiple documents or records in a single request. As long as the entire payload is under 16 MB, a request can handle up to 1000 documents in a bulk upload operation. These limits apply whether you're using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method](/dotnet/api/azure.search.documents.searchclient.indexdocuments) in the .NET SDK. For either API, you would package 1000 documents in the body of each request.
+A simple mechanism for indexing a large quantity of data is to submit multiple documents or records in a single request. As long as the entire payload is under 16 MB, a request can handle up to 1000 documents in a bulk upload operation. These limits apply whether you're using the [Add Documents REST API](/rest/api/searchservice/addupdate-or-delete-documents) or the [IndexDocuments method](/dotnet/api/azure.search.documents.searchclient.indexdocuments) in the .NET SDK. For either API, you would package 1000 documents in the body of each request.
-Using batches to index documents will significantly improve indexing performance. Determining the optimal batch size for your data is a key component of optimizing indexing speeds. The two primary factors influencing the optimal batch size are:
+Batching documents will significantly shorten the amount of time it takes to work through a large data volume. Determining the optimal batch size for your data is a key component of optimizing indexing speeds. The two primary factors influencing the optimal batch size are:
+ The schema of your index
+ The size of your data
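
As a rough sketch only (this article's code references are REST and the .NET SDK; the example below swaps in the `azure-search-documents` Python package, and the endpoint, key, index name, and documents are placeholders), batching might look like this:

```python
# Sketch only: batched uploads with the azure-search-documents Python package.
from azure.core.credentials import AzureKeyCredential
from azure.search.documents import SearchClient

client = SearchClient(endpoint="https://<your-service>.search.windows.net",
                      index_name="<your-index>",
                      credential=AzureKeyCredential("<admin-key>"))

BATCH_SIZE = 1000  # service maximum per request; tune downward for large documents
docs = [...]       # placeholder: your documents as a list of dicts

for start in range(0, len(docs), BATCH_SIZE):
    results = client.upload_documents(documents=docs[start:start + BATCH_SIZE])
    failed = [r.key for r in results if not r.succeeded]
    if failed:
        print(f"batch at {start}: {len(failed)} documents need a retry")
```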
Indexers have built-in thread management, but when you're using the push APIs, y
The Azure .NET SDK automatically retries 503s and other failed requests, but you'll need to implement your own logic to retry 207s. Open-source tools such as [Polly](https://github.com/App-vNext/Polly) can also be used to implement a retry strategy.
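One possible shape for that retry logic, sketched in Python under the same assumptions as the previous example (the `HotelId` key field and the backoff delays are illustrative, not prescribed by this article):

```python
# Sketch of a retry loop for partial (207-style) failures, with exponential backoff.
# Assumes `client` from the previous sketch; key_field is an assumption.
import time

def upload_with_retries(client, batch, key_field="HotelId", max_attempts=5):
    pending = batch
    for attempt in range(max_attempts):
        results = client.upload_documents(documents=pending)
        failed_keys = {r.key for r in results if not r.succeeded}
        if not failed_keys:
            return
        # Re-submit only the documents that failed, after backing off.
        pending = [d for d in pending if d[key_field] in failed_keys]
        time.sleep(2 ** attempt)
    raise RuntimeError(f"{len(pending)} documents still failing after {max_attempts} attempts")
```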
-## Indexing large datasets with indexers and the "pull" APIs
+## Index with indexers and the "pull" APIs
-[Indexers](search-indexer-overview.md) have built-in capabilities that are particularly useful for accommodating larger data sets:
+[Indexers](search-indexer-overview.md) have several capabilities that are useful for long-running processes:
-+ Indexer schedules allow you to parcel out indexing at regular intervals so that you can spread it out over time.
++ Batching documents
++ Parallel indexing over partitioned data
++ Scheduling and change detection for indexing just new and changed documents over time
-+ Scheduled indexing can resume at the last known stopping point. If a data source isn't fully scanned within the processing window, the indexer picks up wherever it left off at the last job.
+Indexer schedules can resume processing at the last known stopping point. If data isn't fully indexed within the processing window, the indexer picks up wherever it left off on the next run, assuming you're using a data source that provides change detection.
-+ Partitioning data into smaller individual data sources enables parallel processing. You can break up source data into smaller components, such as into multiple containers in Azure Blob Storage, create a [data source](/rest/api/searchservice/create-data-source) for each partition, and then run multiple indexers in parallel.
+Partitioning data into smaller individual data sources enables parallel processing. You can break up source data, such as into multiple containers in Azure Blob Storage, create a [data source](/rest/api/searchservice/create-data-source) for each partition, and then [run the indexers in parallel](search-howto-run-reset-indexers.md), subject to the number of search units of your search service.
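
For illustration only, the partition-per-indexer pattern might be scripted like the following sketch against the REST API (the service, storage connection string, container names, and index name are all placeholders):

```python
# Sketch only: one data source and one indexer per partitioned blob container,
# all targeting the same index.
import requests

endpoint = "https://<your-service>.search.windows.net"
headers = {"Content-Type": "application/json", "api-key": "<admin-key>"}
api = "api-version=2020-06-30"

for i in range(4):  # e.g., blob containers hotels-part0 .. hotels-part3
    ds = {"name": f"hotels-part{i}", "type": "azureblob",
          "credentials": {"connectionString": "<storage-connection-string>"},
          "container": {"name": f"hotels-part{i}"}}
    requests.put(f"{endpoint}/datasources/hotels-part{i}?{api}",
                 headers=headers, json=ds).raise_for_status()

    indexer = {"name": f"hotels-indexer-{i}",
               "dataSourceName": f"hotels-part{i}",
               "targetIndexName": "<your-index>"}
    requests.put(f"{endpoint}/indexers/hotels-indexer-{i}?{api}",
                 headers=headers, json=indexer).raise_for_status()
```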
### Check indexer batch size
When there are no longer any new or updated documents in the data source, indexe
For more information about setting schedules, see [Create Indexer REST API](/rest/api/searchservice/Create-Indexer) or [How to schedule indexers for Azure Cognitive Search](search-howto-schedule-indexers.md).

> [!NOTE]
-> Some indexers that run on an older runtime architecture have a 24-hour rather than 2-hour maximum processing window. The 2-hour limit is for newer content processors that run in an [internally managed multi-tenant environment](search-indexer-securing-resources.md#indexer-execution-environment). Whenever possible, Azure Cognitive Search tries to offload indexer and skillset processing to the multi-tenant environment. If the indexer can't be migrated, it will run in the private environment and it can run for as long as 24 hours. If you're scheduling an indexer that fits these characteristics, assume a 24 hour processing window.
+> Some indexers that run on an older runtime architecture have a 24-hour rather than 2-hour maximum processing window. The 2-hour limit is for newer content processors that run in an [internally managed multi-tenant environment](search-indexer-securing-resources.md#indexer-execution-environment). Whenever possible, Azure Cognitive Search tries to offload indexer and skillset processing to the multi-tenant environment. If the indexer can't be migrated, it will run in the private environment and it can run for as long as 24 hours. If you're scheduling an indexer that exhibits these characteristics, assume a 24-hour processing window.
<a name="parallel-indexing"></a>
If your data source is an [Azure Blob Storage container](../storage/blobs/storag
1. Specify the same target search index in each indexer.
-1. Schedule the indexers.
+1. Schedule the indexers.
1. Review indexer status and execution history for confirmation.
Second, Azure Cognitive Search doesn't lock the index for updates. Concurrent wr
Although multiple indexer-data-source sets can target the same index, be careful of indexer runs that can overwrite existing values in the index. If a second indexer-data-source targets the same documents and fields, any values from the first run will be overwritten. Field values are replaced in full; an indexer can't merge values from multiple runs into the same field.
+## Index big data on Spark
+
+If you have a big data architecture and your data is on a Spark cluster, we recommend [SynapseML for loading and indexing data](search-synapseml-cognitive-services.md). The tutorial includes steps for calling Cognitive Services for AI enrichment, but you can also use the AzureSearchWriter API for text indexing.
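+
+As a loose sketch only (modeled on the linked tutorial; the `AzureSearchWriter` options shown are assumptions to verify against current SynapseML documentation, and all names and keys are placeholders):
+
+```python
+# Rough sketch: writing a Spark DataFrame to a search index with SynapseML.
+from pyspark.sql.functions import lit
+from synapse.ml.cognitive import AzureSearchWriter
+
+df_with_action = df.withColumn("searchAction", lit("upload"))  # df: your DataFrame
+
+AzureSearchWriter.writeToAzureSearch(
+    df_with_action,
+    subscriptionKey="<search-admin-key>",  # placeholder
+    actionCol="searchAction",
+    serviceName="<your-service>",          # placeholder
+    indexName="<your-index>",              # placeholder
+    keyCol="id",                           # assumed key column name
+)
+```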
+
## See also

+ [Tips for improving performance](search-performance-tips.md)
+ [Performance analysis](search-performance-analysis.md)
+ [Indexer overview](search-indexer-overview.md)
-+ [Monitor indexer status](search-howto-monitor-indexers.md)
++ [Monitor indexer status](search-howto-monitor-indexers.md)
+
+<!-- Azure Cognitive Search supports [two basic approaches](search-what-is-data-import.md) for importing data into a search index. You can *push* your data into the index programmatically, or point an [Azure Cognitive Search indexer](search-indexer-overview.md) at a supported data source to *pull* in the data.
+
+As data volumes grow or processing needs change, you might find that simple indexing strategies are no longer practical. For Azure Cognitive Search, there are several approaches for accommodating larger data sets, ranging from how you structure a data upload request, to using a source-specific indexer for scheduled and distributed workloads.
+
+The same techniques also apply to long-running processes. In particular, the steps outlined in [parallel indexing](#run-indexers-in-parallel) are helpful for computationally intensive indexing, such as image analysis or natural language processing in an [AI enrichment pipeline](cognitive-search-concept-intro.md).
+
+The following sections explain techniques for indexing large amounts of data for both push and pull approaches. You should also review [Tips for improving performance](search-performance-tips.md) for more best practices.
+
+For C# tutorials, code samples, and alternative strategies, see:
+
++ [Tutorial: Optimize indexing workloads](tutorial-optimize-indexing-push-api.md)
++ [Tutorial: Index at scale using SynapseML and Apache Spark](search-synapseml-cognitive-services.md) -->
search Search Language Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-language-support.md
- Previously updated : 05/03/2022+ Last updated : 01/18/2023 # Create an index for multiple languages in Azure Cognitive Search
-A multilingual search application supports searching over and retrieving results in the user's own language. In Azure Cognitive Search, one way to meet the language requirements of a multilingual app is to create dedicated fields for storing strings in a specific language, and then constrain full text search to just those fields at query time.
+A multilingual search application is one that provides a search experience in the user's own language. [Language support](index-add-language-analyzers.md#supported-language-analyzers) is enabled through a language analyzer assigned to a string field. Cognitive Search supports Microsoft and Lucene analyzers. The language analyzer determines the linguistic rules by which content is tokenized. By default, the search engine uses Standard Lucene, which is language agnostic. If testing shows that the default analyzer is insufficient, replace it with a language analyzer.
-+ On field definitions, [specify a language analyzer](index-add-language-analyzers.md) that invokes the linguistic rules of the target language.
+In Azure Cognitive Search, the two patterns for supporting a multilingual audience are:
-+ On the query request, set the `searchFields` parameter to scope full text search to specific fields, and then use `select` to return just those fields that have compatible content.
++ Create language-specific indexes where all of the alphanumeric content is in the same language, and all searchable string fields are attributed to use the same [language analyzer](index-add-language-analyzers.md).
-The success of this technique hinges on the integrity of field content. By itself, Azure Cognitive Search doesn't translate strings or perform language detection as part of query execution. It's up to you to make sure that fields contain the strings you expect.
++ Create a blended index with language-specific versions of each field (for example, description_en, description_fr, description_ko), and then constrain full text search to just those fields at query time. This approach is useful for scenarios where language variants are only needed on a few fields, like a description.
-## Need text translation?
+This article focuses on best practices for defining and querying language-specific fields in a blended index. The steps you'll implement include:
+
+> [!div class="checklist"]
+> * Define a string field for each language variant.
+> * Set a language analyzer on each field.
+> * On the query request, set the `searchFields` parameter to specific fields, and then use `select` to return just those fields that have compatible content.
+
+## Prerequisites
+
+Language analysis applies to fields of type `Edm.String` that are `searchable`, and that contain localized text. If you also need text translation, review the next section to see if AI enrichment fits your scenario.
+
+Non-string fields and non-searchable string fields don't undergo lexical analysis and aren't tokenized. Instead, they are stored and returned verbatim.
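+
+For orientation, a blended index from the second pattern might define its field collection like the following sketch (a Python list mirroring the REST index schema; the field names and analyzers are illustrative, not prescribed by this article):
+
+```python
+# Sketch of language-variant fields in a blended index.
+fields = [
+    {"name": "HotelId", "type": "Edm.String", "key": True},
+    {"name": "description_en", "type": "Edm.String",
+     "searchable": True, "analyzer": "en.lucene"},
+    {"name": "description_fr", "type": "Edm.String",
+     "searchable": True, "analyzer": "fr.lucene"},
+    {"name": "description_ko", "type": "Edm.String",
+     "searchable": True, "analyzer": "ko.lucene"},
+]
+```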
+
+## Add text translation
This article assumes you have translated strings in place. If that's not the case, you can attach Cognitive Services to an [enrichment pipeline](cognitive-search-concept-intro.md), invoking text translation during data ingestion. Text translation takes a dependency on the indexer feature and Cognitive Services, but all setup is done within Azure Cognitive Search.
private static void RunQueries(SearchClient srchclient)
## Boost language-specific fields
-Sometimes the language of the agent issuing a query isn't known, in which case the query can be issued against all fields simultaneously. IA preference for results in a certain language can be defined using [scoring profiles](index-add-scoring-profiles.md). In the example below, matches found in the description in English will be scored higher relative to matches in other languages:
+Sometimes the language of the agent issuing a query isn't known, in which case the query can be issued against all fields simultaneously. A preference for results in a certain language can be defined using [scoring profiles](index-add-scoring-profiles.md). In the example below, matches found in the description in French will be scored higher relative to matches in other languages:
```JSON
"scoringProfiles": [
  {
- "name": "englishFirst",
+ "name": "frenchFirst",
"text": {
- "weights": { "description": 2 }
+ "weights": { "description_fr": 2 }
    }
  }
]
```
You would then include the scoring profile in the search request:
```JSON
POST /indexes/hotels/docs/search?api-version=2020-06-30
{
  "search": "pets allowed",
- "searchFields": "Tags, Description",
- "select": "HotelName, Tags, Description",
- "scoringProfile": "englishFirst",
+ "searchFields": "Tags, Description_fr",
+ "select": "HotelName, Tags, Description_fr",
+ "scoringProfile": "frenchFirst",
"count": "true" } ```
search Search Query Lucene Examples https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-query-lucene-examples.md
POST /indexes/hotel-samples-index/docs/search?api-version=2020-06-30
}
```
-Response for this query should look similar to the following example, filtered on "Resort and Spa", returning hotels that include "hotel" or "motel" in the name.
+The response for this query should look similar to the following example, filtered on "Resort and Spa", returning hotels that include "hotel" in the name while excluding results that include "motel" in the name.
```json
"@odata.count": 4,
Additional syntax reference, query architecture, and examples can be found in th
+ [How full text search works in Azure Cognitive Search](search-lucene-query-architecture.md)
+ [Simple query syntax](query-simple-syntax.md)
+ [Full Lucene query syntax](query-lucene-syntax.md)
-+ [Filter syntax](search-query-odata-filter.md)
++ [Filter syntax](search-query-odata-filter.md)
search Search Semi Structured Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-semi-structured-data.md
Previously updated : 03/16/2022 Last updated : 01/18/2023 #Customer intent: As a developer, I want an introduction the indexing Azure blob data for Azure Cognitive Search.
search Troubleshoot Shared Private Link Resources https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/troubleshoot-shared-private-link-resources.md
Title: Troubleshooting shared private link resources
+ Title: Troubleshoot shared private link resources
-description: Troubleshooting guide for common problems when managing shared private link resources.
+description: Troubleshooting guide for common problems when managing shared private link resources in Azure Cognitive Search.
Previously updated : 02/26/2022 Last updated : 01/18/2023
-# Troubleshooting common issues with Shared Private Links
+# Troubleshoot issues with Shared Private Links in Azure Cognitive Search
A shared private link allows Azure Cognitive Search to make secure outbound connections over a private endpoint when accessing customer resources in a virtual network. This article can help you resolve errors that might occur.
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
na Previously updated : 12/16/2022 Last updated : 01/20/2023 # Data encryption models
The Azure services that support each encryption model:
| Azure Synapse Analytics | Yes | Yes, RSA 3072-bit, including Managed HSM | - |
| SQL Server Stretch Database | Yes | Yes, RSA 3072-bit | Yes |
| Table Storage | Yes | Yes | Yes |
-| Azure Cosmos DB | Yes ([learn more](../../cosmos-db/database-security.md?tabs=sql-api)) | Yes ([learn more](../../cosmos-db/how-to-setup-cmk.md)) | - |
+| Azure Cosmos DB | Yes ([learn more](../../cosmos-db/database-security.md?tabs=sql-api)) | Yes, including Managed HSM ([learn more](../../cosmos-db/how-to-setup-cmk.md) and [learn more](../../cosmos-db/how-to-setup-customer-managed-keys-mhsm.md)) | - |
| Azure Databricks | Yes | Yes | - |
| Azure Database Migration Service | Yes | N/A\* | - |
| **Identity** | | | |
security Log Audit https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/log-audit.md
The following table lists the most important types of logs available in Azure:
| -- | -- | -- | -- |
|[Activity logs](../../azure-monitor/essentials/platform-logs-overview.md)|Control-plane events on Azure Resource Manager resources| Provides insight into the operations that were performed on resources in your subscription.| REST API, [Azure Monitor](../../azure-monitor/essentials/platform-logs-overview.md)|
|[Azure Resource logs](../../azure-monitor/essentials/platform-logs-overview.md)|Frequent data about the operation of Azure Resource Manager resources in subscription| Provides insight into operations that your resource itself performed.| Azure Monitor|
-|[Azure Active Directory reporting](../../active-directory/reports-monitoring/overview-reports.md)|Logs and reports | Reports user sign-in activities and system activity information about users and group management.|[Graph API](../../active-directory/develop/microsoft-graph-intro.md)|
+|[Azure Active Directory reporting](../../active-directory/reports-monitoring/overview-reports.md)|Logs and reports | Reports user sign-in activities and system activity information about users and group management.|[Microsoft Graph](/graph/overview)|
|[Virtual machines and cloud services](../../azure-monitor/vm/monitor-virtual-machine.md)|Windows Event Log service and Linux Syslog| Captures system data and logging data on the virtual machines and transfers that data into a storage account of your choice.| Windows (using [Azure Diagnostics](../../azure-monitor/agents/diagnostics-extension-overview.md) storage) and Linux in Azure Monitor|
|[Azure Storage Analytics](/rest/api/storageservices/fileservices/storage-analytics)|Storage logging, provides metrics data for a storage account|Provides insight into trace requests, analyzes usage trends, and diagnoses issues with your storage account.| REST API or the [client library](/dotnet/api/overview/azure/storage)|
|[Network security group (NSG) flow logs](../../network-watcher/network-watcher-nsg-flow-logging-overview.md)|JSON format, shows outbound and inbound flows on a per-rule basis|Displays information about ingress and egress IP traffic through a Network Security Group.|[Azure Network Watcher](../../network-watcher/network-watcher-monitoring-overview.md)|
security Operational Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/operational-best-practices.md
You should continuously monitor the storage services that your application uses
[Azure Storage Analytics](../../storage/common/storage-analytics.md) performs logging and provides metrics data for an Azure storage account. We recommend that you use this data to trace requests, analyze usage trends, and diagnose issues with your storage account.

## Prevent, detect, and respond to threats
-[Microsoft Defender for Cloud](../../security-center/security-center-introduction.md) helps you prevent, detect, and respond to threats by providing increased visibility into (and control over) the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with various security solutions.
+[Microsoft Defender for Cloud](../../defender-for-cloud/defender-for-cloud-introduction.md) helps you prevent, detect, and respond to threats by providing increased visibility into (and control over) the security of your Azure resources. It provides integrated security monitoring and policy management across your Azure subscriptions, helps detect threats that might otherwise go unnoticed, and works with various security solutions.
-The Free tier of Defender for Cloud offers limited security for only your Azure resources. The Standard tier extends these capabilities to on-premises and other clouds. Defender for Cloud Standard helps you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats by using analytics and intelligence, and respond quickly when under attack. You can try Defender for Cloud Standard at no cost for the first 60 days. We recommend that you [upgrade your Azure subscription to Defender for Cloud Standard](../../security-center/security-center-get-started.md).
+The Free tier of Defender for Cloud offers limited security for your resources in Azure as well as Arc-enabled resources outside of Azure. The enhanced security features extend these capabilities to include threat and vulnerability management, as well as regulatory compliance reporting. Defender for Cloud plans help you find and fix security vulnerabilities, apply access and application controls to block malicious activity, detect threats by using analytics and intelligence, and respond quickly when under attack. You can try the enhanced security features at no cost for the first 30 days. We recommend that you [enable enhanced security features on your Azure subscriptions in Defender for Cloud](../../defender-for-cloud/enable-enhanced-security.md).
-Use Defender for Cloud to get a central view of the security state of all your Azure resources. At a glance, verify that the appropriate security controls are in place and configured correctly, and quickly identify any resources that need attention.
+Use Defender for Cloud to get a central view of the security state of all your resources in your own data centers, Azure and other clouds. At a glance, verify that the appropriate security controls are in place and configured correctly, and quickly identify any resources that need attention.
-Defender for Cloud also integrates with [Microsoft Defender Advanced Threat Protection (ATP)](../../security-center/security-center-wdatp.md), which provides comprehensive Endpoint Detection and Response (EDR) capabilities. With Microsoft Defender ATP integration, you can spot abnormalities. You can also detect and respond to advanced attacks on server endpoints monitored by Defender for Cloud.
+Defender for Cloud also integrates with [Microsoft Defender for Endpoint](../../defender-for-cloud/integration-defender-for-endpoint.md), which provides comprehensive Endpoint Detection and Response (EDR) capabilities. With Microsoft Defender for Endpoint integration, you can spot abnormalities and detect vulnerabilities. You can also detect and respond to advanced attacks on server endpoints monitored by Defender for Cloud.
Almost all enterprise organizations have a security information and event management (SIEM) system to help identify emerging threats by consolidating log information from diverse signal gathering devices. The logs are then analyzed by a data analytics system to help identify what's "interesting" from the noise that is inevitable in all log gathering and analytics solutions.
security Recover From Identity Compromise https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/recover-from-identity-compromise.md
+ Title: Use Microsoft and Azure security resources to help recover from systemic identity compromise | Microsoft Docs
-description: Learn how to use Microsoft and Azure security resources, such as Microsoft 365 Defender, Microsoft Sentinel, and Azure Active Directory, and Microsoft Defender for Cloud, and Microsoft recommendations to secure your system against systemic-identity compromises similar to the Nobelium attack (Solorigate) of December 2020.
-
+description: Learn how to use Microsoft and Azure security resources, such as Microsoft 365 Defender, Microsoft Sentinel, Azure Active Directory, Microsoft Defender for Cloud, and Microsoft Defender for IoT, together with Microsoft recommendations, to secure your system against systemic identity compromises.
+ documentationcenter: na -+ editor: '' -+ na Previously updated : 06/17/2021 Last updated : 01/15/2023 # Recovering from systemic identity compromise
-This article describes Microsoft resources and recommendations for recovering from a systemic identity compromise attack against your organization, such as the [Nobelium](https://aka.ms/solorigate) attack of December 2020.
+This article describes Microsoft resources and recommendations for recovering from a systemic identity compromise attack against your organization.
The content in this article is based on guidance provided by Microsoft's Detection and Response Team (DART), which works to respond to compromises and help customers become cyber-resilient. For more guidance from the DART team, see their [Microsoft security blog series](https://www.microsoft.com/security/blog/microsoft-detection-and-response-team-dart-blog-series/).
Responding to systemic identity compromises should include the steps shown in th
|**Investigate your environment** | After you have secured communications on your core investigation team, you can start looking for initial access points and persistence techniques. [Identify your indications of compromise](#identify-indications-of-compromise), and then look for initial access points and persistence. At the same time, start [establishing continuous monitoring operations](#establish-continuous-monitoring) during your recovery efforts. |
|**Improve security posture** | [Enable security features and capabilities](#improve-security-posture) following best practice recommendations for improved system security moving forward. <br><br>Make sure to continue your [continuous monitoring](#establish-continuous-monitoring) efforts as time goes on and the security landscape changes. |
|**Regain / retain control** | You must regain administrative control of your environment from the attacker. After you have control again and have refreshed your system's security posture, make sure to [remediate or block](#remediate-and-retain-administrative-control) all possible persistence techniques and new initial access exploits. |
-| | |
## Establish secure communications
Check for updates in the following Microsoft security products, and implement an
- [Microsoft 365 security solutions and services](/microsoft-365/security/)
- [Windows 10 Enterprise Security](/windows/security/)
- [Microsoft Defender for Cloud Apps](/cloud-app-security/)
+- [Microsoft Defender for IoT](/defender-for-iot/organizations)
Implementing new updates will help identify any prior campaigns and prevent future campaigns against your system. Keep in mind that lists of IOCs may not be exhaustive, and may expand as investigations continue.
Therefore, we recommend also taking the following actions:
- Incorporate threat intelligence feeds into your SIEM, such as by configuring Microsoft Purview Data Connectors in [Microsoft Sentinel](../../sentinel/understand-threat-intelligence.md).
+- Make sure that any extended detection and response tools, such as [Microsoft Defender for IoT](/azure/defender-for-iot/organizations/how-to-work-with-threat-intelligence-packages), are using the most recent threat intelligence data.
+
For more information, see Microsoft's security documentation:

- [Microsoft security documentation](/security/)
Review administrative rights in both your cloud and on-premises environments. Fo
|**All Enterprise applications** | Review for delegated permissions and consent grants that allow any of the following actions: <br><br> - Modifying privileged users and roles <br>- Reading or accessing all mailboxes <br>- Sending or forwarding email on behalf of other users <br>- Accessing all OneDrive or SharePoint site content <br>- Adding service principals that can read/write to the directory |
|**Microsoft 365 environments** |Review access and configuration settings for your Microsoft 365 environment, including: <br>- SharePoint Online Sharing <br>- Microsoft Teams <br>- Power Apps <br>- Microsoft OneDrive for Business |
| **Review user accounts in your environments** |- Review and remove guest user accounts that are no longer needed. <br>- Review email configurations for delegates, mailbox folder permissions, ActiveSync mobile device registrations, Inbox rules, and Outlook on the Web options. <br>- Review ApplicationImpersonation rights and reduce any use of legacy authentication as much as possible. <br>- Validate that MFA is enforced and that both MFA and self-service password reset (SSPR) contact information for all users is correct. |
-| | |
## Establish continuous monitoring
For example, Microsoft security services may have specific resources and guidanc
Microsoft Sentinel has many built-in resources to help in your investigation, such as hunting workbooks and analytics rules that can help detect attacks in relevant areas of your environment.
-For more information, see:
+Use Microsoft Sentinel's content hub to install extended security solutions and data connectors that stream content from other services in your environment. For more information, see:
- [Visualize and analyze your environment](../../sentinel/get-visibility.md)
-- [Detect threats out of the box](../../sentinel/detect-threats-built-in.md).
+- [Detect threats out of the box](../../sentinel/detect-threats-built-in.md)
+- [Discover and deploy out-of-the-box solutions](/azure/sentinel/sentinel-solutions-deploy)
+
+### Monitoring with Microsoft Defender for IoT
+
+If your environment also includes Operational Technology (OT) resources, you may have devices that use specialized protocols, which prioritize operational challenges over security.
+
+Deploy Microsoft Defender for IoT to monitor and secure those devices, especially any that aren't protected by traditional security monitoring systems. Install Defender for IoT network sensors at specific points of interest in your environment to detect threats in ongoing network activity using agentless monitoring and dynamic threat intelligence.
+
+For more information, see [Get started with OT network security monitoring](/azure/defender-for-iot/organizations/getting-started).
+
### Monitoring with Microsoft 365 Defender
The following table describes more methods for using Azure Active directory logs
|**Detect credentials for OAuth applications** | Attackers who have gained control of a privileged account may search for an application with the ability to access any user's email in the organization, and then add attacker-controlled credentials to that application. <br><br>For example, you may want to search for any of the following activities, which would be consistent with attacker behavior: <br>- Adding or updating service principal credentials <br>- Updating application certificates and secrets <br>- Adding an app role assignment grant to a user <br>- Adding Oauth2PermissionGrant |
|**Detect e-mail access by applications** | Search for access to email by applications in your environment. For example, use the [Microsoft Purview Audit (Premium) features](/microsoft-365/compliance/mailitemsaccessed-forensics-investigations) to investigate compromised accounts. |
|**Detect non-interactive sign-ins to service principals** | The Azure Active Directory sign-in reports provide details about any non-interactive sign-ins that used service principal credentials. For example, you can use the sign-in reports to find valuable data for your investigation, such as an IP address used by the attacker to access email applications. |
-| | |
## Improve security posture
We recommend the following actions to ensure your general security posture:
- **Review [Microsoft Secure Score](/microsoft-365/security/mtp/microsoft-secure-score)** for security fundamentals recommendations customized for the Microsoft products and services you consume.

-- **Ensure that your organization has EDR and SIEM solutions in place**, such as [Microsoft 365 Defender for Endpoint](/microsoft-365/security/defender/microsoft-365-defender) and [Microsoft Sentinel](../../sentinel/overview.md).
+- **Ensure that your organization has extended detection and response (XDR) and security information and event management (SIEM) solutions in place**, such as [Microsoft 365 Defender for Endpoint](/microsoft-365/security/defender/microsoft-365-defender), [Microsoft Sentinel](../../sentinel/overview.md), and [Microsoft Defender for IoT](/azure/defender-for-iot/organizations/).
- **Review Microsoft's [Enterprise access model](/security/compass/privileged-access-access-model)**.
We recommend the following actions to ensure identity-related security posture:
- **Eliminate your organization's use of legacy authentication**, if systems or applications still require it. For more information, see [Block legacy authentication to Azure AD with Conditional Access](../../active-directory/conditional-access/block-legacy-authentication.md).
- > [!NOTE]
- > The Exchange Team is planning to [disable Basic Authentication for the EAS, EWS, POP, IMAP, and RPS protocols](https://developer.microsoft.com/en-us/office/blogs/deferred-end-of-support-date-for-basic-authentication-in-exchange-online/) in the second half of 2021.
- >
- > As a point of clarity, Security Defaults and Authentication Policies are separate but provide complementary features.
- >
- > We recommend that customers use Authentication Policies to turn off Basic Authentication for a subset of Exchange Online protocols or to gradually turn off Basic Authentication across a large organization.
- >
- **Treat your ADFS infrastructure and AD Connect infrastructure as a Tier 0 asset**.

- **Restrict local administrative access to the system**, including the account that is used to run the ADFS service.
We recommend the following actions to ensure identity-related security posture:
- If you are using a Service Account and your environment supports it, **migrate from a Service Account to a group-Managed Service Account (gMSA)**. If you cannot move to a gMSA, rotate the password on the Service Account to a complex password.

-- **Ensure Verbose logging is enabled on your ADFS systems**. For example, run the following commands:
-
- ```powershell
- Set-AdfsProperties -AuditLevel verbose
- Restart-Service -Name adfssrv
- Auditpol.exe /set /subcategory:"Application Generated" /failure:enable /success:enable
- ```
+- **Ensure Verbose logging is enabled on your ADFS systems**.
## Remediate and retain administrative control
If your organization decides *not* to [remove trust](#remove-trust-on-your-curre
Rotating the token-signing certificate a single time still allows the previous token-signing certificate to work. Continuing to allow previous certificates to work is a built-in functionality for normal certificate rotations, which permits a grace period for organizations to update any relying party trusts before the certificate expires.
-If there was an attack, you don't want the attacker to retain access at all. Make sure to use the following steps to ensure that the attacker doesn't maintain the ability to forge tokens for your domain.
-
-> [!CAUTION]
-> The last step in this procedure logs users out of their phones, current webmail sessions, and any other items that are using the associated tokens and refresh tokens.
->
-
-> [!TIP]
-> Performing these steps in your ADFS environment creates both a primary and secondary certificate, and automatically promotes the secondary certificate to primary after a default period of 5 days.
->
-> If you have Relying Party Trusts, this may have effects 5 days after the initial ADFS environment change, and should be accounted for in your plan. You can also resolve this by replacing the primary certificate a third time, using the **Urgent** flag again, and removing the secondary certificate or turning off automatic certificate rotation.
->
-
-**To fully rotate the token-signing certificate, and prevent new token forging by an attacker**
-
-1. Check to make sure that your **AutoCertificateRollover** parameter is set to **True**:
-
- ``` powershell
- Get-AdfsProperties | FL AutoCert*, Certificate*
- ```
- If **AutoCertificateRollover** isn't set to **True**, set the value as follows:
-
- ``` powershell
- Set-ADFSProperties -AutoCertificateRollover $true
- ```
+If there was an attack, you don't want the attacker to retain access at all. Make sure that the attacker doesn't retain the ability to forge tokens for your domain.
-1. Connect to the Microsoft Online Service:
-
- ``` powershell
- Connect-MsolService
- ```
-
-1. Run the following command and make a note of your on-premises and cloud token signing certificate thumbprint and expiration dates:
-
- ``` powershell
- Get-MsolFederationProperty -DomainName <domain>
- ```
-
- For example:
-
- ```powershell
- ...
- [Not Before]
- 12/9/2020 7:57:13 PM
-
- [Not After]
- 12/9/2021 7:57:13 PM
-
- [Thumbprint]
- 3UD1JG5MEFHSBW7HEPF6D98EI8AHNTY22XPQWJFK6
- ```
-
-1. Replace the primary token signing certificate using the **Urgent** switch. This command causes ADFS to replace the primary certificate immediately, without making it a secondary certificate:
-
- ```powershell
- Update-AdfsCertificate -CertificateType Token-Signing -Urgent
- ```
-
-1. Create a secondary Token Signing certificate, without the **Urgent** switch. This command allows for two on-premises token signing certificates before synching with Azure Cloud.
-
- ```powershell
- Update-AdfsCertificate -CertificateType Token-Signing
- ```
-
-1. Update the cloud environment with both the primary and secondary certificates on-premises to immediately remove the cloud published token signing certificate.
-
- ```powershell
- Update-MsolFederatedDomain -DomainName <domain>
- ```
-
- > [!IMPORTANT]
- > If this step is not performed using this method, the old token signing certificate may still be able to authenticate users.
-
-1. To ensure that these steps have been performed correctly, verify that the certificate displayed before in step 3 is now removed:
-
- ```powershell
- Get-MsolFederationProperty -DomainName <domain>
- ```
-
-1. Revoke your refresh tokens via PowerShell, to prevent access with the old tokens.
-
- For more information, see:
-
- - [Revoke user access in Azure Active Directory](../../active-directory/enterprise-users/users-revoke-access.md)
- - [Revoke-AzureADUserAllRefreshToken PowerShell docs](/powershell/module/azuread/revoke-azureaduserallrefreshtoken)
+For more information, see:
+- [Revoke user access in Azure Active Directory](../../active-directory/enterprise-users/users-revoke-access.md)
+- [Revoke-AzureADUserAllRefreshToken PowerShell docs](/powershell/module/azuread/revoke-azureaduserallrefreshtoken)
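For example, a minimal sketch using the cmdlet linked above to revoke a single user's refresh tokens (the object ID is a placeholder; repeat per affected user):

```powershell
# Requires the AzureAD PowerShell module
Connect-AzureAD

# Revoke all refresh tokens issued to the user; existing access tokens
# remain valid until they expire
Revoke-AzureADUserAllRefreshToken -ObjectId "<user-object-id>"
```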
### Replace your ADFS servers
-If, instead of [rotating your SAML token-signing certificate](#rotate-your-saml-token-signing-certificate), you decide to replace the ADFS servers with clean systems, you'll need to remove the existing ADFS from your environment, and then build a new one.
+If, instead of rotating your SAML token-signing certificate, you decide to replace the ADFS servers with clean systems, you'll need to remove the existing ADFS from your environment, and then build a new one.
For more information, see [Remove a configuration](../../active-directory/cloud-sync/how-to-configure.md#remove-a-configuration).
In addition to the recommendations listed earlier in this article, we also recom
|**Enforce MFA** | Enforce Multi-Factor Authentication (MFA) across all elevated users in the tenant. We recommend enforcing MFA across all users in the tenant. |
|**Limit administrative access** | Implement [Privileged Identity Management](../../active-directory/privileged-identity-management/pim-configure.md) (PIM) and conditional access to limit administrative access. <br><br>For Microsoft 365 users, implement [Privileged Access Management](https://techcommunity.microsoft.com/t5/microsoft-security-and/privileged-access-management-in-office-365-is-now-generally/ba-p/261751) (PAM) to limit access to sensitive abilities, such as eDiscovery, Global Admin, Account Administration, and more. |
|**Review / reduce delegated permissions and consent grants** | Review and reduce all Enterprise Applications delegated permissions or [consent grants](/graph/auth-limit-mailbox-access) that allow any of the following functionalities: <br><br>- Modification of privileged users and roles <br>- Reading, sending email, or accessing all mailboxes <br>- Accessing OneDrive, Teams, or SharePoint content <br>- Adding Service Principals that can read/write to the directory <br>- Application Permissions versus Delegated Access |
-| | |
### On-premises remediation activities
In addition to the recommendations listed earlier in this article, we also recom
|**Reset the krbtgt account** | Reset the **krbtgt** account twice using the [New-KrbtgtKeys](https://github.com/microsoft/New-KrbtgtKeys.ps1/blob/master/New-KrbtgtKeys.ps1) script. <br><br>**Note**: If you are using Read-Only Domain Controllers, you will need to run the script separately for Read-Write Domain Controllers and for Read-Only Domain Controllers. |
|**Schedule a system restart** | After you validate that no persistence mechanisms created by the attacker exist or remain on your system, schedule a system restart to assist with removing memory-resident malware. |
|**Reset the DSRM password** | Reset each domain controller's DSRM (Directory Services Restore Mode) password to something unique and complex. |
-| | |
### Remediate or block persistence discovered during investigation
security Subdomain Takeover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/subdomain-takeover.md
Title: Prevent subdomain takeovers with Azure DNS alias records and Azure App Service's custom domain verification description: Learn how to avoid the common high-severity threat of subdomain takeover -+ ms.assetid:
na Previously updated : 02/04/2021- Last updated : 01/19/2023+ # Prevent dangling DNS entries and avoid subdomain takeover This article describes the common security threat of subdomain takeover and the steps you can take to mitigate against it. - ## What is a subdomain takeover?
-Subdomain takeovers are a common, high-severity threat for organizations that regularly create, and delete many resources. A subdomain takeover can occur when you have a [DNS record](../../dns/dns-zones-records.md#dns-records) that points to a deprovisioned Azure resource. Such DNS records are also known as "dangling DNS" entries. CNAME records are especially vulnerable to this threat. Subdomain takeovers enable malicious actors to redirect traffic intended for an organizationΓÇÖs domain to a site performing malicious activity.
+Subdomain takeovers are a common, high-severity threat for organizations that regularly create, and delete many resources. A subdomain takeover can occur when you have a [DNS record](../../dns/dns-zones-records.md#dns-records) that points to a deprovisioned Azure resource. Such DNS records are also known as "dangling DNS" entries. CNAME records are especially vulnerable to this threat. Subdomain takeovers enable malicious actors to redirect traffic intended for an organization's domain to a site performing malicious activity.
A common scenario for a subdomain takeover:
A common scenario for a subdomain takeover:
1. **DEPROVISIONING:**
- 1. The Azure resource is deprovisioned or deleted after it is no longer needed.
-
- At this point, the CNAME record `greatapp.contoso.com` *should* be removed from your DNS zone. If the CNAME record isn't removed, it's advertised as an active domain but doesn't route traffic to an active Azure resource. This is the definition of a ΓÇ£danglingΓÇ¥ DNS record.
+ 1. The Azure resource is deprovisioned or deleted after it is no longer needed.
+
+ At this point, the CNAME record `greatapp.contoso.com` *should* be removed from your DNS zone. If the CNAME record isn't removed, it's advertised as an active domain but doesn't route traffic to an active Azure resource. This is the definition of a "dangling" DNS record.
- 1. The dangling subdomain, `greatapp.contoso.com`, is now vulnerable and can be taken over by being assigned to another Azure subscriptionΓÇÖs resource.
+ 1. The dangling subdomain, `greatapp.contoso.com`, is now vulnerable and can be taken over by being assigned to another Azure subscription's resource.
1. **TAKEOVER:**
A common scenario for a subdomain takeover:
1. The threat actor provisions an Azure resource with the same FQDN as the resource you previously controlled. In this example, `app-contogreat-dev-001.azurewebsites.net`.
- 1. Traffic being sent to the subdomain `greatapp.contoso.com` is now routed to the malicious actorΓÇÖs resource where they control the content.
--
+ 1. Traffic being sent to the subdomain `greatapp.contoso.com` is now routed to the malicious actor's resource where they control the content.
![Subdomain takeover from a deprovisioned website](./media/subdomain-takeover/subdomain-takeover.png)

## The risks of subdomain takeover
-When a DNS record points to a resource that isn't available, the record itself should have been removed from your DNS zone. If it hasn't been deleted, it's a ΓÇ£dangling DNSΓÇ¥ record and creates the possibility for subdomain takeover.
+When a DNS record points to a resource that isn't available, the record itself should have been removed from your DNS zone. If it hasn't been deleted, it's a "dangling DNS" record and creates the possibility for subdomain takeover.
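As a quick spot check, a minimal sketch using `dig` with this article's example names (any DNS lookup tool works):

```bash
# See where the subdomain's CNAME points
dig +short CNAME greatapp.contoso.com

# If the CNAME target no longer resolves, the record may be dangling
dig +short app-contogreat-dev-001.azurewebsites.net
```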
Dangling DNS entries make it possible for threat actors to take control of the associated DNS name to host a malicious website or service. Malicious pages and services on an organization's subdomain might result in:
Dangling DNS entries make it possible for threat actors to take control of the a
- **Further risks** - Malicious sites might be used to escalate into other classic attacks such as XSS, CSRF, CORS bypass, and more.

## Identify dangling DNS entries

To identify DNS entries within your organization that might be dangling, use Microsoft's GitHub-hosted PowerShell tools ["Get-DanglingDnsRecords"](https://aka.ms/DanglingDNSDomains).
The tool supports the Azure resources listed in the following table. The tool ex
| Azure App Service | microsoft.web/sites | properties.defaultHostName | `abc.azurewebsites.net` |
| Azure App Service - Slots | microsoft.web/sites/slots | properties.defaultHostName | `abc-def.azurewebsites.net` |

### Prerequisites

Run the query as a user who has:
Run the query as a user who has:
- at least reader level access to the Azure subscriptions
- read access to Azure resource graph
-If you're a global administrator of your organizationΓÇÖs tenant, elevate your account to have access to all of your organizationΓÇÖs subscription using the guidance in [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md).
-
+If you're a global administrator of your organization's tenant, elevate your account to have access to all of your organization's subscriptions using the guidance in [Elevate access to manage all Azure subscriptions and management groups](../../role-based-access-control/elevate-access-global-admin.md).
> [!TIP]
-> Azure Resource Graph has throttling and paging limits that you should consider if you have a large Azure environment.
+> Azure Resource Graph has throttling and paging limits that you should consider if you have a large Azure environment.
>
> [Learn more about working with large Azure resource data sets](../../governance/resource-graph/concepts/work-with-data.md).
>
If you're a global administrator of your organizationΓÇÖs tenant, elevate your a
Learn more about the PowerShell script, **Get-DanglingDnsRecords.ps1**, and download it from GitHub: https://aka.ms/Get-DanglingDnsRecords.
-## Remediate dangling DNS entries
+## Remediate dangling DNS entries
Review your DNS zones and identify CNAME records that are dangling or have been taken over. If subdomains are found to be dangling or have been taken over, remove the vulnerable subdomains and mitigate the risks with the following steps:
Review your DNS zones and identify CNAME records that are dangling or have been
1. Review your application code for references to specific subdomains and update any incorrect or outdated subdomain references.
-1. Investigate whether any compromise has occurred and take action per your organizationΓÇÖs incident response procedures. Tips and best practices for investigating this issue can be found below.
+1. Investigate whether any compromise has occurred and take action per your organization's incident response procedures. Tips and best practices for investigating this issue can be found below.
   If your application logic is such that secrets such as OAuth credentials were sent to the dangling subdomain, or privacy-sensitive information was sent to the dangling subdomains, that data might have been exposed to third parties.

1. Understand why the CNAME record was not removed from your DNS zone when the resource was deprovisioned and take steps to ensure that DNS records are updated appropriately when Azure resources are deprovisioned in the future.

## Prevent dangling DNS entries

Ensuring that your organization has implemented processes to prevent dangling DNS entries and the resulting subdomain takeovers is a crucial part of your security program.
-Some Azure services offer features to aid in creating preventative measures and are detailed below. Other methods to prevent this issue must be established through your organizationΓÇÖs best practices or standard operating procedures.
+Some Azure services offer features to aid in creating preventative measures and are detailed below. Other methods to prevent this issue must be established through your organization's best practices or standard operating procedures.
### Enable Microsoft Defender for App Service
-Microsoft Defender for Cloud's integrated cloud workload protection platform (CWPP), Microsoft Defender for Cloud, offers a range of plans to protect your Azure, hybrid, and multi-cloud resources and workloads.
+Microsoft Defender for Cloud's integrated cloud workload protection platform (CWPP) offers a range of plans to protect your Azure, hybrid, and multicloud resources and workloads.
The **Microsoft Defender for App Service** plan includes dangling DNS detection. With this plan enabled, you'll get security alerts if you decommission an App Service website but don't remove its custom domain from your DNS registrar. Microsoft Defender for Cloud's dangling DNS protection is available whether your domains are managed with Azure DNS or an external domain registrar and applies to App Service on both Windows and Linux.
-Learn more about this and other benefits of this Microsoft Defender plans in [Introduction to Microsoft Defender for App Service](../../security-center/defender-for-app-service-introduction.md).
+Learn more about this and other benefits of the Microsoft Defender plans in [Introduction to Microsoft Defender for App Service](../../defender-for-cloud/defender-for-app-service-introduction.md).
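As an illustration, a sketch of enabling the plan at subscription scope with the Azure CLI (the plan name `AppServices` is the value used by the `az security pricing` command; verify against your CLI version):

```bash
# Enable the Microsoft Defender for App Service plan on the current subscription
az security pricing create --name AppServices --tier standard
```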
### Use Azure DNS alias records
Despite the limited service offerings today, we recommend using alias records to
[Learn more about the capabilities of Azure DNS's alias records](../../dns/dns-alias.md#capabilities).

### Use Azure App Service's custom domain verification
-When creating DNS entries for Azure App Service, create an asuid.{subdomain} TXT record with the Domain Verification ID. When such a TXT record exists, no other Azure Subscription can validate the Custom Domain that is, take it over.
+When creating DNS entries for Azure App Service, create an asuid.{subdomain} TXT record with the Domain Verification ID. When such a TXT record exists, no other Azure Subscription can validate the Custom Domain; that is, no one else can take it over.
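For illustration, a minimal sketch with the Azure CLI, using placeholder names for the app, resource group, and DNS zone:

```bash
# Look up the app's domain verification ID
az webapp show --name greatapp --resource-group my-rg \
  --query customDomainVerificationId --output tsv

# Create the asuid TXT record so only your subscription can validate the domain
az network dns record-set txt add-record \
  --resource-group my-rg \
  --zone-name contoso.com \
  --record-set-name asuid.greatapp \
  --value "<verification-id>"
```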
These records don't prevent someone from creating the Azure App Service with the same name that's in your CNAME entry. Without the ability to prove ownership of the domain name, threat actors can't receive traffic or control the content.

[Learn more about how to map an existing custom DNS name to Azure App Service](../../app-service/app-service-web-tutorial-custom-domain.md).

### Build and automate processes to mitigate the threat

It's often up to developers and operations teams to run cleanup processes to avoid dangling DNS threats. The practices below will help ensure your organization avoids suffering from this threat.
It's often up to developers and operations teams to run cleanup processes to avo
- Maintain a service catalog of your Azure fully qualified domain name (FQDN) endpoints and the application owners. To build your service catalog, run the following Azure Resource Graph query script. This script projects the FQDN endpoint information of the resources you have access to and outputs them in a CSV file. If you have access to all the subscriptions for your tenant, the script considers all those subscriptions as shown in the following sample script. To limit the results to a specific set of subscriptions, edit the script as shown.

- **Create procedures for remediation:**
  - When dangling DNS entries are found, your team needs to investigate whether any compromise has occurred.
  - Investigate why the address wasn't rerouted when the resource was decommissioned.
  - Delete the DNS record if it's no longer in use, or point it to the correct Azure resource (FQDN) owned by your organization.

### Clean up DNS pointers or Re-claim the DNS
-Upon deletion of the classic cloud service resource, the corresponding DNS is reserved for 7 days. During the reservation period, re-use of the DNS will be forbidden EXCEPT for subscriptions belonging to the AAD tenant of the subscription originally owning the DNS. After the reservation expires, the DNS is free to be claimed by any subscription. By taking DNS reservations, the customer is afforded some time to either 1) clean up any associations/pointers to said DNS or 2) re-claim the DNS in Azure. The DNS name being reserved can be derived by appending the cloud service name to the DNS zone for that cloud.
+Upon deletion of the classic cloud service resource, the corresponding DNS is reserved for 7 days. During the reservation period, re-use of the DNS will be forbidden EXCEPT for subscriptions belonging to the Azure AD tenant of the subscription originally owning the DNS. After the reservation expires, the DNS is free to be claimed by any subscription. By taking DNS reservations, the customer is afforded some time to either 1) clean up any associations/pointers to said DNS or 2) re-claim the DNS in Azure. The DNS name being reserved can be derived by appending the cloud service name to the DNS zone for that cloud.
-Public - cloudapp.net
-Mooncake - chinacloudapp.cn
-Fairfax - usgovcloudapp.net
-BlackForest - azurecloudapp.de
+- Public - cloudapp.net
+- Mooncake - chinacloudapp.cn
+- Fairfax - usgovcloudapp.net
+- BlackForest - azurecloudapp.de
-i.e. a hosted service in Public named ΓÇ£testΓÇ¥ would have DNS ΓÇ£test.cloudapp.netΓÇ¥
+For example, a hosted service in Public named "test" would have DNS "test.cloudapp.net".
Example:
-Subscription ΓÇÿAΓÇÖ and subscription ΓÇÿBΓÇÖ are the only subscriptions belonging to AAD tenant ΓÇÿABΓÇÖ. Subscription ΓÇÿAΓÇÖ contains a classic cloud service ΓÇÿtestΓÇÖ with DNS name ΓÇÿtest.cloudapp.netΓÇÖ. Upon deletion of the cloud service, a reservation is taken on DNS name ΓÇÿtest.cloudapp.netΓÇÖ. During the 7 day reservation period, only subscription ΓÇÿAΓÇÖ or subscription ΓÇÿBΓÇÖ will be able to claim the DNS name ΓÇÿtest.cloudapp.netΓÇÖ by creating a classic cloud service named ΓÇÿtestΓÇÖ. No other subscriptions will be allowed to claim it. After the 7 days is up, any subscription in Azure can now claim ΓÇÿtest.cloudapp.netΓÇÖ.
-
+Subscription 'A' and subscription 'B' are the only subscriptions belonging to Azure AD tenant 'AB'. Subscription 'A' contains a classic cloud service 'test' with DNS name 'test.cloudapp.net'. Upon deletion of the cloud service, a reservation is taken on DNS name 'test.cloudapp.net'. During the 7 day reservation period, only subscription 'A' or subscription 'B' will be able to claim the DNS name 'test.cloudapp.net' by creating a classic cloud service named 'test'. No other subscriptions will be allowed to claim it. After the 7 days are up, any subscription in Azure can now claim 'test.cloudapp.net'.
## Next steps To learn more about related services and Azure features you can use to defend against subdomain takeover, see the following pages. -- [Enable Microsoft Defender for App Service](../../security-center/defender-for-app-service-introduction.md) - to receive alerts when dangling DNS entries are detected
+- [Enable Microsoft Defender for App Service](../../defender-for-cloud/enable-enhanced-security.md) - to receive alerts when dangling DNS entries are detected
- [Prevent dangling DNS records with Azure DNS](../../dns/dns-alias.md#prevent-dangling-dns-records)
security Trusted Hardware Identity Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/trusted-hardware-identity-management.md
Last updated 10/24/2022
# Trusted Hardware Identity Management
-The Trusted Hardware Identity Management (THIM) service handles cache management of certificates for all Trusted Execution Environments (TEE) residing in Azure and provides trusted computing base (TCB) information to enforce a minimum baseline for attestation solutions.
+The Trusted Hardware Identity Management (THIM) service handles cache management of certificates for all trusted execution environments (TEE) residing in Azure and provides trusted computing base (TCB) information to enforce a minimum baseline for attestation solutions.
## THIM & attestation interactions
THIM defines the Azure security baseline for Azure Confidential computing (ACC)
### The "next update" date of the Azure-internal caching service API, used by Microsoft Azure Attestation, seems to be out of date. Is it still in operation and can it be used?
-The "tcbinfo" field contains the TCB information. The THIM service by default provides an older tcbinfo -- updating to the latest tcbinfo from Intel would cause attestation failures for those customers who haven't migrated to the latest Intel SDK, and could results in outages.
+The "tcbinfo" field contains the TCB information. The THIM service by default provides an older tcbinfo--updating to the latest tcbinfo from Intel would cause attestation failures for those customers who haven't migrated to the latest Intel SDK, and could results in outages.
However, the Open Enclave SDK and Microsoft Azure Attestation don't check the nextUpdate date, and will pass attestation.
Azure Data Center Attestation Primitives (DCAP), a replacement for Intel Quote P
### Why are there different baselines between THIM and Intel?
-THIM and Intel provide different baseline levels of the trusted computing base. While Intel can be viewed as having the latest and greatest, this imposes requirements upon the consumer to ensure that all the requirements are satisfied, thus leading to a potential breakage of customers if they haven't updated to the specified requirements. THIM takes a slower approach to updating the TCB baseline to allow customers to make the necessary changes at their own pace. This approach, while does provide an older TCB baseline, ensures that customers will not break if they haven't been able to meet the requirements of the new TCB baseline. This reason is why THIM's TCB baseline is of a different version from Intel's. We're customer-focused and want to empower the customer to meet the requirements imposed by the new TCB baseline on their pace, instead of forcing them to update and causing them a disruption that would require reprioritization of their workstreams.
+THIM and Intel provide different baseline levels of the trusted computing base. While Intel can be viewed as having the latest and greatest, this imposes requirements upon the consumer to ensure that all the requirements are satisfied, thus leading to a potential breakage of customers if they haven't updated to the specified requirements. THIM takes a slower approach to updating the TCB baseline to allow customers to make the necessary changes at their own pace. This approach, while it does provide an older TCB baseline, ensures that customers won't break if they haven't been able to meet the requirements of the new TCB baseline. This is why THIM's TCB baseline is of a different version from Intel's. We're customer-focused and want to empower customers to meet the requirements imposed by the new TCB baseline at their own pace, instead of forcing them to update and causing a disruption that would require reprioritization of their workstreams.
THIM is also introducing a new feature that will enable customers to select their own custom baseline. This feature will allow customers to decide between the newest TCB or using an older TCB than provided by Intel, enabling customers to ensure that the TCB version to enforce is compliant with their specific configuration. This new feature will be reflected in a future iteration of the THIM documentation.
THIM is also introducing a new feature that will enable customers to select thei
The certificates are fetched and cached in THIM service using platform manifest and indirect registration. As a result, Key Caching Policy will be set to never store platform root keys for a given platform. Direct calls to the Intel service from inside the VM are expected to fail.
-To retrieve the certificate, you must install the [Azure DCAP library](#what-is-the-azure-dcap-library) which replaces Intel QPL. This library directs the fetch requests to THIM service running in Azure cloud. For the downloading the latest DCAP packages, please see: [Where can I download the latest DCAP packages?](#where-can-i-download-the-latest-dcap-packages)
+To retrieve the certificate, you must install the [Azure DCAP library](#what-is-the-azure-dcap-library) that replaces Intel QPL. This library directs the fetch requests to the THIM service running in the Azure cloud. To download the latest DCAP packages, see: [Where can I download the latest DCAP packages?](#where-can-i-download-the-latest-dcap-packages)
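For example, on an Ubuntu VM this might look like the following sketch (assumes the Microsoft package repository is already configured; the package name is an assumption based on Microsoft's published packages and can vary by distribution):

```bash
sudo apt-get update
sudo apt-get install az-dcap-client   # Azure DCAP client, replacing the Intel QPL
```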
### How do I request collateral in a Confidential Virtual Machine (CVM)?
-Use the following sample in a CVM guest for requesting AMD collateral that includes the VCEK certificate and certificate chain. For details on this collateral and where it originates from, see [Versioned Chip Endorsement Key (VCEK) Certificate and KDS Interface Specification](https://www.amd.com/system/files/TechDocs/57230.pdf) (from <amd.com>).
+Use the following sample in a CVM guest for requesting AMD collateral that includes the VCEK certificate and certificate chain. For details on this collateral and where it originates from, see [Versioned Chip Endorsement Key (VCEK) Certificate and KDS Interface Specification](https://www.amd.com/system/files/TechDocs/57230.pdf).
#### URI parameters
curl GET "http://169.254.169.254/metadat/certification" -H "Metadata:
| Name | Description |
|--|--|
-| 200 OK | Lists available collateral in http body within JSON format. For details on the keys in the JSON, please see Definitions |
+| 200 OK | Lists available collateral in http body within JSON format. For details on the keys in the JSON, see Definitions |
| Other Status Codes | Error response describing why the operation failed |

#### Definitions
curl GET "http://169.254.169.254/metadat/certification" -H "Metadata:
| tcbm | Trusted Computing Base |
| certificateChain | Includes the AMD SEV Key (ASK) and AMD Root Key (ARK) certificates |
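Putting the endpoint and the response fields together, a sketch of fetching and saving the collateral from inside the CVM (the `jq` processing and file names are illustrative; field names are taken from the Definitions table above):

```bash
# Query the IMDS certification endpoint shown in this article
curl -s -H "Metadata: true" "http://169.254.169.254/metadat/certification" -o thim.json

# Extract the VCEK certificate and the ASK/ARK chain for use with verification tools
jq -r '.vcekCert, .certificateChain' thim.json > vcek-chain.pem
```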
+### How do I request AMD collateral in an Azure Kubernetes Service (AKS) Container on a Confidential Virtual Machine (CVM) node?
+
+Follow the steps for requesting AMD collateral in a confidential container.
+1. Start by creating an AKS cluster on CVM mode or adding a CVM node pool to the existing cluster.
+ 1. Create an AKS Cluster on CVM node.
+ 1. Create a resource group in one of the CVM supported regions.
+ ```bash
+ az group create --resource-group <RG_NAME> --location <LOCATION>
+ ```
+ 2. Create an AKS cluster with one CVM node in the resource group.
+ ```bash
+ az aks create --name <CLUSTER_NAME> --resource-group <RG_NAME> -l <LOCATION> --node-vm-size Standard_DC4as_v5 --nodepool-name <POOL_NAME> --node-count 1
+ ```
+ 3. Configure kubectl to connect to the cluster.
+ ```bash
+ az aks get-credentials --resource-group <RG_NAME> --name <CLUSTER_NAME>
+ ```
+ 2. Add a CVM node pool to the existing AKS cluster.
+ ```bash
+    az aks nodepool add --cluster-name <CLUSTER_NAME> --resource-group <RG_NAME> --name <POOL_NAME> --node-vm-size Standard_DC4as_v5 --node-count 1
+ ```
+ 3. Verify the connection to your cluster using the kubectl get command. This command returns a list of the cluster nodes.
+ ```bash
+ kubectl get nodes
+ ```
+ The following output example shows the single node created in the previous steps. Make sure the node status is Ready:
+
+ | NAME | STATUS | ROLES | AGE | VERSION |
+ |--|--|--|--|--|
+ | aks-nodepool1-31718369-0 | Ready | agent | 6m44s | v1.12.8 |
+
+2. Once the AKS cluster is created, create a curl.yaml file with the following content. It defines a job that runs a curl container to fetch AMD collateral from the THIM endpoint. For more information about Kubernetes Jobs, see the [Kubernetes documentation](https://kubernetes.io/docs/concepts/workloads/controllers/job/).
+
+ **curl.yaml**
+    ```yaml
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ name: curl
+ spec:
+ template:
+ metadata:
+ labels:
+ app: curl
+ spec:
+ nodeSelector:
+ kubernetes.azure.com/security-type: ConfidentialVM
+ containers:
+ - name: curlcontainer
+ image: alpine/curl:3.14
+ imagePullPolicy: IfNotPresent
+ args: ["-H", "Metadata:true", "http://169.254.169.254/metadat/certification"]
+ restartPolicy: "Never"
+ ```
+
+ **Arguments**
+
+ | Name | Type | Description |
+ |--|--|--|
+   | Metadata | Boolean | Set to true to allow collateral to be returned |
+
+3. Run the job by applying the curl.yaml.
+ ```bash
+ kubectl apply -f curl.yaml
+ ```
+4. Check and wait for the pod to complete its job.
+ ```bash
+ kubectl get pods
+ ```
+
+ **Example Response**
+
+ | Name | Ready | Status | Restarts | Age |
+ |--|--|--|--|--|
+ | Curl-w7nt8 | 0/1 | Completed | 0 | 72 s |
+
+5. Run the following command to get the job logs and verify that it worked. A successful output should include vcekCert, tcbm, and certificateChain.
+ ```bash
+ kubectl logs job/curl
+ ```
## Next steps

- Learn more about [Azure Attestation documentation](../../attestation/overview.md)
sentinel Add Entity To Threat Intelligence https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/add-entity-to-threat-intelligence.md
Title: Add entities to threat intelligence in Microsoft Sentinel description: This article shows you, if you discover a malicious entity in an incident investigation, how to add the entity to your threat intelligence indicator lists in Microsoft Sentinel. - Previously updated : 08/25/2022 + Last updated : 01/17/2023 # Add entities to threat intelligence in Microsoft Sentinel
When investigating an incident, you examine entities and their context as an imp
For example, you may discover an IP address performing port scans across your network, or functioning as a command and control node, sending and/or receiving transmissions from large numbers of nodes in your network.
-Microsoft Sentinel allows you to flag these types of entities as malicious, right from within the investigation graph, and add it to your threat indicator lists. You'll then be able to view the added indicators both in Logs and in the Threat Intelligence blade, and use them across your Microsoft Sentinel workspace.
+Microsoft Sentinel allows you to flag these types of entities as malicious, right from within your incident investigation, and add them to your threat indicator lists. You'll then be able to view the added indicators both in Logs and in the Threat Intelligence blade, and use them across your Microsoft Sentinel workspace.
> [!IMPORTANT]
> Adding entities as TI indicators is currently in PREVIEW. The [Azure Preview Supplemental Terms](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) include additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.

## Add an entity to your indicators list
+The new [incident details page](investigate-incidents.md) gives you another way to add entities to threat intelligence, in addition to the investigation graph. Both ways are shown below.
+
+# [Incident details page](#tab/incidents)
+
+1. From the Microsoft Sentinel navigation menu, select **Incidents**.
+
+1. Select an incident to investigate. In the incident details panel, select **View full details** to open the incident details page.
+
+ :::image type="content" source="media/add-entity-to-threat-intelligence/incident-details-overview.png" alt-text="Screenshot of incident details page." lightbox="media/add-entity-to-threat-intelligence/incident-details-overview.png":::
+
+1. Find the entity from the **Entities** widget that you want to add as a threat indicator. (You can filter the list or enter a search string to help you locate it.)
+
+1. Select the three dots to the right of the entity, and select **Add to TI (Preview)** from the pop-up menu.
+
+ Only the following types of entities can be added as threat indicators:
+ - Domain name
+ - IP address (IPv4 and IPv6)
+ - URL
+ - File (hash)
+
+ :::image type="content" source="media/add-entity-to-threat-intelligence/entity-actions-from-overview.png" alt-text="Screenshot of adding an entity to threat intelligence.":::
+
+# [Investigation graph](#tab/cases)
+ The [investigation graph](investigate-cases.md) is a visual, intuitive tool that presents connections and patterns and enables your analysts to ask the right questions and follow leads. You can use it to add entities to your threat intelligence indicator lists, making them available across your workspace.
+
+1. From the Microsoft Sentinel navigation menu, select **Incidents**.
The [investigation graph](investigate-cases.md) is a visual, intuitive tool that
:::image type="content" source="media/add-entity-to-threat-intelligence/add-entity-to-ti.png" alt-text="Screenshot of adding entity to threat intelligence."::: ++
+Whichever of the two interfaces you choose, you will end up here:
+ 1. The **New indicator** side panel will open. The following fields will be populated automatically:
+
+    - **Type**
The [investigation graph](investigate-cases.md) is a visual, intuitive tool that
  - Optional; automatically populated by the **incident ID**. You can add others.

- **Name**
- - Name of the indicator - this is what will be displayed in your list of indicators.
+ - Name of the indicator&mdash;this is what will be displayed in your list of indicators.
  - Optional; automatically populated by the **incident name.**

- **Created by**
  - Creator of the indicator.
- - Optional; automatically-populated by the user logged into Microsoft Sentinel.
+ - Optional; automatically populated by the user logged into Microsoft Sentinel.
Fill in the remaining fields accordingly.
The [investigation graph](investigate-cases.md) is a visual, intuitive tool that
In this article, you learned how to add entities to your threat indicator lists. For more information, see:

-- [Investigate incidents with Microsoft Sentinel](investigate-cases.md)
+- [Investigate incidents with Microsoft Sentinel](investigate-incidents.md)
- [Understand threat intelligence in Microsoft Sentinel](understand-threat-intelligence.md) - [Work with threat indicators in Microsoft Sentinel](work-with-threat-indicators.md)
sentinel Connect Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/connect-microsoft-purview.md
With the connector, you can:
### Azure Information Protection connector vs. Microsoft Purview Information Protection connector
-This connector replaces the Azure Information Protection (AIP) data connector. The Azure Information Protection (AIP) data connector uses the AIP audit logs (public preview) feature. As of **March 31, 2023**, the AIP analytics and audit logs public preview will be retired, and moving forward will be using the [Microsoft 365 auditing solution](/microsoft-365/compliance/auditing-solutions-overview).
+This connector replaces the Azure Information Protection (AIP) data connector. The Azure Information Protection (AIP) data connector uses the AIP audit logs (public preview) feature.
-For more information:
-- See [Removed and retired services](/azure/information-protection/removed-sunset-services#azure-information-protection-analytics).-- Learn how to [disconnect the AIP connector](#disconnect-the-azure-information-protection-connector).
+> [!IMPORTANT]
+>
+> As of **March 31, 2023**, the AIP analytics and audit logs public preview will be retired; moving forward, the [Microsoft 365 auditing solution](/microsoft-365/compliance/auditing-solutions-overview) will be used.
+>
+> For more information:
+> - See [Removed and retired services](/azure/information-protection/removed-sunset-services#azure-information-protection-analytics).
+> - Learn how to [disconnect the AIP connector](#disconnect-the-azure-information-protection-connector).
When you enable the Microsoft Purview Information Protection connector, audit logs stream into the standardized `MicrosoftPurviewInformationProtection` table. Data is gathered through the [Office Management API](/office/office-365-management-api/office-365-management-activity-api-schema), which uses a structured schema. The new standardized schema is adjusted to enhance the deprecated schema used by AIP, with more fields and easier access to parameters.
sentinel Data Connectors Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/data-connectors-reference.md
See [Microsoft Defender for Cloud](#microsoft-defender-for-cloud).
## Azure Information Protection (Preview)
-> [!NOTE]
+> [!IMPORTANT]
+>
> The Azure Information Protection (AIP) data connector uses the AIP audit logs (public preview) feature. As of **March 31, 2023**, the AIP analytics and audit logs public preview will be retired; moving forward, the [Microsoft 365 auditing solution](/microsoft-365/compliance/auditing-solutions-overview) will be used.
>
> For more information, see [Removed and retired services](/azure/information-protection/removed-sunset-services#azure-information-protection-analytics).
->
See the [Microsoft Purview Information Protection](#microsoft-purview-information-protection-preview) connector, which will replace this connector.
sentinel Detect Threats Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/detect-threats-custom.md
Title: Create custom analytics rules to detect threats with Microsoft Sentinel | Microsoft Docs description: Learn how to create custom analytics rules to detect security threats with Microsoft Sentinel. Take advantage of event grouping, alert grouping, and alert enrichment, and understand AUTO DISABLED. - Previously updated : 01/30/2022 -+ Last updated : 01/08/2023 # Create custom analytics rules to detect threats
In the **Alert grouping** section, if you want a single incident to be generated
:::image type="content" source="media/tutorial-detect-threats-custom/automated-response-tab.png" alt-text="Define the automated response settings":::
-1. Select **Review and create** to review all the settings for your new alert rule. When the "Validation passed" message appears, select **Create** to initialize your alert rule.
+1. Select **Review and create** to review all the settings for your new analytics rule. When the "Validation passed" message appears, select **Create**.
:::image type="content" source="media/tutorial-detect-threats-custom/review-and-create-tab.png" alt-text="Review all settings and create the rule":::
In the **Alert grouping** section, if you want a single incident to be generated
- You can find your newly created custom rule (of type "Scheduled") in the table under the **Active rules** tab on the main **Analytics** screen. From this list you can enable, disable, or delete each rule.

-- To view the results of the alert rules you create, go to the **Incidents** page, where you can triage, [investigate incidents](investigate-cases.md), and remediate the threats.
+- To view the results of the analytics rules you create, go to the **Incidents** page, where you can triage incidents, [investigate them](investigate-cases.md), and [remediate the threats](respond-threats-during-investigation.md).
- You can update the rule query to exclude false positives. For more information, see [Handle false positives in Microsoft Sentinel](false-positives.md).
You can also push rules to Microsoft Sentinel via [API](/rest/api/securityinsigh
For more information, see:
-For more information, see:
- [Tutorial: Investigate incidents with Microsoft Sentinel](investigate-cases.md)
- [Classify and analyze data using entities in Microsoft Sentinel](entities.md)
- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
sentinel Entity Pages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/entity-pages.md
description: Use entity pages to get information about entities that you come ac
Previously updated : 07/26/2022 Last updated : 01/17/2023 # Investigate entities with entity pages in Microsoft Sentinel
More specifically, entity pages consist of three parts:
- The right-side panel presents [behavioral insights](#entity-insights) on the entity. These insights are continuously developed by Microsoft security research teams. They are based on various data sources and provide context for the entity and its observed activities, helping you to quickly identify [anomalous behavior](soc-ml-anomalies.md) and security threats.
+If you're investigating an incident using the **[new investigation experience](investigate-incidents.md) (now in Preview)**, you'll be able to see a panelized version of the entity page right inside the incident details page. You have a [list of all the entities in a given incident](investigate-incidents.md#explore-the-incidents-entities), and selecting an entity opens a side panel with three "cards"&mdash;**Info**, **Timeline**, and **Insights**&mdash; showing all the same information described above, within the specific time frame corresponding with that of the alerts in the incident.
## The timeline

:::image type="content" source="./media/identify-threats-with-entity-behavior-analytics/entity-pages-timeline.png" alt-text="Screenshot of an example of a timeline on an entity page.":::
sentinel Incident Investigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/incident-investigation.md
+
+ Title: Understand Microsoft Sentinel's incident investigation and case management capabilities
+description: This article describes Microsoft Sentinel's incident investigation and case management capabilities and features, taking you through the phases of a typical incident investigation while presenting all the displays and tools available to you to help you along.
+++ Last updated : 01/01/2023++
+# Understand Microsoft Sentinel's incident investigation and case management capabilities
+
+Microsoft Sentinel gives you a complete, full-featured case management platform for investigating and managing security incidents. **Incidents** are Microsoft Sentinel's name for case files that contain a complete and constantly updated chronology of a security threat, whether it's individual pieces of evidence (alerts), suspects and parties of interest (entities), insights collected and curated by security experts and AI/machine learning models, or comments and logs of all the actions taken in the course of the investigation.
+
+The incident investigation experience in Microsoft Sentinel begins with the **Incidents** page, a new experience designed to give you everything you need for your investigation in one place. The key goal of this new experience is to increase your SOC's efficiency and effectiveness, reducing its mean time to resolve (MTTR).
+
+> [!IMPORTANT]
+>
+> The new incident experience is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> Some of the individual functionalities mentioned below are also in **PREVIEW**. They will be so indicated.
+
+This article takes you through the phases of a typical incident investigation, presenting all the displays and tools available to you to help you along.
+
+## Increase your SOC's maturity
+
+Microsoft Sentinel gives you the tools to help your Security Operations (SecOps) maturity level up.
+
+### Standardize processes
+
+**Incident tasks** are workflow lists of tasks for analysts to follow to ensure a uniform standard of care and to prevent crucial steps from being missed. SOC managers and engineers can develop these task lists and have them automatically apply to different groups of incidents as appropriate, or across the board. SOC analysts can then access the assigned tasks within each incident, marking them off as they're completed. Analysts can also manually add tasks to their open incidents, either as self-reminders or for the benefit of other analysts who may collaborate on the incident (for example, due to a shift change or escalation).
+
+Learn more about [incident tasks](incident-tasks.md).
+
+### Audit incident management
+
+The incident **activity log** tracks actions taken on an incident, whether initiated by humans or automated processes, and displays them along with all the comments on the incident. You can add your own comments here as well. It gives you a complete record of everything that happened, ensuring thoroughness and accountability.
+
+## Investigate effectively and efficiently
+
+### See timeline
+
+First things first: As an analyst, the most basic question you want to answer is, why is this incident being brought to my attention? Entering an incident's details page will answer that question: right in the center of the screen, you'll see the **Incident timeline** widget. The timeline is the diary of all the **alerts** that represent all the logged events that are relevant to the investigation, in the order in which they happened. The timeline also shows **bookmarks**, snapshots of evidence collected while hunting and added to the incident. See the full details of any item on this list by selecting it. Many of these details&mdash;such as the original alert, the analytics rule that created it, and any bookmarks&mdash;appear as links that you can select to dive still deeper and learn more.
+
+Learn more about what you can do from the [incident timeline](investigate-incidents.md#incident-timeline).
+
+### Learn from similar incidents
+
+If anything you've seen so far in your incident looks familiar, there may be good reason. Microsoft Sentinel stays one step ahead of you by showing you the incidents most similar to the open one. The **Similar incidents** widget shows you the most relevant information about incidents deemed to be similar, including their last updated date and time, last owner, last status (including, if they are closed, the reason they were closed), and the reason for the similarity.
+
+This can benefit your investigation in several ways:
+
+- Spot concurrent incidents that may be part of a larger attack strategy.
+- Use similar incidents as reference points for your current investigation&mdash;see how they were dealt with.
+- Identify owners of past similar incidents to benefit from their knowledge.
+
+The widget shows you the 20 most similar incidents. Microsoft Sentinel decides which incidents are similar based on common elements including entities, the source analytics rule, and alert details. From this widget you can jump directly to any of these incidents' full details pages, while keeping the connection to the current incident intact.
+
+Learn more about what you can do with [similar incidents](investigate-incidents.md#similar-incidents-preview).
+
+### Examine top insights
+
+Next, having the broad outlines of what happened (or is still happening), and having a better understanding of the context, you'll be curious about what interesting information Microsoft Sentinel has already found out for you. It automatically asks the big questions about the entities in your incident and shows the top answers in the **Top insights** widget, visible on the right side of the incident details page. This widget shows a collection of insights based on both machine-learning analysis and the curation of top teams of security experts.
+
+These are a specially selected subset of the insights that appear on [entity pages](entity-pages.md#entity-insights), but in this context, insights for all the entities in the incident are presented together, giving you a more complete picture of what's happening. The full set of insights appears on the **Entities tab**, for each entity separately&mdash;see below.
+
+The **Top insights** widget answers questions about the entity relating to its behavior in comparison to its peers and its own history, its presence on watchlists or in threat intelligence, or any other sort of unusual occurrence relating to it.
+
+Most of these insights contain links to more information. These links open the Logs panel in-context, where you'll see the source query for that insight along with its results.
+
+### View entities
+
+Now that you have some context and some basic questions answered, you'll want to get some more depth on who the major players in this story are. Usernames, hostnames, IP addresses, file names, and other types of entities can all be "persons of interest" in your investigation. Microsoft Sentinel finds them all for you and displays them front and center in the **Entities** widget, alongside the timeline. Selecting an entity from this widget will pivot you to that entity's listing in the **Entities tab** on the same **incident page**.
+
+The **Entities tab** contains a list of all the entities in the incident. When an entity in the list is selected, a side panel opens containing a display based on the [entity page](entity-pages.md). The side panel contains three cards:
+- **Info** contains basic information about the entity. For a user account entity this might be things like the username, domain name, security identifier (SID), organizational information, security information, and more.
+- **Timeline** contains a list of the alerts that feature this entity and activities the entity has done, as collected from logs in which the entity appears.
+- **Insights** contains answers to questions about the entity relating to its behavior in comparison to its peers and its own history, its presence on watchlists or in threat intelligence, or any other sort of unusual occurrence relating to it. These answers are the results of queries defined by Microsoft security researchers that provide valuable and contextual security information on entities, based on data from a collection of sources.
+
+Depending on the entity type, you can take a number of further actions from this side panel:
+- Pivot to the entity's full [entity page](entity-pages.md) to get even more details over a longer timespan or launch the graphical investigation tool centered on that entity.
+- Run a [playbook](respond-threats-during-investigation.md) to take specific response or remediation actions on the entity (in Preview).
+- Classify the entity as an [indicator of compromise (IOC)](add-entity-to-threat-intelligence.md) and add it to your Threat intelligence list (in Preview).
+
+Each of these actions is currently supported for certain entity types and not for others. The following table shows which actions are supported for which entity types:
+
+| Available actions &#9654;<br>Entity types &#9660; | View full details<br>(in entity page) | Add to TI *<br>(Preview) | Run playbook *<br>(Preview) |
+| -- | :-: | :-: | :-: |
+| **User account** | &#10004; | | &#10004; |
+| **Host** | &#10004; | | &#10004; |
+| **IP address** | &#10004; | &#10004; | &#10004; |
+| **URL** | | &#10004; | &#10004; |
+| **Domain name** | | &#10004; | &#10004; |
+| **File (hash)** | | &#10004; | &#10004; |
+| **Azure resource** | &#10004; | | |
+| **IoT device** | &#10004; | | |
+
+\* For entities for which either or both of these two actions are available, you can take those actions right from the **Entities** widget in the **Overview tab**, never leaving the incident page.
+
+### Explore logs
+
+Now you'll want to get down into the details to know *what exactly happened?* From almost any of the places mentioned above, you can drill down into the individual alerts, entities, insights, and other items contained in the incident, viewing the original query and its results. These results are displayed in the Logs (log analytics) screen that appears here as a panel extension of the incident details page, so you don't leave the context of the investigation.
+
+### Keep your records in order
+
+Finally, in the interests of transparency, accountability, and continuity, you'll want a record of all the actions that have been taken on the incident, whether by automated processes or by people. The incident **activity log** shows you all of these activities. You can also see any comments that have been made and add your own. The activity log is constantly auto-refreshing, even while open, so you can see changes to it in real time.
++
+## Next steps
+
+In this document, you learned how the incident investigation experience in Microsoft Sentinel helps you [carry out an investigation in a single context](investigate-incidents.md). For more information about managing and investigating incidents, see the following articles:
+
+- [Use tasks to manage incidents in Microsoft Sentinel](incident-tasks.md)
+- [Investigate entities with entity pages in Microsoft Sentinel](entity-pages.md).
+- [Automate incident handling in Microsoft Sentinel with automation rules](automate-incident-handling-with-automation-rules.md).
+- [Identify advanced threats with User and Entity Behavior Analytics (UEBA) in Microsoft Sentinel](identify-threats-with-entity-behavior-analytics.md)
+- [Hunt for security threats](./hunting.md).
sentinel Investigate Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/investigate-incidents.md
+
+ Title: Navigate and investigate incidents in Microsoft Sentinel
+description: This article takes you through all the panels and options available on the incident details page, helping you navigate and investigate your incidents more quickly, effectively, and efficiently, and reducing your mean time to resolve (MTTR).
+++ Last updated : 01/17/2023++
+# Navigate and investigate incidents in Microsoft Sentinel
+
+Microsoft Sentinel gives you a complete, full-featured case management platform for investigating security incidents. The **Incident details** page is your central location from which to run your investigation, collecting all the relevant information and all the applicable tools and tasks in one screen.
+
+This article takes you through all the panels and options available on the incident details page, helping you navigate and investigate your incidents more quickly, effectively, and efficiently, and reducing your mean time to resolve (MTTR).
+
+> [!IMPORTANT]
+>
+> The new incident experience is currently in **PREVIEW**. See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for additional legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
+>
+> Some of the individual functionalities mentioned below are also in **PREVIEW**. They will be so indicated.
+
+See instructions for the [previous version of incident investigation](investigate-cases.md).
+
+Incidents are your case files that contain an aggregation of all the relevant evidence for specific investigations. Each incident is created (or added to) based on pieces of evidence ([alerts](detect-threats-built-in.md)) that were either generated by analytics rules or imported from third-party security products that produce their own alerts. Incidents inherit the [entities](entities.md) contained in the alerts, as well as the alerts' properties, such as severity, status, and MITRE ATT&CK tactics and techniques.
+
+## Prerequisites
+
+- The [**Microsoft Sentinel Responder**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-responder) role assignment is required to investigate incidents.
+
+ Learn more about [roles in Microsoft Sentinel](roles.md).
+
+- If you have a guest user who needs to assign incidents, the user must be assigned the [Directory Reader](../active-directory/roles/permissions-reference.md#directory-readers) role in your Azure AD tenant. Regular (non-guest) users have this role assigned by default.
+
+## Navigate and triage incidents
+
+### The Incidents page
+
+1. From the Microsoft Sentinel navigation menu, under **Threat management**, select **Incidents**.
+
+ The **Incidents** page gives you basic information about all of your open incidents.
+
+ - Across the top of the screen you have the counts of open incidents, whether new or active, and the counts of open incidents by severity. You also have the **banner** with actions you can take outside of a specific incident&mdash;either on the grid as a whole, or on multiple selected incidents.
+
+ - In the central pane, you have the **incident grid**, a list of incidents as filtered by the filtering controls at the top of the list, and a search bar to find specific incidents.
+
+    - On the right side, you have a **details pane** that shows important information about the incident highlighted in the central list, along with buttons for taking certain actions on that incident.
+
+ :::image type="content" source="media/investigate-incidents/incident-grid.png" alt-text="Screenshot of view of incident severity." lightbox="media/investigate-incidents/incident-grid.png":::
+
+1. Your Security Operations team may have [**automation rules**](automate-incident-handling-with-automation-rules.md#automatic-assignment-of-incidents) in place to perform basic triage on new incidents and assign them to the proper personnel.
+
+ In that case, filter the incident list by **Owner** to limit the list to the incidents assigned to you or to your team. This filtered set represents your personal workload.
+
+ Otherwise, you can perform basic triage yourself. You can start by filtering the list of incidents by available filtering criteria, whether status, severity, or product name. For more information, see [Search for incidents](#search-for-incidents).
+
+1. Triage a specific incident and take some actions on it immediately, right from the **details pane** on the **Incidents** page, without having to enter the incident's full details page.
+
+ - **Investigate Microsoft 365 Defender incidents in Microsoft 365 Defender:** Follow the [**Investigate in Microsoft 365 Defender**](microsoft-365-defender-sentinel-integration.md) link to pivot to the parallel incident in the Defender portal. Any changes you make to the incident in Microsoft 365 Defender will be synchronized to the same incident in Microsoft Sentinel.
+
+ - **Open the list of assigned tasks:** Incidents for which any tasks have been assigned will display a count of completed and total tasks and a **View full details** link. Follow the link to open the [**Incident tasks**](incident-tasks.md) panel to see the list of tasks for this incident.
+
+ - **Assign ownership of the incident** to a user or group by selecting from the **Owner** drop-down list.
+
+ :::image type="content" source="media/investigate-incidents/assign-incident-to-user.png" alt-text="Screenshot of assigning incident to user.":::
+
+ Recently selected users and groups will appear at the top of the pictured drop-down list.
+
+    - **Update the incident's status** (for example, from **New** to **Active** or **Closed**) by selecting from the **Status** drop-down list. When closing an incident, you'll be required to specify a reason. [See below for instructions](#closing-an-incident).
+
+    - **Change the incident's severity** by selecting from the **Severity** drop-down list.
+
+ - **Add tags** to categorize your incidents. You may need to scroll down to the bottom of the details pane to see where to add tags.
+
+ - **Add comments** to log your actions, ideas, questions, and more. You may need to scroll down to the bottom of the details pane to see where to add comments.
+
+1. If the information in the **details pane** is sufficient to prompt further remediation or mitigation actions, select the **Actions** button at the bottom of the **details pane** to do one of the following:
+
+ - **Investigate:** use the [graphical investigation tool](#investigate-incidents-visually-using-the-investigation-graph) to discover relationships between alerts, entities, and activities, both within this incident and across other incidents.
+
+ - **Run playbook (Preview):** run a [playbook](automate-responses-with-playbooks.md#run-a-playbook-manually) on this incident to take particular [enrichment, collaboration, or response actions](automate-responses-with-playbooks.md#use-cases-for-playbooks) such as your SOC engineers may have made available.
+
+ - **Create automation rule:** create an [automation rule](automate-incident-handling-with-automation-rules.md#common-use-cases-and-scenarios) that will run only on incidents like this one (generated by the same analytics rule) in the future, in order to reduce your future workload or to account for a temporary change in requirements (such as for a penetration test).
+
+ - **Create team (Preview):** create a team in Microsoft Teams to collaborate with other individuals or teams across departments on handling the incident.
+
+ :::image type="content" source="media/investigate-incidents/incident-actions.png" alt-text="Screenshot of menu of actions that can be performed on an incident from the details pane.":::
+
+1. If more information about the incident is needed, select **View full details** in the details pane to open and see the incident's details in their entirety, including the alerts and entities in the incident, a list of similar incidents, and selected top insights.
+
+See the next sections of this article to follow a typical investigation path, learning in the process about all the information you'll see there, and all the actions you can take.
+
+## Investigate your incident in depth
+
+Microsoft Sentinel offers a complete, full-featured incident investigation and case management experience so you can investigate, remediate, and resolve incidents more quickly and efficiently. Here's the new incident details page:
++
+### Prepare the ground properly
+
+As you're setting up to investigate an incident, assemble the things you'll need to direct your workflow. You'll find the following tools on a button bar at the top of the incident page, right below the title.
++
+1. Select **Tasks (Preview)** to [see the tasks assigned for this incident](work-with-tasks.md#view-and-follow-incident-tasks), or to [add your own tasks](work-with-tasks.md#manually-add-an-ad-hoc-task-to-an-incident).
+
+ Learn more about [using incident tasks](incident-tasks.md) to improve process standardization in your SOC.
+
+1. Select **Activity log** to see if any actions have already been taken on this incident&mdash;by automation rules, for example&mdash;and any comments that have been made. You can add your own comments here as well. See [more about the activity log below](#audit-and-comment-on-incidents).
+
+1. Select **Logs** at any time to open a full, blank Log Analytics query window *inside* the incident page. Compose and run a query, related or not, without leaving the incident. So, whenever you're struck with sudden inspiration to go chasing a thought, don't worry about interrupting your flow. Logs is there for you.
+
+ See [more about Logs](#dive-deeper-into-your-data-in-logs) below.
+
+You'll also see the **Incident actions** button opposite the **Overview** and **Entities** tabs. Here you have the same actions described above as available from the **Actions** button on the details pane on the **Incidents** grid page. The only one missing here is **Investigate**, which is available instead on the left-hand details panel.
++
+To recap the available actions under the **Incident actions** button:
+
+- **Run playbook:** run a [playbook](automate-responses-with-playbooks.md#run-a-playbook-manually) on this incident to take particular [enrichment, collaboration, or response actions](automate-responses-with-playbooks.md#use-cases-for-playbooks) such as your SOC engineers may have made available.
+
+- **Create automation rule:** create an [automation rule](automate-incident-handling-with-automation-rules.md#common-use-cases-and-scenarios) that will run only on incidents like this one (generated by the same analytics rule) in the future, in order to reduce your future workload or to account for a temporary change in requirements (such as for a penetration test).
+
+- **Create team (Preview):** create a team in Microsoft Teams to collaborate with other individuals or teams across departments on handling the incident. If a team has already been created for this incident, this menu item will display as **Open Teams**.
++
+### Get the whole picture on the incident details page
+
+The left-hand panel of the incident details page contains the same incident detail information that you saw on the **Incidents** page to the right of the grid, and it's pretty much unchanged from the previous version. This panel is always on display, no matter which tab is shown on the rest of the page. From there, you can see the incident's basic information, and drill down in the following ways:
+
+- Select **Events**, **Alerts**, or **Bookmarks** to open the **Logs** panel *within the incident page*. The **Logs** panel will display with the query of whichever of the three you selected, and you can go through the query results in depth, without pivoting away from the incident. [Learn more about Logs](#dive-deeper-into-your-data-in-logs).
+
+- Select any of the entries under **Entities** to display it in the **Entities tab**. (Only the first four entities in the incident are shown here. See the rest of them by selecting **View all**, or in the **Entities** widget on the **Overview tab**, or in the **Entities tab**.) [Learn what you can do in the **Entities tab**](#entities-tab).
+
+ :::image type="content" source="media/investigate-incidents/details-panel.png" alt-text="Screenshot of details panel in incident details page.":::
+
+You can also select **Investigate** to open the incident in the [graphical investigation tool](#investigate-incidents-visually-using-the-investigation-graph) that diagrams relationships between all the elements of the incident.
+
+This panel can also be collapsed into the left margin of the screen by selecting the small, left-pointing double arrow next to the **Owner** drop-down. Even in this minimized state, however, you will still be able to change the owner, status, and severity.
++
+The rest of the incident details page is divided into two tabs, **Overview** and **Entities**.
+
+The **Overview** tab contains the following widgets, each of which represents an essential objective of your investigation.
+
+- The **Incident timeline** widget shows you the timeline of alerts and [bookmarks](bookmarks.md) in the incident, which can help you reconstruct the timeline of attacker activity. Select an individual item to see all of its details, enabling you to drill down further.
+
+ [Learn more about the **Incident timeline** widget below](#incident-timeline).
+
+- In the **Similar incidents (Preview)** widget, you'll see a collection of up to 20 other incidents that most closely resemble the current incident. This allows you to view the incident in a larger context and helps direct your investigation.
+
+ [Learn more about the **Similar incidents** widget below](#similar-incidents-preview).
+
+- The **Entities** widget shows you all the [entities](entities.md) that have been identified in the alerts. These are the objects that played a role in the incident, whether they be users, devices, addresses, files, or [any other types](./entities-reference.md). Select an entity to see its full details (which will be displayed in the **Entities tab**&mdash;see below).
+
+ [Learn more about the **Entities** widget below](#explore-the-incidents-entities).
+
+- Finally, in the **Top insights** widget, you'll see a collection of results of queries defined by Microsoft security researchers that provide valuable and contextual security information on all the entities in the incident, based on data from a collection of sources.
+
+ [Learn more about the **Top insights** widget below](#get-the-top-insights-into-your-incident).
+
+The **Entities** tab shows you the complete list of entities in the incident (the same ones as in the Entities widget above). When you select an entity in the widget, you're directed here to see the entity's full dossier&mdash;its identifying information, a timeline of its activity (both within and outside the incident), and the full set of insights about the entity, just as you would see in its full entity page (but limited to the time frame appropriate to the incident).
+
+### Incident timeline
+
+The **Incident timeline** widget shows you the timeline of alerts and [bookmarks](bookmarks.md) in the incident, which can help you reconstruct the timeline of attacker activity.
+
+You can search the list of alerts and bookmarks, or filter the list by severity, tactics, or content type (alert or bookmark), to help you find the item you want to pursue.
+
+The initial display of the timeline immediately tells you several important things about each item in it, whether alert or bookmark:
+
+- The **date and time** of the creation of the alert or bookmark.
+- The **type** of item, alert or bookmark, indicated by an icon and a ToolTip when you hover over the icon.
+- The **name** of the alert or the bookmark, in bold type on the first line of the item.
+- The **severity** of the alert, indicated by a color band along the left edge, and in word form at the beginning of the three-part "subtitle" of the alert.
+- The **alert provider**, in the second part of the subtitle. For bookmarks, the **creator** of the bookmark.
+- The MITRE ATT&CK **tactics** associated with the alert, indicated by icons and ToolTips, in the third part of the subtitle.
+
+Hover over any icon or incomplete text element to see a ToolTip with the full text of that icon or text element. These ToolTips come in handy when the displayed text is truncated due to the limited width of the widget. See the example in this screenshot:
++
+Select an individual alert or bookmark to see its full details.
+
+- **Alert details** include the alert's severity and status, the analytics rules that generated it, the product that produced the alert, the entities mentioned in the alert, the associated MITRE ATT&CK tactics and techniques, and the internal **System alert ID**.
+
+    Select the **System alert ID** link to drill down even further into the alert, opening the **Logs** panel and displaying the query that generated the results and the events that triggered the alert. (A sketch of this kind of query appears after this list.)
+
+- **Bookmark details** aren't exactly the same as alert details; while they too include entities, MITRE ATT&CK tactics and techniques, and the **bookmark ID**, they also include the raw result and the bookmark creator information.
+
+ Select the **View bookmark logs** link to open the **Logs** panel and display the query that generated the results that were saved as the bookmark.
+
+ :::image type="content" source="media/investigate-incidents/alert-details.png" alt-text="Screenshot of the details of an alert displayed in the incident details page.":::
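+
+If you'd rather run that kind of drill-down by hand from the **Logs** panel, the following is a minimal sketch of what such a query might look like. The GUID is a placeholder; substitute the **System alert ID** copied from the alert details.
+
+```kusto
+// Sketch: retrieve a single alert's record by its System alert ID.
+// The GUID below is a placeholder - copy the real one from the alert details.
+SecurityAlert
+| where SystemAlertId == "00000000-0000-0000-0000-000000000000"
+| project TimeGenerated, AlertName, AlertSeverity, Tactics, Entities
+```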
+
+From the incident timeline widget, you can also take the following actions on alerts and bookmarks:
+
+- Run a playbook on the alert to take immediate action to mitigate a threat. Sometimes you need to block or isolate a threat before you continue investigating. [Learn more about running playbooks on alerts](tutorial-respond-threats-playbook.md#run-a-playbook-manually-on-an-alert).
+
+- Remove an alert from an incident. You can remove alerts that were added to incidents after their creation if you judge them to not be relevant. [Learn more about removing alerts from incidents](relate-alerts-to-incidents.md#remove-an-alert-from-an-incident).
+
+- Remove a bookmark from an incident, or edit those fields in the bookmark that can be edited (not shown).
+
+ :::image type="content" source="media/investigate-incidents/remove-alert.png" alt-text="Screenshot of removing an alert from an incident.":::
+
+### Similar incidents (preview)
+
+As a security operations analyst, when investigating an incident you'll want to pay attention to its larger context. For example, you'll want to see if other incidents like this have happened before or are happening now.
+
+- You might want to identify concurrent incidents that may be part of the same larger attack strategy.
+
+- You might want to identify similar incidents in the past, to use them as reference points for your current investigation.
+
+- You might want to identify the owners of past similar incidents, to find the people in your SOC who can provide more context, or to whom you can escalate the investigation.
+
+The **similar incidents** widget in the incident details page, now in preview, presents up to 20 other incidents that are the most similar to the current one. Similarity is calculated by internal Microsoft Sentinel algorithms, and the incidents are sorted and displayed in descending order of similarity.
++
+As with the incident timeline widget, you can hover over any text that's incompletely displayed due to column width to reveal the full text.
+
+There are three criteria by which similarity is determined:
+
+- **Similar entities:** An incident is considered similar to another incident if they both include the same [entities](entities.md). The more entities two incidents have in common, the more similar they are considered to be.
+
+- **Similar rule:** An incident is considered similar to another incident if they were both created by the same [analytics rule](detect-threats-built-in.md).
+
+- **Similar alert details:** An incident is considered similar to another incident if they share the same title, product name, and/or [custom details](surface-custom-details-in-alerts.md).
+
+The reasons an incident appears in the similar incidents list are displayed in the **Similarity reason** column. Hover over the info icon to show the common items (entities, rule name, or details).
++
+Incident similarity is calculated based on data from the 14 days prior to the last activity in the incident&mdash;that is, the end time of the most recent alert in the incident.
+
+Incident similarity is recalculated every time you enter the incident details page, so the results may vary between sessions if new incidents were created or updated.
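+
+The similarity scoring itself is internal to Microsoft Sentinel, but you can roughly approximate one of its criteria (similar alert details) yourself by grouping recent incidents by title in the *SecurityIncident* table. The following is an illustrative sketch only, not the widget's actual algorithm:
+
+```kusto
+// Illustrative sketch - not the widget's actual algorithm.
+// Take the latest record per incident, then look for recurring titles.
+SecurityIncident
+| where TimeGenerated > ago(14d)
+| summarize arg_max(TimeGenerated, *) by IncidentNumber
+| summarize Occurrences = count(), ExampleIncident = take_any(IncidentNumber) by Title
+| where Occurrences > 1
+| order by Occurrences desc
+```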
+
+### Get the top insights into your incident
+
+Microsoft Sentinel's security experts have built queries that automatically ask the big questions about the entities in your incident. You can see the top answers in the **Top insights** widget, visible on the right side of the incident details page. This widget shows a collection of insights based on both machine-learning analysis and the curation of top teams of security experts.
+
+These are some of the same insights that appear on [entity pages](entity-pages.md#entity-insights), specially selected for helping you triage quickly and understand the scope of the threat. For the same reason, insights for all the entities in the incident are presented together to give you a more complete picture of what's happening.
+
+The following are the currently selected top insights (the list is subject to change):
+
+1. Actions *by* account.
+1. Actions *on* account.
+1. [UEBA insights](identify-threats-with-entity-behavior-analytics.md).
+1. Threat indicators related to user.
+1. Watchlist insights (Preview).
+1. Anomalously high number of a security event.
+1. Windows sign-in activity.
+1. IP address remote connections.
+1. IP address remote connections with TI match.
+
+Each of these insights (except for the ones relating to watchlists, for now) has a link you can select to open the underlying query in the [**Logs** panel that opens in the incident page](#dive-deeper-into-your-data-in-logs). You can then drill down into the query's results.
+
+The time frame for the **Top insights** widget is from 24 hours before the earliest alert in the incident until the time of the latest alert.
+
+### Explore the incident's entities
+
+The **Entities** widget shows you all the [entities](entities.md) that have been identified in the alerts in the incident. These are the objects that played a role in the incident, whether they be users, devices, addresses, files, or [any other types](./entities-reference.md).
+
+You can search the list of entities in the entities widget, or filter the list by entity type, to help you find an entity.
++
+If you already know that a particular entity is a known indicator of compromise, select the three dots on the entity's row and choose **Add to TI (Preview)** to [add the entity to your threat intelligence](add-entity-to-threat-intelligence.md). (This option is available for [supported entity types](incident-investigation.md#view-entities).)
+
+If you want to [trigger an automatic response sequence for a particular entity](respond-threats-during-investigation.md), select the three dots and choose **Run playbook (Preview)**. (This option is available for [supported entity types](incident-investigation.md#view-entities).)
+
+Select an entity to see its full details. When you select an entity, you will move from the **Overview tab** to the **Entities tab**, another part of the incident details page.
+
+#### Entities tab
+
+The **Entities tab** shows a list of all the entities in the incident.
++
+Like the entities widget, this list can also be searched and filtered by entity type. Searches and filters applied in one list won't apply to the other.
+
+Select a row in the list for that entity's information to be displayed in a side panel to the right.
+
+If the entity name appears as a link, selecting the entity's name will redirect you to the full [entity page](entity-pages.md), outside the incident investigation page. To display just the side panel without leaving the incident, select the row in the list where the entity appears, but don't select its name.
+
+You can take the same actions here that you can take from the widget on the overview page. Select the three dots in the row of the entity to either run a playbook or add the entity to your threat intelligence.
+
+You can also take these actions by selecting the button next to **View full details** at the bottom of the side panel. The button will read **Add to TI (Preview)**, **Run playbook (Preview)**, or **Entity actions**&mdash;in the last case, a menu will appear with the other two choices.
+
+The **View full details** button itself will redirect you to the entity's full entity page.
+
+The side panel features three cards:
+
+- **Info** contains identifying information about the entity. For example, for a user account entity this might be things like the username, domain name, security identifier (SID), organizational information, security information, and more, and for an IP address it would include, for example, geolocation.
+
+- **Timeline** contains a list of the alerts, [bookmarks](bookmarks.md), and [anomalies](soc-ml-anomalies.md) that feature this entity, and also activities the entity has performed, as collected from logs in which the entity appears. All alerts featuring this entity will be in this list, *whether or not* the alerts belong to this incident.
+
+ Alerts that are not part of the incident will be displayed differently: the shield icon will be grayed out, the severity color band will be a dotted line instead of a solid line, and there will be a button with a plus sign on the right side of the alert's row.
+
+ :::image type="content" source="media/investigate-incidents/entity-timeline.png" alt-text="Screenshot of entity timeline in entities tab.":::
+
+ Select the plus sign to [add the alert to this incident](relate-alerts-to-incidents.md). When the alert is added to the incident, all the alert's other entities (that weren't already part of the incident) are also added to it. Now you can further expand your investigation by looking at *those* entities' timelines for related alerts.
+
+ This timeline is limited to alerts and activities over the prior seven days. To go further back, pivot to the timeline in the full entity page, whose time frame is customizable.
+
+- **Insights** contains results of queries defined by Microsoft security researchers that provide valuable and contextual security information on entities, based on data from a collection of sources. These insights include the ones from the **Top insights** widget and many more; they are the same ones that appear on the full [entity page](entity-pages.md), but over a limited time frame: starting from 24 hours before the earliest alert in the incident, and ending with the time of the latest alert.
+
+ Most insights contain links which, when selected, open the [**Logs** panel](#dive-deeper-into-your-data-in-logs) displaying the query that generated the insight along with its results.
+
+### Focus your investigation
+
+Learn how you can broaden or narrow the scope of your investigation by either [adding alerts to your incidents or removing alerts from incidents](relate-alerts-to-incidents.md).
+
+### Dive deeper into your data in Logs
+
+From just about anywhere in the investigation experience, you'll be able to select a link that will open an underlying query in the **Logs** panel, in the context of the investigation. If you got to the Logs panel from one of these links, the corresponding query will appear in the query window, and the query will run automatically and generate the appropriate results for you to explore.
+
+You can also call an empty Logs panel inside the incident details page anytime, if you think of a query you want to try while investigating, while remaining in context. To do this, select **Logs** at the top of the page.
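+
+For example, if the incident surfaced a suspicious user account, an ad-hoc query you compose there might look something like this sketch. The user principal name is a placeholder; substitute an account from your own investigation.
+
+```kusto
+// Sketch: failed sign-ins over the past week for a user seen in the incident.
+// The UPN below is a placeholder.
+SigninLogs
+| where TimeGenerated > ago(7d)
+| where UserPrincipalName =~ "adele.vance@contoso.com"
+| where ResultType != "0"    // non-zero result types indicate sign-in failures
+| summarize FailedAttempts = count() by AppDisplayName, ResultDescription
+| order by FailedAttempts desc
+```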
+
+However you end up on the Logs panel, if you've run a query whose results you want to save:
+
+1. Mark the check box next to the row you want to save from among the results. To save all the results, mark the check box at the top of the column.
+
+1. Save the marked results as a bookmark. You have two options to do this:
+
+ - Select **Add bookmark to the current incident** to create a bookmark and add it to the open incident. Follow the [bookmark instructions](bookmarks.md) to complete the process. Once completed, the bookmark will appear in the incident timeline.
+
+ - Select **Add bookmark** to create a bookmark without adding it to any incident. Follow the [bookmark instructions](bookmarks.md) to complete the process. You'll be able to find this bookmark along with any others you've created on the **Hunting** page, under the **Bookmarks** tab. From there you can add it to this or any other incident.
+
+1. After creating the bookmark (or if you choose not to), select **Done** to close the **Logs** panel.
++
+## Audit and comment on incidents
+
+When investigating an incident, you'll want to thoroughly document the steps you take, both to ensure accurate reporting to management and to enable seamless cooperation and collaboration amongst coworkers. You'll also want to clearly see records of any actions taken on the incident by others, including by automated processes. Microsoft Sentinel gives you the **Activity log**, a rich audit and commenting environment, to help you accomplish this.
+
+You can also enrich your incidents automatically with comments. For example, when you run a playbook on an incident that fetches relevant information from external sources (say, checking a file for malware at VirusTotal), you can have the playbook place the external source's response&mdash;along with any other information you define&mdash;in the incident's comments.
+
+The activity log auto-refreshes, even while open, so that you can always see changes in real time. You'll also be notified of any changes made to the activity log while you have it open.
+
+To view the log of activities and comments, or to add your own comments:
+
+1. Select **Activity log** at the top of the incident details page.
+1. To filter the log to show either only activities or only comments, select the filter control at the top of the log.
+1. If you want to add a comment, enter it in the rich text editor at the bottom of the **Incident activity log** panel.
+1. Select **Comment** to submit the comment. You'll now see your comment at the top of the log.
+++
+### Considerations for comments
+
+The following are several considerations to take into account when using incident comments.
+
+**Supported input:**
+
+- **Text:** Comments in Microsoft Sentinel support text inputs in plain text, basic HTML, and Markdown. You can also paste copied text, HTML, and Markdown into the comment window.
+
+- **Links:** Links must be in the form of HTML anchor tags, and they must have the parameter `target="_blank"`. Example:
+
+ ```html
+ <a href="https://www.url.com" target="_blank">link text</a>
+ ```
+
+ > [!NOTE]
+ >
+ > If you have playbooks that create comments in incidents, links in those comments must now conform to this template as well, due to a change in the format of comments.
+
+- **Images:** You can insert links to images in comments and the images will be displayed inline, but the images must already be hosted in a publicly accessible location such as Dropbox, OneDrive, or Google Drive. Images can't be uploaded directly to comments.
+
+**Size limit:**
+
+- **Per comment:** A single comment can contain up to **30,000 characters**.
+
+- **Per incident:** A single incident can contain up to **100 comments**.
+
+ > [!NOTE]
+ > The size limit of a single incident record in the *SecurityIncident* table in Log Analytics is 64 KB. If this limit is exceeded, comments (starting with the earliest) will be truncated, which may affect the comments that will appear in [advanced search](#search-for-incidents) results.
+ >
+ > The actual incident records in the incidents database will not be affected.
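+
+If you want to see which incidents are at risk of comment truncation, one approach (a sketch, assuming the *Comments* column is populated in your workspace's *SecurityIncident* table) is to measure the serialized size of each incident's comments:
+
+```kusto
+// Sketch: find incidents whose comments approach the 64-KB record limit.
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber
+| extend CommentsSize = strlen(tostring(Comments))    // approximate size in characters
+| where CommentsSize > 50000
+| project IncidentNumber, Title, CommentsSize
+| order by CommentsSize desc
+```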
+
+**Who can edit or delete:**
+
+- **Editing:** Only the author of a comment has permission to edit it.
+
+- **Deleting:** Only users with the [Microsoft Sentinel Contributor](roles.md) role have permission to delete comments. Even the comment's author must have this role in order to delete it.
+
+## Investigate incidents visually using the investigation graph
+
+If you prefer a visual, graphical representation of alerts, entities, and the connections between them in your investigation, you can accomplish many of the things discussed above with the classic investigation graph as well. The downside of the graph is that you'll end up having to switch contexts a great deal more.
+
+The investigation graph provides you with:
+
+- **Visual context from raw data**: The live, visual graph displays entity relationships extracted automatically from the raw data. This enables you to easily see connections across different data sources.
+
+- **Full investigation scope discovery**: Expand your investigation scope using built-in exploration queries to surface the full scope of a breach.
+
+- **Built-in investigation steps**: Use predefined exploration options to make sure you are asking the right questions in the face of a threat.
+
+To use the investigation graph:
+
+1. Select an incident, then select **Investigate**. This takes you to the investigation graph. The graph provides an illustrative map of the entities directly connected to the alert, and of each resource connected further out.
++
+ [![View map.](media/investigate-incidents/investigation-map.png)](media/investigate-incidents/investigation-map.png#lightbox)
+
+ > [!IMPORTANT]
+ > - You'll only be able to investigate the incident if the analytics rule or bookmark that generated it contains entity mappings. The investigation graph requires that your original incident includes entities.
+ >
+ > - The investigation graph currently supports investigation of **incidents up to 30 days old**.
++
+1. Select an entity to open the **Entities** pane so you can review information on that entity.
+
+ ![View entities in map](media/investigate-incidents/map-entities.png)
+
+1. Expand your investigation by hovering over each entity to reveal a list of questions that were designed by our security experts and analysts per entity type to deepen your investigation. We call these options **exploration queries**.
+
+ ![Explore more details](media/investigate-incidents/exploration-cases.png)
+
+   For example, you can request related alerts. If you select an exploration query, the resulting entities are added back to the graph. In this example, selecting **Related alerts** returned the following alerts into the graph:
+
+ :::image type="content" source="media/investigate-incidents/related-alerts.png" alt-text="Screenshot: view related alerts." lightbox="media/investigate-incidents/related-alerts.png":::
+
+   Notice that the related alerts appear connected to the entity by dotted lines.
+
+1. For each exploration query, you can open the raw event results and the query used in Log Analytics by selecting **Events\>**.
+
+1. To help you understand the incident, the graph gives you a parallel timeline.
+
+ :::image type="content" source="media/investigate-incidents/map-timeline.png" alt-text="Screenshot: view timeline in map." lightbox="media/investigate-incidents/map-timeline.png":::
+
+1. Hover over the timeline to see which events on the graph occurred at what point in time.
+
+   :::image type="content" source="media/investigate-incidents/use-timeline.png" alt-text="Screenshot: use timeline in map to investigate alerts." lightbox="media/investigate-incidents/use-timeline.png":::
+++
+## Closing an incident
+
+Once you have resolved a particular incident (for example, when your investigation has reached its conclusion), you should set the incident's status to **Closed**. When you do so, you will be asked to classify the incident by specifying the reason you are closing it. This step is mandatory. Select **Select classification** and choose one of the following from the drop-down list:
+
+- True Positive &ndash; suspicious activity
+- Benign Positive &ndash; suspicious but expected
+- False Positive &ndash; incorrect alert logic
+- False Positive &ndash; incorrect data
+- Undetermined
++
+For more information about false positives and benign positives, see [Handle false positives in Microsoft Sentinel](false-positives.md).
+
+After choosing the appropriate classification, add some descriptive text in the **Comment** field. This will be useful in the event you need to refer back to this incident. Select **Apply** when you're done, and the incident will be closed.
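+
+Closed incidents and their classifications are recorded in the *SecurityIncident* table, so you can review your team's closing patterns from the **Logs** blade. A minimal sketch of such a query:
+
+```kusto
+// Sketch: summarize how incidents were classified at closing, past 30 days.
+SecurityIncident
+| where TimeGenerated > ago(30d)
+| summarize arg_max(TimeGenerated, *) by IncidentNumber    // latest record per incident
+| where Status == "Closed"
+| summarize Incidents = count() by Classification, ClassificationReason
+| order by Incidents desc
+```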
++
+## Search for incidents
+
+To find a specific incident quickly, enter a search string in the search box above the incidents grid and press **Enter** to modify the list of incidents shown accordingly. If your incident isn't included in the results, you may want to narrow your search by using **Advanced search** options.
+
+To modify the search parameters, select the **Search** button and then select the parameters on which you want to run your search.
+
+For example:
++
+By default, incident searches run across the **Incident ID**, **Title**, **Tags**, **Owner**, and **Product name** values only. In the search pane, scroll down the list to select one or more other parameters to search, and select **Apply** to update the search parameters. Select **Set to default** to reset the selected parameters to the default option.
++
+> [!NOTE]
+> Searches in the **Owner** field support both names and email addresses.
+>
+
+Using advanced search options changes the search behavior as follows:
+
+| Search behavior | Description |
+|||
+| **Search button color** | The color of the search button changes, depending on the types of parameters currently being used in the search. <ul><li>As long as only the default parameters are selected, the button is grey. <li>As soon as different parameters are selected, such as advanced search parameters, the button turns blue. |
+| **Auto-refresh** | Using advanced search parameters prevents you from selecting to automatically refresh your results. |
+| **Entity parameters** | All entity parameters are supported for advanced searches. When searching in any entity parameter, the search runs in all entity parameters. |
+| **Search strings** | Searching for a string of words includes all of the words in the search query. Search strings are case sensitive. |
+| **Cross workspace support** | Advanced searches are not supported for cross-workspace views. |
+| **Number of search results displayed** | When you're using advanced search parameters, only 50 results are shown at a time. |
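+
+If even the advanced search doesn't surface what you need, you can query the *SecurityIncident* table directly from the **Logs** blade. Here's a rough sketch; the search term is a placeholder:
+
+```kusto
+// Sketch: free-text search over incident titles and descriptions.
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber    // latest record per incident
+| where Title has "ransomware" or Description has "ransomware"
+| project IncidentNumber, Title, Severity, Status, Owner
+```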
++
+> [!TIP]
+> If you're unable to find the incident you're looking for, remove search parameters to expand your search. If your search results in too many items, add more filters to narrow down your results.
+>
++
+## Next steps
+In this article, you learned how to get started investigating incidents using Microsoft Sentinel. For more information, see:
+
+- [Investigate incidents comprehensively in Microsoft Sentinel](incident-investigation.md)
+- [Tutorial: Use playbooks with automation rules in Microsoft Sentinel](tutorial-respond-threats-playbook.md)
+- [Investigate incidents with UEBA data](investigate-with-ueba.md)
sentinel Relate Alerts To Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/relate-alerts-to-incidents.md
Title: Relate alerts to incidents in Microsoft Sentinel | Microsoft Docs description: This article shows you how to relate alerts to your incidents in Microsoft Sentinel. - Previously updated : 05/12/2022 + Last updated : 01/17/2023 # Relate alerts to incidents in Microsoft Sentinel
One thing that this feature allows you to do is to include alerts from one data
This feature is built into the latest version of the Microsoft Sentinel API, which means that it's available to the Logic Apps connector for Microsoft Sentinel. So you can use playbooks to automatically add an alert to an incident if certain conditions are met.
-You can also use this automation to add alerts to [manually-created incidents](create-incident-manually.md), to create custom correlations, or to define custom criteria for grouping alerts into incidents when they're created.
+You can also use this automation to add alerts to [manually created incidents](create-incident-manually.md), to create custom correlations, or to define custom criteria for grouping alerts into incidents when they're created.
+
+### Limitations
+
+- Microsoft Sentinel imports both alerts and incidents from Microsoft 365 Defender. For the most part, you can treat these alerts and incidents like regular Microsoft Sentinel alerts and incidents.
+
+   However, you can only add Defender alerts to Defender incidents (or remove them) in the Defender portal, not in the Sentinel portal. If you try doing this in Microsoft Sentinel, you will get an error message. You can pivot to the incident in the Microsoft 365 Defender portal using the link in the Microsoft Sentinel incident. Don't worry, though&mdash;any changes you make to the incident in the Microsoft 365 Defender portal are [synchronized](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-365-defender-incidents-in-microsoft-sentinel-and-bi-directional-sync) with the parallel incident in Microsoft Sentinel, so you'll still see the added alerts in the incident in the Sentinel portal.
+
+ You *can* add Microsoft 365 Defender alerts to non-Defender incidents, and non-Defender alerts to Defender incidents, in the Microsoft Sentinel portal.
+
+- An incident can contain a maximum of 150 alerts. If you try to add an alert to an incident with 150 alerts in it, you will get an error message.
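+
+If you want to see which of your incidents are approaching that cap, the following sketch uses the *SecurityIncident* table, which records the alert IDs associated with each incident:
+
+```kusto
+// Sketch: find incidents approaching the 150-alert maximum.
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber    // latest record per incident
+| extend AlertCount = array_length(AlertIds)
+| where AlertCount > 100
+| project IncidentNumber, Title, Status, AlertCount
+| order by AlertCount desc
+```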
+
+## Add alerts using the entity timeline (Preview)
+
+The entity timeline, as featured in the new [incident experience](incident-investigation.md) (now in Preview), presents all the entities in a particular incident investigation. When an entity in the list is selected, a miniature entity page is displayed in a side panel.
+
+1. From the Microsoft Sentinel navigation menu, select **Incidents**.
+
+ :::image type="content" source="media/investigate-incidents/incident-grid.png" alt-text="Screenshot of new incidents queue displayed in a grid." lightbox="media/investigate-incidents/incident-grid.png":::
+
+1. Select an incident to investigate. In the incident details panel, select **View full details**.
+
+1. In the incident page, select the **Entities** tab.
+
+ :::image type="content" source="media/investigate-incidents/entities-tab.png" alt-text="Screenshot of entities tab in incident page." lightbox="media/investigate-incidents/entities-tab.png":::
+
+1. Select an entity from the list.
+
+1. In the entity page side panel, select the **Timeline** card.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/entity-timeline.png" alt-text="Screenshot of entity timeline card in entities tab of incident page.":::
+
+1. Locate an alert external to the open incident. External alerts are indicated by a grayed-out shield icon and a dotted-line color band representing the severity. Select the plus-sign icon on the right end of that alert's row.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/external-alert.png" alt-text="Screenshot of appearance of external alert in entity timeline.":::
+
+1. Confirm adding the alert to the incident by selecting **OK**. You'll receive a notification confirming that the alert was added to the incident, or explaining why it wasn't.
+ :::image type="content" source="media/relate-alerts-to-incidents/add-alert-to-incident.png" alt-text="Screenshot of adding an alert to an incident in the entity timeline.":::
+
+You'll see that the added alert now appears in the open incident's **Timeline** widget in the **Overview** tab, with a full-color shield icon and a solid-line color band like any other alert in the incident.
+
+The added alert is now a full part of the incident, and any entities in the added alert (that weren't already part of the incident) have also become part of the incident. You can now explore *those* entities' timelines for *their* other alerts that are now eligible to be added to the incident.
+
+### Remove an alert from an incident
+
+Alerts that were added to an incident&mdash;manually or automatically&mdash;can be removed from an incident as well.
+
+1. From the Microsoft Sentinel navigation menu, select **Incidents**.
+
+1. Select an incident to investigate. In the incident details panel, select **View full details**.
+
+1. In the **Overview** tab, in the **Incident timeline** widget, select the three dots next to an alert you want to remove from the incident. From the pop-up menu, select **Remove alert**.
+
+ :::image type="content" source="media/relate-alerts-to-incidents/remove-alert.png" alt-text="Screenshot showing how to remove an alert from an incident in the incident timeline.":::
## Add alerts using the investigation graph
The [investigation graph](investigate-cases.md) is a visual, intuitive tool that
1. Hover over one of the related alerts until a menu pops out to its side. Select **Add alert to incident (Preview)**.
- :::image type="content" source="media/relate-alerts-to-incidents/add-alert-to-incident.png" alt-text="Screenshot of adding an alert to an incident in the investigation graph.":::
+ :::image type="content" source="media/relate-alerts-to-incidents/add-alert-using-graph.png" alt-text="Screenshot of adding an alert to an incident in the investigation graph.":::
1. The alert is added to the incident, and for all purposes is part of the incident, along with all its entities and details. You'll see two visual representations of this:
When adding an alert to an incident, depending on the circumstances, you might b
Which of these options you choose depends on your particular needs; we don't recommend one choice over the other.
-### Limitations
--- Microsoft Sentinel imports both alerts and incidents from Microsoft 365 Defender. For the most part, you can treat these alerts and incidents like regular Microsoft Sentinel alerts and incidents. -
- However, you can only add Defender alerts to Defender incidents (or remove them) in the Defender portal, not in the Sentinel portal. If you try doing this in Microsoft Sentinel, you will get an error message. You can pivot to the incident in the Microsoft 365 Defender portal using the link in the Microsoft Sentinel incident. Don't worry, though - any changes you make to the incident in the Microsoft 365 Defender portal are [synchronized](microsoft-365-defender-sentinel-integration.md#working-with-microsoft-365-defender-incidents-in-microsoft-sentinel-and-bi-directional-sync) with the parallel incident in Microsoft Sentinel, so you'll still see the added alerts in the incident in the Sentinel portal.
-
- You *can* add Microsoft 365 Defender alerts to non-Defender incidents, and non-Defender alerts to Defender incidents, in the Microsoft Sentinel portal.
--- An incident can contain a maximum of 150 alerts. If you try to add an alert to an incident with 150 alerts in it, you will get an error message.- ## Add/remove alerts using playbooks Adding and removing alerts to incidents are also available as Logic Apps actions in the Microsoft Sentinel connector, and therefore in Microsoft Sentinel playbooks. You need to supply the **incident ARM ID** and the **system alert ID** as parameters, and you can find them both in the playbook schema for both the alert and incident triggers.
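+
+If you're testing such a playbook and want to see which system alert IDs a given incident currently contains, here's a minimal sketch against the *SecurityIncident* table; the incident number is a placeholder:
+
+```kusto
+// Sketch: list the system alert IDs attached to one incident.
+// 1234 is a placeholder incident number.
+SecurityIncident
+| summarize arg_max(TimeGenerated, *) by IncidentNumber
+| where IncidentNumber == 1234
+| mv-expand AlertId = AlertIds
+| project IncidentNumber, IncidentName, AlertId = tostring(AlertId)
+```
+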
sentinel Respond Threats During Investigation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/respond-threats-during-investigation.md
description: This article shows you how to take response actions against threat
Previously updated : 12/07/2022 Last updated : 01/17/2023 # Respond to threat actors while investigating or threat hunting in Microsoft Sentinel
The entity trigger currently supports the following entity types:
When you're investigating an incident, and you determine that a given entity - a user account, a host, an IP address, a file, and so on - represents a threat, you can take immediate remediation actions on that threat by running a playbook on-demand. You can do likewise if you encounter suspicious entities while proactively hunting for threats outside the context of incidents. 1. Select the entity in whichever context you encounter it, and choose the appropriate means to run a playbook, as follows:
+ - In the **Entities** widget on the **Overview tab** of an incident in the [new incident details page](investigate-incidents.md#explore-the-incidents-entities) (now in Preview), or in its [**Entities tab**](investigate-incidents.md#entities-tab), choose an entity from the list, select the three dots next to the entity, and select **Run playbook (Preview)** from the pop-up menu.
+
+ :::image type="content" source="media/respond-threats-during-investigation/incident-details-overview.png" alt-text="Screenshot of incident details page.":::
+
+ :::image type="content" source="media/respond-threats-during-investigation/entities-tab.png" alt-text="Screenshot of entities tab on incident details page.":::
+ - In the **Entities** tab of an incident, choose the entity from the list and select the **Run playbook (Preview)** link at the end of its line in the list. :::image type="content" source="media/respond-threats-during-investigation/incident-details-page.png" alt-text="Screenshot of selecting entity from incident details page to run a playbook on it.":::
When you're investigating an incident, and you determine that a given entity - a
In this article, you learned how to run playbooks manually to remediate threats from entities while in the middle of investigating an incident or hunting for threats. -- Learn more about [investigating incidents](investigate-cases.md) in Microsoft Sentinel.
+- Learn more about [investigating incidents](investigate-incidents.md) in Microsoft Sentinel.
- Learn how to [proactively hunt for threats](hunting.md) using Microsoft Sentinel. - Learn more about [entities](entities.md) in Microsoft Sentinel. - Learn more about [playbooks](automate-responses-with-playbooks.md) in Microsoft Sentinel.
sentinel Sentinel Solution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/sentinel-solution.md
Title: Build and monitor Zero Trust (TIC 3.0) security architectures with Microsoft Sentinel
+ Title: Monitor Zero Trust (TIC 3.0) security architectures with Microsoft Sentinel
description: Install and learn how to use the Microsoft Sentinel Zero Trust (TIC3.0) solution for an automated visualization of Zero Trust principles, cross-walked to the Trusted Internet Connections framework. Last updated 01/09/2023
- zerotrust-services
-# Build and monitor Zero Trust (TIC 3.0) security architectures with Microsoft Sentinel
+# Monitor Zero Trust (TIC 3.0) security architectures with Microsoft Sentinel
-The Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** enables governance and compliance teams to design, build, monitor, and respond to Zero Trust (TIC 3.0) requirements. This solution includes a workbook, analytics rules, and a playbook, which provide an automated visualization of Zero Trust principles, cross-walked to the Trust Internet Connections framework, helping organizations to monitor configurations over time.
+[Zero Trust](/security/zero-trust/zero-trust-overview) is a security strategy for designing and implementing security principles that assumes breach, and verifies each request as though it originated from an uncontrolled network. A Zero Trust model implements the following security principles:
-This article describes how to install and use the Microsoft Sentinel solution for **Zero Trust (TIC 3.0)** in your Microsoft Sentinel workspace.
+- **Verify explicitly**: Always authenticate and authorize based on all available data points.
+- **Use least privilege access**: Limit user access with Just-In-Time and Just-Enough-Access (JIT/JEA), risk-based adaptive policies, and data protection.
+- **Assume breach**: Minimize blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defenses.
-While only Microsoft Sentinel is required to get started, the solution is enhanced by integrations with other Microsoft Services, such as:
+This article describes how to use the Microsoft Sentinel **Zero Trust (TIC 3.0)** solution, which helps governance and compliance teams monitor and respond to Zero Trust requirements according to the [Trusted Internet Connections (TIC) 3.0](https://www.cisa.gov/tic) initiative.
-- [Microsoft 365 Defender](https://www.microsoft.com/microsoft-365/security/microsoft-365-defender)-- [Microsoft Information Protection](https://azure.microsoft.com/services/information-protection/)-- [Azure Active Directory](https://azure.microsoft.com/services/active-directory/)-- [Microsoft Defender for Cloud](https://azure.microsoft.com/services/active-directory/)-- [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender)-- [Microsoft Defender for Identity](https://www.microsoft.com/microsoft-365/security/identity-defender)-- [Microsoft Defender for Cloud Apps](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/cloud-app-security)-- [Microsoft Defender for Office 365](https://www.microsoft.com/microsoft-365/security/office-365-defender)-
-For more information, see [Guiding principles of Zero Trust](/azure/security/integrated/zero-trust-overview#guiding-principles-of-zero-trust).
-
-> [!NOTE]
-> Microsoft Sentinel solutions are sets of bundled content, pre-configured for a specific set of data. For more information, see [Microsoft Sentinel solutions documentation](sentinel-solutions.md).
->
+[Microsoft Sentinel solutions](sentinel-solutions.md) are sets of bundled content, pre-configured for a specific set of data. The **Zero Trust (TIC 3.0)** solution includes a workbook, analytics rules, and a playbook, which provide an automated visualization of Zero Trust principles, cross-walked to the Trusted Internet Connections framework, helping organizations to monitor configurations over time.
## The Zero Trust solution and the TIC 3.0 framework
Before installing the **Zero Trust (TIC 3.0)** solution, make sure you have the
- **Required user permissions**. To install the **Zero Trust (TIC 3.0)** solution, you must have access to your Microsoft Sentinel workspace with [Security Reader](../active-directory/roles/permissions-reference.md#security-reader) permissions.
+The **Zero Trust (TIC 3.0)** solution is also enhanced by integrations with other Microsoft Services, such as:
+
+- [Microsoft 365 Defender](https://www.microsoft.com/microsoft-365/security/microsoft-365-defender)
+- [Microsoft Information Protection](https://azure.microsoft.com/services/information-protection/)
+- [Azure Active Directory](https://azure.microsoft.com/services/active-directory/)
+- [Microsoft Defender for Cloud](https://azure.microsoft.com/products/defender-for-cloud/)
+- [Microsoft Defender for Endpoint](https://www.microsoft.com/microsoft-365/security/endpoint-defender)
+- [Microsoft Defender for Identity](https://www.microsoft.com/microsoft-365/security/identity-defender)
+- [Microsoft Defender for Cloud Apps](https://www.microsoft.com/microsoft-365/enterprise-mobility-security/cloud-app-security)
+- [Microsoft Defender for Office 365](https://www.microsoft.com/microsoft-365/security/office-365-defender)
+ ## Install the Zero Trust (TIC 3.0) solution **To deploy the *Zero Trust (TIC 3.0)* solution from the Azure portal**:
For more information, see [Deploy out-of-the-box content and solutions](sentinel
## Sample usage scenario
-The following sections shows how a security operations analyst could use the resources deployed with the **Zero Trust (TIC 3.0)** solution to review requirements, explore queries, configure alerts, and implement automation.
+The following sections show how a security operations analyst could use the resources deployed with the **Zero Trust (TIC 3.0)** solution to review requirements, explore queries, configure alerts, and implement automation.
After [installing](#install-the-zero-trust-tic-30-solution) the **Zero Trust (TIC 3.0)** solution, use the workbook, analytics rules, and playbook deployed to your Microsoft Sentinel workspace to manage Zero Trust in your network.
Read our blogs!
- [Announcing the Microsoft Sentinel: Zero Trust (TIC3.0) Solution](https://techcommunity.microsoft.com/t5/microsoft-sentinel-blog/announcing-the-microsoft-sentinel-zero-trust-tic3-0-solution/ba-p/3031685) - [Building and monitoring Zero Trust (TIC 3.0) workloads for federal information systems with Microsoft Sentinel](https://devblogs.microsoft.com/azuregov/building-and-monitoring-zero-trust-tic-3-0-workloads-for-federal-information-systems-with-microsoft-sentinel/) - [Zero Trust: 7 adoption strategies from security leaders](https://www.microsoft.com/security/blog/2021/03/31/zero-trust-7-adoption-strategies-from-security-leaders/)-- [Implementing Zero Trust with Microsoft Azure: Identity and Access Management (6 Part Series)](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)
+- [Implementing Zero Trust with Microsoft Azure: Identity and Access Management (6 Part Series)](https://devblogs.microsoft.com/azuregov/implementing-zero-trust-with-microsoft-azure-identity-and-access-management-1-of-6/)
sentinel Tutorial Log4j Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-log4j-detection.md
+
+ Title: Tutorial - Detect Log4j vulnerability exploits with Microsoft Sentinel
+description: In this tutorial, learn how to detect exploits of the Apache Log4j vulnerability in any of your susceptible systems with Microsoft Sentinel analytics rules, taking advantage of alert enrichment capabilities to surface as much information as possible to benefit an investigation.
+++ Last updated : 01/08/2023++
+# Tutorial: Detect Log4j vulnerability exploits in your systems and produce enriched alerts
+
+As a Security Information and Event Management (SIEM) service, Microsoft Sentinel is responsible for detecting security threats to your organization. It does this by analyzing the massive volumes of data generated by all of your systems' logs.
+
+In this tutorial, you'll learn how to set up a Microsoft Sentinel analytics rule from a template to search for exploits of the Apache Log4j vulnerability across your environment. The rule will frame user accounts and IP addresses found in your logs as trackable entities, surface notable pieces of information in the alerts generated by the rules, and package alerts as incidents to be investigated.
+
+When you complete this tutorial, you'll be able to:
+
+> [!div class="checklist"]
+> - Create an analytics rule from a template
+> - Customize a rule's query and settings
+> - Configure the three types of alert enrichment
+> - Choose automated threat responses for your rules
++
+## Prerequisites
+
+To complete this tutorial, make sure you have:
+
+- An Azure subscription. Create a [free account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F) if you don't already have one.
+
+- A Log Analytics workspace with the Microsoft Sentinel solution deployed on it and data being ingested into it.
+
+- An Azure user with the [**Microsoft Sentinel Contributor**](../role-based-access-control/built-in-roles.md#microsoft-sentinel-contributor) role assigned on the Log Analytics workspace where Microsoft Sentinel is deployed.
+
+- The following data sources are referenced in this rule. The more of these sources you have connectors deployed for, the more effective the rule will be. You must have at least one. A hedged query sketch for checking which of these tables are receiving data follows the table.
+
+ | Data source | Log Analytics tables referenced |
+ | - | - |
+ | **Office 365** | OfficeActivity (SharePoint)<br>OfficeActivity (Exchange)<br>OfficeActivity (Teams) |
+ | **DNS** | DnsEvents |
+ | **Azure Monitor** (VM Insights) | VMConnection |
+ | **Cisco ASA** | CommonSecurityLog (Cisco) |
+ | **Palo Alto Networks (Firewall)** | CommonSecurityLog (PaloAlto) |
+ | **Security Events** | SecurityEvents |
+ | **Azure Active Directory** | SigninLogs<br>AADNonInteractiveUserSignInLogs |
+ | **Azure Monitor (WireData)** | WireData |
+ | **Azure Monitor (IIS)** | W3CIISLog |
+ | **Azure Activity** | AzureActivity |
+ | **Amazon Web Services** | AWSCloudTrail |
+ | **Microsoft 365 Defender** | DeviceNetworkEvents |
+ | **Azure Firewall** | AzureDiagnostics (Azure Firewall) |
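
Before enabling the rule, you may want to check which of these tables are actually receiving data. A hedged sketch using the Azure CLI, assuming the Log Analytics CLI extension and a placeholder workspace GUID:

```azurecli
# Sketch only: count recent rows in one of the referenced tables (DnsEvents here).
# <workspace-guid> is the Log Analytics workspace customer ID (a placeholder).
az monitor log-analytics query \
    --workspace "<workspace-guid>" \
    --analytics-query "DnsEvents | where TimeGenerated > ago(1d) | summarize Rows = count()" \
    --output table
```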
+
+## Sign in to the Azure portal and Microsoft Sentinel
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+1. From the Search bar, search for and select **Microsoft Sentinel**.
+
+1. Search for and select your workspace from the list of available Microsoft Sentinel workspaces.
+
+1. On the **Microsoft Sentinel | Overview** page, select **Analytics** from the navigation menu, under **Configuration**.
+
+## Create a scheduled analytics rule from a template
+
+1. From the **Analytics** page, select the **Rule templates** tab.
+
+1. In the search field at the top of the list of rule templates, enter **log4j**.
+
+1. From the filtered list of templates, select **Log4j vulnerability exploit aka Log4Shell IP IOC**. From the details pane, select **Create rule**.
+
+    :::image type="content" source="media/tutorial-log4j-detection/find-template-create-rule.png" alt-text="Screenshot showing how to search for and locate a template and create an analytics rule." lightbox="media/tutorial-log4j-detection/find-template-create-rule.png":::
+
+ The **Analytics rule wizard** will open.
+
+1. In the **General** tab, in the **Name** field, enter **Log4j vulnerability exploit aka Log4Shell IP IOC - Tutorial-1**.
+1. Leave the rest of the fields on this page as they are. These are the defaults; we'll customize the alert name at a later stage.
+
+    If you don't want the rule to run immediately, select **Disabled**. The rule will be added to your **Active rules** tab, and you can enable it from there when you need it.
+
+1. Select **Next : Set rule logic**.
+ :::image type="content" source="media/tutorial-log4j-detection/general-tab.png" alt-text="Screenshot of the General tab of the Analytics rule wizard.":::
+
+## Review rule query logic and configuration of settings
+
+- In the **Set rule logic** tab, review the query as it appears under the **Rule query** heading.
+
+ To see more of the query text at one time, select the diagonal double-arrow icon at the upper right corner of the query window to expand the window to a larger size.
+
+ :::image type="content" source="media/tutorial-log4j-detection/set-rule-logic-tab.png" alt-text="Screenshot of the Set rule logic tab of the Analytics rule wizard." lightbox="media/tutorial-log4j-detection/set-rule-logic-tab.png":::
+
+## Enrich alerts with entities and other details
+
+1. Under **Alert enrichment**, keep the **Entity mapping** settings as they are. Note the three mapped entities.
+
+ :::image type="content" source="media/tutorial-log4j-detection/entity-mappings.png" alt-text="Screenshot of existing entity mapping settings.":::
+
+1. In the **Custom details** section, let's add the timestamp of each occurrence to the alert, so you can see it right in the alert details, without having to drill down.
+ 1. Type **timestamp** in the **Key** field. This will be the property name in the alert.
+ 1. Select **timestamp** from the **Value** drop-down list.
+
+1. In the **Alert details** section, let's customize the alert name so that the timestamp of each occurrence appears in the alert title.
+
+ In the **Alert name format** field, enter **Log4j vulnerability exploit aka Log4Shell IP IOC at {{timestamp}}**.
+
+ :::image type="content" source="media/tutorial-log4j-detection/custom-details.png" alt-text="Screenshot of custom details and alert details configurations.":::
+
+## Review remaining settings
+
+1. Review the remaining settings on the **Set rule logic** tab. There's no need to change anything, though you can adjust the interval if you'd like. Just make sure the lookback period matches the interval to maintain continuous coverage. A hedged command-line sketch of these settings follows this procedure.
+
+ - **Query scheduling:**
+ - Run query every **1 hour**.
+ - Lookup data from the last **1 hour**.
+
+ - **Alert threshold:**
+ - Generate alert when number of query results **is greater than 0**.
+
+ - **Event grouping:**
+ - Configure how rule query results are grouped into alerts: **Group all events into a single alert**.
+
+ - **Suppression:**
+ - Stop running query after alert is generated: **Off**.
+
+ :::image type="content" source="media/tutorial-log4j-detection/remaining-rule-settings.png" alt-text="Screenshot of remaining rule logic settings for analytics rule.":::
+
+1. Select **Next : Incident settings**.
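
A hedged sketch of the same rule expressed through the ARM REST API, using `az rest`. The resource names, rule GUID, `api-version`, and the exact set of required properties are assumptions here; verify them against the Microsoft Sentinel REST API reference:

```azurecli
# Sketch only: creates or updates a scheduled analytics rule with the settings
# reviewed above (1-hour frequency and lookback, alert on more than 0 results),
# plus the custom details and alert name format configured earlier.
az rest --method put \
    --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/providers/Microsoft.SecurityInsights/alertRules/<new-rule-guid>?api-version=2022-11-01" \
    --body '{
      "kind": "Scheduled",
      "properties": {
        "displayName": "Log4j vulnerability exploit aka Log4Shell IP IOC - Tutorial-1",
        "enabled": true,
        "severity": "High",
        "query": "<rule query from the template>",
        "queryFrequency": "PT1H",
        "queryPeriod": "PT1H",
        "triggerOperator": "GreaterThan",
        "triggerThreshold": 0,
        "suppressionEnabled": false,
        "suppressionDuration": "PT5H",
        "customDetails": { "timestamp": "timestamp" },
        "alertDetailsOverride": {
          "alertDisplayNameFormat": "Log4j vulnerability exploit aka Log4Shell IP IOC at {{timestamp}}"
        }
      }
    }'
```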
+
+## Review the incident creation settings
+
+1. Review the settings on the **Incident settings** tab. There's no need to change anything, unless, for example, you have a different system for incident creation and management, in which case you'd want to disable incident creation.
+
+ - **Incident settings:**
+ - Create incidents from alerts triggered by this analytics rule: **Enabled**.
+
+ - **Alert grouping:**
+ - Group related alerts, triggered by this analytics rule, into incidents: **Disabled**.
+
+ :::image type="content" source="media/tutorial-log4j-detection/incident-settings-tab.png" alt-text="Screenshot of the Incident settings tab of the Analytics rule wizard.":::
+
+1. Select **Next : Automated response**.
+
+## Set automated responses and create the rule
+
+In the **Automated response** tab:
+
+1. Select **+ Add new** to create a new automation rule for this analytics rule. This will open the **Create new automation rule** wizard.
+
+ :::image type="content" source="media/tutorial-log4j-detection/add-automation-rule.png" alt-text="Screenshot of Automated response tab in Analytics rule wizard.":::
+
+1. In the **Automation rule name** field, enter **Log4J vulnerability exploit detection - Tutorial-1**.
+
+1. Leave the **Trigger** and **Conditions** sections as they are.
+
+1. Under **Actions**, select **Add tags** from the drop-down list.
+ 1. Select **+ Add tag**.
+ 1. Enter **Log4J exploit** in the text box and select **OK**.
+
+1. Leave the **Rule expiration** and **Order** sections as they are.
+
+1. Select **Apply**. You'll soon see your new automation rule in the list in the **Automated response** tab.
+
+1. Select **Next : Review** to review all the settings for your new analytics rule. When the "Validation passed" message appears, select **Create**. Unless you set the rule to **Disabled** in the **General** tab above, the rule will run immediately.
+
+    Select the image below to see the full review (most of the query text is clipped for readability).
+
+ :::image type="content" source="media/tutorial-log4j-detection/review-and-create-tab.png" alt-text="Screenshot of the Review and Create tab of the Analytics rule wizard.":::
+
+## Verify the success of the rule
+
+1. To view the results of the alert rules you create, go to the **Incidents** page.
+
+1. To filter the list of incidents to those generated by your analytics rule, enter the name (or part of the name) of the analytics rule you created in the **Search** bar.
+
+1. Open an incident whose title matches the name of the analytics rule. Verify that the tag you defined in the automation rule was applied to the incident.
+
+## Clean up resources
+
+If you're not going to continue to use this analytics rule, use the following steps to delete (or at least disable) the analytics and automation rules you created (a hedged command-line alternative follows these steps):
+
+1. In the **Analytics** page, select the **Active rules** tab.
+
+1. Enter the name (or part of the name) of the analytics rule you created in the **Search** bar.
+ (If it doesn't show up, make sure any filters are set to **Select all**.)
+
+1. Mark the check box next to your rule in the list, and select **Delete** from the top banner.
+ (If you don't want to delete it, you can select **Disable** instead.)
+
+1. In the **Automation** page, select the **Automation rules** tab.
+
+1. Enter the name (or part of the name) of the automation rule you created in the **Search** bar.
+ (If it doesn't show up, make sure any filters are set to **Select all**.)
+
+1. Mark the check box next to your automation rule in the list, and select **Delete** from the top banner.
+ (If you don't want to delete it, you can select **Disable** instead.)
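
If you prefer, you can also delete the analytics rule from the command line. A hedged sketch using `az rest`; the resource path and `api-version` are assumptions to verify against the Microsoft Sentinel REST API reference:

```azurecli
# Sketch only: deletes the scheduled analytics rule by its GUID.
az rest --method delete \
    --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>/providers/Microsoft.SecurityInsights/alertRules/<rule-guid>?api-version=2022-11-01"
```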
+
+## Next steps
+
+Now that you've learned how to search for exploits of a common vulnerability using analytics rules, learn more about what you can do with analytics in Microsoft Sentinel:
+
+- Learn about the full range of settings and configurations in [scheduled analytics rules](detect-threats-custom.md).
+- In particular, learn more about the different types of alert enrichment you saw here:
+ - [Entity mapping](map-data-fields-to-entities.md)
+ - [Custom details](surface-custom-details-in-alerts.md)
+ - [Alert properties](customize-alert-details.md)
+
+- Learn about [other kinds of analytics rules](detect-threats-built-in.md) in Microsoft Sentinel and their function.
+- Learn more about writing queries in Kusto Query Language (KQL). Learn more about KQL [concepts](/azure/data-explorer/kusto/concepts/) and [queries](/azure/data-explorer/kusto/query/), and see this handy [quick reference guide](/azure/data-explorer/kql-quick-reference).
sentinel Tutorial Respond Threats Playbook https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/tutorial-respond-threats-playbook.md
Title: Use playbooks with automation rules in Microsoft Sentinel description: Use this tutorial to help you use playbooks together with automation rules in Microsoft Sentinel to automate your incident response and remediate security threats. -- Previously updated : 02/21/2022 + Last updated : 01/17/2023 # Tutorial: Use playbooks with automation rules in Microsoft Sentinel
You can also manually run a playbook on demand, on both incidents (in Preview) a
### Run a playbook manually on an alert
+# [NEW Incident details page](#tab/incidents)
+
+1. In the **Incidents** page, select an incident.
+
+1. Select **View full details** at the bottom of the incident details pane.
+
+1. In the incident details page, in the **Incident timeline** widget, choose the alert you want to run the playbook on. Select the three dots at the end of the alert's line and choose **Run playbook** from the pop-up menu.
+
+1. The **Alert playbooks** pane will open. You'll see a list of all playbooks configured with the **Microsoft Sentinel Alert** Logic Apps trigger that you have access to.
+
+1. Select **Run** on the line of a specific playbook to run it immediately.
+
+# [Investigation graph](#tab/cases)
1. In the **Incidents** page, select an incident.

1. Select **View full details** at the bottom of the incident details pane.
You can also manually run a playbook on demand, on both incidents (in Preview) a
1. Select **Run** on the line of a specific playbook to run it immediately.

You can see the run history for playbooks on an alert by selecting the **Runs** tab on the **Alert playbooks** pane. It might take a few seconds for any just-completed run to appear in the list. Selecting a specific run will open the full run log in Logic Apps.

### Run a playbook manually on an incident
You can see the run history for playbooks on an incident by selecting the **Runs
1. Select an entity in one of the following ways, depending on your originating context:
- **If you're in an incident's details page:**
+ **If you're in an incident's details page (new version, now in Preview):**
+ 1. In the **Entities** widget in the **Overview** tab, find an entity from the list (don't select it).
+ 1. Select the three dots to the right of the entity.
+ 1. Select **Run playbook (Preview)** from the pop-up menu and continue with step 2 below.
+      If you selected the entity and opened the **Entities** tab of the incident details page instead, continue with the following steps:
+ 1. Find an entity from the list (don't select it).
+ 1. Select the three dots to the right of the entity.
+ 1. Select **Run playbook (Preview)** from the pop-up menu.
+ If you selected the entity and entered its entity page, select the **Run playbook (Preview)** button in the left-hand panel.
+
+ **If you're in an incident's details page (legacy version):**
1. Select the incident's **Entities** tab.
1. Find an entity from the list (don't select it).
1. Select the **Run playbook (Preview)** link at the end of its line in the list.

If you selected the entity and entered its entity page, select the **Run playbook (Preview)** button in the left-hand panel.
-
**If you're in the Investigation graph:**

1. Select an entity in the graph.
1. Select the **Run playbook (Preview)** button in the entity side panel.
sentinel Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/whats-new.md
The listed features were released in the last three months. For information abou
## January 2023
+- [New incident investigation experience (Preview)](#new-incident-investigation-experience-preview)
+- [Microsoft Purview Information Protection connector (Preview)](#microsoft-purview-information-protection-connector-preview)
+
+### New incident investigation experience (Preview)
+
+SOC analysts need to understand the full scope of an attack as fast as possible to respond effectively.
+
+While triaging, investigating, and responding to a security incident, analysts require quick and seamless access to many pieces of information, actions, and tools. This access should optimally be within the incident investigation environment, with an absolute minimum of pivoting to other pages, products, or services&mdash;for example, to find Azure AD info or the geo-location of an IP, edit a bookmark, or add an entity to threat intelligence.
+
+**Microsoft Sentinel now offers a new incident investigation experience**. The new incident page design, along with many new features for investigation, response, and incident management, offers the analyst the information and tools necessary to understand the incident and the scope of breach, while making navigation easy and context switching less frequent. New features include, among others: top insights, a new activity log for incident audits, and a Log Analytics query window to investigate logs.
+
+Learn more about the new investigation experience:
+- [Understand Microsoft Sentinel's incident investigation and case management capabilities](incident-investigation.md)
+- [Navigate and investigate incidents in Microsoft Sentinel](investigate-incidents.md)
### Microsoft Purview Information Protection connector (Preview)

With the new [Microsoft Purview Information Protection connector](connect-microsoft-purview.md), you can stream data from Microsoft Purview Information Protection (formerly Microsoft Information Protection or MIP) to Microsoft Sentinel. You can use the data ingested from the Microsoft Purview labeling clients and scanners to track, analyze, report on the data, and use it for compliance purposes.
-This connector replaces the Azure Information Protection (AIP) data connector, aligned with the retirement of the AIP analytics and audit logs public preview as of **March 31, 2023**.
+> [!IMPORTANT]
+> This connector replaces the Azure Information Protection (AIP) data connector, aligned with the retirement of the AIP analytics and audit logs public preview as of **March 31, 2023**.
The new connector streams audit logs into the standardized `MicrosoftPurviewInformationProtection` table, which has been adjusted to enhance the deprecated schema used by AIP, with more fields and easier access to parameters. Data is gathered through the [Office Management API](/office/office-365-management-api/office-365-management-activity-api-schema), which uses a structured schema. Review the list of supported [audit log record types and activities](microsoft-purview-record-types-activities.md).
service-bus-messaging Service Bus Python How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-python-how-to-use-queues.md
The following sample code shows you how to send a message to a queue. Open your
        # send a batch of messages
        await send_batch_message(sender)
- # Close credential when no longer needed.
+ # Close credential when no longer needed.
await credential.close() ```
Open your favorite editor, such as [Visual Studio Code](https://code.visualstudi
            # complete the message so that the message is removed from the queue
            await receiver.complete_message(msg)
- # Close credential when no longer needed.
+ # Close credential when no longer needed.
await credential.close() ```
service-connector Tutorial Portal Key Vault https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-connector/tutorial-portal-key-vault.md
Now you can create a service connection to another target service and directly s
1. Select **Secrets** in the Key Vault left ToC, and select the blob storage secret name. > [!TIP]
- > Don't have permission to list secrets? Refer to [troubleshooting](../key-vault/general/troubleshooting-access-issues.md#i-am-not-able-to-list-or-get-secretskeyscertificate-i-am-seeing-something-went-wrong-error).
+ > Don't have permission to list secrets? Refer to [troubleshooting](../key-vault/general/troubleshooting-access-issues.md#im-not-able-to-list-or-get-secretskeyscertificate-im-seeing-a-something-went-wrong-error).
4. Select a version ID from the Current Version list.
service-fabric Service Fabric Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-versions.md
Support for Service Fabric on a specific OS ends when support for the OS version
| 9.0 CU4<br>9.0.1114.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU3<br>9.0.1103.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 | | 9.0 CU2.1<br>9.0.1086.1 | 8.0 CU3<br>8.0.527.1 | 8.2 CU 5.1<br>8.2.1483.1 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 1, 2023 |
-| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
-| 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
-| 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
-| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
-| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
-| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
-| 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
-| 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | March 31, 2023 |
| Service Fabric runtime | Can upgrade directly from |Can downgrade to*|Compatible SDK or NuGet package version | Supported .NET runtimes** | OS version | End of support | | | | | | | | | | 9.0 CU2<br>9.0.1056.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 | | 9.0 CU1<br>9.0.1035.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 | | 9.0 RTO<br>9.0.1018.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 6.0 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | August 19, 2022 |
+| 8.2 CU6<br>8.2.1485.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU5.1<br>8.2.1483.1 | 8.0 CU3<br>8.0.527.1 | N/A | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU4<br>8.2.1458.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU3<br>8.2.1434.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU2.1<br>8.2.1397.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU2<br>8.2.1285.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 CU1<br>8.2.1204.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
+| 8.2 RTO<br>8.2.1124.1 | 8.0 CU3<br>8.0.527.1 | 8.0 | Less than or equal to version 5.2 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | November 30, 2022 |
| 8.1 CU4<br>8.1.360.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3.1<br>8.1.340.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 | | 8.1 CU3<br>8.1.334.1 | 7.2 CU7<br>7.2.476.1 | 8.0 | Less than or equal to version 5.1 | >= .NET Core 2.1 | [See supported OS version](#supported-linux-versions-and-support-end-date) | June 30, 2022 |
service-fabric Service Fabric Visualizing Your Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/service-fabric-visualizing-your-cluster.md
Last updated 07/14/2022
Service Fabric Explorer (SFX) is an open-source tool for inspecting and managing Azure Service Fabric clusters. Service Fabric Explorer is a desktop application for Windows, macOS and Linux.
-## Service Fabric Explorer download
-
-Use the following links to download Service Fabric Explorer as a desktop application:
--- Windows
- - https://aka.ms/sfx-windows
--- Linux
- - https://aka.ms/sfx-linux-x86
- - https://aka.ms/sfx-linux-x64
--- macOS
- - https://aka.ms/sfx-macos
-
-> [!NOTE]
-> The desktop version of Service Fabric Explorer can have more or fewer features than the cluster support. You can fall back to the Service Fabric Explorer version deployed to the cluster to ensure full feature compatibility.
->
->
- ### Running Service Fabric Explorer from the cluster Service Fabric Explorer is also hosted in a Service Fabric cluster's HTTP management endpoint. To launch SFX in a web browser, browse to the cluster's HTTP management endpoint from any browser - for example https:\//clusterFQDN:19080.
service-health Resource Health Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-overview.md
Different resources have their own criteria for when they report that they are d
![Status of *Degraded* for a virtual machine](./media/resource-health-overview/degraded.png)
-For virtual machine scale sets, visit [Resource health state is "Degraded" in Azure Virtual Machine Scale Set](/troubleshoot/azure/virtual-machine-scale-sets/resource-health-degraded-state) page for more information.
+For Virtual Machine Scale Sets, see the [Resource health state is "Degraded" in Azure Virtual Machine Scale Set](/troubleshoot/azure/virtual-machine-scale-sets/resource-health-degraded-state) page for more information.
## History information
To open Resource Health for one resource:
1. Sign in to the Azure portal.
2. Browse to your resource.
3. On the resource menu in the left pane, select **Resource health**.
+4. From the health history grid, you can either download a PDF report or select the **Share/Manage RCA** button.
![Opening Resource Health from the resource view](./media/resource-health-overview/from-resource-blade.png)

You can also access Resource Health by selecting **All services** and typing **resource health** in the filter text box. In the **Help + support** pane, select [Resource health](https://portal.azure.com/#blade/Microsoft_Azure_Monitoring/AzureMonitoringBrowseBlade/resourceHealth).

![Opening Resource Health from "All services"](./media/resource-health-overview/FromOtherServices.png)
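
You can also retrieve the current health state programmatically. A hedged sketch using `az rest` against the Resource Health ARM API; the resource path shown (a virtual machine) and the `api-version` are assumptions to verify against the Resource Health REST reference:

```azurecli
# Sketch only: gets the current availability status of one resource.
az rest --method get \
    --url "https://management.azure.com/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Compute/virtualMachines/<vm-name>/providers/Microsoft.ResourceHealth/availabilityStatuses/current?api-version=2020-05-01"
```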
site-recovery Azure To Azure How To Reprotect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/azure-to-azure-how-to-reprotect.md
The following conditions determine how much data is replicated:
|Source region has 1 VM with 1-TB premium disk.<br/>Only 127-GB data is used and rest of the disk is empty.<br/>Disk type is premium with 200-MBps throughput.<br/>45-GB data changes after failover.| Approximate time: 45-75 mins.<br/>During reprotection, Site Recovery will populate the checksum of all data, which operates at 46% of disk throughput - 92 MBps. The total time that it will take is 127 GB/92 MBps, approximately 25 minutes. </br>Transfer speed is approximately 23% of throughput, or 46 MBps. Therefore, transfer time to apply changes of 45 GB that is 45 GB/46 MBps, approximately 17 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes. | |Source region has 1 VM with 1-TB premium disk.<br/>Only 20-GB data is used and rest of the disk is empty.<br/>Disk type is premium with 200-MBps throughput.<br/>The initial data on the disk immediately after failover was 15 GB. There was 5-GB data change after failover. Total populated data is therefore 20 GB| Approximate time: 10-40 minutes.<br/>Since the data populated in the disk is less than 10% of the size of the disk, we perform a complete initial replication.<br/>Transfer speed is approximately 23% of throughput, or 46-MBps. Therefore, transfer time to apply changes of 20 GB that is 20 GB/46 MBps, approximately 8 minutes.<br/>Some overhead time may be required for Site Recovery to auto scale, approximately 20-30 minutes |
-When the VM is re-protected after failing back to the primary region (that is, if the VM is re-protected from primary region to DR region), the target VM, and associated NIC(s) are deleted.
+When the VM is re-protected from the DR region to the primary region (that is, after failing over from the primary region to the DR region), the target VM (the original source VM) and its associated NICs are deleted.
-When the VM is re-protected from the DR region to the primary region, we do not delete the erstwhile primary VM and associated NIC(s).
+When the VM is re-protected again from the primary region to the DR region after failback, we don't delete the VM and associated NICs in the DR region that were created during the earlier failover.
## Next steps
-After the VM is protected, you can initiate a failover. The failover shuts down the VM in the secondary region and creates and boots the VM in the primary region, with brief downtime during this process. We recommend you choose an appropriate time for this process and that you run a test failover before initiating a full failover to the primary site. [Learn more](site-recovery-failover.md) about Azure Site Recovery failover.
+After the VM is protected, you can initiate a failover. The failover shuts down the VM in the secondary region and creates and boots the VM in the primary region, with brief downtime during this process. We recommend you choose an appropriate time for this process and that you run a test failover before initiating a full failover to the primary site. [Learn more](site-recovery-failover.md) about Azure Site Recovery failover.
site-recovery Vmware Physical Azure Support Matrix https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/vmware-physical-azure-support-matrix.md
Disaster recovery of physical servers | Replication of on-premises Windows/Linux
**Server** | **Requirements** | **Details** | |
-vCenter Server | Version 7.0 & subsequent updates in this version, 6.7, 6.5, 6.0, or 5.5 | We recommend that you use a vCenter server in your disaster recovery deployment.
+vCenter Server | Version 8.0, Version 7.0 & subsequent updates in this version, 6.7, 6.5, 6.0, or 5.5 | We recommend that you use a vCenter server in your disaster recovery deployment.
vSphere hosts | Version 7.0 & subsequent updates in this version, 6.7, 6.5, 6.0, or 5.5 | We recommend that vSphere hosts and vCenter servers are located in the same network as the process server. By default the process server runs on the configuration server. [Learn more](vmware-physical-azure-config-process-server-overview.md). ## Site Recovery configuration server
BTRFS | BTRFS is supported from [Update Rollup 34](https://support.microsoft.com
| Resize disk on replicated VM | Resizing up on the source VM is supported. Resizing down on the source VM is not supported. Resizing should be performed before failover, directly in the VM properties. No need to disable/re-enable replication.<br/><br/> If you change the source VM after failover, the changes aren't captured.<br/><br/> If you change the disk size on the Azure VM after failover, when you fail back, Site Recovery creates a new VM with the updates. |
| Add disk on replicated VM | Not supported.<br/> Disable replication for the VM, add the disk, and then re-enable replication. |
+Exclude disk before replicating VM | Supported for VMware machines. <br/><br/> Not supported for physical machines when using the modernized experience.
> [!NOTE] > - Any change to disk identity is not supported. For example, if the disk partitioning has been changed from GPT to MBR or vice versa, then this will change the disk identity. In such a scenario, the replication will break and a fresh setup will be required.
spring-apps How To Application Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-application-insights.md
az spring app-insights update \
This section applies to the Enterprise tier only, and provides instructions that supplement the previous section.
-Azure Enterprise tier uses [Buildpack Bindings](./how-to-enterprise-build-service.md#buildpack-bindings) to integrate [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) with the type `ApplicationInsights`.
+Azure Enterprise tier uses buildpack bindings to integrate [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) with the type `ApplicationInsights`. For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-intergration-and-ca-certificates.md).
To create an Application Insights buildpack binding, use the following command:
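
A hedged sketch of that command, mirroring the `buildpack-binding create` syntax used elsewhere in this document; the property names follow the Application Insights binding type, and treating the connection string as a secret is an assumption:

```azurecli
az spring build-service builder buildpack-binding create \
    --resource-group <your-resource-group-name> \
    --service <your-service-instance-name> \
    --name <your-binding-name> \
    --builder-name <your-builder-name> \
    --type ApplicationInsights \
    --properties sampling-percentage=10 \
    --secrets connection-string=<your-connection-string>
```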
spring-apps How To Create User Defined Route Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-create-user-defined-route-instance.md
Previously updated : 09/25/2021 Last updated : 01/17/2023
The following illustration shows an example of an Azure Spring Apps virtual netw
:::image type="content" source="media/how-to-create-user-defined-route-instance/user-defined-route-example-architecture.png" lightbox="media/how-to-create-user-defined-route-instance/user-defined-route-example-architecture.png" alt-text="Architecture diagram that shows user-defined routing.":::
+This diagram illustrates the following features of the architecture:
+
+* Public ingress traffic must flow through firewall filters.
+* Each Azure Spring Apps instance is isolated within a dedicated subnet.
+* The firewall is owned and managed by customers.
+* This structure ensures that the firewall allows only the traffic needed for a healthy, fully functional environment.
+ ### Define environment variables The following example shows how to define a set of environment variables to be used in resource creation:
az network vnet subnet update
--route-table $SERVICE_RUNTIME_ROUTE_TABLE_NAME ```
-### Add a role for an Azure Spring Apps relying party
+### Add a role for an Azure Spring Apps resource provider
-The following example shows how to add a role for an Azure Spring Apps relying party:
+The following example shows how to add a role for the Azure Spring Apps resource provider. The role is assigned to the resource provider's service principal, which is identified by `e8de9221-a19c-4c81-b814-fd37c6caf9d2`:
```azurecli VIRTUAL_NETWORK_RESOURCE_ID=$(az network vnet show \
az role assignment create \
--role "Owner" \ --scope ${VIRTUAL_NETWORK_RESOURCE_ID} \ --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2+
+APP_ROUTE_TABLE_RESOURCE_ID=$(az network route-table show \
+ --name $APP_ROUTE_TABLE_NAME \
+ --resource-group $RG \
+ --query "id" \
+ --output tsv)
+
+az role assignment create \
+ --role "Owner" \
+ --scope ${APP_ROUTE_TABLE_RESOURCE_ID} \
+ --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2
+
+SERVICE_RUNTIME_ROUTE_TABLE_RESOURCE_ID=$(az network route-table show \
+ --name $SERVICE_RUNTIME_ROUTE_TABLE_NAME \
+ --resource-group $RG \
+ --query "id" \
+ --output tsv)
+
+az role assignment create \
+ --role "Owner" \
+ --scope ${SERVICE_RUNTIME_ROUTE_TABLE_RESOURCE_ID} \
+ --assignee e8de9221-a19c-4c81-b814-fd37c6caf9d2
```

### Create an Azure Spring Apps instance with user-defined routing
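
A hedged sketch of the creation step, reusing the variables defined earlier; the `--outbound-type` value and the subnet flags are assumptions to verify against the current `az spring create` reference:

```azurecli
# Sketch only: create the instance with user-defined routing as the outbound type.
az spring create \
    --resource-group $RG \
    --name <your-azure-spring-apps-instance-name> \
    --vnet $VNET_NAME \
    --app-subnet $APP_SUBNET_NAME \
    --service-runtime-subnet $SERVICE_RUNTIME_SUBNET_NAME \
    --outbound-type userDefinedRouting
```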
spring-apps How To Deploy With Custom Container Image https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-deploy-with-custom-container-image.md
If you deployed the instance to a VNet, make sure you allow the network traffic
### Install an APM into the image manually
-The installation steps vary on different APMs and languages. The following steps are for New Relic with Java applications. You must modify the *Dockerfile* using the following steps:
+The installation steps vary on different application performance monitors (APMs) and languages. The following steps are for New Relic with Java applications. You must modify the *Dockerfile* using the following steps:
1. Download and install the agent file into the image by adding the following to the *Dockerfile*:
spring-apps How To Enterprise Build Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-build-service.md
Title: How to Use Tanzu Build Service in Azure Spring Apps Enterprise Tier-
-description: Learn how to Use Tanzu Build Service in Azure Spring Apps Enterprise Tier
+ Title: How to use Tanzu Build Service in Azure Spring Apps Enterprise tier
+description: Learn how to use Tanzu Build Service in Azure Spring Apps Enterprise tier.
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article describes the extra configuration and functionality included in VMware Tanzu® Build Service™ with Azure Spring Apps Enterprise Tier.
+This article shows you how to use VMware Tanzu® Build Service™ with Azure Spring Apps Enterprise tier.
-In Azure Spring Apps, the existing Standard tier already supports compiling user source code into [OCI images](https://opencontainers.org/) through [Kpack](https://github.com/pivotal/kpack). Kpack is a Kubernetes (K8s) implementation of [Cloud Native Buildpacks (CNB)](https://buildpacks.io/) provided by VMware. This article provides details about the extra configurations and functionality exposed in the Azure Spring Apps Enterprise tier.
+VMware Tanzu Build Service automates container creation, management, and governance at enterprise scale. Tanzu Build Service uses the open-source [Cloud Native Buildpacks](https://buildpacks.io/) project to turn application source code into container images. It executes reproducible builds aligned with modern container standards and keeps images up to date.
-## Build Agent Pool
+## Buildpacks
-Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps. You can configure the number of resources given to the build agent pool when you create a new service instance of Azure Spring Apps using the **VMware Tanzu settings**.
+VMware Tanzu Buildpacks provide framework and runtime support for applications. Buildpacks typically examine your applications to determine what dependencies to download and how to configure the apps to communicate with bound services.
+The [language family buildpacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-index.html) are [composite buildpacks](https://paketo.io/docs/concepts/buildpacks/#composite-buildpacks) that provide easy out-of-the-box support for the most popular language runtimes and app configurations. These buildpacks combine multiple component buildpacks into ordered groupings. The groupings satisfy each buildpack's requirements.
-The following Build Agent Pool scale set sizes are available:
+## Builders
+
+A [Builder](https://docs.vmware.com/en/Tanzu-Build-Service/1.6/vmware-tanzu-build-service/GUID-index.html#builder) is a Tanzu Build Service resource. A Builder contains a set of buildpacks and a [stack](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-stacks.html) used in the process of building source code.
+
+## Build agent pool
+
+Tanzu Build Service in the Enterprise tier is the entry point to containerize user applications from both source code and artifacts. There's a dedicated build agent pool that reserves compute resources for a given number of concurrent build tasks. The build agent pool prevents resource contention with your running apps.
+
+The following table shows the build agent pool scale set sizes available:
| Scale Set | CPU/Gi        |
|-----------|---------------|
| S4        | 5 vCPU, 10 Gi |
| S5        | 6 vCPU, 12 Gi |
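
If you create the service instance from the Azure CLI instead of the portal, the agent pool size can be chosen at creation time. A hedged sketch; the `--build-pool-size` flag is an assumption to verify against the `az spring create` reference:

```azurecli
# Sketch only: provision an Enterprise tier instance with an S2 agent pool.
az spring create \
    --resource-group <your-resource-group-name> \
    --name <your-service-instance-name> \
    --sku Enterprise \
    --build-pool-size S2
```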
-The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size on the **Build Service** page after you've created the service instance.
-
+## Configure the build agent pool
-## Default Builder and Tanzu Buildpacks
+When you create a new Azure Spring Apps service instance using the Azure portal, you can use the **VMware Tanzu settings** tab to configure the number of resources given to the build agent pool.
-In the Enterprise Tier, a default builder is provided within Tanzu Build Service with a list of commercial VMware Tanzu® Buildpacks.
-Tanzu Buildpacks make it easier to integrate with other software like New Relic. They're configured as optional and will only run with proper configuration. For more information, see the [Buildpack bindings](#buildpack-bindings) section.
+The following image shows the resources given to the Tanzu Build Service Agent Pool after you've successfully provisioned the service instance. You can also update the configured agent pool size here after the service instance is created.
-The following list shows the Tanzu Buildpacks available in Azure Spring Apps Enterprise edition:
-- tanzu-buildpacks/java-azure-- tanzu-buildpacks/dotnet-core-- tanzu-buildpacks/go-- tanzu-buildpacks/web-servers-- tanzu-buildpacks/nodejs-- tanzu-buildpacks/python
+## Use the default builder to deploy an app
-## Build apps using a custom builder
+In Enterprise tier, the `default` builder includes all the language family buildpacks supported in Azure Spring Apps, so you can use it to build polyglot apps.
-Besides the `default` builder, you can also create custom builders with the provided buildpacks.
-
-All the builders configured in a Spring Cloud Service instance are listed in the **Build Service** section under **VMware Tanzu components**.
--
-Select **Add** to create a new builder. The image below shows the resources you should use to create the custom builder.
--
-You can also edit a custom builder when the builder isn't used in a deployment. You can update the buildpacks or the [OS Stack](https://docs.pivotal.io/tanzu-buildpacks/stacks.html), but the builder name is read only.
--
-You can delete any custom builder when the builder isn't used in a deployment, but the `default` builder is read only.
-
-When you deploy an app, you can build the app by specifying a specific builder in the command:
+The `default` builder is read only, so you can't edit or delete it. When you deploy an app, if you don't specify the builder, the `default` builder will be used, making the following two commands equivalent.
```azurecli
az spring app deploy \
    --name <app-name> \
- --builder <builder-name> \
    --artifact-path <path-to-your-JAR-file>
```
-If the builder isn't specified, the `default` builder will be used. The builder is a resource that continuously contributes to your deployments. The builder provides the latest runtime images and latest buildpacks, including the latest APM agents and so on. When you use a builder to deploy the app, the builder and the bindings under the builder aren't allowed to edit and delete. To apply changes to a builder, save the configuration as a new builder. To delete a builder, remove the deployments that use the builder first.
-
-You can also configure the build environment and build resources by using the following command:
```azurecli
az spring app deploy \
    --name <app-name> \
- --build-env <key1=value1>, <key2=value2> \
- --build-cpu <build-cpu-size> \
- --build-memory <build-memory-size> \
- --builder <builder-name> \
- --artifact-path <path-to-your-JAR-file>
+ --artifact-path <path-to-your-JAR-file> \
+ --builder default
```
-If you're using the `tanzu-buildpacks/java-azure` buildpack, we recommend that you set the `BP_JVM_VERSION` environment variable in the `build-env` argument.
-
-When you use a custom builder in an app deployment, the builder can't make edits and deletions. If you want to change the configuration, create a new builder. Use the new builder to deploy the app.
-
-After you deploy the app with the new builder, the deployment is linked to the new builder. You can then migrate the deployments under the previous builder to the new builder, and make edits and deletions.
-
-## Real-time build logs
-
-A build task will be triggered when an app is deployed from an Azure CLI command. Build logs are streamed in real time as part of the CLI command output. For information on using build logs to diagnose problems, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md) .
-
-## Buildpack bindings
-
-You can configure Kpack Images with Service Bindings as described in the [Cloud Native Buildpacks Bindings specification](https://github.com/buildpacks/spec/blob/adbc70f5672e474e984b77921c708e1475e163c1/extensions/bindings.md). Azure Spring Apps Enterprise tier uses Service Bindings to integrate with [Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html). For example, we use Binding to integrate [Azure Application Insights](../azure-monitor/app/app-insights-overview.md) using the [Paketo Azure Application Insights Buildpack](https://github.com/paketo-buildpacks/azure-application-insights).
-
-Currently, buildpack binding only supports binding the buildpacks listed below. Follow the documentation links listed under each type to configure the properties and secrets for buildpack binding.
--- ApplicationInsights-
- - [Monitor Apps with Application Insights](./how-to-application-insights.md).
--- NewRelic-
- - [New Relic Partner Buildpack](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html#new-relic).
- - [New Relic Environment Variables](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
--- Dynatrace-
- - [Dynatrace Partner Buildpack](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html#dynatrace).
- - [Determine the values for the required environment variables](https://www.dynatrace.com/support/help/shortlink/azure-spring#envvar).
--- AppDynamics-
- - [AppDynamic Partner Buildpack](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html#appdynamics).
- - [Configure Using the Environment Variables](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties).
--- ElasticAPM-
- - [ElasticAPM Partner Buildpack](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html#elastic-apm).
- - [Elastic Configuration](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html).
-
-Not all Tanzu Buildpacks support all service binding types. The following table shows the binding types that are supported by Tanzu Buildpacks and Tanzu Partner Buildpacks.
+For more information about deploying a polyglot app, see [How to deploy polyglot apps in Azure Spring Apps Enterprise tier](how-to-enterprise-deploy-polyglot-apps.md).
-|Buildpack|ApplicationInsights|NewRelic|AppDynamics|Dynatrace|ElasticAPM|
-||-|--|--||-|
-|Java |✅|✅|✅|✅|✅|
-|Dotnet|❌|❌|❌|✅|❌|
-|Go |❌|❌|❌|✅|❌|
-|Python|❌|❌|❌|❌|❌|
-|NodeJS|❌|✅|✅|✅|✅|
-|[WebServers](how-to-enterprise-deploy-static-file.md)|❌|❌|❌|✅|❌|
+## Configure APM integration and CA certificates
-To edit service bindings for the builder, select **Edit**. After a builder is bound to the service bindings, the service bindings are enabled for an app deployed with the builder.
+By using Tanzu Partner Buildpacks and CA Certificates Buildpack, Enterprise tier provides a simplified configuration experience to support application performance monitor (APM) integration and certificate authority (CA) certificates integration scenarios for polyglot apps. For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-intergration-and-ca-certificates.md).
+## Manage custom builders
-> [!NOTE]
-> When configuring environment variables for APM bindings, use key names without a prefix. For example, do not use a `DT_` prefix for a Dynatrace binding. Tanzu APM buildpacks will transform the key name to the original environment variable name with a prefix.
-
-## Manage buildpack bindings
-
-You can manage buildpack bindings with the Azure portal or the Azure CLI.
-
-> [!NOTE]
-> You can only manage buildpack bindings when the parent builder isn't used by any app deployments. To create, update, or delete buildpack bindings of an existing builder, create a new builder and configure new buildpack bindings there.
-
-### [Portal](#tab/azure-portal)
-
-### View buildpack bindings using the Azure portal
-
-Follow these steps to view the current buildpack bindings:
-
-1. Open the [Azure portal](https://portal.azure.com/?AppPlatformExtension=entdf#home).
-1. Select **Build Service**.
-1. Select **Edit** under the **Bindings** column to view the bindings configured under a builder.
-
--
-### Create a buildpack binding
-
-To create a buildpack binding, select **Unbound** on the **Edit Bindings** page, specify binding properties, and then select **Save**.
-
-### Unbind a buildpack binding
-
-You can unbind a buildpack binding by using the **Unbind binding** command, or by editing binding properties.
-
-To use the **Unbind binding** command, select the **Bound** hyperlink, and then select **Unbind binding**.
--
-To unbind a buildpack binding by editing binding properties, select **Edit Binding**, and then select **Unbind**.
--
-When you unbind a binding, the bind status changes from **Bound** to **Unbound**.
-
-### [Azure CLI](#tab/azure-cli)
-
-### View buildpack bindings using the Azure CLI
+As an alternative to the `default` builder, you can create custom builders with the provided buildpacks.
-View the current buildpack bindings using the following command:
+All the builders configured in an Azure Spring Apps service instance are listed in the **Build Service** section under **VMware Tanzu components**, as shown in the following screenshot:
-```azurecli
-az spring build-service builder buildpack-binding list \
- --resource-group <your-resource-group-name> \
- --service <your-service-instance-name> \
- --builder-name <your-builder-name>
-```
-
-### Create a binding
-
-Use this command to change the binding from **Unbound** to **Bound** status:
-```azurecli
-az spring build-service builder buildpack-binding create \
- --resource-group <your-resource-group-name> \
- --service <your-service-instance-name> \
- --name <your-buildpack-binding-name> \
- --builder-name <your-builder-name> \
- --type <your-binding-type> \
- --properties a=b c=d \
- --secrets e=f g=h
-```
+Select **Add** to create a new builder. The following screenshot shows the resources you should use to create the custom builder. The [OS Stack](https://docs.pivotal.io/tanzu-buildpacks/stacks.html) includes `Bionic Base`, `Bionic Full`, `Jammy Base`, and `Jammy Full`. Bionic is based on `Ubuntu 18.04 (Bionic Beaver)` and Jammy is based on `Ubuntu 22.04 (Jammy Jellyfish)`. For more information, see [Ubuntu Stacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-stacks.html#ubuntu-stacks) in the VMware documentation.
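
You can also script builder creation. A hedged sketch; the `az spring build-service builder create` command and its `--builder-file` flag are assumptions to verify against the CLI reference, and `builder.json` is a hypothetical file defining the OS stack and buildpack groups:

```azurecli
az spring build-service builder create \
    --resource-group <your-resource-group-name> \
    --service <your-service-instance-name> \
    --name <your-builder-name> \
    --builder-file builder.json
```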
-For information on the `properties` and `secrets` parameters for your buildpack, see the [Buildpack bindings](#buildpack-bindings) section.
-### Show the details for a specific binding
+You can also edit a custom builder when the builder isn't used in a deployment. You can update the buildpacks or the OS Stack, but the builder name is read only.
-You can view the details of a specific binding using the following command:
-```azurecli
-az spring build-service builder buildpack-binding show \
- --resource-group <your-resource-group-name> \
- --service <your-service-instance-name> \
- --name <your-buildpack-binding-name> \
- --builder-name <your-builder-name>
-```
+You can delete any custom builder when the builder isn't used in a deployment.
-### Edit the properties of a binding
+## Build apps using a custom builder
-You can change a binding's properties using the following command:
+When you deploy an app, you can use the following command to build the app by specifying a specific builder:
```azurecli
-az spring build-service builder buildpack-binding set \
- --resource-group <your-resource-group-name> \
- --service <your-service-instance-name> \
- --name <your-buildpack-binding-name> \
- --builder-name <your-builder-name> \
- --type <your-binding-type> \
- --properties a=b c=d \
- --secrets e=f2 g=h
+az spring app deploy \
+ --name <app-name> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
```
-For more information on the `properties` and `secrets` parameters for your buildpack, see the [Buildpack bindings](#buildpack-bindings) section.
+The builder is a resource that continuously contributes to your deployments. The builder provides the latest runtime images and latest buildpacks.
-### Delete a binding
+You can't delete a builder while active deployments are still being built by it. To delete such a builder, save the configuration as a new builder first. After you deploy apps with the new builder, the deployments are linked to the new builder. You can then migrate the deployments under the previous builder to the new builder, and then delete the original builder.
-Use the following command to change the binding status from **Bound** to **Unbound**.
+For more information about deploying a polyglot app, see [How to deploy polyglot apps in Azure Spring Apps Enterprise tier](how-to-enterprise-deploy-polyglot-apps.md).
-```azurecli
-az spring build-service builder buildpack-binding delete \
- --resource-group <your-resource-group-name> \
- --service <your-service-instance-name> \
- --name <your-buildpack-binding-name> \
- --builder-name <your-builder-name>
-```
+## Real-time build logs
-
+A build task will be triggered when an app is deployed from an Azure CLI command. Build logs are streamed in real time as part of the CLI command output. For information on using build logs to diagnose problems, see [Analyze logs and metrics with diagnostics settings](./diagnostic-services.md).
## Next steps
spring-apps How To Enterprise Configure Apm Intergration And Ca Certificates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-configure-apm-intergration-and-ca-certificates.md
+
+ Title: How to configure APM integration and CA certificates
+
+description: How to configure APM integration and CA certificates
++++ Last updated : 01/13/2023+++
+# How to configure APM integration and CA certificates
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to configure application performance monitor (APM) integration and certificate authority (CA) certificates in Azure Spring Apps Enterprise tier.
+
+## Prerequisites
+
+- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+
+## Supported scenarios - APM and CA certificates integration
+
+Azure Spring Apps Enterprise tier uses buildpack bindings to integrate with [Tanzu Partner Buildpacks](https://docs.pivotal.io/tanzu-buildpacks/partner-integrations/partner-integration-buildpacks.html) and other Cloud Native Buildpacks like the [ca-certificates buildpack](https://github.com/paketo-buildpacks/ca-certificates).
+
+Currently, the following APM types and CA certificates are supported:
+
+- [ApplicationInsights](#use-application-insights)
+- [Dynatrace](#use-dynatrace)
+- [AppDynamics](#use-appdynamics)
+- [NewRelic](#use-new-relic)
+- [ElasticAPM](#use-elasticapm)
+- [CA certificates](#use-ca-certificates)
+
+CA certificates are supported by all language family buildpacks, but not every buildpack supports all APM types. The following table shows the binding types supported by Tanzu language family buildpacks.
+
+| Buildpack | ApplicationInsights | NewRelic | AppDynamics | Dynatrace | ElasticAPM |
+|-----------|---------------------|----------|-------------|-----------|------------|
+| Java | ✅ | ✅ | ✅ | ✅ | ✅ |
+| Dotnet | ❌ | ❌ | ❌ | ✅ | ❌ |
+| Go | ❌ | ❌ | ❌ | ✅ | ❌ |
+| Python | ❌ | ❌ | ❌ | ❌ | ❌ |
+| NodeJS | ❌ | ✅ | ✅ | ✅ | ✅ |
+| [WebServers](how-to-enterprise-deploy-static-file.md) | ❌ | ❌ | ❌ | ✅ | ❌ |
+
+### Use Application Insights
+
+The following languages are supported:
+
+- Java
+
+The following list shows the required environment variables:
+
+- `connection-string`
+- `sampling-percentage`
+
+Uppercase keys are allowed, and you can also replace `-` with `_`.
+
+For other supported environment variables, see [Application Insights Overview](/azure/azure-monitor/app/app-insights-overview?tabs=net).
+
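+As a hedged illustration, these variables can be supplied through the generic `buildpack-binding create` command covered later in this article. The placeholder values, and the split between `--properties` and `--secrets` (shown here with `connection-string` treated as a secret), are assumptions:
+
+```azurecli
+az spring build-service builder buildpack-binding create \
+    --resource-group <your-resource-group-name> \
+    --service <your-service-instance-name> \
+    --name <your-buildpack-binding-name> \
+    --builder-name <your-builder-name> \
+    --type ApplicationInsights \
+    --properties sampling-percentage=10 \
+    --secrets connection-string=<your-connection-string>
+```
+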
+### Use Dynatrace
+
+The following languages are supported:
+
+- Java
+- .NET
+- Go
+- Node.js
+- WebServers
+
+The following list shows the required environment variables:
+
+- `api-url` or `environment-id` (used in build step)
+- `api-token` (used in build step)
+- `TENANT`
+- `TENANTTOKEN`
+- `CONNECTION_POINT`
+
+For other supported environment variables, see [Dynatrace Environment Variables](https://www.dynatrace.com/support/help/shortlink/azure-spring#envvar).
+
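+The following sketch shows one way to pass these variables through a buildpack binding; whether each value belongs in `--properties` or `--secrets` (here `api-token` is treated as a secret) is an assumption:
+
+```azurecli
+az spring build-service builder buildpack-binding create \
+    --resource-group <your-resource-group-name> \
+    --service <your-service-instance-name> \
+    --name <your-buildpack-binding-name> \
+    --builder-name <your-builder-name> \
+    --type Dynatrace \
+    --properties api-url=<your-api-url> TENANT=<your-tenant> TENANTTOKEN=<your-tenant-token> CONNECTION_POINT=<your-connection-point> \
+    --secrets api-token=<your-api-token>
+```
+
+New Relic, Elastic APM, and AppDynamics bindings follow the same pattern with their respective `--type` values and environment variables.
+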
+### Use New Relic
+
+The following languages are supported:
+
+- Java
+- Node.js
+
+The following list shows the required environment variables:
+
+- `license_key`
+- `app_name`
+
+For other supported environment variables, see [New Relic Environment Variables](https://docs.newrelic.com/docs/apm/agents/java-agent/configuration/java-agent-configuration-config-file/#Environment_Variables).
+
+### Use ElasticAPM
+
+The following languages are supported:
+
+- Java
+- Node.js
+
+The following list shows the required environment variables:
+
+- `service_name`
+- `application_packages`
+- `server_url`
+
+For other supported environment variables, see [Elastic Environment Variables](https://www.elastic.co/guide/en/apm/agent/java/master/configuration.html).
+
+### Use AppDynamics
+
+The following languages are supported:
+
+- Java
+- Node.js
+
+The following list shows the required environment variables:
+
+- `agent_application_name`
+- `agent_tier_name`
+- `agent_node_name`
+- `agent_account_name`
+- `agent_account_access_key`
+- `controller_host_name`
+- `controller_ssl_enabled`
+- `controller_port`
+
+For other supported environment variables, see [AppDynamics Environment Variables](https://docs.appdynamics.com/21.11/en/application-monitoring/install-app-server-agents/java-agent/monitor-azure-spring-cloud-with-java-agent#MonitorAzureSpringCloudwithJavaAgent-ConfigureUsingtheEnvironmentVariablesorSystemProperties).
+
+### Use CA certificates
+
+CA certificates use the [ca-certificates buildpack](https://github.com/paketo-buildpacks/ca-certificates) to provide CA certificates to the system trust store at build and runtime.
+
+In Azure Spring Apps Enterprise tier, CA certificates are managed on the **Public Key Certificates** tab of the **TLS/SSL settings** page in the Azure portal, as shown in the following screenshot:
++
+You can configure CA certificates on the **Edit binding** page. Certificates in the `succeeded` state are shown in the **CA Certificates** list.
++
+## Manage APM integration and CA certificates in Azure Spring Apps
+
+In the current context, a buildpack binding means either a credential configuration for one APM type, or a CA certificates configuration for the CA Certificates type. For APM integration, follow the earlier instructions to configure the necessary environment variables or secrets for your APM.
+
+To edit the buildpack bindings for a builder, select **Edit**. After a builder is bound to buildpack bindings, the bindings are enabled for any app deployed with that builder.
++
+> [!NOTE]
+> When configuring environment variables for APM bindings, use key names without a prefix. For example, do not use a `DT_` prefix for a Dynatrace binding or `APPLICATIONINSIGHTS_` for Application Insights. Tanzu APM buildpacks will transform the key name to the original environment variable name with a prefix.
+
+You can manage buildpack bindings with the Azure portal or the Azure CLI.
+
+### [Azure portal](#tab/azure-portal)
+
+### View buildpack bindings using the Azure portal
+
+Follow these steps to view the current buildpack bindings:
+
+1. Open the [Azure portal](https://portal.azure.com/?AppPlatformExtension=entdf#home).
+1. Select **Build Service**.
+1. Select **Edit** under the **Bindings** column to view the bindings configured under a builder.
+++
+### Create a buildpack binding
+
+To create a buildpack binding, select **Unbound** on the **Edit Bindings** page, specify the binding properties, and then select **Save**.
+
+### Unbind a buildpack binding
+
+You can unbind a buildpack binding by using the **Unbind binding** command, or by editing the binding properties.
+
+To use the **Unbind binding** command, select the **Bound** hyperlink, and then select **Unbind binding**.
++
+To unbind a buildpack binding by editing binding properties, select **Edit Binding**, and then select **Unbind**.
++
+When you unbind a binding, the bind status changes from **Bound** to **Unbound**.
+
+### [Azure CLI](#tab/azure-cli)
+
+### View buildpack bindings using the Azure CLI
+
+View the current buildpack bindings by using the following command:
+
+```azurecli
+az spring build-service builder buildpack-binding list \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --builder-name <your-builder-name>
+```
+
+### Create a binding
+
+Use the following command to change the binding status from *Unbound* to *Bound*:
+
+```azurecli
+az spring build-service builder buildpack-binding create \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name> \
+ --type <your-binding-type> \
+ --properties a=b c=d \
+ --secrets e=f g=h
+```
+
+For information on the `properties` and `secrets` parameters for your buildpack, see the [Supported Scenarios - APM and CA Certificates Integration](#supported-scenariosapm-and-ca-certificates-integration) section.
+
+### Show the details for a specific binding
+
+You can view the details of a specific binding by using the following command:
+
+```azurecli
+az spring build-service builder buildpack-binding show \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name>
+```
+
+### Edit the properties of a binding
+
+You can change a binding's properties by using the following command:
+
+```azurecli
+az spring build-service builder buildpack-binding set \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name> \
+ --type <your-binding-type> \
+ --properties a=b c=d \
+ --secrets e=f2 g=h
+```
+
+For more information on the `properties` and `secrets` parameters for your buildpack, see the [Supported Scenarios - APM and CA Certificates Integration](#supported-scenariosapm-and-ca-certificates-integration) section.
+
+### Delete a binding
+
+Use the following command to change the binding status from *Bound* to *Unbound*.
+
+```azurecli
+az spring build-service builder buildpack-binding delete \
+ --resource-group <your-resource-group-name> \
+ --service <your-service-instance-name> \
+ --name <your-buildpack-binding-name> \
+ --builder-name <your-builder-name>
+```
+++
+## Next steps
+
+- [Azure Spring Apps](index.yml)
spring-apps How To Enterprise Deploy Non Java Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-non-java-apps.md
- Title: How to Deploy Non-Java Applications in Azure Spring Apps Enterprise Tier-
-description: How to Deploy Non-Java Applications in Azure Spring Apps Enterprise Tier
---- Previously updated : 02/09/2022---
-# How to deploy non-Java applications in Azure Spring Apps
-
-> [!NOTE]
-> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
-
-**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-
-This article shows you how to deploy your non-java application to Azure Spring Apps Enterprise tier.
-
-## Prerequisites
--- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).-- One or more applications running in Azure Spring Apps. For more information on creating apps, see [How to Deploy Spring Boot applications from Azure CLI](./how-to-launch-from-source.md).-- [Azure CLI](/cli/azure/install-azure-cli), version 2.0.67 or higher.-- Your application source code.-
-## Deploy your application
-
-To deploy from a source code folder your local machine, see [Non-Java application restrictions](#application-restriction).
-
-To deploy the source code folder to an active deployment, use the following command:
-
-```azurecli
-az spring app deploy
- --resource-group <your-resource-group-name> \
- --service <your-Azure-Spring-Apps-name> \
- --name <your-app-name> \
- --source-path <path-to-source-code>
-```
-
-## Application restriction
-
-Your application must conform to the following restrictions:
--- Your application must listen on port 8080. The service checks the port on TCP for readiness and liveness.-- If your source code contains a package management folder, such as *node_modules*, ensure the folder contains all the dependencies. Otherwise, remove it and let Azure Spring Apps install it.-- To see whether your source code language is supported and the feature is provided, see the [Support Matrix](#support-matrix) section.-
-## Support matrix
-
-The following table indicates the features supported for each language.
-
-| Feature | Java | Python | Node | .NET Core | Go |[Static Files](how-to-enterprise-deploy-static-file.md)|
-|--||--||--|-|--|
-| App lifecycle management | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Assign endpoint | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Azure Monitor | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Out of box APM integration | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| Blue/green deployment | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Custom domain | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Scaling - auto scaling | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Managed Identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| API portal for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Spring Cloud Gateway for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Application Configuration Service for VMware Tanzu® | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| Application Live View for VMware Tanzu® | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| Application Accelerator for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| VMware Tanzu® Service Registry | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| VNET | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Outgoing IP Address | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| E2E TLS | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| Bring your own storage | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Integrate service binding with Resource Connector | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
-| Availability Zone | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| App Lifecycle events | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Automate app deployments with Terraform and Azure Pipeline Task | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Soft Deletion | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-| SLA | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
-
-## Next steps
--- [Azure Spring Apps](index.yml)
spring-apps How To Enterprise Deploy Polyglot Apps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-polyglot-apps.md
+
+ Title: How to deploy polyglot apps in Azure Spring Apps Enterprise tier
+description: Shows you how to deploy polyglot apps in Azure Spring Apps Enterprise tier.
++++ Last updated : 01/13/2023+++
+# How to deploy polyglot apps in Azure Spring Apps Enterprise tier
+
+> [!NOTE]
+> Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams.
+
+**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
+
+This article shows you how to deploy polyglot apps in Azure Spring Apps Enterprise tier, and how these polyglot apps can use the build service features provided by buildpacks.
+
+## Prerequisites
+
+- An already provisioned Azure Spring Apps Enterprise tier instance. For more information, see [Quickstart: Build and deploy apps to Azure Spring Apps using the Enterprise tier](quickstart-deploy-apps-enterprise.md).
+- [Azure CLI](/cli/azure/install-azure-cli), version 2.43.0 or higher.
+
+## Deploy a polyglot application
+
+When you create an Enterprise tier instance of Azure Spring Apps, you'll be provided with a `default` builder with one of the following supported [language family buildpacks](https://docs.vmware.com/en/VMware-Tanzu-Buildpacks/services/tanzu-buildpacks/GUID-index.html):
+
+- [tanzu-buildpacks/java-azure](https://network.tanzu.vmware.com/products/tanzu-java-azure-buildpack)
+- [tanzu-buildpacks/dotnet-core](https://network.tanzu.vmware.com/products/tanzu-dotnet-core-buildpack)
+- [tanzu-buildpacks/go](https://network.tanzu.vmware.com/products/tanzu-go-buildpack)
+- [tanzu-buildpacks/web-servers](https://network.tanzu.vmware.com/products/tanzu-web-servers-buildpack/)
+- [tanzu-buildpacks/nodejs](https://network.tanzu.vmware.com/products/tanzu-nodejs-buildpack)
+- [tanzu-buildpacks/python](https://network.tanzu.vmware.com/products/tanzu-python-buildpack/)
+
+These buildpacks support deployment from source code or artifact for Java, .NET Core, Go, web static files, Node.js, and Python apps. You can also create a custom builder by specifying buildpacks and stack. For more information, see the [Manage custom builders](how-to-enterprise-build-service.md#manage-custom-builders) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md).
+
+When you deploy a polyglot app, choose a builder to build the app, as shown in the following example. If you don't specify a builder, the `default` builder is used.
+
+```azurecli
+az spring app deploy \
+ --name <app-name> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+If you deploy the app with an artifact file, use `--artifact-path` to specify the file path. Both JAR and WAR files are acceptable.
+
+If the Azure CLI detects the WAR package as a thin JAR, use `--disable-validation` to disable validation.
+
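+For example, the following sketch (placeholder values assumed) deploys a WAR package that the CLI flags as a thin JAR, with validation disabled:
+
+```azurecli
+az spring app deploy \
+    --name <app-name> \
+    --builder <builder-name> \
+    --artifact-path <path-to-your-WAR-file> \
+    --disable-validation
+```
+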
+If you deploy the source code folder to an active deployment, use `--source-path` to specify the folder, as shown in the following example:
+
+```azurecli
+az spring app deploy \
+ --name <app-name> \
+ --builder <builder-name> \
+ --source-path <path-to-source-code>
+```
+
+You can also configure the build environment to build the app. For example, in a Java application, you can specify the JDK version using the `BP_JVM_VERSION` build environment variable.
+
+To specify build environments, use `--build-env`, as shown in the following example. The available build environment variables are described later in this article.
+
+```azurecli
+az spring app deploy \
+ --name <app-name> \
+ --build-env <key1=value1> <key2=value2> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+Additionally, for each build, you can specify the build resources, as shown in the following example.
+
+```azurecli
+az spring app deploy \
+ --name <app-name> \
+ --build-env <key1=value1> <key2=value2> \
+ --build-cpu <build-cpu-size> \
+ --build-memory <build-memory-size> \
+ --builder <builder-name> \
+ --artifact-path <path-to-your-JAR-file>
+```
+
+The default build CPU/memory resource is `1 vCPU, 2 Gi`. If your app needs a smaller or larger amount of memory, then use `--build-memory` to specify the memory resources; for example, `500Mi`, `1Gi`, `2Gi`, and so on. If your app needs a smaller or larger amount of CPU resources, then use `--build-cpu` to specify the CPU resources; for example, `500m`, `1`, `2`, and so on.
+
+The CPU and memory resources are limited by the build service agent pool size. For more information, see the [Build agent pool](how-to-enterprise-build-service.md#build-agent-pool) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md). The sum of the resources used by concurrent build tasks can't exceed the agent pool size.
+
+The number of parallel build tasks depends on the agent pool size and each build's resources. For example, if the build resource is the default `1 vCPU, 2 Gi` and the agent pool size is `6 vCPU, 12 Gi`, then up to six builds can run in parallel (6 vCPU / 1 vCPU = 6, and 12 Gi / 2 Gi = 6).
+
+Any additional build tasks are blocked until running builds complete and resources are released.
+
+Your application must listen on port 8080. Spring Boot applications will override the `SERVER_PORT` to use 8080 automatically.
+
+The following table indicates the features supported for each language.
+
+| Feature | Java | Python | Node | .NET Core | Go |[Static Files](how-to-enterprise-deploy-static-file.md)|
+|---------|------|--------|------|-----------|----|----------------|
+| App lifecycle management | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Assign endpoint | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Azure Monitor | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Out of box APM integration | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Blue/green deployment | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Custom domain | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Scaling - auto scaling | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Scaling - manual scaling (in/out, up/down) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Managed Identity | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| API portal for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Spring Cloud Gateway for VMware Tanzu® | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Application Configuration Service for VMware Tanzu® | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| VMware Tanzu® Service Registry | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Virtual network | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Outgoing IP Address | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| E2E TLS | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Advanced troubleshooting - thread/heap/JFR dump | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Bring your own storage | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Integrate service binding with Resource Connector | ✔️ | ❌ | ❌ | ❌ | ❌ | ❌ |
+| Availability Zone | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| App Lifecycle events | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Reduced app size - 0.5 vCPU and 512 MB | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Automate app deployments with Terraform and Azure Pipeline Task | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Soft Deletion | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| Interactive diagnostic experience (AppLens-based) | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+| SLA | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ | ✔️ |
+
+For more information about the supported configurations for different language apps, see the corresponding section later in this article.
+
+### Deploy Java applications
+
+The buildpack for deploying Java applications is [tanzu-buildpacks/java-azure](https://network.tanzu.vmware.com/products/tanzu-java-azure-buildpack).
+
+The following table lists the features supported in Azure Spring Apps:
+
+| Feature description | Comment | Environment variable | Usage |
+|---------------------|---------|----------------------|-------|
+| Provides the Microsoft OpenJDK. | Configures the JVM version. The default JDK version is 11. We currently support only JDK 8, 11, and 17. | `BP_JVM_VERSION` | `--build-env BP_JVM_VERSION=11.*` |
+| | Runtime env. Configures whether Java Native Memory Tracking (NMT) is enabled. The default value is *true*. Not supported in JDK 8. | `BPL_JAVA_NMT_ENABLED` | `--env BPL_JAVA_NMT_ENABLED=true` |
+| | Configures the level of detail for Java Native Memory Tracking (NMT) output. The default value is *summary*. Set to *detail* for detailed NMT output. | `BPL_JAVA_NMT_LEVEL` | `--env BPL_JAVA_NMT_LEVEL=summary` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Application Insights, Dynatrace, Elastic, New Relic, or AppDynamics APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Deploy WAR package with Apache Tomcat or TomEE. | Set the application server to use. Set to *tomcat* to use Tomcat and *tomee* to use TomEE. The default value is *tomcat*. | `BP_JAVA_APP_SERVER` | `--build-env BP_JAVA_APP_SERVER=tomee` |
+| Support Spring Boot applications. | Indicates whether to contribute Spring Cloud Bindings support for the image at build time. The default value is *false*. | `BP_SPRING_CLOUD_BINDINGS_DISABLED` | `--build-env BP_SPRING_CLOUD_BINDINGS_DISABLED=false` |
+| | Indicates whether to auto-configure Spring Boot environment properties from bindings at runtime. This feature requires Spring Cloud Bindings to have been installed at build time or it will do nothing. The default value is *false*. | `BPL_SPRING_CLOUD_BINDINGS_DISABLED` | `--env BPL_SPRING_CLOUD_BINDINGS_DISABLED=false` |
+| Support building Maven-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_MAVEN_BUILT_MODULE` | `--build-env BP_MAVEN_BUILT_MODULE=./gateway` |
+| Support building Gradle-based applications from source. | Used for a multi-module project. Indicates the module to find the application artifact in. Defaults to the root module (empty). | `BP_GRADLE_BUILT_MODULE` | `--build-env BP_GRADLE_BUILT_MODULE=./gateway` |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+| Integrate JProfiler agent. | Indicates whether to integrate JProfiler support. The default value is *false*. | `BP_JPROFILER_ENABLED` | build phase: <br>`--build-env BP_JPROFILER_ENABLED=true` <br> runtime phase: <br> `--env BPL_JPROFILER_ENABLED=true` <br> `BPL_JPROFILER_PORT=<port>` (optional, defaults to *8849*) <br> `BPL_JPROFILER_NOWAIT=true` (optional. Indicates whether the JVM will execute before JProfiler has attached. The default value is *true*.) |
+| | Indicates whether to enable JProfiler support at runtime. The default value is *false*. | `BPL_JPROFILER_ENABLED` | `--env BPL_JPROFILER_ENABLED=false` |
+| | Indicates which port the JProfiler agent will listen on. The default value is *8849*. | `BPL_JPROFILER_PORT` | `--env BPL_JPROFILER_PORT=8849` |
+| | Indicates whether the JVM will execute before JProfiler has attached. The default value is *true*. | `BPL_JPROFILER_NOWAIT` | `--env BPL_JPROFILER_NOWAIT=true` |
+| Integrate [JRebel](https://www.jrebel.com/) agent. | The application should contain a *rebel-remote.xml* file. | N/A | N/A |
+| AES encrypts an application at build time and then decrypts it at launch time. | The AES key to use at build time. | `BP_EAR_KEY` | `--build-env BP_EAR_KEY=<value>` |
+| | The AES key to use at run time. | `BPL_EAR_KEY` | `--env BPL_EAR_KEY=<value>` |
+| Integrate [AspectJ Weaver](https://www.eclipse.org/aspectj/) agent. | `<APPLICATION_ROOT>`/*aop.xml* exists and *aspectj-weaver.\*.jar* exists. | N/A | N/A |
+
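+Tying the table together, the following hedged sketch deploys a WAR file on TomEE with JDK 17 by combining two of the build environment variables above; the placeholder paths and values are illustrative assumptions:
+
+```azurecli
+az spring app deploy \
+    --name <app-name> \
+    --build-env BP_JVM_VERSION=17.* BP_JAVA_APP_SERVER=tomee \
+    --artifact-path <path-to-your-WAR-file>
+```
+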
+### Deploy .NET applications
+
+The buildpack for deploying .NET applications is [tanzu-buildpacks/dotnet-core](https://network.tanzu.vmware.com/products/tanzu-dotnet-core-buildpack).
+
+The following table lists the features supported in Azure Spring Apps:
+
+| Feature description | Comment | Environment variable | Usage |
+|---------------------|---------|----------------------|-------|
+| Configure the .NET Core runtime version. | Supports *Net6.0* and *Net7.0*. <br> You can configure through a *runtimeconfig.json* or MSBuild Project file. <br> The default runtime is *6.0.\**. | N/A | N/A |
+| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+
+### Deploy Python applications
+
+The buildpack for deploying Python applications is [tanzu-buildpacks/python](https://network.tanzu.vmware.com/products/tanzu-python-buildpack/).
+
+The following table lists the features supported in Azure Spring Apps:
+
+| Feature description | Comment | Environment variable | Usage |
+|---------------------|---------|----------------------|-------|
+| Specify a Python version. | Supports *3.7.\**, *3.8.\**, *3.9.\**, *3.10.\**. The default value is *3.10.\**.<br> You can specify the version via the `BP_CPYTHON_VERSION` environment variable during build. | `BP_CPYTHON_VERSION` | `--build-env BP_CPYTHON_VERSION=3.8.*` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+
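+For example, a minimal sketch that pins the Python version at build time, assuming deployment from source:
+
+```azurecli
+az spring app deploy \
+    --name <app-name> \
+    --build-env BP_CPYTHON_VERSION=3.8.* \
+    --source-path <path-to-source-code>
+```
+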
+### Deploy Go applications
+
+The buildpack for deploying Go applications is [tanzu-buildpacks/go](https://network.tanzu.vmware.com/products/tanzu-go-buildpack).
+
+The following table lists the features supported in Azure Spring Apps:
+
+| Feature description | Comment | Environment variable | Usage |
+|---------------------|---------|----------------------|-------|
+| Specify a Go version. | Supports *1.18.\**, *1.19.\**. The default value is *1.18.\**.<br> The Go version is automatically detected from the app's *go.mod* file. You can override this version by setting the `BP_GO_VERSION` environment variable at build time. | `BP_GO_VERSION` | `--build-env BP_GO_VERSION=1.19.*` |
+| Configure multiple targets. | Specifies multiple targets for a Go build. | `BP_GO_TARGETS` | `--build-env BP_GO_TARGETS=./some-target:./other-target` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Dynatrace APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+
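+For example, a minimal sketch that overrides the detected Go version and sets multiple build targets, assuming deployment from source:
+
+```azurecli
+az spring app deploy \
+    --name <app-name> \
+    --build-env BP_GO_VERSION=1.19.* BP_GO_TARGETS=./some-target:./other-target \
+    --source-path <path-to-source-code>
+```
+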
+### Deploy Node.js applications
+
+The buildpack for deploying Node.js applications is [tanzu-buildpacks/nodejs](https://network.tanzu.vmware.com/products/tanzu-nodejs-buildpack).
+
+The following table lists the features supported in Azure Spring Apps:
+
+| Feature description | Comment | Environment variable | Usage |
+|---------------------|---------|----------------------|-------|
+| Specify a Node version. | Supports *12.\**, *14.\**, *16.\**, *18.\**. The default value is *16.\**. <br>You can specify the Node version via an *.nvmrc* or *.node-version* file at the application directory root. `BP_NODE_VERSION` overrides the settings. | `BP_NODE_VERSION` | `--build-env BP_NODE_VERSION=18.*` |
+| Add CA certificates to the system trust store at build and runtime. | See the [Use CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md#use-ca-certificates) section of [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Integrate with Dynatrace, Elastic, New Relic, or AppDynamics APM agent. | See [How to configure APM integration and CA certificates](./how-to-enterprise-configure-apm-intergration-and-ca-certificates.md). | N/A | N/A |
+| Enable configuration of labels on the created image. | Configures both OCI-specified labels with short environment variable names and arbitrary labels using a space-delimited syntax in a single environment variable. | `BP_IMAGE_LABELS` <br> `BP_OCI_AUTHORS` <br> See more environment variables [here](https://github.com/paketo-buildpacks/image-labels). | `--build-env BP_OCI_AUTHORS=<value>` |
+
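+For example, a minimal sketch that pins the Node.js version at build time, assuming deployment from source:
+
+```azurecli
+az spring app deploy \
+    --name <app-name> \
+    --build-env BP_NODE_VERSION=18.* \
+    --source-path <path-to-source-code>
+```
+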
+### Deploy WebServer applications
+
+The buildpack for deploying WebServer applications is [tanzu-buildpacks/web-servers](https://network.tanzu.vmware.com/products/tanzu-web-servers-buildpack/).
+
+For more information, see [Deploy web static files](how-to-enterprise-deploy-static-file.md).
+
+## Next steps
+
+- [Azure Spring Apps](index.yml)
spring-apps How To Enterprise Deploy Static File https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-enterprise-deploy-static-file.md
Title: Deploy static files in Azure Spring Apps Enterprise tier
+ Title: Deploy web static files
-description: Learn how to deploy static files in Azure Spring Apps Enterprise tier.
+description: Learn how to deploy web static files.
Last updated 10/19/2022
-# Deploy static files in Azure Spring Apps Enterprise tier
+# Deploy web static files
> [!NOTE] > Azure Spring Apps is the new name for the Azure Spring Cloud service. Although the service has a new name, you'll see the old name in some places for a while as we work to update assets such as screenshots, videos, and diagrams. **This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This article shows you how to deploy your static files to Azure Spring Apps Enterprise tier, leveraging Tanzu Web Servers buildpack. This approach is useful if you have applications that are purely for holding static files like HTML, CSS, or front-end applications built with the JavaScript framework of your choice. You can directly deploy these applications with an automatically-configured web server (HTTPD and NGINX) to serve those assets.
+This article shows you how to deploy your static files to Azure Spring Apps Enterprise tier using the Tanzu Web Servers buildpack. This approach is useful if you have applications that are purely for holding static files like HTML, CSS, or front-end applications built with the JavaScript framework of your choice. You can directly deploy these applications with an automatically configured web server (HTTPD and NGINX) to serve those assets.
## Prerequisites
The following environment variables aren't supported.
You can configure the web server by using a customized server configuration file. Your configuration file must conform to the restrictions described in the following table.
-| Configuration | Description | Nginx Configuration | Httpd Configuration |
-|-|-||-|
-| Listening port | Web server must listen on port 8080. The service checks the port on TCP for readiness and whether it's live. You must use the templated variable `PORT` in the configuration file. The appropriate port number is injected when the web server is launched. | `listen {{PORT}}` | `Listen "${PORT}"` |
-| Log path | Config log path to the console. | `access_log /dev/stdout`, `error_log stderr` | `ErrorLog /proc/self/fd/2` |
-| File path with write permission | Web server is granted write permission to the */tmp* directory. Configuring the full path requires write permission under the */tmp* directory. | For example: *client_body_temp_path /tmp/client_body_temp* | |
-| Maximum accepted body size of client request | Web server is behind the gateway. The maximum accepted body size of the client request is set to 500m in the gateway and the value for web server must be less than 500m. | `client_max_body_size` should be less than 500m. | `LimitRequestBody` should be less than 500m. |
+| Configuration | Description | Nginx Configuration | Httpd Configuration |
+|---------------|-------------|---------------------|---------------------|
+| Listening port | Web server must listen on port 8080. The service checks the port on TCP for readiness and whether it's live. You must use the templated variable `PORT` in the configuration file. The appropriate port number is injected when the web server is launched. | `listen {{PORT}}` | `Listen "${PORT}"` |
+| Log path | Config log path to the console. | `access_log /dev/stdout`, `error_log stderr` | `ErrorLog /proc/self/fd/2` |
+| File path with write permission | Web server is granted write permission to the */tmp* directory. Configuring the full path requires write permission under the */tmp* directory. | For example: *client_body_temp_path /tmp/client_body_temp* | |
+| Maximum accepted body size of client request | Web server is behind the gateway. The maximum accepted body size of the client request is set to 500m in the gateway, and the value for the web server must be less than 500m. | `client_max_body_size` should be less than 500m. | `LimitRequestBody` should be less than 500m. |
## Buildpack bindings Deploying static files to Azure Spring Apps Enterprise tier supports the Dynatrace buildpack binding. The `htpasswd` buildpack binding isn't supported.
-For more information, see the [Buildpack bindings](how-to-enterprise-build-service.md#buildpack-bindings) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md).
+For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-intergration-and-ca-certificates.md).
## Common build and deployment errors Your deployment of static files to Azure Spring Apps Enterprise tier may generate the following common build errors: -- ERROR: No buildpack groups passed detection.-- ERROR: Please check that you're running against the correct path.-- ERROR: failed to detect: no buildpacks participating
+- `ERROR: No buildpack groups passed detection.`
+- `ERROR: Please check that you're running against the correct path.`
+- `ERROR: failed to detect: no buildpacks participating`
The root cause of these errors is that the web server type isn't specified. To resolve these errors, set the environment variable `BP_WEB_SERVER` to *nginx* or *httpd*. The following table describes common deployment errors when you deploy static files to Azure Spring Apps Enterprise tier.
-| Error message | Root cause | Solution |
-|-||--|
-| *112404: Exit code 0: purposely stopped, please refer to `https://aka.ms/exitcode`* | The web server failed to start. | Validate your server configuration file to see if there's a configuration error. Then, check whether your configuration file conforms to the restrictions described in the [Using a customized server configuration file](#using-a-customized-server-configuration-file) section. |
-| *mkdir() "/var/client_body_temp" failed (13: Permission denied)* | The web server doesn't have write permission to the specified path. | Configure the path under the directory */tmp*; for example: */tmp/client_body_temp*. |
+| Error message | Root cause | Solution |
+|--||--|
+| `112404: Exit code 0: purposely stopped, please refer to https://aka.ms/exitcode` | The web server failed to start. | Validate your server configuration file to see if there's a configuration error. Then, check whether your configuration file conforms to the restrictions described in the [Using a customized server configuration file](#using-a-customized-server-configuration-file) section. |
+| `mkdir() "/var/client_body_temp" failed (13: Permission denied)` | The web server doesn't have write permission to the specified path. | Configure the path under the directory */tmp*; for example: */tmp/client_body_temp*. |
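+
+To illustrate the `BP_WEB_SERVER` fix described earlier in this section, the following sketch selects NGINX as the web server at build time; the use of `--build-env` and the placeholder path are assumptions based on the deployment commands elsewhere in this document:
+
+```azurecli
+az spring app deploy \
+    --name <app-name> \
+    --build-env BP_WEB_SERVER=nginx \
+    --source-path <path-to-static-files>
+```
+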
## Next steps
spring-apps How To Launch From Source https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-launch-from-source.md
**This article applies to:** ✔️ Java ❌ C#
-**This article applies to:** ✔️ Basic/Standard tier ✔️ Enterprise tier
+**This article applies to:** ✔️ Basic/Standard tier ❌️ Enterprise tier
Azure Spring Apps enables Spring Boot applications on Azure.
spring-apps How To Migrate Standard Tier To Enterprise Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/how-to-migrate-standard-tier-to-enterprise-tier.md
To build locally, use the following steps:
## Use Application Insights
-Azure Enterprise tier uses the build service feature [Buildpack Bindings](./how-to-enterprise-build-service.md#buildpack-bindings) to integrate [Application Insights](../azure-monitor/app/app-insights-overview.md) with the type `ApplicationInsights` instead of In-Process Agent.
+Azure Spring Apps Enterprise tier uses buildpack bindings to integrate [Application Insights](../azure-monitor/app/app-insights-overview.md) with the type `ApplicationInsights` instead of In-Process Agent. For more information, see [How to configure APM integration and CA certificates](how-to-enterprise-configure-apm-intergration-and-ca-certificates.md).
| Standard Tier | Enterprise Tier | |--||
spring-apps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/overview.md
As a quick reference, the articles listed above and the articles in the followin
* [Use Tanzu Service Registry](how-to-enterprise-service-registry.md) * [Use API portal for VMware Tanzu](how-to-use-enterprise-api-portal.md) * [Use Spring Cloud Gateway for Tanzu](how-to-use-enterprise-spring-cloud-gateway.md)
-* [Deploy non-Java enterprise applications](how-to-enterprise-deploy-non-java-apps.md)
+* [Deploy polyglot enterprise applications](how-to-enterprise-deploy-polyglot-apps.md)
* [Enable system-assigned managed identity](how-to-enable-system-assigned-managed-identity.md?pivots=sc-enterprise-tier) * [Application Insights using Java In-Process Agent](how-to-application-insights.md?pivots=sc-enterprise-tier)
spring-apps Quickstart Sample App Acme Fitness Store Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/quickstart-sample-app-acme-fitness-store-introduction.md
**This article applies to:** ❌ Basic/Standard tier ✔️ Enterprise tier
-This quickstart describes the [fitness store](https://github.com/Azure-Samples/acme-fitness-store) sample application, which will show you how to deploy polyglot applications to Azure Spring Apps Enterprise tier. You'll see how polyglot applications are built and deployed using Azure Spring Apps Enterprise tier capabilities. These capabilities include Tanzu Build Service, Service Discovery, externalized configuration with Application Configuration Service, application routing with Spring Cloud Gateway, logs, metrics, and distributed tracing.
+This quickstart describes the [fitness store](https://github.com/Azure-Samples/acme-fitness-store) sample application, which will show you how to deploy polyglot apps to Azure Spring Apps Enterprise tier. You'll see how polyglot applications are built and deployed using Azure Spring Apps Enterprise tier capabilities. These capabilities include Tanzu Build Service, Service Discovery, externalized configuration with Application Configuration Service, application routing with Spring Cloud Gateway, logs, metrics, and distributed tracing.
The following diagram shows a common application architecture:
spring-apps Troubleshoot Build Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot-build-exit-code.md
The following list describes some common exit codes:
- The builder you're using doesn't support the language your project used.
- If you're using the default builder, check the language the default builder supports. For more information, see the [Default Builder and Tanzu Buildpacks](how-to-enterprise-build-service.md#default-builder-and-tanzu-buildpacks) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md).
+ If you're using the default builder, check the language the default builder supports. For more information, see the [Use the default builder to deploy an app](how-to-enterprise-build-service.md#use-the-default-builder-to-deploy-an-app) section of [Use Tanzu Build Service](how-to-enterprise-build-service.md).
If you're using the custom builder, check whether your custom builder's buildpack supports the language your project used.
The following list describes some common exit codes:
Retry to fix the issue.
- If your application is a static file or dynamic front-end application served by a web server, see the [Common build and deployment errors](how-to-enterprise-deploy-static-file.md#common-build-and-deployment-errors) section of [Deploy static files in Azure Spring Apps Enterprise tier](how-to-enterprise-deploy-static-file.md).
+ If your application is a static file or dynamic front-end application served by a web server, see the [Common build and deployment errors](how-to-enterprise-deploy-static-file.md#common-build-and-deployment-errors) section of [Deploy web static files](how-to-enterprise-deploy-static-file.md).
## Next steps
spring-apps Troubleshoot Exit Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/troubleshoot-exit-code.md
The exit code indicates the reason the application terminated. The following lis
For example, you need to connect to Azure Key Vault to import certificates in your application, but your application doesn't have the necessary permissions to access it.
- - If your application is a static file or dynamic front-end application served by a web server, see the [Common build and deployment errors](how-to-enterprise-deploy-static-file.md#common-build-and-deployment-errors) section of [Deploy static files in Azure Spring Apps Enterprise tier](how-to-enterprise-deploy-static-file.md).
+ - If your application is a static file or dynamic front-end application served by a web server, see the [Common build and deployment errors](how-to-enterprise-deploy-static-file.md#common-build-and-deployment-errors) section of [Deploy web static files](how-to-enterprise-deploy-static-file.md).
- **137** - The application exited because of an out-of-memory error. The application requested resources that the hosting platform failed to provide. Update your application's Java Virtual Machine (JVM) parameters to restrict resource usage or scale up application resources.
static-web-apps Deploy Nextjs Hybrid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/deploy-nextjs-hybrid.md
Begin by adding an API route.
:::image type="content" source="media/deploy-nextjs/nextjs-api-route-display.png" alt-text="Display the output from the API route":::
+## Enable logging for Next.js
+
+Following best practices for Next.js server API troubleshooting, add logging to the API to catch these errors. Logging on Azure uses **Application Insights**. To preload its SDK, you need to create a custom startup script. To learn more, see the following resources:
+
+* [Example preload script for Application Insights + Next.js](https://medium.com/microsoftazure/enabling-the-node-js-application-insights-sdk-in-next-js-746762d92507)
+* [GitHub issue](https://github.com/microsoft/ApplicationInsights-node.js/issues/808)
+* [Preloading with Next.js](https://jake.tl/notes/2021-04-04-nextjs-preload-hack)
++ ## Clean up resources If you're not going to continue to use this application, you can delete the Azure Static Web Apps instance through the following steps:
storage Sas Service Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/sas-service-create.md
Previously updated : 05/10/2022 Last updated : 01/19/2023 ms.devlang: csharp-+ # Create a service SAS for a container or blob
The following code example creates a SAS for a container. If the name of an exis
### [.NET v12 SDK](#tab/dotnet)
-A service SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS. Next, create a new [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder) object and call the [ToSasQueryParameters](/dotnet/api/azure.storage.sas.blobsasbuilder.tosasqueryparameters) to get the SAS token string.
+A service SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS.
+
+In the following example, populate the constants with your account name, account key, and container name:
+
+```csharp
+// Using directives assumed for this snippet:
+using Azure.Storage;
+using Azure.Storage.Blobs;
+
+const string AccountName = "<account-name>";
+const string AccountKey = "<account-key>";
+const string ContainerName = "<container-name>";
+
+Uri blobContainerUri = new(string.Format("https://{0}.blob.core.windows.net/{1}",
+ AccountName, ContainerName));
+
+StorageSharedKeyCredential storageSharedKeyCredential =
+ new(AccountName, AccountKey);
+
+BlobContainerClient blobContainerClient =
+ new(blobContainerUri, storageSharedKeyCredential);
+```
+
+Next, create a new [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder) object and call the [ToSasQueryParameters](/dotnet/api/azure.storage.sas.blobsasbuilder.tosasqueryparameters) to get the SAS token string.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Sas.cs" id="Snippet_GetServiceSasUriForContainer":::
The following code example creates a SAS on a blob. If the name of an existing s
# [.NET v12 SDK](#tab/dotnet)
-A service SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS. Next, create a new [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder) object and call the [ToSasQueryParameters](/dotnet/api/azure.storage.sas.blobsasbuilder.tosasqueryparameters) to get the SAS token string.
+A service SAS is signed with the account access key. Use the [StorageSharedKeyCredential](/dotnet/api/azure.storage.storagesharedkeycredential) class to create the credential that is used to sign the SAS.
+
+In the following example, populate the constants with your account name, account key, and container name:
+
+```csharp
+// Using directives assumed for this snippet:
+using Azure.Storage;
+using Azure.Storage.Blobs;
+
+const string AccountName = "<account-name>";
+const string AccountKey = "<account-key>";
+const string ContainerName = "<container-name>";
+
+Uri blobContainerUri = new(string.Format("https://{0}.blob.core.windows.net/{1}",
+ AccountName, ContainerName));
+
+StorageSharedKeyCredential storageSharedKeyCredential =
+ new(AccountName, AccountKey);
+
+BlobContainerClient blobContainerClient =
+ new(blobContainerUri, storageSharedKeyCredential);
+```
+
+Next, create a new [BlobSasBuilder](/dotnet/api/azure.storage.sas.blobsasbuilder) object and call the [ToSasQueryParameters](/dotnet/api/azure.storage.sas.blobsasbuilder.tosasqueryparameters) to get the SAS token string.
:::code language="csharp" source="~/azure-storage-snippets/blobs/howto/dotnet/dotnet-v12/Sas.cs" id="Snippet_GetServiceSasUriForBlob":::
storage Secure File Transfer Protocol Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-support.md
You can't set custom passwords, rather Azure generates one for you. If you choos
A public-private key pair is the most common form of authentication for Secure Shell (SSH). The private key is secret and should be known only to the local user. The public key is stored in Azure. When an SSH client connects to the storage account using a local user identity, it sends a message with the public key and signature. Azure validates the message and checks that the user and key are recognized by the storage account. To learn more, see [Overview of SSH and keys](../../virtual-machines/linux/ssh-from-windows.md).
-If you choose to authenticate with private-public key pair, you can either generate one, use one already stored in Azure, or provide Azure the public key of an existing public-private key pair.
+If you choose to authenticate with a public-private key pair, you can either generate one, use one already stored in Azure, or provide Azure the public key of an existing public-private key pair. You can have a maximum of 10 public keys per local user.
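+
+As a hedged sketch, a local user with one public key might be created with the Azure CLI as follows; the exact parameter names (for example, `--ssh-authorized-key`) are assumptions and may differ by CLI version:
+
+```azurecli
+# Parameter names are assumptions; verify with: az storage account local-user create --help
+az storage account local-user create \
+    --account-name <storage-account-name> \
+    --resource-group <resource-group-name> \
+    --user-name <local-user-name> \
+    --home-directory <container-name> \
+    --has-ssh-key true \
+    --ssh-authorized-key key="ssh-rsa <public-key>"
+```
+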
## Container permissions
storage Storage Blob Container Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-container-properties-metadata-java.md
You can specify metadata as one or more name-value pairs on a blob or container
- [setMetadata](/java/api/com.azure.storage.blob.blobcontainerclient)
-The name of your metadata must conform to the naming conventions for C# identifiers. Metadata names preserve the case with which they were created, but are case-insensitive when set or read. If two or more metadata headers with the same name are submitted for a resource, the Blob service returns status code 400 (Bad Request).
- Setting container metadata overwrites all existing metadata associated with the container. It's not possible to modify an individual name-value pair. The following code example sets metadata on a container:
storage Storage Blob Copy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-copy.md
Title: Copy a blob with .NET description: Learn how to copy a blob in Azure Storage by using the .NET client library.- + Last updated 03/28/2022
storage Storage Blob Properties Metadata Java https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-blob-properties-metadata-java.md
The following code example gets a blob's system properties and displays some of
## Set and retrieve metadata
-You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, send a JSON object of name-value pairs using the following method:
+You can specify metadata as one or more name-value pairs on a blob or container resource. To set metadata, send a [Map](https://docs.oracle.com/javase/8/docs/api/java/util/Map.html) object containing name-value pairs using the following method:
- [setMetadata](/java/api/com.azure.storage.blob.specialized.blobclientbase)
storage Storage Feature Support In Storage Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-feature-support-in-storage-accounts.md
The following table describes whether a feature is supported in a premium block
| [Azure Active Directory security](authorize-access-azure-active-directory.md) | &#x2705; | &#x2705; | &#x2705;<sup>1</sup> | &#x2705;<sup>1</sup> | | [Azure DNS Zone endpoints (preview)](../common/storage-account-overview.md#storage-account-endpoints) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Blob inventory](blob-inventory.md) | &#x2705; | &#x1F7E6; | &#x1F7E6; | &#x1F7E6; |
-| [Blob index tags](storage-manage-find-blobs.md) | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
+| [Blob index tags](storage-manage-find-blobs.md) | &#x2705; | &nbsp;&#x2B24; | &nbsp;&#x2B24; | &nbsp;&#x2B24; |
| [Blob snapshots](snapshots-overview.md) | &#x2705; | &#x1F7E6; | &nbsp;&#x2B24; | &#x1F7E6; | | [Blob Storage APIs](reference.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; | | [Blob Storage Azure CLI commands](storage-quickstart-blobs-cli.md) | &#x2705; | &#x2705; | &#x2705; | &#x2705; |
storage Storage Manage Find Blobs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-manage-find-blobs.md
You're charged for the monthly average number of index tags within a storage acc
This section describes known issues and conditions. -- Only general-purpose v2 accounts are supported. Premium block blob, legacy blob, and accounts with a hierarchical namespace enabled aren't supported. General-purpose v1 accounts won't be supported.
+- Only general-purpose v2 accounts and premium block blob accounts are supported. Premium page blob, legacy blob, and accounts with a hierarchical namespace enabled aren't supported. General-purpose v1 accounts won't be supported.
- Uploading page blobs with index tags doesn't persist the tags. Set the tags after uploading a page blob.
storage Storage Quickstart Blobs Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-cli.md
The following example uses your Azure AD account to authorize the operation to c
Remember to replace placeholder values in angle brackets with your own values: ```azurecli
-az ad signed-in-user show --query objectId -o tsv | az role assignment create \
+az ad signed-in-user show --query id -o tsv | az role assignment create \
--role "Storage Blob Data Contributor" \ --assignee @- \ --scope "/subscriptions/<subscription>/resourceGroups/<resource-group>/providers/Microsoft.Storage/storageAccounts/<storage-account>"
storage Storage Quickstart Blobs Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/storage-quickstart-blobs-python.md
Title: "Quickstart: Azure Blob Storage client library for Python"+ description: In this quickstart, you learn how to use the Azure Blob Storage client library for Python to create a container and a blob in Blob (object) storage. Next, you learn how to download the blob to your local computer, and how to list all of the blobs in a container. + Last updated 10/24/2022
storage Versioning Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/versioning-overview.md
Previously updated : 06/22/2022 Last updated : 01/20/2023
storage Azure Defender Storage Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/azure-defender-storage-configure.md
Previously updated : 10/24/2022 Last updated : 01/18/2023
To enable Microsoft Defender for Storage at the subscription level with per-stor
1. Sign in to the [Azure portal](https://portal.azure.com/). 1. Navigate to **Microsoft Defender for Cloud** > **Environment settings**.
-1. Select the subscription that you want to enable Defender for Storage for.
+1. Select the subscription for which you want to enable Defender for Storage.
:::image type="content" source="media/azure-defender-storage-configure/defender-for-cloud-select-subscription.png" alt-text="Screenshot showing how to select a subscription in Defender for Cloud." lightbox="media/azure-defender-storage-configure/defender-for-cloud-select-subscription.png":::
-1. In the Defender plans page, to enable Defender for Storage per-storage account pricing either:
+1. On the **Defender plans** page, enable Defender for Storage per-storage account pricing with one of the following options:
- - Select **Enable all Microsoft Defender plans** to enable Microsoft Defender for Cloud in the subscription.
- - For Microsoft Defender for Storage, select **On** to turn on Defender for Storage, and select **Save**.
- - If you currently have Defender for Storage enabled with per-transaction pricing, select the **New pricing plan available** link and confirm the pricing change.
+ - Choose the **Enable all** button to enable Microsoft Defender for Cloud in the subscription.
+ - To enable Microsoft Defender for Storage, locate **Storage** in the list and toggle the **On** button. Then choose **Save**.
- :::image type="content" source="media/azure-defender-storage-configure/enable-azure-defender-security-center.png" alt-text="Screenshot showing how to enable Defender for Storage in Defender for Cloud." lightbox="media/azure-defender-storage-configure/enable-azure-defender-security-center.png":::
+ If you currently have Defender for Storage enabled with per-transaction pricing, select the **New pricing plan available** link and confirm the pricing change.
+
+ :::image type="content" source="media/azure-defender-storage-configure/enable-azure-defender-security-center.png" alt-text="Screenshot showing how to enable Defender for Storage in Defender for Cloud." lightbox="media/azure-defender-storage-configure/enable-azure-defender-security-center.png":::
Microsoft Defender for Storage is now enabled for this storage account.
-To disable the plan, select **Off** for Defender for Storage in the Defender plans page.
+To disable the plan, toggle the **Off** button for Defender for Storage on the **Defender plans** page.
### Enable per-storage account pricing programmatically
To enable Microsoft Defender for Storage for a specific account with per-transac
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. Navigate to your storage account.
-1. In the Security + networking section of the Storage account menu, select **Microsoft Defender for Cloud**.
+1. In the **Security + networking** section of the Storage account menu, select **Microsoft Defender for Cloud**.
1. Select **Enable Defender on this storage account only**.

    :::image type="content" source="media/azure-defender-storage-configure/storage-enable-defender-for-account.png" alt-text="Screenshot showing how to enable the Defender for Storage per-transaction pricing on a specific account." lightbox="media/azure-defender-storage-configure/storage-enable-defender-for-account.png":::
storage Infrastructure Encryption Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/infrastructure-encryption-enable.md
Service-level encryption supports the use of either Microsoft-managed keys or cu
To doubly encrypt your data, you must first create a storage account or an encryption scope that is configured for infrastructure encryption. This article describes how to enable infrastructure encryption.
+> [!IMPORTANT]
+> Infrastructure encryption is recommended for scenarios where doubly encrypting data is necessary for compliance requirements. For most other scenarios, Azure Storage encryption provides a sufficiently powerful encryption algorithm, and there is unlikely to be a benefit to using infrastructure encryption.
+
## Create an account with infrastructure encryption enabled

To enable infrastructure encryption for a storage account, you must configure a storage account to use infrastructure encryption at the time that you create the account. Infrastructure encryption cannot be enabled or disabled after the account has been created. The storage account must be of type general-purpose v2 or premium block blob.
storage Storage Samples Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-samples-python.md
Title: Azure Storage samples using Python+ description: View, download, and run sample code and applications for Azure Storage. Discover getting started samples for blobs, queues, tables, and files, using the Python storage client libraries. + Last updated 12/21/2022
storage File Sync Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/file-sync/file-sync-release-notes.md
Previously updated : 12/16/2022 Last updated : 1/18/2023
Before deploying Azure File Sync, you should evaluate whether it is compatible w
### Agent installation and server configuration

For more information on how to install and configure the Azure File Sync agent with Windows Server, see [Planning for an Azure File Sync deployment](file-sync-planning.md) and [How to deploy Azure File Sync](file-sync-deployment-guide.md).

-- A restart is required for servers that have an existing Azure File Sync agent installation if the agent version is less than version 12.0.
- The agent installation package must be installed with elevated (admin) permissions.
- The agent is not supported on the Nano Server deployment option.
- The agent is supported only on Windows Server 2019, Windows Server 2016, Windows Server 2012 R2, and Windows Server 2022.
The following items don't sync, but the rest of the system continues to operate
- Do not store an OS or application paging file within a server endpoint location.

### Cloud endpoint

-- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share. In addition, changes made to an Azure file share over the REST protocol will not update the SMB last modified time and will not be seen as a change by sync.
+- Azure File Sync supports making changes to the Azure file share directly. However, any changes made on the Azure file share first need to be discovered by an Azure File Sync change detection job. A change detection job is initiated for a cloud endpoint once every 24 hours. To immediately sync files that are changed in the Azure file share, the [Invoke-AzStorageSyncChangeDetection](/powershell/module/az.storagesync/invoke-azstoragesyncchangedetection) PowerShell cmdlet can be used to manually initiate the detection of changes in the Azure file share.
- The storage sync service and/or storage account can be moved to a different resource group, subscription, or Azure AD tenant. After the storage sync service or storage account is moved, you need to give the Microsoft.StorageSync application access to the storage account (see [Ensure Azure File Sync has access to the storage account](file-sync-troubleshoot-sync-errors.md?tabs=portal1%252cportal#troubleshoot-rbac)).

> [!Note]
storage Storage Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-introduction.md
# What is Azure Files?
-Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System), and [Azure Files REST API](/rest/api/storageservices/file-service-rest-api). Azure file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure file shares are accessible from Linux or macOS clients. Additionally, SMB Azure file shares can be cached on Windows servers with [Azure File Sync](../file-sync/file-sync-introduction.md) for fast access near where the data is being used.
+Azure Files offers fully managed file shares in the cloud that are accessible via the industry standard [Server Message Block (SMB) protocol](/windows/win32/fileio/microsoft-smb-protocol-and-cifs-protocol-overview), [Network File System (NFS) protocol](https://en.wikipedia.org/wiki/Network_File_System), and [Azure Files REST API](/rest/api/storageservices/file-service-rest-api). Azure file shares can be mounted concurrently by cloud or on-premises deployments. SMB Azure file shares are accessible from Windows, Linux, and macOS clients. NFS Azure file shares are accessible from Linux clients. Additionally, SMB Azure file shares can be cached on Windows servers with [Azure File Sync](../file-sync/file-sync-introduction.md) for fast access near where the data is being used.
Here are some videos on common use cases for Azure Files:

* [Replace your file server with a serverless Azure file share](https://youtu.be/H04e9AgbcSc)
storage Storage Python How To Use File Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-python-how-to-use-file-storage.md
Title: Develop for Azure Files with Python+ description: Learn how to develop Python applications and services that use Azure Files to store file data.-+ Last updated 10/08/2020-+
storage Storage Nodejs How To Use Queues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-nodejs-how-to-use-queues.md
description: Learn to use the Azure Queue Storage to create and delete queues. L
Previously updated : 12/21/2020 Last updated : 01/20/2023
The [Azure Storage client library for JavaScript](https://github.com/Azure/azure
1. Using a command-line interface such as PowerShell (Windows), Terminal (Mac), or Bash (Unix), navigate to the folder where you created your sample application.
-# [JavaScript v12 SDK](#tab/javascript)
1. Type `npm install @azure/storage-queue` in the command window.

1. Verify that a `node_modules` folder was created. Inside that folder you'll find the `@azure/storage-queue` package, which contains the client library you need to access storage.
-# [JavaScript v2](#tab/javascript2)
-
-1. Type `npm install azure-storage` in the command window.
-
-1. Verify that a `node_modules` folder was created. Inside that folder, you'll find the `azure-storage` package, which contains the libraries you need to access storage.
---

### Import the package

Using your code editor, add the following to the top of the JavaScript file where you intend to use queues.
-# [JavaScript v12 SDK](#tab/javascript)
- :::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_ImportStatements":::
-# [JavaScript v2](#tab/javascript2)
-
-```javascript
-var azure = require('azure-storage');
-```
---

## How to create a queue
-# [JavaScript v12 SDK](#tab/javascript)
The following code gets the value of an environment variable called `AZURE_STORAGE_CONNECTION_STRING` and uses it to create a [`QueueServiceClient`](/javascript/api/@azure/storage-queue/queueserviceclient) object. This object is then used to create a [`QueueClient`](/javascript/api/@azure/storage-queue/queueclient) object that allows you to work with a specific queue.
The following code gets the value of an environment variable called `AZURE_STORA
If the queue already exists, an exception is thrown.
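For reference, a minimal sketch of that flow (not the article's snippet; it assumes the `@azure/storage-queue` package is installed, the `AZURE_STORAGE_CONNECTION_STRING` environment variable is set, and the queue name `myqueue` is only an example):

```javascript
// Minimal sketch: create a QueueServiceClient from a connection string,
// then a QueueClient for one queue, and create that queue.
const { QueueServiceClient } = require("@azure/storage-queue");

async function main() {
  const queueServiceClient = QueueServiceClient.fromConnectionString(
    process.env.AZURE_STORAGE_CONNECTION_STRING
  );
  const queueClient = queueServiceClient.getQueueClient("myqueue");
  await queueClient.create(); // per the article, throws if the queue already exists
  console.log(`Created queue: ${queueClient.name}`);
}

main().catch(console.error);
```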
-# [JavaScript v2](#tab/javascript2)
+## How to format the message
-The Azure module will read the environment variables `AZURE_STORAGE_ACCOUNT` and `AZURE_STORAGE_ACCESS_KEY`, or `AZURE_STORAGE_CONNECTION_STRING` for information required to connect to your Azure Storage account. If these environment variables aren't set, you must specify the account information when calling `createQueueService`.
+All messages are treated as strings. If you need to send a different data type, serialize it into a string when you send the message and deserialize the string when you read the message.
-The following code creates a `QueueService` object, which enables you to work with queues.
+To convert **JSON** to a string format and back again in Node.js, use the following helper functions:
```javascript
-var queueSvc = azure.createQueueService();
-```
-
-Call the `createQueueIfNotExists` method to create a new queue with the specified name or return the queue if it already exists.
-
-```javascript
-queueSvc.createQueueIfNotExists('myqueue', function(error, results, response){
- if(!error){
- // Queue created or exists
- }
-});
-```
-
-If the queue is created, `result.created` is true. If the queue exists, `result.created` is false.
--
+function jsonToBase64(jsonObj) {
+ const jsonString = JSON.stringify(jsonObj)
+ return Buffer.from(jsonString).toString('base64')
+}
+function encodeBase64ToJson(base64String) {
+ const jsonString = Buffer.from(base64String,'base64').toString()
+ return JSON.parse(jsonString)
+}
+```
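As an illustration, a hypothetical round trip with these helpers (assuming a `queueClient` created as in the earlier sketch, running inside an async function):

```javascript
// Send a JSON object as a Base64-encoded string, then decode it on receipt.
// Note: encodeBase64ToJson (defined above) decodes Base64 back into JSON.
const order = { id: 42, status: "pending" };
await queueClient.sendMessage(jsonToBase64(order));

const response = await queueClient.receiveMessages();
if (response.receivedMessageItems.length > 0) {
  const received = encodeBase64ToJson(response.receivedMessageItems[0].messageText);
  console.log(received.status); // "pending"
}
```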
## How to insert a message into a queue
-# [JavaScript v12 SDK](#tab/javascript)
To add a message to a queue, call the [`sendMessage`](/javascript/api/@azure/storage-queue/queueclient#sendmessage-string--queuesendmessageoptions-) method. :::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_AddMessage":::
-# [JavaScript v2](#tab/javascript2)
-
-To insert a message into a queue, call the `createMessage` method to create a new message and add it to the queue.
-
-```javascript
-queueSvc.createMessage('myqueue', "Hello, World", function(error, results, response){
- if(!error){
- // Message inserted
- }
-});
-```
---

## How to peek at the next message

You can peek at messages in the queue without removing them from the queue by calling the `peekMessages` method.
-# [JavaScript v12 SDK](#tab/javascript)
By default, [`peekMessages`](/javascript/api/@azure/storage-queue/queueclient#peekmessages-queuepeekmessagesoptions-) peeks at a single message. The following example peeks at the first five messages in the queue. If fewer than five messages are visible, just the visible messages are returned.

:::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_PeekMessage":::
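A sketch of that call (assuming the `queueClient` from the earlier sketch, inside an async function):

```javascript
// Peek at up to five messages without changing their visibility.
const peeked = await queueClient.peekMessages({ numberOfMessages: 5 });
for (const message of peeked.peekedMessageItems) {
  console.log(message.messageText);
}
```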
-# [JavaScript v2](#tab/javascript2)
-
-By default, `peekMessages` peeks at a single message.
-
-```javascript
-queueSvc.peekMessages('myqueue', function(error, results, response){
- if(!error){
- // Message text is in results[0].messageText
- }
-});
-```
-
-The `result` contains the message.
---

Calling `peekMessages` when there are no messages in the queue won't return an error. However, no messages are returned.

## How to change the contents of a queued message

The following example updates the text of a message.
-# [JavaScript v12 SDK](#tab/javascript)
-
Change the contents of a message in-place in the queue by calling [`updateMessage`](/javascript/api/@azure/storage-queue/queueclient#updatemessage-string--string--string--number--queueupdatemessageoptions-).

:::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_UpdateMessage":::
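A sketch of the update flow (same `queueClient` assumption as above):

```javascript
// Receive a message, then overwrite its contents in place.
const response = await queueClient.receiveMessages();
if (response.receivedMessageItems.length > 0) {
  const { messageId, popReceipt } = response.receivedMessageItems[0];
  await queueClient.updateMessage(messageId, popReceipt, "new text");
}
```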
-# [JavaScript v2](#tab/javascript2)
-
-Change the contents of a message in-place in the queue by calling `updateMessage`.
-
-```javascript
-queueSvc.getMessages('myqueue', function(error, getResults, getResponse){
- if(!error){
- // Got the message
- var message = getResults[0];
- queueSvc.updateMessage('myqueue', message.messageId, message.popReceipt, 10, {messageText: 'new text'}, function(error, updateResults, updateResponse){
- if(!error){
- // Message updated successfully
- }
- });
- }
-});
-```
---

## How to dequeue a message
Dequeueing a message is a two-stage process:
The following example gets a message, then deletes it.
-# [JavaScript v12 SDK](#tab/javascript)
-
To get a message, call the [`receiveMessages`](/javascript/api/@azure/storage-queue/queueclient#receivemessages-queuereceivemessageoptions-) method. This call makes the messages invisible in the queue, so no other clients can process them. Once your application has processed a message, call [`deleteMessage`](/javascript/api/@azure/storage-queue/queueclient#deletemessage-string--string--queuedeletemessageoptions-) to delete it from the queue.

:::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_DequeueMessage":::
By default, a message is only hidden for 30 seconds. After 30 seconds it's visib
Calling `receiveMessages` when there are no messages in the queue won't return an error. However, no messages will be returned.
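A sketch of the two-stage dequeue (same `queueClient` assumption as above):

```javascript
// Receive one message, process it, then delete it from the queue.
const response = await queueClient.receiveMessages();
if (response.receivedMessageItems.length > 0) {
  const message = response.receivedMessageItems[0];
  console.log(`Processing: ${message.messageText}`);
  await queueClient.deleteMessage(message.messageId, message.popReceipt);
}
```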
-# [JavaScript v2](#tab/javascript2)
-
-To get a message, call the `getMessages` method. This call makes the messages invisible in the queue, so no other clients can process them. Once your application has processed a message, call `deleteMessage` to delete it from the queue.
-
-```javascript
-queueSvc.getMessages('myqueue', function(error, results, response){
- if(!error){
- // Message text is in results[0].messageText
- var message = results[0];
- queueSvc.deleteMessage('myqueue', message.messageId, message.popReceipt, function(error, response){
- if(!error){
- //message deleted
- }
- });
- }
-});
-```
-
-By default, a message is only hidden for 30 seconds. After 30 seconds it's visible to other clients. You can specify a different value by using `options.visibilityTimeout` with `getMessages`.
-
-Using `getMessages` when there are no messages in the queue won't return an error. However, no messages will be returned.
---

## Additional options for dequeuing messages
-# [JavaScript v12 SDK](#tab/javascript)
-
There are two ways you can customize message retrieval from a queue:

- [`options.numberOfMessages`](/javascript/api/@azure/storage-queue/queuereceivemessageoptions#numberofmessages): Retrieve a batch of messages (up to 32).
The following example uses the `receiveMessages` method to get five messages in
:::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_DequeueMessages":::
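A sketch of a batched receive with both options set (same `queueClient` assumption as above):

```javascript
// Receive up to five messages, hidden from other clients for five minutes.
const response = await queueClient.receiveMessages({
  numberOfMessages: 5,
  visibilityTimeout: 5 * 60 // in seconds
});
for (const message of response.receivedMessageItems) {
  await queueClient.deleteMessage(message.messageId, message.popReceipt);
}
```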
-# [JavaScript v2](#tab/javascript2)
-
-There are two ways you can customize message retrieval from a queue:
--- `options.numOfMessages`: Retrieve a batch of messages (up to 32).-- `options.visibilityTimeout`: Set a longer or shorter invisibility timeout.-
-The following example uses the `getMessages` method to get 15 messages in one call. Then it processes each message using a `for` loop. It also sets the invisibility timeout to five minutes for all messages returned by this method.
-
-```javascript
-queueSvc.getMessages('myqueue', {numOfMessages: 15, visibilityTimeout: 5 * 60}, function(error, results, getResponse){
- if(!error){
- // Messages retrieved
- for(var index in results){
- // text is available in result[index].messageText
- var message = results[index];
- queueSvc.deleteMessage(queueName, message.messageId, message.popReceipt, function(error, deleteResponse){
- if(!error){
- // Message deleted
- }
- });
- }
- }
-});
-```
---

## How to get the queue length
-# [JavaScript v12 SDK](#tab/javascript)
The [`getProperties`](/javascript/api/@azure/storage-queue/queueclient#getproperties-queuegetpropertiesoptions-) method returns metadata about the queue, including the approximate number of messages waiting in the queue.

:::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_QueueLength":::
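A sketch of reading the count (same `queueClient` assumption as above):

```javascript
// Approximate count of messages currently in the queue.
const properties = await queueClient.getProperties();
console.log(`Approximately ${properties.approximateMessagesCount} messages`);
```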
-# [JavaScript v2](#tab/javascript2)
-
-The `getQueueMetadata` method returns metadata about the queue, including the approximate number of messages waiting in the queue.
-
-```javascript
-queueSvc.getQueueMetadata('myqueue', function(error, results, response){
- if(!error){
- // Queue length is available in results.approximateMessageCount
- }
-});
-```
---

## How to list queues
-# [JavaScript v12 SDK](#tab/javascript)
-
To retrieve a list of queues, call [`QueueServiceClient.listQueues`](/javascript/api/@azure/storage-queue/servicelistqueuesoptions#prefix). To retrieve a list filtered by a specific prefix, set [options.prefix](/javascript/api/@azure/storage-queue/servicelistqueuesoptions#prefix) in your call to `listQueues`.

:::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_ListQueues":::
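A sketch of listing queues (assuming the `queueServiceClient` from the earlier sketch; the prefix `my` is only an example):

```javascript
// List queues whose names start with "my" (the prefix option is optional).
for await (const queue of queueServiceClient.listQueues({ prefix: "my" })) {
  console.log(queue.name);
}
```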
-# [JavaScript v2](#tab/javascript2)
-
-To retrieve a list of queues, use `listQueuesSegmented`. To retrieve a list filtered by a specific prefix, use `listQueuesSegmentedWithPrefix`.
-
-```javascript
-queueSvc.listQueuesSegmented(null, function(error, results, response){
- if(!error){
- // results.entries contains the list of queues
- }
-});
-```
-
-If all queues can't be returned, pass `result.continuationToken` as the first parameter of `listQueuesSegmented` or the second parameter of `listQueuesSegmentedWithPrefix` to retrieve more results.
---

## How to delete a queue
-# [JavaScript v12 SDK](#tab/javascript)
-
To delete a queue and all the messages contained in it, call the [`DeleteQueue`](/javascript/api/@azure/storage-queue/queueclient#delete-queuedeleteoptions-) method on the `QueueClient` object.

:::code language="javascript" source="~/azure-storage-snippets/queues/howto/JavaScript/JavaScript-v12/javascript-queues-v12.js" id="Snippet_DeleteQueue":::

To clear all messages from a queue without deleting it, call [`ClearMessages`](/javascript/api/@azure/storage-queue/queueclient#clearmessages-queueclearmessagesoptions-).
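A sketch of both operations (same `queueClient` assumption as above):

```javascript
// Clear all messages but keep the queue, then delete the queue itself.
await queueClient.clearMessages();
await queueClient.delete();
```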
-# [JavaScript v2](#tab/javascript2)
-
-To delete a queue and all the messages contained in it, call the `deleteQueue` method on the queue object.
-
-```javascript
-queueSvc.deleteQueue(queueName, function(error, response){
- if(!error){
- // Queue has been deleted
- }
-});
-```
-
-To clear all messages from a queue without deleting it, call `clearMessages`.
---

[!INCLUDE [storage-check-out-samples-all](../../../includes/storage-check-out-samples-all.md)]

## Next steps
storage Storage Python How To Use Queue Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-python-how-to-use-queue-storage.md
Title: How to use Azure Queue Storage from Python+ description: Learn to use the Azure Queue Storage from Python to create and delete queues, and insert, get, and delete messages.-+ -- Previously updated : 02/16/2021+ Last updated : 01/19/2023 ms.devlang: quickstart-+ # How to use Azure Queue Storage from Python
storage Storage Quickstart Queues Python https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/queues/storage-quickstart-queues-python.md
Title: 'Quickstart: Azure Queue Storage client library for Python'+ description: Learn how to use the Azure Queue Storage client library for Python to create a queue and add messages to it. Then learn how to read and delete messages from the queue. You'll also learn how to delete a queue. + Last updated 12/14/2022
stream-analytics Stream Analytics Machine Learning Anomaly Detection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-machine-learning-anomaly-detection.md
The machine learning operations don't support seasonality trends or multi-variat
The following video demonstrates how to detect an anomaly in real time using machine learning functions in Azure Stream Analytics.
-> [!VIDEO /Shows/Internet-of-Things-Show/Real-Time-ML-Based-Anomaly-Detection-In-Azure-Stream-Analytics/player]
+> [!VIDEO https://learn-video.azurefd.net/vod/player?show=internet-of-things-show&ep=real-time-ml-based-anomaly-detection-in-azure-stream-analytics]
## Model behavior
stream-analytics Stream Analytics Troubleshoot Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-troubleshoot-input.md
Title: Troubleshooting Inputs for Azure Stream Analytics description: This article describes techniques to troubleshoot your input connections in Azure Stream Analytics jobs.--++ Previously updated : 04/08/2022 Last updated : 01/17/2023
Other common reasons that result in input deserialization errors are:
3. Using Event Hub capture blob in Avro format as input in your job.
4. Having two columns in a single input event that differ only in case. Example: *column1* and *COLUMN1*.
+## Partition count changes
+The partition count of an event hub can be changed. If the partition count of the event hub used by your job's input changes, you need to stop and restart the Stream Analytics job.
+
+The following error is shown when the partition count of the event hub is changed while the job is running:
+
+`Microsoft.Streaming.Diagnostics.Exceptions.InputPartitioningChangedException`
+ ## Job exceeds maximum Event Hub receivers A best practice for using Event Hubs is to use multiple consumer groups for job scalability. The number of readers in the Stream Analytics job for a specific input affects the number of readers in a single consumer group. The precise number of receivers is based on internal implementation details for the scale-out topology logic and is not exposed externally. The number of readers can change when a job is started or during job upgrades.
synapse-analytics Concepts Lake Database https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/database-designer/concepts-lake-database.md
The lake database in Azure Synapse Analytics enables customers to bring together
## Database designer
-The new database designer gives you the possibility to create a data model for your lake database and add additional information to it. Every Entity and Attribute can be described to provide more information about the model, which not only contains Entities but relationships as well. In particular, the inability to model relationships has been a challenge for the interaction on the data lake. This challenge is now addressed with an integrated designer that provides possibilities that have been available in databases but not on the lake. Also the capability to add descriptions and possible demo values to the model allows people who are interacting with it in the future to have information where they need it to get a better understanding about the data.
+The new database designer in Synapse Studio lets you create a data model for your lake database and add additional information to it. Every entity and attribute can be described to provide more information about the model, which contains not only entities but also relationships. In particular, the inability to model relationships has been a challenge for working with data on the data lake. This challenge is now addressed with an integrated designer that provides capabilities that have been available in databases but not on the lake. The ability to add descriptions and sample values to the model also gives people who interact with it in the future the information they need to better understand the data.
## Data storage
Lake databases use a data lake on the Azure Storage account to store the data of
> [!NOTE]
> Publishing a lake database does not create any of the underlying structures or schemas needed to query the data in Spark or SQL. After publishing, load data into your lake database using [pipelines](../data-integration/data-integration-data-lake.md) to begin querying it.
-
+>
+> The synchronization of lake database objects between storage and Synapse is one-directional. Be sure to perform any creation or schema modification of lake database objects using the database designer in Synapse Studio. If you instead make such changes from Spark or directly in storage, the definitions of your lake databases will become out of sync. If this happens, you may see old lake database definitions in the database designer. You will need to replicate and publish such changes in the database designer in order to bring your lake databases back in sync.
## Database compute
synapse-analytics Active Directory Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/active-directory-authentication.md
The following authentication methods are supported for Azure AD server principal
### Additional considerations

- To enhance manageability, we recommend you provision a dedicated Azure AD group as an administrator.
-- Only one Azure AD administrator (a user or group) can be configured for Synapse SQL pool at any time.
+- Only one Azure AD administrator (a user or group) can be configured for Synapse SQL pools at any time.
- The addition of Azure AD server principals (logins) for Synapse SQL allows the possibility of creating multiple Azure AD server principals (logins) that can be added to the `sysadmin` role.
- Only an Azure AD administrator for Synapse SQL can initially connect to Synapse SQL using an Azure Active Directory account. The Active Directory administrator can configure subsequent Azure AD database users.
- We recommend setting the connection timeout to 30 seconds.
The following authentication methods are supported for Azure AD server principal
- Beginning with version 15.0.1, [sqlcmd utility](/sql/tools/sqlcmd-utility?view=azure-sqldw-latest&preserve-view=true) and [bcp utility](/sql/tools/bcp-utility?view=azure-sqldw-latest&preserve-view=true) support Active Directory Interactive authentication with MFA.
- SQL Server Data Tools for Visual Studio 2015 requires at least the April 2016 version of the Data Tools (version 14.0.60311.1). Currently, Azure AD users aren't shown in SSDT Object Explorer. As a workaround, view the users in [sys.database_principals](/sql/relational-databases/system-catalog-views/sys-database-principals-transact-sql?view=azure-sqldw-latest&preserve-view=true).
- [Microsoft JDBC Driver 6.0 for SQL Server](https://www.microsoft.com/download/details.aspx?id=11774) supports Azure AD authentication. Also, see [Setting the Connection Properties](/sql/connect/jdbc/setting-the-connection-properties?view=azure-sqldw-latest&preserve-view=true).
-- The Azure Active Directory admin account controls access to dedicated SQL pools, while Synapse RBAC roles are used to control access to serverless pools, for example, the **Synapse Administrator** role. Configure Synapse RBAC roles via Synapse Studio, for more information, see [How to manage Synapse RBAC role assignments in Synapse Studio](../security/how-to-manage-synapse-rbac-role-assignments.md).
-- If a user is configured as an Azure Active Directory administrator and Synapse Administrator, and then removed from the Azure Active Directory administrator role, then the user will lose access to the dedicated SQL pools in Synapse. They must be removed and then added to the Synapse Administrator role to regain access to dedicated SQL pools.
+- The Azure Active Directory admin account controls access to both dedicated and serverless SQL pools, while Synapse RBAC roles can be used to additionally control access to serverless pools, for example, with the **Synapse Administrator** and **Synapse SQL Administrator** roles. Configure Synapse RBAC roles via Synapse Studio. For more information, see [How to manage Synapse RBAC role assignments in Synapse Studio](../security/how-to-manage-synapse-rbac-role-assignments.md).
+- If a user is configured as an Azure Active Directory administrator and Synapse Administrator, and then removed from the Azure Active Directory administrator role, then the user will lose access to the dedicated and serverless SQL pools in Synapse. They must be removed and then added to the Synapse Administrator role to regain access to SQL pools.
## Next steps
synapse-analytics Sql Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/synapse-analytics/sql/sql-authentication.md
There are two administrative accounts (**SQL admin username** and **SQL Active D
One Azure Active Directory account, either an individual or security group account, can also be configured as an administrator. It's optional to configure an Azure AD administrator, but an Azure AD administrator **must** be configured if you want to use Azure AD accounts to connect to Synapse SQL.
- - The Azure Active Directory admin account controls access to dedicated SQL pools, while Synapse RBAC roles are used to control access to serverless pools, for example, the **Synapse Administrator** role. Changing the Azure Active Directory administrator account will only affect the account's access to dedicated SQL pools.
+ - The Azure Active Directory admin account controls access to dedicated and serverless SQL pools, while Synapse RBAC roles can be used to additionally control access to serverless pools, for example, with the **Synapse Administrator** and **Synapse SQL Administrator** roles.
The **SQL admin username** and **SQL Active Directory admin** accounts have the following characteristics:
The **SQL admin username** and **SQL Active Directory admin** accounts have the
- Can view the `sys.sql_logins` system table.

>[!Note]
->If a user is configured as an Active Directory admin and Synapse Administrator, and then removed from the Active Directory admin role, then the user will lose access to the dedicated SQL pools in Synapse. They must be removed and then added to the Synapse Administrator role to regain access to dedicated SQL pools.
+>If a user is configured as an Active Directory admin and Synapse Administrator, and then removed from the Active Directory admin role, then the user will lose access to the dedicated and serverless SQL pools in Synapse. They must be removed and then added to the Synapse Administrator role to regain access to SQL pools.
## [Serverless SQL pool](#tab/serverless)
virtual-desktop Teams On Avd https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-on-avd.md
Title: Use Microsoft Teams on Azure Virtual Desktop - Azure
description: How to use Microsoft Teams on Azure Virtual Desktop. Previously updated : 10/21/2022 Last updated : 01/19/2023
For more information about which features Teams on Azure Virtual Desktop support
This section will show you how to install the Teams desktop app on your Windows 10 or 11 Enterprise multi-session or Windows 10 or 11 Enterprise VM image. To learn more, check out [Install or update the Teams desktop app on VDI](/microsoftteams/teams-for-vdi#install-or-update-the-teams-desktop-app-on-vdi).
-### Prepare your image for Teams
+### Enable media optimization for Teams
To enable media optimization for Teams, set the following registry key on the host VM:
-1. From the start menu, run **Registry Editor** as an administrator. Navigate to `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams`. Create the Teams key if it doesn't already exist.
+1. From the start menu, run **Registry Editor** as an administrator. Go to `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Teams`. Create the Teams key if it doesn't already exist.
2. Create the following value for the Teams key:
After installing the WebSocket Service and the Teams desktop app, follow these s
If optimizations don't load, uninstall then reinstall Teams and check again.
+## Enable registry keys for optional features
+
+If you want to use certain optional features for Teams on Azure Virtual Desktop, you'll need to set the corresponding registry keys. The following instructions only apply to Windows client devices and session host VMs.
+
+### Enable hardware encode for Teams on Azure Virtual Desktop
+
+Hardware encode lets you increase video quality for the outgoing camera during Teams calls. In order to enable this feature, your client will need to be running version 1.2.3213 or later of the [Windows Desktop client](whats-new-client-windows.md). You'll need to repeat the following instructions for every client device.
+
+To enable hardware encode:
+
+1. On your client device, from the start menu, run **Registry Editor** as an administrator.
+1. Go to `HKCU\SOFTWARE\Microsoft\Terminal Server Client\Default\AddIns\WebRTC Redirector`.
+1. Add **UseHardwareEncoding** as a DWORD value.
+1. Set the value to **1** to enable the feature.
+1. Repeat these instructions for every client device.
+
+### Enable content sharing for Teams for Remote App
+
+Enabling content sharing for Teams on Azure Virtual Desktop lets you share your screen or application window. To enable this feature, your session host VM needs to be running version 1.31.2211.15001 or later of [the WebRTC service](whats-new-webrtc.md) and version 1.2.3401 or later of the [Windows Desktop client](whats-new-client-windows.md).
+
+To enable content sharing:
+
+1. On your session host VM, from the start menu, run **Registry Editor** as an administrator.
+1. Go to `HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\AddIns\WebRTC Redirector\Policy`.
+1. Add **ShareClientDesktop** as a DWORD value.
+1. Set the value to **1** to enable the feature.
+
+### Disable desktop screen share for Teams for Remote App
+
+You can disable desktop screen sharing for Teams on Azure Virtual Desktop. To use this feature, your session host VM needs to be running version 1.31.2211.15001 or later of [the WebRTC service](whats-new-webrtc.md) and version 1.2.3401 or later of the [Windows Desktop client](whats-new-client-windows.md).
+
+>[!NOTE]
+>You must [enable the ShareClientDesktop key](#enable-content-sharing-for-teams-for-remote-app) before you can use this key.
+
+To disable desktop screen share:
+
+1. On your session host VM, from the start menu, run **Registry Editor** as an administrator.
+1. Go to `HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\AddIns\WebRTC Redirector\Policy`.
+1. Add **DisableRAILScreensharing** as a DWORD value.
+1. Set the value to **1** to disable desktop screen share.
+
+### Disable application window sharing for Teams for Remote App
+
+You can disable application window sharing for Teams on Azure Virtual Desktop. To use this feature, your session host VM needs to be running version 1.31.2211.15001 or later of [the WebRTC service](whats-new-webrtc.md) and version 1.2.3401 or later of the [Windows Desktop client](whats-new-client-windows.md).
+
+>[!NOTE]
+>You must [enable the ShareClientDesktop key](#enable-content-sharing-for-teams-for-remote-app) before you can use this key.
+
+To disable application window sharing:
+
+1. On your session host VM, from the start menu, run **Registry Editor** as an administrator.
+1. Go to `HKLM\SYSTEM\CurrentControlSet\Control\Terminal Server\AddIns\WebRTC Redirector\Policy`.
+1. Add **DisableRAILAppSharing** as a DWORD value.
+1. Set the value to **1** to disable application window sharing.
+ ## Customize Remote Desktop Protocol properties for a host pool Customizing a host pool's Remote Desktop Protocol (RDP) properties, such as multi-monitor experience or enabling microphone and audio redirection, lets you deliver an optimal experience for your users based on their needs.
virtual-desktop Teams Supported Features https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/teams-supported-features.md
Title: Supported features for Microsoft Teams on Azure Virtual Desktop - Azure
description: Supported features for Microsoft Teams on Azure Virtual Desktop. Previously updated : 11/01/2022 Last updated : 01/19/2023
The following table lists whether the Windows Desktop client or macOS client sup
|Background blur|Yes|Yes|
|Background images|Yes|Yes|
|Screen share and video together|Yes|Yes|
+|Application window sharing|Yes|No|
|Secondary ringer|Yes|No|
|Dynamic e911|Yes|Yes|
|Diagnostic overlay|Yes|No|
The following table lists the minimum required versions for each Teams feature.
|Background blur|1.2.3004 and later|10.7.10 and later|1.0.2006.11001 and later|1.5.00.11865 and later|
|Background images|1.2.3004 and later|10.7.10 and later|1.0.2006.11001 and later|1.5.00.11865 and later|
|Screen share and video together|1.2.1755 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
+|Application window sharing|1.2.3770 and later|Not supported|1.31.2211.15001 and later|Updates within 90 days of the current version|
|Secondary ringer|1.2.3004 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
|Dynamic e911|1.2.2600 and later|10.7.7 and later|1.0.2006.11001 and later|Updates within 90 days of the current version|
|Diagnostic overlay|1.2.3316 and later|Not supported|1.17.2205.23001 and later|Updates within 90 days of the current version|
virtual-desktop Troubleshoot Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-insights.md
This article presents known issues and solutions for common problems in Azure Virtual Desktop Insights. >[!IMPORTANT]
->[The Log Analytics Agent is currently being deprecated](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). While Azure Virtual Desktop Insights currently uses the Log Analytics Agent for Azure Virtual Desktop support, you'll eventually need to migrate to Azure Virtual Desktop Insights by August 31, 2024. We'll provide instructions for how to migrate when we release the update that allows Azure Virtual Desktop Insights to support the Azure Monitor Agent. Until then, continue to use the Log Analytics Agent.
+>[The Log Analytics Agent is currently being deprecated](https://azure.microsoft.com/updates/were-retiring-the-log-analytics-agent-in-azure-monitor-on-31-august-2024/). While Azure Virtual Desktop Insights currently uses the Log Analytics Agent for Azure Virtual Desktop support, you'll eventually need to migrate to the [Azure Monitor Agent](../azure-monitor/agents/agents-overview.md) by August 31, 2024. We'll provide instructions for how to migrate when we release the update that allows Azure Virtual Desktop Insights to support the Azure Monitor Agent. Until then, continue to use the Log Analytics Agent.
## Issues with configuration and setup
virtual-desktop Client Features Android Chrome Os https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/client-features-android-chrome-os.md
If you want to help us test new builds before they're released, you should downl
> [!NOTE]
> The beta client shouldn't be used in production environments.
-You can download the beta client for Android and Chrome OS from the [Google Play Store](https://play.google.com/apps/testing/com.microsoft.rdc.androidx). You'll need to give consent to access preview versions and download the client. You'll receive preview versions directly through the Google Play Store.
+You can download the beta client for Android and Chrome OS from [Google Play](https://play.google.com/apps/testing/com.microsoft.rdc.androidx). You'll need to give consent to access preview versions and download the client. You'll receive preview versions directly through the Google Play Store.
## Provide feedback
virtual-desktop Whats New Client Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-client-windows.md
In this article you'll learn about the latest updates for the Remote Desktop cli
The following table lists the current versions available for the public and Insider releases. To enable Insider releases, see [Enable Windows Insider releases](users/client-features-windows.md#enable-windows-insider-releases).
-| Release | Latest version | Minimum supported version |
-||-||
-| Public | 1.2.3770 | 1.2.1672 |
-| Insider | 1.2.3770 | 1.2.1672 |
+| Release | Latest version | Minimum supported version | Download |
+||-||--|
+| Public | 1.2.3770 | 1.2.1672 | [Windows 64-bit](https://go.microsoft.com/fwlink/?linkid=2139369) *(most common)*<br />[Windows 32-bit](https://go.microsoft.com/fwlink/?linkid=2139456)<br />[Windows ARM64](https://go.microsoft.com/fwlink/?linkid=2139370) |
+| Insider | 1.2.3770 | 1.2.1672 | As above |
## Updates for version 1.2.3770
virtual-desktop Whats New Insights https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-insights.md
Title: What's new in Azure Virtual Desktop Insights?
description: New features and product updates in Azure Virtual Desktop Insights. Previously updated : 08/16/2022 Last updated : 01/18/2023
For example, a release with a version number of 1.2.31 is on the first major rel
When one of the numbers is increased, all numbers after it must change, too. One release has one version number. However, not all version numbers track releases. Patch numbers can be somewhat arbitrary, for example.
+## Version 1.4.0
+
+This update was released in October 2022 and has the following changes:
+
+- Added Windows 7 end of life reporting for client operating system and a dynamic notification box as a reminder of the deprecation timeframe for Windows 7 support for Azure Virtual Desktop.
+
+## Version 1.3.0
+
+This update was released in September 2022 and has the following changes:
+
+- Introduced a public preview of *at scale* reporting for Azure Virtual Desktop Insights to allow the selection of multiple subscriptions, resource groups, and host pools.
+ ## Version 1.2.2 This update was released in July 2022 and has the following changes:
virtual-desktop Whats New Webrtc https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/whats-new-webrtc.md
Title: What's new in the Remote Desktop WebRTC Redirector Service?
description: New features and product updates the Remote Desktop WebRTC Redirector Service for Azure Virtual Desktop. Previously updated : 10/21/2022 Last updated : 01/19/2023
This article provides information about the latest updates to the Remote Desktop
The following sections describe what changed in each version of the Remote Desktop WebRTC Redirector Service.
+### Updates for version 1.31.2211.15001
+
+Download: [MSI Installer](https://query.prod.cms.rt.microsoft.com/cms/api/am/binary/RE5c8Kk)
+
+- Support for application window sharing for Windows users.
+- Support for Give and Take Control functionality for macOS users.
+- Latency and performance improvements for Give and Take Control on Windows.
+- Improved screen share performance.
+ ### Updates for version 1.17.2205.23001 Date published: June 20, 2022
virtual-machine-scale-sets Virtual Machine Scale Sets Automatic Instance Repairs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-automatic-instance-repairs.md
The scale set should have application health monitoring for instances enabled. H
**Configure endpoint to provide health status**
-Before enabling automatic instance repairs policy, ensure that the scale set instances have application endpoint configured to emit the application health status. When an instance returns status 200 (OK) on this application endpoint, then the instance is marked as "Healthy". In all other cases, the instance is marked "Unhealthy", including the following scenarios:
+Before enabling automatic instance repairs policy, ensure that your scale set instances have application endpoint configured to emit the application health status. To configure health status on Application Health extension, you can use either [Binary Health States](./virtual-machine-scale-sets-health-extension.md#binary-health-states) or [Rich Health States](./virtual-machine-scale-sets-health-extension.md#rich-health-states). To configure health status using Load balancer health probes, see [probe up behavior](../load-balancer/load-balancer-custom-probe-overview.md#probe-up-behavior).
-- When there's no application endpoint configured inside the virtual machine instances to provide application health status
-- When the application endpoint is incorrectly configured
-- When the application endpoint isn't reachable
-
-For instances marked as "Unhealthy", automatic repairs are triggered by the scale set. Ensure the application endpoint is correctly configured before enabling the automatic repairs policy in order to avoid unintended instance repairs, while the endpoint is getting configured.
+For instances marked as "Unhealthy" or "Unknown" (*Unknown* state is only available with [Application Health extension - Rich Health States](./virtual-machine-scale-sets-health-extension.md#unknown-state)), automatic repairs are triggered by the scale set. Ensure the application endpoint is correctly configured before enabling the automatic repairs policy in order to avoid unintended instance repairs, while the endpoint is getting configured.
**API version**
This feature is currently not supported for Service Fabric scale sets.
**Restriction for VMs with provisioning errors**
-Automatic repairs doesn't currently support scenarios where a VM instance is marked *Unhealthy* due to a provisioning failure. VMs must be successfully initialized to enable health monitoring and automatic repair capabilities.
+Automatic repairs don't currently support scenarios where a VM instance is marked *Unhealthy* due to a provisioning failure. VMs must be successfully initialized to enable health monitoring and automatic repair capabilities.
## How do automatic instance repairs work?
virtual-machine-scale-sets Virtual Machine Scale Sets Health Extension https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md
Title: Use Application Health extension with Azure Virtual Machine Scale Sets
+ Title: Use Application Health extension with Azure Virtual Machine Scale Sets (preview)
description: Learn how to use the Application Health extension to monitor the health of your applications deployed on Virtual Machine Scale Sets. Previously updated : 11/22/2022 Last updated : 01/17/2023 - + # Using Application Health extension with Virtual Machine Scale Sets
-Monitoring your application health is an important signal for managing and upgrading your deployment. Azure Virtual Machine Scale Sets provide support for [Rolling Upgrades](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) including [Automatic OS-Image Upgrades](virtual-machine-scale-sets-automatic-upgrade.md) and [Automatic VM Guest Patching](https://learn.microsoft.com/azure/virtual-machines/automatic-vm-guest-patching), which rely on health monitoring of the individual instances to upgrade your deployment. You can also use Application Health Extension to monitor the application health of each instance in your scale set and perform instance repairs using [Automatic Instance Repairs](virtual-machine-scale-sets-automatic-instance-repairs.md).
+> [!IMPORTANT]
+> **Rich Health States** is currently in public preview. **Binary Health States** is generally available.
+> This preview version is provided without a service-level agreement, and we don't recommend it for production workloads. Certain features might not be supported or might have constrained capabilities.
+> For more information, see [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
-This article describes how you can use the Application Health extension to monitor the health of your applications deployed on Virtual Machine Scale Sets.
+Monitoring your application health is an important signal for managing and upgrading your deployment. Azure Virtual Machine Scale Sets provide support for [Rolling Upgrades](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) including [Automatic OS-Image Upgrades](virtual-machine-scale-sets-automatic-upgrade.md) and [Automatic VM Guest Patching](../virtual-machines/automatic-vm-guest-patching.md), which rely on health monitoring of the individual instances to upgrade your deployment. You can also use Application Health Extension to monitor the application health of each instance in your scale set and perform instance repairs using [Automatic Instance Repairs](virtual-machine-scale-sets-automatic-instance-repairs.md).
+
+This article describes how you can use the two types of Application Health extension, **Binary Health States** or **Rich Health States**, to monitor the health of your applications deployed on Virtual Machine Scale Sets.
## Prerequisites
-This article assumes that you are familiar with:
+
+This article assumes that you're familiar with:
- Azure virtual machine [extensions](../virtual-machines/extensions/overview.md) - [Modifying](virtual-machine-scale-sets-upgrade-scale-set.md) Virtual Machine Scale Sets
+> [!CAUTION]
+> Application Health Extension expects to receive a consistent probe response at the configured port (for `tcp`) or request path (for `http/https`) in order to label a VM as *Healthy*. If no application is running on the VM, or you're unable to configure a probe response, your VM will show up as *Unhealthy*.
## When to use the Application Health extension
+
The Application Health extension is deployed inside a Virtual Machine Scale Set instance and reports on VM health from inside the scale set instance. You can configure the extension to probe on an application endpoint and update the status of the application on that instance. This instance status is checked by Azure to determine whether an instance is eligible for upgrade operations.
-As the extension reports health from within a VM, the extension can be used in situations where external probes such as Application Health Probes (that utilize custom Azure Load Balancer [probes](../load-balancer/load-balancer-custom-probe-overview.md)) can't be used.
+The Application Health Extension is deployed inside a Virtual Machine Scale Set instance and reports on application health from inside the scale set instance. The extension probes on a local application endpoint and will update the health status based on TCP/HTTP(S) responses received from the application. This health status is used by Azure to initiate repairs on unhealthy instances and to determine if an instance is eligible for upgrade operations.
+
+The extension reports health from within a VM and can be used in situations where external probes such as the [Azure Load Balancer health probes](../load-balancer/load-balancer-custom-probe-overview.md) can't be used.
+
+## Binary versus Rich Health States
+
+> [!IMPORTANT]
+> **Rich Health States** is currently in public preview.
+
+The Application Health extension has two options available: **Binary Health States** and **Rich Health States**. The following table highlights some key differences between the two options. See the end of this section for general recommendations.
+
+| Features | Binary Health States | Rich Health States |
+| -- | -- | |
+| Available Health States | Two available states: *Healthy*, *Unhealthy* | Four available states: *Healthy*, *Unhealthy*, *Initializing*, *Unknown*<sup>1</sup> |
+| Sending Health Signals | Health signals are sent through HTTP/HTTPS response codes or TCP connections. | Health signals on HTTP/HTTPS protocol are sent through the probe response code and response body. Health signals through TCP protocol remain unchanged from Binary Health States. |
+| Identifying *Unhealthy* Instances | Instances will automatically fall into *Unhealthy* state if a *Healthy* signal isn't received from the application. An *Unhealthy* instance can indicate either an issue with the extension configuration (for example, unreachable endpoint) or an issue with the application (for example, non-2xx status code). | Instances will only go into an *Unhealthy* state if the application emits an *Unhealthy* probe response. Users are responsible for implementing custom logic to identify and flag instances with *Unhealthy* applications<sup>2</sup>. Instances with incorrect extension settings (for example, unreachable endpoint) or invalid health probe responses will fall under the *Unknown* state<sup>2</sup>. |
+| *Initializing* state for newly created instances | *Initializing* state isn't available. Newly created instances may take some time before settling into a steady state. | *Initializing* state allows newly created instances to settle into a steady Health State before making the instance eligible for rolling upgrades or instance repair operations. |
+| HTTP/HTTPS protocol | Supported | Supported |
+| TCP protocol | Supported | Limited support - *Unknown* state is unavailable on TCP protocol. See [Rich Health States protocol table](#rich-health-states) for Health State behaviors on TCP. |
+
+<sup>1</sup> The *Unknown* state is unavailable on TCP protocol.
+<sup>2</sup> Only applicable for HTTP/HTTPS protocol. TCP protocol will follow the same process of identifying *Unhealthy* instances as in Binary Health States.
+
+In general, you should use **Binary Health States** if:
+- You're not interested in configuring custom logic to identify and flag an unhealthy instance
+- You don't require an *initializing* grace period for newly created instances
+
+You should use **Rich Health States** if:
+- You send health signals through HTTP/HTTPS protocol and can submit health information through the probe response body
+- You would like to use custom logic to identify and mark unhealthy instances
+- You would like to set an *initializing* grace period for newly created instances, so that they settle into a steady Health State before making the instance eligible for rolling upgrade or instance repairs
+
+## Binary Health States
+
+Binary Health State reporting contains two Health States, *Healthy* and *Unhealthy*. The following tables provide a brief description for how the Health States are configured.
+
+**HTTP/HTTPS Protocol**
+
+| Protocol | Health State | Description |
+| -- | -- | -- |
+| http/https | Healthy | To send a *Healthy* signal, the application is expected to return a 2xx response code. |
+| http/https | Unhealthy | The instance will be marked as *Unhealthy* if a 2xx response code isn't received from the application. |
+
+**TCP Protocol**
+
+| Protocol | Health State | Description |
+| -- | -- | -- |
+| TCP | Healthy | To send a *Healthy* signal, a successful handshake must be made with the provided application endpoint. |
+| TCP | Unhealthy | The instance will be marked as *Unhealthy* if a failed or incomplete handshake occurred with the provided application endpoint. |
+
+Some scenarios that may result in an *Unhealthy* state include:
+- When the application endpoint returns a non-2xx status code
+- When there's no application endpoint configured inside the virtual machine instances to provide application health status
+- When the application endpoint is incorrectly configured
+- When the application endpoint isn't reachable
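+
+A quick way to see what the extension will observe is to issue the probe yourself from inside an instance. This is only a sketch: port `80` and path `/healthEndpoint` are hypothetical stand-ins for your configured `port` and `requestPath`.
+
+```bash
+# Manual check of a Binary Health States HTTP probe (hypothetical endpoint).
+# Any 2xx status code counts as Healthy; anything else, or no response, is Unhealthy.
+curl -i http://localhost:80/healthEndpoint
+```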
+
+## Rich Health States
+
+Rich Health States reporting contains four Health States, *Initializing*, *Healthy*, *Unhealthy*, and *Unknown*. The following tables provide a brief description for how each Health State is configured.
+
+**HTTP/HTTPS Protocol**
+
+| Protocol | Health State | Description |
+| -- | -- | -- |
+| http/https | Healthy | To send a *Healthy* signal, the application is expected to return a probe response with: **Probe Response Code**: Status 2xx, Probe Response Body: `{"ApplicationHealthState": "Healthy"}` |
+| http/https | Unhealthy | To send an *Unhealthy* signal, the application is expected to return a probe response with: **Probe Response Code**: Status 2xx, Probe Response Body: `{"ApplicationHealthState": "Unhealthy"}` |
+| http/https | Initializing | The instance automatically enters an *Initializing* state at extension start time. For more information, see [Initializing state](#initializing-state). |
+| http/https | Unknown | An *Unknown* state may occur in the following scenarios: when a non-2xx status code is returned by the application, when the probe request times out, when the application endpoint is unreachable or incorrectly configured, when a missing or invalid value is provided for `ApplicationHealthState` in the response body, or when the grace period expires. For more information, see [Unknown state](#unknown-state). |
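+
+To confirm that your application emits the payload the extension expects, you can call the endpoint manually from inside an instance. A minimal sketch, again assuming a hypothetical port `80` and request path `/healthEndpoint`:
+
+```bash
+# Manual check of a Rich Health States HTTP probe (hypothetical endpoint).
+curl -s -i http://localhost:80/healthEndpoint
+# For a Healthy instance, expect a 2xx status code and this response body:
+# {"ApplicationHealthState": "Healthy"}
+```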
-## Extension schema
+**TCP Protocol**
+
+| Protocol | Health State | Description |
+| -- | -- | -- |
+| TCP | Healthy | To send a *Healthy* signal, a successful handshake must be made with the provided application endpoint. |
+| TCP | Unhealthy | The instance will be marked as *Unhealthy* if a failed or incomplete handshake occurred with the provided application endpoint. |
+| TCP | Initializing | The instance automatically enters an *Initializing* state at extension start time. For more information, see [Initializing state](#initializing-state). |
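+
+You can verify the handshake by hand with any TCP client. A minimal sketch, assuming a netcat variant that supports `-z` and a hypothetical probe port `80`:
+
+```bash
+# Manual TCP handshake check (hypothetical port). Exit code 0 means the
+# handshake completed (Healthy); a failed handshake maps to Unhealthy.
+nc -z -v localhost 80
+```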
+
+## Initializing state
+
+This state only applies to Rich Health States. The *Initializing* state only occurs once at extension start time and can be configured by the extension settings `gracePeriod` and `numberOfProbes`.
+
+At extension startup, the application health will remain in the *Initializing* state until one of two scenarios occurs:
+- The same Health State (*Healthy* or *Unhealthy*) is reported a consecutive number of times, as configured through `numberOfProbes`
+- The `gracePeriod` expires
+
+If the same Health State (*Healthy* or *Unhealthy*) is reported the configured number of consecutive times, the application health transitions out of the *Initializing* state into the reported Health State (*Healthy* or *Unhealthy*).
+
+### Example
+
+If `numberOfProbes` = 3, that would mean:
+- To transition from *Initializing* to *Healthy* state: Application health extension must receive three consecutive *Healthy* signals via HTTP/HTTPS or TCP protocol
+- To transition from *Initializing* to *Unhealthy* state: Application health extension must receive three consecutive *Unhealthy* signals via HTTP/HTTPS or TCP protocol
+
+If the `gracePeriod` expires before a consecutive health status is reported by the application, the instance health will be determined as follows:
+- HTTP/HTTPS protocol: The application health will transition from *Initializing* to *Unknown*
+- TCP protocol: The application health will transition from *Initializing* to *Unhealthy*
+
+## Unknown state
+
+This state only applies to Rich Health States. The *Unknown* state is only reported for "http" or "https" probes and occurs in the following scenarios:
+- When a non-2xx status code is returned by the application
+- When the probe request times out
+- When the application endpoint is unreachable or incorrectly configured
+- When a missing or invalid value is provided for `ApplicationHealthState` in the response body
+- When the grace period expires
+
+An instance in an *Unknown* state is treated similar to an *Unhealthy* instance. If enabled, instance repairs will be carried out on an *Unknown* instance while rolling upgrades will be paused until the instance falls back into a *Healthy* state.
+
+The following table shows the health status interpretation for [Rolling Upgrades](virtual-machine-scale-sets-upgrade-scale-set.md#how-to-bring-vms-up-to-date-with-the-latest-scale-set-model) and [Instance Repairs](virtual-machine-scale-sets-automatic-instance-repairs.md):
+
+| Health State | Rolling Upgrade interpretation | Instance Repairs trigger |
+| -- | -- | -- |
+| Initializing | Wait for the state to be in *Healthy*, *Unhealthy*, or *Unknown* | No |
+| Healthy | Healthy | No |
+| Unhealthy | Unhealthy | Yes |
+| Unknown | Unhealthy | Yes |
+
+
+## Extension schema for Binary Health States
The following JSON shows the schema for the Application Health extension. The extension requires at a minimum either a "tcp", "http" or "https" request with an associated port or request path respectively.
The following JSON shows the schema for the Application Health extension. The ex
"typeHandlerVersion": "1.0", "settings": { "protocol": "<protocol>",
- "port": "<port>",
+ "port": <port>,
"requestPath": "</requestPath>",
- "intervalInSeconds": "5.0",
- "numberOfProbes": "1.0"
+ "intervalInSeconds": 5.0,
+ "numberOfProbes": 1.0
    }
  }
}
The following JSON shows the schema for the Application Health extension. The ex
### Property values
-| Name | Value / Example | Data Type
-| - | - | -
+| Name | Value / Example | Data Type |
+| - | - | - |
| apiVersion | `2018-10-01` | date |
| publisher | `Microsoft.ManagedServices` | string |
| type | `ApplicationHealthLinux` (Linux), `ApplicationHealthWindows` (Windows) | string |
-| typeHandlerVersion | `1.0` | int |
+| typeHandlerVersion | `1.0` | string |
### Settings
-| Name | Value / Example | Data Type
-| - | - | -
+| Name | Value / Example | Data Type |
+| - | - | - |
| protocol | `http` or `https` or `tcp` | string |
| port | Optional when protocol is `http` or `https`, mandatory when protocol is `tcp` | int |
| requestPath | Mandatory when protocol is `http` or `https`, not allowed when protocol is `tcp` | string |
+
+## Extension schema for Rich Health States
+
+The following JSON shows the schema for the Rich Health States extension. The extension requires, at a minimum, an "http" or "https" request with an associated request path. TCP probes are also supported, but they can't set the `ApplicationHealthState` through the probe response body and don't have access to the *Unknown* state.
+
+```json
+{
+ "type": "extensions",
+ "name": "HealthExtension",
+ "apiVersion": "2018-10-01",
+ "location": "<location>",
+ "properties": {
+ "publisher": "Microsoft.ManagedServices",
+ "type": "<ApplicationHealthLinux or ApplicationHealthWindows>",
+ "autoUpgradeMinorVersion": true,
+ "typeHandlerVersion": "2.0",
+ "settings": {
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>",
+ "intervalInSeconds": 5.0,
+ "numberOfProbes": 1.0,
+ "gracePeriod": 600
+ }
+ }
+}
+```
+
+### Property values
+
+| Name | Value / Example | Data Type |
+| - | - | - |
+| apiVersion | `2018-10-01` | date |
+| publisher | `Microsoft.ManagedServices` | string |
+| type | `ApplicationHealthLinux` (Linux), `ApplicationHealthWindows` (Windows) | string |
+| typeHandlerVersion | `2.0` | string |
+
+### Settings
+
+| Name | Value / Example | Data Type |
+| - | - | - |
+| protocol | `http` or `https` or `tcp` | string |
+| port | Optional when protocol is `http` or `https`, mandatory when protocol is `tcp` | int |
+| requestPath | Mandatory when protocol is `http` or `https`, not allowed when protocol is `tcp` | string |
+| intervalInSeconds | Optional, default is 5 seconds | int |
+| numberOfProbes | Optional, default is 1 | int |
+| gracePeriod | Optional, default = `intervalInSeconds` * `numberOfProbes`; maximum grace period is 7200 seconds | int |
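+
+For example, with `intervalInSeconds` set to 5 and `numberOfProbes` set to 3, omitting `gracePeriod` yields a default grace period of only 5 * 3 = 15 seconds, so set `gracePeriod` explicitly (such as the 600 seconds in the schema above) if your application needs longer to reach a steady state.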
+
+
## Deploy the Application Health extension
-There are multiple ways of deploying the Application Health extension to your scale sets as detailed in the examples below.
+There are multiple ways of deploying the Application Health extension to your scale sets as detailed in the following examples.
+
+### Binary Health States
-### REST API
+# [REST API](#tab/rest-api)
The following example adds the Application Health extension (with name `myHealthExtension`) to the `extensionProfile` in the scale set model of a Windows-based scale set.
+You can also use this example to change an existing extension from Rich Health States to Binary Health States by making a PATCH call instead of a PUT.
+
```
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet/extensions/myHealthExtension?api-version=2018-10-01`
```
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
"typeHandlerVersion": "1.0", "settings": { "protocol": "<protocol>",
- "port": "<port>",
+ "port": <port>,
"requestPath": "</requestPath>" } }
PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/
```

Use `PATCH` to edit an already deployed extension.
-### Azure PowerShell
+**Upgrade the VMs to install the extension.**
+
+```
+POST on `/subscriptions/<subscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myScaleSet>/manualupgrade?api-version=2022-08-01`
+```
+
+```json
+{
+ "instanceIds": ["*"]
+}
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
Use the [Add-AzVmssExtension](/powershell/module/az.compute/add-azvmssextension) cmdlet to add the Application Health extension to the scale set model definition. The following example adds the Application Health extension to the `extensionProfile` in the scale set model of a Windows-based scale set. The example uses the new Az PowerShell module.
+To change an existing extension from Rich Health States to Binary Health States, use [Update-AzVmssExtension](/cli/azure/azure-cli-extensions-overview#how-to-update-extensions) instead of `Add-AzVmssExtension` in the *Add the Application Health extension to the scale set model* step.
+
```azurepowershell-interactive
# Define the scale set variables
$vmScaleSetName = "myVMScaleSet"
Add-AzVmssExtension -VirtualMachineScaleSet $vmScaleSet `
Update-AzVmss -ResourceGroupName $vmScaleSetResourceGroup `
  -Name $vmScaleSetName `
  -VirtualMachineScaleSet $vmScaleSet
+
+# Upgrade instances to install the extension
+Update-AzVmssInstances -ResourceGroupName $vmScaleSetResourceGroup `
+ -VMScaleSetName $vmScaleSetName `
+ -InstanceId '*'
```
-
-### Azure CLI 2.0
+# [Azure CLI 2.0](#tab/azure-cli)
Use [az vmss extension set](/cli/azure/vmss/extension#az-vmss-extension-set) to add the Application Health extension to the scale set model definition. The following example adds the Application Health extension to the scale set model of a Linux-based scale set.
+You can also use this example to change an existing extension from Rich Health States to Binary Health.
+
```azurecli-interactive
az vmss extension set \
  --name ApplicationHealthLinux \
The extension.json file content.
"requestPath": "</requestPath>" } ```
+**Upgrade the VMs to install the extension.**
+
+```azurecli-interactive
+az vmss update-instances \
+ --resource-group <myVMScaleSetResourceGroup> \
+ --name <myVMScaleSet> \
+ --instance-ids "*"
+```
+
+### Rich Health States
+
+# [REST API](#tab/rest-api)
+
+The following example adds the **Application Health - Rich States** extension (with name `myHealthExtension`) to the `extensionProfile` in the scale set model of a Windows-based scale set.
+
+You can also use this example to upgrade an existing extension from Binary to Rich Health States by making a PATCH call instead of a PUT.
+
+```
+PUT on `/subscriptions/subscription_id/resourceGroups/myResourceGroup/providers/Microsoft.Compute/virtualMachineScaleSets/myScaleSet/extensions/myHealthExtension?api-version=2018-10-01`
+```
+
+```json
+{
+ "name": "myHealthExtension",
+ "properties": {
+ "publisher": "Microsoft.ManagedServices",
+ "type": "ApplicationHealthWindows",
+ "autoUpgradeMinorVersion": true,
+ "typeHandlerVersion": "2.0",
+ "settings": {
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>",
+ "intervalInSeconds": <intervalInSeconds>,
+ "numberOfProbes": <numberOfProbes>,
+ "gracePeriod": <gracePeriod>
+ }
+ }
+}
+```
+Use `PATCH` to edit an already deployed extension.
+
+**Upgrade the VMs to install the extension.**
+
+```
+POST on `/subscriptions/<subscriptionId>/resourceGroups/<myResourceGroup>/providers/Microsoft.Compute/virtualMachineScaleSets/<myScaleSet>/manualupgrade?api-version=2022-08-01`
+```
+
+```json
+{
+ "instanceIds": ["*"]
+}
+```
+
+# [Azure PowerShell](#tab/azure-powershell)
+
+Use the [Add-AzVmssExtension](/powershell/module/az.compute/add-azvmssextension) cmdlet to add the Application Health extension to the scale set model definition.
+
+The following example adds the **Application Health - Rich States** extension to the `extensionProfile` in the scale set model of a Windows-based scale set. The example uses the new Az PowerShell module.
+
+To upgrade an existing extension from Binary to Rich Health States, use [Update-AzVmssExtension](/cli/azure/azure-cli-extensions-overview#how-to-update-extensions) instead of `Add-AzVmssExtension` in the *Add the Application Health extension to the scale set model* step.
+
+```azurepowershell-interactive
+# Define the scale set variables
+$vmScaleSetName = "myVMScaleSet"
+$vmScaleSetResourceGroup = "myVMScaleSetResourceGroup"
+
+# Define the Application Health extension properties
+$publicConfig = @{"protocol" = "http"; "port" = 80; "requestPath" = "/healthEndpoint"; "gracePeriod" = 600};
+$extensionName = "myHealthExtension"
+$extensionType = "ApplicationHealthWindows"
+$publisher = "Microsoft.ManagedServices"
+
+# Get the scale set object
+$vmScaleSet = Get-AzVmss `
+ -ResourceGroupName $vmScaleSetResourceGroup `
+ -VMScaleSetName $vmScaleSetName
+
+# Add the Application Health extension to the scale set model
+Add-AzVmssExtension -VirtualMachineScaleSet $vmScaleSet `
+ -Name $extensionName `
+ -Publisher $publisher `
+ -Setting $publicConfig `
+ -Type $extensionType `
+ -TypeHandlerVersion "2.0" `
+ -AutoUpgradeMinorVersion $True
+
+# Update the scale set
+Update-AzVmss -ResourceGroupName $vmScaleSetResourceGroup `
+ -Name $vmScaleSetName `
+ -VirtualMachineScaleSet $vmScaleSet
+
+# Upgrade instances to install the extension
+Update-AzVmssInstances -ResourceGroupName $vmScaleSetResourceGroup `
+ -VMScaleSetName $vmScaleSetName `
+ -InstanceId '*'
+```
+
+# [Azure CLI 2.0](#tab/azure-cli)
+
+Use [az vmss extension set](/cli/azure/vmss/extension#az-vmss-extension-set) to add the Application Health extension to the scale set model definition.
+
+The following example adds the **Application Health - Rich States** extension to the scale set model of a Linux-based scale set.
+
+You can also use this example to upgrade an existing extension from Binary to Rich Health States.
+
+```azurecli-interactive
+az vmss extension set \
+ --name ApplicationHealthLinux \
+ --publisher Microsoft.ManagedServices \
+ --version 2.0 \
+ --resource-group <myVMScaleSetResourceGroup> \
+ --vmss-name <myVMScaleSet> \
+ --settings ./extension.json
+```
+The extension.json file content.
+
+```json
+{
+ "protocol": "<protocol>",
+ "port": <port>,
+ "requestPath": "</requestPath>",
+ "gracePeriod": <healthExtensionGracePeriod>
+}
+```
+**Upgrade the VMs to install the extension.**
+
+```azurecli-interactive
+az vmss update-instances \
+ --resource-group <myVMScaleSetResourceGroup> \
+ --name <myVMScaleSet> \
+ --instance-ids "*"
+```
+
+
+## Troubleshoot
+
+### View VMHealth - single instance
+```azurepowershell-interactive
+Get-AzVmssVM `
+  -InstanceView `
+  -ResourceGroupName <rgName> `
+  -VMScaleSetName <vmssName> `
+  -InstanceId <instanceId>
+```
+
+### View VMHealth - batch call
+This is only available for Virtual Machine Scale Sets with Uniform orchestration.
+
+```
+GET on `/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachineScaleSets/<vmssName>/virtualMachines/?api-version=2022-03-01&$expand=instanceview`
+```
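+
+To pull just the health signal out of the batch response, you can filter the instance views client-side. A sketch, assuming Azure CLI and `jq` are available and that the `vmHealth` property is populated in each instance view:
+
+```bash
+# List each instance ID with its reported health state (assumes vmHealth is populated).
+az rest --method get \
+  --url "https://management.azure.com/subscriptions/<subscriptionID>/resourceGroups/<resourceGroupName>/providers/Microsoft.Compute/virtualMachineScaleSets/<vmssName>/virtualMachines?api-version=2022-03-01&\$expand=instanceview" \
+  | jq -r '.value[] | "\(.instanceId): \(.properties.instanceView.vmHealth.status.code)"'
+```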
+
+### Health State isn't showing up
+If Health State isn't showing up in the Azure portal or via the GET call, check that the VM is upgraded to the latest model. If the VM isn't on the latest model, upgrade the VM and the health status will appear.
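+
+One way to check and remediate this from the CLI, sketched with placeholder resource names:
+
+```azurecli-interactive
+# Check whether each instance is running the latest scale set model.
+az vmss list-instances \
+  --resource-group <myVMScaleSetResourceGroup> \
+  --name <myVMScaleSet> \
+  --query "[].{instance:instanceId, latestModel:latestModelApplied}" \
+  --output table
+
+# Upgrade any out-of-date instances so the health status appears.
+az vmss update-instances \
+  --resource-group <myVMScaleSetResourceGroup> \
+  --name <myVMScaleSet> \
+  --instance-ids "*"
+```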
+
+### Extension execution output log
Extension execution output is logged to files found in the following directories:

```Windows
virtual-machines Azure Compute Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/azure-compute-gallery.md
There are limits, per subscription, for deploying resources using Azure Compute
- 10,000 image versions, per subscription, per region
- 100 replicas per image version, however 50 replicas should be sufficient for most use cases
- Any disk attached to the image must be less than or equal to 1 TB in size
+- Resource move isn't supported for Azure Compute Gallery resources
For more information, see [Check resource usage against limits](../networking/check-usage-against-limits.md) for examples on how to check your current usage.
virtual-machines Classic Vm Deprecation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/classic-vm-deprecation.md
Title: We're retiring Azure VMs (classic) on March 1, 2023
+ Title: We're retiring Azure VMs (classic) on September 1, 2023
description: This article provides a high-level overview of the retirement of VMs created using the classic deployment model.
Last updated 02/10/2020
-# Migrate your IaaS resources to Azure Resource Manager by March 1, 2023
+# Migrate your IaaS resources to Azure Resource Manager by September 1, 2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs
-In 2014, we launched infrastructure as a service (IaaS) on [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). We've been enhancing capabilities ever since. Because Azure Resource Manager now has full IaaS capabilities and other advancements, we deprecated the management of IaaS virtual machines (VMs) through [Azure Service Manager](./migration-classic-resource-manager-faq.yml) (ASM) on February 28, 2020. This functionality will be fully retired on March 1, 2023.
+In 2014, we launched infrastructure as a service (IaaS) on [Azure Resource Manager](https://azure.microsoft.com/features/resource-manager/). We've been enhancing capabilities ever since. Because Azure Resource Manager now has full IaaS capabilities and other advancements, we deprecated the management of IaaS virtual machines (VMs) through [Azure Service Manager](./migration-classic-resource-manager-faq.yml) (ASM) on February 28, 2020. This functionality will be fully retired on September 1, 2023.
-Today, about 90 percent of the IaaS VMs are using Azure Resource Manager. If you use IaaS resources through ASM, start planning your migration now. Complete it by March 1, 2023, to take advantage of [Azure Resource Manager](../azure-resource-manager/management/index.yml).
+Today, about 90 percent of the IaaS VMs are using Azure Resource Manager. If you use IaaS resources through ASM, start planning your migration now. Complete it by September 1, 2023, to take advantage of [Azure Resource Manager](../azure-resource-manager/management/index.yml).
VMs created using the classic deployment model will follow the [Modern Lifecycle Policy](https://support.microsoft.com/help/30881/modern-lifecycle-policy) for retirement.

## How does this affect me?

- As of February 28, 2020, customers who didn't utilize IaaS VMs through ASM in the month of February 2020 can no longer create VMs (classic).
-- On March 1, 2023, customers will no longer be able to start IaaS VMs by using ASM. Any that are still running or allocated will be stopped and deallocated.
-- On March 1, 2023, subscriptions that are not migrated to Azure Resource Manager will be informed regarding timelines for deleting any remaining VMs (classic).
+- On September 1, 2023, customers will no longer be able to start IaaS VMs by using ASM. Any that are still running or allocated will be stopped and deallocated.
+- On September 1, 2023, subscriptions that are not migrated to Azure Resource Manager will be informed regarding timelines for deleting any remaining VMs (classic).
This retirement does *not* affect the following Azure services and functionality:

- Storage accounts *not* used by VMs (classic)
virtual-machines Disks Enable Customer Managed Keys Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-customer-managed-keys-portal.md
Title: Azure portal - Enable customer-managed keys with SSE - managed disks
description: Enable customer-managed keys on your managed disks through the Azure portal. Previously updated : 06/16/2022 Last updated : 01/19/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark:
-Azure Disk Storage allows you to manage your own keys when using server-side encryption (SSE) for managed disks, if you choose. For conceptual information on SSE with customer managed keys, as well as other managed disk encryption types, see the **Customer-managed keys** section of our disk encryption article: [Customer-managed keys](disk-encryption.md#customer-managed-keys)
+Azure Disk Storage allows you to manage your own keys when using server-side encryption (SSE) for managed disks, if you choose. For conceptual information on SSE with customer managed keys, and other managed disk encryption types, see the **Customer-managed keys** section of our disk encryption article: [Customer-managed keys](disk-encryption.md#customer-managed-keys)
## Restrictions For now, customer-managed keys have the following restrictions: -- If this feature is enabled for your disk, you cannot disable it.
+- If this feature is enabled for your disk, you can't disable it.
If you need to work around this, you must copy all the data to an entirely different managed disk that isn't using customer-managed keys: - For Linux: [Copy a managed disk](./linux/disks-upload-vhd-to-managed-disk-cli.md#copy-a-managed-disk)
The following sections cover how to enable and use customer-managed keys for man
Now that you've created and set up your key vault and the disk encryption set, you can deploy a VM using the encryption. The VM deployment process is similar to the standard deployment process, the only differences are that you need to deploy the VM in the same region as your other resources and you opt to use a customer managed key.
-1. Search for **Virtual Machines** and select **+ Add** to create a VM.
-1. On the **Basic** blade, select the same region as your disk encryption set and Azure Key Vault.
-1. Fill in the other values on the **Basic** blade as you like.
+1. Search for **Virtual Machines** and select **+ Create** to create a VM.
+1. On the **Basic** pane, select the same region as your disk encryption set and Azure Key Vault.
+1. Fill in the other values on the **Basic** pane as you like.
- ![Screenshot of the VM creation experience, with the region value highlighted.](media/virtual-machines-disk-encryption-portal/server-side-encryption-create-a-vm-region.png)
+ :::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-create-a-vm-region.png" alt-text="Screenshot of the VM creation experience, with the region value highlighted." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-create-a-vm-region.png":::
-1. On the **Disks** blade, select **Encryption at rest with a customer-managed key**.
-1. Select your disk encryption set in the **Disk encryption set** drop-down.
+1. On the **Disks** pane, for **Key management** select your disk encryption set, key vault, and key in the drop-down.
1. Make the remaining selections as you like.
- ![Screenshot of the VM creation experience, the disks blade. With the disk encryption set drop-down highlighted.](media/virtual-machines-disk-encryption-portal/server-side-encryption-create-vm-select-customer-managed-key-disk-encryption-set.png)
+ :::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-create-vm-customer-managed-key-disk-encryption-set.png" alt-text="Screenshot of the VM creation experience, the disks pane, customer-managed key selected." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-create-vm-customer-managed-key-disk-encryption-set.png":::
## Enable on an existing disk > [!CAUTION]
-> Enabling disk encryption on any disks attached to a VM will require that you stop the VM.
+> Enabling disk encryption on any disks attached to a VM requires you to stop the VM.
1. Navigate to a VM that is in the same region as one of your disk encryption sets. 1. Open the VM and select **Stop**.
- ![Screenshot of the main overlay for your example VM, with the Stop button highlighted.](media/virtual-machines-disk-encryption-portal/server-side-encryption-stop-vm-to-encrypt-disk-fix.png)
+ :::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-stop-vm-to-encrypt-disk-fix.png" alt-text="Screenshot of the main overlay for your example VM, with the Stop button highlighted." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-stop-vm-to-encrypt-disk-fix.png":::
-1. After the VM has finished stopping, select **Disks** and then select the disk you want to encrypt.
+1. After the VM has finished stopping, select **Disks**, and then select the disk you want to encrypt.
- ![Screenshot of your example VM, with the Disks blade open. The OS disk is highlighted, as an example disk for you to select.](media/virtual-machines-disk-encryption-portal/server-side-encryption-existing-disk-select.png)
+ :::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-existing-disk-select.png" alt-text="Screenshot of your example VM, with the Disks pane open, the OS disk is highlighted, as an example disk for you to select." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-existing-disk-select.png":::
-1. Select **Encryption** and select **Encryption at rest with a customer-managed key** and then select your disk encryption set in the drop-down list.
+1. Select **Encryption** and under **Key management** select your key vault and key in the drop-down list, under **Customer-managed key**.
1. Select **Save**.
- ![Screenshot of your example OS disk. The encryption blade is open, encryption at rest with a customer-managed key is selected, as well as your example Azure Key Vault. After making those selections, the save button is selected.](media/virtual-machines-disk-encryption-portal/server-side-encryption-encrypt-existing-disk-customer-managed-key.png)
+ :::image type="content" source="media/virtual-machines-disk-encryption-portal/server-side-encryption-encrypt-existing-disk-customer-managed-key.png" alt-text="Screenshot of your example OS disk, the encryption pane is open, encryption at rest with a customer-managed key is selected, as well as your example Azure Key Vault." lightbox="media/virtual-machines-disk-encryption-portal/server-side-encryption-encrypt-existing-disk-customer-managed-key.png":::
1. Repeat this process for any other disks attached to the VM you'd like to encrypt.
-1. When your disks finish switching over to customer-managed keys, if there are no there no other attached disks you'd like to encrypt, you may start your VM.
+1. When your disks finish switching over to customer-managed keys, if there are no other attached disks you'd like to encrypt, start your VM.
> [!IMPORTANT] > Customer-managed keys rely on managed identities for Azure resources, a feature of Azure Active Directory (Azure AD). When you configure customer-managed keys, a managed identity is automatically assigned to your resources under the covers. If you subsequently move the subscription, resource group, or managed disk from one Azure AD directory to another, the managed identity associated with the managed disks is not transferred to the new tenant, so customer-managed keys may no longer work. For more information, see [Transferring a subscription between Azure AD directories](../active-directory/managed-identities-azure-resources/known-issues.md#transferring-a-subscription-between-azure-ad-directories).
virtual-machines Disks Enable Double Encryption At Rest Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-double-encryption-at-rest-portal.md
Title: Enable double encryption at rest - Azure portal - managed disks
description: Enable double encryption at rest for your managed disk data using the Azure portal. Previously updated : 06/29/2021 Last updated : 01/19/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark:
-Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double encryption at rest, as well as other managed disk encryption types, see the [Double encryption at rest](disk-encryption.md#double-encryption-at-rest) section of our disk encryption article.
+Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double encryption at rest, and other managed disk encryption types, see the [Double encryption at rest](disk-encryption.md#double-encryption-at-rest) section of our disk encryption article.
## Getting started
-1. Sign in to the [Azure portal](https://aka.ms/diskencryptionupdates).
-
- > [!IMPORTANT]
- > You must use the [provided link](https://aka.ms/diskencryptionupdates) to access the Azure portal. Double encryption at rest is not currently visible in the public Azure portal without using the link.
-
+1. Sign in to the [Azure portal](https://portal.azure.com).
1. Search for and select **Disk Encryption Sets**.
- :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-disk-encryption-sets-search.png" alt-text="Screenshot of the main Azure portal, disk encryption sets is highlighted in the search bar.":::
-
-1. Select **+ Add**.
-
- :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-add-disk-encryption-set.png" alt-text="Screenshot of the disk encryption set blade, + Add is highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-disk-encryption-sets-search.png" alt-text="Screenshot of the main Azure portal, disk encryption sets is highlighted in the search bar." lightbox="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-disk-encryption-sets-search.png":::
+1. Select **+ Create**.
1. Select one of the supported regions.
1. For **Encryption type**, select **Double encryption with platform-managed and customer-managed keys**.
Azure Disk Storage supports double encryption at rest for managed disks. For con
1. Fill in the remaining info.
- :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-create-disk-encryption-set-blade.png" alt-text="Screenshot of the disk encryption set creation blade, regions and double encryption with platform-managed and customer-managed keys are highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-create-disk-encryption-set-blade.png" alt-text="Screenshot of the disk encryption set creation blade, regions and double encryption with platform-managed and customer-managed keys are highlighted." lightbox="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-create-disk-encryption-set-blade.png":::
1. Select an Azure Key Vault and key, or create a new one if necessary.

    > [!NOTE]
    > If you create a Key Vault instance, you must enable soft delete and purge protection. These settings are mandatory when using a Key Vault for encrypting managed disks, and protect you from losing data due to accidental deletion.
- :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-select-key-vault.png" alt-text="Screenshot of the Key Vault creation blade.":::
+ :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-select-key-vault.png" alt-text="Screenshot of the Key Vault creation blade." lightbox="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-select-key-vault.png":::
1. Select **Create**.
1. Navigate to the disk encryption set you created, and select the error that is displayed. This will configure your disk encryption set to work.
- :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-disk-set-error.png" alt-text="Screenshot of the disk encryption set displayed error, the error text is: To associate a disk, image, or snapshot with this disk encryption set, you must grant permissions to the key vault.":::
+ :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-disk-set-error.png" alt-text="Screenshot of the disk encryption set displayed error, the error text is: To associate a disk, image, or snapshot with this disk encryption set, you must grant permissions to the key vault." lightbox="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-disk-set-error.png":::
A notification should pop up and succeed. Doing this will allow you to use the disk encryption set with your key vault.
- ![Screenshot of successful permission and role assignment for your key vault.](media/virtual-machines-disks-double-encryption-at-rest-portal/disk-encryption-notification-success.png)
+ :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/disk-encryption-notification-success.png" alt-text="Screenshot of successful permission and role assignment for your key vault." lightbox="media/virtual-machines-disks-double-encryption-at-rest-portal/disk-encryption-notification-success.png":::
1. Navigate to your disk.
1. Select **Encryption**.
-1. For **Encryption type**, select **Double encryption with platform-managed and customer-managed keys**.
-1. Select your disk encryption set.
+1. For **Key management**, select one of the keys under **Platform-managed and customer-managed keys**.
1. Select **Save**.
- :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-enable-disk-blade.png" alt-text="Screenshot of the encryption blade for your managed disk, the aforementioned encryption type is highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-enable-disk-blade.png" alt-text="Screenshot of the encryption blade for your managed disk, the aforementioned encryption type is highlighted." lightbox="media/virtual-machines-disks-double-encryption-at-rest-portal/double-encryption-enable-disk-blade.png":::
You have now enabled double encryption at rest on your managed disk.

## Next steps

- [Azure PowerShell - Enable customer-managed keys with server-side encryption - managed disks](./windows/disks-enable-customer-managed-keys-powershell.md)
virtual-machines Disks Enable Host Based Encryption Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-enable-host-based-encryption-portal.md
description: Use encryption at host to enable end-to-end encryption on your Azur
Previously updated : 09/27/2022 Last updated : 01/19/2023
Temporary disks and ephemeral OS disks are encrypted at rest with platform-manag
### Supported VM sizes
-Legacy VM Sizes are not supported. You can find the list of supported VM sizes by either using the [Azure PowerShell module](windows/disks-enable-host-based-encryption-powershell.md#finding-supported-vm-sizes) or [Azure CLI](linux/disks-enable-host-based-encryption-cli.md#finding-supported-vm-sizes).
+Legacy VM Sizes aren't supported. You can find the list of supported VM sizes by either using the [Azure PowerShell module](windows/disks-enable-host-based-encryption-powershell.md#finding-supported-vm-sizes) or [Azure CLI](linux/disks-enable-host-based-encryption-cli.md#finding-supported-vm-sizes).
## Prerequisites
-You must enable the feature for your subscription before you use the EncryptionAtHost property for your VM/VMSS. Use the following steps to enable the feature for your subscription:
+You must enable the feature for your subscription before you can use encryption at host for either your VM or virtual machine scale set. Use the following steps to enable the feature for your subscription:
1. **Azure portal**: Select the Cloud Shell icon on the [Azure portal](https://portal.azure.com):
- ![Icon to launch the Cloud Shell from the Azure portal](../Cloud-Shell/media/overview/portal-launch-icon.png)
+ ![Screenshot of icon to launch the Cloud Shell from the Azure portal.](../Cloud-Shell/media/overview/portal-launch-icon.png)
1. Execute the following command to register the feature for your subscription:

    ### [Azure PowerShell](#tab/azure-powershell)

    ```powershell
- Register-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute"
+ Register-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute"
    ```

    ### [Azure CLI](#tab/azure-cli)
You must enable the feature for your subscription before you use the EncryptionA
-1. Confirm that the registration state is **Registered** (takes a few minutes) using the command below before trying out the feature.
+1. Confirm that the registration state is **Registered** (registration may take a few minutes) using the following command before trying out the feature.
    ### [Azure PowerShell](#tab/azure-powershell)

    ```powershell
- Get-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute"
+ Get-AzProviderFeature -FeatureName "EncryptionAtHost" -ProviderNamespace "Microsoft.Compute"
    ```

    ### [Azure CLI](#tab/azure-cli)
You must enable the feature for your subscription before you use the EncryptionA
## Deploy a VM with platform-managed keys

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Search for **Virtual Machines** and select **+ Add** to create a VM.
-1. Create a new virtual machine, select an appropriate region and a supported VM size.
-1. Fill in the other values on the **Basic** pane as you like, then proceed to the **Disks** pane.
+1. Search for **Virtual Machines** and select **+ Create** to create a VM.
+1. Select an appropriate region and a supported VM size.
+1. Fill in the other values on the **Basic** pane as you like, and then proceed to the **Disks** pane.
- :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png" alt-text="Screenshot of the virtual machine creation basics pane, region and V M size are highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png" alt-text="Screenshot of the virtual machine creation basics pane, region and VM size are highlighted." lightbox="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png":::
1. On the **Disks** pane, select **Encryption at host**.
1. Make the remaining selections as you like.
- :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/host-based-encryption-platform-keys.png" alt-text="Screenshot of the virtual machine creation disks pane, encryption at host highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/host-based-encryption-platform-keys.png" alt-text="Screenshot of the virtual machine creation disks pane, encryption at host highlighted." lightbox="media/virtual-machines-disks-encryption-at-host-portal/host-based-encryption-platform-keys.png":::
-1. Finish the VM deployment process, make selections that fit your environment.
+1. For the rest of the VM deployment process, make selections that fit your environment, and complete the deployment.
-You have now deployed a VM with encryption at host enabled, and the cache for the disk is encrypted using platform-managed keys.
+You've now deployed a VM with encryption at host enabled, and the cache for the disk is encrypted using platform-managed keys.
## Deploy a VM with customer-managed keys
Once the feature is enabled, you'll need to set up an Azure Key Vault and a disk
[!INCLUDE [virtual-machines-disks-encryption-create-key-vault-portal](../../includes/virtual-machines-disks-encryption-create-key-vault-portal.md)]
-## Deploy a VM
+### Deploy a VM
Now that you've set up an Azure Key Vault and disk encryption set, you can deploy a VM and it will use encryption at host.
Now that you've setup an Azure Key Vault and disk encryption set, you can deploy
1. Create a new virtual machine, select an appropriate region and a supported VM size.
1. Fill in the other values on the **Basic** pane as you like, then proceed to the **Disks** pane.
- :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png" alt-text="Screenshot of the virtual machine creation basics pane, region and V M size are highlighted.":::
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png" alt-text="Screenshot of the virtual machine creation basics pane, region and VM size are highlighted." lightbox="media/virtual-machines-disks-encryption-at-host-portal/disks-encryption-at-host-basic-blade.png":::
-1. On the **Disks** pane, select **Encryption at-rest for customer-managed key** for **SSE encryption type** and select your disk encryption set.
-1. Select **Encryption at host**.
+1. On the **Disks** pane, select **Encryption at host**.
+1. Select **Key management** and select one of your customer-managed keys.
1. Make the remaining selections as you like.
- :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-host-based-encryption-customer-managed-keys.png" alt-text="Screenshot of the virtual machine creation disks pane, encryption at host is highlighted, customer-managed keys selected.":::
+ :::image type="content" source="media/virtual-machines-disks-encryption-at-host-portal/disks-host-based-encryption-customer-managed-keys.png" alt-text="Screenshot of the virtual machine creation disks pane, encryption at host is highlighted, customer-managed keys selected." lightbox="media/virtual-machines-disks-encryption-at-host-portal/disks-host-based-encryption-customer-managed-keys.png":::
-1. Finish the VM deployment process, make selections that fit your environment.
+1. For the rest of the VM deployment process, make selections that fit your environment, and complete the deployment.
-You have now deployed a VM with encryption at host enabled.
+You've now deployed a VM with encryption at host enabled using customer-managed keys.
## Disable host based encryption
-Make sure your VM is deallocated first, you cannot disable encryption at host unless your VM is deallocated.
+Deallocate your VM first; encryption at host can't be disabled unless your VM is deallocated.
1. On your VM, select **Disks** and then select **Additional settings**.
virtual-machines Diagnostics Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/diagnostics-linux.md
Supported distributions and versions:
- OpenSUSE 13.1+
- SUSE Linux Enterprise Server 12
- Debian 9, 8, 7
-- Red Hat Enterprise Linux (RHEL) 8, 7, 6.7+
+- Red Hat Enterprise Linux (RHEL) 7, 6.7+
- Alma Linux 8
- Rocky Linux 8
virtual-machines Iaas Antimalware Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/iaas-antimalware-windows.md
documentationcenter: '' + ms.assetid:
vm-windows Previously updated : 07/30/2021 Last updated : 01/19/2023
Depends on your type of deployment, use the corresponding commands to deploy the
* [Azure Resource Manager based Virtual Machine](../../security/fundamentals/antimalware-code-samples.md#enable-and-configure-microsoft-antimalware-for-azure-resource-manager-vms)
* [Azure Service Fabric Clusters](../../security/fundamentals/antimalware-code-samples.md#add-microsoft-antimalware-to-azure-service-fabric-clusters)
* [Classic Cloud Service](/powershell/module/servicemanagement/azure.service/set-azureserviceextension)
+ * [Azure Arc-enabled servers](../../security/fundamentals/antimalware-code-samples.md#add-microsoft-antimalware-for-azure-arc-enabled-servers)
## Troubleshoot and support
virtual-machines Lasv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lasv3-series.md
The Lasv3-series of Azure Virtual Machines (Azure VMs) features high-throughput,
| Standard_L80as_v3 | 80 | 640 | 800 | 10x1.92TB | 3.8M/20000 | 80000/1400 | 80000/2000 | 32 | 8 | 32000 |

1. **Temp disk**: Lasv3-series VMs have a standard SCSI-based temp resource disk for use by the OS paging or swap file (`D:` on Windows, `/dev/sdb` on Linux). This disk provides 80 GiB of storage, 4000 IOPS, and 80 MBps transfer rate for every 8 vCPUs. For example, Standard_L80as_v3 provides 800 GiB at 40000 IOPS and 800 MBPS. This configuration ensures that the NVMe drives can be fully dedicated to application use. This disk is ephemeral, and all data is lost on stop or deallocation.
-1. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM. Local NVMe disks aren't encrypted by [Azure Storage encryption](disk-encryption.md), even if you enable [encryption at host](disk-encryption.md#supported-vm-sizes).
-1. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lasv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on Lasv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
-1. **Max burst uncached data disk throughput**: Lasv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
+2. **NVMe Disks**: NVMe disk throughput can go higher than the specified numbers. However, higher performance isn't guaranteed. Local NVMe disks are ephemeral. Data is lost on these disks if you stop or deallocate your VM.
+3. **NVMe Disk encryption**: Lasv3 VMs created or allocated on or after 1/1/2023 have their local NVMe drives encrypted by default using hardware-based encryption with a platform-managed key, except for the regions listed below.
+
+> [!NOTE]
+> Central US and Qatar Central do not support local NVMe disk encryption, but support will be added in the future.
+
+4. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lasv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on Lasv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
+5. **Max burst uncached data disk throughput**: Lasv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
> [!NOTE] > Lasv3-series VMs don't provide a host cache for the data disk because this configuration doesn't benefit the Lasv3 workloads.
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/convert-disk-storage.md
Previously updated : 02/13/2021 Last updated : 01/18/2023
This article shows how to convert managed disks from one disk type to another by
## Before you begin
-* Disk conversion requires a restart of the virtual machine (VM), so schedule the migration of your disk storage during a pre-existing maintenance window.
-* For unmanaged disks, first [convert to managed disks](convert-unmanaged-to-managed-disks.md) so you can switch between storage options.
+Conversion requires a restart of the virtual machine (VM), so schedule the migration of your disk during a pre-existing maintenance window.
+
+## Restrictions
+
+- You can only change disk type once per day.
+- You can only change the disk type of managed disks. If your disk is unmanaged, [convert it to a managed disk](convert-unmanaged-to-managed-disks.md) to switch between disk types.
## Switch all managed disks of a VM from one account to another
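
For a single disk, the switch comes down to updating the disk's SKU, with the owning VM deallocated per the restart requirement above. A minimal CLI sketch with hypothetical names:

```azurecli
az disk update \
  --resource-group myResourceGroup \
  --name myManagedDisk \
  --sku Premium_LRS
```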
virtual-machines Disks Enable Double Encryption At Rest Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/disks-enable-double-encryption-at-rest-cli.md
Title: Enable double encryption at rest - Azure CLI - managed disks description: Enable double encryption at rest for your managed disk data using the Azure CLI. Previously updated : 06/29/2021 Last updated : 01/20/2023
**Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Flexible scale sets
-Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double encryption at rest, as well as other managed disk encryption types, see the [Double encryption at rest](../disk-encryption.md#double-encryption-at-rest) section of our disk encryption article.
+Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double encryption at rest, and other managed disk encryption types, see the [Double encryption at rest](../disk-encryption.md#double-encryption-at-rest) section of our disk encryption article.
## Prerequisites
-Install the latest [Azure CLI](/cli/azure/install-az-cli2) and log in to an Azure account with [az login](/cli/azure/reference-index).
+Install the latest [Azure CLI](/cli/azure/install-az-cli2) and sign in to an Azure account with [az login](/cli/azure/reference-index).
## Getting started

1. Create an instance of Azure Key Vault and encryption key.
- When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures that the Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures that a deleted key cannot be permanently deleted until the retention period lapses. These settings protect you from losing data due to accidental deletion. These settings are mandatory when using a Key Vault for encrypting managed disks.
+ When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures that the Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures that a deleted key can't be permanently deleted until the retention period lapses. These settings protect you from losing data due to accidental deletion. These settings are mandatory when using a Key Vault for encrypting managed disks.
```azurecli
Install the latest [Azure CLI](/cli/azure/install-az-cli2) and log in to an Azur
    az keyvault key create --vault-name $keyVaultName -n $keyName --protection software
    ```
-1. Create a DiskEncryptionSet with encryptionType set as EncryptionAtRestWithPlatformAndCustomerKeys. Use API version **2020-05-01** in the Azure Resource Manager (ARM) template.
+1. Get the key URL of the key you created with `az keyvault key show`.
+
+ ```azurecli
+ az keyvault key show --name $keyName --vault-name $keyVaultName
+ ```
+
+1. Create a DiskEncryptionSet with encryptionType set as EncryptionAtRestWithPlatformAndCustomerKeys. Replace `yourKeyURL` with the URL you received from `az keyvault key show`.
```azurecli
- az deployment group create -g $rgName \
- --template-uri "https://raw.githubusercontent.com/Azure-Samples/managed-disks-powershell-getting-started/master/DoubleEncryption/CreateDiskEncryptionSetForDoubleEncryption.json" \
- --parameters "diskEncryptionSetName=$diskEncryptionSetName" "encryptionType=EncryptionAtRestWithPlatformAndCustomerKeys" "keyVaultId=$keyVaultId" "keyVaultKeyUrl=$keyVaultKeyUrl" "region=$location"
+ az disk-encryption-set create --resource-group $rgName --name $diskEncryptionSetName --key-url yourKeyURL --source-vault $keyVaultName --encryption-type EncryptionAtRestWithPlatformAndCustomerKeys
    ```

1. Grant the DiskEncryptionSet resource access to the key vault.
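
    A sketch of one way to grant that access, assuming the disk encryption set was created with a system-assigned identity; the variables reuse the names from the earlier steps:

    ```azurecli
    # Look up the disk encryption set's managed identity (assumes a system-assigned identity).
    desIdentity=$(az disk-encryption-set show --name $diskEncryptionSetName --resource-group $rgName --query identity.principalId --output tsv)

    # Allow that identity to use the key vault's keys for wrapping and unwrapping.
    az keyvault set-policy --name $keyVaultName --resource-group $rgName \
        --object-id $desIdentity \
        --key-permissions wrapkey unwrapkey get
    ```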
virtual-machines Mac Create Ssh Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/linux/mac-create-ssh-keys.md
Previously updated : 09/10/2021 Last updated : 01/19/2023
ssh-keygen -m PEM -t rsa -b 4096
> [!NOTE]
> You can also create key pairs with the [Azure CLI](/cli/azure) with the [az sshkey create](/cli/azure/sshkey#az-sshkey-create) command, as described in [Generate and store SSH keys](../ssh-keys-azure-cli.md).
-If you use the [Azure CLI](/cli/azure) to create your VM with the [az vm create](/cli/azure/vm#az-vm-create) command, you can optionally generate SSH public and private key files using the `--generate-ssh-keys` option. The key files are stored in the ~/.ssh directory unless specified otherwise with the `--ssh-dest-key-path` option. If an ssh key pair already exists and the `--generate-ssh-keys` option is used, a new key pair will not be generated but instead the existing key pair will be used. In the following command, replace *VMname* and *RGname* with your own values:
+If you use the [Azure CLI](/cli/azure) to create your VM with the [az vm create](/cli/azure/vm#az-vm-create) command, you can optionally generate SSH public and private key files using the `--generate-ssh-keys` option. The key files are stored in the ~/.ssh directory unless specified otherwise with the `--ssh-dest-key-path` option. If an ssh key pair already exists and the `--generate-ssh-keys` option is used, a new key pair won't be generated but instead the existing key pair will be used. In the following command, replace *VMname* and *RGname* with your own values:
```azurecli
az vm create --name VMname --resource-group RGname --image UbuntuLTS --generate-ssh-keys
```
With the public key deployed on your Azure VM, and the private key on your local
ssh azureuser@myvm.westus.cloudapp.azure.com ```
-If you're connecting to this VM for the first time, you'll be asked to verify the host's fingerprint. It's tempting to simply accept the fingerprint that's presented, but that approach exposes you to a possible person-in-the-middle attack. You should always validate the host's fingerprint. You need to do this only the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command `ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'`.
+If you're connecting to this VM for the first time, you'll be asked to verify the host's fingerprint. It's tempting to accept the fingerprint that's presented, but that approach exposes you to a possible person-in-the-middle attack. You should always validate the host's fingerprint. You need to do this only the first time you connect from a client. To obtain the host fingerprint via the portal, use the Run Command feature to execute the command `ssh-keygen -lf /etc/ssh/ssh_host_ecdsa_key.pub | awk '{print $2}'`.
:::image type="content" source="media/ssh-from-windows/run-command-validate-host-fingerprint.png" alt-text="Screenshot showing using the Run Command to validate the host fingerprint.":::
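As a client-side cross-check (a sketch; the hostname is taken from the example above), you can print the fingerprint your SSH client sees and compare it with the value returned by Run Command:

```bash
# Fetch the VM's ECDSA host key over the network and print its fingerprint
ssh-keyscan -t ecdsa myvm.westus.cloudapp.azure.com 2>/dev/null | ssh-keygen -lf -
```

If the two fingerprints don't match, don't connect.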
virtual-machines Lsv3 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/lsv3-series.md
The Lsv3-series VMs are available in sizes from 8 to 80 vCPUs. There are 8 GiB o
3. **NVMe Disk encryption**: Lsv3 VMs created or allocated on or after 1/1/2023 have their local NVMe drives encrypted by default using hardware-based encryption with a platform-managed key, except for the regions listed below.

> [!NOTE]
-> Central US, East US 2, and Qatar Central do not support Local NVME disk encryption, but will be added in the future.
+> Central US and Qatar Central don't currently support local NVMe disk encryption; support will be added in the future.
-5. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
-6. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
+4. **NVMe Disk throughput**: Hyper-V NVMe Direct technology provides unthrottled access to local NVMe drives mapped securely into the guest VM space. Lsv3 NVMe disk throughput can go higher than the specified numbers, but higher performance isn't guaranteed. To achieve maximum performance, see how to optimize performance on the Lsv3-series [Windows-based VMs](../virtual-machines/windows/storage-performance.md) or [Linux-based VMs](../virtual-machines/linux/storage-performance.md). Read/write performance varies based on IO size, drive load, and capacity utilization.
+5. **Max burst uncached data disk throughput**: Lsv3-series VMs can [burst their disk performance](./disk-bursting.md) for up to 30 minutes at a time.
> [!NOTE]
> Lsv3-series VMs don't provide host cache for data disks because it doesn't benefit the Lsv3 workloads.
virtual-machines Nvv4 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/nvv4-series.md
The NVv4-series virtual machines are powered by [AMD Radeon Instinct MI25](https
[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br> <br>
-| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max NICs / Expected network bandwidth (MBps) |
-| | | | | | | | |
-| Standard_NV4as_v4 |4 |14 |88 | 1/8 | 2 | 4 | 2 / 1000 |
-| Standard_NV8as_v4 |8 |28 |176 | 1/4 | 4 | 8 | 4 / 2000 |
-| Standard_NV16as_v4 |16 |56 |352 | 1/2 | 8 | 16 | 8 / 4000 |
-| Standard_NV32as_v4 |32 |112 |704 | 1 | 16 | 32 | 8 / 8000 |
+| Size | vCPU | Memory: GiB | Temp storage (SSD) GiB | GPU | GPU memory: GiB | Max data disks | Max uncached disk throughput: IOPS/MBps | Max NICs / Expected network bandwidth (MBps) |
+| | | | | | | | | |
+| Standard_NV4as_v4 |4 |14 |88 | 1/8 | 2 | 4 | 6400 / 96 | 2 / 1000 |
+| Standard_NV8as_v4 |8 |28 |176 | 1/4 | 4 | 8 | 12800 / 192 | 4 / 2000 |
+| Standard_NV16as_v4 |16 |56 |352 | 1/2 | 8 | 16 | 25600 / 384 | 8 / 4000 |
+| Standard_NV32as_v4 |32 |112 |704 | 1 | 16 | 32 | 51200 / 768 |8 / 8000 |
<sup>1</sup> NVv4-series VMs feature AMD Simultaneous multithreading Technology
virtual-machines Windows In Place Upgrade https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows-in-place-upgrade.md
description: This article describes how to do an in-place upgrade for VMs runnin
Previously updated : 01/17/2023 Last updated : 01/19/2023

# In-place upgrade for VMs running Windows Server in Azure
-An in-place upgrade allows you to go from an older operating system to a newer one while keeping your settings, server roles, and data intact. This article will teach you how to move your Azure VMs to a later version of Windows Server using an in-place upgrade.
+An in-place upgrade allows you to go from an older operating system to a newer one while keeping your settings, server roles, and data intact. This article will teach you how to move your Azure VMs to a later version of Windows Server using an in-place upgrade. Currently, upgrading to Windows Server 2019 and Windows Server 2022 is supported.
Before you begin an in-place upgrade: - Review the upgrade requirements for the target operating system:
- - Upgrade options for Windows Server 2019
+ - Upgrade options for Windows Server 2019 from Windows Server 2012 R2 or Windows Server 2016
- - Upgrade options for Windows Server 2022
+ - Upgrade options for Windows Server 2022 from Windows Server 2016 or Windows Server 2019
- Verify the operating system disk has enough [free space to perform the in-place upgrade](/windows-server/get-started/hardware-requirements#storage-controller-and-disk-space-requirements). If additional space is needed, [follow these steps](/azure/virtual-machines/windows/expand-os-disk) to expand the operating system disk attached to the VM.
- Disable antivirus and anti-spyware software and firewalls. These types of software can conflict with the upgrade process. Re-enable antivirus and anti-spyware software and firewalls after the upgrade is completed.
-## Windows versions not yet supported for in-place system upgrades
-For the following versions, consider using the work-around in the next section:
+## Windows versions not yet supported for in-place upgrade
+For the following versions, consider using the [workaround](#workaround) later in this article:
- Windows Server 2012 Datacenter
- Windows Server 2012 Standard
- Windows Server 2008 R2 Datacenter
- Windows Server 2008 R2 Standard
-### Work-around
-To work around this issue, create an Azure VM that's running a supported version. And then either migrate the workload (Method 1, preferred), or download and upgrade the VHD of the VM (Method 2).
-To prevent data loss, back up the Windows 10 VM by using [Azure Backup](../backup/backup-overview.md). Or use a third-party backup solution from [Azure Marketplace Backup & Recovery](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=Backup+&exp=ubp8).
-#### Method 1: Deploy a newer system and migrate the workload
-
-Create an Azure VM that runs a supported version of the operating system, and then migrate the workload. To do so, you'll use Windows Server migration tools. For instructions to migrate Windows Server roles and features, see [Install, use, and remove Windows Server migration tools](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012).
--
-#### Method 2: Download and upgrade the VHD
-1. Do an in-place upgrade in a local Hyper-V VM
- 1. [Download the VHD](./windows/download-vhd.md) of the VM.
- 2. Attach the VHD to a local Hyper-V VM.
- 3. Start the VM.
- 4. Run the in-place upgrade.
-2. Upload the VHD to Azure. For more information, see [Upload a generalized VHD and use it to create new VMs in Azure](./windows/upload-generalized-managed.md).
- ## Upgrade VM to volume license (KMS server activation)
We recommend that you create a snapshot of your operating system disk and any da
## Create upgrade media disk
-To perform an in-place upgrade the upgrade media must be attached to the VM as a Managed Disk. To create the upgrade media, use the following PowerShell script with the specific variables configured for the desired upgrade media. The created upgrade media disk can be used to upgrade multiple VMs, but it can only be used to upgrade a single VM at a time. To upgrade multiple VMs simultaneously multiple upgrade disks must be created for each simultaneous upgrade.
+To start an in-place upgrade, the upgrade media must be attached to the VM as a Managed Disk. To create the upgrade media, modify the variables in the following PowerShell script for Windows Server 2022. The upgrade media disk can be used to upgrade multiple VMs, but it can only upgrade a single VM at a time. To upgrade multiple VMs simultaneously, create a separate upgrade disk for each concurrent upgrade.
| Parameter | Definition |
|---|---|
To perform an in-place upgrade the upgrade media must be attached to the VM as a
| diskName | Name of the Managed Disk that will contain the upgrade media |
| sku | Windows Server upgrade media version. This must be either: `server2022Upgrade` or `server2019Upgrade` |
-### Script contents
+### PowerShell script
```azurepowershell-interactive
#
# Customer specific parameters
-#
+
+# Resource group of the source VM
$resourceGroup = "WindowsServerUpgrades"
+
+# Location of the source VM
$location = "WestUS2"
-$diskName = "WindowsServer2022UpgradeDisk"
+
+# Zone of the source VM, if any
$zone = ""
-#
-# Selection of upgrade target version
-#
+# Disk name for the disk that will be created
+$diskName = "WindowsServer2022UpgradeDisk"
+
+# Target version for the upgrade - must be either server2022Upgrade or server2019Upgrade
$sku = "server2022Upgrade"
-#
+ # Common parameters
-#
+
$publisher = "MicrosoftWindowsServer"
$offer = "WindowsServerUpgrade"
$managedDiskSKU = "Standard_LRS"
#
-# Get the latest version of the image
-#
+# Get the latest version of the special (hidden) VM Image from the Azure Marketplace
+
$versions = Get-AzVMImage -PublisherName $publisher -Location $location -Offer $offer -Skus $sku | sort-object -Descending {[version] $_.Version }
$latestString = $versions[0].Version
-#
-# Get Image
-#
+
+# Get the special (hidden) VM Image from the Azure Marketplace by version - the image is used to create a disk to upgrade to the new version
+
$image = Get-AzVMImage -Location $location `
  -PublisherName $publisher `
Once the upgrade process has completed successfully the following steps should b
- Enable any antivirus, anti-spyware or firewall software that may have been disabled at the start of the upgrade process.
-## Recovery from failures
+## Workaround
+
+For versions of Windows Server that aren't currently supported, create an Azure VM that's running a supported version, and then either migrate the workload (Method 1, preferred) or download and upgrade the VHD of the VM (Method 2).
+To prevent data loss, back up the VM by using [Azure Backup](../backup/backup-overview.md), or use a third-party backup solution from [Azure Marketplace Backup & Recovery](https://azuremarketplace.microsoft.com/marketplace/apps?page=1&search=Backup+&exp=ubp8).
+### Method 1: Deploy a newer system and migrate the workload
+
+Create an Azure VM that runs a supported version of the operating system, and then migrate the workload. To do so, you'll use Windows Server migration tools. For instructions to migrate Windows Server roles and features, see [Install, use, and remove Windows Server migration tools](/previous-versions/windows/it-pro/windows-server-2012-R2-and-2012).
++
+### Method 2: Download and upgrade the VHD
+1. Do an in-place upgrade in a local Hyper-V VM
+ 1. [Download the VHD](./windows/download-vhd.md) of the VM.
+ 2. Attach the VHD to a local Hyper-V VM.
+ 3. Start the VM.
+ 4. Run the in-place upgrade.
+2. Upload the VHD to Azure. For more information, see [Upload a generalized VHD and use it to create new VMs in Azure](./windows/upload-generalized-managed.md).
+
+## Recover from failure
If the in-place upgrade process failed to complete successfully, you can return to the previous version of the VM, provided that snapshots of the operating system disk and data disk(s) were created. To revert the VM to its previous state using snapshots, complete the following steps:

1. Create a new Managed Disk from the OS disk snapshot and each data disk snapshot following the steps in [Create a disk from a snapshot](/virtual-machines/scripts/virtual-machines-powershell-sample-create-managed-disk-from-snapshot), making sure to create the disks in the same Availability Zone as the VM if the VM is in a zone.
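For illustration, here's a minimal sketch of that first step using the Azure CLI (the resource group and snapshot names are assumptions; the article's linked sample uses PowerShell):

```azurecli
# Look up the snapshot's resource ID (assumed names)
snapshotId=$(az snapshot show --resource-group myResourceGroup --name myOSDiskSnapshot --query id -o tsv)

# Create a new managed disk from the snapshot
az disk create --resource-group myResourceGroup --name myRestoredOSDisk --source $snapshotId
```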
If the in-place upgrade process failed to complete successfully you can return t
## Next steps
-For more information, see [Perform an in-place upgrade of Windows Server](/windows-server/get-started/perform-in-place-upgrade)
+For more information, see [Perform an in-place upgrade of Windows Server](/windows-server/get-started/perform-in-place-upgrade)
virtual-machines Convert Disk Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/convert-disk-storage.md
Previously updated : 02/13/2021 Last updated : 01/18/2023
There are four disk types of Azure managed disks: Azure ultra disks, premium SSD
This functionality is not supported for unmanaged disks. But you can easily [convert an unmanaged disk to a managed disk](convert-unmanaged-to-managed-disks.md) to be able to switch between disk types.
-
## Before you begin
-* Because conversion requires a restart of the virtual machine (VM), you should schedule the migration of your disk storage during a pre-existing maintenance window.
-* If your disk is unmanaged, first [convert it to a managed disk](convert-unmanaged-to-managed-disks.md) so you can switch between storage options.
+Because conversion requires a restart of the virtual machine (VM), schedule the migration of your disk during a pre-existing maintenance window.
+
+## Restrictions
+
+- You can only change disk type once per day.
+- You can only change the disk type of managed disks. If your disk is unmanaged, [convert it to a managed disk](convert-unmanaged-to-managed-disks.md) to switch between disk types.
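+
+As a minimal sketch of a conversion (Azure CLI shown for illustration; the resource names here are placeholders, not from this article), switching a managed disk's type is a single update once its VM is deallocated:
+
+```azurecli
+# Deallocate the VM that owns the disk first (placeholder names)
+az vm deallocate --resource-group myResourceGroup --name myVM
+
+# Switch the disk's SKU, for example to Premium SSD
+az disk update --resource-group myResourceGroup --name myDataDisk --sku Premium_LRS
+
+# Restart the VM after the conversion
+az vm start --resource-group myResourceGroup --name myVM
+```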
## Switch all managed disks of a VM from one account to another
virtual-machines Disks Enable Double Encryption At Rest Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/windows/disks-enable-double-encryption-at-rest-powershell.md
Title: Azure PowerShell - Enable double encryption at rest - managed disks
description: Enable double encryption at rest for your managed disk data using Azure PowerShell. Previously updated : 06/29/2021 Last updated : 01/20/2023
**Applies to:** :heavy_check_mark: Windows VMs
-Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double encryption at rest, as well as other managed disk encryption types, see the [Double encryption at rest](../disk-encryption.md#double-encryption-at-rest) section of our disk encryption article.
+Azure Disk Storage supports double encryption at rest for managed disks. For conceptual information on double encryption at rest, and other managed disk encryption types, see the [Double encryption at rest](../disk-encryption.md#double-encryption-at-rest) section of our disk encryption article.
## Prerequisites
Install the latest [Azure PowerShell version](/powershell/azure/install-az-ps),
1. Create an instance of Azure Key Vault and encryption key.
- When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures that the Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures that a deleted key cannot be permanently deleted until the retention period lapses. These settings protect you from losing data due to accidental deletion. These settings are mandatory when using a Key Vault for encrypting managed disks.
+ When creating the Key Vault instance, you must enable soft delete and purge protection. Soft delete ensures that the Key Vault holds a deleted key for a given retention period (90 day default). Purge protection ensures that a deleted key can't be permanently deleted until the retention period lapses. These settings protect you from losing data due to accidental deletion. These settings are mandatory when using a Key Vault for encrypting managed disks.
```powershell
$ResourceGroupName="yourResourceGroupName"
Install the latest [Azure PowerShell version](/powershell/azure/install-az-ps),
$key = Add-AzKeyVaultKey -VaultName $keyVaultName -Name $keyName -Destination $keyDestination
```
-1. Create a DiskEncryptionSet with encryptionType set as EncryptionAtRestWithPlatformAndCustomerKeys. Use API version **2020-05-01** in the Azure Resource Manager (ARM) template.
+1. Retrieve the URL of the key you created; you'll need it for subsequent commands. The `Id` output from `Get-AzKeyVaultKey` is the key URL.
+
+ ```powershell
+ Get-AzKeyVaultKey -VaultName $keyVaultName -KeyName $keyName
+ ```
+
+1. Get the resource ID of the Key Vault instance you created; you'll need it for subsequent commands.
+
+ ```powershell
+ Get-AzKeyVault -VaultName $keyVaultName
+ ```
+
+1. Create a DiskEncryptionSet with encryptionType set as EncryptionAtRestWithPlatformAndCustomerKeys. Replace `yourKeyURL` with the key URL and `yourKeyVaultURL` with the Key Vault resource ID that you retrieved earlier.
```powershell
- New-AzResourceGroupDeployment -ResourceGroupName $ResourceGroupName `
- -TemplateUri "https://raw.githubusercontent.com/Azure-Samples/managed-disks-powershell-getting-started/master/DoubleEncryption/CreateDiskEncryptionSetForDoubleEncryption.json" `
- -diskEncryptionSetName $diskEncryptionSetName `
- -keyVaultId $keyVault.ResourceId `
- -keyVaultKeyUrl $key.Key.Kid `
- -encryptionType "EncryptionAtRestWithPlatformAndCustomerKeys" `
- -region $LocationName
+ $config = New-AzDiskEncryptionSetConfig -Location $locationName -KeyUrl "yourKeyURL" -SourceVaultId 'yourKeyVaultURL' -IdentityType 'SystemAssigned'
+
+ $config | New-AzDiskEncryptionSet -ResourceGroupName $ResourceGroupName -Name $diskEncryptionSetName -EncryptionType EncryptionAtRestWithPlatformAndCustomerKeys
```

1. Grant the DiskEncryptionSet resource access to the key vault.
virtual-machines Redhat Rhui https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/redhat/redhat-rhui.md
If you're using a network configuration to further restrict access from RHEL PAY
``` # Azure Global
+RHUI 3
13.91.47.76 40.85.190.91 52.187.75.218 52.174.163.213 52.237.203.198
+RHUI 4
+westeurope - 52.136.197.163
+southcentralus - 20.225.226.182
+eastus - 52.142.4.99
+australiaeast - 20.248.180.252
+southeastasia - 20.24.186.80
+
# Azure US Government
13.72.186.193
13.72.14.155
virtual-machines Configure Ha Cluster Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-ha-cluster-azure-monitor-sap-solutions.md
Previously updated : 10/19/2022 Last updated : 01/05/2023 #Customer intent: As a developer, I want to create a High Availability Pacemaker cluster so I can use the resource with Azure Monitor for SAP solutions.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to create a High Availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions. You'll install the HA agent, then create the provider for Azure Monitor for SAP solutions.
+In this how-to guide, you'll learn to create a High Availability (HA) Pacemaker cluster provider for Azure Monitor for SAP solutions. You'll install the HA agent, then create the provider for Azure Monitor for SAP solutions.
This content applies to both Azure Monitor for SAP solutions and Azure Monitor for SAP solutions (classic) versions.

## Prerequisites

-- An Azure subscription.
+- An Azure subscription.
- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md). ## Install HA agent
For RHEL-based clusters, install **performance co-pilot (PCP)** and the **pcp-pm
For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.com/articles/6139852) in each node.
+### Install HA Cluster Exporter on RHEL
+1. Install the required packages on the system.
+
+ ```bash
+ yum install pcp pcp-pmda-hacluster
+ ```
+
+1. Enable and start the required PCP Collector Services.
+
+ ```bash
+ systemctl enable pmcd
+ ```
+
+ ```bash
+ systemctl start pmcd
+ ```
+
+1. Install and enable the HA Cluster PMDA. Replace `$PCP_PMDAS_DIR` with the path where `hacluster` is installed. Use the `find` command in Linux to find the path.
+
+ ```bash
+ cd $PCP_PMDAS_DIR/hacluster
+ ```
+
+ ```bash
+ . ./install
+ ```
+
+1. Enable and start the `pmproxy` service.
+
+ ```bash
+    systemctl start pmproxy
+ ```
+
+ ```bash
+ systemctl enable pmproxy
+ ```
+
+1. PCP now collects data on the system. You can export the data by using `pmproxy` at `http://<SERVER-NAME-OR-IP-ADDRESS>:44322/metrics?names=ha_cluster`. Replace `<SERVER-NAME-OR-IP-ADDRESS>` with your server name or IP address.
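+
+As a quick sanity check (a sketch that assumes you run it on the node itself, against the default pmproxy port), confirm the endpoint serves cluster metrics:
+
+```bash
+# List the first few HA cluster metrics exposed by pmproxy
+curl -s "http://localhost:44322/metrics?names=ha_cluster" | head
+```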
## Create provider for Azure Monitor for SAP solutions

1. Sign in to the [Azure portal](https://portal.azure.com).
-1. Go to the Azure Monitor for SAP solutions service.
+1. Go to the Azure Monitor for SAP solutions service.
1. Open your Azure Monitor for SAP solutions resource.
1. In the resource's menu, under **Settings**, select **Providers**.
1. Select **Add** to add a new provider.
- ![Diagram of Azure Monitor for SAP solutions resource in the Azure portal, showing button to add a new provider.](./media/azure-monitor-sap/azure-monitor-providers-ha-cluster-start.png)
+ ![Diagram of Azure Monitor for SAP solutions resource in the Azure portal, showing button to add a new provider.](./media/azure-monitor-sap/azure-monitor-providers-ha-cluster-start.png)
1. For **Type**, select **High-availability cluster (Pacemaker)**.
-1. Configure providers for each node of the cluster by entering the endpoint URL for **HA Cluster Exporter Endpoint**.
+1. Configure providers for each node of the cluster by entering the endpoint URL for **HA Cluster Exporter Endpoint**.
+
+    1. For SUSE-based clusters, enter `http://<IP-address>:9664/metrics`.
- 1. For SUSE-based clusters, enter `http://<'IP address'> :9664/metrics`.
+ ![Diagram of the setup for an Azure Monitor for SAP solutions resource, showing the fields for SUSE-based clusters.](./media/azure-monitor-sap/azure-monitor-providers-ha-cluster-suse.png)
- ![Diagram of the setup for an Azure Monitor for SAP solutions resource, showing the fields for SUSE-based clusters.](./media/azure-monitor-sap/azure-monitor-providers-ha-cluster-suse.png)
-
1. For RHEL-based clusters, enter `http://<IP-address>:44322/metrics?names=ha_cluster`.
- ![Diagram of the setup for an Azure Monitor for SAP solutions resource, showing the fields for RHEL-based clusters.](./media/azure-monitor-sap/azure-monitor-providers-ha-cluster-rhel.png)
+ ![Diagram of the setup for an Azure Monitor for SAP solutions resource, showing the fields for RHEL-based clusters.](./media/azure-monitor-sap/azure-monitor-providers-ha-cluster-rhel.png)
1. Enter the system identifiers, host names, and cluster names. For the system identifier, enter a unique SAP system identifier for each cluster. For the hostname, the value refers to an actual hostname in the VM. Use `hostname -s` for SUSE- and RHEL-based clusters.
For RHEL-based pacemaker clusters, also install [PMProxy](https://access.redhat.
1. Select **Create** to finish creating the resource.
+## Troubleshooting
+
+Use the following troubleshooting steps for common errors.
+
+### Unable to reach the Prometheus endpoint
+
+When the provider settings validation operation fails with the code 'PrometheusURLConnectionFailure':
+
+1. Restart the HA cluster exporter agent.
+
+ ```bash
+    systemctl start pmproxy
+ ```
+
+1. Reenable the HA cluster exporter agent.
+ ```bash
+ systemctl enable pmproxy
+ ```
+
+1. Verify that the Prometheus endpoint is reachable from the subnet that you provided while creating the Azure Monitor for SAP solutions resource.
+ ## Next steps > [!div class="nextstepaction"]
virtual-machines Configure Linux Os Azure Monitor Sap Solutions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/configure-linux-os-azure-monitor-sap-solutions.md
Previously updated : 10/19/2022 Last updated : 01/05/2023 #Customer intent: As a developer, I want to configure a Linux provider so that I can use Azure Monitor for SAP solutions for monitoring.
[!INCLUDE [Azure Monitor for SAP solutions public preview notice](./includes/preview-azure-monitor.md)]
-In this how-to guide, you'll learn to create a Linux OS provider for *Azure Monitor for SAP solutions* resources.
+In this how-to guide, you'll learn to create a Linux OS provider for *Azure Monitor for SAP solutions* resources.
This content applies to both versions of the service, *Azure Monitor for SAP solutions* and *Azure Monitor for SAP solutions (classic)*.

## Prerequisites

-- An Azure subscription.
+- An Azure subscription.
- An existing Azure Monitor for SAP solutions resource. To create an Azure Monitor for SAP solutions resource, see the [quickstart for the Azure portal](azure-monitor-sap-quickstart.md) or the [quickstart for PowerShell](azure-monitor-sap-quickstart-powershell.md).
-- Install [node exporter version 1.3.0](https://prometheus.io/download/#node_exporter) in each SAP host that you want to monitor, either BareMetal or Azure virtual machine (Azure VM). For more information, see [the node exporter GitHub repository](https://github.com/prometheus/node_exporter).
+- Install the [node exporter version 1.3.0](https://prometheus.io/download/#node_exporter) in each SAP host that you want to monitor, either BareMetal or Azure virtual machine (Azure VM). For more information, see [the node exporter GitHub repository](https://github.com/prometheus/node_exporter).
-## Create Linux provider
+To install the node exporter on Linux:
+
+1. Run `wget https://github.com/prometheus/node_exporter/releases/download/v*/node_exporter-*.*-amd64.tar.gz`. Replace `*` with the version number.
+
+1. Run `tar xvfz node_exporter-*.*-amd64.tar.gz`
+
+1. Run `cd node_exporter-*.*-amd64`
+
+1. Run `./node_exporter`
+
+1. The node exporter now starts collecting data. You can export the data at `http://IP:9100/metrics`.
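+The same steps as a single copy-pasteable sketch, assuming version 1.3.0 on x86-64 Linux (as in the prerequisites):
+
+```bash
+# Download, unpack, and start node exporter 1.3.0
+wget https://github.com/prometheus/node_exporter/releases/download/v1.3.0/node_exporter-1.3.0.linux-amd64.tar.gz
+tar xvfz node_exporter-1.3.0.linux-amd64.tar.gz
+cd node_exporter-1.3.0.linux-amd64
+./node_exporter &
+
+# Verify it's serving metrics locally
+curl -s http://localhost:9100/metrics | head
+```
+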
+
+## Create Linux OS provider
1. Sign in to the [Azure portal](https://portal.azure.com).
1. Go to the Azure Monitor for SAP solutions or Azure Monitor for SAP solutions (classic) service.
This content applies to both versions of the service, *Azure Monitor for SAP sol
1. Select **Add provider**.
1. Configure the following settings for the new provider:
    1. For **Type**, select **OS (Linux)**.
- 1. For **Name**, enter a name that will be the identifier for the BareMetal instance.
- 1. For **Node Exporter Endpoint**, enter `http://IP:9100/metrics`.
+ 1. For **Name**, enter a name that will be the identifier for the BareMetal instance.
+ 1. For **Node Exporter Endpoint**, enter `http://IP:9100/metrics`.
1. For the IP address, use the private IP address of the Linux host. Make sure the host and Azure Monitor for SAP solutions resource are in the same virtual network.
-1. Open firewall port 9100 on the Linux host.
- 1. If you're using `firewall-cmd`, run `_firewall-cmd_ _--permanent_ _--add-port=9100/tcp_ ` then `_firewall-cmd_ _--reload_`.
- 1. If you're using `ufw`, run `_ufw_ _allow_ _9100/tcp_` then `_ufw_ _reload_`.
-1. If the Linux host is an Azure virtual machine (VM), make sure that all applicable network security groups (NSGs) allow inbound traffic at port 9100 from **VirtualNetwork** as the source.
-1. Select **Add provider** to save your changes.
+1. Open firewall port 9100 on the Linux host.
+    1. If you're using `firewall-cmd`, run `firewall-cmd --permanent --add-port=9100/tcp` and then `firewall-cmd --reload`.
+    1. If you're using `ufw`, run `ufw allow 9100/tcp` and then `ufw reload`.
+1. If the Linux host is an Azure virtual machine (VM), make sure that all applicable network security groups (NSGs) allow inbound traffic at port 9100 from **VirtualNetwork** as the source.
+1. Select **Add provider** to save your changes.
1. Continue to add more providers as needed.
1. Select **Review + create** to review the settings.
1. Select **Create** to finish creating the resource.
+## Troubleshooting
+
+Use these steps to resolve common errors.
+
+### Unable to reach the Prometheus endpoint
+
+When the provider settings validation operation fails with the code 'PrometheusURLConnectionFailure':
+
+1. Open firewall port 9100 on the Linux host.
+    1. If you're using `firewall-cmd`, run `firewall-cmd --permanent --add-port=9100/tcp` and then `firewall-cmd --reload`.
+    1. If you're using `ufw`, run `ufw allow 9100/tcp` and then `ufw reload`.
+1. Try to restart the node exporter agent:
+ 1. Go to the folder where you installed the node exporter (the file name resembles `node_exporter-*.*-amd64`).
+ 1. Run `./node_exporter`.
+1. Verify that the Prometheus endpoint is reachable from the subnet that you provided while creating the Azure Monitor for SAP solutions resource.
+ ## Next steps > [!div class="nextstepaction"]
virtual-machines High Availability Guide Rhel With Hana Ascs Ers Dialog Instance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/high-availability-guide-rhel-with-hana-ascs-ers-dialog-instance.md
+
+ Title: Deploy SAP ASCS/SCS and SAP ERS with SAP HANA high availability VMs on RHEL | Microsoft Docs
+description: Configure SAP ASCS/SCS and SAP ERS with SAP HANA high availability VMs on RHEL.
+
+documentationcenter: saponazure
++
+tags: azure-resource-manager
++
+ vm-linux
+ Last updated : 08/16/2022+++
+# Deploy SAP ASCS/ERS with SAP HANA high availability VMs on Red Hat Enterprise Linux
+
+This article describes how to install and configure SAP HANA along with ASCS/SCS and ERS instances on the same high availability cluster, running on Red Hat Enterprise Linux (RHEL).
+
+## References
+
+* [Configuring SAP S/4HANA ASCS/ERS with Standalone Enqueue Server 2 (ENSA2) in Pacemaker](https://access.redhat.com/articles/3974941)
+* [Configuring SAP NetWeaver ASCS/ERS ENSA1 with Standalone Resources in RHEL 7.5+ and RHEL 8](https://access.redhat.com/articles/3569681)
+* SAP Note [1928533](https://launchpad.support.sap.com/#/notes/1928533), which has:
+ * List of Azure VM sizes that are supported for the deployment of SAP software
+ * Important capacity information for Azure VM sizes
+ * Supported SAP software, and operating system (OS) and database combinations
+ * Required SAP kernel version for Windows and Linux on Microsoft Azure
+* SAP Note [2015553](https://launchpad.support.sap.com/#/notes/2015553) lists prerequisites for SAP-supported SAP software deployments in Azure.
+* SAP Note [2002167](https://launchpad.support.sap.com/#/notes/2002167) has recommended OS settings for Red Hat Enterprise Linux 7.x
+* SAP Note [2772999](https://launchpad.support.sap.com/#/notes/2772999) has recommended OS settings for Red Hat Enterprise Linux 8.x
+* SAP Note [2009879](https://launchpad.support.sap.com/#/notes/2009879) has SAP HANA Guidelines for Red Hat Enterprise Linux
+* SAP Note [2178632](https://launchpad.support.sap.com/#/notes/2178632) has detailed information about all monitoring metrics reported for SAP in Azure.
+* SAP Note [2191498](https://launchpad.support.sap.com/#/notes/2191498) has the required SAP Host Agent version for Linux in Azure.
+* SAP Note [2243692](https://launchpad.support.sap.com/#/notes/2243692) has information about SAP licensing on Linux in Azure.
+* SAP Note [1999351](https://launchpad.support.sap.com/#/notes/1999351) has additional troubleshooting information for the Azure Enhanced Monitoring Extension for SAP.
+* [SAP Community Wiki](https://wiki.scn.sap.com/wiki/display/HOME/SAPonLinuxNotes) has all required SAP Notes for Linux.
+* [Azure Virtual Machines planning and implementation for SAP on Linux](planning-guide.md)
+* [Azure Virtual Machines deployment for SAP on Linux](deployment-guide.md)
+* [Azure Virtual Machines DBMS deployment for SAP on Linux](dbms_guide_general.md)
+* [SAP Netweaver in pacemaker cluster](https://access.redhat.com/articles/3150081)
+* General RHEL documentation
+ * [High Availability Add-On Overview](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_overview/index)
+ * [High Availability Add-On Administration](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_administration/index)
+ * [High Availability Add-On Reference](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/high_availability_add-on_reference/index)
+* Azure-specific RHEL documentation:
+ * [Support Policies for RHEL High Availability Clusters - Microsoft Azure Virtual Machines as Cluster Members](https://access.redhat.com/articles/3131341)
+ * [Installing and Configuring a Red Hat Enterprise Linux 7.4 (and later) High-Availability Cluster on Microsoft Azure](https://access.redhat.com/articles/3252491)
+
+## Overview
+
+This article describes the cost-optimization scenario where you deploy SAP HANA, SAP ASCS/SCS, and SAP ERS instances in the same high availability setup. To minimize the number of VMs for a single SAP system, you install SAP ASCS/SCS and SAP ERS on the same hosts where SAP HANA is running. With SAP HANA configured in a high availability cluster setup, you want SAP ASCS/SCS and SAP ERS to be managed by the cluster as well. The configuration is an addition to an already configured SAP HANA cluster setup. In this setup, SAP ASCS/SCS and SAP ERS are installed on a virtual hostname, and their instance directories are managed by the cluster.
+
+The presented architecture showcases [NFS on Azure Files](../../../storage/files/files-nfs-protocol.md) or [Azure NetApp Files](../../../azure-netapp-files/azure-netapp-files-introduction.md) as a highly available instance directory for this setup.
+
+The example deployment described in this article uses the following system information:
+
+| Instance name | Instance number | Virtual hostname | Virtual IP (Probe Port) |
+| -- | | - | -- |
+| SAP HANA DB | 03 | saphana | 10.66.0.13 (62503) |
+| ABAP SAP Central Services (ASCS) | 00 | sapascs | 10.66.0.20 (62000) |
+| Enqueue Replication Server (ERS) | 01 | sapers | 10.66.0.30 (62101) |
+| SAP HANA system identifier | HN1 | | |
+| SAP system identifier | NW1 | | |
+
+> [!NOTE]
+>
+> Install SAP Dialog instances (PAS and AAS) on separate VMs.
+
+![Architecture of SAP HANA, SAP ASCS/SCS and ERS installation within the same cluster](media/high-availability-guide-rhel/high-availability-rhel-hana-ascs-ers-dialog-instance.png)
+
+### Important consideration for the cost optimization solution
+
+* SAP Dialog Instances (PAS and AAS) (like **sapa01** and **sapa02**), should be installed on separate VMs. Install SAP ASCS and ERS with virtual hostnames. To learn more on how to assign virtual hostname to a VM, refer to the blog [Use SAP Virtual Host Names with Linux in Azure](https://techcommunity.microsoft.com/t5/running-sap-applications-on-the/use-sap-virtual-host-names-with-linux-in-azure/ba-p/3251593).
+* With HANA DB, ASCS/SCS and ERS deployment in the same cluster setup, the instance number of HANA DB, ASCS/SCS and ERS must be different.
+* Consider sizing your VM SKUs appropriately based on the sizing guidelines. You have to factor in the cluster behavior where multiple SAP instances (HANA DB, ASCS/SCS, and ERS) may run on a single VM when the other VM in the cluster is unavailable.
+* You can use different storage (for example, Azure NetApp Files or NFS on Azure Files) to install the SAP ASCS and ERS instances.
+ > [!NOTE]
+ >
+ > For SAP J2EE systems, it's not supported to place `/usr/sap/<SID>/J<nr>` on NFS on Azure Files.
+    > Database filesystems like /hana/data and /hana/log aren't supported on NFS on Azure Files.
+* To install additional application servers on separate VMs, you can use either NFS shares or a local managed disk for the instance directory filesystem. If you're installing additional application servers for an SAP J2EE system, `/usr/sap/<SID>/J<nr>` on NFS on Azure Files isn't supported.
+* Refer to the [NFS on Azure Files considerations](high-availability-guide-rhel-nfs-azure-files.md#important-considerations-for-nfs-on-azure-files-shares) and [Azure NetApp Files considerations](high-availability-guide-rhel-netapp-files.md#important-considerations), as the same considerations apply to this setup as well.
+
+## Prerequisites
+
+The configuration described in this article is an addition to your already configured SAP HANA cluster setup. In this configuration, SAP ASCS/SCS and ERS are installed on a virtual hostname, and their instance directories are managed by the cluster.
+
+Install HANA database, set up HSR and Pacemaker cluster by following the documentation [High availability of SAP HANA on Azure VMs on Red Hat Enterprise Linux](sap-hana-high-availability-rhel.md) or [High availability of SAP HANA Scale-up with Azure NetApp Files on Red Hat Enterprise Linux](sap-hana-high-availability-netapp-files-red-hat.md) depending on what storage option you're using.
+
+Once you've installed, configured, and set up the **HANA cluster**, follow the steps below to install the ASCS and ERS instances.
+
+## Configure Azure Load Balancer for ASCS and ERS
+
+1. Open the internal load balancer that you've created for SAP HANA cluster setup.
+2. Create the frontend IP addresses for the ASCS and ERS instances.
+ 1. IP address for ASCS is **10.66.0.20**
+ 1. In **Settings** > **Frontend IP configuration**, click on **Add**.
+ 2. Enter the name of the new frontend IP (for example, **ascs-frontend**).
+ 3. Select the **subnet**.
+ 4. Set the **assignment** to **Static** and enter the IP address (for example, **10.66.0.20**).
+ 5. Click Ok.
+ 2. IP address for ERS is **10.66.0.30**
+ 1. Repeat the steps under "2.a" to create a frontend IP address for ERS (for example **10.66.0.30** and **ers-frontend**)
+3. The backend pool remains the same, as we're deploying ASCS and ERS on the same backend pool (**hana-backend**).
+4. Create health probes for the ASCS and ERS instances.
+ 1. Port for ASCS is **62000**
+ 1. In **Settings** > **Health probes**, click on **Add**.
+ 2. Enter the name of the health probe (for example, **ascs-hp**).
+ 3. Select **TCP** as protocol, port **62000** and keep interval **5**.
+ 4. Click Ok.
+ 2. Port for ERS is **62101**
+ 1. Repeat the steps above under "4.a" to create health probe for ERS (for example **62101** and **ers-hp**)
+5. Create load balancing rules for the ASCS and ERS instances.
+ 1. Load balancing rule for ASCS
+ 1. In **Settings** > **Load balancing rules**, click on **Add**.
+ 2. Enter the name of load balancing rule (for example, **ascs-lb**).
+ 3. Select the frontend IP address for ASCS, backend pool, and health probe you created earlier (for example **ascs-frontend**, **hana-backend**, and **ascs-hp**)
+ 4. Select **HA ports**
+ 5. Make sure to **enable Floating IP**
+ 6. Leave the rest as default and Click OK
+ 2. Load balancing rule for ERS
+      1. Repeat the steps under "5.a" to create a load balancing rule for ERS (for example, **ers-lb**); an Azure CLI sketch of these settings follows this list.
+
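+For illustration, here's a hedged Azure CLI sketch of the ASCS probe and load-balancing rule; the load balancer name (**hana-lb**) and resource group are assumptions, while the frontend, backend pool, probe names, and port come from the steps above:
+
+```azurecli
+# Health probe on the ASCS probe port (assumed LB and resource group names)
+az network lb probe create --resource-group myResourceGroup --lb-name hana-lb --name ascs-hp --protocol Tcp --port 62000 --interval 5
+
+# HA-ports rule with floating IP enabled, tying frontend, backend pool, and probe together
+az network lb rule create --resource-group myResourceGroup --lb-name hana-lb --name ascs-lb --protocol All --frontend-port 0 --backend-port 0 --frontend-ip-name ascs-frontend --backend-pool-name hana-backend --probe-name ascs-hp --floating-ip true
+```
+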
+> [!IMPORTANT]
+>
+> Floating IP is not supported on a NIC secondary IP configuration in load-balancing scenarios. For details see [Azure Load balancer Limitations](../../../load-balancer/load-balancer-multivip-overview.md#limitations). If you need additional IP address for the VM, deploy a second NIC.
+
+> [!NOTE]
+>
+> When VMs without public IP addresses are placed in the backend pool of internal (no public IP address) Standard Azure load balancer, there will be no outbound internet connectivity, unless additional configuration is performed to allow routing to public end points. For details on how to achieve outbound connectivity see [Public endpoint connectivity for Virtual Machines using Azure Standard Load Balancer in SAP high-availability scenarios](high-availability-guide-standard-load-balancer-outbound-connections.md).
+
+> [!IMPORTANT]
+>
+> Do not enable TCP timestamps on Azure VMs placed behind Azure Load Balancer. Enabling TCP timestamps will cause the health probes to fail. Set parameter **net.ipv4.tcp_timestamps** to **0**. For details see [Load Balancer health probes](../../../load-balancer/load-balancer-custom-probe-overview.md).
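+
+One way to apply that setting on Linux (a sketch; the drop-in file name is an assumption):
+
+```bash
+# Persistently disable TCP timestamps (run as root)
+echo 'net.ipv4.tcp_timestamps = 0' > /etc/sysctl.d/98-disable-tcp-timestamps.conf
+sysctl -p /etc/sysctl.d/98-disable-tcp-timestamps.conf
+```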
+
+## SAP ASCS/SCS and ERS Setup
+
+Based on your storage, follow the steps described in the guides below to configure the `SAPInstance` resource for the SAP ASCS/SCS and SAP ERS instances in the cluster.
+
+* NFS on Azure Files - [Azure VMs high availability for SAP NW on RHEL with NFS on Azure Files](high-availability-guide-rhel-nfs-azure-files.md#prepare-for-sap-netweaver-installation)
+* Azure NetApp Files - [Azure VMs high availability for SAP NW on RHEL with Azure NetApp Files](high-availability-guide-rhel-netapp-files.md#prepare-for-sap-netweaver-installation)
+
+## Test the cluster setup
+
+Thoroughly test your pacemaker cluster.
+* [Execute the typical Netweaver failover tests](high-availability-guide-rhel.md#test-the-cluster-setup).
+* [Execute the typical HANA DB failover tests](sap-hana-high-availability-rhel.md#test-the-cluster-setup).
virtual-machines Integration Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/sap/integration-get-started.md
Also see the following SAP resources:
### SAP Fiori
-For more information about integration with SAP Fiori, see [Introduction to the Application Gateway WAF Triage Workbook](https://techcommunity.microsoft.com/t5/azure-network-security-blog/introducing-the-application-gateway-waf-triage-workbook/ba-p/2973341).
+For more information about integration with SAP Fiori, see the following resources:
+
+- [Monitor SAP Fiori performance with Azure Application Insights](https://github.com/microsoft/ApplicationInsights-SAP-Fiori-Plugin)
+- [Introduction to the Application Gateway WAF Triage Workbook](https://techcommunity.microsoft.com/t5/azure-network-security-blog/introducing-the-application-gateway-waf-triage-workbook/ba-p/2973341).
Also see the following SAP resources:

- [Azure CDN for SAPUI5 libraries](https://blogs.sap.com/2021/03/22/sap-fiori-using-azure-cdn-for-sapui5-libraries/)
For more information about using SAP with Azure Integration services, see the fo
- [New SAP events on Azure Event Grid with SAP Event Mesh](https://techcommunity.microsoft.com/t5/messaging-on-azure-blog/new-sap-events-on-azure-event-grid/ba-p/3663372)
- [Expose SAP Process Orchestration on Azure securely](expose-sap-process-orchestration-on-azure.md)
- [Connect to SAP from workflows in Azure Logic Apps](../../../logic-apps/logic-apps-using-sap-connector.md)
+- [Import SAP OData metadata as an API into Azure API Management](../../../api-management/sap-api.md)
+- [Apply SAP Principal Propagation to your Azure hosted APIs](https://github.com/Azure/api-management-policy-snippets/blob/master/examples/Request%20OAuth2%20access%20token%20from%20SAP%20using%20AAD%20JWT%20token.xml)
Also see the following SAP resources:

- [Event-driven architectures for SAP ERP with Azure](https://blogs.sap.com/2021/12/09/hey-sap-where-is-my-xbox-an-insight-into-capitalizing-on-event-driven-architectures/)
- [Achieve high availability for SAP Cloud Integration (part of SAP Integration Suite) on Azure](https://blogs.sap.com/2021/09/23/black-friday-will-take-your-cpi-instance-offline-unless/)
- [Automate SAP invoice processing using Azure Logic Apps and Cognitive Services](https://blogs.sap.com/2021/02/03/your-sap-on-azure-part-26-automate-invoice-processing-using-azure-logic-apps-and-cognitive-services/)
-- [Import SAP OData metadata as an API into Azure API Management](../../../api-management/sap-api.md)

### App development and DevOps
-For more information about integrating SAP with Microsoft services natively, see [the ABAP SDK for Azure](https://github.com/microsoft/ABAP-SDK-for-Azure).
+For more information about integrating SAP with Microsoft services natively, see the following resources:
+
+- [the ABAP SDK for Azure](https://github.com/microsoft/ABAP-SDK-for-Azure)
+- [Use SAP's Cloud SDK with Azure app development services](https://github.com/Azure-Samples/app-service-javascript-sap-cloud-sdk-quickstart)
+- [Use community-driven OData SDKs with Azure Functions](https://github.com/Azure/azure-sdk-for-sap-odata)
Also see the following SAP resources:

- [dotNET speaks OData too, how to implement Azure App Service with SAP Gateway](https://blogs.sap.com/2021/08/12/.net-speaks-odata-too-how-to-implement-azure-app-service-with-sap-odata-gateway/)
These resources include Customer Engagement Initiatives (CEI), public BETAs, and
You can use the following free developer accounts to explore integration scenarios for Azure and SAP.

-- [Free trial of Azure](https://azure.microsoft.com/free/), which you can use to configure Azure Active Directory (Azure AD) for development purposes.
+- [Free trial of Azure](https://azure.microsoft.com/free/)
+- [Free trial of Azure for students](https://azure.microsoft.com/free/students/)
- [Free account on SAP BTP trial](https://developers.sap.com/tutorials/hcp-create-trial-account.html). Select Singapore for Azure.
- [GitHub account](https://github.com/), which you can use to host your projects.
- [Microsoft 365 developer program account](https://developer.microsoft.com/microsoft-365/dev-program)
virtual-network Nat Gateway Resource https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/nat-gateway-resource.md
NAT gateway dynamically allocates SNAT ports across a subnet's private resources
*Figure: Virtual Network NAT on-demand outbound SNAT*
-Pre-allocation of SNAT ports to each virtual machine isn't required, which means SNAT ports aren't left unused by VMs not actively needing them.
+Pre-allocation of SNAT ports to each virtual machine is required for other SNAT methods. This pre-allocation of SNAT ports can cause SNAT port exhaustion on some virtual machines while others still have available SNAT ports for connecting outbound. With NAT gateway, pre-allocation of SNAT ports isn't required, which means SNAT ports aren't left unused by VMs not actively needing them.
:::image type="content" source="./media/nat-overview/exhaustion-threshold.png" alt-text="Diagram of all available SNAT ports used by virtual machines on subnets configured with NAT and an exhaustion threshold.":::
virtual-network Tutorial Hub Spoke Nat Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-hub-spoke-nat-firewall.md
+
+ Title: 'Tutorial: Integrate NAT gateway with Azure Firewall in a hub and spoke network'
+
+description: Learn how to integrate a NAT gateway and Azure Firewall in a hub and spoke network.
+++++ Last updated : 01/17/2023+++
+# Tutorial: Integrate NAT gateway with Azure Firewall in a hub and spoke network for outbound connectivity
+
+In this tutorial, you'll learn how to integrate a NAT gateway with Azure Firewall in a hub and spoke network.
+
+Azure Firewall provides [2,496 SNAT ports per public IP address](/azure/firewall/integrate-with-nat-gateway) configured per backend Virtual Machine Scale Set instance (minimum of two instances), and you can associate up to 250 public IP addresses with Azure Firewall. For example, a minimum two-instance firewall with one public IP address yields 2 x 2,496 = 4,992 SNAT ports. Depending on your architecture requirements and traffic patterns, you may require more SNAT ports than Azure Firewall can provide, or fewer public IPs while still requiring more SNAT ports. A better method for outbound connectivity is to use NAT gateway. NAT gateway provides 64,512 SNAT ports per public IP address and can be used with up to 16 public IP addresses.
+
+NAT gateway can be integrated with Azure Firewall by associating it directly with the Azure Firewall subnet, providing a more scalable method of outbound connectivity. For production deployments, a hub and spoke network is recommended, where the firewall is in its own virtual network. The workload servers are in peered virtual networks in the same region as the hub virtual network where the firewall resides. In this architectural setup, NAT gateway can provide outbound connectivity from the hub virtual network for all spoke virtual networks peered.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a hub virtual network and deploy an Azure Firewall and Azure Bastion during deployment
+> * Create a NAT gateway and associate it with the firewall subnet in the hub virtual network
+> * Create a spoke virtual network
+> * Create a virtual network peering
+> * Create a route table for the spoke virtual network
+> * Create a firewall policy for the hub virtual network
+> * Create a virtual machine to test the outbound connectivity through the NAT gateway
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create the hub virtual network
+
+The hub virtual network contains the firewall subnet that is associated with the Azure Firewall and NAT gateway. Use the following example to create the hub virtual network.
+
+1. Sign in to the [Azure portal](https://portal.azure.com)
+
+2. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+3. Select **+ Create**.
+
+4. In the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **TutorialNATHubSpokeFW-rg**. </br> Select **OK**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet-Hub**. |
+ | Region | Select **South Central US**. |
+
+5. Select **Next: IP Addresses**.
+
+6. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated.
+
+7. In **IPv4 address space** enter **10.1.0.0/16**.
+
+8. Select **+ Add subnet**.
+
+9. In **Add subnet** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **subnet-private**. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
+
+10. Select **Add**.
+
+11. Select **Next: Security**.
+
+12. In the **Security** tab in **BastionHost**, select **Enable**.
+
+13. Enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Bastion name | Enter **myBastion**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26**. |
+ | Public IP address | Select **Create new**. </br> In **Name** enter **myPublicIP-Bastion**. </br> Select **OK**. |
+
+14. In **Firewall** select **Enable**.
+
+15. Enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+    | Firewall name | Enter **myFirewall**. |
+ | Firewall subnet address space | Enter **10.1.2.0/26**. |
+ | Public IP address | Select **Create new**. </br> In **Name** enter **myPublicIP-Firewall**. </br> Select **OK**. |
+
+16. Select **Review + create**.
+
+17. Select **Create**.
+
+It will take a few minutes for the bastion host and firewall to deploy. When the virtual network is created as part of the deployment, you can proceed to the next steps.
+
+## Create the NAT gateway
+
+All outbound internet traffic traverses the NAT gateway. Use the following example to create a NAT gateway for the hub and spoke network and associate it with the **AzureFirewallSubnet**.
+
+1. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+2. Select **+ Create**.
+
+3. In the **Basics** tab of **Create network address translation (NAT) gateway** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpokeFW-rg**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **South Central US**. |
+ | Availability zone | Select a **Zone** or **No zone**. |
+ | TCP idle timeout (minutes) | Leave the default of **4**. |
+
+ For more information about availability zones, see [NAT gateway and availability zones](nat-availability-zones.md).
+
+4. Select **Next: Outbound IP**.
+
+5. In **Outbound IP** in **Public IP addresses**, select **Create a new public IP address**.
+
+6. Enter **myPublicIP-NAT** in **Name**.
+
+7. Select **OK**.
+
+8. Select **Next: Subnet**.
+
+9. In **Virtual Network** select **myVNet-Hub**.
+
+10. Select **AzureFirewallSubnet** in **Subnet name**.
+
+11. Select **Review + create**.
+
+12. Select **Create**.
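+
+If you prefer scripting, here's an equivalent hedged Azure CLI sketch of the NAT gateway setup (names taken from this tutorial):
+
+```azurecli
+# Public IP for the NAT gateway
+az network public-ip create --resource-group TutorialNATHubSpokeFW-rg --name myPublicIP-NAT --sku Standard --location southcentralus
+
+# Create the NAT gateway and attach the public IP
+az network nat gateway create --resource-group TutorialNATHubSpokeFW-rg --name myNATgateway --public-ip-addresses myPublicIP-NAT --location southcentralus
+
+# Associate the NAT gateway with the firewall subnet
+az network vnet subnet update --resource-group TutorialNATHubSpokeFW-rg --vnet-name myVNet-Hub --name AzureFirewallSubnet --nat-gateway myNATgateway
+```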
+
+## Create spoke virtual network
+
+The spoke virtual network contains the test virtual machine used to test the routing of the internet traffic to the NAT gateway. Use the following example to create the spoke network.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **+ Create**.
+
+3. In the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpokeFW-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet-Spoke**. |
+ | Region | Select **South Central US**. |
+
+4. Select **Next: IP Addresses**.
+
+5. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated.
+
+6. In **IPv4 address space** enter **10.2.0.0/16**.
+
+7. Select **+ Add subnet**.
+
+8. In **Add subnet** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **subnet-private**. |
+ | Subnet address range | Enter **10.2.0.0/24**. |
+
+9. Select **Add**.
+
+10. Select **Review + create**.
+
+11. Select **Create**.
+
+## Create peering between the hub and spoke
+
+A virtual network peering is used to connect the hub to the spoke and the spoke to the hub. Use the following example to create a two-way network peering between the hub and spoke.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **myVNet-Hub**.
+
+3. Select **Peerings** in **Settings**.
+
+4. Select **+ Add**.
+
+5. Enter or select the following information in **Add peering**:
+
+ | Setting | Value |
+ | - | -- |
+ | **This virtual network** | |
+ | Peering link name | Enter **myVNet-Hub-To-myVNet-Spoke**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **myVNet-Spoke-To-myVNet-Hub**. |
+ | Virtual network deployment model | Leave the default of **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **myVNet-Spoke**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None**. |
+
+6. Select **Add**.
+
+7. Select **Refresh** and verify **Peering status** is **Connected**.
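+
+A sketch of the same two-way peering with the Azure CLI, assuming both virtual networks are in the resource group used in this tutorial:
+
+```bash
+# Hub-to-spoke peering.
+az network vnet peering create \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --name myVNet-Hub-To-myVNet-Spoke \
+    --vnet-name myVNet-Hub \
+    --remote-vnet myVNet-Spoke \
+    --allow-vnet-access \
+    --allow-forwarded-traffic
+
+# Spoke-to-hub peering. Both directions are required for a Connected status.
+az network vnet peering create \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --name myVNet-Spoke-To-myVNet-Hub \
+    --vnet-name myVNet-Spoke \
+    --remote-vnet myVNet-Hub \
+    --allow-vnet-access \
+    --allow-forwarded-traffic
+```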
+
+## Create spoke network route table
+
+A route table will force all traffic leaving the spoke virtual network to the hub virtual network. The route table is configured with the private IP address of the Azure Firewall as the virtual appliance.
+
+### Obtain private IP address of firewall
+
+The private IP address of the firewall is needed for the route table created later in this article. Use the following example to obtain the firewall private IP address.
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+2. Select **myFirewall**.
+
+3. In the **Overview** of **myFirewall**, note the IP address in the field **Firewall private IP**. The IP address should be **10.1.2.4**.
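+
+The private IP can also be read with the Azure CLI. This is a sketch that assumes the `azure-firewall` CLI extension is installed; the JMESPath query reflects the firewall's `ipConfigurations` shape.
+
+```bash
+# Print the firewall's private IP address (expected: 10.1.2.4).
+az network firewall show \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --name myFirewall \
+    --query "ipConfigurations[0].privateIpAddress" \
+    --output tsv
+```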
+
+### Create route table
+
+Create a route table to force all inter-spoke and internet egress traffic through the firewall in the hub virtual network.
+
+1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+2. Select **+ Create**.
+
+3. In **Create Route table** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpokeFW-rg**. |
+ | **Instance details** | |
+ | Region | Select **South Central US**. |
+ | Name | Enter **myRouteTable-Spoke**. |
+ | Propagate gateway routes | Select **No**. |
+
+4. Select **Review + create**.
+
+5. Select **Create**.
+
+6. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+7. Select **myRouteTable-Spoke**.
+
+8. In **Settings** select **Routes**.
+
+9. Select **+ Add** in **Routes**.
+
+10. Enter or select the following information in **Add route**:
+
+ | Setting | Value |
+ | - | -- |
+ | Route name | Enter **Route-To-Hub**. |
+ | Address prefix destination | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter **10.1.2.4**. |
+
+11. Select **Add**.
+
+12. Select **Subnets** in **Settings**.
+
+13. Select **+ Associate**.
+
+14. Enter or select the following information in **Associate subnet**:
+
+ | Setting | Value |
+ | - | -- |
+ | Virtual network | Select **myVNet-Spoke (TutorialNATHubSpokeFW-rg)**. |
+ | Subnet | Select **subnet-private**. |
+
+15. Select **OK**.
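+
+A sketch of the same route table configuration with the Azure CLI, using the firewall private IP obtained earlier:
+
+```bash
+# Create the route table with gateway route propagation disabled.
+az network route-table create \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --name myRouteTable-Spoke \
+    --location southcentralus \
+    --disable-bgp-route-propagation true
+
+# Default route that sends all spoke traffic to the firewall.
+az network route-table route create \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --route-table-name myRouteTable-Spoke \
+    --name Route-To-Hub \
+    --address-prefix 0.0.0.0/0 \
+    --next-hop-type VirtualAppliance \
+    --next-hop-ip-address 10.1.2.4
+
+# Associate the route table with the spoke's private subnet.
+az network vnet subnet update \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --vnet-name myVNet-Spoke \
+    --name subnet-private \
+    --route-table myRouteTable-Spoke
+```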
+
+## Configure firewall
+
+Traffic from the spoke through the hub must be allowed by a firewall policy and a network rule. Use the following example to create the firewall policy and network rule.
+
+### Create firewall policy
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewalls** in the search results.
+
+2. Select **myFirewall**.
+
+3. In the **Overview** select **Migrate to firewall policy**.
+
+4. In **Migrate to firewall policy** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpokeFW-rg**. |
+ | **Policy details** | |
+ | Name | Enter **myFirewallPolicy**. |
+ | Region | Select **South Central US**. |
+
+5. Select **Review + create**.
+
+6. Select **Create**.
+
+### Configure network rule
+
+1. In the search box at the top of the portal, enter **Firewall**. Select **Firewall Policies** in the search results.
+
+2. Select **myFirewallPolicy**.
+
+3. In **Settings** select **Network rules**.
+
+4. Select **+ Add a rule collection**.
+
+5. In **Add a rule collection** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Name | Enter **SpokeToInternet**. |
+ | Rule collection type | Select **Network**. |
+ | Priority | Enter **100**. |
+ | Rule collection action | Select **Allow**. |
+ | Rule collection group | Select **DefaultNetworkRuleCollectionGroup**. |
+ | **Rules** | |
+ | Name | Enter **AllowWeb**. |
+ | Source type | Select **IP Address**. |
+ | Source | Enter **10.2.0.0/24**. |
+ | Protocol | Select **TCP**. |
+ | Destination Ports | Enter **80**,**443**. |
+ | Destination Type | Select **IP Address**. |
+ | Destination | Enter **\***. |
+
+6. Select **Add**.
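+
+A roughly equivalent rule collection can be added with the Azure CLI. Treat this as a sketch: it assumes the `azure-firewall` CLI extension, and parameter names can vary between extension versions.
+
+```bash
+# Allow TCP 80/443 from the spoke subnet to any destination.
+az network firewall policy rule-collection-group collection add-filter-collection \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --policy-name myFirewallPolicy \
+    --rule-collection-group-name DefaultNetworkRuleCollectionGroup \
+    --name SpokeToInternet \
+    --collection-priority 100 \
+    --action Allow \
+    --rule-name AllowWeb \
+    --rule-type NetworkRule \
+    --source-addresses 10.2.0.0/24 \
+    --ip-protocols TCP \
+    --destination-addresses '*' \
+    --destination-ports 80 443
+```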
+
+## Create test virtual machine
+
+A Windows Server 2022 virtual machine is used to test the outbound internet traffic through the NAT gateway. Use the following example to create a Windows Server 2022 virtual machine.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **+ Create** then **Azure virtual machine**.
+
+3. In **Create a virtual machine** enter or select the following information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpokeFW-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM-Spoke**. |
+ | Region | Select **South Central US**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. |
+ | VM architecture | Leave the default of **x64**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select **Next: Disks** then **Next: Networking**.
+
+5. In the Networking tab, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet-Spoke**. |
+ | Subnet | Select **subnet-private (10.2.0.0/24)**. |
+ | Public IP | Select **None**. |
+
+6. Leave the rest of the options at the defaults and select **Review + create**.
+
+7. Select **Create**.
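+
+A rough Azure CLI equivalent for the test virtual machine follows; `<username>` and `<password>` are placeholders you supply, and `--public-ip-address ""` keeps the VM private.
+
+```bash
+# Create the test VM with no public IP and no open inbound ports.
+az vm create \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --name myVM-Spoke \
+    --image Win2022Datacenter \
+    --vnet-name myVNet-Spoke \
+    --subnet subnet-private \
+    --public-ip-address "" \
+    --nsg-rule NONE \
+    --admin-username <username> \
+    --admin-password <password>
+```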
+
+## Test NAT gateway
+
+You'll connect to the Windows Server 2022 virtual machine you created in the previous steps to verify that the outbound internet traffic is leaving the NAT gateway.
+
+### Obtain NAT gateway public IP address
+
+Obtain the NAT gateway public IP address for verification of the steps later in the article.
+
+1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
+
+2. Select **myPublicIP-NAT**.
+
+3. Make note of the value in **IP address**. The example used in this article is **20.225.88.213**.
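+
+You can also read the address with a quick Azure CLI query (a sketch; your address will differ from the example):
+
+```bash
+# Print the NAT gateway's outbound public IP.
+az network public-ip show \
+    --resource-group TutorialNATHubSpokeFW-rg \
+    --name myPublicIP-NAT \
+    --query ipAddress \
+    --output tsv
+```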
+
+### Test NAT gateway from spoke
+
+Use Microsoft Edge on the Windows Server 2022 virtual machine to connect to https://whatsmyip.com to verify the functionality of the NAT gateway.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM-Spoke**.
+
+3. Select **Connect** then **Bastion**.
+
+4. Enter the username and password you entered when the virtual machine was created.
+
+5. Select **Connect**.
+
+6. Open **Microsoft Edge** when the desktop finishes loading.
+
+7. In the address bar, enter **https://whatsmyip.com**.
+
+8. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously.
+
+ :::image type="content" source="./media/tutorial-hub-spoke-nat-firewall/outbound-ip-address.png" alt-text="Screenshot of outbound IP address.":::
+
+## Clean up resources
+
+If you're not going to continue to use these resources, delete them with the following steps:
+
+1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
+
+2. Select **TutorialNATHubSpokeFW-rg**.
+
+3. In the **Overview** of **TutorialNATHubSpokeFW-rg**, select **Delete resource group**.
+
+4. In **TYPE THE RESOURCE GROUP NAME:**, enter **TutorialNATHubSpokeFW-rg**.
+
+5. Select **Delete**.
+
+## Next steps
+
+Advance to the next article to learn how to integrate a NAT gateway with an Azure Load Balancer:
+> [!div class="nextstepaction"]
+> [Integrate NAT gateway with an internal load balancer](tutorial-nat-gateway-load-balancer-internal-portal.md)
+
virtual-network Tutorial Hub Spoke Route Nat https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/nat-gateway/tutorial-hub-spoke-route-nat.md
+
+ Title: 'Tutorial: Use a NAT gateway with a hub and spoke network'
+
+description: Learn how to integrate a NAT gateway into a hub and spoke network with a network virtual appliance.
+++++ Last updated : 01/17/2023+++
+# Tutorial: Use a NAT gateway with a hub and spoke network
+
+A hub and spoke network is one of the building blocks of a highly available, multiple-location network infrastructure. The most common deployment of a hub and spoke network routes all inter-spoke and outbound internet traffic through the central hub, so that all traffic traversing the network can be inspected by a Network Virtual Appliance (NVA) for security scanning and packet inspection.
+
+For outbound traffic to the internet, the NVA would typically have one network interface with an assigned public IP address. After inspecting the outbound traffic, the NVA forwards the traffic out the public interface to the internet. Azure Virtual Network NAT eliminates the need for the public IP address assigned to the NVA. Associating a NAT gateway with the public subnet of the NVA changes the routing for the public interface so that all outbound internet traffic routes through the NAT gateway. Eliminating the public IP address increases security and allows outbound source network address translation (SNAT) to scale with multiple public IP addresses and/or public IP prefixes.
+
+> [!IMPORTANT]
+> The NVA used in this article is for demonstration purposes only and is simulated with an Ubuntu virtual machine. The solution doesn't include a load balancer for high availability of the NVA deployment. Replace the Ubuntu virtual machine in this article with an NVA of your choice. Consult the vendor of the chosen NVA for routing and configuration instructions. A load balancer and availability zones are recommended for a highly available NVA infrastructure.
+
+In this tutorial, you learn how to:
+
+> [!div class="checklist"]
+> * Create a NAT gateway.
+> * Create a hub and spoke virtual network.
+> * Create a simulated Network Virtual Appliance (NVA).
+> * Force all traffic from the spokes through the hub.
+> * Force all internet traffic in the hub and the spokes out the NAT gateway.
+> * Test the NAT gateway and inter-spoke routing.
+
+## Prerequisites
+
+- An Azure account with an active subscription. [Create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+## Create a NAT gateway
+
+All outbound internet traffic will traverse the NAT gateway to the internet. Use the following example to create a NAT gateway for the hub and spoke network.
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+
+2. In the search box at the top of the portal, enter **NAT gateway**. Select **NAT gateways** in the search results.
+
+3. Select **+ Create**.
+
+4. In the **Basics** tab of **Create network address translation (NAT) gateway** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **Create new**. </br> Enter **TutorialNATHubSpoke-rg** in **Name**. </br> Select **OK**. |
+ | **Instance details** | |
+ | NAT gateway name | Enter **myNATgateway**. |
+ | Region | Select **South Central US**. |
+ | Availability zone | Select a **Zone** or **No zone**. |
+ | TCP idle timeout (minutes) | Leave the default of **4**. |
+
+5. Select **Next: Outbound IP**.
+
+6. In **Outbound IP** in **Public IP addresses**, select **Create a new public IP address**.
+
+7. Enter **myPublicIP-NAT** in **Name**.
+
+8. Select **OK**.
+
+9. Select **Review + create**.
+
+10. Select **Create**.
+
+## Create hub virtual network
+
+The hub virtual network is the central network of the solution. The hub network contains the NVA and public and private subnets. The NAT gateway is assigned to the public subnet during the creation of the virtual network. An Azure Bastion host is configured as part of the following example. The bastion host is used to securely connect to the NVA virtual machine and the test virtual machines deployed in the spokes later in the article.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **+ Create**.
+
+3. In the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet-Hub**. |
+ | Region | Select **South Central US**. |
+
+4. Select **Next: IP Addresses**.
+
+5. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated.
+
+6. In **IPv4 address space** enter **10.1.0.0/16**.
+
+7. Select **+ Add subnet**.
+
+8. In **Add subnet** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **subnet-private**. |
+ | Subnet address range | Enter **10.1.0.0/24**. |
+
+9. Select **Add**.
+
+10. Select **+ Add subnet**.
+
+11. In **Add subnet** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **subnet-public**. |
+ | Subnet address range | Enter **10.1.253.0/28**. |
+ | **NAT GATEWAY** | |
+ | NAT gateway | Select **myNATgateway**. |
+
+12. Select **Add**.
+
+13. Select **Next: Security**.
+
+14. In the **Security** tab in **BastionHost**, select **Enable**.
+
+15. Enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Bastion name | Enter **myBastion**. |
+ | AzureBastionSubnet address space | Enter **10.1.1.0/26**. |
+ | Public IP address | Select **Create new**. </br> In **Name** enter **myPublicIP-Bastion**. </br> Select **OK**. |
+
+16. Select **Review + create**.
+
+17. Select **Create**.
+
+It takes a few minutes for the bastion host to deploy. The virtual network is created as part of the same deployment; you can proceed to the next steps as soon as the virtual network is available.
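+
+For reference, the bastion host portion of this deployment can be scripted as well. This is an Azure CLI sketch; it assumes **myVNet-Hub** already exists, and `az network bastion create` may require the bastion CLI extension.
+
+```bash
+# Azure Bastion requires a dedicated subnet named AzureBastionSubnet.
+az network vnet subnet create \
+    --resource-group TutorialNATHubSpoke-rg \
+    --vnet-name myVNet-Hub \
+    --name AzureBastionSubnet \
+    --address-prefixes 10.1.1.0/26
+
+az network public-ip create \
+    --resource-group TutorialNATHubSpoke-rg \
+    --name myPublicIP-Bastion \
+    --sku Standard \
+    --allocation-method Static
+
+az network bastion create \
+    --resource-group TutorialNATHubSpoke-rg \
+    --name myBastion \
+    --vnet-name myVNet-Hub \
+    --public-ip-address myPublicIP-Bastion \
+    --location southcentralus
+```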
+
+## Create simulated NVA virtual machine
+
+The simulated NVA will act as a virtual appliance to route all traffic between the spokes and hub and traffic outbound to the internet. An Ubuntu virtual machine is used for the simulated NVA. Use the following example to create the simulated NVA and configure the network interfaces.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **+ Create** then **Azure virtual machine**.
+
+3. In **Create a virtual machine** enter or select the following information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM-NVA**. |
+ | Region | Select **(US) South Central US**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Ubuntu Server 20.04 LTS - x64 Gen2**. |
+ | VM architecture | Leave the default of **x64**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select **Next: Disks** then **Next: Networking**.
+
+5. In the Networking tab, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet-Hub**. |
+ | Subnet | Select **subnet-public**. |
+ | Public IP | Select **None**. |
+
+6. Leave the rest of the options at the defaults and select **Review + create**.
+
+7. Select **Create**.
+
+### Configure virtual machine network interfaces
+
+The IP configuration of the primary network interface of the virtual machine is set to dynamic by default. Use the following example to change the primary network interface IP configuration to static and add a secondary network interface for the private interface of the NVA.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM-NVA**.
+
+3. In the **Overview** select **Stop** if the virtual machine is running.
+
+4. Select **Networking** in **Settings**.
+
+5. In **Networking**, select the network interface name next to **Network Interface:**. The interface name is the virtual machine name followed by random numbers and letters. In this example, the interface name is **myvm-nva271**.
+
+6. In the network interface properties, select **IP configurations** in **Settings**.
+
+7. In **IP forwarding** select **Enabled**.
+
+8. Select **Save**.
+
+9. When the save action completes, select **ipconfig1**.
+
+10. In **Assignment** in **ipconfig1** select **Static**.
+
+11. In **IP address** enter **10.1.253.10**.
+
+12. Select **Save**.
+
+13. When the save action completes, return to the networking configuration for **myVM-NVA**.
+
+14. In **Networking** of **myVM-NVA** select **Attach network interface**.
+
+15. Select **Create and attach network interface**.
+
+16. In **Create network interface** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Network interface** | |
+ | Name | Enter **myVM-NVA-private-nic**. |
+ | Subnet | Select **subnet-private (10.1.0.0/24)**. |
+ | NIC network security group | Select **Advanced**. |
+ | Configure network security group | Select **myVM-NVA-nsg**. |
+ | Private IP address assignment | Select **Static**. |
+ | Private IP address | Enter **10.1.0.10**. |
+
+17. Select **Create**.
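+
+The same network interface changes can be made with the Azure CLI. This is a sketch: `<primary-nic-name>` is the auto-generated name noted above (for example, **myvm-nva271**), and the VM must be deallocated before the second NIC is attached.
+
+```bash
+az vm deallocate --resource-group TutorialNATHubSpoke-rg --name myVM-NVA
+
+# Enable IP forwarding on the primary (public-side) NIC.
+az network nic update \
+    --resource-group TutorialNATHubSpoke-rg \
+    --name <primary-nic-name> \
+    --ip-forwarding true
+
+# Pin the primary NIC's IP configuration to a static address.
+az network nic ip-config update \
+    --resource-group TutorialNATHubSpoke-rg \
+    --nic-name <primary-nic-name> \
+    --name ipconfig1 \
+    --private-ip-address 10.1.253.10
+
+# Create the private-side NIC with a static address, then attach it.
+az network nic create \
+    --resource-group TutorialNATHubSpoke-rg \
+    --name myVM-NVA-private-nic \
+    --vnet-name myVNet-Hub \
+    --subnet subnet-private \
+    --private-ip-address 10.1.0.10
+
+az vm nic add \
+    --resource-group TutorialNATHubSpoke-rg \
+    --vm-name myVM-NVA \
+    --nics myVM-NVA-private-nic
+
+az vm start --resource-group TutorialNATHubSpoke-rg --name myVM-NVA
+```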
+
+### Configure virtual machine software
+
+The routing for the simulated NVA uses IP tables and internal NAT in the Ubuntu virtual machine. Connect to the NVA virtual machine with Azure Bastion to configure IP tables and the routing configuration.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM-NVA**.
+
+3. Start **myVM-NVA**.
+
+4. When the virtual machine finishes booting, continue with the next steps.
+
+5. Select **Connect** then **Bastion**.
+
+6. Enter the username and password you entered when the virtual machine was created.
+
+7. Select **Connect**.
+
+8. Enter the following command at the prompt of the virtual machine to open the configuration file that enables IP forwarding:
+
+ ```bash
+ sudo vim /etc/sysctl.conf
+ ```
+
+9. In the Vim editor, remove the **`#`** from the line **`net.ipv4.ip_forward=1`**:
+
+ Press the **Insert** key.
+
+ ```bash
+ # Uncomment the next line to enable packet forwarding for IPv4
+ net.ipv4.ip_forward=1
+ ```
+
+ Press the **Esc** key.
+
+ Enter **`:wq`** and press **Enter**.
+
+10. Enter the following commands to enable internal NAT in the virtual machine:
+
+ ```bash
+ sudo iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
+ sudo apt-get update
+ sudo apt install iptables-persistent
+ ```
+
+ Select **Yes** twice.
+
+ ```bash
+ sudo su
+ iptables-save > /etc/iptables/rules.v4
+ exit
+ ```
+
+11. Use Vim to edit **/etc/rc.local** so that the saved rules are restored at boot:
+
+ ```bash
+ sudo vim /etc/rc.local
+ ```
+
+ Press the **Insert** key.
+
+ Add the following line to the configuration file:
+
+ ```bash
+ /sbin/iptables-restore < /etc/iptables/rules.v4
+ ```
+
+ Press the **Esc** key.
+
+ Enter **`:wq`** and press **Enter**.
+
+12. Reboot the virtual machine:
+
+ ```bash
+ sudo reboot
+ ```
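+
+If you prefer not to edit files interactively in Vim, the forwarding change can be made non-interactively. This is an equivalent sketch; `iptables-persistent` (installed above) also restores the saved rules at boot through its own service.
+
+```bash
+# Uncomment the ip_forward line in place and apply it without rebooting.
+sudo sed -i 's/^#net.ipv4.ip_forward=1/net.ipv4.ip_forward=1/' /etc/sysctl.conf
+sudo sysctl -p
+```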
+
+## Create hub network route table
+
+Route tables are used to override Azure's default routing. Create a route table to force all traffic within the hub private subnet through the simulated NVA.
+
+1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+2. Select **+ Create**.
+
+3. In **Create Route table** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Region | Select **South Central US**. |
+ | Name | Enter **myRouteTable-NAT-Hub**. |
+ | Propagate gateway routes | Leave the default of **Yes**. |
+
+4. Select **Review + create**.
+
+5. Select **Create**.
+
+6. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+7. Select **myRouteTable-NAT-Hub**.
+
+8. In **Settings** select **Routes**.
+
+9. Select **+ Add** in **Routes**.
+
+10. Enter or select the following information in **Add route**:
+
+ | Setting | Value |
+ | - | -- |
+ | Route name | Enter **default-via-NAT-Hub**. |
+ | Address prefix destination | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter **10.1.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._** |
+
+11. Select **Add**.
+
+12. Select **Subnets** in **Settings**.
+
+13. Select **+ Associate**.
+
+14. Enter or select the following information in **Associate subnet**:
+
+ | Setting | Value |
+ | - | -- |
+ | Virtual network | Select **myVNet-Hub (TutorialNATHubSpoke-rg)**. |
+ | Subnet | Select **subnet-private**. |
+
+15. Select **OK**.
+
+## Create spoke one virtual network
+
+Create another virtual network in a different region for the first spoke of the hub and spoke network.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **+ Create**.
+
+3. In the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet-Spoke-1**. |
+ | Region | Select **East US 2**. |
+
+4. Select **Next: IP Addresses**.
+
+5. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated.
+
+6. In **IPv4 address space** enter **10.2.0.0/16**.
+
+7. Select **+ Add subnet**.
+
+8. In **Add subnet** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **subnet-private**. |
+ | Subnet address range | Enter **10.2.0.0/24**. |
+
+9. Select **Add**.
+
+10. Select **Review + create**.
+
+11. Select **Create**.
+
+## Create peering between hub and spoke one
+
+A virtual network peering is used to connect the hub to spoke one and spoke one to the hub. Use the following example to create a two-way network peering between the hub and spoke one.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **myVNet-Hub**.
+
+3. Select **Peerings** in **Settings**.
+
+4. Select **+ Add**.
+
+5. Enter or select the following information in **Add peering**:
+
+ | Setting | Value |
+ | - | -- |
+ | **This virtual network** | |
+ | Peering link name | Enter **myVNet-Hub-To-myVNet-Spoke-1**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **myVNet-Spoke-1-To-myVNet-Hub**. |
+ | Virtual network deployment model | Leave the default of **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **myVNet-Spoke-1**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None**. |
+
+6. Select **Add**.
+
+7. Select **Refresh** and verify **Peering status** is **Connected**.
+
+## Create spoke one network route table
+
+Create a route table to force all inter-spoke and internet egress traffic through the simulated NVA in the hub virtual network.
+
+1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+2. Select **+ Create**.
+
+3. In **Create Route table** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Region | Select **East US 2**. |
+ | Name | Enter **myRouteTable-NAT-Spoke-1**. |
+ | Propagate gateway routes | Leave the default of **Yes**. |
+
+4. Select **Review + create**.
+
+5. Select **Create**.
+
+6. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+7. Select **myRouteTable-NAT-Spoke-1**.
+
+8. In **Settings** select **Routes**.
+
+9. Select **+ Add** in **Routes**.
+
+10. Enter or select the following information in **Add route**:
+
+ | Setting | Value |
+ | - | -- |
+ | Route name | Enter **default-via-NAT-Spoke-1**. |
+ | Address prefix destination | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter **10.1.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._** |
+
+11. Select **Add**.
+
+12. Select **Subnets** in **Settings**.
+
+13. Select **+ Associate**.
+
+14. Enter or select the following information in **Associate subnet**:
+
+ | Setting | Value |
+ | - | -- |
+ | Virtual network | Select **myVNet-Spoke-1 (TutorialNATHubSpoke-rg)**. |
+ | Subnet | Select **subnet-private**. |
+
+15. Select **OK**.
+
+## Create spoke one test virtual machine
+
+A Windows Server 2022 virtual machine is used to test the outbound internet traffic through the NAT gateway and inter-spoke traffic in the hub and spoke network. Use the following example to create a Windows Server 2022 virtual machine.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **+ Create** then **Azure virtual machine**.
+
+3. In **Create a virtual machine** enter or select the following information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM-Spoke-1**. |
+ | Region | Select **(US) East US 2**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. |
+ | VM architecture | Leave the default of **x64**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select **Next: Disks** then **Next: Networking**.
+
+5. In the Networking tab, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet-Spoke-1**. |
+ | Subnet | Select **subnet-private (10.2.0.0/24)**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **HTTP (80)**. </br> Select **RDP (3389)**. |
+
+6. Leave the rest of the options at the defaults and select **Review + create**.
+
+7. Select **Create**.
+
+## Create spoke two virtual network
+
+Create the second virtual network for the second spoke of the hub and spoke network.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **+ Create**.
+
+3. In the **Basics** tab of **Create virtual network**, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Name | Enter **myVNet-Spoke-2**. |
+ | Region | Select **West US 2**. |
+
+4. Select **Next: IP Addresses**.
+
+5. In the **IP Addresses** tab in **IPv4 address space**, select the trash can to delete the address space that is auto populated.
+
+6. In **IPv4 address space** enter **10.3.0.0/16**.
+
+7. Select **+ Add subnet**.
+
+8. In **Add subnet** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | Subnet name | Enter **subnet-private**. |
+ | Subnet address range | Enter **10.3.0.0/24**. |
+
+9. Select **Add**.
+
+10. Select **Review + create**.
+
+11. Select **Create**.
+
+## Create peering between hub and spoke two
+
+Create a two-way virtual network peering between the hub and spoke two.
+
+1. In the search box at the top of the portal, enter **Virtual network**. Select **Virtual networks** in the search results.
+
+2. Select **myVNet-Hub**.
+
+3. Select **Peerings** in **Settings**.
+
+4. Select **+ Add**.
+
+5. Enter or select the following information in **Add peering**:
+
+ | Setting | Value |
+ | - | -- |
+ | **This virtual network** | |
+ | Peering link name | Enter **myVNet-Hub-To-myVNet-Spoke-2**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None**. |
+ | **Remote virtual network** | |
+ | Peering link name | Enter **myVNet-Spoke-2-To-myVNet-Hub**. |
+ | Virtual network deployment model | Leave the default of **Resource manager**. |
+ | Subscription | Select your subscription. |
+ | Virtual network | Select **myVNet-Spoke-2**. |
+ | Traffic to remote virtual network | Leave the default of **Allow (default)**. |
+ | Traffic forwarded from remote virtual network | Leave the default of **Allow (default)**. |
+ | Virtual network gateway or Route Server | Leave the default of **None**. |
+
+6. Select **Add**.
+
+7. Select **Refresh** and verify **Peering status** is **Connected**.
+
+## Create spoke two network route table
+
+Create a route table to force all outbound internet and inter-spoke traffic through the simulated NVA in the hub virtual network.
+
+1. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+2. Select **+ Create**.
+
+3. In **Create Route table** enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Region | Select **West US 2**. |
+ | Name | Enter **myRouteTable-NAT-Spoke-2**. |
+ | Propagate gateway routes | Leave the default of **Yes**. |
+
+4. Select **Review + create**.
+
+5. Select **Create**.
+
+6. In the search box at the top of the portal, enter **Route table**. Select **Route tables** in the search results.
+
+7. Select **myRouteTable-NAT-Spoke-2**.
+
+8. In **Settings** select **Routes**.
+
+9. Select **+ Add** in **Routes**.
+
+10. Enter or select the following information in **Add route**:
+
+ | Setting | Value |
+ | - | -- |
+ | Route name | Enter **default-via-NAT-Spoke-2**. |
+ | Address prefix destination | Select **IP Addresses**. |
+ | Destination IP addresses/CIDR ranges | Enter **0.0.0.0/0**. |
+ | Next hop type | Select **Virtual appliance**. |
+ | Next hop address | Enter **10.1.0.10**. </br> **_This is the IP address you added to the private interface of the NVA in the previous steps._** |
+
+11. Select **Add**.
+
+12. Select **Subnets** in **Settings**.
+
+13. Select **+ Associate**.
+
+14. Enter or select the following information in **Associate subnet**:
+
+ | Setting | Value |
+ | - | -- |
+ | Virtual network | Select **myVNet-Spoke-2 (TutorialNATHubSpoke-rg)**. |
+ | Subnet | Select **subnet-private**. |
+
+15. Select **OK**.
+
+## Create spoke two test virtual machine
+
+Create a Windows Server 2022 virtual machine for the test virtual machine in spoke two.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **+ Create** then **Azure virtual machine**.
+
+3. In **Create a virtual machine** enter or select the following information in the **Basics** tab:
+
+ | Setting | Value |
+ | - | -- |
+ | **Project details** | |
+ | Subscription | Select your subscription. |
+ | Resource group | Select **TutorialNATHubSpoke-rg**. |
+ | **Instance details** | |
+ | Virtual machine name | Enter **myVM-Spoke-2**. |
+ | Region | Select **(US) West US 2**. |
+ | Availability options | Select **No infrastructure redundancy required**. |
+ | Security type | Select **Standard**. |
+ | Image | Select **Windows Server 2022 Datacenter - x64 Gen2**. |
+ | VM architecture | Leave the default of **x64**. |
+ | Size | Select a size. |
+ | **Administrator account** | |
+ | Authentication type | Select **Password**. |
+ | Username | Enter a username. |
+ | Password | Enter a password. |
+ | Confirm password | Reenter password. |
+ | **Inbound port rules** | |
+ | Public inbound ports | Select **None**. |
+
+4. Select **Next: Disks** then **Next: Networking**.
+
+5. In the Networking tab, enter or select the following information:
+
+ | Setting | Value |
+ | - | -- |
+ | **Network interface** | |
+ | Virtual network | Select **myVNet-Spoke-2**. |
+ | Subnet | Select **subnet-private (10.3.0.0/24)**. |
+ | Public IP | Select **None**. |
+ | NIC network security group | Select **Basic**. |
+ | Public inbound ports | Select **Allow selected ports**. |
+ | Select inbound ports | Select **HTTP (80)**. </br> Select **RDP (3389)**. |
+
+6. Leave the rest of the options at the defaults and select **Review + create**.
+
+7. Select **Create**.
+
+## Test NAT gateway
+
+You'll connect to the Windows Server 2022 virtual machines you created in the previous steps to verify that the outbound internet traffic is leaving the NAT gateway.
+
+### Obtain NAT gateway public IP address
+
+Obtain the NAT gateway public IP address for verification of the steps later in the article.
+
+1. In the search box at the top of the portal, enter **Public IP**. Select **Public IP addresses** in the search results.
+
+2. Select **myPublicIP-NAT**.
+
+3. Make note of the value in **IP address**. The example used in this article is **52.153.224.79**.
+
+### Test NAT gateway from spoke one
+
+Use Microsoft Edge on the Windows Server 2022 virtual machine to connect to https://whatsmyip.com to verify the functionality of the NAT gateway.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM-Spoke-1**.
+
+3. Select **Connect** then **Bastion**.
+
+4. Enter the username and password you entered when the virtual machine was created.
+
+5. Select **Connect**.
+
+6. Open **Microsoft Edge** when the desktop finishes loading.
+
+7. In the address bar, enter **https://whatsmyip.com**.
+
+8. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously.
+
+ :::image type="content" source="./media/tutorial-hub-spoke-route-nat/outbound-ip-address.png" alt-text="Screenshot of outbound IP address.":::
+
+9. Open **Windows PowerShell**.
+
+10. Use the following example to install IIS. IIS will be used later to test inter-spoke routing.
+
+ ```powershell
+ Install-WindowsFeature Web-Server
+ ```
+
+11. Leave the bastion connection open to **myVM-Spoke-1**.
+
+### Test NAT gateway from spoke two
+
+Use Microsoft Edge on the Windows Server 2022 virtual machine to connect to https://whatsmyip.com to verify the functionality of the NAT gateway.
+
+1. In the search box at the top of the portal, enter **Virtual machine**. Select **Virtual machines** in the search results.
+
+2. Select **myVM-Spoke-2**.
+
+3. Select **Connect** then **Bastion**.
+
+4. Enter the username and password you entered when the virtual machine was created.
+
+5. Select **Connect**.
+
+6. Open **Microsoft Edge** when the desktop finishes loading.
+
+7. In the address bar, enter **https://whatsmyip.com**.
+
+8. Verify the outbound IP address displayed is the same as the IP of the NAT gateway you obtained previously.
+
+ :::image type="content" source="./media/tutorial-hub-spoke-route-nat/outbound-ip-address.png" alt-text="Screenshot of outbound IP address.":::
+
+9. Open **Windows PowerShell**.
+
+10. Use the following example to install IIS. IIS will be used later to test inter-spoke routing.
+
+ ```powershell
+ Install-WindowsFeature Web-Server
+ ```
+
+11. Leave the bastion connection open to **myVM-Spoke-2**.
+
+## Test routing between the spokes
+
+Traffic from spoke one to spoke two and spoke two to spoke one will route through the simulated NVA in the hub virtual network. Use the following examples to verify the routing between spokes of the hub and spoke network.
+
+### Test routing from spoke one to spoke two
+
+Use Microsoft Edge to connect to the web server on **myVM-Spoke-2** you installed in the previous steps.
+
+1. Return to the open bastion connection to **myVM-Spoke-1**.
+
+2. Open **Microsoft Edge** if it's not open.
+
+3. In the address bar, enter **10.3.0.4**.
+
+4. Verify the default IIS page is displayed from **myVM-Spoke-2**.
+
+ :::image type="content" source="./media/tutorial-hub-spoke-route-nat/iis-myvm-spoke-1.png" alt-text="Screenshot of default IIS page on myVM-Spoke-1.":::
+
+5. Close the bastion connection to **myVM-Spoke-1**.
+
+### Test routing from spoke two to spoke one
+
+Use Microsoft Edge to connect to the web server on **myVM-Spoke-1** you installed in the previous steps.
+
+1. Return to the open bastion connection to **myVM-Spoke-2**.
+
+2. Open **Microsoft Edge** if it's not open.
+
+3. In the address bar, enter **10.2.0.4**.
+
+4. Verify the default IIS page is displayed from **myVM-Spoke-1**.
+
+ :::image type="content" source="./media/tutorial-hub-spoke-route-nat/iis-myvm-spoke-2.png" alt-text="Screenshot of default IIS page on myVM-Spoke-2.":::
+
+5. Close the bastion connection to **myVM-Spoke-2**.
+
+## Clean up resources
+
+If you're not going to continue to use these resources, delete them with the following steps:
+
+1. In the search box at the top of the portal, enter **Resource group**. Select **Resource groups** in the search results.
+
+2. Select **TutorialNATHubSpoke-rg**.
+
+3. In the **Overview** of **TutorialNATHubSpoke-rg**, select **Delete resource group**.
+
+4. In **TYPE THE RESOURCE GROUP NAME:**, enter **TutorialNATHubSpoke-rg**.
+
+5. Select **Delete**.
+
+## Next steps
+
+Advance to the next article to learn how to use an Azure Gateway Load Balancer for highly available network virtual appliances:
+> [!div class="nextstepaction"]
+> [Gateway Load Balancer](/azure/load-balancer/gateway-overview)
virtual-network Service Tags Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-network/service-tags-overview.md
By default, service tags reflect the ranges for the entire cloud. Some service t
| **AzureDevOps** | Azure DevOps. | Inbound | Yes | Yes |
| **AzureDigitalTwins** | Azure Digital Twins.<br/><br/>**Note**: This tag or the IP addresses covered by this tag can be used to restrict access to endpoints configured for event routes. | Inbound | No | Yes |
| **AzureEventGrid** | Azure Event Grid. | Both | No | No |
-| **AzureFrontDoor.Frontend** <br/> **AzureFrontDoor.Backend** <br/> **AzureFrontDoor.FirstParty** | Azure Front Door. | Both | No | No |
+| **AzureFrontDoor.Frontend** <br/> **AzureFrontDoor.Backend** <br/> **AzureFrontDoor.FirstParty** | Azure Front Door. | Both | Yes | Yes |
| **AzureHealthcareAPIs** | The IP addresses covered by this tag can be used to restrict access to Azure Health Data Services. | Both | No | Yes |
| **AzureInformationProtection** | Azure Information Protection.<br/><br/>**Note**: This tag has a dependency on the **AzureActiveDirectory**, **AzureFrontDoor.Frontend** and **AzureFrontDoor.FirstParty** tags. | Outbound | No | No |
| **AzureIoTHub** | Azure IoT Hub. | Outbound | Yes | No |
By default, service tags reflect the ranges for the entire cloud. Some service t
| **DataFactory** | Azure Data Factory | Both | No | No |
| **DataFactoryManagement** | Management traffic for Azure Data Factory. | Outbound | No | No |
| **Dynamics365ForMarketingEmail** | The address ranges for the marketing email service of Dynamics 365. | Outbound | Yes | No |
+| **Dynamics365BusinessCentral** | This tag or the IP addresses covered by this tag can be used to restrict access from/to the Dynamics 365 Business Central Services. | Both | No | Yes |
| **EOPExternalPublishedIPs** | This tag represents the IP addresses used for Security & Compliance Center PowerShell. Refer to the [Connect to Security & Compliance Center PowerShell using the EXO V2 module for more details](/powershell/exchange/connect-to-scc-powershell). | Both | No | Yes |
| **EventHub** | Azure Event Hubs. | Outbound | Yes | Yes |
| **GatewayManager** | Management traffic for deployments dedicated to Azure VPN Gateway and Application Gateway. | Inbound | No | No |
virtual-wan User Groups About https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-about.md
Azure Active Directory|AADGroupID|Azure Active Directory Group Object ID |0cf484
Gateways using Azure Active Directory authentication can use **Azure Active Directory Group Object IDs** to determine which user group a user belongs to. If a user is part of multiple Azure Active Directory groups, they're considered to be part of the Virtual WAN user group that has the lowest numerical priority.
+However, if you plan to have external users (users who aren't part of the Azure Active Directory domain configured on the VPN gateway) connect to the Virtual WAN point-to-site VPN gateway, make sure the external user's user type is "Member" and **not** "Guest", and that the user's "Name" is set to the user's email address. If the user type and name of the connecting user aren't set as described, or you can't set an external user to be a "Member" of your Azure Active Directory domain, that user is assigned to the default group and receives an IP from the default IP address pool.
+
+You can also identify whether a user is external by looking at the user's "User Principal Name": external users have **#EXT** in their "User Principal Name."
+ :::image type="content" source="./media/user-groups-about/groups.png" alt-text="Screenshot of an Azure Active Directory group." lightbox="./media/user-groups-about/groups.png":::

#### Azure Certificate (OpenVPN and IKEv2)
virtual-wan User Groups Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-wan/user-groups-create.md
Before beginning, make sure you've configured a virtual WAN that uses one or mor
1. Make sure all point-to-site VPN connection configurations are associated with the defaultRouteTable and propagate to the same set of route tables. This is configured automatically if you're using the portal, but if you're using REST, PowerShell, or CLI, make sure all propagations and associations are set appropriately.
1. If you're using the Azure VPN client, make sure the Azure VPN client installed on user devices is the latest version.
1. If you're using Azure Active Directory authentication, make sure the tenant URL input in the server configuration (`https://login.microsoftonline.com/<tenant ID>`) does **not** end in a `\`. If the URL ends with `\`, the gateway can't properly process Azure Active Directory user groups, and all users are assigned to the default group. To remediate, modify the server configuration to remove the trailing `\`, and then modify the address pools configured on the gateway to apply the changes. This is a known issue that will be fixed in a later release.
-
+1. If you're using Azure Active Directory authentication and you plan to invite external users (users who aren't part of the Azure Active Directory domain configured on the VPN gateway) to connect to the Virtual WAN point-to-site VPN gateway, make sure the external user's user type is "Member" and not "Guest", and that the user's "Name" is set to the user's email address. If the user type and name of the connecting user aren't set as described, or you can't set an external user to be a "Member" of your Azure Active Directory domain, that user is assigned to the default group and receives an IP from the default IP address pool.
## Next steps

* For more information about user groups, see [About user groups and IP address pools for P2S User VPNs](user-groups-about.md).
vmware-cloudsimple Access Cloudsimple Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/access-cloudsimple-portal.md
- Title: Access Azure VMware Solution by CloudSimple - Portal
-description: Describes how to access VMware Solution by CloudSimple portal from Azure portal
- Previously updated : 06/04/2019
-# Access the VMware Solution by CloudSimple portal from the Azure portal
-
-Single sign-on is supported for access to the CloudSimple portal. After you sign in to the Azure portal, you can access the CloudSimple portal without signing in again. The first time you access the CloudSimple portal you're prompted to authorize the [CloudSimple Service Authorization](#consent-to-cloudsimple-service-authorization-application) application. Authorization is a one-time action.
-
-## Before you begin
-
-Users with built-in **Owner** and **Contributor** roles can access the CloudSimple portal. The roles must be configured on the subscription, on the resource group where the CloudSimple service is deployed, or on the CloudSimple service object. For more information on checking your role, see the [View role assignments](../role-based-access-control/check-access.md) article.
-
-If you are using custom roles, the role should have any of the following operations under ```Actions```. For more information on custom roles, see [Azure custom roles](../role-based-access-control/custom-roles.md). If any of the operations is a part of ```NotActions```, the user cannot access the CloudSimple portal.
-
-```
-Microsoft.VMwareCloudSimple/*
-Microsoft.VMwareCloudSimple/*/write
-Microsoft.VMwareCloudSimple/dedicatedCloudServices/*
-Microsoft.VMwareCloudSimple/dedicatedCloudServices/*/write
-```
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Access the CloudSimple portal
-
-1. Select **All services**.
-
-2. Search for **CloudSimple Services**.
-
-3. Select the CloudSimple service on which you want to create your Private Cloud.
-
-4. On the **Overview** page, click **Go to the CloudSimple portal**. If you're accessing the CloudSimple portal from the Azure portal for the first time, you'll be prompted to authorize the [CloudSimple Service Authorization](#consent-to-cloudsimple-service-authorization-application) application.
-
- ![Launch CloudSimple portal](media/launch-cloudsimple-portal.png)
-
-> [!NOTE]
-> If you select a Private Cloud operation (such as creating or expanding a Private Cloud) directly from the Azure portal, the CloudSimple portal opens to the indicated page.
-
-In the CloudSimple portal, select **Home** on the side menu to display summary information about your Private Clouds. The resources and capacity of your Private Clouds are shown, along with alerts and tasks that require attention. For common tasks, click the named icons at the top of the page.
-
-![Home Page](media/cloudsimple-portal-home.png)
-
-## Consent to CloudSimple Service Authorization application
-
-Launching the CloudSimple portal from the Azure portal for the first time requires your consent for the CloudSimple Service Authorization application. Select **Accept** to grant requested permissions and access the CloudSimple portal.
-
-![Consent to CloudSimple Service Authorization - administrators](media/cloudsimple-azure-consent.png)
-
-If you have global administrator privilege, you can consent for your organization. Select **Consent on behalf of your organization**.
-
-![Consent to CloudSimple Service Authorization - global admin](media/cloudsimple-azure-consent-global-admin.png)
-
-If your permissions don't permit access to the CloudSimple portal, contact the global administrator of your tenant to grant required permissions. A global administrator can consent on behalf of your organization.
-
-![Consent to CloudSimple Service Authorization - requires administrators](media/cloudsimple-azure-consent-requires-administrator.png)
-
-## Next steps
-
-* Learn how to [Create a private cloud](./create-private-cloud.md)
-* Learn how to [Configure a private cloud environment](quickstart-create-private-cloud.md)
vmware-cloudsimple Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/account.md
- Title: Account management - Azure VMware Solution by CloudSimple portal
-description: Describes how to manage accounts on the Azure VMware Solution by CloudSimple portal
- Previously updated : 08/14/2019
-# Manage accounts on the Azure VMware Solution by CloudSimple portal
-
-When you create your CloudSimple service, it creates an account on CloudSimple. The account is associated with your Azure subscription where the service is located. All users with owner and contributor roles in the subscription have access to the CloudSimple portal. The Azure subscription ID and tenant ID associated with the CloudSimple service are found on the Accounts page.
-
-To manage accounts in the CloudSimple portal, [access the portal](access-cloudsimple-portal.md) and select **Account** on the side menu.
-
-Select **Summary** to view information about your company's CloudSimple configuration. The current capacity of your cloud configuration is shown, including number of Private Clouds, total storage, vSphere cluster configuration, number of nodes, and number of compute cores. A link is included to purchase additional nodes if the current configuration doesn't meet all of your needs.
-
-## Email alerts
-
-You can add email addresses of any people you would like to notify about changes to the Private Cloud configuration.
-
-1. In the **Additional email alerts** area, click **Add new**.
-2. Enter the email address.
-3. Press Return.
-
-To remove an entry, click **X**.
-
-## CloudSimple operator access
-
-The operator access setting allows CloudSimple to help you with troubleshooting by permitting a support engineer to sign in to your CloudSimple portal. The setting is enabled by default. All actions performed by the support engineer when logged in to your customer account are recorded and available for your review on the **Activity** > **Audit** page.
-
-Click the **CloudSimple operator access enabled** toggle to turn access on or off.
vmware-cloudsimple Azure Ad https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-ad.md
- Title: Azure VMware Solution by CloudSimple - Use Azure AD as identity source on Private Cloud
-description: Describes how to add Azure AD as an identity provider on your CloudSimple Private Cloud to authenticate users accessing CloudSimple from Azure
- Previously updated : 08/15/2019
-# Use Azure AD as an identity provider for vCenter on CloudSimple Private Cloud
-
-You can set up your CloudSimple Private Cloud vCenter to authenticate with Azure Active Directory (Azure AD) for your VMware administrators to access vCenter. After the single sign-on identity source is set up, the **cloudowner** user can add users from the identity source to vCenter.
-
-You can set up your Active Directory domain and domain controllers in any of the following ways:
-
-* Active Directory domain and domain controllers running on-premises
-* Active Directory domain and domain controllers running on Azure as virtual machines in your Azure subscription
-* New Active Directory domain and domain controllers running in your CloudSimple Private Cloud
-* Azure Active Directory service
-
-This guide explains the tasks required to set up Azure AD as an identity source. For information on using on-premises Active Directory or Active Directory running in Azure, refer to [Set up vCenter identity sources to use Active Directory](set-vcenter-identity.md) for detailed instructions in setting up the identity source.
-
-## About Azure AD
-
-Azure AD is the Microsoft multi-tenant, cloud based directory and identity management service. Azure AD provides a scalable, consistent, and reliable authentication mechanism for users to authenticate and access different services on Azure. It also provides secure LDAP services for any third-party services to use Azure AD as an authentication/identity source. Azure AD combines core directory services, advanced identity governance, and application access management, which can be used for giving access to your Private Cloud for users who administer the Private Cloud.
-
-To use Azure AD as an identity source with vCenter, you must set up Azure AD and Azure AD domain services. Follow these instructions:
-
-1. [How to set up Azure AD and Azure AD domain services](#set-up-azure-ad-and-azure-ad-domain-services)
-2. [How to set up an identity source on your Private Cloud vCenter](#set-up-an-identity-source-on-your-private-cloud-vcenter)
-
-## Set up Azure AD and Azure AD domain services
-
-Before you get started, you will need access to your Azure subscription with Global Administrator privileges. The following steps give general guidelines. Details are contained in the Azure documentation.
-
-### Azure AD
-
-> [!NOTE]
-> If you already have Azure AD, you can skip this section.
-
-1. Set up Azure AD on your subscription as described in [Azure AD documentation](../active-directory/fundamentals/active-directory-whatis.md).
-2. Enable Azure Active Directory Premium on your subscription as described in [Sign up for Azure Active Directory Premium](../active-directory/fundamentals/active-directory-get-started-premium.md).
-3. Set up a custom domain name and verify the custom domain name as described in [Add a custom domain name to Azure Active Directory](../active-directory/fundamentals/add-custom-domain.md).
- 1. Set up a DNS record on your domain registrar with the information provided on Azure.
- 2. Set the custom domain name to be the primary domain.
-
-You can optionally configure other Azure AD features. These are not required for enabling vCenter authentication with Azure AD.
-
-### Azure AD domain services
-
-> [!NOTE]
-> This is an important step for enabling Azure AD as an identity source for vCenter. To avoid any issues, ensure that all steps are performed correctly.
-
-1. Enable Azure AD domain services as described in [Enable Azure Active Directory domain services using the Azure portal](../active-directory-domain-services/tutorial-create-instance.md).
-2. Set up the network that will be used by Azure AD domain services as described in [Enable Azure Active Directory Domain Services using the Azure portal](../active-directory-domain-services/tutorial-create-instance.md).
-3. Configure Administrator Group for managing Azure AD Domain Services as described in [Enable Azure Active Directory Domain Services using the Azure portal](../active-directory-domain-services/tutorial-create-instance.md).
-4. Update DNS settings for your Azure AD Domain Services as described in [Enable Azure Active Directory Domain Services](../active-directory-domain-services/tutorial-create-instance.md). If you want to connect to the managed domain over the Internet, set up a DNS record that maps the domain name to the public IP address of Azure AD Domain Services.
-5. Enable password hash synchronization for users. This step enables synchronization of password hashes required for NT LAN Manager (NTLM) and Kerberos authentication to Azure AD Domain Services. After you've set up password hash synchronization, users can sign in to the managed domain with their corporate credentials. See [Enable password hash synchronization to Azure Active Directory Domain Services](../active-directory-domain-services/tutorial-create-instance.md).
-   1. If cloud-only users are present, they must change their password using the <a href="https://myapps.microsoft.com/" target="_blank">Azure AD access panel</a> so that their password hashes are stored in the format required by NTLM and Kerberos. Follow the instructions in [Enable password hash synchronization to your managed domain for cloud-only user accounts](../active-directory-domain-services/tutorial-create-instance.md#enable-user-accounts-for-azure-ad-ds). This step must be done for each existing user and for any new user created in your Azure AD directory using the Azure portal or Azure AD PowerShell cmdlets. Users who require access to Azure AD Domain Services must open the Azure AD access panel and change their password from their profile.
-
- > [!NOTE]
- > If your organization has cloud-only user accounts, all users who need to use Azure Active Directory Domain Services must change their passwords. A cloud-only user account is an account that was created in your Azure AD directory using either the Azure portal or Azure AD PowerShell cmdlets. Such user accounts aren't synchronized from an on-premises directory.
-
-   2. If you are synchronizing passwords from your on-premises Active Directory, follow the steps in the [Active Directory documentation](../active-directory-domain-services/tutorial-configure-password-hash-sync.md).
-
-6. Configure secure LDAP on your Azure Active Directory Domain Services as described in [Configure secure LDAP (LDAPS) for an Azure AD Domain Services managed domain](../active-directory-domain-services/tutorial-configure-ldaps.md).
- 1. Upload a certificate for use by secure LDAP as described in the Azure topic [obtain a certificate for secure LDAP](../active-directory-domain-services/tutorial-configure-ldaps.md#create-a-certificate-for-secure-ldap). CloudSimple recommends using a signed certificate issued by a certificate authority to ensure that vCenter can trust the certificate.
-   2. Enable secure LDAP as described in [Enable secure LDAP (LDAPS) for an Azure AD Domain Services managed domain](../active-directory-domain-services/tutorial-configure-ldaps.md).
- 3. Save the public part of the certificate (without the private key) in .cer format for use with vCenter while configuring the identity source.
-   4. If Internet access to the Azure AD domain services is required, enable the **Allow secure LDAP access over the internet** option.
- 5. Add the inbound security rule for the Azure AD Domain services NSG for TCP port 636.
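-
-For the NSG rule in the last step, a minimal Azure CLI sketch looks like the following; the resource group and NSG names are placeholders for the values in your deployment:
-
-```
-# Allow inbound LDAPS (TCP port 636) on the NSG that protects the AAD DS subnet.
-# Replace the resource group and NSG names with your own; optionally restrict
-# --source-address-prefixes to the networks that should reach secure LDAP.
-az network nsg rule create \
-  --resource-group aadds-rg \
-  --nsg-name aadds-nsg \
-  --name AllowLDAPS \
-  --priority 300 \
-  --direction Inbound \
-  --access Allow \
-  --protocol Tcp \
-  --destination-port-ranges 636
-```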
-
-## Set up an identity source on your Private Cloud vCenter
-
-1. [Escalate privileges](escalate-private-cloud-privileges.md) for your Private Cloud vCenter.
-2. Collect the configuration parameters required for setting up of identity source.
-
- | **Option** | **Description** |
-   |--|--|
- | **Name** | Name of the identity source. |
- | **Base DN for users** | Base distinguished name for users. For Azure AD, use: `OU=AADDC Users,DC=<domain>,DC=<domain suffix>` Example: `OU=AADDC Users,DC=cloudsimplecustomer,DC=com`.|
- | **Domain name** | FQDN of the domain, for example, example.com. Do not provide an IP address in this text box. |
-   | **Domain alias** | *(optional)* The domain NetBIOS name. Add the NetBIOS name of the Active Directory domain as an alias of the identity source if you are using SSPI authentication. |
- | **Base DN for groups** | The base distinguished name for groups. For Azure AD, use: `OU=AADDC Users,DC=<domain>,DC=<domain suffix>` Example: `OU=AADDC Users,DC=cloudsimplecustomer,DC=com`|
- | **Primary Server URL** | Primary domain controller LDAP server for the domain.<br><br>Use the format `ldaps://hostname:port`. The port is typically 636 for LDAPS connections. <br><br>A certificate that establishes trust for the LDAPS endpoint of the Active Directory server is required when you use `ldaps://` in the primary or secondary LDAP URL. |
- | **Secondary server URL** | Address of a secondary domain controller LDAP server that is used for failover. |
- | **Choose certificate** | If you want to use LDAPS with your Active Directory LDAP Server or OpenLDAP Server identity source, a Choose certificate button appears after you type `ldaps://` in the URL text box. A secondary URL is not required. |
- | **Username** | ID of a user in the domain who has a minimum of read-only access to Base DN for users and groups. |
- | **Password** | Password of the user who is specified by Username. |
-
-3. Sign in to your Private Cloud vCenter after the privileges are escalated.
-4. Follow the instructions in [Add an identity source on vCenter](set-vcenter-identity.md#add-an-identity-source-on-vcenter) using the values from the previous step to set up Azure Active Directory as an identity source.
-5. Add users/groups from Azure AD to vCenter groups as described in the VMware topic [Add Members to a vCenter Single Sign-On Group](https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.security.doc/GUID-CDEA6F32-7581-4615-8572-E0B44C11D80D.html).
-
-> [!CAUTION]
-> New users must be added only to *Cloud-Owner-Group*, *Cloud-Global-Cluster-Admin-Group*, *Cloud-Global-Storage-Admin-Group*, *Cloud-Global-Network-Admin-Group*, or *Cloud-Global-VM-Admin-Group*. Users added to the *Administrators* group will be removed automatically. Only service accounts must be added to the *Administrators* group.
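-
-Before you add the identity source, you can optionally validate the secure LDAP endpoint and the Base DN values from any machine that can reach the Azure AD Domain Services instance. A minimal sketch using `ldapsearch` follows; the hostname, bind account, and DN components are illustrative placeholders:
-
-```
-# Query the AAD DS secure LDAP endpoint for user objects; all values are placeholders.
-# -W prompts for the bind account's password.
-ldapsearch -H ldaps://ldaps.cloudsimplecustomer.com:636 \
-  -D "svc-ldap@cloudsimplecustomer.com" -W \
-  -b "OU=AADDC Users,DC=cloudsimplecustomer,DC=com" \
-  "(objectClass=user)" cn
-```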
-
-## Next steps
-
-* [Learn about Private Cloud permission model](learn-private-cloud-permissions.md)
vmware-cloudsimple Azure Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-application-gateway.md
- Title: Use Azure Application Gateway with VMware virtual machines
-description: Describes how to use the Azure application gateway to manage incoming web traffic for web servers running in VMware virtual machines within the CloudSimple Private Cloud environment
-- Previously updated : 08/16/2019 ------
-# Use Azure Application Gateway with VMware virtual machines in the CloudSimple Private Cloud environment
-
-You can use the Azure Application Gateway to manage incoming web traffic for your web servers running in VMware virtual machines within your CloudSimple Private Cloud environment.
-
-By leveraging Azure Application Gateway in a public-private hybrid deployment, you can manage web traffic to your applications, provide a secure front end, and offload TLS processing for services running in your VMware environment. Azure Application Gateway routes incoming web traffic to backend pool instances residing in VMware environments according to configured rules and health probes.
-
-This Azure Application Gateway solution requires you to:
-
-* Have an Azure subscription.
-* Create and configure an Azure virtual network, and a subnet within the virtual network.
-* Create and configure NSG rules, and peer your virtual network to your CloudSimple Private Cloud using ExpressRoute.
-* Create and configure your Private Cloud.
-* Create and configure your Azure Application Gateway.
-
-## Azure Application Gateway deployment scenario
-
-In this scenario, the Azure Application Gateway runs in your Azure virtual network. The virtual network is connected to your Private Cloud over an ExpressRoute circuit. All the subnets in the Private Cloud are IP reachable from the virtual network subnets.
-
-![Azure load balancer in Azure virtual network](media/load-balancer-use-case.png)
-
-## How to deploy the solution
-
-The deployment process consists of the following tasks:
-
-1. [Verify that prerequisites are met](#1-verify-prerequisites)
-2. [Connect your Azure virtual connection to the Private Cloud](#2-connect-your-azure-virtual-network-to-your-private-cloud)
-3. [Deploy an Azure application gateway](#3-deploy-an-azure-application-gateway)
-4. [Create and Configure Web Server VM pool in your Private Cloud](#4-create-and-configure-a-web-server-vm-pool-in-your-private-cloud)
-
-## 1. Verify prerequisites
-
-Verify that these prerequisites are met:
-
-* An Azure resource group and a virtual network are already created.
-* A dedicated subnet (for Application Gateway) within your Azure virtual network is already created.
-* A CloudSimple Private Cloud is already created.
-* There is no IP conflict between IP subnets in the virtual network and subnets in the Private Cloud.
-
-## 2. Connect your Azure virtual network to your Private Cloud
-
-To connect your Azure virtual network to your Private Cloud, follow this process.
-
-1. [In the CloudSimple portal, copy the ExpressRoute peering information](virtual-network-connection.md).
-
-2. [Configure a virtual network gateway for your Azure virtual network](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md).
-
-3. [Link your virtual network to the CloudSimple ExpressRoute circuit](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md#connect-a-vnet-to-a-circuitdifferent-subscription).
-
-4. [Use the peering information that you copied to link your virtual network to the ExpressRoute circuit](virtual-network-connection.md).
-
-## 3. Deploy an Azure application gateway
-
-The detailed instructions for this are available in [Create an application gateway with path-based routing rules using the Azure portal](../application-gateway/create-url-route-portal.md). Here is a summary of the required steps:
-
-1. Create a virtual network in your subscription and resource group.
-2. Create a subnet (to be used as the dedicated subnet) within your virtual network.
-3. Create a standard Application Gateway (optionally enable WAF): From the Azure portal homepage, click **Create a resource** > **Networking** > **Application Gateway**. Select the standard SKU and size, and provide Azure subscription, resource group, and location information. If required, create a new public IP for this application gateway, and provide details of the virtual network and the dedicated subnet for the application gateway.
-4. Add a backend pool with virtual machines and add it to your application gateway.
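-
-The same deployment can also be scripted. The following is a minimal Azure CLI sketch; all names, the location, and the backend addresses are placeholders, with the backend pool pointing at web server VMs in the Private Cloud:
-
-```
-# Create an application gateway whose backend pool targets Private Cloud web servers.
-# Names, location, and IP addresses below are placeholders.
-az network application-gateway create \
-  --name cs-appgw \
-  --resource-group appgw-rg \
-  --location westus2 \
-  --sku Standard_Medium \
-  --capacity 2 \
-  --vnet-name appgw-vnet \
-  --subnet appgw-subnet \
-  --public-ip-address appgw-pip \
-  --servers 192.168.10.11 192.168.10.12
-```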
-
-## 4. Create and configure a web server VM pool in your Private Cloud
-
-In vCenter, create VMs with the OS and web server of your choice (such as Windows/IIS or Linux/Apache). Choose a subnet/VLAN that is designated for the web tier in your Private Cloud. Verify that at least one vNIC of the web server VMs is on the web tier subnet.
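-
-As a quick sanity check for the backend pool, a minimal web tier on a CentOS VM might be stood up as follows (assuming `yum` and Apache; adapt for your chosen OS and web server). The Application Gateway health probe should then report the VM as healthy on port 80.
-
-```
-# Install and start Apache, then publish a trivial test page.
-sudo yum -y install httpd
-sudo systemctl enable --now httpd
-echo "web tier up" | sudo tee /var/www/html/index.html
-```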
vmware-cloudsimple Azure Create Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-create-vm.md
- Title: Azure VMware Solution by CloudSimple - Create a virtual machine in Azure with VM templates
-description: Describes how to create virtual machines in Azure using VM templates on the VMware infrastructure for your CloudSimple Private Cloud
-- Previously updated : 08/16/2019 ------
-# Create a virtual machine in Azure using VM templates on the VMware infrastructure
-
-You can create a virtual machine in the Azure portal by using VM templates on the VMware infrastructure that your CloudSimple administrator has enabled for your subscription.
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create CloudSimple virtual machine
-
-1. Select **All services**.
-
-2. Search for **CloudSimple Virtual Machines**.
-
-3. Click **Add**.
-
- ![Create CloudSimple virtual machine](media/create-cloudsimple-virtual-machine.png)
-
-4. Enter basic information and click **Next: Size**.
-
- > [!NOTE]
-   > CloudSimple virtual machine creation on Azure requires a VM template. This template must exist on your Private Cloud vCenter. From the vCenter UI, create a virtual machine on your Private Cloud with the desired operating system and configuration, and then create a template from it by following [Clone a Virtual Machine to a Template in the vSphere Web Client](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-FE6DE4DF-FAD0-4BB0-A1FD-AFE9A40F4BFE_copy.html).
-
- ![Create CloudSimple virtual machine - basics](media/create-cloudsimple-virtual-machine-basic-info.png)
-
- | Field | Description |
- | | - |
- | Subscription | Azure subscription associated with your Private Cloud. |
- | Resource Group | Resource group to which the VM will be assigned. You can select an existing group or create a new one. |
- | Name | Name to identify the VM. |
- | Location | Azure region in which this VM is hosted. |
- | Private Cloud | CloudSimple Private Cloud where you want to create the virtual machine. |
- | Resource Pool | Mapped resource pool for the VM. Select from the available resource pools. |
- | vSphere Template | vSphere template for the VM. |
- | User name | User name of the VM administrator (for Windows templates)|
- | Password <br>Confirm password | Password for the VM administrator (for Windows templates). |
-
-5. Select the number of cores and memory capacity for the VM and click **Next: Configurations**. Select the checkbox if you want to expose full CPU virtualization to the guest operating system so that applications that require hardware virtualization can run on virtual machines without binary translation or paravirtualization. For more information, see the VMware article [Expose VMware Hardware Assisted Virtualization](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-2A98801C-68E8-47AF-99ED-00C63E4857F6.html).
-
- ![Create CloudSimple virtual machine - size](media/create-cloudsimple-virtual-machine-size.png)
-
-6. Configure network interfaces and disks as described in the following tables and click **Review + create**.
-
- ![Create CloudSimple virtual machine - configurations](media/create-cloudsimple-virtual-machine-configurations.png)
-
- For network interfaces, click **Add network interface** and configure the following settings.
-
- | Control | Description |
- | | - |
- | Name | Enter a name to identify the interface. |
-   | Network | Select from the list of configured distributed port groups in your Private Cloud vSphere. |
- | Adapter | Select a vSphere adaptor from the list of available types configured for the VM. For more information, see the VMware knowledge base article [Choosing a network adapter for your virtual machine](https://kb.vmware.com/s/article/1001805). |
- | Power on at Boot | Choose whether to enable the NIC hardware when the VM is booted. The default is **Enable**. |
-
- For disks, click **Add disk** and configure the following settings.
-
- | Item | Description |
- | | - |
- | Name | Enter a name to identify the disk. |
- | Size | Select one of the available sizes. |
- | SCSI Controller | Select a SCSI controller for the disk. |
-   | Mode | Determines how the disk participates in snapshots. Choose one of these options: <br> - Independent persistent: All data written to the disk is written permanently.<br> - Independent non-persistent: Changes written to the disk are discarded when you power off or reset the virtual machine. Independent non-persistent mode allows you to always restart the VM in the same state. For more information, see the [VMware documentation](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-8B6174E6-36A8-42DA-ACF7-0DA4D8C5B084.html). |
-
-7. Once validation completes, review the settings and click **Create**. To make any changes, click the tabs at the top.
-
- ![Create CloudSimple virtual machine - review](media/create-cloudsimple-virtual-machine-review.png)
-
-## View list of CloudSimple virtual machines
-
-1. Select **All services**.
-
-2. Search for **CloudSimple Virtual Machines**.
-
-3. Select a virtual machine created on your Private Cloud.
-
- ![List of CloudSimple Virtual Machines](media/list-cloudsimple-virtual-machines.png)
-
-The list of CloudSimple virtual machines includes virtual machines created from the Azure portal. Virtual machines created on the Private Cloud vCenter in the mapped vCenter resource pool are also shown in the list.
vmware-cloudsimple Azure Expressroute Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-expressroute-connection.md
- Title: Azure VMware Solution by CloudSimple - Connect Private Cloud to Azure network using ExpressRoute
-description: Describes how to connect your CloudSimple Private Cloud environment to the Azure virtual network using ExpressRoute
-- Previously updated : 08/14/2019 ------
-# Connect your CloudSimple Private Cloud environment to the Azure virtual network using ExpressRoute
-
-Your CloudSimple Private Cloud can be connected to your Azure virtual network using Azure ExpressRoute. This high-bandwidth, low-latency connection allows you to access services running in your Azure subscription from your Private Cloud environment.
-
-Virtual network connection allows you to:
-
-* Use Azure as a backup target for virtual machines on your Private Cloud.
-* Deploy KMS servers in your Azure subscription to encrypt your Private Cloud vSAN datastore.
-* Use hybrid applications where the web tier of the application runs in the public cloud while the application and database tiers run in your Private Cloud.
-
-![Azure ExpressRoute Connection to virtual network](media/cloudsimple-azure-network-connection.png)
-
-## Set up a virtual network connection
-
-To set up the virtual network connection to your Private Cloud, you need your authorization key, peer circuit URI, and access to your Azure subscription. This information is available on the Virtual Network Connection page in the CloudSimple portal. For instructions, see [Obtain peering information for Azure virtual network to CloudSimple connection](virtual-network-connection.md). If you have any trouble obtaining the information, submit a <a href="https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest" target="_blank">support request</a>.
-
-> [!TIP]
-> If you already have an Azure virtual network, gateway subnet, and virtual network gateway, you can skip to step 4.
-
-1. Create a virtual network on your Azure subscription and verify that the address space you select is different from the address space of your Private Cloud. If you already have an Azure virtual network, you can use the existing one. For details, see [Create a virtual network using the Azure portal](../virtual-network/quick-create-portal.md).
-2. Create the gateway subnet on your Azure virtual network. If you already have a gateway subnet in your Azure virtual network, you can use the existing one. For details, see [Create the gateway subnet](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md#create-the-gateway-subnet).
-3. Create the virtual network gateway on your virtual network. If you have an existing virtual network gateway, you can use the existing one. For details, see [Create the virtual network gateway](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md#create-the-virtual-network-gateway).
-4. Create the connection between your virtual network and your Private Cloud by redeeming the authorization key as described in [Connect a virtual network to a circuit - different subscription](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md#connect-a-vnet-to-a-circuitdifferent-subscription).
-
-> [!WARNING]
-> If you are using an existing virtual network gateway and it has an ExpressRoute connection to the same location as the CloudSimple ExpressRoute circuit, the connection will not be established. Create a new virtual network and follow the previous steps.
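-
-If you prefer scripting, steps 2 through 4 map roughly to the Azure CLI as shown below. The resource group, gateway, and connection names are placeholders; the peer circuit URI and authorization key come from the CloudSimple portal:
-
-```
-# Create a public IP and an ExpressRoute virtual network gateway in the gateway subnet.
-az network public-ip create --resource-group net-rg --name er-gw-pip
-az network vnet-gateway create \
-  --resource-group net-rg --name er-gw \
-  --vnet my-vnet --gateway-type ExpressRoute \
-  --sku Standard --public-ip-address er-gw-pip
-
-# Redeem the authorization key against the CloudSimple peer circuit.
-az network vpn-connection create \
-  --resource-group net-rg --name cs-connection \
-  --vnet-gateway1 er-gw \
-  --express-route-circuit2 "<peer circuit URI>" \
-  --authorization-key "<authorization key>"
-```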
-
-## Test the virtual network connection
-
-After the connection is created, you can check the status of the connection by selecting **Properties** under **Settings**. Status and Provisioning State should show **Succeeded**.
-
-![Connection Status](media/azure-expressroute-connection.png)
-
-To test the virtual network connection:
-
-1. Create a virtual machine in your Azure subscription.
-2. Find the IP address of your Private Cloud vCenter (refer to your welcome email).
-3. Ping your Cloud vCenter from the virtual machine created in your Azure virtual network.
-4. Ping your Azure virtual machine from a virtual machine running in your Private Cloud vCenter.
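-
-From the Azure virtual machine, the checks in steps 3 and 4 can be as simple as the following; the address is a placeholder for the vCenter IP from your welcome email:
-
-```
-# Basic reachability checks against the Private Cloud vCenter (placeholder IP).
-ping -c 4 192.0.2.50
-# vCenter serves its UI over HTTPS, so TCP port 443 should also respond.
-nc -zv 192.0.2.50 443
-```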
-
-If you have any issues establishing the connection, submit a <a href="https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest" target="_blank">support request</a>.
vmware-cloudsimple Azure Manage Vm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-manage-vm.md
- Title: Azure VMware Solution by CloudSimple - Manage Private Cloud VMs in Azure
-description: Describes how to manage CloudSimple Private Cloud VMs in the Azure portal, including adding disks, changing VM capacity, and adding network interfaces
-- Previously updated : 08/16/2019 ------
-# Manage your CloudSimple Private Cloud virtual machines in Azure
-
-To manage the virtual machines that you [created for your CloudSimple Private Cloud](azure-create-vm.md), sign in to the [Azure portal](https://portal.azure.com). Search for and select the virtual machine (search under **All Services** or **Virtual Machines** on the side menu).
-
-## Control virtual machine operation
-
-The following controls are available from the **Overview** page for your selected virtual machine.
-
-| Control | Description |
-| | - |
-| Connect | Connect to the specified VM. |
-| Start | Start the specified VM. |
-| Restart | Shut down and then power up the specified VM. |
-| Stop | Shut down the specified VM. |
-| Capture | Capture an image of the specified VM so it can be used as an image to create other VMs. See [Create a managed image of a generalized VM in Azure](../virtual-machines/windows/capture-image-resource.md). |
-| Move | Move the specified VM. |
-| Delete | Remove the specified VM. |
-| Refresh | Refresh the data in the display. |
-
-### View performance information
-
-The charts in the lower area of the **Overview** page present performance data for the selected interval (last hour to last 30 days; default is last hour). Within each chart, you can display the numeric values for any time within the interval by moving your cursor back and forth over the chart.
-
-The following charts are displayed.
-
-| Item | Description |
-| | - |
-| CPU (average) | Average CPU utilization in percentage over the selected interval. |
-| Network | Traffic in and out of the network (MB) over the selected interval. |
-| Disk Bytes | Total data read from disk and written to disk (MB) over the selected interval. |
-| Disk Operations | Average rate of disk operations (operations/second) over the selected interval. |
-
-## Manage VM disks
-
-To manage VM disks, open the **Disks** page for the selected VM. To add a disk, click **Add disk**, configure each of the following settings by entering or selecting an inline option, and then click **Save**.
-
- | Item | Description |
- | | - |
- | Name | Enter a name to identify the disk. |
- | Size | Select one of the available sizes. |
- | SCSI Controller | Select a SCSI controller. The available controllers vary for the different supported operating systems. |
- | Mode | Determines how the disk participates in snapshots. Choose one of these options: <br> - Independent persistent: All data written to the disk is written permanently.<br> - Independent, non-persistent: Changes written to the disk are discarded when you power off or reset the virtual machine. This mode allows you to always restart the VM in the same state. For more information, see the [VMware documentation](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-8B6174E6-36A8-42DA-ACF7-0DA4D8C5B084.html). |
-
-To delete a disk, select it and click **Delete**.
-
-## Change the capacity of the VM
-
-To change the capacity of the VM, open the **Size** page for the selected VM. Specify any of the following, and click **Save**.
-
-| Item | Description |
-| | - |
-| Number of cores | Number of cores assigned to the VM. |
-| Hardware virtualization | Select the checkbox to expose the hardware virtualization to the guest OS. See the VMware article [Expose VMware Hardware Assisted Virtualization](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-2A98801C-68E8-47AF-99ED-00C63E4857F6.html). |
-| Memory Size | Select the amount of memory to allocate to the VM. |
-
-## Manage network interfaces
-
-To add an interface, click **Add network interface**. Configure each of the following settings by entering or selecting an inline option. Click **Save**.
-
- | Control | Description |
- | | - |
- | Name | Enter a name to identify the interface. |
- | Network | Select from the list of configured networks in your Private Cloud vSphere. |
- | Adapter | Select a vSphere adaptor from the list of available types configured for the VM. For more information, see the VMware knowledge base article [Choosing a network adapter for your virtual machine](https://kb.vmware.com/s/article/1001805). |
- | Power on at Boot | Choose whether to enable the NIC hardware when the VM is booted. The default is **Enable**. |
-
-To delete a network interface, select it and click **Delete**.
vmware-cloudsimple Azure Subscription Mapping https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/azure-subscription-mapping.md
- Title: Create resource pools with Azure subscription mapping-
-description: Describes how to create resource pools for your Private Cloud through Azure subscription mapping
-- Previously updated : 08/14/2019 ------
-# Create resource pools for your Private Cloud with Azure subscription mapping
-Azure subscription mapping allows you to create resource pools for your Private Cloud from the available vSphere resource pools. In the CloudSimple portal, you can view and manage the Azure subscription for your Private Clouds.
-
-> [!NOTE]
-> Mapping a resource pool also maps any child resource pools. A parent resource pool cannot be mapped if any child resource pools are already mapped.
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md).
-2. Open the **Resources** page and select **Azure subscriptions mapping**.
-3. Click **Edit Azure subscription mapping**.
-4. To map available resource pools, select them on the left and click the right-facing arrow.
-5. To remove mappings, select them on the right and click the left-facing arrow.
-
- ![Azure subscriptions](media/resources-azure-mapping.png)
-
-6. Click **OK**.
vmware-cloudsimple Backup Workloads Veeam https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/backup-workloads-veeam.md
- Title: Azure VMware Solution by CloudSimple - Back up workload virtual machines on Private Cloud using Veeam
-description: Describes how you can back up your virtual machines that are running in an Azure-based CloudSimple Private Cloud using Veeam B&R 9.5
-- Previously updated : 08/16/2019 ------
-# Back up workload VMs on CloudSimple Private Cloud using Veeam B&R
-
-This guide describes how you can back up your virtual machines that are running in an Azure-based CloudSimple Private Cloud by using Veeam B&R 9.5.
-
-## About the Veeam backup and recovery solution
-
-The Veeam solution includes the following components.
-
-**Backup Server**
-
-The backup server is a Windows server (VM) that serves as the control center for Veeam and performs these functions:
-
-* Coordinates backup, replication, recovery verification, and restore tasks
-* Controls job scheduling and resource allocation
-* Allows you to set up and manage backup infrastructure components and specify global settings for the backup infrastructure
-
-**Proxy Servers**
-
-Proxy servers are installed between the backup server and other components of the backup infrastructure. They manage the following functions:
-
-* Retrieval of VM data from the production storage
-* Compression
-* Deduplication
-* Encryption
-* Transmission of data to the backup repository
-
-**Backup repository**
-
-The backup repository is the storage location where Veeam keeps backup files, VM copies, and metadata for replicated VMs. The repository can be a Windows or Linux server with local disks (or mounted NFS/SMB) or a hardware storage deduplication appliance.
-
-### Veeam deployment scenarios
-You can leverage Azure to provide a backup repository and a storage target for long-term backup and archiving. All the backup network traffic between VMs in the Private Cloud and the backup repository in Azure travels over a high-bandwidth, low-latency link. Replication traffic across regions travels over the internal Azure backplane network, which lowers bandwidth costs for users.
-
-**Basic deployment**
-
-For environments with less than 30 TB to back up, CloudSimple recommends the following configuration:
-
-* Veeam backup server and proxy server installed on the same VM in the Private Cloud.
-* A Linux-based primary backup repository in Azure configured as a target for backup jobs.
-* `azcopy` used to copy the data from the primary backup repository to an Azure blob container that is replicated to another region.
-
-![Diagram that shows basic Veeam deployment scenarios.](media/veeam-basicdeployment.png)
-
-**Advanced deployment**
-
-For environments with more than 30 TB to back up, CloudSimple recommends the following configuration:
-
-* One proxy server per node in the vSAN cluster, as recommended by Veeam.
-* A Windows-based primary backup repository in the Private Cloud to cache five days of data for fast restores.
-* A Linux-based backup repository in Azure as a target for backup copy jobs for longer-term retention. This repository should be configured as a scale-out backup repository.
-* `azcopy` used to copy the data from the primary backup repository to an Azure blob container that is replicated to another region.
-
-![Diagram that shows advanced Veeam deployment scenarios.](media/veeam-advanceddeployment.png)
-
-In the previous figure, notice that the backup proxy is a VM with Hot Add access to workload VM disks on the vSAN datastore. Veeam uses Virtual Appliance backup proxy transport mode for vSAN.
-
-## Requirements for Veeam solution on CloudSimple
-
-The Veeam solution requires you to do the following:
-
-* Provide your own Veeam licenses.
-* Deploy and manage Veeam to back up the workloads running in the CloudSimple Private Cloud.
-
-This solution provides you with full control over the Veeam backup tool and offers the choice to use the native Veeam interface or the Veeam vCenter plug-in to manage VM backup jobs.
-
-If you are an existing Veeam user, you can skip the solution component sections and proceed directly to [Veeam deployment scenarios](#veeam-deployment-scenarios).
-
-## Install and configure Veeam backups in your CloudSimple Private Cloud
-
-The following sections describe how to install and configure a Veeam backup solution for your CloudSimple Private Cloud.
-
-The deployment process consists of these steps:
-
-1. [vCenter UI: Set up infrastructure services in your Private Cloud](#vcenter-ui-set-up-infrastructure-services-in-your-private-cloud)
-2. [CloudSimple portal: Set up Private Cloud networking for Veeam](#cloudsimple-private-cloud-set-up-private-cloud-networking-for-veeam)
-3. [CloudSimple portal: Escalate Privileges](#cloudsimple-private-cloud-escalate-privileges-for-cloudowner)
-4. [Azure portal: Connect your virtual network to the Private Cloud](#azure-portal-connect-your-virtual-network-to-the-private-cloud)
-5. [Azure portal: Create a backup repository in Azure](#azure-portal-create-a-backup-repository-vm)
-6. [Azure portal: Configure Azure blob storage for long term data retention](#configure-azure-blob-storage-for-long-term-data-retention)
-7. [vCenter UI of Private Cloud: Install Veeam B&R](#vcenter-console-of-private-cloud-install-veeam-br)
-8. [Veeam console: Install Veeam backup and recovery software](#veeam-console-install-veeam-backup-and-recovery-software)
-9. [CloudSimple portal: Set up Veeam access and de-escalate privileges](#cloudsimple-portal-set-up-veeam-access-and-de-escalate-privileges)
-
-### Before you begin
-
-The following are required before you begin Veeam deployment:
-
-* An Azure subscription owned by you
-* A pre-created Azure resource group
-* An Azure virtual network in your subscription
-* An Azure storage account
-* A [Private Cloud](create-private-cloud.md) created using the CloudSimple portal.
-
-The following items are needed during the implementation phase:
-
-* VMware templates for Windows to install Veeam (such as Windows Server 2012 R2 - 64 bit image)
-* One available VLAN identified for the backup network
-* CIDR of the subnet to be assigned to the backup network
-* Veeam 9.5 u3 installable media (ISO) uploaded to the vSAN datastore of the Private Cloud
-
-### vCenter UI: Set up infrastructure services in your Private Cloud
-
-Configure infrastructure services in the Private Cloud to make it easy to manage your workloads and tools.
-
-* You can add an external identity provider as described in [Set up vCenter identity sources to use Active Directory](set-vcenter-identity.md) if any of the following apply:
-
- * You want to identify users from your on-premises Active Directory (AD) in your Private Cloud.
- * You want to set up an AD in your Private Cloud for all users.
- * You want to use Azure AD.
-* To provide IP address lookup, IP address management, and name resolution services for your workloads in the Private Cloud, set up a DHCP and DNS server as described in [Set up DNS and DHCP applications and workloads in your CloudSimple Private Cloud](dns-dhcp-setup.md).
-
-### CloudSimple Private Cloud: Set up Private Cloud networking for Veeam
-
-Access the CloudSimple portal to set up Private Cloud networking for the Veeam solution.
-
-Create a VLAN for the backup network and assign it a subnet CIDR. For instructions, see [Create and manage VLANs/Subnets](create-vlan-subnet.md).
-
-Create firewall rules between the management subnet and the backup network to allow network traffic on ports used by Veeam. See the Veeam topic [Used Ports](https://helpcenter.veeam.com/docs/backup/vsphere/used_ports.html?ver=95). For instructions on firewall rule creation, see [Set up firewall tables and rules](firewall.md).
-
-The following table provides a port list.
-
-| From | To | Protocol | Port |
-| --- | --- | --- | --- |
-| Backup Server | vCenter | HTTPS/TCP | 443 |
-| Backup Server <br> *Required for deploying Veeam Backup & Replication components* | Backup Proxy | TCP/UDP | 135, 137 to 139, and 445 |
-| Backup Server | DNS | UDP | 53 |
-| Backup Server | Veeam Update Notification Server | TCP | 80 |
-| Backup Server | Veeam License Update Server | TCP | 443 |
-| Backup Proxy | vCenter | | |
-| Backup Proxy | Linux Backup Repository | TCP | 22 |
-| Backup Proxy | Windows Backup Repository | TCP | 49152 - 65535 |
-| Backup Repository | Backup Proxy | TCP | 2500 - 5000 |
-| Source Backup Repository <br> *Used for backup copy jobs* | Target Backup Repository | TCP | 2500 - 5000 |
-
-Create firewall rules between the workload subnet and the backup network as described in [Set up firewall tables and rules](firewall.md). For application aware backup and restore, [additional ports](https://helpcenter.veeam.com/docs/backup/vsphere/used_ports.html?ver=95) must be opened on the workload VMs that host specific applications.
-
-By default, CloudSimple provides a 1-Gbps ExpressRoute link. For larger environments, a higher-bandwidth link may be required. Contact Azure support for more information about higher-bandwidth links.
-
-To continue the setup, you need the authorization key, the peer circuit URI, and access to your Azure subscription. This information is available on the Virtual Network Connection page in the CloudSimple portal. For instructions, see [Obtain peering information for Azure virtual network to CloudSimple connection](virtual-network-connection.md). If you have any trouble obtaining the information, [contact support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-### CloudSimple Private Cloud: Escalate privileges for cloudowner
-
-The default 'cloudowner' user doesn't have sufficient privileges in the Private Cloud vCenter to install Veeam, so the user's vCenter privileges must be escalated. For more information, see [Escalate privileges](escalate-private-cloud-privileges.md).
-
-### Azure portal: Connect your virtual network to the Private Cloud
-
-Connect your virtual network to the Private Cloud by following the instructions in [Azure Virtual Network Connection using ExpressRoute](azure-expressroute-connection.md).
-
-### Azure portal: Create a backup repository VM
-
-1. Create a Standard D2 v3 VM (2 vCPUs and 8 GB of memory).
-2. Select the CentOS 7.4 based image.
-3. Configure a network security group (NSG) for the VM. Verify that the VM does not have a public IP address and is not reachable from the public internet.
-4. Create a username and password based user account for the new VM. For instructions, see [Create a Linux virtual machine in the Azure portal](../virtual-machines/linux/quick-create-portal.md).
-5. Create 1x512 GiB standard HDD and attach it to the repository VM. For instructions, see [How to attach a managed data disk to a Windows VM in the Azure portal](../virtual-machines/windows/attach-managed-disk-portal.md).
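-
-   Steps 1 through 5 can also be scripted. The following is a minimal Azure CLI sketch with placeholder names; `--public-ip-address ""` keeps the VM off the public internet, and the image URN is an assumption to adjust for your region:
-
-   ```
-   # Create the repository VM without a public IP and with a 512-GiB data disk.
-   az vm create \
-     --resource-group veeam-rg \
-     --name veeam-repo \
-     --image OpenLogic:CentOS:7.5:latest \
-     --size Standard_D2_v3 \
-     --admin-username veeamadmin \
-     --admin-password "<password>" \
-     --public-ip-address "" \
-     --data-disk-sizes-gb 512
-   ```
-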
-6. [Create an XFS volume on the managed disk](https://www.digitalocean.com/docs/volumes/how-to/). Log in to the VM using the previously created credentials. Execute the following script to create a logical volume, add the disk to it, create an XFS filesystem [partition](https://www.digitalocean.com/docs/volumes/how-to/partition/), and [mount](https://www.digitalocean.com/docs/volumes/how-to/mount/) the partition under the /backup1 path.
-
- Example script:
-
- ```
-    # Initialize the data disk as an LVM physical volume and create a volume group.
-    sudo pvcreate /dev/sdc
-    sudo vgcreate backup1 /dev/sdc
-    # Create a logical volume that uses all free space in the group.
-    sudo lvcreate -n backup1 -l 100%FREE backup1
-    # Build an XFS file system on the logical volume and mount it under /backup1.
-    sudo mkfs.xfs -d su=64k -d sw=1 -f /dev/mapper/backup1-backup1
-    sudo mkdir -p /backup1
-    sudo mount -t xfs /dev/mapper/backup1-backup1 /backup1
-    # Grant the Veeam service account ownership of the mounted volume.
-    sudo chown veeamadmin /backup1
-    sudo chmod 775 /backup1
- ```
-
-7. Expose /backup1 as an NFS mount point to the Veeam backup server that is running in the Private Cloud. For instructions, see the Digital Ocean article [How To Set Up an NFS Mount on CentOS 6](https://www.digitalocean.com/community/tutorials/how-to-set-up-an-nfs-mount-on-centos-6). Use this NFS share name when you configure the backup repository in the Veeam backup server.
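-
-   A minimal sketch of the export, assuming the backup network subnet is 10.100.0.0/24 (replace with your own CIDR):
-
-   ```
-   # Install the NFS server, export /backup1 to the backup subnet, and apply the export.
-   sudo yum -y install nfs-utils
-   echo '/backup1 10.100.0.0/24(rw,sync,no_root_squash)' | sudo tee -a /etc/exports
-   sudo systemctl enable --now nfs-server
-   sudo exportfs -ra
-   ```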
-
-8. Configure filtering rules in the NSG for the backup repository VM to explicitly allow all network traffic to and from the VM.
-
-> [!NOTE]
-> Veeam Backup & Replication uses the SSH protocol to communicate with Linux backup repositories and requires the SCP utility on Linux repositories. Verify that the SSH daemon is properly configured and that SCP is available on the Linux host.
-
-### Configure Azure blob storage for long term data retention
-
-1. Create a general purpose storage account (GPv2) of standard type and a blob container as described in the Microsoft video [Getting Started with Azure Storage](https://azure.microsoft.com/resources/videos/get-started-with-azure-storage).
-2. Create an Azure storage container, as described in the [Create Container](/rest/api/storageservices/create-container) reference.
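-
-   If you script this step, a minimal Azure CLI sketch with placeholder names might look like the following; the `Standard_RAGRS` SKU provides the cross-region replication mentioned earlier:
-
-   ```
-   # Create an RA-GRS general-purpose v2 storage account and a blob container.
-   az storage account create --name veeamarchive --resource-group veeam-rg \
-     --location westus2 --sku Standard_RAGRS --kind StorageV2
-   az storage container create --account-name veeamarchive --name backups
-   ```
-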
-3. Download the `azcopy` command-line utility for Linux from Microsoft. You can use the following commands in the bash shell on CentOS 7.5.
-
- ```
-    # Install dependencies required by the Linux AzCopy build.
-    sudo yum -y install libunwind.x86_64 icu
-    # Download, extract, and install AzCopy.
-    wget -O azcopy.tar.gz https://aka.ms/downloadazcopylinux64
-    tar -xf azcopy.tar.gz
-    sudo ./install.sh
- ```
-
-4. Use the `azcopy` command to copy backup files to and from the blob container. See [Transfer data with AzCopy on Linux](../storage/common/storage-use-azcopy-v10.md) for detailed commands.
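-
-   For example, with AzCopy v10 a copy from the repository to the container might look like this; the account, container, and SAS token are placeholders:
-
-   ```
-   # Recursively copy the local backup repository to the blob container using a SAS token.
-   azcopy copy "/backup1/*" "https://veeamarchive.blob.core.windows.net/backups?<SAS-token>" --recursive
-   ```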
-
-### vCenter console of Private Cloud: Install Veeam B&R
-
-Access vCenter from your Private Cloud to create a Veeam service account, install Veeam B&R 9.5, and configure Veeam using the service account.
-
-1. Create a new role named 'Veeam Backup Role' and assign it the necessary permissions as recommended by Veeam. For details, see the Veeam topic [Required Permissions](https://helpcenter.veeam.com/docs/backup/vsphere/required_permissions.html?ver=95).
-2. Create a new 'Veeam User Group' group in vCenter and assign it the 'Veeam Backup Role'.
-3. Create a new 'Veeam Service Account' user and add it to the 'Veeam User Group'.
-
- ![Creating a Veeam service account](media/veeam-vcenter01.png)
-
-4. Create a distributed port group in vCenter using the backup network VLAN. For details, view the VMware video [Creating a Distributed Port Group in the vSphere Web Client](https://www.youtube.com/watch?v=wpCd5ZbPOpA).
-5. Create the VMs for the Veeam backup and proxy servers in vCenter as per the [Veeam system requirements](https://helpcenter.veeam.com/docs/backup/vsphere/system_requirements.html?ver=95). You can use Windows 2012 R2 or Linux. For more information see [Requirements for using Linux backup repositories](https://www.veeam.com/kb2216).
-6. Mount the installable Veeam ISO as a CDROM device in the Veeam backup server VM.
-7. Using an RDP session, connect to the Windows 2012 R2 VM that is the target for the Veeam installation, and [install Veeam B&R 9.5u3](https://helpcenter.veeam.com/docs/backup/vsphere/install_vbr.html?ver=95).
-8. Find the internal IP address of the Veeam backup server VM and configure the IP address to be static in the DHCP server. The exact steps depend on the DHCP server. As an example, the Netgate article <a href="https://docs.netgate.com/pfsense/en/latest/services/dhcp/index.html" target="_blank">static DHCP mappings</a> explains how to configure a DHCP server using a pfSense router.
-
-### Veeam console: Install Veeam backup and recovery software
-
-Using the Veeam console, configure Veeam backup and recovery software. For details, see [Veeam Backup & Replication v9 - Installation and Deployment](https://www.youtube.com/watch?v=b4BqC_WXARk).
-
-1. Add VMware vSphere as a managed server environment. When prompted, provide the credentials of the Veeam Service Account that you created at the beginning of [vCenter Console of Private Cloud: Install Veeam B&R](#vcenter-console-of-private-cloud-install-veeam-br).
-
- * Use default settings for load control and default advanced settings.
- * Set the mount server location to be the backup server.
- * Change the configuration backup location for the Veeam server to the remote repository.
-
-2. Add the Linux server in Azure as the backup repository.
-
- * Use default settings for load control and for the advanced settings.
- * Set the mount server location to be the backup server.
- * Change the configuration backup location for the Veeam server to the remote repository.
-
-3. Enable encryption of configuration backup using **Home> Configuration Backup Settings**.
-
-4. Add a Windows server VM as a proxy server for the VMware environment. Using 'Traffic Rules' for a proxy, encrypt backup data over the wire.
-
-5. Configure backup jobs.
- * To configure backup jobs, follow the instructions in [Creating a Backup Job](https://www.youtube.com/watch?v=YHxcUFEss4M).
- * Enable encryption of backup files under **Advanced Settings > Storage**.
-
-6. Configure backup copy jobs.
-
- * To configure backup copy jobs, follow the instructions in the video [Creating a Backup Copy Job](https://www.youtube.com/watch?v=LvEHV0_WDWI&t=2s).
- * Enable encryption of backup files under **Advanced Settings > Storage**.
-
-### CloudSimple portal: Set up Veeam access and de-escalate privileges
-Create a public IP address for the Veeam backup and recovery server. For instructions, see [Allocate public IP addresses](public-ips.md).
-
-Create a firewall rule to allow the Veeam backup server to make an outbound connection to the Veeam website on TCP port 80 for downloading updates and patches. For instructions, see [Set up firewall tables and rules](firewall.md).
-
-To de-escalate privileges, see [De-escalate privileges](escalate-private-cloud-privileges.md#de-escalate-privileges).
-
-## References
-
-### CloudSimple references
-
-* [Create a Private Cloud](create-private-cloud.md)
-* [Create and manage VLANs/Subnets](create-vlan-subnet.md)
-* [vCenter Identity Sources](set-vcenter-identity.md)
-* [Workload DNS and DHCP Setup](dns-dhcp-setup.md)
-* [Escalate privileges](escalate-privileges.md)
-* [Set up firewall tables and rules](firewall.md)
-* [Private Cloud permissions](learn-private-cloud-permissions.md)
-* [Allocate public IP Addresses](public-ips.md)
-
-### Veeam References
-
-* [Used Ports](https://helpcenter.veeam.com/docs/backup/vsphere/used_ports.html?ver=95)
-* [Required Permissions](https://helpcenter.veeam.com/docs/backup/vsphere/required_permissions.html?ver=95)
-* [System Requirements](https://helpcenter.veeam.com/docs/backup/vsphere/system_requirements.html?ver=95)
-* [Installing Veeam Backup & Replication](https://helpcenter.veeam.com/docs/backup/vsphere/install_vbr.html?ver=95)
-* [Required modules and permissions for Multi-OS FLR and Repository support for Linux](https://www.veeam.com/kb2216)
-* [Veeam Backup & Replication v9 - Installation and Deployment - Video](https://www.youtube.com/watch?v=b4BqC_WXARk)
-* [Veeam v9 Creating a Backup Job - Video](https://www.youtube.com/watch?v=YHxcUFEss4M)
-* [Veeam v9 Creating a Backup Copy Job - Video](https://www.youtube.com/watch?v=LvEHV0_WDWI&t=2s)
-
-### Azure references
-
-* [Configure a virtual network gateway for ExpressRoute using the Azure portal](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md)
-* [Connect a VNet to a circuit - different subscription](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md#connect-a-vnet-to-a-circuitdifferent-subscription)
-* [Create a Linux virtual machine in the Azure portal](../virtual-machines/linux/quick-create-portal.md)
-* [How to attach a managed data disk to a Windows VM in the Azure portal](../virtual-machines/windows/attach-managed-disk-portal.md)
-* [Getting Started with Azure Storage - Video](https://azure.microsoft.com/resources/videos/get-started-with-azure-storage)
-* [Create Container](/rest/api/storageservices/create-container)
-* [Transfer data with AzCopy on Linux](../storage/common/storage-use-azcopy-v10.md)
-
-### VMware references
-
-* [Creating a Distributed Port Group in the vSphere Web Client - Video](https://www.youtube.com/watch?v=wpCd5ZbPOpA)
-
-### Other references
-
-* [Create an XFS volume on the managed disk - RedHat](https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/storage_administration_guide/ch-xfs)
-* [How To Set Up an NFS Mount on CentOS 7 - HowToForge](https://www.howtoforge.com/nfs-server-and-client-on-centos-7)
-* [Configuring the DHCP Server - Netgate](https://docs.netgate.com/pfsense/en/latest/services/dhcp/index.html)
vmware-cloudsimple Cloudsimple Account https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-account.md
- Title: CloudSimple account management - Azure
-description: Learn about managing a CloudSimple account, which is created along with your CloudSimple service and is associated with your Azure subscription.
-- Previously updated : 04/10/2019 -----
-# Account management overview
-
-When you create your CloudSimple service, it creates an account on CloudSimple. The account is associated with your Azure subscription where the service is located. All users with **owner** and **contributor** roles in the subscription have access to the CloudSimple portal. The Azure subscription ID and tenant ID associated with the CloudSimple service are found on the [Accounts page](account.md).
-
-## Additional alert emails
-
-You can configure additional email addresses in CloudSimple to receive alerts:
-
-* Related to your service
-* Intended for automatic processing
-
-## CloudSimple operator access
-
-You can control access to the CloudSimple portal for service operations personnel. Service operations personnel sign in to the portal when you submit a support ticket. They fix any reported problems, and the actions they take are recorded in audit logs.
-
-## Users
-
-All users who have the **owner** or **contributor** role in the subscription have access to the CloudSimple portal. When a user first accesses the portal, a corresponding user is created in the CloudSimple account. You can disable access to the CloudSimple portal for specific users from the Accounts page.
-
-## Next steps
-
-* [View account summary](account.md)
-* [View user list](users.md)
vmware-cloudsimple Cloudsimple Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-activity.md
- Title: CloudSimple activity management-
-description: Learn about the Activity pages, which summarize activity and allow you find additional details. Activities include alerts, events, tasks, and audit activity.
-- Previously updated : 04/30/2019-----
-# Activity management overview
-
-CloudSimple keeps track of all the activity that can affect the functioning of your Private Cloud environment. Activities include alerts, events, tasks, and audit activity. The [Activity pages](monitor-activity.md) summarize all the current activity and allow you to drill down for additional details.
-
-## Events
-
-Events track user and system activity on the CloudSimple portal. Events show the activity associated with a specific resource and the severity of the impact. You can view the events from the CloudSimple portal.
-
-## Alerts
-
-Alerts are notifications of any significant activity in your CloudSimple environment. Events that impact billing or user access are shown as alerts. You can acknowledge alerts from the CloudSimple portal.
-
-## Tasks
-
-Tasks track any user operation that takes more than 30 seconds to complete. You can monitor the progress of a task from the CloudSimple portal. For completed tasks, the information includes the total time for completion.
-
-## Audit
-
-Audit logs keep track of user operations. Audit logs contain the parameters provided for the operation by the user. You can use audit logs to monitor user activity for all users.
-
-## Next steps
-
-* [View the account summary](account.md)
vmware-cloudsimple Cloudsimple Azure Network Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-azure-network-connection.md
- Title: VMware Solution by CloudSimple - Azure network connections
-description: Learn about connecting your Azure virtual network to your CloudSimple region network
-- Previously updated : 04/10/2019 -----
-# Azure network connections overview
-
-When you create a CloudSimple service in a region and create nodes, you can:
-
-* Request an Azure ExpressRoute circuit and attach it to the CloudSimple network in that region.
-* Connect your CloudSimple region network to your Azure virtual network or your on-premises network using Azure ExpressRoute.
-* Provide access to services running in your Azure subscription or your on-premises network from your Private Cloud environment.
-
-The ExpressRoute connection provides high bandwidth and low latency.
-
-## Benefits
-
-Azure network connection allows you to:
-
-* Use Azure as a backup target for virtual machines on your Private Cloud.
-* Deploy KMS servers in your Azure subscription to encrypt your Private Cloud vSAN datastore.
-* Use hybrid applications where the web tier of the application runs in the public cloud while the application and database tiers run in your Private Cloud.
-
-## Azure virtual network connection
-
-Private Clouds can be connected to your Azure resources using ExpressRoute. The ExpressRoute connection allows you to access resources running in your Azure subscription from your Private Cloud and extends your Private Cloud network to your Azure virtual network. Routes from the CloudSimple network are exchanged with your Azure virtual network via BGP. If you have virtual network peering configured, all peered virtual networks are accessible from your CloudSimple network.
-
-![Azure ExpressRoute Connection to virtual network](media/cloudsimple-azure-network-connection.png)
-
-## ExpressRoute connection to on-premises network
-
-You can connect your existing Azure ExpressRoute circuit to your CloudSimple region. The ExpressRoute Global Reach feature is used to connect the two circuits with each other. A connection is established between the on-premises and CloudSimple ExpressRoute circuits, allowing you to extend your on-premises networks to the Private Cloud network. Routes from your CloudSimple network are exchanged via BGP with your on-premises network.
-
-![On-premises ExpressRoute Connection - Global Reach](media/cloudsimple-global-reach-connection.png)
-
-## Connection to on-premises network and Azure virtual network
-
-Connections to an on-premises network and an Azure virtual network can coexist from your CloudSimple network. The connection uses BGP to exchange routes between the on-premises network, the Azure virtual network, and the CloudSimple network. When you connect your CloudSimple network to your Azure virtual network in the presence of a Global Reach connection, the Azure virtual network routes are visible on your on-premises network. Route exchange happens in Azure between the edge routers.
-
-![On-premises ExpressRoute Connection with Azure virtual network connection](media/cloudsimple-global-reach-and-vnet-connection.png)
-
-### Important considerations
-
-Connecting to the CloudSimple network from your on-premises network and from an Azure virtual network allows route exchange between all the networks.
-
-* The Azure virtual network is visible from both the on-premises network and the CloudSimple network.
-* If your on-premises network is connected to your Azure virtual network, a Global Reach connection to the CloudSimple network also allows access to those virtual networks from the CloudSimple network.
-* Subnet addresses **must not** overlap between any of the connected networks.
-* CloudSimple will **not** advertise a default route to the ExpressRoute connections.
-* If your on-premises router advertises the default route, traffic from the CloudSimple network and the Azure virtual network uses that route. As a result, virtual machines on Azure can't be accessed using public IP addresses.
-
-## Next steps
-
-* [Connect Azure virtual network to CloudSimple using ExpressRoute](virtual-network-connection.md)
-* [Connect from on-premises to CloudSimple using ExpressRoute](on-premises-connection.md)
vmware-cloudsimple Cloudsimple Firewall Tables https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-firewall-tables.md
- Title: Azure VMware Solution by CloudSimple - Firewall tables
-description: Learn about CloudSimple private cloud firewall tables and firewall rules, including default rules that are created on every firewall table.
-- Previously updated : 08/20/2019-----
-# Firewall tables overview
-
-A firewall table lists rules to filter network traffic to and from Private Cloud resources. You can apply firewall tables to a VLAN/subnet. The rules control network traffic between a source network or IP address and a destination network or IP address.
-
-## Firewall rules
-
-The following table describes the parameters in a firewall rule. A sketch after the table illustrates how rule priority is evaluated.
-
-| Property | Details |
-|--|--|
-| **Name** | A name that uniquely identifies the firewall rule and its purpose. |
-| **Priority** | A number between 100 and 4096, with 100 being the highest priority. Rules are processed in priority order. When traffic encounters a rule match, rule processing stops. As a result, rules with lower priorities that have the same attributes as rules with higher priorities aren't processed. Take care to avoid conflicting rules. |
-| **State Tracking** | Tracking can be stateless (for Private Cloud, internet, or VPN traffic) or stateful (for public IP traffic). |
-| **Protocol** | Options include Any, TCP, or UDP. If you require ICMP, use Any. |
-| **Direction** | Whether the rule applies to inbound or outbound traffic. |
-| **Action** | Allow or deny for the type of traffic defined in the rule. |
-| **Source** | An IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), or Any. Specifying a range, a service tag, or application security group enables you to create fewer security rules. |
-| **Source Port** | Port from which network traffic originates. You can specify an individual port or range of ports, such as 443 or 8000-8080. Specifying ranges enables you to create fewer security rules. |
-| **Destination** | An IP address, classless inter-domain routing (CIDR) block (10.0.0.0/24, for example), or Any. Specifying a range, a service tag, or application security group enables you to create fewer security rules. |
-| **Destination Port** | Port to which the network traffic flows. You can specify an individual port or range of ports, such as 443 or 8000-8080. Specifying ranges enables you to create fewer security rules.|
-
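-To make the priority semantics concrete, here's a minimal, hypothetical sketch (in Python, not the CloudSimple API) of first-match-wins evaluation; the rule fields and the final fall-through action are simplifications for illustration:
-
-```python
-from dataclasses import dataclass
-import ipaddress
-
-@dataclass
-class Rule:
-    priority: int   # 100 (highest) to 4096 (lowest)
-    source: str     # CIDR block or "Any"
-    dest_port: str  # port number or "Any"
-    action: str     # "Allow" or "Deny"
-
-def matches(rule: Rule, src_ip: str, port: int) -> bool:
-    src_ok = rule.source == "Any" or \
-        ipaddress.ip_address(src_ip) in ipaddress.ip_network(rule.source)
-    port_ok = rule.dest_port in ("Any", str(port))
-    return src_ok and port_ok
-
-def evaluate(rules: list[Rule], src_ip: str, port: int) -> str:
-    # Rules are processed in priority order; the first match wins and
-    # processing stops, so a lower-priority rule with the same attributes
-    # as a higher-priority rule is never reached.
-    for rule in sorted(rules, key=lambda r: r.priority):
-        if matches(rule, src_ip, port):
-            return rule.action
-    return "Deny"  # simplification; real behavior comes from the default rules
-
-rules = [
-    Rule(100, "10.0.0.0/24", "443", "Allow"),
-    Rule(200, "Any", "Any", "Deny"),
-]
-print(evaluate(rules, "10.0.0.25", 443))  # Allow: priority 100 matches first
-```
-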
-### Stateless
-
-A stateless rule looks only at individual packets and filters them based on the rule.
-Additional rules may be required for traffic flow in the reverse direction. Use stateless rules for traffic between the following points:
-
-* Subnets of Private Clouds
-* On-premises subnet and a Private Cloud subnet
-* Internet traffic from the Private Clouds
-
-### Stateful
-
-A stateful rule is aware of the connections that pass through it. A flow record is created for existing connections. Communication is allowed or denied based on the connection state of the flow record. Use this rule type for public IP addresses to filter traffic from the internet.
-
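-The following is a minimal, hypothetical model (not the CloudSimple implementation) of what a flow record buys you: reply traffic is accepted because it matches an existing connection, while unsolicited inbound traffic is denied:
-
-```python
-# Flow records for connections that have already been allowed outbound.
-flows = set()
-
-def outbound(src, sport, dst, dport):
-    flows.add((src, sport, dst, dport))  # record the connection
-    return "Allow"
-
-def inbound(src, sport, dst, dport):
-    # Reply packets invert the source and destination of a recorded flow.
-    if (dst, dport, src, sport) in flows:
-        return "Allow"
-    return "Deny"  # unsolicited traffic from the internet
-
-outbound("10.0.0.5", 40000, "93.184.216.34", 443)
-print(inbound("93.184.216.34", 443, "10.0.0.5", 40000))  # Allow: known flow
-print(inbound("198.51.100.7", 443, "10.0.0.5", 40000))   # Deny: no flow
-```
-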
-### Default rules
-
-The following default rules are created on every firewall table.
-
-|Priority|Name|State Tracking|Direction|Traffic Type|Protocol|Source|Source Port|Destination|Destination Port|Action|
-|--|--|--|--|--|--|--|--|--|--|--|
-|65000|allow-all-to-internet|Stateful|Outbound|Public IP or internet traffic|All|Any|Any|Any|Any|Allow|
-|65001|deny-all-from-internet|Stateful|Inbound|Public IP or internet traffic|All|Any|Any|Any|Any|Deny|
-|65002|allow-all-to-intranet|Stateless|Outbound|Private Cloud internal or VPN traffic|All|Any|Any|Any|Any|Allow|
-|65003|allow-all-from-intranet|Stateless|Inbound|Private Cloud internal or VPN traffic|All|Any|Any|Any|Any|Allow|
-
-## Next steps
-
-* [Set up firewall tables and rules](firewall.md)
vmware-cloudsimple Cloudsimple Maintenance Updates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-maintenance-updates.md
- Title: CloudSimple maintenance and updates
-description: Describes the CloudSimple service process for scheduled maintenance and updates
-- Previously updated : 03/09/2021
-# CloudSimple maintenance and updates
-
-The Private Cloud environment is designed to have no single point of failure.
-
-* ESXi clusters are configured with vSphere High Availability (HA). The clusters are sized to have at least one spare node for resiliency.
-* Redundant primary storage is provided by vSAN, which requires at least three nodes to provide protection against a single failure. vSAN can be configured to provide higher resiliency for larger clusters.
-* vCenter, PSC, and NSX Manager VMs are configured with RAID-10 storage to protect against storage failure. The VMs are protected against node/network failures by vSphere HA.
-* ESXi hosts have redundant fans and NICs.
-* TOR and spine switches are configured in HA pairs to provide resiliency.
-
-CloudSimple continuously monitors the following VMs for uptime and availability, and provides availability SLAs:
-
-* ESXi hosts
-* vCenter
-* PSC
-* NSX Manager
-
-CloudSimple also monitors the following continuously for failures:
-
-* Hard disks
-* Physical NIC ports
-* Servers
-* Fans
-* Power
-* Switches
-* Switch ports
-
-If a disk or node fails, a new node is automatically added to the affected VMware cluster to bring it back to health immediately.
-
-CloudSimple backs up, maintains, and updates these VMware elements in the Private Clouds:
-
-* ESXi
-* vCenter
-* Platform Services Controller (PSC)
-* vSAN
-* NSX
-
-## Back up and restore
-
-CloudSimple backup includes:
-
-* Nightly incremental backups of vCenter, PSC, and DVS rules.
-* vCenter native APIs to back up components at the application layer.
-* Automatic backup prior to update or upgrade of the VMware management software.
-* vCenter data encryption at the source before data is transferred over a TLS1.2 encrypted channel to Azure. The data is stored in an Azure blob where it's replicated across regions.
-
-You can request a restore by opening a [Support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Maintenance
-
-CloudSimple does several types of planned maintenance.
-
-### Backend/internal maintenance
-
-This maintenance typically involves reconfiguring physical assets or installing software patches. It doesn't affect normal consumption of the assets being serviced. With redundant NICs going to each physical rack, normal network traffic and Private Cloud operations aren't affected. You might notice a performance impact only if your organization expects to use the full redundant bandwidth during the maintenance interval.
-
-### CloudSimple portal maintenance
-
-Some limited service downtime is required when the CloudSimple control plane or infrastructure is updated. Currently, maintenance intervals can be as frequent as once per month. The frequency is expected to decline over time. CloudSimple provides notification for portal maintenance and keeps the interval as short as possible. During a portal maintenance interval, the following services continue to function without any impact:
-
-* VMware management plane and applications
-* vCenter access
-* All networking and storage
-* All Azure traffic
-
-### VMware infrastructure maintenance
-
-Occasionally it's necessary to make changes to the configuration of the VMware infrastructure. Currently, these intervals can occur every 1-2 months, but the frequency is expected to decline over time. This type of maintenance can usually be done without interrupting normal consumption of the CloudSimple services. During a VMware maintenance interval, the following services continue to function without any impact:
-
-* VMware management plane and applications
-* vCenter access
-* All networking and storage
-* All Azure traffic
-
-## Updates and upgrades
-
-CloudSimple is responsible for lifecycle management of VMware software (ESXi, vCenter, PSC, and NSX) in the Private Cloud.
-
-Software updates include:
-
-* **Patches**. Security patches or bug fixes released by VMware.
-* **Updates**. Minor version change of a VMware stack component.
-* **Upgrades**. Major version change of a VMware stack component.
-
-CloudSimple tests a critical security patch as soon as it becomes available from VMware.
-
-Documented VMware workarounds will be implemented in lieu of installing a corresponding patch until the next scheduled updates are deployed.
-
-## Next steps
-
-[Back up workload VMs using Veeam](backup-workloads-veeam.md)
vmware-cloudsimple Cloudsimple Network Checklist https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-network-checklist.md
- Title: Azure VMware Solution by CloudSimple - Network checklist
-description: Checklist for allocating network CIDR on Azure VMware Solution by CloudSimple
-- Previously updated : 09/25/2019
-# Networking prerequisites for Azure VMware Solution by CloudSimple
-
-Azure VMware Solution by CloudSimple offers a VMware private cloud environment that's accessible for users and applications from on-premises environments, enterprise-managed devices, and Azure resources. The connectivity is delivered through networking services such as VPNs and Azure ExpressRoute connections. Some of these networking services require you to specify network address ranges for enabling the services.
-
-Tables in this article describe the set of address ranges and corresponding services that use the specified addresses. Some of the addresses are mandatory and some depend on the services you want to deploy. These address spaces should not overlap with any of your on-premises subnets, Azure Virtual Network subnets, or planned CloudSimple workload subnets.
-
-## Network address ranges required for creating a private cloud
-
-During the creation of a CloudSimple service and a private cloud, you must specify network classless inter-domain routing (CIDR) ranges as follows. A sketch after the table shows one way to validate a candidate gateway CIDR.
-
-| Name/used for | Description | Address range |
-|-|-|--|
-| Gateway CIDR | Required for edge services (VPN gateways). This CIDR is required during CloudSimple Service creation and must be from the RFC 1918 space. | /28 |
-| vSphere/vSAN CIDR | Required for VMware management networks. This CIDR must be specified during private cloud creation. | /24 or /23 or /22 or /21 |
-
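-For example, the gateway CIDR must be a /28 block from RFC 1918 space. The following is a minimal sketch, using Python's standard `ipaddress` module and hypothetical values, of how you might validate a candidate block before submitting it:
-
-```python
-import ipaddress
-
-# The three RFC 1918 private address blocks.
-RFC1918 = [ipaddress.ip_network(n)
-           for n in ("10.0.0.0/8", "172.16.0.0/12", "192.168.0.0/16")]
-
-def valid_gateway_cidr(cidr: str) -> bool:
-    """A gateway CIDR must be a /28 inside RFC 1918 space."""
-    net = ipaddress.ip_network(cidr)
-    return net.prefixlen == 28 and any(net.subnet_of(b) for b in RFC1918)
-
-print(valid_gateway_cidr("192.168.10.0/28"))  # True
-print(valid_gateway_cidr("198.51.100.0/28"))  # False: not RFC 1918
-```
-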
-## Network address range required for Azure network connection to an on-premises network
-
-Connecting from an [on-premises network to the private cloud network through ExpressRoute](on-premises-connection.md) establishes a Global Reach connection. The connection uses Border Gateway Protocol (BGP) to exchange routes between your on-premises network, your private cloud network, and your Azure networks.
-
-| Name/used for | Description | Address range |
-|--|--|--|
-| ExpressRoute Peering CIDR | Required when you use ExpressRoute Global Reach for on-premises connectivity. This CIDR must be provided when a Global Reach connection request is made through a support ticket. | /29 |
-
-## Network address range required for using a site-to-site VPN connection to an on-premises network
-
-Connecting from an [on-premises network to the private cloud network by using site-to-site VPN](vpn-gateway.md) requires the following IP addresses, on-premises networks, and identifiers.
-
-| Address/address range | Description |
-|--|--|
-| Peer IP | On-premises VPN gateway public IP address. Required to establish a site-to-site VPN connection between an on-premises datacenter and the CloudSimple Service region. This IP address is required during site-to-site VPN gateway creation. |
-| Peer identifier | Peer identifier of the on-premises VPN gateway. This is usually the same as **peer IP**. If a unique identifier is specified on your on-premises VPN gateway, the identifier must be specified. Peer ID is required during site-to-site VPN gateway creation. |
-| On-premises networks | On-premises prefixes that need access to CloudSimple networks in the region. Include all prefixes from an on-premises network that will access the CloudSimple network, including the client network from where users will access the network. |
-
-## Network address range required for using point-to-site VPN connections
-
-A point-to-site VPN connection enables access to the CloudSimple network from a client machine. [To set up point-to-site VPN](vpn-gateway.md), you must specify the following network address range. A sketch after the table illustrates how the client subnet is split.
-
-| Address/address range | Description |
-|--|--|
-| Client subnet | Clients are assigned DHCP addresses from the client subnet when they connect by using a point-to-site VPN. This subnet is required when you create a point-to-site VPN gateway on the CloudSimple portal. The network is divided into two subnets: one for UDP connections and the other for TCP connections. |
-
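-The following is a minimal sketch, with a hypothetical client subnet, of the two-way split using Python's standard `ipaddress` module (which half serves UDP and which serves TCP is an assumption for illustration):
-
-```python
-import ipaddress
-
-# Hypothetical client subnet; the gateway divides it into two halves.
-client_subnet = ipaddress.ip_network("192.168.200.0/24")
-udp_pool, tcp_pool = client_subnet.subnets(prefix_diff=1)
-print(udp_pool, tcp_pool)  # 192.168.200.0/25 192.168.200.128/25
-```
-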
-## Next steps
-
-* [On-premises firewall setup for accessing your private cloud](on-premises-firewall-configuration.md)
-* [Quickstart - Create a CloudSimple service](quickstart-create-cloudsimple-service.md)
-* [Quickstart - Configure a private cloud](quickstart-create-private-cloud.md)
-* Learn more about [Azure network connections](cloudsimple-azure-network-connection.md)
-* Learn more about [VPN gateways](cloudsimple-vpn-gateways.md)
vmware-cloudsimple Cloudsimple Node https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-node.md
- Title: Azure VMware Solution by CloudSimple - Nodes overview
-description: Learn about CloudSimple concepts, including nodes, provisioned nodes, a Private Cloud, and VMware Solution by CloudSimple nodes SKUs.
-- Previously updated : 08/20/2019
-# CloudSimple nodes overview
-
-Nodes are the building blocks of a Private Cloud. A node is:
-
-* A dedicated bare metal compute host where a VMware ESXi hypervisor is installed
-* A unit of computing you can provision or reserve to create Private Clouds
-* Available to provision or reserve in a region where the CloudSimple service is available
-
-You create a Private Cloud from the provisioned nodes. To create a Private Cloud, you need a minimum of three nodes of the same SKU. To expand a Private Cloud, add additional nodes. You can add nodes to an existing cluster or create a new cluster by provisioning nodes in the Azure portal and associating them with the CloudSimple service. All provisioned nodes are visible under the CloudSimple service.
-
-## Provisioned nodes
-
-Provisioned nodes provide pay-as-you-go capacity. Provisioning nodes helps you quickly scale your VMware cluster on demand. You can add nodes as needed or delete a provisioned node to scale down your VMware cluster. Provisioned nodes are billed on a monthly basis and charged to the subscription where they're provisioned.
-
-* If you pay for your Azure subscription by credit card, the card is billed immediately.
-* If you're billed by invoice, the charges appear on your next invoice.
-
-## VMware Solution by CloudSimple nodes SKU
-
-The following types of nodes are available for provisioning or reservation.
-
-| SKU | CS28 - Node | CS36 - Node | CS36m - Node |
-|--|--|--|--|
-| Region | East US, West US | East US, West US | West Europe |
-| CPU | 2x2.2 GHz, 28 Cores (56 HT) | 2x2.3 GHz, 36 Cores (72 HT) | 2x2.3 GHz, 36 Cores (72 HT) |
-| RAM | 256 GB | 512 GB | 576 GB |
-| Cache Disk | 1.6-TB NVMe | 3.2-TB NVMe | 3.2-TB NVMe |
-| Capacity Disk | 5.625 TB Raw | 11.25 TB Raw | 15.36 TB Raw |
-| Storage Type | All Flash | All Flash | All Flash |
-
-## Limits
-
-The following node limits apply to Private Clouds. A sketch after the table shows how you might check a planned layout against them.
-
-| Resource | Limit |
-|-|-|
-| Minimum number of nodes to create a Private Cloud | 3 |
-| Maximum number of nodes in a cluster on a Private Cloud | 16 |
-| Maximum number of nodes in a Private Cloud | 64 |
-| Minimum number of nodes on a new cluster | 3 |
-
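-The following is a minimal, hypothetical helper (the constants come from the table above) for checking a proposed cluster layout before you provision nodes:
-
-```python
-MIN_NODES_PER_CLUSTER = 3        # also the minimum to create a Private Cloud
-MAX_NODES_PER_CLUSTER = 16
-MAX_NODES_PER_PRIVATE_CLOUD = 64
-
-def validate_layout(cluster_sizes: list[int]) -> str:
-    """Check a list of per-cluster node counts against the limits."""
-    if any(n < MIN_NODES_PER_CLUSTER for n in cluster_sizes):
-        return "Each cluster needs at least 3 nodes"
-    if any(n > MAX_NODES_PER_CLUSTER for n in cluster_sizes):
-        return "A cluster can hold at most 16 nodes"
-    if sum(cluster_sizes) > MAX_NODES_PER_PRIVATE_CLOUD:
-        return "A Private Cloud can hold at most 64 nodes"
-    return "OK"
-
-print(validate_layout([3, 16, 16]))  # OK: 35 nodes across three clusters
-print(validate_layout([2]))          # Each cluster needs at least 3 nodes
-```
-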
-## Next steps
-
-* Learn how to [provision nodes](create-nodes.md)
-* Learn about [Private Clouds](cloudsimple-private-cloud.md)
vmware-cloudsimple Cloudsimple Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-private-cloud.md
- Title: Azure VMware Solution by CloudSimple - Private Clouds
-description: Learn about CloudSimple concepts and advantages, including complete VMware operational continuity, compatibility with existing tools, skills, and processes.
-- Previously updated : 08/20/2019
-# CloudSimple Private Cloud overview
-
-CloudSimple transforms and extends VMware workloads to public clouds in minutes. Using the CloudSimple service, you can deploy VMware natively on Azure bare metal infrastructure. Your deployment lives in Azure locations and fully integrates with the rest of the Azure cloud.
-
-The CloudSimple solution provides complete VMware operational continuity. This solution gives you the public cloud benefits of:
-
-* Elasticity
-* Innovation
-* Efficiency
-
-With CloudSimple, you benefit from a cloud consumption model that lowers your total cost of ownership. It also offers on-demand provisioning, pay-as-you-grow, and capacity optimization.
-
-CloudSimple is fully compatible with:
-
-* Existing tools
-* Skills
-* Processes
-
-This compatibility enables your teams to manage workloads on the Azure cloud, without disrupting these types of policies:
-
-* Network
-* Security
-* Data protection
-* Audit
-
-CloudSimple manages the infrastructure and all the necessary networking and management services. The CloudSimple service enables your team to focus on:
-
-* Business value
-* Application provisioning
-* Business continuity
-* Support
-* Policy enforcement
-
-## Private Cloud environment overview
-
-A Private Cloud is an isolated VMware stack that supports:
-
-* ESXi hosts
-* vCenter
-* vSAN
-* NSX
-
-Private Clouds are managed through the CloudSimple portal. Each Private Cloud has its own vCenter server in its own management domain.
-
-The stack runs on dedicated, isolated bare-metal hardware nodes.
-
-Users consume the stack through native VMware tools, including:
-
-* vCenter
-* NSX Manager
-
-You can deploy dedicated nodes in Azure locations. Then you can manage them with Azure and CloudSimple. A Private Cloud consists of one or more vSphere clusters, and each cluster contains 3 to 16 nodes.
-
-You can create a Private Cloud using purchased, pay-as-you-go nodes, or reserved, dedicated nodes.
-
-You can connect the Private Cloud to your on-premises environment and the Azure network using the following connections:
-
-* A secure, private VPN
-* Azure ExpressRoute
-
-The Private Cloud environment is designed to eliminate single points of failure:
-
-* ESXi clusters are configured with vSphere high availability and are sized to have at least one spare node for resiliency.
-* vSAN provides redundant primary storage. vSAN requires at least three nodes to provide protection against a single failure. You can configure vSAN to provide higher resiliency for larger clusters.
-* You can configure vCenter, PSC, and NSX Manager VMs with RAID-10 storage policy to protect against storage failure. vSphere HA protects against node and network failures.
-
-## Scenarios for deploying a Private Cloud
-
-Here are some example use cases for Private Cloud deployment.
-
-### Data center retirement or migration
-
-* Get additional capacity when you reach the limits of your existing datacenter or refresh hardware.
-* Add needed capacity in the cloud and eliminate the headaches of managing hardware refreshes.
-* Reduce the risk and cost of cloud migrations compared to time-consuming conversions or rearchitecture.
-* Use familiar VMware tools and skills to accelerate cloud migrations. In the cloud, use Azure services to modernize your applications at your pace.
-
-### Expand on demand
-
-* Expand to the cloud to meet unanticipated needs, such as new development environments or seasonal capacity bursts.
-* Create new capacity on demand and keep it only as long as you need it.
-* Reduce your up-front investment, accelerate speed of provisioning, and reduce complexity with the same architecture and policies across both on-premises and the cloud.
-
-### Disaster recovery and virtual desktops in the Azure cloud
-
-* Establish remote access to data, apps, and desktops in the Azure cloud. With high-bandwidth connections, you can upload and download data quickly to recover from incidents. Low-latency networks give you the fast response times that users expect from a desktop app.
-
-* Replicate all your policies and networking in the cloud using the CloudSimple portal and familiar VMware tools. Replication reduces the effort and risk of creating and managing DR and VDI implementations.
-
-### High-performance applications and databases
-
-* Run your most demanding workloads with the hyperconverged architecture provided by CloudSimple.
-* Run Oracle, Microsoft SQL Server, middleware systems, and high-performance NoSQL databases.
-* Experience the cloud as your own data center with high-speed 25-Gbps network connections. High-speed connections enable you to run hybrid apps that span on-premises, VMware on Azure, and Azure private workloads, without compromising performance.
-
-### True hybrid
-
-* Unify DevOps across VMware and Azure services.
-* Optimize VMware administration for Azure services and solutions that can be applied across all your workloads.
-* Access public cloud services without having to expand your data center or rearchitect your applications.
-* Centralize identities, access control policies, logging and monitoring for VMware applications on Azure.
-
-## Limits
-
-The following table lists the node limits on resources of a Private Cloud.
-
-| Resource | Limit |
-|-|-|
-| Minimum number of nodes to create a Private Cloud | 3 |
-| Maximum number of nodes in a cluster on a Private Cloud | 16 |
-| Maximum number of nodes in a Private Cloud | 64 |
-| Minimum number of nodes on a new cluster | 3 |
-
-## Next steps
-
-* Learn how to [create a Private Cloud](create-private-cloud.md)
-* Learn how to [configure a Private Cloud environment](quickstart-create-private-cloud.md)
vmware-cloudsimple Cloudsimple Public Ip Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-public-ip-address.md
- Title: Azure VMware Solution by CloudSimple - Public IP address
-description: Learn about public IP addresses and their benefits on Azure VMware Solution by CloudSimple
-- Previously updated : 08/20/2019
-# CloudSimple public IP address overview
-
-A public IP address allows internet resources to communicate inbound to Private Cloud resources at a private IP address. The private IP address belongs to a virtual machine or a software load balancer on your Private Cloud vCenter. The public IP address allows you to expose services running on your Private Cloud to the internet.
-
-The public IP address is dedicated to the private IP address until you unassign it. A public IP address can only be assigned to one private IP address.
-
-A resource associated with a public IP address always uses the public IP address for internet access. By default, only outbound internet access is allowed on a public IP address. Incoming traffic on the public IP address is denied. To allow inbound traffic, create a firewall rule that allows traffic to the public IP address on a specific port.
-
-## Benefits
-
-Using a public IP address to communicate inbound provides:
-
-* Distributed denial of service (DDoS) attack prevention. This protection is automatically enabled for the public IP address.
-* Always-on traffic monitoring and real-time mitigation of common network-level attacks. These defenses are the same defenses used by Microsoft online services.
-* The entire scale of the Azure global network. The network can be used to distribute and mitigate attack traffic across regions.
-
-## Next steps
-
-* Learn how to [allocate a public IP address](public-ips.md)
vmware-cloudsimple Cloudsimple Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-security.md
- Title: Azure VMware Solution by CloudSimple - Security for CloudSimple Services
-description: Describes the shared responsibility models for security of CloudSimple services
-- Previously updated : 08/20/2019
-# CloudSimple security overview
-
-This article provides an overview of how security is implemented on the Azure VMware Solution by CloudSimple service, infrastructure, and datacenter. You learn about data protection and security, network security, and how vulnerabilities and patches are managed.
-
-## Shared responsibility
-
-Azure VMware Solution by CloudSimple uses a shared responsibility model for security. Trusted security in the cloud is achieved through the shared responsibilities of customers and Microsoft as a service provider. This matrix of responsibility provides higher security and eliminates single points of failure.
-
-## Azure infrastructure
-
-Azure infrastructure security considerations include the datacenters and equipment location.
-
-### Datacenter security
-
-Microsoft has an entire division devoted to designing, building, and operating the physical facilities that support Azure. This team is invested in maintaining state-of-the-art physical security. For details on physical security, see [Azure facilities, premises, and physical security](../security/fundamentals/physical-security.md).
-
-### Equipment location
-
-The bare metal hardware equipment that runs your Private Clouds is hosted in Azure datacenter locations. The cages where that equipment is located require biometric-based two-factor authentication for access.
-
-## Dedicated hardware
-
-As part of the CloudSimple service, all CloudSimple customers get dedicated bare metal hosts with local attached disks that are physically isolated from other tenant hardware. An ESXi hypervisor with vSAN runs on every node. The nodes are managed through customer dedicated VMware vCenter and NSX. Not sharing hardware between tenants provides an additional layer of isolation and security protection.
-
-## Data security
-
-Customers keep control and ownership of their data. Data stewardship of customer data is the responsibility of the customer.
-
-### Data protection for data at rest and data in motion within internal networks
-
-For data at rest in the Private Cloud environment, you can use vSAN encryption. vSAN encryption works with VMware certified external key management servers (KMS) in your own virtual network or on-premises. You control the data encryption keys yourself. For data in motion within the Private Cloud, vSphere supports encryption of data over the wire for all vmkernel traffic (including vMotion traffic).
-
-### Data Protection for data that is required to move through public networks
-
-To protect data that moves through public networks, you can create IPsec and TLS VPN tunnels for your Private Clouds. Common encryption methods are supported, including 128-bit and 256-bit AES. Data in transit (including authentication, administrative access, and customer data) is encrypted with standard encryption mechanisms (SSH, TLS 1.2, and Secure RDP). Communication that transports sensitive information uses the standard encryption mechanisms.
-
-### Secure Disposal
-
-If your CloudSimple service expires or is terminated, you are responsible for removing or deleting your data. CloudSimple will cooperate with you to delete or return all customer data as provided in the customer agreement, except to the extent CloudSimple is required by applicable law to retain some or all of the personal data. If it's necessary to retain any personal data, CloudSimple will archive the data and implement reasonable measures to prevent any further processing of the customer data.
-
-### Data Location
-
-When setting up your Private Clouds, you choose the Azure region where they will be deployed. VMware virtual machine data is not moved from that physical datacenter unless you perform data migration or offsite data backup. You can also host workloads and store data within multiple Azure regions if appropriate for your needs.
-
-The customer data that is resident in Private Cloud hyper-converged nodes doesn't traverse locations without the explicit action of the tenant administrator. It is your responsibility to implement your workloads in a highly available manner.
-
-### Data backups
-
-CloudSimple doesn't back up or archive customer data. CloudSimple does perform periodic backup of vCenter and NSX data to provide high availability of management servers. Prior to backup, all the data is encrypted at the vCenter source using VMware APIs. The encrypted data is transported and stored in Azure blob. Encryption keys for backups are stored in a highly secure CloudSimple managed vault running in the CloudSimple virtual network in Azure.
-
-## Network Security
-
-The CloudSimple solution relies on layers of network security.
-
-### Azure edge security
-
-The CloudSimple services are built on top of the base network security provided by Azure. Azure applies defense-in-depth techniques for detection and timely response to network-based attacks associated with anomalous ingress or egress traffic patterns and distributed denial-of-service (DDoS) attacks. This security control applies to Private Cloud environments and the control plane software developed by CloudSimple.
-
-### Segmentation
-
-The CloudSimple service has logically separate Layer 2 networks that restrict access to your own private networks in your Private Cloud environment. You can further protect your Private Cloud networks using a firewall. The CloudSimple portal allows you to define east-west and north-south traffic control rules for all network traffic, including intra-Private Cloud traffic, inter-Private Cloud traffic, general traffic to the internet, and network traffic to on-premises over an IPsec VPN or ExpressRoute connection.
-
-## Vulnerability and patch management
-
-CloudSimple is responsible for periodic security patching of managed VMware software (ESXi, vCenter, and NSX).
-
-## Identity and access management
-
-Customers can authenticate to their Azure account (in Azure AD) using multi-factor authentication or SSO as preferred. From the Azure portal, you can launch the CloudSimple portal without reentering credentials.
-
-CloudSimple supports optional configuration of an identity source for the Private Cloud vCenter. You can use an [on-premises identity source](set-vcenter-identity.md), a new identity source for the Private Cloud, or [Azure AD](azure-ad.md).
-
-By default, customers are given the privileges that are necessary for day-to-day operations of vCenter within the Private Cloud. This permission level doesn't include administrative access to vCenter. If administrative access is temporarily required, you can [escalate your privileges](escalate-private-cloud-privileges.md) for a limited period while you complete the administrative tasks.
vmware-cloudsimple Cloudsimple Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-service.md
- Title: Azure VMware Solution by CloudSimple - Service
-description: Learn about the CloudSimple service with this overview. Creating the service allows you to purchase nodes, reserve nodes, and create Private Clouds.
-- Previously updated : 08/20/2019
-# CloudSimple service overview
-
-The CloudSimple service allows you to consume Azure VMware Solution by CloudSimple. Creating the service allows you to purchase nodes, reserve nodes, and create Private Clouds. You create the CloudSimple service in each Azure region where the CloudSimple service is available. The service defines the edge network of Azure VMware Solution by CloudSimple. The edge network supports services that include VPN, ExpressRoute, and internet connectivity to your Private Clouds.
-
-## Gateway subnet
-
-A gateway subnet is required per CloudSimple service and is unique to the region in which it's created. The gateway subnet is used when creating the edge network and requires a /28 CIDR block. The gateway subnet address space must be unique. It must not overlap with any network that communicates with the CloudSimple environment. The networks that communicate with CloudSimple include on-premises networks and Azure virtual networks. A gateway subnet can't be deleted once it's created. The gateway subnet is removed when the service is deleted.
-
-## Next steps
-
-* Learn how to [create a CloudSimple service on Azure](quickstart-create-cloudsimple-service.md).
vmware-cloudsimple Cloudsimple Virtual Machines https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-virtual-machines.md
- Title: Virtual machines overview
-description: Learn about CloudSimple virtual machines and their benefits. You can manage VMware virtual machines from the Azure portal.
-- Previously updated : 08/20/2019
-# CloudSimple virtual machines overview
-
-CloudSimple allows you to manage VMware virtual machines (VMs) from the Azure portal. A cluster or a resource pool from your vSphere cluster is managed through Azure by mapping it to your subscription.
-
-To create a CloudSimple VM from Azure, a VM template must exist on your Private Cloud vCenter. The template is used to customize the operating system and applications. The template VM can be hardened to meet enterprise security policies. You can use the template to create VMs and then consume them from the Azure portal using a self-service model.
-
-## Benefits
-
-CloudSimple virtual machines in the Azure portal provide a self-service mechanism for users to create and manage VMware virtual machines:
-
-* Create a CloudSimple VM on your Private Cloud vCenter
-* Manage VM properties
- * Add/remove disks
- * Add/remove NICs
-* Power operations of your CloudSimple VM
- * Power on and power off
- * Reset VM
-* Delete VM
-
-## Next steps
-
-* Learn how to [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-* Learn how to [Map your Azure subscription](azure-subscription-mapping.md)
vmware-cloudsimple Cloudsimple Vlans Subnets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-vlans-subnets.md
- Title: VLANs and subnets in Azure VMware Solution by CloudSimple
-description: Learn about VLANs and subnets in CloudSimple Private Cloud and about the network that CloudSimple provides where your CloudSimple service is deployed.
-- Previously updated : 08/15/2019
-# VLANs and subnets overview
-
-CloudSimple provides a network per region where your CloudSimple service is deployed. The network is a single Layer 3 address space with routing enabled by default. All Private Clouds and subnets created in this region can communicate with each other without any additional configuration. You can create distributed port groups on the vCenter using the VLANs.
-
-![CloudSimple Network Topology](media/cloudsimple-network-topology.png)
-
-## VLANs
-
-A VLAN (Layer 2 network) is created for each Private Cloud. The Layer 2 traffic stays within the boundary of a Private Cloud, allowing you to isolate the local traffic within the Private Cloud. A VLAN created on the Private Cloud can be used to create distributed port groups only in that Private Cloud. A VLAN created on a Private Cloud is automatically configured on all the switches connected to the hosts of a Private Cloud.
-
-## Subnets
-
-You can create a subnet when you create a VLAN by defining the address space of the subnet. An IP address from the address space is assigned as the subnet gateway. A single private Layer 3 address space is assigned per customer and region. In your network region, you can configure any RFC 1918 address space that doesn't overlap with your on-premises network or Azure virtual network.
-
-All subnets can communicate with each other by default, reducing the configuration overhead for routing between Private Clouds. East-west traffic across Private Clouds in the same region stays in the same Layer 3 network and transfers over the local network infrastructure within the region. No egress is required for communication between Private Clouds in a region. This approach eliminates any WAN/egress performance penalty in deploying different workloads in different Private Clouds.
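-
-As a small illustration of subnet creation, here's a sketch using Python's standard `ipaddress` module with a hypothetical workload subnet; which address becomes the subnet gateway isn't specified above, so taking the first usable address is an assumption for illustration:
-
-```python
-import ipaddress
-
-# Hypothetical workload subnet defined when creating a VLAN.
-subnet = ipaddress.ip_network("10.100.1.0/24")
-
-# Assumption: the gateway is the first usable host address.
-gateway = next(subnet.hosts())  # 10.100.1.1
-print(f"gateway for {subnet}: {gateway}")
-```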
-
-## vSphere/vSAN subnets CIDR range
-
-A Private Cloud is created as an isolated VMware stack (ESXi hosts, vCenter, vSAN, and NSX) environment managed by a vCenter server. Management components are deployed in the network that you select for the vSphere/vSAN subnets CIDR. The network CIDR range is divided into different subnets during the deployment.
-
-* Minimum vSphere/vSAN subnets CIDR range prefix: **/24**
-* Maximum vSphere/vSAN subnets CIDR range prefix: **/21**
-
-> [!CAUTION]
-> IP addresses in the vSphere/vSAN CIDR range are reserved for use by the Private Cloud infrastructure. Don't use IP addresses in this range on any virtual machine.
-
-### vSphere/vSAN subnets CIDR range limits
-
-Selecting the vSphere/vSAN subnets CIDR range size has an impact on the size of your Private Cloud. The following table shows the maximum number of nodes you can have based on the size of vSphere/vSAN subnets CIDR.
-
-| Specified vSphere/vSAN subnets CIDR prefix length | Maximum number of nodes |
-|--|--|
-| /24 | 26 |
-| /23 | 58 |
-| /22 | 118 |
-| /21 | 220 |
-
-### Management subnets created on a Private Cloud
-
-The following management subnets are created when you create a Private Cloud.
-
-* **System management**. VLAN and subnet for ESXi hosts' management network, DNS server, vCenter server.
-* **VMotion**. VLAN and subnet for ESXi hosts' vMotion network.
-* **VSAN**. VLAN and subnet for ESXi hosts' vSAN network.
-* **NsxtEdgeUplink1**. VLAN and subnet for VLAN uplinks to an external network.
-* **NsxtEdgeUplink2**. VLAN and subnet for VLAN uplinks to an external network.
-* **NsxtEdgeTransport**. VLAN and subnet for the transport zone, which controls the reach of Layer 2 networks in NSX-T.
-* **NsxtHostTransport**. VLAN and subnet for host transport zone.
-
-### Management network CIDR range breakdown
-
-The specified vSphere/vSAN subnets CIDR range is divided into multiple subnets. The following table shows an example of the breakdown for allowed prefixes. The example uses 192.168.0.0 as the CIDR range. A sketch after the table checks the /24 column for consistency.
-
-Example:
-
-| Specified vSphere/vSAN subnets CIDR/prefix | 192.168.0.0/21 | 192.168.0.0/22 | 192.168.0.0/23 | 192.168.0.0/24 |
-|--|--|--|--|--|
-| System management | 192.168.0.0/24 | 192.168.0.0/24 | 192.168.0.0/25 | 192.168.0.0/26 |
-| vMotion | 192.168.1.0/24 | 192.168.1.0/25 | 192.168.0.128/26 | 192.168.0.64/27 |
-| vSAN | 192.168.2.0/24 | 192.168.1.128/25 | 192.168.0.192/26 | 192.168.0.96/27 |
-| NSX-T Host Transport | 192.168.4.0/23 | 192.168.2.0/24 | 192.168.1.0/25 | 192.168.0.128/26 |
-| NSX-T Edge Transport | 192.168.7.208/28 | 192.168.3.208/28 | 192.168.1.208/28 | 192.168.0.208/28 |
-| NSX-T Edge Uplink1 | 192.168.7.224/28 | 192.168.3.224/28 | 192.168.1.224/28 | 192.168.0.224/28 |
-| NSX-T Edge uplink2 | 192.168.7.240/28 | 192.168.3.240/28 | 192.168.1.240/28 | 192.168.0.240/28 |
-
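-The following is a minimal sketch, using Python's standard `ipaddress` module, that verifies the /24 column of the table: every management subnet sits inside the parent CIDR, and no two of them overlap:
-
-```python
-import ipaddress
-
-parent = ipaddress.ip_network("192.168.0.0/24")
-nets = {name: ipaddress.ip_network(cidr) for name, cidr in {
-    "System management":    "192.168.0.0/26",
-    "vMotion":              "192.168.0.64/27",
-    "vSAN":                 "192.168.0.96/27",
-    "NSX-T Host Transport": "192.168.0.128/26",
-    "NSX-T Edge Transport": "192.168.0.208/28",
-    "NSX-T Edge Uplink1":   "192.168.0.224/28",
-    "NSX-T Edge Uplink2":   "192.168.0.240/28",
-}.items()}
-
-# Every management subnet must sit inside the parent range...
-assert all(net.subnet_of(parent) for net in nets.values())
-
-# ...and no two of them may overlap.
-names = list(nets)
-for i, a in enumerate(names):
-    for b in names[i + 1:]:
-        assert not nets[a].overlaps(nets[b]), (a, b)
-
-print("The /24 breakdown is consistent")
-```
-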
-## Next steps
-
-* [Create and manage VLANs and subnets](create-vlan-subnet.md)
vmware-cloudsimple Cloudsimple Vmware Solutions Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-vmware-solutions-overview.md
- Title: Azure VMware Solution by CloudSimple - Overview
-description: Learn about features, scenarios, and benefits of VMware Solution on Azure by CloudSimple service.
-- Previously updated : 08/20/2019
-# What is Azure VMware Solution by CloudSimple
-
-**Azure VMware Solution by CloudSimple** is a fully managed service that empowers you to run the VMware platform in Azure. This solution includes vSphere, vCenter, vSAN, NSX-T, and corresponding tools. Your VMware environment runs natively on Azure bare metal infrastructure in Azure cloud locations. The service includes all the features required to consume the VMware platforms efficiently and securely.
-
-![VMware Solution on Azure by CloudSimple Overview](media/azure-vmware-solution-by-cloudsimple.png)
-
-## Features
-
-* On-demand self-service provisioning of VMware cloud environments. Ability to add and remove capacity on demand.
-* VMware platform deployment, upgrade, management plane backup, health/capacity monitoring, alerting, troubleshooting, and remediation.
-* Underlay networking services required to enable VMware, including L2/L3 services and firewall rule management.
-* Edge-type networking services, including VPN, public IP, and internet gateways. These services run on Azure and carry the security and DDoS protection of Azure.
-* Capacity reservation to lower costs.
-* High-speed, low-latency connectivity to Azure and on-premises.
-* Solution architectures for customers to consume Azure services in an integrated fashion and take advantage of this unique "VMware cloud in a public cloud" architecture. The Azure services include Azure AD, storage, application gateways, and others.
-* Infrastructure that is fully dedicated to you and is physically isolated from infrastructure of other customers.
-* Management features such as activity management, usage, billing/metering, and user management.
-* 24x7 customer support.
-
-## Benefits
-
-* **Operational Continuity**. CloudSimple offers native access to VMware platforms. The CloudSimple architecture is compatible with your existing:
- * Applications
- * Operations
- * Security
- * Backup
- * Disaster recovery
- * Audit
- * Compliance tools
- * Processes
-* **No Retraining**. VMware platform compatibility allows you to use existing skills and knowledge.
-* **Infrastructure agility**. You no longer have to predict all your capacity needs and then end up with wasted capacity or infrastructure shortages. CloudSimple is delivered as a cloud service, and you can add or reduce capacity at any time.
-* **Security**. Access to the CloudSimple environment through Azure provides built-in DDoS protection and security monitoring.
-* **Lower cost**. The CloudSimple platform is highly engineered, and provides high levels of automation, operational efficiency, and economies of scale. Further, CloudSimple publishes solution architectures that take advantage of the presence of VMware in a public cloud to lower costs. Examples include Azure AD, backup to Azure storage, application gateway, load balancer, and others.
-* **A new hybrid platform**. The service enables high-speed, low latency access to the rest of Azure. Further, CloudSimple management enables unified management of VMware virtual machines and the rest of Azure using the same UI and API. Your development teams can take advantage of both public and private platforms in an integrated, consistent fashion.
-* **Infrastructure monitoring, troubleshooting, and support**. CloudSimple operates your underlying infrastructure as a service. Failed hardware is automatically replaced. You can focus on consumption while CloudSimple ensures that the environment runs smoothly.
-* **Policy compatibility**. Keep your VMware-based tools, security procedures, audit practices, and compliance certifications.
-
-## Scenarios
-
-* **Datacenter retirement or migration**. Get additional capacity when you reach the limits of your existing datacenter or refresh hardware. It's easy for you to add needed capacity in the cloud and eliminate the headaches of managing hardware refreshes. Reduce the risk and cost of cloud migrations compared to time-consuming conversions or rearchitecture. Use familiar VMware tools and skills to accelerate cloud migrations. In the cloud, use Azure services to modernize your applications at your pace.
-* **Expand on demand**. Expand to the cloud to meet unanticipated needs, such as new development environments or seasonal capacity bursts. You can easily create new capacity on demand and keep it only as long as you need it. Reduce your up-front investment, accelerate speed of provisioning, and reduce complexity with the same architecture and policies across both on-premises and the cloud.
-* **Disaster recovery and virtual desktops in the Azure cloud**. Establish remote access to data, apps, and desktops in the Azure cloud. With high-bandwidth connections, you upload and download data fast to recover from incidents. Low-latency networks give you fast response times that users expect from a desktop app. With CloudSimple, it's easy to replicate all your policies and networking in the cloud using the CloudSimple portal and familiar VMware tools. The ease of recovery and replication greatly reduces the effort and risk of creating and managing DR and VDI implementations.
-* **High-performance applications and databases**. CloudSimple provides a hyperconverged architecture designed to run your most demanding VMware workloads. Run Oracle, Microsoft SQL Server, middleware systems, and high-performance NoSQL databases. Experience the cloud as your own data center with high-speed 25-Gbps network connections that let you run hybrid apps that span on-premises, VMware on Azure, and Azure private workloads without compromising performance.
-* **True hybrid**. Unify DevOps across VMware and Azure. Optimize VMware administration for Azure services and solutions that can be applied across all your workloads. Access public cloud services without having to expand your data center or rearchitect your applications. Centralize identities, access control policies, logging and monitoring for VMware applications on Azure.
-
-![Scenarios](media/cloudsimple-scenarios.png)
-
-## Next steps
-
-* [Create CloudSimple service](quickstart-create-cloudsimple-service.md)
-* [Create Private Cloud](quickstart-create-private-cloud.md)
vmware-cloudsimple Cloudsimple Vpn Gateways https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/cloudsimple-vpn-gateways.md
- Title: Azure VMware Solution by CloudSimple - VPN gateways
-description: Learn about CloudSimple site-to-site and point-to-site VPN gateways, which are used to send encrypted traffic between a CloudSimple region and other resources.
-- Previously updated : 08/20/2019
-# VPN gateways overview
-
-A VPN gateway is used to send encrypted traffic between a CloudSimple region network and an on-premises location or a computer, over the public internet. Each region can have one VPN gateway, which can support multiple connections. When you create multiple connections to the same VPN gateway, all VPN tunnels share the available gateway bandwidth.
-
-CloudSimple provides two kinds of VPN gateways:
-
-* Site-to-Site VPN gateway
-* Point-to-Site VPN gateway
-
-## Site-to-Site VPN gateway
-
-A Site-to-Site VPN gateway is used to send encrypted traffic between a CloudSimple region network and an on-premises datacenter. Use this connection to define the subnets/CIDR range for network traffic between your on-premises network and the CloudSimple region network.
-
-The VPN gateway allows you to consume on-premises services from your Private Cloud, and Private Cloud services from your on-premises network. CloudSimple provides a policy-based VPN server for establishing the connection from your on-premises network.
-
-Use cases for Site-to-Site VPN:
-
-* Accessibility of your Private Cloud vCenter from any workstation in your on-premises network.
-* Use of your on-premises Active Directory as a vCenter identity source.
-* Convenient transfer of VM templates, ISOs, and other files from your on-premises resources to your Private Cloud vCenter.
-* Accessibility of workloads running on your Private Cloud from your on-premises network.
-
-![Site-to-Site VPN connection topology](media/cloudsimple-site-to-site-vpn-connection.png)
-
-### Cryptographic parameters
-
-A Site-to-Site VPN connection uses the following default cryptographic parameters to establish a secure connection. When you create a connection from your on-premises VPN device, use any of the following parameters that are supported by your on-premises VPN gateway.
-
-#### Phase 1 proposals
-
-| Parameter | Proposal 1 | Proposal 2 | Proposal 3 |
-|--|--|--|--|
-| IKE Version | IKEv1 | IKEv1 | IKEv1 |
-| Encryption | AES 128 | AES 256 | AES 256 |
-| Hash Algorithm| SHA 256 | SHA 256 | SHA 1 |
-| Diffie-Hellman Group (DH Group) | 2 | 2 | 2 |
-| Lifetime | 28,800 seconds | 28,800 seconds | 28,800 seconds |
-| Data Size | 4 GB | 4 GB | 4 GB |
-
-#### Phase 2 proposals
-
-| Parameter | Proposal 1 | Proposal 2 | Proposal 3 |
-|--|--|--|--|
-| Encryption | AES 128 | AES 256 | AES 256 |
-| Hash Algorithm| SHA 256 | SHA 256 | SHA 1 |
-| Perfect Forward Secrecy Group (PFS Group) | None | None | None |
-| Lifetime | 1,800 seconds | 1,800 seconds | 1,800 seconds |
-| Data Size | 4 GB | 4 GB | 4 GB |
-
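-If you're scripting validation of an on-premises device's settings, it can help to encode the proposals in a machine-checkable form. The following is a hypothetical sketch (the field names are illustrative, not a CloudSimple API) covering the Phase 1 proposals above:
-
-```python
-# The three Phase 1 proposals, encoded so a script can check whether an
-# on-premises device's configuration matches at least one of them.
-PHASE1_PROPOSALS = [
-    {"ike": "IKEv1", "enc": "AES 128", "hash": "SHA 256", "dh_group": 2},
-    {"ike": "IKEv1", "enc": "AES 256", "hash": "SHA 256", "dh_group": 2},
-    {"ike": "IKEv1", "enc": "AES 256", "hash": "SHA 1", "dh_group": 2},
-]
-
-def device_supports(device_proposals: list[dict]) -> bool:
-    """True if the device matches at least one CloudSimple proposal."""
-    return any(p in PHASE1_PROPOSALS for p in device_proposals)
-
-print(device_supports([{"ike": "IKEv1", "enc": "AES 256",
-                        "hash": "SHA 256", "dh_group": 2}]))  # True
-```
-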
-> [!IMPORTANT]
-> Set TCP MSS clamping at 1200 on your VPN device. If your VPN device doesn't support MSS clamping, you can instead set the MTU on the tunnel interface to 1240 bytes (the 1200-byte MSS plus 40 bytes of IP and TCP headers).
-
-## Point-to-Site VPN gateway
-
-A Point-to-Site VPN is used to send encrypted traffic between a CloudSimple region network and a client computer. Point-to-Site VPN is the easiest way to access your Private Cloud network, including your Private Cloud vCenter and workload VMs. Use Point-to-Site VPN connectivity if you're connecting to the Private Cloud remotely.
-
-## Next steps
-
-* [Set up VPN gateway](vpn-gateway.md)
vmware-cloudsimple Configure Server Vrealize Automation Endpoint https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/configure-server-vrealize-automation-endpoint.md
- Title: Azure VMware Solution by CloudSimple - Set up vCenter on Private Cloud for vRealize Automation
-description: Describes how to set up a VMware vCenter server on your CloudSimple Private Cloud as an endpoint for VMware vRealize Automation
-- Previously updated : 08/19/2019
-# Set up vCenter on your Private Cloud for VMware vRealize Automation
-
-You can set up a VMware vCenter server on your CloudSimple Private Cloud as an endpoint for VMware vRealize Automation.
-
-## Before you begin
-
-Complete these tasks before configuring the vCenter server:
-
-* Configure a [Site-to-Site VPN connection](vpn-gateway.md#set-up-a-site-to-site-vpn-gateway) between your on-premises environment and your Private Cloud.
-* [Configure DNS forwarding of on-premises DNS requests](on-premises-dns-setup.md) to the DNS servers for your Private Cloud.
-* Submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) to create a vRealize Automation IaaS administrative user with the set of permissions that are listed in the following table.
-
-| Attribute Value | Permission |
- | - |
-| Datastore | Allocate Space <br> Browse Datastore |
-| Datastore Cluster | Configure a Datastore Cluster |
-| Folder | Create Folder <br>Delete Folder |
-| Global | Manage Custom Attributes<br>Set Custom Attribute |
-| Network | Assign Network |
-| Permissions | Modify Permissions |
-| Resource | Assign VM to Resource Pool<br>Migrate Powered Off Virtual Machine<br>Migrate Powered On Virtual Machine |
-| Virtual Machine Inventory | Create from existing<br>Create New<br>Move<br>Remove |
-| Virtual Machine Interaction | Configure CD Media<br>Console Interaction<br>Device Connection<br>Power Off<br>Power On<br>Reset<br>Suspend<br>Tools Install |
-| Virtual Machine Configuration | Add Existing Disk<br>Add New Disk<br>Add or Remove<br>Remove Disk<br>Advanced<br>Change CPU Count<br>Change Resource<br>Extend Virtual Disk<br>Disk Change Tracking<br>Memory<br>Modify Device Settings<br>Rename<br>Set Annotation (version 5.0 and later)<br>Settings<br>Swapfile Placement |
-| Provisioning | Customize<br>Clone Template<br>Clone Virtual Machine<br>Deploy Template<br>Read Customization Specs |
-| Virtual Machine State | Create Snapshot<br>Remove Snapshot<br>Revert to Snapshot |
-
-## Install vRealize Automation in your on-premises environment
-
-1. Sign in to the vRealize Automation IaaS server appliance as the IaaS administrator that CloudSimple Support created for you.
-2. Deploy a vSphere Agent for the vRealize Automation endpoint.
- 1. Go to https://*vra-url*:5480/installer, where *vra-url* is the URL that you use to access the vRealize Automation administration UI.
- 2. Click the **IaaS Installer** to download the installer.<br>
- The naming convention for the installer file is setup_*vra-url*@5480.exe.
- 3. Run the installer. On the Welcome screen, click **Next**.
- 4. Accept the EULA and click **Next**.
- 5. Provide the sign-in information, click **Accept Certificate**, and then click **Next**.
- ![vRA credentials](media/configure-vra-endpoint-login.png)
- 6. Select **Custom Install** and **Proxy Agents** and click **Next**.
- ![vRA install type](media/configure-vra-endpoint-install-type.png)
- 7. Enter the IaaS server sign-in information and click **Next**. If you are using Active Directory, enter the username in **domain\user** format. Otherwise, use **user@domain** format.
- ![vRA login info](media/configure-vra-endpoint-account.png)
- 8. For the proxy settings, enter **vSphere** for **Agent type**. Enter a name for the agent.
- 9. Enter the IaaS server FQDN in the **Manager Service Host** and the **Model Manager Web Service Host** fields. Click **Test** to test the connection for each of the FQDN values. If the test fails, modify your DNS settings so that the IaaS server hostname is resolved.
- 10. Enter a name for vCenter server endpoint for the Private Cloud. Record the name for use later in the configuration process.
-
- ![vRA install proxy](media/configure-vra-endpoint-proxy.png)
-
- 11. Click **Next**.
- 12. Click **Install**.
-
-## Configure the vSphere agent
-
-1. Go to https://*vra-url*/vcac and sign in as **ConfigurationAdmin**.
-2. Select **Infrastructure** > **Endpoints** > **Endpoints**.
-3. Select **New** > **Virtual** > **vSphere**.
-4. Enter the vSphere endpoint name that you specified in the previous procedure.
-5. For **Address**, enter the Private Cloud vCenter Server URL in the format https://*vcenter-fqdn*/sdk, where *vcenter-fqdn* is the name of the vCenter server.
-6. Enter the credentials for the vRealize Automation IaaS administrative user that CloudSimple Support created for you.
-7. Click **Test Connection** to validate the user credentials. If the test fails, verify the URL, account information, and [endpoint name](#verify-the-endpoint-name) and test again.
-8. After a successful test, click **OK** to create the vSphere endpoint.
- ![vRA endpoint config access](media/configure-vra-endpoint-vra-edit.png)
-
-### Verify the endpoint name
-
-To identify the correct vCenter server endpoint name, do the following:
-
-1. Open a command prompt on the IaaS appliance.
-2. Change directory to C:\Program Files (x86)\VMware\vCAC\Agents\agent-name, where *agent-name* is the name you assigned to the vCenter server endpoint.
-3. Run the following command.
-
-```
-..\..\Server\DynamicOps.Vrm.VRMencrypt.exe VRMAgent.exe.config get
-```
-
-The output is similar to the following. The value of the `managementEndpointName` field is the endpoint name.
-
-```
-managementEndpointName: cslab1pc3-vc
-doDeletes: true
-```
vmware-cloudsimple Create Cloudsimple Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/create-cloudsimple-service.md
- Title: Azure VMware Solution by CloudSimple - Create CloudSimple service
-description: Learn how to create the CloudSimple service in the Azure portal. Review required configuration before you begin.
-- Previously updated : 08/19/2019
-# Create the Azure VMware Solution by CloudSimple service
-
-To get started with Azure VMware Solution by CloudSimple, create the Azure VMware Solution by CloudSimple service in the Azure portal.
-
-## Before you begin
-
-Allocate a /28 CIDR block for the gateway subnet. A gateway subnet is required per CloudSimple service and is unique to the region in which it's created. The gateway subnet is used for edge network services and requires a /28 CIDR block. The gateway subnet address space must be unique. It must not overlap with any network that communicates with the CloudSimple environment. The networks that communicate with CloudSimple include on-premises networks and Azure virtual networks.
-
-## Sign in to Azure
-
-Sign in to the [Azure portal](https://portal.azure.com).
-
-## Create the service
-
-1. Select **All services**.
-2. Search for **CloudSimple Services**.
- ![Search CloudSimple Service](media/create-cloudsimple-service-search.png)
-3. Select **CloudSimple Services**.
-4. Click **Add** to create a new service.
- ![Add CloudSimple Service](media/create-cloudsimple-service-add.png)
-5. Select the subscription where you want to create the CloudSimple service.
-6. Select the resource group for the service. To add a new resource group, click **Create New**.
-7. Enter a name to identify the service.
-8. Enter the CIDR for the service gateway. Specify a /28 subnet that doesn't overlap with any of your on-premises subnets, Azure subnets, or planned CloudSimple subnets. You can't change the CIDR after the service is created.
-
- ![Creating the CloudSimple service](media/create-cloudsimple-service.png)
-9. Click **OK**.
-
-The service is created and added to the list of services.
-
-## Next steps
-
-* Learn how to [provision nodes](create-nodes.md)
-* Learn how to [create a private cloud](create-private-cloud.md)
-* Learn how to [configure a private cloud environment](quickstart-create-private-cloud.md)
vmware-cloudsimple Create Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/create-nodes.md
- Title: Provision nodes for VMware Solution by CloudSimple - Azure
-description: Learn how to add nodes to your VMware with CloudSimple deployment in the Azure portal. You can set up pay-as-you-go capacity for your private cloud environment.
-- Previously updated : 08/14/2019-----
-# Provision nodes for Azure VMware Solution by CloudSimple
-
-Provision nodes in the Azure portal. Then you can set up pay-as-you-go capacity for your CloudSimple private cloud environment.
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Add a node to your CloudSimple private cloud
-
-1. Select **All services**.
-2. Search for **CloudSimple Nodes**.
-
- ![Search CloudSimple Nodes](media/create-cloudsimple-node-search.png)
-
-3. Select **CloudSimple Nodes**.
-4. Click **Add** to create nodes.
-
- ![Add CloudSimple Nodes](media/create-cloudsimple-node-add.png)
-
-5. Select the subscription where you want to provision CloudSimple nodes.
-6. Select the resource group for the nodes. To add a new resource group, click **Create New**.
-7. Enter the prefix to identify the nodes.
-8. Select the location for the node resources.
-9. Select the dedicated location to host the node resources.
-10. Select the [node type](cloudsimple-node.md).
-11. Select the number of nodes to provision.
-12. Select **Review + Create**.
-13. Review the settings. To modify any settings, click **Previous**.
-14. Select **Create**.
-
-## Next steps
-
-* [Create Private Cloud](create-private-cloud.md)
vmware-cloudsimple Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/create-private-cloud.md
- Title: Azure VMware Solution by CloudSimple - Create CloudSimple Private Cloud
-description: Describes how to create a CloudSimple Private Cloud to extend VMware workloads to the cloud with operational flexibility and continuity
-- Previously updated : 08/19/2019 ------
-# Create a CloudSimple Private Cloud
-
-A Private Cloud is an isolated VMware stack that supports ESXi hosts, vCenter, vSAN, and NSX. Private Clouds are managed through the CloudSimple portal. Each Private Cloud has its own vCenter server in its own management domain. The stack runs on dedicated, isolated bare metal hardware nodes.
-
-Creating a Private Cloud helps you address a variety of common needs for network infrastructure:
-
-* **Growth**. If you've reached a hardware refresh point for your existing infrastructure, a Private Cloud allows you to expand with no new hardware investment required.
-
-* **Fast expansion**. If any temporary or unplanned capacity needs arise, a Private Cloud allows you to create the additional capacity with no delay.
-
-* **Increased protection**. With a Private Cloud of three or more nodes, you get automatic redundancy and high availability protection.
-
-* **Long-term infrastructure needs**. If your datacenters are at capacity or you want to restructure to lower your costs, a Private Cloud allows you to retire datacenters and migrate to a cloud-based solution while remaining compatible with your enterprise operations.
-
-When you create a Private Cloud, you get a single vSphere cluster and all the management VMs that are created in that cluster.
-
-## Before you begin
-
-Nodes must be provisioned before you can create your Private Cloud. For more information on provisioning nodes, see [Provision nodes for Azure VMware Solution by CloudSimple](create-nodes.md).
-
-Allocate a CIDR range for vSphere/vSAN subnets for the Private Cloud. A Private Cloud is created as an isolated VMware stack environment (with ESXi hosts, vCenter, vSAN, and NSX) managed by a vCenter server. Management components are deployed in the network that is selected for vSphere/vSAN subnets CIDR. The network CIDR range is divided into different subnets during the deployment. The vSphere/vSAN subnet address space must be unique. It must not overlap with any network that communicates with the CloudSimple environment. The networks that communicate with CloudSimple include on-premises networks and Azure virtual networks. For more information on vSphere/vSAN subnets, see VLANs and subnets overview.
-
-* Minimum vSphere/vSAN subnets CIDR range prefix: /24
-* Maximum vSphere/vSAN subnets CIDR range prefix: /21
--
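-The deployment divides the vSphere/vSAN CIDR range into smaller subnets, so the prefix length you choose constrains the resulting layout. As a rough illustration (not CloudSimple's actual internal split), the following sketch validates the allowed prefix bounds and shows how a hypothetical /22 divides into /24 subnets.
-
-```python
-import ipaddress
-
-# Hypothetical vSphere/vSAN CIDR range; replace with your planned block.
-block = ipaddress.ip_network("10.1.0.0/22")
-
-# Allowed prefix lengths run from /24 (minimum) to /21 (maximum).
-assert 21 <= block.prefixlen <= 24, "vSphere/vSAN CIDR must be /21 to /24"
-
-# A /22 yields four /24 subnets.
-for subnet in block.subnets(new_prefix=24):
-    print(subnet)
-```
-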
-## Access the CloudSimple portal
-
-Access the [CloudSimple portal](access-cloudsimple-portal.md).
-
-## Create a New Private Cloud
-
-1. Select **All services**.
-2. Search for **CloudSimple Services**.
-3. Select the CloudSimple service on which you want to create your Private Cloud.
-4. From **Overview**, click **Create Private Cloud** to open a new browser tab for the CloudSimple portal. If prompted, sign in with your Azure sign-in credentials.
-
- ![Create Private Cloud from Azure](media/create-private-cloud-from-azure.png)
-
-5. In the CloudSimple portal, provide a name for your Private Cloud.
-6. Select **Location** for your Private Cloud.
-7. Select **Node type**, consistent with what you provisioned on Azure.
-8. Specify **Node count**. At least three nodes are required to create a Private Cloud.
-
- ![Create Private Cloud - Basic info](media/create-private-cloud-basic-info.png)
-
-9. Click **Next: Advanced options**.
-10. Enter the CIDR range for vSphere/vSAN subnets. Make sure that the CIDR range doesn't overlap with any of your on-premises or other Azure subnets (virtual networks) or with the gateway subnet.
-
-    **CIDR range options:** /24, /23, /22, or /21. A /24 CIDR range supports up to nine nodes, a /23 CIDR range supports up to 41 nodes, and /22 and /21 CIDR ranges support up to 64 nodes (the maximum number of nodes in a Private Cloud).
-
- > [!IMPORTANT]
-    > IP addresses in the vSphere/vSAN CIDR range are reserved for use by the Private Cloud infrastructure. Don't use IP addresses in this range on any virtual machine.
-
-11. Click **Next: Review and create**.
-12. Review the settings. If you need to change any settings, click **Previous**.
-13. Click **Create**.
-
-The Private Cloud provisioning process starts. It can take up to two hours for the Private Cloud to be provisioned.
-
-For instructions on expanding an existing Private Cloud, see [Expand a Private Cloud](expand-private-cloud.md).
vmware-cloudsimple Create Vlan Subnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/create-vlan-subnet.md
- Title: Create VLANs/subnets - Azure VMware Solution by CloudSimple
-description: Azure VMware Solutions by CloudSimple - Describes how to create and manage VLANs/subnets for your Private Clouds and then apply firewall rules.
-- Previously updated : 08/15/2019 ------
-# Create and manage VLANs/subnets for your Private Clouds
-
-Open the VLANs/Subnets tab on the Network page to create and manage VLANs/subnets for your Private Clouds. After you create a VLAN/subnet, you can apply firewall rules.
-
-## Create a VLAN/subnet
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md) and select **Network** on the side menu.
-2. Select **VLANs/subnets**.
-3. Click **Create VLAN/Subnet**.
-
- ![VLAN/subnet page](media/vlan-subnet-page.png)
-
-4. Select the Private Cloud for the new VLAN/subnet.
-5. Enter a VLAN ID.
-6. Enter the subnet name.
-7. To enable routing on the VLAN (subnet), specify the subnet CIDR range. Make sure that the CIDR range doesn't overlap with any of your on-premises subnets, Azure subnets, or gateway subnet.
-8. Click **Submit**.
-
- ![Create VLAN/subnet](media/create-new-vlan-subnet-details.png)
--
-> [!IMPORTANT]
-> There is a quota of 30 VLANs per private cloud. This limit can be increased by [contacting support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-## Use VLAN information to set up a distributed port group in vSphere
-
-To create a distributed port group in vSphere, follow the instructions in the VMware topic 'Add a distributed port group' in the <a href="https://docs.vmware.com/en/VMware-vSphere/6.5/vsphere-esxi-vcenter-server-65-networking-guide.pdf" target="_blank">vSphere Networking Guide</a>. When setting up the distributed port group, provide the VLAN information from the CloudSimple configuration.
-
-![Distributed Port Group](media/distributed-port-group.png)
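-
-If you prefer to script this step instead of clicking through the vSphere client, a rough equivalent with the pyVmomi SDK is sketched below. This is an illustrative sketch only, not CloudSimple tooling: the vCenter FQDN, credentials, switch name, port group name, and VLAN ID are placeholders, and error handling is omitted.
-
-```python
-import ssl
-from pyVim.connect import SmartConnect, Disconnect  # pip install pyvmomi
-from pyVmomi import vim
-
-# Placeholder connection details; use your Private Cloud vCenter FQDN and credentials.
-ctx = ssl._create_unverified_context()  # lab only; validate certificates in production
-si = SmartConnect(host="vcenter.example.cloudsimple.io",
-                  user="user@cloudsimple.local", pwd="...", sslContext=ctx)
-content = si.RetrieveContent()
-
-# Locate the distributed virtual switch by name (placeholder name).
-view = content.viewManager.CreateContainerView(
-    content.rootFolder, [vim.DistributedVirtualSwitch], True)
-dvs = next(d for d in view.view if d.name == "Datacenter-dvs")
-view.DestroyView()
-
-# Build a port group spec carrying the VLAN ID from the CloudSimple portal.
-spec = vim.dvs.DistributedVirtualPortgroup.ConfigSpec()
-spec.name = "workload-vlan-101"  # placeholder port group name
-spec.type = "earlyBinding"
-spec.numPorts = 32
-port_config = vim.dvs.VmwareDistributedVirtualSwitch.VmwarePortConfigPolicy()
-port_config.vlan = vim.dvs.VmwareDistributedVirtualSwitch.VlanIdSpec(vlanId=101)
-spec.defaultPortConfig = port_config
-
-dvs.AddDVPortgroup_Task([spec])  # returns a vSphere task you can monitor
-Disconnect(si)
-```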
-
-## Select a firewall table
-
-Firewall tables and associated rules are defined on the **Network > Firewall tables** page. To select the firewall table to apply to the VLAN/subnet for a Private Cloud, select the VLAN/subnet on the **VLANs/Subnets** page and click **Firewall table attachment**. See [Firewall Tables](firewall.md) for instructions on setting up firewall tables and defining rules.
-
-![Firewall table link](media/vlan-subnet-firewall-link.png)
-
-> [!NOTE]
-> A subnet can be associated with one firewall table. A firewall table can be associated with multiple subnets.
-
-## Edit a VLAN/subnet
-
-To edit the settings for a VLAN/Subnet, select it on the **VLANs/Subnets** page and click the **Edit** icon. Make changes and click **Submit**.
-
-## Delete a VLAN/subnet
-
-To delete a VLAN/Subnet, select it on the **VLANs/Subnets** page and click the **Delete** icon. Click **Delete** to confirm.
vmware-cloudsimple Delete Nodes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/delete-nodes.md
- Title: Delete nodes for VMware Solution by CloudSimple - Azure
-description: Learn how to delete nodes from your VMware with CloudSimple deployment. CloudSimple nodes are metered. Delete unused nodes from the Azure portal.
-- Previously updated : 08/05/2019------
-# Delete nodes from Azure VMware Solution by CloudSimple
-
-CloudSimple nodes are metered from the time they're created. Metering stops only when the nodes are deleted. Delete unused nodes from the Azure portal.
-
-## Before you begin
-
-A node can be deleted only under the following conditions:
-
-* The Private Cloud created with the nodes has been deleted. To delete a Private Cloud, see [Delete an Azure VMware Solution by CloudSimple Private Cloud](delete-private-cloud.md).
-* The node has been removed from the Private Cloud by shrinking the Private Cloud. To shrink a Private Cloud, see [Shrink Azure VMware Solution by CloudSimple Private Cloud](shrink-private-cloud.md).
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Delete CloudSimple node
-
-1. Select **All services**.
-
-2. Search for **CloudSimple Nodes**.
-
- ![Search CloudSimple Nodes](media/create-cloudsimple-node-search.png)
-
-3. Select **CloudSimple Nodes**.
-
-4. Select the nodes that don't belong to a Private Cloud. The **PRIVATE CLOUD NAME** column shows the Private Cloud to which each node belongs. If a node isn't used by a Private Cloud, the value is empty.
-
- ![Select CloudSimple Nodes](media/select-delete-cloudsimple-node.png)
-
-> [!NOTE]
-> Only nodes that are not part of a Private Cloud can be deleted.
-
-## Next steps
-
-* Learn about [Private Cloud](cloudsimple-private-cloud.md)
vmware-cloudsimple Delete Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/delete-private-cloud.md
- Title: Delete an Azure VMware Solution by CloudSimple Private Cloud
-description: Learn how to delete a CloudSimple Private Cloud. When you delete a Private Cloud, all clusters will be deleted.
-- Previously updated : 08/06/2019 ------
-# Delete a CloudSimple Private Cloud
-
-CloudSimple provides the flexibility to delete a Private Cloud. A Private Cloud consists of one or more vSphere clusters. Each cluster can have 3 to 16 nodes. When you delete a Private Cloud, all clusters will be deleted.
-
-## Before you begin
-
-Deleting a Private Cloud deletes all of its components. If you want to keep any of the data, back it up to on-premises storage or Azure storage before you delete the Private Cloud.
-
-The components of a Private Cloud include:
-
-* CloudSimple Nodes
-* Virtual machines
-* VLANs/Subnets
-* All user data stored on the Private Cloud
-* All firewall rule attachments to a VLAN/Subnet
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Delete a Private Cloud
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md).
-
-2. Open the **Resources** page.
-
-3. Click the Private Cloud you want to delete.
-
-4. On the summary page, click **Delete**.
-
- ![Delete private cloud](media/delete-private-cloud.png)
-
-5. On the confirmation page, enter the name of the Private Cloud and click **Delete**.
-
- ![Delete private cloud - confirm](media/delete-private-cloud-confirm.png)
-
-The Private Cloud is marked for deletion. The deletion process starts after three hours.
-
-> [!CAUTION]
-> Nodes must be deleted after the Private Cloud is deleted. Metering of nodes continues until the nodes are deleted from your subscription.
-
-## Next steps
-
-* [Delete nodes](delete-nodes.md)
vmware-cloudsimple Disaster Recovery Site Recovery Manager https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/disaster-recovery-site-recovery-manager.md
- Title: Azure VMware Solution by CloudSimple - Set up Private Cloud as a disaster recovery site by using VMware Site Recovery Manager
-description: Describes how to set up your CloudSimple Private Cloud as a disaster recovery site by using VMware Site Recovery Manager.
-- Previously updated : 08/20/2019 ------
-# Set up Private Cloud as a disaster recovery target with VMware Site Recovery Manager
-
-You can use your CloudSimple Private Cloud as a disaster recovery (DR) site for on-premises VMware workloads.
-
-The DR solution is based on vSphere Replication and VMware Site Recovery Manager (SRM). A similar approach can be followed to enable your Private Cloud as a primary site that is protected by your on-premises recovery site.
-
-The CloudSimple solution:
-
-* Eliminates the need to set up a datacenter specifically for DR.
-* Allows you to leverage the Azure locations where CloudSimple is deployed for worldwide geographic resilience.
-* Gives you an option to reduce deployment costs and total cost of ownership for establishing DR.
-
-The CloudSimple solution requires you to do the following:
-
-* Install, configure, and manage vSphere Replication and SRM in your Private Cloud.
-* Provide your own licenses for SRM when the Private Cloud is the protected site. You do not need any additional SRM licenses for the CloudSimple site when it is used as the recovery site.
-
-With this solution, you have full control over vSphere replication and SRM. The familiar UI, API, and CLI interfaces enable use of your existing scripts and tools.
-
-![Site Recovery Manager deployment](media/srm-deployment.png)
-
-You can use any versions of vRA and SRM that are compatible with your Private Cloud and on-premises environments. The examples in this guide use vRA 6.5 and SRM 6.5. These versions are compatible with vSphere 6.5, which is supported by CloudSimple.
-
-## Deploy the solution
-
-The following sections describe how to deploy a DR solution using SRM in your Private Cloud.
-
-1. [Verify that VMware product versions are compatible](#verify-that-vmware-product-versions-are-compatible)
-2. [Estimate the size of your DR environment](#estimate-the-size-of-your-dr-environment)
-3. [Create a Private Cloud for your environment](#create-a-private-cloud-for-your-environment)
-4. [Set up Private Cloud networking for the SRM solution](#set-up-private-cloud-networking-for-the-srm-solution)
-5. [Set up a Site-to-Site VPN connection between your on-premises network and the Private Cloud and open required ports](#set-up-a-site-to-site-vpn-connection-between-your-on-premises-network-and-the-private-cloud-and-open-required-ports)
-6. [Set up infrastructure services in your Private Cloud](#set-up-infrastructure-services-in-your-private-cloud)
-7. [Install vSphere Replication appliance in your on-premises environment](#install-vsphere-replication-appliance-in-your-on-premises-environment)
-8. [Install vSphere Replication appliance in your Private Cloud environment](#install-vsphere-replication-appliance-in-your-private-cloud-environment)
-9. [Install SRM server in your on-premises environment](#install-srm-server-in-your-on-premises-environment)
-10. [Install SRM server in your Private Cloud](#install-srm-server-in-your-private-cloud)
-
-### Verify that VMware product versions are compatible
-
-The configurations in this guide are subject to the following compatibility requirements:
-
-* The same version of SRM must be deployed in your Private Cloud and your on-premises environment.
-* The same version of vSphere Replication must be deployed in your Private Cloud and your on-premises environment.
-* The versions of Platform Services Controller (PSC) in your Private Cloud and your on-premises environment must be compatible.
-* The versions of vCenter in your Private Cloud and your on-premises environment must be compatible.
-* The versions of SRM and vSphere replication must be compatible with each other and with the versions of PSC and vCenter.
-
-For links to the relevant VMware documentation and compatibility information, go to the [VMware Site Recovery Manager](https://docs.vmware.com/en/Site-Recovery-Manager/index.html) documentation.
-
-To find out the versions of vCenter and PSC in your Private Cloud, open the CloudSimple portal. Go to **Resources**, select your Private Cloud, and click the **vSphere Management Network** tab.
-
-![vCenter & PSC versions in Private Cloud](media/srm-resources.png)
-
-### Estimate the size of your DR environment
-
-1. Verify that your identified on-premises configuration is within supported limits. For SRM 6.5, the limits are documented in the VMware knowledge base article on [operational limits for Site Recovery Manager 6.5](https://kb.vmware.com/s/article/2147110).
-2. Ensure that you have sufficient network bandwidth to meet your workload size and RPO requirements. The VMware knowledge base article on [calculating bandwidth requirements for vSphere Replication](https://docs.vmware.com/en/vSphere-Replication/6.5/com.vmware.vsphere.replication-admin.doc/GUID-4A34D0C9-8CC1-46C4-96FF-3BF7583D3C4F.html) provides guidance on bandwidth limits.
-3. Use the CloudSimple sizer tool to estimate the resources that are needed in your DR site to protect your on-premises environment.
-
-### Create a Private Cloud for your environment
-
-Create a Private Cloud from the CloudSimple portal by following the instructions and sizing recommendations in [Create a Private Cloud](create-private-cloud.md).
-
-### Set up Private Cloud networking for the SRM solution
-
-Access the CloudSimple portal to set up Private Cloud networking for the SRM solution.
-
-Create a VLAN for the SRM solution network and assign it a subnet CIDR. For instructions, see [Create and manage VLANs/Subnets](create-vlan-subnet.md).
-
-### Set up a Site-to-Site VPN connection between your on-premises network and the Private Cloud and open required ports
-
-Set up Site-to-Site connectivity between your on-premises network and your Private Cloud. For instructions, see [Configure a VPN connection to your CloudSimple Private Cloud](set-up-vpn.md).
-
-### Set up infrastructure services in your Private Cloud
-
-Configure infrastructure services in the Private Cloud to make it easy to manage your workloads and tools.
-
-You can add an external identity provider as described in [Use Azure AD as an identity provider for vCenter on CloudSimple Private Cloud](azure-ad.md) if you want to do any of the following:
-
-* Identify users from your on-premises Active Directory (AD) in your Private Cloud.
-* Set up an AD in your Private Cloud for all users.
-* Use Azure AD.
-
-To provide IP address lookup, IP address management, and name resolution services for your workloads in the Private Cloud, set up a DHCP and DNS server as described in [Set up DNS and DHCP applications and workloads in your CloudSimple Private Cloud](dns-dhcp-setup.md).
-
-The *.cloudsimple.io domain is used by management VMs and hosts in your Private Cloud. To resolve requests to this domain, configure DNS forwarding on the DNS server as described in [Create a Conditional Forwarder](on-premises-dns-setup.md#create-a-conditional-forwarder).
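-
-After the conditional forwarder is in place, a quick lookup from a machine that uses your DNS server confirms that queries for the management domain resolve. A minimal sketch, with a hypothetical FQDN:
-
-```python
-import socket
-
-# Hypothetical FQDN; use the vCenter FQDN shown in the CloudSimple portal.
-fqdn = "vcenter.example.cloudsimple.io"
-try:
-    print(f"{fqdn} resolves to {socket.gethostbyname(fqdn)}")
-except socket.gaierror:
-    print(f"{fqdn} did not resolve; check the conditional forwarder configuration")
-```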
-
-### Install vSphere Replication Appliance in your on-premises environment
-
-Install vSphere Replication Appliance (vRA) in your on-premises environment by following the VMware documentation. The installation consists of these high-level steps:
-
-1. Prepare your on-premises environment for vRA installation.
-
-2. Deploy vRA in your on-premises environment using the OVF in the VR ISO from vmware.com. For vRA 6.5, [this VMware blog](https://blogs.vmware.com/virtualblocks/2017/01/20/vr-65-ovf-choices) has the relevant information.
-
-3. Register your on-premises vRA with vCenter Single Sign-On at the on-premises site. For detailed instructions for vSphere Replication 6.5, see the VMware document [VMware vSphere Replication 6.5 Installation and Configuration](https://docs.vmware.com/en/vSphere-Replication/6.5/vsphere-replication-65-install.pdf).
-
-### Install vSphere Replication appliance in your Private Cloud environment
-
-Before you begin, verify that you have the following:
-
-* IP reachability from subnets in your on-premises environment to the management subnet of your Private Cloud
-* IP reachability from the replication subnet in your on-premises vSphere environment to the SRM solution subnet of your Private Cloud
-
-For instructions, see [Configure a VPN connection to your CloudSimple Private Cloud](set-up-vpn.md). The steps are similar to those for the on-premises installation.
-
-CloudSimple recommends using FQDNs instead of IP addresses during the vRA and SRM installation. To find out the FQDN of vCenter and PSC in your Private Cloud, open the CloudSimple portal. Go to **Resources**, select your Private Cloud, and click the **vSphere Management Network** tab.
-
-![Finding FQDN of vCenter/PSC in Private Cloud](media/srm-resources.png)
-
-CloudSimple requires that you don't install vRA and SRM using the default 'cloudowner' user, but instead create a new user. This helps ensure high uptime and availability for your Private Cloud vCenter environment. However, the default cloudowner user in the Private Cloud vCenter doesn't have sufficient privileges to create a new user with administrative privileges.
-
-Before installing vRA and SRM, you must escalate the vCenter privileges of the cloudowner user and then create a user with Administrative privileges in vCenter SSO domain. For details on the default Private Cloud user and permission model, see [Learn the Private Cloud permission model](learn-private-cloud-permissions.md).
-
-The installation consists of these high-level steps:
-
-1. [Escalate privileges](escalate-private-cloud-privileges.md).
-2. Create a user in your Private Cloud for vSphere Replication and SRM installation. Explained below in [vCenter UI: Create a user in Private Cloud for vRA & SRM installation](#vcenter-ui-create-a-user-in-private-cloud-for-vra-and-srm-installation).
-3. Prepare your Private Cloud environment for vRA installation.
-4. Deploy vRA in your Private Cloud using the OVF in the VR ISO from vmware.com. For vRA 6.5, [this VMware blog](https://blogs.vmware.com/virtualblocks/2017/01/20/vr-65-ovf-choices) has relevant information.
-5. Configure firewall rules for vRA. Explained below in [CloudSimple portal: Configure Firewall rules for vRA](#cloudsimple-portal-configure-firewall-rules-for-vra).
-6. Register Private Cloud vRA with vCenter Single Sign-On at the Private Cloud site.
-7. Configure vSphere Replication connections between the two appliances. Ensure that the required ports are opened across the firewalls. See [this VMware knowledge base article](https://kb.vmware.com/s/article/2087769) for a list of port numbers that must be open for vSphere Replication 6.5.
-
-For detailed installation instructions for vSphere Replication 6.5, see the VMware document [VMware vSphere Replication 6.5 Installation and Configuration](https://docs.vmware.com/en/vSphere-Replication/6.5/vsphere-replication-65-install.pdf).
-
-#### vCenter UI: Create a user in Private Cloud for vRA and SRM installation
-
-Sign in to vCenter using cloudowner user credentials after escalating privileges from the CloudSimple portal.
-
-Create a new user, `srm-soln-admin`, in vCenter and add it to the administrators group in vCenter.
-Sign out of vCenter as the cloudowner user and sign in as the *srm-soln-admin* user.
-
-#### CloudSimple portal: Configure firewall rules for vRA
-
-Configure firewall rules as described in [Set up firewall tables and rules](firewall.md) to open ports to enable communication between:
-
-* vRA in the SRM solution network and vCenter and ESXi hosts in the management network.
-* vRA appliances at the two sites.
-
-See this [VMware knowledge base article](https://kb.vmware.com/s/article/2087769) for a list of port numbers that must be open for vSphere Replication 6.5.
-
-### Install SRM server in your on-premises environment
-
-Before you begin, verify the following:
-
-* vSphere Replication Appliance is installed in your on-premises and Private Cloud environments.
-* The vSphere Replication Appliances at both sites are connected to each other.
-* You have reviewed the VMware information on prerequisites and best practices. For SRM 6.5, you can refer to the VMware document [Prerequisites and Best Practices for SRM 6.5](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html).
-
-Follow VMware documentation to perform the SRM server installation in the deployment model 'Two-Site Topology with One vCenter Instance per Platform Services Controller' as described in this [VMware document](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-F474543A-88C5-4030-BB86-F7CC51DADE22.html). The installation instructions for SRM 6.5 are available in the VMware document [Installing Site Recovery Manager](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-437E1B65-A17B-4B4B-BA5B-C667C90FA418.html).
-
-### Install SRM server in your Private Cloud
-
-Before you begin, verify the following:
-
-* vSphere Replication Appliance is installed in your on-premises and Private Cloud environments.
-* The vSphere Replication Appliances at both sites are connected to each other.
-* You have reviewed the VMware information on prerequisites and best practices. For SRM 6.5, you can refer to [Prerequisites and Best Practices for Site Recovery Manager 6.5 Server Installation](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html).
-
-The following steps describe the Private Cloud SRM installation.
-
-1. [vCenter UI: Install SRM](#vcenter-ui-install-srm)
-2. [CloudSimple portal: Configure firewall rules for SRM](#cloudsimple-portal-configure-firewall-rules-for-srm)
-3. [vCenter UI: Configure SRM](#vcenter-ui-configure-srm)
-4. [CloudSimple portal: de-escalate privileges](#cloudsimple-portal-de-escalate-privileges)
-
-#### vCenter UI: Install SRM
-
-After signing in to vCenter with srm-soln-admin credentials, follow VMware documentation to perform the SRM server installation in the deployment model 'Two-Site Topology with One vCenter Instance per Platform Services Controller' as described in this [VMware document](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-F474543A-88C5-4030-BB86-F7CC51DADE22.html). The installation instructions for SRM 6.5 are available in the VMware document [Installing Site Recovery Manager](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-437E1B65-A17B-4B4B-BA5B-C667C90FA418.html).
-
-#### CloudSimple portal: Configure firewall rules for SRM
-
-Configure firewall rules as described in [Set up firewall tables and rules](firewall.md) to allow communication between:
-
-* The SRM server and vCenter/PSC in the Private Cloud.
-* The SRM servers at both sites.
-
-See [this VMware knowledge base article](https://kb.vmware.com/s/article/2087769) for a list of port numbers that must be open for vSphere Replication 6.5.
-
-#### vCenter UI: Configure SRM
-
-After SRM is installed in the private cloud, perform the following tasks as described in the sections of the VMware Site Recovery Manager Installation and Configuration Guide. For SRM 6.5, the instructions are available in the VMware document [Installing Site Recovery Manager](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-437E1B65-A17B-4B4B-BA5B-C667C90FA418.html).
-
-1. Connect the Site Recovery Manager Server Instances on the Protected and Recovery Sites.
-2. Establish a Client Connection to the Remote Site Recovery Manager Server Instance.
-3. Install the Site Recovery Manager License Key.
-
-#### CloudSimple portal: De-escalate privileges
-
-To de-escalate privileges, see [De-escalate privileges](escalate-private-cloud-privileges.md#de-escalate-privileges).
-
-## Ongoing management of your SRM solution
-
-You have full control over vSphere Replication and SRM software in your Private Cloud environment and are expected to perform the necessary software lifecycle management. Ensure that any new version of software is compatible with the Private Cloud vCenter and PSC before updating or upgrading vSphere Replication or SRM.
-
-> [!NOTE]
-> CloudSimple is currently exploring options for offering a managed DR service.
-
-## Multiple replication configuration
-
-[Both array-based replication and vSphere replication technologies can be used together with SRM](https://blogs.vmware.com/virtualblocks/2017/06/22/srm-array-based-replication-vs-vsphere-replication) at the same time. However, they must be applied to separate sets of VMs (a given VM can be protected either by array-based replication or vSphere replication, but not both). Furthermore, the CloudSimple site can be configured as a recovery site for multiple protected sites. See [SRM Multi-Site Options](https://blogs.vmware.com/virtualblocks/2016/07/28/srm-multisite/) for information on multi-site configurations.
-
-## References
-
-* [VMware Site Recovery Manager Documentation](https://docs.vmware.com/en/Site-Recovery-Manager/https://docsupdatetracker.net/index.html)
-* [Operational limits for Site Recovery Manager 6.5](https://kb.vmware.com/s/article/2147110)
-* [Calculating bandwidth requirements for vSphere Replication](https://docs.vmware.com/en/vSphere-Replication/6.5/com.vmware.vsphere.replication-admin.doc/GUID-4A34D0C9-8CC1-46C4-96FF-3BF7583D3C4F.html)
-* [OVF Choices When Deploying vSphere Replication 6.5](https://blogs.vmware.com/virtualblocks/2017/01/20/vr-65-ovf-choices/)
-* [VMware vSphere Replication 6.5 Installation and Configuration](https://docs.vmware.com/en/vSphere-Replication/6.5/vsphere-replication-65-install.pdf)
-* [Prerequisites and Best Practices for SRM 6.5](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-BB0C03E4-72BE-4C74-96C3-97AC6911B6B8.html)
-* [Site Recovery Manager in a Two-Site Topology with One vCenter Server Instance per Platform Services Controller](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-F474543A-88C5-4030-BB86-F7CC51DADE22.html)
-* [VMware Site Recovery Manager 6.5 Installation and Configuration Guide](https://docs.vmware.com/en/Site-Recovery-Manager/6.5/com.vmware.srm.install_config.doc/GUID-437E1B65-A17B-4B4B-BA5B-C667C90FA418.html)
-* [VMware Blog on SRM with array based replication vs. vSphere replication](https://blogs.vmware.com/virtualblocks/2017/06/22/srm-array-based-replication-vs-vsphere-replication)
-* [VMware Blog on SRM Multi-site options](https://blogs.vmware.com/virtualblocks/2016/07/28/srm-multisite)
-* [Port numbers that must be open for vSphere Replication 5.8.x, 6.x, and 8](https://kb.vmware.com/s/article/2147112)
vmware-cloudsimple Disaster Recovery Zerto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/disaster-recovery-zerto.md
- Title: Azure VMware Solution by CloudSimple - Use Private Cloud as disaster site for on-premises workloads
-description: Describes how to set up your CloudSimple Private Cloud as a disaster recovery site for on-premises VMware workloads
-- Previously updated : 08/20/2019 ------
-# Set up CloudSimple Private Cloud as a disaster recovery site for on-premises VMware workloads
-
-Your CloudSimple Private Cloud can be set up as a recovery site for on-premises applications to provide business continuity in case of a disaster. The recovery solution is based on Zerto Virtual Replication as the replication and orchestration platform. Critical infrastructure and application virtual machines can be replicated continuously from your on-premises vCenter to your Private Cloud. You can use your Private Cloud for failover testing and to ensure the availability of your application during a disaster. A similar approach can be followed to set up the Private Cloud as a primary site that is protected by a recovery site at a different location.
-
-> [!NOTE]
-> Refer to the Zerto document [Sizing Considerations For Zerto Virtual Replication](https://s3.amazonaws.com/zertodownload_docs/5.5U3/Zerto%20Virtual%20Replication%20Sizing.pdf) for guidelines on sizing your disaster recovery environment.
-
-The CloudSimple solution:
-
-* Eliminates the need to set up a datacenter specifically for disaster recovery (DR).
-* Allows you to leverage the Azure locations where CloudSimple is deployed for worldwide geographic resilience.
-* Gives you an option to reduce deployment costs and total cost of ownership for DR.
-
-The solution requires you to:
-
-* Install, configure, and manage Zerto in your Private Cloud.
-* Provide your own licenses for Zerto when the Private Cloud is the protected site. You can pair Zerto running on the CloudSimple site with your on-premises site for licensing.
-
-The following figure shows the architecture for the Zerto solution.
-
-![Architecture](media/cloudsimple-zerto-architecture.png)
-
-## How to deploy the solution
-
-The following sections describe how to deploy a DR solution using Zerto Virtual Replication in your Private Cloud.
-
-1. [Prerequisites](#prerequisites)
-2. [Optional configuration on CloudSimple Private Cloud](#optional-configuration-on-your-private-cloud)
-3. [Set up ZVM and VRA on CloudSimple Private Cloud](#set-up-zvm-and-vra-on-your-private-cloud)
-4. [Set up Zerto Virtual Protection Group](#set-up-zerto-virtual-protection-group)
-
-### Prerequisites
-
-To enable Zerto Virtual Replication from your on-premises environment to your Private Cloud, complete the following prerequisites.
-
-1. [Set up a Site-to-Site VPN connection between your on-premises network and your CloudSimple Private Cloud](set-up-vpn.md).
-2. [Set up DNS lookup so that your Private Cloud management components are forwarded to Private Cloud DNS servers](on-premises-dns-setup.md). To enable forwarding of DNS lookup, create a forwarding zone entry in your on-premises DNS server for `*.cloudsimple.io` to CloudSimple DNS servers.
-3. Set up DNS lookup so that on-premises vCenter components are forwarded to on-premises DNS servers. The DNS servers must be reachable from your CloudSimple Private Cloud over Site-to-Site VPN. For assistance, submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest), providing the following information.
-
- * On-premises DNS domain name
- * On-premises DNS server IP addresses
-
-4. Install a Windows server on your Private Cloud. The server is used to install Zerto Virtual Manager.
-5. [Escalate your CloudSimple privileges](escalate-private-cloud-privileges.md).
-6. Create a new user on your Private Cloud vCenter with the administrative role to use as the service account for Zerto Virtual Manager.
-
-### Optional configuration on your Private Cloud
-
-1. Create one or more resource pools on your Private Cloud vCenter to use as target resource pools for VMs from your on-premises environment.
-2. Create one or more folders on your Private Cloud vCenter to use as target folders for VMs from your on-premises environment.
-3. Create VLANs for failover network and set up firewall rules. Open a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest) for assistance.
-4. Create distributed port groups for failover network and test network for testing failover of VMs.
-5. Install [DHCP and DNS servers](dns-dhcp-setup.md) or use an Active Directory domain controller in your Private Cloud environment.
-
-### Set up ZVM and VRA on your Private Cloud
-
-1. Install Zerto Virtual Manager (ZVM) on the Windows server in your Private Cloud.
-2. Sign in to ZVM using the service account created in previous steps.
-3. Set up licensing for Zerto Virtual Manager.
-4. Install Zerto Virtual Replication Appliance (VRA) on the ESXi hosts of your Private Cloud.
-5. Pair your Private Cloud ZVM with your on-premises ZVM.
-
-### Set up Zerto Virtual Protection Group
-
-1. Create a new Virtual Protection Group (VPG) and specify the priority for the VPG.
-2. Select the virtual machines that require protection for business continuity and customize the boot order if needed.
-3. Select the recovery site as your Private Cloud and the default recovery server as the Private Cloud cluster or the resource pool you created. Select **vsanDatastore** for the recovery datastore on your Private Cloud.
-
- ![VPG](media/cloudsimple-zerto-vpg.png)
-
- > [!NOTE]
- > You can customize the host option for individual VMs under the VM Settings option.
-
-4. Customize storage options as required.
-5. Specify the recovery networks to use for failover network and failover test network as the distributed port groups created earlier and customize the recovery scripts as required.
-6. Customize the network settings for individual VMs if necessary and create the VPG.
-7. Test failover once the replication completes.
-
-## Reference
-
-[Zerto Documentation](https://www.zerto.com/myzerto/technical-documentation/)
vmware-cloudsimple Dns Dhcp Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/dns-dhcp-setup.md
- Title: Azure VMware Solution by CloudSimple - Set up workload DNS and DHCP for Private Cloud
-description: Describes how to set up DNS and DHCP for applications and workloads running in your CloudSimple Private Cloud environment
-- Previously updated : 08/16/2019 ------
-# Set up DNS and DHCP applications and workloads in your CloudSimple Private Cloud
-
-Applications and workloads running in a Private Cloud environment require name resolution and DHCP services for lookup and IP address assignment. A proper DHCP and DNS infrastructure is required to provide these services. You can configure a virtual machine to provide these services in your Private Cloud environment.
-
-## Prerequisites
-
-* A distributed port group with VLAN configured
-* Route setup to on-premises or Internet-based DNS servers
-* Virtual machine template or ISO to create a virtual machine
-
-## Linux-based DNS server setup
-
-Linux offers various packages for setting up DNS servers. Here is an [example setup from DigitalOcean](https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-private-network-dns-server-on-ubuntu-18-04) with instructions for setting up an open-source BIND DNS server.
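-
-To confirm the new server answers queries before pointing workloads at it, you can test it directly with the dnspython package. This sketch is illustrative only; the server IP and record name are hypothetical placeholders.
-
-```python
-import dns.resolver  # pip install dnspython
-
-resolver = dns.resolver.Resolver(configure=False)
-resolver.nameservers = ["192.0.2.10"]  # hypothetical: your new DNS server
-answer = resolver.resolve("vm1.example.internal", "A")  # hypothetical A record
-for record in answer:
-    print(record.address)
-```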
-
-## Windows-based setup
-
-These Microsoft topics describe how to set up a Windows server as a DNS server and as a DHCP server.
-
-* [Windows Server as DNS Server](/windows-server/networking/dns/dns-top)
-* [Windows Server as DHCP Server](/windows-server/networking/technologies/dhcp/dhcp-top)
vmware-cloudsimple Ensuring High Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/ensuring-high-availability.md
- Title: Ensure application high availability when running in VMware on Azure
-description: Describes CloudSimple high availability features to address common application failure scenarios for applications running in a CloudSimple Private Cloud
-- Previously updated : 08/20/2019 ------
-# Ensure application high availability when running in VMware on Azure
-
-The CloudSimple solution provides high availability for your applications running on VMware in the Azure environment. The following table lists failure scenarios and the associated high availability features.
-
-| Failure scenario | Application protected? | Platform HA feature | VMware HA feature | Azure HA feature |
-|---|---|---|---|---|
-| Disk Failure | YES | Fast replacement of failed node | [About the vSAN Default Storage Policy](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.virtualsan.doc/GUID-C228168F-6807-4C2A-9D74-E584CAF49A2A.html) | |
-| Fan Failure | YES | Redundant fans, fast replacement of failed node | | |
-| NIC Failure | YES | Redundant NIC, fast replacement of failed node | | |
-| Host Power Failure | YES | Redundant power supply | | |
-| ESXi Host Failure | YES | Fast replacement of failed node | [VMware vSphere High Availability](https://www.vmware.com/products/vsphere/high-availability.html) | |
-| VM Failure | YES | [Load balancers](load-balancers.md) | [VMware vSphere High Availability](https://www.vmware.com/products/vsphere/high-availability.html) | Azure Load Balancer for stateless VMware VMs |
-| Leaf Switch Port Failure | YES | Redundant NIC | | |
-| Leaf Switch Failure | YES | Redundant leaf switches | | |
-| Rack Failure | YES | Placement groups | | |
-| Network Connectivity to on-premises DC | YES | Redundant networking services | | Redundant ER circuits |
-| Network Connectivity to Azure | YES | | | Redundant ER circuits |
-| Datacenter Failure | YES | | | Availability zones |
-| Regional Failure | YES | | | Azure regions |
-
-Azure VMware Solution by CloudSimple provides the following high availability features.
-
-## Fast replacement of failed node
-
-The CloudSimple control plane software continuously monitors the health of VMware clusters and detects when an ESXi node fails. It then automatically adds a new ESXi host to the affected VMware cluster from its pool of readily available nodes and takes the failed node out of the cluster. This functionality quickly restores spare capacity in the VMware cluster, so the resiliency provided by vSAN and VMware HA is maintained.
-
-## Placement groups
-
-A user who creates a Private Cloud can select an Azure region and a placement group within the selected region. A placement group is a set of nodes spread across multiple racks but within the same spine network segment. Nodes within the same placement group can reach each other with a maximum of two extra switch hops. A placement group is always within a single Azure availability zone and spans multiple racks. The CloudSimple control plane distributes nodes of a Private Cloud across multiple racks on a best-effort basis. Nodes in different placement groups are guaranteed to be placed in different racks.
-
-## Availability zones
-
-Availability zones are a high-availability offering that protects your applications and data from datacenter failures. Availability zones are special physical locations within an Azure region. Each zone is made up of one or more datacenters equipped with independent power, cooling, and networking. Regions that support availability zones have a minimum of three separate zones. For more information, see [What are Availability Zones in Azure?](../availability-zones/az-overview.md).
-
-## Redundant Azure ExpressRoute circuits
-
-Datacenter connectivity to an Azure virtual network over ExpressRoute uses redundant circuits to provide a highly available network link.
-
-## Redundant networking services
-
-All the CloudSimple networking services for the Private Cloud (including VLAN, firewall, public IP addresses, Internet, and VPN) are designed to be highly available and able to support the service SLA.
-
-## Azure Layer 7 Load Balancer for stateless VMware VMs
-
-Users can put an Azure Layer 7 Load Balancer in front of the stateless web tier VMs running in the VMware environment to achieve high availability for the web tier.
-
-## Azure regions
-
-An Azure region is a set of data centers deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network. For details, see [Azure Regions](https://azure.microsoft.com/global-infrastructure/regions).
vmware-cloudsimple Escalate Private Cloud Privileges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/escalate-private-cloud-privileges.md
- Title: Escalate private cloud privileges-
-description: Describes how to escalate privileges on your private cloud for administrative functions in vCenter
-- Previously updated : 06/05/2019------
-# Escalate Private Cloud vCenter privileges from the CloudSimple portal
-
-For administrative access to your Private Cloud vCenter, you can temporarily escalate your CloudSimple privileges. Using elevated privileges, you can install VMware solutions, add identity sources, and manage users.
-
-New users can be created on the vCenter SSO domain and given access to vCenter. When you create new users, add them to the CloudSimple built-in groups for accessing vCenter. For more information, see [CloudSimple Private Cloud permission model of VMware vCenter](./learn-private-cloud-permissions.md).
-
-> [!CAUTION]
-> Don't make any configuration changes for management components. Actions taken during the escalated privileged state can adversely impact your system or can cause your system to become unavailable.
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Escalate privileges
-
-1. Access the [CloudSimple portal](access-cloudsimple-portal.md).
-
-2. Open the **Resources** page and select the Private Cloud for which you want to escalate privileges.
-
-3. Near the bottom of the Summary page under **Change vSphere privileges**, click **Escalate**.
-
- ![Change vSphere privilege](media/escalate-private-cloud-privilege.png)
-
-4. Select the vSphere user type. Only the `CloudOwner@cloudsimple.local` local user can be escalated.
-
-5. Select the escalate time interval from the drop-down. Choose the shortest period that will allow you to complete the task.
-
-6. Select the checkbox to confirm that you understand the risks.
-
- ![Escalate privilege dialog](media/escalate-private-cloud-privilege-dialog.png)
-
-7. Click **OK**.
-
-8. The escalation process can take a couple of minutes. When complete, click **OK**.
-
-The privilege escalation begins and lasts until the end of the selected interval. You can sign in to your private cloud vCenter to do administrative tasks.
-
-> [!IMPORTANT]
-> Only one user can have escalated privileges. You must de-escalate the user's privileges before you can escalate another user's privileges.
-
-> [!CAUTION]
-> New users must be added only to *Cloud-Owner-Group*, *Cloud-Global-Cluster-Admin-Group*, *Cloud-Global-Storage-Admin-Group*, *Cloud-Global-Network-Admin-Group*, or *Cloud-Global-VM-Admin-Group*. Users added to the *Administrators* group will be removed automatically. Only service accounts must be added to the *Administrators* group, and service accounts must not be used to sign in to the vSphere web UI.
-
-## Extend privilege escalation
-
-If you require additional time to complete your tasks, you can extend the privilege escalation period. Choose the additional escalate time interval that allows you to complete the administrative tasks.
-
-1. On the **Resources** > **Private Clouds** page in the CloudSimple portal, select the Private Cloud for which you want to extend privilege escalation.
-
-2. Near the bottom of the Summary tab, click **Extend privilege escalation**.
-
- ![Extend privilege escalation](media/de-escalate-private-cloud-privilege.png)
-
-3. Select an escalate time interval from the drop-down. Review the new escalation end time.
-
-4. Click **Save** to extend the interval.
-
-## De-escalate privileges
-
-Once your administrative tasks are complete, you should de-escalate your privileges.
-
-1. On the **Resources** > **Private Clouds** page in the CloudSimple portal, select the Private Cloud for which you want to de-escalate privileges.
-
-2. Click **De-escalate**.
-
-3. Click **OK**.
-
-> [!IMPORTANT]
-> To avoid any errors, sign out of vCenter and sign in again after de-escalating privileges.
-
-## Next steps
-
-* [Set up vCenter identity sources to use Active Directory](./set-vcenter-identity.md)
-* Install backup solution to [backup workload virtual machines](./backup-workloads-veeam.md)
vmware-cloudsimple Escalate Privileges https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/escalate-privileges.md
- Title: Azure VMware Solution by CloudSimple - Escalate CloudSimple privileges
-description: Describes how to escalate CloudSimple permissions to perform administrative functions in the Private Cloud vCenter
-- Previously updated : 08/16/2019 ------
-# Escalate CloudSimple privileges to perform administrative functions in Private Cloud vCenter
-
-The CloudSimple privileges approach is designed to give vCenter users the privileges they need to perform normal operations. In some instances, a user may require additional privileges to perform a particular task. You can escalate privileges of a vCenter SSO user for a limited period.
-
-Reasons for escalating privileges can include the following:
-
-* Configuration of identity sources
-* User management
-* Deletion of distributed port group
-* Installing vCenter solutions (such as backup apps)
-* Creating service accounts
-
-> [!WARNING]
-> Actions taken in the escalated privileged state can adversely impact your system and can cause your system to become unavailable. Perform only the necessary actions during the escalation period.
-
-From the CloudSimple portal, [escalate privileges](escalate-private-cloud-privileges.md) for the CloudOwner local user on the vCenter SSO. You can escalate a remote user's privileges only if an additional identity provider is configured on vCenter. Escalation of privileges involves adding the selected user to the vSphere built-in Administrators group. Only one user can have escalated privileges. If you need to escalate another user's privileges, first de-escalate the privileges of the current user.
-
-Users from additional identity sources must be added as members of the CloudOwner group.
-
-> [!CAUTION]
-> New users must be added only to *Cloud-Owner-Group*, *Cloud-Global-Cluster-Admin-Group*, *Cloud-Global-Storage-Admin-Group*, *Cloud-Global-Network-Admin-Group*, or *Cloud-Global-VM-Admin-Group*. Users added to the *Administrators* group will be removed automatically. Only service accounts must be added to the *Administrators* group, and service accounts must not be used to sign in to the vSphere web UI.
-
-During the escalation period, CloudSimple uses automated monitoring with associated alert notifications to identify any inadvertent changes to the environment.
vmware-cloudsimple Expand Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/expand-private-cloud.md
- Title: Expand Azure VMware Solution by CloudSimple Private Cloud
-description: Describes how to expand an existing CloudSimple Private Cloud to add capacity in an existing or new cluster
-- Previously updated : 06/06/2019 ------
-# Expand a CloudSimple Private Cloud
-
-CloudSimple provides the flexibility to dynamically expand a Private Cloud. You can begin with a smaller configuration and then expand as you need higher capacity. Or you can create a Private Cloud based on current needs and then expand as consumption grows.
-
-A Private Cloud consists of one or more vSphere clusters. Each cluster can have 3 to 16 nodes. When expanding a Private Cloud, you add nodes to an existing cluster or create a new cluster. To expand an existing cluster, the additional nodes must be the same type (SKU) as the existing nodes. To create a new cluster, the nodes can be of a different type. For more information on Private Cloud limits, see the limits section in the [CloudSimple private cloud overview](cloudsimple-private-cloud.md) article.
-
-A private cloud is created with a default **Datacenter** on vCenter. Each datacenter serves as a top-level management entity. For a new cluster, CloudSimple provides the choice of adding to the existing datacenter or creating a new datacenter.
-
-As part of the new cluster configuration, CloudSimple configures the VMware infrastructure. The settings include storage settings for vSAN disk groups, VMware High Availability, and Distributed Resource Scheduler (DRS).
-
-A Private Cloud can be expanded multiple times, as long as you stay within the overall node limits. Each time you expand a Private Cloud, you add to an existing cluster or create a new one.
-
-## Before you begin
-
-Nodes must be provisioned before you can expand your Private Cloud. For more information on provisioning nodes, see the [Provision nodes for VMware Solution by CloudSimple - Azure](create-nodes.md) article. To create a new cluster, you must have at least three available nodes of the same SKU.
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Expand a Private Cloud
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md).
-
-2. Open the **Resources** page and select the Private Cloud that you want to expand.
-
-3. In the summary section, click **Expand**.
-
- ![Expand private cloud](media/resources-expand-private-cloud.png)
-
-4. Choose whether to expand your existing cluster or create a new vSphere cluster. As you make changes, the summary information on the page is updated.
-
- * To expand your existing cluster, click **Expand existing cluster**. Select the cluster you want to expand and enter the number of nodes to add. Each cluster can have a maximum of 16 nodes.
- * To add a new cluster, click **Create new cluster**. Enter a name for the cluster. Select an existing datacenter, or enter a name to create a new datacenter. Choose the node type. You can choose a different node type when creating a new vSphere cluster, but not when expanding an existing vSphere cluster. Select the number of nodes. Each new cluster must have at least three nodes.
-
- ![Expand private cloud - add nodes](media/resources-expand-private-cloud-add-nodes.png)
-
-5. Click **Submit** to expand the private cloud.
-
-## Next steps
-
-* [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-* Learn more about [Private Clouds](cloudsimple-private-cloud.md)
vmware-cloudsimple Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/firewall.md
- Title: Azure VMware Solution by CloudSimple - Set up firewall tables and rules
-description: Describes how to set up Private Cloud firewall tables and rules to restrict traffic on subnets and VLANs.
-- Previously updated : 08/15/2019 ------
-# Set up firewall tables and rules for Private Clouds
-
-Firewall tables and the associated rules allow you to specify restrictions on traffic to apply to particular subnets and VLANs.
-
-* A subnet can be associated with one firewall table.
-* A firewall table can be associated with multiple subnets.
-
-## Add a new firewall table
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md) and select **Network** on the side menu.
-2. Select **Firewall Tables**.
-3. Select **Create firewall table**.
-
- ![VLAN/subnet page](media/firewall-tables-page.png)
-
-4. Enter a name for the table.
-5. A default rule for the table is listed. Click **Create New Rule** to create an additional rule. See the following procedure for details.
-6. Click **Done** to save the firewall table.
-
-> [!IMPORTANT]
-> You can create up to two Firewall tables per Private Cloud.
-
-## Firewall rules
-
-Firewall rules determine how the firewall treats specific types of traffic. The **Rules** tab for a selected firewall table lists all the associated rules.
-
-![Firewall rules table](media/firewall-rules-tab.png)
-
-## Create a firewall rule
-
-1. Display the settings to create a firewall rule in either of these ways:
- * Click **Add Rule** when creating a firewall table.
- * Select a particular firewall table on the **Network > Firewall Tables** page and click **Create new firewall rule**.
-2. Set up the rule as follows:
- * **Name**. Give the rule a name.
- * **Priority**. Assign a priority to the rule. Rules with lower numbers are executed first (see the example after this procedure).
- * **Traffic type**. Select whether the rule is for Private Cloud, Internet, or VPN traffic (stateless) or for a public IP address (stateful).
- * **Protocol**. Select the protocol covered by the rule (TCP, UDP, or any protocol).
- * **Direction**. Select whether the rule is for inbound or outbound traffic. You must define separate rules for inbound and outbound traffic.
- * **Action**. Select the action to take if the rule matches (allow or deny).
- * **Source**. Specify the sources covered by the rule (CIDR block, internal, or any source).
- * **Source port range**. Specify the range of ports subject to the rule.
- * **Destination**. Specify the destinations covered by the rule (CIDR block, internal, or any destination).
- * **Destination port range**. Specify the range of destination ports subject to the rule.
-
- ![Firewall table add rule](media/firewall-rule-create.png)
-
-3. Click **Done** to save the rule and add it to the list of rules for the firewall table.
-
-> [!IMPORTANT]
-> Each Firewall table can have up to 10 inbound rules and 20 outbound rules. These limits can be increased by [contacting support](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
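-
-For example, a minimal rule set that admits HTTPS from the internet and blocks everything else might look like the following. The priorities, subnet, and port are illustrative values, not defaults:
-
-| Priority | Traffic type | Protocol | Direction | Action | Source | Destination | Destination port range |
-|--|--|--|--|--|--|--|--|
-| 100 | Internet | TCP | Inbound | Allow | Any | 192.168.10.0/24 | 443 |
-| 200 | Internet | Any | Inbound | Deny | Any | Any | Any |
-
-Because rules with lower priority numbers are evaluated first, the allow rule at priority 100 matches HTTPS traffic before the deny rule at priority 200 takes effect.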
-
-## <a name="attach-vlans-subnet"></a>Attach VLANs/subnets
-
-After you define a firewall table, you can specify the subnets that are subject to the rules in the table.
-
-1. On the **Network** > **Firewall Tables** page, select a firewall table.
-2. Open the **Attached VLANs/Subnet** tab.
-3. Click **Attach to a VLAN/Subnet**.
-4. Select the Private Cloud and VLAN. The associated subnet name and CIDR block are shown.
-5. Click **Submit**.
vmware-cloudsimple Forbidden Actions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/forbidden-actions.md
- Title: Forbidden actions during elevated access
-description: VMware Engine reverts the changes to ensure that service remains uninterrupted when VMware Engine detects any of the following forbidden actions.
- Previously updated : 10/28/2020 ------
-# Forbidden actions during elevated access
-
-During the elevation time interval, some actions are forbidden. When VMware Engine detects any of the following forbidden actions, VMware Engine reverts the changes to ensure that service remains uninterrupted.
-
-## Cluster actions
-
-- Removing a cluster from vCenter.
-- Changing vSphere High Availability (HA) on a cluster.
-- Adding a host to the cluster from vCenter.
-- Removing a host from the cluster from vCenter.
-
-## Host actions
-
-- Removing datastores on an ESXi host.
-- Uninstalling vCenter agent from host.
-- Modifying the host configuration.
-- Making any changes to the host profiles.
-- Placing a host in maintenance mode.
-
-## Network actions
-
-- Deleting the default distributed virtual switch (DVS) in a private cloud.
-- Removing a host from the default DVS.
-- Importing any DVS setting.
-- Reconfiguring any DVS setting.
-- Upgrading any DVS.
-- Deleting the management portgroup.
-- Editing the management portgroup.
-
-## Roles and permissions actions
-
-- Creating a global role.
-- Modifying or deleting permissions on any management objects.
-- Modifying or removing any default roles.
-- Increasing the privileges of a role beyond those of Cloud-Owner-Role.
-
-## Other actions
--- Removing any default licenses:
- - vCenter Server
- - ESXi nodes
- - NSX-T
- - HCX
-- Modifying or deleting the management resource pool.
-- Cloning management VMs.
-
-## Next steps
-[CloudSimple maintenance and updates](cloudsimple-maintenance-updates.md)
vmware-cloudsimple High Availability Vpn Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/high-availability-vpn-connection.md
- Title: Azure VMware Solution by CloudSimple - Configure high availability from on-premises to CloudSimple VPN gateway
-description: Describes how to configure a high availability connection from your on-premises environment to a CloudSimple VPN gateway enabled for high availability
-- Previously updated : 08/14/2019 ------
-# Configure a high availability connection from on-premises to CloudSimple VPN gateway
-
-Network administrators can configure a high availability IPsec Site-to-Site VPN connection from their on-premises environment to a CloudSimple VPN gateway.
-
-This guide presents steps to configure an on-premises firewall for an IPsec Site-to-Site VPN high availability connection. The detailed steps are specific to the type of on-premises firewall. As examples, this guide presents steps for two types of firewalls: Cisco ASA and Palo Alto Networks.
-
-## Before you begin
-
-Complete the following tasks before you configure the on-premises firewall.
-
-1. Verify that your organization has [provisioned](create-nodes.md) the required nodes and created at least one CloudSimple Private Cloud.
-2. [Configure a Site-to-Site VPN gateway](vpn-gateway.md#set-up-a-site-to-site-vpn-gateway) between your on-premises network and your CloudSimple Private Cloud.
-
-See [VPN gateways overview](cloudsimple-vpn-gateways.md) for supported phase 1 and phase 2 proposals.
-
-## Configure on-premises Cisco ASA firewall
-
-The instructions in this section apply to Cisco ASA version 8.4 and later. In the configuration example, Cisco Adaptive Security Appliance Software Version 9.10 is deployed and configured in IKEv1 mode.
-
-For the Site-to-Site VPN to work, you must allow UDP 500/4500 and ESP (IP protocol 50) from the CloudSimple primary and secondary public IP (peer IP) on the outside interface of the on-premises Cisco ASA VPN gateway.
-
-### 1. Configure phase 1 (IKEv1)
-
-To enable phase 1 (IKEv1) on the outside interface, enter the following CLI command in the Cisco ASA firewall.
-
-`crypto ikev1 enable outside`
-
-### 2. Create an IKEv1 policy
-
-Create an IKEv1 policy that defines the algorithms and methods to be used for hashing, authentication, Diffie-Hellman group, lifetime, and encryption.
-
-```
-crypto ikev1 policy 1
-authentication pre-share
-encryption aes-256
-hash sha
-group 2
-lifetime 28800
-```
-
-### 3. Create a tunnel group
-
-Create a tunnel group under the IPsec attributes. Configure the peer IP address and the tunnel pre-shared key, which you set when [configuring your Site-to-Site VPN gateway](vpn-gateway.md#set-up-a-site-to-site-vpn-gateway).
-
-```
-tunnel-group <primary peer ip> type ipsec-l2l
-tunnel-group <primary peer ip> ipsec-attributes
-ikev1 pre-shared-key *****
-
-tunnel-group <secondary peer ip> type ipsec-l2l
-tunnel-group <secondary peer ip> ipsec-attributes
-ikev1 pre-shared-key *****
-```
-
-### 4. Configure phase 2 (IPsec)
-
-To configure phase 2 (IPsec), create an access control list (ACL) that defines the traffic to be encrypted and tunneled. In the following example, the traffic of interest is sourced from the on-premises local subnet (10.16.1.0/24) and destined for the Private Cloud remote subnet (192.168.0.0/24). The ACL can contain multiple entries if there are multiple subnets between the sites.
-
-In Cisco ASA versions 8.4 and later, you can create objects or object groups that serve as containers for networks, subnets, host IP addresses, or multiple objects. Create an object for the local subnet and an object for the remote subnet, and use them in the crypto ACL and the NAT statements.
-
-#### Define an on-premises local subnet as an object
-
-```
-object network AZ_inside
-subnet 10.16.1.0 255.255.255.0
-```
-
-#### Define the CloudSimple remote subnet as an object
-
-```
-object network CS_inside
-subnet 192.168.0.0 255.255.255.0
-```
-
-#### Configure an access list for the traffic of interest
-
-```
-access-list ipsec-acl extended permit ip object AZ_inside object CS_inside
-```
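-
-If, for example, a second on-premises subnet also needs to reach the Private Cloud, you can define another object and add a second ACL entry. The object name and subnet below are illustrative:
-
-```
-object network AZ_inside_2
-subnet 10.16.2.0 255.255.255.0
-
-access-list ipsec-acl extended permit ip object AZ_inside_2 object CS_inside
-```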
-
-### 5. Configure the transform set
-
-Configure the transform set (TS), which must include the keyword `ikev1`. The encryption and hash attributes specified in the TS must match the parameters listed in [Default configuration for CloudSimple VPN gateways](cloudsimple-vpn-gateways.md).
-
-```
-crypto ipsec ikev1 transform-set devtest39 esp-aes-256 esp-sha-hmac
-```
-
-### 6. Configure the crypto map
-
-Configure the crypto map, which contains these components:
-
-* Peer IP address
-* Defined ACL that contains the traffic of interest
-* Transform Set
-
-```
-crypto map mymap 1 set peer <primary peer ip> <secondary peer ip>
-crypto map mymap 1 match address ipsec-acl
-crypto map mymap 1 set ikev1 transform-set devtest39
-```
-
-### 7. Apply the crypto map
-
-Apply the crypto map on the outside interface:
-
-`crypto map mymap interface outside`
-
-### 8. Confirm applicable NAT rules
-
-The following identity NAT rule exempts the VPN traffic of interest from translation. Ensure that the VPN traffic is not subjected to any other NAT rule.
-
-`nat (inside,outside) source static AZ_inside AZ_inside destination static CS_inside CS_inside`
-
-### Sample IPsec Site-to-Site VPN established output from Cisco ASA
-
-Phase 1 output:
-
-![Phase 1 output for Cisco ASA firewall](media/ha-vpn-connection-cisco-phase1.png)
-
-Phase 2 output:
-
-![Phase 2 output for Cisco ASA firewall](media/ha-vpn-connection-cisco-phase2.png)
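-
-To check tunnel status from the ASA CLI, you can use the standard show commands; the output should resemble the preceding screenshots:
-
-```
-show crypto ikev1 sa
-show crypto ipsec sa
-```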
-
-## Configure on-premises Palo Alto Networks firewall
-
-The instructions in this section apply to Palo Alto Networks version 7.1 and later. In this configuration example, Palo Alto Networks VM-Series Software Version 8.1.0 is deployed and configured in IKEv1 mode.
-
-For the Site-to-Site VPN to work, you must allow UDP 500/4500 and ESP (IP protocol 50) from the CloudSimple primary and secondary public IP (peer IP) on the outside interface of the on-premises Palo Alto Networks gateway.
-
-### 1. Create primary and secondary tunnel interfaces
-
-Sign in to the Palo Alto firewall, select **Network** > **Interfaces** > **Tunnel** > **Add**, configure the following fields, and click **OK**.
-
-* Interface Name. The first field is autopopulated with the keyword 'tunnel'. In the adjacent field, enter any number from 1 to 9999. This interface will be used as the primary tunnel interface to carry Site-to-Site traffic between the on-premises datacenter and the Private Cloud.
-* Comment. Enter a comment for easy identification of the tunnel's purpose.
-* Netflow Profile. Leave default.
-* Config. Assign the interface as follows:
-  * Virtual Router: Select **default**.
-  * Security Zone: Select the zone for trusted LAN traffic. In this example, the name of the zone for LAN traffic is 'Trust'.
-* IPv4. Click **Add** and add any non-overlapping, unused /32 IP address in your environment. It will be assigned to the primary tunnel interface and used for monitoring the tunnels (explained later).
-
-Because this configuration is for a high availability VPN, two tunnel interfaces are required: one primary and one secondary. Repeat the previous steps to create the secondary tunnel interface, selecting a different tunnel ID and a different unused /32 IP address.
-
-### 2. Set up static routes for Private Cloud subnets to be reached over the Site-to-Site VPN
-
-Routes are necessary for the on-premises subnets to reach CloudSimple private cloud subnets.
-
-Select **Network** > **Virtual Routers** > *default* > **Static Routes** > **Add**, configure the following fields, and click **OK**.
-
-* Name. Enter any name for easy identification of the purpose of the route.
-* Destination. Specify the CloudSimple private cloud subnets to be reached over the Site-to-Site tunnel interfaces from on-premises.
-* Interface. From the dropdown, select the primary tunnel interface created in step 1 of this section. In this example, it is tunnel.20.
-* Next Hop. Select **None**.
-* Admin Distance. Leave default.
-* Metric. Enter any value from 1 to 65535. The key is to give the route for the primary tunnel interface a lower metric than the route for the secondary tunnel interface, so that the primary route is preferred. For example, if tunnel.20 has a metric of 20 and tunnel.30 has a metric of 30, tunnel.20 is preferred.
-* Route Table. Leave default.
-* BFD Profile. Leave default.
-* Path monitoring. Leave unchecked.
-
-Repeat the previous steps to create another route for the Private Cloud subnets to use as a secondary/backup route via the secondary tunnel interface. This time, select a different tunnel ID and a higher metric than for the primary route.
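-
-If you prefer the PAN-OS CLI over the web UI, equivalent static routes can be created in configure mode. The route names, destination subnet, tunnel IDs, and metrics below are illustrative; verify the exact syntax against your PAN-OS version:
-
-```
-set network virtual-router default routing-table ip static-route cs-primary destination 192.168.0.0/24 interface tunnel.20 metric 20
-set network virtual-router default routing-table ip static-route cs-backup destination 192.168.0.0/24 interface tunnel.30 metric 30
-```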
-
-### 3. Define the cryptographic profile
-
-Define a cryptographic profile that specifies the protocols and algorithms for identification, authentication, and encryption to be used for setting up VPN tunnels in IKEv1 Phase 1.
-
-Select **Network** > **Expand Network Profiles** > **IKE Crypto** > **Add**, configure the following fields, and click **OK**.
-
-* Name. Enter a name for the IKE crypto profile.
-* DH Group. Click **Add** and select the appropriate DH group.
-* Encryption. Click **Add** and select the appropriate encryption method.
-* Authentication. Click **Add** and select the appropriate authentication method.
-* Key lifetime. Leave default.
-* IKEv2 Authentication Multiple. Leave default.
-
-### 4. Define IKE gateways
-
-Define IKE gateways to establish communication between the peers across each end of the VPN tunnel.
-
-Select **Network** > **Expand Network Profiles** > **IKE Gateways** > **Add**, configure the following fields, and click **OK**.
-
-General tab:
-
-* Name. Enter the name for the IKE gateway to be peered with the primary CloudSimple VPN peer.
-* Version. Select **IKEv1 only mode**.
-* Address Type. Select **IPv4**.
-* Interface. Select the public facing or outside interface.
-* Local IP Address. Leave default.
-* Peer IP Address Type. Select **IP**.
-* Peer Address. Enter the primary CloudSimple VPN peer IP address.
-* Authentication. Select **Pre-Shared Key**.
-* Pre-shared Key / Confirm Pre-shared Key. Enter the pre-shared key to match the CloudSimple VPN gateway key.
-* Local Identification. Enter the public IP address of the on-premises Palo Alto firewall.
-* Peer Identification. Enter the primary CloudSimple VPN peer IP address.
-
-Advanced Options tab:
-
-* Enable Passive Mode. Leave unchecked.
-* Enable NAT Traversal. Leave unchecked if the on-premises Palo Alto firewall is not behind any NAT device. Otherwise, select the checkbox.
-
-IKEv1:
-
-* Exchange Mode. Select **main**.
-* IKE Crypto Profile. Select the IKE Crypto profile that you created earlier. Leave the Enable Fragmentation box unchecked.
-* Dead Peer Detection. Leave the box unchecked.
-
-Repeat the previous steps to create the secondary IKE gateway.
-
-### 5. Define IPsec crypto profiles
-
-Select **Network** > **Expand Network Profiles** > **IPSEC Crypto** > **Add**, configure the following fields, and click **OK**.
-
-* Name. Enter a name for the IPsec crypto profile.
-* IPsec Protocol. Select **ESP**.
-* Encryption. Click **Add** and select the appropriate encryption method.
-* Authentication. Click **Add** and select the appropriate authentication method.
-* DH Group. Select **no-pfs**.
-* Lifetime. Set to 30 minutes.
-* Enable. Leave the box unchecked.
-
-Repeat the previous steps to create another IPsec crypto profile to use for the secondary CloudSimple VPN peer. Alternatively, the same IPsec crypto profile can be used for both the primary and secondary IPsec tunnels (see the following procedure).
-
-### 6. Define monitor profiles for tunnel monitoring
-
-Select **Network** > **Expand Network Profiles** > **Monitor** > **Add**, configure the following fields, and click **OK**.
-
-* Name. Enter a name for the monitor profile, which is used to monitor the tunnels so that you can react proactively to failures.
-* Action. Select **Fail Over**.
-* Interval. Enter the value **3**.
-* Threshold. Enter the value **7**.
-
-### 7. Set up primary and secondary IPsec tunnels
-
-Select **Network** > **IPsec Tunnels** > **Add**, configure the following fields, and click **OK**.
-
-General tab:
-
-* Name. Enter any name for the primary IPsec tunnel to be peered with the primary CloudSimple VPN peer.
-* Tunnel Interface. Select the primary tunnel interface.
-* Type. Leave default.
-* Address Type. Select **IPv4**.
-* IKE Gateway. Select the primary IKE gateway.
-* IPsec Crypto Profile. Select the primary IPsec profile. Select **Show Advanced options**.
-* Enable Replay Protection. Leave default.
-* Copy TOS Header. Leave the box unchecked.
-* Tunnel Monitor. Check the box.
-* Destination IP. Enter any IP address belonging to the CloudSimple Private Cloud subnet that is allowed over the Site-to-Site connection. Make sure that the tunnel interfaces (such as tunnel.20 - 10.64.5.2/32 and tunnel.30 - 10.64.6.2/32) on Palo Alto are allowed to reach the CloudSimple Private Cloud IP address over the Site-to-Site VPN. See the following configuration for proxy IDs.
-* Profile. Select the monitor profile.
-
-Proxy IDs tab:
-Click **IPv4** > **Add** and configure the following:
-
-* Proxy ID. Enter any name for the interesting traffic. There could be multiple Proxy IDs carried inside one IPsec tunnel.
-* Local. Specify the on-premises local subnets that are allowed to communicate with Private Cloud subnets over the Site-to-Site VPN.
-* Remote. Specify the Private Cloud remote subnets that are allowed to communicate with the local subnets.
-* Protocol. Select **any**.
-
-Repeat the previous steps to create another IPsec tunnel to use for the secondary CloudSimple VPN peer.
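-
-After both tunnels are configured, a quick way to verify their status is with the PAN-OS operational commands shown below; command availability and output format can vary by PAN-OS version:
-
-```
-show vpn ike-sa
-show vpn ipsec-sa
-```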
-
-## References
-
-Configuring NAT on Cisco ASA:
-
-<a href="https://www.cisco.com/c/en/us/td/docs/security/asa/asa84/configuration/guide/asa_84_cli_config/nat_objects.html" target="_blank">Cisco ASA 5500 Series Configuration Guide</a>
-
-Supported IKEv1 and IKEv2 attributes on Cisco ASA:
-
-<a href="https://www.cisco.com/c/en/us/td/docs/security/asa/asa72/configuration/guide/conf_gd/ike.html" target="_blank">Cisco ASA Series CLI Configuration Guide</a>
-
-Configuring IPsec Site-to-Site VPN on Cisco ASA with version 8.4 and later:
-
-<a href="https://www.cisco.com/c/en/us/support/docs/security/asa-5500-x-series-next-generation-firewalls/119141-configure-asa-00.html#anc8" target="_blank">Configure IKEv1 IPsec Site-to-Site Tunnels with the ASDM or CLI on the ASA</a>
-
-Configuring Cisco Adaptive Security Appliance virtual (ASAv) on Azure:
-
-<a href="https://www.cisco.com/c/en/us/td/docs/security/asa/asa96/asav/quick-start-book/asav-96-qsg/asav-azure.html" target="_blank">Cisco Adaptive Security Virtual Appliance (ASAv) quickstart Guide</a>
-
-Configuring Site-to-Site VPN with Proxy IDs on Palo Alto:
-
-[Set Up Site-to-Site VPN](https://docs.paloaltonetworks.com/pan-os/9-0/pan-os-admin/vpns/set-up-site-to-site-vpn#)
-
-Setting up tunnel monitor:
-
-[Set Up Tunnel Monitoring](https://docs.paloaltonetworks.com/pan-os/7-1/pan-os-admin/vpns/set-up-tunnel-monitoring.html)
-
-IKE gateway or IPsec tunnel operations:
-
-<a href="https://docs.paloaltonetworks.com/pan-os/9-0/pan-os-admin/vpns/set-up-site-to-site-vpn/enabledisable-refresh-or-restart-an-ike-gateway-or-ipsec-tunnel#" target="_blank">Enable/Disable, Refresh, or Restart an IKE Gateway or IPsec Tunnel</a>
vmware-cloudsimple Horizon Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/horizon-guide.md
- Title: Azure VMware Solution by CloudSimple - Use Private Cloud site to host a virtual desktop infrastructure using VMware Horizon
-description: Describes how you can use your CloudSimple Private Cloud site to host a virtual desktop infrastructure using VMware Horizon
-- Previously updated : 08/20/2019 ------
-# Use CloudSimple Private Cloud site to host a virtual desktop infrastructure using VMware Horizon
-
-You can use your CloudSimple Private Cloud site to host a virtual desktop infrastructure (VDI) using VMware Horizon 7.x. The following figure shows the logical solution architecture for the VDI.
-
-![Horizon deployment](media/horizon-deployment.png)
-
-With this solution, you have full control over Horizon View Manager and App Volume. The familiar UI, API, and CLI interfaces enable use of your existing scripts and tools.
-
-The CloudSimple solution requires you to do the following:
-
-* Install, configure, and manage VMware Horizon 7.x in your Private Cloud.
-* Provide your own Horizon licenses.
-
-## Deploy the solution
-
-The following sections describe how to deploy a VDI solution using Horizon in your Private Cloud.
-
-1. [Verify that VMware product versions are compatible](#verify-that-vmware-product-versions-are-compatible)
-2. [Estimate the size of your desktop environment](#estimate-the-size-of-your-desktop-environment)
-3. [Create a Private Cloud for your environment](#create-a-private-cloud-for-your-environment)
-4. [Install VMware Horizon in your Private Cloud](#install-vmware-horizon-in-your-private-cloud)
-
-### Verify that VMware product versions are compatible
-
-* Verify that your current and planned versions of Horizon, App Volumes, Unified Access Gateway, and User Environment Manager are compatible with each other and with vCenter and PSC in the Private Cloud. For compatibility information, see [VMware Compatibility Matrix for Horizon 7.5](https://www.vmware.com/resources/compatibility/sim/interop_matrix.php#interop&260=2877&0=).
-* To find out the current versions of vCenter and PSC in your Private Cloud, go to **Resources** in the [CloudSimple portal](access-cloudsimple-portal.md), select your Private Cloud, and click the **vSphere Management Network** tab.
-
-![vCenter and PSC versions](media/private-cloud-vsphere-versions.png)
-
-### Estimate the size of your desktop environment
-
-* Verify that your identified configuration is within VMware operational limits.
-* Estimate the resources that are needed for all your desktops and your Horizon management components.
-
-### Create a Private Cloud for your environment
-
-1. Create a Private Cloud from the CloudSimple portal by following the instructions in [Configure a Private Cloud environment](quickstart-create-private-cloud.md). CloudSimple creates a default vCenter user named 'cloudowner' in every newly created Private Cloud. For details on the default Private Cloud user and permission model, see [Learn the Private Cloud permissions model](learn-private-cloud-permissions.md).
-2. Create a VLAN in your Private Cloud for the Horizon management plane and assign it a subnet CIDR. For instructions, see [Create and manage VLANs/Subnets](create-vlan-subnet.md). This is the network where all the solution components (Unified Access Gateway, Connection Server, App Volume Server, and User Environment Manager servers) will be installed.
-3. Decide if you want to use an external identity provider with your Private Cloud vCenter. If yes, choose one of these options:
- * Use your on-premises Active Directory as the external identity provider. For instructions, see [vCenter Identity Sources](set-vcenter-identity.md).
- * Set up an Active Directory server in the Private Cloud in Horizon management plane VLAN to use as your external identity provider. For instructions, see [vCenter Identity Sources](set-vcenter-identity.md).
- * Set up a DHCP and DNS server in Horizon management plane VLAN in the Private Cloud. For instructions, see [Set up DNS and DHCP applications and workloads in your CloudSimple Private Cloud](dns-dhcp-setup.md).
-4. Configure DNS forwarding on the DNS server installed in the Private Cloud. For instructions, see [Create a Conditional Forwarder](on-premises-dns-setup.md#create-a-conditional-forwarder).
-
-### Install VMware Horizon in your Private Cloud
-
-The following deployment diagram depicts a Horizon solution deployed in a Private Cloud. Unified Access Gateway, AD/DC, View, and App Volume Server are installed in user-created VLAN 234. Unified Access Gateway has an assigned public IP address that is reachable from the Internet. Horizon desktop pool VMs are deployed in VLAN 235 to provide additional isolation and security.
-
-![Horizon deployment in the Private Cloud](media/horizon-private-cloud.png)
-
-The following sections outline the instructions to set up a deployment similar to the one that is depicted in the figure. Before you begin, verify that you have the following:
-
-* A Private Cloud created using the CloudSimple portal with sufficient capacity to run your desktop pools.
-* Sufficient bandwidth between your on-premises environment and the Private Cloud environment to support the network traffic for your desktops.
-* A Site-to-Site VPN tunnel set up between your on-premises datacenter and the Private Cloud.
-* IP reachability from end-user subnets in your on-premises environment to the CloudSimple Private Cloud subnets.
-* AD/DHCP/DNS installed for your Private Cloud.
-
-#### CloudSimple portal: Create a dedicated VLAN/subnet for desktop pools
-
-Create a VLAN for the Horizon desktop pools and assign it a subnet CIDR. For instructions, see [Create and manage VLANs/Subnets](create-vlan-subnet.md). This is the network where all the desktop virtual machines will run.
-
-Follow standard security best practices to secure your Horizon deployment:
-
-* Allow only desktop RDP traffic / SSH traffic to your desktop VMs.
-* Allow only management traffic between Horizon management plane VLAN and desktop pool VLAN.
-* Allow only management traffic from the on-premises network.
-
-You can enforce these best practices by configuring [firewall rules](firewall.md) from the CloudSimple portal.
-
-#### CloudSimple portal: Configure firewall rules to secure Horizon management plane
-
-Set up the following rules in the CloudSimple portal. For instructions, see [Set up firewall tables and rules](firewall.md).
-
-1. Configure firewall rules in the CloudSimple N-S firewall to allow communication between on-premises subnets and Horizon management VLAN so that only the network ports listed in the VMware document [Horizon port list](https://docs.vmware.com/en/VMware-Horizon-7/7.1/com.vmware.horizon-client-agent.security.doc/GUID-52807839-6BB0-4727-A9C7-EA73DE61ADAB.html) are allowed.
-
-2. Create E-W firewall rules between the Horizon management VLAN and desktop pool VLAN in the Private Cloud.
-
-#### CloudSimple portal: Create a public IP address for Unified Access Gateway
-
-Create a public IP address for the Unified Access Gateway appliance to enable desktop client connections from the internet. For instructions, see [Allocate public IP addresses](public-ips.md).
-
-When the setup is complete, the public IP address is assigned and listed on the Public IPs page.
-
-#### CloudSimple portal: Escalate privileges
-
-The default 'cloudowner' user doesn't have sufficient privileges in the Private Cloud vCenter to install Horizon, so the user's vCenter privileges must be escalated. For more information, see [Escalate privileges](escalate-private-cloud-privileges.md).
-
-#### vCenter UI: Create a user in Private Cloud for Horizon installation
-
-1. Sign in to vCenter using the 'cloudowner' user credentials.
-2. Create a new user, 'horizon-soln-admin', in vCenter and add the user to the administrators group in vCenter.
-3. Sign out of vCenter as the 'cloudowner' user and sign in as the 'horizon-soln-admin' user.
-
-#### vCenter UI: Install VMware Horizon
-
-As mentioned in the earlier logical architecture section, the Horizon solution has the following components:
-
-* VMware Horizon View
-* VMware Unified Access Gateway
-* VMware App Volume Manager
-* VMware User Environment Manager
-
-Install the components as follows:
-
-1. Install and configure Unified Access Gateway by following the instructions provided in the VMware document [Deploying and Configuring VMware Unified Access Gateway](https://docs.vmware.com/en/Unified-Access-Gateway/3.3.1/com.vmware.uag-331-deploy-config.doc/GUID-F5CE0D5E-BE85-4FA5-BBCF-0F86C9AB8A70.html).
-
-2. Install Horizon View in the Private Cloud by following the instructions in [View Installation Guide](https://docs.vmware.com/en/VMware-Horizon-7/7.4/horizon-installation/GUID-37D39B4F-5870-4188-8B11-B6C41AE9133C.html).
-
-3. Install App Volume Manager by following the instructions in [Install and Configure VMware App Volumes](https://docs.vmware.com/en/VMware-App-Volumes/4/com.vmware.appvolumes.install.doc/GUID-3F92761D-9F83-4610-978C-4DAA55E07D14.html).
-
-4. Install and configure User Environment Manager by following the instructions in [About Installing and Configuring VMware User Environment Manager](https://docs.vmware.com/en/VMware-User-Environment-Manager/9.4/com.vmware.user.environment.manager-install-config/GUID-DBBC82E4-483F-4B28-9D49-4D28E08715BC.html).
-
-#### File a support request to upload VMware Horizon pre-packaged app volumes
-
-As a part of the installation process, App Volume Manager uses pre-packaged volumes to
-provision app stacks and writable volumes. These volumes serve as templates
-for app stacks and writable volumes.
-
-Uploading the volumes to the Private Cloud datastore requires the ESXi root password. For assistance, submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest). Attach the AppVolumes installer bundle so that CloudSimple support personnel can upload the templates to your Private Cloud environment.
-
-#### CloudSimple portal: De-escalate privileges
-
-You can now [de-escalate the privileges](escalate-private-cloud-privileges.md#de-escalate-privileges) of the 'cloudowner' user.
-
-## Ongoing management of your Horizon solution
-
-You have full control over Horizon and App Volume Manager software in your Private Cloud environment and are expected to perform the necessary software lifecycle management. Ensure that any new versions of software are compatible with the Private Cloud vCenter and PSC before updating or upgrading Horizon or App Volume.
vmware-cloudsimple Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/index.md
- Title: Azure VMware Solution by CloudSimple
-description: Learn about Azure VMware Solutions by CloudSimple, including an overview, quickstarts, concepts, tutorials, and how-to guides.
-- Previously updated : 08/20/2019 -----
-keywords: vms support, azure vmware solution by cloudsimple, cloudsimple azure, vms tools, vmware documentation
-
-# Azure VMware Solution by CloudSimple
-
-Welcome to the one-stop portal for help with Azure VMware Solution by CloudSimple.
-On this documentation site, you can learn about the following topics:
-
-## Overview
-
-Learn more about Azure VMware Solution by CloudSimple
-
-* Learn about the features, benefits, and usage scenarios at [What is Azure VMware Solution by CloudSimple](cloudsimple-vmware-solutions-overview.md)
-* Review the [key concepts for administration](key-concepts.md)
-
-## Quickstart
-
-Learn how to get started with the solution
-
-* Understand how to [initialize the service and purchase capacity](quickstart-create-cloudsimple-service.md)
-* Learn how to create a new VMware environment at [Configure a Private Cloud Environment](quickstart-create-private-cloud.md)
-* Learn how to unify management across VMware and Azure by reviewing the article [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-
-## Concepts
-
-Learn about the following concepts
-
-* A [CloudSimple Service](cloudsimple-service.md) (also known as "Azure VMware Solution by CloudSimple - Service"). This resource must be created once per region.
-* Purchase capacity for your environment by creating one or more [CloudSimple Node](cloudsimple-node.md) resources. These resources are also referred to as "Azure VMware Solution by CloudSimple - Node".
-* Initialize and configure your VMware environment using the [Private Clouds](cloudsimple-private-cloud.md).
-* Unify management using [CloudSimple Virtual Machines](cloudsimple-virtual-machines.md) (also known as "Azure VMware Solution by CloudSimple - Virtual machine").
-* Design the underlay network using [VLANs/subnets](cloudsimple-vlans-subnets.md).
-* Segment and secure your underlay network using the [Firewall Table](cloudsimple-firewall-tables.md) resource.
-* Get secure access to your VMware environments over the WAN using [VPN Gateways](cloudsimple-vpn-gateways.md).
-* Enable public access for workloads using [Public IP](cloudsimple-public-ip-address.md).
-* Establish connectivity to Azure Virtual Networks and On-premises networks using [Azure Network Connection](cloudsimple-azure-network-connection.md).
-* Configure alert email targets using [Account Management](cloudsimple-account.md).
-* View logs of user and system activity using [Activity Management](cloudsimple-activity.md) screens.
-* Understand the various [VMware components](vmware-components.md).
-
-## Tutorials
-
-Learn how to perform common tasks, such as:
-
-* [Create a CloudSimple Service](create-cloudsimple-service.md), once per region where you want to deploy VMware environments.
-* Manage core service functionality in the [CloudSimple portal](access-cloudsimple-portal.md).
-* Enable capacity and optimize billing for your infrastructure by [Purchasing CloudSimple nodes](create-nodes.md).
-* Manage VMware environment configurations using Private Clouds. You can [create](create-private-cloud.md), [manage](manage-private-cloud.md), [expand](expand-private-cloud.md), or [shrink](shrink-private-cloud.md) Private Clouds.
-* Enable unified management by [mapping Azure Subscriptions](azure-subscription-mapping.md).
-* Monitor user and system activity using the [Activity pages](monitor-activity.md).
-* Configure networking for your environments by [creating and managing subnets](create-vlan-subnet.md).
-* Segment and secure your environment using [Firewall tables and rules](firewall.md).
-* Enable inbound internet access for workloads by [Allocating Public IPs](public-ips.md).
-* Enable connectivity from your internal networks or client workstations by [setting-up VPN](vpn-gateway.md).
-* Enable communications from your [on-premises environments](on-premises-connection.md) as well as to [Azure Virtual networks](virtual-network-connection.md).
-* Configure alert targets and view total purchased capacity in the [account summary](account.md)
-* View [users](users.md) that have accessed the CloudSimple portal.
-* Manage VMware virtual machines from the Azure portal:
- * [Create virtual machines](azure-create-vm.md) in the Azure portal.
- * [Manage virtual machines](azure-manage-vm.md) that you have created.
-
-## How-to Guides
-
-These guides describe solutions to goals such as:
-
-* [Securing your environment](private-cloud-secure.md)
-* Install third-party tools, enable additional users, and configure an external authentication source in vSphere using [privilege escalation](escalate-privileges.md).
-* Configure access to various VMware services by [configuring on-premises DNS](on-premises-dns-setup.md).
-* Enable name and address allocation for your workloads by [configuring workload DNS and DHCP](dns-dhcp-setup.md).
-* Understand how the service ensures security and functionality in your platform through service [updates and upgrades](vmware-components.md#updates-and-upgrades).
-* Save TCO on backup by creating a sample backup architecture with a [third-party backup software such as Veeam](backup-workloads-veeam.md).
-* Create a secure environment by enabling encryption at rest with a [third-party KMS encryption software](vsan-encryption.md).
-* Extend Azure Active Directory (Azure AD) management into VMware by configuring the [Azure AD identity source](azure-ad.md).
vmware-cloudsimple Key Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/key-concepts.md
- Title: Key concepts for administering Azure VMware Solution by CloudSimple-
-description: Describes key concepts for administering Azure VMware Solutions by CloudSimple
-- Previously updated : 04/24/2019 -----
-# Key concepts for administration of Azure VMware Solutions by CloudSimple
-
-Administering Azure VMware Solutions by CloudSimple requires an understanding of the following concepts:
-
-* CloudSimple service, which is displayed as Azure VMware Solutions by CloudSimple - Service
-* CloudSimple node, which is displayed as Azure VMware Solutions by CloudSimple - Node
-* CloudSimple private cloud
-* Service networking
-* CloudSimple virtual machine, which is displayed as Azure VMware Solutions by CloudSimple - Virtual machine
-
-## CloudSimple service
-
-With the CloudSimple service, you can create and manage all resources associated with VMware Solutions by CloudSimple from the Azure portal. Create a service resource in every region where you intend to use the service.
-
-Learn more about the [CloudSimple service](cloudsimple-service.md).
-
-## CloudSimple node
-
-A CloudSimple node is a dedicated, bare-metal, hyperconverged compute and storage host into which the VMware ESXi hypervisor is deployed. This node is then incorporated into the VMware vSphere, vCenter, vSAN, and NSX platforms. CloudSimple networking services and edge networking services are also enabled. Each node serves as a unit of compute and storage capacity that you can provision to create [CloudSimple private clouds](cloudsimple-private-cloud.md). You provision or reserve nodes in a region where the CloudSimple service is available.
-
-Learn more about [CloudSimple nodes](cloudsimple-node.md).
-
-## CloudSimple private cloud
-
-A CloudSimple private cloud is an isolated VMware stack environment managed by a vCenter server in its own management domain. The VMware stack includes ESXi hosts, vSphere, vCenter, vSAN, and NSX. The stack runs on dedicated nodes (dedicated and isolated bare-metal hardware) and is consumed by users through native VMware tools that include vCenter and NSX Manager. Dedicated nodes are deployed in Azure locations and are managed by Azure. Each private cloud can be segmented and secured by using networking services such as VLANs/subnets and firewall tables. Connections to your on-premises environment and the Azure network are created by using secure private VPN and Azure ExpressRoute connections.
-
-Learn more about [CloudSimple private cloud](cloudsimple-private-cloud.md).
-
-## Service networking
-
-The CloudSimple service provides a network per region where your CloudSimple service is deployed. The network is a single Layer 3 address space with routing enabled by default. All private clouds and subnets created in this region communicate with each other without any additional configuration. You create distributed port groups on the vCenter by using the VLANs. You can use the following network features to configure and secure your workload resources in your private cloud:
-
-* [VLANs and subnets](cloudsimple-vlans-subnets.md)
-* [Firewall tables](cloudsimple-firewall-tables.md)
-* [VPN gateways](cloudsimple-vpn-gateways.md)
-* [Public IP](cloudsimple-public-ip-address.md)
-* [Azure network connection](cloudsimple-azure-network-connection.md)
-
-## CloudSimple virtual machine
-
-With the CloudSimple service, you can manage VMware virtual machines from the Azure portal. One or more clusters or resource pools from your vSphere environment can be mapped to the subscription on which the service is created.
-
-Learn more about:
-
-* [CloudSimple virtual machines](cloudsimple-virtual-machines.md)
-* [Azure subscription mapping](./azure-subscription-mapping.md)
vmware-cloudsimple Learn Private Cloud Permissions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/learn-private-cloud-permissions.md
- Title: Azure VMware Solution by CloudSimple - Private Cloud permission model
-description: Describes the CloudSimple Private Cloud permission model, groups, and categories
-- Previously updated : 08/16/2019 ------
-# CloudSimple Private Cloud permission model of VMware vCenter
-
-CloudSimple retains full administrative access to the Private Cloud environment. Each CloudSimple customer is granted sufficient administrative privileges to deploy and manage the virtual machines in their environment. If needed, you can temporarily escalate your privileges to perform administrative functions.
-
-## Cloud Owner
-
-When you create a Private Cloud, a **CloudOwner** user is created in the vCenter Single Sign-On domain with **Cloud-Owner-Role** access to manage objects in the Private Cloud. This user can also set up additional [vCenter identity sources](set-vcenter-identity.md) and add other users to the Private Cloud vCenter.
-
-> [!NOTE]
-> The default user for your CloudSimple Private Cloud vCenter is cloudowner@cloudsimple.local when a Private Cloud is created.
-
-## User Groups
-
-A group called **Cloud-Owner-Group** is created during the deployment of a Private Cloud. Users in this group can administer various parts of the vSphere environment on the Private Cloud. This group is automatically given **Cloud-Owner-Role** privileges, and the **CloudOwner** user is added as a member of this group. CloudSimple creates additional groups with limited privileges for ease of management. You can add any user to these pre-created groups and the privileges defined below are automatically assigned to the users in the groups.
-
-### Pre-created Groups
-
-| Group Name | Purpose | Role |
-| -- | -- | -- |
-| Cloud-Owner-Group | Members of this group have administrative privileges to the Private Cloud vCenter | [Cloud-Owner-Role](#cloud-owner-role) |
-| Cloud-Global-Cluster-Admin-Group | Members of this group have administrative privileges on the Private Cloud vCenter Cluster | [Cloud-Cluster-Admin-Role](#cloud-cluster-admin-role) |
-| Cloud-Global-Storage-Admin-Group | Members of this group can manage storage on the Private Cloud vCenter | [Cloud-Storage-Admin-Role](#cloud-storage-admin-role) |
-| Cloud-Global-Network-Admin-Group | Members of this group can manage network and distributed port groups on the Private Cloud vCenter | [Cloud-Network-Admin-Role](#cloud-network-admin-role) |
-| Cloud-Global-VM-Admin-Group | Members of this group can manage virtual machines on the Private Cloud vCenter | [Cloud-VM-Admin-Role](#cloud-vm-admin-role) |
-
-To grant individual users permissions to manage the Private Cloud, create user accounts and add them to the appropriate groups.
-
-> [!CAUTION]
-> New users must be added only to *Cloud-Owner-Group*, *Cloud-Global-Cluster-Admin-Group*, *Cloud-Global-Storage-Admin-Group*, *Cloud-Global-Network-Admin-Group*, or *Cloud-Global-VM-Admin-Group*. Users added to the *Administrators* group will be removed automatically. Only service accounts should be added to the *Administrators* group, and service accounts must not be used to sign in to the vSphere web UI.
-
-## List of vCenter privileges for default roles
-
-### Cloud-Owner-Role
-
-| **Category** | **Privilege** |
-|-|--|
-| **Alarms** | Acknowledge alarm <br> Create alarm <br> Disable alarm action <br> Modify alarm <br> Remove alarm <br> Set alarm status |
-| **Permissions** | Modify permission |
-| **Content Library** | Add library item <br> Create local library <br> Create subscribed library <br> Delete library item <br> Delete local library <br> Delete subscribed library <br> Download files <br> Evict library item <br> Evict subscribed library <br> Import storage <br> Probe subscription information <br> Read storage <br> Sync library item <br> Sync subscribed library <br> Type introspection <br> Update configuration settings <br> Update files <br> Update library <br> Update library item <br> Update local library <br> Update subscribed library <br> View configuration settings |
-| **Cryptographic operations** | Add disk <br> Clone <br> Decrypt <br> Direct Access <br> Encrypt <br> Encrypt new <br> Manage KMS <br> Manage encryption policies <br> Manage keys <br> Migrate <br> Recrypt <br> Register VM <br> Register host |
-| **dvPort group** | Create <br> Delete <br> Modify <br> Policy operation <br> Scope operation |
-| **Datastore** | Allocate space <br> Browse datastore <br> Configure datastore <br> Low-level file operations <br> Move datastore <br> Remove datastore <br> Remove file <br> Rename datastore <br> Update virtual machine files <br> Update virtual machine metadata |
-| **ESX Agent Manager** | Config <br> Modify <br> View |
-| **Extension** | Register extension <br> Unregister extension <br> Update extension |
-| **External stats provider**| Register <br> Unregister <br> Update |
-| **Folder** | Create folder <br> Delete folder <br> Move folder <br> Rename folder |
-| **Global** | Cancel task <br> Capacity planning <br> Diagnostics <br> Disable methods <br> Enable methods <br> Global tag <br> Health <br> Licenses <br> Log event <br> Manage custom attributes <br> Proxy <br> Script action <br> Service managers <br> Set custom attribute <br> System tag |
-| **Health update provider** | Register <br> Unregister <br> Update |
-| **Host > Configuration** | Storage partition configuration |
-| **Host > Inventory** | Modify cluster |
-| **vSphere Tagging** | Assign or Unassign vSphere Tag <br> Create vSphere Tag <br> Create vSphere Tag Category <br> Delete vSphere Tag <br> Delete vSphere Tag Category <br> Edit vSphere Tag <br> Edit vSphere Tag Category <br> Modify UsedBy Field For Category <br> Modify UsedBy Field For Tag |
-| **Network** | Assign network <br> Configure <br> Move network <br> Remove |
-| **Performance** | Modify intervals |
-| **Host profile** | View |
-| **Resource** | Apply recommendation <br> Assign vApp to resource pool <br> Assign virtual machine to resource pool <br> Create resource pool <br> Migrate powered off virtual machine <br> Migrate powered on virtual machine <br> Modify resource pool <br> Move resource pool <br> Query vMotion <br> Remove resource pool <br> Rename resource pool |
-| **Scheduled task** | Create tasks <br> Modify task <br> Remove task <br> Run task |
-| **Sessions** | Impersonate user <br> Message <br> Validate session <br> View and stop sessions |
-| **Datastore cluster** | Configure a datastore cluster |
-| **Profile-driven storage** | Profile-driven storage update <br> Profile-driven storage view |
-| **Storage views** | Configure service <br> View |
-| **Tasks** | Create task <br> Update task |
-| **Transfer service**| Manage <br> Monitor |
-| **vApp** | Add virtual machine <br> Assign resource pool <br> Assign vApp <br> Clone <br> Create <br> Delete <br> Export <br> Import <br> Move <br> Power off <br> Power on <br> Rename <br> Suspend <br> Unregister <br> View OVF environment <br> vApp application configuration <br> vApp instance configuration <br> vApp managedBy configuration <br> vApp resource configuration |
-| **VRMPolicy** | Query VRMPolicy <br> Update VRMPolicy |
-| **Virtual machine > Configuration** | Add existing disk <br> Add new disk <br> Add or remove device <br> Advanced <br> Change CPU count <br> Change resource <br> Configure managedBy <br> Disk change tracking <br> Disk lease <br> Display connection settings <br> Extend virtual disk <br> Host USB device <br> Memory <br> Modify device settings <br> Query Fault Tolerance compatibility <br> Query unowned files <br> Raw device <br> Reload from path <br> Remove disk <br> Rename <br> Reset guest information <br> Set annotation <br> Settings <br> Swapfile placement <br> Toggle fork parent <br> Unlock virtual machine <br> Upgrade virtual machine compatibility |
-| **Virtual machine > Guest operations** | Guest operation alias modification <br> Guest operation alias query <br> Guest operation modifications <br> Guest operation program execution <br> Guest operation queries |
-| **Virtual machine > Interaction** | Answer question <br> Backup operation on virtual machine <br> Configure CD media <br> Configure floppy media <br> Console interaction <br> Create screenshot <br> Defragment all disks <br> Device connection <br> Drag and drop <br> Guest operating system management by VIX API <br> Inject USB HID scan codes <br> Pause or Unpause <br> Perform wipe or shrink operations <br> Power off <br> Power on <br> Record session on virtual machine <br> Replay session on virtual machine <br> Reset <br> Resume Fault Tolerance <br> Suspend <br> Suspend Fault Tolerance <br> Test failover <br> Test restart Secondary VM <br> Turn off Fault Tolerance <br> Turn on Fault Tolerance <br> VMware Tools install |
-| **Virtual machine > Inventory** | Create from existing <br> Create new <br> Move <br> Register <br> Remove <br> Unregister |
-| **Virtual machine > Provisioning** | Allow disk access <br> Allow file access <br> Allow read-only disk access <br> Allow virtual machine download <br> Allow virtual machine files upload <br> Clone template <br> Clone virtual machine <br> Create template from virtual machine <br> Customize <br> Deploy template <br> Mark as template <br> Mark as virtual machine <br> Modify customization specification <br> Promote disks <br> Read customization specifications |
-| **Virtual machine > Service configuration** | Allow notifications <br> Allow polling of global event notifications <br> Manage service configurations <br> Modify service configuration <br> Query service configurations <br> Read service configuration |
-| **Virtual machine > Snapshot management** | Create snapshot <br> Remove snapshot <br> Rename snapshot <br> Revert to snapshot |
-| **Virtual machine > vSphere Replication** | Configure replication <br> Manage replication <br> Monitor replication |
-| **vService** | Create dependency <br> Destroy dependency <br> Reconfigure dependency configuration <br> Update dependency |
-
-### Cloud-Cluster-Admin-Role
-
-| **Category** | **Privilege** |
-|-|--|
-| **Datastore** | Allocate space <br> Browse datastore <br> Configure datastore <br> Low-level file operations <br> Remove datastore <br> Rename datastore <br> Update virtual machine files <br> Update virtual machine metadata |
-| **Folder** | Create folder <br> Delete folder <br> Move folder <br> Rename folder |
-| **Host > Configuration** | Storage partition configuration |
-| **vSphere Tagging** | Assign or Unassign vSphere Tag <br> Create vSphere Tag <br> Create vSphere Tag Category <br> Delete vSphere Tag <br> Delete vSphere Tag Category <br> Edit vSphere Tag <br> Edit vSphere Tag Category <br> Modify UsedBy Field For Category <br> Modify UsedBy Field For Tag |
-| **Network** | Assign network |
-| **Resource** | Apply recommendation <br> Assign vApp to resource pool <br> Assign virtual machine to resource pool <br> Create resource pool <br> Migrate powered off virtual machine <br> Migrate powered on virtual machine <br> Modify resource pool <br> Move resource pool <br> Query vMotion <br> Remove resource pool <br> Rename resource pool |
-| **vApp** | Add virtual machine <br> Assign resource pool <br> Assign vApp <br> Clone <br> Create <br> Delete <br> Export <br> Import <br> Move <br> Power off <br> Power on <br> Rename <br> Suspend <br> Unregister <br> View OVF environment <br> vApp application configuration <br> vApp instance configuration <br> vApp managedBy configuration <br> vApp resource configuration |
-| **VRMPolicy** | Query VRMPolicy <br> Update VRMPolicy |
-| **Virtual machine > Configuration** | Add existing disk <br> Add new disk <br> Add or remove device <br> Advanced <br> Change CPU count <br> Change resource <br> Configure managedBy <br> Disk change tracking <br> Disk lease <br> Display connection settings <br> Extend virtual disk <br> Host USB device <br> Memory <br> Modify device settings <br> Query Fault Tolerance compatibility <br> Query unowned files <br> Raw device <br> Reload from path <br> Remove disk <br> Rename <br> Reset guest information <br> Set annotation <br> Settings <br> Swapfile placement <br> Toggle fork parent <br> Unlock virtual machine <br> Upgrade virtual machine compatibility |
-| **Virtual machine > Guest operations** | Guest operation alias modification <br> Guest operation alias query <br> Guest operation modifications <br> Guest operation program execution <br> Guest operation queries |
-| **Virtual machine > Interaction** | Answer question <br> Backup operation on virtual machine <br> Configure CD media <br> Configure floppy media <br> Console interaction <br> Create screenshot <br> Defragment all disks <br> Device connection <br> Drag and drop <br> Guest operating system management by VIX API <br> Inject USB HID scan codes <br> Pause or Unpause <br> Perform wipe or shrink operations <br> Power off <br> Power on <br> Record session on virtual machine <br> Replay session on virtual machine <br> Reset <br> Resume Fault Tolerance <br> Suspend <br> Suspend Fault Tolerance <br> Test failover <br> Test restart Secondary VM <br> Turn off Fault Tolerance <br> Turn on Fault Tolerance <br> VMware Tools install
-| **Virtual machine > Inventory** | Create from existing <br> Create new <br> Move <br> Register <br> Remove <br> Unregister |
-| **Virtual machine > Provisioning** | Allow disk access <br> Allow file access <br> Allow read-only disk access <br> Allow virtual machine download <br> Allow virtual machine files upload <br> Clone template <br> Clone virtual machine <br> Create template from virtual machine <br> Customize <br> Deploy template <br> Mark as template <br> Mark as virtual machine <br> Modify customization specification <br> Promote disks <br> Read customization specifications |
-| **Virtual machine > Service configuration** | Allow notifications <br> Allow polling of global event notifications <br> Manage service configurations <br> Modify service configuration <br> Query service configurations <br> Read service configuration
-| **Virtual machine > Snapshot management** | Create snapshot <br> Remove snapshot <br> Rename snapshot <br> Revert to snapshot |
-| **Virtual machine > vSphere Replication** | Configure replication <br> Manage replication <br> Monitor replication |
-| **vService** | Create dependency <br> Destroy dependency <br> Reconfigure dependency configuration <br> Update dependency |
-
-### Cloud-Storage-Admin-Role
-
-| **Category** | **Privilege** |
-|-|--|
-| **Datastore** | Allocate space <br> Browse datastore <br> Configure datastore <br> Low-level file operations <br> Remove datastore <br> Rename datastore <br> Update virtual machine files <br> Update virtual machine metadata |
-| **Host > Configuration** | Storage partition configuration |
-| **Datastore cluster** | Configure a datastore cluster |
-| **Profile-driven storage** | Profile-driven storage update <br> Profile-driven storage view |
-| **Storage views** | Configure service <br> View |
-
-### Cloud-Network-Admin-Role
-
-| **Category** | **Privilege** |
-|-|--|
-| **dvPort group** | Create <br> Delete <br> Modify <br> Policy operation <br> Scope operation |
-| **Network** | Assign network <br> Configure <br> Move network <br> Remove |
-| **Virtual machine > Configuration** | Modify device settings |
-
-### Cloud-VM-Admin-Role
-
-| **Category** | **Privilege** |
-|-|--|
-| **Datastore** | Allocate space <br> Browse datastore |
-| **Network** | Assign network |
-| **Resource** | Assign virtual machine to resource pool <br> Migrate powered off virtual machine <br> Migrate powered on virtual machine
-| **vApp** | Export <br> Import |
-| **Virtual machine > Configuration** | Add existing disk <br> Add new disk <br> Add or remove device <br> Advanced <br> Change CPU count <br> Change resource <br> Configure managedBy <br> Disk change tracking <br> Disk lease <br> Display connection settings <br> Extend virtual disk <br> Host USB device <br> Memory <br> Modify device settings <br> Query Fault Tolerance compatibility <br> Query unowned files <br> Raw device <br> Reload from path <br> Remove disk <br> Rename <br> Reset guest information <br> Set annotation <br> Settings <br> Swapfile placement <br> Toggle fork parent <br> Unlock virtual machine <br> Upgrade virtual machine compatibility |
-| **Virtual machine > Guest operations** | Guest operation alias modification <br> Guest operation alias query <br> Guest operation modifications <br> Guest operation program execution <br> Guest operation queries |
-| **Virtual machine > Interaction** | Answer question <br> Backup operation on virtual machine <br> Configure CD media <br> Configure floppy media <br> Console interaction <br> Create screenshot <br> Defragment all disks <br> Device connection <br> Drag and drop <br> Guest operating system management by VIX API <br> Inject USB HID scan codes <br> Pause or Unpause <br> Perform wipe or shrink operations <br> Power off <br> Power on <br> Record session on virtual machine <br> Replay session on virtual machine <br> Reset <br> Resume Fault Tolerance <br> Suspend <br> Suspend Fault Tolerance <br> Test failover <br> Test restart Secondary VM <br> Turn off Fault Tolerance <br> Turn on Fault Tolerance <br> VMware Tools install |
-| **Virtual machine > Inventory** | Create from existing <br> Create new <br> Move <br> Register <br> Remove <br> Unregister |
-| **Virtual machine > Provisioning** | Allow disk access <br> Allow file access <br> Allow read-only disk access <br> Allow virtual machine download <br> Allow virtual machine files upload <br> Clone template <br> Clone virtual machine <br> Create template from virtual machine <br> Customize <br> Deploy template <br> Mark as template <br> Mark as virtual machine <br> Modify customization specification <br> Promote disks <br> Read customization specifications |
-| **Virtual machine > Service configuration** | Allow notifications <br> Allow polling of global event notifications <br> Manage service configurations <br> Modify service configuration <br> Query service configurations <br> Read service configuration |
-| **Virtual machine > Snapshot management** | Create snapshot <br> Remove snapshot <br> Rename snapshot <br> Revert to snapshot |
-| **Virtual machine > vSphere Replication** | Configure replication <br> Manage replication <br> Monitor replication |
-| **vService** | Create dependency <br> Destroy dependency <br> Reconfigure dependency configuration <br> Update dependency |
vmware-cloudsimple Load Balancers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/load-balancers.md
- Title: Azure VMware Solution by CloudSimple - Choose a load balancing solution for CloudSimple Private Clouds
-description: Describes the load balancing options for deploying an application in a Private Cloud
-- Previously updated : 08/20/2019 ------
-# Choose a load balancing solution for CloudSimple Private Clouds
-
-When deploying an application in a CloudSimple Private Cloud, you can choose any of several options for load balancing.
-
-You can choose a virtual or software-based load balancer in your CloudSimple Private Cloud, or use an Azure L7 load balancer running in your Azure subscription to front the web tier VMs running in the CloudSimple Private Cloud. Here are a few options:
-
-## Virtual load balancers
-
-You can deploy virtual load balancer appliances in your VMware environment through the vCenter interface and configure them to front end your application traffic.
-
-Some popular vendors are:
-
-* NGINX: http://nginx.org/en/docs/http/load_balancing.html
-* F5 BIG-IP Local Traffic Manager
-* Citrix ADC: https://www.citrix.com/products/citrix-adc/
-
-## Azure L7 load balancer
-
-When you use Azure Application Gateway as an L7 load balancer for your application running in a Private Cloud, you don't need to manage the load balancer software. The load balancer software is managed by Azure. All the web tier VMs in the Private Cloud use private IP addresses and don't require additional NAT rules or public IP addresses to resolve names. Web tier VMs communicate with the Azure Application Gateway over a private, low-latency, high-bandwidth connection.
-
-To learn more about how to configure this solution, refer to the solution guide on using Azure Application Gateway as an L7 load balancer.
-
-## Azure internal load balancer
-
-If you choose to run your application in a hybrid deployment, where the web front-end tier runs in an Azure vNet in your Azure subscription and the DB tier of the application runs in VMware VMs in a CloudSimple Private Cloud, you can use the Azure internal load balancer (an L4 load balancer) in front of your DB tier VMs for traffic management.
-
-To learn more, see Azure [Internal Load Balancer](../load-balancer/components.md#frontend-ip-configurations) documentation.
-
-## Global server load balancer
-
-If you're looking for a DNS-based load balancer, you can either use third-party solutions available in Azure Marketplace or the native Azure solution.
-
-Azure Traffic Manager is a DNS-based traffic load balancer that enables you to distribute traffic optimally to services across global Azure regions and on-premises, while providing high availability and responsiveness. To learn more, see Azure [Traffic Manager](../traffic-manager/traffic-manager-configure-geographic-routing-method.md) documentation.
vmware-cloudsimple Manage Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/manage-private-cloud.md
- Title: Manage Azure VMware Solution by CloudSimple Private Cloud
-description: Describes the capabilities available to manage your CloudSimple Private Cloud resources and activity
-- Previously updated : 06/10/2019 ------
-# Manage Private Cloud resources and activity
-
-Private Clouds are managed from the CloudSimple portal. Check the status, available resources, activity, and other settings of your Private Cloud from the CloudSimple portal.
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Access the CloudSimple portal
-
-Access the [CloudSimple portal](access-cloudsimple-portal.md).
-
-## View the list of Private Clouds
-
-The **Private Clouds** tab on the **Resources** page lists all Private Clouds in your subscription. Information includes the name, number of vSphere clusters, location, current state of the private cloud, and resource information.
-
-![Private Cloud page](media/manage-private-cloud.png)
-
-Select a Private Cloud for additional information and actions.
-
-## Private Cloud summary
-
-View a comprehensive summary of the selected Private Cloud. The summary page includes the DNS servers deployed on the Private Cloud. You can set up DNS forwarding from on-premises DNS servers to your Private Cloud DNS servers. For more information on DNS forwarding, see [Configure DNS for name resolution for Private Cloud vCenter from on-premises](./on-premises-dns-setup.md).
-
-![Private Cloud Summary](media/private-cloud-summary.png)
-
-### Available actions
-
-* [Launch vSphere client](./vcenter-access.md). Access the vCenter for this Private Cloud.
-* [Purchase nodes](create-nodes.md). Purchase additional nodes for this Private Cloud.
-* [Expand](expand-private-cloud.md). Expand this Private Cloud by adding nodes to it.
-* **Refresh**. Update the information on this page.
-* **Delete**. You can delete the Private Cloud at any time. **Before deleting, make sure that you have backed up all systems and data.** Deleting a Private Cloud deletes all the VMs, vCenter configuration, and data. Click **Delete** in the summary section for the selected Private Cloud. Following deletion, all the Private Cloud data is erased in a secure, highly compliant erasure process.
-* [Change vSphere privileges](escalate-private-cloud-privileges.md). Escalate your privileges on this Private Cloud.
-
-## Private Cloud VLANs/subnets
-
-View the list of defined VLANs/subnets for the selected Private Cloud. The list includes the management VLANs/subnets created when the private cloud was created.
-
-![Private Cloud - VLANs/Subnets](media/private-cloud-vlans-subnets.png)
-
-### Available actions
-
-* [Add VLANs/Subnets](./create-vlan-subnet.md). Add a VLAN/subnet to this Private Cloud.
-
-Select a VLAN/Subnet for the following actions:
-* [Attach firewall table](./firewall.md). Attach a firewall table to this Private Cloud.
-* **Edit**
-* **Delete** (only user-defined VLANs/Subnets)
-
-## Private Cloud activity
-
-View the following information for the selected Private Cloud. The activity information is a filtered list of all activities for the selected Private Cloud. This page shows up to 25 recent activities.
-
-* Recent alerts
-* Recent events
-* Recent tasks
-* Recent audit
-
-![Private Cloud - Activity](media/private-cloud-activity.png)
-
-## Cloud Racks
-
-Cloud racks are the building blocks of your Private Cloud. Each rack provides a unit of capacity. CloudSimple automatically configures cloud racks based on your selections when creating or expanding a Private Cloud. View the full list of cloud racks, including the Private Cloud that each is assigned to.
-
-![Private Cloud - Cloud Racks](media/private-cloud-cloudracks.png)
-
-## vSphere Management Network
-
-View the list of VMware management resources and virtual machines that are currently configured on the Private Cloud. Information includes the software version, fully qualified domain name (FQDN), and IP address of the resources.
-
-![Private Cloud - vSphere Management Network](media/private-cloud-vsphere-management-network.png)
-
-## Next steps
-
-* [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-* Learn more about [Private Clouds](cloudsimple-private-cloud.md)
vmware-cloudsimple Migrate Workloads https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/migrate-workloads.md
- Title: Azure VMware Solution by CloudSimple - Migrate workload VMs to Private Cloud
-description: Describes how to migrate virtual machines from on-premises vCenter to CloudSimple Private Cloud vCenter
-- Previously updated : 08/20/2019 ------
-# Migrate workload VMs from on-premises vCenter to Private Cloud vCenter environment
-
-To migrate VMs from an on-premises datacenter to your CloudSimple Private Cloud, several options are available. The Private Cloud provides native access to VMware vCenter, and tools supported by VMware can be used for workload migration. This article describes some of the vCenter migration options.
-
-## Prerequisites
-
-Migration of VMs and data from your on-premises datacenter requires network connectivity from the datacenter to your Private Cloud environment. Use either of the following methods to establish network connectivity:
-
-* [Site-to-Site VPN connection](vpn-gateway.md#set-up-a-site-to-site-vpn-gateway) between your on-premises environment and your Private Cloud.
-* ExpressRoute Global Reach connection between your on-premises ExpressRoute circuit and a CloudSimple ExpressRoute circuit.
-
-The network path from your on-premises vCenter environment to your Private Cloud must be available for migration of VMs using vMotion. The vMotion network on your on-premises vCenter must have routing abilities. Verify that your firewall allows all vMotion traffic between your on-premises vCenter and Private Cloud vCenter. (On the Private Cloud, routing on the vMotion network is configured by default.)
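-
-As a quick connectivity check before migrating, you can test the vMotion path from an on-premises ESXi host shell. This is a hypothetical sketch: it assumes `vmk1` is the host's vMotion VMkernel interface; substitute your own interface name and a vMotion IP address from the Private Cloud.
-
-```bash
-# Ping a Private Cloud vMotion IP over the vMotion TCP/IP stack
-# (vmk1 is an assumed interface name).
-vmkping -I vmk1 -S vmotion <private-cloud-vmotion-ip>
-```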
-
-## Migrate ISOs and templates
-
-To create new virtual machines on your Private Cloud, use ISOs and VM templates. To upload the ISOs and templates to your Private Cloud vCenter and make them available, use the following method.
-
-1. Upload the ISO to the Private Cloud vCenter from vCenter UI.
-2. [Publish a content library](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-2A0F1C13-7336-45CE-B211-610D39A6E1F4.html) on your Private Cloud vCenter:
-
- 1. Publish your on-premises content library.
- 2. Create a new content library on the Private Cloud vCenter.
- 3. Subscribe to the published on-premises content library.
- 4. Synchronize the content library for access to subscribed contents.
-
-## Migrate VMs using PowerCLI
-
-To migrate VMs from the on-premises vCenter to the Private Cloud vCenter, use VMware PowerCLI or the Cross vCenter Workload Migration Utility available from VMware Labs. The following sample script shows the PowerCLI migration commands.
-
-```powershell
-# Connect to the source (on-premises) and target (Private Cloud) vCenter servers.
-$sourceVC = Connect-VIServer -Server "<source-vCenter name>" -User "<source-vCenter user name>" -Password "<source-vCenter user password>"
-$targetVC = Connect-VIServer -Server "<target-vCenter name>" -User "<target-vCenter user name>" -Password "<target-vCenter user password>"
-
-# Destination ESXi host and the virtual machine to migrate.
-$vmhost = "<name of ESXi host on destination>"
-$vm = Get-VM -Server $sourceVC -Name "<name of VM>"
-
-# Move the virtual machine across vCenter servers with vMotion.
-Move-VM -VM $vm -VMotionPriority High -Destination (Get-VMHost -Server $targetVC -Name $vmhost) -Datastore (Get-Datastore -Server $targetVC -Name "<name of tgt vc datastore>")
-```
-
-> [!NOTE]
-> To use the names of the destination vCenter server and ESXi hosts, configure DNS forwarding from on-premises to your Private Cloud.
-
-## Migrate VMs using NSX Layer 2 VPN
-
-This option enables live migration of workloads from your on-premises VMware environment to the Private Cloud in Azure. With this stretched Layer 2 network, the subnet from on-premises will be available on the Private Cloud. After migration, new IP address assignment is not required for the VMs.
-
-[Migrate workloads using Layer 2 stretched networks](migration-layer-2-vpn.md) describes how to use a Layer 2 VPN to stretch a Layer 2 network from your on-premises environment to your Private Cloud.
-
-## Migrate VMs using backup and disaster recovery tools
-
-Migration of VMs to Private Cloud can be done using backup/restore tools and disaster recovery tools. Use the Private Cloud as a target for restore from backups that are created using a third-party tool. The Private Cloud can also be used as a target for disaster recovery using VMware SRM or a third-party tool.
-
-For more information using these tools, see the following topics:
-
-* [Back up workload virtual machines on CloudSimple Private Cloud using Veeam B&R](backup-workloads-veeam.md)
-* [Set up CloudSimple Private Cloud as a disaster recovery site for on-premises VMware workloads](disaster-recovery-zerto.md)
vmware-cloudsimple Migration Layer 2 Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/migration-layer-2-vpn.md
- Title: Azure VMware Solution by CloudSimple - Stretch a Layer 2 network on-premises to Private Cloud
-description: Describes how to set up a Layer 2 VPN between NSX-T on a CloudSimple Private Cloud and an on-premises standalone NSX Edge client
-- Previously updated : 08/19/2019 ------
-# Migrate workloads using Layer 2 stretched networks
-
-In this guide, you will learn how to use Layer 2 VPN (L2VPN) to stretch a Layer 2 network from your on-premises environment to your CloudSimple Private Cloud. This solution enables migration of workloads running in your on-premises VMware environment to the Private Cloud in Azure within the same subnet address space without having to re-IP your workloads.
-
-L2VPN based stretching of Layer 2 networks can work with or without NSX-based networks in your on-premises VMware environment. If you don't have NSX-based networks for workloads on-premises, you can use a standalone NSX Edge Services Gateway.
-
-> [!NOTE]
-> This guide covers the scenario where on-premises and the Private Cloud datacenters are connected over Site-to-Site VPN.
-
-## Deployment scenario
-
-To stretch your on-premises network using L2VPN, you must configure an L2VPN server (destination NSX-T Tier0 router) and an L2VPN client (source standalone client).
-
-In this deployment scenario, your Private Cloud is connected to your on-premises environment via a Site-to-Site VPN tunnel that allows on-premises management and vMotion subnets to communicate with the Private Cloud management and vMotion subnets. This arrangement is necessary for Cross vCenter vMotion (xVC-vMotion). An NSX-T Tier0 router is deployed as an L2VPN server in the Private Cloud.
-
-Standalone NSX Edge is deployed in your on-premises environment as an L2VPN client and subsequently paired with the L2VPN server. A GRE tunnel endpoint is created on each side and configured to 'stretch' the on-premises Layer 2 network to your Private Cloud. This configuration is depicted in the following figure.
-
-![Deployment scenario](media/l2vpn-deployment-scenario.png)
-
-To learn more about migration using L2 VPN, see [Virtual Private Networks](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-A8B113EC-3D53-41A5-919E-78F1A3705F58.html#GUID-A8B113EC-3D53-41A5-919E-78F1A3705F58__section_44B4972B5F12453B90625D98F86D5704) in the VMware documentation.
-
-## Prerequisites for deploying the solution
-
-Verify that the following are in place before deploying and configuring the solution:
-
-* The on-premises vSphere version is 6.7U1+ or 6.5P03+.
-* The on-premises vSphere license is at the Enterprise Plus level (for vSphere Distributed Switch).
-* Identify the workload Layer 2 network to be stretched to your Private Cloud.
-* Identify a Layer 2 network in your on-premises environment for deploying your L2VPN client appliance.
-* [A Private Cloud is already created](create-private-cloud.md).
-* The version of the standalone NSX-T Edge appliance is compatible with the NSX-T Manager version (NSX-T 2.3.0) used in your Private Cloud environment.
-* A trunk port group has been created in the on-premises vCenter with forged transmits enabled.
-* A public IP address has been reserved to use for the NSX-T standalone client uplink IP address, and 1:1 NAT is in place for translation between the two addresses.
-* DNS forwarding is set on the on-premises DNS servers for the az.cloudsimple.io domain to point to the Private Cloud DNS servers. (A quick verification sketch follows this list.)
-* RTT latency is less than or equal to 150 ms, as required for vMotion to work across the two sites.
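-
-As a quick check of the DNS forwarding prerequisite, resolve a Private Cloud name from an on-premises machine. The host name and DNS server IP below are hypothetical placeholders; use values from your environment.
-
-```bash
-# Confirm that names in the az.cloudsimple.io domain resolve through the
-# on-premises DNS server that forwards to the Private Cloud DNS servers.
-nslookup <private-cloud-vcenter-fqdn>.az.cloudsimple.io <on-premises-dns-server-ip>
-```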
-
-## Limitations and considerations
-
-The following table lists supported vSphere versions and network adaptor types.
-
-| vSphere version | Source vSwitch type | Virtual NIC driver | Target vSwitch type | Supported? |
-| - | - | - | - | - |
-| All | DVS | All | DVS | Yes |
-| vSphere 6.7U1 or higher, 6.5P03 or higher | DVS | VMXNET3 | N-VDS | Yes |
-| vSphere 6.7U1 or higher, 6.5P03 or higher | DVS | E1000 | N-VDS | [Not supported per VMware](https://kb.vmware.com/s/article/56991) |
-| vSphere 6.7U1 or 6.5P03, NSX-V or versions below NSX-T 2.2 | All | All | N-VDS | [Not supported per VMware](https://kb.vmware.com/s/article/56991) |
-
-As of the VMware NSX-T 2.3 release:
-
-* The logical switch on the Private Cloud side that is stretched to on-premises over L2VPN can't be routed at the same time. The stretched logical switch can't be connected to a logical router.
-* L2VPN and route-based IPSEC VPNs can only be configured using API calls.
-
-For more information, see [Virtual Private Networks](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-A8B113EC-3D53-41A5-919E-78F1A3705F58.html#GUID-A8B113EC-3D53-41A5-919E-78F1A3705F58__section_44B4972B5F12453B90625D98F86D5704) in the VMware documentation.
-
-### Sample L2 VPN deployment addressing
-
-### On-premises network where the standalone ESG (L2 VPN client) is deployed
-
-| **Item** | **Value** |
-|--|--|
-| Network name | MGMT_NET_VLAN469 |
-| VLAN | 469 |
-| CIDR| 10.250.0.0/24 |
-| Standalone Edge appliance IP address | 10.250.0.111 |
-| Standalone Edge appliance NAT IP address | 192.227.85.167 |
-
-### On-premises network to be stretched
-
-| **Item** | **Value** |
-|--|--|
-| VLAN | 472 |
-| CIDR| 10.250.3.0/24 |
-
-### Private Cloud IP schema for NSX-T Tier0 Router (L2 VPN server)
-
-| **Item** | **Value** |
-|--|--|
-| Loopback interface | 192.168.254.254/32 |
-| Tunnel interface | 5.5.5.1/29 |
-| Logical switch (stretched) | Stretch_LS |
-| Loopback interface (NAT IP address) | 104.40.21.81 |
-
-### Private Cloud network to be mapped to the stretched network
-
-| **Item** | **Value** |
-||--|
-| VLAN | 712 |
-| CIDR| 10.200.15.0/24 |
-
-## Fetch the logical router ID needed for L2VPN
-
-The following steps show how to fetch the logical-router ID of the Tier0 DR logical router instance for the IPsec and L2VPN services. The logical-router ID is needed later when implementing the L2VPN.
-
-1. Sign in to NSX-T Manager (`https://nsx-t-manager-ip-address`) and select **Networking** > **Routers** > **Provider-LR** > **Overview**. For **High Availability Mode**, select **Active-Standby**. This action opens a pop-up window that shows the Edge VM on which the Tier0 router is currently active.
-
- ![Select active-standby](media/l2vpn-fetch01.png)
-
-2. Select **Fabric** > **Nodes** > **Edges**. Make a note of the management IP address of the active Edge VM (Edge VM1) identified in the previous step.
-
- ![Note management IP](media/l2vpn-fetch02.png)
-
-3. Open an SSH session to the management IP address of the Edge VM. Run the `get logical-router` command with username **admin** and password **CloudSimple 123!**.
-
- ![Screenshot that shows an open SSH session.](media/l2vpn-fetch03.png)
-
-4. If you don't see an entry 'DR-Provider-LR', complete the following steps.
-
-5. Create two overlay-backed logical switches. One logical switch is stretched to on-premises where the migrated workloads reside. Another logical switch is a dummy switch. For instructions, see [Create a Logical Switch](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-23194F9A-416A-40EA-B9F7-346B391C3EF8.html) in the VMware documentation.
-
- ![Create logical switch](media/l2vpn-fetch04.png)
-
-6. Attach the dummy switch to the Tier1 router with a link local IP address or any non-overlapping subnet from on-premises or your Private Cloud. See [Add a Downlink Port on a Tier-1 Logical Router](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-E7EA867C-604C-4224-B61D-2A8EF41CB7A6.html) in the VMware documentation.
-
- ![Attach dummy switch](media/l2vpn-fetch05.png)
-
-7. Run the `get logical-router` command again on the SSH session of the Edge VM. The UUID of the 'DR-Provider-LR' logical router is displayed. Make a note of the UUID, which is required when configuring the L2VPN.
-
- ![Screenshot that shows the UUID for the logical router.](media/l2vpn-fetch06.png)
-
-## Fetch the logical-switch ID needed for L2VPN
-
-1. Sign in to NSX-T Manager (`https://nsx-t-manager-ip-address`).
-2. Select **Networking** > **Switching** > **Switches** > **\<Logical switch\>** > **Overview**.
-3. Make a note of the UUID of the stretch logical switch, which is required when configuring the L2VPN.
-
- ![get logical-router output](media/l2vpn-fetch-switch01.png)
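-
-If you prefer the API over the UI, a hypothetical alternative is to list the logical switches and read the `id` field of the stretched switch. This sketch assumes basic authentication and `jq` for filtering; adjust the manager address and credentials to your environment.
-
-```bash
-# List logical switches and show each display name with its UUID.
-curl -k -s -u 'admin:<password>' \
-  "https://<nsx-t-manager-ip-address>/api/v1/logical-switches" \
-  | jq '.results[] | {display_name, id}'
-```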
-
-## Routing and security considerations for L2VPN
-
-To establish an IPsec route-based VPN between the NSX-T Tier0 router and the standalone NSX Edge client, the loopback interface of the NSX-T Tier0 router must be able to communicate with the public IP address of NSX standalone client on-premises over UDP 500/4500.
-
-### Allow UDP 500/4500 for IPsec
-
-1. [Create a public IP address](public-ips.md) for the NSX-T Tier0 loopback interface in the CloudSimple portal.
-
-2. [Create a firewall table](firewall.md) with stateful rules that allow UDP 500/4500 inbound traffic, and attach the firewall table to the NSX-T HostTransport subnet.
-
-### Advertise the loopback interface IP to the underlay network
-
-1. Create a null route for the loopback interface network. Sign in to NSX-T Manager and select **Networking** > **Routing** > **Routers** > **Provider-LR** > **Routing** > **Static Routes**. Click **Add**. For **Network**, enter the loopback interface IP address. For **Next Hops**, click **Add**, specify 'Null' for the next hop, and keep the default of 1 for Admin Distance.
-
- ![Add static route](media/l2vpn-routing-security01.png)
-
-2. Create an IP prefix list. Sign in to NSX-T Manager and select **Networking** > **Routing** > **Routers** > **Provider-LR** > **Routing** > **IP Prefix Lists**. Click **Add**. Enter a name to identify the list. For **Prefixes**, click **Add** twice. In the first line, enter '0.0.0.0/0' for **Network** and 'Deny' for **Action**. In the second line, select **Any** for **Network** and **Permit** for **Action**.
-3. Attach the IP prefix list to both BGP neighbors (TOR). Attaching the IP prefix list to the BGP neighbor prevents the default route from being advertised in BGP to the TOR switches. However, any other route that includes the null route will advertise the loopback interface IP address to the TOR switches.
-
- ![Create IP prefix list](media/l2vpn-routing-security02.png)
-
-4. Sign in to NSX-T Manager and select **Networking** > **Routing** > **Routers** > **Provider-LR** > **Routing** > **BGP** > **Neighbors**. Select the first neighbor. Click **Edit** > **Address Families**. For the IPv4 family, edit the **Out Filter** column and select the IP prefix list that you created. Click **Save**. Repeat this step for the second neighbor.
-
- ![Attach IP prefix list 1](media/l2vpn-routing-security03.png)
- ![Attach IP prefix list 2](media/l2vpn-routing-security04.png)
-
-5. Redistribute the null static route into BGP. To advertise the loopback interface route to the underlay, you must redistribute the null static route into BGP. Sign in to NSX-T Manager and select **Networking** > **Routing** > **Routers** > **Provider-LR** > **Routing** > **Route Redistribution** > **Neighbors**. Select **Provider-LR-Route_Redistribution** and click **Edit**. Select the **Static** checkbox and click **Save**.
-
- ![Redistribute null static route into BGP](media/l2vpn-routing-security05.png)
-
-## Configure a route-based VPN on the NSX-T Tier0 router
-
-Use the following template to fill in all the details for configuring a route-based VPN on the NSX-T Tier0 router. The UUIDs in each POST call are required in subsequent POST calls. The IP addresses chosen for the loopback and tunnel interfaces used for L2VPN must be unique and must not overlap with the on-premises or Private Cloud networks. The loopback interface network must always be /32.
-
-```
-Loopback interface ip : 192.168.254.254/32
-Tunnel interface subnet : 5.5.5.0/29
-Logical-router ID : UUID of Tier0 DR logical router obtained in the section "Fetch the logical router ID needed for L2VPN"
-Logical-switch ID(Stretch) : UUID of Stretch Logical Switch obtained earlier
-IPSec Service ID :
-IKE profile ID :
-DPD profile ID :
-Tunnel Profile ID :
-Local-endpoint ID :
-Peer end-point ID :
-IPSec VPN session ID (route-based) :
-L2VPN service ID :
-L2VPN session ID :
-Logical-Port ID :
-Peer Code :
-```
-
-For all of the following API calls, replace the IP address with your NSX-T Manager IP address. You can run all these API calls from Postman or by using `curl` commands.
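-
-For example, a minimal `curl` sketch for the first call below might look like the following. It assumes basic authentication and a self-signed certificate (hence `-k`); the password and logical router ID are placeholders for your own values.
-
-```bash
-# Enable the IPsec VPN service on the Tier0 logical router (hypothetical values).
-curl -k -u 'admin:<password>' \
-  -X POST "https://<nsx-t-manager-ip-address>/api/v1/vpn/ipsec/services/" \
-  -H "Content-Type: application/json" \
-  -d '{
-        "resource_type": "IPSecVPNService",
-        "display_name": "IPSec VPN service",
-        "logical_router_id": "<Logical-router ID>",
-        "enabled": true
-      }'
-```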
-
-### Enable the IPSec VPN service on the logical router
-
-```
-POST https://192.168.110.201/api/v1/vpn/ipsec/services/
-{
-"resource_type": "IPSecVPNService",
-"description": "Manage VPN service",
-"display_name": "IPSec VPN service",
-"logical_router_id": "Logical-router ID",
-"ike_log_level": "INFO",
-"enabled": true
-}
-```
-
-### Create profiles: IKE
-
-```
-POST https://192.168.110.201/api/v1/vpn/ipsec/ike-profiles
-
-{
-"resource_type": "IPSecVPNIKEProfile",
-"description": "IKEProfile for siteA",
-"display_name": "IKEProfile siteA",
-"encryption_algorithms": ["AES_128"],
-"ike_version": "IKE_V2",
-"digest_algorithms": ["SHA2_256"],
-"sa_life_time":21600,
-"dh_groups": ["GROUP14"]
-}
-```
-
-### Create profiles: DPD
-
-```
-POST https://192.168.110.201/api/v1/vpn/ipsec/dpd-profiles
-
-{
-"resource_type": "IPSecVPNDPDProfile",
-"display_name": "nsx-default-dpd-profile",
-"enabled": true
-}
-```
-
-### Create profiles: Tunnel
-
-```
-POST https://192.168.110.201/api/v1/vpn/ipsec/tunnel-profiles
-
-{
-"resource_type": "IPSecVPNTunnelProfile",
-"display_name": "nsx-default-tunnel-profile",
-"enable_perfect_forward_secrecy": true,
-"encryption_algorithms": ["AES_GCM_128"],
-"digest_algorithms": [],
-"sa_life_time":3600,
-"dh_groups": ["GROUP14"],
-"encapsulation_mode": "TUNNEL_MODE",
-"transform_protocol": "ESP",
-"df_policy": "COPY"
-}
-```
-
-### Create a local endpoint
-
-```
-POST https://192.168.110.201/api/v1/vpn/ipsec/local-endpoints
-
-{
-"resource_type": "IPSecVPNLocalEndpoint",
-"description": "Local endpoint",
-"display_name": "Local endpoint",
-"local_id": "<Public IP of Loopback interface>",
-"ipsec_vpn_service_id": {
-"target_id": "IPSec VPN service ID"},
-"local_address": "<IP of Loopback interface>",
-"trust_ca_ids": [],
-"trust_crl_ids": []
-}
-```
-
-### Create a peer endpoint
-
-```
-POST https://192.168.110.201/api/v1/vpn/ipsec/peer-endpoints
-
-{
-"resource_type": "IPSecVPNPeerEndpoint",
-"description": "Peer endpoint for site B",
-"display_name": "Peer endpoint for site B",
-"connection_initiation_mode": "INITIATOR",
-"authentication_mode": "PSK",
-"ipsec_tunnel_profile_id": "IPSec Tunnel profile ID",
-"dpd_profile_id": "DPD profile ID",
-"psk":"nsx",
-"ike_profile_id": "IKE profile ID",
-"peer_address": "<Public IP of Standalone client",
-"peer_id": "<Public IP of Standalone client>"
-}
-```
-
-### Create a route-based VPN session
-
-```
-POST : https://192.168.110.201/api/v1/vpn/ipsec/sessions
-
-{
-  "resource_type": "RouteBasedIPSecVPNSession",
-  "peer_endpoint_id": "Peer Endpoint ID",
-  "ipsec_vpn_service_id": "IPSec VPN service ID",
-  "local_endpoint_id": "Local Endpoint ID",
-  "enabled": true,
-  "tunnel_ports": [
-    {
-      "ip_subnets": [
-        {
-          "ip_addresses": [
-            "5.5.5.1"
-          ],
-          "prefix_length": 29
-        }
-      ]
-    }
-  ]
-}
-```
-
-## Configure L2VPN on NSX-T Tier0 router
-
-Fill in the following information after every POST call. The IDs are required in subsequent POST calls.
-
-```
-L2VPN Service ID:
-L2VPN Session ID:
-Logical Port ID:
-```
-
-### Create the L2VPN service
-
-The output of the following GET command will be blank, because the configuration is not complete yet.
-
-```
-GET : https://192.168.110.201/api/v1/vpn/l2vpn/services
-```
-
-For the following POST command, the logical router ID is the UUID of the Tier0 DR logical router obtained earlier.
-
-```
-POST : https://192.168.110.201/api/v1/vpn/l2vpn/services
-
-{
-"logical_router_id": "Logical Router ID",
-"enable_full_mesh" : true
-}
-```
-
-### Create the L2VPN session
-
-For the following POST command, the L2VPN service ID is the ID that you just obtained and the IPsec VPN session ID is the ID obtained in the previous section.
-
-```
-POST: https://192.168.110.201/api/v1/vpn/l2vpn/sessions
-
-{
-"l2vpn_service_id" : "L2VPN service ID",
-"transport_tunnels" : [
- {
- "target_id" : "IPSec VPN session ID"
- }]
-}
-```
-
-These calls create a GRE tunnel endpoint. To check the status, run the following command.
-
-```
-edge-2> get tunnel-port
-Tunnel : 44648dae-8566-5bc9-a065-b1c4e5c3e03f
-IFUID : 328
-LOCAL : 169.254.64.1
-REMOTE : 169.254.64.2
-ENCAP : GRE
-
-Tunnel : cf950ca1-5cf8-5438-9b1a-d2c8c8e7229b
-IFUID : 318
-LOCAL : 192.168.140.155
-REMOTE : 192.168.140.152
-ENCAP : GENEVE
-
-Tunnel : 63639321-87c5-529e-8a61-92c1939799b2
-IFUID : 304
-LOCAL : 192.168.140.155
-REMOTE : 192.168.140.156
-ENCAP : GENEVE
-```
-
-### Create logical port with the tunnel ID specified
-
-```
- POST https://192.168.110.201/api/v1/logical-ports/
-
-{
-"resource_type": "LogicalPort",
-"display_name": "Extend logicalSwitch, port for service",
-"logical_switch_id": "Logical switch ID",
-"admin_state" : "UP",
-"attachment": {
-"attachment_type":"L2VPN_SESSION",
-"id":"L2VPN session ID",
-"context" : {
-"resource_type" : "L2VpnAttachmentContext",
- "tunnel_id" : 10
-}
- }
- }
-```
-
-## Obtain the peer code for L2VPN on the NSX-T side
-
-Obtain the peer code of the NSX-T endpoint. The peer code is required when configuring the remote endpoint. The L2VPN \<session-id\> can be obtained from the previous section. For more information, see the [NSX-T 2.3 API Guide](https://www.vmware.com/support/nsxt/doc/nsxt_23_api.html).
-
-```
-GET https://192.168.110.201/api/v1/vpn/l2vpn/sessions/<session-id>/peer-codes
-```
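-
-The same call can be scripted. This sketch assumes basic authentication and `jq`; inspect the raw JSON first, because the response shape can vary by NSX-T version.
-
-```bash
-# Fetch the peer codes for the L2VPN session and pretty-print the response.
-curl -k -s -u 'admin:<password>' \
-  "https://<nsx-t-manager-ip-address>/api/v1/vpn/l2vpn/sessions/<session-id>/peer-codes" | jq .
-```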
-
-## Deploy the NSX-T standalone client (on-premises)
-
-Before deploying, verify that your on-premises firewall rules allow inbound and outbound UDP 500/4500 traffic from/to the CloudSimple public IP address that was reserved earlier for the NSX-T T0 router loopback interface.
-
-1. [Download the Standalone NSX Edge Client](https://my.vmware.com/group/vmware/details?productId=673&rPId=33945&downloadGroup=NSX-T-230) OVF and extract the files from the downloaded bundle into a folder.
-
- ![Download standalone NSX Edge client](media/l2vpn-deploy-client01.png)
-
-2. Go to the folder with all the extracted files and select the files for your appliance size: the .ovf, .mf, and .vmdk files (NSX-l2t-client-large for the large appliance size, or NSX-l2t-client-Xlarge for the extra-large appliance size). Click **Next**.
-
- ![Select template](media/l2vpn-deploy-client02.png)
- ![Screenshot that shows the selected vmdks files.](media/l2vpn-deploy-client03.png)
-
-3. Enter a name for the NSX-T standalone client and click **Next**.
-
- ![Enter template name](media/l2vpn-deploy-client04.png)
-
-4. Click **Next** as needed to reach the datastore settings. Select the appropriate datastore for NSX-T standalone client and click **Next**.
-
- ![Select datastore](media/l2vpn-deploy-client06.png)
-
-5. Select the correct port groups for Trunk (Trunk PG), Public (Uplink PG) and HA interface (Uplink PG) for the NSX-T standalone client. Click **Next**.
-
- ![Select port groups](media/l2vpn-deploy-client07.png)
-
-6. Fill in the following details in the **Customize template** screen and click **Next**:
-
- Expand L2T:
-
-   * **Peer Address**. Enter the IP address reserved in the Azure CloudSimple portal for the NSX-T Tier0 loopback interface.
- * **Peer Code**. Paste the peer code obtained from the last step of L2VPN Server deployment.
-   * **Sub Interfaces VLAN (Tunnel ID)**. Enter the VLAN ID to be stretched, followed in parentheses by the tunnel ID that was previously configured, for example, `472(10)`.
-
- Expand Uplink Interface:
-
- * **DNS IP Address**. Enter the on-premises DNS IP address.
- * **Default Gateway**. Enter the default gateway of the VLAN that will act as a default gateway for this client.
- * **IP Address**. Enter the uplink IP address of the standalone client.
- * **Prefix Length**. Enter the prefix length of the uplink VLAN/subnet.
-   * **CLI admin/enable/root User Password**. Set the password for the admin, enable, and root accounts.
-
- ![Customize template](media/l2vpn-deploy-client08.png)
- ![Customize template - more](media/l2vpn-deploy-client09.png)
-
-7. Review the settings and click **Finish**.
-
- ![Complete configuration](media/l2vpn-deploy-client10.png)
-
-## Configure an on-premises sink port
-
-If one of the VPN sites doesn't have NSX deployed, you can configure an L2 VPN by deploying a standalone NSX Edge at that site. A standalone NSX Edge is deployed using an OVF file on a host that is not managed by NSX. This deploys an NSX Edge Services Gateway appliance to function as an L2 VPN client.
-
-If a standalone edge trunk vNIC is connected to a vSphere Distributed Switch, either promiscuous mode or a sink port is required for the L2 VPN to function. Using promiscuous mode can cause duplicate pings and duplicate responses. For this reason, use sink port mode in the L2 VPN standalone NSX Edge configuration. See [Configure a sink port](https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.4/com.vmware.nsx.admin.doc/GUID-3CDA4346-E692-4592-8796-ACBEEC87C161.html) in the VMware documentation.
-
-## IPsec VPN and L2VPN verification
-
-Use the following commands to verify IPsec and L2VPN sessions from standalone NSX-T Edge.
-
-```
-nsx-l2t-edge> show service ipsec
-vShield Edge IPSec Service Status:
-IPSec Server is running.
-AESNI is enabled.
-Total Sites: 1, 1 UP, 0 Down
-Total Tunnels: 1, 1 UP, 0 Down
--
-Site: 10.250.0.111_0.0.0.0/0-104.40.21.81_0.0.0.0/0
-Channel: PeerIp: 104.40.21.81 LocalIP: 10.250.0.111 Version: IKEv2 Status: UP
-Tunnel: PeerSubnet: 0.0.0.0/0 LocalSubnet: 0.0.0.0/0 Status: UP
--
-```
-
-```
-nsx-l2t-edge> show service l2vpn
-L2 VPN is running
--
-L2 VPN type: Client/Spoke
-
-SITENAME IPSECSTATUS VTI GRE
-1ecb00fb-a538-4740-b788-c9049e8cb6c6 UP vti-100 l2t-1
-```
-
-Use the following commands to verify IPsec and L2VPN sessions from the NSX-T Tier0 router.
-
-```
-edge-2> get ipsecvpn session
-Total Number of Sessions: 1
-
-IKE Session ID : 3
-UUID : 1ecb00fb-a538-4740-b788-c9049e8cb6c6
-Type : Route
-
-Local IP : 192.168.254.254 Peer IP : 192.227.85.167
-Local ID : 104.40.21.81 Peer ID : 192.227.85.167
-Session Status : Up
-
-Policy Rules
- VTI UUID : 4bf96e3b-e50b-49cc-a16e-43a6390e3d53
- ToRule ID : 560874406 FromRule ID : 2708358054
- Local Subnet : 0.0.0.0/0 Peer Subnet : 0.0.0.0/0
- Tunnel Status : Up
-```
-
-```
-edge-2> get l2vpn session
-Session : f7147137-5dd0-47fe-9e53-fdc2b687b160
-Tunnel : b026095b-98c8-5932-96ff-dda922ffe316
-IPSEC Session : 1ecb00fb-a538-4740-b788-c9049e8cb6c6
-Status : UP
-```
-
-Use the following commands to verify the sink port on the ESXi host where the NSX-T standalone client VM resides in the on-premises environment.
-
-```
- [root@esxi02:~] esxcfg-vswitch -l |grep NSX
- 53 1 NSXT-client-large.eth2
- 225 1 NSXT-client-large.eth1
- 52 1 NSXT-client-large.eth0
-```
-
-```
-[root@esxi02:~] net-dvs -l | grep "port\ [0-9]\|SINK\|com.vmware.common.alias"
- com.vmware.common.alias = csmlab2DS , propType = CONFIG
- port 24:
- port 25:
- port 26:
- port 27:
- port 13:
- port 19:
- port 169:
- port 54:
- port 110:
- port 6:
- port 107:
- port 4:
- port 199:
- port 168:
- port 201:
- port 0:
- port 49:
- port 53:
- port 225:
- com.vmware.etherswitch.port.extraEthFRP = SINK
- port 52:
-```
vmware-cloudsimple Migration Using Azure Data Box https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/migration-using-azure-data-box.md
- Title: Azure VMware Solution - migration using Azure Data Box
-description: How to use Azure Data Box to bulk-migrate data to Azure VMware Solution.
-- Previously updated : 09/27/2019 ------
-# Migrating data to Azure VMware Solution by using Azure Data Box
-
-The Microsoft Azure Data Box cloud solution lets you send terabytes (TBs) of data to Azure in a quick, inexpensive, and reliable way. The secure data transfer is accelerated by shipping you a proprietary Data Box storage device. Each storage device has a maximum usable storage capacity of 80 TB and is transported to your datacenter by a regional carrier. The device has a rugged casing to protect and secure your data during transit.
-
-By using Data Box, you can bulk-migrate your VMware data to your private cloud. Data from your on-premises VMware vSphere environment is copied to Data Box through the Network File System (NFS) protocol. Bulk data migration involves saving a point-in-time copy of virtual machines, configuration, and associated data to Data Box and then manually shipping it to Azure.
-
-In this article, you learn about:
-
-* Setting up Data Box.
-* Copying data from the on-premises VMware environment to the Data Box through NFS.
-* Preparing for the return of Data Box.
-* Preparing blob data for copying to Azure VMware Solution.
-* Copying the data from Azure to your private cloud.
-
-## Scenarios
-
-Use Data Box in the following scenarios for bulk data migration:
-
-* To migrate a large amount of data from on-premises to Azure VMware Solution. This method establishes a baseline and syncs differences over the network.
-* To migrate a large number of virtual machines that are turned off (cold virtual machines).
-* To migrate virtual machine data for setting up development and test environments.
-* To migrate a large number of virtual machine templates, ISO files, and virtual machine disks.
-
-## Before you begin
-
-* Check the prerequisites and [order Data Box](../databox/data-box-deploy-ordered.md) through your Azure portal. During the order process, you must select a storage account that enables Blob storage. After you receive the Data Box device, connect it to your on-premises network and [set up the device](../databox/data-box-deploy-set-up.md) with an IP address that's reachable from your vSphere management network.
-
-* Create a virtual network and a storage account in the same region where your Azure VMware Solution is provisioned.
-
-* Create an [Azure virtual network connection](cloudsimple-azure-network-connection.md) from your private cloud to the virtual network where the storage account is created by following the steps in [Connect Azure virtual network to CloudSimple using ExpressRoute](virtual-network-connection.md).
-
-## Set up Data Box for NFS
-
-Connect to your Data Box local web UI by following the steps in the "Connect to your device" section of [Tutorial: Cable and connect to your Azure Data Box](../databox/data-box-deploy-set-up.md). Configure Data Box to allow access to NFS clients:
-
-1. In the local web UI, go to the **Connect and copy** page. Under **NFS settings**, select **NFS client access**.
-
- ![Configure NFS client access 1](media/nfs-client-access.png)
-
-2. Enter the IP address of the VMware ESXi hosts and select **Add**. You can configure access for all the hosts in your vSphere cluster by repeating this step. Select **OK**.
-
-    ![Configure NFS client access 2](media/nfs-client-access2.png)
-
-> [!IMPORTANT]
-> **Always create a folder for the files that you intend to copy under the share and then copy the files to that folder**. The folder created under block blob and page blob shares represents a container to which data is uploaded as blobs. You can't copy files directly to the *root* folder in the storage account.
-
-Under block blob and page blob shares, first-level entities are containers, and second-level entities are blobs. Under shares for Azure Files, first-level entities are shares, and second-level entities are files.
-
-The following table shows the UNC path to the shares on your Data Box and the corresponding Azure Storage URL where the data is uploaded. The final Azure Storage URL can be derived from the UNC share path.
-
-| Blobs and Files | Path and URL |
-|--|--|
-| Azure Block blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<StorageAccountName_BlockBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Page blobs | <li>UNC path to shares: `//<DeviceIPAddress>/<StorageAccountName_PageBlob>/<ContainerName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.blob.core.windows.net/<ContainerName>/files/a.txt`</li> |
-| Azure Files | <li>UNC path to shares: `//<DeviceIPAddress>/<StorageAccountName_AzFile>/<ShareName>/files/a.txt`</li><li>Azure Storage URL: `https://<StorageAccountName>.file.core.windows.net/<ShareName>/files/a.txt`</li> |
-
-> [!NOTE]
-> Use Azure Block blobs for copying VMware data.
-
-## Mount the NFS share as a datastore on your on-premises vCenter cluster and copy the data
-
-The NFS share from your Data Box must be mounted as a datastore on your on-premises vCenter cluster or VMware ESXi host in order to copy the data to the NFS datastore:
-
-1. Log in to your on-premises vCenter server.
-
-2. Right-click **Datacenter**, select **Storage**, select **New Datastore**, and then select **Next**.
-
- ![Add new datastore](media/databox-migration-add-datastore.png)
-
-3. In step 1 of the Add Datastore wizard, select **NFS** under **Type**.
-
- ![Add new datastore - type](media/databox-migration-add-datastore-type.png)
-
-4. In step 2 of the wizard, select **NFS 3** as the NFS version and then select **Next**.
-
- ![Add new datastore - NFS version](media/databox-migration-add-datastore-nfs-version.png)
-
-5. In step 3 of the wizard, specify the name for the datastore, the path, and the server. You can use the IP address of your Data Box for the server. The folder path will be in the `/<StorageAccountName_BlockBlob>/<ContainerName>/` format.
-
- ![Add new datastore - NFS configuration](media/databox-migration-add-datastore-nfs-configuration.png)
-
-6. In step 4 of the wizard, select the ESXi hosts where you want the datastore to be mounted and then select **Next**. In a cluster, select all the hosts to ensure migration of the virtual machines.
-
- ![Add new datastore - Select hosts](media/databox-migration-add-datastore-nfs-select-hosts.png)
-
-7. In step 5 of the wizard, review the summary and select **Finish**.
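-
-If you prefer the ESXi command line over the wizard, a hypothetical equivalent is to mount the share directly on each host; the parameters mirror the wizard fields. Substitute your Data Box IP address and share path.
-
-```bash
-# Mount the Data Box NFS share as a datastore on this ESXi host.
-esxcli storage nfs add \
-  --host <DataBoxIPAddress> \
-  --share /<StorageAccountName_BlockBlob>/<ContainerName> \
-  --volume-name Databox-Datastore
-```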
-
-## Copy data to the Data Box NFS datastore
-
-Virtual machines can be migrated or cloned to the new datastore. Virtual machines that are turned off can be moved to the Data Box NFS datastore by using the **storage vMotion** option. Active virtual machines can be cloned to the Data Box NFS datastore.
-
-* Identify and list the virtual machines that can be **moved**.
-* Identify and list the virtual machines that must be **cloned**.
-
-### Move a virtual machine to a Data Box datastore
-
-1. Right-click the virtual machine that you want to move to the Data Box datastore and then select **Migrate**.
-
- ![Migrate virtual machine](media/databox-migration-vm-migrate.png)
-
-2. Select **Change storage only** for the migration type and then select **Next**.
-
- ![Migrate virtual machine - storage only](media/databox-migration-vm-migrate-change-storage.png)
-
-3. Select **Databox-Datastore** as the destination and then select **Next**.
-
- ![Migrate virtual machine - select datastore](media/databox-migration-vm-migrate-change-storage-select-datastore.png)
-
-4. Review the information and select **Finish**.
-
-5. Repeat steps 1 through 4 for additional virtual machines.
-
-> [!TIP]
-> You can select multiple virtual machines that are in the same power state (turned on or turned off) and migrate them in bulk.
-
-The virtual machines are migrated to the Data Box NFS datastore. After all virtual machines are migrated, you can turn off (shut down) the active virtual machines in preparation for migration of data to Azure VMware Solution.
-
-### Clone a virtual machine or a virtual machine template to the Data Box datastore
-
-1. Right-click a virtual machine or a virtual machine template that you want to clone. Select **Clone** > **Clone to Virtual Machine**.
-
- ![Virtual machine clone](media/databox-migration-vm-clone.png)
-
-2. Select a name for the cloned virtual machine or the virtual machine template.
-
-3. Select the folder where you want to put the cloned object and then select **Next**.
-
-4. Select the cluster or the resource pool where you want to put the cloned object and then select **Next**.
-
-5. Select **Databox-Datastore** as the storage location and then select **Next**.
-
- ![Virtual machine clone - select datastore](media/databox-migration-vm-clone-select-datastore.png)
-
-6. If you want to customize any options for the cloned object, select the customization options, and then select **Next**.
-
-7. Review the configurations and select **Finish**.
-
-8. Repeat steps 1 through 7 for additional virtual machines or virtual machine templates.
-
-Virtual machines are cloned and stored on the Data Box NFS datastore. After the virtual machines are cloned, make sure they're shut down in preparation for migration of data to Azure VMware Solution.
-
-### Copy ISO files to the Data Box datastore
-
-1. From your on-premises vCenter web UI, go to **Storage**. Select **Databox-Datastore** and then select **Files**. Create a new folder for storing ISO files.
-
- ![Copy ISO - create new folder](media/databox-migration-create-folder.png)
-
-2. Provide a name for the folder where ISO files will be stored.
-
-3. Double-click the newly created folder to open it.
-
-4. Select **Upload Files** and then select the ISO files you want to upload.
-
- ![Copy ISO - upload files](media/databox-migration-upload-iso.png)
-
-> [!TIP]
-> If you already have ISO files in your on-premises datastore, you can select the files and **Copy to** to copy the files to the Data Box NFS datastore.
-
-## Prepare Data Box for return
-
-After all virtual machine data, virtual machine template data, and any ISO files are copied to the Data Box NFS datastore, you can disconnect the datastore from your vCenter. All virtual machines and virtual machine templates must be removed from inventory before you disconnect the datastore.
-
-### Remove objects from inventory
-
-1. From your on-premises vCenter web UI, go to **Storage**. Select **Databox-Datastore** and then select **VMs**.
-
- ![Remove virtual machines from inventory - turned off](media/databox-migration-select-databox-vm.png)
-
-2. Make sure that all the virtual machines are shut down.
-
-3. Select all virtual machines, right-click, and then select **Remove from inventory**.
-
- ![Remove virtual machines from inventory](media/databox-migration-remove-vm-from-inventory.png)
-
-4. Select **VM Templates in Folders** and then repeat step 3.
-
-### Remove the Data Box NFS datastore from vCenter
-
-The Data Box NFS datastore must be disconnected from VMware ESXi hosts before preparing for return.
-
-1. From your on-premises vCenter web UI, go to **Storage**.
-
-2. Right-click **Databox-Datastore** and select **Unmount Datastore**.
-
- ![Unmount Data Box datastore](media/databox-migration-unmount-datastore.png)
-
-3. Select all ESXi hosts where the datastore is mounted and select **OK**.
-
- ![Unmount Data Box datastore - select hosts](media/databox-migration-unmount-datastore-select-hosts.png)
-
-4. Review and accept any warnings and select **OK**.
-
-### Prepare Data Box for return and then return it
-
-Follow the steps outlined in the article [Return Azure Data Box and verify data upload to Azure](../databox/data-box-deploy-picked-up.md) to return the Data Box. Check the status of the data copy to your Azure storage account. After the status shows as completed, you can verify the data in your Azure storage account.
-
-## Copy data from Azure storage to Azure VMware Solution
-
-Data copied to your Data Box device will be available on your Azure storage account after the order status of your Data Box shows as completed. The data can now be copied to your Azure VMware Solution. Data in the storage account must be copied to the vSAN datastore of your private cloud by using the NFS protocol.
-
-First, copy Blob storage data to a managed disk on a Linux virtual machine in Azure by using **AzCopy**. Make the managed disk available through NFS, mount the NFS share as a datastore on your private cloud, and then copy the data. This method enables faster copy of the data to your private cloud.
-
-### Copy data to your private cloud using a Linux virtual machine and managed disks, and then export as NFS share
-
-1. Create a [Linux virtual machine](../virtual-machines/linux/quick-create-portal.md) in Azure in the same region as your storage account, in a virtual network that has an Azure virtual network connection to your private cloud.
-
-2. Create a managed disk whose storage capacity is greater than the amount of blob data, and [attach it to your Linux virtual machine](../virtual-machines/linux/attach-disk-portal.md). If the amount of blob data is greater than the capacity of the largest managed disk available, the data must be copied in multiple steps or by using multiple managed disks.
-
-3. Connect to the Linux virtual machine and mount the managed disk.
-
-4. Install [AzCopy on your Linux virtual machine](../storage/common/storage-use-azcopy-v10.md).
-
-5. Download the data from your Azure Blob storage onto the managed disk using AzCopy. Command syntax: `azcopy copy "https://<storage-account-name>.blob.core.windows.net/<container-name>/*" "<local-directory-path>/"`. Replace `<storage-account-name>` with your Azure storage account name and `<container-name>` with the container that holds the data copied through Data Box.
-
-6. Install the NFS server on your Linux virtual machine:
-
- - On an Ubuntu/Debian distribution: `sudo apt install nfs-kernel-server`.
- - On an Enterprise Linux distribution: `sudo yum install nfs-utils`.
-
-7. Change the permissions of the folder on your managed disk where the data from Azure Blob storage was copied. Change the permissions for all the folders that you want to export as an NFS share.
-
- ```bash
- chmod -R 755 /<folder>/<subfolder>
- chown nfsnobody:nfsnobody /<folder>/<subfolder>
- ```
-
-8. Assign permissions for client IP addresses to access the NFS share by editing the `/etc/exports` file.
-
- ```bash
- sudo vi /etc/exports
- ```
-
- Enter the following lines in the file for every ESXi host IP of your private cloud. If you're creating shares for multiple folders, add all the folders.
-
- ```bash
- /<folder>/<subfolder> <ESXiNode1IP>(rw,sync,no_root_squash,no_subtree_check)
- /<folder>/<subfolder> <ESXiNode2IP>(rw,sync,no_root_squash,no_subtree_check)
- .
- .
- ```
-
-9. Export the NFS shares by using the `sudo exportfs -a` command.
-
-10. Restart NFS kernel server by using the `sudo systemctl restart nfs-kernel-server` command.
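-
-Before mounting the share from vCenter, you can verify that the exports are active on the Linux virtual machine. This is a quick sketch; `showmount` ships with the NFS utilities installed earlier.
-
-```bash
-# List the active exports; each folder should appear with the ESXi host IPs
-# that are allowed to mount it.
-showmount -e localhost
-```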
-
-### Mount the Linux virtual machine NFS share as a datastore on a private cloud vCenter cluster and then copy data
-
-The NFS share from your Linux virtual machine must be mounted as a datastore on your private cloud vCenter cluster. After it's mounted, data can be copied from the NFS datastore to the private cloud vSAN datastore.
-
-1. Log in to your private cloud vCenter server.
-
-2. Right-click **Datacenter**, select **Storage**, select **New Datastore**, and then select **Next**.
-
- ![Add new datastore](media/databox-migration-add-datastore.png)
-
-3. In step 1 of the Add Datastore wizard, select the **NFS** type.
-
- ![Add new datastore - type](media/databox-migration-add-datastore-type.png)
-
-4. In step 2 of the wizard, select **NFS 3** as the NFS version and then select **Next**.
-
- ![Add new datastore - NFS version](media/databox-migration-add-datastore-nfs-version.png)
-
-5. In step 3 of the wizard, specify the name for the datastore, the path, and the server. You can use the IP address of your Linux virtual machine for the server. The folder path will be in the `/<folder>/<subfolder>/` format.
-
- ![Add new datastore - NFS configuration](media/databox-migration-add-datastore-nfs-configuration.png)
-
-6. In step 4 of the wizard, select the ESXi hosts where you want the datastore to be mounted and then select **Next**. In a cluster, select all the hosts to ensure migration of the virtual machines.
-
- ![Add new datastore - Select hosts](media/databox-migration-add-datastore-nfs-select-hosts.png)
-
-7. In step 5 of the wizard, review the summary and then select **Finish**.
-
-### Add virtual machines and virtual machine templates from an NFS datastore to the inventory
-
-1. From your private cloud vCenter web UI, go to **Storage**. Select a Linux virtual machine NFS datastore and then select **Files**.
-
- ![Select files from NFS datastore](media/databox-migration-datastore-select-files.png)
-
-2. Select a folder that contains a virtual machine or a virtual machine template. In the details pane, select a .vmx file for a virtual machine or a .vmtx file for a virtual machine template.
-
-3. Select **Register VM** to register the virtual machine on your private cloud vCenter.
-
- ![Register virtual machine](media/databox-migration-datastore-register-vm.png)
-
-4. Select the datacenter, folder, and cluster/resource pool where you want the virtual machine to be registered.
-
-5. Repeat steps 3 and 4 for all the virtual machines and virtual machine templates.
-
-6. Go to the folder that contains the ISO files. Select the ISO files and then select **Copy to** to copy the files to a folder on your vSAN datastore.
-
-The virtual machines and virtual machine templates are now available on your private cloud vCenter. These virtual machines must be moved from the NFS datastore to the vSAN datastore before you turn them on. You can use the **storage vMotion** option and select the vSAN datastore as the target for the virtual machines.
-
-The virtual machine templates must be cloned from your Linux virtual machine NFS datastore to your vSAN datastore.
-
-### Clean up your Linux virtual machine
-
-After all the data is copied to your private cloud, you can remove the NFS datastore from your private cloud:
-
-1. Make sure that all virtual machines and templates are moved and cloned to your vSAN datastore.
-
-2. Remove from inventory all virtual machine templates from the NFS datastore.
-
-3. Unmount the Linux virtual machine datastore from your private cloud vCenter.
-
-4. Delete the virtual machine and managed disk from Azure.
-
-5. If you don't want to keep the data that was transferred by Data Box in your storage account, delete the Azure storage account.
-
-
-## Next steps
-
-* Learn more about [Data Box](../databox/data-box-overview.md).
-* Learn more about different options for [migrating workloads to your private cloud](migrate-workloads.md).
vmware-cloudsimple Monitor Activity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/monitor-activity.md
- Title: Monitor Private Cloud activity-
-description: Describes the information available on activity in the Azure VMware Solution by CloudSimple environment, including alerts, events, tasks, and audit.
-- Previously updated : 08/13/2019 ------
-# Monitor VMware Solution by CloudSimple activity
-
-CloudSimple activity logs provide insight into operations performed in the CloudSimple portal. The list includes alerts, events, tasks, and audit logs. Use the activity logs to determine who performed an operation, what the operation was, and when it was performed. Activity logs don't include read operations performed by a user.
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Access the CloudSimple portal
-
-Access the [CloudSimple portal](access-cloudsimple-portal.md).
-
-## Activity information
-
-To access the Activity pages, select **Activity** on the side menu.
-
-![Activity page overview](media/activity-page-overview.png)
-
-To view details about any of the activities on the activity page, select the activity. A details panel opens on the right. Actions in the panel depend on the type of activity. Click **X** to close the panel.
-
-Click a column header to sort the display. You can filter columns for specific values. Download an activity report by clicking the **Download as CSV** icon.
-
-## Alerts
-
-Alerts are notifications of any significant activity in your CloudSimple environment. Alerts include events that affect billing or user access.
-
-To acknowledge alerts and remove them from the list, select one or more from the list and click **Acknowledge**.
-
-The following columns of information are available for alerts. Click **Edit columns** and select the columns you want to view.
-
-| Column | Description |
-|--|--|
-| Alert Type | Category of alert.|
-| Time | Time the alert occurred. |
-| Severity | Significance of the alert.|
-| Resource Name | Name assigned to the resource, such as the Private Cloud name. |
-| Resource Type | Category of resource: Private Cloud, Cloud Rack. |
-| Resource ID | Identifier of the resource. |
-| Description | Description of what triggered the alert. |
-| Acknowledged | Indication of whether the alert is acknowledged. |
-
-## Events
-
-Events show user and system activity on the CloudSimple portal. The Events page lists the activity associated with a specific resource and the severity of the impact.
-
-The following columns of information are available for events. Click **Edit columns** and select the columns you want to view.
-
-| Column | Description |
-|--|--|
-| Time | Date and time the event occurred. |
-| Event Type | Numeric code that identifies the event. |
-| Severity | Event severity.|
-| Resource Name | Name assigned to the resource, such as the Private Cloud name. |
-| Resource Type | Category of resource: Private Cloud, Cloud Rack. |
-| Description | Description of the event. |
-
-## Tasks
-
-Tasks are Private Cloud activities that are expected to take 30 seconds or more to complete. (Activities that are expected to take less than 30 seconds are reported only as events.) Open the Tasks page to track the progress of tasks for your Private Cloud.
-
-The following columns of information are available for tasks. Click **Edit columns** and select the columns you want to view.
-
-| Column | Description |
-|--|--|
-| Task ID | Unique identifier for the task. |
-| Operation | Action that the task performs. |
-| User | User assigned to complete the task. |
-| Resource Name | Name assigned to the resource. |
-| Resource Type | Category of resource: Private Cloud, Cloud Rack. |
-| Resource ID | Identifier of the resource. |
-| Start | Start time for the task. |
-| End | End time for the task. |
-| Status | Current task status. |
-| Time Elapsed | Time that the task took to complete (if completed) or is currently taking (if in progress). |
-| Description | Task description. |
-
-## Audit
-
-Audit logs keep track of user activity. You can use audit logs to monitor user activity for all users.
-
-The following columns of information are available for audit entries. Click **Edit columns** and select the columns you want to view.
-
-| Column | Description |
-|--|--|
-| Time | Time of the audit entry. |
-| Operation | Action that was performed. |
-| User | User who performed the action. |
-| Resource Name | Name assigned to the resource. |
-| Resource Type | Category of resource: Private Cloud, Cloud Rack. |
-| Resource ID | Identifier of the resource. |
-| Result | Result of the activity, such as **Success**. |
-| Time Taken | Time to complete the task. |
-| Description | Description of the action. |
-
-## Next steps
-
-* [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-* Learn more about [Private Clouds](cloudsimple-private-cloud.md)
vmware-cloudsimple Node Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/node-quota.md
- Title: Azure VMware Solution by CloudSimple - CloudSimple node quota
-description: Describes the quota limits for CloudSimple nodes and how to request for an increase of quota
-- Previously updated : 08/19/2019-----
-# CloudSimple node quota limits
-
-Four nodes is the default quantity available for purchase when your subscription is enabled for the CloudSimple service. You can purchase any [node type](cloudsimple-node.md) from the Azure portal. At least three nodes of the same SKU are required to create a Private Cloud. If you've already purchased four nodes, you may see an error when you try to purchase additional nodes.
-
-## Quota increase
-
-You can increase the node quota by submitting a support request. The service operations team evaluates the request and works with you to increase the node quota. Select the following options when you open a new ticket:
-
-* Issue type: **Technical**
-* Subscription: **Your subscription ID**
-* Service type: **VMware Solution by CloudSimple**
-* Problem type: **Dedicated Nodes quota**
-* Problem subtype: **Increase quota of dedicated nodes**
-* Subject: **Quota increase**
-
-In the details of the support ticket, provide the following information:
-
-* Node SKU
-* Number of additional nodes for which you're requesting the quota increase
-
-## Next steps
-
-* [Purchase nodes](create-nodes.md)
-* [CloudSimple nodes overview](cloudsimple-node.md)
vmware-cloudsimple On Premises Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/on-premises-connection.md
- Title: Azure VMware Solution by CloudSimple - On-premises connection using ExpressRoute
-description: Describes how to request an on-premises connection using ExpressRoute from CloudSimple region network
-- Previously updated : 08/14/2019 -----
-# Connect from on-premises to CloudSimple using ExpressRoute
-
-If you already have an Azure ExpressRoute connection from an external location (such as on-premises) to Azure, you can connect it to your CloudSimple environment through Azure ExpressRoute Global Reach, which allows two ExpressRoute circuits to connect to each other. This method establishes a secure, private, high-bandwidth, low-latency connection between the two environments.
-
-[![On-premises ExpressRoute Connection - Global Reach](media/cloudsimple-global-reach-connection.png)](media/cloudsimple-global-reach-connection.png)
-
-## Before you begin
-
-A **/29** network address block is required for establishing a Global Reach connection from on-premises. The /29 address space is used for the transit network between the ExpressRoute circuits. The transit network must not overlap with any of your Azure virtual networks, on-premises networks, or CloudSimple Private Cloud networks.
-
-## Prerequisites
-
-* An Azure ExpressRoute circuit is required before you can establish the connection between the circuit and the CloudSimple Private Cloud networks.
-* A user with privileges to create authorization keys on an ExpressRoute circuit is required.
-
-## Scenarios
-
-Connecting your on-premises network to your Private Cloud network allows you to use the Private Cloud in various ways, including the following scenarios:
-
-* Access your Private Cloud network without creating a Site-to-Site VPN connection.
-* Use your on-premises Active Directory as an identity source on your Private Cloud.
-* Migrate virtual machines running on-premises to your Private Cloud.
-* Use your Private Cloud as part of a disaster recovery solution.
-* Consume on-premises resources on your Private Cloud workload VMs.
-
-## Connecting ExpressRoute circuits
-
-To establish the ExpressRoute connection, you must create an authorization on your ExpressRoute circuit and provide the authorization information to CloudSimple. The portal steps follow; an equivalent Azure CLI sketch appears after the steps.
--
-### Create ExpressRoute authorization
-
-1. Sign in to the Azure portal.
-
-2. From the top search bar, search for **ExpressRoute circuit** and click **ExpressRoute circuits** under **Services**.
- [![ExpressRoute Circuits](media/azure-expressroute-transit-search.png)](media/azure-expressroute-transit-search.png)
-
-3. Select the ExpressRoute circuit that you intend to connect to your CloudSimple network.
-
-4. On the ExpressRoute page, click **Authorizations**, enter a name for the authorization, and click **Save**.
- [![ExpressRoute Circuit Authorization](media/azure-expressroute-transit-authorizations.png)](media/azure-expressroute-transit-authorizations.png)
-
-5. Copy the resource ID and authorization key by clicking the copy icon. Paste the ID and key into a text file.
- [![ExpressRoute Circuit Authorization Copy](media/azure-expressroute-transit-authorization-copy.png)](media/azure-expressroute-transit-authorization-copy.png)
-
- > [!IMPORTANT]
- > **Resource ID** must be copied from the UI and should be in the format ```/subscriptions/<subscription-ID>/resourceGroups/<resource-group-name>/providers/Microsoft.Network/expressRouteCircuits/<express-route-circuit-name>``` when you provide it to support.
-
-6. File a ticket with <a href="https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest" target="_blank">Support</a> for the connection to be created.
- * Issue type: **Technical**
- * Subscription: **Subscription where CloudSimple service is deployed**
- * Service: **VMware Solution by CloudSimple**
- * Problem type: **Service request**
- * Problem subtype: **Create ExpressRoute connection to on-premises**
- * Provide the resource ID and authorization key that you copied and saved in the details pane.
- * Provide a /29 network address space for transit network.
- * Indicate whether you're sending a default route through ExpressRoute.
- * Indicate whether the Private Cloud traffic should use the default route sent through ExpressRoute.
-
- > [!IMPORTANT]
- > Sending a default route allows all internet traffic from the Private Cloud to use your on-premises internet connection. To disable the default route configured on the Private Cloud and use the on-premises default route instead, provide the details in the support ticket.
-
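-If you prefer the Azure CLI, the following sketch creates the authorization and prints the values needed for the ticket; the resource group and circuit names are placeholders.
-
-```
-# Create an authorization on the circuit (placeholder names)
-az network express-route auth create \
-  --resource-group myResourceGroup \
-  --circuit-name myCircuit \
-  --name cloudsimple-auth
-
-# Print the authorization key and the circuit resource ID for the support ticket
-az network express-route auth show \
-  --resource-group myResourceGroup \
-  --circuit-name myCircuit \
-  --name cloudsimple-auth \
-  --query authorizationKey
-az network express-route show \
-  --resource-group myResourceGroup \
-  --name myCircuit \
-  --query id
-```
-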
-## Next steps
-
-* [Learn more about Azure network connections](cloudsimple-azure-network-connection.md)
vmware-cloudsimple On Premises Dns Setup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/on-premises-dns-setup.md
- Title: Azure VMware Solution by CloudSimple - Configure DNS for CloudSimple Private Cloud
-description: Describes how to set up DNS name resolution for access to vCenter server on a CloudSimple Private Cloud from on-premises workstations
-- Previously updated : 08/14/2019 -----
-# Configure DNS for name resolution for Private Cloud vCenter access from on-premises workstations
-
-To access the vCenter server on a CloudSimple Private Cloud from on-premises workstations, you must configure DNS address resolution so the vCenter server can be addressed by hostname as well as by IP address.
-
-## Obtain the IP address of the DNS server for your Private Cloud
-
-1. Sign in to the [CloudSimple portal](access-cloudsimple-portal.md).
-
-2. Navigate to **Resources** > **Private Clouds** and select the Private Cloud you want to connect to.
-
-3. On the **Summary** page of the Private Cloud under **Basic Info**, copy the Private Cloud DNS server IP address.
-
- ![Private Cloud DNS servers](media/private-cloud-dns-server.png)
--
-Use either of these options for the DNS configuration.
-
-* [Create a zone on the DNS server for *.cloudsimple.io](#create-a-zone-on-a-microsoft-windows-dns-server)
-* [Create a conditional forwarder on your on-premises DNS server to resolve *.cloudsimple.io](#create-a-conditional-forwarder)
-
-## Create a zone on the DNS server for *.cloudsimple.io
-
-You can set up a zone as a stub zone that points to the DNS servers on the Private Cloud for name resolution. This section provides information on using a BIND DNS server or a Microsoft Windows DNS server.
-
-### Create a zone on a BIND DNS server
-
-The specific file and parameters to configure can vary based on your individual DNS setup.
-
-For example, for the default BIND server configuration, edit the
-`/etc/named.conf` file on your DNS server and add the following zone information.
-
-> [!NOTE]
->This article contains references to the term *slave*, a term that Microsoft no longer uses. When the term is removed from the software, we'll remove it from this article.
-
-```
-zone "az.cloudsimple.io"
-{
-    type stub;                               // pull zone records from the Private Cloud DNS servers
-    masters { IP address of DNS servers; };  // replace with the Private Cloud DNS server IP addresses
-    file "slaves/cloudsimple.io.db";         // local file where the zone data is cached
-};
-```
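-
-Once the stub zone has loaded, a quick lookup can confirm that names in the zone resolve through the Private Cloud DNS servers; the hostname below is a placeholder for a real record in your zone.
-
-```
-# Query the BIND server directly for a name in the stub zone
-dig @localhost vcenter.az.cloudsimple.io +short
-```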
-
-### Create a zone on a Microsoft Windows DNS server
-
-1. Right-click the DNS server and select **New Zone**.
-
- ![Screenshot that highlights the New Zone menu option.](media/DNS01.png)
-2. Select **Stub Zone** and click **Next**.
-
- ![Screenshot that highlights the Stub Zone option.](media/DNS02.png)
-3. Select the appropriate option depending on your environment and click **Next**.
-
- ![Screenshot that shows the zone data replication options.](media/DNS03.png)
-4. Select **Forward lookup zone** and click **Next**.
-
- ![Screenshot that highlights the Forward lookup zone option.](media/DNS01.png)
-5. Enter the zone name and click **Next**.
-
- ![Screenshot that shows where to enter the zone name.](media/DNS05.png)
-6. Enter the IP addresses of the DNS servers for your Private Cloud that you obtained
-from the CloudSimple portal.
-
- ![New Zone](media/DNS06.png)
-7. Click **Next** as needed to complete the wizard setup.
-
-## Create a conditional forwarder
-
-A conditional forwarder forwards all DNS name resolution requests to the designated server. With this setup, any request to *.cloudsimple.io is forwarded to the DNS servers located on the Private Cloud. The following examples show how to set up
-forwarders on different types of DNS servers.
-
-### Create a conditional forwarder on a BIND DNS server
-
-The specific file and parameters to configure can vary based on your individual DNS setup.
-
-For example, for the default BIND server configuration, edit the `/etc/named.conf` file on your DNS server and add the following conditional forwarding information.
-
-```
-zone "az.cloudsimple.io" {
-    type forward;                               // don't answer locally; forward queries for this zone
-    forwarders { IP address of DNS servers; };  // replace with the Private Cloud DNS server IP addresses
-};
-```
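-
-Before reloading the server, you can validate the edited configuration with `named-checkconf`, which ships with BIND, and then apply it.
-
-```
-# Check /etc/named.conf for syntax errors, then reload BIND
-sudo named-checkconf /etc/named.conf
-sudo rndc reload
-```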
-
-### Create a conditional forwarder on a Microsoft Windows DNS server
-
-1. Open the DNS Manager on the DNS server.
-2. Right-click **Conditional Forwarders** and select the option to add a new conditional forwarder.
-
- ![Conditional Forwarder 1 Windows DNS](media/DNS08.png)
-3. Enter the DNS domain and the IP address of the DNS servers in the Private Cloud, and click **OK**.
vmware-cloudsimple On Premises Firewall Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/on-premises-firewall-configuration.md
- Title: Access Azure VMware Solution by CloudSimple from on-premises -
-description: Accessing your Azure VMware Solution by CloudSimple from your on-premises network through a firewall
-- Previously updated : 08/08/2019 ------
-# Accessing your CloudSimple Private Cloud environment and applications from on-premises
-
-A connection can be set up from your on-premises network to CloudSimple using Azure ExpressRoute or a Site-to-Site VPN. Use the connection to access your CloudSimple Private Cloud vCenter and any workloads you run on the Private Cloud. You can control which ports are opened on the connection using a firewall in your on-premises network. This article discusses typical port requirements for applications. For any other applications, refer to the application documentation for port requirements.
-
-## Ports required for accessing vCenter
-
-To access your Private Cloud vCenter and NSX-T manager, the ports defined in the table below must be opened on the on-premises firewall. A sketch of matching Linux firewall rules follows the table.
-
-| Port | Source | Destination | Purpose |
-||-|-||
-| 53 (UDP) | On-premises DNS servers | Private Cloud DNS servers | Required for forwarding DNS lookup of *az.cloudsimple.io* to Private Cloud DNS servers from on-premises network. |
-| 53 (UDP) | Private Cloud DNS servers | On-premises DNS servers | Required for forwarding DNS lookup of on-premises domain names from Private Cloud vCenter to on-premises DNS servers. |
-| 80 (TCP) | On-premises network | Private Cloud management network | Required for redirecting vCenter URL from *http* to *https*. |
-| 443 (TCP) | On-premises network | Private Cloud management network | Required for accessing vCenter and NSX-T manager from on-premises network. |
-| 8000 (TCP) | On-premises network | Private Cloud management network | Required for vMotion of virtual machines from on-premises to Private Cloud. |
-| 8000 (TCP) | Private Cloud management network | On-premises network | Required for vMotion of virtual machines from Private Cloud to on-premises. |
-
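-If your on-premises firewall is Linux-based, the rules for these ports might look like the following sketch. The management subnet `192.0.2.0/24` is a placeholder, and your firewall's chain layout and default policy will differ.
-
-```
-# Placeholder: Private Cloud management network
-PC_MGMT=192.0.2.0/24
-
-# HTTPS access to vCenter/NSX-T, plus the HTTP-to-HTTPS redirect
-iptables -A FORWARD -d "$PC_MGMT" -p tcp --dport 443 -j ACCEPT
-iptables -A FORWARD -d "$PC_MGMT" -p tcp --dport 80 -j ACCEPT
-
-# vMotion traffic in both directions
-iptables -A FORWARD -d "$PC_MGMT" -p tcp --dport 8000 -j ACCEPT
-iptables -A FORWARD -s "$PC_MGMT" -p tcp --dport 8000 -j ACCEPT
-
-# DNS forwarding between on-premises and Private Cloud DNS servers
-iptables -A FORWARD -p udp --dport 53 -j ACCEPT
-```
-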
-## Ports required for using on-premises active directory as an identity source
-
-To configure on-premises Active Directory as an identity source on Private Cloud vCenter, the ports defined in the table below must be opened. See [Use Azure AD as an identity provider for vCenter on CloudSimple Private Cloud](./azure-ad.md) for configuration steps.
-
-| Port | Source | Destination | Purpose |
-|--|-|--|--|
-| 53 (UDP) | Private Cloud DNS servers | On-premises DNS servers | Required for forwarding DNS lookup of on-premises Active Directory domain names from Private Cloud vCenter to on-premises DNS servers. |
-| 389 (TCP/UDP) | Private Cloud management network | On-premises active directory domain controllers | Required for LDAP communication from Private Cloud vCenter server to active directory domain controllers for user authentication. |
-| 636 (TCP) | Private Cloud management network | On-premises active directory domain controllers | Required for secure LDAP (LDAPS) communication from Private Cloud vCenter server to active directory domain controllers for user authentication. |
-| 3268 (TCP) | Private Cloud management network | On-premises active directory global catalog servers | Required for LDAP communication in multi-domain controller deployments. |
-| 3269 (TCP) | Private Cloud management network | On-premises active directory global catalog servers | Required for LDAPS communication in multi-domain controller deployments. |
-
-## Common ports required for accessing workload virtual machines
-
-Accessing workload virtual machines running on a Private Cloud requires ports to be opened on your on-premises firewall. The table below shows some of the common ports required and their purpose. For any application-specific port requirements, refer to the application documentation.
-
-| Port | Source | Destination | Purpose |
-|--|--|--|--|
-| 22 (TCP) | On-premises network | Private Cloud workload network | Secure shell access to Linux virtual machines running on Private Cloud. |
-| 3389 (TCP) | On-premises network | Private Cloud workload network | Remote desktop to Windows virtual machines running on Private Cloud. |
-| 80 (TCP) | On-premises network | Private Cloud workload network | Access any web servers deployed on virtual machines running on Private Cloud. |
-| 443 (TCP) | On-premises network | Private Cloud workload network | Access any secure web servers deployed on virtual machines running on Private Cloud. |
-| 389 (TCP/UDP) | Private Cloud workload network | On-premises active directory network | Join Windows workload virtual machines to on-premises active directory domain. |
-| 53 (UDP) | Private Cloud workload network | On-premises network | DNS service access for workload virtual machines to on-premises DNS servers. |
-
-## Next steps
-
-* [Create and manage VLANs and Subnets](./create-vlan-subnet.md)
-* [Connect to on-premises network using Azure ExpressRoute](./on-premises-connection.md)
-* [Set up Site-to-Site VPN from on-premises](./vpn-gateway.md)
vmware-cloudsimple Oracle Real Application Clusters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/oracle-real-application-clusters.md
- Title: Azure VMware Solution by CloudSimple - Optimize your CloudSimple Private Cloud for Oracle RAC
-description: Describes how to deploy a new cluster and optimize a VM for Oracle Real Application Clusters (RAC) installation and configuration
-- Previously updated : 08/06/2019 ------
-# Optimize your CloudSimple Private Cloud for installing Oracle RAC
-
-You can deploy Oracle Real Application Clusters (RAC) in your CloudSimple Private Cloud environment. This guide describes how to deploy a new cluster and optimize VMs for the Oracle RAC solution. After completing the steps in this topic, you can install and configure Oracle RAC.
-
-## Storage Policy
-
-Successful implementation of Oracle RAC requires an adequate number of nodes in the cluster. In the vSAN storage policy, failures to tolerate (FTT) is applied to the data disks used for storing the database, log, and redo disks. The required number of nodes to tolerate failures is 2N+1, where N is the value of FTT.
-
-Example: If the desired FTT is 2, then the total number of nodes in the cluster must be 2*2+1 = 5.
-
-## Overview of deployment
-
-The following sections describe how to set up your CloudSimple Private Cloud environment for Oracle RAC.
-
-1. Best practices for disk configuration
-2. Deploy CloudSimple Private Cloud vSphere Cluster
-3. Set up Networking for Oracle RAC
-4. Set up vSAN storage policies
-5. Create Oracle VMs and create shared VM disks
-6. Set up VM-to-host affinity rules
-
-## Best practices for disk configuration
-
-Oracle RAC virtual machines have multiple disks, each used for a specific function. Shared disks are mounted on all virtual machines in the Oracle RAC cluster. Operating system and software installation disks are mounted only on the individual virtual machines.
-
-![Oracle RAC virtual machine disks overview](media/oracle-vm-disks-overview.png)
-
-The following example uses the disks defined in the table below.
-
-| Disk | Purpose | Shared Disk |
-|-|--|-|
-| OS | Operating system disk | No |
-| GRID | Install location for Oracle Grid software | No |
-| DATABASE | Install location for Oracle database software | No |
-| ORAHOME | Base location for Oracle database binaries | No |
-| DATA1, DATA2, DATA3, DATA4 | Disk where Oracle database files are stored | Yes |
-| REDO1, REDO2, REDO3, REDO4, REDO5, REDO6 | Redo log disks | Yes |
-| OCR1, OCR2, OCR3, OCR4, OCR5 | Voting disks | Yes |
-| FRA1, FRA2 | Fast recovery area disks | Yes |
-
-![Oracle virtual machine disk configuration](medik.png)
-
-### Virtual machine configuration
-
-* Each virtual machine is configured with four SCSI controllers.
-* The SCSI controller type is set to VMware Paravirtual.
-* Multiple virtual disks (.vmdk) are created.
-* Disks are mounted on different SCSI controllers.
-* The multi-writer sharing type is set for shared cluster disks.
-* A vSAN storage policy is defined to ensure high availability of disks.
-
-### Operating system and software disk configuration
-
-Each Oracle virtual machine is configured with multiple disks for the host operating system, swap, software installation, and other OS functions. These disks aren't shared between the virtual machines.
-
-* Three disks for each virtual machine are configured as virtual disks and mounted on Oracle RAC virtual machines.
- * OS Disk
- * Disk for storing Oracle Grid install files
- * Disk for storing Oracle database install files
-* Disks can be configured as **Thin Provisioned**.
-* Each disk is mounted on the first SCSI controller (SCSI0).
-* Sharing is set to **No sharing**.
-* Redundancy is defined on the storage using vSAN policies.
-
-![Diagram that shows the Oracle RAC OS disk physical configuration.](media/oracle-vm-os-disks.png)
-
-### Data disk configuration
-
-Data disks are primarily used for storing database files.
-
-* Four disks are configured as virtual disks and mounted on all Oracle RAC virtual machines.
-* Each disk is mounted on a different SCSI controller.
-* Each virtual disk is configured as **Thick Provision Eager Zeroed**.
-* Sharing is set to **Multi-writer**.
-* The disks must be configured as an Automatic Storage Management (ASM) disk group.
-* Redundancy is defined on the storage using vSAN policies.
-* ASM redundancy is set to **External** redundancy.
-
-![Oracle RAC data disk group configuration](media/oracle-vm-data-disks.png)
-
-### Redo log disk configuration
-
-Redo log files are used for storing a copy of the changes made to the database. The log files are used when data needs to be recovered after any failures.
-
-* Redo log disks must be configured as multiple disk groups.
-* Six disks are created and mounted on all Oracle RAC virtual machines.
-* Disks are mounted on different SCSI controllers.
-* Each virtual disk is configured as **Thick Provision Eager Zeroed**.
-* Sharing is set to **Multi-writer**.
-* The disks must be configured as two ASM disk groups.
-* Each ASM disk group contains three disks, which are on different SCSI controllers.
-* ASM redundancy is set to **Normal** redundancy.
-* Five redo log files are created for each thread, with members in both ASM redo log disk groups:
-
-```
-SQL > alter database add logfile thread 1 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 1 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 1 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 1 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 1 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 2 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 2 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 2 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 2 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-SQL > alter database add logfile thread 2 ('+ORCLRAC_REDO1','+ORCLRAC_REDO2') size 1G;
-```
-
-![Oracle RAC redo log disk group configuration](media/oracle-vm-redo-log-disks.png)
-
-### Oracle voting disk configuration
-
-Voting disks provide quorum disk functionality as an additional communication channel to avoid any split-brain situation.
-
-* Five disks are created and mounted on all Oracle RAC virtual machines.
-* Disks are mounted on one SCSI controller.
-* Each virtual disk is configured as **Thick Provision Eager Zeroed**.
-* Sharing is set to **Multi-writer**.
-* The disks must be configured as an ASM disk group.
-* ASM redundancy is set to **High** redundancy.
-
-![Oracle RAC voting disk group configuration](media/oracle-vm-voting-disks.png)
-
-### Oracle fast recovery area disk configuration (optional)
-
-The fast recovery area (FRA) is a file system managed by an Oracle ASM disk group. FRA provides a shared storage location for backup and recovery files. Oracle creates archived logs and flashback logs in the fast recovery area. Oracle Recovery Manager (RMAN) can optionally store its backup sets and image copies in the fast recovery area, and uses them when restoring files during media recovery.
-
-* Two disks are created and mounted on all Oracle RAC virtual machines.
-* Disks are mounted on different SCSI controllers.
-* Each virtual disk is configured as **Thick Provision Eager Zeroed**.
-* Sharing is set to **Multi-writer**.
-* The disks must be configured as an ASM disk group.
-* ASM redundancy is set to **External** redundancy.
-
-![Diagram that shows the Oracle RAC voting disk group configuration.](media/oracle-vm-fra-disks.png)
-
-## Deploy CloudSimple Private Cloud vSphere cluster
-
-To deploy a vSphere cluster on your Private Cloud, follow this process:
-
-1. From the CloudSimple portal, [create a Private Cloud](create-private-cloud.md). CloudSimple creates a default vCenter user named 'cloudowner' in the newly created Private Cloud. For details on the default Private Cloud user and permission model, see [Learn the Private Cloud permission model](learn-private-cloud-permissions.md). This step creates the primary management cluster for your Private Cloud.
-
-2. From the CloudSimple portal, [expand the Private Cloud](expand-private-cloud.md) with a new cluster. This cluster will be used to deploy Oracle RAC. Select the number of nodes based on the desired fault tolerance (minimum three nodes).
-
-## Set up networking for Oracle RAC
-
-1. In your Private Cloud, [create two VLANs](create-vlan-subnet.md), one for the Oracle public network and one for the Oracle private network, and assign appropriate subnet CIDRs.
-2. After the VLANs are created, create the [distributed port groups on the Private Cloud vCenter](create-vlan-subnet.md#use-vlan-information-to-set-up-a-distributed-port-group-in-vsphere).
-3. Set up a [DHCP and DNS server virtual machine](dns-dhcp-setup.md) on your management cluster for the Oracle environment.
-4. [Configure DNS forwarding on the DNS server](on-premises-dns-setup.md#create-a-conditional-forwarder) installed in the Private Cloud.
-
-## Set up vSAN storage policies
-
-vSAN policies define the failures to tolerate and disk striping for the data stored on VM disks. The storage policy you create must be applied to the VM disks when you create the VMs.
-
-1. [Sign in to the vSphere client](./vcenter-access.md) of your Private Cloud.
-2. From the top menu, select **Policies and Profiles**.
-3. From the left menu, select **VM Storage Policies** and then select **Create a VM storage Policy**.
-4. Enter a meaningful name for the policy and click **NEXT**.
-5. In the **Policy structure** section, select **Enable rules for vSAN storage** and click **NEXT**.
-6. In the **vSAN** > **Availability** section, select **None** for Site disaster tolerance. For Failures to tolerate, select the **RAID - Mirroring** option for the desired FTT.
- ![vSAN settings](media/oracle-rac-storage-wizard-vsan.png).
-7. In the **Advanced** section, select the number of disk stripes per object. For Object space reservation, select **Thick Provisioned**. Select **Disable object checksum**. Click **NEXT**.
-8. Follow the on-screen instructions to view the list of compatible vSAN datastores, review the settings, and finish the setup.
-
-## Create Oracle VMs and create shared VM disks for Oracle
-
-To create a VM for Oracle, clone an existing VM or create a new one. This section describes how to create a new VM and then clone it to create a second one after installing the base operating system. After the VMs are created, you can create and add disks to them. The Oracle cluster uses shared disks for storing data, logs, and redo logs.
-
-### Create VMs
-
-1. In vCenter, click the **Hosts and Clusters** icon. Select the cluster that you created for Oracle.
-2. Right-click the cluster and select **New Virtual Machine**.
-3. Select **Create new virtual machine** and click **Next**.
-4. Name the machine, select the Oracle VM's location, and click **Next**.
-5. Select the cluster resource and click **Next**.
-6. Select the vSAN datastore for the cluster and click **Next**.
-7. Keep the default ESXi 6.5 compatibility selection and click **Next**.
-8. Select the guest OS of the ISO for the VM that you are creating and click **Next**.
-9. Select the hard disk size that is required for installing the OS.
-10. To install the application on a different device, click **Add new device**.
-11. Select network options and assign the distributed port group created for the public network.
-12. To add additional network interfaces, click **Add new device** and select the distributed port group created for the private network.
-13. For New CD/DVD Drive, select the datastore ISO file that contains the ISO for the preferred operating system installation. Select the file you previously uploaded to the ISOs and Templates folder and click **OK**.
-14. Review the settings and click **OK** to create the new VM.
-15. Power on the VM. Install the operating system and any required updates.
-
-After the operating system is installed, you can clone a second VM. Right-click the VM entry and select the clone option.
-
-### Create shared disks for VMs
-
-Oracle uses shared disks to store the data, log, and redo log files. You can create a shared disk on vCenter and mount it on both VMs. For higher performance, place the data disks on different SCSI controllers. The steps below show how to create a shared disk on vCenter and then attach it to a virtual machine. The vCenter flash client is used for modifying the VM properties.
-
-#### Create disks on the first VM
-
-1. In vCenter, right-click one of the Oracle VMs and select **Edit settings**.
-2. In the new device section, select **SCSI controller** and click **Add**.
-3. In the new device section, select **New Hard disk** and click **Add**.
-4. Expand the properties of New Hard disk.
-5. Specify the size of the hard disk.
-6. Specify the VM storage policy to be the vSAN storage policy that you defined earlier.
-7. Select the location as a folder on vSAN datastore. The location helps with browsing and attaching the disks to a second VM.
-8. For disk provisioning, select **Thick provision eager zeroed**.
-9. For sharing, specify **Multi-writer**.
-10. For the virtual device node, select the new SCSI controller that was created in step 2.
-
- ![Screenshot that highlights the fields needed to create disks on the first VM.](media/oracle-rac-new-hard-disk.png)
-
-Repeat steps 2 – 10 for all the new disks required for the Oracle data, logs, and redo log files.
-
-#### Attach disks to second VM
-
-1. In vCenter, right-click one of the Oracle VMs and select **Edit settings**.
-2. In the new device section, select **SCSI controller** and click **Add**.
-3. In the new device section, select **Existing Hard disk** and click **Add**.
-4. Browse to the location where the disk was created for the first VM and select the VMDK file.
-5. Specify the VM storage policy to be the vSAN storage policy that you defined earlier.
-6. For disk provisioning, select **Thick provision eager zeroed**.
-7. For sharing, specify **Multi-writer**.
-8. For the virtual device node, select the new SCSI controller that was created in step 2.
-
- ![Create disks on first VM](media/oracle-rac-existing-hard-disk.png)
-
-Repeat steps 2 – 7 for all the new disks required for the Oracle data, logs, and redo log files.
-
-## Set up VM host affinity rules
-
-VM-to-host affinity rules ensure that the VM runs on the desired host. You can define rules on vCenter to ensure the Oracle VM runs on the host with adequate resources and to meet any specific licensing requirements.
-
-1. In the CloudSimple portal, [escalate the privileges](escalate-private-cloud-privileges.md) of the cloudowner user.
-2. Log in to the vSphere client of your Private Cloud.
-3. In the vSphere client, select the cluster where Oracle VMs are deployed and click **Configure**.
-4. Under Configure, select **VM/Host Groups**.
-5. Click **+**.
-6. Add a VM group. Select **VM group** as the type. Enter the name of the group. Select the VMs and then click **OK** to create the group.
-7. Add a host group. Select **Host Group** as the type. Enter the name of the group. Select the hosts where the VMs will run and then click **OK** to create the group.
-8. To create a rule, click **VM/Host rules**.
-9. Click **+**.
-10. Enter a name for the rule and check **Enable**.
-11. For the rule type, select **Virtual Machines to Host**.
-12. Select the VM group that contains the Oracle VMs.
-13. Select **Must run on hosts in this group**.
-14. Select the host group that you created.
-15. Click **OK** to create the rule.
-
-## References
-
-* [About vSAN Policies](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.virtualsan.doc/GUID-08911FD3-2462-4C1C-AE81-0D4DBC8F7990.html)
vmware-cloudsimple Private Cloud Dns Forwarding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/private-cloud-dns-forwarding.md
- Title: Azure VMware Solution - DNS forwarding from private cloud to on-premises
-description: Describes how to enable your CloudSimple Private Cloud DNS server to forward lookup of on-premises resources
-- Previously updated : 02/29/2020 ------
-# Enable CloudSimple Private Cloud DNS servers to forward DNS lookup of on-premises resources to your DNS servers
-
-Private Cloud DNS servers can forward DNS lookups for any on-premises resources to your DNS servers. Enabling the lookups allows Private Cloud vSphere components to look up any services running in your on-premises environment and communicate with them using fully qualified domain names (FQDN).
-
-## Scenarios
-
-Forwarding DNS lookup for your on-premises DNS server allows you to use your Private Cloud for the following scenarios:
-
-* Use Private Cloud as a disaster recovery setup for your on-premises VMware solution
-* Use on-premises Active Directory as an identity source for your Private Cloud vSphere
-* Use HCX for migrating virtual machines from on-premises to Private Cloud
-
-## Before you begin
-
-A network connection must be present from your Private Cloud network to your on-premises network for DNS forwarding to work. You can set up the network connection using:
-
-* [Connect from on-premises to CloudSimple using ExpressRoute](on-premises-connection.md)
-* [Set up a Site-to-Site VPN gateway](./vpn-gateway.md#set-up-a-site-to-site-vpn-gateway)
-
-Firewall ports must be opened on this connection for DNS forwarding to work. DNS uses TCP port 53 and UDP port 53.
-
-> [!NOTE]
-> If you are using Site-to-Site VPN, your on-premises DNS server subnet must be added as a part of on-premises prefixes.
-
-## Request DNS forwarding from Private Cloud to on-premises
-
-To enable DNS forwarding from Private Cloud to on-premises, submit a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest), providing the following information.
-
-* Issue type: **Technical**
-* Subscription: **Subscription where CloudSimple service is deployed**
-* Service: **VMware Solution by CloudSimple**
-* Problem type: **Advisory or How do I...**
-* Problem subtype: **Need help with NW**
-* Provide the domain name of your on-premises domain in the details pane.
-* Provide the list of your on-premises DNS servers to which the lookup will be forwarded from your private cloud in the details pane.
-
-## Next steps
-
-* [Learn more about on-premises firewall configuration](on-premises-firewall-configuration.md)
-* [On-premises DNS server configuration](on-premises-dns-setup.md)
vmware-cloudsimple Private Cloud Secure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/private-cloud-secure.md
- Title: Azure VMware Solutions by CloudSimple - Secure Private Cloud
-description: Describes how to secure Azure VMware Solutions by CloudSimple Private Cloud
-- Previously updated : 08/19/2019 ------
-# How to secure your Private Cloud environment
-
-Define role-based access control (RBAC) for the CloudSimple service, CloudSimple portal, and Private Cloud from Azure. Users, groups, and roles for accessing the Private Cloud vCenter are specified using VMware SSO.
-
-## Azure RBAC for CloudSimple service
-
-Creating a CloudSimple service requires the **Owner** or **Contributor** role on the Azure subscription. By default, all owners and contributors can create a CloudSimple service and access the CloudSimple portal to create and manage Private Clouds. Only one CloudSimple service can be created per region. To restrict access to specific administrators, follow the procedure below.
-
-1. Create a CloudSimple service in a new **resource group** in the Azure portal.
-2. Specify Azure RBAC for the resource group.
-3. Purchase nodes using the same resource group as the CloudSimple service.
-
-Only users who have **Owner** or **Contributor** privileges on the resource group can see the CloudSimple service and launch the CloudSimple portal. A CLI sketch for the role assignment follows.
-
-For more information, see [What is Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
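-
-As a sketch, assigning the role at resource-group scope with the Azure CLI might look like this; the user and resource group names are placeholders.
-
-```
-# Grant a specific administrator Contributor access to the resource group
-# that contains the CloudSimple service and nodes
-az role assignment create \
-  --assignee admin@contoso.com \
-  --role "Contributor" \
-  --resource-group cloudsimple-rg
-```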
-
-## RBAC for Private Cloud vCenter
-
-A default user, `CloudOwner@cloudsimple.local`, is created in the vCenter SSO domain when a Private Cloud is created. The CloudOwner user has privileges for managing vCenter. Additional identity sources can be added to vCenter SSO to give access to other users. Pre-defined roles and groups are set up in vCenter and can be used to add additional users.
-
-### Add new users to vCenter
-
-1. [Escalate privileges](escalate-private-cloud-privileges.md) for **CloudOwner\@cloudsimple.local** user on the Private Cloud.
-2. Sign into vCenter using **CloudOwner\@cloudsimple.local**
-3. [Add vCenter Single Sign-On Users](https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.security.doc/GUID-72BFF98C-C530-4C50-BF31-B5779D2A4BBB.html).
-4. Add users to [vCenter single sign-on groups](https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.security.doc/GUID-CDEA6F32-7581-4615-8572-E0B44C11D80D.html).
-
-For more information about pre-defined roles and groups, see the [CloudSimple Private Cloud permission model of VMware vCenter](learn-private-cloud-permissions.md) article.
-
-### Add new identity sources
-
-You can add additional identity providers for the vCenter SSO domain of your Private Cloud. Identity providers provide authentication, and vCenter SSO groups provide authorization for users.
-
-* [Use Active Directory as an identity provider](set-vcenter-identity.md) on Private Cloud vCenter.
-* [Use Azure AD as an identity provider](azure-ad.md) on Private Cloud vCenter
-
-1. [Escalate privileges](escalate-private-cloud-privileges.md) for **CloudOwner\@cloudsimple.local** user on the Private Cloud.
-2. Sign into vCenter using **CloudOwner\@cloudsimple.local**
-3. Add users from the identity provider to [vCenter single sign-on groups](https://docs.vmware.com/en/VMware-vSphere/5.5/com.vmware.vsphere.security.doc/GUID-CDEA6F32-7581-4615-8572-E0B44C11D80D.html).
-
-## Secure network on your Private Cloud environment
-
-Network security in your Private Cloud environment is controlled by securing network access and controlling network traffic between resources.
-
-### Access to Private Cloud resources
-
-Access to Private Cloud vCenter and resources is over a secure network connection:
-
-* **[ExpressRoute connection](on-premises-connection.md)**. ExpressRoute provides a secure, high-bandwidth, low-latency connection from your on-premises environment. Using the connection allows your on-premises services, networks, and users to access your Private Cloud vCenter.
-* **[Site-to-Site VPN gateway](vpn-gateway.md)**. Site-to-Site VPN gives access to your Private Cloud resources from on-premises through a secure tunnel. You specify which on-premises networks can send and receive network traffic to your Private Cloud.
-* **[Point-to-Site VPN gateway](vpn-gateway.md#set-up-a-site-to-site-vpn-gateway)**. Use Point-to-Site VPN connection for quick remote access to your Private Cloud vCenter.
-
-### Control network traffic in Private Cloud
-
-Firewall tables and rules control network traffic in the Private Cloud. The firewall table allows you to control network traffic between a source network or IP address and a destination network or IP address based on the combination of rules defined in the table.
-
-1. Create a [firewall table](firewall.md#add-a-new-firewall-table).
-2. [Add rules](firewall.md#create-a-firewall-rule) to the firewall table.
-3. [Attach a firewall table to a VLAN/subnet](firewall.md#attach-vlans-subnet).
vmware-cloudsimple Public Ips https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/public-ips.md
- Title: Azure VMware Solution by CloudSimple - Allocate public IP addresses
-description: Describes how to allocate public IP addresses for virtual machines in the Private Cloud environment
-- Previously updated : 08/15/2019 ------
-# Allocate public IP addresses for Private Cloud environment
-
-Open the Public IPs tab on the Network page to allocate public IP addresses for virtual machines in your Private Cloud environment.
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md) and select **Network** on the side menu.
-2. Select **Public IPs**.
-3. Click **New Public IP**.
-
- ![Public IPs page](media/public-ips-page.png)
-
-4. Enter a name to identify the IP address entry.
-5. Keep the default location.
-6. Use the slider to change the idle timeout, if needed.
-7. Enter the local IP address for which you want to assign a public IP address.
-8. Enter an associated DNS name.
-9. Click **Submit**.
-
-![Allocate public IPs](media/network-public-ip-allocate.png)
-
-The task of allocating the public IP address begins. You can check the status of the task on the **Activity > Tasks** page. When allocation is complete, the new entry is shown on the Public IPs page.
vmware-cloudsimple Quickstart Create Cloudsimple Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/quickstart-create-cloudsimple-service.md
- Title: "Quickstart: Create VMware CloudSimple service"-
-description: Learn how to create the CloudSimple service, purchase nodes, and reserve nodes
-- Previously updated : 08/16/2019 ------
-# Quickstart - Create Azure VMware Solution by CloudSimple service
-
-To get started, create the Azure VMware Solution by CloudSimple in the Azure portal.
-
-## VMware Solution by CloudSimple - Service overview
-
-The CloudSimple service allows you to consume Azure VMware Solution by CloudSimple. Creating the service allows you to provision nodes, reserve nodes, and create private clouds. You add the CloudSimple service in each Azure region where the CloudSimple service is available. The service defines the edge network of Azure VMware Solution by CloudSimple. This edge network is used for services that include VPN, ExpressRoute, and Internet connectivity to your private clouds.
-
-To add the CloudSimple service, you must create a gateway subnet. The gateway subnet is used when creating the edge network and requires a /28 CIDR block. The gateway subnet address space must be unique. It can't overlap with any of your on-premises network address spaces or Azure virtual network address space.
-
-## Before you begin
-
-Allocate a /28 CIDR block (for example, `10.100.0.0/28`) for the gateway subnet. A gateway subnet is required per CloudSimple service and is unique to the region in which it's created. The gateway subnet is used for Azure VMware Solution by CloudSimple edge network services. Its address space must be unique and must not overlap with any network that communicates with the CloudSimple environment, including on-premises networks and Azure virtual networks.
-
-Review [Networking Prerequisites](cloudsimple-network-checklist.md).
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Create the service
-
-1. Select **All services**.
-2. Search for **CloudSimple Service**.
-
- ![Search CloudSimple Service](media/create-cloudsimple-service-search.png)
-
-3. Select **CloudSimple Services**.
-4. Click **Add** to create a new service.
-
- ![Add CloudSimple Service](media/create-cloudsimple-service-add.png)
-
-5. Select the subscription where you want to create the CloudSimple service.
-6. Select the resource group for the service. To add a new resource group, click **Create New**.
-7. Enter a name to identify the service.
-8. Enter the CIDR for the service gateway. Specify a /28 subnet that doesn't overlap with any of your on-premises subnets, Azure subnets, or planned CloudSimple subnets. You can't change the CIDR after the service is created.
-
- ![Creating the CloudSimple service](media/create-cloudsimple-service.png)
-
-9. Click **OK**.
-
-The service is created and added to the list of services.
-
-## Provision nodes
-
-To set up pay-as-you-go capacity for a CloudSimple Private Cloud environment, first provision nodes in the Azure portal.
-
-1. Select **All services**.
-2. Search for **CloudSimple Nodes**.
-
- ![Search CloudSimple Nodes](media/create-cloudsimple-node-search.png)
-
-3. Select **CloudSimple Nodes**.
-4. Click **Add** to create nodes.
-
- ![Add CloudSimple Nodes](media/create-cloudsimple-node-add.png)
-
-5. Select the subscription where you want to provision CloudSimple nodes.
-6. Select the resource group for the nodes. To add a new resource group, click **Create New**.
-7. Enter the prefix to identify the nodes.
-8. Select the location for the node resources.
-9. Select the dedicated location to host the node resources.
-10. Select the [node type](cloudsimple-node.md).
-11. Select the number of nodes to provision.
-12. Select **Review + Create**.
-13. Review the settings. To modify any settings, click **Previous**.
-14. Select **Create**.
-
-## Next steps
-
-* [Create Private Cloud and configure environment](quickstart-create-private-cloud.md)
-* Learn more about [CloudSimple service](./cloudsimple-service.md)
vmware-cloudsimple Quickstart Create Private Cloud Vmware Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/quickstart-create-private-cloud-vmware-virtual-machine.md
- Title: Quickstart - Create an Azure VMware VM on a Private Cloud - Azure VMware Solution by CloudSimple
-description: Learn how to create a VMware virtual machine on CloudSimple Private Cloud. Access the CloudSimple portal from the Azure portal.
-- Previously updated : 08/16/2019 ------
-# Create VMware virtual machines on your Private Cloud
-
-To create virtual machines on your Private Cloud, begin by accessing the CloudSimple portal from the Azure portal.
-
-## Sign in to the Azure portal
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Access the CloudSimple portal
-
-1. Select **All services**.
-2. Search for **CloudSimple Services**.
-3. Select the CloudSimple service on which you want to create your Private Cloud.
-4. From the **Overview** page, click **Go to the CloudSimple portal** to open a new browser tab for the CloudSimple portal. If prompted, sign in with your Azure credentials.
-
- ![Launch CloudSimple portal](media/launch-cloudsimple-portal.png)
-
-## Launch the vCenter web UI
-
-You can now launch vCenter to set up virtual machines and policies.
-
-To access vCenter, start from the CloudSimple portal. On the Home page, under **Common Tasks**, click **Launch vSphere Client**. Select the Private Cloud and then click **Launch vSphere Client** on the Private Cloud.
-
- ![Launch vSphere Client](media/launch-vcenter-from-cloudsimple-portal.png)
-
-## Upload an ISO or vSphere template
-
- > [!WARNING]
- > For ISO upload, use the vSphere HTML5 client. Using a Flash client may result in an error.
-
-1. Obtain the ISO or vSphere template that you want to upload to vCenter to create a VM and have it available on your local system.
-2. In vCenter, click the **Disk** icon and select **vsanDatastore**. Click **Files** and then click **New Folder**.
- ![vCenter ISO](media/vciso00.png)
-
-3. Create a folder entitled 'ISOs and Templates'.
-
-4. Navigate to the ISOs folder in ISOs and Templates, and click **Upload Files**. Follow the on-screen instructions to upload the ISO.
-
-## Create a Virtual Machine in vCenter
-
-1. In vCenter, click the **Hosts and Clusters** icon.
-
-2. Right-click **Workload** and select **New Virtual Machine**.
- ![Screenshot that highlights the New Virtual Machine menu option.](media/vcvm01.png)
-
-3. Select **Create new virtual machine** and click **Next**.
- ![Screenshot that highlights the Create new virtual machine option.](media/vcvm02.png)
-
-4. Name the machine, select the **Workload VM's** location, and click **Next**.
- ![Screenshot that highlights the Workload VMs option.](media/vcvm03.png)
-
-5. Select the **Workload** compute resource and click **Next**.
- ![Screenshot that highlights the Workload compute resource.](media/vcvm04.png)
-
-6. Select **vsanDatastore** and click **Next**.
- ![Screenshot that highlights the vsanDatastore option.](media/vcvm05.png)
-
-7. Keep the default ESXi 6.5 compatibility selection and click **Next**.
- ![Screenshot that shows the selected ESXi 6.5 compatibility option.](media/vcvm06.png)
-
-8. Select the guest OS of the ISO for the VM that you are creating and click **Next**.
- ![Screenshot that shows how to select the guest OS of the ISO for the VM.](media/vcvm07.png)
-
-9. Select hard disk and network options. For New CD/DVD Drive, select **Datastore ISO file**. If you want to allow traffic from the Public IP address to this VM, select the network as **vm-1**.
- ![Screenshot that highlights where you select the Datastore ISO file.](media/vcvm08.png)
-
-10. A selection window opens. Select the file you previously uploaded to the ISOs and Templates folder and click **OK**.
- ![New VM](media/vcvm10.png)
-
-11. Review the settings and click **OK** to create the VM.
- ![Screenshot that shows where you review the settings.](media/vcvm11.png)
-
-The VM is now added to the Workload compute resources and is ready for use.
-![Screenshot that shows the VM that's been added to the Workload compute resources.](media/vcvm12.png)
-
-The basic setup is now complete. You can start using your Private Cloud similar to how you would use your on-premises VM infrastructure.
-
-The following sections contain optional information about setting up DNS and DHCP servers for Private Cloud workloads and modifying the default networking configuration.
-
-## Add Users and identity sources to vCenter (Optional)
-
-CloudSimple assigns a default vCenter user account with username `cloudowner@cloudsimple.local`. No additional account setup is required to get started. CloudSimple normally assigns administrators the privileges they need to perform normal operations. Set up your on-premises Active Directory or Azure AD as an [additional identity source](set-vcenter-identity.md) on your Private Cloud.
-
-## Create a DNS and DHCP server (Optional)
-
-Applications and workloads running in a Private Cloud environment require name resolution and DHCP services for lookup and IP address assignment. A proper DHCP and DNS infrastructure is required to provide these services. You can configure a virtual machine in vCenter to provide these services in your Private Cloud environment.
-
-Prerequisites
-
-* A distributed port group with VLAN configured
-
-* Route setup to on-premises or Internet-based DNS servers
-
-* Virtual machine template or ISO to create a virtual machine
-
-The following links provide guidance on setting up DHCP and DNS servers on Linux and Windows.
-
-#### Linux-based DNS server setup
-
-Linux offers various packages for setting up DNS servers. Here is a link to instructions for setting up an open-source BIND DNS server.
-
-[Example setup](https://www.digitalocean.com/community/tutorials/how-to-configure-bind-as-a-private-network-dns-server-on-centos-7)
-
-#### Windows-based setup
-
-These Microsoft topics describe how to set up a Windows server as a DNS server and as a DHCP server.
-
-[Windows Server as DNS Server](/windows-server/networking/dns/dns-top)
-
-[Windows Server as DHCP Server](/windows-server/networking/technologies/dhcp/dhcp-top)
-
-## Customize networking configuration (Optional)
-
-The Network pages in the CloudSimple portal allow you to specify the configuration for firewall tables and public IP addresses for VMs.
-
-### Allocate public IPs
-
-1. Navigate to **Network > Public IP** in the CloudSimple portal.
-2. Click **Allocate Public IP**.
-3. Enter a name to identify the IP address entry.
-4. Keep the default location.
-5. Use the slider to change the idle timeout if desired.
-6. Enter the local IP address for which you want to assign a public IP address.
-7. Enter an associated DNS name if desired.
-8. Click **Done**.
-
- ![Public IP](media/quick-create-pc-public-ip.png)
-
-The task of allocating the public IP address begins. You can check the status of the task on the **Activity > Tasks** page. When allocation is complete, the new entry is shown on the Public IPs page.
-
-The VM to which this IP address is mapped must be configured with the local address specified above. The procedure to configure an IP address is specific to the VM operating system; consult the documentation for your VM operating system for the correct procedure.
-
-#### Example
-
-For example, here are the details for Ubuntu 16.04.
-
-Add the static method to the inet address family configuration in the file /etc/network/interfaces. Change the address, netmask, and gateway values. For this example, we are using the eth0 interface, internal IP address 192.168.24.10, gateway address 192.168.24.1, and netmask 255.255.255.0. For your environment, the available subnet information is provided in the welcome email.
-
-```
-sudo vi /etc/network/interfaces
-```
-
-```
-auto eth0
-iface eth0 inet static
-address 192.168.24.10
-netmask 255.255.255.0
-gateway 192.168.24.1
-dns-nameservers 8.8.8.8
-dns-domain acme.com
-dns-search acme.com
-```
-Manually disable the interface.
-
-```
-sudo ifdown eth0
-```
-Manually enable the interface again.
-
-```
-sudo ifup eth0
-```
-
-By default, all incoming traffic from the Internet is **denied**. To open a port to incoming traffic, create a [firewall table](firewall.md).
-
-After configuring an internal IP address as the static IP address, verify that you can reach the Internet from within the VM.
-
-```
-ping 8.8.8.8
-```
-Also verify that you can reach the VM from the Internet using the public IP address.
-
-Ensure that any iptables rules on the VM aren't blocking port 80 inbound. You can list the ports in use with:
-
-```
-netstat -an | grep 80
-```
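-
-To inspect the iptables rules themselves, a quick sketch:
-
-```
-# List INPUT chain rules with line numbers to spot anything blocking port 80
-sudo iptables -L INPUT -n --line-numbers
-```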
-
-Start an HTTP server that listens on port 80.
-
-```
-python2.7 -m SimpleHTTPServer 80
-```
-
-or
-
-```
-python3 -m http.server 80
-```
-Start a browser on your desktop and point it to the public IP address on port 80 to browse the files on your VM.
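-
-To check from the command line instead of a browser, a minimal sketch; `<public-ip>` is a placeholder for the address you allocated:
-
-```
-curl -v http://<public-ip>/
-```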
-
-### Default CloudSimple firewall rules for public IP
-
-* VPN traffic: All traffic between (from/to) the VPN and all the workload networks and management network is allowed.
-* Private cloud internal traffic: All east-west traffic between (from/to) workload networks and the management network is allowed.
-* Internet traffic:
- * All incoming traffic from the Internet is denied to workload networks and the management network.
- * All outgoing traffic to the Internet from workload networks or the management network is allowed.
-
-You can also modify the way your traffic is secured, using the Firewall Rules feature. For more information, see [Set up firewall tables and rules](firewall.md).
-
-## Install solutions (Optional)
-
-You can install solutions on your CloudSimple Private Cloud to take full advantage of your Private Cloud vCenter environment. You can set up backup, disaster recovery, replication, and other functions to protect your virtual machines. Examples include VMware Site Recovery Manager (VMware SRM) and Veeam Backup & Replication.
-
-To install a solution, you must request additional privileges for a limited period. See [Escalate privileges](escalate-private-cloud-privileges.md).
-
-## Next steps
-
-* [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-* [Connect to on-premises network using Azure ExpressRoute](on-premises-connection.md)
-* [Set up VPN gateways on CloudSimple network](vpn-gateway.md)
vmware-cloudsimple Quickstart Create Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/quickstart-create-private-cloud.md
- Title: "Quickstart: Create a Private Cloud"-
-description: Learn how to create and configure a Private Cloud with Azure VMware Solutions by CloudSimple
-- Previously updated : 08/16/2019 -----
-# Quickstart - Configure a Private Cloud environment
-
-In this article, learn how to create a CloudSimple Private Cloud and set up your Private Cloud environment.
-
-## Before you begin
-
-Review [Networking Prerequisites](cloudsimple-network-checklist.md).
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Create a Private Cloud
-
-A Private Cloud is an isolated VMware stack that supports ESXi hosts, vCenter, vSAN, and NSX.
-
-Private Clouds are managed through the CloudSimple portal. Each Private Cloud has its own vCenter server in its own management domain. The stack runs on dedicated, isolated bare-metal hardware nodes.
-
-1. Select **All services**.
-2. Search for **CloudSimple Services**.
-3. Select the CloudSimple service on which you want to create your Private Cloud.
-4. From **Overview**, click **Create Private Cloud** to open a new browser tab for the CloudSimple portal. If prompted, sign in with your Azure sign-in credentials.
-
- ![Create Private Cloud from Azure](media/create-private-cloud-from-azure.png)
-
-5. In the CloudSimple portal, provide a name for your Private Cloud.
-6. Select the **Location** of your Private Cloud.
-7. Select **Node type**, consistent with what you provisioned on Azure.
-8. Specify **Node count**. At least three nodes are required to create a Private Cloud.
-
- ![Create Private Cloud - Basic info](media/create-private-cloud-basic-info.png)
-
-9. Click **Next: Advanced options**.
-10. Enter the CIDR range for vSphere/vSAN subnets. Make sure that the CIDR range doesn't overlap with any of your on-premises or other Azure subnets (virtual networks) or with the gateway subnet.
-
- **CIDR range options:** /24, /23, /22, or /21. A /24 CIDR range supports up to 26 nodes, a /23 CIDR range supports up to 58 nodes, and /22 and /21 CIDR ranges support 64 nodes (the maximum number of nodes in a Private Cloud). To learn more about VLANs and subnets, see [VLANs and subnets overview](cloudsimple-vlans-subnets.md).
-
- > [!IMPORTANT]
- > IP addresses in the vSphere/vSAN CIDR range are reserved for use by the Private Cloud infrastructure. Don't use IP addresses in this range on any virtual machine.
-
-11. Click **Next: Review and create**.
-12. Review the settings. If you need to change any settings, click **Previous**.
-13. Click **Create**.
-
-The Private Cloud provisioning process starts. It can take up to two hours for the Private Cloud to be provisioned.
-
-## Launch CloudSimple portal
-
-You can access the CloudSimple portal from the Azure portal. The CloudSimple portal is launched with your Azure sign-in credentials using single sign-on (SSO). Accessing the CloudSimple portal requires you to authorize the **CloudSimple Service Authorization** application. For more information on granting permissions, see [Consent to CloudSimple Service Authorization application](access-cloudsimple-portal.md#consent-to-cloudsimple-service-authorization-application).
-
-1. Select **All services**.
-2. Search for **CloudSimple Services**.
-3. Select the CloudSimple service on which you want to create your Private Cloud.
-4. From **Overview**, click **Go to the CloudSimple portal** to open a new browser tab for the CloudSimple portal. If prompted, sign in with your Azure sign-in credentials.
-
- ![Launch CloudSimple portal](media/launch-cloudsimple-portal.png)
-
-## Create Point-to-Site VPN
-
-A Point-to-Site VPN connection is the simplest way to connect to your Private Cloud from your computer. Use a Point-to-Site VPN connection if you're connecting to the Private Cloud remotely. For quick access to your Private Cloud, follow the steps below. Access to the CloudSimple region from your on-premises network can be established using a [Site-to-Site VPN](vpn-gateway.md) or [Azure ExpressRoute](on-premises-connection.md).
-
-### Create gateway
-
-1. Launch CloudSimple portal and select **Network**.
-2. Select **VPN Gateway**.
-3. Click **New VPN Gateway**.
-
- ![Create VPN gateway](media/create-vpn-gateway.png)
-
-4. For **Gateway configuration**, specify the following settings and click **Next**.
-
- * Select **Point-to-Site VPN** as the gateway type.
- * Enter a name to identify the gateway.
- * Select the Azure location where your CloudSimple service is deployed.
- * Specify the client subnet for the Point-to-Site gateway. DHCP addresses will be given from this subnet when you connect.
-
-5. For **Connection/User**, specify the following settings and click **Next**.
-
- * To automatically allow all current and future users to access the Private Cloud through this Point-to-Site gateway, select **Automatically add all users**. When you select this option, all users in the User list are automatically selected. You can override the automatic option by deselecting individual users in the list.
- * To select only individual users, click the check boxes in the User list.
-
-6. The VLANs/Subnets section allows you to specify management and user VLANs/subnets for the gateway and connections.
-
- * The **Automatically add** options set the global policy for this gateway. The settings apply to the current gateway. The settings can be overridden in the **Select** area.
- * Select **Add management VLANs/Subnets of Private Clouds**.
- * To add all user-defined VLANs/subnets, click **Add user-defined VLANs/Subnets**.
- * The **Select** settings override the global settings under **Automatically add**.
-
-7. Click **Next** to review the settings. Click the Edit icons to make any changes.
-8. Click **Create** to create the VPN gateway.
-
-### Connect to CloudSimple using Point-to-Site VPN
-
-A VPN client is needed to connect to CloudSimple from your computer. Download the [OpenVPN client](https://openvpn.net/community-downloads/) for Windows or [Viscosity](https://www.sparklabs.com/viscosity/download/) for macOS and OS X.
-
-1. Launch CloudSimple portal and select **Network**.
-2. Select **VPN Gateway**.
-3. From the list of VPN gateways, click the Point-to-Site VPN gateway.
-4. Select **Users**.
-5. Click **Download my VPN configuration**.
-
- ![Download VPN configuration](media/download-p2s-vpn-configuration.png)
-
-6. Import the configuration on your VPN client.
-
- * Instructions for [importing configuration on Windows client](https://openvpn.net/vpn-server-resources/connecting-to-access-server-with-windows/#openvpn-open-source-openvpn-gui-program)
- * Instructions for [importing configuration on macOS or OS X](https://www.sparklabs.com/support/kb/article/getting-started-with-viscosity-mac/#creating-your-first-connection)
-
-7. Connect to CloudSimple.
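-
-As an alternative to the GUI clients above, the command-line OpenVPN client on Linux or macOS can use the same profile. A minimal sketch; the .ovpn file name is a placeholder for the file you extracted from the downloaded configuration:
-
-```
-sudo openvpn --config cloudsimple-client.ovpn
-```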
-
-## Create a VLAN for your workload VMs
-
-After creating a Private Cloud, create a VLAN where you'll deploy your workload/application VMs.
-
-1. In the CloudSimple portal, select **Network**.
-2. Click **VLAN/Subnets**.
-3. Click **Create VLAN/Subnet**.
-
- ![Create VLAN/Subnet](media/create-new-vlan-subnet.png)
-
-4. Select the **Private Cloud** for the new VLAN/subnet.
-5. Select a VLAN ID from the list.
-6. Enter a subnet name to identify the subnet.
-7. Specify the subnet CIDR range and mask. This range must not overlap with any existing subnets.
-8. Click **Submit**.
-
- ![Create VLAN/Subnet details](media/create-new-vlan-subnet-details.png)
-
-The VLAN/subnet will be created. You can now use this VLAN ID to create a distributed port group on your Private Cloud vCenter.
-
-## Connect your environment to an Azure virtual network
-
-CloudSimple provides you with an ExpressRoute circuit for your Private Cloud. You can connect your virtual network on Azure to the ExpressRoute circuit. For full details on setting up the connection, follow the steps in [Azure Virtual Network Connection using ExpressRoute](./cloudsimple-azure-network-connection.md).
-
-## Sign in to vCenter
-
-You can now sign in to vCenter to set up virtual machines and policies.
-
-1. To access vCenter, start from the CloudSimple portal. On the Home page, under **Common Tasks**, click **Launch vSphere Client**. Select the Private Cloud and then click **Launch vSphere Client** on the Private Cloud.
-
- ![Launch vSphere Client](media/launch-vcenter-from-cloudsimple-portal.png)
-
-2. Select your preferred vSphere client to access vCenter and sign in with your username and password. The defaults are:
- * User name: `CloudOwner@cloudsimple.local`
- * Password: `CloudSimple123!`
-
-The vCenter screens in the next procedures are from the vSphere (HTML5) client.
-
-## Change your vCenter password
-
-CloudSimple recommends that you change your password the first time you sign in to vCenter.
-The password you set must meet the following requirements:
-
-* Maximum lifetime: Password must be changed every 365 days
-* Restrict reuse: Users can't reuse any of the previous five passwords
-* Length: 8 - 20 characters
-* Special character: At least one special character
-* Alphabetic characters: At least one uppercase character, A-Z, and at least one lowercase character, a-z
-* Numbers: At least one numeric character, 0-9
-* Maximum identical adjacent characters: Three
-
- Example: CC or CCC is acceptable as a part of the password, but CCCC isn't.
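-
-As an unofficial convenience (not a CloudSimple or VMware tool), you can pre-check a candidate password against these rules locally before changing it in vCenter. A minimal bash sketch; `pw` holds a placeholder value:
-
-```
-pw='ChangeMe123!'   # placeholder candidate password
-[ ${#pw} -ge 8 ] && [ ${#pw} -le 20 ] || echo "length must be 8-20 characters"
-printf '%s' "$pw" | grep -q '[A-Z]'        || echo "needs an uppercase letter"
-printf '%s' "$pw" | grep -q '[a-z]'        || echo "needs a lowercase letter"
-printf '%s' "$pw" | grep -q '[0-9]'        || echo "needs a digit"
-printf '%s' "$pw" | grep -q '[^A-Za-z0-9]' || echo "needs a special character"
-printf '%s' "$pw" | grep -q '\(.\)\1\1\1'  && echo "has 4+ identical adjacent characters"
-```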
-
-If you set a password that doesn't meet the requirements:
-
-* If you use the vSphere Flash client, it reports an error.
-* If you use the HTML5 client, it doesn't report an error. The client doesn't accept the change, and the old password continues to work.
-
-## Access NSX manager
-
-NSX Manager is deployed with the following default credentials:
-
-* User name: **admin**
-* Password: **CloudSimple123!**
-
-You can find the fully qualified domain name (FQDN) and IP address of NSX Manager in the CloudSimple portal.
-
-1. Launch CloudSimple portal and select **Resources**.
-2. Click the Private Cloud that you want to use.
-3. Select **vSphere management network**.
-4. Use the FQDN or IP address of **NSX Manager** and connect using a web browser.
-
- ![Find NSX Manager FQDN](media/private-cloud-nsx-manager-fqdn.png)
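-
-Optionally, you can verify that NSX Manager is reachable from your workstation before opening the web UI. A minimal sketch assuming the NSX-T REST API; the FQDN is a placeholder, `-k` skips certificate validation, and you're prompted for the password:
-
-```
-curl -k -u admin https://nsx-manager.example.cloudsimple.io/api/v1/node
-```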
-
-## Create a port group
-
-To create a distributed port group in vSphere:
-
-1. Follow the instructions in "Add a distributed port group" in [vSphere Networking Guide](https://docs.vmware.com/en/VMware-vSphere/6.5/vsphere-esxi-vcenter-server-65-networking-guide.pdf).
-2. When setting up the distributed port group, provide the VLAN ID created in [Create a VLAN for your Workload VMs](#create-a-vlan-for-your-workload-vms).
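-
-If you prefer a CLI over the vSphere UI, the open-source govc tool can also create the distributed port group. A minimal sketch, assuming govc is installed and its `GOVC_URL`, `GOVC_USERNAME`, and `GOVC_PASSWORD` environment variables point at your Private Cloud vCenter; the switch name, VLAN ID, and port group name are placeholders:
-
-```
-# Create a distributed port group tagged with the workload VLAN ID
-govc dvs.portgroup.add -dvs DSwitch -vlan 100 Workload-PG
-```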
-
-## Next steps
-
-* [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-* [Connect to on-premises network using Azure ExpressRoute](on-premises-connection.md)
-* [Set up Site-to-Site VPN from on-premises](vpn-gateway.md)
vmware-cloudsimple Quickstart Create Vmware Virtual Machine https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/quickstart-create-vmware-virtual-machine.md
- Title: "Quickstart: Consume VMware VMs on Azure"-
-description: Learn how to configure and consume VMware VMs from Azure portal using Azure VMware Solution by CloudSimple
-- Previously updated : 08/14/2019 -----
-# Quickstart - Consume VMware VMs on Azure
-
-To create a virtual machine in the Azure portal, use virtual machine templates that your CloudSimple administrator has enabled for your subscription. The VM templates are found in the VMware infrastructure.
-
-## CloudSimple VM creation on Azure requires a VM template
-
-First, create a virtual machine on your Private Cloud from the vCenter UI. Then, to create a template from it, follow the instructions in [Clone a Virtual Machine to a Template in the vSphere Web Client](https://docs.vmware.com/en/VMware-vSphere/6.7/com.vmware.vsphere.vm_admin.doc/GUID-FE6DE4DF-FAD0-4BB0-A1FD-AFE9A40F4BFE.html). Store the VM template on your Private Cloud vCenter.
-
-## Create a virtual machine in the Azure portal
-
-1. Select **All services**.
-
-2. Search for **CloudSimple Virtual Machines**.
-
-3. Click **Add**.
-
- ![Create CloudSimple virtual machine](media/create-cloudsimple-virtual-machine.png)
-
-4. Enter basic information and click **Next:Size**.
-
- ![Create CloudSimple virtual machine - basics](media/create-cloudsimple-virtual-machine-basic-info.png)
-
- | Field | Description |
- | | - |
- | Subscription | Azure subscription associated with your Private Cloud. |
- | Resource Group | Resource group to which the VM will be assigned. You can select an existing group or create a new one. |
- | Name | Name to identify the VM. |
- | Location | Azure region in which this VM is hosted. |
- | Private Cloud | CloudSimple Private Cloud where you want to create the virtual machine. |
- | Resource Pool | Mapped resource pool for the VM. Select from the available resource pools. |
- | vSphere Template | vSphere template for the VM. |
- | User name | User name of the VM administrator (for Windows templates).|
- | Password | Password for the VM administrator (for Windows templates). |
- | Confirm password | Confirm the password. |
-
-5. Select the number of cores and memory capacity for the VM and click **Next:Configurations**. Select the checkbox if you want to expose full CPU virtualization to the guest operating system. Applications that require hardware virtualization can run on virtual machines without binary translation or paravirtualization. For more information, see the VMware article <a href="https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-2A98801C-68E8-47AF-99ED-00C63E4857F6.html" target="_blank">Expose VMware Hardware Assisted Virtualization</a>.
-
- ![Create CloudSimple virtual machine - size](media/create-cloudsimple-virtual-machine-size.png)
-
-6. Configure network interfaces and disks as described in the following tables and click **Review + create**.
-
- ![Create CloudSimple virtual machine - configurations](media/create-cloudsimple-virtual-machine-configurations.png)
-
- For network interfaces, click **Add network interface** and configure the following settings.
-
- | Control | Description |
- | | - |
- | Name | Enter a name to identify the interface. |
- | Network | Select from the list of configured distributed port groups in your Private Cloud vSphere. |
- | Adapter | Select a vSphere adaptor from the list of available types configured for the VM. For more information, see the VMware knowledge base article <a href="https://kb.vmware.com/s/article/1001805" target="_blank">Choosing a network adapter for your virtual machine</a>. |
- | Power on at Boot | Choose whether to enable the NIC hardware when the VM is booted. The default is **Enable**. |
-
- For disks, click **Add disk** and configure the following settings.
-
- | Item | Description |
- | | - |
- | Name | Enter a name to identify the disk. |
- | Size | Select one of the available sizes. |
- | SCSI Controller | Select a SCSI controller for the disk. |
- | Mode | Determines how the disk participates in snapshots. Choose one of these options: <br> - Independent persistent: All data written to the disk is written permanently.<br> - Independent non-persistent: Changes written to the disk are discarded when you power off or reset the virtual machine. Independent non-persistent mode allows you to always restart the VM in the same state. For more information, see the <a href="https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.vsphere.vm_admin.doc/GUID-8B6174E6-36A8-42DA-ACF7-0DA4D8C5B084.html" target="_blank">VMware documentation</a>.
-
-7. When validation completes, review the settings and click **Create**. To make any changes, click the tabs at the top.
-
- ![Create CloudSimple virtual machine - review](media/create-cloudsimple-virtual-machine-review.png)
-
-## Next steps
-
-* [View list of CloudSimple virtual machines](azure-create-vm.md#view-list-of-cloudsimple-virtual-machines)
-* [Manage CloudSimple virtual machine from Azure](azure-manage-vm.md)
vmware-cloudsimple Set Up Vpn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/set-up-vpn.md
- Title: Azure VMware Solution by CloudSimple - Configure VPN between on-premises and Private Cloud
-description: Describes how to configure a Site-to-Site or Point-to-Site VPN connection between your on-premises network and your CloudSimple Private Cloud
-- Previously updated : 08/14/2019 ------
-# Configure a VPN connection to your CloudSimple Private Cloud
-
-VPN gateways allow you to connect to the CloudSimple network from your on-premises network and from a client computer remotely. In this article, you can find information on setting up VPN gateways from the CloudSimple portal. A VPN connection between your on-premises network and your CloudSimple network provides access to the vCenter and workloads on your Private Cloud. CloudSimple supports both Point-to-Site VPN and Site-to-Site VPN gateways.
-
-## VPN gateway types
-
-* **Point-to-Site VPN** connection is the simplest way to connect to your Private Cloud from your computer. Use Point-to-Site VPN connectivity for connecting to the Private Cloud remotely.
-* **Site-to-Site VPN** connection allows you to set up your Private Cloud workloads to access on-premises services. You can also use on-premises Active Directory as an identity source for authenticating to your Private Cloud vCenter. Currently, only the **Policy-Based VPN** type is supported.
-
-In a region, you can create one Site-to-Site VPN gateway and one Point-to-Site VPN gateway.
-
-## Point-to-Site VPN
-
-To create a Point-to-Site VPN gateway, see [Create Point-to-Site VPN gateway](vpn-gateway.md#create-point-to-site-vpn-gateway).
-
-### Connect to CloudSimple using Point-to-Site VPN
-
-A VPN client is needed to connect to CloudSimple from your computer. Download the [OpenVPN client](https://openvpn.net/community-downloads/) for Windows or [Viscosity](https://www.sparklabs.com/viscosity/download/) for macOS and OS X.
-
-1. Launch CloudSimple portal and select **Network**.
-2. Select **VPN Gateway**.
-3. From the list of VPN gateways, click the Point-to-Site VPN gateway.
-4. Select **Users**.
-5. Click **Download my VPN configuration**.
-
- ![Download VPN configuration](media/download-p2s-vpn-configuration.png)
-
-6. Import the configuration on your VPN client.
-
- * Instructions for [importing configuration on Windows client](https://openvpn.net/vpn-server-resources/connecting-to-access-server-with-windows/#openvpn-open-source-openvpn-gui-program)
- * Instructions for [importing configuration on macOS or OS X](https://www.sparklabs.com/support/kb/article/getting-started-with-viscosity-mac/#creating-your-first-connection)
-
-7. Connect to the CloudSimple VPN gateway.
-
-The example below shows how to import a connection using the **Viscosity** client.
-
-#### Import connection on Viscosity client
-
-1. Extract the contents of the VPN configuration from the downloaded .zip file.
-
-2. Open Viscosity on your computer.
-
-3. Click the **+** icon and select **Import connection** > **From File**.
-
- ![Import VPN configuration from file](media/import-p2s-vpn-config.png)
-
-4. Select the OpenVPN configuration file (.ovpn) for the protocol you want to use and click **Open**.
-
- ![Screenshot that highlights the OpenVPN configuration files you can select.](media/import-p2s-vpn-config-choose-ovpn.png)
-
-The connection now appears in the Viscosity menu.
-
-#### Connect to the VPN
-
-To connect to VPN using the Viscosity OpenVPN client, select the connection from the menu. The menu icon updates to indicate that the connection is established.
-
-![Screenshot that shows the CloudSimple VPN connectivity status.](media/vis03.png)
-
-### Connecting to Multiple Private Clouds
-
-A Point-to-Site VPN connection resolves the DNS names of the first Private Cloud that you create. When you want to access other Private Clouds, you must update the DNS server on your VPN client.
-
-1. Launch [CloudSimple portal](access-cloudsimple-portal.md).
-
-2. Navigate to **Resources** > **Private Clouds** and select the Private Cloud you want to connect to.
-
-3. On the **Summary** page of the Private Cloud, copy the Private Cloud DNS server IP address under **Basic Info**.
-
- ![Private Cloud DNS servers](media/private-cloud-dns-server.png)
-
-4. Right-click the Viscosity icon in your computer's system tray and select **Preferences**.
-
- ![VPN](media/vis00.png)
-
-5. Select the CloudSimple VPN connection.
-
- ![VPN Connection](media/viscosity-client.png)
-
-6. Click **Edit** to change the connection properties.
-
- ![Edit VPN Connection](media/viscosity-edit-connection.png)
-
-7. Click the **Networking** tab and enter the Private Cloud DNS server IP addresses separated by a comma or space and the domain as ```cloudsimple.io```. Select **Ignore DNS settings sent by VPN server**.
-
- ![VPN Networking](media/viscosity-edit-connection-networking.png)
-
-> [!IMPORTANT]
-> To connect to your first Private Cloud, remove these settings and connect to the VPN server.
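-
-After updating the connection, you can confirm that the additional Private Cloud's DNS server is resolving names. A quick sketch; the server IP and hostname are placeholders for the values shown on the Private Cloud **Summary** page:
-
-```
-# Query the Private Cloud DNS server directly
-dig @192.168.10.5 vcenter.cloudsimple.io +short
-```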
-
-## Site-to-Site VPN
-
-To create a Site-to-Site VPN gateway, see [Create Site-to-Site VPN gateway](vpn-gateway.md#set-up-a-site-to-site-vpn-gateway). A Site-to-Site VPN connection from your on-premises network to your Private Cloud provides these benefits:
-
-* Accessibility of your Private Cloud vCenter from any workstation in your on-premises network
-* Use of your on-premises Active Directory as a vCenter identity source
-* Convenient transfer of VM templates, ISOs, and other files from your on-premises resources to your Private Cloud vCenter
-* Accessibility of workloads running on your Private Cloud from your on-premises network
-
-To set up your on-premises VPN gateway in high-availability mode, see [Configure a high availability VPN connection](high-availability-vpn-connection.md).
-
-> [!IMPORTANT]
-> 1. Set TCP MSS clamping to 1200 on your VPN device. If your VPN device doesn't support MSS clamping, you can set the MTU on the tunnel interface to 1240 bytes instead.
-> 2. After the Site-to-Site VPN is set up, forward the DNS requests for *.cloudsimple.io to the Private Cloud DNS servers. Follow the instructions in [On-Premises DNS Setup](on-premises-dns-setup.md).
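-
-If your on-premises VPN device is Linux-based, a minimal sketch of both settings follows; `tun0` is a placeholder for your tunnel interface, and the exact commands depend on your device:
-
-```
-# Clamp TCP MSS to 1200 for traffic traversing the tunnel
-sudo iptables -t mangle -A FORWARD -o tun0 -p tcp \
-  --tcp-flags SYN,RST SYN -j TCPMSS --set-mss 1200
-# Alternatively, set the tunnel interface MTU to 1240 bytes
-sudo ip link set dev tun0 mtu 1240
-```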
vmware-cloudsimple Set Vcenter Identity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/set-vcenter-identity.md
- Title: Azure VMware Solution by CloudSimple - Set up vCenter identity sources on Private Cloud
-description: Describes how to set up your Private Cloud vCenter to authenticate with Active Directory for VMware administrators to access vCenter
-- Previously updated : 08/15/2019 ------
-# Set up vCenter identity sources to use Active Directory
-
-## About VMware vCenter identity sources
-
-VMware vCenter supports different identity sources for authentication of users who access vCenter. Your CloudSimple Private Cloud vCenter can be set up to authenticate with Active Directory for your VMware administrators to access vCenter. When the setup is complete, the **cloudowner** user can add users from the identity source to vCenter.
-
-You can set up your Active Directory domain and domain controllers in any of the following ways:
-
-* Active Directory domain and domain controllers running on-premises
-* Active Directory domain and domain controllers running on Azure as virtual machines in your Azure subscription
-* New Active Directory domain and domain controllers running in your Private Cloud
-* Azure Active Directory service
-
-This guide explains the tasks required to set up an Active Directory domain and domain controllers running either on-premises or as virtual machines in your subscriptions. If you would like to use Azure AD as the identity source, see [Use Azure AD as an identity provider for vCenter on CloudSimple Private Cloud](azure-ad.md) for detailed instructions on setting up the identity source.
-
-Before [adding an identity source](#add-an-identity-source-on-vcenter), temporarily [escalate your vCenter privileges](escalate-private-cloud-privileges.md).
-
-> [!CAUTION]
-> New users must be added only to *Cloud-Owner-Group*, *Cloud-Global-Cluster-Admin-Group*, *Cloud-Global-Storage-Admin-Group*, *Cloud-Global-Network-Admin-Group*, or *Cloud-Global-VM-Admin-Group*. Users added to the *Administrators* group are removed automatically. Only service accounts should be added to the *Administrators* group, and service accounts must not be used to sign in to the vSphere web UI.
--
-## Identity source options
-
-* [Add on-premises Active Directory as a single sign-on identity source](#add-on-premises-active-directory-as-a-single-sign-on-identity-source)
-* [Set Up New Active Directory on a Private Cloud](#set-up-new-active-directory-on-a-private-cloud)
-* [Set Up Active Directory on Azure](#set-up-active-directory-on-azure)
-
-> [!IMPORTANT]
-> **Active Directory (Windows Integrated Authentication) is not supported.** Only the Active Directory over LDAP option is supported as an identity source.
-
-## Add On-Premises Active Directory as a Single Sign-On Identity Source
-
-To set up your on-premises Active Directory as a Single Sign-On identity source, you need:
-
-* [Site-to-Site VPN connection](vpn-gateway.md#set-up-a-site-to-site-vpn-gateway) from your on-premises datacenter to your Private Cloud.
-* On-premises DNS server IP added to vCenter and Platform Services Controller (PSC).
-
-Use the information in the following table when setting up your Active Directory domain.
-
-| **Option** | **Description** |
-||--|
-| **Name** | Name of the identity source. |
-| **Base DN for users** | Base distinguished name for users. |
-| **Domain name** | FQDN of the domain, for example, example.com. Do not provide an IP address in this text box. |
-| **Domain alias** | The domain NetBIOS name. Add the NetBIOS name of the Active Directory domain as an alias of the identity source if you are using SSPI authentications. |
-| **Base DN for groups** | The base distinguished name for groups. |
-| **Primary Server URL** | Primary domain controller LDAP server for the domain.<br><br>Use the format `ldap://hostname:port` or `ldaps://hostname:port`. The port is typically 389 for LDAP connections and 636 for LDAPS connections. For Active Directory multi-domain controller deployments, the port is typically 3268 for LDAP and 3269 for LDAPS.<br><br>A certificate that establishes trust for the LDAPS endpoint of the Active Directory server is required when you use `ldaps://` in the primary or secondary LDAP URL. |
-| **Secondary server URL** | Address of a secondary domain controller LDAP server that is used for failover. |
-| **Choose certificate** | If you want to use LDAPS with your Active Directory LDAP Server or OpenLDAP Server identity source, a Choose certificate button appears after you type `ldaps://` in the URL text box. A secondary URL is not required. |
-| **Username** | ID of a user in the domain who has a minimum of read-only access to Base DN for users and groups. |
-| **Password** | Password of the user who is specified by Username. |
-
-When you have the information in the previous table, you can add your on-premises Active Directory as a Single Sign-On identity source on vCenter.
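-
-If you plan to use `ldaps://`, you can retrieve the certificate to upload with the **Choose certificate** option directly from the domain controller. A minimal sketch; the DC FQDN is a placeholder:
-
-```
-# Fetch the LDAPS certificate presented by the domain controller
-openssl s_client -connect dc01.example.com:636 -showcerts </dev/null \
-  | openssl x509 -outform PEM > dc01-ldaps.pem
-```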
-
-> [!TIP]
-> You'll find more information on Single Sign-On identity sources on the [VMware documentation page](https://docs.vmware.com/en/VMware-vSphere/6.5/com.vmware.psc.doc/GUID-B23B1360-8838-4FF2-B074-71643C4CB040.html).
-
-## Set Up new Active Directory on a Private Cloud
-
-You can set up a new Active Directory domain on your Private Cloud and use it as an identity source for Single Sign-On. The Active Directory domain can be a part of an existing Active Directory forest or can be set up as an independent forest.
-
-### New Active Directory forest and domain
-
-To set up a new Active Directory forest and domain, you need:
-
-* One or more virtual machines running Microsoft Windows Server to use as domain controllers for the new Active Directory forest and domain.
-* One or more virtual machines running DNS service for name resolution.
-
-See [Install a New Windows Server 2012 Active Directory Forest](/windows-server/identity/ad-ds/deploy/install-a-new-windows-server-2012-active-directory-forest--level-200-) for detailed steps.
-
-> [!TIP]
-> For high availability of services, we recommend setting up multiple domain controllers and DNS servers.
-
-After setting up the Active Directory forest and domain, you can [add an identity source on vCenter](#add-an-identity-source-on-vcenter) for your new Active Directory.
-
-### New Active Directory domain in an existing Active Directory forest
-
-To set up a new Active Directory domain in an existing Active Directory forest, you need:
-
-* Site-to-Site VPN connection to your Active Directory forest location.
-* DNS Server to resolve the name of your existing Active Directory forest.
-
-See [Install a new Windows Server 2012 Active Directory child or tree domain](/windows-server/identity/ad-ds/deploy/install-a-new-windows-server-2012-active-directory-child-or-tree-domain--level-200-) for detailed steps.
-
-After setting up the Active Directory domain, you can [add an identity source on vCenter](#add-an-identity-source-on-vcenter) for your new Active Directory.
-
-## Set up Active Directory on Azure
-
-Active Directory running on Azure is similar to Active Directory running on-premises. To set up Active Directory running on Azure as a Single Sign-On identity source on vCenter, the vCenter server and PSC must have network connectivity to the Azure virtual network where Active Directory services are running. You can establish this connectivity using an [Azure Virtual Network Connection using ExpressRoute](azure-expressroute-connection.md) from the Azure virtual network where Active Directory services are running to your CloudSimple Private Cloud.
-
-After the network connection is established, follow the steps in [Add On-Premises Active Directory as a Single Sign-On Identity Source](#add-on-premises-active-directory-as-a-single-sign-on-identity-source) to add it as an Identity Source.
-
-## Add an identity source on vCenter
-
-1. [Escalate privileges](escalate-private-cloud-privileges.md) on your Private Cloud.
-
-2. Sign in to the vCenter for your Private Cloud.
-
-3. Select **Home > Administration**.
-
- ![Administration](media/OnPremAD01.png)
-
-4. Select **Single Sign On > Configuration**.
-
- ![Single Sign On](media/OnPremAD02.png)
-
-5. Open the **Identity Sources** tab and click **+** to add a new identity source.
-
- ![Identity Sources](media/OnPremAD03.png)
-
-6. Select **Active Directory as an LDAP Server** and click **Next**.
-
- ![Screenshot that highlights the Active Directory as an LDAP Server option.](media/OnPremAD04.png)
-
-7. Specify the identity source parameters for your environment and click **Next**.
-
- ![Active Directory](media/OnPremAD05.png)
-
-8. Review the settings and click **Finish**.
vmware-cloudsimple Shrink Private Cloud https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/shrink-private-cloud.md
- Title: Shrink Azure VMware Solution by CloudSimple Private Cloud
-description: Learn how to dynamically shrink a Private Cloud in CloudSimple by removing a node from an existing vSphere cluster or removing an entire cluster.
-- Previously updated : 07/01/2019 ------
-# Shrink a CloudSimple Private Cloud
-
-CloudSimple provides the flexibility to dynamically shrink a Private Cloud. A Private Cloud consists of one or more vSphere clusters. Each cluster can have 3 to 16 nodes. When shrinking a Private Cloud, you remove a node from the existing cluster or delete an entire cluster.
-
-## Before you begin
-
-The following conditions must be met to shrink a Private Cloud. The management cluster (the first cluster, created when the Private Cloud was created) can't be deleted.
-
-* A vSphere cluster must have at least three nodes. A cluster with only three nodes can't be shrunk.
-* The total storage consumed must not exceed the total capacity of the cluster after the shrink.
-* Check whether any Distributed Resource Scheduler (DRS) rules prevent vMotion of a virtual machine; one way to check is shown in the sketch after this list. If rules are present, disable or delete them. DRS rules include virtual machine to host affinity rules.
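-
-One way to check for DRS rules from the command line is the open-source govc CLI, assuming it's configured against your Private Cloud vCenter; the cluster name is a placeholder:
-
-```
-# List DRS rules (including VM-host affinity rules) on the cluster
-govc cluster.rule.ls -cluster Cluster-1
-```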
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Shrink a Private Cloud
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md).
-
-2. Open the **Resources** page.
-
-3. Click the Private Cloud that you want to shrink.
-
-4. On the summary page, click **Shrink**.
-
- ![Shrink private cloud](media/shrink-private-cloud.png)
-
-5. Select the cluster that you want to shrink or delete.
-
- ![Shrink private cloud - select cluster](media/shrink-private-cloud-select-cluster.png)
-
-6. Select **Remove one node** or **Delete the whole cluster**.
-
-7. Verify the cluster capacity.
-
-8. Click **Submit** to shrink the Private Cloud.
-
-The shrink of the Private Cloud starts. You can monitor the progress on the **Tasks** page. The shrink process can take a few hours, depending on the amount of data that needs to be resynced on vSAN.
-
-> [!NOTE]
-> 1. If you shrink a private cloud by deleting the last or only cluster in the datacenter, the datacenter won't be deleted.
-> 2. If a DRS rule violation occurs, the node won't be removed from the cluster, and the task description shows that removing a node would violate DRS rules on the cluster.
--
-## Next steps
-
-* [Consume VMware VMs on Azure](quickstart-create-vmware-virtual-machine.md)
-* Learn more about [Private Clouds](cloudsimple-private-cloud.md)
vmware-cloudsimple Update Plan Sept 2020 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/update-plan-sept-2020.md
- Title: Azure VMware Solution by CloudSimple September 2020 update
-description: In this article, learn about what to expect during this maintenance operation and changes to your private cloud.
-- Previously updated : 09/3/2020 -----
-# Azure VMware Solution by CloudSimple September 2020 update
-
-An important update to the Azure VMware Solution service will be performed in September. An email notification, sent as part of the maintenance, will include the timeline of the maintenance. In this article, you learn what to expect during this maintenance operation and the changes to your private cloud.
-
-> [!NOTE]
-> This is a non-disruptive upgrade. During the upgrade, you may see one of the redundant components go down.
-
-## VMware infrastructure upgrade
-
-The VMware infrastructure of your private cloud will be updated to a newer version. This includes updates to the vCenter, ESXi, NSX-T, and Hybrid Cloud Extension (HCX, if deployed) components of your private cloud.
-
-During the upgrade, a new node will be added to your private cloud before a node is placed in maintenance mode for the upgrade operation. This ensures that the capacity and availability of your private cloud are maintained during the upgrade process. During the upgrade of VMware components, you may see alarms displayed on your vCenter web UI. The alarms are a part of the maintenance operations performed by the service operations team.
-
-**Component versions**
-
-* ESXi 6.7U3
-* vCenter 6.7U3
-* vSAN 6.7
-* NSX Data Center 2.5.1
-* HCX 3.5.2
-## Datacenter updates
-
-This update includes updates to the datacenter infrastructure. Non-disruptive updates will be performed during the maintenance period, but you will notice reduced redundancy during the update process. Alerts related to link availability may be generated for your private cloud VMware infrastructure, your ExpressRoute circuits, Global Reach connections, and any Site-to-Site VPN devices. This is normal during the update, because the components are rebooted as a part of the update.
-* If a Site-to-Site VPN is deployed as a single instance (non-HA), you may have to re-establish the VPN connection.
-* If you use a Point-to-Site VPN connection, you will have to re-establish the VPN connection.
-## Post update
-
-Once the updates are complete, you should see newer versions of VMware components. If you notice any issues or have any questions, contact our support team by opening a [support ticket](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
vmware-cloudsimple Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/users.md
- Title: View Azure VMware CloudSimple portal users - Azure VMware Solution by CloudSimple
-description: Describes how to view the list of users who have access to the CloudSimple portal through the Azure portal
-- Previously updated : 08/14/2019 ------
-# View the list of CloudSimple portal users
-
-Users are added to the user list when they first access the CloudSimple portal. To view the list of users who have access to the CloudSimple portal through Azure, [access the CloudSimple portal](access-cloudsimple-portal.md), select **Account** on the side menu, and then select **Users** in the CloudSimple portal.
-
-* To display the user details, including the Azure subscription, tenant, and user IDs, click an entry on the **Users** page.
-
-* To view an audit log of activity for a user, select the **Audit log** tab.
-* To lock or unlock a user account, click the **Locked** toggle when displaying the user details. When the account is unlocked, the user can access the CloudSimple portal. When the account is locked, access to the portal is blocked.
vmware-cloudsimple Vcenter Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/vcenter-access.md
- Title: Azure VMware Solution by CloudSimple - Access vSphere client
-description: Describes how to access vCenter of your Private Cloud.
-- Previously updated : 08/30/2019 ------
-# Access your Private Cloud vCenter portal
-
-You can launch your Private Cloud vCenter portal from the Azure portal or the CloudSimple portal. The vCenter portal allows you to manage the VMware infrastructure on your Private Cloud.
-
-## Before you begin
-
-A network connection must be established and DNS name resolution must be enabled to access the vCenter portal. You can establish a network connection to your Private Cloud using either of the following options.
-
-* [Connect from on-premises to CloudSimple using ExpressRoute](on-premises-connection.md)
-* [Configure a VPN connection to your CloudSimple Private Cloud](set-up-vpn.md)
-
-To set up DNS name resolution of your Private Cloud VMware infrastructure components, see [Configure DNS for name resolution for Private Cloud vCenter access from on-premises workstations](on-premises-dns-setup.md).
-
-## Sign in to Azure
-
-Sign in to the Azure portal at [https://portal.azure.com](https://portal.azure.com).
-
-## Access vCenter from Azure portal
-
-You can launch the vCenter portal of your Private Cloud from the Azure portal.
-
-1. Select **All services**.
-
-2. Search for **CloudSimple Services**.
-
-3. Select the CloudSimple service of your Private Cloud to which you want to connect.
-
-4. On the **Overview** page, click **View VMware Private Clouds**.
-
- ![CloudSimple service overview](media/cloudsimple-service-overview.png)
-
-5. Select the Private Cloud from the list of Private Clouds and click **Launch vSphere Client**.
-
- ![Launch vSphere Client](media/cloudsimple-service-launch-vsphere-client.png)
-
-## Access vCenter from CloudSimple portal
-
-You can launch the vCenter portal of your Private Cloud from the CloudSimple portal.
-
-1. Access your [CloudSimple portal](access-cloudsimple-portal.md).
-
-2. From **Resources**, select the Private Cloud that you want to access and click **Launch vSphere Client**.
-
- ![Launch vSphere Client - Resources](media/cloudsimple-portal-resources-launch-vcenter.png)
-
-3. You can also launch the vCenter portal from the summary screen of your Private Cloud.
-
- ![Launch vSphere Client - Summary](media/cloudsimple-resources-summary-launch-vcenter.png)
-
-## Next steps
-
-* [Create and manage VLANs/subnets for your Private Clouds](create-vlan-subnet.md)
-* [CloudSimple Private Cloud permission model of VMware vCenter](learn-private-cloud-permissions.md)
vmware-cloudsimple Virtual Network Connection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/virtual-network-connection.md
- Title: Connect Azure virtual network to CloudSimple using ExpressRoute - Azure VMware Solution by CloudSimple
-description: Describes how to obtain peering information for a connection between the Azure virtual network and your CloudSimple environment
-- Previously updated : 08/14/2019 ------
-# Connect Azure virtual network to CloudSimple using ExpressRoute
-
-You can extend your Private Cloud network to your Azure virtual network and Azure resources. An ExpressRoute connection allows you to access resources running in your Azure subscription from your Private Cloud.
-
-## Request authorization key
-
-An authorization key is required for the ExpressRoute connection between your Private Cloud and the Azure virtual network. To obtain a key, file a ticket with <a href="https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest" target="_blank">Support</a>. Use the following information in the request:
-
-* Issue type: **Technical**
-* Subscription: **Select the subscription where CloudSimple service is deployed**
-* Service: **VMware Solution by CloudSimple**
-* Problem type: **Service request**
-* Problem subtype: **Authorization key for Azure VNET connection**
-* Subject: **Request for authorization key for Azure VNET connection**
-
-## Get peering information from CloudSimple portal
-
-To set up the connection, you must establish a connection between Azure virtual network and your CloudSimple environment. As part of the procedure, you must supply the peer circuit URI and authorization key. Obtain the URI and authorization key from [CloudSimple portal](access-cloudsimple-portal.md). Select **Network** on the side menu, and then select **Azure Network Connection**. Or select **Account** on the side menu and then select **Azure network connection**.
-
-Copy the peer circuit URI and the authorization key for each region using the *copy* icon. For each CloudSimple region you want to connect:
-
-1. Click **Copy** to copy the URI. Paste it into a file where it can be available to add to the Azure portal.
-2. Click **Copy** to copy the authorization key and paste it into the file as well.
-
-Copy an authorization key and peer circuit URI that are in the **Available** state. A **Used** status indicates that the key has already been used to create a virtual network connection.
-
-![Virtual Network Connection page](media/virtual-network-connection.png)
-
-For details on setting up the Azure virtual network to CloudSimple link, see [Connect your CloudSimple Private Cloud environment to the Azure virtual network using ExpressRoute](azure-expressroute-connection.md).
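-
-As a sketch of what the connection step can look like with the Azure CLI (all names are placeholders; the peer circuit URI and authorization key come from the CloudSimple portal as described above):
-
-```
-# Connect an ExpressRoute virtual network gateway to the CloudSimple peer circuit
-az network vpn-connection create \
-  --name CloudSimpleConnection \
-  --resource-group MyResourceGroup \
-  --vnet-gateway1 MyErGateway \
-  --express-route-circuit2 "<peer-circuit-URI>" \
-  --authorization-key "<authorization-key>"
-```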
-
-## Next steps
-
-* [Azure virtual network connection to Private Cloud](azure-expressroute-connection.md)
-* [Connect to on-premises network using Azure ExpressRoute](on-premises-connection.md)
vmware-cloudsimple Vmware Components https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/vmware-components.md
- Title: Private cloud VMware components -
-description: Learn how the CloudSimple service allows you to deploy VMware natively in Azure locations. Private Clouds are integrated with the rest of the Azure Cloud.
-- Previously updated : 08/15/2019-----
-# Private Cloud VMware components
-
-A Private Cloud is an isolated VMware stack (ESXi hosts, vCenter, vSAN, and NSX) environment managed by a vCenter server in a management domain. The CloudSimple service allows you to deploy VMware natively on Azure bare metal infrastructure in Azure locations. Private Clouds are integrated with the rest of the Azure Cloud. A private cloud is deployed with the following VMware stack components:
-
-* **VMware ESXi -** Hypervisor on Azure dedicated nodes
-* **VMware vCenter -** Appliance for centralized management of private cloud vSphere environment
-* **VMware vSAN -** Hyper-converged infrastructure solution
-* **VMware NSX Data Center -** Network Virtualization and Security Software
-
-## VMware component versions
-
-A Private Cloud VMware stack is deployed with the following software versions.
-
-| Component | Version | Licensed version |
-|--|||
-| ESXi | 6.7U2 | Enterprise Plus |
-| vCenter | 6.7U2 | vCenter Standard |
-| vSAN | 6.7 | Enterprise |
-| NSX Data Center | 2.4.1 | Advanced |
-
-## ESXi
-
-VMware ESXi is installed on provisioned CloudSimple nodes when you create a private cloud. ESXi provides the hypervisor for deploying workload virtual machines (VMs). Nodes provide hyper-converged infrastructure (compute and storage) on your private cloud. The nodes are a part of the vSphere cluster on the private cloud. Each node has four physical network interfaces connected to the underlay network. Two physical network interfaces are used to create a **vSphere Distributed Switch (VDS)** on vCenter, and two are used to create an **NSX-managed virtual distributed switch (N-VDS)**. The network interfaces are configured in active-active mode for high availability.
-
-Learn more about VMware ESXi.
-
-## vCenter server appliance
-
-vCenter server appliance (VCSA) provides the authentication, management, and orchestration functions for VMware Solution by CloudSimple. VCSA with embedded Platform Services Controller (PSC) is deployed when you create your private cloud. VCSA is deployed on the vSphere cluster that is created when you deploy your private cloud. Each private cloud has its own VCSA. Expansion of a private cloud adds the nodes to the VCSA on the private cloud.
-
-### vCenter single sign-on
-
-Embedded Platform Services Controller on VCSA is associated with a **vCenter Single Sign-On domain**. The domain name is **cloudsimple.local**. A default user **CloudOwner@cloudsimple.local** is created for you to access vCenter. You can add your on-premises/Azure Active Directory [identity sources for vCenter](set-vcenter-identity.md).
-
-## vSAN storage
-
-Private clouds are created with fully configured all-flash vSAN storage, local to the cluster. A minimum of three nodes of the same SKU is required to create a vSphere cluster with a vSAN datastore. Deduplication and compression are enabled on the vSAN datastore by default. Two disk groups are created on each node of the vSphere cluster. Each disk group contains one cache disk and three capacity disks.
-
-A default vSAN storage policy is created on the vSphere cluster and applied to the vSAN datastore. This policy determines how the VM storage objects are provisioned and allocated within the datastore to guarantee the required level of service. The storage policy defines the **Failures to tolerate (FTT)** and the **Failure tolerance method**. You can create new storage policies and apply them to the VMs. To maintain SLA, 25% spare capacity must be maintained on the vSAN datastore.
-
-### Default vSAN storage policy
-
-The table below shows the default vSAN storage policy parameters.
-
-| Number of nodes in vSphere Cluster | FTT | Failure tolerance method |
-||--|--|
-| 3 and 4 nodes | 1 | RAID 1 (mirroring) - creates 2 copies |
-| 5 to 16 nodes | 2 | RAID 1 (mirroring) - creates 3 copies |
-
-## NSX Data Center
-
-NSX Data Center provides network virtualization, micro segmentation, and network security capabilities on your private cloud. You can configure all services supported by NSX Data Center on your private cloud through NSX. When you create a private cloud, the following NSX components are installed and configured.
-
-* NSX-T Manager
-* Transport Zones
-* Host and Edge Uplink Profile
-* Logical Switch for Edge Transport, Ext1, and Ext2
-* IP Pool for ESXi Transport Node
-* IP Pool for Edge Transport Node
-* Edge Nodes
-* DRS Anti-affinity rule for controller and Edge VMs
-* Tier 0 Router
-* BGP enabled on the Tier 0 Router
-
-## vSphere cluster
-
-ESXi hosts are configured as a cluster to ensure high availability of the private cloud. When you create a private cloud, management components of vSphere are deployed on the first cluster. A resource pool is created for management components, and all management VMs are deployed in this resource pool. The first cluster can't be deleted to shrink the private cloud. The vSphere cluster provides high availability for VMs using **vSphere HA**. Failures to tolerate are based on the number of available nodes in the cluster: ```Number of nodes = 2N+1```, where ```N``` is the number of failures to tolerate. For example, a five-node cluster can tolerate up to two node failures.
-
-### vSphere cluster limits
-
-| Resource | Limit |
-|-|-|
-| Minimum number of nodes to create a private cloud (first vSphere cluster) | 3 |
-| Maximum number of nodes in a vSphere Cluster on a private cloud | 16 |
-| Maximum number of nodes in a private cloud | 64 |
-| Maximum number of vSphere Clusters in a private cloud | 21 |
-| Minimum number of nodes on a new vSphere Cluster | 3 |
-
-## VMware infrastructure maintenance
-
-Occasionally it's necessary to make changes to the configuration of the VMware infrastructure. Currently, these intervals can occur every 1-2 months, but the frequency is expected to decline over time. This type of maintenance can usually be done without interrupting normal consumption of the CloudSimple services. During a VMware maintenance interval, the following services continue to function without any impact:
-
-* VMware management plane and applications
-* vCenter access
-* All networking and storage
-* All Azure traffic
-
-## Updates and upgrades
-
-CloudSimple is responsible for lifecycle management of VMware software (ESXi, vCenter, PSC, and NSX) in the private cloud.
-
-Software updates include:
-
-* **Patches**. Security patches or bug fixes released by VMware.
-* **Updates**. Minor version change of a VMware stack component.
-* **Upgrades**. Major version change of a VMware stack component.
-
-CloudSimple tests a critical security patch as soon as it becomes available from VMware. Per SLA, CloudSimple rolls out the security patch to private cloud environments within a week.
-
-CloudSimple provides quarterly maintenance updates to VMware software components. When a new major version of VMware software is available, CloudSimple works with customers to coordinate a suitable maintenance window for upgrade.
-
-## Next steps
-
-* [CloudSimple maintenance and updates](cloudsimple-maintenance-updates.md)
vmware-cloudsimple Vpn Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/vpn-gateway.md
- Title: Azure VMware Solution by CloudSimple - Set up a VPN gateway
-description: Describes how to set up Point-to-Site VPN gateway and Site-to-Site VPN gateway and create connections between your on-premises network and your CloudSimple Private Cloud
-- Previously updated : 08/14/2019 ------
-# Set up VPN gateways on CloudSimple network
-
-VPN gateways allow you to connect to the CloudSimple network from your on-premises network and from a client computer remotely. A VPN connection between your on-premises network and your CloudSimple network provides access to the vCenter and workloads on your Private Cloud. CloudSimple supports both Site-to-Site VPN and Point-to-Site VPN gateways.
-
-## VPN gateway types
-
-* **Site-to-Site VPN** connection allows you to set up your Private Cloud workloads to access on-premises services. You can also use on-premises Active Directory as an identity source for authenticating to your Private Cloud vCenter. Currently, only the **Policy-Based VPN** type is supported.
-* **Point-to-Site VPN** connection is the simplest way to connect to your Private Cloud from your computer. Use Point-to-Site VPN connectivity to connect to the Private Cloud remotely. For information about installing a client for a Point-to-Site VPN connection, see [Configure a VPN connection to your Private Cloud](set-up-vpn.md).
-
-In a region, you can create one Point-to-Site VPN gateway and one Site-to-Site VPN gateway.
-
-## Automatic addition of VLAN/subnets
-
-CloudSimple VPN gateways provide policies for adding VLANs/subnets to VPN gateways. Policies allow you to specify different rules for management VLANs/subnets and user-defined VLANs/subnets. Rules for management VLANs/subnets apply to any new Private Clouds you create. Rules for user-defined VLANs/subnets allow you to automatically add any new VLANs/subnets to existing or new Private Clouds. For a Site-to-Site VPN gateway, you define the policy for each connection.
-
-The policies on adding VLANs/subnets to VPN gateways apply to both Site-to-Site VPN and Point-to-Site VPN gateways.
-
-## Automatic addition of users
-
-A Point-to-Site VPN gateway allows you to define an automatic addition policy for new users. By default, all owners and contributors of the subscription have access to the CloudSimple portal. Users are created only when the CloudSimple portal is launched for the first time. Selecting the **Automatically add** rules enables any new user to access the CloudSimple network using a Point-to-Site VPN connection.
-
-## Set up a Site-to-Site VPN gateway
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md) and select **Network**.
-2. Select **VPN Gateway**.
-3. Click **New VPN Gateway**.
-
- ![Create VPN gateway](media/create-vpn-gateway.png)
-
-4. For **Gateway configuration**, specify the following settings and click **Next**.
-
- * Select **Site-to-Site VPN** as the gateway type.
- * Enter a name to identify the gateway.
- * Select the Azure location where your CloudSimple service is deployed.
- * Optionally, enable High Availability.
-
- ![Create Site-to-Site VPN gateway](media/create-vpn-gateway-s2s.png)
-
- > [!WARNING]
- > Enabling High Availability requires your on-premises VPN device to support connecting to two IP addresses. This option cannot be disabled once VPN gateway is deployed.
-
-5. Create the first connection from your on-premises network and click **Next**.
-
- * Enter a name to identify the connection.
- * For the peer IP, enter your on-premises VPN gateway's public IP address.
- * Enter the peer identifier of your on-premises VPN gateway. The peer identifier is usually the public IP address of your on-premises VPN gateway. If you've configured a specific identifier on your gateway, enter the identifier.
- * Copy the shared key to use for connection from your on-premises VPN gateway. To change the default shared key and specify a new one, click the edit icon.
- * For **On-Premises Prefixes**, enter the on-premises CIDR prefixes that will access CloudSimple network. You can add multiple CIDR prefixes when you create the connection.
-
- ![Create Site-to-Site VPN gateway connection](media/create-vpn-gateway-s2s-connection.png)
-
-6. Enable the VLAN/subnets on your Private Cloud network that will be accessed from the on-premises network and click **Next**.
-
- * To add a management VLAN/subnet, enable **Add management VLANs/Subnets of Private Clouds**. Management subnet is required for vMotion and vSAN subnets.
- * To add vMotion subnets, enable **Add vMotion network of Private Clouds**.
- * To add vSAN subnets, enable **Add vSAN subnet of Private Clouds**.
- * Select or de-select specific VLANs.
-
- ![Create connection](media/create-vpn-gateway-s2s-connection-vlans.png)
-
-7. Review the settings and click **Submit**.
-
- ![Site-to-Site VPN gateway review and create](media/create-vpn-gateway-s2s-review.png)
-
-## Create Point-to-Site VPN gateway
-
-1. [Access the CloudSimple portal](access-cloudsimple-portal.md) and select **Network**.
-2. Select **VPN Gateway**.
-3. Click **New VPN Gateway**.
-
- ![Create VPN gateway](media/create-vpn-gateway.png)
-
-4. For **Gateway configuration**, specify the following settings and click **Next**.
-
- * Select **Point-to-Site VPN** as the gateway type.
- * Enter a name to identify the gateway.
- * Select the Azure location where your CloudSimple service is deployed.
- * Specify the client subnet for the Point-to-Site gateway. DHCP addresses will be given from the client subnet when you connect.
-
-5. For **Connection/User**, specify the following settings and click **Next**.
-
- * To automatically allow all current and future users to access the Private Cloud through the Point-to-Site gateway, select **Automatically add all users**. When you select the option, all users in the user list are automatically selected. You can override the automatic option by deselecting individual users in the list.
- * To select individual users, click the check boxes in the user list.
-
-6. The VLANs/Subnets section allows you to specify management and user VLANs/subnets for the gateway and connections.
-
- * The **Automatically add** options set the global policy for the gateway. The settings apply to the current gateway. The settings can be overridden in the **Select** area.
- * Select **Add management VLANs/Subnets of Private Clouds**.
- * To add all user-defined VLANs/subnets, click **Add user-defined VLANs/Subnets**.
- * The **Select** settings override the global settings under **Automatically add**.
-
-7. Click **Next** to review the settings. Click the Edit icons to make any changes.
-8. Click **Create** to create the VPN gateway.
-
-### Client subnet and protocols for Point-to-Site VPN gateway
-
-The Point-to-Site VPN gateway allows TCP and UDP connections. Choose the protocol to use when you connect from your computer by selecting the TCP or UDP configuration.
-
-The configured client subnet is used for both TCP and UDP clients. The CIDR prefix is divided into two subnets, one for TCP and one for UDP clients. Choose the prefix mask based on the number of VPN users who will connect concurrently.
-
-The following table lists the number of concurrent client connections for each prefix mask.
-
-| Prefix Mask | /24 | /25 | /26 | /27 | /28 |
-|-|--|--|--|--|--|
-| Number of concurrent TCP connections | 124 | 60 | 28 | 12 | 4 |
-| Number of concurrent UDP connections | 124 | 60 | 28 | 12 | 4 |
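These counts follow from splitting the client CIDR into two equal halves, one per protocol. A minimal sketch that reproduces the table's arithmetic, assuming each per-protocol half reserves four addresses:

```powershell
# Each client CIDR is split into two equal halves (one TCP, one UDP).
# Assuming 4 reserved addresses per half: connections = 2^(32 - (mask + 1)) - 4.
foreach ($mask in 24..28) {
    $concurrent = [math]::Pow(2, 32 - ($mask + 1)) - 4
    "/{0}: {1} concurrent connections per protocol" -f $mask, $concurrent
}
```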
-
-To connect using Point-to-Site VPN, see [Connect to CloudSimple using Point-to-Site VPN](set-up-vpn.md#connect-to-cloudsimple-using-point-to-site-vpn).
vmware-cloudsimple Vsan Encryption https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vmware-cloudsimple/vsan-encryption.md
- Title: Azure VMware Solution by CloudSimple - Configure vSAN encryption for Private Cloud
-description: Describes how to configure vSAN software encryption feature so your CloudSimple Private Cloud can work with a key management server running in your Azure virtual network.
-- Previously updated : 08/19/2019 -----
-# Configure vSAN encryption for CloudSimple Private Cloud
-
-You can configure the vSAN software encryption feature so your CloudSimple Private Cloud can work with a key management server running in your Azure virtual network.
-
-VMware requires use of an external KMIP 1.1 compliant third-party key management server (KMS) tool when using vSAN encryption. You can leverage any supported KMS that is certified by VMware and is available for Azure.
-
-This guide describes how to use HyTrust KeyControl KMS running in an Azure virtual network. A similar approach can be used for any other certified third-party KMS solution for vSAN.
-
-This KMS solution requires you to:
-
-* Install, configure, and manage a VMware certified third-party KMS tool in your Azure virtual network.
-* Provide your own licenses for the KMS tool.
-* Configure and manage vSAN encryption in your Private Cloud using the third-party KMS tool running in your Azure virtual network.
-
-## KMS deployment scenario
-
-The KMS server cluster runs in your Azure virtual network and is IP reachable from the Private Cloud vCenter over the configured Azure ExpressRoute connection.
-
-![KMS cluster in Azure virtual network](media/vsan-kms-cluster.png)
-
-## How to deploy the solution
-
-The deployment process has the following steps:
-
-1. [Verify that prerequisites are met](#verify-prerequisites-are-met)
-2. [CloudSimple portal: Obtain ExpressRoute Peering Information](#cloudsimple-portal-obtain-expressroute-peering-information)
-3. [Azure portal: Connect your virtual network to the Private Cloud](#azure-portal-connect-your-virtual-network-to-your-private-cloud)
-4. [Azure portal: Deploy a HyTrust KeyControl Cluster in your virtual network](#azure-portal-deploy-a-hytrust-keycontrol-cluster-in-the-azure-resource-manager-in-your-virtual-network)
-5. [HyTrust WebUI: Configure KMIP server](#hytrust-webui-configure-the-kmip-server)
-6. [vCenter UI: Configure vSAN encryption to use KMS cluster in your Azure virtual network](#vcenter-ui-configure-vsan-encryption-to-use-kms-cluster-in-your-azure-virtual-network)
-
-### Verify prerequisites are met
-
-Verify the following prior to deployment:
-
-* The selected KMS vendor, tool, and version are on the vSAN compatibility list.
-* The selected vendor supports a version of the tool to run in Azure.
-* The Azure version of the KMS tool is KMIP 1.1 compliant.
-* An Azure Resource Manager and a virtual network are already created.
-* A CloudSimple Private Cloud is already created.
-
-### CloudSimple portal: Obtain ExpressRoute peering information
-
-To continue the setup, you need the authorization key and peer circuit URI for ExpressRoute plus access to your Azure Subscription. This information is available on the Virtual Network Connection page in the CloudSimple portal. For instructions, see [Set up a virtual network connection to the Private Cloud](virtual-network-connection.md). If you have any trouble obtaining the information, open a [support request](https://portal.azure.com/#blade/Microsoft_Azure_Support/HelpAndSupportBlade/newsupportrequest).
-
-### Azure portal: Connect your virtual network to your Private Cloud
-
-1. Create a virtual network gateway for your virtual network by following the instructions in [Configure a virtual network gateway for ExpressRoute using the Azure portal](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md).
-2. Link your virtual network to the CloudSimple ExpressRoute circuit by following the instructions in [Connect a virtual network to an ExpressRoute circuit using the portal](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md).
-3. Use the CloudSimple ExpressRoute circuit information received in your welcome email from CloudSimple to link your virtual network to the CloudSimple ExpressRoute circuit in Azure.
-4. Enter the authorization key and peer circuit URI, give the connection a name, and click **OK**.
-
-![Provide CS peer circuit URI when creating the virtual network](media/vsan-azureportal01.png)
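If you prefer to script the linking step, a rough Azure PowerShell equivalent is sketched below. The resource names are placeholders, and the peer circuit URI and authorization key come from the CloudSimple portal:

```azurepowershell-interactive
# Placeholder names - substitute your resource group, gateway, and the values
# from the CloudSimple portal (peer circuit URI and authorization key).
$gw = Get-AzVirtualNetworkGateway -Name "MyVNetGW" -ResourceGroupName "MyRG"
New-AzVirtualNetworkGatewayConnection -Name "ToCloudSimple" -ResourceGroupName "MyRG" `
    -Location "East US" -VirtualNetworkGateway1 $gw -ConnectionType ExpressRoute `
    -PeerId "<peer-circuit-URI>" -AuthorizationKey "<authorization-key>"
```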
-
-### Azure portal: Deploy a HyTrust KeyControl cluster in the Azure Resource Manager in your virtual network
-
-To deploy a HyTrust KeyControl cluster in the Azure Resource Manager in your virtual network, perform the following tasks. See the [HyTrust documentation](https://docs.hytrust.com/DataControl/Admin_Guide-4.0/Default.htm#OLH-Files/Azure.htm%3FTocPath%3DHyTrust%2520DataControl%2520and%2520Microsoft%2520Azure%7C_____0) for details.
-
-1. Create an Azure network security group (nsg-hytrust) with specified inbound rules by following the instructions in the HyTrust documentation.
-2. Generate an SSH key pair in Azure.
-3. Deploy the initial KeyControl node from the image in Azure Marketplace. Use the public key of the key pair that was generated and select **nsg-hytrust** as the network security group for the KeyControl node.
-4. Convert the private IP address of KeyControl to a static IP address (see the sketch after this list).
-5. SSH to the KeyControl VM using its public IP address and the private key of the previously mentioned key pair.
-6. When prompted in the SSH shell, select `No` to set the node as the initial KeyControl node.
-7. Add additional KeyControl nodes by repeating steps 3-5 of this procedure and selecting `Yes` when prompted about adding to an existing cluster.
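For step 4, a possible Azure PowerShell sketch follows; the NIC and resource group names are assumptions for illustration:

```azurepowershell-interactive
# Assumed names - replace with the NIC and resource group of your KeyControl VM.
$nic = Get-AzNetworkInterface -Name "keycontrol1-nic" -ResourceGroupName "MyRG"
$nic.IpConfigurations[0].PrivateIpAllocationMethod = "Static"
Set-AzNetworkInterface -NetworkInterface $nic
```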
-
-### HyTrust WebUI: Configure the KMIP server
-
-Go to https://*public-ip*, where *public-ip* is the public IP address of the KeyControl node VM. Follow these steps from the [HyTrust documentation](https://docs.hytrust.com/DataControl/Admin_Guide-4.0/Default.htm#OLH-Files/Azure.htm%3FTocPath%3DHyTrust%2520DataControl%2520and%2520Microsoft%2520Azure%7C_____0).
-
-1. [Configuring a KMIP server](https://docs.hytrust.com/DataControl/4.2/Admin_Guide-4.2/index.htm#Books/VMware-vSphere-VSAN-Encryption/configuring-kmip-server.htm%3FTocPath%3DHyTrust%2520KeyControl%2520with%2520VSAN%25C2%25A0and%2520VMware%2520vSphere%2520VM%2520Encryption%7C_____2)
-2. [Creating a Certificate Bundle for VMware Encryption](https://docs.hytrust.com/DataControl/4.2/Admin_Guide-4.2/index.htm#Books/VMware-vSphere-VSAN-Encryption/creating-user-for-vmcrypt.htm%3FTocPath%3DHyTrust%2520KeyControl%2520with%2520VSAN%25C2%25A0and%2520VMware%2520vSphere%2520VM%2520Encryption%7C_____3)
-
-### vCenter UI: Configure vSAN encryption to use KMS cluster in your Azure virtual network
-
-Follow the HyTrust instructions to [Create a KMS cluster in vCenter](https://docs.hytrust.com/DataControl/4.2/Admin_Guide-4.2/index.htm#Books/VMware-vSphere-VSAN-Encryption/creating-KMS-Cluster.htm%3FTocPath%3DHyTrust%2520KeyControl%2520with%2520VSAN%25C2%25A0and%2520VMware%2520vSphere%2520VM%2520Encryption%7C_____4).
-
-![Add KMS cluster details in vCenter](media/vsan-config01.png)
-
-In vCenter, go to **Cluster > Configure** and select **General** option for vSAN. Enable encryption and select the KMS cluster that was previously added to vCenter.
-
-![Enable vSAN encryption and configure KMS cluster in vCenter](media/vsan-config02.png)
-
-## References
-
-### Azure
-
-[Configure a virtual network gateway for ExpressRoute using the Azure portal](../expressroute/expressroute-howto-add-gateway-portal-resource-manager.md)
-
-[Connect a virtual network to an ExpressRoute circuit using the portal](../expressroute/expressroute-howto-linkvnet-portal-resource-manager.md)
-
-### HyTrust
-
-[HyTrust DataControl and Microsoft Azure](https://docs.hytrust.com/DataControl/Admin_Guide-4.0/Default.htm#OLH-Files/Azure.htm%3FTocPath%3DHyTrust%2520DataControl%2520and%2520Microsoft%2520Azure%7C_____0)
-
-[Configuring a KMIP Server](https://docs.hytrust.com/DataControl/4.2/Admin_Guide-4.2/index.htm#Books/VMware-vSphere-VSAN-Encryption/configuring-kmip-server.htm%3FTocPath%3DHyTrust%2520KeyControl%2520with%2520VSAN%25C2%25A0and%2520VMware%2520vSphere%2520VM%2520Encryption%7C_____2)
-
-[Creating a Certificate Bundle for VMware Encryption](https://docs.hytrust.com/DataControl/4.2/Admin_Guide-4.2/index.htm#Books/VMware-vSphere-VSAN-Encryption/creating-user-for-vmcrypt.htm%3FTocPath%3DHyTrust%2520KeyControl%2520with%2520VSAN%25C2%25A0and%2520VMware%2520vSphere%2520VM%2520Encryption%7C_____3)
-
-[Creating the KMS Cluster in vSphere](https://docs.hytrust.com/DataControl/4.2/Admin_Guide-4.2/index.htm#Books/VMware-vSphere-VSAN-Encryption/creating-KMS-Cluster.htm%3FTocPath%3DHyTrust%2520KeyControl%2520with%2520VSAN%25C2%25A0and%2520VMware%2520vSphere%2520VM%2520Encryption%7C_____4)
vpn-gateway Ipsec Ike Policy Howto https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ipsec-ike-policy-howto.md
 Title: 'IPsec/IKE policy for S2S VPN & VNet-to-VNet connections: Azure portal'
+ Title: 'Configure custom IPsec/IKE connection policies for S2S VPN & VNet-to-VNet: Azure portal'
-description: Learn how to configure IPsec/IKE policy for S2S or VNet-to-VNet connections with Azure VPN Gateways using the Azure portal.
+description: Learn how to configure IPsec/IKE custom policy for S2S or VNet-to-VNet connections with Azure VPN Gateways using the Azure portal.
Previously updated : 01/12/2023 Last updated : 01/17/2023
-# Configure IPsec/IKE policy for S2S VPN and VNet-to-VNet connections: Azure portal
+# Configure custom IPsec/IKE connection policies for S2S VPN and VNet-to-VNet: Azure portal
This article walks you through the steps to configure IPsec/IKE policy for VPN Gateway Site-to-Site VPN or VNet-to-VNet connections using the Azure portal. The following sections help you create and configure an IPsec/IKE policy, and apply the policy to a new or existing connection.
-## <a name="about"></a>About IPsec and IKE policy parameters
-
-IPsec and IKE protocol standard supports a wide range of cryptographic algorithms in various combinations. Refer to [About cryptographic requirements and Azure VPN gateways](vpn-gateway-about-compliance-crypto.md) to see how this can help ensure cross-premises and VNet-to-VNet connectivity to satisfy your compliance or security requirements.
-
-This article provides instructions to create and configure an IPsec/IKE policy, and apply it to a new or existing VPN gateway connection.
-
-### Considerations
-
-* IPsec/IKE policy only works on the following gateway SKUs:
- * ***VpnGw1~5 and VpnGw1AZ~5AZ***
- * ***Standard*** and ***HighPerformance***
-* You can only specify ***one*** policy combination for a given connection.
-* You must specify all algorithms and parameters for both IKE (Main Mode) and IPsec (Quick Mode). Partial policy specification isn't allowed.
-* Consult with your VPN device vendor specifications to ensure the policy is supported on your on-premises VPN devices. S2S or VNet-to-VNet connections can't establish if the policies are incompatible.
- ## <a name ="workflow"></a>Workflow
-This section outlines the workflow to create and update IPsec/IKE policy on a S2S VPN or VNet-to-VNet connection:
+The instructions in this article help you set up and configure IPsec/IKE policies as shown in the following diagram.
+
1. Create a virtual network and a VPN gateway.
1. Create a local network gateway for cross premises connection, or another virtual network and gateway for VNet-to-VNet connection.
1. Create a connection (IPsec or VNet2VNet).
1. Configure/update/remove the IPsec/IKE policy on the connection resources.
-The instructions in this article help you set up and configure IPsec/IKE policies as shown in the diagram:
+## Policy parameters
-
-## Supported cryptographic algorithms & key strengths
-### Algorithms and keys
+### Cryptographic algorithms & key strengths
The following table lists the supported configurable cryptographic algorithms and key strengths.

[!INCLUDE [Algorithm and keys table](../../includes/vpn-gateway-ipsec-ike-algorithm-include.md)]
-#### Important requirements
-
[!INCLUDE [Important requirements table](../../includes/vpn-gateway-ipsec-ike-requirements-include.md)]
-### Diffie-Hellman Groups
+### Diffie-Hellman groups
-The following table lists the corresponding Diffie-Hellman Groups supported by the custom policy:
+The following table lists the corresponding Diffie-Hellman groups supported by the custom policy:
Refer to [RFC3526](https://tools.ietf.org/html/rfc3526) and [RFC5114](https://tools.ietf.org/html/rfc5114) for more details.
This section walks you through the steps to create a Site-to-Site VPN connection
### Step 1 - Create the virtual network, VPN gateway, and local network gateway for TestVNet1
-Create the following resources using the following values. For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md).
-
-**Virtual network** TestVNet1
-
-* **Resource group:** TestRG1
-* **Name:** TestVNet1
-* **Region:** (US) East US
-* **IPv4 address space:** 10.1.0.0/16
-* **Subnet 1 name:** FrontEnd
-* **Subnet 1 address range:** 10.1.0.0/24
-* **Subnet 2 name:** BackEnd
-* **Subnet 2 address range:** 10.1.1.0/24
-
-**VPN gateway:** VNet1GW
-
-* **Name:** VNet1GW
-* **Region:** East US
-* **Gateway type:** VPN
-* **VPN type:** Route-based
-* **SKU:** VpnGw2
-* **Generation:** Generation 2
-* **Virtual network:** VNet1
-* **Gateway subnet address range:** 10.1.255.0/27
-* **Public IP address type:** Basic or Standard
-* **Public IP address:** Create new
-* **Public IP address name:** VNet1GWpip
-* **Enable active-active mode:** Disabled
-* **Configure BGP:** Disabled
+Create the following resources. For steps, see [Create a Site-to-Site VPN connection](./tutorial-site-to-site-portal.md).
+
+1. Create the virtual network **TestVNet1** using the following values.
+
+ * **Resource group:** TestRG1
+ * **Name:** TestVNet1
+ * **Region:** (US) East US
+ * **IPv4 address space:** 10.1.0.0/16
+ * **Subnet 1 name:** FrontEnd
+ * **Subnet 1 address range:** 10.1.0.0/24
+ * **Subnet 2 name:** BackEnd
+ * **Subnet 2 address range:** 10.1.1.0/24
+
+1. Create the virtual network gateway **VNet1GW** using the following values.
+
+ * **Name:** VNet1GW
+ * **Region:** East US
+ * **Gateway type:** VPN
+ * **VPN type:** Route-based
+ * **SKU:** VpnGw2
+ * **Generation:** Generation 2
+ * **Virtual network:** VNet1
+ * **Gateway subnet address range:** 10.1.255.0/27
+ * **Public IP address type:** Basic or Standard
+ * **Public IP address:** Create new
+ * **Public IP address name:** VNet1GWpip
+ * **Enable active-active mode:** Disabled
+ * **Configure BGP:** Disabled
### Step 2 - Configure the local network gateway and connection resources
-Create the local network gateway resource.
+1. Create the local network gateway resource **Site6** using the following values.
-**Local network gateway** Site6
+ * **Name:** Site6
+ * **Resource Group:** TestRG1
+ * **Location:** East US
+ * **Local gateway IP address:** 5.4.3.2 (example value only - use the IP address of your on-premises device)
+ * **Address Spaces** 10.61.0.0/16, 10.62.0.0/16 (example value only)
-* **Name:** Site6
-* **Resource Group:** TestRG1
-* **Location:** East US
-* **Local gateway IP address:** 5.4.3.2 (example value only - use the IP address of your on-premises device)
-* **Address Spaces** 10.61.0.0/16, 10.62.0.0/16 (example value only)
+1. From the virtual network gateway, add a connection to the local network gateway using the following values.
-**Connection:** VNet1 to Site6
-
-From the virtual network gateway, add a connection to the local network gateway.
-
-* **Connection name:** VNet1toSite6
-* **Connection type:** IPsec
-* **Local network gateway:** Site6
-* **Shared key:** abc123 (example value - must match the on-premises device key used)
-* **IKE protocol:** IKEv2
+ * **Connection name:** VNet1toSite6
+ * **Connection type:** IPsec
+ * **Local network gateway:** Site6
+ * **Shared key:** abc123 (example value - must match the on-premises device key used)
+ * **IKE protocol:** IKEv2
### Step 3 - Configure a custom IPsec/IKE policy on the S2S VPN connection
-In this section, configure a custom IPsec/IKE policy with the following algorithms and parameters:
+Configure a custom IPsec/IKE policy with the following algorithms and parameters:
* IKE Phase 1: AES256, SHA384, DHGroup24
* IKE Phase 2 (IPsec): AES256, SHA256, PFS None
In this section, configure a custom IPsec/IKE policy with the following algorith
1. Once all the options are selected, select **Save** to commit the changes to the connection resource. The policy will be enforced in about a minute.
-> [!IMPORTANT]
->
-> * Once an IPsec/IKE policy is specified on a connection, the Azure VPN gateway will only send or accept the IPsec/IKE proposal with specified cryptographic algorithms and key strengths on that particular connection. Make sure your on-premises VPN device for the connection uses or accepts the exact policy combination, otherwise the S2S VPN tunnel will not establish.
->
-> * **Policy-based traffic selector** and **DPD timeout** options can be specified with **Default** policy, without the custom IPsec/IKE policy.
->
+ > [!IMPORTANT]
+ >
+ > * Once an IPsec/IKE policy is specified on a connection, the Azure VPN gateway will only send or accept the IPsec/IKE proposal with specified cryptographic algorithms and key strengths on that particular connection. Make sure your on-premises VPN device for the connection uses or accepts the exact policy combination, otherwise the S2S VPN tunnel will not establish.
+ >
+ > * **Policy-based traffic selector** and **DPD timeout** options can be specified with **Default** policy, without the custom IPsec/IKE policy.
+ >
## Create VNet-to-VNet connection with custom policy
vpn-gateway Vpn Gateway Howto Point To Site Resource Manager Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-resource-manager-portal.md
description: Learn how to configure VPN Gateway server settings for P2S configur
Previously updated : 01/11/2023 Last updated : 01/18/2023
vpn-gateway Vpn Gateway Howto Point To Site Rm Ps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-howto-point-to-site-rm-ps.md
Title: 'Connect to a VNet from a computer - P2S VPN and Azure certificate authentication: PowerShell'
+ Title: 'Configure P2S server configuration - certificate authentication: PowerShell'
description: Learn how to connect Windows and macOS clients securely to Azure virtual network using P2S and self-signed or CA issued certificates.
Previously updated : 05/05/2022 Last updated : 01/18/2023
-# Configure a Point-to-Site VPN connection to a VNet using Azure certificate authentication: PowerShell
+# Configure server settings for P2S VPN Gateway connections - certificate authentication - Azure PowerShell
This article helps you securely connect individual clients running Windows, Linux, or macOS to an Azure VNet. Point-to-site VPN connections are useful when you want to connect to your VNet from a remote location, such as when you're telecommuting from home or a conference. You can also use P2S instead of a Site-to-Site VPN when you have only a few clients that need to connect to a VNet. Point-to-site connections don't require a VPN device or a public-facing IP address. P2S creates the VPN connection over either SSTP (Secure Socket Tunneling Protocol), or IKEv2.
$DNS = "10.2.1.4"
## <a name="creategateway"></a>Create the VPN gateway
-In this step, you configure and create the virtual network gateway for your VNet.
+In this step, you configure and create the virtual network gateway for your VNet. For more complete information about authentication and tunnel type, see [Specify tunnel and authentication type](vpn-gateway-howto-point-to-site-resource-manager-portal.md#type) in the Azure portal version of this article.
* The -GatewayType must be **Vpn** and the -VpnType must be **RouteBased**.
* The -VpnClientProtocol is used to specify the types of tunnels that you would like to enable. The tunnel options are **OpenVPN, SSTP**, and **IKEv2**. You can choose to enable one of them or any supported combination. If you want to enable multiple types, then specify the names separated by a comma. OpenVPN and SSTP can't be enabled together. The strongSwan client on Android and Linux and the native IKEv2 VPN client on iOS and macOS will use only the IKEv2 tunnel to connect. Windows clients try IKEv2 first and if that doesn't connect, they fall back to SSTP. You can use the OpenVPN client to connect to the OpenVPN tunnel type.
In this step, you configure and create the virtual network gateway for your VNet
```azurepowershell-interactive
New-AzVirtualNetworkGateway -Name $GWName -ResourceGroupName $RG `
-Location $Location -IpConfigurations $ipconf -GatewayType Vpn `
- -VpnType RouteBased -EnableBgp $false -GatewaySku VpnGw1 -VpnClientProtocol "IKEv2"
+ -VpnType RouteBased -EnableBgp $false -GatewaySku VpnGw1 -VpnClientProtocol IkeV2,OpenVPN
```

1. Once your gateway is created, you can view it using the following example. If you closed PowerShell or it timed out while your gateway was being created, you can [declare your variables](#declare) again.
Set-AzVirtualNetworkGateway -VirtualNetworkGateway $Gateway -VpnClientAddressPoo
> You can't generate certificates using Azure Cloud Shell. You must use one of the methods outlined in this section. If you want to use PowerShell, you must install it locally.
>
-Certificates are used by Azure to authenticate VPN clients for point-to-site VPNs. You upload the public key information of the root certificate to Azure. The public key is then considered 'trusted'. Client certificates must be generated from the trusted root certificate, and then installed on each client computer in the Certificates-Current User/Personal certificate store. The certificate is used to authenticate the client when it initiates a connection to the VNet.
+Certificates are used by Azure to authenticate VPN clients for point-to-site VPNs. You upload the public key information of the root certificate to Azure. The public key is then considered 'trusted'. Client certificates must be generated from the trusted root certificate, and then installed on each client computer in the Certificates-Current User/Personal certificate store. The certificate is used to authenticate the client when it initiates a connection to the VNet.
If you use self-signed certificates, they must be created using specific parameters. You can create a self-signed certificate using the instructions for [PowerShell and Windows 10 or later](vpn-gateway-certificates-point-to-site.md), or, if you don't have Windows 10 or later, you can use [MakeCert](vpn-gateway-certificates-point-to-site-makecert.md). It's important that you follow the steps in the instructions when generating self-signed root certificates and client certificates. Otherwise, the certificates you generate will not be compatible with P2S connections and you receive a connection error.
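As a quick reference, here's an abbreviated sketch of the self-signed root and client certificate commands covered in the linked instructions; the subject names are examples only:

```powershell
# Example self-signed root certificate (subject name is an example).
$cert = New-SelfSignedCertificate -Type Custom -KeySpec Signature `
    -Subject "CN=P2SRootCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" -KeyUsageProperty Sign -KeyUsage CertSign

# Example client certificate issued from that root.
New-SelfSignedCertificate -Type Custom -DnsName P2SChildCert -KeySpec Signature `
    -Subject "CN=P2SChildCert" -KeyExportPolicy Exportable `
    -HashAlgorithm sha256 -KeyLength 2048 `
    -CertStoreLocation "Cert:\CurrentUser\My" `
    -Signer $cert -TextExtension @("2.5.29.37={text}1.3.6.1.5.5.7.3.2")
```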
The following steps help you install on a Windows client. For additional clients
Make sure the client certificate was exported as a .pfx along with the entire certificate chain (which is the default). Otherwise, the root certificate information isn't present on the client computer and the client won't be able to authenticate properly.
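A sketch of the export step, assuming the example client certificate subject from the linked instructions (the password is a placeholder):

```powershell
# Placeholder password; BuildChain includes the certificate chain in the .pfx.
$pwd = ConvertTo-SecureString -String "P@ssw0rd!" -Force -AsPlainText
Get-ChildItem Cert:\CurrentUser\My |
    Where-Object { $_.Subject -eq "CN=P2SChildCert" } |
    Export-PfxCertificate -FilePath .\P2SChildCert.pfx -Password $pwd -ChainOption BuildChain
```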
-## <a name="clientconfig"></a>Configure the VPN client
+## <a name="connect"></a>Configure VPN clients and connect to Azure
-To connect to the virtual network gateway using P2S, each computer uses the VPN client that is natively installed as a part of the operating system. For example, when you go to VPN settings on your Windows computer, you can add VPN connections without installing a separate VPN client. You configure each VPN client by using a client configuration package. The client configuration package contains settings that are specific to the VPN gateway that you created.
+Each VPN client is configured using the files in a VPN client profile configuration package that you generate and download. The configuration package contains settings that are specific to the VPN gateway that you created. If you make changes to the gateway, such as changing a tunnel type, certificate, or authentication type, you'll need to generate another VPN client profile configuration package and install it on each client. Otherwise, your VPN clients may not be able to connect.
-You can use the following quick examples to generate and install the client configuration package. For more information about package contents and additional instructions about how to generate and install VPN client configuration files, see [Create and install VPN client configuration files](point-to-site-vpn-client-cert-windows.md).
+For steps to generate a VPN client profile configuration package, configure your VPN clients, and connect to Azure, see the following articles:
-If you need to declare your variables again, you can find them [here](#declare).
-
-### To generate configuration files
-
-```azurepowershell-interactive
-$profile=New-AzVpnClientConfiguration -ResourceGroupName $RG -Name $GWName -AuthenticationMethod "EapTls"
-
-$profile.VPNProfileSASUrl
-```
-
-### To install the client configuration package
--
-## <a name="connect"></a>10. Connect to Azure
-
-### Windows VPN client
---
-### Mac VPN client
-
-From the Network dialog box, locate the client profile that you want to use, then click **Connect**.
-Check [Install - Mac (macOS)](point-to-site-vpn-client-cert-mac.md) for detailed instructions. If you are having trouble connecting, verify that the virtual network gateway is not using a Basic SKU. Basic SKU is not supported for Mac clients.
-
- ![Mac connection](./media/vpn-gateway-howto-point-to-site-rm-ps/applyconnect.png)
+* [Windows](point-to-site-vpn-client-cert-windows.md)
+* [macOS-iOS](point-to-site-vpn-client-cert-mac.md)
+* [Linux](point-to-site-vpn-client-cert-linux.md)
## <a name="verify"></a>To verify a connection
vpn-gateway Vpn Gateway Ipsecikepolicy Rm Powershell https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-ipsecikepolicy-rm-powershell.md
Title: 'IPsec/IKE policy for S2S VPN & VNet-to-VNet connections: PowerShell'
+ Title: 'Configure custom IPsec/IKE connection policies for S2S VPN & VNet-to-VNet: PowerShell'
-description: Learn how to configure IPsec/IKE policy for S2S or VNet-to-VNet connections with Azure VPN Gateways using PowerShell.
+description: Learn how to configure IPsec/IKE custom policy for S2S or VNet-to-VNet connections with Azure VPN Gateways using PowerShell.
Previously updated : 09/02/2020 Last updated : 01/12/2023
-# Configure IPsec/IKE policy for S2S VPN or VNet-to-VNet connections
+# Configure custom IPsec/IKE connection policies for S2S VPN and VNet-to-VNet: PowerShell
-This article walks you through the steps to configure IPsec/IKE policy for Site-to-Site VPN or VNet-to-VNet connections using PowerShell.
+This article walks you through the steps to configure a custom IPsec/IKE policy for VPN Gateway Site-to-Site VPN or VNet-to-VNet connections using PowerShell.
+## Workflow
+The instructions in this article help you set up and configure IPsec/IKE policies as shown in the following diagram.
-## <a name="about"></a>About IPsec and IKE policy parameters for Azure VPN gateways
-IPsec and IKE protocol standard supports a wide range of cryptographic algorithms in various combinations. Refer to [About cryptographic requirements and Azure VPN gateways](vpn-gateway-about-compliance-crypto.md) to see how this can help ensuring cross-premises and VNet-to-VNet connectivity satisfy your compliance or security requirements.
-This article provides instructions to create and configure an IPsec/IKE policy and apply it to a new or existing connection:
+1. Create a virtual network and a VPN gateway.
+1. Create a local network gateway for cross premises connection, or another virtual network and gateway for VNet-to-VNet connection.
+1. Create an IPsec/IKE policy with selected algorithms and parameters.
+1. Create a connection (IPsec or VNet2VNet) with the IPsec/IKE policy.
+1. Add/update/remove an IPsec/IKE policy for an existing connection.
-* [Part 1 - Workflow to create and set IPsec/IKE policy](#workflow)
-* [Part 2 - Supported cryptographic algorithms and key strengths](#params)
-* [Part 3 - Create a new S2S VPN connection with IPsec/IKE policy](#crossprem)
-* [Part 4 - Create a new VNet-to-VNet connection with IPsec/IKE policy](#vnet2vnet)
-* [Part 5 - Manage (create, add, remove) IPsec/IKE policy for a connection](#managepolicy)
+## Policy parameters
-> [!IMPORTANT]
-> 1. Note that IPsec/IKE policy only works on the following gateway SKUs:
-> * ***VpnGw1~5 and VpnGw1AZ~5AZ*** (route-based)
-> * ***Standard*** and ***HighPerformance*** (route-based)
-> 2. You can only specify ***one*** policy combination for a given connection.
-> 3. You must specify all algorithms and parameters for both IKE (Main Mode) and IPsec (Quick Mode). Partial policy specification is not allowed.
-> 4. Consult with your VPN device vendor specifications to ensure the policy is supported on your on-premises VPN devices. S2S or VNet-to-VNet connections cannot establish if the policies are incompatible.
-
-## <a name ="workflow"></a>Part 1 - Workflow to create and set IPsec/IKE policy
-This section outlines the workflow to create and update IPsec/IKE policy on a S2S VPN or VNet-to-VNet connection:
-1. Create a virtual network and a VPN gateway
-2. Create a local network gateway for cross premises connection, or another virtual network and gateway for VNet-to-VNet connection
-3. Create an IPsec/IKE policy with selected algorithms and parameters
-4. Create a connection (IPsec or VNet2VNet) with the IPsec/IKE policy
-5. Add/update/remove an IPsec/IKE policy for an existing connection
-
-The instructions in this article helps you set up and configure IPsec/IKE policies as shown in the diagram:
-
-![ipsec-ike-policy](./media/vpn-gateway-ipsecikepolicy-rm-powershell/ipsecikepolicy.png)
-
-## <a name ="params"></a>Part 2 - Supported cryptographic algorithms & key strengths
-
-The following table lists the supported cryptographic algorithms and key strengths configurable by the customers:
-
-| **IPsec/IKEv2** | **Options** |
-| --- | --- |
-| IKEv2 Encryption | GCMAES256, GCMAES128, AES256, AES192, AES128, DES3, DES |
-| IKEv2 Integrity | GCMAES256, GCMAES128, SHA384, SHA256, SHA1, MD5 |
-| DH Group | DHGroup24, ECP384, ECP256, DHGroup14, DHGroup2048, DHGroup2, DHGroup1, None |
-| IPsec Encryption | GCMAES256, GCMAES192, GCMAES128, AES256, AES192, AES128, DES3, DES, None |
-| IPsec Integrity | GCMAES256, GCMAES192, GCMAES128, SHA256, SHA1, MD5 |
-| PFS Group | PFS24, ECP384, ECP256, PFS2048, PFS2, PFS1, None |
-| QM SA Lifetime | (**Optional**: default values are used if not specified)<br>Seconds (integer; **min. 300**/default 27000 seconds)<br>KBytes (integer; **min. 1024**/default 102400000 KBytes) |
-| Traffic Selector | UsePolicyBasedTrafficSelectors ($True/$False; **Optional**, default $False if not specified) |
-| DPD timeout | Seconds (integer: min. 9/max. 3600; default 45 seconds) |
+### Cryptographic algorithms & key strengths
-> [!IMPORTANT]
-> 1. **Your on-premises VPN device configuration must match or contain the following algorithms and parameters that you specify on the Azure IPsec/IKE policy:**
-> * IKE encryption algorithm (Main Mode / Phase 1)
-> * IKE integrity algorithm (Main Mode / Phase 1)
-> * DH Group (Main Mode / Phase 1)
-> * IPsec encryption algorithm (Quick Mode / Phase 2)
-> * IPsec integrity algorithm (Quick Mode / Phase 2)
-> * PFS Group (Quick Mode / Phase 2)
-> * Traffic Selector (if UsePolicyBasedTrafficSelectors is used)
-> * The SA lifetimes are local specifications only, do not need to match.
->
-> 2. **If GCMAES is used as for IPsec Encryption algorithm, you must select the same GCMAES algorithm and key length for IPsec Integrity; for example, using GCMAES128 for both**
-> 3. In the table above:
-> * IKEv2 corresponds to Main Mode or Phase 1
-> * IPsec corresponds to Quick Mode or Phase 2
-> * DH Group specifies the Diffie-Hellman Group used in Main Mode or Phase 1
-> * PFS Group specified the Diffie-Hellman Group used in Quick Mode or Phase 2
-> 4. IKEv2 Main Mode SA lifetime is fixed at 28,800 seconds on the Azure VPN gateways
-> 5. Setting "UsePolicyBasedTrafficSelectors" to $True on a connection will configure the Azure VPN gateway to connect to policy-based VPN firewall on premises. If you enable PolicyBasedTrafficSelectors, you need to ensure your VPN device has the matching traffic selectors defined with all combinations of your on-premises network (local network gateway) prefixes to/from the Azure virtual network prefixes, instead of any-to-any. For example, if your on-premises network prefixes are 10.1.0.0/16 and 10.2.0.0/16, and your virtual network prefixes are 192.168.0.0/16 and 172.16.0.0/16, you need to specify the following traffic selectors:
-> * 10.1.0.0/16 <====> 192.168.0.0/16
-> * 10.1.0.0/16 <====> 172.16.0.0/16
-> * 10.2.0.0/16 <====> 192.168.0.0/16
-> * 10.2.0.0/16 <====> 172.16.0.0/16
-
-For more information regarding policy-based traffic selectors, see [Connect multiple on-premises policy-based VPN devices](vpn-gateway-connect-multiple-policybased-rm-ps.md).
-
-The following table lists the corresponding Diffie-Hellman Groups supported by the custom policy:
-
-| **Diffie-Hellman Group** | **DHGroup** | **PFSGroup** | **Key length** |
-| | | | |
-| 1 | DHGroup1 | PFS1 | 768-bit MODP |
-| 2 | DHGroup2 | PFS2 | 1024-bit MODP |
-| 14 | DHGroup14<br>DHGroup2048 | PFS2048 | 2048-bit MODP |
-| 19 | ECP256 | ECP256 | 256-bit ECP |
-| 20 | ECP384 | ECP384 | 384-bit ECP |
-| 24 | DHGroup24 | PFS24 | 2048-bit MODP |
+The following table lists the supported configurable cryptographic algorithms and key strengths.
+++
+#### Diffie-Hellman groups
+
+The following table lists the corresponding Diffie-Hellman groups supported by the custom policy:
+ Refer to [RFC3526](https://tools.ietf.org/html/rfc3526) and [RFC5114](https://tools.ietf.org/html/rfc5114) for more details.
-## <a name ="crossprem"></a>Part 3 - Create a new S2S VPN connection with IPsec/IKE policy
+## <a name ="crossprem"></a>Create an S2S VPN connection with IPsec/IKE policy
This section walks you through the steps of creating an S2S VPN connection with an IPsec/IKE policy. The following steps create the connection as shown in the diagram:
-![s2s-policy](./media/vpn-gateway-ipsecikepolicy-rm-powershell/s2spolicy.png)
See [Create a S2S VPN connection](vpn-gateway-create-site-to-site-rm-powershell.md) for more detailed step-by-step instructions for creating an S2S VPN connection.
-### <a name="before"></a>Before you begin
+You can run the steps for this exercise using Azure Cloud Shell in your browser. If you want to use PowerShell directly from your computer instead, install the Azure Resource Manager PowerShell cmdlets. For more information about installing the PowerShell cmdlets, see [How to install and configure Azure PowerShell](/powershell/azure/).
+
+### <a name="createvnet1"></a>Step 1 - Create the virtual network, VPN gateway, and local network gateway resources
+
+If you use Azure Cloud Shell, you automatically connect to your account and don't need to run the following command.
-* Verify that you have an Azure subscription. If you don't already have an Azure subscription, you can activate your [MSDN subscriber benefits](https://azure.microsoft.com/pricing/member-offers/msdn-benefits-details/) or sign up for a [free account](https://azure.microsoft.com/pricing/free-trial/).
-* Install the Azure Resource Manager PowerShell cmdlets. See [Overview of Azure PowerShell](/powershell/azure/) for more information about installing the PowerShell cmdlets.
+If you use PowerShell from your computer, open your PowerShell console and connect to your account. For more information, see [Using Windows PowerShell with Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md). Use the following sample to help you connect:
-### <a name="createvnet1"></a>Step 1 - Create the virtual network, VPN gateway, and local network gateway
+```PowerShell
+Connect-AzAccount
+Select-AzSubscription -SubscriptionName <YourSubscriptionName>
+```
#### 1. Declare your variables
-For this exercise, we start by declaring our variables. Be sure to replace the values with your own when configuring for production.
+For this exercise, we start by declaring variables. You can replace the variables with your own before running the commands.
-```powershell
-$Sub1 = "<YourSubscriptionName>"
-$RG1 = "TestPolicyRG1"
-$Location1 = "East US 2"
+```azurepowershell-interactive
+$RG1 = "TestRG1"
+$Location1 = "EastUS"
$VNetName1 = "TestVNet1" $FESubName1 = "FrontEnd" $BESubName1 = "Backend" $GWSubName1 = "GatewaySubnet"
-$VNetPrefix11 = "10.11.0.0/16"
-$VNetPrefix12 = "10.12.0.0/16"
-$FESubPrefix1 = "10.11.0.0/24"
-$BESubPrefix1 = "10.12.0.0/24"
-$GWSubPrefix1 = "10.12.255.0/27"
+$VNetPrefix11 = "10.1.0.0/16"
+$FESubPrefix1 = "10.1.0.0/24"
+$BESubPrefix1 = "10.1.1.0/24"
+$GWSubPrefix1 = "10.1.255.0/27"
$DNS1 = "8.8.8.8" $GWName1 = "VNet1GW" $GW1IPName1 = "VNet1GWIP1" $GW1IPconf1 = "gw1ipconf1" $Connection16 = "VNet1toSite6"- $LNGName6 = "Site6" $LNGPrefix61 = "10.61.0.0/16" $LNGPrefix62 = "10.62.0.0/16" $LNGIP6 = "131.107.72.22" ```
-#### 2. Connect to your subscription and create a new resource group
-
-Make sure you switch to PowerShell mode to use the Resource Manager cmdlets. For more information, see [Using Windows PowerShell with Resource Manager](../azure-resource-manager/management/manage-resources-powershell.md).
+#### 2. Create the virtual network, VPN gateway, and local network gateway
-Open your PowerShell console and connect to your account. Use the following sample to help you connect:
+The following samples create the virtual network, TestVNet1, with three subnets, and the VPN gateway. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails. It can take 45 minutes or more for the virtual network gateway to create. During this time, if you are using Azure Cloud Shell, your connection may time out. This doesn't affect the gateway create command.
-```powershell
-Connect-AzAccount
-Select-AzSubscription -SubscriptionName $Sub1
+```azurepowershell-interactive
New-AzResourceGroup -Name $RG1 -Location $Location1
-```
-
-#### 3. Create the virtual network, VPN gateway, and local network gateway
-The following sample creates the virtual network, TestVNet1, with three subnets, and the VPN gateway. When substituting values, it's important that you always name your gateway subnet specifically GatewaySubnet. If you name it something else, your gateway creation fails.
-
-```powershell
$fesub1 = New-AzVirtualNetworkSubnetConfig -Name $FESubName1 -AddressPrefix $FESubPrefix1
$besub1 = New-AzVirtualNetworkSubnetConfig -Name $BESubName1 -AddressPrefix $BESubPrefix1
$gwsub1 = New-AzVirtualNetworkSubnetConfig -Name $GWSubName1 -AddressPrefix $GWSubPrefix1
-New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 -Location $Location1 -AddressPrefix $VNetPrefix11,$VNetPrefix12 -Subnet $fesub1,$besub1,$gwsub1
+New-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1 -Location $Location1 -AddressPrefix $VNetPrefix11 -Subnet $fesub1,$besub1,$gwsub1
-$gw1pip1 = New-AzPublicIpAddress -Name $GW1IPName1 -ResourceGroupName $RG1 -Location $Location1 -AllocationMethod Dynamic
-$vnet1 = Get-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1
-$subnet1 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet1
+$gw1pip1 = New-AzPublicIpAddress -Name $GW1IPName1 -ResourceGroupName $RG1 -Location $Location1 -AllocationMethod Dynamic
+$vnet1 = Get-AzVirtualNetwork -Name $VNetName1 -ResourceGroupName $RG1
+$subnet1 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet1
$gw1ipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GW1IPconf1 -Subnet $subnet1 -PublicIpAddress $gw1pip1
New-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1 -Location $Location1 -IpConfigurations $gw1ipconf1 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1
+```
+
+Create the local network gateway. You may need to reconnect and declare the following variables again if Azure Cloud Shell timed out.
+Declare variables.
+
+```azurepowershell-interactive
+$RG1 = "TestRG1"
+$Location1 = "EastUS"
+$LNGName6 = "Site6"
+$LNGPrefix61 = "10.61.0.0/16"
+$LNGPrefix62 = "10.62.0.0/16"
+$LNGIP6 = "131.107.72.22"
+$GWName1 = "VNet1GW"
+$Connection16 = "VNet1toSite6"
+```
+
+Create local network gateway Site6.
+
+```azurepowershell-interactive
New-AzLocalNetworkGateway -Name $LNGName6 -ResourceGroupName $RG1 -Location $Location1 -GatewayIpAddress $LNGIP6 -AddressPrefix $LNGPrefix61,$LNGPrefix62
```
The following sample script creates an IPsec/IKE policy with the following algor
* IKEv2: AES256, SHA384, DHGroup24
* IPsec: AES256, SHA256, PFS None, SA Lifetime 14400 seconds & 102400000KB
-```powershell
+```azurepowershell-interactive
$ipsecpolicy6 = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA384 -DhGroup DHGroup24 -IpsecEncryption AES256 -IpsecIntegrity SHA256 -PfsGroup None -SALifeTimeSeconds 14400 -SADataSizeKilobytes 102400000 ```
If you use GCMAES for IPsec, you must use the same GCMAES algorithm and key leng
Create an S2S VPN connection and apply the IPsec/IKE policy created earlier.
-```powershell
+```azurepowershell-interactive
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$lng6 = Get-AzLocalNetworkGateway -Name $LNGName6 -ResourceGroupName $RG1
New-AzVirtualNetworkGatewayConnection -Name $Connection16 -ResourceGroupName $RG1 -VirtualNetworkGateway1 $vnet1gw -LocalNetworkGateway2 $lng6 -Location $Location1 -ConnectionType IPsec -IpsecPolicies $ipsecpolicy6 -SharedKey 'AzureA1b2C3'
```
-You can optionally add "-UsePolicyBasedTrafficSelectors $True" to the create connection cmdlet to enable Azure VPN gateway to connect to policy-based VPN devices on premises, as described above.
+You can optionally add "-UsePolicyBasedTrafficSelectors $True" to the create connection cmdlet to enable Azure VPN gateway to connect to policy-based on-premises VPN devices.
> [!IMPORTANT]
> Once an IPsec/IKE policy is specified on a connection, the Azure VPN gateway will only send or accept
You can optionally add "-UsePolicyBasedTrafficSelectors $True" to the create con
> connection. Make sure your on-premises VPN device for the connection uses or accepts the exact
> policy combination, otherwise the S2S VPN tunnel will not establish.
-
-## <a name ="vnet2vnet"></a>Part 4 - Create a new VNet-to-VNet connection with IPsec/IKE policy
+## <a name ="vnet2vnet"></a>Create a VNet-to-VNet connection with IPsec/IKE policy
The steps for creating a VNet-to-VNet connection with an IPsec/IKE policy are similar to those for an S2S VPN connection. The following sample scripts create the connection as shown in the diagram:
-![v2v-policy](./media/vpn-gateway-ipsecikepolicy-rm-powershell/v2vpolicy.png)
-See [Create a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) for more detailed steps for creating a VNet-to-VNet connection. You must complete [Part 3](#crossprem) to create and configure TestVNet1 and the VPN Gateway.
+See [Create a VNet-to-VNet connection](vpn-gateway-vnet-vnet-rm-ps.md) for more detailed steps for creating a VNet-to-VNet connection.
-### <a name="createvnet2"></a>Step 1 - Create the second virtual network and VPN gateway
+### Step 1 - Create the second virtual network and VPN gateway
#### 1. Declare your variables
-Be sure to replace the values with the ones that you want to use for your configuration.
-
-```powershell
-$RG2 = "TestPolicyRG2"
-$Location2 = "East US 2"
+```azurepowershell-interactive
+$RG2 = "TestRG2"
+$Location2 = "EastUS"
$VNetName2 = "TestVNet2" $FESubName2 = "FrontEnd" $BESubName2 = "Backend"
$Connection21 = "VNet2toVNet1"
$Connection12 = "VNet1toVNet2" ```
-#### 2. Create the second virtual network and VPN gateway in the new resource group
+#### 2. Create the second virtual network and VPN gateway
-```powershell
+```azurepowershell-interactive
New-AzResourceGroup -Name $RG2 -Location $Location2

$fesub2 = New-AzVirtualNetworkSubnetConfig -Name $FESubName2 -AddressPrefix $FESubPrefix2
$vnet2 = Get-AzVirtualNetwork -Name $VNetName2 -ResourceGroupName $RG2
$subnet2 = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet2
$gw2ipconf1 = New-AzVirtualNetworkGatewayIpConfig -Name $GW2IPconf1 -Subnet $subnet2 -PublicIpAddress $gw2pip1
-New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gw2ipconf1 -GatewayType Vpn -VpnType RouteBased -GatewaySku HighPerformance
+New-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2 -Location $Location2 -IpConfigurations $gw2ipconf1 -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw2
```
+It can take about 45 minutes or more to create the VPN gateway.
+
### Step 2 - Create a VNet-to-VNet connection with the IPsec/IKE policy
-Similar to the S2S VPN connection, create an IPsec/IKE policy then apply to policy to the new connection.
+Similar to the S2S VPN connection, create an IPsec/IKE policy, then apply the policy to the new connection. If you used Azure Cloud Shell, your connection may have timed out. If so, reconnect and declare the necessary variables again.
-#### 1. Create an IPsec/IKE policy
+```azurepowershell-interactive
+$GWName1 = "VNet1GW"
+$GWName2 = "VNet2GW"
+$RG1 = "TestRG1"
+$RG2 = "TestRG2"
+$Location1 = "EastUS"
+$Location2 = "EastUS"
+$Connection21 = "VNet2toVNet1"
+$Connection12 = "VNet1toVNet2"
+```
+
+#### 1. Create the IPsec/IKE policy
The following sample script creates a different IPsec/IKE policy with the following algorithms and parameters:
+
* IKEv2: AES128, SHA1, DHGroup14
-* IPsec: GCMAES128, GCMAES128, PFS14, SA Lifetime 14400 seconds & 102400000KB
+* IPsec: GCMAES128, GCMAES128, PFS24, SA Lifetime 14400 seconds & 102400000KB
-```powershell
-$ipsecpolicy2 = New-AzIpsecPolicy -IkeEncryption AES128 -IkeIntegrity SHA1 -DhGroup DHGroup14 -IpsecEncryption GCMAES128 -IpsecIntegrity GCMAES128 -PfsGroup PFS14 -SALifeTimeSeconds 14400 -SADataSizeKilobytes 102400000
+```azurepowershell-interactive
+$ipsecpolicy2 = New-AzIpsecPolicy -IkeEncryption AES128 -IkeIntegrity SHA1 -DhGroup DHGroup14 -IpsecEncryption GCMAES128 -IpsecIntegrity GCMAES128 -PfsGroup PFS24 -SALifeTimeSeconds 14400 -SADataSizeKilobytes 102400000
```

#### 2. Create VNet-to-VNet connections with the IPsec/IKE policy
-Create a VNet-to-VNet connection and apply the IPsec/IKE policy you created. In this example, both gateways are in the same subscription. So it is possible to create and configure both connections with the same IPsec/IKE policy in the same PowerShell session.
+Create a VNet-to-VNet connection and apply the IPsec/IKE policy you created. In this example, both gateways are in the same subscription. So it's possible to create and configure both connections with the same IPsec/IKE policy in the same PowerShell session.
-```powershell
+```azurepowershell-interactive
$vnet1gw = Get-AzVirtualNetworkGateway -Name $GWName1 -ResourceGroupName $RG1
$vnet2gw = Get-AzVirtualNetworkGateway -Name $GWName2 -ResourceGroupName $RG2
New-AzVirtualNetworkGatewayConnection -Name $Connection21 -ResourceGroupName $RG
> connection. Make sure the IPsec policies for both connections are the same, otherwise the
> VNet-to-VNet connection will not establish.
-After completing these steps, the connection is established in a few minutes, and you will have the following network topology as shown in the beginning:
-
-![ipsec-ike-policy](./media/vpn-gateway-ipsecikepolicy-rm-powershell/ipsecikepolicy.png)
+After you complete these steps, the connection is established in a few minutes, and you'll have the following network topology as shown in the beginning:
-## <a name ="managepolicy"></a>Part 5 - Update IPsec/IKE policy for a connection
+## <a name="managepolicy"></a>Update IPsec/IKE policy for a connection
-The last section shows you how to manage IPsec/IKE policy for an existing S2S or VNet-to-VNet connection. The exercise below walks you through the following operations on a connection:
+The last section shows you how to manage IPsec/IKE policy for an existing S2S or VNet-to-VNet connection. The following exercise walks you through the following operations on a connection:
1. Show the IPsec/IKE policy of a connection
-2. Add or update the IPsec/IKE policy to a connection
-3. Remove the IPsec/IKE policy from a connection
+1. Add or update the IPsec/IKE policy to a connection
+1. Remove the IPsec/IKE policy from a connection
The same steps apply to both S2S and VNet-to-VNet connections.

> [!IMPORTANT]
> IPsec/IKE policy is supported on *Standard* and *HighPerformance* route-based VPN gateways only. It does not work on the Basic gateway SKU or the policy-based VPN gateway.
-#### 1. Show the IPsec/IKE policy of a connection
+### 1. Show an IPsec/IKE policy for a connection
The following example shows how to get the IPsec/IKE policy configured on a connection. The scripts also continue from the exercises above.
-```powershell
-$RG1 = "TestPolicyRG1"
+```azurepowershell-interactive
+$RG1 = "TestRG1"
$Connection16 = "VNet1toSite6" $connection6 = Get-AzVirtualNetworkGatewayConnection -Name $Connection16 -ResourceGroupName $RG1 $connection6.IpsecPolicies ```
-The last command lists the current IPsec/IKE policy configured on the connection, if there is any. The following is a sample output for the connection:
+The last command lists the current IPsec/IKE policy configured on the connection, if one exists. The following example shows sample output for the connection:
-```powershell
+```azurepowershell-interactive
SALifeTimeSeconds : 14400
SADataSizeKilobytes : 102400000
IpsecEncryption : AES256
DhGroup : DHGroup24
PfsGroup : PFS24
```
-If there is no IPsec/IKE policy configured, the command (PS> $connection6.IpsecPolicies) gets an empty return. It does not mean IPsec/IKE is not configured on the connection, but that there is no custom IPsec/IKE policy. The actual connection uses the default policy negotiated between your on-premises VPN device and the Azure VPN gateway.
+If there isn't a configured IPsec/IKE policy, the command (PS> $connection6.IpsecPolicies) gets an empty return. It doesn't mean IPsec/IKE isn't configured on the connection, but that there's no custom IPsec/IKE policy. The actual connection uses the default policy negotiated between your on-premises VPN device and the Azure VPN gateway.
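A small check along these lines (a sketch, continuing from the variables above) makes the distinction explicit:

```azurepowershell-interactive
# An empty IpsecPolicies collection means the default negotiated policy is in use.
if (-not $connection6.IpsecPolicies) {
    "No custom IPsec/IKE policy; the connection uses the default policy."
}
```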
-#### 2. Add or update an IPsec/IKE policy for a connection
+### 2. Add or update an IPsec/IKE policy for a connection
The steps to add a new policy or update an existing policy on a connection are the same: create a new policy then apply the new policy to the connection.
-```powershell
-$RG1 = "TestPolicyRG1"
+```azurepowershell-interactive
+$RG1 = "TestRG1"
$Connection16 = "VNet1toSite6" $connection6 = Get-AzVirtualNetworkGatewayConnection -Name $Connection16 -ResourceGroupName $RG1
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connecti
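For illustration, a complete create-and-apply sequence might look like the following sketch; the algorithm values are assumptions chosen to be consistent with the sample output later in this section:

```azurepowershell-interactive
# Assumed algorithm values - pick a combination your on-premises device supports.
$newpolicy6 = New-AzIpsecPolicy -IkeEncryption AES256 -IkeIntegrity SHA256 -DhGroup DHGroup14 `
    -IpsecEncryption AES256 -IpsecIntegrity SHA256 -PfsGroup None `
    -SALifeTimeSeconds 14400 -SADataSizeKilobytes 102400000

# Apply the new policy to the existing connection.
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection6 -IpsecPolicies $newpolicy6
```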
To enable "UsePolicyBasedTrafficSelectors" when connecting to an on-premises policy-based VPN device, add the "-UsePolicyBaseTrafficSelectors" parameter to the cmdlet, or set it to $False to disable the option:
-```powershell
+```azurepowershell-interactive
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection6 -IpsecPolicies $newpolicy6 -UsePolicyBasedTrafficSelectors $True
```
-You can get the connection again to check if the policy is updated.
+To check the connection for the updated policy, run the following command.
-```powershell
+```azurepowershell-interactive
$connection6 = Get-AzVirtualNetworkGatewayConnection -Name $Connection16 -ResourceGroupName $RG1
$connection6.IpsecPolicies
```
-You should see the output from the last line, as shown in the following example:
+Example output:
-```powershell
+```azurepowershell-interactive
SALifeTimeSeconds : 14400
SADataSizeKilobytes : 102400000
IpsecEncryption : AES256
DhGroup : DHGroup14
PfsGroup : None
```
-#### 3. Remove an IPsec/IKE policy from a connection
+### 3. Remove an IPsec/IKE policy from a connection
Once you remove the custom policy from a connection, the Azure VPN gateway reverts back to the [default list of IPsec/IKE proposals](vpn-gateway-about-vpn-devices.md) and renegotiates again with your on-premises VPN device.
-```powershell
-$RG1 = "TestPolicyRG1"
+```azurepowershell-interactive
+$RG1 = "TestRG1"
$Connection16 = "VNet1toSite6" $connection6 = Get-AzVirtualNetworkGatewayConnection -Name $Connection16 -ResourceGroupName $RG1
$connection6.IpsecPolicies.Remove($currentpolicy)
Set-AzVirtualNetworkGatewayConnection -VirtualNetworkGatewayConnection $connection6
```
-You can use the same script to check if the policy has been removed from the connection.
-
## Next steps
-See [Connect multiple on-premises policy-based VPN devices](vpn-gateway-connect-multiple-policybased-rm-ps.md) for more details regarding policy-based traffic selectors.
-
-Once your connection is complete, you can add virtual machines to your virtual networks. See [Create a Virtual Machine](../virtual-machines/windows/quick-create-portal.md) for steps.
+See [Connect multiple on-premises policy-based VPN devices](vpn-gateway-connect-multiple-policybased-rm-ps.md) for more details regarding policy-based traffic selectors.