Updates from: 08/10/2024 01:07:04
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Identity Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md
Title: Set up sign-up and sign-in with a Twitter account
+ Title: Set up sign-up and sign-in with an X account
-description: Provide sign-up and sign-in to customers with Twitter accounts in your applications using Azure Active Directory B2C.
+description: Provide sign-up and sign-in to customers with X accounts in your applications using Azure Active Directory B2C.
zone_pivot_groups: b2c-policy-type
-#Customer Intent: As a developer setting up sign-up and sign-in with a Twitter account using Azure Active Directory B2C, I want to configure Twitter as an identity provider so that I can enable users to sign in with their Twitter accounts.
+#Customer Intent: As a developer setting up sign-up and sign-in with an X account using Azure Active Directory B2C, I want to configure X as an identity provider so that I can enable users to sign in with their X accounts.
-# Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C
+# Set up sign-up and sign-in with an X account using Azure Active Directory B2C
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
::: zone pivot="b2c-custom-policy"
zone_pivot_groups: b2c-policy-type
## Create an application
-To enable sign-in for users with a Twitter account in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [`https://twitter.com/signup`](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access).
+To enable sign-in for users with an X account in Azure AD B2C, you need to create an X application. If you don't already have an X account, you can sign up at [`https://x.com/signup`](https://x.com/signup). You also need to [Apply for a developer account](https://developer.x.com/). For more information, see [Apply for access](https://developer.x.com/en/apply-for-access).
::: zone pivot="b2c-custom-policy"
-1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials.
+1. Sign in to the [X Developer Portal](https://developer.x.com/portal/projects-and-apps) with your X account credentials.
1. Select the **+ Create Project** button.
1. Under the **Project name** tab, enter a preferred name for your project, and then select the **Next** button.
1. Under the **Use case** tab, select your preferred use case, and then select **Next**.
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
1. For the **Callback URI/Redirect URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy-id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID, even if they're defined with uppercase letters in Azure AD B2C. Replace:
   - `your-tenant-name` with the name of your tenant.
   - `your-domain-name` with your custom domain.
- - `your-policy-id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`.
+ - `your-policy-id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_x`.
1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
1. (Optional) Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
1. (Optional) Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
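For instance, with a hypothetical tenant `contosob2c` and policy ID `b2c_1a_signup_signin_x`, the substituted values would look like this:

```bash
# Hypothetical example values; substitute your own tenant name and policy ID.
# Callback URI/Redirect URL:
https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/b2c_1a_signup_signin_x/oauth1/authresp
# Website URL:
https://contosob2c.b2clogin.com
```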
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
::: zone pivot="b2c-user-flow"
-1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials.
+1. Sign in to the [X Developer Portal](https://developer.x.com/portal/projects-and-apps) with your X account credentials.
1. Select the **+ Create Project** button.
1. Under the **Project name** tab, enter a preferred name for your project, and then select the **Next** button.
1. Under the **Use case** tab, select your preferred use case, and then select **Next**.
1. Under the **Project description** tab, enter your project description, and then select the **Next** button.
1. Under the **App name** tab, enter a name for your app, such as *azureadb2c*, and then select the **Next** button.
-1. Under **Keys & Tokens** tab, copy the value of **API Key** and **API Key Secret** for later. You use both of them to configure Twitter as an identity provider in your Azure AD B2C tenant.
+1. Under the **Keys & Tokens** tab, copy the values of **API Key** and **API Key Secret** for later. You use both of them to configure X as an identity provider in your Azure AD B2C tenant.
1. Select **App settings** to open the app settings.
1. At the lower part of the page, under **User authentication settings**, select **Set up**.
1. Under **Type of app**, select your appropriate app type such as *Web App*.
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-name/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID, even if they're defined with uppercase letters in Azure AD B2C. Replace:
   - `your-tenant-name` with the name of your tenant.
   - `your-domain-name` with your custom domain.
- - `your-user-flow-name` with the identifier of your user flow. For example, `b2c_1_signup_signin_twitter`.
+ - `your-user-flow-name` with the identifier of your user flow. For example, `b2c_1_signup_signin_x`.
1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`.
1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application.
1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application.
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to
::: zone pivot="b2c-user-flow"
-## Configure Twitter as an identity provider
+## Configure X as an identity provider
1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant.
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**.
1. Select **Identity providers**, then select **Twitter**.
-1. Enter a **Name**. For example, *Twitter*.
-1. For the **Client ID**, enter the *API Key* of the Twitter application that you created earlier.
+1. Enter a **Name**. For example, *X*.
+1. For the **Client ID**, enter the *API Key* of the X application that you created earlier.
1. For the **Client secret**, enter the *API key secret* that you recorded.
1. Select **Save**.
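If you prefer to script this configuration, the Microsoft Graph identity providers API can create the same entry. A minimal sketch, assuming a Graph access token with the `IdentityProvider.ReadWrite.All` permission — the display name and key values are placeholders:

```bash
# Sketch only: create a Twitter/X social identity provider in the B2C tenant
# via Microsoft Graph. $GRAPH_TOKEN, clientId, and clientSecret are placeholders.
curl -X POST "https://graph.microsoft.com/v1.0/identity/identityProviders" \
  -H "Authorization: Bearer $GRAPH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
        "@odata.type": "microsoft.graph.socialIdentityProvider",
        "displayName": "X",
        "identityProviderType": "Twitter",
        "clientId": "<API Key>",
        "clientSecret": "<API Key Secret>"
      }'
```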
-## Add Twitter identity provider to a user flow
+## Add X identity provider to a user flow
-At this point, the Twitter identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Twitter identity provider to a user flow:
+At this point, the X identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the X identity provider to a user flow:
1. In your Azure AD B2C tenant, select **User flows**.
-1. Select the user flow that you want to add the Twitter identity provider.
+1. Select the user flow to which you want to add the X identity provider.
1. Under the **Social identity providers**, select **Twitter**.
1. Select **Save**.
At this point, the Twitter identity provider has been set up, but it's not yet a
1. To test your policy, select **Run user flow**.
1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run user flow** button.
-1. From the sign-up or sign-in page, select **Twitter** to sign in with Twitter account.
+1. From the sign-up or sign-in page, select **Twitter** to sign in with an X account.
::: zone-end
At this point, the Twitter identity provider has been set up, but it's not yet a
## Create a policy key
-You need to store the secret key that you previously recorded for Twitter app in your Azure AD B2C tenant.
+You need to store the secret key that you previously recorded for the X app in your Azure AD B2C tenant.
1. Sign in to the [Azure portal](https://portal.azure.com/).
1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu.
You need to store the secret key that you previously recorded for Twitter app in
1. On the left menu, under **Policies**, select **Identity Experience Framework**.
1. Select **Policy Keys** and then select **Add**.
1. For **Options**, choose `Manual`.
-1. Enter a **Name** for the policy key. For example, `TwitterSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
+1. Enter a **Name** for the policy key. For example, `XSecret`. The prefix `B2C_1A_` is added automatically to the name of your key.
1. For **Secret**, enter your *API key secret* value that you previously recorded.
1. For **Key usage**, select `Signature`.
1. Select **Create**.
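You can also create the key programmatically. A minimal sketch, assuming the Microsoft Graph beta `trustFramework/keySets` API and a token with the `TrustFrameworkKeySet.ReadWrite.All` permission — the key name and secret value are placeholders:

```bash
# Sketch only: create the key set, then upload the X API key secret into it.
# $GRAPH_TOKEN and the secret value are placeholders.
curl -X POST "https://graph.microsoft.com/beta/trustFramework/keySets" \
  -H "Authorization: Bearer $GRAPH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "id": "B2C_1A_XSecret" }'

curl -X POST "https://graph.microsoft.com/beta/trustFramework/keySets/B2C_1A_XSecret/uploadSecret" \
  -H "Authorization: Bearer $GRAPH_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{ "use": "sig", "k": "<your API key secret>" }'
```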
-## Configure Twitter as an identity provider
+## Configure X as an identity provider
-To enable users to sign in using a Twitter account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
+To enable users to sign in using an X account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated.
-You can define a Twitter account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy. Refer to the custom policy starter pack that you downloaded in the Prerequisites of this article.
+You can define an X account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy. Refer to the custom policy starter pack that you downloaded in the Prerequisites of this article.
1. Open the *TrustFrameworkExtensions.xml*.
2. Find the **ClaimsProviders** element. If it does not exist, add it under the root element.
You can define a Twitter account as a claims provider by adding it to the **Clai
```xml
<ClaimsProvider>
- <Domain>twitter.com</Domain>
- <DisplayName>Twitter</DisplayName>
+ <Domain>x.com</Domain>
+ <DisplayName>X</DisplayName>
  <TechnicalProfiles>
    <TechnicalProfile Id="Twitter-OAuth1">
- <DisplayName>Twitter</DisplayName>
+ <DisplayName>X</DisplayName>
<Protocol Name="OAuth1" /> <Metadata> <Item Key="ProviderName">Twitter</Item>
You can define a Twitter account as a claims provider by adding it to the **Clai
<Item Key="request_token_endpoint">https://api.twitter.com/oauth/request_token</Item> <Item Key="ClaimsEndpoint">https://api.twitter.com/1.1/account/verify_credentials.json?include_email=true</Item> <Item Key="ClaimsResponseFormat">json</Item>
- <Item Key="client_id">Your Twitter application API key</Item>
+ <Item Key="client_id">Your X application API key</Item>
      </Metadata>
      <CryptographicKeys>
        <Key Id="client_secret" StorageReferenceId="B2C_1A_TwitterSecret" />
You can define a Twitter account as a claims provider by adding it to the **Clai
1. Select your relying party policy, for example `B2C_1A_signup_signin`.
1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`.
1. Select the **Run now** button.
-1. From the sign-up or sign-in page, select **Twitter** to sign in with Twitter account.
+1. From the sign-up or sign-in page, select **Twitter** to sign in with an X account.
::: zone-end
If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C.
> [!TIP]
-> If you're facing `unauthorized` error while testing this identity provider, make sure you use the correct Twitter API Key and API Key Secret, or try to apply for [elevated](https://developer.twitter.com/en/portal/products/elevated) access. Also, we recommend you've a look at [Twitter's projects structure](https://developer.twitter.com/en/docs/projects/overview), if you registered your app before the feature was available.
+> If you're facing an `unauthorized` error while testing this identity provider, make sure you use the correct X API Key and API Key Secret, or try applying for [elevated](https://developer.x.com/en/portal/products/elevated) access. Also, we recommend you take a look at [X's projects structure](https://developer.x.com/en/docs/projects/overview) if you registered your app before the feature was available.
active-directory-b2c Oauth1 Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth1-technical-profile.md
-#Customer intent: As a developer implementing Azure Active Directory B2C custom policies, I want to define an OAuth1 technical profile, so that I can federate with an OAuth1 based identity provider like Twitter and allow users to sign in with their existing social or enterprise identities.
+#Customer intent: As a developer implementing Azure Active Directory B2C custom policies, I want to define an OAuth1 technical profile, so that I can federate with an OAuth1 based identity provider like X and allow users to sign in with their existing social or enterprise identities.
[!INCLUDE [active-directory-b2c-advanced-audience-warning](../../includes/active-directory-b2c-advanced-audience-warning.md)]
-Azure Active Directory B2C (Azure AD B2C) provides support for the [OAuth 1.0 protocol](https://tools.ietf.org/html/rfc5849) identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With an OAuth1 technical profile, you can federate with an OAuth1 based identity provider, such as Twitter. Federating with the identity provider allows users to sign in with their existing social or enterprise identities.
+Azure Active Directory B2C (Azure AD B2C) provides support for identity providers that use the [OAuth 1.0 protocol](https://tools.ietf.org/html/rfc5849). This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With an OAuth1 technical profile, you can federate with an OAuth1-based identity provider, such as X. Federating with the identity provider allows users to sign in with their existing social or enterprise identities.
## Protocol
The **Name** attribute of the **Protocol** element needs to be set to `OAuth1`.
```xml
<TechnicalProfile Id="Twitter-OAUTH1">
- <DisplayName>Twitter</DisplayName>
+ <DisplayName>X</DisplayName>
<Protocol Name="OAuth1" /> ... ```
The **OutputClaims** element contains a list of claims returned by the OAuth1 id
The **OutputClaimsTransformations** element may contain a collection of **OutputClaimsTransformation** elements that are used to modify the output claims or generate new ones.
-The following example shows the claims returned by the Twitter identity provider:
+The following example shows the claims returned by the X identity provider:
- The **user_id** claim that is mapped to the **issuerUserId** claim.
- The **screen_name** claim that is mapped to the **displayName** claim.
When you configure the redirect URI of your identity provider, enter `https://{t
Examples:
-- [Add Twitter as an OAuth1 identity provider by using custom policies](identity-provider-twitter.md)
+- [Add X as an OAuth1 identity provider by using custom policies](identity-provider-twitter.md)
active-directory-b2c Partner Keyless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md
Title: Tutorial to configure Keyless with Azure Active Directory B2C
-description: Tutorial to configure Sift Keyless with Azure Active Directory B2C for passwordless authentication
+description: Tutorial to configure Keyless with Azure Active Directory B2C for passwordless authentication
Previously updated : 06/21/2024
Last updated : 08/09/2024
# Tutorial: Configure Keyless with Azure Active Directory B2C
-Learn to configure Azure Active Directory B2C (Azure AD B2C) with the Sift Keyless passwordless solution. With Azure AD B2C as an identity provider (IdP), integrate Keyless with customer applications to provide passwordless authentication. The Keyless Zero-Knowledge Biometric (ZKB) is passwordless multifactor authentication that helps eliminate fraud, phishing, and credential reuse, while enhancing the customer experience and protecting privacy.
+Learn to configure Azure Active Directory B2C (Azure AD B2C) with the Keyless passwordless solution. With Azure AD B2C as an identity provider (IdP), integrate Keyless with customer applications to provide passwordless authentication. The Keyless Zero-Knowledge Biometric (ZKB) is passwordless multifactor authentication that helps eliminate fraud, phishing, and credential reuse, while enhancing the customer experience and protecting privacy.
Go to keyless.io to learn about:
-* [Sift Keyless](https://keyless.io/)
+* [Keyless](https://keyless.io/)
* [How Keyless uses zero-knowledge proofs to protect your biometric data](https://keyless.io/blog/post/how-keyless-uses-zero-knowledge-proofs-to-protect-your-biometric-data)
## Prerequisites
The Keyless integration includes the following components:
* **Azure AD B2C** – authorization server that verifies user credentials. Also known as the IdP.
* **Web and mobile applications** – mobile or web applications to protect with Keyless and Azure AD B2C
-* **The Keyless Authenticator mobile app** – Sift mobile app for authentication to the Azure AD B2C enabled applications
+* **The Keyless Authenticator mobile app** – mobile app for authentication to the Azure AD B2C enabled applications
The following architecture diagram illustrates an implementation.
active-directory-b2c Userjourneys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md
The **ClaimsProviderSelection** element contains the following attributes:
### Claims provider selection example
-In the following orchestration step, the user can choose to sign in with Facebook, LinkedIn, Twitter, Google, or a local account. If the user selects one of the social identity providers, the second orchestration step executes with the selected claim exchange specified in the `TargetClaimsExchangeId` attribute. The second orchestration step redirects the user to the social identity provider to complete the sign-in process. If the user chooses to sign in with the local account, Azure AD B2C stays on the same orchestration step (the same sign-up page or sign-in page) and skips the second orchestration step.
+In the following orchestration step, the user can choose to sign in with Facebook, LinkedIn, X, Google, or a local account. If the user selects one of the social identity providers, the second orchestration step executes with the selected claim exchange specified in the `TargetClaimsExchangeId` attribute. The second orchestration step redirects the user to the social identity provider to complete the sign-in process. If the user chooses to sign in with the local account, Azure AD B2C stays on the same orchestration step (the same sign-up page or sign-in page) and skips the second orchestration step.
```xml
<OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin">
ai-services Choose Model Feature https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/choose-model-feature.md
The following decision charts highlight the features of each **Document Intellig
| Document type | Data to extract | Your best solution |
| --|--|-|
+|**US Unified Tax**|You want to extract key information across all tax forms (W-2, 1040, 1099, 1098) from a single file without running any custom classification of your own.|[**US Unified tax model**](concept-tax-document.md)|
|**US Tax W-2**|You want to extract key information such as salary, wages, and taxes withheld.|[**US tax W-2 model**](concept-tax-document.md)|
|**US Tax 1098**|You want to extract mortgage interest details such as principal, points, and tax.|[**US tax 1098 model**](concept-tax-document.md)|
|**US Tax 1098-E**|You want to extract student loan interest details such as lender and interest amount.|[**US tax 1098-E model**](concept-tax-document.md)|
|**US Tax 1098-T**|You want to extract qualified tuition details such as scholarship adjustments, student status, and lender information.|[**US tax 1098-T model**](concept-tax-document.md)|
|**US Tax 1099(Variations)**|You want to extract information from `1099` forms and their variations (A, B, C, CAP, DIV, G, H, INT, K, LS, LTC, MISC, NEC, OID, PATR, Q, QA, R, S, SA, SB).|[**US tax 1099 model**](concept-tax-document.md)|
|**US Tax 1040(Variations)**|You want to extract information from `1040` forms and their variations (Schedule 1, Schedule 2, Schedule 3, Schedule 8812, Schedule A, Schedule B, Schedule C, Schedule D, Schedule E, Schedule `EIC`, Schedule F, Schedule H, Schedule J, Schedule R, Schedule `SE`, Schedule Senior).|[**US tax 1040 model**](concept-tax-document.md)|
+|**Bank Statement** |You want to extract key information from US bank statements. | [**Bank Statement**](concept-bank-statement.md)|
+|**Check** |You want to extract key information from a check document. | [**Bank Check**](concept-bank-check.md)|
|**Contract** (legal agreement between parties).|You want to extract contract agreement details such as parties, dates, and intervals.|[**Contract model**](concept-contract.md)|
|**Health insurance card** or health insurance ID.| You want to extract key information such as insurer, member ID, prescription coverage, and group number.|[**Health insurance card model**](./concept-health-insurance-card.md)|
-|**Credit/Debit card** . |You want to extract key information bank cards such as card number and bank name. | [**Credit/Debit card model**](concept-credit-card.md)|
-|**Marriage Certificate** . |You want to extract key information from marriage certificates. | [**Marriage certificate model**](concept-marriage-certificate.md)|
-|**Invoice** or billing statement.|You want to extract key information such as customer name, billing address, and amount due.|[**Invoice model**](concept-invoice.md)
+|**Credit/Debit card** |You want to extract key information from bank cards such as card number and bank name. | [**Credit/Debit card model**](concept-credit-card.md)|
+|**Marriage Certificate** |You want to extract key information from marriage certificates. | [**Marriage certificate model**](concept-marriage-certificate.md)|
+|**Invoice** or billing statement|You want to extract key information such as customer name, billing address, and amount due.|[**Invoice model**](concept-invoice.md)|
|**Receipt**, voucher, or single-page hotel receipt. |You want to extract key information such as merchant name, transaction date, and transaction total.|[**Receipt model**](concept-receipt.md)|
-|**Identity document (ID)** like a U.S. driver's license or international passport. |You want to extract key information such as first name, surname, date of birth, address, and signature. | [**Identity document (ID) model**](concept-id-document.md)|
-|**US Mortgage 1003** . |You want to extract key information from the Uniform Residential loan application. | [**1003 form model**](concept-mortgage-documents.md)|
-|**US Mortgage 1008** . |You want to extract key information from the Uniform Underwriting and Transmittal summary. | [**1008 form model**](concept-mortgage-documents.md)|
-|**US Mortgage Closing Disclosure** . |You want to extract key information from a mortgage closing disclosure form. | [**Mortgage closing disclosure form model**](concept-mortgage-documents.md)|
-|**Mixed-type document(s)** with structured, semi-structured, and/or unstructured elements. | You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| [**Custom model**](concept-custom.md)|
+|**Identity document (ID)** like a U.S. driver's license or international passport |You want to extract key information such as first name, surname, date of birth, address, and signature. | [**Identity document (ID) model**](concept-id-document.md)|
+|**Pay stub** |You want to extract key information from pay stub documents. | [**Pay stub model**](concept-pay-stub.md)|
+|**US Mortgage 1003** |You want to extract key information from the Uniform Residential loan application. | [**1003 form model**](concept-mortgage-documents.md)|
+|**US Mortgage 1004** |You want to extract key information from the Uniform Residential Appraisal Report (URAR). | [**1004 form model**](concept-mortgage-documents.md)|
+|**US Mortgage 1005** |You want to extract key information from the Verification of Employment form. | [**1005 form model**](concept-mortgage-documents.md)|
+|**US Mortgage 1008** |You want to extract key information from the Uniform Underwriting and Transmittal summary. | [**1008 form model**](concept-mortgage-documents.md)|
+|**US Mortgage Closing Disclosure** |You want to extract key information from a mortgage closing disclosure form. | [**Mortgage closing disclosure form model**](concept-mortgage-documents.md)|
+|**Mixed-type document(s)** with structured, semi-structured, and/or unstructured elements | You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| [**Custom model**](concept-custom.md)|
>[!Tip]
>
The following decision charts highlight the features of each **Document Intellig
| --|--|-|
|**At least two different types of documents**. |Forms, letters, or documents | [**Custom classification model**](./concept-custom-classifier.md)|
--
## Next steps
* [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio)
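Once one of the charts points you to a model, analysis is a single REST call. The following is a minimal sketch only — `$ENDPOINT`, `$KEY`, the API version, and the file URL are placeholders, and `prebuilt-invoice` stands in for whichever model the charts select:

```bash
# Sketch only: analyze a document with a prebuilt model chosen from the charts.
# $ENDPOINT and $KEY are placeholders for your Document Intelligence resource.
curl -X POST "$ENDPOINT/documentintelligence/documentModels/prebuilt-invoice:analyze?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{ "urlSource": "https://example.com/sample-invoice.pdf" }'
```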
ai-services Concept Add On Capabilities https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md
Document Intelligence supports more sophisticated and modular analysis capabilit
* [`languages`](#language-detection)
+Starting with the `2024-07-31-preview` release, the Read model supports searchable PDF output:
+
+* [Searchable PDF](#searchable-pdf)
++
:::moniker-end
:::moniker range="doc-intel-4.0.0"
Document Intelligence supports more sophisticated and modular analysis capabilit
> > * Add-on capabilities are currently not supported for Microsoft Office file types.
-The following add-on capabilities are available for`2024-02-29-preview`, `2024-02-29-preview`, and later releases:
+Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2023-10-31-preview` and later releases:
* [`keyValuePairs`](#key-value-pairs)
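These add-on capabilities are opted into per analyze request via the `features` query parameter. A minimal sketch, assuming a v4.0 preview endpoint — `$ENDPOINT`, `$KEY`, and the document URL are placeholders:

```bash
# Sketch only: enable the keyValuePairs add-on on a layout analysis request.
# $ENDPOINT and $KEY are placeholders for your resource values.
curl -X POST "$ENDPOINT/documentintelligence/documentModels/prebuilt-layout:analyze?api-version=2024-07-31-preview&features=keyValuePairs" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{ "urlSource": "https://example.com/sample-form.pdf" }'
```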
for lang_idx, lang in enumerate(result.languages):
::: moniker range="doc-intel-4.0.0"
+## Searchable PDF
+
+The searchable PDF capability enables you to convert an analog PDF, such as a scanned-image PDF file, to a PDF with embedded text. The embedded text enables deep text search within the PDF's extracted content by overlaying the detected text entities on top of the image files.
+
+ > [!IMPORTANT]
+ >
> * Currently, the searchable PDF capability is only supported by the Read OCR model `prebuilt-read`. When using this feature, specify the `modelId` as `prebuilt-read`; other model types return an error for this preview version.
+ > * Searchable PDF is included with the 2024-07-31-preview `prebuilt-read` model with no usage cost for general PDF consumption.
+
+### Use searchable PDF
+
+To use searchable PDF, make a `POST` request using the `Analyze` operation and specify the output format as `pdf`:
+
+```bash
+
+POST /documentModels/prebuilt-read:analyze?output=pdf
+{...}
+202
+```
+
+Once the `Analyze` operation is complete, make a `GET` request to retrieve the `Analyze` operation results.
+
+Upon successful completion, the PDF can be retrieved and downloaded as `application/pdf`. This operation allows direct downloading of the embedded-text form of the PDF instead of Base64-encoded JSON.
+
+```bash
+
+// Monitor the operation until completion.
+GET /documentModels/prebuilt-read/analyzeResults/{resultId}
+200
+{...}
+
+// Upon successful completion, retrieve the PDF as application/pdf.
+GET /documentModels/prebuilt-read/analyzeResults/{resultId}/pdf
+200 OK
+Content-Type: application/pdf
+```
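As a concrete sketch of the same flow with `curl` — assuming the v4.0 preview path prefix and treating `$ENDPOINT`, `$KEY`, and `$RESULT_ID` as placeholders:

```bash
# Sketch only: analyze a document with prebuilt-read, then download the searchable PDF.
# The result ID comes from the Operation-Location header of the POST response.
curl -i -X POST "$ENDPOINT/documentintelligence/documentModels/prebuilt-read:analyze?api-version=2024-07-31-preview&output=pdf" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{ "urlSource": "https://example.com/scanned.pdf" }'

# Poll the Operation-Location URL until "status" is "succeeded", then:
curl -o searchable.pdf "$ENDPOINT/documentintelligence/documentModels/prebuilt-read/analyzeResults/$RESULT_ID/pdf?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: $KEY"
```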
++
## Key-value Pairs
In earlier API versions, the prebuilt-document model extracted key-value pairs from forms and documents. With the addition of the `keyValuePairs` feature to prebuilt-layout, the layout model now produces the same results.
ai-services Concept Composed Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md
::: moniker-end
> [!IMPORTANT]
-> Model compose behavior is changing for api-version=2024-07-31-preview and later. The following behavior only applies to v3.1 and previous versions
+>
+> [The `model compose` operation behavior is changing from api-version=2024-07-31-preview](#benefits-of-the-new-model-compose-operation). The `model compose` operation v4.0 and later adds an explicitly trained classifier instead of an implicit classifier for analysis. For the previous composed model version, *see* Composed custom models v3.1. If you're currently using composed models, consider upgrading to the latest implementation.
-**Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. When a document is submitted for analysis using a composed model, the service performs a classification to decide which custom model best represents the submitted document.
+## What is a composed model?
-With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you train several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
+With composed models, you can group multiple custom models into a composed model called with a single model ID. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction.
-* ```Custom form``` and ```Custom template``` models can be composed together into a single composed model.
+Some scenarios require classifying the document first and then analyzing it with the model best suited to extract the fields from the document. Such scenarios can include ones where a user uploads a document but the document type isn't explicitly known. Another scenario can be when multiple documents are scanned together into a single file and the file is submitted for processing. Your application then needs to identify the component documents and select the best model for each document.
-* With the model compose operation, you can assign up to 200 trained custom models to a single composed model. To analyze a document with a composed model, Document Intelligence first classifies the submitted form, chooses the best-matching assigned model, and returns results.
+In previous versions, the `model compose` operation performed an implicit classification to decide which custom model best represents the submitted document. The `2024-07-31-preview` implementation of the `model compose` operation replaces the implicit classification from the earlier versions with an explicit classification step and adds conditional routing.
-* For ```Custom template``` models, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms belong to one of several templates.
+## Benefits of the new model compose operation
-* For ```Custom neural``` models the best practice is to add all the different variations of a single document type into a single training dataset and train on custom neural model. Model compose is best suited for scenarios when you have documents of different types being submitted for analysis.
+The new `model compose` operation requires you to train an explicit classifier and provides several benefits.
-* The response includes a ```docType``` property to indicate which of the composed models was used to analyze the document.
+* **Continual incremental improvement**. You can consistently improve the quality of the classifier by adding more samples and [incrementally improving classification](concept-incremental-classifier.md). This fine-tuning ensures your documents are always routed to the right model for extraction.
+* **Complete control over routing**. By adding confidence-based routing, you provide a confidence threshold for the document type and the classification response.
-With the introduction of [**custom classification models**](./concept-custom-classifier.md), you can choose to use a [**composed model**](./concept-composed-models.md) or [**classification model**](concept-custom-classifier.md) as an explicit step before analysis. For a deeper understanding of when to use a classification or composed model, _see_ [**Custom classification models**](concept-custom-classifier.md#compare-custom-classification-and-composed-models).
+* **Ignore specific document types during the operation**. Earlier implementations of the `model compose` operation selected the best analysis model for extraction based on the confidence score, even if the highest confidence scores were relatively low. By providing a confidence threshold or explicitly not mapping a known document type from classification to an extraction model, you can ignore specific document types.
-## Compose model limits
+* **Analyze multiple instances of the same document type**. When paired with the `splitMode` option of the classifier, the `model compose` operation can detect multiple instances of the same document in a file and split the file to process each document independently. Using `splitMode` enables the processing of multiple instances of a document in a single request.
+
+* **Support for add-on features**. [Add-on features](concept-add-on-capabilities.md) like query fields or barcodes can also be specified as a part of the analysis model parameters.
+
+* **Assigned custom model maximum expanded to 500**. The new implementation of the `model compose` operation allows you to assign up to 500 trained custom models to a single composed model.
++
+## How to use model compose
+
+* Start by collecting samples of all your needed documents including samples with information that should be extracted or ignored.
+
+* Train a classifier by organizing the documents in folders where the folder names are the document types you intend to use in your composed model definition.
+
+* Finally, train an extraction model for each of the document types you intend to use.
+
+* Once your classification and extraction models are trained, use the Document Intelligence Studio, client libraries, or the [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true) to compose the classification and extraction models into a composed model.
+
+Use the `splitMode` parameter to control the file splitting behavior:
+
+* **None**. The entire file is treated as a single document.
+* **perPage**. Each page in the file is treated as a separate document.
+* **auto**. The file is automatically split into documents.
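Putting the classifier, extraction models, and `splitMode` together, a compose request might look roughly like the following sketch. The field names reflect the preview REST reference as best understood; the model IDs, classifier ID, and document type names are hypothetical:

```bash
# Sketch only: compose two extraction models behind a trained classifier.
# $ENDPOINT, $KEY, and all IDs/docType names are placeholders.
curl -X POST "$ENDPOINT/documentintelligence/documentModels:compose?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: $KEY" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "composed-orders",
        "classifierId": "orders-classifier",
        "splitMode": "auto",
        "docTypes": {
          "supply-order":    { "modelId": "supply-order-model" },
          "equipment-order": { "modelId": "equipment-order-model" }
        }
      }'
```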
+
+## Billing and pricing
+
+Composed models are billed the same as individual custom models. Pricing is based on the number of pages analyzed by the downstream analysis model: billing uses the extraction price for the pages routed to an extraction model. With the addition of explicit classification, charges are also incurred for classifying all pages in the input file. For more information, see the [Document Intelligence pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/form-recognizer/).
++
-> [!NOTE]
-> With the addition of **_custom neural model_** , there are a few limits to the compatibility of models that can be composed together.
+## Use model compose
-* With the model compose operation, you can assign up to 200 models to a single model ID. If the number of models that I want to compose exceeds the upper limit of a composed model, you can use one of these alternatives:
+* Start by creating a list of all the model IDs you want to compose into a single model.
+
+* Compose the models into a single model ID using the Studio, REST API, or client libraries.
+
+* Use the composed model ID to analyze documents.
+
+## Billing
+
+Composed models are billed the same as individual custom models. The pricing is based on the number of pages analyzed. Billing is based on the extraction price for the pages routed to an extraction model. For more information, see the [Document Intelligence pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/form-recognizer/).
+
+* There's no change in pricing for analyzing a document by using an individual custom model or a composed custom model.
+
+## Composed models features
+
+* `Custom template` and `custom neural` models can be composed together into a single composed model across multiple API versions.
+
+* The response includes a `docType` property to indicate which of the composed models was used to analyze the document.
+
+* For `custom template` models, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms belong to one of several templates.
+
+* For `custom neural` models, the best practice is to add all the different variations of a single document type into a single training dataset and train a custom neural model. The `model compose` operation is best suited for scenarios when you have documents of different types being submitted for analysis.
+
+## Compose model limits
+
+* With the `model compose` operation, you can assign up to 500 models to a single model ID. If the number of models you want to compose exceeds the upper limit of a composed model, you can use one of these alternatives:
  * Classify the documents before calling the custom model. You can use the [Read model](concept-read.md) and build a classification based on the extracted text from the documents and certain phrases by using sources like code, regular expressions, or search.
  * If you want to extract the same fields from various structured, semi-structured, and unstructured documents, consider using the deep-learning [custom neural model](concept-custom-neural.md). Learn more about the [differences between the custom template model and the custom neural model](concept-custom.md#compare-model-features).
-* Analyzing a document by using composed models is identical to analyzing a document by using a single model. The `Analyze Document` result returns a `docType` property that indicates which of the component models you selected for analyzing the document. There's no change in pricing for analyzing a document by using an individual custom model or a composed custom model.
+* Analyzing a document by using composed models is identical to analyzing a document by using a single model. The `Analyze Document` result returns a `docType` property that indicates which of the component models you selected for analyzing the document.
-* Model Compose is currently available only for custom models trained with labels.
+* The `model compose` operation is currently available only for custom models trained with labels.
### Composed model compatibility
-|Custom model type|Models trained with v2.1 and v2.0 | Custom template models v3.0 |Custom neural models 3.0|Custom Neural models v3.1|
+|Custom model type|Models trained with v2.1 and v2.0 | Custom template and neural models v3.1 and v3.0 |Custom template and neural models v4.0 preview|Custom Generative models v4.0 preview|
|--|--|--|--|--|
-|**Models trained with version 2.1 and v2.0** |Supported|Supported|Not Supported|Not Supported|
-|**Custom template models v3.0** |Supported|Supported|Not Supported|Not Supported|
-|**Custom template models v3.1** |Not Supported|Not Supported|Not Supported|Not Supported|
-|**Custom Neural models v3.0**|Not Supported|Not Supported|Supported|Supported|
-|**Custom Neural models v3.1**|Not Supported|Not Supported|Supported|Supported|
+|**Models trained with version 2.1 and v2.0** |Not Supported|Not Supported|Not Supported|Not Supported|
+|**Custom template and neural models v3.0 and v3.1** |Not Supported|Supported|Supported|Not Supported|
+|**Custom template and neural models v4.0 preview**|Not Supported|Supported|Supported|Not Supported|
+|**Custom generative models v4.0 preview**|Not Supported|Not Supported|Not Supported|Not Supported|
* To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition ensures that the v2.1 model can be composed with other models.
* Models composed using v2.1 of the API continue to be supported and require no updates.
-* For custom models, the maximum number that can be composed is 200.
-
::: moniker-end
## Development options
:::moniker range="doc-intel-4.0.0"
-Document Intelligence **v4.0:2023-02-29-preview** supports the following tools, applications, and libraries:
+Document Intelligence **v4.0:2024-07-31-preview** supports the following tools, applications, and libraries:
| Feature | Resources |
|-|-|
-|_**Custom model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|
-| _**Composed model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-v4.0%20(2024-07-31-preview)&preserve-view=true)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+|***Custom model***| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)|
+| ***Composed model***| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
:::moniker-end
Document Intelligence **v3.1:2023-07-31 (GA)** supports the following tools, app
| Feature | Resources |
|-|-|
-|_**Custom model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|
-| _**Composed model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+|***Custom model***| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)|
+| ***Composed model***| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
:::moniker-end
Document Intelligence **v3.0:2022-08-31 (GA)** supports the following tools, applications, and libraries:
| Feature | Resources |
|-|-|
-|_**Custom model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
-| _**Composed model**_| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+|***Custom model***| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>&bullet; [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)|
+| ***Composed model***| &bullet; [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>&bullet; [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>&bullet; [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+
::: moniker-end
::: moniker range="doc-intel-2.1.0"
Document Intelligence v2.1 supports the following resources:
| Feature | Resources |
|-|-|
-|_**Custom model**_| &bullet; [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net)</br>&bullet; [REST API](how-to-guides/compose-custom-models.md?view=doc-intel-2.1.0&tabs=rest&preserve-view=true)</br>&bullet; [Client library SDK](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</br>&bullet; [Document Intelligence Docker container](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)|
-| _**Composed model**_ |&bullet; [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net/)</br>&bullet; [REST API](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</br>&bullet; JavaScript SDK</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+|***Custom model***| &bullet; [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net)</br>&bullet; [REST API](how-to-guides/compose-custom-models.md?view=doc-intel-2.1.0&tabs=rest&preserve-view=true)</br>&bullet; [Client library SDK](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</br>&bullet; [Document Intelligence Docker container](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)|
+| ***Composed model*** |&bullet; [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net/)</br>&bullet; [REST API](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)</br>&bullet; [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</br>&bullet; [Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</br>&bullet; JavaScript SDK</br>&bullet; [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)|
+ ::: moniker-end ## Next steps
Document Intelligence v2.1 supports the following resources:
Learn to create and compose custom models: > [!div class="nextstepaction"]
+>
> [**Build a custom model**](how-to-guides/build-a-custom-model.md) > [**Compose custom models**](how-to-guides/compose-custom-models.md)
->
ai-services Concept Custom Neural https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md
https://{endpoint}/formrecognizer/documentModels/{modelId}:copyTo?api-version=20
:::moniker range="doc-intel-4.0.0" ## Billing-
-Starting with version `2024-07-31-preview` and later you can receive **10 hours** of free model training. Billing charges are calculated for model trainings that exceed 10 hours. You can choose to spend all of 10 free hours on a single build with a large set of data, or utilize it across multiple builds by adjusting the maximum duration value for the `build` operation by specifying `maxTrainingHours`:
+
+Starting with version `2024-07-31-preview`, you can train your custom neural model for longer than 30 minutes. Previous versions were capped at 30 minutes per training instance, with a total of 20 free training instances per month. With `2024-07-31-preview`, you receive **10 hours** of free model training and can train a model for up to 10 hours; trainings that exceed 10 hours incur billing charges. You can spend all 10 free hours on a single build with a large dataset, or spread them across multiple builds by adjusting the maximum duration for the `build` operation with `maxTrainingHours`, as in the following sketch of the request (illustrative field values):
```bash
POST /documentModels:build
{
  "modelId": "my-neural-model",
  "buildMode": "neural",
  "azureBlobSource": {
    "containerUrl": "https://<your-storage-account>.blob.core.windows.net/<container>"
  },
  "maxTrainingHours": 10
}
```
-Build time varies. Billing is calculated for the actual time spent (excluding time in queue), with a minimum of 30 minutes per training job. The elapsed time is converted to V100 equivalent training hours and reported as part of the model.
+> [!NOTE]
+> For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, paid training isn't enabled for custom neural models. For these two older versions, you get a maximum training duration of 30 minutes per model. If you would like to train more than 20 model instances, you can request an increase in the training limit.
+
+Each training hour is the amount of compute a single V100 GPU can perform in an hour. Because each build takes a different amount of time, billing is calculated for the actual time spent (excluding time in queue), with a minimum of 30 minutes per training job. The elapsed time is converted to V100-equivalent training hours and reported as part of the model.
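For example (hypothetical figures): a build that runs for 90 minutes on hardware equivalent to two V100 GPUs is billed as 1.5 × 2 = 3 V100 training hours, while a single-GPU-equivalent build that completes in 10 minutes is still billed at the 30-minute minimum, that is, 0.5 training hours.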
This billing structure enables you to train on larger datasets for longer durations.
:::moniker-end +
+## Billing
+
+For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, you get a maximum training duration of 30 minutes per model and a maximum of 20 free trainings per month. If you would like to train more than 20 model instances, you can request an increase in the training limit.
+
+If you want to train models for longer than 30 minutes, we support **paid training** in our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents.
+++
+## Billing
+
+For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, you get a maximum training duration of 30 minutes per model and a maximum of 20 free trainings per month. If you would like to train more than 20 model instances, you can request an increase in the training limit.
+
+If you want to train models for longer than 30 minutes, we support **paid training** in our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents.
++ ## Next steps Learn to create and compose custom models: > [!div class="nextstepaction"] > [**Build a custom model**](how-to-guides/build-a-custom-model.md)
-> [**Compose custom models**](how-to-guides/compose-custom-models.md)
+> [**Compose custom models**](how-to-guides/compose-custom-models.md)
ai-services Concept Model Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md
The following table shows the available models for each current preview and stab
|Document analysis models|[Read](concept-read.md) | ✔️| ✔️| ✔️| n/a| |Document analysis models|[Layout](concept-layout.md) | ✔️| ✔️| ✔️| ✔️| |Document analysis models|[General document](concept-general-document.md) |moved to layout**| ✔️| ✔️| n/a|
+|Prebuilt models|[Bank Check](concept-bank-check.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[Bank Statement](concept-bank-statement.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[Paystub](concept-pay-stub.md) | ✔️| n/a| n/a| n/a|
|Prebuilt models|[Contract](concept-contract.md) | ✔️| ✔️| n/a| n/a| |Prebuilt models|[Health insurance card](concept-health-insurance-card.md)| ✔️| ✔️| ✔️| n/a| |Prebuilt models|[ID document](concept-id-document.md) | ✔️| ✔️| ✔️| ✔️| |Prebuilt models|[Invoice](concept-invoice.md) | ✔️| ✔️| ✔️| ✔️| |Prebuilt models|[Receipt](concept-receipt.md) | ✔️| ✔️| ✔️| ✔️|
+|Prebuilt models|[US Unified Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a|
|Prebuilt models|[US 1040 Tax*](concept-tax-document.md) | ✔️| ✔️| n/a| n/a| |Prebuilt models|[US 1098 Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US 1099 Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US W2 Tax](concept-tax-document.md) | ✔️| ✔️| ✔️| n/a| |Prebuilt models|[US Mortgage 1003 URLA](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[US Mortgage 1004 URAR](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|
+|Prebuilt models|[US Mortgage 1005](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|
|Prebuilt models|[US Mortgage 1008 Summary](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US Mortgage closing disclosure](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[Marriage certificate](concept-marriage-certificate.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[Credit card](concept-credit-card.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[Business card](concept-business-card.md) | deprecated|✔️|✔️|✔️ | |Custom classification model|[Custom classifier](concept-custom-classifier.md) | ✔️| ✔️| n/a| n/a|
+|Custom Generative Model|[Custom Generative Model](concept-custom-generative.md) | ✔️| n/a| n/a| n/a|
|Custom extraction model|[Custom neural](concept-custom-neural.md) | ✔️| ✔️| ✔️| n/a| |Custom extraction model|[Custom template](concept-custom-template.md) | ✔️| ✔️| ✔️| ✔️| |Custom extraction model|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️|
Latency is the amount of time it takes for an API server to handle and process a
|Language detection|Free| ✔️| ✔️| n/a| n/a| |Key value pairs|Free| ✔️|n/a|n/a| n/a| |Query fields|Add-On*| ✔️|n/a|n/a| n/a|
+|Searchable PDF|Add-On*| ✔️|n/a|n/a| n/a|
### Model analysis features
Add-On* - Query fields are priced differently than the other add-on features. Se
::: moniker range=">=doc-intel-3.0.0"
-| **Model** | **Description** |
-| | |
-|**Document analysis models**||
-| [Read OCR](#read-ocr) | Extract print and handwritten text including words, locations, and detected languages.|
-| [Layout analysis](#layout-analysis) | Extract text and document layout elements like tables, selection marks, titles, section headings, and more.|
-|**Prebuilt models**||
-| [Health insurance card](#health-insurance-card) | Automate healthcare processes by extracting insurer, member, prescription, group number, and other key information from US health insurance cards.|
-| [US Tax document models](#us-tax-documents) | Process US tax forms to extract employee, employer, wage, and other information. |
-| [US Mortgage document models](#us-mortgage-documents) | Process US mortgage forms to extract borrower loan and property information. |
-| [Contract](#contract) | Extract agreement and party details.|
-| [Invoice](#invoice) | Automate invoices. |
-| [Receipt](#receipt) | Extract receipt data from receipts.|
-| [Identity document (ID)](#identity-document-id) | Extract identity (ID) fields from US driver licenses and international passports. |
-| [Business card](#business-card) | Scan business cards to extract key fields and data into your applications. |
-|**Custom models**||
-| [Custom model (overview)](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. |
-| [Custom extraction models](#custom-extraction)| &#9679; **Custom template models** use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.</br>&#9679; **Custom neural models** are trained on various document types to extract fields from structured, semi-structured, and unstructured documents.|
-| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the documents within and can also identify multiple documents or multiple instances of a single document within an input file.
-| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model.
- ### Bounding box and polygon coordinates A bounding box (`polygon` in v3.0 and later versions) is an abstract rectangle that surrounds text elements in a document used as a reference point for object detection.
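For example (illustrative values), a word's `polygon` might be returned as `[1.0, 1.0, 4.2, 1.0, 4.2, 2.5, 1.0, 2.5]`: four (x, y) coordinate pairs starting at the upper-left corner of the rectangle and proceeding clockwise, measured in inches for PDFs or pixels for images.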
For all models, except Business card model, Document Intelligence now supports a
* [`languages`](concept-add-on-capabilities.md#language-detection) * [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) (2024-02-29-preview, 2023-10-31-preview) * [`queryFields`](concept-add-on-capabilities.md#query-fields) (2024-02-29-preview, 2023-10-31-preview) `Not available with the US.Tax models`
+* [`searchablePDF`](concept-read.md#searchable-pdf) (2024-07-31-preview) `Only available for Read Model`
## Language support
ai-services Concept Read https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md
The searchable PDF capability enables you to convert an analog PDF, such as scan
> > * Currently, the searchable PDF capability is only supported by Read OCR model `prebuilt-read`. When using this feature, please specify the `modelId` as `prebuilt-read`, as other model types will return an error for this preview version. > * Searchable PDF is included with the 2024-07-31-preview `prebuilt-read` model with no additional cost for generating a searchable PDF output.
+> * Searchable PDF currently only supports PDF files as input. Support for other file types, such as image files, will be available later.
### Use searchable PDF
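A minimal sketch of the flow (placeholder endpoint, key, and URLs; assuming the preview REST surface): submit the analysis with the `output=pdf` query parameter, then download the generated PDF from the results endpoint once the operation succeeds:

```bash
# Sketch: analyze with searchable PDF output (prebuilt-read only).
curl -X POST "https://<your-resource>.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-read:analyze?output=pdf&api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://<your-storage>/scanned.pdf"}'

# After polling shows the operation succeeded, fetch the searchable PDF.
curl "https://<your-resource>.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-read/analyzeResults/<resultId>/pdf?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: <your-key>" \
  --output searchable.pdf
```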
ai-services Language Support Prebuilt https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-prebuilt.md
Azure AI Document Intelligence models provide multilingual document processing s
:::moniker-end +
+## Bank Statement
++
+***Model ID: prebuilt-bankStatement***
+
+| Language Locale code | Default |
+|:-|:-|
+| English (United States) `en-US`| English (United States) `en-US`|
++ ## Contract :::moniker range="doc-intel-4.0.0 || doc-intel-3.1.0"
Azure AI Document Intelligence models provide multilingual document processing s
:::moniker-end +
+## Check
+
+***Model ID: prebuilt-check***
+
+| Language Locale code | Default |
+|:-|:-|
+| English (United States) `en-US`| English (United States) `en-US`|
++ ## Health insurance card :::moniker range=">=doc-intel-3.0.0"
Azure AI Document Intelligence models provide multilingual document processing s
|English (`en`) | United States (`us`) :::moniker-end
+## Mortgage
++
+***Model ID: prebuilt-mortgage***
+
+ | Model ID | Language Locale code | Default |
+ |--|:-|:-|
+ |**prebuilt-mortgage-1003**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-mortgage-1004**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-mortgage-1005**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-mortgage-1008**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-mortgage-closingDisclosure**|English (United States)|English (United States) `en-US`|
+++
+## Pay stub
+
+***Model ID: prebuilt-paystub***
+
+| Language Locale code | Default |
+|:-|:-|
+| English (United States) `en-US`| English (United States) `en-US`|
++ ## Receipt :::moniker range=">=doc-intel-3.0.0"
Azure AI Document Intelligence models provide multilingual document processing s
| Model ID | Language Locale code | Default | |--|:-|:-|
+ |**prebuilt-tax.us**|English (United States)|English (United States) `en-US`|
+ |**prebuilt-tax.us.1099Combo**|English (United States)|English (United States) `en-US`|
|**prebuilt-tax.us.1098**|English (United States)|English (United States) `en-US`| |**prebuilt-tax.us.1098E**|English (United States)|English (United States) `en-US`| |**prebuilt-tax.us.1098T**|English (United States)|English (United States) `en-US`|
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md
monikerRange: '<=doc-intel-4.0.0'
Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-services/index.yml) that enables you to build intelligent document processing solutions. Massive amounts of data, spanning a wide variety of data types, are stored in forms and documents. Document Intelligence enables you to effectively manage the velocity at which data is collected and processed and is key to improved operations, informed data-driven decisions, and enlightened innovation. </br></br>
-| ✔️ [**Document analysis models**](#document-analysis-models) | ✔️ [**Prebuilt models**](#prebuilt-models) | ✔️ [**Custom models**](#custom-model-overview) |
+| ✔️ [**Document analysis models**](#general-extraction-models) | ✔️ [**Prebuilt models**](#prebuilt-models) | ✔️ [**Custom models**](#custom-model-overview) |
-## Document analysis models
+## General extraction models
-Document analysis models enable text extraction from forms and documents and return structured business-ready content ready for your organization's action, use, or development.
+General extraction models enable text extraction from forms and documents and return structured business-ready content ready for your organization's action, use, or development.
:::moniker range="doc-intel-4.0.0" :::row::: :::column:::
- :::image type="icon" source="media/overview/icon-read.png" link="#read":::</br>
- [**Read**](#read) | Extract printed </br>and handwritten text.
+ [**Read**](#read) | Extract printed and handwritten text.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br>
- [**Layout**](#layout) | Extract text, tables, </br>and document structure.
+ [**Layout**](#layout) | Extract text, tables, and document structure.
:::column-end::: :::row-end::: :::moniker-end
Document analysis models enable text extraction from forms and documents and ret
:::moniker range="<=doc-intel-3.1.0" :::row::: :::column:::
- :::image type="icon" source="media/overview/icon-read.png" link="#read":::</br>
[**Read**](#read) | Extract printed </br>and handwritten text. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br>
[**Layout**](#layout) | Extract text, tables, </br>and document structure. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-general-document.png" link="#general-document-deprecated-in-2023-10-31-preview":::</br>
[**General document**](#general-document-deprecated-in-2023-10-31-preview) | Extract text, </br>structure, and key-value pairs. :::column-end::: :::row-end:::
Document analysis models enable text extraction from forms and documents and ret
Prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. :::moniker range="doc-intel-4.0.0"
+### Financial Services and Legal
+ :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br>
- [**Invoice**](#invoice) | Extract customer and vendor details.
+ [**Bank Statement**](#bank-statement) | Extract account information and details from bank statements.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br>
- [**Receipt**](#receipt) | Extract sales transaction details.
+ [**Check**](#check) | Extract relevant information from checks.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br>
- [**Identity**](#identity-id) | Extract verification details.
+ [**Contract**](#contract-model) | Extract agreement and party details.
:::column-end::: :::row-end::: :::row:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-check.png" link="#check":::</br>
- [**Check**](#check) | Extract relevant information from checks.
+ :::column span="":::
+ [**Credit card**](#credit-card-model) | Extract payment card information.
+ :::column-end:::
+ :::column span="":::
+ [**Invoice**](#invoice) | Extract customer and vendor details.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-pay-stub.png" link="#pay-stub":::</br>
[**Pay Stub**](#pay-stub) | Extract pay stub details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-bank-statement.png" link="#bank-statement":::</br>
- [**Bank Statement**](#bank-statement) | Extract account information and details from bank statements.
+ [**Receipt**](#receipt) | Extract sales transaction details.
:::column-end::: :::row-end:::+
+### US Tax
:::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br>
- [**Health Insurance card**](#health-insurance-card) | Extract insurance coverage details.
+ [**Unified US tax**](#unified-us-tax-forms) | Extract data from any supported US tax form.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br>
- [**Contract**](#contract-model) | Extract agreement and party details.
+ [**US Tax W-2**](#us-tax-w-2-model) | Extract taxable compensation details.
:::column-end:::
- :::image type="icon" source="media/overview/icon-payment-card.png" link="#contract-model":::</br>
- [**Credit/Debit card**](#credit-card-model) | Extract payment card information.
+ :::column span="":::
+ [**US Tax 1098**](#us-tax-1098-and-variations-forms) | Extract `1098` variation details.
:::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-marriage-certificate.png" link="#contract-model":::</br>
- [**Marriage certificate**](#marriage-certificate-model) | Extract certified marriage information.
+ :::column span="":::
+ [**US Tax 1099**](#us-tax-1099-and-variations-forms) | Extract `1099` variation details.
+ :::column-end:::
+ :::column span="":::
+ [**US Tax 1040**](#us-tax-1040-and-variations-forms) | Extract `1040` variation details.
:::column-end::: :::row-end:::+
+### US Mortgage
:::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-mortgage-1003.png" link="#us-mortgage-1003-form":::</br>
[**US mortgage 1003**](#us-mortgage-1003-form) | Extract loan application details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-mortgage-1004.png" link="#us-mortgage-1004-form":::</br>
[**US mortgage 1004**](#us-mortgage-1004-form) | Extract information from appraisal. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-mortgage-1005.png" link="#us-mortgage-1005-form":::</br>
[**US mortgage 1005**](#us-mortgage-1005-form) | Extract information from validation of employment. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-mortgage-1008.png" link="#us-mortgage-1008-form":::</br>
[**US mortgage 1008**](#us-mortgage-1008-form) | Extract loan transmittal details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-mortgage-disclosure.png" link="#us-mortgage-disclosure-form":::</br>
[**US mortgage disclosure**](#us-mortgage-disclosure-form) | Extract final closing loan terms. :::column-end::: :::row-end:::+
+### Personal Identification
+ :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-model":::</br>
- [**US Tax W-2**](#us-tax-w-2-model) | Extract taxable compensation details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form":::</br>
- [**US Tax 1098**](#us-tax-1098-form) | Extract mortgage interest details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098-e.png" link="#us-tax-1098-e-form":::</br>
- [**US Tax 1098-E**](#us-tax-1098-e-form) | Extract student loan interest details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098-t.png" link="#us-tax-1098-t-form":::</br>
- [**US Tax 1098-T**](#us-tax-1098-t-form) | Extract qualified tuition details.
+ [**Health Insurance card**](#health-insurance-card) | Extract insurance coverage details.
:::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1099.png" link="#us-tax-1098-t-form":::</br>
- [**US Tax 1099**](#us-tax-1099-and-variations-forms) | Extract `1099` variation details.
+ :::column span="":::
+ [**Identity**](#identity-id) | Extract verification details.
:::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1040.png" link="#us-tax-1098-t-form":::</br>
- [**US Tax 1040**](#us-tax-1040-form) | Extract `1040` variation details.
+ :::column span="":::
+ [**Marriage certificate**](#marriage-certificate-model) | Extract certified marriage information.
:::column-end::: :::row-end:::++++ :::moniker-end :::moniker range="<=doc-intel-3.1.0" :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br>
[**Invoice**](#invoice) | Extract customer </br>and vendor details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br>
[**Receipt**](#receipt) | Extract sales </br>transaction details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br>
[**Identity**](#identity-id) | Extract identification </br>and verification details. :::column-end::: :::row-end::: :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br>
[**Health Insurance card**](#health-insurance-card) | Extract health insurance details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-business-card.png" link="#business-card":::</br>
[**Business card**](#business-card) | Extract business contact details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br>
[**Contract**](#contract-model) | Extract agreement</br> and party details. :::column-end::: :::row-end::: :::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-model":::</br>
[**US Tax W-2**](#us-tax-w-2-model) | Extract taxable </br>compensation details. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form":::</br>
- [**US Tax 1098**](#us-tax-1098-form) | Extract mortgage interest details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098-e.png" link="#us-tax-1098-e-form":::</br>
- [**US Tax 1098-E**](#us-tax-1098-e-form) | Extract student loan interest details.
- :::column-end:::
- :::column span="":::
- :::image type="icon" source="media/overview/icon-1098-t.png" link="#us-tax-1098-t-form":::</br>
- [**US Tax 1098-T**](#us-tax-1098-t-form) | Extract qualified tuition details.
+ [**US Tax 1098**](#us-tax-1098-and-variations-forms) | Extract `1098` variation details.
:::column-end::: :::row-end::: :::moniker-end ## Custom models
-* Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases.
-* Standalone custom models can be combined to create composed models.
+Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models.
- :::column:::
- * **Extraction models**</br>
- ✔️ Custom extraction models are trained to extract labeled fields from documents.
- :::column-end:::
+### Document field extraction models
+✔️ Document field extraction models are trained to extract labeled fields from documents.
:::row::: :::column:::
- :::image type="icon" source="media/overview/icon-custom-generative.png" link="#custom-generative":::</br>
- [**Custom generative**](#custom-generative) | Extract data from unstructured documents and structured documents with varying templates.
+ [**Custom generative**](#custom-generative-document-field-extraction) | Build a custom extraction model using generative AI for documents with unstructured format and varying templates.
:::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-custom-neural.png" link="#custom-neural":::</br>
[**Custom neural**](#custom-neural) | Extract data from mixed-type documents. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-custom-template.png" link="#custom-template":::</br>
[**Custom template**](#custom-template) | Extract data from static layouts. :::column-end::: :::column span="":::
- :::image type="icon" source="media/overview/icon-custom-composed.png" link="#custom-composed":::</br>
[**Custom composed**](#custom-composed) | Extract data using a collection of models. :::column-end::: :::row-end:::
- :::column:::
- * **Classification model**</br>
- ✔️ Custom classifiers identify document types before invoking an extraction model.
- :::column-end:::
+### Custom classification models
+✔️ Custom classifiers identify document types before invoking an extraction model.
:::row::: :::column span="":::
- :::image type="icon" source="media/overview/icon-custom-classifier.png" link="#custom-classification-model":::</br>
- [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) </br>before invoking an extraction model.
+ [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) before invoking an extraction model.
:::column-end::: :::row-end:::
Document Intelligence supports optional features that can be enabled and disable
* [`ocr.barcode`](concept-add-on-capabilities.md#barcode-property-extraction)
-Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for`2024-02-29-preview`, `2023-10-31-preview`, and later releases:
+ The `2024-07-31-preview` release introduces `read` model support for [searchable PDF](concept-read.md#searchable-pdf) output:
+
+* [`searchablePDF`](concept-add-on-capabilities.md#searchable-pdf)
+
+Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2023-10-31-preview` and later releases:
* [`queryFields`](concept-add-on-capabilities.md#query-fields)
+* [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs)
+ ## Analysis features [!INCLUDE [model analysis features](includes/model-analysis-features.md)]
You can use Document Intelligence to automate document processing in application
|[**prebuilt-read**](concept-read.md)|&#9679; Extract **text** from documents.</br>&#9679; [Data extraction](concept-read.md#data-extraction)| &#9679; Digitizing any document. </br>&#9679; Compliance and auditing.</br>&#9679; Processing handwritten notes before translation.|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/read)</br>&#9679; [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>&#9679; [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-csharp)</br>&#9679; [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-python)</br>&#9679; [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-java)</br>&#9679; [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-javascript) | > [!div class="nextstepaction"]
-> [Return to model types](#document-analysis-models)
+> [Return to model types](#general-extraction-models)
### Layout
You can use Document Intelligence to automate document processing in application
|[**prebuilt-layout**](concept-layout.md) |&#9679; Extract **text and layout** information from documents.</br>&#9679; [Data extraction](concept-layout.md#data-extraction) |&#9679; Document indexing and retrieval by structure.</br>&#9679; Financial and medical report analysis. |&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/layout)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)| > [!div class="nextstepaction"]
-> [Return to model types](#document-analysis-models)
+> [Return to model types](#general-extraction-models)
::: moniker range="doc-intel-3.1.0 || doc-intel-3.0.0"
You can use Document Intelligence to automate document processing in application
|[**prebuilt-document**](concept-general-document.md)|&#9679; Extract **text,layout, and key-value pairs** from documents.</br>&#9679; [Data and field extraction](concept-general-document.md#data-extraction)|&#9679; Key-value pair extraction.</br>&#9679; Form processing.</br>&#9679; Survey data collection and analysis.|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| > [!div class="nextstepaction"]
-> [Return to model types](#document-analysis-models)
+> [Return to model types](#general-extraction-models)
:::moniker-end ### Invoice
You can use Document Intelligence to automate document processing in application
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
-### US tax 1098 form
+### US tax 1098 (and variations) forms
:::image type="content" source="media/overview/analyze-1098.png" alt-text="Screenshot of US 1098 tax form analyzed in the Document Intelligence Studio."::: | Model ID | Description| Development options | |-|--|-|
-|[**prebuilt-tax.us.1098**](concept-tax-document.md)|Extract mortgage interest information and details. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1098)|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
-
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-
-### US tax 1098-E form
--
-| Model ID | Description |Development options |
-|-|--|-|
-|[**prebuilt-tax.us.1098E**](concept-tax-document.md)|Extract student loan information and details. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1098)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1098&formType=tax.us.1098E)</br>&#9679; </br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-tax.us.1098{`variation`}**](concept-tax-document.md)|&#9679; Extract key information from 1098-form variations.</br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1098)|&#9679; [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
-### US tax 1098-T form
+### US tax 1099 (and variations) forms
| Model ID |Description|Development options | |-|--|--|
-|[**prebuilt-tax.us.1098T**](concept-tax-document.md)|Extract tuition information and details. </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1098)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1098&formType=tax.us.1098T)</br>&#9679; [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)|
+|[**prebuilt-tax.us.1099{`variation`}**](concept-tax-document.md)|&#9679; Extract information from 1099-form variations.</br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1099-nec)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1099)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)
-### US tax 1099 (and variations) forms
+### US tax 1040 (and variations) forms
| Model ID |Description|Development options | |-|--|--|
-|[**prebuilt-tax.us.1099{`variation`}**](concept-tax-document.md)|Extract information from 1099-form variations.|&#9679; </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1099-nec) [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1099)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-tax.us.1040{`variation`}**](concept-tax-document.md)|&#9679; Extract information from 1040-form variations.</br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1040-tax-form)|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1040)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
-> [!div class="nextstepaction"]
-> [Return to model types](#prebuilt-models)
-### US tax 1040 form
-
+### Unified US tax forms
| Model ID |Description|Development options | |-|--|--|
-|**prebuilt-tax.us.1040**|Extract information from 1040-form variations.|&#9679; </br>&#9679; [Data and field extraction](concept-tax-document.md#field-extraction-1040-tax-form) [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1040)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+|[**prebuilt-tax.us**](concept-tax-document.md)|&#9679; Extract information from any of the supported US tax forms.|&#9679; [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1040)</br>&#9679; [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>&#9679; [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>&#9679; [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)|
+ ::: moniker range="<=doc-intel-3.1.0" ### Business card
+ :::image type="content" source="media/overview/analyze-business-card.png" alt-text="Screenshot of Business card model analysis using Document Intelligence Studio.":::
| Model ID | Description |Automation use cases | Development options | |-|--|-|--|
You can use Document Intelligence to automate document processing in application
> [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)+ ### Custom model overview
You can use Document Intelligence to automate document processing in application
> [!div class="nextstepaction"] > [Return to custom model types](#custom-models)
-#### Custom generative
+#### Custom generative (document field extraction)
:::image type="content" source="media/overview/analyze-custom-generative.png" alt-text="Screenshot of Custom generative model analysis using Azure AI Studio.":::
ai-services Service Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/service-limits.md
Document Intelligence billing is calculated monthly based on the model type and
| **Max number of Neural models** | 100 | 500 | | Adjustable | No | No | +
+## Custom model usage
+
+> [!div class="checklist"]
+>
+> * [**Custom template model**](concept-custom-template.md)
+> * [**Custom neural model**](concept-custom-neural.md)
+> * [**Custom generative model**](concept-custom-generative.md)
+> * [**Composed classification models**](concept-custom-classifier.md)
+> * [**Composed custom models**](concept-composed-models.md)
+
+|Quota|Free (F0) <sup>1</sup>|Standard (S0)|
+|--|--|--|
+| **Compose Model limit** | 5 | 500 (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Neural and Generative** | 1 GB <sup>3</sup> | 1 GB (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) * Template** | 500 | 500 (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) * Neural and Generative** | 50,000 | 50,000 (default value) |
+| Adjustable | No | No |
+| **Custom neural model train** | 10 hours per month <sup>5</sup> | no limit (pay by the hour) |
+| Adjustable | No |Yes <sup>3</sup>|
+| **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) |
+| Adjustable | No | No |
+| **Max number of document types (classes) * Classifier** | 500 | 500 (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Classifier** | 1 GB | 2 GB (default value) |
+| Adjustable | No | No |
+| **Min number of samples per class * Classifier** | 5 | 5 (default value) |
+| Adjustable | No | No |
+++
+## Custom model usage
+
+> [!div class="checklist"]
+>
+> * [**Custom template model**](concept-custom-template.md)
+> * [**Custom neural model**](concept-custom-neural.md)
+> * [**Composed classification models**](concept-custom-classifier.md)
+> * [**Composed custom models**](concept-composed-models.md)
+
+|Quota|Free (F0) <sup>1</sup>|Standard (S0)|
+|--|--|--|
+| **Compose Model limit** | 5 | 200 (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Neural** | 1 GB <sup>3</sup> | 1 GB (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) * Template** | 500 | 500 (default value) |
+| Adjustable | No | No |
+| **Max number of pages (Training) * Neural** | 50,000 | 50,000 (default value) |
+| Adjustable | No | No |
+| **Custom neural model train** | 10 per month | 20 per month |
+| Adjustable | No |Yes <sup>3</sup>|
+| **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) |
+| Adjustable | No | No |
+| **Max number of document types (classes) * Classifier** | 500 | 500 (default value) |
+| Adjustable | No | No |
+| **Training dataset size * Classifier** | 1 GB | 1 GB (default value) |
+| Adjustable | No | No |
+| **Min number of samples per class * Classifier** | 5 | 5 (default value) |
+| Adjustable | No | No |
++ ## Custom model usage
Document Intelligence billing is calculated monthly based on the model type and
::: moniker range=">=doc-intel-2.1.0" > <sup>1</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing page](https://azure.microsoft.com/pricing/details/form-recognizer/).</br>
-> <sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice), and [adjustment instructions(#create-and-submit-support-request).</br>
+> <sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice), and [adjustment instructions](#create-and-submit-support-request).</br>
> <sup>3</sup> Neural models training count is reset every calendar month. Open a support request to increase the monthly training limit. ::: moniker-end ::: moniker range=">=doc-intel-3.0.0" > <sup>4</sup> This limit applies to all documents found in your training dataset folder prior to any labeling-related updates. ::: moniker-end
+> <sup>5</sup> This limit applies to `v4.0 (2024-07-31)` custom neural models only. Starting with `v4.0`, we support training larger documents for longer durations (up to 10 hours free, with charges incurred beyond that). For more information, see the [custom neural model page](concept-custom-neural.md).
## Detailed description, Quota adjustment, and best practices
ai-services Use Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md
Along with Azure OpenAI Studio, APIs, and SDKs, you can use the available standa
## Important considerations -- Publishing creates an Azure App Service instance in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) that you select. When you're done with your app, you can delete it from the Azure portal.-- GPT-4 Turbo with Vision models are not supported.
+- Publishing creates an Azure App Service instance in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) that you select. When finished with your app, you can delete it from the Azure portal.
+- GPT-4 Turbo with Vision models aren't supported.
- By default, the app is deployed with the Microsoft identity provider already configured. The identity provider restricts access to the app to members of your Azure tenant. To add or modify authentication: 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name that you specified during publishing. Select the web app, and then select **Authentication** on the left menu. Then select **Add identity provider**.
Along with Azure OpenAI Studio, APIs, and SDKs, you can use the available standa
1. Select Microsoft as the identity provider. The default settings on this page restrict the app to your tenant only, so you don't need to change anything else here. Select **Add**.
- Now users will be asked to sign in with their Microsoft Entra account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that the user is a member of your tenant.
+ Now users are asked to sign in with their Microsoft Entra account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that the user is a member of your tenant.
## Web app customization
You can customize the app's front-end and back-end logic. The app provides sever
When you're customizing the app, we recommend: -- Resetting the chat session (clear chat) if users change any settings. Notify the users that their chat history will be lost.--- Clearly communicating how each setting that you implement will affect the user experience.
+- Clearly communicating how each setting that you implement affects the user experience.
- Updating the app settings for each of your deployed apps to use new API keys after you rotate keys for your Azure OpenAI or Azure AI Search resource.
After you turn on chat history, your users can show and hide it in the upper-rig
## Deleting your Cosmos DB instance
-Deleting your web app does not delete your Cosmos DB instance automatically. To delete your Cosmos DB instance along with all stored chats, you need to go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option turned on in the studio, your users are notified of a connection error but can continue to use the web app without access to the chat history.
+Deleting your web app doesn't delete your Cosmos DB instance automatically. To delete your Cosmos DB instance along with all stored chats, go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option selected on subsequent updates from the Azure OpenAI Studio, the application notifies the user of a connection error. However, the user can continue to use the web app without access to the chat history.
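If you prefer scripting the cleanup, a CLI sketch with placeholder names:

```bash
# Sketch: delete the Cosmos DB account that stores chat history (placeholder names).
az cosmosdb delete --name <your-cosmosdb-account> --resource-group <your-resource-group>
```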
+
+## Enabling Microsoft Entra ID authentication between services
+
+To enable Microsoft Entra ID for intra-service authentication for your web app, follow these steps.
+
+### Enable managed identity on your Azure OpenAI resource and Azure App Service
+
+You can enable managed identity for the Azure OpenAI resource and the Azure App Service by navigating to "Identity" and turning on the system-assigned managed identity in the Azure portal for each resource.
+++
+> [!NOTE]
+> If you're using an embedding model deployed to the same resource used for inference, you only need to enable managed identity on one Azure OpenAI resource. If using an embedding model deployed to a different resource from the one used for inference, you also need to enable managed identity on the Azure OpenAI resource used to deploy your embedding model.
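For reference, the same toggles can be scripted; a sketch with placeholder resource names:

```bash
# Sketch: enable the system-assigned managed identity on the web app.
az webapp identity assign --name <your-web-app> --resource-group <your-resource-group>

# Sketch: enable the system-assigned managed identity on the Azure OpenAI resource.
az cognitiveservices account identity assign --name <your-openai-resource> --resource-group <your-resource-group>
```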
+
+### Enable role-based access control (RBAC) on your Azure Search resource (optional)
+
+If you're using On Your Data with Azure Search, follow this step.
+
+To enable your Azure OpenAI resource to access your Azure Search resource, you need to enable role-based access control on your Azure Search resource. Learn more about [enabling RBAC roles](../../../search/search-security-enable-roles.md) for your resources.
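A CLI sketch with placeholder names (`aadOrApiKey` keeps key-based access working alongside Microsoft Entra ID while you migrate):

```bash
# Sketch: allow both Microsoft Entra ID and API-key authentication on the search service.
az search service update --name <your-search-service> --resource-group <your-resource-group> \
  --auth-options aadOrApiKey --aad-auth-failure-mode http401WithBearerChallenge
```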
+
+### Assign RBAC roles to enable intra-service communication
+
+The following table summarizes the RBAC role assignments needed for all Azure resources associated with your application.
+
+| Role | Assignee | Resource |
+| -- | | - |
+| `Search Index Data Reader` | Azure OpenAI (Inference) | Azure AI Search |
+| `Search Service Contributor` | Azure OpenAI (Inference) | Azure AI Search |
+| `Cognitive Services OpenAI User` | Web app | Azure OpenAI (Inference) |
+| `Cognitive Services OpenAI User` | Azure OpenAI (Inference) | Azure OpenAI (Embeddings) |
+
+To assign these roles, follow [these instructions](../../../role-based-access-control/role-assignments-portal.yml) to create the needed role assignments.
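As an illustration, the first row of the table could be created from the CLI as follows (placeholder IDs; the assignee is the principal ID of the Azure OpenAI resource's managed identity):

```bash
# Sketch: grant the Azure OpenAI (inference) identity read access to search index data.
az role assignment create \
  --assignee <openai-managed-identity-principal-id> \
  --role "Search Index Data Reader" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Search/searchServices/<your-search-service>"
```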
+
+### App settings changes
+
+In the web app's application settings, navigate to "Environment Variables" and make the following changes:
+
+* Remove the environment variable `AZURE_OPENAI_KEY`, as it's no longer needed.
+* If you're using On Your Data with Azure Search and Microsoft Entra ID authentication between Azure OpenAI and Azure Search, also delete the `AZURE_SEARCH_KEY` environment variable for the data source access key.
+
+If you're using an embedding model deployed to the same resource as the model used for inference, no other settings changes are required.
+
+However, if you're using an embedding model deployed to a different resource, make the following extra changes to your app's environment variables:
+* Set the `AZURE_OPENAI_EMBEDDING_ENDPOINT` variable to the full API path of the embedding API for the resource you're using for embeddings, for example, `https://<your embedding AOAI resource name>.openai.azure.com/openai/deployments/<your embedding deployment name>/embeddings`.
+* Delete the `AZURE_OPENAI_EMBEDDING_KEY` variable to use Microsoft Entra ID authentication.
+
+Once all of the environment variable changes are complete, restart the web app to begin using Microsoft Entra ID authentication between services. Settings changes can take a few minutes to take effect after the restart.
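The same environment-variable changes can be scripted; a sketch with placeholder names:

```bash
# Sketch: remove key-based settings replaced by Microsoft Entra ID authentication.
az webapp config appsettings delete --name <your-web-app> --resource-group <your-resource-group> \
  --setting-names AZURE_OPENAI_KEY AZURE_SEARCH_KEY AZURE_OPENAI_EMBEDDING_KEY

# Sketch: if the embedding model lives on a different resource, point to it explicitly.
az webapp config appsettings set --name <your-web-app> --resource-group <your-resource-group> \
  --settings AZURE_OPENAI_EMBEDDING_ENDPOINT="https://<your-embedding-resource>.openai.azure.com/openai/deployments/<deployment>/embeddings"

# Restart so the new settings take effect.
az webapp restart --name <your-web-app> --resource-group <your-resource-group>
```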
## Related content
ai-studio Rbac Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md
The following table is an example of how to set up role-based access control for
| | | | | IT admin | Owner of the hub | The IT admin can ensure the hub is set up to their enterprise standards. They can assign managers the Contributor role on the resource if they want to enable managers to make new hubs. Or they can assign managers the Azure AI Developer role on the resource to not allow for new hub creation. | | Managers | Contributor or Azure AI Developer on the hub | Managers can manage the hub, audit compute resources, audit connections, and create shared connections. |
-| Team lead/Lead developer | Azure AI Developer on the hub | Lead developers can create projects for their team and create shared resources (ex: compute and connections) at the hub level. After project creation, project owners can invite other members. |
+| Team lead/Lead developer | Azure AI Developer on the hub | Lead developers can create projects for their team and create shared resources (such as compute and connections) at the hub level. After project creation, project owners can invite other members. |
| Team members/developers | Contributor or Azure AI Developer on the project | Developers can build and deploy AI models within a project and create assets that enable development such as computes and connections. | ## Access to resources created outside of the hub
ai-studio Deploy Models Cohere Command https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-command.md
The response is as follows:
```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
``` ```console
To visualize the output, define a helper function to print the stream.
```python def print_stream(result): """
- Prints the chat completion with streaming. Some delay is added to simulate
- a real-time conversation.
+ Prints the chat completion with streaming.
""" import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")
- time.sleep(0.05)
``` You can visualize how streaming generates content:
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
ai-studio Deploy Models Jais https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jais.md
JAIS 30b Chat is an autoregressive bilingual LLM for **Arabic** & **English**.
::: zone pivot="programming-language-python"
+## Jais chat models
+ You can learn more about the models in their respective model card:
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
except HttpResponseError as ex:
::: zone pivot="programming-language-javascript"
+## Jais chat models
+ You can learn more about the models in their respective model card:
catch (error) {
::: zone pivot="programming-language-csharp"
+## Jais chat models
+ You can learn more about the models in their respective model card:
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
catch (RequestFailedException ex)
::: zone pivot="programming-language-rest"
+## Jais chat models
+ You can learn more about the models in their respective model card:
ai-studio Deploy Models Jamba https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jamba.md
The Jamba-Instruct model is AI21's production-grade Mamba-based large language m
::: zone pivot="programming-language-python"
+## Jamba-Instruct chat models
+ You can learn more about the models in their respective model card:
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
except HttpResponseError as ex:
::: zone pivot="programming-language-javascript"
+## Jamba-Instruct chat models
+ You can learn more about the models in their respective model card:
catch (error) {
::: zone pivot="programming-language-csharp"
+## Jamba-Instruct chat models
+ You can learn more about the models in their respective model card:
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
catch (RequestFailedException ex)
::: zone pivot="programming-language-rest"
+## Jamba-Instruct chat models
+ You can learn more about the models in their respective model card:
ai-studio Deploy Models Llama https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
ai-studio Deploy Models Mistral Nemo https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral-nemo.md
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
ai-studio Deploy Models Mistral Open https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral-open.md
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
ai-studio Deploy Models Mistral https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
ai-studio Deploy Models Phi 3 Vision https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-phi-3-vision.md
The Phi-3 family of small language models (SLMs) is a collection of instruction-
::: zone pivot="programming-language-python"
+## Phi-3 chat models with vision
+ Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets that include synthetic data and filtered publicly available websites, with a focus on very high-quality, reasoning-dense data in both text and vision. The model belongs to the Phi-3 model family, and the multimodal version supports a context length of 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
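As an illustrative sketch only (not code from the article), a multimodal request against a deployed endpoint could look like the following with the `azure-ai-inference` package; the environment variable names and image URL are placeholders:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import (
    ImageContentItem,
    ImageUrl,
    TextContentItem,
    UserMessage,
)
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],  # placeholder
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

# One user turn combining text with an image, within the 128K-token context.
response = client.complete(
    messages=[
        UserMessage(
            content=[
                TextContentItem(text="Describe what this image shows."),
                ImageContentItem(image_url=ImageUrl(url="https://example.com/chart.png")),
            ]
        )
    ],
)
print(response.choices[0].message.content)
```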
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
Usage:
::: zone pivot="programming-language-javascript"
+## Phi-3 chat models with vision
+ Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets that include synthetic data and filtered publicly available websites, with a focus on very high-quality, reasoning-dense data in both text and vision. The model belongs to the Phi-3 model family, and the multimodal version supports a context length of 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
Usage:
::: zone pivot="programming-language-csharp"
+## Phi-3 chat models with vision
+ Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets that include synthetic data and filtered publicly available websites, with a focus on very high-quality, reasoning-dense data in both text and vision. The model belongs to the Phi-3 model family, and the multimodal version supports a context length of 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
Usage:
::: zone pivot="programming-language-rest"
+## Phi-3 chat models with vision
+ Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets that include synthetic data and filtered publicly available websites, with a focus on very high-quality, reasoning-dense data in both text and vision. The model belongs to the Phi-3 model family, and the multimodal version supports a context length of 128K tokens. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization, to ensure precise instruction adherence and robust safety measures.
ai-studio Deploy Models Phi 3 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-phi-3.md
The response is as follows:
```python
print("Model name:", model_info.model_name)
print("Model type:", model_info.model_type)
-print("Model provider name:", model_info.model_provider)
+print("Model provider name:", model_info.model_provider_name)
```
```console
To visualize the output, define a helper function to print the stream.
```python
def print_stream(result):
    """
-    Prints the chat completion with streaming. Some delay is added to simulate
-    a real-time conversation.
+    Prints the chat completion with streaming.
    """
    import time
    for update in result:
        if update.choices:
            print(update.choices[0].delta.content, end="")
-        time.sleep(0.05)
```
You can visualize how streaming generates content:
catch (RequestFailedException ex)
{
    if (ex.ErrorCode == "content_filter")
    {
-        Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}");
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
    }
    else
    {
ai-studio Deploy Models Serverless https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless.md
This article uses a Meta Llama model deployment for illustration. However, you c
) ```
+ # [Bicep](#tab/bicep)
+
+ Install the Azure CLI as described at [Azure CLI](/cli/azure/).
+
+ Configure the following environment variables according to your settings:
+
+ ```azurecli
+ RESOURCE_GROUP="serverless-models-dev"
+ LOCATION="eastus2"
+ ```
+ # [ARM](#tab/arm) You can use any compatible web browser to [deploy ARM templates](../../azure-resource-manager/templates/deploy-portal.md) in the Microsoft Azure portal or use any of the deployment tools. This tutorial uses the [Azure CLI](/cli/azure/).
The next section covers the steps for subscribing your project to a model offeri
Serverless API endpoints can deploy both Microsoft and non-Microsoft offered models. For Microsoft models (such as Phi-3 models), you don't need to create an Azure Marketplace subscription and you can [deploy them to serverless API endpoints directly](#deploy-the-model-to-a-serverless-api-endpoint) to consume their predictions. For non-Microsoft models, you need to create the subscription first. If it's your first time deploying the model in the project, you have to subscribe your project for the particular model offering from the Azure Marketplace. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending.
+> [!TIP]
+> Skip this step if you're deploying models from the Phi-3 family of models. Instead, directly [deploy the model to a serverless API endpoint](#deploy-the-model-to-a-serverless-api-endpoint).
+ > [!NOTE] > Models offered through the Azure Marketplace are available for deployment to serverless API endpoints in specific regions. Check [Model and region availability for Serverless API deployments](deploy-models-serverless-availability.md) to verify which models and regions are available. If the one you need is not listed, you can deploy to a workspace in a supported region and then [consume serverless API endpoints from a different workspace](deploy-models-serverless-connect.md).
Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod
).result() ```
+ # [Bicep](#tab/bicep)
+
+ Use the following Bicep configuration to create a model subscription:
+
+ __model-subscription.bicep__
+
+ ```bicep
+ param projectName string = 'my-project'
+ param modelId string = 'azureml://registries/azureml-meta/models/Meta-Llama-3-8B-Instruct'
+
+ var modelName = substring(modelId, (lastIndexOf(modelId, '/') + 1))
+ var subscriptionName = '${modelName}-subscription'
+
+ resource projectName_subscription 'Microsoft.MachineLearningServices/workspaces/marketplaceSubscriptions@2024-04-01-preview' = if (!startsWith(
+ modelId,
+ 'azureml://registries/azureml/'
+ )) {
+ name: '${projectName}/${subscriptionName}'
+ properties: {
+ modelId: modelId
+ }
+ }
+ ```
+
+ Then create the resource as follows:
+
+ ```azurecli
+ az deployment group create \
+     --resource-group $RESOURCE_GROUP \
+     --template-file model-subscription.bicep
+ ```
+ # [ARM](#tab/arm) Use the following template to create a model subscription:
- __template.json__
+ __model-subscription.json__
```json {
Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod
} ```
+ Use the Azure portal or the Azure CLI to create the deployment.
+
+ ```azurecli
+ az deployment group create \
+     --resource-group $RESOURCE_GROUP \
+     --template-file model-subscription.json
+ ```
+ 1. Once you subscribe the project for the particular Azure Marketplace offering, subsequent deployments of the same offering in the same project don't require subscribing again. 1. At any point, you can see the model offers to which your project is currently subscribed:
Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod
print(sub.as_dict()) ```
+ # [Bicep](#tab/bicep)
+
+ You can use the resource management tools to query the resources. The following code uses Azure CLI:
+
+ ```azurecli
+ az resource list \
+ --query "[?type=='Microsoft.SaaS']"
+ ```
+ # [ARM](#tab/arm) You can use the resource management tools to query the resources. The following code uses Azure CLI:
In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
).result() ```
+ # [Bicep](#tab/bicep)
+
+ Use the following template to create an endpoint:
+
+ __serverless-endpoint.bicep__
+
+ ```bicep
+ param projectName string = 'my-project'
+ param endpointName string = 'myserverless-text-1234ss'
+ param location string = resourceGroup().location
+ param modelId string = 'azureml://registries/azureml-meta/models/Meta-Llama-3-8B-Instruct'
+
+ var modelName = substring(modelId, (lastIndexOf(modelId, '/') + 1))
+ var subscriptionName = '${modelName}-subscription'
+
+ resource projectName_endpoint 'Microsoft.MachineLearningServices/workspaces/serverlessEndpoints@2024-04-01-preview' = {
+ name: '${projectName}/${endpointName}'
+ location: location
+ sku: {
+ name: 'Consumption'
+ }
+ properties: {
+ modelSettings: {
+ modelId: modelId
+ }
+ }
+ dependsOn: [
+ projectName_subscription
+ ]
+ }
+
+ output endpointUri string = projectName_endpoint.properties.inferenceEndpoint.uri
+ ```
+
+ Create the deployment as follows:
+
+ ```azurecli
+ az deployment group create \
+     --resource-group $RESOURCE_GROUP \
+     --template-file serverless-endpoint.bicep
+ ```
+ # [ARM](#tab/arm) Use the following template to create an endpoint:
In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
```azurecli
az deployment group create \
-  --name model-subscription-deployment \
-  --resource-group <resource-group> \
+  --resource-group $RESOURCE_GROUP \
   --template-file template.json
```
In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
).result() ```
+ # [Bicep](#tab/bicep)
+
+ You can use the resource management tools to query the resources. The following code uses Azure CLI:
+
+ ```azurecli
+ az resource list \
+ --query "[?type=='Microsoft.MachineLearningServices/workspaces/serverlessEndpoints']"
+ ```
+ # [ARM](#tab/arm) You can use the resource management tools to query the resources. The following code uses Azure CLI:
In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**.
print(endpoint_keys.secondary_key) ```
+ # [Bicep](#tab/bicep)
+
+ Use REST APIs to query this information.
+ # [ARM](#tab/arm) Use REST APIs to query this information.
Models deployed in Azure Machine Learning and Azure AI studio in Serverless API
Read more about the [capabilities of this API](../reference/reference-model-inference-api.md#capabilities) and how [you can use it when building applications](../reference/reference-model-inference-api.md#getting-started).
+## Network isolation
+
+Endpoints for models deployed as Serverless APIs follow the public network access (PNA) flag setting of the AI Studio hub that contains the project in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your AI Studio hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub.
+
+To set the PNA flag for the Azure AI hub:
+
+1. Go to the [Azure portal](https://portal.azure.com).
+2. Search for the Resource group to which the hub belongs, and select your Azure AI hub from the resources listed for this Resource group.
+3. On the hub **Overview** page, use the left navigation pane to go to **Settings** > **Networking**.
+4. Under the **Public access** tab, you can configure settings for the public network access flag.
+5. Save your changes. Your changes might take up to five minutes to propagate.
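The same flag can also be set programmatically. Here's a hedged sketch using the `azure-ai-ml` package, assuming the hub is a workspace resource under the hood; all names are placeholders:

```python
from azure.ai.ml import MLClient
from azure.identity import DefaultAzureCredential

ml_client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",     # placeholder
    resource_group_name="<resource-group>",  # placeholder
)

# Fetch the hub workspace, flip the PNA flag, and apply the update.
hub = ml_client.workspaces.get("<hub-name>")  # placeholder
hub.public_network_access = "Disabled"
ml_client.workspaces.begin_update(hub).result()
```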
+ ## Delete endpoints and subscriptions

You can delete model subscriptions and endpoints. Deleting a model subscription causes any associated endpoint to become *Unhealthy* and unusable.
To delete the associated model subscription:
client.marketplace_subscriptions.begin_delete(subscription_name).wait() ```
+# [Bicep](#tab/bicep)
+
+You can use the resource management tools to manage the resources. The following code uses Azure CLI:
+
+```azurecli
+az resource delete --name <resource-name>
+```
+
 # [ARM](#tab/arm) You can use the resource management tools to manage the resources. The following code uses Azure CLI:
ai-studio Sdk Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop/sdk-overview.md
Title: How to get started with Azure AI SDKs
-description: This article provides instructions on how to get started with Azure AI SDKs.
+description: This article provides an overview of available Azure AI SDKs.
- build-2024
-Previously updated : 5/21/2024
+Last updated : 8/9/2024

# Overview of the Azure AI SDKs

Microsoft offers a variety of packages that you can use for building generative AI applications in the cloud. In most applications, you need to use a combination of packages to manage and use various Azure services that provide AI functionality. We also offer integrations with open-source libraries like LangChain and mlflow for use with Azure. In this article, we'll give an overview of the main services and SDKs you can use with Azure AI Studio. For building generative AI applications, we recommend using the following services and SDKs:
- * [Azure Machine Learning](../../../machine-learning/overview-what-is-azure-machine-learning.md) for the hub and project infrastructure used in AI Studio to organize your work into projects, manage project artifacts (data, evaluation runs, traces), fine-tune & deploy models, and connect to external services and resources
- * [Azure AI Services](../../../ai-services/what-are-ai-services.md) provides pre-built and customizable intelligent APIs and models, with support for Azure OpenAI, Search, Speech, Vision, and Language
+ * [Azure Machine Learning](../../../machine-learning/overview-what-is-azure-machine-learning.md) for the hub and project infrastructure used in AI Studio to organize your work into projects, manage project artifacts (data, evaluation runs, traces), fine-tune & deploy models, and connect to external services and resources.
+ * [Azure AI services](../../../ai-services/what-are-ai-services.md) provides pre-built and customizable intelligent APIs and models, with support for Azure OpenAI, Azure AI Search, Speech, Vision, and Language.
* [Prompt flow](https://microsoft.github.io/promptflow/https://docsupdatetracker.net/index.html) for developer tools to streamline the end-to-end development cycle of LLM-based AI application, with support for inferencing, indexing, evaluation, deployment, and monitoring. For each of these, there are separate sets of management libraries and client libraries. ## Management libraries for creating and managing cloud resources
-Azure [Management libraries](/azure/developer/python/sdk/azure-sdk-overview#create-and-manage-azure-resources-with-management-libraries) (also "control plane" or "management plane"), for creating and managing cloud resources that are used by your application.
+Azure [management libraries](/azure/developer/python/sdk/azure-sdk-overview#create-and-manage-azure-resources-with-management-libraries) (also "control plane" or "management plane"), for creating and managing cloud resources that are used by your application.
Azure Machine Learning
* [Azure Machine Learning Python SDK (v2)](/python/api/overview/azure/ai-ml-readme)
* [Azure Machine Learning CLI (v2)](/azure/machine-learning/how-to-configure-cli?view=azureml-api-2&tabs=public)
* [Azure Machine Learning REST API](/rest/api/azureml)
-Azure AI Services
+Azure AI services
* [Azure AI Services Python Management Library](/python/api/overview/azure/mgmt-cognitiveservices-readme?view=azure-python)
* [Azure AI Search Python Management Library](/python/api/azure-mgmt-search/azure.mgmt.search?view=azure-python)
* [Azure CLI commands for Azure AI Search](/azure/search/search-manage-azure-cli)
Prompt flow
Azure [Client libraries](/azure/developer/python/sdk/azure-sdk-overview#connect-to-and-use-azure-resources-with-client-libraries) (also called "data plane") for connecting to and using provisioned services from runtime application code.
-Azure AI Services
+Azure AI services
* [Azure AI services SDKs](../../../ai-services/reference/sdk-package-resources.md?context=/azure/ai-studio/context/context)
* [Azure AI services REST APIs](../../../ai-services/reference/rest-api-resources.md?context=/azure/ai-studio/context/context)
ai-studio Flow Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md
If you enable this, tracing data and system metrics during inference time (such
## Grant permissions to the endpoint > [!IMPORTANT]
-> Granting permissions (adding role assignment) is only enabled to the **Owner** of the specific Azure resources. You might need to ask your IT admin for help.
+> Granting permissions (adding role assignment) is only enabled to the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help.
> > It's recommended to grant roles to the **user-assigned** identity **before the deployment creation**. > It might take more than 15 minutes for the granted permission to take effect.
-You can grant all permissions in Azure portal UI by following steps.
+You can grant the required permissions in the Azure portal UI by following these steps.
1. Go to the Azure AI Studio project overview page in [Azure portal](https://ms.portal.azure.com/#home).
ai-studio Model Catalog Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md
Model | Managed compute | Serverless API (pay-as-you-go)
--|--|--
Llama family models | Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat <br> Llama-3-8B-Instruct <br> Llama-3-70B-Instruct <br> Llama-3-8B <br> Llama-3-70B | Llama-3-70B-Instruct <br> Llama-3-8B-Instruct <br> Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat
Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Mistral-NeMo
-Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual
+Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual <br> Cohere-rerank-v3-english <br> Cohere-rerank-v3-multilingual
JAIS | Not available | jais-30b-chat
Phi-3 family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct
Nixtla | Not available | TimeGEN-1
ai-studio Get Started Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/get-started-code.md
Title: Get started building a chat app using the prompt flow SDK
-description: This article provides instructions on how to set up your development environment for Azure AI SDKs.
+description: This article provides instructions on how to build a custom chat app in Python using the prompt flow SDK.
-Previously updated : 5/30/2024
+Last updated : 8/6/2024

# Build a custom chat app in Python using the prompt flow SDK

+[!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)]

In this quickstart, we walk you through setting up your local development environment with the prompt flow SDK. We write a prompt, run it as part of your app code, trace the LLM calls being made, and run a basic evaluation on the outputs of the LLM.

## Prerequisites
+> [!IMPORTANT]
+> You must have the necessary permissions to add role assignments for storage accounts in your Azure subscription. Granting permissions (adding role assignment) is only allowed by the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help to [grant access to call Azure OpenAI Service using your identity](#grant-access-to-call-azure-openai-service-using-your-identity).
+ Before you can follow this quickstart, create the resources that you need for your application:

- An [AI Studio hub](../how-to/create-azure-ai-resource.md) for connecting to external resources.
- A [project](../how-to/create-projects.md) for organizing your project artifacts and sharing traces and evaluation runs.
Before you can follow this quickstart, create the resources that you need for yo
Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already. You can also create these resources by following the [SDK guide to create a hub and project](../how-to/develop/create-hub-project-sdk.md) article.
-Also, you must have the necessary permissions to add role assignments for storage accounts in your Azure subscription. Granting permissions (adding role assignment) is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help to [grant access to call Azure OpenAI Service using your identity](#grant-access-to-call-azure-openai-service-using-your-identity).
- ## Grant access to call Azure OpenAI Service using your identity To use security best practices, instead of API keys we use [Microsoft Entra ID](/entra/fundamentals/whatis) to authenticate with Azure OpenAI using your user identity.
To grant yourself access to the Azure AI Services resource that you're using:
1. Continue through the wizard and select **Review + assign** to add the role assignment.
-## Install the Azure CLI and login
+## Install the Azure CLI and sign in
-Now we install the Azure CLI and login from your local development environment, so that you can use your user credentials to call the Azure OpenAI service.
+You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call the Azure OpenAI service.
In most cases you can install the Azure CLI from your terminal using the following command: # [Windows](#tab/windows)
brew update && brew install azure-cli
You can follow instructions [How to install the Azure CLI](/cli/azure/install-azure-cli) if these commands don't work for your particular operating system or setup.
-After you install the Azure CLI, login using the ``az login`` command and sign-in using the browser:
+After you install the Azure CLI, sign in using the ``az login`` command and complete the sign-in in your browser:
``` az login ```
source .venv/bin/activate
-Activating the Python environment means that when you run ```python``` or ```pip``` from the command line, you'll be using the Python interpreter contained in the ```.venv``` folder of your application.
+Activating the Python environment means that when you run ```python``` or ```pip``` from the command line, you then use the Python interpreter contained in the ```.venv``` folder of your application.
> [!NOTE] > You can use the ```deactivate``` command to exit the python virtual environment, and can later reactivate it when needed.
Your AI services endpoint and deployment name are required to call the Azure Ope
## Create a basic chat prompt and app
-First create a prompt template file, for this we'll use **Prompty** which is the prompt template format supported by prompt flow.
+First create a **Prompty** file, which is the prompt template format supported by prompt flow.
Create a ```chat.prompty``` file and copy the following code into it:
For more information on how to use prompt flow evaluators, including how to make
## Next step > [!div class="nextstepaction"]
-> [Augment the model with data for retrieval augmented generation (RAG)](../tutorials/copilot-sdk-build-rag.md)
+> [Add data and use retrieval augmented generation (RAG) to build a copilot](../tutorials/copilot-sdk-build-rag.md)
ai-studio Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-api.md
Models deployed to [managed inference](../concepts/deployments-overview.md):
> [!div class="checklist"] > * [Meta Llama 3 instruct](../how-to/deploy-models-llama.md) family of models > * [Phi-3](../how-to/deploy-models-phi-3.md) family of models
-> * Mixtral famility of models
+> * [Mistral](../how-to/deploy-models-mistral-open.md) and [Mixtral](../how-to/deploy-models-mistral-open.md?tabs=mistral-8x7B-instruct) family of models.
The API is compatible with Azure OpenAI model deployments.
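For comparison with the language tabs below, a minimal Python sketch with the `azure-ai-inference` package might look like this; the environment variable names are assumptions:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.core.credentials import AzureKeyCredential

# Key-based authentication against a serverless or managed compute endpoint.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)
```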
const client = new ModelClient(
Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.
+# [C#](#tab/csharp)
+
+Install the Azure AI inference library with the following command:
+
+```dotnetcli
+dotnet add package Azure.AI.Inference --prerelease
+```
+
+For endpoints with support for Microsoft Entra ID (formerly Azure Active Directory), install the `Azure.Identity` package:
+
+```dotnetcli
+dotnet add package Azure.Identity
+```
+
+Import the following namespaces:
+
+```csharp
+using Azure;
+using Azure.Identity;
+using Azure.AI.Inference;
+```
+
+Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+
+```csharp
+ChatCompletionsClient client = new ChatCompletionsClient(
+ new Uri(Environment.GetEnvironmentVariable("AZURE_INFERENCE_ENDPOINT")),
+ new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_INFERENCE_CREDENTIAL"))
+);
+```
+
+For endpoints with support for Microsoft Entra ID (formerly Azure Active Directory):
+
+```csharp
+ChatCompletionsClient client = new ChatCompletionsClient(
+ new Uri(Environment.GetEnvironmentVariable("AZURE_INFERENCE_ENDPOINT")),
+ new DefaultAzureCredential(includeInteractiveCredentials: true)
+);
+```
+
+Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get yourself started.
+ # [REST](#tab/rest) Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
var response = await client.path("/chat/completions").post({
console.log(response.choices[0].message.content) ```
+# [C#](#tab/csharp)
+
+```csharp
+requestOptions = new ChatCompletionsOptions()
+{
+ Messages = {
+ new ChatRequestSystemMessage("You are a helpful assistant."),
+ new ChatRequestUserMessage("How many languages are in the world?")
+ },
+ AdditionalProperties = { { "logprobs", BinaryData.FromString("true") } },
+};
+
+response = client.Complete(requestOptions, extraParams: ExtraParameters.PassThrough);
+Console.WriteLine($"Response: {response.Value.Choices[0].Message.Content}");
+```
+ # [REST](#tab/rest) __Request__
catch (error) {
} ```
+# [C#](#tab/csharp)
+
+```csharp
+try
+{
+ requestOptions = new ChatCompletionsOptions()
+ {
+ Messages = {
+ new ChatRequestSystemMessage("You are a helpful assistant"),
+ new ChatRequestUserMessage("How many languages are in the world?"),
+ },
+ ResponseFormat = new ChatCompletionsResponseFormatJSON()
+ };
+
+ response = client.Complete(requestOptions);
+ Console.WriteLine(response.Value.Choices[0].Message.Content);
+}
+catch (RequestFailedException ex)
+{
+ if (ex.Status == 422)
+ {
+ Console.WriteLine($"Looks like the model doesn't support a parameter: {ex.Message}");
+ }
+ else
+ {
+ throw;
+ }
+}
+```
+ # [REST](#tab/rest) __Request__
catch (error) {
} ```
+# [C#](#tab/csharp)
+
+```csharp
+try
+{
+ requestOptions = new ChatCompletionsOptions()
+ {
+ Messages = {
+ new ChatRequestSystemMessage("You are an AI assistant that helps people find information."),
+ new ChatRequestUserMessage(
+ "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."
+ ),
+ },
+ };
+
+ response = client.Complete(requestOptions);
+ Console.WriteLine(response.Value.Choices[0].Message.Content);
+}
+catch (RequestFailedException ex)
+{
+ if (ex.ErrorCode == "content_filter")
+ {
+        Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}");
+ }
+ else
+ {
+ throw;
+ }
+}
+```
+ # [REST](#tab/rest) __Request__
The client library `@azure-rest/ai-inference` does inference, including chat com
Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.
+# [C#](#tab/csharp)
+
+The client library `Azure.AI.Inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+
+Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get yourself started.
+ # [REST](#tab/rest) Explore the reference section of the Azure AI model inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
ai-studio Reference Model Inference Info https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-info.md
The information about the deployed model.
### ModelType
-The infernce task associated with the mode.
+The inference task associated with the model.
| Name | Type | Description |
ai-studio Copilot Sdk Build Rag https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/copilot-sdk-build-rag.md
description: Learn how to build a RAG-based copilot using the prompt flow SDK.
Previously updated : 7/18/2024 Last updated : 8/6/2024
In this [Azure AI Studio](https://ai.azure.com) tutorial, you use the prompt flo
This tutorial is part one of a two-part tutorial. > [!TIP]
-> This tutorial is based on code in the sample repo for a [copilot application that implements RAG](https://github.com/Azure-Samples/rag-data-openai-python-promptflow).
+> Be sure to set aside enough time to complete the prerequisites before starting this tutorial. If you're new to Azure AI Studio, you might need to spend additional time to get familiar with the platform.
-This part one shows you how to enhance a basic chat application by adding retrieval augmented generation (RAG) to ground the responses in your custom data.
+This part one shows you how to enhance a basic chat application by adding [retrieval augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) to ground the responses in your custom data.
In this part one, you learn how to:
In this part one, you learn how to:
## Prerequisites
+> [!IMPORTANT]
+> You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help with completing the [assign access](#configure-access-for-the-azure-ai-search-service) section.
- You need to complete the [Build a custom chat app in Python using the prompt flow SDK quickstart](../quickstarts/get-started-code.md) to set up your environment.

  > [!IMPORTANT]
  > This tutorial builds on the code and environment you set up in the quickstart.

-- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Clone the repository or [download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
--- You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help with completing the [assign access](#configure-access-for-the-azure-ai-search-service) section.
+- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
## Application code structure
AZURE_OPENAI_CONNECTION_NAME=<your AIServices or Azure OpenAI connection name>
## Deploy an embedding model
-For the RAG capability, we need to be able to embed the search query to search the Azure AI Search index we create.
+For the [retrieval augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) capability, we need to be able to embed the search query to search the Azure AI Search index we create.
1. Deploy an Azure OpenAI embedding model. Follow the [deploy Azure OpenAI models guide](../how-to/deploy-models-openai.md) and deploy the **text-embedding-ada-002** model. Use the same **AIServices** or **Azure OpenAI** connection that you used [to deploy the chat model](../quickstarts/get-started-playground.md#deploy-a-chat-model). 2. Add embedding model environment variables in your *.env* file. For the *AZURE_OPENAI_EMBEDDING_DEPLOYMENT* value, enter the name of the embedding model that you deployed.
For the RAG capability, we need to be able to embed the search query to search t
AZURE_OPENAI_EMBEDDING_DEPLOYMENT=embedding_model_deployment_name ```
+For more information about the embedding model, see the [Azure OpenAI Service embeddings documentation](../../ai-services/openai/how-to/embeddings.md).
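As a quick illustration (a sketch, not part of the tutorial's code), embedding a query with the deployment configured above could look like this, reusing the keyless authentication pattern from the quickstart; the endpoint variable name is an assumption:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed to be set
    azure_ad_token_provider=get_bearer_token_provider(
        DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
    ),
    api_version="2024-02-01",
)

# Embed a sample search query with the deployment name from the .env file.
response = client.embeddings.create(
    model=os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"],
    input="What hiking shoes are waterproof?",
)
print(len(response.data[0].embedding))  # text-embedding-ada-002 -> 1536 dims
```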
+ ## Create an Azure AI Search index

The goal with this RAG-based application is to ground the model responses in your custom data. You use an Azure AI Search index that stores vectorized data from the embeddings model. The search index is used to retrieve relevant documents based on the user's question.
The goal with this RAG-based application is to ground the model responses in you
You need an Azure AI Search service and connection in order to create a search index. > [!NOTE]
-> Creating an Azure AI Search service and subsequent search indexes has associated costs. You can see details about pricing and pricing tiers for the Azure AI Search service on the creation page, to confirm cost before creating the resource.
+> Creating an [Azure AI Search service](../../search/index.yml) and subsequent search indexes has associated costs. You can see details about pricing and pricing tiers for the Azure AI Search service on the creation page, to confirm cost before creating the resource.
### Create an Azure AI Search service
Otherwise, you can create an Azure AI Search service using the [Azure portal](ht
## [Azure CLI](#tab/cli) 1. Open a terminal on your local machine.
-1. Type `az` and then enter to verify that the Azure CLI tool is installed. If it's installed, a help menu with `az` commands appears. If you get an error, make sure you followed the [steps for installing the Azure CLI in the quickstart](../quickstarts/get-started-code.md#install-the-azure-cli-and-login).
+1. Type `az` and then press Enter to verify that the Azure CLI tool is installed. If it's installed, a help menu with `az` commands appears. If you get an error, make sure you followed the [steps for installing the Azure CLI in the quickstart](../quickstarts/get-started-code.md#install-the-azure-cli-and-sign-in).
1. Follow the steps to create an Azure AI Search service using the [`az search service create`](../../search/search-manage-azure-cli.md#create-or-delete-a-service) command.
ai-studio Copilot Sdk Evaluate Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/copilot-sdk-evaluate-deploy.md
description: Evaluate and deploy a RAG-based copilot with the prompt flow SDK. T
Previously updated : 7/18/2024 Last updated : 8/6/2024
In this [Azure AI Studio](https://ai.azure.com) tutorial, you use the prompt flo
This tutorial is part two of a two-part tutorial.
-> [!TIP]
-> This tutorial is based on code in the sample repo for a [copilot application that implements RAG](https://github.com/Azure-Samples/rag-data-openai-python-promptflow).
- In this part two, you learn how to: > [!div class="checklist"]
In this part two, you learn how to:
- You must complete [part 1 of the tutorial series](copilot-sdk-build-rag.md) to build the copilot application.
-- You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help with completing the [assign access](#assign-access-for-the-endpoint) section.
+- You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help with endpoint access later in the tutorial.
## Evaluate the quality of copilot responses
Now define an evaluation script that will:
- Load the sample `.jsonl` dataset.
- Generate a target function wrapper around our copilot logic.
- Run the evaluation, which takes the target function, and merges the evaluation dataset with the responses from the copilot.
-- Generate a set of GPT-assisted metrics (Relevance, Groundedness, and Coherence) to evaluate the quality of the copilot responses.
+- Generate a set of GPT-assisted metrics (relevance, groundedness, and coherence) to evaluate the quality of the copilot responses.
- Output the results locally, and log the results to the cloud project.

The script allows you to review the results locally, by outputting the results in the command line, and to a JSON file.
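In outline, such a script can be assembled with the preview `promptflow-evals` package. The following is a hedged sketch with placeholder names and a placeholder dataset path, not the tutorial's actual script:

```python
from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluate import evaluate
from promptflow.evals.evaluators import (
    CoherenceEvaluator,
    GroundednessEvaluator,
    RelevanceEvaluator,
)

# The judge model used to score the copilot's answers (placeholders).
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="<your-endpoint>",
    azure_deployment="<your-chat-deployment>",
)

def copilot_target(question: str) -> dict:
    """Placeholder wrapper around the copilot logic built in part 1."""
    return {"answer": "...", "context": "..."}

result = evaluate(
    data="eval_dataset.jsonl",  # placeholder path to the sample dataset
    target=copilot_target,
    evaluators={
        "relevance": RelevanceEvaluator(model_config),
        "groundedness": GroundednessEvaluator(model_config),
        "coherence": CoherenceEvaluator(model_config),
    },
)
print(result["metrics"])  # review locally; results also log to the cloud project
```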
We recommend you test your application in the Azure AI Studio. If you prefer to
Note your endpoint name, which you need for the next steps.
-### Assign access for the endpoint
-
-While you wait for your application to deploy, you or your administrator can assign role-based access to the endpoint. These roles allow the application to run without keys in the deployed environment, just like it did locally.
+### Endpoint access for Azure OpenAI resource
-Previously, you provided your account with a specific role to be able to access the resource using Microsoft Entra ID authentication. Now, assign the endpoint that same role.
+You might need to ask your Azure subscription owner (who might be your IT admin) for help with this section.
-### Endpoint access for Azure OpenAI resource
+While you wait for your application to deploy, you or your administrator can assign role-based access to the endpoint. These roles allow the application to run without keys in the deployed environment, just like it did locally.
-You or your administrator needs to grant your endpoint the **Cognitive Services OpenAI User** role on the Azure AI Services resource that you're using. This role lets your endpoint call the Azure OpenAI service.
+Previously, you provided your account with a specific role to be able to access the resource using Microsoft Entra ID authentication. Now, assign the endpoint that same **Cognitive Services OpenAI User** role.
> [!NOTE] > These steps are similar to how you assigned a role for your user identity to use the Azure OpenAI Service in the [quickstart](../quickstarts/get-started-code.md).
To grant yourself access to the Azure AI Services resource that you're using:
### Endpoint access for Azure AI Search resource
+You might need to ask your Azure subscription owner (who might be your IT admin) for help with this section.
+ Similar to how you assigned the **Search Index Data Contributor** [role to your Azure AI Search service](./copilot-sdk-build-rag.md#configure-access-for-the-azure-ai-search-service), you need to assign the same role for your endpoint. 1. In Azure AI Studio, select **Settings** and navigate to the connected **Azure AI Search** service.
To avoid incurring unnecessary Azure costs, you should delete the resources you
## Related content
-> [!div class="nextstepaction"]
-> [Learn more about prompt flow](../how-to/prompt-flow.md)
+- [Learn more about prompt flow](../how-to/prompt-flow.md)
+- For a sample copilot application that implements RAG, see [Azure-Samples/rag-data-openai-python-promptflow](https://github.com/Azure-Samples/rag-data-openai-python-promptflow)
ai-studio Deploy Chat Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md
Your data source is used to help ground the model with specific data. Grounding
The steps in this tutorial are:
-1. Deploy and test a chat model without your data
-1. Add your data
-1. Test the model with your data
-1. Deploy your web app
-
+1. Deploy and test a chat model without your data.
+1. Add your data.
+1. Test the model with your data.
+1. Deploy your web app.
## Prerequisites

- An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>.
- An [AI Studio hub](../how-to/create-azure-ai-resource.md), [project](../how-to/create-projects.md), and [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already.
-- An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data.
+- An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product data.
-- You need at least one file to upload that contains example data. To complete this tutorial, use the product information samples from the [Azure-Samples/aistudio-python-quickstart-sample repository on GitHub](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data). Specifically, the [product_info_11.md](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/blob/main/dat` on your local computer.
+- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Specifically, the `product_info_11.md` file contains product information about the TrailWalker hiking shoes that's relevant for this tutorial example. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
## Add your data and try the chat model again
Once you're satisfied with the experience in Azure AI Studio, you can deploy the
### Find your resource group in the Azure portal
-In this tutorial, your web app is deployed to the same resource group as your AI Studio hub. Later you configure authentication for the web app in the Azure portal.
+In this tutorial, your web app is deployed to the same resource group as your [AI Studio hub](../how-to/create-secure-ai-hub.md). Later you configure authentication for the web app in the Azure portal.
Follow these steps to navigate from Azure AI Studio to your resource group in the Azure portal:
Follow these steps to navigate from Azure AI Studio to your resource group in th
:::image type="content" source="../media/tutorials/chat/resource-group-manage-page.png" alt-text="Screenshot of the resource group in the Azure AI Studio." lightbox="../media/tutorials/chat/resource-group-manage-page.png":::
-1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the hub. Keep this page open in a browser tab - you return to it later.
+1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the hub. Keep this page open in a browser tab. You return to it later.
### Deploy the web app
You're almost there! Now you can test the web app.
*If the authentication settings haven't yet taken effect, close the browser tab for your web app and return to the chat playground in Azure AI Studio. Then wait a little longer and try again.*
-1. In your web app, you can ask the same question as before ("How much are the TrailWalker hiking shoes"), and this time it uses information from your data to construct the response. You can expand the **references** button to see the data that was used.
+1. In your web app, you can ask the same question as before ("How much are the TrailWalker hiking shoes"), and this time it uses information from your data to construct the response. You can expand the **reference** button to see the data that was used.
:::image type="content" source="../media/tutorials/chat/chat-with-data-web-app.png" alt-text="Screenshot of the chat experience via the deployed web app." lightbox="../media/tutorials/chat/chat-with-data-web-app.png":::
Once you've enabled chat history, your users will be able to show and hide it in
If you delete the Cosmos DB resource but keep the chat history option enabled on the studio, your users will be notified of a connection error, but can continue to use the web app without access to the chat history.
-## Next steps
+## Related content
-- [Create a project in Azure AI Studio](../how-to/create-projects.md).-- Learn more about what you can do in the [Azure AI Studio](../what-is-ai-studio.md).
+- [Build and deploy a question and answer copilot with prompt flow in Azure AI Studio](./deploy-copilot-ai-studio.md)
+- [Build your own copilot with the prompt flow SDK](./copilot-sdk-build-rag.md)
ai-studio Deploy Copilot Ai Studio https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md
The steps in this tutorial are:
- An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data. -- You need a local copy of product and customer data. The [Azure-Samples/aistudio-python-quickstart-sample repository on GitHub](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data) contains sample retail customer and product information that's relevant for this tutorial scenario. Clone the repository or copy the files from [1-customer-info](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data/1-customer-info) and [3-product-info](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data/3-product-info).
+- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Specifically, the `product_info_11.md` file contains product information about the TrailWalker hiking shoes that's relevant for this tutorial example. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.
## Add your data and try the chat model again
You can return to the prompt flow anytime by selecting **Prompt flow** from **To
## Customize prompt flow with multiple data sources
-previously in the [AI Studio](https://ai.azure.com) chat playground, you [added your data](#add-your-data-and-try-the-chat-model-again) to create one search index that contained product data for the Contoso copilot. So far, users can only inquire about products with questions such as "How much do the TrailWalker hiking shoes cost?". But they can't get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" To enable this scenario, we add another index with customer information to the flow.
+Previously in the [AI Studio](https://ai.azure.com) chat playground, you [added your data](#add-your-data-and-try-the-chat-model-again) to create one search index that contained product data for the Contoso copilot. So far, users can only inquire about products with questions such as "How much do the TrailWalker hiking shoes cost?". But they can't get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" To enable this scenario, we add another index with customer information to the flow.
### Create the customer info index
ai-studio Screen Reader https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md
- ignite-2023 - build-2024 Previously updated : 5/21/2024 Last updated : 8/9/2024
# Quickstart: Get started using AI Studio with a screen reader - This article is for people who use screen readers such as [Microsoft's Narrator](https://support.microsoft.com/windows/complete-guide-to-narrator-e4397a0d-ef4f-b386-d8ae-c172f109bdb1#WindowsVersion=Windows_11), JAWS, NVDA, or Apple's VoiceOver. In this quickstart, you'll be introduced to the basic structure of Azure AI Studio and discover how to navigate around efficiently. ## Getting oriented in Azure AI Studio
Once you have created or selected a project, go to the navigation landmark. Pres
The prompt flow UI in Azure AI Studio is composed of the following main sections: the command toolbar, flow (includes list of the flow nodes), files, and graph view. The flow, files, and graph sections each have their own H2 headings that can be used for navigation. - ### Flow - This is the main working area where you can edit your flow, for example adding a new node, editing the prompt, selecting input data
api-management Api Management Policies https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md
More information about policies:
| [Set usage quota by subscription](quota-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. | Yes | Yes | Yes | Yes |
| [Set usage quota by key](quota-by-key-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. | Yes | No | No | Yes |
| [Limit concurrency](limit-concurrency-policy.md) | Prevents enclosed policies from executing by more than the specified number of requests at a time. | Yes | Yes | Yes | Yes |
-| [Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md) | Prevents Azure OpenAI API usage spikes by limiting language model tokens per calculated key. | Yes | Yes | No | No |
+| [Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md) | Prevents Azure OpenAI API usage spikes by limiting large language model tokens per calculated key. | Yes | Yes | No | No |
+| [Limit large language model API token usage](llm-token-limit-policy.md) | Prevents large language model (LLM) API usage spikes by limiting LLM tokens per calculated key. | Yes | Yes | No | No |
## Authentication and authorization
More information about policies:
| [Get value from cache](cache-lookup-value-policy.md) | Retrieves a cached item by key. | Yes | Yes | Yes | Yes |
| [Store value in cache](cache-store-value-policy.md) | Stores an item in the cache by key. | Yes | Yes | Yes | Yes |
| [Remove value from cache](cache-remove-value-policy.md) | Removes an item in the cache by key. | Yes | Yes | Yes | Yes |
-| [Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md) | Performs cache lookup using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
+| [Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md) | Performs lookup in Azure OpenAI API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
| [Store responses of Azure OpenAI API requests to cache](azure-openai-semantic-cache-store-policy.md) | Caches response according to the Azure OpenAI API cache configuration. | Yes | Yes | Yes | Yes |
+| [Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md) | Performs lookup in large language model API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes |
+| [Store responses of large language model API requests to cache](llm-semantic-cache-store-policy.md) | Caches response according to the large language model API cache configuration. | Yes | Yes | Yes | Yes |
More information about policies:
|---|---|---|---|---|---|
| [Trace](trace-policy.md) | Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs. | Yes | Yes<sup>1</sup> | Yes | Yes |
| [Emit metrics](emit-metric-policy.md) | Sends custom metrics to Application Insights at execution. | Yes | Yes | Yes | Yes |
-| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | No |
+| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | No |
+| [Emit large language model API token metrics](llm-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | No |
<sup>1</sup> In the V2 gateway, the `trace` policy currently does not add tracing output in the test console.
api-management Azure Openai Enable Semantic Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-enable-semantic-caching.md
Enable semantic caching of responses to Azure OpenAI API requests to reduce bandwidth and processing requirements imposed on the backend APIs and lower latency perceived by API consumers. With semantic caching, you can return cached responses for identical prompts and also for prompts that are similar in meaning, even if the text isn't the same. For background, see [Tutorial: Use Azure Cache for Redis as a semantic cache](../azure-cache-for-redis/cache-tutorial-semantic-cache.md).
+> [!NOTE]
+> The configuration steps in this article enable semantic caching for Azure OpenAI APIs. These steps can be generalized to enable semantic caching for corresponding large language model (LLM) APIs available through the [Azure AI Model Inference API](../ai-studio/reference/reference-model-inference-api.md).
+
## Prerequisites

* One or more Azure OpenAI Service APIs must be added to your API Management instance. For more information, see [Add an Azure OpenAI Service API to Azure API Management](azure-openai-api-from-specification.md).
with request body:
When the request succeeds, the response includes a completion for the chat message.
-## Create a backend for Embeddings API
+## Create a backend for embeddings API
-Configure a [backend](backends.md) resource for the Embeddings API deployment with the following settings:
+Configure a [backend](backends.md) resource for the embeddings API deployment with the following settings:
* **Name** - A name of your choice, such as `embeddings-backend`. You use this name to reference the backend in policies.
* **Type** - Select **Custom URL**.
-* **Runtime URL** - The URL of the Embeddings API deployment in the Azure OpenAI Service, similar to:
+* **Runtime URL** - The URL of the embeddings API deployment in the Azure OpenAI Service, similar to:
```
https://my-aoai.openai.azure.com/openai/deployments/embeddings-deployment/embeddings
```
If the request is successful, the response includes a vector representation of t
Configure the following policies to enable semantic caching for Azure OpenAI APIs in Azure API Management:

* In the **Inbound processing** section for the API, add the [azure-openai-semantic-cache-lookup](azure-openai-semantic-cache-lookup-policy.md) policy. In the `embeddings-backend-id` attribute, specify the embeddings API backend you created.
+ > [!NOTE]
+ > When enabling semantic caching for other large language model APIs, use the [llm-semantic-cache-lookup](llm-semantic-cache-lookup-policy.md) policy instead.
+ Example: ```xml
Configure the following policies to enable semantic caching for Azure OpenAI API
* In the **Outbound processing** section for the API, add the [azure-openai-semantic-cache-store](azure-openai-semantic-cache-store-policy.md) policy.
+ > [!NOTE]
+ > When enabling semantic caching for other large language model APIs, use the [llm-semantic-cache-store](llm-semantic-cache-store-policy.md) policy instead.
+ Example: ```xml
api-management Azure Openai Token Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-token-limit-policy.md
In the following example, the token limit of 5000 per minute is keyed by the cal
## Related policies * [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)
+* [llm-token-limit](llm-token-limit-policy.md) policy
* [azure-openai-emit-token-metric](azure-openai-emit-token-metric-policy.md) policy [!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)]
api-management Llm Emit Token Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-emit-token-metric-policy.md
+
+ Title: Azure API Management policy reference - llm-emit-token-metric
+description: Reference for the llm-emit-token-metric policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++ Last updated : 08/08/2024+++++
+# Emit metrics for consumption of large language model tokens
++
+The `llm-emit-token-metric` policy sends metrics to Application Insights about consumption of large language model (LLM) tokens through LLM APIs. Token count metrics include: Total Tokens, Prompt Tokens, and Completion Tokens.
+
+> [!NOTE]
+> Currently, this policy is in preview.
++++
+## Prerequisites
+
+* One or more LLM APIs must be added to your API Management instance.
+* Your API Management instance must be integrated with Application Insights. For more information, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md#create-a-connection-using-the-azure-portal).
+* Enable Application Insights logging for your LLM APIs.
+* Enable custom metrics with dimensions in Application Insights. For more information, see [Emit custom metrics](api-management-howto-app-insights.md#emit-custom-metrics).
+
+## Policy statement
+
+```xml
+<llm-emit-token-metric
+ namespace="metric namespace" >
+ <dimension name="dimension name" value="dimension value" />
+ ...additional dimensions...
+</llm-emit-token-metric>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default value |
+| --------- | ----------- | -------- | ------------- |
+| namespace | A string. Namespace of metric. Policy expressions aren't allowed. | No | API Management |
+| value | Value of metric expressed as a double. Policy expressions are allowed. | No | 1 |
++
+## Elements
+
+| Element | Description | Required |
+| ------- | ----------- | -------- |
+| dimension | Add one or more of these elements for each dimension included in the metric. | Yes |
+
+### dimension attributes
+
+| Attribute | Description | Required | Default value |
+| --------- | ----------- | -------- | ------------- |
+| name | A string or policy expression. Name of dimension. | Yes | N/A |
+| value | A string or policy expression. Value of dimension. Can only be omitted if `name` matches one of the default dimensions. If so, value is provided as per dimension name. | No | N/A |
+
+ ### Default dimension names that may be used without value
+
+* API ID
+* Operation ID
+* Product ID
+* User ID
+* Subscription ID
+* Location
+* Gateway ID
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace
+
+### Usage notes
+
+* This policy can be used multiple times per policy definition.
+* You can configure at most 10 custom dimensions for this policy.
+* Where available, values in the usage section of the response from the LLM API are used to determine token metrics.
+* Certain LLM endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, token metrics are estimated.
+
+## Example
+
+The following example sends LLM token count metrics to Application Insights along with User ID, Client IP, and API ID as dimensions.
+
+```xml
+<policies>
+ <inbound>
+ <llm-emit-token-metric
+ namespace="MyLLM">
+ <dimension name="User ID" />
+ <dimension name="Client IP" value="@(context.Request.IpAddress)" />
+ <dimension name="API ID" />
+ </llm-emit-token-metric>
+ </inbound>
+ <outbound>
+ </outbound>
+</policies>
+```
+
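+As a lighter variant (a sketch; the namespace and dimension choices are illustrative), default dimensions can be listed without a `value` attribute, in which case the policy supplies each value based on the dimension name:
+
+```xml
+<llm-emit-token-metric namespace="llm-metrics">
+    <!-- Default dimensions: values are filled in automatically -->
+    <dimension name="Subscription ID" />
+    <dimension name="Operation ID" />
+    <dimension name="Gateway ID" />
+</llm-emit-token-metric>
+```
+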
+## Related policies
+
+* [Logging](api-management-policies.md#logging)
+* [emit-metric](emit-metric-policy.md) policy
+* [azure-openai-emit-token-metric](azure-openai-emit-token-metric-policy.md) policy
+* [llm-token-limit](llm-token-limit-policy.md) policy
+
api-management Llm Semantic Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-semantic-cache-lookup-policy.md
+
+ Title: Azure API Management policy reference - llm-semantic-cache-lookup | Microsoft Docs
+description: Reference for the llm-semantic-cache-lookup policy available for use in Azure API Management. Provides policy usage, settings, and examples.
++++++
+ - build-2024
+ Last updated : 08/07/2024+++
+# Get cached responses of large language model API requests
++
+Use the `llm-semantic-cache-lookup` policy to perform cache lookup of responses to large language model (LLM) API requests from a configured external cache, based on vector proximity of the prompt to previous requests and a specified similarity score threshold. Response caching reduces bandwidth and processing requirements imposed on the backend LLM API and lowers latency perceived by API consumers.
+
+> [!NOTE]
+> * This policy must have a corresponding [Cache responses to large language model API requests](llm-semantic-cache-store-policy.md) policy.
+> * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for Azure OpenAI APIs in Azure API Management](azure-openai-enable-semantic-caching.md).
+> * Currently, this policy is in preview.
++
+## Policy statement
+
+```xml
+<llm-semantic-cache-lookup
+ score-threshold="similarity score threshold"
+ embeddings-backend-id="backend entity ID for embeddings API"
+ embeddings-backend-auth="system-assigned"
+ ignore-system-messages="true | false"
+ max-message-count="count" >
+ <vary-by>"expression to partition caching"</vary-by>
+</llm-semantic-cache-lookup>
+```
+
+## Attributes
+
+| Attribute | Description | Required | Default |
+| --------- | ----------- | -------- | ------- |
+| score-threshold | Similarity score threshold used to determine whether to return a cached response to a prompt. Value is a decimal between 0.0 and 1.0. [Learn more](../azure-cache-for-redis/cache-tutorial-semantic-cache.md#change-the-similarity-threshold). | Yes | N/A |
+| embeddings-backend-id | [Backend](backends.md) ID for OpenAI embeddings API call. | Yes | N/A |
+| embeddings-backend-auth | Authentication used for Azure OpenAI embeddings API backend. | Yes. Must be set to `system-assigned`. | N/A |
+| ignore-system-messages | Boolean. If set to `true`, removes system messages from a GPT chat completion prompt before assessing cache similarity. | No | false |
+| max-message-count | If specified, number of remaining dialog messages after which caching is skipped. | No | N/A |
+
+## Elements
+
+|Name|Description|Required|
+|-|--|--|
+|vary-by| A custom expression determined at runtime whose value partitions caching. If multiple `vary-by` elements are added, values are concatenated to create a unique combination. | No |
+
+## Usage
++
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) v2
+
+### Usage notes
+
+- This policy can only be used once in a policy section.
++
+## Examples
+
+### Example with corresponding llm-semantic-cache-store policy
++
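+For example, a minimal sketch of the policy pair (attribute values such as the backend name and score threshold are illustrative): the lookup runs in the inbound section and returns a cached response when a prior prompt falls within the similarity threshold, and the corresponding store policy caches new responses in the outbound section.
+
+```xml
+<policies>
+    <inbound>
+        <base />
+        <llm-semantic-cache-lookup
+            score-threshold="0.05"
+            embeddings-backend-id="embeddings-backend"
+            embeddings-backend-auth="system-assigned">
+            <!-- Partition the cache per subscription -->
+            <vary-by>@(context.Subscription.Id)</vary-by>
+        </llm-semantic-cache-lookup>
+    </inbound>
+    <outbound>
+        <!-- Cache new responses for 60 seconds -->
+        <llm-semantic-cache-store duration="60" />
+        <base />
+    </outbound>
+</policies>
+```
+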
+## Related policies
+
+* [Caching](api-management-policies.md#caching)
+* [llm-semantic-cache-store](llm-semantic-cache-store-policy.md)
+
api-management Llm Semantic Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-semantic-cache-store-policy.md
+
+ Title: Azure API Management policy reference - llm-semantic-cache-store
+description: Reference for the llm-semantic-cache-store policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++++ Last updated : 08/08/2024+++
+# Cache responses to large language model API requests
++
+The `llm-semantic-cache-store` policy caches responses to chat completion API and completion API requests to a configured external cache. Response caching reduces bandwidth and processing requirements imposed on the backend large language model (LLM) API and lowers latency perceived by API consumers.
+
+> [!NOTE]
+> * This policy must have a corresponding [Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md) policy.
+> * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for Azure OpenAI APIs in Azure API Management](azure-openai-enable-semantic-caching.md).
+> * Currently, this policy is in preview.
++
+## Policy statement
+
+```xml
+<llm-semantic-cache-store duration="seconds"/>
+```
++
+## Attributes
+
+| Attribute | Description | Required | Default |
+| --------- | ----------- | -------- | ------- |
+| duration | Time-to-live of the cached entries, specified in seconds. Policy expressions are allowed. | Yes | N/A |
++
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) v2
+
+### Usage notes
+
+- This policy can only be used once in a policy section.
+- If the cache lookup fails, the API call that uses the cache-related operation doesn't raise an error; the request continues to be processed normally.
+
+## Examples
+
+### Example with corresponding llm-semantic-cache-lookup policy
++
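+For example, a minimal sketch (duration and backend values are illustrative): the store policy in the outbound section caches each response with a 60-second time-to-live, so the paired lookup policy in the inbound section can serve it for similar prompts.
+
+```xml
+<policies>
+    <inbound>
+        <base />
+        <llm-semantic-cache-lookup
+            score-threshold="0.05"
+            embeddings-backend-id="embeddings-backend"
+            embeddings-backend-auth="system-assigned" />
+    </inbound>
+    <outbound>
+        <!-- Time-to-live for cached responses, in seconds -->
+        <llm-semantic-cache-store duration="60" />
+        <base />
+    </outbound>
+</policies>
+```
+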
+## Related policies
+
+* [Caching](api-management-policies.md#caching)
+* [llm-semantic-cache-lookup](llm-semantic-cache-lookup-policy.md)
+
api-management Llm Token Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-token-limit-policy.md
+
+ Title: Azure API Management policy reference - llm-token-limit
+description: Reference for the llm-token-limit policy available for use in Azure API Management. Provides policy usage, settings, and examples.
+++++++ Last updated : 08/08/2024+++
+# Limit large language model API token usage
++
+The `llm-token-limit` policy prevents large language model (LLM) API usage spikes on a per key basis by limiting consumption of LLM tokens to a specified number per minute. When the token usage is exceeded, the caller receives a `429 Too Many Requests` response status code.
+
+By relying on token usage metrics returned from the LLM endpoint, the policy can accurately monitor and enforce limits in real time. The policy also enables precalculation of prompt tokens by API Management, minimizing unnecessary requests to the LLM backend if the limit is already exceeded.
+
+> [!NOTE]
+> Currently, this policy is in preview.
+++
+## Policy statement
+
+```xml
+<llm-token-limit counter-key="key value"
+ tokens-per-minute="number"
+ estimate-prompt-tokens="true | false"
+ retry-after-header-name="custom header name, replaces default 'Retry-After'"
+ retry-after-variable-name="policy expression variable name"
+ remaining-tokens-header-name="header name"
+ remaining-tokens-variable-name="policy expression variable name"
+ tokens-consumed-header-name="header name"
+ tokens-consumed-variable-name="policy expression variable name" />
+```
+## Attributes
+
+| Attribute | Description | Required | Default |
+| -- | -- | -- | - |
+| counter-key | The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed.| Yes | N/A |
+| tokens-per-minute | The maximum number of tokens consumed by prompt and completion per minute. | Yes | N/A |
+| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: <br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance. <br> - `false`: don't estimate prompt tokens. <br><br>When set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the response of the model. This could result in prompts being sent to the model that exceed the token limit. In that case, the overage is detected in the response, and the policy blocks all subsequent requests until the token limit frees up again. | Yes | N/A |
+| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` |
+| retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | N/A |
+| remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed.| No | N/A |
+| remaining-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed.| No | N/A |
+| tokens-consumed-header-name | The name of a response header whose value is the number of tokens consumed by both prompt and completion. The header is added to response only after the response is received from backend. Policy expressions aren't allowed.| No | N/A |
+| tokens-consumed-variable-name | The name of a variable initialized to the estimated number of tokens in the prompt in the `backend` section of the pipeline if `estimate-prompt-tokens` is `true`, and zero otherwise. The variable is updated with the reported count upon receiving the response in the `outbound` section.| No | N/A |
+
+## Usage
+
+- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound
+- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation
+- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted, workspace
+
+### Usage notes
+
+* This policy can be used multiple times per policy definition.
+* Where available, when `estimate-prompt-tokens` is set to `false`, values in the usage section of the response from the LLM API are used to determine token usage.
+* Certain LLM endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute.
+* [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)]
+
+## Example
+
+In the following example, the token limit of 5000 per minute is keyed by the caller IP address. The policy doesn't estimate the number of tokens required for a prompt. After each policy execution, the remaining tokens allowed for that caller IP address in the time period are stored in the variable `remainingTokens`.
+
+```xml
+<policies>
+ <inbound>
+ <base />
+ <llm-token-limit
+ counter-key="@(context.Request.IpAddress)"
+ tokens-per-minute="5000"
+ estimate-prompt-tokens="false"
+ remaining-tokens-variable-name="remainingTokens" />
+ </inbound>
+ <outbound>
+ <base />
+ </outbound>
+</policies>
+```
+
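+A variant sketch (the key and header names are illustrative) keys the counter by subscription ID, estimates prompt tokens up front, and surfaces consumption to callers through response headers:
+
+```xml
+<llm-token-limit
+    counter-key="@(context.Subscription.Id)"
+    tokens-per-minute="10000"
+    estimate-prompt-tokens="true"
+    remaining-tokens-header-name="x-remaining-tokens"
+    tokens-consumed-header-name="x-tokens-consumed" />
+```
+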
+## Related policies
+
+* [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)
+* [azure-openai-token-limit](azure-openai-token-limit-policy.md) policy
+* [llm-emit-token-metric](llm-emit-token-metric-policy.md) policy
+
app-service Configure Authentication Api Version https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md
The following steps will allow you to manually migrate the application to the V2
* Microsoft Entra: `clientSecret` * Google: `googleClientSecret` * Facebook: `facebookAppSecret`
- * Twitter: `twitterConsumerSecret`
+ * X: `twitterConsumerSecret`
* Microsoft Account: `microsoftAccountClientSecret` > [!IMPORTANT]
The following steps will allow you to manually migrate the application to the V2
# For Web Apps, Google example az webapp config appsettings set -g <group_name> -n <site_name> --slot-settings GOOGLE_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step>
- # For Azure Functions, Twitter example
+ # For Azure Functions, X example
az functionapp config appsettings set -g <group_name> -n <site_name> --slot-settings TWITTER_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step> ```
The following steps will allow you to manually migrate the application to the V2
* Microsoft Entra: `clientSecretSettingName` * Google: `googleClientSecretSettingName` * Facebook: `facebookAppSecretSettingName`
- * Twitter: `twitterConsumerSecretSettingName`
+ * X: `twitterConsumerSecretSettingName`
* Microsoft Account: `microsoftAccountClientSecretSettingName` An example file after this operation might look similar to the following, in this case only configured for Microsoft Entra ID:
app-service Configure Authentication Customize Sign In Out https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md
This article shows you how to customize user sign-ins and sign-outs while using
## Use multiple sign-in providers
-The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and Twitter). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows:
+The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and X). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows:
First, in the **Authentication / Authorization** page in the Azure portal, configure each of the identity providers you want to enable.
In the sign-in page, or the navigation bar, or any other location of your app, a
<a href="/.auth/login/aad">Log in with Microsoft Entra</a>
<a href="/.auth/login/facebook">Log in with Facebook</a>
<a href="/.auth/login/google">Log in with Google</a>
-<a href="/.auth/login/twitter">Log in with Twitter</a>
+<a href="/.auth/login/x">Log in with X</a>
<a href="/.auth/login/apple">Log in with Apple</a> ```
app-service Configure Authentication Oauth Tokens https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-oauth-tokens.md
From your server code, the provider-specific tokens are injected into the reques
| Microsoft Entra | `X-MS-TOKEN-AAD-ID-TOKEN` <br/> `X-MS-TOKEN-AAD-ACCESS-TOKEN` <br/> `X-MS-TOKEN-AAD-EXPIRES-ON` <br/> `X-MS-TOKEN-AAD-REFRESH-TOKEN` |
| Facebook Token | `X-MS-TOKEN-FACEBOOK-ACCESS-TOKEN` <br/> `X-MS-TOKEN-FACEBOOK-EXPIRES-ON` |
| Google | `X-MS-TOKEN-GOOGLE-ID-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-ACCESS-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-EXPIRES-ON` <br/> `X-MS-TOKEN-GOOGLE-REFRESH-TOKEN` |
-| Twitter | `X-MS-TOKEN-TWITTER-ACCESS-TOKEN` <br/> `X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET` |
+| X | `X-MS-TOKEN-TWITTER-ACCESS-TOKEN` <br/> `X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET` |
||| > [!NOTE]
When your provider's access token (not the [session token](#extend-session-token
- **Google**: Append an `access_type=offline` query string parameter to your `/.auth/login/google` API call. For more information, see [Google Refresh Tokens](https://developers.google.com/identity/protocols/OpenIDConnect#refresh-tokens). - **Facebook**: Doesn't provide refresh tokens. Long-lived tokens expire in 60 days (see [Facebook Expiration and Extension of Access Tokens](https://developers.facebook.com/docs/facebook-login/access-tokens/expiration-and-extension)).-- **Twitter**: Access tokens don't expire (see [Twitter OAuth FAQ](https://developer.twitter.com/en/docs/authentication/faq)).
+- **X**: Access tokens don't expire (see [OAuth FAQ](https://developer.x.com/en/docs/authentication/faq)).
- **Microsoft**: In [https://resources.azure.com](https://resources.azure.com), do the following steps: 1. At the top of the page, select **Read/Write**. 2. In the left browser, navigate to **subscriptions** > **_\<subscription\_name>_** > **resourceGroups** > **_\<resource\_group\_name>_** > **providers** > **Microsoft.Web** > **sites** > **_\<app\_name>_** > **config** > **authsettingsV2**.
app-service Configure Authentication Provider Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-twitter.md
Title: Configure Twitter authentication
-description: Learn how to configure Twitter authentication as an identity provider for your App Service or Azure Functions app.
+ Title: Configure X authentication
+description: Learn how to configure X authentication as an identity provider for your App Service or Azure Functions app.
ms.assetid: c6dc91d7-30f6-448c-9f2d-8e91104cde73 Last updated 03/29/2021
-# Configure your App Service or Azure Functions app to use Twitter login
+# Configure your App Service or Azure Functions app to use X login
[!INCLUDE [app-service-mobile-selector-authentication](../../includes/app-service-mobile-selector-authentication.md)]
-This article shows how to configure Azure App Service or Azure Functions to use Twitter as an authentication provider.
+This article shows how to configure Azure App Service or Azure Functions to use X as an authentication provider.
-To complete the procedure in this article, you need a Twitter account that has a verified email address and phone number. To create a new Twitter account, go to [twitter.com].
+To complete the procedure in this article, you need an X account that has a verified email address and phone number. To create a new X account, go to [x.com].
-## <a name="register"> </a>Register your application with Twitter
+## <a name="register"> </a>Register your application with X
-1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your Twitter app.
-1. Go to the [Twitter Developers] website, sign in with your Twitter account credentials, and select **Create an app**.
-1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your App Service app and append the path `/.auth/login/twitter/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/twitter/callback`.
+1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your X app.
+1. Go to the [X Developers] website, sign in with your X account credentials, and select **Create an app**.
+1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your App Service app and append the path `/.auth/login/x/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/x/callback`.
1. At the bottom of the page, type at least 100 characters in **Tell us how this app will be used**, then select **Create**. Click **Create** again in the pop-up. The application details are displayed.
1. Select the **Keys and Access Tokens** tab.
To complete the procedure in this article, you need a Twitter account that has a
> [!IMPORTANT] > The API secret key is an important security credential. Do not share this secret with anyone or distribute it with your app.
-## <a name="secrets"> </a>Add Twitter information to your application
+## <a name="secrets"> </a>Add X information to your application
1. Sign in to the [Azure portal] and navigate to your app.
1. Select **Authentication** in the menu on the left. Click **Add identity provider**.
To complete the procedure in this article, you need a Twitter account that has a
1. Click **Add**.
-You're now ready to use Twitter for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+You're now ready to use X for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
## <a name="related-content"> </a>Next steps
You're now ready to use Twitter for authentication in your app. The provider wil
<!-- URLs. -->
-[Twitter Developers]: https://go.microsoft.com/fwlink/p/?LinkId=268300
-[twitter.com]: https://go.microsoft.com/fwlink/p/?LinkID=268287
+[X Developers]: https://go.microsoft.com/fwlink/p/?LinkId=268300
+[x.com]: https://go.microsoft.com/fwlink/p/?LinkID=268287
[Azure portal]: https://portal.azure.com/ [xamarin]: ../app-services-mobile-app-xamarin-ios-get-started-users.md
app-service Configure Language Dotnetcore https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md
For more information, see [Configure ASP.NET Core to work with proxy servers and
::: zone pivot="platform-linux"
+## Rewrite or redirect URL
+
+To rewrite or redirect a URL, use the [URL rewriting middleware in ASP.NET Core](/aspnet/core/fundamentals/url-rewriting).
+ ## Open SSH session in browser [!INCLUDE [Open SSH session in browser](../../includes/app-service-web-ssh-connect-builtin-no-h.md)]
app-service Configure Language Java Deploy Run https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-deploy-run.md
To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application is deploye
Don't deploy your .war or .jar using FTP. The FTP tool is designed to upload startup scripts, dependencies, or other runtime files. It's not the optimal choice for deploying web apps.
+## Rewrite or redirect URL
+
+To rewrite or redirect a URL, use one of the available URL rewriters, such as [UrlRewriteFilter](http://tuckey.org/urlrewrite/).
++
+Tomcat also provides a [rewrite valve](https://tomcat.apache.org/tomcat-10.1-doc/rewrite.html).
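+For example, a minimal sketch (file locations and the rule are illustrative): declare the valve in your app's `META-INF/context.xml`, then put `mod_rewrite`-style rules in `WEB-INF/rewrite.config`.
+
+```xml
+<!-- META-INF/context.xml: enable Tomcat's rewrite valve -->
+<Context>
+    <Valve className="org.apache.catalina.valves.rewrite.RewriteValve" />
+</Context>
+<!-- WEB-INF/rewrite.config then holds rules such as:
+     RewriteRule ^/old/(.*)$ /new/$1 [R=301] -->
+```
+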
+++
+JBoss also provides a [rewrite valve](https://docs.jboss.org/jbossweb/7.0.x/rewrite.html).
++ ## Logging and debugging apps Performance reports, traffic visualizations, and health checkups are available for each app through the Azure portal. For more information, see [Azure App Service diagnostics overview](overview-diagnostics.md).
app-service Deploy Run Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-run-package.md
# Run your app in Azure App Service directly from a ZIP package
+> [!NOTE]
+> Run from package is not supported for Python apps. When deploying a ZIP file of your Python code, you need to set a flag to enable Azure build automation. The build automation creates the Python virtual environment for your app and installs the requirements and packages your app needs. See [build automation](quickstart-python.md?tabs=flask%2Cmac-linux%2Cazure-cli%2Czip-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#enable-build-automation) for more details.
+ In [Azure App Service](overview.md), you can run your apps directly from a deployment ZIP package file. This article shows how to enable this functionality in your app. All other deployment methods in App Service have something in common: your files are deployed to *D:\home\site\wwwroot* in your app (or */home/site/wwwroot* for Linux apps). Since the same directory is used by your app at runtime, it's possible for deployment to fail because of file lock conflicts, and for the app to behave unpredictably because some of the files are not yet updated.
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
If the step is in progress, you get a status of `Migrating`. After you get a sta
az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01" ```
-> [!NOTE]
-> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change again once the [migration step](#8-migrate-to-app-service-environment-v3-and-check-status) is complete. This bug is being addressed and will be fixed as soon as possible. Open a support case to receive the correct IP address upfront or if you have any questions or concerns about this issue.
->
- ### 4. Update dependent resources with new IPs By using the new IPs, update any of your resources or networking components to ensure that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates.
Get the details of your new environment by running the following command or by g
az appservice ase show --name $ASE_NAME --resource-group $ASE_RG ```
-> [!NOTE]
-> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change once the [migration step](#8-migrate-to-app-service-environment-v3) is complete. Check your App Service Environment v3's IP addresses and make any needed updates if there have been changes since the IP generation step. Open a support case if you have any questions or concerns about this issue or need help with the confirming the new IPs.
->
- ::: zone-end ::: zone pivot="experience-azp"
If migration is supported for your App Service Environment, proceed to the next
Under **Get new IP addresses**, confirm that you understand the implications and select the **Start** button. This step takes about 15 minutes to complete. You can't scale or make changes to your existing App Service Environment during this time.
-> [!NOTE]
-> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change again once the migration step is complete. This bug is being addressed and will be fixed as soon as possible. Open a support case to receive the correct IP address upfront or if you have any questions or concerns about this issue.
->
- ### 3. Update dependent resources with new IPs When the previous step finishes, the IP addresses for your new App Service Environment v3 resource appear. Use the new IPs to update any resources and networking components so that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates.
At this time, detailed migration statuses are available only when you're using t
When migration is complete, you have an App Service Environment v3 resource, and all of your apps are running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment.
-> [!NOTE]
-> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change once the migration step is complete. Check your App Service Environment v3's IP addresses and make any needed updates if there have been changes since the IP generation step. Open a support case if you have any questions or concerns about this issue or need help confirming the new IPs.
->
- If your migration includes a custom domain suffix, the domain appeared in the **Essentials** section of the **Overview** page of the portal for App Service Environment v1/v2, but it no longer appears there in App Service Environment v3. Instead, for App Service Environment v3, go to the **Custom domain suffix** page to confirm that your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously. :::image type="content" source="./media/migration/custom-domain-suffix-app-service-environment-v3.png" alt-text="Screenshot that shows the page for custom domain suffix configuration for App Service Environment v3.":::
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
The platform creates your new App Service Environment v3 in a different subnet t
- The subnet must be in the same virtual network, and therefore region, as your existing App Service Environment. - If your virtual network doesn't have an available subnet, you need to create one. You might need to increase the address space of your virtual network to create a new subnet. For more information, see [Create a virtual network](../../virtual-network/quick-create-portal.md).-- The subnet must be able to communicate with the subnet your existing App Service Environment is in. Ensure there aren't network security groups or other network configurations that would prevent communication between the subnets.
+- The subnet must be able to communicate in both directions with the subnet your existing App Service Environment is in. Ensure there aren't network security groups or other network configurations that would prevent communication between the subnets.
- The subnet must have a single delegation of `Microsoft.Web/hostingEnvironments`. - The subnet must have enough available IP addresses to support your new App Service Environment v3. The number of IP addresses needed depends on the number of instances you want to use for your new App Service Environment v3. For more information, see [App Service Environment v3 networking](networking.md#addresses). - The subnet must not have any locks applied to it. If there are locks, they must be removed before migration. The locks can be readded if needed once migration is complete. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md).
You receive the new inbound IP address once migration is complete but before you
### Update dependent resources with new outbound IPs
-The new outbound IPs are created and given to you before you start the actual migration. The new default outbound to the internet public addresses are given so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs before completing the migration. **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** You might experience downtime during and after the migration step if you have dependencies on the outbound IPs and fail to make all necessary updates. This is because once the migration starts, even though traffic still goes to your App Service Environment v2 front ends, your underlying compute is your new App Service Environment v3 in the new subnet.
+The new outbound IPs are created and given to you before you start the actual migration. The new default outbound to the internet public addresses are given so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs before completing the migration. **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** You might experience downtime during and after the migration step if you have dependencies on the outbound IPs and fail to make all necessary updates. This is because once the migration starts, even though traffic still goes to your App Service Environment v2 front ends, your underlying compute is your new App Service Environment v3 in the new subnet.
This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer health probe, which now uses port 80.
az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties
During this step, you get a status of `CompletingMigration`. When you get a status of `MigrationCompleted`, the traffic redirection step is done and your migration is complete.
+## Common sources of issues when migrating using the side-by-side migration feature
+
+The following are examples of common sources of issues that customers encounter when migrating using the side-by-side migration feature. You should review these areas to ensure that you don't experience downtime or service outages during or after the migration process.
+
+- Azure Key Vault should allow traffic from the new outbound IPs/subnet.
+- The two subnets should be able to communicate with each other in both directions. Customers typically allow traffic from the old to the new subnet, but forget to allow traffic from the new to the old subnet.
+- Application Gateway should be updated with the new IP addresses.
+- DNS records should be updated with the new IP addresses.
+- If you've hardcoded IP addresses in your applications, you need to update them with the new IP addresses.
+- Route tables should be updated with any new routes.
+ ## Pricing There's no cost to migrate your App Service Environment. However, you're billed for both your App Service Environment v2 and your new App Service Environment v3 once you start the migration process. You stop being charged for your old App Service Environment v2 when you complete the final migration step where the old environment gets deleted. You should complete your validation as quickly as possible to prevent excess charges from accumulating. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing).
app-service Version Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md
App Service Environment has three versions. App Service Environment v3 is the la
> [!IMPORTANT] > App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted. -
-There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
+>
+> There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version.
> > As of 29 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. >
app-service Overview Authentication Authorization https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md
Implementing a secure solution for authentication (signing-in users) and authori
- Azure App Service allows you to integrate a variety of auth capabilities into your web app or API without implementing them yourself. - It's built directly into the platform and doesn't require any particular language, SDK, security expertise, or even any code to utilize.-- You can integrate with multiple login providers. For example, Microsoft Entra, Facebook, Google, Twitter.
+- You can integrate with multiple login providers. For example, Microsoft Entra, Facebook, Google, X.
Your app might need to support more complex scenarios such as Visual Studio integration or incremental consent. There are several different authentication solutions available to support these scenarios. To learn more, read [Identity scenarios](identity-scenarios.md).
App Service uses [federated identity](https://en.wikipedia.org/wiki/Federated_id
| [Microsoft Entra](/entr) |
| [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [App Service Facebook login](configure-authentication-provider-facebook.md) |
| [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [App Service Google login](configure-authentication-provider-google.md) |
-| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [App Service Twitter login](configure-authentication-provider-twitter.md) |
+| [X](https://developer.x.com/en/docs/basics/authentication) | `/.auth/login/x` | [App Service X login](configure-authentication-provider-twitter.md) |
| [GitHub](https://docs.github.com/en/developers/apps/building-oauth-apps/creating-an-oauth-app) | `/.auth/login/github` | [App Service GitHub login](configure-authentication-provider-github.md) |
| [Sign in with Apple](https://developer.apple.com/sign-in-with-apple/) | `/.auth/login/apple` | [App Service Sign in With Apple login (Preview)](configure-authentication-provider-apple.md) |
| Any [OpenID Connect](https://openid.net/connect/) provider | `/.auth/login/<providerName>` | [App Service OpenID Connect login](configure-authentication-provider-openid-connect.md) |
app-service Tutorial Connect Msi Key Vault Javascript https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md
description: Learn how to secure connectivity to back-end Azure services that do
ms.devlang: javascript # ms.devlang: javascript, azurecli Previously updated : 10/26/2021 Last updated : 08/02/2024
Clone the sample repository locally and deploy the sample application to App Ser
# Clone and prepare sample application git clone https://github.com/Azure-Samples/app-service-language-detector.git cd app-service-language-detector/javascript
-zip default.zip *.*
+zip -r default.zip .
# Save app name as variable for convenience appName=<app-name> az appservice plan create --resource-group $groupName --name $appName --sku FREE --location $region --is-linux
-az webapp create --resource-group $groupName --plan $appName --name $appName --runtime "node|14-lts"
+az webapp create --resource-group $groupName --plan $appName --name $appName --runtime "node:18-lts"
az webapp config appsettings set --resource-group $groupName --name $appName --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true
-az webapp deployment source config-zip --resource-group $groupName --name $appName --src ./default.zip
+az webapp deploy --resource-group $groupName --name $appName --src-path ./default.zip
```

The preceding commands:

* Create a Linux App Service plan
-* Create a web app for Node.js 14 LTS
+* Create a web app for Node.js 18 LTS
* Configure the web app to install the npm packages on deployment
* Upload the zip file, and install the npm packages
application-gateway Application Gateway Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md
Application Gateway logs provide detailed information for events related to a re
Logs are available for all resources of Application Gateway; however, to consume them, you must enable their collection in a storage location of your choice. Logging in Azure Application Gateway is enabled by the Azure Monitor service. We recommend using the Log Analytics workspace as you can readily use its predefined queries and set alerts based on specific log conditions.
-## <a name="diagnostic-logging"></a>Types of Diagnostic logs
+## <a name="firewall-log"></a><a name="diagnostic-logging"></a>Types of Resource logs
-You can use different types of logs in Azure to manage and troubleshoot application gateways. You can learn more about these types below:
+You can use different types of logs in Azure to manage and troubleshoot application gateways.
-* **Activity log**: You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default, and you can view them in the Azure portal.
-* **Access log**: You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.
-* **Performance log**: You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data.
-* **Firewall log**: You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.
+- [Activity log](monitor-application-gateway-reference.md#activity-log)
+- [Application Gateway Access Log](monitor-application-gateway-reference.md#resource-logs)
+- [Application Gateway Performance Log](monitor-application-gateway-reference.md#resource-logs) (available only for the v1 SKU)
+- [Application Gateway Firewall Log](monitor-application-gateway-reference.md#resource-logs)
> [!NOTE] > Logs are available only for resources deployed in the Azure Resource Manager deployment model. You can't use logs for resources in the classic deployment model. For a better understanding of the two models, see the [Understanding Resource Manager deployment and classic deployment](../azure-resource-manager/management/deployment-models.md) article.
-## Storage locations
+## Examples of optimizing access logs using Workspace Transformations
-You have the following options to store the logs in your preferred location.
-
-**Log Analytic workspace**: This option allows you to readily use the predefined queries, visualizations, and set alerts based on specific log conditions. The tables used by resource logs in log analytics workspace depend on what type of collection the resource is using:
-
-* **Azure diagnostics**: Data is written to the [Azure Diagnostics table](/azure/azure-monitor/reference/tables/azurediagnostics). The Azure Diagnostics table is shared between multiple resource types, with each of them adding their own custom fields. When the number of custom fields ingested into the Azure Diagnostics table exceeds 500, new fields aren't added as top-level columns but are added to the "AdditionalFields" field as dynamic key-value pairs.
-
-* **Resource-specific (recommended)**: Data is written to dedicated tables for each category of the resource. In resource-specific mode, each log category selected in the diagnostic setting is assigned its own table within the chosen workspace. This has several benefits, including:
- - Easier data manipulation in log queries
- - Improved discoverability of schemas and their structures
- - Enhanced performance in terms of ingestion latency and query times
- - The ability to assign [Azure role-based access control rights to specific tables](../azure-monitor/logs/manage-access.md?tabs=portal#set-table-level-read-access)
-
 For Application Gateway, resource-specific mode creates three tables:
- * [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs)
- * [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs)
- * [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs)
-
-> [!NOTE]
-> The resource-specific option is currently available in all **clouds**.<br>
-> Existing users can continue using Azure Diagnostics, or can opt for dedicated tables by switching the toggle in Diagnostic settings to **Resource specific**, or to **Dedicated** in the API destination. Dual mode isn't possible. The data in all the logs can either flow to Azure Diagnostics, or to dedicated tables. However, you can have multiple diagnostic settings, where one data flow goes to Azure Diagnostics and another uses resource-specific tables at the same time.
-
 **Selecting the destination table in Log Analytics:** All Azure services eventually use the resource-specific tables. As part of this transition, you can select the Azure diagnostics or resource-specific table in the diagnostic setting using a toggle button. The toggle is set to **Resource specific** by default, and in this mode, logs for newly selected categories are sent to dedicated tables in Log Analytics, while existing streams remain unchanged. See the following example.
-
- [![Screenshot of the resource ID for application gateway in the portal.](./media/application-gateway-diagnostics/resource-specific.png)](./media/application-gateway-diagnostics/resource-specific.png#lightbox)
-
-**Workspace Transformations:** Opting for the Resource-specific option allows you to filter and modify your data before it's ingested with [workspace transformations](../azure-monitor/essentials/data-collection-transformations-workspace.md). This provides granular control, allowing you to focus on the most relevant information from the logs, thereby reducing data costs and enhancing security.
-For detailed instructions on setting up workspace transformations, see [Tutorial: Add a workspace transformation to Azure Monitor Logs by using the Azure portal](../azure-monitor/logs/tutorial-workspace-transformations-portal.md).
-
- ### Examples of optimizing access logs using Workspace Transformations
-
**Example 1: Selective Projection of Columns**: Imagine you have application gateway access logs with 20 columns, but you're interested in analyzing data from only 6 specific columns. By using workspace transformation, you can project these 6 columns into your workspace, effectively excluding the other 14 columns. Even though the original data from those excluded columns won't be stored, empty placeholders for them still appear in the Logs blade. This approach optimizes storage and ensures that only relevant data is retained for analysis. A transformation sketch follows the note below. > [!NOTE]
- > Within the Logs blade, selecting the **Try New Log Analytics** option gives greater control over the columns displayed in your user interface.
+ > Within the Logs blade, selecting the **Try New Log Analytics** option gives greater control over the columns displayed in your user interface.
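As a hedged illustration of both examples, a workspace transformation is a KQL statement that runs on the reserved `source` input before ingestion; the column names below are assumptions rather than the full access-log schema.

```powershell
# Hedged sketch: workspace transformations run KQL on the reserved 'source'
# input before ingestion. The column names here are illustrative assumptions.
$projectSixColumns = @'
source
| project TimeGenerated, ClientIp, RequestUri, HttpStatus, TimeTaken, HostName
'@

# Example 2 uses the same mechanism to keep only problematic status codes.
$errorRowsOnly = @'
source
| where toint(HttpStatus) >= 400
'@
# Either statement would be supplied as the transformKql property of the
# workspace transformation data collection rule (DCR).
```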
**Example 2: Focusing on Specific Status Codes**: When analyzing access logs, instead of processing all log entries, you can write a query to retrieve only rows with specific HTTP status codes (such as 4xx and 5xx). Since most requests ideally fall under the 2xx and 3xx categories (representing successful responses), focusing on the problematic status codes narrows down the data set. This targeted approach allows you to extract the most relevant and actionable information, making it both beneficial and cost-effective. **Recommended transition strategy to move from Azure Diagnostics to the resource-specific table:**
-1. Assess current data retention: Determine the duration for which data is presently retained in the Azure diagnostics table (for example: assume the diagnostics table retains data for 15 days).
-2. Establish resource-specific retention: Implement a new Diagnostic setting with resource specific table.
-3. Parallel data collection: For a temporary period, collect data concurrently in both the Azure Diagnostics and the resource-specific settings.
-4. Confirm data accuracy: Verify that data collection is accurate and consistent in both settings.
-5. Remove Azure diagnostics setting: Remove the Azure Diagnostic setting to prevent duplicate data collection.
+
+1. Assess current data retention: Determine the duration for which data is presently retained in the Azure diagnostics table (for example: assume the diagnostics table retains data for 15 days).
+2. Establish resource-specific retention: Implement a new Diagnostic setting with resource specific table.
+3. Parallel data collection: For a temporary period, collect data concurrently in both the Azure Diagnostics and the resource-specific settings.
+4. Confirm data accuracy: Verify that data collection is accurate and consistent in both settings (see the query sketch after this list).
+5. Remove Azure diagnostics setting: Remove the Azure Diagnostic setting to prevent duplicate data collection.
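For step 4, a hedged verification sketch follows; it assumes the Az.OperationalInsights module, a placeholder workspace GUID, and the table and category names shown earlier.

```powershell
# Hedged sketch: compare row counts flowing into the shared AzureDiagnostics
# table and the resource-specific AGWAccessLogs table during parallel collection.
$query = @'
union
    (AzureDiagnostics
     | where TimeGenerated > ago(1h)
     | where Category == "ApplicationGatewayAccessLog"
     | summarize Rows = count() | extend Table = "AzureDiagnostics"),
    (AGWAccessLogs
     | where TimeGenerated > ago(1h)
     | summarize Rows = count() | extend Table = "AGWAccessLogs")
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
    Select-Object -ExpandProperty Results
```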
Other storage locations:+ - **Azure Storage account**: Storage accounts are best used for logs when logs are stored for a longer duration and reviewed when needed. - **Azure Event Hubs**: Event hubs are a great option for integrating with other security information and event management (SIEM) tools to get alerts on your resources. - **Azure Monitor partner integrations**.
Activity logging is automatically enabled for every Resource Manager resource. Y
:::image type="content" source="media/application-gateway-diagnostics/diagnostics1.png" alt-text="Screenshot of app gateway properties" lightbox="media/application-gateway-diagnostics/diagnostics1.png"::: - 3. Enable diagnostic logging by using the following PowerShell cmdlet: ```powershell
Activity logging is automatically enabled for every Resource Manager resource. Y
* Performance log * Firewall log
-2. To start collecting data, select **Turn on diagnostics**.
+1. To start collecting data, select **Turn on diagnostics**.
![Turning on diagnostics][1]
-3. The **Diagnostics settings** page provides the settings for the diagnostic logs. In this example, Log Analytics stores the logs. You can also use event hubs and a storage account to save the diagnostic logs.
+1. The **Diagnostics settings** page provides the settings for the diagnostic logs. In this example, Log Analytics stores the logs. You can also use event hubs and a storage account to save the diagnostic logs.
![Starting the configuration process][2]
-5. Type a name for the settings, confirm the settings, and select **Save**.
+1. Type a name for the settings, confirm the settings, and select **Save**.
-## Activity log
-
-Azure generates the activity log by default. The logs are preserved for 90 days in the Azure event logs store. Learn more about these logs by reading the [View events and activity log](../azure-monitor/essentials/activity-log.md) article.
-
-## Access log
-
-The access log is generated only if you've enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format as shown below.
-
-### For Application Gateway and WAF v2 SKU
-
-> [!NOTE]
-> * For TLS/TCP proxy related information, visit [data reference](monitor-application-gateway-reference.md#tlstcp-proxy-logs).
-> * Some columns from the shared AzureDiagnostics table are still being ported to the dedicated tables. Therefore, the columns with Mutual Authentication details are currently available only through the [AzureDiagnostics table](#storage-locations).
-> * Access logs with clientIP value 127.0.0.1 originate from an internal security process running on the application gateway instances. You can safely ignore these log entries.
--
-|Value |Description |
-|||
-|instanceId | Application Gateway instance that served the request. |
-|clientIP | IP of the immediate client of Application Gateway. If another proxy fronts your application gateway, this displays the IP of that fronting proxy. |
-|httpMethod | HTTP method used by the request. |
-|requestUri | URI of the received request. |
-|UserAgent | User agent from the HTTP request header. |
-|httpStatus | HTTP status code returned to the client from Application Gateway. |
-|httpVersion | HTTP version of the request. |
-|receivedBytes | Size of packet received, in bytes. |
-|sentBytes| Size of packet sent, in bytes.|
-|clientResponseTime| Time difference (in seconds) between the first byte and the last byte application gateway sent to the client. Helpful in gauging Application Gateway's processing time for responses or slow clients. |
-|timeTaken| Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and its last-byte sent in the response to the client. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
-|WAFEvaluationTime| Length of time (in **seconds**) that it takes for the request to be processed by the WAF. |
-|WAFMode| Value can be either Detection or Prevention |
-|transactionId| Unique identifier to correlate the request received from the client |
-|sslEnabled| Whether communication to the backend pools used TLS. Valid values are on and off.|
-|sslCipher| Cipher suite being used for TLS communication (if TLS is enabled).|
-|sslProtocol| SSL/TLS protocol being used (if TLS is enabled).|
-|sslClientVerify | Shows the result of client certificate verification as SUCCESS or FAILED. Failed status will include error information.|
-|sslClientCertificateFingerprint|The SHA1 thumbprint of the client certificate for an established TLS connection.|
-|sslClientCertificateIssuerName|The issuer DN string of the client certificate for an established TLS connection.|
-|serverRouted| The backend server that application gateway routes the request to.|
-|serverStatus| HTTP status code of the backend server.|
-|serverResponseLatency| Latency of the response (in **seconds**) from the backend server.|
-|host| Address listed in the host header of the request. If rewritten using header rewrite, this field contains the updated host name|
-|originalRequestUriWithArgs| This field contains the original request URL |
-|upstreamSourcePort| The source port used by Application Gateway when initiating a connection to the backend target|
-|originalHost| This field contains the original request host name|
-|error_info|The reason for the 4xx and 5xx error. Displays an error code for a failed request. More details in [Error code information.](./application-gateway-diagnostics.md#error-code-information) |
-|contentType|The type of content or data that is being processed or delivered by the application gateway
--
-```json
-{
- "timeStamp": "2021-10-14T22:17:11+00:00",
- "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
- "listenerName": "HTTP-Listener",
- "ruleName": "Storage-Static-Rule",
- "backendPoolName": "StaticStorageAccount",
- "backendSettingName": "StorageStatic-HTTPS-Setting",
- "operationName": "ApplicationGatewayAccess",
- "category": "ApplicationGatewayAccessLog",
- "properties": {
- "instanceId": "appgw_2",
- "clientIP": "185.42.129.24",
- "clientPort": 45057,
- "httpMethod": "GET",
- "originalRequestUriWithArgs": "\/",
- "requestUri": "\/",
- "requestQuery": "",
- "userAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/52.0.2743.116 Safari\/537.36",
- "httpStatus": 200,
- "httpVersion": "HTTP\/1.1",
- "receivedBytes": 184,
- "sentBytes": 466,
- "clientResponseTime": 0,
- "timeTaken": 0.034,
- "WAFEvaluationTime": "0.000",
- "WAFMode": "Detection",
- "transactionId": "592d1649f75a8d480a3c4dc6a975309d",
- "sslEnabled": "on",
- "sslCipher": "ECDHE-RSA-AES256-GCM-SHA384",
- "sslProtocol": "TLSv1.2",
- "sslClientVerify": "NONE",
- "sslClientCertificateFingerprint": "",
- "sslClientCertificateIssuerName": "",
- "serverRouted": "52.239.221.65:443",
- "serverStatus": "200",
- "serverResponseLatency": "0.028",
- "upstreamSourcePort": "21564",
- "originalHost": "20.110.30.194",
- "host": "20.110.30.194",
- "error_info":"ERRORINFO_NO_ERROR",
- "contentType":"application/json"
- }
-}
-```
-
-### For Application Gateway Standard and WAF SKU (v1)
-
-|Value |Description |
-|||
-|instanceId | Application Gateway instance that served the request. |
-|clientIP | Originating IP for the request. |
-|clientPort | Originating port for the request. |
-|httpMethod | HTTP method used by the request. |
-|requestUri | URI of the received request. |
-|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
-|UserAgent | User agent from the HTTP request header. |
-|httpStatus | HTTP status code returned to the client from Application Gateway. |
-|httpVersion | HTTP version of the request. |
-|receivedBytes | Size of packet received, in bytes. |
-|sentBytes| Size of packet sent, in bytes.|
-|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
-|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.|
-|host| The hostname for which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that.|
-|originalHost| The hostname for which the request was received by the Application Gateway from the client.|
-
-```json
-{
- "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
- "operationName": "ApplicationGatewayAccess",
- "time": "2017-04-26T19:27:38Z",
- "category": "ApplicationGatewayAccessLog",
- "properties": {
- "instanceId": "ApplicationGatewayRole_IN_0",
- "clientIP": "191.96.249.97",
- "clientPort": 46886,
- "httpMethod": "GET",
- "requestUri": "/phpmyadmin/scripts/setup.php",
- "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404",
- "userAgent": "-",
- "httpStatus": 404,
- "httpVersion": "HTTP/1.0",
- "receivedBytes": 65,
- "sentBytes": 553,
- "timeTaken": 205,
- "sslEnabled": "off",
- "host": "www.contoso.com",
- "originalHost": "www.contoso.com"
- }
-}
-```
-### Error code information
-If the application gateway can't complete the request, it stores one of the following reason codes in the error_info field of the access log.
--
-|4XX Errors | Description (the 4xx error codes indicate that there was an issue with the client's request, and the Application Gateway can't fulfill it) |
-|||
-| ERRORINFO_INVALID_METHOD| The client sent a request that isn't RFC compliant. Possible reasons: the client used an HTTP method not supported by the server, a misspelled method, or an incompatible HTTP protocol version.|
- | ERRORINFO_INVALID_REQUEST | The server can't fulfill the request because of incorrect syntax.|
- | ERRORINFO_INVALID_VERSION| The application gateway received a request with an invalid or unsupported HTTP version.|
- | ERRORINFO_INVALID_09_METHOD| The client sent request with HTTP Protocol version 0.9.|
- | ERRORINFO_INVALID_HOST |The value provided in the "Host" header is either missing, improperly formatted, or doesn't match the expected host value. For example, when there's no Basic listener, and none of the hostnames of Multisite listeners match with the host.|
- | ERRORINFO_INVALID_CONTENT_LENGTH | The length of the content specified by the client in the content-Length header doesn't match the actual length of the content in the request.|
- | ERRORINFO_INVALID_METHOD_TRACE | The client sent HTTP TRACE method, which isn't supported by the application gateway.|
- | ERRORINFO_CLIENT_CLOSED_REQUEST | The client closed the connection with the application gateway before the idle timeout period elapsed. Check whether the client timeout period is greater than the [idle timeout period](./application-gateway-faq.yml#what-are-the-settings-for-keep-alive-timeout-and-tcp-idle-timeout) for the application gateway.|
- | ERRORINFO_REQUEST_URI_INVALID |Indicates issue with the Uniform Resource Identifier (URI) provided in the client's request. |
- | ERRORINFO_HTTP_NO_HOST_HEADER | Client sent a request without Host header. |
- | ERRORINFO_HTTP_TO_HTTPS_PORT |The client sent a plain HTTP request to an HTTPS port. |
- | ERRORINFO_HTTPS_NO_CERT | Indicates client isn't sending a valid and properly configured TLS certificate during Mutual TLS authentication. |
--
-|5XX Errors | Description |
-|||
- | ERRORINFO_UPSTREAM_NO_LIVE | The application gateway is unable to find any active or reachable backend servers to handle incoming requests |
- | ERRORINFO_UPSTREAM_CLOSED_CONNECTION | The backend server closed the connection unexpectedly or before the request was fully processed. This could happen due to backend server reaching its limits, crashing etc.|
- | ERRORINFO_UPSTREAM_TIMED_OUT | The established TCP connection with the server was closed as the connection took longer than the configured timeout value. |
-
-## Performance log
-
-The performance log is generated only if you have enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It's available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. The following data is logged:
--
-|Value |Description |
-|||
-|instanceId | Application Gateway instance for which performance data is being generated. For a multiple-instance application gateway, there's one row per instance. |
-|healthyHostCount | Number of healthy hosts in the backend pool. |
-|unHealthyHostCount | Number of unhealthy hosts in the backend pool. |
-|requestCount | Number of requests served. |
-|latency | Average latency (in milliseconds) of requests from the instance to the back end that serves the requests. |
-|failedRequestCount| Number of failed requests.|
-|throughput| Average throughput since the last log, measured in bytes per second.|
-
-```json
-{
- "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
- "operationName": "ApplicationGatewayPerformance",
- "time": "2016-04-09T00:00:00Z",
- "category": "ApplicationGatewayPerformanceLog",
- "properties":
- {
- "instanceId":"ApplicationGatewayRole_IN_1",
- "healthyHostCount":"4",
- "unHealthyHostCount":"0",
- "requestCount":"185",
- "latency":"0",
- "failedRequestCount":"0",
- "throughput":"119427"
- }
-}
-```
-
-> [!NOTE]
-> Latency is calculated from the time when the first byte of the HTTP request is received to the time when the last byte of the HTTP response is sent. It's the sum of the Application Gateway processing time plus the network cost to the back end, plus the time that the back end takes to process the request.
-
-## Firewall log
-
-The firewall log is generated only if you have enabled it for each application gateway, as detailed in the preceding steps. This log also requires that the web application firewall is configured on an application gateway. The data is stored in the storage account that you specified when you enabled the logging. The following data is logged:
--
-|Value |Description |
-|||
-|instanceId | Application Gateway instance for which firewall data is being generated. For a multiple-instance application gateway, there's one row per instance. |
-|clientIp | Originating IP for the request. |
-|clientPort | Originating port for the request. |
-|requestUri | URL of the received request. |
-|ruleSetType | Rule set type. The available value is OWASP. |
-|ruleSetVersion | Rule set version used. Available values are 2.2.9 and 3.0. |
-|ruleId | Rule ID of the triggering event. |
-|message | User-friendly message for the triggering event. More details are provided in the details section. |
-|action | Action taken on the request. Available values are Blocked and Allowed (for custom rules), Matched (when a rule matches a part of the request), and Detected and Blocked (these are both for mandatory rules, depending on if the WAF is in detection or prevention mode). |
-|site | Site for which the log was generated. Currently, only Global is listed because rules are global.|
-|details | Details of the triggering event. |
-|details.message | Description of the rule. |
-|details.data | Specific data found in request that matched the rule. |
-|details.file | Configuration file that contained the rule. |
-|details.line | Line number in the configuration file that triggered the event. |
-|hostname | Hostname or IP address of the Application Gateway. |
-|transactionId | Unique ID for a given transaction which helps group multiple rule violations that occurred within the same request. |
-
-```json
-{
- "timeStamp": "2021-10-14T22:17:11+00:00",
- "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
- "operationName": "ApplicationGatewayFirewall",
- "category": "ApplicationGatewayFirewallLog",
- "properties": {
- "instanceId": "appgw_2",
- "clientIp": "185.42.129.24",
- "clientPort": "",
- "requestUri": "\/",
- "ruleSetType": "OWASP_CRS",
- "ruleSetVersion": "3.0.0",
- "ruleId": "920350",
- "message": "Host header is a numeric IP address",
- "action": "Matched",
- "site": "Global",
- "details": {
- "message": "Warning. Pattern match \\\"^[\\\\d.:]+$\\\" at REQUEST_HEADERS:Host .... ",
- "data": "20.110.30.194:80",
- "file": "rules\/REQUEST-920-PROTOCOL-ENFORCEMENT.conf",
- "line": "791"
- },
- "hostname": "20.110.30.194:80",
- "transactionId": "592d1649f75a8d480a3c4dc6a975309d",
- "policyId": "default",
- "policyScope": "Global",
- "policyScopeName": "Global"
- }
-}
-```
-
-## View and analyze the activity log
-
-You can view and analyze activity log data by using any of the following methods:
-
-* **Azure tools**: Retrieve information from the activity log through Azure PowerShell, the Azure CLI, the Azure REST API, or the Azure portal. Step-by-step instructions for each method are detailed in the [Activity operations with Resource Manager](../azure-monitor/essentials/activity-log.md) article.
-* **Power BI**: If you don't already have a [Power BI](https://powerbi.microsoft.com/pricing) account, you can try it for free. By using the [Power BI template apps](/power-bi/service-template-apps-overview), you can analyze your data.
+To view and analyze activity log data, see [Analyze monitoring data](monitor-application-gateway.md#azure-monitor-tools).
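As a hedged sketch (Az.Monitor module; the resource ID is a placeholder), recent activity log entries can also be pulled with PowerShell:

```powershell
# Hedged sketch: list the last day of activity log entries for an application
# gateway resource; the resource ID is a placeholder.
$id = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<appgw-name>"
Get-AzLog -ResourceId $id -StartTime (Get-Date).AddDays(-1) -MaxRecord 50
```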
## View and analyze the access, performance, and firewall logs
-[Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics) can collect the counter and event log files from your Blob storage account. It includes visualizations and powerful search capabilities to analyze your logs.
+[Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics) can collect the counter and event log files from your Blob storage account. For more information, see [Analyze monitoring data](monitor-application-gateway.md#azure-monitor-tools).
You can also connect to your storage account and retrieve the JSON log entries for access and performance logs. After you download the JSON files, you can convert them to CSV and view them in Excel, Power BI, or any other data-visualization tool. > [!TIP] > If you're familiar with Visual Studio and basic concepts of changing values for constants and variables in C#, you can use the [log converter tools](https://github.com/Azure-Samples/networking-dotnet-log-converter) available from GitHub.
->
->
-
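If the logs flow to a Log Analytics workspace instead, a hedged query sketch (Az.OperationalInsights module; the workspace GUID is a placeholder) can summarize access-log status codes directly:

```powershell
# Hedged sketch: break down access-log requests by HTTP status code from the
# resource-specific table over the last hour.
$query = @'
AGWAccessLogs
| where TimeGenerated > ago(1h)
| summarize Requests = count() by HttpStatus
| order by Requests desc
'@
Invoke-AzOperationalInsightsQuery -WorkspaceId "<workspace-guid>" -Query $query |
    Select-Object -ExpandProperty Results
```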
-### Analyzing Access logs through GoAccess
-
-We have published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway Access Logs. GoAccess provides valuable HTTP traffic statistics such as Unique Visitors, Requested Files, Hosts, Operating Systems, Browsers, HTTP Status codes and more. For more details, please see the [Readme file in the Resource Manager template folder in GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/application-gateway-logviewer-goaccess).
## Next steps
application-gateway Application Gateway Metrics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md
Previously updated : 05/17/2023 Last updated : 06/17/2024
Application Gateway publishes data points to [Azure Monitor](../azure-monitor/overview.md) for the performance of your Application Gateway and backend instances. These data points are called metrics, and are numerical values in an ordered set of time-series data. Metrics describe some aspect of your application gateway at a particular time. If there are requests flowing through the Application Gateway, it measures and sends its metrics in 60-second intervals. If there are no requests flowing through the Application Gateway or no data for a metric, the metric isn't reported. For more information, see [Azure Monitor metrics](../azure-monitor/essentials/data-platform-metrics.md).
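As a hedged sketch (Az.Monitor module; the resource ID is a placeholder, and the metric name is taken from the published v2 metric set), metrics can be read programmatically at the same 60-second grain:

```powershell
# Hedged sketch: read one hour of a v2 metric at 1-minute grain.
$id = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<appgw-name>"
Get-AzMetric -ResourceId $id `
    -MetricName "ApplicationGatewayTotalTime" `
    -TimeGrain ([TimeSpan]::FromMinutes(1)) `
    -StartTime (Get-Date).AddHours(-1) `
    -AggregationType Average
```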
+<a name="metrics-supported-by-application-gateway-v1-sku"></a>
+ ## Metrics supported by Application Gateway V2 SKU > [!NOTE]
Application Gateway publishes data points to [Azure Monitor](../azure-monitor/ov
### Timing metrics
-Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds.
+Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds.
:::image type="content" source="./media/application-gateway-metrics/application-gateway-metrics.png" alt-text="Diagram of timing metrics for the Application Gateway" border="false"::: > [!NOTE] >
-> If there are more than one listener in the Application Gateway, then always filter by *Listener* dimension while comparing different latency metrics in order to get meaningful inference.
--- **Backend connect time**-
- *Aggregation type:Avg/Max*
-
- Time spent establishing a connection with the backend application.
-
- This includes the network latency as well as the time taken by the backend server's TCP stack to establish new connections. For TLS, it also includes the time spent on handshake.
--- **Backend first byte response time**-
- *Aggregation type:Avg/Max*
-
- Time interval between start of establishing a connection to backend server and receiving the first byte of the response header.
-
- This approximates the sum of *Backend connect time*, time taken by the request to reach the backend from Application Gateway, time taken by backend application to respond (the time the server took to generate content, potentially fetch database queries), and the time taken by first byte of the response to reach the Application Gateway from the backend.
--- **Backend last byte response time**-
- *Aggregation type:Avg/Max*
-
- Time interval between start of establishing a connection to backend server and receiving the last byte of the response body.
-
- This approximates the sum of *Backend first byte response time* and data transfer time (this number may vary greatly depending on the size of objects requested and the latency of the server network).
--- **Application gateway total time**-
- *Aggregation type:Avg/Max*
-
- This metric captures either the Average/Max time taken for a request to be received, processed and its response to be sent.
-
- This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. This includes the processing time taken by Application Gateway, the *Backend last byte response time*, and the time taken by Application Gateway to send all the response.
--- **Client RTT**-
- *Aggregation type:Avg/Max*
-
- This metric captures the Average/Max round trip time between clients and Application Gateway.
+> If there is more than one listener in the Application Gateway, then always filter by *Listener* dimension while comparing different latency metrics in order to get meaningful inference.
-These metrics can be used to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size.
+You can use timing metrics to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size. For more information, see [Timing metrics](monitor-application-gateway-reference.md#timing-metrics-for-application-gateway-v2-sku).
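The following hedged sketch, which assumes the Az.Monitor module and a placeholder resource ID, pulls the two backend timing metrics discussed next so you can compare their trends side by side:

```powershell
# Hedged sketch: compare backend connect time against backend first byte
# response time to localize a latency spike.
$id = "/subscriptions/<subscription-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<appgw-name>"
Get-AzMetric -ResourceId $id `
    -MetricName "BackendConnectTime","BackendFirstByteResponseTime" `
    -AggregationType Average `
    -StartTime (Get-Date).AddHours(-6)
```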
-For example, if there's a spike in *Backend first byte response time* trend but the *Backend connect time* trend is stable, then it can be inferred that the Application gateway to backend latency and the time taken to establish the connection is stable, and the spike is caused due to an increase in the response time of backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, then it can be deduced that either the network between Application Gateway and backend server or the backend server TCP stack has saturated.
+For example, if there's a spike in *Backend first byte response time* trend but the *Backend connect time* trend is stable, you can infer that the application gateway to backend latency and the time taken to establish the connection is stable. The spike is caused due to an increase in the response time of backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, you can deduce that either the network between Application Gateway and backend server or the backend server TCP stack has saturated.
-If you notice a spike in *Backend last byte response time* but the *Backend first byte response time* is stable, then it can be deduced that the spike is because of a larger file being requested.
+If you notice a spike in *Backend last byte response time* but the *Backend first byte response time* is stable, you can deduce that the spike is because of a larger file being requested.
Similarly, if the *Application gateway total time* has a spike but the *Backend last byte response time* is stable, then it can either be a sign of performance bottleneck at the Application Gateway or a bottleneck in the network between client and Application Gateway. Additionally, if the *client RTT* also has a corresponding spike, then it indicates that the degradation is because of the network between client and Application Gateway. ### Application Gateway metrics
-For Application Gateway, the following metrics are available:
--- **Bytes received**-
- Count of bytes received by the Application Gateway from the clients. (Reported based on the request "content size" only. It doesn't account for TLS negotiations overhead, TCP/IP packet headers, or retransmissions, and hence doesn't represent the complete bandwidth utilization.)
--- **Bytes sent**-
- Count of bytes sent by the Application Gateway to the clients. (Reported based on the response "content size" only. It doesn't account for TCP/IP packet headers or retransmissions, and hence doesn't represent the complete bandwidth utilization.)
--- **Client TLS protocol**-
- Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol. This metric includes requests served by the gateway, such as redirects.
--- **Current capacity units**-
- Count of capacity units consumed to load balance the traffic. There are three determinants to capacity unit - compute unit, persistent connections and throughput. Each capacity unit is composed of at most: 1 compute unit, or 2500 persistent connections, or 2.22-Mbps throughput.
--- **Current compute units**-
- Count of processor capacity consumed. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing.
--- **Current connections**-
- The total number of concurrent connections active from clients to the Application Gateway
-
-- **Estimated Billed Capacity units**-
- With the v2 SKU, the pricing model is driven by consumption. Capacity units measure consumption-based cost that is charged in addition to the fixed cost. *Estimated Billed Capacity units* indicate the number of capacity units using which the billing is estimated. This is calculated as the greater value between *Current capacity units* (capacity units required to load balance the traffic) and *Fixed billable capacity units* (minimum capacity units kept provisioned).
--- **Failed Requests**-
- Number of requests that Application Gateway has served with 5xx server error codes. This includes the 5xx codes that are generated from the Application Gateway as well as the 5xx codes that are generated from the backend. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
-
-- **Fixed Billable Capacity Units**-
- The minimum number of capacity units kept provisioned as per the *Minimum scale units* setting (one instance translates to 10 capacity units) in the Application Gateway configuration.
-
-
- The average number of new TCP connections per second established from clients to the Application Gateway and from the Application Gateway to the backend members.
---- **Response Status**-
- HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
--- **Throughput**-
- Number of bytes per second the Application Gateway has served. (Reported based on the "content size" only. It doesn't account for TLS negotiations overhead, TCP/IP packet headers, or retransmissions, and hence doesn't represent the complete bandwidth utilization.)
--- **Total Requests**-
- Count of successful requests that Application Gateway has served by the backend pool targets. Pages served directly by the gateway, such as redirects, are not counted and should be found in the Client TLS protocol metric. Total requests count metric can be further filtered to show count per each/specific backend pool-http setting combination.
-
-### Backend metrics
-
-For Application Gateway, the following metrics are available:
--- **Backend response status**-
- Count of HTTP response status codes returned by the backends. This doesn't include any response codes generated by the Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
--- **Healthy host count**-
- The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.
--- **Unhealthy host count**-
- The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.
-
-- **Requests per minute per Healthy Host**-
- The average number of requests received by each healthy member in a backend pool in a minute. You must specify the backend pool using the *BackendPool HttpSettings* dimension.
-
-### Web Application Firewall (WAF) metrics
-
-For information on WAF Monitoring, see [WAF v2 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics)
-
-## Metrics supported by Application Gateway V1 SKU
-
-### Application Gateway metrics
-
-For Application Gateway, the following metrics are available:
--- **CPU Utilization**-
- Displays the utilization of the CPUs allocated to the Application Gateway. Under normal conditions, CPU usage should not regularly exceed 90%, as this may cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. You can indirectly control or improve CPU utilization by modifying the configuration of the Application Gateway by increasing the instance count or by moving to a larger SKU size, or doing both.
--- **Current connections**-
- Count of current connections established with Application Gateway
--- **Failed Requests**-
- Number of requests that failed due to connection issues. This count includes requests that failed due to exceeding the "Request time-out" HTTP setting and requests that failed due to connection issues between Application gateway and backend. This count doesn't include failures due to no healthy backend being available. 4xx and 5xx responses from the backend are also not considered as part of this metric.
--- **Response Status**-
- HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
--- **Throughput**-
- Number of bytes per second the Application Gateway has served
--- **Total Requests**-
- Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
-
+For Application Gateway, there are several metrics available. For a list, see [Application Gateway metrics](monitor-application-gateway-reference.md#metrics-for-application-gateway-v2-sku).
### Backend metrics
-For Application Gateway, the following metrics are available:
--- **Healthy host count**-
- The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.
--- **Unhealthy host count**-
- The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.
+For Application Gateway, there are several backend metrics available. For a list, see [Backend metrics](monitor-application-gateway-reference.md#backend-metrics-for-application-gateway-v2-sku).
### Web Application Firewall (WAF) metrics
-For information on WAF Monitoring, see [WAF v1 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics)
+For information on WAF Monitoring, see [WAF v2 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics) and [WAF v1 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics).
## Metrics visualization
Browse to an application gateway, under **Monitoring** select **Metrics**. To vi
In the following image, you see an example with three metrics displayed for the last 30 minutes: To see a current list of metrics, see [Supported metrics with Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
application-gateway Configure Alerts With Templates https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-alerts-with-templates.md
Previously updated : 03/03/2022 Last updated : 06/17/2024 # Configure Azure Monitor alerts for Application Gateway - Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. For more information about Azure Monitor Alerts for Application Gateway, see [Monitoring Azure Application Gateway](monitor-application-gateway.md#alerts).
-## Configure alerts using ARM templates
-
-You can use ARM templates to quickly configure important alerts for Application Gateway. Before you begin, consider the following details:
-- Azure Monitor alert rules are charged based on the type and number of signals they monitor. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before deploying for pricing information. Or you can see the estimated cost in the portal after deployment:
- :::image type="content" source="media/configure-alerts-with-templates/alert-pricing.png" alt-text="Image showing application gateway pricing details":::
-- You need to create an Azure Monitor action group in advance and then use the Resource ID for as many alerts as you need. Azure Monitor alerts use this action group to notify users that an alert has been triggered. For more information, see [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md).
->[!TIP]
-> You can manually form the ResourceID for your Action Group by following these steps.
-> 1. Select Azure Monitor in your Azure portal.
-> 1. Open the Alerts page and select Action Groups.
-> 1. Select the action group to view its details.
-> 1. Use the Resource Group Name, Action Group Name and Subscription Info here to form the ResourceID for the action group as shown here: <br>
-> `/subscriptions/<subscription-id-from-your-account>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>`
-- The templates for alerts described here are defined generically for settings like Severity, Aggregation Granularity, Frequency of Evaluation, Condition Type, and so on. You can modify the settings after deployment to meet your needs. See [detailed information about configuring a metric alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md) for more information.-- The templates for metric-based alerts use the **Dynamic threshold** value with [high sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#known-issues-with-dynamic-threshold-sensitivity). You can choose to adjust these settings based on your needs.-
-## ARM templates
-
-The following ARM templates are available to configure Azure Monitor alerts for Application Gateway.
+The templates for alerts described here are defined generically for settings like Severity, Aggregation Granularity, Frequency of Evaluation, Condition Type, and so on. You can modify the settings after deployment to meet your needs. See [detailed information about configuring a metric alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md) for more information.
-### Alert for Backend Response Status as 5xx
+The templates for metric-based alerts use the **Dynamic threshold** value with [high sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#known-issues-with-dynamic-threshold-sensitivity). You can choose to adjust these settings based on your needs.
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-5xx%2Fazuredeploy.json)
+The following ARM templates are available to configure Azure Monitor alerts for Application Gateway. For the procedure to use these templates, see [Create a new alert rule using an ARM template](../azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md#create-a-new-alert-rule-using-an-arm-template). A hedged PowerShell deployment sketch follows the list.
-This notification is based on Metrics signal.
+- Alert for Backend Response Status as 5xx
-### Alert for average Unhealthy Host Count
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-5xx%2Fazuredeploy.json)
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-unhealthy-host%2Fazuredeploy.json)
+ This notification is based on Metrics signal.
-This notification is based on Metrics signal.
+- Alert for average Unhealthy Host Count
-### Alert for Backend Last Byte Response Time
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-unhealthy-host%2Fazuredeploy.json)
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-lastbyte-resp%2Fazuredeploy.json)
+ This notification is based on Metrics signal.
-This notification is based on Metrics signal.
+- Alert for Backend Last Byte Response Time
-### Alert for Key Vault integration issues
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-lastbyte-resp%2Fazuredeploy.json)
-[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-keyvault-advisor%2Fazuredeploy.json)
+ This notification is based on Metrics signal.
-This notification is based on its Azure Advisor recommendation.
+- Alert for Key Vault integration issues
+ [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-keyvault-advisor%2Fazuredeploy.json)
-## Next steps
+ This notification is based on its Azure Advisor recommendation.
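A hedged PowerShell sketch for deploying one of these templates (the Backend Response Status 5xx alert shown above); any parameters the template requires are prompted for or can be passed explicitly:

```powershell
# Hedged sketch: deploy the 5xx backend alert template from the quickstart repo.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroupAG" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/ag-alert-backend-5xx/azuredeploy.json"
```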
-<!-- Add additional links. You can change the wording of these and add more if useful. -->
+## Related content
- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway.
application-gateway Configure Application Gateway With Private Frontend Ip https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md
Title: Configure an internal load balancer (ILB) endpoint
-description: This article provides information on how to configure Application Gateway Standard v2 with a private frontend IP address
+description: This article provides information on how to configure Application Gateway Standard v1 with a private frontend IP address
Previously updated : 02/07/2024 Last updated : 08/09/2024 # Configure an application gateway with an internal load balancer (ILB) endpoint
-Azure Application Gateway Standard v2 can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*.
+Azure Application Gateway Standard v1 can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*.
+
+> [!NOTE]
+> Application Gateway v1 is being retired. See the [v1 retirement announcement](/azure/application-gateway/v1-retirement).<br>
+> To configure a v2 application gateway with a private frontend IP address, see [Private Application Gateway deployment](/azure/application-gateway/application-gateway-private-deployment).
Configuring the gateway using a frontend private IP address is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers within a multi-tier application that are in a security boundary that isn't exposed to the Internet but:
Configuring the gateway using a frontend private IP address is useful for intern
- session stickiness - or Transport Layer Security (TLS) termination (previously known as Secure Sockets Layer (SSL)).
-This article guides you through the steps to configure a Standard v2 Application Gateway with an ILB using the Azure portal.
+This article guides you through the steps to configure a Standard v1 Application Gateway with an ILB using the Azure portal.
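For reference, a hedged sketch of the equivalent private frontend IP configuration with the Az.Network module follows; the resource names and the IP address are placeholders:

```powershell
# Hedged sketch: define a private (ILB) frontend IP configuration for the gateway.
$vnet   = Get-AzVirtualNetwork -Name "myVNet" -ResourceGroupName "myResourceGroupAG"
$subnet = Get-AzVirtualNetworkSubnetConfig -Name "myAGSubnet" -VirtualNetwork $vnet
$frontendIp = New-AzApplicationGatewayFrontendIPConfig -Name "appGwPrivateFrontendIp" `
    -Subnet $subnet -PrivateIPAddress "10.0.0.10"
```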
[!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)]
In this example, you create a new virtual network. You can create a virtual netw
2. Select **Networking** and then select **Application Gateway** in the Featured list. 3. Enter *myAppGateway* for the name of the application gateway and *myResourceGroupAG* for the new resource group. 4. For **Region**, select **Central US**.
-5. For **Tier**, select **Standard V2**.
+5. For **Tier**, select **Standard**.
6. Under **Configure virtual network** select **Create new**, and then enter these values for the virtual network: - *myVNet* - for the name of the virtual network. - *10.0.0.0/16* - for the virtual network address space.
In this example, you create a new virtual network. You can create a virtual netw
9. Select **Next:Backends**. 10. Select **Add a backend pool**. 11. For **Name**, type *appGatewayBackendPool*.
-12. For **Add backend pool without targets**, select **Yes**. You'll add the targets later.
+12. For **Add backend pool without targets**, select **Yes**. Targets are added later.
13. Select **Add**. 14. Select **Next:Configuration**. 15. Under **Routing rules**, select **Add a routing rule**.
In this example, you create a new virtual network. You can create a virtual netw
## Add backend pool
-The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway.
+The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multitenant backends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway.
-To do this, you:
+To do this:
1. Create two new virtual machines, *myVM* and *myVM2*, used as backend servers. 2. Install IIS on the virtual machines to verify that the application gateway was created successfully.
application-gateway Monitor Application Gateway Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md
Title: Monitoring Azure Application Gateway data reference
-description: Important reference material needed when you monitor Application Gateway
+ Title: Monitoring data reference for Azure Application Gateway
+description: This article contains important reference material you need when you monitor Azure Application Gateway.
Last updated : 06/17/2024++ - - Previously updated : 05/17/2024
-<!-- VERSION 2.2
-Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. -->
-# Monitoring Azure Application Gateway data reference
+# Azure Application Gateway monitoring data reference
-See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for details on collecting and analyzing monitoring data for Azure Application Gateway.
-## Application Gateway v2 metrics
+See [Monitor Azure Application Gateway](monitor-application-gateway.md) for details on the data you can collect for Application Gateway and how to use it.
-Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkapplicationgateways)
-### Timing metrics
-Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds.
+### Supported metrics for Microsoft.Network/applicationGateways
-> [!NOTE]
->
-> If the Application Gateway has more than one listener, then always filter by the *Listener* dimension while comparing different latency metrics to get more meaningful inference.
+The following table lists all the metrics available for the Microsoft.Network/applicationGateways resource type. More detailed descriptions for many metrics are included after the table.
-| Metric | Unit | Description|
-|:-|:--|:|
-|**Backend connect time**|Milliseconds|Time spent establishing a connection with the backend application.<br><br>This includes the network latency and the time taken by the backend server's TCP stack to establish new connections. For TLS, it also includes the time spent on handshake.|
-|**Backend first byte response time**|Milliseconds|Time interval between start of establishing a connection to backend server and receiving the first byte of the response header.<br><br>This approximates the sum of Backend connect time, time taken by the request to reach the backend from Application Gateway, time taken by backend application to respond (the time the server took to generate content, potentially fetch database queries), and the time taken by first byte of the response to reach the Application Gateway from the backend.|
-|**Backend last byte response time**|Milliseconds|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body.<br><br>This approximates the sum of backend first byte response time and data transfer time. This number may vary greatly depending on the size of objects requested and the latency of the server network.|
-|**Application gateway total time**|Milliseconds|Average time that it takes for a request to be received, processed and its response to be sent.<br><br>This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. This includes the processing time taken by Application Gateway, the Backend last byte response time, the time taken by Application Gateway to send all the response, and the Client RTT.|
-|**Client RTT**|Milliseconds|Average round-trip time between clients and Application Gateway.|
-These metrics can be used to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size.
+For available Web Application Firewall (WAF) metrics, see [Application Gateway WAF v2 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics) and [Application Gateway WAF v1 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics).
-For example, if there's a spike in *Backend first byte response time* trend but the *Backend connect time* trend is stable, then it can be inferred that the Application gateway to backend latency and the time taken to establish the connection is stable, and the spike is caused due to an increase in the response time of backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, then it can be deduced that either the network between Application Gateway and backend server or the backend server TCP stack has saturated.
+### Timing metrics for Application Gateway v2 SKU
-If you notice a spike in *Backend last byte response time* but the *Backend first byte response time* is stable, then it can be deduced that the spike is because of a larger file being requested.
+Application Gateway v2 SKU provides many built-in timing metrics related to the request and response, which are all measured in milliseconds. The following descriptions expand on the timing metrics listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).
-Similarly, if the *Application gateway total time* has a spike but the *Backend last byte response time* is stable, then it can either be a sign of performance bottleneck at the Application Gateway or a bottleneck in the network between client and Application Gateway. Additionally, if the *client RTT* also has a corresponding spike, then it indicates that the degradation is because of the network between client and Application Gateway.
+- **Backend connect time**. This value includes the network latency and the time taken by the backend server's TCP stack to establish new connections. For TLS, it also includes the time spent on handshake.
+- **Backend first byte response time**. This value approximates the sum of *Backend connect time*, the time taken by the request to reach the backend from Application Gateway, the time taken by the backend application to respond (the time the server takes to generate content and potentially fetch database queries), and the time taken by the first byte of the response to reach Application Gateway from the backend.
+- **Backend last byte response time**. This value approximates the sum of backend first byte response time and data transfer time. This number varies greatly depending on the size of objects requested and the latency of the server network.
+- **Application gateway total time**. This interval is the time from when Application Gateway receives the first byte of the HTTP request to when it sends the last byte of the response to the client.
+- **Client RTT**. Average round-trip time between clients and Application Gateway.
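
To localize a slowdown using these timing metrics, you can chart them side by side in Log Analytics. The following is a minimal Kusto sketch against the AzureMetrics table; the metric names (`BackendConnectTime`, `BackendFirstByteResponseTime`) and the resource name are assumptions inferred from the display names above, so verify them in your workspace.

```kusto
// Sketch: compare backend timing metrics to localize latency.
// Metric and resource names are assumed; verify them in your workspace.
AzureMetrics
| where ResourceProvider == "MICROSOFT.NETWORK" and Resource == "MYAPPGATEWAY"
| where MetricName in ("BackendConnectTime", "BackendFirstByteResponseTime")
| summarize avg(Average) by MetricName, bin(TimeGenerated, 5m)
| render timechart
```

If both series spike together, suspect the network to the backend or the backend server's TCP stack; if only first byte response time spikes, suspect the backend application itself.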
-### Application Gateway metrics
+### Metrics for Application Gateway v2 SKU
-| Metric | Unit | Description|
-|:-|:--|:|
-|**Bytes received**|Bytes|Count of bytes received by the Application Gateway from the clients. (This metric accounts for only the Request content size observed by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions.)|
-|**Bytes sent**|Bytes|Count of bytes sent by the Application Gateway to the clients. (This metric accounts for only the Response Content size served by the Application Gateway. It doesn't include data transfers such as TCP/IP packet headers or retransmissions.)|
-|**Client TLS protocol**|Count|Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the TLS Protocol dimension.|
-|**Current capacity units**|Count|Count of capacity units consumed to load balance the traffic. There are three determinants to capacity unit - compute unit, persistent connections, and throughput. Each capacity unit is composed of at most: one compute unit, or 2500 persistent connections, or 2.22-Mbps throughput.|
-|**Current compute units**|Count|Count of processor capacity consumed. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing.|
-|**Current connections**|Count|The total number of concurrent connections active from clients to the Application Gateway.|
-|**Estimated Billed Capacity units**|Count|With the v2 SKU, the pricing model is driven by consumption. Capacity units measure consumption-based cost that is charged in addition to the fixed cost. *Estimated Billed Capacity units indicate the number of capacity units using which the billing is estimated. This is calculated as the greater value between *Current capacity units* (capacity units required to load balance the traffic) and *Fixed billable capacity units* (minimum capacity units kept provisioned).|
-|**Failed Requests**|Count|Number of requests that Application Gateway has served with 5xx server error codes. This includes the 5xx codes that are generated from the Application Gateway and the 5xx codes that are generated from the backend. The request count can be further filtered to show count per each/specific backend pool-http setting combination.|
-|**Fixed Billable Capacity Units**|Count|The minimum number of capacity units kept provisioned as per the *Minimum scale units* setting (one instance translates to 10 capacity units) in the Application Gateway configuration.|
-|**New connections per second**|Count|The average number of new TCP connections per second established from clients to the Application Gateway and from the Application Gateway to the backend members.|
-|**Response Status**|Status code|HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.|
-|**Throughput**|Bytes/sec|Number of bytes per second the Application Gateway has served. (This metric accounts for only the Content size served by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions.)|
-|**Total Requests**|Count|Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.|
+For Application Gateway v2 SKU, the following metrics are available. These descriptions expand on the metrics listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).
-### Backend metrics
+- **Bytes received**. This metric accounts for only the Request content size observed by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions.
+- **Bytes sent**. This metric accounts for only the Response Content size served by the Application Gateway. It doesn't include data transfers such as TCP/IP packet headers or retransmissions.
+- **Client TLS protocol**. Count of TLS and non-TLS requests.
+- **Current capacity units**. There are three determinants to capacity unit: compute unit, persistent connections, and throughput. Each capacity unit is composed of at most one compute unit, or 2500 persistent connections, or 2.22-Mbps throughput.
+- **Current compute units**. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing.
+- **Current connections**. The total number of concurrent connections active from clients to the Application Gateway.
+- **Estimated Billed Capacity units**. With the v2 SKU, consumption drives the pricing model. Capacity units measure consumption-based cost that is charged in addition to the fixed cost. *Estimated Billed Capacity units* indicates the number of capacity units used to estimate billing. This amount is calculated as the greater value between *Current capacity units* (capacity units required to load balance the traffic) and *Fixed billable capacity units* (minimum capacity units kept provisioned).
+- **Failed Requests**. This value includes the 5xx codes that are generated from the Application Gateway and the 5xx codes that are generated from the backend. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
+- **Fixed Billable Capacity Units**. The minimum number of capacity units kept provisioned as per the *Minimum scale units* setting in the Application Gateway configuration. One instance translates to 10 capacity units.
+- **New connections per second**. The average number of new TCP connections per second established from clients to the Application Gateway and from the Application Gateway to the backend members.
+- **Response Status**. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
+- **Throughput**. This metric accounts for only the Content size served by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions.
+- **Total Requests**. Successful requests that Application Gateway served. The request count can be filtered to show count per each/specific backend pool-http setting combination.
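
Because v2 billing is consumption-based, it can help to trend the capacity unit metrics against each other. A minimal Log Analytics sketch, assuming the metric names `CurrentCapacityUnits`, `FixedBillableCapacityUnits`, and `EstimatedBilledCapacityUnits` (confirm them in your workspace):

```kusto
// Sketch: trend the capacity unit metrics that drive v2 billing.
// Metric and resource names are assumed; confirm them in your workspace.
AzureMetrics
| where Resource == "MYAPPGATEWAY"
| where MetricName in ("CurrentCapacityUnits", "FixedBillableCapacityUnits", "EstimatedBilledCapacityUnits")
| summarize max(Maximum) by MetricName, bin(TimeGenerated, 1h)
| render timechart
```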
-| Metric | Unit | Description|
-|:-|:--|:|
-|**Backend response status**|Count|Count of HTTP response status codes returned by the backends. This doesn't include any response codes generated by the Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.|
-|**Healthy host count**|Count|The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.|
-|**Unhealthy host count**|Count|The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.|
-|**Requests per minute per Healthy Host**|Count|The average number of requests received by each healthy member in a backend pool in a minute. Specify the backend pool using the *BackendPool HttpSettings* dimension.|
+### Backend metrics for Application Gateway v2 SKU
-### Backend health API
+For Application Gateway v2 SKU, the following backend metrics are available. These descriptions expand on the backend metrics listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).
-See [Application Gateways - Backend Health](/rest/api/application-gateway/application-gateways/backend-health?tabs=HTTP) for details of the API call to retrieve the backend health of an application gateway.
+- **Backend response status**. Count of HTTP response status codes returned by the backends, not including any response codes generated by the Application Gateway. The response status code distribution can be categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
+- **Healthy host count**. The number of hosts that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.
+- **Unhealthy host count**. The number of hosts that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.
+- **Requests per minute per Healthy Host**. The average number of requests received by each healthy member in a backend pool in a minute. Specify the backend pool using the *BackendPool HttpSettings* dimension.
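
As one example of using these backend metrics, the following sketch lists intervals where the health probe reported unhealthy hosts. The metric name `UnhealthyHostCount` is assumed from the display name above.

```kusto
// Sketch: surface intervals where the health probe saw unhealthy hosts.
// The UnhealthyHostCount metric name is assumed from its display name.
AzureMetrics
| where Resource == "MYAPPGATEWAY" and MetricName == "UnhealthyHostCount"
| where Maximum > 0
| project TimeGenerated, Resource, Maximum
| order by TimeGenerated desc
```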
-Sample Request:
-``output
-POST
-https://management.azure.com/subscriptions/subid/resourceGroups/rg/providers/Microsoft.Network/
-applicationGateways/appgw/backendhealth?api-version=2021-08-01
-After
-``
-
-After sending this POST request, you should see an HTTP 202 Accepted response. In the response headers, find the Location header and send a new GET request using that URL.
+### Metrics for Application Gateway v1 SKU
-``output
-GET
-https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/region-name/operationResults/GUID?api-version=2021-08-01
-``
+For Application Gateway v1 SKU, the following metrics are available. These descriptions expand on the metrics listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).
-### Application Gateway TLS/TCP proxy monitoring
+- **CPU Utilization**. Displays the utilization of the CPUs allocated to the Application Gateway. Under normal conditions, CPU usage shouldn't regularly exceed 90%, because that situation might cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. You can indirectly control or improve CPU utilization by increasing the instance count, by moving to a larger SKU size, or by doing both.
-#### TLS/TCP proxy metrics
-
-With layer 4 proxy feature now available with Application Gateway, there are some Common metrics (apply to both layer 7 as well as layer 4), and some layer 4 specific metrics. The following table describes all the metrics are the applicable for layer 4 usage.
-
-| Metric | Description | Type | Dimension |
-|:--|:|:-|:-|
-| Current Connections | The number of active connections: reading, writing, or waiting. The count of current connections established with Application Gateway. | Common metric | None |
-| New Connections per second | The average number of connections handled per second during that minute. | Common metric | None |
-| Throughput | The rate of data flow (inBytes+ outBytes) during that minute. | Common metric | None |
-| Healthy host count | The number of healthy backend hosts. | Common metric | BackendSettingsPool |
-| Unhealthy host | The number of unhealthy backend hosts. | Common metric | BackendSettingsPool |
-| ClientRTT | Average round trip time between clients and Application Gateway. | Common metric | Listener |
-| Backend Connect Time | Time spent establishing a connection with a backend server. | Common metric | Listener, BackendServer, BackendPool, BackendSetting |
-| Backend First Byte Response Time | Time interval between start of establishing a connection to backend server and receiving the first byte of data (approximating processing time of backend server). | Common metric | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
-| Backend Session Duration | The total time of a backend connection. The average time duration from the start of a new connection to its termination. | L4-specific | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
-| Connection Lifetime | The total time of a client connection to application gateway. The average time duration from the start of a new connection to its termination in milliseconds. | L4-specific | Listener |
-
-`*` BackendHttpSetting dimension includes both layer 7 and layer 4 backend settings.
-
-#### TLS/TCP proxy logs
-
-Application Gateway's Layer 4 proxy provides log data through access logs. These logs are only generated and published if they are configured in the diagnostic settings of your gateway. Also see: [Supported categories for Azure Monitor resource logs](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways).
-
-> [!NOTE]
-> The columns with Mutual Authentication details for a TLS listener are currently available only through the [AzureDiagnostics table](application-gateway-diagnostics.md#storage-locations).
-
-| Category | Resource log category |
-|:--|:-|
-| ResourceGroup | The resource group to which the application gateway resource belongs. |
-| SubscriptionId |The subscription ID of the application gateway resource. |
-| ResourceProvider |This will be MICROSOFT.NETWORK for application gateway. |
-| Resource |The name of the application gateway resource. |
-| ResourceType |This will be APPLICATIONGATEWAYS. |
-| ruleName |The name of the routing rule that served the connection request. |
-| instanceId |Application Gateway instance that served the request. |
-| clientIP |Originating IP for the request. |
-| receivedBytes |Data received from client to gateway, in bytes. |
-| sentBytes |Data sent from gateway to client, in bytes. |
-| listenerName |The name of the listener that established the frontend connection with client. |
-| backendSettingName |The name of the backend setting used for the backend connection. |
-| backendPoolName |The name of the backend pool from which a target server was selected to establish the backend connection. |
-| protocol |TCP (Irrespective of it being TCP or TLS, the protocol value will always be TCP). |
-| sessionTime |session duration, in seconds (this is for the client->appgw session) |
-| upstreamSentBytes |Data sent to backend server, in bytes. |
-| upstreamReceivedBytes |Data received from backend server, in bytes. |
-| upstreamSessionTime |session duration, in seconds (this is for the appgw->backend session) |
-| sslCipher |Cipher suite being used for TLS communication (for TLS protocol listeners). |
-| sslProtocol |SSL/TLS protocol being used (for TLS protocol listeners). |
-| serverRouted |The backend server IP and port number to which the traffic was routed. |
-| serverStatus |200 - session completed successfully. 400 - client data could not be parsed. 500 - internal server error. 502 - bad gateway. For example, when an upstream server could not be reached. 503 - service unavailable. For example, if access is limited by the number of connections. |
-| ResourceId |Application Gateway resource URI |
-
-### TLS/TCP proxy backend health
+- **Current connections**. Count of current connections established with Application Gateway.
-Application Gateway's layer 4 proxy provides the capability to monitor the health of individual members of the backend pools through the portal and REST API.
+- **Failed Requests**. Number of requests that failed due to connection issues. This count includes requests that failed due to exceeding the *Request time-out* HTTP setting and requests that failed due to connection issues between Application Gateway and the backend. This count doesn't include failures due to no healthy backend being available. 4xx and 5xx responses from the backend are also not considered as part of this metric.
-![Screenshot of backend health](./media/monitor-application-gateway-reference/backend-health.png)
+- **Response Status**. HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
+- **Throughput**. Number of bytes per second the Application Gateway served.
+- **Total Requests**. Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.
-## Application Gateway v1 metrics
+### Backend metrics for Application Gateway v1 SKU
-### Application Gateway metrics
+For Application Gateway v1 SKU, the following backend metrics are available. These descriptions expand on the backend metrics listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).
-| Metric | Unit | Description|
-|:-|:--|:|
-|**CPU Utilization**|Percent|Displays the CPU usage allocated to the Application Gateway. Under normal conditions, CPU usage should not regularly exceed 90%, as this may cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. You can indirectly control or improve CPU usage by modifying the configuration of the Application Gateway by increasing the instance count or by moving to a larger SKU size, or doing both.|
-|**Current connections**|Count|Count of current connections established with Application Gateway.|
-|**Failed Requests**|Count|Number of requests that failed because of connection issues. This count includes requests that failed due to exceeding the *Request time-out* HTTP setting and requests that failed due to connection issues between Application Gateway and the backend. This count doesn't include failures due to no healthy backend being available. 4xx and 5xx responses from the backend are also not considered as part of this metric.|
-|**Response Status**|Status code|HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.|
-|**Throughput**|Bytes/sec|Number of bytes per second the Application Gateway has served.|
-|**Total Requests**|Count|Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.|
-|**Web Application Firewall Blocked Requests Count**|Count|Number of requests blocked by WAF.|
-|**Web Application Firewall Blocked Requests Distribution**|Count|Number of requests blocked by WAF filtered to show count per each/specific WAF rule group or WAF rule ID combination.|
-|**Web Application Firewall Total Rule Distribution**|Count|Number of requests received per each specific WAF rule group or WAF rule ID combination.|
+- **Healthy host count**. The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.
+- **Unhealthy host count**. The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.
-<!-- Keep this text as-is -->
-For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
+### Backend health API
+See [Application Gateways - Backend Health](/rest/api/application-gateway/application-gateways/backend-health?tabs=HTTP) for details of the API call to retrieve the backend health of an application gateway.
+Sample Request:
-## Metrics Dimensions
+```http
+POST
+https://management.azure.com/subscriptions/subid/resourceGroups/rg/providers/Microsoft.Network/
+applicationGateways/appgw/backendhealth?api-version=2021-08-01
+```
-<!-- REQUIRED. Please keep headings in this order -->
-<!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com -->
+After sending this POST request, you should see an HTTP 202 Accepted response. In the response headers, find the Location header and send a new GET request using that URL.
-For more information on what metrics dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
+```http
+GET
+https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/region-name/operationResults/GUID?api-version=2021-08-01
+```
+### TLS/TCP proxy metrics
-<!-- See https://learn.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
+Application Gateway supports TLS/TCP proxy monitoring. With the layer 4 proxy feature now available in Application Gateway, some common metrics apply to both layer 7 and layer 4, while other metrics are specific to layer 4. The following list summarizes the metrics that are applicable for layer 4 usage.
-Azure Application Gateway supports dimensions for some of the metrics in Azure Monitor. Each metric includes a description that explains the available dimensions specifically for that metric.
+- Current Connections
+- New Connections per second
+- Throughput
+- Healthy host count
+- Unhealthy host count
+- Client RTT
+- Backend Connect Time
+- Backend First Byte Response Time. The `BackendHttpSetting` dimension includes both layer 7 and layer 4 backend settings.
+For more information, see the previous descriptions and the [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).
-## Resource logs
-<!-- REQUIRED. Please keep headings in this order -->
+These metrics apply to layer 4 only.
-This section lists the types of resource logs you can collect for Azure Application Gateway.
+- **Backend Session Duration**. The total time of a backend connection, measured as the average duration from the start of a new connection to its termination. The `BackendHttpSetting` dimension includes both layer 7 and layer 4 backend settings.
+- **Connection Lifetime**. The total time of a client connection to application gateway, measured as the average duration in milliseconds from the start of a new connection to its termination.
-<!-- List all the resource log types you can have and what they are for -->
+### TLS/TCP proxy backend health
-For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
+Application Gateway's layer 4 proxy provides the capability to monitor the health of individual members of the backend pools through the portal and REST API.
+### Metric dimensions
+
+The following dimensions are available for Microsoft.Network/applicationGateways metrics:
+- Action
+- BackendHttpSetting
+- BackendPool
+- BackendServer
+- BackendSettingsPool
+- Category
+- CountryCode
+- CustomRuleID
+- HttpStatusGroup
+- Listener
+- Method
+- Mode
+- PolicyName
+- PolicyScope
+- RuleGroup
+- RuleGroupID
+- RuleId
+- RuleSetName
+- TlsProtocol
> [!NOTE]
-> The Performance log is available only for the v1 SKU. For the v2 SKU, use [Application Gateway v2 metrics](#application-gateway-v2-metrics) for performance data.
-
-For more information, see [Backend health and diagnostic logs for Application Gateway](application-gateway-diagnostics.md#access-log).
-
-<!-- OPTION 2 - Link to the resource logs as above, but work in extra information not found in the automated metric-supported reference article. NOTE: YOU MUST MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the resource-log-categories link. You can group these sections however you want provided you include the proper links back to resource-log-categories article.
-<!-- Example format. Add extra information -->
+>
+> If the Application Gateway has more than one listener, always filter by the *Listener* dimension when comparing different latency metrics to draw more meaningful inferences.
-## Application Gateway
-Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways)
+### Supported resource log categories for Microsoft.Network/applicationGateways
-| Category | Display Name | Information|
-|:|:-||
-| **Activitylog** | Activity log | Activity log entries are collected by default. You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. |
-|**ApplicationGatewayAccessLog**|Access log| You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP address, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.|
-| **ApplicationGatewayPerformanceLog**|Performance log|You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Application Gateway v2 metrics](#application-gateway-v2-metrics) for performance data.|
-|**ApplicationGatewayFirewallLog**|Firewall log|You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.|
+- **Access log**. You can use the Access log to view Application Gateway access patterns and analyze important information. This information includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The `instanceId` property identifies the Application Gateway instance.
-## Azure Monitor Logs tables
-<!-- REQUIRED. Please keep heading in this order -->
+- **Firewall log**. You can use the Firewall log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Application Gateway and available for query by Log Analytics.
+- **Performance log**. You can use the Performance log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds.
+ > [!NOTE]
+ > The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data.
-<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com.
>
+### Access log category
-<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
+The access log is generated only if you enable it on each Application Gateway instance, as detailed in [Enable logging](application-gateway-diagnostics.md#enable-logging-through-the-azure-portal). The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format, as shown in the following example.
-|Resource Type | Notes |
-|-|--|
-| [Application Gateway](/azure/azure-monitor/reference/tables/tables-resourcetype#application-gateways) |Includes AzureActivity, AzureDiagnostics, and AzureMetrics |
+> [!NOTE]
+> For TLS/TCP proxy related information, visit [data reference](monitor-application-gateway-reference.md#tlstcp-proxy-logs).
+
+For Application Gateway and WAF v2 SKU:
+
+| Value | Description |
+|:|:|
+|instanceId | Application Gateway instance that served the request. |
+|clientIP | IP of the immediate client of Application Gateway. If another proxy fronts your application gateway, this value displays the IP of that fronting proxy. |
+|httpMethod | HTTP method used by the request. |
+|requestUri | URI of the received request. |
+|UserAgent | User agent from the HTTP request header. |
+|httpStatus | HTTP status code returned to the client from Application Gateway. |
+|httpVersion | HTTP version of the request. |
+|receivedBytes | Size of packet received, in bytes. |
+|sentBytes | Size of packet sent, in bytes. |
+|clientResponseTime | Time difference (in seconds) between the first byte and the last byte application gateway sent to the client. Helpful in gauging Application Gateway's processing time for responses or slow clients. |
+|timeTaken | Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and the last byte sent in the response to the client. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
+|WAFEvaluationTime | Length of time (in **seconds**) that it takes for the request to be processed by the WAF. |
+|WAFMode | Value can be either Detection or Prevention. |
+|transactionId | Unique identifier to correlate the request received from the client. |
+|sslEnabled | Whether communication to the backend pools used TLS. Valid values are on and off. |
+|sslCipher | Cipher suite being used for TLS communication (if TLS is enabled). |
+|sslProtocol | SSL/TLS protocol being used (if TLS is enabled). |
+|sslClientVerify | Shows the result of client certificate verification as SUCCESS or FAILED. A failed status includes error information.|
+|sslClientCertificateFingerprint|The SHA1 thumbprint of the client certificate for an established TLS connection.|
+|sslClientCertificateIssuerName|The issuer DN string of the client certificate for an established TLS connection.|
+|serverRouted | The backend server that application gateway routes the request to. |
+|serverStatus | HTTP status code of the backend server. |
+|serverResponseLatency | Latency of the response (in **seconds**) from the backend server. |
+|host | Address listed in the host header of the request. If rewritten using header rewrite, this field contains the updated host name. |
+|originalRequestUriWithArgs | This field contains the original request URL. |
+|requestUri | This field contains the URL after the rewrite operation on Application Gateway. |
+|upstreamSourcePort | The source port used by Application Gateway when initiating a connection to the backend target. |
+|originalHost | This field contains the original request host name. |
+|error_info | The reason for the 4xx and 5xx error. Displays an error code for a failed request. More details in the error code tables in this article. |
+|contentType | The type of content or data that is being processed or delivered by the application gateway. |
+
+```json
+{
+ "timeStamp": "2021-10-14T22:17:11+00:00",
+ "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
+ "listenerName": "HTTP-Listener",
+ "ruleName": "Storage-Static-Rule",
+ "backendPoolName": "StaticStorageAccount",
+ "backendSettingName": "StorageStatic-HTTPS-Setting",
+ "operationName": "ApplicationGatewayAccess",
+ "category": "ApplicationGatewayAccessLog",
+ "properties": {
+ "instanceId": "appgw_2",
+ "clientIP": "185.42.129.24",
+ "clientPort": 45057,
+ "httpMethod": "GET",
+ "originalRequestUriWithArgs": "\/",
+ "requestUri": "\/",
+ "requestQuery": "",
+ "userAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/52.0.2743.116 Safari\/537.36",
+ "httpStatus": 200,
+ "httpVersion": "HTTP\/1.1",
+ "receivedBytes": 184,
+ "sentBytes": 466,
+ "clientResponseTime": 0,
+ "timeTaken": 0.034,
+ "WAFEvaluationTime": "0.000",
+ "WAFMode": "Detection",
+ "transactionId": "592d1649f75a8d480a3c4dc6a975309d",
+ "sslEnabled": "on",
+ "sslCipher": "ECDHE-RSA-AES256-GCM-SHA384",
+ "sslProtocol": "TLSv1.2",
+ "sslClientVerify": "NONE",
+ "sslClientCertificateFingerprint": "",
+ "sslClientCertificateIssuerName": "",
+ "serverRouted": "52.239.221.65:443",
+ "serverStatus": "200",
+ "serverResponseLatency": "0.028",
+ "upstreamSourcePort": "21564",
+ "originalHost": "20.110.30.194",
+ "host": "20.110.30.194",
+ "error_info":"ERRORINFO_NO_ERROR",
+ "contentType":"application/json"
+ }
+}
+```
+> [!NOTE]
+>
+> Access logs with clientIP value 127.0.0.1 originate from an internal security process running on the application gateway instances. You can safely ignore these log entries.
+
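When these fields are collected in a Log Analytics workspace through the Azure Diagnostics mode, you can query them directly. The following is a minimal sketch; the suffixed column names (`timeTaken_d`, `httpStatus_d`, and so on) are assumed AzureDiagnostics projections of the fields above, and resource-specific deployments should query the AGWAccessLogs table instead.

```kusto
// Sketch: find the slowest requests recorded in the access log.
// Suffixed column names are assumed AzureDiagnostics projections.
AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| project TimeGenerated, clientIP_s, requestUri_s, httpStatus_d, timeTaken_d, serverRouted_s
| top 20 by timeTaken_d desc
```
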
+For Application Gateway Standard and WAF SKU (v1):
+
+| Value | Description |
+|:--|-|
+| instanceId | Application Gateway instance that served the request. |
+| clientIP | Originating IP for the request. |
+| clientPort | Originating port for the request. |
+| httpMethod | HTTP method used by the request. |
+| requestUri | URI of the received request. |
+| RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. |
+| UserAgent | User agent from the HTTP request header. |
+| httpStatus | HTTP status code returned to the client from Application Gateway. |
+| httpVersion | HTTP version of the request. |
+| receivedBytes | Size of packet received, in bytes. |
+| sentBytes | Size of packet sent, in bytes. |
+| timeTaken | Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This value is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
+| sslEnabled | Whether communication to the backend pools used TLS/SSL. Valid values are on and off. |
+| host | The hostname for which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that. |
+| originalHost | The hostname for which the request was received by the Application Gateway from the client. |
+
+```json
+{
+ "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
+ "operationName": "ApplicationGatewayAccess",
+ "time": "2017-04-26T19:27:38Z",
+ "category": "ApplicationGatewayAccessLog",
+ "properties": {
+ "instanceId": "ApplicationGatewayRole_IN_0",
+ "clientIP": "191.96.249.97",
+ "clientPort": 46886,
+ "httpMethod": "GET",
+ "requestUri": "/phpmyadmin/scripts/setup.php",
+ "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404",
+ "userAgent": "-",
+ "httpStatus": 404,
+ "httpVersion": "HTTP/1.0",
+ "receivedBytes": 65,
+ "sentBytes": 553,
+ "timeTaken": 205,
+ "sslEnabled": "off",
+ "host": "www.contoso.com",
+ "originalHost": "www.contoso.com"
+ }
+}
+```
+
+If the application gateway can't complete the request, it stores one of the following reason codes in the error_info field of the access log.
+
+The 4xx error codes indicate that there was an issue with the client's request, and the Application Gateway can't fulfill it.
+
+| 4xx errors | Description |
+|:|:|
+| ERRORINFO_INVALID_METHOD | The client sent a request that is non-RFC compliant. Possible reasons: the client used an HTTP method not supported by the server, a misspelled method, or an incompatible HTTP protocol version. |
+| ERRORINFO_INVALID_REQUEST | The server can't fulfill the request because of incorrect syntax. |
+| ERRORINFO_INVALID_VERSION | The application gateway received a request with an invalid or unsupported HTTP version. |
+| ERRORINFO_INVALID_09_METHOD | The client sent request with HTTP Protocol version 0.9. |
+| ERRORINFO_INVALID_HOST | The value provided in the "Host" header is either missing, improperly formatted, or doesn't match the expected host value. For example, when there's no Basic listener, and none of the hostnames of Multisite listeners match with the host. |
+| ERRORINFO_INVALID_CONTENT_LENGTH | The length of the content specified by the client in the content-Length header doesn't match the actual length of the content in the request. |
+| ERRORINFO_INVALID_METHOD_TRACE | The client sent HTTP TRACE method, which the application gateway doesn't support. |
+| ERRORINFO_CLIENT_CLOSED_REQUEST | The client closed the connection with the application gateway before the idle timeout period elapsed. Check whether the client timeout period is greater than the [idle timeout period](./application-gateway-faq.yml#what-are-the-settings-for-keep-alive-timeout-and-tcp-idle-timeout) for the application gateway. |
+| ERRORINFO_REQUEST_URI_INVALID | Indicates issue with the Uniform Resource Identifier (URI) provided in the client's request. |
+| ERRORINFO_HTTP_NO_HOST_HEADER | Client sent a request without Host header. |
+| ERRORINFO_HTTP_TO_HTTPS_PORT | The client sent a plain HTTP request to an HTTPS port. |
+| ERRORINFO_HTTPS_NO_CERT | Indicates client isn't sending a valid and properly configured TLS certificate during Mutual TLS authentication. |
+
+| 5XX Errors | Description |
+|:--|:|
+| ERRORINFO_UPSTREAM_NO_LIVE | The application gateway is unable to find any active or reachable backend servers to handle incoming requests. |
+| ERRORINFO_UPSTREAM_CLOSED_CONNECTION | The backend server closed the connection unexpectedly or before the request was fully processed. This condition could happen due to the backend server reaching its limits, crashing, and so on. |
+| ERRORINFO_UPSTREAM_TIMED_OUT | The established TCP connection with the server was closed as the connection took longer than the configured timeout value. |
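
To see which of these reason codes dominate your failures, you can aggregate the access log on the error field. In this sketch, `errorInfo_s` is a hypothetical AzureDiagnostics projection of the error_info field; verify the exact column name in your workspace.

```kusto
// Sketch: count failed requests by reason code.
// errorInfo_s is a hypothetical projection of error_info; verify it.
AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| where errorInfo_s != "ERRORINFO_NO_ERROR"
| summarize failures = count() by errorInfo_s
| order by failures desc
```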
+
+### Firewall log category
+
+The firewall log is generated only if you enable it for each application gateway, as detailed in [Enable logging](application-gateway-diagnostics.md#enable-logging-through-the-azure-portal). This log also requires that the web application firewall is configured on an application gateway. The data is stored in the storage account that you specified when you enabled the logging. The following data is logged:
+
+| Value | Description |
+|: |:-|
+| instanceId | Application Gateway instance for which firewall data is being generated. For a multiple-instance application gateway, there's one row per instance. |
+| clientIp | Originating IP for the request. |
+| clientPort | Originating port for the request. |
+| requestUri | URL of the received request. |
+| ruleSetType | Rule set type. The available value is OWASP. |
+| ruleSetVersion | Rule set version used. Available values are 2.2.9 and 3.0. |
+| ruleId | Rule ID of the triggering event. |
+| message | User-friendly message for the triggering event. More details are provided in the details section. |
+| action | Action taken on the request. Available values are Blocked and Allowed (for custom rules), Matched (when a rule matches a part of the request), and Detected and Blocked (these values are both for mandatory rules, depending on if the WAF is in detection or prevention mode). |
+| site | Site for which the log was generated. Currently, only Global is listed because rules are global.|
+| details | Details of the triggering event. |
+| details.message | Description of the rule. |
+| details.data | Specific data found in request that matched the rule. |
+| details.file | Configuration file that contained the rule. |
+| details.line | Line number in the configuration file that triggered the event. |
+| hostname | Hostname or IP address of the Application Gateway. |
+| transactionId | Unique ID for a given transaction, which helps group multiple rule violations that occurred within the same request. |
+
+```json
+{
+ "timeStamp": "2021-10-14T22:17:11+00:00",
+ "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
+ "operationName": "ApplicationGatewayFirewall",
+ "category": "ApplicationGatewayFirewallLog",
+ "properties": {
+ "instanceId": "appgw_2",
+ "clientIp": "185.42.129.24",
+ "clientPort": "",
+ "requestUri": "\/",
+ "ruleSetType": "OWASP_CRS",
+ "ruleSetVersion": "3.0.0",
+ "ruleId": "920350",
+ "message": "Host header is a numeric IP address",
+ "action": "Matched",
+ "site": "Global",
+ "details": {
+ "message": "Warning. Pattern match \\\"^[\\\\d.:]+$\\\" at REQUEST_HEADERS:Host .... ",
+ "data": "20.110.30.194:80",
+ "file": "rules\/REQUEST-920-PROTOCOL-ENFORCEMENT.conf",
+ "line": "791"
+ },
+ "hostname": "20.110.30.194:80",
+ "transactionId": "592d1649f75a8d480a3c4dc6a975309d",
+ "policyId": "default",
+ "policyScope": "Global",
+ "policyScopeName": "Global"
+ }
+}
+```
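
A common follow-up is to rank which WAF rules fire most often and with what action. A minimal sketch; the suffixed column names (`ruleId_s`, `action_s`) are assumed AzureDiagnostics projections of the fields in the table above.

```kusto
// Sketch: rank WAF rules by how often they fire, and with what action.
// Suffixed column names are assumed AzureDiagnostics projections.
AzureDiagnostics
| where Category == "ApplicationGatewayFirewallLog"
| summarize hits = count() by ruleId_s, action_s
| order by hits desc
```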
+
+### Performance log category
+
+The performance log is generated only if you enable it on each Application Gateway instance, as detailed in [Enable logging](application-gateway-diagnostics.md#enable-logging-through-the-azure-portal). The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It's available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. The following data is logged:
+
+| Value | Description |
+|:|:|
+| instanceId | Application Gateway instance for which performance data is being generated. For a multiple-instance application gateway, there's one row per instance. |
+| healthyHostCount | Number of healthy hosts in the backend pool. |
+| unHealthyHostCount | Number of unhealthy hosts in the backend pool. |
+| requestCount | Number of requests served. |
+| latency | Average latency (in milliseconds) of requests from the instance to the back end that serves the requests. |
+| failedRequestCount | Number of failed requests.|
+| throughput | Average throughput since the last log, measured in bytes per second.|
+
+```json
+{
+ "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}",
+ "operationName": "ApplicationGatewayPerformance",
+ "time": "2016-04-09T00:00:00Z",
+ "category": "ApplicationGatewayPerformanceLog",
+ "properties":
+ {
+ "instanceId":"ApplicationGatewayRole_IN_1",
+ "healthyHostCount":"4",
+ "unHealthyHostCount":"0",
+ "requestCount":"185",
+ "latency":"0",
+ "failedRequestCount":"0",
+ "throughput":"119427"
+ }
+}
+```
-For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+> [!NOTE]
+> Latency is calculated from the time when the first byte of the HTTP request is received to the time when the last byte of the HTTP response is sent. It's the sum of the Application Gateway processing time plus the network cost to the back end, plus the time that the back end takes to process the request.
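
For trend analysis on a v1 gateway, you can aggregate the performance log per instance. A hedged sketch; the suffixed column names are assumed AzureDiagnostics projections of the fields above.

```kusto
// Sketch: hourly backend latency and failures per v1 instance.
// Suffixed column names are assumed AzureDiagnostics projections.
AzureDiagnostics
| where Category == "ApplicationGatewayPerformanceLog"
| summarize avgLatencyMs = avg(latency_d), failed = sum(failedRequestCount_d)
    by instanceId_s, bin(TimeGenerated, 1h)
```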
-### Diagnostics tables
-<!-- REQUIRED. Please keep heading in this order -->
-<!-- If your service uses the AzureDiagnostics table in Azure Monitor Logs / Log Analytics, list what fields you use and what they are for. Azure Diagnostics is over 500 columns wide with all services using the fields that are consistent across Azure Monitor and then adding extra ones just for themselves. If it uses service specific diagnostic table, refers to that table. If it uses both, put both types of information in. Most services in the future have their own specific table. If you have questions, contact azmondocs@microsoft.com -->
+### Azure Monitor Logs and Log Analytics tables
Azure Application Gateway uses the [Azure Diagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table to store resource log information. The following columns are relevant.
-**Azure Diagnostics**
- | Property | Description |
-|: |:|
-requestUri_s | The URI of the client request.|
-Message | Informational messages such as "SQL Injection Attack"|
-userAgent_s | User agent details of the client request|
-ruleName_s | Request routing rule that is used to serve this request|
-httpMethod_s | HTTP method of the client request|
-instanceId_s | The Appgw instance to which the client request is routed to for evaluation|
-httpVersion_s | HTTP version of the client request|
-clientIP_s | IP from which is request is made|
-host_s | Host header of the client request|
-requestQuery_s | Query string as part of the client request|
-sslEnabled_s | Does the client request have SSL enabled|
--
-## See Also
-
-<!-- replace below with the proper link to your main monitoring service article -->
-- See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+|:-- |:|
+| requestUri_s | The URI of the client request.|
+| Message | Informational messages such as "SQL Injection Attack"|
+| userAgent_s | User agent details of the client request|
+| ruleName_s | Request routing rule that is used to serve this request|
+| httpMethod_s | HTTP method of the client request|
+| instanceId_s | The Application Gateway instance to which the client request is routed for evaluation|
+| httpVersion_s | HTTP version of the client request|
+| clientIP_s | IP from which the request is made|
+| host_s | Host header of the client request|
+| requestQuery_s | Query string as part of the client request|
+| sslEnabled_s | Whether the client request has SSL enabled|
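
As a quick illustration of these columns in use, the following sketch pulls recent firewall messages with their request context. The category filter is an assumption; adjust it to the log category you're investigating.

```kusto
// Sketch: recent WAF messages with request context, using the columns above.
AzureDiagnostics
| where Category == "ApplicationGatewayFirewallLog"
| project TimeGenerated, clientIP_s, requestUri_s, Message, httpMethod_s, host_s
| order by TimeGenerated desc
```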
+### Application Gateway
+
+Microsoft.Network/applicationGateways
+
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs#columns)
+- [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs#columns)
+- [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
+
+### TLS/TCP proxy logs
+
+Application Gateway's Layer 4 proxy provides log data through access logs. These logs are only generated and published if they're configured in the diagnostic settings of your gateway. Also see: [Supported categories for Azure Monitor resource logs](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways).
+
+> [!NOTE]
+> The columns with Mutual Authentication details for a TLS listener are currently available only through the [AzureDiagnostics table](/azure/azure-monitor/reference/tables/azurediagnostics).
+
+| Category | Resource log category |
+|:--|:-|
+| ResourceGroup | The resource group to which the application gateway resource belongs. |
+| SubscriptionId | The subscription ID of the application gateway resource. |
+| ResourceProvider | This value is MICROSOFT.NETWORK for application gateway. |
+| Resource | The name of the application gateway resource. |
+| ResourceType | This value is APPLICATIONGATEWAYS. |
+| ruleName | The name of the routing rule that served the connection request. |
+| instanceId | Application Gateway instance that served the request. |
+| clientIP | Originating IP for the request. |
+| receivedBytes | Data received from client to gateway, in bytes. |
+| sentBytes | Data sent from gateway to client, in bytes. |
+| listenerName | The name of the listener that established the frontend connection with client. |
+| backendSettingName | The name of the backend setting used for the backend connection. |
+| backendPoolName | The name of the backend pool from which a target server was selected to establish the backend connection. |
+| protocol | TCP (Irrespective of it being TCP or TLS, the protocol value is always TCP). |
+| sessionTime | Session duration, in seconds (this value is for the client->appgw session). |
+| upstreamSentBytes | Data sent to backend server, in bytes. |
+| upstreamReceivedBytes | Data received from backend server, in bytes. |
+| upstreamSessionTime | Session duration, in seconds (this value is for the appgw->backend session). |
+| sslCipher | Cipher suite being used for TLS communication (for TLS protocol listeners). |
+| sslProtocol | SSL/TLS protocol being used (for TLS protocol listeners). |
+| serverRouted | The backend server IP and port number to which the traffic was routed. |
+| serverStatus | 200 - session completed successfully. 400 - client data couldn't be parsed. 500 - internal server error. 502 - bad gateway. For example, when an upstream server couldn't be reached. 503 - service unavailable. For example, if access is limited by the number of connections. |
+| ResourceId | Application Gateway resource URI. |
+### Activity log
+- [applicationGateways resource provider operations](/azure/role-based-access-control/resource-provider-operations#networking)
+
+You can use Azure activity logs to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default. You can view them in the Azure portal. Azure activity logs were formerly known as *operational logs* and *audit logs*.
+
+Azure generates activity logs by default. The logs are preserved for 90 days in the Azure event logs store. Learn more about these logs by reading the [View events and activity log](../azure-monitor/essentials/activity-log.md) article.
+
+## Related content
+
+- See [Monitor Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Application Gateway.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
application-gateway Monitor Application Gateway https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md
Title: Monitoring Azure Application Gateway
-description: Start here to learn how to monitor Azure Application Gateway
+ Title: Monitor Azure Application Gateway
+description: Start here to learn how to monitor Azure Application Gateway. Learn how to monitor resources for availability, performance, and operation.
Last updated: 06/17/2024 (previously updated: 02/26/2024)
-<!-- VERSION 2.2
-Template for the main monitoring article for Azure services.
-Keep the required sections and add/modify any content for any information specific to your service.
-This article should be in your TOC with the name *monitor-[Azure Application Gateway].md* and the TOC title "Monitor Azure Application Gateway".
-Put accompanying reference information into an article in the Reference section of your TOC with the name *monitor-[service-name]-reference.md* and the TOC title "Monitoring data".
-Keep the headings in this order.
+# Monitor Azure Application Gateway
+Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources, including Application Gateway, without requiring any configuration. For more information, see [Azure Monitor Network Insights](../network-watcher/network-insights-overview.md).
+
+For more information about the resource types for Application Gateway, see [Application Gateway monitoring data reference](monitor-application-gateway-reference.md).
+
-# Monitoring Azure Application Gateway
-<!-- REQUIRED. Please keep headings in this order -->
-<!-- Most services can use this section unchanged. Add to it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+For Application Gateway, resource-specific mode creates three tables:
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+- [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs)
+- [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs)
+- [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs)
-This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+> [!NOTE]
+> The resource specific option is currently available in all **public regions**.
+>
+> Existing users can continue using Azure Diagnostics, or can opt for dedicated tables by switching the toggle in Diagnostic settings to **Resource specific**, or to **Dedicated** in the API destination. Dual mode isn't possible. The data in all the logs can flow either to Azure Diagnostics or to dedicated tables. However, you can have multiple diagnostic settings, where one data flow uses Azure Diagnostics and another uses resource-specific tables at the same time.
+**Selecting the destination table in Log Analytics**: All Azure services eventually use the resource-specific tables. As part of this transition, you can select the Azure Diagnostics or resource-specific table in the diagnostic setting by using a toggle button. The toggle is set to **Resource specific** by default. In this mode, logs for newly selected categories are sent to dedicated tables in Log Analytics, while existing streams remain unchanged. See the following example.
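For example, a diagnostic setting that routes Application Gateway logs to the dedicated tables can be created from the Azure CLI. This is a minimal sketch; the setting name is illustrative and the values in `{braces}` are placeholder resource IDs:

```azurecli
# A sketch: send all Application Gateway logs to resource-specific tables
az monitor diagnostic-settings create \
  --name agw-resource-specific \
  --resource {application-gateway-resource-id} \
  --workspace {log-analytics-workspace-resource-id} \
  --export-to-resource-specific true \
  --logs '[{"categoryGroup":"allLogs","enabled":true}]'
```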
-<!-- Optional diagram showing monitoring for your service. If you need help creating one, contact robb@microsoft.com -->
-## Monitoring overview page in Azure portal
-<!-- OPTIONAL. Please keep headings in this order -->
-<!-- Most services can use this section unchanged. Edit it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+**Workspace transformations:** Opting for the resource-specific option allows you to filter and modify your data before ingestion by using [workspace transformations](../azure-monitor/essentials/data-collection-transformations-workspace.md). This approach provides granular control, letting you focus on the most relevant information in the logs, thereby reducing data costs and enhancing security.
+
+For detailed instructions on setting up workspace transformations, see [Tutorial: Add a workspace transformation to Azure Monitor Logs by using the Azure portal](../azure-monitor/logs/tutorial-workspace-transformations-portal.md).
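As an illustration, a transformation on the `AGWAccessLogs` table could drop low-value rows and columns at ingestion time. The following KQL is only a sketch; the column names are assumptions about the access-log schema rather than a verified transformation:

```kusto
// Hypothetical ingestion-time transformation: keep only error traffic and a few columns
source
| where HttpStatus >= 400
| project TimeGenerated, ClientIp, RequestUri, HttpStatus, TimeTaken
```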
+ The **Overview** page in the Azure portal for each Application Gateway includes the following metrics:
The **Overview** page in the Azure portal for each Application Gateway includes
- Avg Healthy Host Count By BackendPool HttpSettings
- Avg Unhealthy Host Count By BackendPool HttpSettings
-This list is just a subset of the metrics available for Application Gateway. For more information, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md).
--
-## Azure Monitor Network Insights
-
-Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
-
-<!-- Give a quick outline of what your "insight page" provides and refer to another article that gives details -->
-
-Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources (including Application Gateway), without requiring any configuration. For more information, see [Azure Monitor Network Insights](../network-watcher/network-insights-overview.md).
-
-## Monitoring data
-
-<!-- REQUIRED. Please keep headings in this order -->
-Azure Application Gateway collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
-
-See [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md) for detailed information on the metrics and logs metrics created by Azure Application Gateway.
-
-<!-- If your service has additional non-Azure Monitor monitoring data then outline and refer to that here. Also include that information in the data reference as appropriate. -->
-
-## Collection and routing
-
-<!-- REQUIRED. Please keep headings in this order -->
-
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
-
-Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
-
-<!-- Include any additional information on collecting logs. The number of things that diagnostics settings control is expanding -->
-
-See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Application Gateway are listed in [Azure Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs).
-
-<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://learn.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. -->
+For a list of available metrics for Azure Application Gateway, see [Application Gateway monitoring data reference](monitor-application-gateway-reference.md#metrics).
-The metrics and logs you can collect are discussed in the following sections.
+For available Web Application Firewall (WAF) metrics, see [Application Gateway WAF v2 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics) and [Application Gateway WAF v1 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics).
-## Analyzing metrics
-
-<!-- REQUIRED. Please keep headings in this order
-If you don't support metrics, say so. Some services might be only onboarded to logs -->
-
-You can analyze metrics for Azure Application Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
-
-<!-- Point to the list of metrics available in your monitor-service-reference article. -->
-For a list of the platform metrics collected for Azure Application Gateway, see [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md).
--
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-
-<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about how to use metrics explorer specifically for your service. Remember that the UI is subject to change quite often so you need to maintain these screenshots yourself if you add them in. -->
-
-## Analyzing logs
-
-<!-- REQUIRED. Please keep headings in this order
-If you don't support resource logs, say so. Some services might be only onboarded to metrics and the activity log. -->
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
-All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Common and service-specific schema for Azure Resource Logs](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema).
-
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform login Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+See [Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs) for:
-For a list of the types of resource logs collected for Azure Application Gateway, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md#resource-logs).
+- A list of the types of resource logs collected for Application Gateway.
+- A list of the tables used by Azure Monitor Logs and queryable by Log Analytics.
+- The available resource log categories, their associated Log Analytics tables, and the log schemas for Application Gateway.
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md#azure-monitor-logs-tables).
-<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about log usage or what logs are most important. Remember that the UI is subject to change quite often so you need to maintain these screenshots yourself if you add them in. -->
-### Sample Kusto queries
+### Analyzing Access logs through GoAccess
-<!-- REQUIRED if you support logs. Please keep headings in this order -->
-<!-- Add sample Log Analytics Kusto queries for your service. -->
+We published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway access logs. GoAccess provides valuable HTTP traffic statistics such as unique visitors, requested files, hosts, operating systems, browsers, HTTP status codes, and more. For more information, see the [Readme file in the Resource Manager template folder in GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/application-gateway-logviewer-goaccess).
-> [!IMPORTANT]
-> When you select **Logs** from the Application Gateway menu, Log Analytics is opened with the query scope set to the current Application Gateway. This means that log queries only include data from that resource. If you want to run a query that includes data from other Application Gateways or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/log-query/scope/) for details.
-<!-- REQUIRED: Include queries that are helpful for figuring out the health and state of your service. Ideally, use some of these queries in the alerts section. It's possible that some of your queries might be in the Log Analytics UI (sample or example queries). Check if so. -->
-You can use the following queries to help you monitor your Application Gateway resource.
+The following examples show some useful queries for Application Gateway.
-<!-- Put in a code section here. -->
-```Kusto
+```kusto
// Requests per hour
// Count of the incoming requests on the Application Gateway.
// To create an alert for this query, click '+ New alert rule'
AzureDiagnostics
| sort by AggregatedValue desc
```
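As another example, the following query counts server errors per hour and can back an alert rule in the same way. It's a sketch based on the classic `AzureDiagnostics` access-log schema; validate the field names against your own logs:

```kusto
// Server errors (5xx) returned by the Application Gateway, bucketed by hour
AzureDiagnostics
| where ResourceType == "APPLICATIONGATEWAYS"
| where OperationName == "ApplicationGatewayAccess"
| where httpStatus_d >= 500
| summarize AggregatedValue = count() by bin(TimeGenerated, 1h)
| sort by AggregatedValue desc
```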
-## Alerts
-
-<!-- SUGGESTED: Include useful alerts on metrics, logs, log conditions or activity log. Ask your PMs if you don't know.
-This information is the BIGGEST request we get in Azure Monitor so do not avoid it long term. People don't know what to monitor for best results. Be prescriptive
>-
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks
-<!-- only include next line if applications run on your service and work with App Insights. -->
-If you're creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) can offer additional types of alerts.
-<!-- end -->
+To configure alerts using ARM templates, see [Configure Azure Monitor alerts](configure-alerts-with-templates.md).
-The following tables list common and recommended alert rules for Application Gateway.
+### Application Gateway alert rules
-<!-- Fill in the table with metric and log alerts that would be valuable for your service. Change the format as necessary to make it more readable -->
+The following table lists some suggested alert rules for Application Gateway. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Application Gateway monitoring data reference](monitor-application-gateway-reference.md).
-**Application Gateway v1**
+**Application Gateway v2**
| Alert type | Condition | Description |
|:--|:--|:--|
-|Metric|CPU utilization crosses 80%|Under normal conditions, CPU usage shouldn't regularly exceed 90%. This can cause latency in the websites hosted behind the Application Gateway and disrupt the client experience.|
-|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This catches issues where the Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.|
+|Metric|Compute Unit utilization crosses 75% of average usage|Compute unit is the measure of compute utilization of your Application Gateway. Check your average compute unit usage over the last month, and set an alert if it crosses 75% of that value.|
+|Metric|Capacity Unit utilization crosses 75% of peak usage|Capacity units represent overall gateway utilization in terms of throughput, compute, and connection count. Check your maximum capacity unit usage over the last month, and set an alert if it crosses 75% of that value.|
+|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This alert catches issues where Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.|
|Metric|Response status (4xx, 5xx) crosses threshold|When the Application Gateway response status is 4xx or 5xx. There could be occasional 4xx or 5xx responses due to transient issues. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
-|Metric|Failed requests crosses threshold|When failed requests metric crosses a threshold. You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.|
-
+|Metric|Failed requests crosses threshold|When the failed requests metric crosses a threshold. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
+|Metric|Backend last byte response time crosses threshold|Indicates the time interval between the start of establishing a connection to the backend server and receiving the last byte of the response body. Create an alert if the backend response latency exceeds its usual value by a certain threshold.|
+|Metric|Application Gateway total time crosses threshold|This value is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte is sent to the client. Create an alert if this latency exceeds its usual value by a certain threshold.|
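For example, the unhealthy host count alert from the preceding table could be created with the Azure CLI. This is a sketch; the alert name is illustrative, the values in `{braces}` are placeholders, and the threshold should be tuned to your backend capacity:

```azurecli
# A sketch: alert when any backend host is reported unhealthy
az monitor metrics alert create \
  --name agw-unhealthy-hosts \
  --resource-group {resource-group} \
  --scopes {application-gateway-resource-id} \
  --condition "avg UnhealthyHostCount > 0" \
  --window-size 5m \
  --evaluation-frequency 1m \
  --description "Application Gateway reports unhealthy backend hosts"
```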
-**Application Gateway v2**
+**Application Gateway v1**
| Alert type | Condition | Description |
|:--|:--|:--|
-|Metric|Compute Unit utilization crosses 75% of average usage|Compute unit is the measure of compute utilization of your Application Gateway. Check your average compute unit usage in the last one month and set alert if it crosses 75% of it.|
-|Metric|Capacity Unit utilization crosses 75% of peak usage|Capacity units represent overall gateway utilization in terms of throughput, compute, and connection count. Check your maximum capacity unit usage in the last one month and set alert if it crosses 75% of it.|
-|Metric|Unhealthy host count crosses threshold|Indicates number of backend servers that application gateway is unable to probe successfully. This catches issues where Application gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.|
+|Metric|CPU utilization crosses 80%|Under normal conditions, CPU usage shouldn't regularly exceed 90%. This situation can cause latency in the websites hosted behind the Application Gateway and disrupt the client experience.|
+|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This alert catches issues where the Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.|
|Metric|Response status (4xx, 5xx) crosses threshold|When the Application Gateway response status is 4xx or 5xx. There could be occasional 4xx or 5xx responses due to transient issues. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
-|Metric|Failed requests crosses threshold|When Failed requests metric crosses threshold. You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.|
-|Metric|Backend last byte response time crosses threshold|Indicates the time interval between start of establishing a connection to backend server and receiving the last byte of the response body. Create an alert if the backend response latency is more that certain threshold from usual.|
-|Metric|Application Gateway total time crosses threshold|This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. Should create an alert if the backend response latency is more that certain threshold from usual.|
+|Metric|Failed requests crosses threshold|When the failed requests metric crosses a threshold. You should observe the gateway in production to determine a static threshold, or use a dynamic threshold for the alert.|
+
+Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
-## Next steps
+If you're creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) can offer other types of alerts.
-<!-- Add additional links. You can change the wording of these and add more if useful. -->
-- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway.
+## Related content
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Application Gateway monitoring data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created for Application Gateway.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
application-gateway Mutual Authentication Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md
Azure portal support is currently not available.
-To verify OCSP revocation status has been evaluated for the client request, [access logs](./application-gateway-diagnostics.md#access-log) will contain a property called "sslClientVerify", with the status of the OCSP response.
+To verify that OCSP revocation status was evaluated for the client request, [access logs](monitor-application-gateway-reference.md#access-log-category) contain a property called `sslClientVerify`, which shows the status of the OCSP response.
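For example, a query along the following lines can surface this property from the resource-specific access logs. This is only a sketch; `SslClientVerify` as a column name in `AGWAccessLogs` is an assumption to validate against your workspace schema:

```kusto
// Hypothetical check of OCSP evaluation results in recent access logs
AGWAccessLogs
| where TimeGenerated > ago(1h)
| project TimeGenerated, ClientIp, RequestUri, SslClientVerify
```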
It's critical that the OCSP responder is highly available and that network connectivity between Application Gateway and the responder is possible. If Application Gateway is unable to resolve the fully qualified domain name (FQDN) of the defined responder, or if network connectivity is blocked to or from the responder, the certificate revocation status check fails, and Application Gateway returns a 400 HTTP response to the requesting client.
application-gateway Rewrite Url Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-url-portal.md
Observe the below fields in access logs to verify if the URL rewrite happened as
* **originalRequestUriWithArgs**: This field contains the original request URL
* **requestUri**: This field contains the URL after the rewrite operation on Application Gateway
-For more information on all the fields in the access logs, see [here](application-gateway-diagnostics.md#for-application-gateway-and-waf-v2-sku).
+For more information on all the fields in the access logs, see [Access log](monitor-application-gateway-reference.md#access-log-category).
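As an illustration, a query like the following compares the two fields side by side. It's a sketch; the Pascal-cased column names assume the resource-specific `AGWAccessLogs` table and should be validated against your workspace:

```kusto
// Hypothetical spot-check that rewrites produced the expected URIs
AGWAccessLogs
| where TimeGenerated > ago(1h)
| project TimeGenerated, OriginalRequestUriWithArgs, RequestUri
| take 50
```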
## Next steps
automation Dsc Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md
description: This article helps you get started configuring an Azure VM with Des
keywords: dsc, configuration, automation Previously updated : 04/12/2023 Last updated : 08/08/2024
# Configure a VM with Desired State Configuration

> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
+> Azure Automation DSC for Linux has retired. For more information, see the [announcement](https://azure.microsoft.com/updates/migrate-from-linux-dsc-extension-to-the-guest-configuration-feature-of-azure-policy-by-may-1-2025/#:~:text=The%20DSC%20extension%20for%20Linux%20machines%20in%20Azure%2C,no%20longer%20be%20supported%20after%2030%20September%202023.).
> [!NOTE]
> Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md).
-By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows and Linux servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure Linux VM and deploying a LAMP stack using Azure Automation State Configuration.
+By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure VM and deploying a LAMP stack using Azure Automation State Configuration.
## Prerequisites

To complete this quickstart, you need:

* An Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/).
-* An Azure Resource Manager virtual machine running Red Hat Enterprise Linux, or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md)
+* An Azure Resource Manager virtual machine.
## Sign in to Azure

Sign in to the [Azure portal](https://portal.azure.com).
There are many different methods to enable a machine for Automation State Config
5. Select the DSC settings appropriate for the virtual machine. If you have already prepared a configuration, you can specify it as `Node Configuration Name`. You can set the [configuration mode](/powershell/dsc/managing-nodes/metaConfig) to control the configuration behavior for the machine.
6. Click **OK**. While the DSC extension is deployed to the virtual machine, the status reported is `Connecting`.
-![Enabling an Azure VM for DSC](./media/dsc-configuration/dsc-onboard-azure-vm.png)
## Import modules

Modules contain DSC resources and many can be found in the [PowerShell Gallery](https://www.powershellgallery.com). Any resources that are used in your configurations must be imported to the Automation account before compiling. For this quickstart, the module named **nx** is required.
Modules contain DSC resources and many can be found in the [PowerShell Gallery](
1. Click on the module to import.
1. Click **Import**.
-![Importing a DSC Module](./media/dsc-configuration/dsc-import-module-nx.png)
## Import the configuration
You can assign a compiled node configuration to a DSC node. Assignment applies t
1. In the left pane of the Automation account, select **State Configuration (DSC)** and then click the **Nodes** tab.
1. Select the node to which to assign a configuration.
1. Click **Assign Node Configuration**.
-1. Select the node configuration `LAMPServer.localhost` and click **OK**. State Configuration now assigns the compiled configuration to the node, and the node status changes to `Pending`. On the next periodic check, the node retrieves the configuration, applies it, and reports status. It can take up to 30 minutes for the node to retrieve the configuration, depending on the node settings.
-1. To force an immediate check, you can run the following command locally on the Linux virtual machine:
- `sudo /opt/microsoft/dsc/Scripts/PerformRequiredConfigurationChecks.py`
+1. Select the node configuration `LAMPServer.localhost` and click **OK**. State Configuration now assigns the compiled configuration to the node, and the node status changes to `Pending`. On the next periodic check, the node retrieves the configuration, applies it, and reports status.
+
+It can take up to 30 minutes for the node to retrieve the configuration, depending on the node settings.
-![Assigning a Node Configuration](./media/dsc-configuration/dsc-assign-node-configuration.png)
## View node status
You can view the status of all State Configuration-managed nodes in your Automat
## Next steps
-In this quickstart, you enabled an Azure Linux VM for State Configuration, created a configuration for a LAMP stack, and deployed the configuration to the VM. To learn how you can use Azure Automation State Configuration to enable continuous deployment, continue to the article:
+In this quickstart, you enabled an Azure VM for State Configuration, created a configuration for a LAMP stack, and deployed the configuration to the VM. To learn how you can use Azure Automation State Configuration to enable continuous deployment, continue to the article:
> [!div class="nextstepaction"]
> [Set up continuous deployment with Chocolatey](../automation-dsc-cd-chocolatey.md)
automation Modules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/modules.md
Title: Manage modules in Azure Automation
description: This article tells how to use PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Previously updated : 07/17/2024 Last updated : 08/09/2024
TestModule
2.0.0
```
-Within each of the version folders, copy your PowerShell .psm1, .psd1, or PowerShell module **.dll** files that make up a module into the respective version folder. Zip up the module folder so that Azure Automation can import it as a single .zip file. While Automation only shows the highest version of the module imported, if the module package contains side-by-side versions of the module, they are all available for use in your runbooks or DSC configurations.
+Within each of the version folders, copy the PowerShell .psm1, .psd1, or PowerShell module **.dll** files that make up the module into the respective version folder. Zip up the module folder so that Azure Automation can import it as a single .zip file. While Automation shows only one of the imported module's versions, if the module package contains side-by-side versions of the module, they're all available for use in your runbooks or DSC configurations.
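For example, packaging the folder structure shown earlier might look like this (a sketch; `TestModule` and the output path are illustrative):

```powershell
# A sketch: package a module folder that contains side-by-side version folders
# (for example, .\TestModule\1.0.0 and .\TestModule\2.0.0) as a single .zip file
Compress-Archive -Path .\TestModule -DestinationPath .\TestModule.zip
```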
While Automation supports modules containing side-by-side versions within the same package, it doesn't support using multiple versions of a module across module package imports. For example, you import **module A**, which contains versions 1 and 2, into your Automation account. Later, you update **module A** to include versions 3 and 4. When you import the updated package into your Automation account, only versions 3 and 4 are usable within any runbooks or DSC configurations. If you require all versions (1, 2, 3, and 4) to be available, the .zip file you import should contain versions 1, 2, 3, and 4.
azure-app-configuration Rest Api Key Value https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-key-value.md
Previously updated : 08/17/2020 Last updated : 08/02/2024
+zone_pivot_groups: appconfig-data-plane-api-version
+
# Key-values

A key-value is a resource identified by a unique combination of `key` + `label`. `label` is optional. To explicitly reference a key-value without a label, use "\0" (URL encoded as ``%00``). See details for each operation.
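For example, to address the key-value whose label is explicitly unset (a minimal sketch using the placeholder conventions of this article):

```http
GET /kv/{key}?label=%00&api-version={api-version} HTTP/1.1
```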
-This article applies to API version 1.0.
## Operations

- Get
HTTP/1.1 200 OK
## List key-values

Optional: ``key`` (If not specified, it implies any key.)

Optional: ``label`` (If not specified, it implies any label.)

```http
GET /kv?label=*&api-version={api-version} HTTP/1.1
```
HTTP/1.1 200 OK
Content-Type: application/vnd.microsoft.appconfig.kvset+json; charset=utf-8
```
-For additional options, see the "Filtering" section later in this article.
+
+Optional: ``tags`` (If not specified, it implies any tags.)
+
+```http
+GET /kv?key=Test*&label=*&tags=tag1=value1&tags=tag2=value2&api-version={api-version} HTTP/1.1
+```
+
+**Response:**
+
+```http
+HTTP/1.1 200 OK
+Content-Type: application/vnd.microsoft.appconfig.kvset+json; charset=utf-8
+```
+
+For more options, see the "Filtering" section later in this article.
++
+## List key-values (conditionally)
+
+To improve client caching, use `If-Match` or `If-None-Match` request headers. The `etag` argument is part of the list key-values response body and header.
+If `If-Match` or `If-None-Match` are omitted, the operation is unconditional.
+
+The following request gets the key-value only if the current representation matches the specified `etag`:
+
+```http
+GET /kv?key={key}&label={label}&api-version={api-version} HTTP/1.1
+If-Match: "4f6dd610dd5e4deebc7fbaef685fb903"
+```
+
+**Responses:**
+
+```http
+HTTP/1.1 412 PreconditionFailed
+```
+
+or
+
+```http
+HTTP/1.1 200 OK
+```
+
+The following request gets the key-values only if the current representation doesn't match the specified `etag`:
+
+```http
+GET /kv?key={key}&label={label}&api-version={api-version} HTTP/1.1
+If-None-Match: "4f6dd610dd5e4deebc7fbaef685fb903"
+```
+
+**Responses:**
+
+```http
+HTTP/1.1 304 NotModified
+```
+
+or
+
+```http
+HTTP/1.1 200 OK
+```
+ ## Pagination
Link: <{relative uri}>; rel="next"
## Filtering

A combination of `key` and `label` filtering is supported. Use the optional `key` and `label` query string parameters.
Use the optional `key` and `label` query string parameters.
GET /kv?key={key}&label={label}&api-version={api-version}
```
+A combination of `key`, `label`, and `tags` filtering is supported.
+Use the optional `key`, `label`, and `tags` query string parameters.
+Multiple tag filters can be provided as query string parameters in the `tagName=tagValue` format. Tag filters must be an exact match.
+
+```http
+GET /kv?key={key}&label={label}&tags={tagFilter1}&tags={tagFilter2}&api-version={api-version}
+```
### Supported filters

|Key filter|Effect|
GET /kv?key={key}&label={label}&api-version={api-version}
|Label filter|Effect|
|--|--|
|`label` is omitted or `label=*`|Matches **any** label|
-|`label=%00`|Matches KV without label|
+|`label=%00`|Matches key-values with no label|
|`label=prod`|Matches the label **prod**|
|`label=prod*`|Matches labels that start with **prod**|
|`label=prod,test`|Matches labels **prod** or **test** (limited to 5 CSV)|
+|Tags filter|Effect|
+|--|--|
+|`tags` is omitted or `tags=` |Matches **any** tag|
+|`tags=group=app1`|Matches key-values that have a tag named `group` with value `app1`|
+|`tags=group=app1&tags=env=prod`|Matches key-values that have a tag named `group` with value `app1` and a tag named `env` with value `prod` (limited to 5 tag filters)|
+|`tags=tag1=%00`|Matches key-values that have a tag named `tag1` with value `null`|
+|`tags=tag1=`|Matches key-values that have a tag named `tag1` with empty value|
***Reserved characters***

`*`, `\`, `,`
If a reserved character is part of the value, then it must be escaped by using `
***Filter validation***
-In the case of a filter validation error, the response is HTTP `400` with error details:
+If filter validation fails, the response is HTTP `400` with error details:
```http
HTTP/1.1 400 Bad Request
ETag: "4f6dd610dd5e4deebc7fbaef685fb903"
} ```
-If the item is locked, you'll receive the following response:
+If the item is locked, the following response is returned:
```http
HTTP/1.1 409 Conflict
HTTP/1.1 204 No Content
## Delete key (conditionally)

This is similar to the "Set key (conditionally)" section earlier in this article.
azure-app-configuration Rest Api Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-keys.md
Previously updated : 08/17/2020 Last updated : 08/02/2024
+zone_pivot_groups: appconfig-data-plane-api-version
+ # Keys
-api-version: 1.0
The following syntax represents a key resource:

```http
Link: <relative uri>; rel="original"
] } ```+
azure-app-configuration Rest Api Labels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-labels.md
Previously updated : 08/17/2020 Last updated : 08/02/2024
+zone_pivot_groups: appconfig-data-plane-api-version
+ # Labels
-api-version: 1.0
The **Label** resource is defined as follows:

```json
GET /labels?name={label-name}&api-version={api-version}
### Supported filters
-|Key Filter|Effect|
+|Label Filter|Effect|
|--|--| |`name` is omitted or `name=*`|Matches **any** label| |`name=abc`|Matches a label named **abc**|
Link: <{relative uri}>; rel="original"
] } ```+
azure-app-configuration Rest Api Locks https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-locks.md
Previously updated : 08/17/2020 Last updated : 08/02/2024
+zone_pivot_groups: appconfig-data-plane-api-version
+ # Locks
-This API (version 1.0) provides lock and unlock semantics for the key-value resource. It supports the following operations:
+This API provides lock and unlock semantics for the key-value resource. It supports the following operations:
- Place lock
- Remove lock
The following request applies the operation only if the current key-value repres
PUT|DELETE /kv/{key}?label={label}&api-version={api-version} HTTP/1.1
If-None-Match: "4f6dd610dd5e4deebc7fbaef685fb903"
```
azure-app-configuration Rest Api Revisions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-revisions.md
Previously updated : 08/17/2020 Last updated : 08/02/2024
+zone_pivot_groups: appconfig-data-plane-api-version
+ # Key-value revisions
For all operations, ``key`` is an optional parameter. If omitted, it implies any
For all operations, ``label`` is an optional parameter. If omitted, it implies any label.
-This article applies to API version 1.0.
## Prerequisites

[!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-rest-api-prereqs.md)]
Content-Range: items 0-2/80
## Filtering

A combination of `key` and `label` filtering is supported. Use the optional `key` and `label` query string parameters.
Use the optional `key` and `label` query string parameters.
GET /revisions?key={key}&label={label}&api-version={api-version}
```
+A combination of `key`, `label`, and `tags` filtering is supported.
+Use the optional `key`, `label`, and `tags` query string parameters.
+Multiple tag filters can be provided as query string parameters in the `tagName=tagValue` format. Tag filters must be an exact match.
+
+```http
+GET /revisions?key={key}&label={label}&tags={tagFilter1}&tags={tagFilter2}&api-version={api-version}
+```
### Supported filters

|Key filter|Effect|
GET /revisions?key={key}&label={label}&api-version={api-version}
|Label filter|Effect|
|--|--|
-|`label` is omitted or `label=`|Matches entry without label|
+|`label` is omitted or `label=`|Matches key-values with no label|
|`label=*`|Matches **any** label| |`label=prod`|Matches the label **prod**| |`label=prod*`|Matches labels that start with **prod**|
GET /revisions?key={key}&label={label}&api-version={api-version}
|`label=*prod*`|Matches labels that contain **prod**|
|`label=prod,test`|Matches labels **prod** or **test** (limited to 5 CSV)|
+|Tags filter|Effect|
+|--|--|
+|`tags` is omitted or `tags=` |Matches **any** tag|
+|`tags=group=app1`|Matches key-values that have a tag named `group` with value `app1`|
+|`tags=group=app1&tags=env=prod`|Matches key-values that have a tag named `group` with value `app1` and a tag named `env` with value `prod` (limited to 5 tag filters)|
+|`tags=tag1=%00`|Matches key-values that have a tag named `tag1` with value `null`|
+|`tags=tag1=`|Matches key-values that have a tag named `tag1` with empty value|
### Reserved characters

The reserved characters are:
Link: <{relative uri}>; rel="original"
] } ```+
azure-app-configuration Rest Api Snapshot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-snapshot.md
Previously updated : 03/21/2023- Last updated : 08/02/2024
+zone_pivot_groups: appconfig-data-plane-api-version
+ # Snapshot
-A snapshot is a resource identified uniquely by its name. See details for each operation.
+
+The snapshot resource isn't available in API version 1.0.
+
-This article applies to API version 2022-11-01-preview.
+A snapshot is a resource identified uniquely by its name. See details for each operation.
## Operations
This article applies to API version 2022-11-01-preview.
`SnapshotFilter`

```json
{
  "key": [string],
This article applies to API version 2022-11-01-preview.
}
```
+```json
+{
+ "key": [string],
+ "label": [string],
+ "tags": [array<string>]
+}
+```
## Get snapshot

Required: ``{name}``, ``{api-version}``
If-None-Match: "{etag}"
HTTP/1.1 304 NotModified
```
-or
+Or
```http
HTTP/1.1 200 OK
HTTP/1.1 200 OK
Content-Type: application/vnd.microsoft.appconfig.snapshotset+json; charset=utf-8
```
-For additional options, see the "Filtering" section later in this article.
+For more options, see the "Filtering" section later in this article.
## Pagination
GET /snapshots?name={name}&status={status}&api-version={api-version}
`*`, `\`, `,`
-If a reserved character is part of the value, then it must be escaped by using `\{Reserved Character}`. Non-reserved characters can also be escaped.
+If a reserved character is part of the value, then it must be escaped by using `\{Reserved Character}`. Nonreserved characters can also be escaped.
***Filter validation***
-In the case of a filter validation error, the response is HTTP `400` with error details:
+If filter validation fails, the response is HTTP `400` with error details:
```http
HTTP/1.1 400 Bad Request
GET /snapshot?$select=name,status&api-version={api-version} HTTP/1.1
**parameters**

| Property Name | Required | Default value | Validation |
|-|-|-|-|
-| name | yes | n/a | Length <br/> &nbsp;&nbsp;&nbsp;&nbsp; maximum: 256 |
-| filters | yes | n/a | Count <br/> &nbsp;&nbsp;&nbsp;&nbsp; minimum: 1<br/> &nbsp;&nbsp;&nbsp;&nbsp; maximum: 3 |
+| name | yes | n/a | Length <br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 256 |
+| filters | yes | n/a | Count <br/> &nbsp;&nbsp;&nbsp;&nbsp; Minimum: 1<br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 3 |
| filters[\<index\>].key | yes | n/a | |
+| filters[\<index\>].label | no | null | Multi-match label filters (for example: "*", "comma,separated") aren't supported with 'key' composition type. |
| tags | no | {} | |
-| filters[\<index\>].label | no | null | Multi-match label filters (E.g.: "*", "comma,separated") aren't supported with 'key' composition type. |
| composition_type | no | key | |
-| retention_period | no | Standard tier <br/>&nbsp;&nbsp;&nbsp;&nbsp; 2592000 (30 days) <br/> Free tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; 604800 (7 days) | Standard tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; minimum: 3600 (1 hour) <br/> &nbsp;&nbsp;&nbsp;&nbsp; maximum: 7776000 (90 days) <br/> Free tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; minimum: 3600 (1 hour) <br/> &nbsp;&nbsp;&nbsp;&nbsp; maximum: 604800 (7 days) |
+| retention_period | no | Standard tier <br/>&nbsp;&nbsp;&nbsp;&nbsp; 2592000 (30 days) <br/> Free tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; 604800 (seven days) | Standard tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; Minimum: 3600 (one hour) <br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 7776000 (90 days) <br/> Free tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; Minimum: 3600 (one hour) <br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 604800 (seven days) |
```http
PUT /snapshot/{name}?api-version={api-version} HTTP/1.1
Operation-Location: {appConfigurationEndpoint}/operations?snapshot={name}&api-ve
}
```
-The status of the newly created snapshot will be "provisioning".
-Once the snapshot is fully provisioned, the status will update to "ready".
+
+| Property Name | Required | Default value | Validation |
+|-|-|-|-|
+| name | yes | n/a | Length <br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 256 |
+| filters | yes | n/a | Count <br/> &nbsp;&nbsp;&nbsp;&nbsp; Minimum: 1<br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 3 |
+| filters[\<index\>].key | yes | n/a | |
+| filters[\<index\>].label | no | null | Multi-match label filters (for example: "*", "comma,separated") aren't supported with 'key' composition type. |
+| filters[\<index\>].tags | no | null | Count <br/> &nbsp;&nbsp;&nbsp;&nbsp; Minimum: 0<br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 5 |
+| tags | no | {} | |
+| composition_type | no | key | |
+| retention_period | no | Standard tier <br/>&nbsp;&nbsp;&nbsp;&nbsp; 2592000 (30 days) <br/> Free tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; 604800 (7 days) | Standard tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; Minimum: 3600 (1 hour) <br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 7776000 (90 days) <br/> Free tier <br/> &nbsp;&nbsp;&nbsp;&nbsp; Minimum: 3600 (1 hour) <br/> &nbsp;&nbsp;&nbsp;&nbsp; Maximum: 604800 (7 days) |
+
+```http
+PUT /snapshot/{name}?api-version={api-version} HTTP/1.1
+Content-Type: application/vnd.microsoft.appconfig.snapshot+json
+```
+
+```json
+{
+ "filters": [ // required
+ {
+ "key": "app1/*", // required
+ "label": "prod", // optional
+ "tags": ["group=g1", "default=true"] // optional
+ }
+ ],
+ "tags": { // optional
+ "tag1": "value1",
+ "tag2": "value2",
+ },
+ "composition_type": "key", // optional
+ "retention_period": 2592000 // optional
+}
+```
+
+**Responses:**
+
+```http
+HTTP/1.1 201 Created
+Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8
+Last-Modified: Tue, 05 Dec 2017 02:41:26 GMT
+ETag: "4f6dd610dd5e4deebc7fbaef685fb903"
+Operation-Location: {appConfigurationEndpoint}/operations?snapshot={name}&api-version={api-version}
+```
+
+```json
+{
+ "etag": "4f6dd610dd5e4deebc7fbaef685fb903",
+ "name": "{name}",
+ "status": "provisioning",
+ "filters": [
+ {
+ "key": "app1/*",
+ "label": "prod",
+ "tags": ["group=g1", "default=true"]
+ }
+ ],
+ "composition_type": "key",
+ "created": "2023-03-20T21:00:03+00:00",
+ "size": 2000,
+ "items_count": 4,
+ "tags": {
+ "t1": "value1",
+ "t2": "value2"
+ },
+ "retention_period": 2592000
+}
+```
++
+The status of the newly created snapshot is `provisioning`.
+Once the snapshot is fully provisioned, the status updates to `ready`.
Clients can poll the snapshot to wait for the snapshot to be ready before listing its associated key-values. To query additional information about the operation, reference the [polling snapshot creation](#polling-snapshot-creation) section.
-If the snapshot already exists, you'll receive the following response:
+If the snapshot already exists, the following response is returned:
```http
HTTP/1.1 409 Conflict
Content-Type: application/json; charset=utf-8
}
```
-If any error occurs during the provisioning of the snapshot, the `error` property will contain details describing the error.
+If any error occurs during the provisioning of the snapshot, the `error` property contains details describing the error.
```json
{
If any error occurs during the provisioning of the snapshot, the `error` propert
## Archive (Patch)

A snapshot in the `ready` state can be archived.
-An archived snapshot will be assigned an expiration date, based off the retention period established at the time of its creation.
+An archived snapshot is assigned an expiration date, based on the retention period established at the time of its creation.
After the expiration date passes, the snapshot will be permanently deleted. At any time before the expiration date, the snapshot's items can still be listed.
Content-Type: application/problem+json; charset="utf-8"
## Recover (Patch)

A snapshot in the `archived` state can be recovered.
-Once the snapshot is recovered the snapshot's expiration date is removed.
+After the snapshot is recovered, the snapshot's expiration date is removed.
Recovering a snapshot that is already `ready` doesn't affect the snapshot.
Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8
...
```
-or
+Or
```http
HTTP/1.1 412 PreconditionFailed
Use the optional `$select` query string parameter and provide a comma-separated
```http
GET /kv?snapshot={name}&$select=key,value&api-version={api-version} HTTP/1.1
```
azure-arc Extensions Release https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md
Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 08/01/2024 Last updated : 08/08/2024 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes."
For more information, see [Tutorial: Deploy applications using GitOps with Flux
The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension.

> [!IMPORTANT]
-> The [Flux v2.3.0 release](https://fluxcd.io/blog/2024/05/flux-v2.3.0/) includes API changes to the HelmRelease and HelmChart APIs, with deprecated fields removed. An upcoming minor version update of Microsoft's Flux extension will include these changes, consistent with the upstream OSS Flux project.
+> The [Flux v2.3.0 release](https://fluxcd.io/blog/2024/05/flux-v2.3.0/) includes API changes to the HelmRelease and HelmChart APIs, with deprecated fields removed, and an updated version of the kustomize package. An upcoming minor version update of Microsoft's Flux extension will include these changes, consistent with the upstream OSS Flux project.
>
> The [HelmRelease](https://fluxcd.io/flux/components/helm/helmreleases/) kind will be promoted from `v2beta1` to `v2` (GA). The `v2` API is backwards compatible with `v2beta1`, with the exception of these deprecated fields, which will be removed:
>
The most recent version of the Flux v2 extension and the two previous versions (
>
> The [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/) kind will be promoted from `v1beta2` to `v1` (GA). The `v1` API is backwards compatible with `v1beta2`, with the exception of the `.spec.valuesFile` field, which will be replaced by `.spec.valuesFiles`.
>
-> To avoid issues due to breaking changes, we recommend updating your deployments by July 29, 2024, so that they stop using the fields that will be removed and use the replacement fields instead. These new fields are already available in the current version of the APIs.
+> Use the new fields, which are already available in the current version of the APIs, instead of the fields that will be removed.
+>
+> The kustomize package will be updated to v5.4.0, which contains the following breaking changes:
+>
+> - [Kustomization build fails when resources key is missing](https://github.com/kubernetes-sigs/kustomize/issues/5337)
+> - [Components are now applied after generators and before transformers](https://github.com/kubernetes-sigs/kustomize/pull/5170) in [v5.1.0](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.1.0)
+> - [Null yaml values are replaced by "null"](https://github.com/kubernetes-sigs/kustomize/pull/5519) in [v5.4.0](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.4.0)
+>
+> To avoid issues due to breaking changes, we recommend updating your manifests as soon as possible to ensure that your Flux configurations remain compliant with this release.
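> For example, after the kustomize update, a `kustomization.yaml` that relied on defaulted resource discovery must list its resources explicitly. The following is a minimal sketch (the file names are illustrative):
>
> ```yaml
> # A minimal Kustomization for kustomize v5.x, which requires an explicit resources list
> apiVersion: kustomize.config.k8s.io/v1beta1
> kind: Kustomization
> resources:
>   - deployment.yaml
>   - service.yaml
> ```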
+
> [!NOTE]
> When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions.
Flux version: [Release v2.3.0](https://github.com/fluxcd/flux2/releases/tag/v2.3
- kustomize-controller: v1.3.0
- helm-controller: v1.0.1
- notification-controller: v1.3.0
-- image-automation-controller: v0.32.1
-- image-reflector-controller: v0.38.0
+- image-automation-controller: v0.38.0
+- image-reflector-controller: v0.32.0
Changes made for this version:
azure-arc System Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/system-requirements.md
You must also have a [kubeconfig file](https://kubernetes.io/docs/concepts/confi
The cluster must have at least one node with operating system and architecture type `linux/amd64` and/or `linux/arm64`.

> [!IMPORTANT]
-> Many Arc-enabled Kubernetes features and scenarios are supported on ARM64 nodes, such as [cluster connect](cluster-connect.md) and [viewing Kubernetes resources in the Azure portal](kubernetes-resource-view.md). However, if using Azure CLI to enable these scenarios, [Azure CLI must be installed](/cli/azure/install-azure-cli) and run from an AMD64 machine.
->
+> Many Arc-enabled Kubernetes features and scenarios are supported on ARM64 nodes, such as [cluster connect](cluster-connect.md) and [viewing Kubernetes resources in the Azure portal](kubernetes-resource-view.md). However, if using Azure CLI to enable these scenarios, [Azure CLI must be installed](/cli/azure/install-azure-cli) and run from an AMD64 machine. Azure RBAC on Arc-enabled Kubernetes is currently not supported on ARM64 nodes. Use [Kubernetes RBAC](identity-access-overview.md#kubernetes-rbac-authorization) for ARM64 nodes.
+>
> Currently, Azure Arc-enabled Kubernetes [cluster extensions](conceptual-extensions.md) aren't supported on ARM64-based clusters, except for [Flux (GitOps)](conceptual-gitops-flux2.md). To [install and use other cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`.

## Compute and memory requirements

The Arc agents deployed on the cluster require:
azure-functions Functions App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md
When using app settings, you should be aware of the following considerations:
+ Changes to function app settings require your function app to be restarted.
-+ In setting names, double-underscore (`__`) and semicolon (`:`) are considered reserved values. Double-underscores are interpreted as hierarchical delimiters on both Windows and Linux, and colons are interpreted in the same way only on Linux. For example, the setting `AzureFunctionsWebHost__hostid=somehost_123456` would be interpreted as the following JSON object:
++ In setting names, double-underscore (`__`) and colon (`:`) are considered reserved values. Double-underscores are interpreted as hierarchical delimiters on both Windows and Linux, and colons are interpreted in the same way only on Linux. For example, the setting `AzureFunctionsWebHost__hostid=somehost_123456` would be interpreted as the following JSON object:

```json
"AzureFunctionsWebHost": {
    "hostid": "somehost_123456"
}
```
azure-functions Functions Twitter Email https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-twitter-email.md
Azure Functions integrates with Azure Logic Apps in the Logic Apps Designer. This integration allows you to use the computing power of Functions in orchestrations with other Azure and third-party services.
-This tutorial shows you how to create a workflow to analyze Twitter activity. As tweets are evaluated, the workflow sends notifications when positive sentiments are detected.
+This tutorial shows you how to create a workflow to analyze X activity. As tweets are evaluated, the workflow sends notifications when positive sentiments are detected.
In this tutorial, you learn to:

> [!div class="checklist"]
> * Create an Azure AI services API Resource.
> * Create a function that categorizes tweet sentiment.
-> * Create a logic app that connects to Twitter.
+> * Create a logic app that connects to X.
> * Add sentiment detection to the logic app.
> * Connect the logic app to the function.
> * Send an email based on the response from the function.

## Prerequisites
-* An active [Twitter](https://twitter.com/) account.
+* An active [X](https://x.com/) account.
* An [Outlook.com](https://outlook.com/) account (for sending notifications).

> [!NOTE]
With the Text Analytics resource created, you'll copy a few settings and set the
> [!NOTE]
> To test the function, select **Test/Run** from the top menu. On the _Input_ tab, enter a value of `0.9` in the _Body_ input box, and then select **Run**. Verify that a value of _Positive_ is returned in the _HTTP response content_ box in the _Output_ section.
-Next, create a logic app that integrates with Azure Functions, Twitter, and the Azure AI services API.
+Next, create a logic app that integrates with Azure Functions, X, and the Azure AI services API.
## Create a logic app
Next, create a logic app that integrates with Azure Functions, Twitter, and the
You can now use the Logic Apps Designer to add services and triggers to your application.
-## Connect to Twitter
+## Connect to X
-Create a connection to Twitter so your app can poll for new tweets.
+Create a connection to X so your app can poll for new tweets.
-1. Search for **Twitter** in the top search box.
+1. Search for **X** in the top search box.
-1. Select the **Twitter** icon.
+1. Select the **X** icon.
1. Select the **When a new tweet is posted** trigger.
Create a connection to Twitter so your app can poll for new tweets.
| Setting | Value |
| - | - |
- | Connection name | **MyTwitterConnection** |
+ | Connection name | **MyXConnection** |
| Authentication Type | **Use default shared application** |

1. Select **Sign in**.
-1. Follow the prompts in the pop-up window to complete signing in to Twitter.
+1. Follow the prompts in the pop-up window to complete signing in to X.
1. Next, enter the following values in the _When a new tweet is posted_ box.

| Setting | Value |
| - | -- |
- | Search text | **#my-twitter-tutorial** |
- | How often do you want to check for items? | **1** in the textbox, and <br> **Hour** in the dropdown. You may enter different values but be sure to review the current [limitations](/connectors/twitterconnector/#limits) of the Twitter connector. |
+ | Search text | **#my-x-tutorial** |
+ | How often do you want to check for items? | **1** in the textbox, and <br> **Hour** in the dropdown. You may enter different values but be sure to review the current [limitations](/connectors/twitterconnector/#limits) of the X connector. |
1. Select the **Save** button on the toolbar to save your progress.
The email box should now look like this screenshot.
## Run the workflow
-1. From your Twitter account, tweet the following text: **I'm enjoying #my-twitter-tutorial**.
+1. From your X account, tweet the following text: **I'm enjoying #my-x-tutorial**.
1. Return to the Logic Apps Designer and select the **Run** button.
To clean up all the Azure services and accounts created during this tutorial, de
1. Select the **Delete** button.
-Optionally, you may want to return to your Twitter account and delete any test tweets from your feed.
+Optionally, you may want to return to your X account and delete any test tweets from your feed.
## Next steps
azure-maps About Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md
Last updated 10/21/2022 -+
azure-maps Add Bubble Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-bubble-layer-map-ios.md
Last updated 11/23/2021 -+ # Add a bubble layer to a map in the iOS SDK (Preview)
azure-maps Add Controls Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-controls-map-ios.md
Last updated 11/19/2021 -+ # Add controls to a map in the iOS SDK (Preview)
azure-maps Add Heat Map Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-heat-map-layer-ios.md
Last updated 11/23/2021 -+ # Add a heat map layer in the iOS SDK (Preview)
azure-maps Add Image Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-image-layer-map-ios.md
Last updated 11/23/2021 -+ # Add an image layer to a map in the iOS SDK (Preview)
azure-maps Add Line Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-line-layer-map-ios.md
Last updated 11/23/2021 -+ # Add a line layer to the map in the iOS SDK (Preview)
azure-maps Add Polygon Extrusion Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-extrusion-layer-map-ios.md
Last updated 11/23/2021 -+ # Add a polygon extrusion layer to the map in the iOS SDK (Preview)
azure-maps Add Polygon Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-layer-map-ios.md
Last updated 11/23/2021 -+ # Add a polygon layer to the map in the iOS SDK (Preview)
azure-maps Add Symbol Layer Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-symbol-layer-ios.md
Last updated 11/19/2021 -+ # Add a symbol layer in the iOS SDK (Preview)
azure-maps Add Tile Layer Map Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-tile-layer-map-ios.md
Last updated 11/23/2021 -+ # Add a tile layer to a map in the iOS SDK (Preview)
azure-maps Android Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-add-line-layer.md
Last updated 2/26/2021 + - zone_pivot_groups: azure-maps-android
azure-maps Android Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-events.md
Last updated 2/26/2021 + - zone_pivot_groups: azure-maps-android
azure-maps Android Sdk Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-sdk-migration-guide.md
Last updated 02/20/2024 + # The Azure Maps Android SDK migration guide
azure-maps Authentication Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md
Last updated 05/11/2022 -+ # Authentication best practices
azure-maps Azure Maps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md
Last updated 07/05/2023 -+
azure-maps Azure Maps Event Grid Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-event-grid-integration.md
Last updated 01/08/2024 --+
azure-maps Azure Maps Qps Rate Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md
Title: Azure Maps QPS rate limits
description: Azure Maps limitation on the number of Queries Per Second. Previously updated : 10/15/2021 Last updated : 8/8/2024 -+ # Azure Maps QPS rate limits
azure-maps Choose Map Style https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-map-style.md
Last updated 04/26/2020 +
azure-maps Clustering Point Data Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md
Last updated 03/23/2021 + - zone_pivot_groups: azure-maps-android
azure-maps Clustering Point Data Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-ios-sdk.md
Last updated 11/18/2021 -+ # Clustering point data in the iOS SDK (Preview)
azure-maps Clustering Point Data Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md
Last updated 07/29/2019 --+ # Clustering point data in the Web SDK
azure-maps Consumption Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/consumption-model.md
Last updated 05/08/2018 -+
azure-maps Create Data Source Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md
Last updated 2/26/2021 + - zone_pivot_groups: azure-maps-android
azure-maps Create Data Source Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md
Last updated 10/22/2021 -+ # Create a data source in the iOS SDK (Preview)
azure-maps Create Data Source Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md
Last updated 12/07/2020 --+ # Create a data source
azure-maps Creator Facility Ontology https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md
Last updated 02/17/2023 --+ zone_pivot_groups: facility-ontology-schema
azure-maps Creator Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md
Last updated 05/18/2021 -+
azure-maps Creator Indoor Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md
Last updated 04/01/2022 -+
azure-maps Creator Long Running Operation V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation-v2.md
Last updated 05/18/2021 -+
azure-maps Creator Long Running Operation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation.md
Last updated 12/07/2020 -+
azure-maps Creator Onboarding Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md
Last updated 08/15/2023 --+ # Create indoor map with the onboarding tool
azure-maps Creator Qgis Plugin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md
Last updated 06/14/2023 -+ # Work with datasets using the QGIS plugin
azure-maps Data Driven Style Expressions Android Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md
Last updated 2/26/2021 + - zone_pivot_groups: azure-maps-android
azure-maps Data Driven Style Expressions Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-ios-sdk.md
Last updated 11/18/2021 -+ # Data-driven style expressions in the iOS SDK (Preview)
azure-maps Data Driven Style Expressions Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md
Last updated 4/4/2019 --+ # Data-driven style expressions (Web SDK)
azure-maps Display Feature Information Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-android.md
Last updated 2/26/2021 + - zone_pivot_groups: azure-maps-android
azure-maps Display Feature Information Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-ios-sdk.md
Last updated 11/23/2021 -+ # Display feature information in the iOS SDK (Preview)
azure-maps Drawing Conversion Error Codes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md
Last updated 05/21/2021 -+
azure-maps Drawing Error Visualizer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md
Last updated 02/17/2023 --+ # Using the Azure Maps Drawing Error Visualizer with Creator
azure-maps Drawing Package Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md
Last updated 03/21/2023 -+ zone_pivot_groups: drawing-package-version
azure-maps Drawing Requirements https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md
Last updated 03/21/2023 -+ zone_pivot_groups: drawing-package-version
azure-maps Drawing Tools Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md
Last updated 05/23/2023 -+ # Drawing tools events
The following image shows a screenshot of the complete working sample that demon
:::image type="content" source="./media/drawing-tools-events/drawing-tools-events.png" alt-text="Screenshot showing a map displaying data from a vector tile source.":::
-<!
-<br/>
-
-> [!VIDEO https://codepen.io/azuremaps/embed/dyPMRWo?height=500&theme-id=default&default-tab=js,result&editable=true]
->-

## Examples

Let's see some common scenarios that use the drawing tools events.
For a complete working sample of how to use the drawing tools to draw polygon ar
:::image type="content" source="./media/drawing-tools-events/select-data-in-drawn-polygon-area.png" alt-text="Screenshot showing a map displaying points within polygon areas.":::
-<!-
-<br/>
-
-> [!VIDEO https://codepen.io/azuremaps/embed/XWJdeja?height=500&theme-id=default&default-tab=result]
-->-

### Draw and search in polygon area

This code searches for points of interest inside the area of a shape after the user finishes drawing the shape. The `drawingcomplete` event is used to trigger the search logic. If the user draws a rectangle or polygon, a search inside the geometry is performed. If a circle is drawn, the radius and center position are used to perform a point of interest search. The `drawingmodechanged` event is used to determine when the user switches to the drawing mode, and this event clears the drawing canvas.
For a complete working sample of how to use the drawing tools to search for poin
:::image type="content" source="./media/drawing-tools-events/draw-and-search-polygon-area.png" alt-text="Screenshot showing a map displaying the Draw and search in polygon area sample.":::
-<!-
-<br/>
-
-> [!VIDEO https://codepen.io/azuremaps/embed/eYmZGNv?height=500&theme-id=default&default-tab=js,result&editable=true]
-->-

### Create a measuring tool

The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` event is used to monitor the shape as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it's drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information.
For a complete working sample of how to use the drawing tools to measure distanc
:::image type="content" source="./media/drawing-tools-events/create-a-measuring-tool.png" alt-text="Screenshot showing a map displaying the measuring tool sample.":::
-<!-
-> [!VIDEO https://codepen.io/azuremaps/embed/RwNaZXe?height=500&theme-id=default&default-tab=js,result&editable=true]
-->-

## Next steps

Learn how to use other features of the drawing tools module:
azure-maps Drawing Tools Interactions Keyboard Shortcuts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-interactions-keyboard-shortcuts.md
Last updated 12/05/2019 -+ # Interaction types and keyboard shortcuts in the drawing tools module
azure-maps Elevation Data Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/elevation-data-services.md
Title: Create elevation data & services using open data titleSuffix: Microsoft Azure Maps description: A guide to help developers build elevation services and tiles using open data on the Microsoft Azure Cloud.-+ Last updated 3/17/2023 -+ # Create elevation data & services
azure-maps Extend Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/extend-geojson.md
Last updated 05/17/2018 -+
azure-maps Geocoding Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md
Title: Geocoding coverage in Microsoft Azure Maps Search service description: See which regions Azure Maps Search covers. Geocoding categories include address points, house numbers, street level, city level, and points of interest.--++ Last updated 11/30/2021 -+ # Azure Maps geocoding coverage
azure-maps Geofence Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geofence-geojson.md
Last updated 02/14/2019 -+ # Geofencing GeoJSON data
azure-maps Geographic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-coverage.md
Last updated 6/23/2021 -+ # Geographic coverage information
azure-maps Geographic Scope https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md
Last updated 04/18/2022 -+
azure-maps Glossary https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md
Last updated 09/18/2018 -+ # Glossary
The following list describes common words used with the Azure Maps services.
## G
+<a name="geobias"></a> **Geobias**: A geospatial bias to improve the ranking of results. In some methods, this can be affected by setting the longitude and latitude parameters where available. In other cases it is purely internal.
+ <a name="geocode"></a> **Geocode**: An address or location that has been converted into a coordinate that can be used to display that location on a map. <a name="geocoding"></a> **Geocoding**: Or _forward geocoding_, is the process of converting address or location data into coordinates.
azure-maps How To Add Shapes To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-shapes-to-android-map.md
Last updated 2/26/2021 + - zone_pivot_groups: azure-maps-android
azure-maps How To Add Symbol To Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-symbol-to-android-map.md
Last updated 2/26/2021 + - zone_pivot_groups: azure-maps-android
azure-maps How To Add Tile Layer Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-tile-layer-android-map.md
Last updated 3/25/2021 + - zone_pivot_groups: azure-maps-android
azure-maps How To Create Custom Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md
Last updated 9/23/2022 -+ # Create custom styles for indoor maps (preview)
azure-maps How To Create Data Registries https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md
Last updated 6/14/2023 -+ # How to create data registry
azure-maps How To Create Template https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md
Last updated 04/27/2021 + # Create your Azure Maps account using an ARM template
azure-maps How To Creator Wayfinding https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md
Last updated 10/25/2022 -+ # Indoor maps wayfinding service (preview)
azure-maps How To Creator Wfs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md
Last updated 03/03/2023 -+ # Query datasets using the Web Feature Service
azure-maps How To Dataset Geojson https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md
Last updated 11/01/2021 -+ # Create a dataset using a GeoJson package (Preview)
azure-maps How To Dev Guide Csharp Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md
Last updated 11/11/2021 + - # C# REST SDK Developers Guide
azure-maps How To Dev Guide Java Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md
Last updated 01/25/2023 + - # Java REST SDK Developers Guide (preview)
azure-maps How To Dev Guide Js Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md
Last updated 11/15/2021 + - # JavaScript/TypeScript REST SDK Developers Guide (preview)
azure-maps How To Dev Guide Py Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md
Last updated 01/15/2021 + - # Python REST SDK Developers Guide (preview)
azure-maps How To Manage Account Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-account-keys.md
Last updated 04/26/2021 -+ # Manage your Azure Maps account
azure-maps How To Manage Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md
Last updated 12/3/2021 -
-custom.ms: subject-rbac-steps
+ # Manage authentication in Azure Maps
azure-maps How To Manage Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md
Last updated 01/20/2022 -+ # Manage Azure Maps Creator
azure-maps How To Manage Pricing Tier https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md
Last updated 09/14/2023 -+ # Manage the pricing tier of your Azure Maps account
azure-maps How To Render Custom Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md
Last updated 06/20/2024 --+ # Render custom data on a raster map
azure-maps How To Request Weather Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md
Title: Request real-time and forecasted weather data using Azure Maps Weather services description: Learn how to request real-time (current) and forecasted (minute, hourly, daily) weather data using Microsoft Azure Maps Weather services -+ Previously updated : 10/28/2021 Last updated : 08/08/2024 -+
This video provides examples for making REST calls to Azure Maps Weather service
* An [Azure Maps account] * A [subscription key]
- >[!IMPORTANT]
- >The [Get Minute Forecast API] requires a Gen1 (S1) or Gen2 pricing tier.
+>[!IMPORTANT]
+>
+> In the URL examples in this article, you need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
-This tutorial uses the [Postman] application, but you may choose a different API development environment.
+This tutorial uses the [bruno] application, but you can choose a different API development environment.
## Request real-time weather data
The [Get Current Conditions API] returns detailed weather conditions such as pre
In this example, you use the [Get Current Conditions API] to retrieve current weather conditions at coordinates located in Seattle, WA.
-1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. Open the bruno app, then select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
https://atlas.microsoft.com/weather/currentConditions/json?api-version=1.0&query=47.60357,-122.32945&subscription-key={Your-Azure-Maps-Subscription-key}
```
-3. Select the blue **Send** button. The response body contains current weather information.
+1. Select the blue **Create** button.
+
+1. Select the run button.
+
+ :::image type="content" source="./media/weather-service/bruno-run.png" alt-text="A screenshot showing the Request real-time weather data URL with the run button highlighted in the bruno app.":::
+
+ The response body contains current weather information.
```json
{
- "results": [
+ "results": [
{
- "dateTime": "2020-10-19T20:39:00+00:00",
- "phrase": "Cloudy",
- "iconCode": 7,
- "hasPrecipitation": false,
- "isDayTime": true,
- "temperature": {
- "value": 12.4,
+ "dateTime": "2024-08-08T09:22:00-07:00",
+ "phrase": "Sunny",
+ "iconCode": 1,
+ "hasPrecipitation": false,
+ "isDayTime": true,
+ "temperature": {
+ "value": 19.5,
+ "unit": "C",
+ "unitType": 17
+ },
+ "realFeelTemperature": {
+ "value": 23.7,
+ "unit": "C",
+ "unitType": 17
+ },
+ "realFeelTemperatureShade": {
+ "value": 19.4,
+ "unit": "C",
+ "unitType": 17
+ },
+ "relativeHumidity": 81,
+ "dewPoint": {
+ "value": 16.2,
+ "unit": "C",
+ "unitType": 17
+ },
+ "wind": {
+ "direction": {
+ "degrees": 0,
+ "localizedDescription": "N"
+ },
+ "speed": {
+ "value": 2,
+ "unit": "km/h",
+ "unitType": 7
+ }
+ },
+ "windGust": {
+ "speed": {
+ "value": 3.8,
+ "unit": "km/h",
+ "unitType": 7
+ }
+ },
+ "uvIndex": 4,
+ "uvIndexPhrase": "Moderate",
+ "visibility": {
+ "value": 16.1,
+ "unit": "km",
+ "unitType": 6
+ },
+ "obstructionsToVisibility": "",
+ "cloudCover": 5,
+ "ceiling": {
+ "value": 12192,
+ "unit": "m",
+ "unitType": 5
+ },
+ "pressure": {
+ "value": 1015.9,
+ "unit": "mb",
+ "unitType": 14
+ },
+ "pressureTendency": {
+ "localizedDescription": "Steady",
+ "code": "S"
+ },
+ "past24HourTemperatureDeparture": {
+ "value": 3,
+ "unit": "C",
+ "unitType": 17
+ },
+ "apparentTemperature": {
+ "value": 20,
+ "unit": "C",
+ "unitType": 17
+ },
+ "windChillTemperature": {
+ "value": 19.4,
+ "unit": "C",
+ "unitType": 17
+ },
+ "wetBulbTemperature": {
+ "value": 17.5,
+ "unit": "C",
+ "unitType": 17
+ },
+ "precipitationSummary": {
+ "pastHour": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "past3Hours": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "past6Hours": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "past9Hours": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "past12Hours": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "past18Hours": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "past24Hours": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ }
+ },
+ "temperatureSummary": {
+ "past6Hours": {
+ "minimum": {
+ "value": 16,
"unit": "C", "unitType": 17
- },
- "realFeelTemperature": {
- "value": 13.7,
+ },
+ "maximum": {
+ "value": 19.5,
"unit": "C", "unitType": 17
+ }
},
- "realFeelTemperatureShade": {
- "value": 13.7,
+ "past12Hours": {
+ "minimum": {
+ "value": 16,
"unit": "C", "unitType": 17
- },
- "relativeHumidity": 87,
- "dewPoint": {
- "value": 10.3,
+ },
+ "maximum": {
+ "value": 20.4,
"unit": "C", "unitType": 17
+ }
},
- "wind": {
- "direction": {
- "degrees": 23.0,
- "localizedDescription": "NNE"
- },
- "speed": {
- "value": 4.5,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "windGust": {
- "speed": {
- "value": 9.0,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "uvIndex": 1,
- "uvIndexPhrase": "Low",
- "visibility": {
- "value": 9.7,
- "unit": "km",
- "unitType": 6
- },
- "obstructionsToVisibility": "",
- "cloudCover": 100,
- "ceiling": {
- "value": 1494.0,
- "unit": "m",
- "unitType": 5
- },
- "pressure": {
- "value": 1021.2,
- "unit": "mb",
- "unitType": 14
- },
- "pressureTendency": {
- "localizedDescription": "Steady",
- "code": "S"
- },
- "past24HourTemperatureDeparture": {
- "value": -2.1,
- "unit": "C",
- "unitType": 17
- },
- "apparentTemperature": {
- "value": 15.0,
+ "past24Hours": {
+ "minimum": {
+ "value": 16,
"unit": "C", "unitType": 17
- },
- "windChillTemperature": {
- "value": 12.2,
+ },
+ "maximum": {
+ "value": 26.4,
"unit": "C", "unitType": 17
- },
- "wetBulbTemperature": {
- "value": 11.3,
- "unit": "C",
- "unitType": 17
- },
- "precipitationSummary": {
- "pastHour": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "past3Hours": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "past6Hours": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "past9Hours": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "past12Hours": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "past18Hours": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "past24Hours": {
- "value": 0.4,
- "unit": "mm",
- "unitType": 3
- }
- },
- "temperatureSummary": {
- "past6Hours": {
- "minimum": {
- "value": 12.2,
- "unit": "C",
- "unitType": 17
- },
- "maximum": {
- "value": 14.0,
- "unit": "C",
- "unitType": 17
- }
- },
- "past12Hours": {
- "minimum": {
- "value": 12.2,
- "unit": "C",
- "unitType": 17
- },
- "maximum": {
- "value": 14.0,
- "unit": "C",
- "unitType": 17
- }
- },
- "past24Hours": {
- "minimum": {
- "value": 12.2,
- "unit": "C",
- "unitType": 17
- },
- "maximum": {
- "value": 15.6,
- "unit": "C",
- "unitType": 17
- }
- }
+ }
}
+ }
}
- ]
+ ]
}
```

## Request severe weather alerts
-Azure Maps [Get Severe Weather Alerts API] returns the severe weather alerts that are available worldwide from both official Government Meteorological Agencies and leading global to regional weather alert providers. The service returns details like alert type, category, level. The service also returns detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers.
+Azure Maps [Get Severe Weather Alerts API] returns the severe weather alerts that are available worldwide from both official government meteorological agencies and leading global-to-regional weather alert providers. The service returns details like alert type, category, and level. The service also returns detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves, or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers.
In this example, you use the [Get Severe Weather Alerts API] to retrieve current weather conditions at coordinates located in Cheyenne, WY.
->[!NOTE]
->This example retrieves severe weather alerts at the time of this writing. It is likely that there are no longer any severe weather alerts at the requested location. To retrieve actual severe alert data when running this example, you'll need to retrieve data at a different coordinate location.
+> [!NOTE]
+> This example retrieves severe weather alerts at the time of this writing. It is likely that there are no longer any severe weather alerts at the requested location. To retrieve actual severe alert data when running this example, you'll need to retrieve data at a different coordinate location.
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. In the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
https://atlas.microsoft.com/weather/severe/alerts/json?api-version=1.0&query=41.161079,-104.805450&subscription-key={Your-Azure-Maps-Subscription-key}
```
-3. Select the blue **Send** button. If there are no severe weather alerts, the response body contains an empty `results[]` array. If there are severe weather alerts, the response body contains something like the following JSON response:
+1. Select the blue **Create** button.
+
+1. Select the run button.
+
+ :::image type="content" source="./media/weather-service/bruno-run-request-severe-weather-alerts.png" alt-text="A screenshot showing the Request severe weather alerts URL with the run button highlighted in the bruno app.":::
+
+ If there are no severe weather alerts, the response body contains an empty `results[]` array. If there are severe weather alerts, the response body contains something like the following JSON response:
```json
{
In this example, you use the [Get Severe Weather Alerts API] to retrieve current
"alertAreas": [ { "name": "Platte/Goshen/Central and Eastern Laramie",
- "summary": "Red Flag Warning in effect until 7:00 PM MDT. Source: U.S. National Weather Service",
+ "summary": "Red Flag Warning in effect until 7:00 PM MDT. Source: U.S. National Weather Service",
"startTime": "2020-10-05T15:00:00+00:00", "endTime": "2020-10-06T01:00:00+00:00", "latestStatus": {
In this example, you use the [Get Severe Weather Alerts API] to retrieve current
## Request daily weather forecast data
-The [Get Daily Forecast API] returns detailed daily weather forecast such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request for five days by setting `duration=5`.
+The [Get Daily Forecast API] returns a detailed daily weather forecast, such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request five days by setting `duration=5`.
->[!IMPORTANT]
->In the S0 pricing tier, you can request daily forecast for the next 1, 5, 10, and 15 days. In either Gen1 (S1) or Gen2 pricing tier, you can request daily forecast for the next 25 days, and 45 days.
+> [!IMPORTANT]
+> In the S0 pricing tier, you can request the daily forecast for the next 1, 5, 10, and 15 days. In either the Gen1 (S1) or Gen2 pricing tier, you can also request the daily forecast for the next 25 or 45 days.
> > **Azure Maps Gen1 pricing tier retirement** >
The [Get Daily Forecast API] returns detailed daily weather forecast such as tem
In this example, you use the [Get Daily Forecast API] to retrieve the five-day weather forecast for coordinates located in Seattle, WA.
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. In the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
https://atlas.microsoft.com/weather/forecast/daily/json?api-version=1.0&query=47.60357,-122.32945&duration=5&subscription-key={Your-Azure-Maps-Subscription-key}
```
-3. Select the blue **Send** button. The response body contains the five-day weather forecast data. For the sake of brevity, the following JSON response shows the forecast for the first day.
+1. Select the blue **Create** button.
+
+1. Select the run button.
+
+ :::image type="content" source="./media/weather-service/bruno-run-request-daily-weather-forecast-data.png" alt-text="A screenshot showing the Request daily weather forecast data URL with the run button highlighted in the bruno app.":::
+
+ The response body contains the five-day weather forecast data. For the sake of brevity, the following JSON response shows the forecast for the first day.
```json
{
- "summary": {
- "startDate": "2020-10-18T17:00:00+00:00",
- "endDate": "2020-10-19T23:00:00+00:00",
- "severity": 2,
- "phrase": "Snow, mixed with rain at times continuing through Monday evening and a storm total of 3-6 cm",
- "category": "snow/rain"
- },
- "forecasts": [
+ "summary": {
+ "startDate": "2024-08-09T08:00:00-07:00",
+ "endDate": "2024-08-09T20:00:00-07:00",
+ "severity": 7,
+ "phrase": "Very warm tomorrow",
+ "category": "heat"
+ },
+ "forecasts": [
{
- "date": "2020-10-19T04:00:00+00:00",
- "temperature": {
- "minimum": {
- "value": -1.1,
- "unit": "C",
- "unitType": 17
- },
- "maximum": {
- "value": 1.3,
- "unit": "C",
- "unitType": 17
- }
+ "date": "2024-08-08T07:00:00-07:00",
+ "temperature": {
+ "minimum": {
+ "value": 16.2,
+ "unit": "C",
+ "unitType": 17
+ },
+ "maximum": {
+ "value": 28.9,
+ "unit": "C",
+ "unitType": 17
+ }
+ },
+ "realFeelTemperature": {
+ "minimum": {
+ "value": 16.3,
+ "unit": "C",
+ "unitType": 17
+ },
+ "maximum": {
+ "value": 29.8,
+ "unit": "C",
+ "unitType": 17
+ }
+ },
+ "realFeelTemperatureShade": {
+ "minimum": {
+ "value": 16.3,
+ "unit": "C",
+ "unitType": 17
+ },
+ "maximum": {
+ "value": 27.3,
+ "unit": "C",
+ "unitType": 17
+ }
+ },
+ "hoursOfSun": 12.9,
+ "degreeDaySummary": {
+ "heating": {
+ "value": 0,
+ "unit": "C",
+ "unitType": 17
+ },
+ "cooling": {
+ "value": 5,
+ "unit": "C",
+ "unitType": 17
+ }
+ },
+ "airAndPollen": [
+ {
+ "name": "AirQuality",
+ "value": 56,
+ "category": "Moderate",
+ "categoryValue": 2,
+ "type": "Nitrogen Dioxide"
+ },
+ {
+ "name": "Grass",
+ "value": 2,
+ "category": "Low",
+ "categoryValue": 1
+ },
+ {
+ "name": "Mold",
+ "value": 0,
+ "category": "Low",
+ "categoryValue": 1
+ },
+ {
+ "name": "Ragweed",
+ "value": 5,
+ "category": "Low",
+ "categoryValue": 1
+ },
+ {
+ "name": "Tree",
+ "value": 0,
+ "category": "Low",
+ "categoryValue": 1
+ },
+ {
+ "name": "UVIndex",
+ "value": 7,
+ "category": "High",
+ "categoryValue": 3
+ }
+ ],
+ "day": {
+ "iconCode": 2,
+ "iconPhrase": "Mostly sunny",
+ "hasPrecipitation": false,
+ "shortPhrase": "Mostly sunny",
+ "longPhrase": "Mostly sunny; wildfire smoke will cause the sky to be hazy",
+ "precipitationProbability": 0,
+ "thunderstormProbability": 0,
+ "rainProbability": 0,
+ "snowProbability": 0,
+ "iceProbability": 0,
+ "wind": {
+ "direction": {
+ "degrees": 357,
+ "localizedDescription": "N"
+ },
+ "speed": {
+ "value": 11.1,
+ "unit": "km/h",
+ "unitType": 7
+ }
},
- "realFeelTemperature": {
- "minimum": {
- "value": -6.0,
- "unit": "C",
- "unitType": 17
- },
- "maximum": {
- "value": 0.5,
- "unit": "C",
- "unitType": 17
- }
+ "windGust": {
+ "direction": {
+ "degrees": 354,
+ "localizedDescription": "N"
+ },
+ "speed": {
+ "value": 29.6,
+ "unit": "km/h",
+ "unitType": 7
+ }
},
- "realFeelTemperatureShade": {
- "minimum": {
- "value": -6.0,
- "unit": "C",
- "unitType": 17
- },
- "maximum": {
- "value": 0.7,
- "unit": "C",
- "unitType": 17
- }
+ "totalLiquid": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
},
- "hoursOfSun": 1.8,
- "degreeDaySummary": {
- "heating": {
- "value": 18.0,
- "unit": "C",
- "unitType": 17
- },
- "cooling": {
- "value": 0.0,
- "unit": "C",
- "unitType": 17
- }
+ "rain": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
},
- "airAndPollen": [
- {
- "name": "AirQuality",
- "value": 23,
- "category": "Good",
- "categoryValue": 1,
- "type": "Ozone"
- },
- {
- "name": "Grass",
- "value": 0,
- "category": "Low",
- "categoryValue": 1
- },
- {
- "name": "Mold",
- "value": 0,
- "category": "Low",
- "categoryValue": 1
- },
- {
- "name": "Ragweed",
- "value": 0,
- "category": "Low",
- "categoryValue": 1
- },
- {
- "name": "Tree",
- "value": 0,
- "category": "Low",
- "categoryValue": 1
- },
- {
- "name": "UVIndex",
- "value": 0,
- "category": "Low",
- "categoryValue": 1
- }
- ],
- "day": {
- "iconCode": 22,
- "iconPhrase": "Snow",
- "hasPrecipitation": true,
- "precipitationType": "Mixed",
- "precipitationIntensity": "Light",
- "shortPhrase": "Chilly with snow, 2-4 cm",
- "longPhrase": "Chilly with snow, accumulating an additional 2-4 cm",
- "precipitationProbability": 90,
- "thunderstormProbability": 0,
- "rainProbability": 54,
- "snowProbability": 85,
- "iceProbability": 8,
- "wind": {
- "direction": {
- "degrees": 36.0,
- "localizedDescription": "NE"
- },
- "speed": {
- "value": 9.3,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "windGust": {
- "direction": {
- "degrees": 70.0,
- "localizedDescription": "ENE"
- },
- "speed": {
- "value": 25.9,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "totalLiquid": {
- "value": 4.3,
- "unit": "mm",
- "unitType": 3
- },
- "rain": {
- "value": 0.5,
- "unit": "mm",
- "unitType": 3
- },
- "snow": {
- "value": 2.72,
- "unit": "cm",
- "unitType": 4
- },
- "ice": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "hoursOfPrecipitation": 9.0,
- "hoursOfRain": 1.0,
- "hoursOfSnow": 9.0,
- "hoursOfIce": 0.0,
- "cloudCover": 96
- },
- "night": {
- "iconCode": 29,
- "iconPhrase": "Rain and snow",
- "hasPrecipitation": true,
- "precipitationType": "Mixed",
- "precipitationIntensity": "Light",
- "shortPhrase": "Showers of rain and snow",
- "longPhrase": "A couple of showers of rain or snow this evening; otherwise, cloudy; storm total snowfall 1-3 cm",
- "precipitationProbability": 65,
- "thunderstormProbability": 0,
- "rainProbability": 60,
- "snowProbability": 54,
- "iceProbability": 4,
- "wind": {
- "direction": {
- "degrees": 16.0,
- "localizedDescription": "NNE"
- },
- "speed": {
- "value": 16.7,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "windGust": {
- "direction": {
- "degrees": 1.0,
- "localizedDescription": "N"
- },
- "speed": {
- "value": 35.2,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "totalLiquid": {
- "value": 4.3,
- "unit": "mm",
- "unitType": 3
- },
- "rain": {
- "value": 3.0,
- "unit": "mm",
- "unitType": 3
- },
- "snow": {
- "value": 0.79,
- "unit": "cm",
- "unitType": 4
- },
- "ice": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
- },
- "hoursOfPrecipitation": 4.0,
- "hoursOfRain": 1.0,
- "hoursOfSnow": 3.0,
- "hoursOfIce": 0.0,
- "cloudCover": 94
- },
- "sources": [
- "AccuWeather"
- ]
- },...
- ]
+ "snow": {
+ "value": 0,
+ "unit": "cm",
+ "unitType": 4
+ },
+ "ice": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "hoursOfPrecipitation": 0,
+ "hoursOfRain": 0,
+ "hoursOfSnow": 0,
+ "hoursOfIce": 0,
+ "cloudCover": 10
+ },
+ "night": {
+ "iconCode": 35,
+ "iconPhrase": "Partly cloudy",
+ "hasPrecipitation": false,
+ "shortPhrase": "Partly cloudy",
+ "longPhrase": "Partly cloudy; wildfire smoke will cause the sky to be hazy",
+ "precipitationProbability": 1,
+ "thunderstormProbability": 0,
+ "rainProbability": 1,
+ "snowProbability": 0,
+ "iceProbability": 0,
+ "wind": {
+ "direction": {
+ "degrees": 7,
+ "localizedDescription": "N"
+ },
+ "speed": {
+ "value": 9.3,
+ "unit": "km/h",
+ "unitType": 7
+ }
+ },
+ "windGust": {
+ "direction": {
+ "degrees": 3,
+ "localizedDescription": "N"
+ },
+ "speed": {
+ "value": 20.4,
+ "unit": "km/h",
+ "unitType": 7
+ }
+ },
+ "totalLiquid": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "rain": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "snow": {
+ "value": 0,
+ "unit": "cm",
+ "unitType": 4
+ },
+ "ice": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "hoursOfPrecipitation": 0,
+ "hoursOfRain": 0,
+ "hoursOfSnow": 0,
+ "hoursOfIce": 0,
+ "cloudCover": 26
+ },
+ "sources": [
+ "AccuWeather"
+ ]
+ }
+ ]
}
```
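If you'd rather script these calls than use an API client, the following is a minimal sketch using the Python `requests` library against the same Get Daily Forecast URL shown above; only the subscription key placeholder needs to be replaced:

```python
import requests

subscription_key = "{Your-Azure-Maps-Subscription-key}"  # replace with your key

response = requests.get(
    "https://atlas.microsoft.com/weather/forecast/daily/json",
    params={
        "api-version": "1.0",
        "query": "47.60357,-122.32945",  # Seattle, WA
        "duration": 5,
        "subscription-key": subscription_key,
    },
)
response.raise_for_status()

# Print each day's minimum and maximum temperature, using the fields shown
# in the sample response above.
for day in response.json()["forecasts"]:
    low = day["temperature"]["minimum"]
    high = day["temperature"]["maximum"]
    print(f'{day["date"]}: {low["value"]} to {high["value"]} {high["unit"]}')
```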
The [Get Hourly Forecast API] returns detailed weather forecast by the hour for
In this example, you use the [Get Hourly Forecast API] to retrieve the hourly weather forecast for the next 12 hours at coordinates located in Seattle, WA.
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. In the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
https://atlas.microsoft.com/weather/forecast/hourly/json?api-version=1.0&query=47.60357,-122.32945&duration=12&subscription-key={Your-Azure-Maps-Subscription-key}
```
-3. Select the blue **Send** button. The response body contains weather forecast data for the next 12 hours. For the sake of brevity, the following JSON response shows the forecast for the first hour.
+1. Select the blue **Create** button.
+
+1. Select the run button.
+
+ :::image type="content" source="./media/weather-service/bruno-run-request-hourly-weather-forecast-data.png" alt-text="A screenshot showing the Request hourly weather forecast data URL with the run button highlighted in the bruno app.":::
+
+ The response body contains weather forecast data for the next 12 hours. The following example JSON response only shows the first hour:
```json
{
- "forecasts": [
+ "forecasts": [
{
- "date": "2020-10-19T21:00:00+00:00",
- "iconCode": 12,
- "iconPhrase": "Showers",
- "hasPrecipitation": true,
- "precipitationType": "Rain",
- "precipitationIntensity": "Light",
- "isDaylight": true,
- "temperature": {
- "value": 14.7,
- "unit": "C",
- "unitType": 17
- },
- "realFeelTemperature": {
- "value": 13.3,
- "unit": "C",
- "unitType": 17
- },
- "wetBulbTemperature": {
- "value": 12.0,
- "unit": "C",
- "unitType": 17
- },
- "dewPoint": {
- "value": 9.5,
- "unit": "C",
- "unitType": 17
- },
- "wind": {
- "direction": {
- "degrees": 242.0,
- "localizedDescription": "WSW"
- },
- "speed": {
- "value": 9.3,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "windGust": {
- "speed": {
- "value": 14.8,
- "unit": "km/h",
- "unitType": 7
- }
- },
- "relativeHumidity": 71,
- "visibility": {
- "value": 9.7,
- "unit": "km",
- "unitType": 6
- },
- "cloudCover": 100,
- "ceiling": {
- "value": 1128.0,
- "unit": "m",
- "unitType": 5
- },
- "uvIndex": 1,
- "uvIndexPhrase": "Low",
- "precipitationProbability": 51,
- "rainProbability": 51,
- "snowProbability": 0,
- "iceProbability": 0,
- "totalLiquid": {
- "value": 0.3,
- "unit": "mm",
- "unitType": 3
- },
- "rain": {
- "value": 0.3,
- "unit": "mm",
- "unitType": 3
- },
- "snow": {
- "value": 0.0,
- "unit": "cm",
- "unitType": 4
- },
- "ice": {
- "value": 0.0,
- "unit": "mm",
- "unitType": 3
+ "date": "2024-08-07T15:00:00-07:00",
+ "iconCode": 2,
+ "iconPhrase": "Mostly sunny",
+ "hasPrecipitation": false,
+ "isDaylight": true,
+ "temperature": {
+ "value": 24.6,
+ "unit": "C",
+ "unitType": 17
+ },
+ "realFeelTemperature": {
+ "value": 26.4,
+ "unit": "C",
+ "unitType": 17
+ },
+ "wetBulbTemperature": {
+ "value": 18.1,
+ "unit": "C",
+ "unitType": 17
+ },
+ "dewPoint": {
+ "value": 14.5,
+ "unit": "C",
+ "unitType": 17
+ },
+ "wind": {
+ "direction": {
+ "degrees": 340,
+ "localizedDescription": "NNW"
+ },
+ "speed": {
+ "value": 14.8,
+ "unit": "km/h",
+ "unitType": 7
+ }
+ },
+ "windGust": {
+ "speed": {
+ "value": 24.1,
+ "unit": "km/h",
+ "unitType": 7
}
- }...
- ]
+ },
+ "relativeHumidity": 53,
+ "visibility": {
+ "value": 16.1,
+ "unit": "km",
+ "unitType": 6
+ },
+ "cloudCover": 11,
+ "ceiling": {
+ "value": 10211,
+ "unit": "m",
+ "unitType": 5
+ },
+ "uvIndex": 5,
+ "uvIndexPhrase": "Moderate",
+ "precipitationProbability": 0,
+ "rainProbability": 0,
+ "snowProbability": 0,
+ "iceProbability": 0,
+ "totalLiquid": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "rain": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ },
+ "snow": {
+ "value": 0,
+ "unit": "cm",
+ "unitType": 4
+ },
+ "ice": {
+ "value": 0,
+ "unit": "mm",
+ "unitType": 3
+ }
+ }
+ ]
}
```
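As a sketch of working with the hourly response programmatically, the following assumes the same Get Hourly Forecast request made with the Python `requests` library, and scans the 12 returned hours for any meaningful chance of rain (the 30% threshold is arbitrary):

```python
import requests

subscription_key = "{Your-Azure-Maps-Subscription-key}"  # replace with your key

response = requests.get(
    "https://atlas.microsoft.com/weather/forecast/hourly/json",
    params={
        "api-version": "1.0",
        "query": "47.60357,-122.32945",  # Seattle, WA
        "duration": 12,
        "subscription-key": subscription_key,
    },
)
response.raise_for_status()

# Flag any of the next 12 hours with a rain probability of 30% or more,
# using the fields shown in the sample response above.
for hour in response.json()["forecasts"]:
    if hour["rainProbability"] >= 30:
        print(f'{hour["date"]}: {hour["rainProbability"]}% rain, {hour["iconPhrase"]}')
```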
In this example, you use the [Get Hourly Forecast API] to retrieve the hourly we
In this example, you use the [Get Minute Forecast API] to retrieve the minute-by-minute weather forecast at coordinates located in Seattle, WA. The weather forecast is given for the next 120 minutes. The query requests the forecast at 15-minute intervals, but you can adjust the `interval` parameter to either 1 or 5 minutes.
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. In the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
https://atlas.microsoft.com/weather/forecast/minute/json?api-version=1.0&query=47.60357,-122.32945&interval=15&subscription-key={Your-Azure-Maps-Subscription-key}
```
-3. Select the blue **Send** button. The response body contains weather forecast data for the next 120 minutes, in 15-minute intervals.
+1. Select the blue **Create** button.
+
+1. Select the run button.
+
+ :::image type="content" source="./media/weather-service/bruno-run-request-minute-by-minute-weather-forecast-data.png" alt-text="A screenshot showing the Request minute-by-minute weather forecast data URL with the run button highlighted in the bruno app.":::
+
+ The response body contains weather forecast data for the next 120 minutes, in 15-minute intervals.
```json
{
- "summary": {
+ "summary": {
"briefPhrase60": "No precipitation for at least 60 min", "shortPhrase": "No precip for 120 min", "briefPhrase": "No precipitation for at least 120 min", "longPhrase": "No precipitation for at least 120 min",
- "iconCode": 7
- },
- "intervalSummaries": [
+ "iconCode": 1
+ },
+ "intervalSummaries": [
{
- "startMinute": 0,
- "endMinute": 119,
- "totalMinutes": 120,
- "shortPhrase": "No precip for %MINUTE_VALUE min",
- "briefPhrase": "No precipitation for at least %MINUTE_VALUE min",
- "longPhrase": "No precipitation for at least %MINUTE_VALUE min",
- "iconCode": 7
+ "startMinute": 0,
+ "endMinute": 119,
+ "totalMinutes": 120,
+ "shortPhrase": "No precip for %MINUTE_VALUE min",
+ "briefPhrase": "No precipitation for at least %MINUTE_VALUE min",
+ "longPhrase": "No precipitation for at least %MINUTE_VALUE min",
+ "iconCode": 1
}
- ],
- "intervals": [
+ ],
+ "intervals": [
{
- "startTime": "2020-10-19T20:51:00+00:00",
- "minute": 0,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T05:58:00-07:00",
+ "minute": 0,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 7
}, {
- "startTime": "2020-10-19T21:06:00+00:00",
- "minute": 15,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T06:13:00-07:00",
+ "minute": 15,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 3
}, {
- "startTime": "2020-10-19T21:21:00+00:00",
- "minute": 30,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T06:28:00-07:00",
+ "minute": 30,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 2
}, {
- "startTime": "2020-10-19T21:36:00+00:00",
- "minute": 45,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T06:43:00-07:00",
+ "minute": 45,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 2
}, {
- "startTime": "2020-10-19T21:51:00+00:00",
- "minute": 60,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T06:58:00-07:00",
+ "minute": 60,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 1
}, {
- "startTime": "2020-10-19T22:06:00+00:00",
- "minute": 75,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T07:13:00-07:00",
+ "minute": 75,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 1
}, {
- "startTime": "2020-10-19T22:21:00+00:00",
- "minute": 90,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T07:28:00-07:00",
+ "minute": 90,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 0
}, {
- "startTime": "2020-10-19T22:36:00+00:00",
- "minute": 105,
- "dbz": 0.0,
- "shortPhrase": "No Precipitation",
- "iconCode": 7,
- "cloudCover": 100
+ "startTime": "2024-08-08T07:43:00-07:00",
+ "minute": 105,
+ "dbz": 0,
+ "shortPhrase": "No Precipitation",
+ "iconCode": 1,
+ "cloudCover": 0
}
- ]
+ ]
}
```
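A similar sketch for the minute-by-minute response, again assuming the Python `requests` library, prints the overall summary phrase followed by each 15-minute interval from the `intervals` array shown above:

```python
import requests

subscription_key = "{Your-Azure-Maps-Subscription-key}"  # replace with your key

response = requests.get(
    "https://atlas.microsoft.com/weather/forecast/minute/json",
    params={
        "api-version": "1.0",
        "query": "47.60357,-122.32945",  # Seattle, WA
        "interval": 15,
        "subscription-key": subscription_key,
    },
)
response.raise_for_status()

data = response.json()
print(data["summary"]["longPhrase"])

# Each interval covers 15 minutes, per the "interval" parameter above.
for interval in data["intervals"]:
    print(f'+{interval["minute"]} min: {interval["shortPhrase"]} (cloud cover {interval["cloudCover"]}%)')
```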
In this example, you use the [Get Minute Forecast API] to retrieve the minute-by
[Get Minute Forecast API]: /rest/api/maps/weather/getminuteforecast [Get Severe Weather Alerts API]: /rest/api/maps/weather/getsevereweatheralerts [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md
-[Postman]: https://www.postman.com/
+[bruno]: https://www.usebruno.com/
[subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Weather service concepts]: weather-services-concepts.md [Weather services]: /rest/api/maps/weather
azure-maps How To Search For Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md
Title: Search for a location using Azure Maps Search services description: Learn about the Azure Maps Search service. See how to use this set of APIs for geocoding, reverse geocoding, fuzzy searches, and reverse cross street searches.-+ Previously updated : 10/28/2021 Last updated : 8/9/2024 --+ # Search for a location using Azure Maps Search services
This article demonstrates how to:
* An [Azure Maps account] * A [subscription key]
-This tutorial uses the [Postman] application, but you may choose a different API development environment.
+>[!IMPORTANT]
+>
+> In the URL examples in this article, you need to replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+
+This article uses the [bruno] application, but you can choose a different API development environment.
## Request latitude and longitude for an address (geocoding)

The example in this section uses [Get Search Address] to convert an address into latitude and longitude coordinates. This process is also called *geocoding*. In addition to returning the coordinates, the response also returns detailed address properties such as street, postal code, municipality, and country/region information.
->[!TIP]
->If you have a set of addresses to geocode, you can use [Post Search Address Batch] to send a batch of queries in a single request.
+> [!TIP]
+> If you have a set of addresses to geocode, you can use [Post Search Address Batch] to send a batch of queries in a single request.
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. Open the bruno app, then select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Braod St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
https://atlas.microsoft.com/search/address/json?&subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&language=en-US&query=400 Broad St, Seattle, WA 98109
```
-3. Select the blue **Send** button. The response body contains data for a single location.
+1. Select the **Create** button.
+
+1. Select the run button.
-4. Next, search an address that has more than one possible locations. In the **Params** section, change the `query` key to `400 Broad, Seattle`. Select the blue **Send** button.
+ This request searches for a specific address: `400 Broad St, Seattle, WA 98109`. Next, search an address that has more than one possible location.
+
+1. In the **Params** section, change the `query` key to `400 Broad, Seattle`, then select the run button.
:::image type="content" source="./media/how-to-search-for-address/search-address.png" alt-text="Search for address":::
-5. Next, try setting the `query` key to `400 Broa`.
+1. Next, try setting the `query` key to `400 Broa`, then select the run button.
-6. Select the **Send** button. The response includes results from multiple countries/regions. To geobias results to the relevant area for your users, always add as many location details as possible to the request.
+ The response includes results from multiple countries/regions. To [geobias] results to the relevant area for your users, always add as many location details as possible to the request.
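The following is a minimal scripted version of the same geocoding request, assuming the Python `requests` library; it takes the top-ranked result and prints its coordinates:

```python
import requests

subscription_key = "{Your-Azure-Maps-Subscription-key}"  # replace with your key

response = requests.get(
    "https://atlas.microsoft.com/search/address/json",
    params={
        "api-version": "1.0",
        "language": "en-US",
        "query": "400 Broad St, Seattle, WA 98109",
        "subscription-key": subscription_key,
    },
)
response.raise_for_status()

# Results are ordered by score; take the best match and print its
# freeform address and position (latitude/longitude).
best = response.json()["results"][0]
print(best["address"]["freeformAddress"], best["position"])
```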
## Fuzzy Search
-[Fuzzy Search] supports standard single line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a name of POI, POI category or name of brand. Furthermore, to improve the relevance of your search results, constrain the query results using a coordinate location and radius, or by defining a bounding box.
+[Fuzzy Search] supports standard single line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a name of POI, POI category or name of brand. Furthermore, to improve the relevance of your search results, constrain the query results using a coordinate location and radius, or by defining a bounding box.
> [!TIP]
> Most Search queries default to `maxFuzzyLevel=1` to improve performance and reduce unusual results. Adjust fuzziness levels by using the `maxFuzzyLevel` or `minFuzzyLevel` parameters. For more information on `maxFuzzyLevel` and a complete list of all optional parameters, see [Fuzzy Search URI Parameters].
The example in this section uses `Fuzzy Search` to search the entire world for *
> [!IMPORTANT]
> To geobias results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search].
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. Open the bruno app, then select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key.
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
- https://atlas.microsoft.com/search/fuzzy/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza
+ https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza
```
> [!NOTE] > The _json_ attribute in the URL path determines the response format. This article uses json for ease of use and readability. To find other supported response formats, see the `format` parameter definition in the [URI Parameter reference] documentation.
-3. Select **Send** and review the response body.
+1. Select the run button, then review the response body.
- The ambiguous query string for "pizza" returned 10 [point of interest result] (POI) in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. The results are now varied for this query, and aren't tied to any reference location.
+ The ambiguous query string for "pizza" returned 10 [point of interest] (POI) results in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. The results are now varied for this query, and aren't tied to any reference location.
- In the next step, you'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Search Coverage].
+ In the next step, you'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Azure Maps geocoding coverage].
-4. The default behavior is to search the entire world, potentially returning unnecessary results. Next, search for pizza only in the United States. Add the `countrySet` key to the **Params** section, and set its value to `US`. Setting the `countrySet` key to `US` bounds the results to the United States.
+1. The default behavior is to search the entire world, potentially returning unnecessary results. Next, search for pizza only in the United States. Add the `countrySet` key to the **Params** section, and set its value to `US`. Setting the `countrySet` key to `US` bounds the results to the United States.
:::image type="content" source="./media/how-to-search-for-address/search-fuzzy-country.png" alt-text="Search for pizza in the United States":::

   The results are now bounded by the country code and the query returns pizza restaurants in the United States.
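   For reference, the complete request for this step is the same Fuzzy Search URL as before with `countrySet` appended:

```http
https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza&countrySet=US
```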
-5. To get an even more targeted search, you can search over the scope of a lat/lon coordinate pair. The following example uses the lat/lon coordinates of the Seattle Space Needle. Since we only want to return results within a 400-meters radius, we add the `radius` parameter. Also, we add the `limit` parameter to limit the results to the five closest pizza places.
+1. To get an even more targeted search, you can search over the scope of a lat/lon coordinate pair. The following example uses the lat/lon coordinates of the Seattle Space Needle. Since we only want to return results within a 400-meter radius, we add the `radius` parameter. Also, we add the `limit` parameter to limit the results to the five closest pizza places.
In the **Params** section, add the following key/value pairs:
The example in this section uses `Fuzzy Search` to search the entire world for *
   | radius | 400 |
   | limit | 5 |
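   Assembled into a single request, this step looks roughly like the following sketch. The Space Needle coordinates shown are approximate; `lat`, `lon`, `radius`, and `limit` are all documented Fuzzy Search parameters:

```http
https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza&countrySet=US&lat=47.6204&lon=-122.3491&radius=400&limit=5
```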
-6. Select **Send**. The response includes results for pizza restaurants near the Seattle Space Needle.
+1. Select run. The response includes results for pizza restaurants near the Seattle Space Needle.
## Search for a street address using Reverse Address Search

[Get Search Address Reverse] translates coordinates into human-readable street addresses. This API is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points.

> [!IMPORTANT]
-> To geobias results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search].
+> To [geobias] results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search].
> [!TIP] > If you have a set of coordinate locations to reverse geocode, you can use [Post Search Address Reverse Batch] to send a batch of queries in a single request.

This example demonstrates making reverse searches using a few of the optional parameters that are available. For the full list of optional parameters, see [Reverse Search Parameters].
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. Open the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL:
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
- https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700&number=1
+ https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700
```
-3. Select **Send**, and review the response body. You should see one query result. The response includes key address information about Safeco Field.
+1. Select the run button, and review the response body. You should see one query result. The response includes key address information about Safeco Field.
-4. Next, add the following key/value pairs to the **Params** section:
+1. Next, add the following key/value pairs to the **Params** section:
- | Key | Value | Returns
- |--|||
- | number | 1 |The response may include the side of the street (Left/Right) and also an offset position for the number.|
+ | Key | Value | Returns |
+ |--|--|--|
+ | number | 1 |The response can include the side of the street (Left/Right) and also an offset position for the number.|
   | returnSpeedLimit | true | Returns the speed limit at the address.|
   | returnRoadUse | true | Returns road use types at the address. For all possible road use types, see [Road Use Types].|
- | returnMatchType | true| Returns the type of match. For all possible values, see [Reverse Address Search Results].
+ | returnMatchType | true| Returns the type of match. For all possible values, see [Reverse Address Search Results]. |
:::image type="content" source="./media/how-to-search-for-address/search-reverse.png" alt-text="Search reverse.":::
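   With these parameters added, the full request resembles the following sketch:

```http
https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700&number=1&returnSpeedLimit=true&returnRoadUse=true&returnMatchType=true
```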
-5. Select **Send**, and review the response body.
+1. Select the run button, and review the response body.
-6. Next, we add the `entityType` key, and set its value to `Municipality`. The `entityType` key overrides the `returnMatchType` key in the previous step. `returnSpeedLimit` and `returnRoadUse` also need removed since you're requesting information about the municipality. For all possible entity types, see [Entity Types].
+1. Next, add the `entityType` key, and set its value to `Municipality`. The `entityType` key overrides the `returnMatchType` key in the previous step. `returnSpeedLimit` and `returnRoadUse` also need to be removed since you're requesting information about the municipality. For all possible entity types, see [Entity Types].
:::image type="content" source="./media/how-to-search-for-address/search-reverse-entity-type.png" alt-text="Search reverse entityType.":::
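   A minimal sketch of the resulting request, keeping only the coordinate query and `entityType`:

```http
https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700&entityType=Municipality
```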
-7. Select **Send**. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response doesn't include street address information. Also, the returned `geometryId` can be used to request boundary polygon through Azure Maps Get [Search Polygon API].
+1. Select the run button. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response doesn't include street address information. Also, the returned `geometryId` can be used to request a boundary polygon through the Azure Maps Get [Search Polygon API].
> [!TIP] > For more information on these and other parameters, see [Reverse Search Parameters].
This example demonstrates making reverse searches using a few of the optional pa
This example demonstrates how to search for a cross street based on the coordinates of an address.
-1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request.
+1. Open the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request.
-2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL:
+1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL:
```http
- https://atlas.microsoft.com/search/address/reverse/crossstreet/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700
+ https://atlas.microsoft.com/search/address/reverse/crossstreet/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700
```
- :::image type="content" source="./media/how-to-search-for-address/search-address-cross.png" alt-text="Search cross street.":::
-
-3. Select **Send**, and review the response body. Notice that the response contains a `crossStreet` value of `South Atlantic Street`.
+1. Select the run button, and review the response body. Notice that the response contains a `crossStreet` value of `South Atlantic Street`.
## Next steps
This example demonstrates how to search for a cross street based on the coordina
[Entity Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#entitytype [Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true#uri-parameters [Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true
+[geobias]: glossary.md#geobias
[Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true [Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true
-[point of interest result]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0&preserve-view=true#searchpoiresponse
+[point of interest]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0&preserve-view=true#searchpoiresponse
[Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch [Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch?view=rest-maps-1.0&preserve-view=true
-[Postman]: https://www.postman.com/
+[bruno]: https://www.usebruno.com/
[Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#searchaddressreverseresult [Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true [Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#uri-parameters
This example demonstrates how to search for a cross street based on the coordina
[Route]: /rest/api/maps/route [Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet?view=rest-maps-1.0&preserve-view=true [Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true
-[Search Coverage]: geocoding-coverage.md
+[Azure Maps geocoding coverage]: geocoding-coverage.md
[Search Polygon API]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0&preserve-view=true [Search]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account
azure-maps How To Secure Daemon App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md
Title: How to secure a daemon application in Microsoft Azure Maps
description: This article describes how to host daemon applications, such as background processes, timers, and jobs in a trusted and secure environment in Microsoft Azure Maps. Last updated 10/28/2021
-custom.ms: subject-rbac-steps
# Secure a daemon application
azure-maps How To Secure Device Code https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-device-code.md
Last updated 06/12/2020 # Secure an input constrained device by using Microsoft Entra ID and Azure Maps REST APIs
azure-maps How To Secure Sas App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md
Title: How to secure an Azure Maps application with a SAS token
description: Create an Azure Maps account secured with SAS token authentication. Last updated 06/08/2022
azure-maps How To Secure Spa App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-app.md
Title: How to secure a single-page web application with non-interactive sign-in
description: How to configure a single-page web application with non-interactive Azure role-based access control (Azure RBAC) and Azure Maps Web SDK. Last updated 10/28/2021
azure-maps How To Secure Spa Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md
Title: How to secure a single page application with user sign-in
description: How to configure a single page application that supports Microsoft Entra single-sign-on with Azure Maps Web SDK. Last updated 06/12/2020 # Secure a single page application with user sign-in
azure-maps How To Secure Webapp Users https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-webapp-users.md
Title: How to secure a web application with interactive single sign-in
description: How to configure a web application that supports Microsoft Entra single sign-in with Azure Maps Web SDK using OpenID Connect protocol. Last updated 06/12/2020 # Secure a web application with user sign-in
azure-maps How To Show Attribution https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md
Last updated 3/16/2022 # Show the correct copyright attribution
azure-maps How To Show Traffic Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-traffic-android.md
Last updated 2/26/2021 zone_pivot_groups: azure-maps-android
azure-maps How To Use Android Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-android-map-control-library.md
Last updated 2/26/2021 zone_pivot_groups: azure-maps-android
azure-maps How To Use Best Practices For Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md
Title: Best practices for Azure Maps Route service in Microsoft Azure Maps description: Learn how to route vehicles by using Route service from Microsoft Azure Maps. Last updated 10/28/2021 # Best practices for Azure Maps Route service
azure-maps How To Use Best Practices For Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md
Title: Best practices for Azure Maps Search service description: Learn how to apply the best practices when using the Search service from Microsoft Azure Maps. Last updated 10/28/2021 # Best practices for Azure Maps Search service
azure-maps How To Use Feedback Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-feedback-tool.md
Last updated 03/15/2024
azure-maps How To Use Image Templates Web Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md
Last updated 8/6/2019 # How to use image templates
azure-maps How To Use Indoor Module Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module-ios.md
Last updated 12/10/2021 # Indoor maps in the iOS SDK (Preview)
azure-maps How To Use Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md
Last updated 06/28/2023
azure-maps How To Use Ios Map Control Library https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ios-map-control-library.md
Last updated 11/23/2021 # Get started with Azure Maps iOS SDK (Preview)
azure-maps How To Use Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md
Last updated 06/29/2023
azure-maps How To Use Npm Package https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-npm-package.md
Last updated 07/04/2023
azure-maps How To Use Services Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md
Last updated 03/27/2024
azure-maps How To Use Spatial Io Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md
Last updated 02/28/2020
-#Customer intent: As an Azure Maps web sdk user, I want to install and use the spatial io module so that I can integrate spatial data with the Azure Maps web sdk.
# How to use the Azure Maps Spatial IO module
azure-maps How To Use Ts Rest Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ts-rest-sdk.md
Last updated 07/01/2023
azure-maps How To View Api Usage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-view-api-usage.md
Last updated 08/06/2018 # View Azure Maps API usage metrics
azure-maps Interact Map Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/interact-map-ios-sdk.md
Last updated 11/18/2021 # Interact with the map in the iOS SDK (Preview)
azure-maps Ios Sdk Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/ios-sdk-migration-guide.md
Last updated 02/20/2024 # The Azure Maps iOS SDK migration guide
azure-maps Itinerary Optimization Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/itinerary-optimization-service.md
Title: Create multi-itinerary optimization service description: Learn how to use Azure Maps and NVIDIA cuOpt to build a multi-itinerary optimization service. Last updated 05/20/2024 # Create multi-itinerary optimization service
azure-maps Map Accessibility https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md
Last updated 05/15/2023 # Building an accessible application
azure-maps Map Add Bubble Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer-android.md
Last updated 2/26/2021 zone_pivot_groups: azure-maps-android
azure-maps Map Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer.md
Last updated 05/15/2023 # Add a bubble layer to a map
azure-maps Map Add Controls Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls-android.md
Last updated 02/26/2021 zone_pivot_groups: azure-maps-android
azure-maps Map Add Controls https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md
Last updated 05/15/2023 # Add controls to a map
azure-maps Map Add Custom Html https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-custom-html.md
Last updated 05/17/2023 # Add HTML markers to the map
azure-maps Map Add Drawing Toolbar https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md
Last updated 06/05/2023 # Add a drawing tools toolbar to a map
azure-maps Map Add Heat Map Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer-android.md
Last updated 02/26/2021 zone_pivot_groups: azure-maps-android
azure-maps Map Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md
Last updated 06/06/2023 # Add a heat map layer to a map
azure-maps Map Add Image Layer Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer-android.md
Last updated 02/26/2021 zone_pivot_groups: azure-maps-android
azure-maps Map Add Image Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md
Last updated 06/06/2023 # Add an image layer to a map
azure-maps Map Add Line Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md
Last updated 06/06/2023 # Add a line layer to the map
azure-maps Map Add Pin https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-pin.md
Last updated 06/14/2023 # Add a symbol layer to a map
azure-maps Map Add Popup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md
Last updated 06/14/2023 # Add a popup to the map
azure-maps Map Add Shape https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md
Last updated 06/07/2023 # Add a polygon layer to the map
azure-maps Map Add Snap Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md
Last updated 06/08/2023 # Add a snap grid to the map
azure-maps Map Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md
Last updated 06/08/2023 # Add a tile layer to a map
azure-maps Map Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md
Last updated 06/13/2023 # Create a map
azure-maps Map Events https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md
Last updated 06/12/2023 # Handle map events
azure-maps Map Extruded Polygon Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md
Last updated 02/26/2021 zone_pivot_groups: azure-maps-android
azure-maps Map Extruded Polygon https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md
Last updated 06/15/2023 # Add a polygon extrusion layer to the map
azure-maps Map Get Information From Coordinate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md
Last updated 07/01/2023 # Get information from a coordinate
azure-maps Map Get Shape Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md
Last updated 07/13/2023 # Get shape data
azure-maps Map Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-route.md
Last updated 07/01/2023 # Show directions from A to B
azure-maps Map Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md
Last updated 07/01/2023 # Show search results on the map
azure-maps Map Show Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md
Last updated 10/26/2023 # Show traffic on the map
azure-maps Migrate Bing Maps Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-bing-maps-overview.md
Last updated 05/16/2024 # Migrate from Bing Maps to Azure Maps overview
azure-maps Migrate Calculate Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-calculate-route.md
Title: Migrate Bing Maps Calculate a Route API to Azure Maps Route Directions API description: Learn how to Migrate the Bing Maps Calculate a Route API to the Azure Maps Route Directions API. Last updated 05/16/2024 # Migrate Bing Maps Calculate a Route API
azure-maps Migrate Calculate Truck Route https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-calculate-truck-route.md
Title: Migrate Bing Maps Calculate a Truck Route API to Azure Maps Route Directions API description: Learn how to Migrate the Bing Maps Calculate a Truck Route API to the Azure Maps Route Directions API. Last updated 05/16/2024 # Migrate Bing Maps Calculate a Truck Route API
azure-maps Migrate Find Location Address https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-location-address.md
Last updated 05/16/2024 # Migrate Bing Maps Find a Location by Address API
azure-maps Migrate Find Location By Point https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-location-by-point.md
Last updated 05/16/2024 # Migrate Bing Maps Find a Location by Point API
azure-maps Migrate Find Location Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-location-query.md
Last updated 05/16/2024 # Migrate Bing Maps Find a Location by Query API
azure-maps Migrate Find Time Zone https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-time-zone.md
Last updated 04/15/2024
azure-maps Migrate From Bing Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md
Last updated 10/28/2021 # Tutorial: Migrate a web app from Bing Maps
azure-maps Migrate From Google Maps Android App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-android-app.md
Last updated 12/1/2021 zone_pivot_groups: azure-maps-android
azure-maps Migrate From Google Maps Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md
Last updated 09/28/2023 # Tutorial: Migrate a web app from Google Maps
azure-maps Migrate From Google Maps Web Services https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md
Last updated 09/28/2023 # Tutorial: Migrate web service from Google Maps
azure-maps Migrate From Google Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md
Last updated 09/23/2020 # Tutorial: Migrate from Google Maps to Azure Maps
azure-maps Migrate Geocode Dataflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-geocode-dataflow.md
Title: Migrate Bing Maps Geocode Dataflow API to Azure Maps Geocoding Batch and Reverse Geocoding Batch API description: Learn how to Migrate the Bing Maps Geocode Dataflow API to the Azure Maps Geocoding Batch and Reverse Geocoding Batch API. Last updated 05/15/2024 # Migrate Bing Maps Geocode Dataflow API
azure-maps Migrate Geodata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-geodata.md
Last updated 05/16/2024 # Migrate Bing Maps Geodata API
azure-maps Migrate Get Imagery Metadata https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-imagery-metadata.md
Last updated 05/16/2024 # Migrate Bing Maps Get Imagery Metadata API
azure-maps Migrate Get Static Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-static-map.md
Last updated 06/26/2024 # Migrate Bing Maps Get a Static Map API
azure-maps Migrate Get Traffic Incidents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-traffic-incidents.md
Title: Migrate Bing Maps Get Traffic Incidents API to Azure Maps Get Traffic Incident Detail API description: Learn how to Migrate the Bing Maps Get Traffic Incidents API to the Azure Maps Get Traffic Incident Detail API. Last updated 04/15/2024
azure-maps Migrate Help Using Copilot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-help-using-copilot.md
Last updated 05/16/2024 # Migrate Bing Maps Enterprise applications to Azure Maps with GitHub Copilot
azure-maps Migrate Sds Data Source Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-sds-data-source-management.md
Title: Migrate Bing Maps Data Source Management and Query API to Azure Maps API description: Learn how to Migrate the Bing Maps Data Source Management and Query API to the appropriate Azure Maps API. Last updated 05/15/2024 # Migrate Bing Maps Data Source Management and Query API
azure-maps Open Source Projects https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md
Last updated 12/07/2020 # Azure Maps community - Open-source projects
azure-maps Power Bi Visual Add 3D Column Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-3d-column-layer.md
Last updated 09/15/2023 # Add a 3D column layer
azure-maps Power Bi Visual Add Bubble Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md
Last updated 12/04/2023 # Add a bubble layer
azure-maps Power Bi Visual Add Heat Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md
Last updated 09/15/2023 # Add a heat map layer
azure-maps Power Bi Visual Add Pie Chart Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-pie-chart-layer.md
Last updated 07/17/2023 # Add a pie chart layer
azure-maps Power Bi Visual Add Reference Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md
Last updated 07/10/2024 # Add a reference layer
azure-maps Power Bi Visual Add Tile Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-tile-layer.md
Last updated 07/18/2023 # Add a tile layer
azure-maps Power Bi Visual Cluster Bubbles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-cluster-bubbles.md
Last updated 02/27/2024 # Add a cluster bubble layer
azure-maps Power Bi Visual Conversion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-conversion.md
Last updated 05/23/2023 # Convert Map and Filled map visuals to an Azure Maps visual
azure-maps Power Bi Visual Data Residency https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-data-residency.md
Last updated 03/22/2024 # Azure Maps Power BI visual Data Residency
azure-maps Power Bi Visual Filled Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-filled-map.md
Last updated 07/19/2023 # Filled map in Azure Maps Power BI visual
azure-maps Power Bi Visual Geocode https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md
Last updated 03/16/2022 # Geocoding in Azure Maps Power BI Visual
azure-maps Power Bi Visual Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md
Last updated 09/29/2023 # Get started with Azure Maps Power BI visual
azure-maps Power Bi Visual Manage Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-manage-access.md
Last updated 11/29/2021 # Manage Azure Maps Power BI visual within your organization
azure-maps Power Bi Visual On Object Interaction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-on-object-interaction.md
Last updated 03/13/2023 # Contextual on-object interaction with Azure Maps Power BI visual (preview)
azure-maps Power Bi Visual Show Real Time Traffic https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-show-real-time-traffic.md
Last updated 07/18/2023 # Show real-time traffic
azure-maps Power Bi Visual Understanding Layers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md
Last updated 07/19/2023 # Layers in Azure Maps Power BI visual
azure-maps Quick Android Map https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md
Last updated 09/22/2022 zone_pivot_groups: azure-maps-android
azure-maps Quick Demo Map App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md
Last updated 12/23/2021
azure-maps Quick Ios App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md
Last updated 11/23/2021 # Create an iOS app (Preview)
azure-maps Release Notes Drawing Tools Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-drawing-tools-module.md
Last updated 10/25/2023 # Drawing Tools Module release notes
azure-maps Release Notes Indoor Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-indoor-module.md
Last updated 3/24/2023 # Indoor Module release notes
azure-maps Release Notes Map Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md
Last updated 3/15/2023 # Web SDK map control release notes
azure-maps Release Notes Spatial Module https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-spatial-module.md
Last updated 5/23/2023 # Spatial IO Module release notes
azure-maps Render Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md
Last updated 09/21/2023 # Azure Maps render coverage
azure-maps Rest Api Azure Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-azure-maps.md
Last updated 02/05/2024 # Azure Maps Rest API
azure-maps Rest Api Creator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-creator.md
Last updated 02/05/2024 # Creator Rest API
azure-maps Rest Sdk Developer Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md
Last updated 10/31/2021 # REST SDK Developer Guide
azure-maps Routing Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md
Title: Routing coverage description: Learn what level of coverage Azure Maps provides in various regions for routing, routing with traffic, and truck routing. Last updated 10/21/2022 zone_pivot_groups: azure-maps-coverage
azure-maps Set Android Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-android-map-styles.md
Last updated 02/26/2021 zone_pivot_groups: azure-maps-android
azure-maps Set Drawing Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md
Last updated 06/15/2023 # Use the drawing tools module
azure-maps Set Map Style Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-map-style-ios-sdk.md
Last updated 07/22/2023 # Set map style in the iOS SDK (Preview)
azure-maps Show Traffic Data Map Ios Sdk https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/show-traffic-data-map-ios-sdk.md
Last updated 07/21/2023 # Show traffic data on the map in the iOS SDK (Preview)
azure-maps Spatial Io Add Ogc Map Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-ogc-map-layer.md
Last updated 06/16/2023 # Add a map layer from the Open Geospatial Consortium (OGC)
azure-maps Spatial Io Add Simple Data Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md
Last updated 06/19/2023 #Customer intent: As an Azure Maps web sdk user, I want to add simple data layer so that I can render styled features on the map.
azure-maps Spatial Io Connect Wfs Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md
Last updated 06/20/2023 # Connect to a WFS service
azure-maps Spatial Io Core Operations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-core-operations.md
Last updated 03/03/2020 # Core IO operations
azure-maps Spatial Io Read Write Spatial Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md
Last updated 06/21/2023 # Read and write spatial data
azure-maps Spatial Io Supported Data Format Details https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-supported-data-format-details.md
Last updated 10/28/2021 # Supported data format details
azure-maps Supported Browsers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md
Last updated 06/22/2023
azure-maps Supported Languages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md
Last updated 01/05/2022 # Localization support in Azure Maps
azure-maps Supported Map Styles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md
Last updated 11/01/2023
azure-maps Supported Search Categories https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-search-categories.md
Last updated 05/14/2018
azure-maps Traffic Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md
Title: Traffic coverage description: Learn about traffic coverage in Azure Maps. See whether information on traffic flow and incidents is available in various regions throughout the world. Last updated 03/24/2022
azure-maps Tutorial Create Store Locator https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md
Last updated 11/01/2023
azure-maps Tutorial Ev Routing https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md
Title: 'Tutorial: Route electric vehicles by using Azure Notebooks (Python) with Microsoft Azure Maps' description: Tutorial on how to route electric vehicles by using Microsoft Azure Maps routing APIs and Azure Notebooks. Last updated 04/26/2021
azure-maps Tutorial Iot Hub Maps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md
Title: 'Tutorial: Implement IoT spatial analytics' description: Tutorial on how to Integrate IoT Hub with Microsoft Azure Maps service APIs. Last updated 09/14/2023
azure-maps Tutorial Load Geojson File Android https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-load-geojson-file-android.md
Last updated 12/10/2020 zone_pivot_groups: azure-maps-android
azure-maps Tutorial Prioritized Routes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md
Last updated 12/29/2021
azure-maps Tutorial Route Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md
Last updated 12/28/2021
azure-maps Tutorial Search Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md
Last updated 12/23/2021
azure-maps Understanding Azure Maps Transactions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md
Last updated 04/05/2024 # Understanding Azure Maps Transactions
azure-maps Weather Coverage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md
Title: Microsoft Azure Maps Weather services coverage description: Learn about Microsoft Azure Maps Weather services coverage. Last updated 11/08/2022
azure-maps Weather Service Tutorial https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md
Title: 'Tutorial: Join sensor data with weather forecast data by using Azure Notebooks(Python)' description: Tutorial on how to join sensor data with weather forecast data from Microsoft Azure Maps Weather services using Azure Notebooks(Python). Last updated 10/28/2021
azure-maps Weather Services Concepts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-services-concepts.md
Title: Weather services concepts in Microsoft Azure Maps description: Learn about the concepts that apply to Microsoft Azure Maps Weather services. Last updated 09/10/2020 # Weather services in Azure Maps
azure-maps Web Sdk Best Practices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md
Last updated 06/23/2023 # Azure Maps Web SDK best practices
azure-maps Web Sdk Migration Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-migration-guide.md
Last updated 08/18/2023 # The Azure Maps Web SDK migration guide
azure-maps Webgl Custom Layer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md
Last updated 10/17/2022 # Add a custom WebGL layer to a map
azure-maps Zoom Levels And Tile Grid https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md
Title: Zoom levels and tile grid in Microsoft Azure Maps description: Learn how to set zoom levels in Azure Maps. See how to convert geographic coordinates into pixel coordinates, tile coordinates, and quadkeys. View code samples. Last updated 07/14/2020 # Zoom levels and tile grid
azure-monitor Azure Monitor Agent Mma Removal Tool https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md
You'll use the following script for agent removal. Open a file in your local dir
# az login # az account set --subscription <subscription_id/subscription_name> # This script uses parallel processing, modify the $parallelThrottleLimit parameter to either increase or decrease the number of parallel processes
-# PS> .\MMAUnistallUtilityScript.ps1 GetInventory
-# The above command will generate a csv file with the details of Vm's and Vmss that has MMA extension installed.
+# PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 GetInventory
+# The above command will generate a csv file with the details of VMs, VMSS, and Arc servers that have the MMA/OMS extension installed.
# The customer can modify the csv by adding/removing rows if needed
-# Remove the MMA by running the script again as shown below:
-# PS> .\MMAUnistallUtilityScript.ps1 UninstallMMAExtension
+# Remove the MMA/OMS by running the script again as shown below:
+# PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 UninstallExtension
# This version of the script requires Powershell version >= 7 in order to improve performance via ForEach-Object -Parallel # https://docs.microsoft.com/en-us/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.1
if ($PSVersionTable.PSVersion.Major -lt 7)
$parallelThrottleLimit = 16
-function GetVmsWithMMAExtensionInstalled
+function GetArcServersWithLogAnalyticsAgentExtensionInstalled {
+ param (
+ $fileName
+ )
+
+ $serverList = az connectedmachine list --query "[].{ResourceId:id, ResourceGroup:resourceGroup, ServerName:name}" | ConvertFrom-Json
+ if(!$serverList)
+ {
+ Write-Host "Cannot get the Arc server list"
+ return
+ }
+
+ $serversCount = $serverList.Length
+ $serverParallelThrottleLimit = $parallelThrottleLimit
+ if ($serversCount -lt $serverParallelThrottleLimit)
+ {
+ $serverParallelThrottleLimit = $serversCount
+ }
+
+ if($serversCount -eq 1)
+ {
+ $serverGroups += ,($serverList[0])
+ }
+ else
+ {
+ # split the list into batches to do parallel processing
+ for ($i = 0; $i -lt $serversCount; $i += $serverParallelThrottleLimit)
+ {
+ $serverGroups += , ($serverList[$i..($i + $serverParallelThrottleLimit - 1)])
+ }
+ }
+
+ Write-Host "Detected $serversCount Arc servers in this subscription."
+ $hash = [hashtable]::Synchronized(@{})
+ $hash.One = 1
+
+ $serverGroups | Foreach-Object -ThrottleLimit $parallelThrottleLimit -Parallel {
+ $len = $using:serversCount
+ $hash = $using:hash
+ $_ | ForEach-Object {
+ $percent = 100 * $hash.One++ / $len
+ Write-Progress -Activity "Getting Arc server extensions Inventory" -PercentComplete $percent
+ $serverName = $_.ServerName
+ $resourceGroup = $_.ResourceGroup
+ $resourceId = $_.ResourceId
+ Write-Debug "Getting extensions for Arc server: $serverName"
+ $extensions = az connectedmachine extension list -g $resourceGroup --machine-name $serverName --query "[?contains(['MicrosoftMonitoringAgent', 'OmsAgentForLinux', 'AzureMonitorLinuxAgent', 'AzureMonitorWindowsAgent'], properties.type)].{type: properties.type, name: name}" | ConvertFrom-Json
+
+ if (!$extensions) {
+ return
+ }
+ $extensionName = $null # reset so the previous server's value doesn't carry over between loop iterations
+ $extensionMap = @{}
+ foreach ($ext in $extensions) {
+ $extensionMap[$ext.type] = $ext.name
+ }
+ if ($extensionMap.ContainsKey("MicrosoftMonitoringAgent")) {
+ $extensionName = $extensionMap["MicrosoftMonitoringAgent"]
+ }
+ elseif ($extensionMap.ContainsKey("OmsAgentForLinux")) {
+ $extensionName = $extensionMap["OmsAgentForLinux"]
+ }
+ if ($extensionName) {
+ $amaExtensionInstalled = "False"
+ if ($extensionMap.ContainsKey("AzureMonitorWindowsAgent") -or $extensionMap.ContainsKey("AzureMonitorLinuxAgent")) {
+ $amaExtensionInstalled = "True"
+ }
+ $csvObj = New-Object -TypeName PSObject -Property @{
+ 'ResourceId' = $resourceId
+ 'Name' = $serverName
+ 'Resource_Group' = $resourceGroup
+ 'Resource_Type' = "ArcServer"
+ 'Install_Type' = "Extension"
+ 'Extension_Name' = $extensionName
+ 'AMA_Extension_Installed' = $amaExtensionInstalled
+ }
+ $csvObj | Export-Csv $using:fileName -Append -Force | Out-Null
+ }
+ # az cli sometimes cannot handle many requests at the same time, so delay the next request by 2 milliseconds
+ Start-Sleep -Milliseconds 2
+ }
+ }
+}
+
+function GetVmsWithLogAnalyticsAgentExtensionInstalled
{ param( $fileName )
- $vmList = az vm list --query "[].{ResourceGroup:resourceGroup, VmName:name}" | ConvertFrom-Json
+ $vmList = az vm list --query "[].{ResourceId:id, ResourceGroup:resourceGroup, VmName:name}" | ConvertFrom-Json
if(!$vmList) {
function GetVmsWithMMAExtensionInstalled
} $vmsCount = $vmList.Length
-
$vmParallelThrottleLimit = $parallelThrottleLimit if ($vmsCount -lt $vmParallelThrottleLimit) {
function GetVmsWithMMAExtensionInstalled
} }
- Write-Host "Detected $vmsCount Vm's running in this subscription."
+ Write-Host "Detected $vmsCount Vm's in this subscription."
$hash = [hashtable]::Synchronized(@{}) $hash.One = 1
function GetVmsWithMMAExtensionInstalled
$hash = $using:hash $_ | ForEach-Object { $percent = 100 * $hash.One++ / $len
- Write-Progress -Activity "Getting VM Inventory" -PercentComplete $percent
+ Write-Progress -Activity "Getting VM extensions Inventory" -PercentComplete $percent
+ $resourceId = $_.ResourceId
$vmName = $_.VmName $resourceGroup = $_.ResourceGroup
- $extensionName = az vm extension list -g $resourceGroup --vm-name $vmName --query "[?name == 'MicrosoftMonitoringAgent' || name == 'OmsAgentForLinux'].name" | ConvertFrom-Json
- if ($extensionName)
- {
+ Write-Debug "Getting extensions for VM: $vmName"
+ $extensions = az vm extension list -g $resourceGroup --vm-name $vmName --query "[?contains(['MicrosoftMonitoringAgent', 'OmsAgentForLinux', 'AzureMonitorLinuxAgent', 'AzureMonitorWindowsAgent'], typePropertiesType)].{type: typePropertiesType, name: name}" | ConvertFrom-Json
+
+ if (!$extensions) {
+ return
+ }
+ $extensionName = $null # reset so the previous VM's value doesn't carry over between loop iterations
+ $extensionMap = @{}
+ foreach ($ext in $extensions) {
+ $extensionMap[$ext.type] = $ext.name
+ }
+ if ($extensionMap.ContainsKey("MicrosoftMonitoringAgent")) {
+ $extensionName = $extensionMap["MicrosoftMonitoringAgent"]
+ }
+ elseif ($extensionMap.ContainsKey("OmsAgentForLinux")) {
+ $extensionName = $extensionMap["OmsAgentForLinux"]
+ }
+ if ($extensionName) {
+ $amaExtensionInstalled = "False"
+ if ($extensionMap.ContainsKey("AzureMonitorWindowsAgent") -or $extensionMap.ContainsKey("AzureMonitorLinuxAgent")) {
+ $amaExtensionInstalled = "True"
+ }
$csvObj = New-Object -TypeName PSObject -Property @{
- 'Name' = $vmName
- 'Resource_Group' = $resourceGroup
- 'Resource_Type' = "VM"
- 'Install_Type' = "Extension"
- 'Extension_Name' = $extensionName
+ 'ResourceId' = $resourceId
+ 'Name' = $vmName
+ 'Resource_Group' = $resourceGroup
+ 'Resource_Type' = "VM"
+ 'Install_Type' = "Extension"
+ 'Extension_Name' = $extensionName
+ 'AMA_Extension_Installed' = $amaExtensionInstalled
} $csvObj | Export-Csv $using:fileName -Append -Force | Out-Null }
+ # az cli sometimes cannot handle many requests at the same time, so delay the next request by 2 milliseconds
+ Start-Sleep -Milliseconds 2
} } }
-function GetVmssWithMMAExtensionInstalled
+function GetVmssWithLogAnalyticsAgentExtensionInstalled
{ param( $fileName ) # get the vmss list which are successfully provisioned
- $vmssList = az vmss list --query "[?provisioningState=='Succeeded'].{ResourceGroup:resourceGroup, VmssName:name}" | ConvertFrom-Json
+ $vmssList = az vmss list --query "[?provisioningState=='Succeeded'].{ResourceId:id, ResourceGroup:resourceGroup, VmssName:name}" | ConvertFrom-Json
$vmssCount = $vmssList.Length
- Write-Host "Detected $vmssCount Vmss running in this subscription."
+ Write-Host "Detected $vmssCount Vmss in this subscription."
$hash = [hashtable]::Synchronized(@{}) $hash.One = 1
function GetVmssWithMMAExtensionInstalled
$len = $using:vmssCount $hash = $using:hash $percent = 100 * $hash.One++ / $len
- Write-Progress -Activity "Getting VMSS Inventory" -PercentComplete $percent
+ Write-Progress -Activity "Getting VMSS extensions Inventory" -PercentComplete $percent
+ $resourceId = $_.ResourceId
$vmssName = $_.VmssName $resourceGroup = $_.ResourceGroup-
- $extensionName = az vmss extension list -g $resourceGroup --vmss-name $vmssName --query "[?name == 'MicrosoftMonitoringAgent' || name == 'OmsAgentForLinux'].name" | ConvertFrom-Json
- if ($extensionName)
- {
+ Write-Debug "Getting extensions for VMSS: $vmssName"
+ $extensions = az vmss extension list -g $resourceGroup --vmss-name $vmssName --query "[?contains(['MicrosoftMonitoringAgent', 'OmsAgentForLinux', 'AzureMonitorLinuxAgent', 'AzureMonitorWindowsAgent'], typePropertiesType)].{type: typePropertiesType, name: name}" | ConvertFrom-Json
+
+ if (!$extensions) {
+ return
+ }
+ $extensionName = $null # reset so the previous scale set's value doesn't carry over between loop iterations
+ $extensionMap = @{}
+ foreach ($ext in $extensions) {
+ $extensionMap[$ext.type] = $ext.name
+ }
+ if ($extensionMap.ContainsKey("MicrosoftMonitoringAgent")) {
+ $extensionName = $extensionMap["MicrosoftMonitoringAgent"]
+ }
+ elseif ($extensionMap.ContainsKey("OmsAgentForLinux")) {
+ $extensionName = $extensionMap["OmsAgentForLinux"]
+ }
+ if ($extensionName) {
+ $amaExtensionInstalled = "False"
+ if ($extensionMap.ContainsKey("AzureMonitorWindowsAgent") -or $extensionMap.ContainsKey("AzureMonitorLinuxAgent")) {
+ $amaExtensionInstalled = "True"
+ }
$csvObj = New-Object -TypeName PSObject -Property @{
- 'Name' = $vmssName
- 'Resource_Group' = $resourceGroup
- 'Resource_Type' = "VMSS"
- 'Install_Type' = "Extension"
- 'Extension_Name' = $extensionName
+ 'ResourceId' = $resourceId
+ 'Name' = $vmssName
+ 'Resource_Group' = $resourceGroup
+ 'Resource_Type' = "VMSS"
+ 'Install_Type' = "Extension"
+ 'Extension_Name' = $extensionName
+ 'AMA_Extension_Installed' = $amaExtensionInstalled
} $csvObj | Export-Csv $using:fileName -Append -Force | Out-Null
- }
+ }
+ # az cli sometimes cannot handle many requests at the same time, so delay the next request by 2 milliseconds
+ Start-Sleep -Milliseconds 2
} } function GetInventory { param(
- $fileName = "MMAInventory.csv"
+ $fileName = "LogAnalyticsAgentExtensionInventory.csv"
) # create a new file New-Item -Name $fileName -ItemType File -Force Start-Transcript -Path $logFileName -Append
- GetVmsWithMMAExtensionInstalled $fileName
- GetVmssWithMMAExtensionInstalled $fileName
+ GetVmsWithLogAnalyticsAgentExtensionInstalled $fileName
+ GetVmssWithLogAnalyticsAgentExtensionInstalled $fileName
+ GetArcServersWithLogAnalyticsAgentExtensionInstalled $fileName
Stop-Transcript }
-function UninstallMMAExtension
+function UninstallExtension
{ param(
- $fileName = "MMAInventory.csv"
+ $fileName = "LogAnalyticsAgentExtensionInventory.csv"
) Start-Transcript -Path $logFileName -Append Import-Csv $fileName | ForEach-Object -ThrottleLimit $parallelThrottleLimit -Parallel { if ($_.Install_Type -eq "Extension") {
+ $extensionName = $_.Extension_Name
+ $resourceName = $_.Name
+ Write-Debug "Uninstalling extension: $extensionName from $resourceName"
if ($_.Resource_Type -eq "VMSS") { # if the extension is installed with a custom name, provide the name using the flag: --extension-instance-name <extension name>
- az vmss extension delete --name $_.Extension_Name --vmss-name $_.Name --resource-group $_.Resource_Group --output none --no-wait
+ az vmss extension delete --name $extensionName --vmss-name $resourceName --resource-group $_.Resource_Group --output none --no-wait
}
- else
+ elseif($_.Resource_Type -eq "VM")
{ # if the extension is installed with a custom name, provide the name using the flag: --extension-instance-name <extension name>
- az vm extension delete --name $_.Extension_Name --vm-name $_.Name --resource-group $_.Resource_Group --output none --no-wait
+ az vm extension delete --name $extensionName --vm-name $resourceName --resource-group $_.Resource_Group --output none --no-wait
+ }
+ elseif($_.Resource_Type -eq "ArcServer")
+ {
az connectedmachine extension delete --name $extensionName --machine-name $resourceName --resource-group $_.Resource_Group --no-wait --output none --yes
}
+ # az cli sometimes cannot handle many requests at the same time, so delay the next delete request by 2 milliseconds
+ Start-Sleep -Milliseconds 2
} } Stop-Transcript }
-$logFileName = "MMAUninstallUtilityScriptLog.log"
+$logFileName = "LogAnalyticsAgentUninstallUtilityScriptLog.log"
switch ($args.Count) { 0 { Write-Host "The arguments provided are incorrect."
- Write-Host "To get the Inventory: Run the script as: PS> .\MMAUnistallUtilityScript.ps1 GetInventory"
- Write-Host "To uninstall MMA from Inventory: Run the script as: PS> .\MMAUnistallUtilityScript.ps1 UninstallMMAExtension"
+ Write-Host "To get the Inventory: Run the script as: PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 GetInventory"
+ Write-Host "To uninstall MMA/OMS from Inventory: Run the script as: PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 UninstallExtension"
} 1 { if (-Not (Test-Path $logFileName)) {
You'll collect a list of all legacy agents, both MMA and OMS, on all VM, VMSSs
```
The script reports the total VMs, VMSS, or Arc-enabled servers seen in the subscription. It takes several minutes to run. You see a progress bar in the console window. Once complete, you'll see a CSV file called LogAnalyticsAgentExtensionInventory.csv in the local directory with the following format.
-|Resource_Group | Resource_Type | Name | Install_Type |Extension_Name |
-||||||
-| Linux-AMA-E2E | VM | Linux-ama-e2e-debian9 | Extension | OmsAgentForLinux |
-|AMA-ADMIN | VM | test2012-r2-da | Extension | MicrosoftMonitorAgent |
+| ResourceId | Name | Resource_Group | Resource_Type | Install_Type | Extension_Name | AMA_Extension_Installed |
+|--|--|--|--|--|--|--|
+| 012cb5cf-e1a8-49ee-a484-d40673167c9c | Linux-ama-e2e-debian9 | Linux-AMA-E2E | VM | Extension | OmsAgentForLinux | True |
+| 8acae35a-454f-4869-bf4f-658189d98516 | test2012-r2-da | AMA-ADMIN | VM | Extension | MicrosoftMonitoringAgent | False |
## Step 4 Uninstall inventory

This script iterates through the list of VMs, Virtual Machine Scale Sets, and Arc-enabled servers and uninstalls the legacy agent. If the VM, Virtual Machine Scale Set, or Arc-enabled server isn't running, you won't be able to remove the agent.
azure-monitor Azure Monitor Agent Supported Operating Systems https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-supported-operating-systems.md
This article lists the operating systems supported by [Azure Monitor Agent](./az
| Red Hat Enterprise Linux Server 6.7+ | | | | Rocky Linux 9 | ✓ | ✓ | | Rocky Linux 8 | ✓ | ✓ |
-| SUSE Linux Enterprise Server 15 SP4 | ✓<sup>2</sup> | ✓ |
+| SUSE Linux Enterprise Server 15 SP5 | ✓<sup>2</sup> | ✓ |
+| SUSE Linux Enterprise Server 15 SP4 | ✓<sup>2</sup> | ✓ |
| SUSE Linux Enterprise Server 15 SP3 | ✓ | ✓ | | SUSE Linux Enterprise Server 15 SP2 | ✓ | ✓ | | SUSE Linux Enterprise Server 15 SP1 | ✓ | ✓ |
azure-monitor Container Insights Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-private-link.md
- Title: Enable private link with Container insights
-description: Learn how to enable private link on an Azure Kubernetes Service (AKS) cluster.
- Previously updated : 06/05/2024----
-# Enable private link with Container insights
-This article describes how to configure Container insights to use Azure Private Link for your AKS cluster.
-
-## Prerequisites
-- Create an Azure Monitor Private Link Scope (AMPLS) following the guidance in [Configure your private link](../logs/private-link-configure.md).-- Configure network isolation on your Log Analytics workspace to disable ingestion for the public networks. Isolate log queries if you want them to be restricted to Private network.-
-## Cluster using managed identity authentication
-
-### [CLI](#tab/cli)
-
-### Prerequisites
-- Azure CLI version 2.63.0 or higher.-- AKS-preview CLI extension version MUST be 7.0.0b4 or higher if there is an AKS-preview CLI extension installed.--
-### Existing AKS Cluster
-
-**Use default Log Analytics workspace**
-
-```azurecli
-az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>"
-```
-
-Example:
-
-```azurecli
-az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription /resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
-```
-
-**Use existing Log Analytics workspace**
-
-```azurecli
-az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>"
-```
-
-Example:
-
-```azurecli
-az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription /resourceGroups/ my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
-```
-
-### New AKS cluster
-
-```azurecli
-az aks create --resource-group rgName --name clusterName --enable-addons monitoring --workspace-resource-id "workspaceResourceId" --ampls-resource-id "azure-monitor-private-link-scope-resource-id"
-```
-
-Example:
-
-```azurecli
-az aks create --resource-group "my-resource-group" --name "my-cluster" --enable-addons monitoring --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription /resourceGroups/ my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
-```
--
-### [ARM](#tab/arm)
-
-The following sections provide links to the template and parameter files for enabling private link with Container insights on an AKS and Arc-enabled clusters.
-
-Edit the values in the parameter file and deploy the template using any valid method for deploying ARM templates. Retrieve the **resource ID** of the resources from the **JSON** View of their **Overview** page.
-
- Based on your requirements, you can configure other parameters such `streams`, `enableContainerLogV2`, `enableSyslog`, `syslogLevels`, `syslogFacilities`, `dataCollectionInterval`, `namespaceFilteringModeForDataCollection` and `namespacesForDataCollection`.
-
-### Prerequisites
-- The template must be deployed in the same resource group as the cluster.-
-### AKS cluster
-
-**Template file:** https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file<br>
-**Parameter file:** https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file
--
-| Parameter | Description |
-|:|:|
-| `aksResourceId`| Resource ID of the cluster. |
-| `aksResourceLocation` | Azure Region of the cluster. |
-| `workspaceResourceId`| Resource ID of the Log Analytics workspace. |
-| `workspaceRegion` | Region of the Log Analytics workspace. |
-| `resourceTagValues` | Tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be MSCI-\<clusterName\>-\<clusterRegion\>, and this resource created in an AKS clusters resource group. For first time onboarding, you can set arbitrary tag values. |
-| `useAzureMonitorPrivateLinkScope` | Boolean flag to indicate whether Azure Monitor link scope is used or not. |
-| `azureMonitorPrivateLinkScopeResourceId` | Resource ID of the Azure Monitor Private link scope. This only used if `useAzureMonitorPrivateLinkScope` is set to **true**. |
-
-### Arc-enabled Kubernetes cluster
-
-**Template file:** https://aka.ms/arc-k8s-azmon-extension-msi-arm-template<br>
-**Parameter file:** https://aka.ms/arc-k8s-azmon-extension-msi-arm-template-params
-
-| Parameter | Description |
-|:|:|
-| `clusterResourceId` | Resource ID of the cluster. |
-| `clusterRegion` | Azure Region of the cluster. |
-| `workspaceResourceId` | Resource ID of the Log Analytics workspace. |
-| `workspaceRegion` | Region of the Log Analytics workspace. |
-| `workspaceDomain` | Domain of the Log Analytics workspace:<br>`opinsights.azure.com` for Azure public cloud<br>`opinsights.azure.us` for Azure US Government<br>`opinsights.azure.cn` for Azure China Cloud |
-| `resourceTagValues` | Tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be MSCI-\<clusterName\>-\<clusterRegion\>, and this resource created in an AKS clusters resource group. For first time onboarding, you can set arbitrary tag values. |
-| `useAzureMonitorPrivateLinkScope` | Boolean flag to indicate whether Azure Monitor link scope is used or not. |
-| `azureMonitorPrivateLinkScopeResourceId` | Resource ID of the Azure Monitor Private link scope. This is only used if `useAzureMonitorPrivateLinkScope` is set to **true**. |
---
-## Cluster using legacy authentication
-Use the following procedures to enable network isolation by connecting your cluster to the Log Analytics workspace using [Azure Private Link](../logs/private-link-security.md) if your cluster is not using managed identity authentication. This requires a [private AKS cluster](/azure/aks/private-clusters).
-
-1. Create a private AKS cluster following the guidance in [Create a private Azure Kubernetes Service cluster](/azure/aks/private-clusters).
-
-2. Disable public Ingestion on your Log Analytics workspace.
-
- Use the following command to disable public ingestion on an existing workspace.
-
- ```cli
- az monitor log-analytics workspace update --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled
- ```
-
- Use the following command to create a new workspace with public ingestion disabled.
-
- ```cli
- az monitor log-analytics workspace create --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled
- ```
-
-3. Configure private link by following the instructions at [Configure your private link](../logs/private-link-configure.md). Set ingestion access to public and then set to private after the private endpoint is created but before monitoring is enabled. The private link resource region must be same as AKS cluster region.
-
-4. Enable monitoring for the AKS cluster.
-
- ```cli
- az aks enable-addons -a monitoring --resource-group <AKSClusterResourceGorup> --name <AKSClusterName> --workspace-resource-id <workspace-resource-id>
- ```
---
-## Next steps
-
-* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
-* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Kubernetes Monitoring Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md
Use one of the following methods to enable scraping of Prometheus metrics from y
> If you have a single Azure Monitor Resource that is private-linked, then Prometheus enablement won't work if the AKS cluster and Azure Monitor Workspace are in different regions. > The configuration needed for the Prometheus add-on isn't available cross region because of the private link constraint. > To resolve this, create a new DCE in the AKS cluster location and a new DCRA (association) in the same AKS cluster region. Associate the new DCE with the AKS cluster and name the new association (DCRA) as configurationAccessEndpoint.
-> For full instructions on how to configure the DCEs associated with your Azure Monitor workspace to use a Private Link for data ingestion, see [Use a private link for Managed Prometheus data ingestion](../essentials/private-link-data-ingestion.md).
+> For full instructions on how to configure the DCEs associated with your Azure Monitor workspace to use a Private Link for data ingestion, see [Enable private link for Kubernetes monitoring in Azure Monitor](./kubernetes-monitoring-private-link.md).
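The steps in the preceding note can also be scripted. A minimal, hedged sketch, assuming placeholder names and a recent Azure CLI with the monitor-control-service extension installed:

```azurecli
# Create a DCE in the AKS cluster's region (name, resource group, and region are placeholders)
az monitor data-collection endpoint create --name "my-aks-dce" --resource-group "my-resource-group" --location "<aks-cluster-region>" --public-network-access "Enabled"

# Associate the new DCE with the AKS cluster; the association must be named configurationAccessEndpoint
az monitor data-collection rule association create --association-name "configurationAccessEndpoint" --resource "<aks-cluster-resource-id>" --endpoint-id "<dce-resource-id>"
```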
### [CLI](#tab/cli)
azure-monitor Kubernetes Monitoring Private Link https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-private-link.md
+
+ Title: Enable private link with Container insights
+description: Learn how to enable private link on an Azure Kubernetes Service (AKS) cluster.
+ Last updated : 06/05/2024++++
+# Enable private link for Kubernetes monitoring in Azure Monitor
+[Azure Private Link](../../private-link/private-link-overview.md) enables you to access Azure platform as a service (PaaS) resources from your virtual network by using private endpoints. An [Azure Monitor Private Link Scope (AMPLS)](../logs/private-link-security.md) connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. This article describes how to configure Container insights and Managed Prometheus to use private link for data ingestion from your Azure Kubernetes Service (AKS) cluster.
++
+> [!NOTE]
+> - See [Connect to a data source privately](../../../articles/managed-grafan) for details on how to configure private link to query data from your Azure Monitor workspace using Grafana.
+> - See [Use private endpoints for Managed Prometheus and Azure Monitor workspace](../essentials/azure-monitor-workspace-private-endpoint.md) for details on how to configure private link to query data from your Azure Monitor workspace using workbooks.
++
+## Prerequisites
+This article describes how to connect your cluster to an existing Azure Monitor Private Link Scope (AMPLS). Create an AMPLS following the guidance in [Configure your private link](../logs/private-link-configure.md).
+
+## Managed Prometheus (Azure Monitor workspace)
+Data for Managed Prometheus is stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md), so you must make this workspace accessible over a private link.
+
+### Configure DCEs
+Private links for data ingestion for Managed Prometheus are configured on the Data Collection Endpoints (DCE) of the Azure Monitor workspace that stores the data. To identify the DCEs associated with your Azure Monitor workspace, select **Data Collection Endpoints** from your Azure Monitor workspace in the Azure portal.
++
+If your AKS cluster isn't in the same region as your Azure Monitor workspace, then you need to [create another DCE](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint) in the same region as the AKS cluster. In this case, open the data collection rule (DCR) created when you enabled Managed Prometheus. This DCR will be named **MSProm-\<clusterName\>-\<clusterRegion\>**. The cluster will be listed on the **Resources** page. On the **Data collection endpoint** dropdown, select the DCE in the same region as the AKS cluster.
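If you prefer the CLI for the inventory and DCE creation steps, here's a minimal sketch; the resource group and DCE names are assumptions, and the commands require the monitor-control-service extension. Selecting the DCE on the DCR's **Resources** page still happens in the portal as described above.

```azurecli
# List the data collection endpoints in a resource group to find those tied to your workspace
az monitor data-collection endpoint list --resource-group "my-resource-group" --output table

# Create a DCE in the same region as the AKS cluster if none exists there
az monitor data-collection endpoint create --name "my-cluster-region-dce" --resource-group "my-resource-group" --location "<aks-cluster-region>" --public-network-access "Enabled"
```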
+++
+## Ingestion from a private AKS cluster
+By default, a private AKS cluster can send data to Managed Prometheus and your Azure Monitor workspace over the public network using a public Data Collection Endpoint.
+
+If you choose to use an Azure Firewall to limit the egress from your cluster, you can implement one of the following:
+
+- Open a path to the public ingestion endpoint. Update the routing table with the following two endpoints:
+ - `*.handler.control.monitor.azure.com`
+ - `*.ingest.monitor.azure.com`
+- Enable the Azure Firewall to access the Azure Monitor Private Link scope and DCE that's used for data ingestion.
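For the first option, a hedged sketch of an Azure Firewall application rule that opens the two Azure Monitor FQDNs; the firewall name, resource group, source range, and priority are assumptions, and classic firewall rules may require the azure-firewall CLI extension:

```azurecli
# Allow the public ingestion and control-plane FQDNs through an existing Azure Firewall
az network firewall application-rule create --firewall-name "my-firewall" --resource-group "my-resource-group" --collection-name "azure-monitor-egress" --name "allow-azure-monitor" --protocols Https=443 --source-addresses "10.0.0.0/16" --target-fqdns "*.handler.control.monitor.azure.com" "*.ingest.monitor.azure.com" --action Allow --priority 200
```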
+
+## Private link ingestion for remote write
+Use the following steps to set up remote write for a Kubernetes cluster over a private link virtual network and an Azure Monitor Private Link scope.
+
+1. Create your Azure virtual network.
+1. Configure the on-premises cluster to connect to an Azure VNet using a VPN gateway or ExpressRoute with private peering.
+1. Create an Azure Monitor Private Link scope.
+1. Connect the Azure Monitor Private Link scope to a private endpoint in the virtual network used by the on-premises cluster. This private endpoint is used to access your DCEs.
+1. From your Azure Monitor workspace in the portal, select **Data Collection Endpoints** from the Azure Monitor workspace menu.
+1. You'll have at least one DCE, which has the same name as your workspace. Select the DCE to open its details.
+1. Select the **Network Isolation** page for the DCE.
+1. Select **Add** and select your Azure Monitor Private Link scope. It takes a few minutes for the settings to propagate. Once completed, data from your private AKS cluster is ingested into your Azure Monitor workspace over the private link.
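Steps 3 and 4 of this procedure can also be scripted. A minimal sketch, assuming placeholder names and the `az monitor private-link-scope` command group:

```azurecli
# Create the Azure Monitor Private Link scope (AMPLS)
az monitor private-link-scope create --name "my-ampls" --resource-group "my-resource-group"

# Add the workspace's DCE to the AMPLS as a scoped resource
az monitor private-link-scope scoped-resource create --name "my-dce-link" --resource-group "my-resource-group" --scope-name "my-ampls" --linked-resource "<dce-resource-id>"
```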
++
+## Container insights (Log Analytics workspace)
+Data for Container insights is stored in a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md), so you must make this workspace accessible over a private link.
+
+> [!NOTE]
+> This section describes how to enable private link for Container insights using CLI. For details on using an ARM template, see [Enable Container insights](./kubernetes-monitoring-enable.md?tabs=arm#enable-container-insights) and note the parameters `useAzureMonitorPrivateLinkScope` and `azureMonitorPrivateLinkScopeResourceId`.
+
+### Cluster using managed identity authentication
++
+### Existing AKS cluster
+
+**Use default Log Analytics workspace**
+
+```azurecli
+az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>"
+```
+
+Example:
+
+```azurecli
+az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription /resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
+```
+
+**Use existing Log Analytics workspace**
+
+```azurecli
+az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>"
+```
+
+Example:
+
+```azurecli
+az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
+```
+
+### New AKS cluster
+
+```azurecli
+az aks create --resource-group rgName --name clusterName --enable-addons monitoring --workspace-resource-id "workspaceResourceId" --ampls-resource-id "azure-monitor-private-link-scope-resource-id"
+```
+
+Example:
+
+```azurecli
+az aks create --resource-group "my-resource-group" --name "my-cluster" --enable-addons monitoring --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource"
+```
++
+### Cluster using legacy authentication
+Use the following procedures to enable network isolation by connecting your cluster to the Log Analytics workspace using [Azure Private Link](../logs/private-link-security.md) if your cluster is not using managed identity authentication. This requires a [private AKS cluster](/azure/aks/private-clusters).
+
+1. Create a private AKS cluster following the guidance in [Create a private Azure Kubernetes Service cluster](/azure/aks/private-clusters).
+
+2. Disable public Ingestion on your Log Analytics workspace.
+
+ Use the following command to disable public ingestion on an existing workspace.
+
+ ```cli
+ az monitor log-analytics workspace update --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled
+ ```
+
+ Use the following command to create a new workspace with public ingestion disabled.
+
+ ```cli
+ az monitor log-analytics workspace create --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled
+ ```
+
+3. Configure private link by following the instructions at [Configure your private link](../logs/private-link-configure.md). Set ingestion access to public, then set it to private after the private endpoint is created but before monitoring is enabled. The private link resource region must be the same as the AKS cluster region.
+
+4. Enable monitoring for the AKS cluster.
+
+ ```cli
+ az aks enable-addons -a monitoring --resource-group <AKSClusterResourceGroup> --name <AKSClusterName> --workspace-resource-id <workspace-resource-id> --enable-msi-auth-for-monitoring false
+ ```
+++
+## Next steps
+
+* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md).
+* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights.
azure-monitor Private Link Data Ingestion https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/private-link-data-ingestion.md
- Title: Use a private link for Managed Prometheus data ingestion
-description: Overview of private link for secure data ingestion to Azure Monitor workspace from virtual networks.
---- Previously updated : 06/08/2024--
-# Private Link for data ingestion for Managed Prometheus and Azure Monitor workspace
-
-Private links for data ingestion for Managed Prometheus are configured on the Data Collection Endpoints (DCE) of the workspace that stores the data.
-
-This article shows you how to configure the DCEs associated with your Azure Monitor workspace to use a Private Link for data ingestion.
-
-To define your Azure Monitor Private Link scope (AMPLS), see [Azure Monitor private link documentation](../logs/private-link-configure.md), then associate your DCEs with the AMPLS.
-
-Find the DCEs associated with your Azure Monitor workspace.
-
-1. Open the Azure Monitor workspaces menu in the Azure portal
-2. Select your workspace
-3. Select Data Collection Endpoints from the workspace menu
--
-The page displays all of the DCEs that are associated with the Azure Monitor workspace and that enable data ingestion into the workspace. Select the DCE you want to configure with Private Link and then follow the steps to [create an Azure Monitor private link scope](../logs/private-link-configure.md) to complete the process.
-
-Once this is done, navigate to the DCR resource which was created during managed prometheus enablement from the Azure portal and choose 'Resources' under Configuration menu.
-In the Data collection endpoint dropdown, pick a DCE in the same region as the AKS cluster. If the Azure Monitor Workspace is in the same region as the AKS cluster, you can reuse the DCE created during managed prometheus enablement. If not, create a DCE in the same region as the AKS cluster and pick that in the dropdown.
--
-> [!NOTE]
-> Please refer to [Connect to a data source privately](../../../articles/managed-grafan) for details on how to configure private link for querying data from your Azure Monitor workspace using Grafana.
->
-> Please refer to [use private endpoints for queries](azure-monitor-workspace-private-endpoint.md) for details on how to configure private link for querying data from your Azure Monitor workspace using workbooks (non-grafana).
-
-## Private link ingestion from a private AKS cluster
-
-A private Azure Kubernetes Service cluster can by default, send data to Managed Prometheus and your Azure Monitor workspace over the public network, and to the public Data Collection Endpoint.
-
-If you choose to use an Azure Firewall to limit the egress from your cluster, you can implement one of the following:
-
-+ Open a path to the public ingestion endpoint. Update the routing table with the following two endpoints:
- - *.handler.control.monitor.azure.com
- - *.ingest.monitor.azure.com
-+ Enable the Azure Firewall to access the Azure Monitor Private Link scope and Data Collection Endpoint that's used for data ingestion
-
-## Private link ingestion for remote write
-
-The following steps show how to set up remote write for a Kubernetes cluster over a private link VNET and an Azure Monitor Private Link scope.
-
-The following are the steps for setting up remote write for a Kubernetes cluster over a private link VNET and an Azure Monitor Private Link scope.
-
-We start with your on-premises Kubernetes cluster.
-
-1. Create your Azure virtual network.
-1. Configure the on-premises cluster to connect to an Azure VNET using a VPN gateway or ExpressRoutes with private-peering.
-1. Create an Azure Monitor Private Link scope.
-1. Connect the Azure Monitor Private Link scope to a private endpoint in the virtual network used by the on-premises cluster. This private endpoint is used to access your Data Collection Endpoint(s).
-1. Navigate to your Azure Monitor workspace in the portal. As part of creating your Azure Monitor workspace a system Data Collection Endpoint is created that you can use to ingest data via remote write.
-1. Choose **Data Collection Endpoints** from the Azure Monitor workspace menu.
-1. By default, the system Data Collection Endpoint has the same name as your Azure Monitor workspace. Select this Data Collection Endpoint.
-1. The Data Collection Endpoint, Network Isolation page displays. From this page, select **Add** and choose the Azure Monitor Private Link scope you created. It takes a few minutes for the settings to propagate. Once completed, data from your private AKS cluster is ingested into your Azure Monitor workspace over the private link.
--
-## Verify that data is being ingested
-
-To verify data is being ingested, try one of the following methods:
--- Open the Workbooks page from your Azure Monitor workspace and select the **Prometheus Explorer** tile. For more information on Azure Monitor workspace Workbooks, see [Workbooks overview](./prometheus-workbooks.md).-
-
-## Next steps
--- [Managed Grafana network settings](https://aka.ms/ags/mpe)-- [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md)-- [Verify remote write is working correctly](./prometheus-remote-write.md#verify-remote-write-is-working-correctly)
azure-netapp-files Azure Netapp Files Cost Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md
For cost model specific to cross-region replication, see [Cost model for cross-r
Azure NetApp Files is billed on provisioned storage capacity, which is allocated by creating capacity pools. Capacity pools are billed monthly based on a set cost per allocated GiB per hour. Capacity pool allocation is measured hourly.
-Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
+Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 50 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
### Pricing examples
azure-netapp-files Azure Netapp Files Create Volumes Smb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md
This article shows you how to create an SMB3 volume. For NFS volumes, see [Creat
* You must have already set up a capacity pool. See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md). * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
+* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)]
* The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it: 1. Register the feature:
azure-netapp-files Azure Netapp Files Create Volumes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md
This article shows you how to create an NFS volume. For SMB volumes, see [Create
* A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
+* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)]
+ ## Considerations * Deciding which NFS version to use
azure-netapp-files Azure Netapp Files Introduction https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md
Azure NetApp Files is designed to provide high-performance file storage for ente
| In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency. | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance. | Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. | | Three flexible performance tiers (Standard, Premium, Ultra) | Three performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.
-| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
+| Small-to-large volumes | Easily resize file volumes from 50 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost.
| 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced-size storage pool compared to the initial 4-TiB minimum. | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs. | 2,048-TiB maximum capacity pool | 2048-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating larger, pooled capacity and performance budget, and share and distribute across volumes. | 50-1,024 TiB large volumes | Store large volumes of data up to 1,024 TiB in a single volume. | Manage large datasets and high-performance workloads with ease.
azure-netapp-files Azure Netapp Files Resource Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md
The following table describes resource limits for Azure NetApp Files:
| Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | | Minimum size of a single capacity pool | 1 TiB* | No | | Maximum size of a single capacity pool | 2,048 TiB | No |
-| Minimum size of a single regular volume | 100 GiB | No |
+| Minimum size of a single regular volume | 50 GiB | No |
| Maximum size of a single regular volume | 100 TiB | No | | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No | | Large volume size increase | 30% of lowest provisioned size | Yes |
azure-netapp-files Azure Netapp Files Service Levels https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md
The following diagram shows throughput limit examples of volumes in an auto QoS
* In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
-* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota will be assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
+* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota is assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption.
### Throughput limit examples of volumes in a manual QoS capacity pool
azure-netapp-files Azure Netapp Files Understand Storage Hierarchy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md
# Storage hierarchy of Azure NetApp Files
-Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources.
+Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources.
> [!IMPORTANT] > Azure NetApp Files currently doesn't support resource migration between subscriptions. ## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy
-The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
+The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes.
:::image type="content" source="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png" alt-text="Conceptual diagram of storage hierarchy." lightbox="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png":::
When you use a manual QoS capacity pool with, for example, an SAP HANA system, a
- A volume's capacity consumption counts against its pool's provisioned capacity. - A volume's throughput consumption counts against its pool's available throughput. See [Manual QoS type](#manual-qos-type). - Each volume belongs to only one pool, but a pool can contain multiple volumes. -- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.
+- Volumes contain a capacity of between 50 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.
## Large volumes
-Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 100 GiB and 102,400 GiB.
+Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 50 GiB and 102,400 GiB.
For more information, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md).
azure-netapp-files Backup Restore New Volume https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md
Restoring a backup creates a new volume with the same protocol type. This articl
> [!IMPORTANT] > Running multiple concurrent volume restores using Azure NetApp Files backup may increase the time it takes for each individual, in-progress restore to complete. As such, if time is a factor to you, you should prioritize and sequentialize the most important volume restores and wait until the restores are complete before starting another, lower priority, volume restores.
-See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup.
+See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup. See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) for information about minimums and maximums.
## Steps
See [Requirements and considerations for Azure NetApp Files backup](backup-requi
However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. The Protocol field must match the protocol of the original volume. Otherwise, the restore operation fails with the following error: `Protocol Type value mismatch between input and source volume of backupId <backup-id of the selected backup>. Supported protocol type : <Protocol Type of the source volume>`
- * The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered (minimum 100 GiB). Once the restore is complete, the volume can be resized depending on the size used.
+ * The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered. Once the restore is complete, the volume can be resized depending on the size used.
* The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails.
azure-netapp-files Configure Application Volume Group Sap Hana Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md
Previously updated : 04/09/2023 Last updated : 08/08/2024 # Configure application volume groups for SAP HANA using REST API
In a create request, use the following URI format:
The request body consists of the _outer_ parameters, the group properties, and an array of volumes to be created, each with their individual outer parameters and volume properties.
-The following table describes the request body parameters and group level properties required to create a SAP HANA application volume group.
+The following table describes the request body parameters and group level properties required to create an SAP HANA application volume group.
| URI parameter | Description | Restrictions for SAP HANA | | - | -- | -- |
The following table describes the request body parameters and group level proper
| `applicationIdentifier` | Application specific identifier string, following application naming rules | The SAP System ID, which should follow aforementioned naming rules, for example `SH9` | | `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes) <br /> **Required**: _data_, _log_ and _shared_ <br /> **Optional**: _data-backup_, _log-backup_ </li><li> Multiple-host (two volumes) <br /> **Required**: _data_ and _log_ </li></ul> |
-This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group.
+This table describes the request body parameters and volume properties for creating a volume in an SAP HANA application volume group.
| Volume-level request parameter | Description | Restrictions for SAP HANA | | - | -- | -- |
This table describes the request body parameters and volume properties for creat
| **Volume properties** | **Description** | **SAP HANA Value Restrictions** | | `creationToken` | Export path name, typically same as the volume name. | None. Example: `SH9-data-mnt00001` | | `throughputMibps` | QoS throughput | This must be between 1 Mbps and 4500 Mbps. You should set throughput based on volume type. |
-| `usageThreshhold` | Size of the volume in bytes. This must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
+| `usageThreshold` | Size of the volume in bytes. This must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. |
| `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rule values can be modified for SAP HANA, the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data backup and log backup volumes</li><li>`nfsv3`: optionally true for data backup and log backup volumes</li></ul> All other rule values _must_ be left defaulted. | | `volumeSpecName` | Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> | | `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. | <ul><li>The "data", "log" and "shared" volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the "data-backup" and "log-backup" volumes, but it will be ignored during placement.</li></ul> |
In the following examples, selected placeholders are specified. You should repla
SAP HANA volume groups for the following examples can be created using a sample shell script that calls the API using curl:
-1. Extract the subscription ID. This automates the extraction of the subscription ID and generate the authorization token:
+1. Extract the subscription ID. This automates the extraction of the subscription ID and generates the authorization token:
```bash subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r) echo "Subscription ID: $subId"
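A bearer token for the subsequent curl calls can be generated the same way. A minimal sketch; the example request path and api-version shown are assumptions:

```bash
# Generate an ARM bearer token for the REST calls
token=$(az account get-access-token --query accessToken --output tsv)

# Example: list NetApp accounts in the subscription (api-version is an assumption)
curl -s -H "Authorization: Bearer $token" "https://management.azure.com/subscriptions/$subId/providers/Microsoft.NetApp/netAppAccounts?api-version=2023-05-01"
```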
azure-netapp-files Configure Application Volume Oracle Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-oracle-api.md
na Previously updated : 10/20/2023 Last updated : 08/08/2024 # Configure application volume group for Oracle using REST API
The following tables describe the request body parameters and volume properties
|||| | `creationToken` | Export path name, typically same as the volume name. | `<sid>-ora-data1` | | `throughputMibps` | QoS throughput | You should set throughput based on volume type between 1 MiBps and 4500 MiBps. |
-| `usageThreshhold` | Size of the volume in bytes. This value must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
+| `usageThreshold` | Size of the volume in bytes. This value must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. |
| `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for Oracle. Only the following rules values can be modified for Oracle. The rest *must* have their default values: <br><br> - `unixReadOnly`: should be false. <br><br> - `unixReadWrite`: should be true. <br><br> - `allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions. <br><br> - `hasRootAccess`: must be true to use root user for installation. <br><br> - `chownMode`: Specify `chown` mode. <br><br> - `Select nfsv41: or nfsv3:`: as true. It's recommended to use the same protocol version for all volumes. <br> <br> All other rule values _must_ be left defaulted. | | `volumeSpecName` | Specifies the type of volume for the application volume group being created | Oracle volumes must have a value that is one of the following: <br><br> - `ora-data1` <br> - `ora-data2` <br> - `ora-data3` <br> - `ora-data4` <br> - `ora-data5` <br> - `ora-data6` <br> - `ora-data7` <br> - `ora-data8` <br> - `ora-log` <br> - `ora-log-mirror` <br> - `ora-binary` <br> - `ora-backup` <br> | | `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. This parameter is optional. If the region has zones available, then use of zones is always priority. | The `data`, `log` and `mirror-log`, `ora-binary` and `backup` volumes must each have a PPG specified, preferably a common PPG. |
azure-netapp-files Configure Customer Managed Keys Hardware https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys-hardware.md
+
+ Title: Configure customer-managed keys with managed Hardware Security Module for Azure NetApp Files volume encryption
+description: Learn how to encrypt data in Azure NetApp Files with customer-managed keys using the Hardware Security Module
+
+documentationcenter: ''
++
+editor: ''
+
+ms.assetid:
++
+ na
++ Last updated : 08/08/2024++
+# Configure customer-managed keys with managed Hardware Security Module for Azure NetApp Files volume encryption
+
+Azure NetApp Files volume encryption with customer-managed keys with the managed Hardware Security Module (HSM) is an extension to the [customer-managed keys for Azure NetApp Files volume encryption feature](configure-customer-managed-keys.md). Customer-managed keys with HSM allow you to store your encryption keys in a more secure FIPS 140-2 Level 3 HSM instead of the FIPS 140-2 Level 1 or Level 2 service used by Azure Key Vault (AKV).
+
+## Requirements
+
+* Customer-managed keys with managed HSM is supported using the 2022.11 or later API version.
+* Customer-managed keys with managed HSM is only supported for Azure NetApp Files accounts that don't have existing encryption.
+* Before creating a volume that uses customer-managed keys with managed HSM, you must have:
+ * created an [Azure Key Vault](/azure/key-vault/general/overview) containing at least one key; a CLI sketch follows this list. The key vault must have soft delete and purge protection enabled, and the key must be of type RSA.
+ * created a VNet with a subnet delegated to Microsoft.NetApp/volumes.
+ * a user-assigned or system-assigned identity for your Azure NetApp Files account.
+ * [provisioned and activated a managed HSM](/azure/key-vault/managed-hsm/quick-create-cli).
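As referenced in the list above, a minimal CLI sketch for the key vault prerequisites; the vault name, key name, and location are placeholders:

```azurecli
# Create a key vault with purge protection (soft delete is on by default for new vaults)
az keyvault create --name "my-anf-keyvault" --resource-group "my-resource-group" --location "eastus" --enable-purge-protection true

# Add an RSA key to use for Azure NetApp Files volume encryption
az keyvault key create --vault-name "my-anf-keyvault" --name "anf-encryption-key" --kty RSA
```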
+
+## Supported regions
+
+* Australia East
+* Brazil South
+* Canada Central
+* Central US
+* East Asia
+* East US
+* East US 2
+* France Central
+* Japan East
+* Korea Central
+* North Central US
+* North Europe
+* Norway East
+* Norway West
+* South Africa North
+* South Central US
+* Southeast Asia
+* Sweden Central
+* Switzerland North
+* UAE Central
+* UAE North
+* UK South
+* West US
+* West US 2
+* West US 3
+
+## Register the feature
+
+This feature is currently in preview. You need to register the feature before using it for the first time. After registration, the feature is enabled and works in the background. No UI control is required.
+
+1. Register the feature:
+
+ ```azurepowershell-interactive
+ Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFManagedHsmEncryption
+ ```
+
+2. Check the status of the feature registration:
+
+ > [!NOTE]
+ > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing.
+
+ ```azurepowershell-interactive
+ Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFManagedHsmEncryption
+ ```
+You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status.
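For example, the equivalent CLI calls might look like the following; the feature name matches the PowerShell commands above:

```azurecli
# Register the preview feature
az feature register --namespace Microsoft.NetApp --name ANFManagedHsmEncryption

# Check the registration state; wait for "Registered"
az feature show --namespace Microsoft.NetApp --name ANFManagedHsmEncryption --query properties.state
```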
+
+## Configure customer-managed keys with managed HSM for system-assigned identity
+
+When you configure customer-managed keys with a system-assigned identity, Azure configures the NetApp account automatically by adding a system-assigned identity. The access policy is created on your Azure Key Vault with key permissions of Get, Encrypt, and Decrypt.
+
+### Requirements
+
+To use a system-assigned identity, the Azure Key Vault must be configured to use Vault access policy as its permission model. Otherwise, you must use a user-assigned identity.
+
+### Steps
+
+1. In the Azure portal, navigate to Azure NetApp Files then select **Encryption**.
+1. In the **Encryption** menu, provide the following values:
+ * For **Encryption key source**, select **Customer Managed Key**.
+ * For **Key URI**, select **Enter Key URI** then provide the URI for the managed HSM.
+ * Select the NetApp **Subscription**.
+ * For **Identity type**, select **System-assigned**.
+
+ :::image type="content" source="./media/configure-customer-managed-keys/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="./media/configure-customer-managed-keys/key-enter-uri.png":::
+
+1. Select **Save**.
+
+## Configure customer-managed keys with managed HSM for user-assigned identity
+
+1. In the Azure portal, navigate to Azure NetApp Files then select **Encryption**.
+1. In the **Encryption** menu, provide the following values:
+ * For **Encryption key source**, select **Customer Managed Key**.
+ * For **Key URI**, select **Enter Key URI** then provide the URI for the managed HSM.
+ * Select the NetApp **Subscription**.
+ * For **Identity type**, select **User-assigned**.
+1. When you select **User-assigned**, a context pane opens to select the identity.
+ * If your Azure Key Vault is configured to use a Vault access policy, Azure configures the NetApp account automatically and adds the user-assigned identity to your NetApp account. The access policy is created on your Azure Key Vault with key permissions of Get, Encrypt, and Decrypt.
+ * If your Azure Key Vault is configured to use Azure role-based access control (RBAC), ensure the selected user-assigned identity has a role assignment on the key vault with permissions for data actions:
+ * "Microsoft.KeyVault/vaults/keys/read"
+ * "Microsoft.KeyVault/vaults/keys/encrypt/action"
+ * "Microsoft.KeyVault/vaults/keys/decrypt/action"
+ The user-assigned identity you select is added to your NetApp account. Because Azure RBAC is customizable, the Azure portal doesn't configure access to the key vault; assign the role yourself, as shown in the sketch after these steps. For more information, see [Using Azure RBAC secret, key, and certificate permissions with Key Vault](/azure/key-vault/general/rbac-guide#using-azure-rbac-secret-key-and-certificate-permissions-with-key-vault).
+
+ :::image type="content" source="./media/configure-customer-managed-keys/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="./media/configure-customer-managed-keys/encryption-user-assigned.png":::
+
+1. Select **Save**.
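As referenced in the steps above, a hedged sketch of the role assignment for an RBAC-enabled key vault; "Key Vault Crypto User" is an assumption, and any role covering the listed data actions works:

```azurecli
# Grant the user-assigned identity the key read/encrypt/decrypt data actions on the vault
az role assignment create --assignee "<identity-principal-id>" --role "Key Vault Crypto User" --scope "<key-vault-resource-id>"
```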
+
+## Next steps
+
+* [Configure customer-managed keys](configure-customer-managed-keys.md)
+* [Security FAQs](faq-security.md)
azure-netapp-files Configure Customer Managed Keys https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md
For more information about Azure Key Vault and Azure Private Endpoint, refer to:
:::image type="content" source="./media/configure-customer-managed-keys/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="./media/configure-customer-managed-keys/key-enter-uri.png"::: 1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, both options are available. Otherwise, only the user-assigned option is available.
- * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically with the following process: A system-assigned identity is added to your NetApp account. An access policy is to be created on your Azure Key Vault with key permissions Get, Encrypt, Decrypt.
+ * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically by adding a system-assigned identity to your NetApp account. An access policy is also created on your Azure Key Vault with key permissions Get, Encrypt, Decrypt.
:::image type="content" source="./media/configure-customer-managed-keys/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="./media/configure-customer-managed-keys/encryption-system-assigned.png":::
This section lists error messages and possible resolutions when Azure NetApp Fil
## Next steps * [Azure NetApp Files API](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager/Microsoft.NetApp/stable/2019-11-01)
+* [Configure customer-managed keys with managed Hardware Security Module](configure-customer-managed-keys-hardware.md)
azure-netapp-files Create Volumes Dual Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md
To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu
See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md). * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).
+* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)]
* The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it: 1. Register the feature:
azure-netapp-files Faq Security https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md
Previously updated : 06/15/2024 Last updated : 08/07/2024 # Security FAQs for Azure NetApp Files
NFSv3 protocol doesn't provide support for encryption, so this data-in-flight ca
## Can the storage be encrypted at rest?
-All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard. Learn [how encryption keys managed](#how-are-encryption-keys-managed).
+All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard. Learn [how encryption keys are managed](#how-are-encryption-keys-managed).
## Is Azure NetApp Files cross-region and cross-zone replication traffic encrypted?
Alternatively, [customer-managed keys for Azure NetApp Files volume encryption](
Azure NetApp Files supports the ability to move existing volumes using platform-managed keys to customer-managed keys. Once you complete the transition, you cannot revert back to platform-managed keys. For additional information, see [Transition an Azure NetApp Files volume to customer-managed keys](configure-customer-managed-keys.md#transition).
-Also, customer-managed keys using Azure Dedicated HSM is supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access [with the Azure NetApp Files feedback form](https://aka.ms/ANFFeedback). As capacity becomes available, requests will be approved.
+<!-- Also, customer-managed keys using Azure Dedicated HSM is supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access [with the Azure NetApp Files feedback form](https://aka.ms/ANFFeedback). As capacity becomes available, requests will be approved. -->
## Can I configure the NFS export policy rules to control access to the Azure NetApp Files service mount target?
azure-netapp-files Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md
Previously updated : 07/30/2024 Last updated : 08/07/2024
Azure NetApp Files is updated regularly. This article provides a summary about the latest new features and enhancements.
-## August 2024
+## August 2024
+
+* [Volume encryption with customer-managed keys with managed Hardware Security Module (HSM)](configure-customer-managed-keys-hardware.md) (Preview)
+
+ Volume encryption with customer-managed keys with managed HSM extends the [customer-managed keys](configure-customer-managed-keys.md) feature, enabling you to store your keys in a more secure FIPS 140-2 Level 3 HSM service instead of the FIPS 140-2 Level 1 or 2 encryption offered with Azure Key Vault.
+
+* [Volume enhancement: Azure NetApp Files now supports 50 GiB minimum volume sizes](azure-netapp-files-resource-limits.md) (preview)
+
+ You can now create an Azure NetApp Files volume as small as 50 GiB, a reduction from the initial minimum size of 100 GiB. 50 GiB volumes save costs for workloads that require volumes smaller than 100 GiB, allowing you to size storage volumes appropriately.
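As a minimal sketch of the 50 GiB minimum, creating a small volume with the CLI might look like the following; all resource names are placeholders, and the preview feature must be registered on the subscription:

```azurecli
# Create a 50 GiB NFSv3 volume (names are placeholders; --usage-threshold is in GiB)
az netappfiles volume create --resource-group "my-resource-group" --account-name "my-netapp-account" --pool-name "my-pool" --name "my-small-volume" --location "eastus" --service-level "Premium" --usage-threshold 50 --file-path "mysmallvolume" --vnet "my-vnet" --subnet "my-delegated-subnet" --protocol-types NFSv3
```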
* [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) is now generally available (GA).
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 06/20/2024 Last updated : 08/09/2024 -+
Backup and restore of deduplicated VMs or disks | Azure Backup doesn't support d
Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up.
-[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported. <br><br> You can exclude shared disk with Enhanced policy and backup the other supported disks in the VM.
+[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported. <br><br> - You can exclude shared disks with the Enhanced policy and back up the other supported disks in the VM. <br><br> - You can use S2D to create a shared disk or standalone volumes by combining capacities from disks in different VMs. Azure Backup doesn't support backup of a shared volume (between VMs for a database cluster or cluster configuration) created using S2D.
<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](../virtual-machines/disks-types.md#ultra-disk-limitations). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Ultra disks. <a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](../virtual-machines/disks-types.md#regional-availability). <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks and GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks. [Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks.
batch Account Key Rotation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/account-key-rotation.md
+
+ Title: Rotate Batch account keys
+description: Learn how to rotate Batch account shared key credentials.
+ Last updated : 08/09/2024+
+# Batch account shared key credential rotation
+
+Batch accounts can be authenticated in one of two ways: via shared key or via Microsoft Entra ID. Batch accounts
+with shared key authentication enabled have two keys associated with them to allow for key rotation scenarios.
+
+> [!TIP]
+> It's highly recommended to avoid using shared key authentication with Batch accounts. The preferred authentication
+> mechanism is through Microsoft Entra ID. You can disable shared key authentication during account creation or you
+> can update allowed [Authentication Modes](/rest/api/batchmanagement/batch-account/create#authenticationmode) for an
+> active account.
+
+## Batch shared key rotation procedure
+
+Azure Batch accounts have two shared keys: `primary` and `secondary`. It's important not to regenerate both
+keys at the same time; instead, regenerate them one at a time to avoid potential downtime.
+
+> [!WARNING]
+> Once a key has been regenerated, it is no longer valid and the prior key cannot be recovered for use. Ensure
+> that your application update process follows the recommended key rotation procedure to prevent losing access
+> to your Batch account.
+
+The typical key rotation procedure is as follows:
+
+1. Normalize your application code to use either the primary or secondary key. If you're using both keys in your
+application simultaneously, then any rotation procedure leads to authentication errors. The following steps assume
+that you're using the `primary` key in your application.
+1. Regenerate the `secondary` key.
+1. Update your application code to utilize the newly regenerated `secondary` key. Deploy these changes and
+ensure that everything is working as expected.
+1. Regenerate the `primary` key.
+1. Optionally update your application code to use the `primary` key and deploy. This step isn't strictly
+necessary as long as you're tracking which key is used in your application and deployed.
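+
+You can also perform the regeneration steps from the command line. The following is a minimal sketch using the Azure CLI `az batch account keys renew` command; the account and resource group names are placeholders:
+
+```azurecli
+# Step 2: regenerate the secondary key first
+az batch account keys renew \
+    --resource-group myResourceGroup \
+    --name mybatchaccount \
+    --key-name secondary
+
+# Step 4: after updating and verifying your application, regenerate the primary key
+az batch account keys renew \
+    --resource-group myResourceGroup \
+    --name mybatchaccount \
+    --key-name primary
+```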
+
+### Rotation in Azure portal
+
+First, sign in to the [Azure portal](https://portal.azure.com) and navigate to the **Keys** blade of your
+Batch account under **Settings**. Select either `Regenerate primary` or `Regenerate secondary` to create a new key.
+
+ :::image type="content" source="media/account-key-rotation/batch-account-key-rotation.png" alt-text="Screenshot showing key rotation.":::
+
+## See also
+
+- Learn more about [Batch accounts](accounts.md).
+- Learn how to authenticate with [Batch Service APIs](batch-aad-auth.md)
+or [Batch Management APIs](batch-aad-auth-management.md) with Microsoft Entra ID.
batch Batch Aad Auth Management https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth-management.md
Last updated 04/27/2017
-# Authenticate Batch Management solutions with Active Directory
+# Authenticate Batch Management solutions with Microsoft Entra ID
-Applications that call the Azure Batch Management service authenticate with [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) (Microsoft Entra ID). Microsoft Entra ID is Microsoft's multi-tenant cloud based directory and identity management service. Azure itself uses Microsoft Entra ID for the authentication of its customers, service administrators, and organizational users.
+Applications that call the Azure Batch Management service authenticate by using the [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) with Microsoft Entra ID. Microsoft Entra ID is Microsoft's multitenant, cloud-based directory and identity management service. Azure itself uses Microsoft Entra ID for the authentication of its customers, service administrators, and organizational users.
The Batch Management .NET library exposes types for working with Batch accounts, account keys, applications, and application packages. The Batch Management .NET library is an Azure resource provider client, and is used together with [Azure Resource Manager](../azure-resource-manager/management/overview.md) to manage these resources programmatically. Microsoft Entra ID is required to authenticate requests made through any Azure resource provider client, including the Batch Management .NET library, and through Azure Resource Manager.
certification Validate Device Edge Secured Core https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/validate-device-edge-secured-core.md
+
+ Title: Validate device is Edge Secured-core enabled
+description: Instructions to validate device is Edge Secured-core enabled
+++ Last updated : 08/06/2024 +++
+# Validate your Edge Secured-core certified devices
+To check if your device is Edge Secured-core enabled:
+1. Go to Windows Icon > Security Settings > Device Security. The "Secured-core PC" status is available at the top of the screen. If the status is missing, reach out to the device builder for assistance.
+
+2. Go to "Core isolation" to ensure that "Memory integrity" is on.
+
+3. Go to "Security processor" to ensure that the Trusted Platform Module "Specification version" is 2.0.
+
+4. Go to "Data encryption" to ensure that "Device encryption" is on.
+
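+If you prefer to check the Trusted Platform Module specification version from a shell instead of the Settings UI, the following PowerShell sketch queries the same information. It assumes the standard Win32_Tpm WMI class is available on the device:
+
+```powershell
+# Query the TPM for its specification version; the output should include "2.0"
+Get-CimInstance -Namespace root/cimv2/Security/MicrosoftTpm -ClassName Win32_Tpm |
+    Select-Object -ExpandProperty SpecVersion
+```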
communication-services Call Automation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md
The following list presents the set of features that are currently available in
| | Place new outbound call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ | | | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ | | | Reject an incoming call | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Connect to an ongoing call or Room | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Connect to an ongoing call or Room (in preview) | ✔️ | ✔️ | ✔️ | ✔️ |
| Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel adding an endpoint to an existing call | ✔️ | ✔️ | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | Send DTMF | ✔️ | ✔️ | ✔️ | ✔️ | | | Mute participant | ✔️ | ✔️ | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ |
-| | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
-| | Blind Transfer* a participant from group call to another endpoint| ✔️ | ✔️ | ✔️ | ✔️ |
+| | Blind Transfer a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ |
+| | Blind Transfer a participant from group call to another endpoint| ✔️ | ✔️ | ✔️ | ✔️ |
| | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel media operations | ✔️ | ✔️ | ✔️ | ✔️ |
The following list presents the set of features that are currently available in
| | List all participants in a call | ✔️ | ✔️ | ✔️ | ✔️ | | Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ | ✔️ | ✔️ |
-*Transfer or redirect of a VoIP call to a phone number is currently not supported.
+*Redirect of a VoIP call to a phone number is not supported.
## Architecture
Using the IncomingCall event from Event Grid, a call can be redirected to one or
**Create Call** Create Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update.
-**Connect Call**
+**Connect Call** (in preview)
Connect Call action can be used to connect to an ongoing call and take call actions on it. You can also use this action to connect and manage a Rooms call programmatically, like performing PSTN dial outs for Room using your service. ### Mid-call actions
communication-services Actions For Call Control https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md
The response provides you with CallConnection object that you can use to take fu
2. `ParticipantsUpdated` event that contains the latest list of participants in the call. ![Sequence diagram for placing an outbound call.](media/make-call-flow.png)
-## Connect to a call
+## Connect to a call (in preview)
+ Connect action enables your service to establish a connection with an ongoing call and take actions on it. This is useful for managing a Rooms call, or when client applications start a 1:1 or group call that Call Automation isn't part of. The connection is established using the CallLocator property, which can be one of these types: ServerCallLocator, GroupCallLocator, and RoomCallLocator. These IDs can be found when the call is originally established or a Room is created, and they're also published as part of the [CallStarted](./../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationcallstarted) event. To connect to any 1:1 or group call, use the ServerCallLocator. If you started a call using GroupCallId, you can also use the GroupCallLocator.
communication-services Manage Rooms Call https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/manage-rooms-call.md
+
+ Title: Quickstart - Manage a room call
+
+description: In this quickstart, you learn how to manage a room call using Calling SDKs and Call Automation SDKs
+++++ Last updated : 07/10/2024+++++
+# Quickstart: Manage a room call
+
+## Introduction
+During an Azure Communication Services (ACS) room call, you can manage the call using the Calling SDKs, the Call Automation SDKs, or both. In a room call, you control in-call actions using both the roles assigned to participants and the properties configured in the room. Participant roles control the capabilities permitted to each participant, while room properties apply to the room call as a whole.
+
+## Calling SDKs
+Calling SDK is a client-side calling library enabling participants in a room call to perform several in-call operations, such as screen share, turn on/off video, mute/unmute, and so on. For the full list of capabilities, see [Calling SDK Overview](../../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities).
+
+You control the capabilities based on roles assigned to participants in the call. For example, only the presenter can screen share. For participant roles and permissions, see [Rooms concepts](../../concepts/rooms/room-concept.md#predefined-participant-roles-and-permissions).
+
+## Call Automation SDKs
+Call Automation SDK is a server-side library enabling administrators to manage an ongoing room call in a central and controlled environment. Unlike the Calling SDK, Call Automation SDK operations are role agnostic. Therefore, a call administrator can perform several in-call operations on behalf of the room call participants.
+
+The following sections describe common in-call actions available in a room call.
+
+### Connect to a room call
+Call Automation must connect to an existing room call before performing any in-call operations. The `CallConnected` or `ConnectFailed` events are raised using callback mechanisms to indicate whether the connect operation succeeded or failed, respectively.
+
+### [csharp](#tab/csharp)
+
+```csharp
+Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events
+CallLocator roomCallLocator = new RoomCallLocator("<RoomId>");
+ConnectCallResult response = await client.ConnectAsync(roomCallLocator, callbackUri);
+```
+
+### [Java](#tab/java)
+
+```java
+String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events
+CallLocator roomCallLocator = new RoomCallLocator("<RoomId>");
+ConnectCallResult response = client.connectCall(roomCallLocator, callbackUri).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const roomCallLocator = { kind: "roomCallLocator", id: "<RoomId>" };
+const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events
+const response = await client.connectCall(roomCallLocator, callbackUri);
+```
+
+### [Python](#tab/python)
+
+```python
+callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events
+room_call_locator = RoomCallLocator("<room_id>")
+call_connection_properties = client.connect_call(call_locator=room_call_locator, callback_url=callback_uri)
+```
+--
+
+Once successfully connected to a room call, a `CallConnected` event is sent to your callback URI. You can use `callConnectionId` to retrieve a call connection on the room call as needed. The following sample code snippets use the `callConnectionId` to demonstrate this function.
++
+### Add PSTN Participant
+Using Call Automation you can dial out to a PSTN number and add the participant into a room call. You must, however, set up the room with the PSTN dial-out option enabled (`EnabledPSTNDialout` set to `true`), and the Azure Communication Services resource must have a valid phone number provisioned.
+
+For more information, see [Rooms quickstart](../../quickstarts/rooms/get-started-rooms.md?tabs=windows&pivots=platform-azcli#enable-pstn-dial-out-capability-for-a-room).
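+
+If you create rooms programmatically, the dial-out capability can be set at room creation time. The following is a hedged sketch using the JavaScript Rooms SDK; the `pstnDialOutEnabled` option name is taken from the Rooms API surface, and the connection string is a placeholder:
+
+```javascript
+import { RoomsClient } from "@azure/communication-rooms";
+
+const roomsClient = new RoomsClient("<acs-connection-string>");
+
+// Create a room that permits PSTN dial-out from Call Automation
+const room = await roomsClient.createRoom({ pstnDialOutEnabled: true });
+console.log(`Created room ${room.id} with PSTN dial-out enabled`);
+```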
++
+### [csharp](#tab/csharp)
+
+```csharp
+var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS-provisioned phone number for the caller
+var callThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); // The target phone number to dial out to
+AddParticipantResult response = await client.GetCallConnection(callConnectionId).AddParticipantAsync(callThisPerson);
+```
+
+### [Java](#tab/java)
+
+```java
+PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS-provisioned phone number for the caller
+CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); // The phone number participant to dial out to
+AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite);
+Response<AddParticipantResult> addParticipantResultResponse = client.getCallConnectionAsync(callConnectionId)
+ .addParticipantWithResponse(addParticipantOptions).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const callInvite = {
+ targetParticipant: { phoneNumber: "+18008008800" }, // The phone number participant to dial out to
+ sourceCallIdNumber: { phoneNumber: "+18888888888" } // This is the ACS-provisioned phone number for the caller
+};
+const response = await client.getCallConnection(callConnectionId).addParticipant(callInvite);
+```
+
+### [Python](#tab/python)
+
+```python
+caller_id_number = PhoneNumberIdentifier(
+ "+18888888888"
+) # This is the ACS-provisioned phone number for the caller
+target = PhoneNumberIdentifier("+18008008800") # The phone number participant to dial out to
+
+call_connection_client = call_automation_client.get_call_connection(
+ "call_connection_id"
+)
+result = call_connection_client.add_participant(
+    target,
+    operation_context="Your context",
+    operation_callback_url="<url_endpoint>"
+)
+```
+--
+
+### Remove PSTN Participant
+
+### [csharp](#tab/csharp)
+
+```csharp
+
+var removeThisUser = new PhoneNumberIdentifier("+16044561234");
+
+// Remove a participant from the call with optional parameters
+var removeParticipantOptions = new RemoveParticipantOptions(removeThisUser)
+{
+    OperationContext = "operationContext",
+    OperationCallbackUri = new Uri("<uri_endpoint>") // Sending event to a non-default endpoint
+};
+
+var result = await client.GetCallConnection(callConnectionId).RemoveParticipantAsync(removeParticipantOptions);
+```
+
+### [Java](#tab/java)
+
+```java
+CommunicationIdentifier removeThisUser = new PhoneNumberIdentifier("+16044561234");
+RemoveParticipantOptions removeParticipantOptions = new RemoveParticipantOptions(removeThisUser)
+ .setOperationContext("<operation_context>")
+ .setOperationCallbackUrl("<url_endpoint>");
+Response<RemoveParticipantResult> removeParticipantResultResponse = client.getCallConnectionAsync(callConnectionId)
+    .removeParticipantWithResponse(removeParticipantOptions).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+const removeThisUser = { phoneNumber: "+16044561234" };
+const removeParticipantResult = await client.getCallConnection(callConnectionId).removeParticipant(removeThisUser);
+```
+
+### [Python](#tab/python)
+
+```python
+remove_this_user = PhoneNumberIdentifier("+16044561234")
+call_connection_client = call_automation_client.get_call_connection(
+ "call_connection_id"
+)
+result = call_connection_client.remove_participant(remove_this_user, operation_context="Your context", operation_callback_url="<url_endpoint>")
+```
+--
+
+### Send DTMF
+Send a list of DTMF tones to an external participant.
+
+### [csharp](#tab/csharp)
+```csharp
+var tones = new DtmfTone[] { DtmfTone.One, DtmfTone.Two, DtmfTone.Three, DtmfTone.Pound };
+var sendDtmfTonesOptions = new SendDtmfTonesOptions(tones, new PhoneNumberIdentifier(calleePhonenumber))
+{
+    OperationContext = "dtmfs-to-ivr"
+};
+
+var sendDtmfAsyncResult = await callAutomationClient.GetCallConnection(callConnectionId).GetCallMedia().SendDtmfTonesAsync(sendDtmfTonesOptions);
+
+```
+### [Java](#tab/java)
+```java
+List<DtmfTone> tones = Arrays.asList(DtmfTone.ONE, DtmfTone.TWO, DtmfTone.THREE, DtmfTone.POUND);
+SendDtmfTonesOptions options = new SendDtmfTonesOptions(tones, new PhoneNumberIdentifier(c2Target));
+options.setOperationContext("dtmfs-to-ivr");
+client.getCallConnectionAsync(callConnectionId)
+ .getCallMediaAsync()
+ .sendDtmfTonesWithResponse(options)
+ .block();
+```
+### [JavaScript](#tab/javascript)
+```javascript
+const tones = [DtmfTone.One, DtmfTone.Two, DtmfTone.Three];
+const sendDtmfTonesOptions: SendDtmfTonesOptions = {
+ operationContext: "dtmfs-to-ivr"
+};
+const result: SendDtmfTonesResult = await client.getCallConnection(callConnectionId)
+ .getCallMedia()
+ .sendDtmfTones(tones, {
+ phoneNumber: c2Target
+ }, sendDtmfTonesOptions);
+console.log("sendDtmfTones, result=%s", result);
+```
+### [Python](#tab/python)
+```python
+tones = [DtmfTone.ONE, DtmfTone.TWO, DtmfTone.THREE]
+call_connection_client = call_automation_client.get_call_connection(
+ "call_connection_id"
+)
+
+result = call_connection_client.send_dtmf_tones(
+ tones = tones,
+ target_participant = PhoneNumberIdentifier(c2_target),
+ operation_context = "dtmfs-to-ivr")
+```
+--
+
+### Call Recording
+Azure Communication Services rooms support recording capabilities including `start`, `stop`, `pause`, `resume`, and so on, provided by Call Automation. See the following code snippets to start/stop/pause/resume a recording in a room call. For a complete list of actions, see [Call Automation recording](../../concepts/voice-video-calling/call-recording.md#get-full-control-over-your-recordings-with-our-call-recording-apis).
+
+### [csharp](#tab/csharp)
+```csharp
+// Start recording
+StartRecordingOptions recordingOptions = new StartRecordingOptions(new ServerCallLocator("<ServerCallId>"))
+{
+ RecordingContent = RecordingContent.Audio,
+ RecordingChannel = RecordingChannel.Unmixed,
+ RecordingFormat = RecordingFormat.Wav,
+ RecordingStateCallbackUri = new Uri("<CallbackUri>"),
+ RecordingStorage = RecordingStorage.CreateAzureBlobContainerRecordingStorage(new Uri("<YOUR_STORAGE_CONTAINER_URL>"))
+};
+Response<RecordingStateResult> response = await callAutomationClient.GetCallRecording()
+.StartAsync(recordingOptions);
+
+// Pause recording using recordingId received in response of start recording.
+var pauseRecording = await callAutomationClient.GetCallRecording().PauseAsync(recordingId);
+
+// Resume recording using recordingId received in response of start recording.
+var resumeRecording = await callAutomationClient.GetCallRecording().ResumeAsync(recordingId);
+
+// Stop recording using recordingId received in response of start recording.
+var stopRecording = await callAutomationClient.GetCallRecording().StopAsync(recordingId);
+
+```
+### [Java](#tab/java)
+```java
+// Start recording
+StartRecordingOptions recordingOptions = new StartRecordingOptions(new ServerCallLocator("<serverCallId>"))
+ .setRecordingChannel(RecordingChannel.UNMIXED)
+ .setRecordingFormat(RecordingFormat.WAV)
+ .setRecordingContent(RecordingContent.AUDIO)
+ .setRecordingStateCallbackUrl("<recordingStateCallbackUrl>");
+
+Response<RecordingStateResult> response = callAutomationClient.getCallRecording()
+.startWithResponse(recordingOptions, null);
+
+// Pause recording using recordingId received in response of start recording
+Response<Void> response = callAutomationClient.getCallRecording()
+ .pauseWithResponse(recordingId, null);
+
+// Resume recording using recordingId received in response of start recording
+Response<Void> response = callAutomationClient.getCallRecording()
+ .resumeWithResponse(recordingId, null);
+
+// Stop recording using recordingId received in response of start recording
+Response<Void> response = callAutomationClient.getCallRecording()
+ .stopWithResponse(recordingId, null);
+
+```
+### [JavaScript](#tab/javascript)
+```javascript
+// Start recording
+var locator: CallLocator = { id: "<ServerCallId>", kind: "serverCallLocator" };
+
+var options: StartRecordingOptions =
+{
+ callLocator: locator,
+ recordingContent: "audio",
+ recordingChannel:"unmixed",
+ recordingFormat: "wav",
+ recordingStateCallbackEndpointUrl: "<CallbackUri>"
+};
+var response = await callAutomationClient.getCallRecording().start(options);
+
+// Pause recording using recordingId received in response of start recording
+var pauseRecording = await callAutomationClient.getCallRecording().pause(recordingId);
+
+// Resume recording using recordingId received in response of start recording.
+var resumeRecording = await callAutomationClient.getCallRecording().resume(recordingId);
+
+// Stop recording using recordingId received in response of start recording
+var stopRecording = await callAutomationClient.getCallRecording().stop(recordingId);
+
+```
+### [Python](#tab/python)
+```python
+# Start recording
+response = call_automation_client.start_recording(call_locator=ServerCallLocator(server_call_id),
+                    recording_content_type = RecordingContent.AUDIO,
+                    recording_channel_type = RecordingChannel.UNMIXED,
+                    recording_format_type = RecordingFormat.WAV,
+                    recording_state_callback_url = "<CallbackUri>")
+
+# Pause recording using recording_id received in response of start recording
+pause_recording = call_automation_client.pause_recording(recording_id = recording_id)
+
+# Resume recording using recording_id received in response of start recording
+resume_recording = call_automation_client.resume_recording(recording_id = recording_id)
+
+# Stop recording using recording_id received in response of start recording
+stop_recording = call_automation_client.stop_recording(recording_id = recording_id)
+```
+--
+
+### Terminate a Call
+You can use the Call Automation SDK Hang Up action to terminate a call. When the Hang Up action completes, the SDK publishes a `CallDisconnected` event.
+
+### [csharp](#tab/csharp)
+
+```csharp
+_ = await client.GetCallConnection(callConnectionId).HangUpAsync(forEveryone: true);
+```
+
+### [Java](#tab/java)
+
+```java
+Response<Void> response = client.getCallConnectionAsync(callConnectionId).hangUpWithResponse(true).block();
+```
+
+### [JavaScript](#tab/javascript)
+
+```javascript
+await callConnection.hangUp(true);
+```
+
+### [Python](#tab/python)
+
+```python
+call_connection_client = call_automation_client.get_call_connection(
+ "call_connection_id"
+)
+
+call_connection_client.hang_up(is_for_everyone=True)
+```
+--
+
+## Other Actions
+The following in-call actions are also supported in a room call.
+1. Add participant (ACS identifier)
+1. Remove participant (ACS identifier)
+1. Cancel add participant (ACS identifier and PSTN number)
+1. Hang up call
+1. Get participant (ACS identifier and PSTN number)
+1. Get multiple participants (ACS identifier and PSTN number)
+1. Get latest info about a call
+1. Play both audio files and text to specific participants
+1. Play both audio files and text to all participants
+1. Recognize both DTMF and speech
+1. Recognize continuous DTMF
+
+For more information, see [call actions](../../how-tos/call-automation/actions-for-call-control.md?tabs=csharp) and [media actions](../../how-tos/call-automation/control-mid-call-media-actions.md?tabs=csharp).
+
+## Next steps
+
+In this section you learned how to:
+> [!div class="checklist"]
+> - Join a room call from your application
+> - Add in-call actions into a room call using calling SDKs
+> - Add in-call actions into a room call using Call Automation SDKs
+
+You may also want to:
+ - Learn about [Rooms concept](../../concepts/rooms/room-concept.md)
+ - Learn about [Calling SDKs features](../../concepts/voice-video-calling/calling-sdk-features.md)
+ - Learn about [Call Automation concepts](../../concepts/call-automation/call-automation.md)
communication-services Get Started With Closed Captions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-closed-captions.md
# QuickStart: Add closed captions to your calling app -- ::: zone pivot="platform-web" [!INCLUDE [Closed Captions for Web](./includes/closed-captions/closed-captions-javascript.md)] ::: zone-end
confidential-computing Confidential Vm Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md
Title: About Azure confidential VMs
description: Learn about Azure confidential virtual machines. These series are for tenants with high security and confidentiality requirements. + - - ignite-2023
Azure confidential VMs offer strong security and confidentiality for tenants. Th
- Secure key release with cryptographic binding between the platform's successful attestation and the VM's encryption keys. - Dedicated virtual [Trusted Platform Module (TPM)](/windows/security/information-protection/tpm/trusted-platform-module-overview) instance for attestation and protection of keys and secrets in the virtual machine. - Secure boot capability similar to [Trusted launch for Azure VMs](../virtual-machines/trusted-launch.md)-- Ultra disk capability is supported on confidential VMs ## Confidential OS disk encryption
Confidential VMs support the following VM sizes:
- General Purpose with local disk: DCadsv5-series, DCedsv5-series - Memory Optimized without local disk: ECasv5-series, ECesv5-series - Memory Optimized with local disk: ECadsv5-series, ECedsv5-series
+- NVIDIA H100 Tensor Core GPU-powered NCCadsH100v5-series
### OS support Confidential VMs support the following OS options:
Confidential VMs *don't support*:
- Microsoft Azure Virtual Machine Scale Sets with Confidential OS disk encryption enabled - Limited Azure Compute Gallery support - Shared disks
+- Ultra disks
- Accelerated Networking - Live migration - Screenshots under boot diagnostics
confidential-computing Gpu Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/gpu-options.md
+
+ Title: Azure Confidential GPU options
+description: Learn about Azure Confidential VMs with confidential GPU.
++++++ Last updated : 07/16/2024++
+# Azure Confidential GPU options
+
+Azure confidential GPUs are based on AMD 4th Gen EPYC processors with SEV-SNP technology and NVIDIA H100 Tensor Core GPUs. In this VM SKU, the Trusted Execution Environment (TEE) spans both the confidential VM on the CPU and the attached GPU, enabling secure offload of data, models, and computation to the GPU.
+
+## Sizes
+
+We offer the following VM sizes:
+
+| Size Family | TEE | Description |
+| | | -- |
+| [**NCCadsH100v5-series**](../virtual-machines/sizes/gpu-accelerated/nccadsh100v5-series.md) | AMD SEV-SNP and NVIDIA H100 Tensor Core GPUs | CVM with Confidential GPU. |
++
+## Azure CLI
+
+You can use the [Azure CLI](/cli/azure/install-azure-cli) with your confidential GPU VMs.
+
+To see a list of confidential GPU VM sizes, run the following command. The output shows information about available regions and availability zones.
+
+```azurecli-interactive
+vm_series='NCC'
+az vm list-skus \
+ --size ncc \
+ --query "[?family=='standard${vm_series}Family'].{name:name,locations:locationInfo[0].location,AZ_a:locationInfo[0].zones[0],AZ_b:locationInfo[0].zones[1],AZ_c:locationInfo[0].zones[2]}" \
+ --all \
+ --output table
+```
+
+For a more detailed list, run the following command instead:
+
+```azurecli-interactive
+vm_series='NCC'
+az vm list-skus \
+ --size ncc \
+ --query "[?family=='standard${vm_series}Family']"
+```
+
+## Deployment considerations
+
+Consider the following settings and choices before deploying confidential GPU VMs.
+
+### Azure subscription
+
+To deploy a confidential GPU VM instance, consider a [pay-as-you-go subscription](/azure/virtual-machines/linux/azure-hybrid-benefit-linux) or other purchase option. If you're using an [Azure free account](https://azure.microsoft.com/free/), the quota doesn't allow the appropriate number of Azure compute cores.
+
+You might need to increase the cores quota in your Azure subscription from the default value. Default limits vary depending on your subscription category. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the confidential VM sizes.
+
+To request a quota increase, [open an online customer support request](../azure-portal/supportability/per-vm-quota-requests.md).
+
+If you have large-scale capacity needs, contact Azure Support. Azure quotas are credit limits, not capacity guarantees. You only incur charges for cores that you use.
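+
+To see how much of your regional vCPU quota is currently in use before requesting an increase, you can list compute usage for the target region. This is a minimal sketch; the region is a placeholder:
+
+```azurecli-interactive
+az vm list-usage --location eastus2 --output table
+```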
+
+### Pricing
+
+For pricing options, see the [Linux Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/).
+
+### Regional availability
+
+For availability information, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines).
+
+### Resizing
+
+Confidential GPU VMs run on specialized hardware and resizing is currently not supported.
+
+### Guest OS support
+
+OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support the secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from underlying cloud infrastructure. These images include:
+
+- Ubuntu 22.04 LTS
+
+For more information about supported and unsupported VM scenarios, see [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md).
+
+### High availability and disaster recovery
+
+You're responsible for creating high availability and disaster recovery solutions for your confidential GPU VMs. Planning for these scenarios helps minimize and avoid prolonged downtime.
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [Deploy a confidential GPU VM from the Azure portal](quick-create-confidential-vm-portal.md)
+
+For more information, see our [Confidential VM FAQ](confidential-vm-faq.yml).
confidential-computing Quick Create Confidential Vm Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md
Title: Create an Azure confidential VM in the Azure portal
description: Learn how to quickly create a confidential virtual machine (confidential VM) in the Azure portal using Azure Marketplace images. - Last updated 12/01/2023
To create a confidential VM in the Azure portal using an Azure Marketplace image
h. Toggle [Generation 2](../virtual-machines/generation-2.md) images. Confidential VMs only run on Generation 2 images. To verify this, under **Image**, select **Configure VM generation**. In the pane **Configure VM generation**, for **VM generation**, select **Generation 2**. Then, select **Apply**.
+ > [!NOTE]
+ > For NCCH100v5 series, only the **Ubuntu Server 22.04 LTS (Confidential VM)** image is currently supported.
+ i. For **Size**, select a VM size. For more information, see [supported confidential VM families](virtual-machine-options.md).
confidential-computing Virtual Machine Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-options.md
- Last updated 11/15/2023
We offer the following VM sizes:
| **DCedsv5-series** | Intel TDX | General purpose CVM with local temporary disk. | | **ECesv5-series** | Intel TDX | Memory-optimized CVM with remote storage. No local temporary disk. | | **ECedsv5-series** | Intel TDX | Memory-optimized CVM with local temporary disk. |
+| **NCCadsH100v5-series** | AMD SEV-SNP and NVIDIA H100 Tensor Core GPUs | CVM with Confidential GPU. |
> [!NOTE] > Memory-optimized confidential VMs offer double the ratio of memory per vCPU count.
connectors Connectors Google Data Security Privacy Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-google-data-security-privacy-policy.md
Here are some examples that use the Gmail connector with built-in triggers and a
![Non-compliant logic app - Example 2](./media/connectors-google-data-security-privacy-policy/not-compliant-logic-app-2.png)
-* This workflow uses the Gmail connector with the Twitter connector:
+* This workflow uses the Gmail connector with the X connector:
![Non-compliant logic app - Example 3](./media/connectors-google-data-security-privacy-policy/not-compliant-logic-app-3.png)
container-apps Authentication Twitter https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-twitter.md
Title: Enable authentication and authorization in Azure Container Apps with Twitter
-description: Learn to use the built-in Twitter authentication provider in Azure Container Apps.
+ Title: Enable authentication and authorization in Azure Container Apps with X
+description: Learn to use the built-in X authentication provider in Azure Container Apps.
Last updated 04/20/2022
-# Enable authentication and authorization in Azure Container Apps with Twitter
+# Enable authentication and authorization in Azure Container Apps with X
-This article shows how to configure Azure Container Apps to use Twitter as an authentication provider.
+This article shows how to configure Azure Container Apps to use X as an authentication provider.
-To complete the procedure in this article, you need a Twitter account that has a verified email address and phone number. To create a new Twitter account, go to [twitter.com].
+To complete the procedure in this article, you need an X account that has a verified email address and phone number. To create a new X account, go to [x.com](https://x.com).
-## <a name="twitter-register"> </a>Register your application with Twitter
+## <a name="twitter-register"> </a>Register your application with X
-1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your Twitter app.
-1. Go to the [Twitter Developers] website, sign in with your Twitter account credentials, and select **Create an app**.
-1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your container app and append the path `/.auth/login/twitter/callback`. For example, `https://<hostname>.azurecontainerapps.io/.auth/login/twitter/callback`.
+1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your X app.
+1. Go to the [X Developers] website, sign in with your X account credentials, and select **Create an app**.
+1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your container app and append the path `/.auth/login/twitter/callback` (the sign-in route keeps the `twitter` provider name after the X rebrand). For example, `https://<hostname>.azurecontainerapps.io/.auth/login/twitter/callback`.
1. At the bottom of the page, type at least 100 characters in **Tell us how this app will be used**, then select **Create**. Select **Create** again in the pop-up. The application details are displayed. 1. Select the **Keys and Access Tokens** tab.
To complete the procedure in this article, you need a Twitter account that has a
> [!IMPORTANT] > The API secret key is an important security credential. Do not share this secret with anyone or distribute it with your app.
-## <a name="twitter-secrets"> </a>Add Twitter information to your application
+## <a name="twitter-secrets"> </a>Add X information to your application
1. Sign in to the [Azure portal] and navigate to your app. 1. Select **Authentication** in the menu on the left. Select **Add identity provider**.
To complete the procedure in this article, you need a Twitter account that has a
1. Select **Add**.
-You're now ready to use Twitter for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
+You're now ready to use X for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration.
## Working with authenticated users
Use the following guides for details on working with authenticated users.
<!-- URLs. --> [Azure portal]: https://portal.azure.com/
+[X Developers]: https://go.microsoft.com/fwlink/p/?LinkId=268300
+
container-apps Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication.md
For details surrounding authentication and authorization, refer to the following
* [Facebook](authentication-facebook.md) * [GitHub](authentication-github.md) * [Google](authentication-google.yml)
-* [Twitter](authentication-twitter.md)
+* [X](authentication-twitter.md)
* [Custom OpenID Connect](authentication-openid.md) ## Why use the built-in authentication?
The benefits include:
* Azure Container Apps provides access to various built-in authentication providers. * The built-in auth features donΓÇÖt require any particular language, SDK, security expertise, or even any code that you have to write.
-* You can integrate with multiple providers including Microsoft Entra ID, Facebook, Google, and Twitter.
+* You can integrate with multiple providers including Microsoft Entra ID, Facebook, Google, and X.
## Identity providers
Container Apps uses [federated identity](https://en.wikipedia.org/wiki/Federated
| [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [Facebook](authentication-facebook.md) | | [GitHub](https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps) | `/.auth/login/github` | [GitHub](authentication-github.md) | | [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [Google](authentication-google.yml) |
-| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [Twitter](authentication-twitter.md) |
+| [X](https://developer.x.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [X](authentication-twitter.md) |
| Any [OpenID Connect](https://openid.net/connect/) provider | `/.auth/login/<providerName>` | [OpenID Connect](authentication-openid.md) | When you use one of these providers, the sign-in endpoint is available for user authentication and authentication token validation from the provider. You can provide your users with any number of these provider options.
Container Apps Authentication provides built-in endpoints for sign in and sign o
### Use multiple sign-in providers
-The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and Twitter). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows:
+The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and X). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows:
First, in the **Authentication / Authorization** page in the Azure portal, configure each of the identity providers you want to enable.
In the sign-in page, or the navigation bar, or any other location of your app, a
<a href="/.auth/login/aad">Log in with the Microsoft Identity Platform</a> <a href="/.auth/login/facebook">Log in with Facebook</a> <a href="/.auth/login/google">Log in with Google</a>
-<a href="/.auth/login/twitter">Log in with Twitter</a>
+<a href="/.auth/login/twitter">Log in with X</a>
``` When the user selects on one of the links, the UI for the respective providers is displayed to the user.
Refer to the following articles for details on securing your container app.
* [Facebook](authentication-facebook.md) * [GitHub](authentication-github.md) * [Google](authentication-google.yml)
-* [Twitter](authentication-twitter.md)
+* [X](authentication-twitter.md)
* [Custom OpenID Connect](authentication-openid.md)
container-instances Container Instances Using Azure Container Registry https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-using-azure-container-registry.md
**Azure CLI**: The command-line examples in this article use the [Azure CLI](/cli/azure/) and are formatted for the Bash shell. You can [install the Azure CLI](/cli/azure/install-azure-cli) locally, or use the [Azure Cloud Shell][cloud-shell-bash]. ## Limitations-
-* The [Azure Container Registry](../container-registry/container-registry-vnet.md) must have [Public Access set to 'All Networks'](../container-registry/container-registry-access-selected-networks.md). To use an Azure container registry with Public Access set to 'Select Networks' or 'None', visit [ACI's article for using Managed-Identity based authentication with ACR](../container-registry/container-registry-authentication-managed-identity.md).
+* Windows containers don't support image pulls from ACR that are authenticated with a system-assigned managed identity; only user-assigned managed identities are supported.
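+
+As an illustration, the following Azure CLI sketch deploys a container group that pulls its image with a user-assigned managed identity that has the AcrPull role on the registry; the names and identity resource ID are placeholders:
+
+```azurecli
+az container create \
+    --resource-group myResourceGroup \
+    --name mycontainer \
+    --image myregistry.azurecr.io/myimage:latest \
+    --acr-identity <user-assigned-identity-resource-id> \
+    --assign-identity <user-assigned-identity-resource-id>
+```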
## Configure registry authentication
cosmos-db How To Setup Customer Managed Keys Existing Accounts https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md
Enabling CMK on an existing account is an asynchronous operation that kicks off
The Cosmos DB account can continue to be used and data can continue to be written without waiting for the asynchronous operation to succeed. CLI command for enabling CMK waits for the completion of encryption of data.
+In order to allow an existing Cosmos DB account to use CMK, a scan needs to be done to ensure that the account doesn't have "Large IDs". A "Large ID" is a document ID that exceeds 990 characters in length. This scan is mandatory for the CMK migration, and Microsoft runs it automatically. During this process, you may see the following error:
+
+ERROR: (InternalServerError) Unexpected error on document scan for CMK Migration. Please retry the operation.
+
+This error occurs when the scan process uses more RUs than are provisioned on the collection, resulting in a 429 error. A solution for this problem is to temporarily bump up the RUs significantly. Alternatively, you can use the provided console application [hosted here](https://github.com/AzureCosmosDB/Cosmos-DB-Non-CMK-to-CMK-Migration-Scanner) to scan your collections.
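+
+As an illustration, for an API for NoSQL container with manual throughput, you could temporarily raise the provisioned RUs before triggering the migration and lower them again afterward. This is a sketch with placeholder names and an arbitrary throughput value:
+
+```azurecli
+az cosmosdb sql container throughput update \
+    --account-name myCosmosAccount \
+    --resource-group myResourceGroup \
+    --database-name myDatabase \
+    --name myContainer \
+    --throughput 10000
+```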
+
+> [!NOTE]
+> If you wish to disable server-side validation for this during migration, please contact support. This is advisable only if you are sure that there are no Large IDs. If a Large ID is encountered during encryption, the process stops until the Large ID document has been addressed.
+ If you have further questions, reach out to Microsoft Support. ## FAQs
Enabling CMK kicks off a background, asynchronous process to encrypt all the dat
It's suggested to bump up the RUs before you trigger CMK. Once CMK is triggered, then some control plane operations are blocked till the encryption is complete. This block may prevent the user from increasing the RUΓÇÖs once CMK is triggered.
+In order to allow an existing Cosmos DB account to use CMK, Microsoft automatically runs a mandatory Large ID scan to address one of the known limitations listed earlier. This process also consumes additional RUs, so it's a good idea to bump up the RUs significantly to avoid 429 errors.
+ **Is there a way to reverse the encryption or disable encryption after triggering CMK?** Once the data encryption process using CMK is triggered, it can't be reverted.
cosmos-db Quickstart Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/quickstart-portal.md
Previously updated : 06/20/2024 Last updated : 08/08/2024 # Quickstart: Create an Azure Cosmos DB for MongoDB vCore cluster by using the Azure portal
Create a MongoDB cluster by using Azure Cosmos DB for MongoDB vCore.
1. On the **New** page, search for and select **Azure Cosmos DB**.
-1. On the **Which API best suits your workload?** page, select the **Create** option within the **Azure Cosmos DB for MongoDB** section. For more information, see [API for MongoDB and it's various models](../choose-model.md).
+1. On the **Which API best suits your workload?** page, select the **Create** option within the **Azure Cosmos DB for MongoDB** section.
:::image type="content" source="media/quickstart-portal/select-api-option.png" lightbox="media/quickstart-portal/select-api-option.png" alt-text="Screenshot of the select API option page for Azure Cosmos DB."::: 1. On the **Which type of resource?** page, select the **Create** option within the **vCore cluster** section. For more information, see [API for MongoDB vCore overview](introduction.md).
- :::image type="content" source="media/quickstart-portal/select-resource-type.png" alt-text="Screenshot of the select resource type option page for Azure Cosmos DB for MongoDB.":::
- 1. On the **Create Azure Cosmos DB for MongoDB cluster** page, select the **Configure** option within the **Cluster tier** section.
- :::image type="content" source="media/quickstart-portal/select-cluster-option.png" alt-text="Screenshot of the configure cluster option for a new Azure Cosmos DB for MongoDB cluster.":::
+ :::image type="content" source="media/quickstart-portal/select-cluster-option.png" alt-text="Screenshot of the 'configure cluster' option for a new Azure Cosmos DB for MongoDB cluster.":::
1. On the **Scale** page, leave the options set to their default values:
Create a MongoDB cluster by using Azure Cosmos DB for MongoDB vCore.
| **Cluster tier** | M30 Tier, 2 vCores, 8-GiB RAM | | **Storage per shard** | 128 GiB |
-1. Unselect **High availability** option. In the high availability (HA) acknowledgment section, select **I understand**. Finally, select **Save** to persist your changes to the cluster tier.
-
- :::image type="content" source="media/quickstart-portal/configure-scale.png" alt-text="Screenshot of cluster tier and scale options for a cluster.":::
-
- You can always turn HA on after cluster creation for another layer of protection from failures.
+1. Select the **High availability** option if this cluster will be used for production workloads. If not, in the high availability (HA) acknowledgment section, select **I understand**. Finally, select **Save** to persist your changes to the cluster tier.
1. Back on the cluster page, enter the following information:
Create a MongoDB cluster by using Azure Cosmos DB for MongoDB vCore.
| Resource group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. | | Cluster name | A unique name | Enter a name to identify your Azure Cosmos DB for MongoDB cluster. The name is used as part of a fully qualified domain name (FQDN) with a suffix of *mongocluster.cosmos.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3 and 40 characters in length. | | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB for MongoDB cluster. Use the location that is closest to your users to give them the fastest access to the data. |
- | MongoDB version | Version of MongoDB to run in your cluster | This value is set to a default of the latest available MongoDB version. |
+ | MongoDB version | Version of MongoDB to run in your cluster | This controls the MongoDB version that your application uses. |
| Admin username | Provide a username to access the cluster | This user is created on the cluster as a user administrator. | | Password | Use a unique password to pair with the username | Password must be at least eight characters and at most 128 characters. |
When you're done with Azure Cosmos DB for MongoDB vCore cluster, you can delete
1. On the resource group page, select **Delete resource group**.
- :::image type="content" source="media/quickstart-portal/select-delete-resource-group-option.png" alt-text="Screenshot of the delete resource group option in the menu for a specific resource group.":::
+ :::image type="content" source="media/quickstart-portal/select-delete-resource-group-option.png" alt-text="Screenshot of the 'delete resource group' option in the menu for a specific resource group.":::
1. In the deletion confirmation dialog, enter the name of the resource group to confirm that you intend to delete it. Finally, select **Delete** to permanently delete the resource group.
cosmos-db How To Javascript Vector Index Query https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-vector-index-query.md
+
+ Title: Index and query vector data in JavaScript
+
+description: Add vector data to Azure Cosmos DB for NoSQL and then query the data efficiently in your JavaScript application
+++++ Last updated : 08/08/2024+++
+# Index and query vectors in Azure Cosmos DB for NoSQL in JavaScript
++
+The Azure Cosmos DB for NoSQL vector search feature is in preview. Before you use this feature, you must first register for the preview. This article covers the following steps:
+
+1. Registering for the preview of Vector Search in Azure Cosmos DB for NoSQL
+
+1. Setting up the Azure Cosmos DB container for vector search
+
+1. Authoring vector embedding policy
+
+1. Adding vector indexes to the container indexing policy
+
+1. Creating a container with vector indexes and vector embedding policy
+
+1. Performing a vector search on the stored data
+
+This guide walks through the process of creating vector data, indexing the data, and then querying the data in a container.
+
+## Prerequisites
+
+- An existing Azure Cosmos DB for NoSQL account.
+ - If you don't have an Azure subscription, [Try Azure Cosmos DB for NoSQL free](https://cosmos.azure.com/try/).
+ - If you have an existing Azure subscription, [create a new Azure Cosmos DB for NoSQL account](how-to-create-account.md).
+- Latest version of the Azure Cosmos DB [JavaScript](sdk-nodejs.md) SDK (Version 4.1.0 or later)
+
+## Register for the preview
+
+Vector search for Azure Cosmos DB for NoSQL requires preview feature registration. Follow these steps to register:
+
+1. Navigate to your Azure Cosmos DB for NoSQL resource page.
+
+1. Select the "Features" pane under the "Settings" menu item.
+
+1. Select "Vector Search in Azure Cosmos DB for NoSQL."
+
+1. Read the description of the feature to confirm you want to enroll in the preview.
+
+1. Select "Enable" to enroll in the preview.
+
+ > [!NOTE]
+ > The registration request will be autoapproved; however, it may take several minutes to take effect.
+
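+If you prefer to script the enrollment, the preview can also be enabled as an account capability. This sketch assumes the `EnableNoSQLVectorSearch` capability name; note that `--capabilities` sets the complete list, so include any capabilities already enabled on the account:
+
+```azurecli
+az cosmosdb update \
+    --resource-group myResourceGroup \
+    --name myCosmosAccount \
+    --capabilities EnableNoSQLVectorSearch
+```
+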
+## Understand the steps involved in vector search
+
+The following steps assume that you know how to [set up a Cosmos DB NoSQL account and create a database](quickstart-portal.md). The vector search feature is currently supported only on new containers, not existing containers. You need to create a new container and then specify the container-level vector embedding policy and the vector indexing policy at the time of creation.
+
+Let's take the example of creating a database for an internet-based bookstore, where you store the Title, Author, ISBN, and Description for each book. We also define two properties to contain vector embeddings. The first is the "contentVector" property, which contains [text embeddings](../../ai-services/openai/concepts/models.md#embeddings) generated from the text content of the book (for example, concatenating the "title", "author", "isbn", and "description" properties before creating the embedding). The second is "coverImageVector", which is generated from [images of the book's cover](../../ai-services/computer-vision/concept-image-retrieval.md).
+
+1. Create and store vector embeddings for the fields on which you want to perform vector search.
+2. Specify the vector embedding paths in the vector embedding policy.
+3. Include any desired vector indexes in the indexing policy for the container.
+
+For subsequent sections of this article, we consider this structure for the items stored in our container:
+
+```json
+{
+  "title": "book-title",
+  "author": "book-author",
+  "isbn": "book-isbn",
+  "description": "book-description",
+  "contentVector": [2, -1, 4, 3, 5, -2, 5, -7, 3, 1],
+  "coverImageVector": [0.33, -0.52, 0.45, -0.67, 0.89, -0.34, 0.86, -0.78]
+```
+
+## Create a vector embedding policy for your container
+
+Next, you need to define a container vector policy. This policy provides information that the Azure Cosmos DB query engine uses to handle vector properties in the `VectorDistance` system function. This policy also informs the vector indexing policy of necessary information, should you choose to specify one.
+
+The following information is included in the container vector policy:
+
+| Setting | Description |
+| --- | --- |
+| **`path`** | The property path that contains vectors |
+| **`dataType`** | The type of the elements of the vector (default `Float32`) |
+| **`dimensions`** | The length of each vector in the path (default `1536`) |
+| **`distanceFunction`** | The metric used to compute distance/similarity (default `Cosine`) |
+
+For our example with book details, the vector embedding policy can look like the following example:
+
+```javascript
+const vectorEmbeddingPolicy: VectorEmbeddingPolicy = {
+ vectorEmbeddings: [
+ {
+ path: "/coverImageVector",
+ dataType: "float32",
+ dimensions: 8,
+ distanceFunction: "dotproduct",
+ },
+ {
+      path: "/contentVector",
+ dataType: "float32",
+ dimensions: 10,
+ distanceFunction: "cosine",
+ },
+ ],
+};
+```
+
+## Create a vector index in the indexing policy
+
+Once the vector embedding paths are decided, vector indexes need to be added to the indexing policy. You must apply the indexing policy at the time of container creation; it can't be modified later. For this example, the indexing policy would look like this:
+
+```javascript
+const indexingPolicy: IndexingPolicy = {
+ vectorIndexes: [
+ { path: "/coverImageVector", type: "quantizedFlat" },
+ { path: "/contentVector", type: "diskANN" },
+ ],
+  includedPaths: [
+ {
+ path: "/*",
+ },
+ ],
+ excludedPaths: [
+ {
+ path: "/coverImageVector/*",
+ },
+ {
+ path: "/contentVector/*",
+ },
+ ]
+};
+```
+
+Now create your container as usual.
+
+```javascript
+const containerName = "vector embedding container";
+
+// Create the container with the vector embedding and indexing policies.
+const { resource: containerdef } = await database.containers.createIfNotExists({
+  id: containerName,
+  vectorEmbeddingPolicy: vectorEmbeddingPolicy,
+  indexingPolicy: indexingPolicy,
+});
+```
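+
+With the container created, you can insert items that include the vector properties. Here's a minimal sketch, assuming `container` references the new container (for example, obtained via `database.container(containerName)`); the vector lengths must match the `dimensions` declared in the vector embedding policy:
+
+```javascript
+const container = database.container(containerName);
+
+// Insert a book item; contentVector has 10 elements and coverImageVector has 8,
+// matching the vector embedding policy defined earlier.
+await container.items.create({
+  id: "1",
+  title: "book-title",
+  author: "book-author",
+  isbn: "book-isbn",
+  description: "book-description",
+  contentVector: [2, -1, 4, 3, 5, -2, 5, -7, 3, 1],
+  coverImageVector: [0.33, -0.52, 0.45, -0.67, 0.89, -0.34, 0.86, -0.78],
+});
+```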
+
+> [!IMPORTANT]
+> Currently, vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy at the time of container creation, as they can't be modified later. Both policies will be modifiable in a future improvement to the preview feature.
+
+## Run a vector similarity search query
+
+Once you create a container with the desired vector policy and insert vector data into it, you can conduct a vector search using the [VectorDistance](query/vectordistance.md) system function in a query. Suppose you want to search for books about food recipes by looking at the description. You first need to get the embedding for your query text; in this case, you might generate an embedding for the query text "food recipe." Once you have the embedding for your search query, you can use it in the `VectorDistance` function in the vector search query and get all the items that are similar to your query, as shown here:
+
+```sql
+SELECT c.title, VectorDistance(c.contentVector, [1,2,3,4,5,6,7,8,9,10]) AS SimilarityScore
+FROM c
+ORDER BY VectorDistance(c.contentVector, [1,2,3,4,5,6,7,8,9,10])
+```
+
+This query retrieves the book titles along with similarity scores with respect to your query. Here's an example in JavaScript:
+
+```javascript
+const { resources } = await container.items
+ .query({
+    query: "SELECT c.title, VectorDistance(c.contentVector, @embedding) AS SimilarityScore FROM c ORDER BY VectorDistance(c.contentVector, @embedding)",
+ parameters: [{ name: "@embedding", value: [1,2,3,4,5,6,7,8,9,10] }]
+ })
+ .fetchAll();
+for (const item of resources) {
+  console.log(`${item.title}: ${item.SimilarityScore}`);
+}
+```
+
+## Related content
+
+- [VectorDistance system function](query/vectordistance.md)
+- [Vector indexing](../index-policy.md)
+- [Set up Azure Cosmos DB for NoSQL for vector search](../vector-search.md)
cosmos-db Vector Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/vector-search.md
Vector indexing and search in Azure Cosmos DB for NoSQL has some limitations whi
- `quantizedFlat` utilizes the same quantization method as DiskANN and isn't configurable at this time. - Shared throughput databases can't use the vector search preview feature at this time. - Ingestion rate should be limited while using an early preview of DiskANN.
+- At this time in the preview, vector search isn't supported on accounts that use Analytical Store, Shared Throughput, Customer Managed Keys, Continuous Backup, Storage Analytics, or the All Versions and Deletes Change Feed.
## Next step - [DiskANN + Azure Cosmos DB - Microsoft Mechanics Video](https://www.youtube.com/watch?v=MlMPIYONvfQ)
cost-management-billing Understand Usage Details Fields https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/automate/understand-usage-details-fields.md
description: This article describes the fields in the usage data files. Previously updated : 07/12/2024 Last updated : 08/08/2024
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| InvoiceSectionId¹ | EA, MCA | Unique identifier for the EA department or MCA invoice section. | | InvoiceSectionName | EA, MCA | Name of the EA department or MCA invoice section. | | IsAzureCreditEligible | All | Indicates if the charge is eligible to be paid for using Azure credits (Values: `True` or `False`). |
-| Location | MCA | Normalized location of the resource, if different resource locations are configured for the same regions. Purchases and Marketplace usage might be shown as blank or `unassigned`. |
+| Location | MCA | The normalized location used to resolve inconsistencies in region names sent by different Azure Resource Providers (RPs). The normalized location is based strictly on the resource location sent by RPs in usage data and is programmatically normalized to mitigate inconsistencies. Purchases and Marketplace usage might be shown as blank or unassigned. For example, `US East`. |
| MeterCategory | All | Name of the classification category for the meter. For example, _Cloud services_ and _Networking_. Purchases and Marketplace usage might be shown as blank or `unassigned`. | | MeterId¹ | All | The unique identifier for the meter. | | MeterName | All | The name of the meter. Purchases and Marketplace usage might be shown as blank or `unassigned`.|
-| MeterRegion | All | Name of the datacenter location for services priced based on location. See `Location`. |
+| MeterRegion | All | The name of the Azure region associated with the meter. It generally aligns with the resource location, except for certain global meters that are shared across regions. In such cases, the meter region indicates the primary region of the meter.<br>**Note**: The meter is used to track the usage of specific services or resources, mainly for billing purposes. Each Azure service, resource, and region has its own billing meter ID that precisely reflects how its consumption and price are calculated. |
| MeterSubCategory | All | Name of the meter subclassification category. Purchases and Marketplace usage might be shown as blank or `unassigned`.| | OfferId¹ | EA, pay-as-you-go | Name of the Azure offer, which is the type of Azure subscription that you have. For more information, see supported [Microsoft Azure offer details](https://azure.microsoft.com/support/legal/offer-details/). | | pay-as-you-goPrice² ³| All | The market price, also referred to as retail or list price, for a given product or service. For more information, see [Pricing behavior in cost details](automation-ingest-usage-details-overview.md#pricing-behavior-in-cost-and-usage-details). |
MPA accounts have all MCA terms, in addition to the MPA terms, as described in t
| ReservationName | EA, MCA | Name of the purchased reservation instance. | | ResourceGroup | All | Name of the [resource group](../../azure-resource-manager/management/overview.md) the resource is in. Not all charges come from resources deployed to resource groups. Charges that don't have a resource group are shown as null or empty, **Others**, or **Not applicable**. | | ResourceId¹ | All | Unique identifier of the [Azure Resource Manager](/rest/api/resources/resources) resource. |
-| ResourceLocation¹ | All | Datacenter location where the resource is running. See `Location`. |
+| ResourceLocation¹ | All | The Azure region where the resource is deployed, also referred to as the datacenter location where the resource is running. For an example using Virtual Machines, see [What's the difference between MeterRegion and ResourceLocation](../../virtual-machines/vm-usage.md#what-is-the-difference-between-meter-region-and-resource-location). |
| ResourceName | EA, pay-as-you-go | Name of the resource. Not all charges come from deployed resources. Charges that don't have a resource type are shown as null/empty, **Others** , or **Not applicable**. | | ResourceType | MCA | Type of resource instance. Not all charges come from deployed resources. Charges that don't have a resource type are shown as null/empty, **Others** , or **Not applicable**. | | RoundingAdjustment | EA, MCA | Rounding adjustment represents the quantization that occurs during cost calculation. When the calculated costs are converted to the invoiced total, small rounding errors can occur. The rounding errors are represented as `rounding adjustment` to ensure that the costs shown in Cost Management align to the invoice. For more information, see [Rounding adjustment details](#rounding-adjustment-details). |
cost-management-billing Tutorial Improved Exports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/costs/tutorial-improved-exports.md
Title: Tutorial - Improved exports experience - Preview
description: This tutorial helps you create automatic exports for your actual and amortized costs in the Cost and Usage Specification standard (FOCUS) format. Previously updated : 07/17/2024 Last updated : 08/09/2024
Note: A template simplifies export creation by preselecting a set of commonly us
2. Specify your Azure storage account subscription. Choose an existing resource group or create a new one. 3. Select the Storage account name or create a new one. 4. If you create a new storage account, choose an Azure region.
-5. Specify the storage container and directory path for the export file.
-6. File partitioning is enabled by default. It splits large files into smaller ones.
-7. **Overwrite data** is enabled by default. For daily exports, it replaces the previous day's file with an updated file.
-8. Select **Next** to move to the **Review + create** tab.
-
+6. Specify the storage container and directory path for the export file.
+7. Choose the **Format** as CSV or Parquet.
+8. Choose the **Compression type** as **None**, **Gzip** for the CSV file format, or **Snappy** for the Parquet file format.
+9. **File partitioning** is enabled by default. It splits large files into smaller ones.
+10. **Overwrite data** is enabled by default. For daily exports, it replaces the previous day's file with an updated file.
+11. Select **Next** to move to the **Review + create** tab.
+ :::image type="content" source="./media/tutorial-improved-exports/new-export-example.png" border="true" alt-text="Screenshot showing the New export dialog." lightbox="./media/tutorial-improved-exports/new-export-example.png" :::
### Review and create
You can perform the following actions by selecting the ellipsis (**…**) on the
- Delete - Permanently removes the export. - Refresh - Updates the Run history.
+ :::image type="content" source="./media/tutorial-improved-exports/export-run-history.png" border="true" alt-text="Screenshot showing the Export run history." lightbox="./media/tutorial-improved-exports/export-run-history.png" :::
### Schedule frequency
Agreement types, scopes, and required roles are explained at [Understand and wor
| **Data types** | **Supported agreement** | **Supported scopes** | | | | |
-| Cost and usage (actual) | • EA<br> • MCA that you bought through the Azure website <br> • MCA enterprise<br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go (PAYG) <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group |
-| Cost and usage (amortized) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go (PAYG) <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • MPA - Customer, subscription, and resource group |
+| Cost and usage (actual) | • EA<br> • MCA that you bought through the Azure website <br> • MCA enterprise<br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • Microsoft Partner Agreement (MPA) - Customer, subscription, and resource group |
+| Cost and usage (amortized) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner <br> • Microsoft Online Service Program (MOSP), also known as pay-as-you-go <br> • Azure internal | • EA - Enrollment, department, account, management group, subscription, and resource group <br> • MCA - Billing account, billing profile, Invoice section, subscription, and resource group <br> • MPA - Customer, subscription, and resource group |
| Cost and usage (FOCUS) | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner| • EA - Enrollment, department, account, subscription, and resource group <br> • MCA - Billing account, billing profile, invoice section, subscription, and resource group <br> • MPA - Customer, subscription, resource group. **NOTE**: The management group scope isn't supported for Cost and usage details (FOCUS) exports. | | All available prices | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile | | Reservation recommendations | • EA <br> • MCA that you bought through the Azure website <br> • MCA enterprise <br> • MCA that you buy through a Microsoft partner | • EA - Billing account <br> • All other supported agreements - Billing profile |
Agreement types, scopes, and required roles are explained at [Understand and wor
The improved exports experience currently has the following limitations. - The new exports experience doesn't fully support the management group scope and it has feature limitations.- - Azure internal and MOSP billing scopes and subscriptions don't support FOCUS datasets. - Shared access signature (SAS) key-based cross tenant export is only supported for Microsoft partners at the billing account scope. It isn't supported for other partner scenarios like any other scope, EA indirect contract, or Azure Lighthouse. ## FAQ
-1. Why is file partitioning enabled in exports?
+Why is file partitioning enabled in exports?
File partitioning is activated by default to facilitate the management of large files. This functionality divides larger files into smaller segments, which enhances the ease of file transfer, download, ingestion, and overall readability. It's advantageous for customers whose cost files increase in size over time. The specifics of the file partitions are described in a manifest.json file provided with each export run, enabling you to rejoin the original file. ## Next steps -- Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md).
+- Learn more about exports at [Tutorial: Create and manage exported data](tutorial-export-acm-data.md).
cost-management-billing Subscription Disabled https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-disabled.md
To resolve the issue, [switch to a different credit card](change-credit-card.md)
If you're the Account Administrator or subscription Owner and you canceled a pay-as-you-go subscription, you can reactivate it in the Azure portal.
-If you're a billing administrator (partner billing administrator or Enterprise Administrator), you might not have the required permission to reactivate the subscription. If this situation applies to you, contact the Account Administrator or subscription Owner and ask them to reactivate the subscription.
- 1. Sign in to the [Azure portal](https://portal.azure.com). 1. Go to Subscriptions and then select the canceled subscription. 1. Select **Reactivate**. :::image type="content" source="./media/subscription-disabled/reactivate-sub.png" alt-text="Screenshot that shows Confirm reactivation." :::
-For other subscription types, [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to have your subscription reactivated.
+For other subscription types (for example, Enterprise Subscription), [contact support](https://portal.azure.com/?#blade/Microsoft_Azure_Support/HelpAndSupportBlade) to have your subscription reactivated.
## After reactivation
cost-management-billing Subscription Transfer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/subscription-transfer.md
Previously updated : 06/27/2024 Last updated : 08/09/2024 # customer intent: As a billing administrator, I want to learn about transferring subscriptions so that I can transfer one.
Answers to the above questions can help you to communicate early with others to
Understanding the answers to source and destination offer type questions is crucial to determine the technical steps required and to recognize any potential restrictions in the transfer process. Limitations are covered in more detail in the next section.
+If you're not sure what type of subscription you have, see [Check the type of your account](view-all-accounts.md#check-the-type-of-your-account).
+ ## Support plan transfers
-You can't transfer support plans. If you have a support plan, then you should cancel it. Then you can buy a new one for the new agreement. If you cancel an Azure support plan, you're billed for the rest of the month. Cancelling a support plan doesn't result in a prorated refund. For more information about support plans, see [Azure support plans](https://azure.microsoft.com/support/plans/).
+You can't transfer support plans. If you have a support plan, then you should cancel it. Then you can buy a new one for the new agreement. If you cancel an Azure support plan, you get billed for the rest of the month. Cancelling a support plan doesn't result in a prorated refund. For more information about support plans, see [Azure support plans](https://azure.microsoft.com/support/plans/).
For information about how to cancel a support plan, see [Cancel your Azure subscription](cancel-azure-subscription.md).
cost-management-billing View Reservations https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/reservations/view-reservations.md
When you use the PowerShell script to assign the ownership role and it runs succ
[User Access Administrator](../../role-based-access-control/built-in-roles.md#user-access-administrator) rights are required before you can grant users or groups the Reservations Administrator and Reservations Reader roles at the tenant level. In order to get User Access Administrator rights at the tenant level, follow [Elevate access](../../role-based-access-control/elevate-access-global-admin.md) steps. ### Add a Reservations Administrator role or Reservations Reader role at the tenant level
-You can assign these roles from the [Azure portal](https://portal.azure.com).
+Only Global Administrators can assign these roles from the [Azure portal](https://portal.azure.com).
1. Sign in to the Azure portal and navigate to **Reservations**. 1. Select a reservation that you have access to.
data-factory Connector Azure Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-azure-cosmos-db.md
Previously updated : 01/05/2024 Last updated : 07/23/2024 # Copy and transform data in Azure Cosmos DB for NoSQL by using Azure Data Factory
These properties are supported for the linked service:
``` ### User-assigned managed identity authentication
->[!NOTE]
->Currently, the user-assigned managed identity authentication is not supported in data flow.
- A data factory or Synapse pipeline can be associated with a [user-assigned managed identities](data-factory-service-identity.md#user-assigned-managed-identity), which represents this specific service instance. You can directly use this managed identity for Azure Cosmos DB authentication, similar to using your own service principal. It allows this designated resource to access and copy data to or from your Azure Cosmos DB instance. To use user-assigned managed identities for Azure resource authentication, follow these steps.
These properties are supported for the linked service:
| database | Specify the name of the database. | Yes | | credentials | Specify the user-assigned managed identity as the credential object. | Yes | | connectVia | The [integration runtime](concepts-integration-runtime.md) to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. |No |
+| subscriptionId | Specify the subscription ID for the Azure Cosmos DB instance. | No for Copy Activity, Yes for Mapping Data Flow |
+| tenantId | Specify the tenant ID for the Azure Cosmos DB instance. | No for Copy Activity, Yes for Mapping Data Flow |
+| resourceGroup | Specify the resource group name for the Azure Cosmos DB instance. | No for Copy Activity, Yes for Mapping Data Flow |
**Example:**
These properties are supported for the linked service:
"credential": { "referenceName": "credential1", "type": "CredentialReference"
- }
+ },
+ "subscriptionId": "<subscription id>",
+ "tenantId": "<tenant id>",
+ "resourceGroup": "<resource group>"
}, "connectVia": { "referenceName": "<name of Integration Runtime>",
event-grid Custom Domains Namespaces https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/custom-domains-namespaces.md
The Event Grid namespace is automatically assigned an HTTP hostname at the time
You can assign your custom domain names to your Event Grid namespace's MQTT and HTTP host names, along with the default host names. Custom domain configurations not only help you meet your security and compliance requirements, but also eliminate the need to modify your clients that are already linked to your domain.
+> [!NOTE]
+> This feature is currently in preview.
+ ## High-level steps To use custom domains for namespaces, follow these steps:
event-grid Oauth Json Web Token Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-grid/oauth-json-web-token-authentication.md
This article shows how to authenticate with Azure Event Grid namespace using JSO
OAuth 2.0 (JSON Web Token) authentication allows clients to authenticate and connect with the MQTT broker using JSON Web Tokens (JWT) issued by any OpenID Connect identity provider, apart from Microsoft Entra ID. MQTT clients can get their token from their identity provider and provide the token in the MQTTv5 or MQTTv3.1.1 CONNECT packets to authenticate with the MQTT broker. This authentication method provides a lightweight, secure, and flexible option for MQTT clients that aren't provisioned in Azure.
+> [!NOTE]
+> This feature is currently in preview.
+ ## High-level steps To use custom JWT authentication for namespaces, follow these steps:
event-hubs Process Data Azure Stream Analytics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/event-hubs/process-data-azure-stream-analytics.md
Your Azure Stream Analytics job defaults to three streaming units (SUs). To adju
:::image type="content" source="./media/process-data-azure-stream-analytics/scale.png" alt-text="Screenshots showing the Scale page for a Stream Analytics job." lightbox="./media/process-data-azure-stream-analytics/scale.png"::: + ## Related content To learn more about Stream Analytics queries, see [Stream Analytics Query Language](/stream-analytics-query/built-in-functions-azure-stream-analytics)
hdinsight-aks Rest Api Cluster Creation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/rest-api-cluster-creation.md
Variables required in the script
- [HDInsight on AKS VM list](/azure/hdinsight-aks/virtual-machine-recommendation-capacity-planning) - recommendation-capacity-planning
-To create a cluster, copy the following command to your REST API tool e.g Postman
+To create a cluster, copy the following command to your REST API tool.
``` PUT https://management.azure.com/subscriptions/{subscriptionId}/resourceGroups/{resourceGroupName}/providers/Microsoft.HDInsight/clusterpools/{clusterPoolName}/clusters/{clusterName}?api-version=2023-06-01-preview
hdinsight Hdinsight Analyze Twitter Data Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-analyze-twitter-data-linux.md
Title: Analyze Twitter data with Apache Hive - Azure HDInsight
-description: Learn how to use Apache Hive and Apache Hadoop on HDInsight to transform raw TWitter data into a searchable Hive table.
+ Title: Analyze X data with Apache Hive - Azure HDInsight
+description: Learn how to use Apache Hive and Apache Hadoop on HDInsight to transform raw X data into a searchable Hive table.
Last updated 06/15/2024
-# Analyze Twitter data using Apache Hive and Apache Hadoop on HDInsight
+# Analyze X data using Apache Hive and Apache Hadoop on HDInsight
-Learn how to use [Apache Hive](https://hive.apache.org/) to process Twitter data. The result is a list of Twitter users who sent the most tweets that contain a certain word.
+Learn how to use [Apache Hive](https://hive.apache.org/) to process X data. The result is a list of X users who sent the most tweets that contain a certain word.
> [!IMPORTANT] > The steps in this document were tested on HDInsight 3.6. ## Get the data
-Twitter allows you to retrieve the data for each tweet as a JavaScript Object Notation (JSON) document through a REST API. [OAuth](https://oauth.net) is required for authentication to the API.
+X allows you to retrieve the data for each tweet as a JavaScript Object Notation (JSON) document through a REST API. [OAuth](https://oauth.net) is required for authentication to the API.
-### Create a Twitter application
+### Create an X application
-1. From a web browser, sign in to [https://developer.twitter.com](https://developer.twitter.com). Select the **Sign-up now** link if you don't have a Twitter account.
+1. From a web browser, sign in to [https://developer.x.com](https://developer.x.com). Select the **Sign-up now** link if you don't have an X account.
2. Select **Create New App**.
Twitter allows you to retrieve the data for each tweet as a JavaScript Object No
### Download tweets
-The following Python code downloads 10,000 tweets from Twitter and save them to a file named **tweets.txt**.
+The following Python code downloads 10,000 tweets from X and saves them to a file named **tweets.txt**.
> [!NOTE] > The following steps are performed on the HDInsight cluster, since Python is already installed.
The following Python code downloads 10,000 tweets from Twitter and save them to
nano gettweets.py ```
-1. Edit the code below by replacing `Your consumer secret`, `Your consumer key`, `Your access token`, and `Your access token secret` with the relevant information from your twitter application. Then paste the edited code as the contents of the **gettweets.py** file.
+1. Edit the code below by replacing `Your consumer secret`, `Your consumer key`, `Your access token`, and `Your access token secret` with the relevant information from your X application. Then paste the edited code as the contents of the **gettweets.py** file.
```python #!/usr/bin/python
The following Python code downloads 10,000 tweets from Twitter and save them to
import json import sys
- #Twitter app information
+ #X app information
consumer_secret='Your consumer secret' consumer_key='Your consumer key' access_token='Your access token'
The following Python code downloads 10,000 tweets from Twitter and save them to
To upload the data to HDInsight storage, use the following commands: ```bash
-hdfs dfs -mkdir -p /tutorials/twitter/data
-hdfs dfs -put tweets.txt /tutorials/twitter/data/tweets.txt
+hdfs dfs -mkdir -p /tutorials/x/data
+hdfs dfs -put tweets.txt /tutorials/x/data/tweets.txt
``` These commands store the data in a location that all nodes in the cluster can access.
These commands store the data in a location that all nodes in the cluster can ac
1. Use the following command to create a file containing [HiveQL](https://cwiki.apache.org/confluence/display/Hive/LanguageManual) statements: ```bash
- nano twitter.hql
+ nano x.hql
``` Use the following text as the contents of the file:
These commands store the data in a location that all nodes in the cluster can ac
set hive.exec.dynamic.partition.mode = nonstrict; -- Drop table, if it exists DROP TABLE tweets_raw;
- -- Create it, pointing toward the tweets logged from Twitter
+ -- Create it, pointing toward the tweets logged from X
CREATE EXTERNAL TABLE tweets_raw ( json_response STRING )
- STORED AS TEXTFILE LOCATION '/tutorials/twitter/data';
+ STORED AS TEXTFILE LOCATION '/tutorials/x/data';
-- Drop and recreate the destination table DROP TABLE tweets; CREATE TABLE tweets
These commands store the data in a location that all nodes in the cluster can ac
1. Use the following command to run the HiveQL contained in the file: ```bash
- beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -i twitter.hql
+ beeline -u 'jdbc:hive2://headnodehost:10001/;transportMode=http' -i x.hql
```
- This command runs the **twitter.hql** file. Once the query completes, you see a `jdbc:hive2//localhost:10001/>` prompt.
+ This command runs the **x.hql** file. Once the query completes, you see a `jdbc:hive2//localhost:10001/>` prompt.
1. From the beeline prompt, use the following query to verify that data was imported:
hdinsight Hdinsight Release Notes Archive https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes-archive.md
description: Archived release notes for Azure HDInsight. Get development tips an
Previously updated : 05/27/2024 Last updated : 08/09/2024 # Archived release notes
To subscribe, click the "watch" button in the banner and watch out for [HDIn
## Release Information
+### Release date: Jul 05, 2024
+
+> [!NOTE]
+> This is a Hotfix / maintenance release for Resource Provider. For more information see, [Resource Provider](.//hdinsight-overview-versioning.md#hdinsight-resource-provider)
+
+### Fixed issues
+
+* HOBO tags overwrite user tags.
+
+ * HOBO tags overwrite user tags on sub-resources in HDInsight cluster creation.
+
+### Release date: Jun 19, 2024
+
+This release note applies to
+++++
+HDInsight release will be available to all regions over several days. This release note is applicable for image number **2406180258**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+
+HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
+
+**OS versions**
+
+* HDInsight 5.1: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 5.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+* HDInsight 4.0: Ubuntu 18.04.5 LTS Linux Kernel 5.4
+
+> [!NOTE]
+> Ubuntu 18.04 is supported under [Extended Security Maintenance(ESM)](https://techcommunity.microsoft.com/t5/linux-and-open-source-blog/canonical-ubuntu-18-04-lts-reaching-end-of-standard-support/ba-p/3822623) by the Azure Linux team for [Azure HDInsight July 2023](/azure/hdinsight/hdinsight-release-notes-archive#release-date-july-25-2023), release onwards.
+
+For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md).
+
+## Fixed issues
+* Security enhancements
+ * Improvements on using Tags for clusters in line with the [SFI](https://www.microsoft.com/microsoft-cloud/resources/secure-future-initiative) requirements.
+ * Improvements in probes scripts as per the [SFI](https://www.microsoft.com/microsoft-cloud/resources/secure-future-initiative) requirements.
+* Improvements in the HDInsight Log Analytics with System Managed Identity support for HDInsight Resource Provider.
+* Addition of new activity to upgrade the `mdsd` agent version for old image (created before 2024).
+* Enabling MISE in gateway as part of the continued improvements for [MSAL Migration](/entra/identity-platform/msal-overview).
+* Incorporate Spark Thrift Server `Httpheader hiveConf` to the Jetty HTTP ConnectionFactory.
+* Revert RANGER-3753 and RANGER-3593.
+
+  The `setOwnerUser` implementation given in the Ranger 2.3.0 release has a critical regression issue when used by Hive. In Ranger 2.3.0, when HiveServer2 tries to evaluate the policies, the Ranger client tries to get the owner of the Hive table by calling the Metastore in the `setOwnerUser` function, which essentially makes a call to storage to check access for that table. This issue causes queries to run slowly when Hive runs on Ranger 2.3.0.
+
+## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon
+
+* [Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/).
+ * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
+ * To avoid service disruptions, [migrate your workloads](https://aka.ms/Av1retirement) from Basic and Standard A-series VMs to Av2-series VMs before August 31, 2024.
+* Retirement Notifications for [HDInsight 4.0](https://azure.microsoft.com/updates/azure-hdinsight-40-will-be-retired-on-31-march-2025-migrate-your-hdinsight-clusters-to-51) and [HDInsight 5.0](https://azure.microsoft.com/updates/hdinsight5retire/).
+
+If you have any more questions, contact [Azure Support](https://ms.portal.azure.com/#view/Microsoft_Azure_Support/HelpAndSupportBlade/~/overview).
+
+You can always ask us about HDInsight on [Azure HDInsight - Microsoft Q&A](/answers/tags/168/azure-hdinsight).
+
+We're listening: You're welcome to add more ideas and other topics here and vote for them - [HDInsight Ideas](https://feedback.azure.com/d365community/search/?q=HDInsight) and follow us for more updates on [AzureHDInsight Community](https://www.linkedin.com/groups/14313521/).
+
+> [!NOTE]
+> We advise customers to use the latest versions of HDInsight [Images](./view-hindsight-cluster-image-version.md) as they bring in the best of open source updates, Azure updates, and security fixes. For more information, see [Best practices](./hdinsight-overview-before-you-start.md).
++ ### Release date: May 16, 2024 This release note applies to
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md).
-## Fixed issues
+**Fixed issues**
* Added API in gateway to get token for Keyvault, as part of the SFI initiative. * In the new Log monitor `HDInsightSparkLogs` table, for log type `SparkDriverLog`, some of the fields were missing. For example, `LogLevel & Message`. This release adds the missing fields to schemas and fixed formatting for `SparkDriverLog`.
For workload specific versions, see [HDInsight 5.x component versions](./hdinsig
* CVE Fixes for [HDInsight Resource Provider](./hdinsight-overview-versioning.md#hdinsight-resource-provider).
-## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon
+**Coming soon**
* [Basic and Standard A-series VMs Retirement](https://azure.microsoft.com/updates/basic-and-standard-aseries-vms-on-hdinsight-will-retire-on-31-august-2024/). * On August 31, 2024, we'll retire Basic and Standard A-series VMs. Before that date, you need to migrate your workloads to Av2-series VMs, which provide more memory per vCPU and faster storage on solid-state drives (SSDs).
hdinsight Hdinsight Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight/hdinsight-release-notes.md
description: Latest release notes for Azure HDInsight. Get development tips and
Previously updated : 07/08/2024 Last updated : 08/09/2024 # Azure HDInsight release notes
To subscribe, click the **watch** button in the banner and watch out for [HDInsi
## Release Information
-### Release date: Jul 05, 2024
-
-> [!NOTE]
-> This is a Hotfix / maintenance release for Resource Provider. For more information see, [Resource Provider](.//hdinsight-overview-versioning.md#hdinsight-resource-provider)
-
-### Fixed issues
-
-* HOBO tags overwrite user tags.
-
- * HOBO tags overwrite user tags on sub-resources in HDInsight cluster creation.
-
-### Release date: Jun 19, 2024
+### Release date: Aug 09, 2024
This release note applies to
This release note applies to
:::image type="icon" source="./media/hdinsight-release-notes/yes-icon.svg" border="false"::: HDInsight 4.0 version.
-HDInsight release will be available to all regions over several days. This release note is applicable for image number **2406180258**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
+HDInsight release will be available to all regions over several days. This release note is applicable for image number **2407260448**. [How to check the image number?](./view-hindsight-cluster-image-version.md)
HDInsight uses safe deployment practices, which involve gradual region deployment. It might take up to 10 business days for a new release or a new version to be available in all regions.
HDInsight uses safe deployment practices, which involve gradual region deploymen
For workload specific versions, see [HDInsight 5.x component versions](./hdinsight-5x-component-versioning.md).
-## Fixed issues
-* Security enhancements
- * Improvements on using Tags for clusters in line with the [SFI](https://www.microsoft.com/microsoft-cloud/resources/secure-future-initiative) requirements.
- * Improvements in probes scripts as per the [SFI](https://www.microsoft.com/microsoft-cloud/resources/secure-future-initiative) requirements.
-* Improvements in the HDInsight Log Analytics with System Managed Identity support for HDInsight Resource Provider.
-* Addition of new activity to upgrade the `mdsd` agent version for old image (created before 2024).
-* Enabling MISE in gateway as part of the continued improvements for [MSAL Migration](/entra/identity-platform/msal-overview).
-* Incorporate Spark Thrift Server `Httpheader hiveConf` to the Jetty HTTP ConnectionFactory.
-* Revert RANGER-3753 and RANGER-3593.
-
- The `setOwnerUser` implementation given in Ranger 2.3.0 release has a critical regression issue when being used by Hive. In Ranger 2.3.0, when HiveServer2 tries to evaluate the policies, Ranger Client tries to get the owner of the hive table by calling the Metastore in the setOwnerUser function which essentially makes call to storage to check access for that table. This issue causes the queries to run slow when Hive runs on 2.3.0 Ranger.
+## Updates
+
+**[Addition of Azure Monitor Agent](./azure-monitor-agent.md) for Log Analytics in HDInsight**
+
+Addition of `SystemMSI` and automated DCR for Log Analytics, given the deprecation of the [New Azure Monitor experience (preview)](./hdinsight-hadoop-oms-log-analytics-tutorial.md).
## :::image type="icon" border="false" source="./media/hdinsight-release-notes/clock.svg"::: Coming soon
iot-dps How To Troubleshoot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/how-to-troubleshoot-dps.md
Previously updated : 05/25/2022 Last updated : 06/28/2024 # Troubleshooting with Azure IoT Hub Device Provisioning Service
-Provisioning issues for IoT devices can be difficult to troubleshoot because there are many possible points of failures such as attestation failures, registration failures, etc. This article provides guidance on how to detect and troubleshoot device provisioning issues via Azure Monitor. To learn more about using Azure Monitor with DPS, see [Monitor Device Provisioning Service](monitor-iot-dps.md).
-
-## Using Azure Monitor to view metrics and set up alerts
-
-To view and set up alerts on IoT Hub Device Provisioning Service metrics:
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. Browse to your IoT Hub Device Provisioning Service.
-
-3. Select **Metrics**.
-
-4. Select the desired metric. For supported metrics, see [Metrics](monitor-iot-dps-reference.md#metrics).
-
-5. Select desired aggregation method to create a visual view of the metric.
-
-6. To set up an alert of a metric, select **New alert rules** from the top right of the metric blade, similarly you can go to **Alert** blade and select **New alert rules**.
-
-7. Select **Add condition**, then select the desired metric and threshold by following prompts.
-
-To learn more about viewing metrics and setting up alerts on your DPS instance, see [Analyzing metrics](monitor-iot-dps.md#analyzing-metrics) and [Alerts](monitor-iot-dps.md#alerts) in Monitor Device Provisioning Service.
-
-## Using Log Analytics to view and resolve errors
-
-1. Sign in to the [Azure portal](https://portal.azure.com).
-
-2. Browse to your Device Provisioning Service.
-
-3. Select **Diagnostics settings**.
-
-4. Select **Add diagnostic setting**.
-
-5. Configure the desired logs to be collected. For supported categories, see [Resource logs](monitor-iot-dps-reference.md#resource-logs).
-
-6. Tick the box **Send to Log Analytics** ([see pricing](https://azure.microsoft.com/pricing/details/log-analytics/)) and save.
-
-7. Go to **Logs** tab in the Azure portal under Device Provisioning Service resource.
-
-8. Write **AzureDiagnostics** as a query and click **Run** to view recent events.
-
-9. If there are results, look for `OperationName`, `ResultType`, `ResultSignature`, and `ResultDescription` (error message) to get more detail on the error.
+Provisioning issues for IoT devices can be difficult to troubleshoot because there are many possible points of failure, such as attestation failures and registration failures. To learn more about using Azure Monitor with DPS, see [Monitor Azure IoT Hub Device Provisioning Service](monitor-iot-dps.md).
## Common error codes
iot-dps Monitor Iot Dps Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps-reference.md
Title: Monitoring DPS data reference-
-description: Important reference material needed when you monitor Azure IoT Hub Device Provisioning Service using Azure Monitor
--
+ Title: Monitoring Device Provisioning Service data reference
+description: This article contains important reference material you need when you monitor Azure IoT Hub Device Provisioning Service.
Last updated : 06/28/2024+ + - Previously updated : 04/15/2022
-# Monitoring Azure IoT Hub Device Provisioning Service data reference
+# Azure IoT Hub Device Provisioning Service monitoring data reference
-See [Monitoring Iot Hub Device Provisioning Service](monitor-iot-dps.md) for details on collecting and analyzing monitoring data for Azure IoT Hub Device Provisioning Service (DPS).
-## Metrics
+See [Monitor Azure IoT Hub Device Provisioning Service](monitor-iot-dps.md) for details on the data you can collect for IoT Hub Device Provisioning Service and how to use it.
-This section lists all the automatically collected platform metrics collected for DPS.
-Resource Provider and Type: [Microsoft.Devices/provisioningServices](/azure/azure-monitor/platform/metrics-supported#microsoftdevicesprovisioningservices).
+### Supported metrics for Microsoft.Devices/provisioningServices
-|Metric|Exportable via Diagnostic Settings?|Metric Display Name|Unit|Aggregation Type|Description|Dimensions|
-||||||||
-|AttestationAttempts|Yes|Attestation attempts|Count|Total|Number of device attestations attempted|ProvisioningServiceName, Status, Protocol|
-|DeviceAssignments|Yes|Devices assigned|Count|Total|Number of devices assigned to an IoT hub|ProvisioningServiceName, IotHubName|
-|RegistrationAttempts|Yes|Registration attempts|Count|Total|Number of device registrations attempted|ProvisioningServiceName, IotHubName, Status|
+The following table lists the metrics available for the Microsoft.Devices/provisioningServices resource type.
-For more information, see a list of [all platform metrics supported in Azure Monitor](/azure/azure-monitor/platform/metrics-supported).
-## Metric dimensions
-DPS has the following dimensions associated with its metrics.
-| Dimension Name | Description |
-| - | -- |
-| IotHubName | The name of the target IoT hub. |
-| Protocol | The device or service protocol used. |
-| ProvisioningServiceName | The name of the DPS instance. |
-| Status | The status of the operation. |
+| Dimension Name | Description |
+|:|:-|
+| IotHubName | The name of the target IoT hub. |
+| Protocol | The device or service protocol used. |
+| ProvisioningServiceName | The name of the DPS instance. |
+| Status | The status of the operation. |
For more information on what metric dimensions are, see [Multi-dimensional metrics](/azure/azure-monitor/platform/data-platform-metrics#multi-dimensional-metrics).
-## Resource logs
-This section lists the types of resource logs you can collect for DPS.
+### Supported resource logs for Microsoft.Devices/provisioningServices
-Resource Provider and Type: [Microsoft.Devices/provisioningServices](../azure-monitor/essentials/resource-logs-categories.md#microsoftdevicesprovisioningservices).
-| Category | Description |
-|:||
-| DeviceOperations | Logs related to device attestation events. See device APIs listed in [Billable service operations and pricing](about-iot-dps.md#billable-service-operations-and-pricing). |
-| ServiceOperations | Logs related to DPS service events. See DPS service APIs listed in [Billable service operations and pricing](about-iot-dps.md#billable-service-operations-and-pricing). |
+The following list provides additional information about the preceding logs:
+
+- DeviceOperations: Logs related to device attestation events. See device APIs listed in [Billable service operations and pricing](about-iot-dps.md#billable-service-operations-and-pricing).
+- ServiceOperations: Logs related to DPS service events. See DPS service APIs listed in [Billable service operations and pricing](about-iot-dps.md#billable-service-operations-and-pricing).
For reference, see a list of [all resource logs category types supported in Azure Monitor](/azure/azure-monitor/platform/resource-logs-schema).
DPS uses the [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagn
| Level | Int | The logging severity of the event. For example, Information or Error. | | OperationName | String | The type of action performed during the event. For example: Query, Get, Upsert, and so on. | | OperationVersion | String | The API Version used during the event. |
-| Resource | String | The name forOF the resource where the event took place. For example, "MYEXAMPLEDPS". |
+| Resource | String | The name of the resource where the event took place. For example, `MYEXAMPLEDPS`. |
| ResourceGroup | String | The name of the resource group where the resource is located. | | ResourceId | String | The Azure Resource Manager Resource ID for the resource where the event took place. |
-| ResourceProvider | String | The resource provider for the event. For example, "MICROSOFT.DEVICES". |
-| ResourceType | String | The resource type for the event. For example, "PROVISIONINGSERVICES". |
+| ResourceProvider | String | The resource provider for the event. For example, `MICROSOFT.DEVICES`. |
+| ResourceType | String | The resource type for the event. For example, `PROVISIONINGSERVICES`. |
| ResultDescription | String | Error details for the event if unsuccessful. | | ResultSignature | String | HTTP status code for the event if unsuccessful. | | ResultType | String | Outcome of the event: Success, Failure, ClientError, and so on. |
The following JSON is an example of a successful add (`Upsert`) individual enrol
} ```
-## Azure Monitor Logs tables
-
-This section refers to all of the Azure Monitor Logs Kusto tables relevant to DPS and available for query by Log Analytics. For a list of these tables and links to more information for the DPS resource type, see [Device Provisioning Services](/azure/azure-monitor/reference/tables/tables-resourcetype#device-provisioning-services) in the Azure Monitor Logs table reference.
-For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype).
+### IoT Hub Device Provisioning Service Microsoft.Devices/ProvisioningServices
-## Activity log
+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)
-For more information on the schema of Activity Log entries, see [Activity Log schema](../azure-monitor/essentials/activity-log-schema.md).
-## See Also
+- [Microsoft.Devices resource provider operations](/azure/role-based-access-control/resource-provider-operations#internet-of-things)
-- See [Monitoring Azure IoT Hub Device Provisioning Service](monitor-iot-dps.md) for a description of monitoring Azure IoT Hub Device Provisioning Service.
+## Related content
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Monitor Azure IoT Hub Device Provisioning Service](monitor-iot-dps.md) for a description of monitoring IoT Hub Device Provisioning Service.
+- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources.
iot-dps Monitor Iot Dps https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-dps/monitor-iot-dps.md
Title: Monitor DPS using Azure Monitor-
-description: Start here to learn how to monitor metrics and logs from the Azure IoT Hub Device Provisioning Service by using Azure Monitor
+ Title: Monitor Azure IoT Hub Device Provisioning Service
+description: Start here to learn how to monitor metrics and logs from the Azure IoT Hub Device Provisioning Service by using Azure Monitor.
Last updated : 06/28/2024++ - - - Previously updated : 04/15/2022
-# Monitoring Azure IoT Hub Device Provisioning Service
-
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+# Monitor Azure IoT Hub Device Provisioning Service
-This article describes the monitoring data generated by Azure IoT Hub Device Provisioning Service (DPS). DPS uses [Azure Monitor](../azure-monitor/overview.md). If you are unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
-## Monitoring data
+This article describes the monitoring data generated by Azure IoT Hub Device Provisioning Service (DPS). DPS uses Azure Monitor.
-DPS collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data-from-azure-resources).
-See [Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md) for detailed information on the metrics and logs created by DPS.
+For more information about the resource types for IoT Hub DPS, see [Azure IoT Hub Device Provisioning Service monitoring data reference](monitor-iot-dps-reference.md).
-## Collection and routing
-Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
+### Collection and routing
-Resource Logs are not collected and stored until you create a diagnostic setting and route them to one or more locations.
+Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
In Azure portal, you can select **Diagnostic settings** under **Monitoring** on the left-pane of your DPS instance followed by **Add diagnostic setting** to create diagnostic settings scoped to the logs and platform metrics emitted by your instance.
The following screenshot shows a diagnostic setting for routing to a Log Analyti
See [Create diagnostic setting to collect platform logs and metrics in Azure](/azure/azure-monitor/platform/diagnostic-settings) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for DPS are listed in [Resource logs in the Azure IoT Hub Device Provisioning Service monitoring data reference](monitor-iot-dps-reference.md#resource-logs).
-The metrics and logs you can collect are discussed in the following sections.
+
+For a list of available metrics for IoT Hub DPS, see [IoT Hub DPS monitoring data reference](monitor-iot-dps-reference.md#metrics).
+
+### Using Azure Monitor to view metrics and set up alerts
+
+To view and set up alerts on IoT Hub Device Provisioning Service metrics:
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Browse to your IoT Hub Device Provisioning Service.
+1. Select **Metrics**.
+1. Select the desired metric. For supported metrics, see [Metrics](monitor-iot-dps-reference.md#metrics).
+1. Select desired aggregation method to create a visual view of the metric.
+1. To set up an alert on a metric, select **New alert rules** from the top right of the metric area. Similarly, you can go to the **Alert** pane and select **New alert rules**.
+1. Select **Add condition**, then select the desired metric and threshold by following prompts.
-## Analyzing metrics
+To learn more about viewing metrics and setting up alerts on your DPS instance, see [Analyzing metrics](#analyzing-metrics) and [IoT Hub Device Provisioning Service alert rules](#iot-hub-device-provisioning-service-alert-rules) later in this article.
+
+### Analyzing metrics
You can analyze metrics for DPS with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool.
In Azure portal, you can select **Metrics** under **Monitoring** on the left-pan
:::image type="content" source="media/monitor-iot-dps/metrics-portal.png" alt-text="Screenshot showing the metrics explorer page for a DPS instance." border="true":::
-For a list of the platform metrics collected for DPS, see [Metrics in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#metrics).
+For a list of the platform metrics collected for DPS, see [Metrics](monitor-iot-dps-reference.md#metrics). For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
++
+For the available resource log categories, their associated Log Analytics tables, and the log schemas for IoT Hub DPS, see [IoT Hub DPS monitoring data reference](monitor-iot-dps-reference.md#resource-logs).
-For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md).
-## Analyzing logs
+### Analyzing logs
Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties.
To route data to Azure Monitor Logs, you must create a diagnostic setting to send your resource logs to a Log Analytics workspace.
In Azure portal, you can select **Logs** under **Monitoring** on the left-pane of your DPS instance to perform Log Analytics queries scoped, by default, to the logs and metrics collected in Azure Monitor Logs for your instance.

> [!IMPORTANT]
> When you select **Logs** from the DPS menu, Log Analytics is opened with the query scope set to the current DPS instance. This means that log queries will only include data from that resource. If you want to run a query that includes data from other DPS instances or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](../azure-monitor/logs/scope.md) for details.
-Run queries against the **AzureDiagnostics** table to see the resource logs collected for the diagnostic settings you've created for your DPS instance.
+Run queries against the **AzureDiagnostics** table to see the resource logs collected for the diagnostic settings you created for your DPS instance.
```kusto
AzureDiagnostics
```
All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Azure Monitor resource log schema](../azure-monitor/essentials/resource-logs-schema.md). The schema for DPS resource logs is found in [Resource logs in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#resource-logs).
-The [Activity log](../azure-monitor/essentials/activity-log.md) is a type of platform log in Azure that provides insight into subscription-level events. You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.
+### Using Log Analytics to view and resolve errors
+
+1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Browse to your Device Provisioning Service.
+1. Select **Diagnostics settings**.
+1. Select **Add diagnostic setting**.
+1. Configure the desired logs to be collected. For supported categories, see [Resource logs](monitor-iot-dps-reference.md#resource-logs).
+1. Select the box **Send to Log Analytics** ([see pricing](https://azure.microsoft.com/pricing/details/log-analytics/)) and save.
+1. Go to **Logs** tab in the Azure portal under Device Provisioning Service resource.
+1. Write **AzureDiagnostics** as a query and select **Run** to view recent events.
+1. If there are results, look for `OperationName`, `ResultType`, `ResultSignature`, and `ResultDescription` (error message) to get more detail on the error.
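+
+For example, a query along the following lines surfaces only failed operations. The column names follow the **AzureDiagnostics** fields called out above; the resource provider and type filters are assumptions for DPS, so adjust them to match your data:
+
+```kusto
+AzureDiagnostics
+| where ResourceProvider == "MICROSOFT.DEVICES" and ResourceType == "PROVISIONINGSERVICES"
+| where ResultType != "Success"
+| project TimeGenerated, OperationName, ResultType, ResultSignature, ResultDescription
+```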
++
-For a list of the types of resource logs collected for DPS, see [Resource logs in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#resource-logs).
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Azure Monitor Logs tables in the Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md#azure-monitor-logs-tables).
-## Alerts
+### IoT Hub Device Provisioning Service alert rules
-Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks.
+You can set alerts for any metric, log entry, or activity log entry listed in the [IoT Hub DPS monitoring data reference](monitor-iot-dps-reference.md).
-## Next steps
-- See [Monitoring Azure IoT Hub Device Provisioning Service data reference](monitor-iot-dps-reference.md) for a reference of the metrics, logs, and other important values created by DPS.
+## Related content
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+- See [Azure IoT Hub Device Provisioning Service monitoring data reference](monitor-iot-dps-reference.md) for a reference of the metrics, logs, and other important values created for IoT Hub Device Provisioning Service.
+- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources.
logic-apps Create Single Tenant Workflows Azure Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/create-single-tenant-workflows-azure-portal.md
Title: Create example Standard logic app workflow in Azure portal
-description: Create your first example Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
+ Title: Create example Standard workflow in Azure portal
+description: Learn to build your first example Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
ms.suite: integration + Previously updated : 01/03/2024 Last updated : 08/09/2024 # Customer intent: As a developer, I want to create my first example Standard logic app workflow that runs in single-tenant Azure Logic Apps using the Azure portal.
-# Create an example Standard workflow in single-tenant Azure Logic Apps with the Azure portal
+# Create an example Standard logic app workflow using the Azure portal
[!INCLUDE [logic-apps-sku-standard](../../includes/logic-apps-sku-standard.md)]
-This how-to guide shows how to create an example automated workflow that waits for an inbound web request and then sends a message to an email account. More specifically, you'll create a [Standard logic app resource](logic-apps-overview.md#resource-environment-differences), which can include multiple [stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless) that run in single-tenant Azure Logic Apps.
+This how-to guide shows how to create an example workflow that runs in single-tenant Azure Logic Apps. The workflow waits for an inbound web request and then sends a message to an email account. Specifically, you create a Standard logic app resource and workflow that contains the following items:
-> [!NOTE]
->
-> To create this example workflow in Visual Studio Code instead, follow the steps in
-> [Create Standard workflows in single-tenant Azure Logic Apps with Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md).
-> Both options provide the capability to develop, run, and deploy logic app workflows in the same kinds of environments.
-> However, with Visual Studio Code, you can *locally* develop, test, and run workflows in your development environment.
+- The **Request** trigger, which creates a callable endpoint that can handle inbound requests from any caller.
+- The **Office 365 Outlook** connector, which provides an action to send email.
+
+When you finish, your workflow looks like the following high-level example:
-While this example workflow is cloud-based and has only two steps, you can create workflows from hundreds of operations that can connect a wide range of apps, data, services, and systems across cloud, on premises, and hybrid environments. The example workflow starts with the Request built-in trigger, which is followed by an Office 365 Outlook action. The trigger creates a callable endpoint for the workflow and waits for an inbound HTTPS request from any caller. When the trigger receives a request and fires, the next action runs by sending email to the specified email address along with selected outputs from the trigger.
+
+You can have multiple workflows in a Standard logic app. Workflows in the same logic app and tenant run in the same process as the Azure Logic Apps runtime, so they share the same resources and provide better performance.
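+
+Behind the designer, each workflow is stored as a JSON definition. The following trimmed sketch shows roughly how this example's trigger and action might appear; the operation names, connection reference, and email values are illustrative, not exact designer output:
+
+```json
+{
+   "definition": {
+      "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+      "triggers": {
+         "When_a_HTTP_request_is_received": {
+            "type": "Request",
+            "kind": "Http",
+            "inputs": {}
+         }
+      },
+      "actions": {
+         "Send_an_email_(V2)": {
+            "type": "ApiConnection",
+            "runAfter": {},
+            "inputs": {
+               "host": {
+                  "connection": {
+                     "referenceName": "office365"
+                  }
+               },
+               "method": "post",
+               "path": "/v2/Mail",
+               "body": {
+                  "To": "sophia.owen@fabrikam.com",
+                  "Subject": "An email from your example workflow",
+                  "Body": "Hello from your example workflow!"
+               }
+            }
+         }
+      },
+      "outputs": {}
+   },
+   "kind": "Stateful"
+}
+```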
+
+> [!TIP]
+>
+> To learn more, you can ask Azure Copilot these questions:
+>
+> - *What's Azure Logic Apps?*
+> - *What's a Standard logic app workflow?*
+> - *What's the Request trigger?*
+> - *What's the Office 365 Outlook connector?*
+>
+> To find Azure Copilot, on the [Azure portal](https://portal.azure.com) toolbar, select **Copilot**.
-![Screenshot showing the Azure portal with the designer for Standard logic app workflow.](./media/create-single-tenant-workflows-azure-portal/azure-portal-logic-apps-overview.png)
+The operations in this example are from two connectors among [1000+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow. While this example is cloud-based, you can create workflows that integrate a vast range of apps, data, services, and systems across cloud, on-premises, and hybrid environments.
-As you progress, you'll complete these high-level tasks:
+For more information, see the following documentation:
-* Create a Standard logic app resource and add a blank [*stateful* workflow](single-tenant-overview-compare.md#stateful-stateless).
-* Add a trigger and action.
-* Trigger a workflow run.
-* View the workflow's run and trigger history.
-* Enable or open the Application Insights after deployment.
-* Enable run history for stateless workflows.
+- [Single-tenant versus multitenant](single-tenant-overview-compare.md)
+- [Create and deploy to different environments](logic-apps-overview.md#resource-environment-differences)
-In single-tenant Azure Logic Apps, workflows in the same logic app resource and tenant run in the same process as the runtime, so they share the same resources and provide better performance. For more information about single-tenant Azure Logic Apps, see [Single-tenant versus multitenant and integration service environment](single-tenant-overview-compare.md).
+To create and manage a Standard logic app workflow using other tools, see [Create Standard workflows with Visual Studio Code](create-single-tenant-workflows-visual-studio-code.md). With Visual Studio Code, you can develop, test, and run workflows in your *local* development environment.
## Prerequisites
In single-tenant Azure Logic Apps, workflows in the same logic app resource and tenant run in the same process as the runtime, so they share the same resources and provide better performance.
> [Stateful workflows](single-tenant-overview-compare.md#stateful-stateless) perform storage transactions, such as
> using queues for scheduling and storing workflow states in tables and blobs. These transactions incur
> [storage charges](https://azure.microsoft.com/pricing/details/storage/). For more information about
- > how stateful workflows store data in external storage, review [Stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless).
+ > how stateful workflows store data in external storage, see [Stateful and stateless workflows](single-tenant-overview-compare.md#stateful-stateless).
-* To create the same example workflow in this guide, you need an Office 365 Outlook email account that uses a Microsoft work or school account to sign in.
+* An email account from an email provider supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other supported email providers, see [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
- If you don't have an Office 365 account, you can use [any other available email connector](/connectors/connector-reference/connector-reference-logicapps-connectors) that can send messages from your email account, for example, Outlook.com. If you use a different email connector, you can still follow the example, and the general overall steps are the same. However, your options might differ in some ways. For example, if you use the Outlook.com connector, use your personal Microsoft account instead to sign in.
+ This example uses Office 365 Outlook with a work or school account. If you use a different email account, the general steps stay the same, but the user experience might slightly differ. If you use Outlook.com, use your personal Microsoft account instead to sign in.
+ > [!NOTE]
+ >
+ > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic app workflows.
+ > If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can
+ > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application).
+ > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
-* If you create your logic app resource and enable [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can create this resource either [in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
-* To deploy your Standard logic app resource to an [App Service Environment v3 (ASEv3) - Windows plan only](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app resource. For more information, review [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md).
+* To deploy your Standard logic app resource to an [App Service Environment v3 (ASEv3) - Windows plan only](../app-service/environment/overview.md), you have to create this environment resource first. You can then select this environment as the deployment location when you create your logic app. For more information, see [Resources types and environments](single-tenant-overview-compare.md#resource-environment-differences) and [Create an App Service Environment](../app-service/environment/creation.md).
-* Starting mid-October 2022, new Standard logic app workflows in the Azure portal automatically use Azure Functions v4. Throughout November 2022, existing Standard workflows in the Azure portal are automatically migrating to Azure Functions v4. Unless you deployed your Standard logic apps as NuGet-based projects or pinned your logic apps to a specific bundle version, this upgrade is designed to require no action from you nor have a runtime impact. However, if the exceptions apply to you, or for more information about Azure Functions v4 support, see [Azure Logic Apps Standard now supports Azure Functions v4](https://techcommunity.microsoft.com/t5/integrations-on-azure-blog/azure-logic-apps-standard-now-supports-azure-functions-v4/ba-p/3656072).
+* If you enable [Application Insights](../azure-monitor/app/app-insights-overview.md) on your logic app, you can optionally enable diagnostics logging and tracing. You can do so either when you create your logic app or after deployment. You need to have an Application Insights instance, but you can [create this resource in advance](../azure-monitor/app/create-workspace-resource.md), when you create your logic app, or after deployment.
## Best practices and recommendations

For optimal designer responsiveness and performance, review and follow these guidelines:

-- Use no more than 50 actions per workflow. Exceeding this number of actions raises the possibility for slower designer performance.
+- Use no more than 50 actions per workflow. Exceeding this number of actions raises the possibility of slower designer performance.
- Consider splitting business logic into multiple workflows where necessary.
More workflows in your logic app raise the risk of longer load times, which negatively affect performance.
1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
-1. In the Azure portal search box, enter **logic apps**, and select **Logic apps**.
+1. In the Azure portal search box, enter **logic app**, and select **Logic apps**.
- ![Screenshot showing Azure portal search box with logic apps entered and logic apps group selected.](./media/create-single-tenant-workflows-azure-portal/find-logic-app-resource-template.png)
+ :::image type="content" source="media/create-single-tenant-workflows-azure-portal/find-select-logic-apps.png" alt-text="Screenshot shows Azure portal search box with the words, logic app, and shows the selection, Logic apps." lightbox="media/create-single-tenant-workflows-azure-portal/find-select-logic-apps.png":::
-1. On the **Logic apps** page, select **Add**.
+1. On the **Logic apps** page toolbar, select **Add**.
-1. On the **Create Logic App** page, on the **Basics** tab, provide the following basic information about your logic app:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. |
- | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **Fabrikam-Workflows-RG**. |
- | **Logic App name** | Yes | <*logic-app-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>**Note**: Your logic app's name automatically gets the suffix, **.azurewebsites.net**, because the Standard logic app resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. <br><br>This example creates a logic app named **Fabrikam-Workflows**. |
+ The **Create Logic App** page appears and shows the following options:
-1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Standard** so that you view only the settings that apply to the Standard plan-based logic app type.
+ [!INCLUDE [logic-apps-host-plans](../../includes/logic-apps-host-plans.md)]
- The **Plan type** property specifies the hosting plan and billing model to use for your logic app. For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md).
+1. On the **Create Logic App** page, select **Standard (Workflow Service Plan)**.
- | Plan type | Description |
- |--|-|
- | **Standard** | This logic app type is the default selection. Workflows run in single-tenant Azure Logic Apps and use the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
- | **Consumption** | This logic app type and workflow runs in global, multitenant Azure Logic Apps and uses the [Consumption billing model](logic-apps-pricing.md#consumption-pricing). |
+1. On the **Create Logic App** page, on the **Basics** tab, provide the following basic information about your logic app:
| Property | Required | Value | Description |
|-|-|-|-|
- | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <br><br>This example uses the name **My-App-Service-Plan**. <br><br>**Note**: Only the Windows-based App Service plan is supported. Don't use a Linux-based App Service plan. |
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. <br><br>This example uses **Pay-As-You-Go**. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **Fabrikam-Workflows-RG**. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>**Note**: Your logic app's name automatically gets the suffix, **.azurewebsites.net**, because the Standard logic app resource is powered by the single-tenant Azure Logic Apps runtime, which uses the Azure Functions extensibility model and is hosted as an extension on the Azure Functions runtime. Azure Functions uses the same app naming convention. <br><br>This example creates a logic app resource named **Fabrikam-Workflows**. |
+ | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for your logic app. <br><br>This example uses **West US**. |
+ | **Windows Plan** | Yes | <*plan-name*> | The plan name to use. Either select an existing plan name or provide a name for a new plan. <br><br>This example uses the name **My-App-Service-Plan**. <br><br>**Note**: Don't use a Linux-based App Service plan. Only the Windows-based App Service plan is supported. |
| **Pricing plan** | Yes | <*pricing-tier*> | The [pricing tier](../app-service/overview-hosting-plans.md) to use for your logic app and workflows. Your selection affects the pricing, compute, memory, and storage that your logic app and workflows use. <br><br>For more information, review [Hosting plans and pricing tiers](logic-apps-pricing.md#standard-pricing). |
-1. Now continue making the following selections:
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Publish** | Yes | **Workflow** | This option appears and applies only when **Plan type** is set to the **Standard** logic app type. By default, this option is set to **Workflow** and creates an empty logic app resource where you add your first workflow. <br><br>**Note**: Currently, the **Docker Container** option requires a [*custom location*](../azure-arc/kubernetes/conceptual-custom-locations.md) on an Azure Arc enabled Kubernetes cluster, which you can use with [Azure Arc enabled Logic Apps (Standard)](azure-arc-enabled-logic-apps-overview.md). The resource locations for your logic app, custom location, and cluster must all be the same. |
- | **Region** | Yes | <*Azure-region*> | The Azure datacenter region to use for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. <br><br>- If you previously chose **Docker Container**, select your custom location from the **Region** list. <br><br>- If you want to deploy your app to an existing [App Service Environment v3 resource](../app-service/environment/overview.md), you can select that environment from the **Region** list. |
- > [!NOTE] > > If you select an Azure region that supports availability zone redundancy, the **Zone redundancy**
More workflows in your logic app raise the risk of longer load times, which negatively affect performance.
> so you can ignore this section for this example. For more information, see > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
- When you're done, your settings look similar to the following example:
+ When you finish, your settings look similar to the following example:
+
+ :::image type="content" source="media/create-single-tenant-workflows-azure-portal/create-logic-app-basics.png" alt-text="Screenshot shows Azure portal and page named Create Logic App Workflow Service Plan." lightbox="media/create-single-tenant-workflows-azure-portal/create-logic-app-basics.png":::
- ![Screenshot showing Azure portal and page named Create Logic App.](./media/create-single-tenant-workflows-azure-portal/create-logic-app-resource-portal.png)
+1. When you finish, select **Next: Storage**.
-1. On the **Hosting** tab, provide the following information about the storage solution and hosting plan to use for your logic app.
+1. On the **Storage** tab, provide the following information about the storage solution and hosting plan to use for your logic app.
| Property | Required | Value | Description |
|-|-|-|-|
- | **Storage type** | Yes | - **Azure Storage** <br>- **SQL and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <br><br>- To deploy only to Azure, select **Azure Storage**. <br><br>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL and Azure Storage**, and review [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <br><br>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The workflow's state, run history, and other runtime artifacts are stored in your SQL database. <br><br>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
+ | **Storage type** | Yes | - **Azure Storage** <br>- **SQL (Preview) and Azure Storage** | The storage type that you want to use for workflow-related artifacts and data. <br><br>- To deploy only to Azure, select **Azure Storage**. <br><br>- To use SQL as primary storage and Azure Storage as secondary storage, select **SQL (Preview) and Azure Storage**, and see [Set up SQL database storage for Standard logic apps in single-tenant Azure Logic Apps](set-up-sql-db-storage-single-tenant-standard-workflows.md). <br><br>**Note**: If you're deploying to an Azure region, you still need an Azure storage account, which is used to complete the one-time hosting of the logic app's configuration on the Azure Logic Apps platform. The workflow's state, run history, and other runtime artifacts are stored in your SQL database. <br><br>For deployments to a custom location that's hosted on an Azure Arc cluster, you only need SQL as your storage provider. |
| **Storage account** | Yes | <*Azure-storage-account-name*> | The [Azure Storage account](../storage/common/storage-account-overview.md) to use for storage transactions. <br><br>This resource name must be unique across regions and have 3-24 characters with only numbers and lowercase letters. Either select an existing account or create a new account. <br><br>This example creates a storage account named **mystorageacct**. |

1. On the **Networking** tab, you can leave the default options for this example.
More workflows in your logic app raise the risk of longer load times, which negatively affect performance.
| **On** | Your logic app workflows can privately and securely communicate with endpoints in the virtual network. |
| **Off** | Your logic app workflows can't communicate with endpoints in the virtual network. |
-1. If your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app workflows.
+1. If your creation and deployment settings support using [Application Insights](../azure-monitor/app/app-insights-overview.md), you can optionally enable diagnostics logging and tracing for your logic app workflows by following these steps:
- 1. On the **Monitoring** tab, under **Application Insights**, set **Enable Application Insights** to **Yes** if not already selected.
+ 1. On the **Monitoring** tab, under **Application Insights**, set **Enable Application Insights** to **Yes**.
1. For the **Application Insights** setting, either select an existing Application Insights instance, or if you want to create a new instance, select **Create new** and provide the name that you want to use.
-1. After Azure validates your logic app's settings, on the **Review + create** tab, select **Create**, for example:
+1. After Azure validates your logic app settings, on the **Review + create** tab, select **Create**, for example:
- ![Screenshot showing Azure portal and new logic app resource settings.](./media/create-single-tenant-workflows-azure-portal/check-logic-app-resource-settings.png)
+ :::image type="content" source="media/create-single-tenant-workflows-azure-portal/check-logic-app-settings.png" alt-text="Screenshot shows Azure portal and new logic app resource settings." lightbox="media/create-single-tenant-workflows-azure-portal/check-logic-app-settings.png":::
> [!NOTE] >
More workflows in your logic app raise the risk of longer load times, which negatively affect performance.
1. On the deployment completion page, select **Go to resource** so that you can add a blank workflow.
- ![Screenshot showing Azure portal and finished deployment.](./media/create-single-tenant-workflows-azure-portal/logic-app-completed-deployment.png)
+ :::image type="content" source="media/create-single-tenant-workflows-azure-portal/logic-app-completed-deployment.png" alt-text="Screenshot showing Azure portal and finished deployment." lightbox="media/create-single-tenant-workflows-azure-portal/logic-app-completed-deployment.png":::
<a name="add-workflow"></a>
More workflows in your logic app raise the risk of longer load times, which negatively affect performance.
After you create your empty logic app resource, you have to add your first workflow.
-1. After Azure opens the resource, on your logic app resource menu, select **Workflows**. On the **Workflows** toolbar, select **Add**.
+1. After Azure opens the resource, on your logic app menu, under **Workflows**, select **Workflows**. On the **Workflows** toolbar, select **Add**.
- ![Screenshot showing logic app resource menu with Workflows selected, and on the toolbar, Add is selected.](./media/create-single-tenant-workflows-azure-portal/logic-app-add-blank-workflow.png)
+ :::image type="content" source="media/create-single-tenant-workflows-azure-portal/logic-app-add-blank-workflow.png" alt-text="Screenshot shows logic app menu with Workflows selected. The toolbar shows selected option for Add." lightbox="media/create-single-tenant-workflows-azure-portal/logic-app-add-blank-workflow.png":::
-1. After the **New workflow** pane opens, provide a name for your workflow, and select the state type, either [**Stateful** or **Stateless**](single-tenant-overview-compare.md#stateful-stateless). When you're done, select **Create**.
+1. After the **New workflow** pane opens, provide a name for your workflow, and select the state type, either [**Stateful** or **Stateless**](single-tenant-overview-compare.md#stateful-stateless). When you finish, select **Create**.
This example adds a blank stateful workflow named **Stateful-Workflow**. By default, the workflow is enabled but doesn't do anything until you add a trigger and actions.
After you create your empty logic app resource, you have to add your first workflow.
The designer surface shows a prompt to select a trigger operation. By default, the prompt is already selected so that a pane with available triggers already appears open.
-So now you'll add a trigger that starts your workflow.
+Now, add a trigger that starts your workflow.
<a name="add-trigger-actions"></a>
To debug a stateless workflow more easily, you can enable the run history for that workflow.
1. In the [Azure portal](https://portal.azure.com), open your Standard logic app resource.
-1. On the logic app's menu, under **Settings**, select **Configuration**.
+1. On the logic app menu, under **Settings**, select **Configuration**.
1. On the **Application settings** tab, select **New application setting**.
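
For a stateless workflow, the application setting to add follows the pattern **Workflows.{workflow-name}.OperationOptions** with the value **WithStatelessRunHistory**. For example, for a hypothetical workflow named **Stateless-Workflow**, the setting looks like the following sketch:

```json
{
   "name": "Workflows.Stateless-Workflow.OperationOptions",
   "value": "WithStatelessRunHistory"
}
```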
logic-apps Monitor Health Standard Workflows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/monitor-health-standard-workflows.md
+
+ Title: Monitor Standard workflows with Health Check
+description: Set up Health Check to monitor health for Standard workflows in Azure Logic Apps.
+
+ms.suite: integration
++ Last updated : 08/06/2024
+# Customer intent: As a developer, I want to monitor the health for my Standard logic app workflows in single-tenant Azure Logic Apps by setting up Health Check, which is an Azure App Service feature.
++
+# Monitor health for Standard workflows in Azure Logic Apps with Health Check (Preview)
++
+> [!NOTE]
+> This capability is in preview and is subject to the
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/).
+
+To help your Standard logic app workflows run with high availability and performance, set up the Health Check feature on your logic app to monitor workflow health. This feature makes sure that your app stays resilient by providing the following benefits:
+
+- Proactive monitoring so you can find and address issues before they impact your customers.
+
+- Increased availability by removing unhealthy instances from the load balancer in Azure.
+
+- Automatic recovery by replacing unhealthy instances.
+
+## How does Health Check work in Azure Logic Apps?
+
+Health Check is an Azure App Service platform feature that redirects requests away from unhealthy instances and replaces those instances if they stay unhealthy. For a Standard logic app, you can specify a path to a "health" workflow that you create for this purpose and for the App Service platform to ping at regular intervals. For example, the following sample shows the basic minimum workflow:
++
+After you enable Health Check, the App Service platform pings the specified workflow path for all logic app instances at 1-minute intervals. If the logic app requires scale out, Azure immediately creates a new instance. The App Service platform pings the workflow path again to make sure that the new instance is ready.
+
+If a workflow running on an instance doesn't respond to the ping after 10 requests, the App Service platform determines that the instance is unhealthy and removes the instance for that specific logic app from the load balancer in Azure. With a two-request minimum, you can specify the required number of failed requests to determine that an instance is unhealthy. For more information about overriding default behavior, see [Configuration: Monitor App Service instances using Health Check](../app-service/monitor-instances-health-check.md#configuration).
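+
+For example, you can lower the failure threshold through the **WEBSITE_HEALTHCHECK_MAXPINGFAILURES** app setting, which the App Service Health Check documentation describes. Shown here as a sketch in app-settings JSON form:
+
+```json
+{
+   "name": "WEBSITE_HEALTHCHECK_MAXPINGFAILURES",
+   "value": "2"
+}
+```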
+
+After Health Check removes the unhealthy instance, the feature continues to ping the instance. If the instance responds with a healthy status code, inclusively ranging from 200 to 299, Health Check returns the instance to the load balancer. However, if the instance remains unhealthy for one hour, Health Check replaces the instance with a new one. For more information, see [What App Service does with health checks](../app-service/monitor-instances-health-check.md#what-app-service-does-with-health-checks).
+
+## Prerequisites
+
+- An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- A Standard logic app resource with the following attributes:
+
+ - An App Service plan that is scaled to two or more instances.
+
+  - A "health" workflow that specifically runs the health check and includes the following elements:
+
+ - Starts with the **Request** trigger named **When a HTTP request is received**.
+
+   - Includes the **Request** action named **Response**. Set this action to return a status code from **200** through **299**, as shown in the sketch after this list.
+
+ You can also optionally have this workflow run other checks to make sure that dependent services are available and work as expected. As a best practice, make sure that the Health Check path monitors critical components in your workflow. For example, if your app depends on a database and messaging system, make sure that Health Check can access those components.
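+
+   For example, a minimal health workflow definition might look like the following sketch, which uses the default trigger and action names from the preceding list; this is illustrative, not exact designer output:
+
+   ```json
+   {
+      "definition": {
+         "$schema": "https://schema.management.azure.com/providers/Microsoft.Logic/schemas/2016-06-01/workflowdefinition.json#",
+         "triggers": {
+            "When_a_HTTP_request_is_received": {
+               "type": "Request",
+               "kind": "Http",
+               "inputs": {}
+            }
+         },
+         "actions": {
+            "Response": {
+               "type": "Response",
+               "kind": "Http",
+               "runAfter": {},
+               "inputs": {
+                  "statusCode": 200
+               }
+            }
+         },
+         "outputs": {}
+      },
+      "kind": "Stateful"
+   }
+   ```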
+
+## Limitations
+
+- The specified path must be fewer than 65 characters.
+
+- Changes in the specified path for Health Check cause your logic app to restart. To reduce the impact on production apps, [set up and use deployment slots](set-up-deployment-slots.md).
+
+- Health Check doesn't follow redirects for the **302** status code. So, avoid redirects, and make sure to select a valid path that exists in your app.
+
+## Set up Health Check
+
+1. In the [Azure portal](https://portal.azure.com), go to your Standard logic app resource.
+
+1. On the logic app menu, select **Diagnose and solve problems**.
+
+1. On the **Diagnose and solve problems** page, in the search box, find and select **Health Check feature**.
+
+ :::image type="content" source="media/monitor-health-standard-workflows/health-check.png" alt-text="Screenshot shows Azure portal, page for Diagnose and solve problems, search box with health check entered, and selected option for Health Check feature." lightbox="media/monitor-health-standard-workflows/health-check.png":::
+
+1. In the **Health Check feature** section, select **View Solution**.
+
+1. On the pane that opens, select **Configure and enable health check feature**.
+
+1. On the **Health check** tab, next to **Health check**, select **Enable**.
+
+1. Under **Health probe path**, in the **Path** box, enter a valid URL path for your workflow, for example:
+
+ **`/api/{workflow-name}/triggers/{request-trigger-name}/invoke?api-version=2022-05-01`**
+
+1. Save your changes. On the toolbar, select **Save**.
+
+1. In your logic app resource, update the **host.json** file by following these steps:
+
+ 1. On the logic app menu, under **Development Tools**, select **Advanced Tools** > **Go**.
+
+ 1. On the **KuduPlus** toolbar, from the **Debug console** menu, select **CMD**.
+
+ 1. Browse to the **site/wwwroot** folder, and next to the **host.json** file, select **Edit**.
+
+ 1. In the **host.json** file editor, add the **Workflows.HealthCheckWorkflowName** property and your health workflow name to enable health check authentication and authorization, for example:
+
+ ```json
+ "extensions": {
+ "workflow": {
+ "settings": {
+ "Workflows.HealthCheckWorkflowName" : "{workflow-name}"
+ }
+ }
+ }
+ ```
+
+ 1. When you finish, select **Save**.
+
+## Troubleshooting
+
+### After I set the health path, my health workflow doesn't trigger.
+
+1. On the logic app menu, select **Diagnose and solve problems**.
+
+1. Under **Troubleshooting categories**, select **Availability and Performance**.
+
+ :::image type="content" source="media/monitor-health-standard-workflows/availability-performance.png" alt-text="Screenshot shows Azure portal, page for Diagnose and solve problems, and selected option for Availability and Performance." lightbox="media/monitor-health-standard-workflows/availability-performance.png":::
+
+1. Find and review the status code section.
+
+ If the status code is **401**, check the following items:
+
+ - Confirm that the **Workflows.HealthCheckWorkflowName** property and your health workflow name appear correctly.
+
+ - Confirm that the specified path matches the workflow and **Request** trigger name.
+
+## Related content
+
+- [Monitor and collect diagnostic data for workflows](monitor-workflows-collect-diagnostic-data.md)
+- [Enable and view enhanced telemetry for Standard workflows](enable-enhanced-telemetry-standard-workflows.md)
+- [View health and performance metrics](view-workflow-metrics.md)
logic-apps Quickstart Create Example Consumption Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/quickstart-create-example-consumption-workflow.md
Title: Create example Consumption workflow using Azure portal
-description: Quickstart to create your first example Consumption logic app workflow that runs in multitenant Azure Logic Apps using the Azure portal.
-
+ Title: Create example Consumption workflow in Azure portal
+description: Learn to build your first example Consumption logic app workflow that runs in multitenant Azure Logic Apps using the Azure portal.
+ ms.suite: integration Previously updated : 06/13/2024 Last updated : 08/07/2024 #Customer intent: As a developer, I want to create my first example Consumption logic app workflow that runs in multitenant Azure Logic Apps using the Azure portal.
Last updated 06/13/2024
[!INCLUDE [logic-apps-sku-consumption](~/reusable-content/ce-skilling/azure/includes/logic-apps-sku-consumption.md)]
-To create an automated workflow that performs tasks with multiple cloud services, this quickstart shows how to create an example logic app workflow that integrates the following services, an RSS feed for a website and an email account.
+This quickstart shows how to create an example workflow that runs in multitenant Azure Logic Apps and performs tasks with multiple cloud services. The workflow checks an RSS feed for new articles, based on a specific schedule, and sends an email for each new RSS item. Specifically, you create a Consumption logic app resource and workflow that uses the following items:
-This example specifically creates a Consumption logic app resource and workflow that runs in multitenant Azure Logic Apps. The example uses the **RSS** connector and the **Office 365 Outlook** connector. The **RSS** connector provides a trigger that you can use to check an RSS feed, based on a specific schedule. The **Office 365 Outlook** connector provides an action that sends an email for each new RSS item.
+- The **RSS** connector, which provides a trigger to check an RSS feed.
+- The **Office 365 Outlook** connector, which provides an action to send email.
-The following screenshot shows the high-level example workflow:
+When you finish, your workflow looks like the following high-level example:
> [!TIP]
>
> To learn more, you can ask Azure Copilot these questions:
>
+> - *What's Azure Logic Apps?*
> - *What's a Consumption logic app workflow?*
> - *What's the RSS connector?*
> - *What's the Office 365 Outlook connector?*
>
> To find Azure Copilot, on the [Azure portal](https://portal.azure.com) toolbar, select **Copilot**.
-The connectors in this example are only two connectors among [1000+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow. While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on-premises, and hybrid environments.
-
-> [!NOTE]
-> To create a Standard logic app workflow that runs in single-tenant Azure Logic Apps instead, see
-> [Create an example Standard logic app workflow using Azure portal](create-single-tenant-workflows-azure-portal.md).
-
-As you progress through this quickstart, you'll learn the following basic high-level steps:
-
-* Create a Consumption logic app resource that is hosted in multitenant Azure Logic Apps.
-* Add a trigger that specifies when to run the workflow.
-* Add an action that performs a task after the trigger fires.
-* Run your workflow.
+The operations in this example are from two connectors among [1000+ connectors](/connectors/connector-reference/connector-reference-logicapps-connectors) that you can use in a workflow. While this example is cloud-based, Azure Logic Apps supports workflows that connect apps, data, services, and systems across cloud, on-premises, and hybrid environments.
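+
+Behind the designer, the workflow is stored as a JSON definition. The following trimmed sketch shows roughly how the RSS trigger portion might appear; the connection expression, operation path, and feed URL are illustrative assumptions, not exact designer output:
+
+```json
+"triggers": {
+   "When_a_feed_item_is_published": {
+      "type": "ApiConnection",
+      "recurrence": {
+         "frequency": "Minute",
+         "interval": 30
+      },
+      "inputs": {
+         "host": {
+            "connection": {
+               "name": "@parameters('$connections')['rss']['connectionId']"
+            }
+         },
+         "method": "get",
+         "path": "/OnNewFeed",
+         "queries": {
+            "feedUrl": "https://feeds.a.dj.com/rss/RSSMarketsMain.xml"
+         }
+      }
+   }
+}
+```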
To create and manage a Consumption logic app workflow using other tools, see the following quickstarts:
To create and manage a Consumption logic app workflow using other tools, see the following quickstarts:
* [Create and manage logic app workflows in Visual Studio](quickstart-create-logic-apps-with-visual-studio.md) * [Create and manage logic apps workflows using the Azure CLI](quickstart-logic-apps-azure-cli.md)
+To create a Standard logic app workflow that runs in single-tenant Azure Logic Apps instead, see [Create an example Standard logic app workflow using Azure portal](create-single-tenant-workflows-azure-portal.md).
+
<a name="prerequisites"></a>

## Prerequisites

* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An email account from a service that works with Azure Logic Apps, such as Office 365 Outlook or Outlook.com. For other supported email providers, review [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+* An email account from a service that works with Azure Logic Apps, such as Office 365 Outlook or Outlook.com. For other supported email providers, see [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+
+ This quickstart uses Office 365 Outlook with a work or school account. If you use a different email account, the general steps stay the same, but your UI might slightly differ. If you use Outlook.com, use your personal Microsoft account instead to sign in.
> [!NOTE] >
To create and manage a Consumption logic app workflow using other tools, see the following quickstarts:
* If you have a firewall that limits traffic to specific IP addresses, make sure that you set up your firewall to allow access for both the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses that Azure Logic Apps uses in the Azure region where you create your logic app workflow.
- This example uses the **RSS** and **Office 365 Outlook** connectors, which [run in global multitenant Azure and are managed by Microsoft](../connectors/managed.md). These connectors require that you set up your firewall to allow access for all the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in the Azure region for your logic app resource.
+ This example uses the **RSS** and **Office 365 Outlook** connectors, which [are hosted and run in global multitenant Azure and are managed by Microsoft](../connectors/managed.md). These connectors require that you set up your firewall to allow access for all the [managed connector outbound IP addresses](/connectors/common/outbound-ip-addresses) in the Azure region for your logic app resource.
<a name="create-logic-app-resource"></a> ## Create a Consumption logic app resource
-1. In the [Azure portal](https://portal.azure.com) search box, enter **logic apps**, and select **Logic apps**.
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
- :::image type="content" source="media/quickstart-create-example-consumption-workflow/find-select-logic-apps.png" alt-text="Screenshot shows Azure portal search box with the words, logic apps, and shows the selection, Logic apps." lightbox="media/quickstart-create-example-consumption-workflow/find-select-logic-apps.png":::
+1. In the Azure portal search box, enter **logic app**, and select **Logic apps**.
-1. On the **Logic apps** page toolbar, select **Add**.
+ :::image type="content" source="media/quickstart-create-example-consumption-workflow/find-select-logic-apps.png" alt-text="Screenshot shows Azure portal search box with the words, logic app, and shows the selection, Logic apps." lightbox="media/quickstart-create-example-consumption-workflow/find-select-logic-apps.png":::
-1. On the **Create Logic App** page, first select the **Plan** type for your logic app resource. That way, only the options for that plan type appear.
+1. On the **Logic apps** page toolbar, select **Add**.
- 1. In the **Plan** section, for **Plan type**, select **Consumption** to view only the Consumption logic app resource settings.
+ The **Create Logic App** page appears and shows the following options:
- The **Plan type** not only specifies the logic app resource type, but also the billing model.
+ [!INCLUDE [logic-apps-host-plans](../../includes/logic-apps-host-plans.md)]
- | Plan type | Description |
- |--|-|
- | **Standard** | This logic app resource is the default selection and supports multiple workflows. These workflows run in single-tenant Azure Logic Apps and use the [Standard billing model](logic-apps-pricing.md#standard-pricing). |
- | **Consumption** | This logic app resource type is the alternative selection and supports only a single workflow. This workflow runs in multitenant Azure Logic Apps and uses the [Consumption model for billing](logic-apps-pricing.md#consumption-pricing). |
+1. On the **Create Logic App** page, select **Consumption (Multi-tenant)**.
-1. Provide the following information for your logic app resource:
+1. On the **Basics** tab, provide the following information about your logic app resource:
| Property | Required | Value | Description |
|-|-|-|-|
- | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. |
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. <br><br>This example uses **Pay-As-You-Go**. |
| **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **Consumption-RG**. |
- | **Logic App name** | Yes | <*logic-app-resource-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). <br><br>This example creates a logic app resource named **My-Consumption-Logic-App**. |
- | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. |
- | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. <br><br>Change this option only when you want to enable diagnostic logging. For this quickstart, keep the default selection. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a logic app resource named **My-Consumption-Logic-App**. |
+ | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for your logic app. <br><br>This example uses **West US**. |
+ | **Enable log analytics** | Yes | **No** | Change this option only when you want to enable diagnostic logging. For this quickstart, keep the default selection. <br><br>**Note**: This option is available only with Consumption logic apps. |
> [!NOTE] >
To create and manage a Consumption logic app workflow using other tools, see the
:::image type="content" source="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png" alt-text="Screenshot shows Azure portal and logic app resource creation page with details for new logic app." lightbox="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png":::
-1. When you're ready, select **Review + Create**.
-
-1. On the validation page that appears, confirm all the provided information, and select **Create**.
+1. When you're ready, select **Review + create**. On the validation page that appears, confirm all the provided information, and select **Create**.
-1. After Azure successfully deploys your logic app resource, select **Go to resource**. Or, find and select your logic app resource by typing the name in the Azure search box.
+1. After Azure successfully deploys your logic app resource, select **Go to resource**. Or, find and select your logic app resource by using the Azure search box.
:::image type="content" source="media/quickstart-create-example-consumption-workflow/go-to-resource.png" alt-text="Screenshot shows the resource deployment page and selected button named Go to resource." lightbox="media/quickstart-create-example-consumption-workflow/go-to-resource.png":::
logic-apps Tutorial Build Schedule Recurring Logic App Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-build-schedule-recurring-logic-app-workflow.md
Title: Create schedule-based automated workflows
-description: Learn how to build a schedule-based automation workflow that integrates cloud services using Azure Logic Apps.
-
+description: Learn to build a schedule-based automation workflow that integrates cloud services using Azure Logic Apps.
+ ms.suite: integration + Previously updated : 06/11/2024 Last updated : 08/07/2024 # Tutorial: Create schedule-based automated workflows using Azure Logic Apps [!INCLUDE [logic-apps-sku-consumption](~/reusable-content/ce-skilling/azure/includes/logic-apps-sku-consumption.md)]
-This tutorial shows how to build an example [logic app workflow](../logic-apps/logic-apps-overview.md) that runs on a recurring schedule. Specifically, this example workflow checks the travel time, including the traffic, between two places and runs every weekday morning. If the time exceeds a specific limit, the workflow sends you an email that includes the travel time and the extra time necessary to arrive at your destination. The workflow includes various steps, which start with a schedule-based trigger followed by a Bing Maps action, a data operations action, a control flow action, and an email notification action.
+This tutorial shows how to build an example workflow that runs on a recurring schedule by using Azure Logic Apps. This example specifically creates a Consumption logic app workflow that checks the travel time, including the traffic, between two places and runs every weekday morning. If the time exceeds a specific limit, the workflow sends you an email that includes the travel time and the extra time necessary to arrive at your destination. The workflow includes various steps, which start with a schedule-based trigger followed by a Bing Maps action, a data operations action, a control flow action, and an email notification action.
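+
+For example, the schedule-based trigger portion of the workflow's underlying JSON definition might look like the following sketch; the weekday schedule, time, and time zone are illustrative assumptions:
+
+```json
+"triggers": {
+   "Recurrence": {
+      "type": "Recurrence",
+      "recurrence": {
+         "frequency": "Week",
+         "interval": 1,
+         "schedule": {
+            "weekDays": [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" ],
+            "hours": [ 7 ],
+            "minutes": [ 0 ]
+         },
+         "timeZone": "Pacific Standard Time"
+      }
+   }
+}
+```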
-In this tutorial, you learn how to:
+When you finish, your workflow looks like the following high-level example:
-> [!div class="checklist"]
->
-> * Create a logic app and blank workflow.
-> * Add a Recurrence trigger that specifies the schedule to run your workflow.
-> * Add a Bing Maps action that gets the travel time for a route.
-> * Add an action that creates a variable, converts the travel time from seconds to minutes, and stores that result in the variable.
-> * Add a condition that compares the travel time against a specified limit.
-> * Add an action that sends an email if the travel time exceeds the limit.
-When you're done, your workflow looks similar to the following high level example:
+> [!TIP]
+>
+> To learn more, you can ask Azure Copilot these questions:
+>
+> - *What's Azure Logic Apps?*
+> - *What's a Consumption logic app workflow?*
+> - *What's the Bing Maps connector?*
+> - *What's a Data Operations action?*
+> - *What's a control flow action?*
+> - *What's the Office 365 Outlook connector?*
+>
+> To find Azure Copilot, on the [Azure portal](https://portal.azure.com) toolbar, select **Copilot**.
+You can create a similar workflow with a Standard logic app resource. However, the user experience and tutorial steps vary slightly from the Consumption version.
## Prerequisites * An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An email account from an email provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review the connectors list here](/connectors/). This quickstart uses Office 365 Outlook with a work or school account. If you use a different email account, the general steps stay the same, but your UI might slightly differ.
+* An email account from an email provider that's supported by Azure Logic Apps, such as Office 365 Outlook or Outlook.com. For other supported email providers, see [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
+
+ This tutorial uses Office 365 Outlook with a work or school account. If you use a different email account, the general steps stay the same, but the user experience might slightly differ. If you use Outlook.com, use your personal Microsoft account instead to sign in.
> [!IMPORTANT]
>
When you're done, your workflow looks similar to the following high level exampl
* If your workflow needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps in the Azure region where your logic app resource exists. If your workflow also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in your logic app resource's Azure region.
-## Create a Consumption logic app workflow
+## Create a Consumption logic app resource
1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
-1. On the Azure home page, select **Create a resource**.
+1. In the Azure portal search box, enter **logic app**, and select **Logic apps**.
-1. On the Azure Marketplace menu, select **Integration** > **Logic App**.
+ :::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/find-select-logic-apps.png" alt-text="Screenshot shows Azure portal search box with logic app entered and selected option for Logic apps." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/find-select-logic-apps.png":::
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/logic-apps/create-new-logic-app-resource.png" alt-text="Screenshot shows Azure Marketplace menu with selection options for Integration and Logic App." lightbox="~/reusable-content/ce-skilling/azure/media/logic-apps/create-new-logic-app-resource.png":::
-
-1. On the **Create Logic App** pane, on the **Basics** tab, provide the following information about your logic app resource.
-
- :::image type="content" source="~/reusable-content/ce-skilling/azure/media/logic-apps/create-logic-app-settings.png" alt-text="Screenshot shows Azure portal, logic app creation pane, and info for new logic app resource." lightbox="~/reusable-content/ce-skilling/azure/media/logic-apps/create-logic-app-settings.png":::
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. This example uses **Pay-As-You-Go**. |
- | **Resource Group** | Yes | **LA-TravelTime-RG** | The [Azure resource group](../azure-resource-manager/management/overview.md) where you create your logic app resource and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
- | **Name** | Yes | **LA-TravelTime** | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). |
+1. On the **Logic apps** page toolbar, select **Add**.
-1. Before you continue making selections, go to the **Plan** section. For **Plan type**, select **Consumption** to show only the settings for a Consumption logic app workflow, which runs in multitenant Azure Logic Apps.
+ The **Create Logic App** page appears and shows the following options:
- The **Plan type** property also specifies the billing model to use.
+ [!INCLUDE [logic-apps-host-plans](../../includes/logic-apps-host-plans.md)]
- | Plan type | Description |
- |--|-|
- | **Standard** | This logic app type is the default selection and runs in single-tenant Azure Logic Apps and uses the [Standard pricing model](logic-apps-pricing.md#standard-pricing). |
- | **Consumption** | This logic app type runs in global, multitenant Azure Logic Apps and uses the [Consumption pricing model](logic-apps-pricing.md#consumption-pricing). |
+1. On the **Create Logic App** page, select **Consumption (Multi-tenant)**.
-1. Now continue with the following selections:
+1. On the **Basics** tab, provide the following information about your logic app resource:
| Property | Required | Value | Description |
|-|-|-|-|
- | **Region** | Yes | **West US** | The Azure datacenter region for storing your app's information. This example deploys the sample logic app to the **West US** region in Azure. |
- | **Enable log analytics** | Yes | **No** | This option appears and applies only when you select the **Consumption** logic app type. Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection. |
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. <br><br>This example uses **Pay-As-You-Go**. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **LA-TravelTime-RG**. |
+ | **Logic App name** | Yes | <*logic-app-resource-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a logic app resource named **LA-TravelTime**. |
+ | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for your app. <br><br>This example uses **West US**. |
+ | **Enable log analytics** | Yes | **No** | Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection. <br><br>**Note**: This option is available only with Consumption logic apps. |
-1. When you're done, select **Review + create**. After Azure validates the information about your logic app resource, select **Create**.
+ > [!NOTE]
+ >
+ > Availability zones are automatically enabled for new and existing Consumption logic app workflows in
+ > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+ > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and
+ > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
+
+ After you finish, your settings look similar to the following example:
+
+ :::image type="content" source="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png" alt-text="Screenshot shows Azure portal and creation page for multitenant Consumption logic app and details." lightbox="media/quickstart-create-example-consumption-workflow/create-logic-app-settings.png":::
-1. After Azure deploys your app, select **Go to resource**.
+1. When you finish, select **Review + create**. After Azure validates the information about your logic app resource, select **Create**.
- The Azure portal opens your Consumption logic app and the workflow designer.
+1. After Azure deploys your logic app resource, select **Go to resource**. Or, find and select your logic app resource by using the Azure search box.
-Next, add the **Schedule** trigger named **Recurrence**, which runs the workflow based on a specified schedule. Every workflow must start with a trigger, which fires when a specific event happens or when new data meets a specific condition. For more information, see [Create an example Consumption logic app workflow in multitenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md).
+Next, add the **Schedule** trigger named **Recurrence**, which runs the workflow based on a specified schedule. Every workflow must start with a trigger, which fires when a specific event happens or when new data meets a specific condition.
## Add the Recurrence trigger
-1. On the workflow designer, [follow these general steps to add the **Recurrence** trigger](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
+1. On the workflow designer, [follow these general steps to add the **Schedule** trigger named **Recurrence**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
1. Rename the **Recurrence** trigger with the following title: **Check travel time every weekday morning**.
Next, add the **Schedule** trigger named **Recurrence**, which runs the workflow
1. In the trigger information box, provide the following information:
- | Property | Value | Description |
- |-|-|-|
+ | Parameter | Value | Description |
+ |--|-|-|
| **Interval** | 1 | The number of intervals to wait between checks |
| **Frequency** | Week | The unit of time to use for the recurrence |
| **On these days** | Monday, Tuesday, Wednesday, Thursday, Friday | This setting is available only when you set the **Frequency** to **Week**. |
| **At these hours** | 7, 8, 9 | This setting is available only when you set the **Frequency** to **Week** or **Day**. For this recurrence, select the hours of the day. This example runs at the **7**, **8**, and **9**-hour marks. |
| **At these minutes** | 0, 15, 30, 45 | This setting is available only when you set the **Frequency** to **Week** or **Day**. For this recurrence, select the minutes of the day. This example starts at the zero-hour mark and runs every 15 minutes. |
- When you're done, the trigger information box appears similar to the following example:
+ When you finish, the trigger information box appears similar to the following example:
:::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/recurrence-trigger-property-values.png" alt-text="Screenshot shows week-related properties set to values described in the preceding table." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/recurrence-trigger-property-values.png":::
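   For reference, the following sketch shows roughly how this trigger appears in the workflow's underlying JSON definition, which you can open from code view. The trigger name reflects the earlier rename step; the exact JSON that the designer generates for your workflow might differ slightly:

   ```json
   "triggers": {
      "Check_travel_time_every_weekday_morning": {
         "type": "Recurrence",
         "recurrence": {
            "frequency": "Week",
            "interval": 1,
            "schedule": {
               "weekDays": [ "Monday", "Tuesday", "Wednesday", "Thursday", "Friday" ],
               "hours": [ 7, 8, 9 ],
               "minutes": [ 0, 15, 30, 45 ]
            }
         }
      }
   }
   ```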
Now that you have a trigger, add a **Bing Maps** action that gets the travel tim
1. If you don't have a Bing Maps connection, you're asked to create a connection. Provide the following connection information, and select **Create**.
- | Property | Required | Value | Description |
- |-|-|-|-|
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
| **Connection Name** | Yes | <*Bing-Maps-connection-name*> | Provide a name for your connection. This example uses **BingMapsConnection**. |
| **API Key** | Yes | <*Bing-Maps-API-key*> | Enter the Bing Maps API key that you previously received. If you don't have a Bing Maps key, learn [how to get a key](/bingmaps/getting-started/bing-maps-dev-center-help/getting-a-bing-maps-key). |
Now that you have a trigger, add a **Bing Maps** action that gets the travel tim
1. Now enter the values for the following action's properties:
- | Property | Value | Description |
- |-|-|-|
+ | Parameter | Value | Description |
+ |--|-|-|
| **Waypoint 1** | <*start-location*> | Your route's origin. This example specifies an example starting address. |
| **Waypoint 2** | <*end-location*> | Your route's destination. This example specifies an example destination address. |
| **Optimize** | timeWithTraffic | A parameter to optimize your route, such as distance, travel time with current traffic, and so on. Select the parameter value, **timeWithTraffic**. |
By default, the **Get route** action returns the current travel time with traffi
1. Provide the following action information:
- | Property | Value | Description |
- |-|-|-|
+ | Parameter | Value | Description |
+ |--|-|-|
| **Name** | travelTime | The name for your variable. This example uses `travelTime`. |
| **Type** | Integer | The data type for your variable |
| **Value** | <*initial-value*> | An expression that converts the current travel time from seconds to minutes (see the steps under this table). |
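For reference, the following sketch shows roughly how an **Initialize variable** action with this conversion appears in the underlying JSON definition. The `div(...)` expression divides the travel time by 60 to convert seconds to minutes. The **Get route** action name and its `travelDurationTraffic` output are assumptions based on this tutorial's Bing Maps steps, so treat them as placeholders for your own workflow:

```json
"Create_travelTime_variable": {
   "type": "InitializeVariable",
   "inputs": {
      "variables": [
         {
            "name": "travelTime",
            "type": "integer",
            "value": "@div(body('Get_route')?['travelDurationTraffic'], 60)"
         }
      ]
   },
   "runAfter": {
      "Get_route": [ "Succeeded" ]
   }
}
```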
Next, add a condition that checks whether the current travel time is greater tha
1. On the condition's right side, in the **Choose a value** box, enter the following value: **15**
- When you're done, the condition looks like the following example:
+ When you finish, the condition looks like the following example:
:::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/build-condition-check-travel-time.png" alt-text="Screenshot shows finished condition for comparing the travel time to the specified limit." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/build-condition-check-travel-time.png":::
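   For reference, the following sketch shows roughly how this condition appears in the underlying JSON definition, where the **greater** operator compares the **travelTime** variable against the 15-minute limit. The action name here is an assumption based on a renamed condition:

   ```json
   "If_travel_time_exceeds_limit": {
      "type": "If",
      "expression": {
         "and": [
            {
               "greater": [ "@variables('travelTime')", 15 ]
            }
         ]
      },
      "actions": {},
      "else": {
         "actions": {}
      },
      "runAfter": {}
   }
   ```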
Now, add an action that sends email when the travel time exceeds your limit. Thi
1. Enter the text **Add extra travel time (minutes):** with a trailing space. Keep your cursor in the **Body** box, and select the option for the expression editor (formula icon).
- 1. In the expression editor, enter **sub(,15)** so that you can calculate the number of minutes that exceed your limit:
+ 1. In the expression editor, enter **sub(,15)** so that you can calculate the number of minutes that exceed your limit:
:::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/send-email-body-expression-editor.png" alt-text="Screenshot shows expression editor with the sub(,15) entered." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/send-email-body-expression-editor.png":::
- 1. Within the expression, put your cursor between the left parenthesis (**(**) and the comma (**,**), and select **Dynamic content**.
+ 1. Within the expression, put your cursor between the left parenthesis (**(**) and the comma (**,**), and select **Dynamic content**.
:::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/send-email-body-select-dynamic-content.png" alt-text="Screenshot shows where to put cursor in the sub(,15) expression, and select Dynamic content." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/send-email-body-select-dynamic-content.png":::
-1. Under **Variables**, select **travelTime**.
+ 1. Under **Variables**, select **travelTime**.
:::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/send-email-body-select-travel-time.png" alt-text="Screenshot shows dynamic content list with travelTime variable selected." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/send-email-body-select-travel-time.png":::
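      After you select **travelTime**, the expression resolves to **sub(variables('travelTime'),15)**, which subtracts your 15-minute limit from the stored travel time to calculate the number of extra minutes.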
To manually start your workflow, on the designer toolbar, select **Run** > **Run
> redirect these kinds of mails. Otherwise, if you're unsure that your workflow ran correctly,
> see [Troubleshoot your workflow](../logic-apps/logic-apps-diagnosing-failures.md).
-Congratulations, you've created and run a schedule-based recurring workflow.
+Congratulations, you created and ran a schedule-based recurring workflow!
## Clean up resources
-Your workflow continues running until you disable or delete the logic app resource. When you no longer need the sample workflow, delete the resource group that contains your logic app resource and related resources.
-
-1. In the Azure portal's search box, enter the name for the resource group that you created. From the results, under **Resource Groups**, select the resource group.
+Your workflow continues running until you disable or delete the logic app resource. When you no longer need this sample, delete the resource group that contains your logic app and related resources.
- This example created the resource group named **LA-TravelTime-RG**.
+1. In the Azure portal search box, enter **resource groups**, and select **Resource groups**.
- ![Screenshot that shows the Azure search box with "la-travel-time-rg" entered and **LA-TravelTime-RG** selected.](./media/tutorial-build-scheduled-recurring-logic-app-workflow/find-resource-group.png)
-
- > [!TIP]
- >
- > If the Azure home page shows the resource group under **Recent resources**,
- > you can select the group from the home page.
+1. From the **Resource groups** list, select the resource group for this tutorial.
-1. On the resource group menu, check that **Overview** is selected. On the **Overview** pane's toolbar, select **Delete resource group**.
+1. On the resource group menu, select **Overview**.
- :::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/delete-resource-group.png" alt-text="Screenshot shows resource group's Overview pane with pane toolbar selected option for Delete resource group." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/delete-resource-group.png":::
+1. On the **Overview** page toolbar, select **Delete resource group**.
-1. In the confirmation pane that appears, enter the resource group name, and select **Delete**.
+1. When the confirmation pane appears, enter the resource group name, and select **Delete**.
## Next step
logic-apps Tutorial Process Email Attachments Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-email-attachments-workflow.md
Title: Tutorial - Create workflows with multiple Azure services
-description: Learn how to create automated workflows using Azure Logic Apps, Azure Functions, and Azure Storage.
-
+ Title: Create workflows with multiple Azure services
+description: Learn to build an automated workflow using Azure Logic Apps, Azure Functions, and Azure Storage.
+ ms.suite: integration
+
Previously updated : 04/16/2024
Last updated : 08/07/2024

# Tutorial: Create workflows that process emails using Azure Logic Apps, Azure Functions, and Azure Storage

[!INCLUDE [logic-apps-sku-consumption](~/reusable-content/ce-skilling/azure/includes/logic-apps-sku-consumption.md)]
-Azure Logic Apps helps you automate workflows and integrate data across Azure services, Microsoft services, other software-as-a-service (SaaS) apps, and on-premises systems. This tutorial shows how to build a [logic app workflow](logic-apps-overview.md) that handles incoming emails and any attachments, analyzes the email content using Azure Functions, saves the content to Azure storage, and sends email for reviewing the content.
+This tutorial shows how to build an example workflow that integrates Azure Functions and Azure Storage by using Azure Logic Apps. This example specifically creates a Consumption logic app workflow that handles incoming emails and any attachments, analyzes the email content using Azure Functions, saves the content to Azure storage, and sends email for reviewing the content.
-In this tutorial, you learn how to:
+When you finish, your workflow looks like the following high-level example:
-> [!div class="checklist"]
-> * Set up [Azure storage](../storage/common/storage-introduction.md) and Storage Explorer for checking saved emails and attachments.
-> * Create an [Azure function](../azure-functions/functions-overview.md) that removes HTML from emails. This tutorial includes the code that you can use for this function.
-> * Create a blank Consumption logic app workflow.
-> * Add a trigger that monitors emails for attachments.
-> * Add a condition that checks whether emails have attachments.
-> * Add an action that calls the Azure function when an email has attachments.
-> * Add an action that creates storage blobs for emails and attachments.
-> * Add an action that sends email notifications.
-The following screenshot shows the workflow at a high level:
+> [!TIP]
+>
+> To learn more, you can ask Azure Copilot these questions:
+>
+> - *What's Azure Logic Apps?*
+> - *What's Azure Functions?*
+> - *What's Azure Storage?*
+> - *What's a Consumption logic app workflow?*
+>
+> To find Azure Copilot, on the [Azure portal](https://portal.azure.com) toolbar, select **Copilot**.
-![Screenshot showing example high-level Consumption workflow for this tutorial.](./media/tutorial-process-email-attachments-workflow/overview.png)
+You can create a similar workflow with a Standard logic app resource where some connector operations, such as Azure Blob Storage, are also available as built-in, service provider-based operations. However, the user experience and tutorial steps vary slightly from the Consumption version.
## Prerequisites

* An Azure account and subscription. If you don't have a subscription, [sign up for a free Azure account](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
-* An email account from an email provider supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review the connectors list here](/connectors/).
+* An email account from an email provider supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other supported email providers, see [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
- This logic app workflow uses a work or school account. If you use a different email account, the general steps stay the same, but your UI might appear slightly different.
+ This example uses Office 365 Outlook with a work or school account. If you use a different email account, the general steps stay the same, but the user experience might slightly differ. If you use Outlook.com, use your personal Microsoft account instead to sign in.
> [!NOTE]
>
The following screenshot shows the workflow at a high level:
## Set up storage to save attachments
-You can save incoming emails and attachments as blobs in an [Azure storage container](../storage/common/storage-introduction.md).
+The following steps set up [Azure storage](../storage/common/storage-introduction.md) so that you can store incoming emails and attachments as blobs.
+
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account credentials.
-1. In the [Azure portal](https://portal.azure.com) with your Azure account credentials, [create a storage account](../storage/common/storage-account-create.md) unless you already have one, using the following information on the **Basics** tab:
+1. [Follow these steps to create a storage account](../storage/common/storage-account-create.md?tabs=azure-portal) unless you already have one.
- | Property | Value | Description |
- |-|-|-|
- | **Subscription** | <*Azure-subscription-name*> | The name for your Azure subscription |
- | **Resource group** | <*Azure-resource-group*> | The name for the [Azure resource group](../azure-resource-manager/management/overview.md) used to organize and manage related resources. This example uses **LA-Tutorial-RG**. <br><br>**Note**: A resource group exists inside a specific region. Although the items in this tutorial might not be available in all regions, try to use the same region when possible. |
- | **Storage account name** | <*Azure-storage-account-name*> | Your storage account name, which must have 3-24 characters and can contain only lowercase letters and numbers. This example uses **attachmentstorageacct**. |
- | **Region** | <*Azure-region*> | The region where to store information about your storage account. This example uses **West US**. |
- | **Performance** | **Standard** | This setting specifies the data types supported and media for storing data. See [Types of storage accounts](../storage/common/storage-introduction.md#types-of-storage-accounts). |
- | **Redundancy** | **Geo-redundant storage (GRS)** | This setting enables storing multiple copies of your data as protection from planned and unplanned events. For more information, see [Azure Storage redundancy](../storage/common/storage-redundancy.md). |
+ On the **Basics** tab, provide the following information:
+
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. <br><br>This example uses **Pay-As-You-Go**. |
+ | **Resource group** | Yes | <*Azure-resource-group*> | The name for the [Azure resource group](../azure-resource-manager/management/overview.md) used to organize and manage related resources. <br><br>**Note**: A resource group exists inside a specific region. Although the items in this tutorial might not be available in all regions, try to use the same region when possible. <br><br>This example uses **LA-Tutorial-RG**. |
+ | **Storage account name** | Yes | <*Azure-storage-account-name*> | Your unique storage account name, which must have 3-24 characters and can contain only lowercase letters and numbers. <br><br>This example uses **attachmentstorageacct**. |
+ | **Region** | Yes | <*Azure-region*> | The Azure data region for your storage account. <br><br>This example uses **West US**. |
+ | **Primary service** | No | <*Azure-storage-service*> | The primary storage type to use in your storage account. See [Review options for storing data in Azure](../storage/common/storage-introduction.md#review-options-for-storing-data-in-azure). |
+ | **Performance** | Yes | - **Standard** <br>- **Premium** | This setting specifies the data types supported and media for storing data. See [Storage account overview](../storage/common/storage-account-overview.md). <br><br>This example uses **Standard**. |
+ | **Redundancy** | Yes | - **Locally-redundant storage** <br>- **Geo-redundant storage (GRS)** | This setting enables storing multiple copies of your data as protection from planned and unplanned events. For more information, see [Azure Storage redundancy](../storage/common/storage-redundancy.md). <br><br>This example uses **Geo-redundant storage (GRS)**. |
To create your storage account, you can also use [Azure PowerShell](../storage/common/storage-account-create.md?tabs=powershell) or [Azure CLI](../storage/common/storage-account-create.md?tabs=azure-cli).
-1. When you're done, select **Review** > **Create**.
+1. When you're ready, select **Review + create**. After Azure validates the information about your storage account resource, select **Create**.
+
+1. After Azure deploys your storage account, select **Go to resource**. Or, find and select your storage account by using the Azure search box.
-1. After Azure deploys your storage account, find your storage account, and get the storage account's access key:
+1. Get the storage account's access key by following these steps:
- 1. On your storage account menu, under **Security + networking**, select **Access keys**.
+ 1. On the storage account menu, under **Security + networking**, select **Access keys**.
- 1. Copy your storage account name and **key1**, and save those values somewhere safe.
+ 1. Copy the storage account name and **key1**. Save these values somewhere safe to use later.
To get your storage account's access key, you can also use [Azure PowerShell](/powershell/module/az.storage/get-azstorageaccountkey)
You can save incoming emails and attachments as blobs in an [Azure storage conta
1. Create a blob storage container for your email attachments.
- 1. On your storage account menu, under **Data storage**, select **Containers**.
+ 1. On the storage account menu, under **Data storage**, select **Containers**.
1. On the **Containers** page toolbar, select **Container**.
- 1. Under **New container**, enter **attachments** as the container name. Under **Public access level**, select **Container (anonymous read access for containers and blobs)** > **OK**.
+ 1. On the **New container** pane, provide the following information:
+
+ | Property | Value | Description |
+ |-|-|-|
+ | **Name** | **attachments** | The container name. |
+ | **Anonymous access level** | **Container (anonymous read access for containers and blobs)** | The access level that allows anonymous read access to the container and its blobs. |
+
+ 1. Select **Create**.
- When you're done, the containers list now shows the new storage container.
+ After you finish, the containers list now shows the new storage container.
- To create a storage container, you can also use [Azure PowerShell](/powershell/module/az.storage/new-azstoragecontainer) or [Azure CLI](/cli/azure/storage/container#az-storage-container-create).
+To create a storage container, you can also use [Azure PowerShell](/powershell/module/az.storage/new-azstoragecontainer) or [Azure CLI](/cli/azure/storage/container#az-storage-container-create).
Next, connect Storage Explorer to your storage account. ## Set up Storage Explorer
-Now, connect Storage Explorer to your storage account so you can confirm that your workflow can correctly save attachments as blobs in your storage container.
+The following steps connect Storage Explorer to your storage account so that you can confirm that your workflow correctly saves attachments as blobs in your storage container.
1. Launch Microsoft Azure Storage Explorer. Sign in with your Azure account.

   > [!NOTE]
   >
- > If no prompt appears, on the Storage Explorer activity bar, select **Account Management** (account icon).
+ > If no prompt appears, on the Storage Explorer activity bar, select **Account Management** (profile icon).
1. In the **Select Azure Environment** window, select your Azure environment, and then select **Next**.
Now, connect Storage Explorer to your storage account so you can confirm that yo
1. In the browser window that appears, sign in with your Azure account.
-1. Return to Storage Explorer and the **Account Management** window, and check that correct Microsoft Entra tenant and subscription are selected.
+1. Return to Storage Explorer and the **Account Management** window. Confirm that the correct Microsoft Entra tenant and subscription are selected.
1. On the Storage Explorer activity bar, select **Open Connect Dialog**.
Now, connect Storage Explorer to your storage account so you can confirm that yo
1. In the **Select Connection Method** window, select **Account name and key** > **Next**.
-1. In the **Connect to Azure Storage** window, provide the following information, and select **Next**.
+1. In the **Connect to Azure Storage** window, provide the following information:
| Property | Value |
|-|-|
Now, connect Storage Explorer to your storage account so you can confirm that yo
| **Account name** | Your storage account name |
| **Account key** | The access key that you previously saved |
-1. On the **Summary** window, confirm your connection information, and then select **Connect**.
+1. For **Storage domain**, confirm that **Azure (core.windows.net)** is selected, and select **Next**.
+
+1. On the **Summary** window, confirm your connection information, and select **Connect**.
- Storage Explorer creates the connection, and shows your storage account in the Explorer window under **Emulator & Attached** > **Storage Accounts**.
+ Storage Explorer creates the connection. Your storage account appears in the Explorer window under **Emulator & Attached** > **Storage Accounts**.
-1. To find your blob storage container, under **Storage Accounts**, expand your storage account, which is **attachmentstorageacct** here, and expand **Blob Containers** where you find the **attachments** container, for example:
+1. To find your blob storage container, under **Storage Accounts**, expand your storage account, which is **attachmentstorageacct** for this example. Then expand **Blob Containers**, where you find the **attachments** container, for example:
:::image type="content" source="./media/tutorial-process-email-attachments-workflow/storage-explorer-check-contianer.png" alt-text="Screenshot showing Storage Explorer - find storage container.":::
-Next, create an [Azure function](../azure-functions/functions-overview.md) that removes HTML from incoming email.
+Next, create an Azure function app and a function that removes HTML from content.
-## Create function to remove HTML
+## Create a function app
-Now, use the code snippet provided by these steps to create an Azure function that removes HTML from each incoming email. That way, the email content is cleaner and easier to process. You can then call this function from your workflow.
+The following steps create an Azure function that your workflow calls to remove HTML from incoming email.
-1. Before you can create a function, [create a function app](../azure-functions/functions-create-function-app-portal.md) by following these steps:
+1. Before you can create a function, [create a function app by selecting the **Consumption** plan](../azure-functions/functions-create-function-app-portal.md) and following these steps:
1. On the **Basics** tab, provide the following information:
- | Property | Value | Description |
- |-|-|-|
- | **Subscription** | <*your-Azure-subscription-name*> | The same Azure subscription that you previously used |
- | **Resource Group** | **LA-Tutorial-RG** | The same Azure resource group that you previously used |
- | **Function App name** | <*function-app-name*> | Your function app's name, which must be globally unique across Azure. This example already uses **CleanTextFunctionApp**, so provide a different name, such as **MyCleanTextFunctionApp-<*your-name*>** |
- | **Do you want to deploy code or container image?** | Code | Publish code files. |
- | **Runtime stack** | <*preferred-language*> | Select a runtime that supports your favorite function programming language. In-portal editing is only available for JavaScript, PowerShell, TypeScript, and C# script. C# class library, Java, and Python functions must be [developed locally](../azure-functions/functions-develop-local.md#local-development-environments). For C# and F# functions, select **.NET**. |
- | **Version** | <*version-number*> | Select the version for your installed runtime. |
- | **Region** | <*Azure-region*> | The same region that you previously used. This example uses **West US**. |
- | **Operating System** | <*your-operating-system*> | An operating system is preselected for you based on your runtime stack selection, but you can select the operating system that supports your favorite function programming language. In-portal editing is only supported on Windows. This example selects **Windows**. |
- | [**Hosting options and plans**](../azure-functions/functions-scale.md) | **Consumption (Serverless)** | Select the hosting plan that defines how resources are allocated to your function app. In the default **Consumption** plan, resources are added dynamically as required by your functions. In this [serverless](https://azure.microsoft.com/overview/serverless-computing/) hosting, you pay only for the time your functions run. When you run in an App Service plan, you must manage the [scaling of your function app](../azure-functions/functions-scale.md). |
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | The same Azure subscription that you previously used for your storage account. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The same Azure resource group that you previously used for your storage account. <br><br>For this example, select **LA-Tutorial-RG**. |
+ | **Function App name** | Yes | <*function-app-name*> | Your function app name, which must be unique across Azure regions and can contain only letters (case insensitive), numbers (0-9), and hyphens (**-**). <br><br>This example already uses **CleanTextFunctionApp**, so provide a different name, such as **MyCleanTextFunctionApp-<*your-name*>** |
+ | **Runtime stack** | Yes | <*programming-language*> | The runtime for your preferred function programming language. For C# and F# functions, select **.NET**. <br><br>This example uses **.NET**. <br><br>In-portal editing is only available for the following languages: <br><br>- JavaScript <br>- PowerShell <br>- TypeScript <br>- C# script <br><br>You must [locally develop](../azure-functions/functions-develop-local.md#local-development-environments) any C# class library, Java, and Python functions. |
+ | **Version** | Yes | <*version-number*> | Select the version for your installed runtime. |
+ | **Region** | Yes | <*Azure-region*> | The same region that you previously used. <br><br>This example uses **West US**. |
+ | **Operating System** | Yes | <*your-operating-system*> | An operating system is preselected for you based on your runtime stack selection, but you can select the operating system that supports your favorite function programming language. In-portal editing is only supported on Windows. <br><br>This example selects **Windows**. |
1. Select **Next: Storage**. On the **Storage** tab, provide the following information:
- | Property | Value | Description |
- |-|-|-|
- | [**Storage account**](../storage/common/storage-account-create.md) | **cleantextfunctionstorageacct** | Create a storage account used by your function app. Storage account names must be between 3 and 24 characters in length and can contain only lowercase letters and numbers. <br><br>**Note:** This storage account contains your function apps and differs from your previously created storage account for email attachments. You can also use an existing account, which must meet the [storage account requirements](../azure-functions/storage-considerations.md#storage-account-requirements). |
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Storage account** | Yes | <*Azure-storage-account-name*> | Create a storage account for your function app to use. Storage account names must be between 3 and 24 characters in length and can contain only lowercase letters and numbers. <br><br>This example uses **cleantextfunctionstorageacct**. <br><br>**Note:** This storage account contains your function apps and differs from your previously created storage account for email attachments. You can also use an existing account, which must meet the [storage account requirements](../azure-functions/storage-considerations.md#storage-account-requirements). |
+
+ 1. When you finish, select **Review + create**. After Azure validates the provided information, select **Create**.
- 1. When you're done, select **Review + create**. Confirm your information, and select **Create**.
+ 1. After Azure deploys the function app resource, select **Go to resource**.
- 1. After Azure creates and deploys the function app resource, select **Go to resource**.
+## Create function to remove HTML
-1. Now [create your function locally](../azure-functions/functions-create-function-app-portal.md?pivots=programming-language-csharp#create-your-functions-locally) as function creation in the Azure portal is limited. Make sure to use the **HTTP trigger** template, provide the following information for your function, and use the included sample code, which removes HTML and returns the results to the caller:
+The following steps create an Azure function that removes HTML from each incoming email by using the sample code snippet. This function makes the email content cleaner and easier to process. You can call this function from your workflow.
- | Property | Value |
- |-|-|
+For more information, see [Create your first function in the Azure portal](../azure-functions/functions-create-function-app-portal.md?pivots=programming-language-csharp#create-function). For expanded function creation, you can also [create your function locally](../azure-functions/functions-create-function-app-portal.md?pivots=programming-language-csharp#create-your-functions-locally).
+
+1. In the [Azure portal](https://portal.azure.com), open your function app, if not already open.
+
+1. To run your function later in the Azure portal, set up your function app to explicitly accept requests from the portal. On the function app menu, under **API**, select **CORS**. Under **Allowed Origins**, enter **`https://portal.azure.com`**, and select **Save**.
+
+1. On the function app menu, select **Overview**. On the **Functions** tab, select **Create**.
+
+1. On the **Create function** pane, select **HTTP trigger: C#** > **Next**.
+
+ > [!NOTE]
+ >
+ > If you don't see the C# version, make sure that you created your function app with the **.NET** runtime stack.
+
+1. Provide the following information for your function, and select **Create**:
+
+ | Parameter | Value |
+ |--|-|
| **Function name** | **RemoveHTMLFunction** |
| **Authorization level** | **Function** |
+1. On the **Code + Test** tab, enter the following sample code, which removes HTML and returns the results to the caller.
+ ```csharp
#r "Newtonsoft.Json"
Now, use the code snippet provided by these steps to create an Azure function th
using Newtonsoft.Json;
using System.Text.RegularExpressions;
- public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
+ public static async Task<IActionResult> Run(HttpRequest req, ILogger log)
{
    log.LogInformation("HttpWebhook triggered");
Now, use the code snippet provided by these steps to create an Azure function th
updatedBody = updatedBody.Replace(@"&nbsp;", " ");

// Return cleaned text
- return (ActionResult)new OkObjectResult(new { updatedBody });
+ return (ActionResult)new OkObjectResult(new {updatedBody});
}
```
-1. To test your function, you can use the following sample input:
+1. When you finish, on the **Code + Test** toolbar, select **Save**, and then select **Test/Run**.
+
+1. On the **Test/Run** pane, on the **Input** tab, in the **Body** box, enter following sample input, and select **Run**:
- `{"name": "<p><p>Testing my function</br></p></p>"}`
+ **`{"name": "<p><p>Testing my function</br></p></p>"}`**
Your function's output looks like the following result:
- ```json
- {"updatedBody":"{\"name\": \"Testing my function\"}"}
- ```
+ **`{"updatedBody": "{\"name\": \"Testing my function\"}"}`**
After you confirm that your function works, create your logic app resource and workflow. Although this tutorial shows how to create a function that removes HTML from emails, Azure Logic Apps also provides an **HTML to Text** connector.
-## Create your logic app workflow
+## Create a Consumption logic app resource
-1. In the Azure portal's top-level search box, enter **logic apps**, and select **Logic apps**.
+1. In the Azure portal search box, enter **logic app**, and select **Logic apps**.
-1. On the **Logic apps** page, select **Add**.
+1. On the **Logic apps** page toolbar, select **Add**.
-1. On the **Create Logic App** page, under **Plan**, select **Consumption** as the plan type, which then shows only the options for Consumption logic app workflows. Provide the following information, and then select **Review + create**.
+ The **Create Logic App** page appears and shows the following options:
- | Property | Value | Description |
- |-|-|-|
- | **Subscription** | <*your-Azure-subscription-name*> | The same Azure subscription that you previously used |
- | **Resource Group** | **LA-Tutorial-RG** | The same Azure resource group that you previously used |
- | **Logic App name** | **LA-ProcessAttachment** | The name for your logic app and workflow. A Consumption logic app and workflow always have the same name. |
- | **Region** | **West US** | The same region that you previously used |
- | **Enable log analytics** | **No** | For this tutorial, keep the **Off** setting. |
+ [!INCLUDE [logic-apps-host-plans](../../includes/logic-apps-host-plans.md)]
-1. Confirm the information that you provided, and select **Create**. After Azure deploys your app, select **Go to resource**.
+1. On the **Create Logic App** page, select **Consumption (Multi-tenant)**.
-1. On the logic app resource menu, select **Logic app designer** to open the workflow designer.
+1. On the **Basics** tab, provide the following information about your logic app resource:
-## Add a trigger to check incoming email
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | The same Azure subscription that you previously used. |
+ | **Resource Group** | Yes | **LA-Tutorial-RG** | The same Azure resource group that you previously used. |
+ | **Logic App name** | Yes | <*logic-app-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a logic app resource named **LA-ProcessAttachment**. A Consumption logic app and workflow always have the same name. |
+ | **Region** | Yes | **West US** | The same region that you previously used. |
+ | **Enable log analytics** | Yes | **No** | Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection. <br><br>**Note**: This option is available only with Consumption logic apps. |
-Now, add a [trigger](logic-apps-overview.md#logic-app-concepts) that checks for incoming emails that have attachments. Every workflow must start with a trigger, which fires when the trigger condition is met, for example, a specific event happens or when new data exists. For more information, see [Quickstart: Create an example Consumption logic app workflow in multitenant Azure Logic Apps](quickstart-create-example-consumption-workflow.md).
+ > [!NOTE]
+ >
+ > Availability zones are automatically enabled for new and existing Consumption logic app workflows in
+ > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+ > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and
+ > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
+
+1. When you're ready, select **Review + create**. After Azure validates the information about your logic app resource, select **Create**.
+
+1. After Azure deploys your logic app resource, select **Go to resource**. Or, find and select your logic app resource by using the Azure search box.
-This example uses the Office 365 Outlook connector, which requires that you sign in with a Microsoft work or school account. If you're using a personal Microsoft account, use the Outlook.com connector.
+## Add a trigger to monitor incoming email
-1. On the workflow designer, select **Add a trigger**.
+The following steps add a trigger that waits for incoming emails that have attachments.
-1. After the **Add a trigger** pane opens, in the search box, enter **office 365 outlook**. From the trigger results list, under **Office 365 Outlook**, select **When a new email arrives (V3)**.
+1. On the logic app menu, under **Development Tools**, select **Logic app designer**.
-1. If you're asked for credentials, sign in to your email account, which creates a connection between your workflow and your email account.
+1. On the workflow designer, [follow these general steps to add the **Office 365 Outlook** trigger named **When a new email arrives**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
-1. Now provide the trigger criteria for checking new email and running your workflow.
+ The Office 365 Outlook connector requires that you sign in with a Microsoft work or school account. If you're using a personal Microsoft account, use the Outlook.com connector.
- | Property | Value | Description |
- |-|-|-|
+1. Sign in to your email account, which creates a connection between your workflow and your email account.
+
+1. In the trigger information box, from the **Advanced parameters** list, add the following parameters, if they don't appear, and provide the following information:
+
+ | Parameter | Value | Description |
+ |--|-|-|
| **Importance** | **Any** | Specifies the importance level of the email that you want. |
| **Only with Attachments** | **Yes** | Get only emails with attachments. <br><br>**Note:** The trigger doesn't remove any emails from your account, checking only new messages and processing only emails that match the subject filter. |
| **Include Attachments** | **Yes** | Get the attachments as input for your workflow, rather than just check for attachments. |
- | **Folder** | **Inbox** | The email folder to check |
+ | **Folder** | **Inbox** | The email folder to check. |
+ | **Subject Filter** | **Business Analyst 2 #423501** | Specifies the text to find in the email subject. |
-1. From the **Advanced parameters** list, select **Subject Filter**.
+ When you finish, the trigger looks similar to the following example:
-1. After the **Subject Filter** box appears in the action, specify the subject as described here:
-
- | Property | Value | Description |
- |-|-|-|
- | **Subject Filter** | **Business Analyst 2 #423501** | The text to find in the email subject |
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/trigger-information.png" alt-text="Screenshot shows Consumption workflow and Office 365 Outlook trigger.":::
1. Save your workflow. On the designer toolbar, select **Save**.
- Your logic app workflow is now live but doesn't do anything other check your emails. Next, add a condition that specifies criteria to continue subsequent actions in the workflow.
-
-## Check for attachments
+ Your workflow is now live but doesn't do anything other than check your emails. Next, add a condition that specifies criteria to continue subsequent actions in the workflow.
-Now add a condition that selects only emails that have attachments.
+## Add a condition to check for attachments
-1. Under the trigger, select the plus sign (**+**), and then select **Add an action**.
+The following steps add a condition that selects only emails that have attachments.
-1. On the **Add an action** pane, in the search box, enter **condition**.
+1. On the workflow designer, [follow these general steps to add the **Control** action named **Condition**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. From the action results list, select the action named **Condition**.
+1. In the **Condition** action information pane, rename the action with **If email has attachments and key subject phrase**.
-1. Rename the condition using a better description.
+1. Build a condition that checks for emails that have attachments.
- 1. On the **Condition** information pane, replace the condition's default name with the following description: **If email has attachments and key subject phrase**
+ 1. On the **Parameters** tab, in the first row under the **AND** list, select inside the left box, and then select the dynamic content list (lightning icon). From this list, in the trigger section, select the **Has Attachment** output.
-1. Create a condition that checks for emails that have attachments.
+ > [!TIP]
+ >
+ > If you don't see the **Has Attachment** output, select **See More**.
- 1. On the first row under the **And** operation list, select inside the leftmost box. From the dynamic content list that appears, select the **Has Attachment** property.
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/has-attachment.png" alt-text="Screenshot shows condition action, second row with cursor in leftmost box, open dynamic content list, and Has Attachment selected." lightbox="media/tutorial-process-email-attachments-workflow/has-attachment.png":::
- ![Screenshot showing condition action, the second row with the cursor in leftmost box, the opened dynamic content list, and Has Attachment property selected.](./media/tutorial-process-email-attachments-workflow/build-condition.png)
+ 1. In the middle box, keep the operator named **is equal to**.
- 1. In the middle box, keep the operator **is equal to**.
+ 1. In the right box, enter **true**, which is the value to compare with the **Has Attachment** output value from the trigger. If both values are equal, the email has at least one attachment, the condition passes, and the workflow continues.
- 1. In the rightmost box, enter **true**, which is the value to compare with the **Has Attachment** property value that's output from the trigger. If both values are equal, the email has at least one attachment, the condition passes, and the workflow continues.
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/finished-condition.png" alt-text="Screenshot shows complete condition." lightbox="media/tutorial-process-email-attachments-workflow/finished-condition.png":::
- ![Screenshot showing complete condition.](./media/tutorial-process-email-attachments-workflow/finished-condition.png)
-
- In your underlying workflow definition, which you can show by selecting **Code view** on the designer, the condition looks similar to the following example:
+ In your underlying workflow definition, which you can view by selecting **Code view** on the designer toolbar, the condition looks similar to the following example:
```json
"Condition": {
Now add a condition that selects only emails that have attachments.
### Test your condition
-1. On the designer toolbar, select **Run Trigger** > **Run**.
+1. On the designer toolbar, select **Run** > **Run**.
- This step manually starts and runs your workflow, but nothing will happen until the test email arrives in your inbox.
+ This step manually starts and runs your workflow, but nothing happens until you send a test email to your inbox.
1. Send yourself an email that meets the following criteria:
- * Your email's subject has the text that you specified in the trigger's **Subject filter**: `Business Analyst 2 #423501`
+ * Your email's subject has the text that you specified in the trigger's **Subject Filter**: **Business Analyst 2 #423501**
* Your email has one attachment. For now, just create one empty text file and attach that file to your email.
Now add a condition that selects only emails that have attachments.
1. To check that the trigger fired and the workflow successfully ran, on the logic app menu, select **Overview**.
- * To view successfully fired triggers, select **Trigger history**.
- * To view successfully run workflows, select **Runs history**.
+ * To view successfully fired triggers, select **Trigger history**.
+ If the trigger didn't fire, or the workflow didn't run despite a successful trigger, see [Troubleshoot your logic app workflow](logic-apps-diagnosing-failures.md).

Next, define the actions to take for the **True** branch. To save the email along with any attachments, remove any HTML from the email body, then create blobs in the storage container for the email and attachments.
Next, define the actions to take for the **True** branch. To save the email alon
> email doesn't have attachments. As a bonus exercise after you finish this tutorial,
> you can add any appropriate action that you want to take for the **False** branch.
-## Call RemoveHTMLFunction
-
-This step adds your previously created Azure function to your workflow and passes the email body content from email trigger to your function.
+## Call the RemoveHTMLFunction
-1. On the logic app menu, select **Logic app designer**. In the **True** branch, select **Add an action**.
+The following steps add your previously created Azure function, which accepts the email body content from the email trigger as input.
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **azure functions**, and select the action named **Choose an Azure function**.
+1. On the logic app menu, under **Development Tools**, select **Logic app designer**. In the **True** branch, select **Add an action**.
- ![Screenshot showing the selected action named Choose an Azure function.](./media/tutorial-process-email-attachments-workflow/add-action-azure-function.png)
+1. [Follow these general steps to add the **Azure Functions** action named **Choose an Azure function**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Select your previously created function app, which is **CleanTextFunctionApp** in this example:
+1. Select your previously created function app, which is **CleanTextFunctionApp** in this example.
-1. Now select your function, which is named **RemoveHTMLFunction** in this example.
+1. Select your function, which is named **RemoveHTMLFunction** in this example, and then select **Add Action**.
-1. Rename your function shape with the following description: **Call RemoveHTMLFunction to clean email body**
+1. In the **Azure Functions** action information pane, rename the action with **Call RemoveHTMLFunction**.
1. Now specify the input for your function to process.
- 1. Under **Request Body**, enter this text with a trailing space:
+ 1. For **Request Body**, enter the following text with a trailing space:
- `{ "emailBody":`
+ **`{ "emailBody": `**
While you work on this input in the next steps, an error about invalid JSON appears until your input is correctly formatted as JSON. When you previously tested this function, the input specified for this function used JavaScript Object Notation (JSON). So, the request body must also use the same format.
- Also, when your cursor is inside the **Request body** box, the dynamic content list appears so you can select property values available from previous actions.
+ 1. Select inside the **Request Body** box, and then select the dynamic content list (lightning icon) so that you can select outputs from previous actions.
- 1. From the dynamic content list, under **When a new email arrives**, select the **Body** property. After this property, remember to add the closing curly brace (**}**).
+ 1. From the dynamic content list, under **When a new email arrives**, select the **Body** output. After this value resolves in the **Request Body** box, remember to add the closing curly brace (**}**).
- ![Specify the request body for passing to the function](./media/tutorial-process-email-attachments-workflow/add-email-body-for-function-processing.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/add-email-body.png" alt-text="Screenshot shows Azure function information box with dynamic content list and Body selected." lightbox="media/tutorial-process-email-attachments-workflow/add-email-body.png":::
- When you're done, the input to your function looks like the following example:
+ When you finish, the Azure function looks like the following example:
- ![Finished request body to pass to your function](./media/tutorial-process-email-attachments-workflow/add-email-body-for-function-processing-2.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/add-email-body-done.png" alt-text="Screenshot shows finished Azure function with request body content to pass to your function." lightbox="media/tutorial-process-email-attachments-workflow/add-email-body-done.png":::
1. Save your workflow.
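In the underlying workflow definition (the designer's code view), this step roughly corresponds to an Azure Functions action like the following sketch. This is an assumption about the generated JSON, not a verbatim copy: the function resource ID placeholders and the action name depend on your subscription, resource group, and how you renamed the action.

```json
"Call_RemoveHTMLFunction": {
    "type": "Function",
    "inputs": {
        "body": {
            "emailBody": "@triggerBody()?['Body']"
        },
        "function": {
            "id": "/subscriptions/<subscription-ID>/resourceGroups/<resource-group>/providers/Microsoft.Web/sites/CleanTextFunctionApp/functions/RemoveHTMLFunction"
        }
    },
    "runAfter": {}
}
```

The `@triggerBody()?['Body']` expression is what the **Body** token from the dynamic content list resolves to, which is why the closing curly brace completes valid JSON around it.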
-Next, add an action that creates a blob in your storage container so you can save the email body.
+Next, add an action that creates a blob to store the email body.
+
+## Add an action to create a blob for email body
+
+The following steps create a blob that stores the email body in your storage container.
-## Create blob for email body
+1. On the designer, in the condition's **True** block, under your Azure function, select **Add an action**.
-1. On the designer, in the **True** block, under your Azure function, select **Add an action**.
+1. [Follow these general steps to add the **Azure Blob Storage** action named **Create blob**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter **create blob**, and select the action named **Create blob**.
+1. Provide the connection information for your storage account, for example:
- ![Screenshot showing the Azure Blob Storage action named Create blob selected.](./media/tutorial-process-email-attachments-workflow/create-blob-action-for-email-body.png)
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Connection Name** | Yes | <*connection-name*> | A descriptive name for the connection. <br><br>This example uses **AttachmentStorageConnection**. |
+ | **Authentication Type** | Yes | <*authentication-type*> | The authentication type to use for the connection. <br><br>This example uses **Access Key**. |
+ | **Azure Storage Account Name Or Blob Endpoint** | Yes | <*storage-account-name*> | The name for your previously created storage account. <br><br>This example uses **attachmentstorageacct**. |
+ | **Azure Storage Account Access Key** | Yes | <*storage-account-access-key*> | The access key for your previously created storage account. |
-1. Provide the connection information for your storage account, and select **Create**, for example:
+1. When you finish, select **Create New**.
- | Property | Value | Description |
- |-|-|-|
- | **Connection name** | **AttachmentStorageConnection** | A descriptive name for the connection |
- | **Authentication type** | **Access Key** | The authentication type to use for the connection |
- | **Azure Storage account name or endpoint** | <*storage-account-name*> | The name for your previously created storage account, which is **attachmentstorageacct** for this example |
- | **Azure Storage Account Access Key** | <*storage-account-access-key*> | The access key for your previously created storage account |
+1. In the **Create blob** action information pane, rename the action to **Create blob for email body**.
-1. Rename the **Create blob** action with the following description: **Create blob for email body**
+1. Provide the following action information:
-1. In the **Create blob** action, provide the following information:
+ > [!TIP]
+ >
+ > If you can't find a specified output in the dynamic content list,
+ > select **See more** next to the operation name.
- | Property | Value | Description |
- |-|-|-|
- | **Storage account name or blob endpoint** | **Use connection settings(<*storage-account-name*>)** | Select your storage account, which is **attachmentstorageacct** for this example. |
- | **Folder path** | <*path-and-container-name*> | The path and name for the container that you previously created. For this example, select the folder icon, and then select the **attachments** container. |
- | **Blob name** | <*sender-name*> | For this example, use the sender's name as the blob's name. Select inside this box so that the dynamic content list appears. From the **When a new email arrives** section, select the **From** field. |
- | **Blob content** | <*content-for-blob*> | For this example, use the HTML-free email body as the blob content. Select inside this box so that the dynamic content list appears. From the **Call RemoveHTMLFunction to clean email body** section, select **Body**. |
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Storage Account Name Or Blob Endpoint** | Yes | **Use connection settings(<*storage-account-name-or-blob-endpoint*>)** | Select the option that includes your storage account name. <br><br>This example uses **`https://attachmentstorageacct.blob.core.windows.net`**. |
+ | **Folder Path** | Yes | <*path-and-container-name*> | The path and name for the container that you previously created. <br><br>For this example, select the folder icon, and then select **attachments**. |
+ | **Blob Name** | Yes | <*sender-name*> | For this example, use the sender name as the blob name. <br><br>1. Select inside the **Blob Name** box, and then select the dynamic content list option (lightning icon). <br><br>2. From the **When a new email arrives** section, select **From**. |
+ | **Blob Content** | Yes | <*cleaned-email-body*> | For this example, use the HTML-free email body as the blob content. <br><br>1. Select inside the **Blob Content** box, and then select the dynamic content list option (lightning icon). <br><br>2. From the **Call RemoveHTMLFunction** section, select **Body**. |
- The following image shows the fields to select for the **Create blob** action:
+ The following screenshot shows the outputs to select for the **Create blob for email body** action:
- ![Screenshot showing information about the HTML-free email body in the Create blob action.](./media/tutorial-process-email-attachments-workflow/create-blob-for-email-body.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/create-blob-email-body.png" alt-text="Screenshot shows storage container, sender, and HTML-free email body in Create blob action." lightbox="media/tutorial-process-email-attachments-workflow/create-blob-email-body.png":::
- When you're done, the action looks like the following example:
+ When you finish, the action looks like the following example:
- ![Screenshot showing example HTML-free email inputs for the finished Create blob action.](./media/tutorial-process-email-attachments-workflow/create-blob-for-email-body-done.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/create-blob-email-body-done.png" alt-text="Screenshot shows example email body information for finished Create blob action." lightbox="media/tutorial-process-email-attachments-workflow/create-blob-email-body-done.png":::
1. Save your workflow.
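In code view, the finished action is an Azure Blob Storage connector call along these lines. A hedged sketch only: the `azureblob` connection name and the dataset segment of the `path` come from your own connection, so the JSON that the designer generates can differ.

```json
"Create_blob_for_email_body": {
    "type": "ApiConnection",
    "inputs": {
        "host": {
            "connection": {
                "name": "@parameters('$connections')['azureblob']['connectionId']"
            }
        },
        "method": "post",
        "path": "/v2/datasets/@{encodeURIComponent('AccountNameFromSettings')}/files",
        "queries": {
            "folderPath": "/attachments",
            "name": "@triggerBody()?['From']",
            "queryParametersSingleEncoded": true
        },
        "body": "@body('Call_RemoveHTMLFunction')"
    },
    "runAfter": {
        "Call_RemoveHTMLFunction": [ "Succeeded" ]
    }
}
```

Note how the **From** token becomes `@triggerBody()?['From']` (the blob name) while the **Body** token from the function becomes `@body('Call_RemoveHTMLFunction')` (the blob content).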
-### Check attachment handling
+### Test attachment handling
-1. On the designer toolbar, select **Run Trigger** > **Run**.
+1. On the designer toolbar, select **Run** > **Run**.
- This step manually starts and runs your workflow, but nothing will happen until the test email arrives in your inbox.
+ This step manually starts and runs your workflow, but nothing happens until you send a test email to your inbox.
1. Send yourself an email that meets the following criteria:
- * Your email's subject has the text that you specified in the trigger's **Subject filter**: `Business Analyst 2 #423501`
+ * Your email's subject has the text that you specified in the trigger's **Subject Filter** parameter: **Business Analyst 2 #423501**
- * Your email has at least one attachment. For now, just create one empty text file, and attach that file to your email.
+ * Your email has one or more attachments. For now, just create one empty text file, and attach that file to your email.
- * Your email has some test content in the body, for example: `Testing my logic app workflow`
+ * Your email has some test content in the body, for example: **Testing my logic app workflow**
If your workflow didn't trigger or run despite a successful trigger, see [Troubleshoot your logic app workflow](logic-apps-diagnosing-failures.md).
Next, add an action that creates a blob in your storage container so you can sav
At this point, only the email appears in the container because the workflow hasn't processed the attachments yet.
- ![Screenshot showing Storage Explorer with only the saved email.](./media/tutorial-process-email-attachments-workflow/storage-explorer-saved-email.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/storage-explorer-saved-email.png" alt-text="Screenshot shows Storage Explorer with only the saved email." lightbox="media/tutorial-process-email-attachments-workflow/storage-explorer-saved-email.png":::
- 1. When you're done, delete the email in Storage Explorer.
+ 1. When you finish, delete the email in Storage Explorer.
1. Optionally, to test the **False** branch, which does nothing at this time, you can send an email that doesn't meet the criteria.
-Next, add a **For each** loop to process all the email attachments.
+Next, add a **For each** loop to process each email attachment.
+
+## Add a loop to process attachments
-## Process attachments
+The following steps add a loop to process each attachment in the email.
-To process each attachment in the email, add a **For each** loop to your workflow.
+1. Return to the workflow designer. Under the **Create blob for email body** action, select **Add an action**.
-1. Return to the designer. Under the **Create blob for email body** action, select **Add an action**.
+1. [Follow these general steps to add the **Control** action named **For each**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **Built-in**. In the search box, enter **for each**, and select the action named **For each**.
+1. In the **For each** action information pane, rename the action to **For each email attachment**.
- ![Screenshot showing the selected action named For each.](./media/tutorial-process-email-attachments-workflow/select-for-each.png)
+1. Now select the content for the loop to process.
-1. Rename your loop with the following description: **For each email attachment**
+ 1. In the **For each email attachment** loop, select inside the **Select An Output From Previous Steps** box, and then select the dynamic content list option (lightning icon).
-1. Now select the data for the loop to process. In the **For each email attachment** loop, select inside the **Select an output from previous steps** box so that the dynamic content list appears. From the **When a new email arrives** section, select **Attachments**.
+ 1. From the **When a new email arrives** section, select **Attachments**.
- ![Screenshot showing dynamic content list with the selected field named Attachments.](./media/tutorial-process-email-attachments-workflow/select-attachments.png)
+ The **Attachments** output includes an array with all the attachments from an email. The **For each** loop repeats actions on each array item.
- The **Attachments** field passes in an array that contains all the attachments included with an email. The **For each** loop repeats actions on each item that's passed in with the array.
+ > [!TIP]
+ >
+ > If you don't see **Attachments**, select **See more**.
+
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/select-attachments.png" alt-text="Screenshot shows dynamic content list with selected output named Attachments." lightbox="media/tutorial-process-email-attachments-workflow/select-attachments.png":::
1. Save your workflow.
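In code view, the loop binds its `foreach` property to the trigger's **Attachments** array, so each iteration exposes one attachment through the `items()` function. A minimal sketch, assuming you kept the loop name **For each email attachment**:

```json
"For_each_email_attachment": {
    "type": "Foreach",
    "foreach": "@triggerBody()?['Attachments']",
    "actions": {},
    "runAfter": {
        "Create_blob_for_email_body": [ "Succeeded" ]
    }
}
```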
-Next, add the action that saves each attachment as a blob in your **attachments** storage container.
+Next, add an action that saves each attachment as a blob in your **attachments** storage container.
+
+## Add an action to create a blob per attachment
-## Create blob for each attachment
+The following steps add an action to create a blob for each attachment.
-1. In the designer, in the **For each email attachment** loop, select **Add an action** to specify the task to perform on each found attachment.
+1. In the designer, in the **For each email attachment** loop, select **Add an action**.
- ![Screenshot showing loop with the Add an action selected.](./media/tutorial-process-email-attachments-workflow/for-each-add-action.png)
+1. [Follow these general steps to add the **Azure Blob Storage** action named **Create blob**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter **create blob**, and select the action named **Create blob**.
+1. In the **Create blob** action information pane, rename the action to **Create blob for email attachment**.
- ![Screenshot showing the selected action named Create blob.](./media/tutorial-process-email-attachments-workflow/create-blob-action-for-attachments.png)
+1. Provide the following action information:
-1. Rename the **Create blob 2** action with the following description: **Create blob for each email attachment**
+ > [!TIP]
+ >
+ > If you can't find a specified output in the dynamic content list,
+ > select **See more** next to the operation name.
+
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **Storage Account Name Or Blob Endpoint** | Yes | **Use connection settings(<*storage-account-name-or-blob-endpoint*>)** | Select the option that includes your storage account name. <br><br>This example uses **`https://attachmentstorageacct.blob.core.windows.net`**. |
+ | **Folder Path** | Yes | <*path-and-container-name*> | The path and name for the container that you previously created. <br><br>For this example, select the folder icon, and then select **attachments**. |
+ | **Blob Name** | Yes | <*attachment-name*> | For this example, use the attachment name as the blob name. <br><br>1. Select inside the **Blob Name** box, and then select the dynamic content list option (lightning icon). <br><br>2. From the **When a new email arrives** section, select **Name**. |
+ | **Blob Content** | Yes | <*email-content*> | For this example, use the email content as the blob content. <br><br>1. Select inside the **Blob Content** box, and then select the dynamic content list option (lightning icon). <br><br>2. From the **When a new email arrives** section, select **Content**. |
-1. In the **Create blob for each email attachment** action, provide the following information:
+ > [!NOTE]
+ >
+ > If you select an output that has an array, such as the **Content** output, which is an array
+ > that includes attachments, the designer automatically adds a **For each** loop around the action
+ > that references that output. That way, your workflow can perform that action on each array item.
+ > To remove the loop, move the action that references the output outside the loop, and then delete the loop.
- | Property | Value | Description |
- |-|-|-|
- | **Storage account name or blob endpoint** | **Use connection settings(<*storage-account-name*>)** | Select your storage account, which is **attachmentstorageacct** for this example. |
- | **Folder path** | <*path-and-container-name*> | The path and name for the container that you previously created. For this example, select the folder icon, and then select the **attachments** container. |
- | **Blob name** | <*attachment-name*> | For this example, use the attachment's name as the blob's name. Select inside this box so that the dynamic content list appears. From the **When a new email arrives** section, select the **Name** field. |
- | **Blob content** | <*email-content*> | For this example, use the email content as the blob content. Select inside this box so that the dynamic content list appears. From the **When a new email arrives** section, select **Content**. |
+ The following screenshot shows the outputs to select for the **Create blob for email attachment** action:
- ![Screenshot showing information about the attachment in the Create blob action.](./media/tutorial-process-email-attachments-workflow/create-blob-per-attachment.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/create-blob-per-attachment.png" alt-text="Screenshot shows storage container and attachment information in Create blob action." lightbox="media/tutorial-process-email-attachments-workflow/create-blob-per-attachment.png":::
- When you're done, the action looks like the following example:
+ When you finish, the action looks like the following example:
- ![Screenshot showing example attachment information for the finished Create blob action.](./media/tutorial-process-email-attachments-workflow/create-blob-per-attachment-done.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/create-blob-per-attachment-done.png" alt-text="Screenshot shows example attachment information for finished Create blob action." lightbox="media/tutorial-process-email-attachments-workflow/create-blob-per-attachment-done.png":::
1. Save your workflow.
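With the **Create blob** action inside the loop, code view resolves the **Name** and **Content** tokens to `items()` expressions over the current attachment, roughly as follows. The `ContentBytes` property name is an assumption based on the Office 365 Outlook attachment shape; check your own code view for the exact property, and expect the connection and path details to match your earlier blob action.

```json
"For_each_email_attachment": {
    "type": "Foreach",
    "foreach": "@triggerBody()?['Attachments']",
    "actions": {
        "Create_blob_for_email_attachment": {
            "type": "ApiConnection",
            "inputs": {
                "host": {
                    "connection": {
                        "name": "@parameters('$connections')['azureblob']['connectionId']"
                    }
                },
                "method": "post",
                "path": "/v2/datasets/@{encodeURIComponent('AccountNameFromSettings')}/files",
                "queries": {
                    "folderPath": "/attachments",
                    "name": "@items('For_each_email_attachment')?['Name']"
                },
                "body": "@items('For_each_email_attachment')?['ContentBytes']"
            }
        }
    }
}
```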
-### Check attachment handling
+### Retest attachment handling
-1. On the designer toolbar, select **Run Trigger** > **Run**.
+1. On the designer toolbar, select **Run** > **Run**.
- This step manually starts and runs your workflow, but nothing will happen until the test email arrives in your inbox.
+ This step manually starts and runs your workflow, but nothing happens until you send a test email to your inbox.
1. Send yourself an email that meets the following criteria:
- * Your email's subject has the text that you specified in the trigger's **Subject filter** property: `Business Analyst 2 #423501`
+ * Your email's subject has the text that you specified in the trigger's **Subject Filter** parameter: **Business Analyst 2 #423501**
- * Your email has at least two attachments. For now, just create two empty text files and attach those files to your email.
+ * Your email has two or more attachments. For now, just create two empty text files and attach those files to your email.
If your workflow didn't trigger or run despite a successful trigger, see [Troubleshoot your logic app workflow](logic-apps-diagnosing-failures.md).
Next, add the action that saves each attachment as a blob in your **attachments*
1. Check the **attachments** container for both the email and the attachments.
- ![Screenshot showing Storage Explorer and saved email and attachments.](./media/tutorial-process-email-attachments-workflow/storage-explorer-saved-attachments.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/storage-explorer-saved-attachments.png" alt-text="Screenshot shows Storage Explorer and saved email and attachments." lightbox="media/tutorial-process-email-attachments-workflow/storage-explorer-saved-attachments.png":::
- 1. When you're done, delete the email and attachments in Storage Explorer.
+ 1. When you finish, delete the email and attachments in Storage Explorer.
-Next, add an action so that your workflow sends email to review the attachments.
+Next, add an action in your workflow that sends email to review the attachments.
-## Send email notifications
+## Add an action to send email
-1. Return to the designer. In the **True** branch, collapse the **For each email attachment** loop.
+The following steps add an action so that your workflow sends email to review the attachments.
-1. Under the loop, select **Add an action**.
+1. Return to the workflow designer. In the **True** branch, under the **For each email attachment** loop, select **Add an action**.
- ![Screenshot showing the collapsed for each loop. Under the loop, the Add an action option is selected.](./media/tutorial-process-email-attachments-workflow/add-action-send-email.png)
-
-1. Under the **Choose an operation** search box, select **Standard**. In the search box, enter **send email**.
-
-1. From the actions list, select the send email action for your email provider. To filter the actions list based on a specific connector, you can select the connector first.
+1. [Follow these general steps to add the **Office 365 Outlook** action named **Send an email**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
This example continues with the Office 365 Outlook connector, which works only with an Azure work or school account. For personal Microsoft accounts, select the Outlook.com connector.
- ![Screenshot showing the Office 365 Outlook send email action selected.](./media/tutorial-process-email-attachments-workflow/add-action-select-send-email.png)
- 1. If you're asked for credentials, sign in to your email account so that Azure Logic Apps creates a connection to your email account.
-1. Rename the **Send an email** action with the following description: **Send email for review**
-
-1. Provide the following action information and select the fields to include in the email.
+1. In the **Send an email** action information pane, rename the action to **Send email for review**.
- * To add blank lines in an edit box, press Shift + Enter.
- * If you can't find an expected field in the dynamic content list, select **See more** next to **When a new email arrives**.
+1. Provide the following action information and select the outputs to include in the email:
- | Property | Value | Description |
- |-|-|-|
- | **To** | <*recipient-email-address*> | For testing purposes, you can use your own email address. |
- | **Subject** | ```ASAP - Review applicant for position:``` **Subject** | The email subject that you want to include. Click inside this box, enter the example text, and from the dynamic content list, select the **Subject** field under **When a new email arrives**. |
- | **Body** | ```Please review new applicant:``` <p>```Applicant name:``` **From** <p>```Application file location:``` **Path** <p>```Application email content:``` **Body** | The email's body content. Click inside this box, enter the example text, and from the dynamic content list, select these fields: <p>- The **From** field under **When a new email arrives** </br>- The **Path** field under **Create blob for email body** </br>- The **Body** field under **Call RemoveHTMLFunction to clean email body** |
+ > [!TIP]
+ >
+ > If you can't find a specified output in the dynamic content list,
+ > select **See more** next to the operation name.
- ![Screenshot showing the sample email to send.](./media/tutorial-process-email-attachments-workflow/send-email-notification.png)
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **To** | Yes | <*recipient-email-address*> | For testing purposes, use your own email address. |
+ | **Subject** | Yes | <*email-subject*> | The email subject to include. <br><br>This example uses **ASAP - Review applicant for position:**, and the **Subject** output from the trigger. <br><br>1. In the **Subject** box, enter the example text with a trailing space. <br><br>2. Select inside the **Subject** box, and then select the dynamic content list option (lightning icon). <br><br>3. In the list, under **When a new email arrives**, select **Subject**. |
+ | **Body** | Yes | <*email-body*> | The email body to include. <br><br>The example uses **Please review new applicant:**, the trigger output named **From**, the **Path** output from the **Create blob for email body** action, and the **Body** output from your **Call RemoveHTMLFunction** action. <br><br>1. In the **Body** box, enter the example text, **Please review new applicant:**. <br><br>2. On a new line, enter the example text, **Applicant name:**, and add the **From** output from the trigger. <br><br>3. On a new line, enter the example text, **Application file location:**, and add the **Path** output from the **Create blob for email body** action. <br><br>4. On a new line, enter the example text, **Application email content:**, and add the **Body** output from the **Call RemoveHTMLFunction** action. |
> [!NOTE]
>
- > If you select a field that contains an array, such as the **Content** field, which is an array
- > that contains attachments, the designer automatically adds a **For each** loop around the action
- > that references that field. That way, your workflow can perform that action on each array item.
- > To remove the loop, remove the field for the array, move the referencing action to outside the loop,
- > select the ellipses (**...**) on the loop's title bar, and select **Delete**.
+ > If you select an output that has an array, such as the **Content** output, which is an array
+ > that includes attachments, the designer automatically adds a **For each** loop around the action
+ > that references that output. That way, your workflow can perform that action on each array item.
+ > To remove the loop, move the action that references the output outside the loop, and then delete the loop.
+
+ The following screenshot shows the finished **Send an email** action:
+
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/send-email-done.png" alt-text="Screenshot shows sample email to send." lightbox="media/tutorial-process-email-attachments-workflow/send-email-done.png":::
1. Save your workflow.
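In code view, the example text and the selected tokens combine into interpolated strings on an Office 365 Outlook **Send an email** action, similar to this sketch. The recipient address is a placeholder, and the exact line breaks in the `Body` value depend on how you entered the text in the designer.

```json
"Send_email_for_review": {
    "type": "ApiConnection",
    "inputs": {
        "host": {
            "connection": {
                "name": "@parameters('$connections')['office365']['connectionId']"
            }
        },
        "method": "post",
        "path": "/v2/Mail",
        "body": {
            "To": "<recipient-email-address>",
            "Subject": "ASAP - Review applicant for position: @{triggerBody()?['Subject']}",
            "Body": "Please review new applicant:\nApplicant name: @{triggerBody()?['From']}\nApplication file location: @{body('Create_blob_for_email_body')?['Path']}\nApplication email content: @{body('Call_RemoveHTMLFunction')}"
        }
    }
}
```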
-Now, test your workflow, which now looks like the following example:
+Your finished workflow now looks like the following example:
-![Screenshot showing the finished workflow.](./media/tutorial-process-email-attachments-workflow/complete.png)
-## Run your workflow
+## Test your workflow
1. Send yourself an email that meets the following criteria:
- * Your email's subject has the text that you specified in the trigger's **Subject filter** property: `Business Analyst 2 #423501`
+ * Your email's subject has the text that you specified in the trigger's **Subject Filter** parameter: **Business Analyst 2 #423501**
* Your email has one or more attachments. You can reuse an empty text file from your previous test. For a more realistic scenario, attach a resume file.
Now, test your workflow, which now looks like the following example:
1. Run your workflow. If successful, your workflow sends you an email that looks like the following example:
- ![Screenshot showing example email sent by logic app workflow.](./media/tutorial-process-email-attachments-workflow/email-notification.png)
+ :::image type="content" source="media/tutorial-process-email-attachments-workflow/email-notification.png" alt-text="Screenshot shows example email sent by logic app workflow." lightbox="media/tutorial-process-email-attachments-workflow/email-notification.png":::
- If you don't get any emails, check your email's junk folder. Your email junk filter might redirect these kinds of mails. Otherwise, if you're unsure that your workflow ran correctly, see [Troubleshoot your logic app workflow](logic-apps-diagnosing-failures.md).
+ If you don't get any emails, check your email's junk folder. Otherwise, if you're unsure that your workflow ran correctly, see [Troubleshoot your logic app workflow](logic-apps-diagnosing-failures.md).
-Congratulations, you've now created and run a workflow that automates tasks across different Azure services and calls some custom code.
+Congratulations, you created and ran a workflow that automates tasks across different Azure services and calls some custom code!
## Clean up resources
-When you no longer need this sample, delete the resource group that contains your logic app workflow and related resources.
+Your workflow continues running until you disable or delete the logic app resource. When you no longer need this sample, delete the resource group that contains your logic app and related resources.
+
+1. In the Azure portal search box, enter **resource groups**, and select **Resource groups**.
-1. In the Azure portal's top-level search box, enter **resource groups**, and select **Resource groups**.
+1. From the **Resource groups** list, select the resource group for this tutorial.
-1. From the **Resource groups** list, select the resource group for this tutorial.
+1. On the resource group menu, select **Overview**.
-1. On the resource group's **Overview** page toolbar, select **Delete resource group**.
+1. On the **Overview** page toolbar, select **Delete resource group**.
1. When the confirmation pane appears, enter the resource group name, and select **Delete**.
logic-apps Tutorial Process Mailing List Subscriptions Workflow https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/tutorial-process-mailing-list-subscriptions-workflow.md
Title: Build approval-based automated workflows
-description: Tutorial - Create an approval-based automated workflow that processes mailing list subscriptions using Azure Logic Apps.
-
+ Title: Create approval-based automated workflows
+description: Learn to build an automated approval-based workflow that processes mailing list subscriptions using Azure Logic Apps.
+ ms.suite: integration
Previously updated : 01/04/2024
Last updated : 08/09/2024
-# Tutorial: Create automated approval-based workflows by using Azure Logic Apps
+# Tutorial: Create approval-based workflows using Azure Logic Apps
[!INCLUDE [logic-apps-sku-consumption](~/reusable-content/ce-skilling/azure/includes/logic-apps-sku-consumption.md)]
-This tutorial shows how to build an example [logic app workflow](../logic-apps/logic-apps-overview.md) that automates an approval-based tasks. Specifically, this example workflow app processes subscription requests for a mailing list that's managed by the [MailChimp](https://mailchimp.com/) service. This workflow includes various steps, which start by monitoring an email account for requests, sends these requests for approval, checks whether or not the request gets approval, adds approved members to the mailing list, and confirms whether or not new members get added to the list.
+This tutorial shows how to build an example workflow that automates an approval-based task by using Azure Logic Apps. This example specifically creates a Consumption logic app workflow that processes subscription requests for a mailing list that's managed by [MailChimp](https://mailchimp.com/).
-In this tutorial, you learn how to:
+The workflow starts by monitoring an email account for requests, sends received requests for approval, checks whether the requests get approval, adds approved members to the mailing list, and confirms whether new members are added to the list.
-> [!div class="checklist"]
->
-> * Create a blank logic app.
-> * Add a trigger that monitors emails for subscription requests.
-> * Add an action that sends emails for approving or rejecting these requests.
-> * Add a condition that checks the approval response.
-> * Add an action that adds approved members to the mailing list.
-> * Add a condition that checks whether these members successfully joined the list.
-> * Add an action that sends emails confirming whether these members successfully joined the list.
+When you finish, your workflow looks like the following high-level example:
+
-When you're done, your workflow looks like this version at a high level:
+> [!TIP]
+>
+> To learn more, you can ask Azure Copilot these questions:
+>
+> - *What's Azure Logic Apps?*
+> - *What's a Consumption logic app workflow?*
+>
+> To find Azure Copilot, on the [Azure portal](https://portal.azure.com) toolbar, select **Copilot**.
-![High-level finished logic app overview](./media/tutorial-process-mailing-list-subscriptions-workflow/tutorial-high-level-overview.png)
+You can create a similar workflow with a Standard logic app resource where some connector operations, such as Azure Blob Storage, are also available as built-in, service provider-based operations. However, the user experience and tutorial steps vary slightly from the Consumption version.
## Prerequisites
When you're done, your workflow looks like this version at a high level:
* A MailChimp account where you previously created a list named "test-members-ML" where your logic app can add email addresses for approved members. If you don't have an account, [sign up for a free account](https://login.mailchimp.com/signup/), and then learn [how to create a MailChimp list](https://us17.admin.mailchimp.com/lists/#).
-* An email account from an email provider that's supported by Azure Logic Apps, such as Office 365 Outlook, Outlook.com, or Gmail. For other providers, [review the connectors list here](/connectors/). This quickstart uses Office 365 Outlook with a work or school account. If you use a different email account, the general steps stay the same, but your UI might slightly differ.
+* An email account in Office 365 Outlook or Outlook.com, which supports approval workflows. For other email providers, see [Connectors for Azure Logic Apps](/connectors/connector-reference/connector-reference-logicapps-connectors).
-* An email account in Office 365 Outlook or Outlook.com, which supports approval workflows. This tutorial uses Office 365 Outlook. If you use a different email account, the general steps stay the same, but your UI might appear slightly different.
+ This tutorial uses Office 365 Outlook with a work or school account. If you use a different email account, the general steps stay the same, but the user experience might slightly differ. If you use Outlook.com, sign in with your personal Microsoft account instead.
-* If your logic app workflow needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps in the Azure region where your logic app resource exists. If your logic app also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in your logic app's Azure region.
-
-## Create your logic app resource
+ > [!IMPORTANT]
+ >
+ > If you want to use the Gmail connector, only G-Suite business accounts can use this connector without restriction in logic app workflows.
+ > If you have a Gmail consumer account, you can use this connector with only specific Google-approved services, or you can
+ > [create a Google client app to use for authentication with your Gmail connector](/connectors/gmail/#authentication-and-bring-your-own-application).
+ > For more information, see [Data security and privacy policies for Google connectors in Azure Logic Apps](../connectors/connectors-google-data-security-privacy-policy.md).
-1. Sign in to the [Azure portal](https://portal.azure.com) with your Azure account credentials. On the Azure home page, select **Create a resource**.
-
-1. On the Azure Marketplace menu, select **Integration** > **Logic App**.
+* If your logic app workflow needs to communicate through a firewall that limits traffic to specific IP addresses, that firewall needs to allow access for *both* the [inbound](logic-apps-limits-and-config.md#inbound) and [outbound](logic-apps-limits-and-config.md#outbound) IP addresses used by Azure Logic Apps in the Azure region where your logic app resource exists. If your logic app also uses [managed connectors](../connectors/managed.md), such as the Office 365 Outlook connector or SQL connector, or uses [custom connectors](/connectors/custom-connectors/), the firewall also needs to allow access for *all* the [managed connector outbound IP addresses](logic-apps-limits-and-config.md#outbound) in your logic app's Azure region.
- ![Screenshot that shows Azure Marketplace menu with "Integration" and "Logic App" selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/create-new-logic-app-resource.png)
+## Create a Consumption logic app resource
-1. On the **Logic App** pane, provide the information described here about the logic app resource that you want to create.
+1. In the [Azure portal](https://portal.azure.com), sign in with your Azure account.
- ![Screenshot that shows the Logic App creation pane and the info to provide for the new logic app.](./media/tutorial-process-mailing-list-subscriptions-workflow/create-logic-app-settings.png)
+1. In the Azure portal search box, enter **logic app**, and select **Logic apps**.
- | Property | Value | Description |
- |-|-|-|
- | **Subscription** | <*Azure-subscription-name*> | Your Azure subscription name. This example uses `Pay-As-You-Go`. |
- | **Resource group** | LA-MailingList-RG | The name for the [Azure resource group](../azure-resource-manager/management/overview.md), which is used to organize related resources. This example creates a new resource group named `LA-MailingList-RG`. |
- | **Name** | LA-MailingList | Your logic app's name, which can contain only letters, numbers, hyphens (`-`), underscores (`_`), parentheses (`(`, `)`), and periods (`.`). This example uses `LA-MailingList`. |
- | **Location** | West US | The region where to store your logic app information. This example uses `West US`. |
- | **Plan type** | Consumption |
- | **Log Analytics** | Off | Keep the **Off** setting for diagnostic logging. |
+ :::image type="content" source="media/tutorial-build-scheduled-recurring-logic-app-workflow/find-select-logic-apps.png" alt-text="Screenshot shows Azure portal search box with logic app entered and selected option for Logic apps." lightbox="media/tutorial-build-scheduled-recurring-logic-app-workflow/find-select-logic-apps.png":::
-1. When you're done, select **Review + create**. After Azure validates the information about your logic app, select **Create**.
+1. On the **Logic apps** page toolbar, select **Add**.
-1. After Azure deploys your app, select **Go to resource**.
+ The **Create Logic App** page appears and shows the following options:
- Azure opens the template selection pane, which shows an introduction video, commonly used triggers, and logic app template patterns.
+ [!INCLUDE [logic-apps-host-plans](../../includes/logic-apps-host-plans.md)]
-1. Scroll down past the video and common triggers sections to the **Templates** section, and select **Blank Logic App**.
+1. On the **Create Logic App** page, select **Consumption (Multi-tenant)**.
- ![Screenshot that shows the Logic Apps template selection pane with "Blank Logic App" selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/select-logic-app-template.png)
+1. On the **Basics** tab, provide the following information about your logic app resource:
-Next, add an Outlook [trigger](../logic-apps/logic-apps-overview.md#logic-app-concepts) that listens for incoming emails with subscription requests. Each logic app must start with a trigger, which fires when a specific event happens or when new data meets a specific condition. For more information, see [Quickstart: Create an example Consumption logic app workflow in multi-tenant Azure Logic Apps](../logic-apps/quickstart-create-example-consumption-workflow.md).
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **Subscription** | Yes | <*Azure-subscription-name*> | Your Azure subscription name. <br><br>This example uses **Pay-As-You-Go**. |
+ | **Resource Group** | Yes | <*Azure-resource-group-name*> | The [Azure resource group](../azure-resource-manager/management/overview.md#terminology) where you create your logic app and related resources. This name must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a resource group named **LA-MailingList-RG**. |
+ | **Logic App name** | Yes | <*logic-app-resource-name*> | Your logic app resource name, which must be unique across regions and can contain only letters, numbers, hyphens (**-**), underscores (**_**), parentheses (**()**), and periods (**.**). <br><br>This example creates a logic app resource named **LA-MailingList**. |
+ | **Region** | Yes | <*Azure-region*> | The Azure datacenter region for your app. <br><br>This example uses **West US**. |
+ | **Enable log analytics** | Yes | **No** | Change this option only when you want to enable diagnostic logging. For this tutorial, keep the default selection. <br><br>**Note**: This option is available only with Consumption logic apps. |
-## Add trigger to monitor emails
+ > [!NOTE]
+ >
+ > Availability zones are automatically enabled for new and existing Consumption logic app workflows in
+ > [Azure regions that support availability zones](../reliability/availability-zones-service-support.md#azure-regions-with-availability-zone-support).
+ > For more information, see [Reliability in Azure Functions](../reliability/reliability-functions.md#availability-zone-support) and
+ > [Protect logic apps from region failures with zone redundancy and availability zones](set-up-zone-redundancy-availability-zones.md).
-1. In the workflow designer search box, enter `when email arrives`, and select the trigger named **When a new email arrives**.
+ After you finish, your settings look similar to the following example:
- * For Azure work or school accounts, select **Office 365 Outlook**.
- * For personal Microsoft accounts, select **Outlook.com**.
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/create-logic-app-settings.png" alt-text="Screenshot shows Azure portal and creation page for multitenant Consumption logic app and details." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/create-logic-app-settings.png":::
- This example continues by selecting Office 365 Outlook.
+1. When you finish, select **Review + create**. After Azure validates the information about your logic app resource, select **Create**.
- ![Screenshot that shows the Logic Apps Designer search box that contains the "when email arrives" search term, and the "When a new email arrives" trigger appears selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-trigger-new-email.png)
+1. After Azure deploys your logic app resource, select **Go to resource**. Or, find and select your logic app resource by using the Azure search box.
-1. If you don't already have a connection, sign in and authenticate access to your email account when prompted.
+## Add a trigger to check emails
- Azure Logic Apps creates a connection to your email account.
+The following steps add a trigger that waits for incoming emails that have subscription requests.
-1. In the trigger, provide the criteria for checking new email.
+1. On the logic app menu, under **Development Tools**, select **Logic app designer**.
- 1. Specify the folder for checking emails, and keep the other properties set to their default values.
+1. On the workflow designer, [follow these general steps to add the **Office 365 Outlook** trigger named **When a new email arrives**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-trigger).
- ![Screenshot that shows the designer with the "When a new email arrives" action and "Folder" set to "Inbox".](./media/tutorial-process-mailing-list-subscriptions-workflow/add-trigger-set-up-email.png)
+ The Office 365 Outlook connector requires that you sign in with a Microsoft work or school account. If you're using a personal Microsoft account, use the Outlook.com connector.
- 1. Add the trigger's **Subject Filter** property so that you can filter emails based on the subject line. Open the **Add new parameter** list, and select **Subject Filter**.
+1. Sign in to your email account, which creates a connection between your workflow and your email account.
- ![Screenshot that shows the opened "Add new parameter" list with "Subject Filter" selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-trigger-add-properties.png)
+1. In the trigger information box, if the following parameters don't appear, add them from the **Advanced parameters** list, and then provide the following values:
- For more information about this trigger's properties, see the [Office 365 Outlook connector reference](/connectors/office365/) or the [Outlook.com connector reference](/connectors/outlook/).
+ | Parameter | Value | Description |
+ |--|-|-|
+ | **Importance** | **Any** | Specifies the importance level of the email that you want. |
+ | **Folder** | **Inbox** | The email folder to check. |
+ | **Subject Filter** | **subscribe-test-members-ML** | Specifies the text to find in the email subject and filters emails based on the subject line. |
- 1. After the property appears in the trigger, enter this text: `subscribe-test-members-ML`
+ > [!NOTE]
+ >
+ > When you select inside some edit boxes, the options for the dynamic content list (lightning icon)
+ > and expression editor (function icon) appear, which you can ignore for now.
- ![Screenshot that shows the "Subject Filter" property with the text "subscribe-test-members-ML" entered.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-trigger-subject-filter-property.png)
+ For more information about this trigger's properties, see the [Office 365 Outlook connector reference](/connectors/office365/) or the [Outlook.com connector reference](/connectors/outlook/).
-1. To hide the trigger's details for now, collapse the shape by clicking inside the shape's title bar.
+ When you finish, the trigger looks similar to the following example:
- ![Screenshot that shows the collapsed trigger shape.](./media/tutorial-process-mailing-list-subscriptions-workflow/collapse-trigger-shape.png)
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/trigger-information.png" alt-text="Screenshot shows Consumption workflow with trigger named When a new email arrives." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/trigger-information.png":::
-1. Save your logic app workflow. On the designer toolbar, select **Save**.
+1. Save your workflow. On the designer toolbar, select **Save**.
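Behind the designer, this trigger is a polling Office 365 Outlook operation whose query parameters map to the values in the preceding table. A rough, assumed sketch of the code view, using a three-minute polling interval as an illustration; the recurrence and path on your generated trigger might differ:

```json
"triggers": {
    "When_a_new_email_arrives": {
        "type": "ApiConnection",
        "recurrence": {
            "frequency": "Minute",
            "interval": 3
        },
        "inputs": {
            "host": {
                "connection": {
                    "name": "@parameters('$connections')['office365']['connectionId']"
                }
            },
            "method": "get",
            "path": "/v2/Mail/OnNewEmail",
            "queries": {
                "folderPath": "Inbox",
                "importance": "Any",
                "subjectFilter": "subscribe-test-members-ML"
            }
        }
    }
}
```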
-Your logic app is now live but doesn't do anything other than check your incoming email. So, add an action that responds when the trigger fires.
+Your workflow is now live but doesn't do anything other than check your emails. Next, add an action that responds when the trigger fires.
-## Send approval email
+## Add an action to send approval email
-Now that you have a trigger, add an [action](../logic-apps/logic-apps-overview.md#logic-app-concepts) that sends an email to approve or reject the request.
+The following steps add an action that sends an email to approve or reject the request.
-1. In the workflow designer, under the **When a new email arrives** trigger, select **New step**.
+1. On the designer, under the trigger named **When a new email arrives**, [follow these general steps to add the **Office 365 Outlook** action named **Send approval email**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under **Choose an operation**, in the search box, enter `send approval`, and select the action named **Send approval email**.
+1. For the **Send approval email** action, provide the following information:
- ![Screenshot that shows the "Choose an operation" list filtered by "approval" actions, and the "Send approval email" action selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-send-approval-email.png)
+ | Property | Required | Value | Description |
+ |-|-|-|-|
+ | **To** | Yes | <*approver-email-address*> | The approver's email address. For testing, use your own address. |
+ | **Subject** | No | <*email-subject*> | A descriptive email subject. <br><br>This example uses **Approve member request for test-members-ML**. |
-1. Now enter the values for the specified properties shown and described here. leaving all the others at their default values. For more information about these properties, see the [Office 365 Outlook connector reference](/connectors/office365/) or the [Outlook.com connector reference](/connectors/outlook/).
+ For more information about these properties, see the [Office 365 Outlook connector reference](/connectors/office365/) or the [Outlook.com connector reference](/connectors/outlook/).
- ![Screenshot that shows the "Send approval email" properties](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-approval-email-settings.png)
+ When you finish, the **Send approval email** action looks like the following example:
- | Property | Value | Description |
- |-|-|-|
- | **To** | <*approval-email-address*> | The approver's email address. For testing purposes, you can use your own address. This example uses the fictional `sophiaowen@fabrikam.com` email address. |
- | **Subject** | `Approve member request for test-members-ML` | A descriptive email subject |
- | **User Options** | `Approve, Reject` | Make sure that this property specifies the response options that the approver can select, which are **Approve** or **Reject** by default. |
- ||||
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-approval-email-settings.png" alt-text="Screenshot shows information for action named Send approval email." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-approval-email-settings.png":::
- > [!NOTE]
- > When you click inside some edit boxes, the dynamic content list appears, which you can ignore for now.
- > This list shows the outputs from previous actions that are available for you to select as inputs to
- > subsequent actions in your workflow.
-
-1. Save your logic app workflow.
+1. Save your workflow.
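The approval email is a webhook-based action: the workflow sends the email, then pauses until the approver responds through the callback URL. In code view, the action looks roughly like this sketch; the `/approvalmail/` path and message shape reflect the Office 365 Outlook connector as an assumption, so verify against your own generated JSON.

```json
"Send_approval_email": {
    "type": "ApiConnectionWebhook",
    "inputs": {
        "host": {
            "connection": {
                "name": "@parameters('$connections')['office365']['connectionId']"
            }
        },
        "path": "/approvalmail/$subscriptions",
        "body": {
            "NotificationUrl": "@{listCallbackUrl()}",
            "Message": {
                "To": "<approver-email-address>",
                "Subject": "Approve member request for test-members-ML",
                "Options": "Approve, Reject"
            }
        }
    }
}
```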
Next, add a condition that checks the approver's selected response.
-## Check approval response
-
-1. Under the **Send approval email** action, select **New step**.
-
-1. Under **Choose an operation**, select **Built-in**. In the search box, enter `condition`, and select the action named **Condition**.
+## Add an action to check approval response
- ![Screenshot that shows the "Choose an operation" search box with "Built-in" selected and "condition" as the search term, while the "Condition" action appears selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/select-condition-action.png)
+1. On the designer, under the **Send approval email** action, [follow these general steps to add the **Control** action named **Condition**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. In the **Condition** title bar, select the **ellipses** (**...**) button, and then select **Rename**. Rename the condition with this description: `If request approved`
-
- ![Screenshot that shows the ellipses button selected with the "Settings" list opened and the "Rename" command selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/rename-condition-description.png)
+1. On the **Condition** action pane, rename the action to **If request approved**.
1. Build a condition that checks whether the approver selected **Approve**.
- 1. On the condition's left side, click inside the **Choose a value** box.
-
- 1. From the dynamic content list that appears, under **Send approval email**, select the **SelectedOption** property.
-
- ![Screenshot that shows the dynamic content list where in the "Send approval email" section, the "SelectedOption" output appears selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-check-approval-response.png)
-
- 1. In the middle comparison box, select the **is equal to** operator.
-
- 1. On the condition's right side, in the **Choose a value** box, enter the text, `Approve`.
+ 1. On the **Parameters** tab, in the first row under the **AND** list, select inside the left box, and then select the dynamic content list (lightning icon). From this list, in the **Send approval email** section, select the **SelectedOption** output.
- When you're done, the condition looks like this example:
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-selected-option.png" alt-text="Screenshot shows condition action, second row with cursor in leftmost box, open dynamic content list, and SelectedOption selected." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-selected-option.png":::
- ![Screenshot that shows the finished condition for the approved request example](./media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-check-approval-response-2.png)
+ 1. In the middle box, keep the operator named **is equal to**.
-1. Save your logic app workflow.
+ 1. In the right box, enter **Approve**.
-Next, specify the action that your logic app performs when the reviewer approves the request.
+ When you finish, the condition looks like the following example:
-## Add member to MailChimp list
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-done.png" alt-text="Screenshot shows the finished condition for example approval workflow." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-done.png":::
-Now add an action that adds the approved member to your mailing list.
+1. Save your workflow.
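In code view, the condition compares the approval action's **SelectedOption** output with the literal text **Approve**. A minimal sketch:

```json
"If_request_approved": {
    "type": "If",
    "expression": {
        "and": [
            {
                "equals": [
                    "@body('Send_approval_email')?['SelectedOption']",
                    "Approve"
                ]
            }
        ]
    },
    "actions": {},
    "runAfter": {
        "Send_approval_email": [ "Succeeded" ]
    }
}
```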
-1. In the condition's **True** branch, select **Add an action**.
+## Add an action to include member in MailChimp list
-1. Under the **Choose an operation** search box, select **All**. In the search box, enter `mailchimp`, and select the action named **Add member to list**.
+The following steps add an action that includes the approved member in your mailing list.
- ![Screenshot that shows the "Choose an operation" box with the "mailchimp" search term and the "Add member to list" action selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-mailchimp-add-member.png)
+1. In the condition's **True** block, [follow these general steps to add the **MailChimp** action named **Add member to list**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. If you don't already have a connection to your MailChimp account, you're prompted to sign in.
+1. Sign in and authorize access to your MailChimp account, which creates a connection between your workflow and your MailChimp account.
-1. In the **Add member to list** action, provide the information as shown and described here:
+1. In the **Add member to list** action, provide the following information:
- ![Screenshot that shows the "Add member to list" action information.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-mailchimp-add-member-settings.png)
-
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **List Id** | Yes | <*mailing-list-name*> | Select the name for your MailChimp mailing list. This example uses `test-members-ML`. |
- | **Email Address** | Yes | <*new-member-email-address*> | In the dynamic content list that opens, from the **When a new email arrives** section, select **From**, which is output from the trigger and specifies the email address for the new member. |
- | **Status** | Yes | <*member-subscription-status*> | Select the subscription status to set for the new member. This example selects `subscribed`. <p>For more information, see [Manage subscribers with the MailChimp API](https://developer.mailchimp.com/documentation/mailchimp/guides/manage-subscribers-with-the-mailchimp-api/). |
- |||||
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **List Id** | Yes | <*mailing-list-name*> | The name for your MailChimp mailing list. <br><br>This example uses **test-members-ML**. |
+ | **Status** | Yes | <*member-subscription-status*> | The new member's subscription status. <br><br>This example selects **subscribed**. |
+ | **Email Address** | Yes | <*member-email-address*> | The new member's email address. <br><br>1. Select inside the **Email Address** box, and then select the dynamic content list (lightning icon). <br><br>From the dynamic content list, in the **When a new email arrives** section, select **From**, which is a trigger output. |
For more information about the **Add member to list** action properties, see the [MailChimp connector reference](/connectors/mailchimp/).
-1. Save your logic app workflow.
+ When you finish, the **Add member to list** action looks like the following example:
-Next, add a condition so that you can check whether the new member successfully joined your mailing list. That way, your logic app can notify you whether this operation succeeded or failed.
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-mailchimp-settings.png" alt-text="Screenshot shows information for the MailChimp action named Add member to list." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-mailchimp-settings.png":::
-## Check for success or failure
+1. Save your workflow.
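
Behind the scenes, the **Add member to list** action calls the MailChimp Marketing API. If you want to verify the same operation outside your workflow, the following minimal Python sketch shows a comparable REST call. The API key, list ID, and email address are placeholders for illustration only, not values from this tutorial:

```python
import requests

API_KEY = "<your-mailchimp-api-key>"   # placeholder
DC = API_KEY.split("-")[-1]            # data center suffix, for example "us21"
LIST_ID = "<your-list-id>"             # placeholder MailChimp audience (list) ID

response = requests.post(
    f"https://{DC}.api.mailchimp.com/3.0/lists/{LIST_ID}/members",
    auth=("anystring", API_KEY),       # MailChimp accepts any username with the API key
    json={
        "email_address": "new.member@example.com",
        "status": "subscribed",        # the same value the later condition checks
    },
    timeout=30,
)
response.raise_for_status()
print(response.json()["status"])       # prints "subscribed" on success
```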
-1. In the **True** branch, under the **Add member to list** action, select **Add an action**.
+## Add an action to check success or failure
-1. Under **Choose an operation**, select **Built-in**. In the search box, enter `condition`, and select the action named **Condition**.
+The following steps add a condition to check whether the new member successfully joined your mailing list. Your workflow can then notify you whether this operation succeeded or failed.
-1. Rename the condition with this description: `If add member succeeded`
+1. In the **True** block, under the **Add member to list** action, [follow these general steps to add the **Control** action named **Condition**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Build a condition that checks whether the approved member succeeds or fails in joining your mailing list:
+1. Rename the condition with **If add member succeeded**.
- 1. On the condition's left side, click inside the **Choose a value** box. From the dynamic content list that appears, in the **Add member to list** section, select the **Status** property.
+1. Build a condition that checks whether the approved member succeeds or fails in joining your mailing list.
- For example, your condition looks like this example:
+ 1. On the **Parameters** tab, in the first row under the **AND** list, select inside the left box, and then select the dynamic content list (lightning icon). From this list, in the **Add member to list** section, select the **Status** output.
- ![Screenshot that shows the condition's left side "Choose a value" box with "Status" entered.](./media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-check-added-member.png)
+ 1. In the middle box, keep the operator named **is equal to**.
- 1. In the middle comparison box, select the **is equal to** operator.
+ 1. In the right box, enter **subscribed**.
- 1. On the condition's right side, in the **Choose a value** box, enter this text: `subscribed`
+ When you finish, the condition looks like the following example:
- When you're done, the condition looks like this example:
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-check-member-done.png" alt-text="Screenshot shows finished condition to check added member." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-check-member-done.png":::
- ![Screenshot that shows the finished condition for checking successful or failed subscription.](./media/tutorial-process-mailing-list-subscriptions-workflow/build-condition-check-added-member-2.png)
+## Add an action to send success email
-Next, set up the emails to send when the approved member either succeeds or fails in joining your mailing list.
+The following steps add an action to send success email when the workflow succeeds in adding the member to your mailing list.
-## Send email if member added
+1. In the **True** block for the **If add member succeeded** condition, [follow these general steps to add the **Office 365 Outlook** action named **Send an email**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. Under the **If add member succeeded** condition, in the **True** branch, select **Add an action**.
+1. Rename the **Send an email** action with **Send email on success**.
- ![Screenshot that shows the "If add member succeeded" condition's "True" branch with "Add an action" selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-success.png)
+1. In the **Send email on success** action, provide the following information:
-1. In the **Choose an operation** search box, enter `outlook send email`, and select the action named **Send an email**.
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **To** | Yes | <*recipient-email-address*> | The recipient's email address. For testing purposes, use your own email address. |
+ | **Subject** | Yes | <*success-email-subject*> | The subject for the success email. For this example, follow these steps: <br><br>1. Enter the following text with a trailing space: **Success! Member added to test-members-ML:** <br><br>2. Select inside the **Subject** box, and select the dynamic content list option (lightning icon). <br><br>3. From the **Add member to list** section, select **Email Address**. <br><br>**Note**: If this output doesn't appear, next to the **Add member to list** section name, select **See more**. |
+ | **Body** | Yes | <*success-email-body*> | The body content for the success email. For this example, follow these steps: <br><br>1. Enter the following text with a trailing space: **Member opt-in status:** <br><br>2. Select inside the **Body** box, and select the dynamic content list option (lightning icon). <br><br>3. From the **Add member to list** section, select **Status**. |
- ![Screenshot that shows the "Choose an operation" search box with "outlook send email" entered and the "Send an email" action selected for notification.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-success-2.png)
+ When you finish, the action looks like the following example:
-1. Rename the action with this description: `Send email on success`
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-success.png" alt-text="Screenshot shows information for action named Send email on success." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-success.png":::
-1. In the **Send email on success** action, provide the information as shown and described here:
+1. Save your workflow.
- ![Screenshot that shows the "Send email on success" action and the information provided for the success email.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-success-settings.png)
+## Add an action to send failure email
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Body** | Yes | <*success-email-body*> | The body content for the success email. For this tutorial, follow these steps: <p>1. Enter this text with a trailing space: `New member has joined "test-members-ML":` <p>2. From the dynamic content list that appears, select the **Email Address** property. <p>**Note**: If this property doesn't appear, next to the **Add member to list** section header, select **See more**. <p>3. On the next row, enter this text with a trailing space: `Member opt-in status: ` <p>4. From the dynamic content list, under **Add member to list**, select the **Status** property. |
- | **Subject** | Yes | <*success-email-subject*> | The subject for the success email. For this tutorial, follow these steps: <p>1. Enter this text with a trailing space: `Success! Member added to "test-members-ML": ` <p>2. From the dynamic content list, under **Add member to list**, select the **Email Address** property. |
- | **To** | Yes | <*your-email-address*> | The email address for where to send the success email. For testing purposes, you can use your own email address. |
- |||||
-
-1. Save your logic app workflow.
-
-## Send email if member not added
-
-1. Under the **If add member succeeded** condition, in the **False** branch, select **Add an action**.
+The following steps add an action to send failure email when the workflow fails in adding the member to your mailing list.
- ![Screenshot that shows the "If add member succeeded" condition's "False" branch with "Add an action" selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-failed.png)
+1. In the **False** block for the **If add member succeeded** condition, [follow these general steps to add the **Office 365 Outlook** action named **Send an email**](create-workflow-with-trigger-or-action.md?tabs=consumption#add-action).
-1. In the **Choose an operation** search box, enter `outlook send email`, and select the action named **Send an email**.
+1. Rename the **Send an email** action with **Send email on failure**.
- ![Screenshot that shows the "Choose an operation" search box with "outlook send email" entered and the "Send an email" action selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-failed-2.png)
+1. In the **Send email on failure** action, provide the following information:
-1. Rename the action with this description: `Send email on failure`
+ | Parameter | Required | Value | Description |
+ |--|-|-|-|
+ | **To** | Yes | <*recipient-email-address*> | The recipient's email address. For testing purposes, use your own email address. |
+ | **Subject** | Yes | <*failure-email-subject*> | The subject for the failure email. For this example, follow these steps: <br><br>1. Enter the following text with a trailing space: **Failed, member not added to test-members-ML:** <br><br>2. Select inside the **Subject** box, and select the dynamic content list option (lightning icon). <br><br>3. From the **Add member to list** section, select **Email Address**. <br><br>**Note**: If this output doesn't appear, next to the **Add member to list** section name, select **See more**. |
+ | **Body** | Yes | <*failure-email-body*> | The body content for the failure email. <br><br>For this example, enter the following text: **Member might already exist. Check your MailChimp account.** |
-1. Provide information about this action as shown and described here:
+ When you finish, the action looks like the following example:
- ![Screenshot that shows the "Send email on failure" action and the information provided for the failure email.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-failed-settings.png)
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-failed.png" alt-text="Screenshot shows information for action named Send email on failure." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/add-action-email-failed.png":::
- | Property | Required | Value | Description |
- |-|-|-|-|
- | **Body** | Yes | <*body-for-failure-email*> | The body content for the failure email. For this tutorial, enter this text: <p>`Member might already exist. Check your MailChimp account.` |
- | **Subject** | Yes | <*subject-for-failure-email*> | The subject for the failure email. For this tutorial, follow these steps: <p>1. Enter this text with a trailing space: `Failed, member not added to "test-members-ML": ` <p>2. From the dynamic content list, under **Add member to list**, select the **Email Address** property. |
- | **To** | Yes | <*your-email-address*> | The email address for where to send the failure email. For testing purposes, you can use your own email address. |
- |||||
+1. Save your workflow.
-1. Save your logic app workflow.
+Your finished workflow looks similar to the following example:
-Next, test your workflow, which now looks similar to this example:
-![Screenshot that shows the example finished logic app workflow.](./media/tutorial-process-mailing-list-subscriptions-workflow/tutorial-high-level-completed.png)
-
-## Run your logic app workflow
+## Test your workflow
1. Send yourself an email request to join your mailing list. Wait for the request to appear in your inbox.
-1. To manually start your workflow, on the designer toolbar, select **Run Trigger** > **Run**.
+1. To manually start your workflow, on the designer toolbar, select **Run** > **Run**.
If your email has a subject that matches the trigger's subject filter, your workflow sends you an email to approve the subscription request. 1. In the approval email that you receive, select **Approve**.
-1. If the subscriber's email address doesn't exist on your mailing list, your workflow adds that person's email address and sends you an email like this example:
+1. If the subscriber's email address doesn't exist on your mailing list, your workflow adds that person's email address and sends you an email like the following example:
- ![Screenshot that shows the example email for a successful subscription.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-member-mailing-list-success.png)
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/add-member-success.png" alt-text="Screenshot shows example email for successful subscription." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/add-member-success.png":::
- If your workflow can't add the subscriber, you get an email like this example:
+1. If your workflow can't add the subscriber, you get an email like the following example:
- ![Screenshot that shows the example email for a failed subscription.](./media/tutorial-process-mailing-list-subscriptions-workflow/add-member-mailing-list-failed.png)
+ :::image type="content" source="media/tutorial-process-mailing-list-subscriptions-workflow/add-member-failure.png" alt-text="Screenshot shows example email for failed subscription." lightbox="media/tutorial-process-mailing-list-subscriptions-workflow/add-member-failure.png":::
> [!TIP]
- > If you don't get any emails, check your email's junk folder. Your email junk filter might
- > redirect these kinds of mails. Otherwise, if you're unsure that your logic app ran correctly,
- > see [Troubleshoot your logic app](../logic-apps/logic-apps-diagnosing-failures.md).
+ >
+ > If you don't get any emails, check your email's junk folder. Otherwise,
+ > if you're unsure that your logic app ran correctly, see
+ > [Troubleshoot your logic app](../logic-apps/logic-apps-diagnosing-failures.md).
-Congratulations, you've now created and run a logic app workflow that integrates information across Azure, Microsoft services, and other SaaS apps.
+Congratulations, you created and ran a logic app workflow that integrates information across Azure, Microsoft services, and other SaaS apps!
## Clean up resources
-Your logic app continues running until you disable or delete the logic app resource. When you no longer need the sample logic app, delete the resource group that contains your logic app and related resources.
-
-1. In the Azure portal's search box, enter the name for the resource group that you created. From the results, under **Resource Groups**, select the resource group.
+Your workflow continues running until you disable or delete the logic app resource. When you no longer need this sample, delete the resource group that contains your logic app and related resources.
- This example created the resource group named `LA-MailingList-RG`.
- ![Screenshot that shows the Azure search box with "la-mailinglist-rg" entered and **LA-MailingList-RG** selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/find-resource-group.png)
+1. In the Azure portal search box, enter **resource groups**, and select **Resource groups**.
- > [!TIP]
- > If the Azure home page shows the resource group under **Recent resources**,
- > you can select the group from the home page.
+1. From the **Resource groups** list, select the resource group for this tutorial.
-1. On the resource group menu, check that **Overview** is selected. On the **Overview** pane's toolbar, select **Delete resource group**.
+1. On the resource group menu, select **Overview**.
- ![Screenshot that shows the resource group's "Overview" pane and on the pane's toolbar, "Delete resource group" is selected.](./media/tutorial-process-mailing-list-subscriptions-workflow/delete-resource-group.png)
+1. On the **Overview** page toolbar, select **Delete resource group**.
-1. In the confirmation pane that appears, enter the resource group name, and select **Delete**.
+1. When the confirmation pane appears, enter the resource group name, and select **Delete**.
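
If you prefer to clean up programmatically instead of through the portal, the following minimal Python sketch uses the `azure-mgmt-resource` package. The subscription ID and resource group name are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deletes the resource group and everything in it - this is irreversible.
client.resource_groups.begin_delete("<resource-group-name>").result()
```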
## Next steps
logic-apps View Workflow Status Run History https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/logic-apps/view-workflow-status-run-history.md
+
+ Title: Check workflow status, view run history, and set up alerts
+description: Check your workflow status, view workflow run history, and enable alerts in Azure Logic Apps.
+
+ms.suite: integration
+ Last updated: 06/10/2024
+# Check workflow status, view run history, and set up alerts in Azure Logic Apps
+After you create and run a logic app workflow, you can check that workflow's run status, trigger history, workflow run history, and performance.
+
+This guide shows how to perform the following tasks:
+
+- [Review trigger history](#review-trigger-history).
+- [Review workflow run history](#review-runs-history).
+- [Set up alerts](#add-azure-alerts) to get notifications about failures or other possible problems. For example, you can create an alert that detects "when more than five runs fail in an hour".
+
+To monitor and review the workflow run status for Standard workflows, see the following sections in [Create an example Standard logic app workflow in single-tenant Azure Logic Apps](create-single-tenant-workflows-azure-portal.md):
+
+- [Review trigger history](create-single-tenant-workflows-azure-portal.md#review-trigger-history).
+- [Review workflow run history](create-single-tenant-workflows-azure-portal.md#review-run-history).
+- [Enable or open Application Insights after deployment](create-single-tenant-workflows-azure-portal.md#enable-open-application-insights).
+
+For real-time event monitoring and richer debugging, you can set up diagnostics logging for your logic app workflow by using [Azure Monitor logs](../azure-monitor/overview.md). This Azure service helps you monitor your cloud and on-premises environments so that you can more easily maintain their availability and performance. You can then find and view events, such as trigger events, run events, and action events. By storing this information in [Azure Monitor logs](../azure-monitor/logs/data-platform-logs.md), you can create [log queries](../azure-monitor/logs/log-query-overview.md) that help you find and analyze this information. You can also use this diagnostic data with other Azure services, such as Azure Storage and Azure Event Hubs. For more information, see [Monitor logic apps by using Azure Monitor](monitor-workflows-collect-diagnostic-data.md).
+
+> [!NOTE]
+>
+> If your workflow runs in an [integration service environment (ISE)](connect-virtual-network-vnet-isolated-environment-overview.md)
+> that was created to use an [internal access endpoint](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access),
+> you can view and access inputs and outputs from a workflow run history *only from inside your virtual network*. Make sure that you have network
+> connectivity between the private endpoints and the computer from where you want to access run history. For example, your client computer can exist
+> inside the ISE's virtual network or inside a virtual network that's connected to the ISE's virtual network, for example, through peering or a virtual
+> private network. For more information, see [ISE endpoint access](connect-virtual-network-vnet-isolated-environment-overview.md#endpoint-access).
+
+<a name="review-trigger-history"></a>
+
+## Review trigger history
+
+Each workflow run starts with a trigger, which either fires on a schedule or waits for an incoming request or event. The trigger history lists all the trigger attempts that your workflow made and information about the inputs and outputs for each trigger attempt.
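
If you'd rather inspect trigger history programmatically, the following minimal Python sketch uses the `azure-mgmt-logic` package for a Consumption logic app. The subscription, resource group, workflow, and trigger names are all placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.logic import LogicManagementClient

client = LogicManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List the trigger attempts for one trigger in a Consumption workflow,
# along with each attempt's status and start time.
for history in client.workflow_trigger_histories.list(
    "<resource-group-name>", "<logic-app-name>", "<trigger-name>"
):
    print(history.name, history.status, history.start_time)
```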
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. On your logic app menu, select **Overview**. On the **Overview** pane, select **Trigger history**.
+
+ ![Screenshot shows Overview pane for Consumption logic app workflow with selected option named Trigger history.](./media/monitor-logic-apps/overview-logic-app-trigger-history-consumption.png)
+
+ Under **Trigger history**, all trigger attempts appear. Each time the trigger successfully fires, Azure Logic Apps creates an individual workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. If your workflow triggers for multiple events or items at the same time, a trigger entry appears for each item with the same date and time.
+
+ ![Screenshot shows Overview pane with Consumption logic app workflow and multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-consumption.png)
+
+ The following table lists the possible trigger statuses:
+
+ | Trigger status | Description |
+ |-|-|
+ | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt, and choose **Outputs**. For example, you might find inputs that aren't valid. |
+ | **Skipped** | The trigger checked the endpoint but found no data that met the specified criteria. |
+ | **Succeeded** | The trigger checked the endpoint and found available data. Usually, a **Fired** status also appears alongside this status. If not, the trigger definition might have a condition or `SplitOn` command that wasn't met. <br><br>This status can apply to a manual trigger, recurrence-based trigger, or polling trigger. A trigger can run successfully, but the run itself might still fail when the actions generate unhandled errors. |
+
+ > [!TIP]
+ >
+ > You can recheck the trigger without waiting for the next recurrence. On the
+ > **Overview** pane toolbar or on the designer toolbar, select **Run Trigger** > **Run**.
+
+1. To view information about a specific trigger attempt, select that trigger event.
+
+ ![Screenshot shows Consumption workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review.png)
+
+ If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar.
+
+ You can now review information about the selected trigger event, for example:
+
+ ![Screenshot shows selected Consumption workflow trigger history information.](./media/monitor-logic-apps/view-specific-trigger-details.png)
+
+### [Standard](#tab/standard)
+
+For a stateful workflow, you can review the trigger history for each run, including the trigger status along with inputs and outputs, separately from the [workflow's run history](#review-runs-history). In the Azure portal, trigger history and run history appear at the workflow level, not the logic app level.
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. On the workflow menu, select **Overview**. On the **Overview** page, select **Trigger history**.
+
+ ![Screenshot shows Overview page for Standard workflow with selected option named Trigger history.](./media/monitor-logic-apps/overview-logic-app-trigger-history-standard.png)
+
+ Under **Trigger history**, all trigger attempts appear. Each time the trigger successfully fires, Azure Logic Apps creates an individual workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. If your workflow triggers for multiple events or items at the same time, a trigger entry appears for each item with the same date and time.
+
+ ![Screenshot shows Overview page for Standard workflow and multiple trigger attempts for different items.](./media/monitor-logic-apps/logic-app-triggers-history-standard.png)
+
+ The following table lists the possible trigger statuses:
+
+ | Trigger status | Description |
+ |-|-|
+ | **Failed** | An error occurred. To review any generated error messages for a failed trigger, select that trigger attempt and choose **Outputs**. For example, you might find inputs that aren't valid. |
+ | **Skipped** | The trigger checked the endpoint but found no data that met the specified criteria. |
+ | **Succeeded** | The trigger checked the endpoint and found available data. Usually, a **Fired** status also appears alongside this status. If not, the trigger definition might have a condition or `SplitOn` command that wasn't met. <br><br>This status can apply to a manual trigger, recurrence-based trigger, or polling trigger. A trigger can run successfully, but the run itself might still fail when the actions generate unhandled errors. |
+
+ > [!TIP]
+ >
+ > You can recheck the trigger without waiting for the next recurrence. On the
+ > **Overview** page toolbar, select **Run Trigger** > **Run**.
+
+1. To view information about a specific trigger attempt, select the identifier for that trigger attempt.
+
+ ![Screenshot shows Standard workflow trigger entry selected.](./media/monitor-logic-apps/select-trigger-event-for-review-standard.png)
+
+ If the list shows many trigger attempts, and you can't find the entry that you want, try filtering the list. If you don't find the data that you expect, try selecting **Refresh** on the toolbar.
+
+1. Check the trigger's inputs to confirm that they appear as you expect. On the **History** pane, under **Inputs link**, select the link, which shows the **Inputs** pane.
+
+ ![Screenshot shows Standard workflow trigger inputs.](./media/monitor-logic-apps/review-trigger-inputs-standard.png)
+
+1. Check the trigger's outputs, if any, to confirm that they appear as you expect. On the **History** pane, under **Outputs link**, select the link, which shows the **Outputs** pane.
+
+ Trigger outputs include the data that the trigger passes to the next step in your workflow. Reviewing these outputs can help you determine whether the correct or expected values passed on to the next step in your workflow.
+
+ For example, the RSS trigger generated an error message stating that the RSS feed wasn't found.
+
+ ![Screenshot shows Standard workflow trigger outputs.](./media/logic-apps-diagnosing-failures/review-trigger-outputs-standard.png)
+---
+<a name="review-runs-history"></a>
+
+## Review workflow run history
+
+Each time a trigger successfully fires, Azure Logic Apps creates a workflow instance and runs that instance. By default, each instance runs in parallel so that no workflow has to wait before starting a run. You can review what happened during each run, including the status, inputs, and outputs for each step in the workflow.
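
As with trigger history, you can also read run history programmatically. The following minimal Python sketch, again using the `azure-mgmt-logic` package with placeholder names for a Consumption logic app, lists each run and the status of every action in that run:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.logic import LogicManagementClient

client = LogicManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Each run with its overall status and timing.
for run in client.workflow_runs.list("<resource-group-name>", "<logic-app-name>"):
    print(run.name, run.status, run.start_time, run.end_time)

    # Drill into each step (action) in that run.
    for action in client.workflow_run_actions.list(
        "<resource-group-name>", "<logic-app-name>", run.name
    ):
        print("  ", action.name, action.status)
```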
+
+### [Consumption](#tab/consumption)
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. On your logic app menu, select **Overview**. On the **Overview** page, select **Runs history**.
+
+ Under **Runs history**, all the past, current, and any waiting runs appear. If the trigger fires for multiple events or items at the same time, an entry appears for each item with the same date and time.
+
+ ![Screenshot shows Consumption workflow and Overview page with selected option for Runs history.](./media/monitor-logic-apps/overview-logic-app-runs-history-consumption.png)
+
+ The following table lists the possible run statuses:
+
+ | Run status | Description |
+ ||-|
+ | **Aborted** | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | The run was triggered and started, but received a cancellation request. |
+ | **Failed** | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
+ | **Running** | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
+ | **Succeeded** | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
+ | **Timed out** | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the run history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the run history based on whether the run's duration exceeded the retention limit. |
+ | **Waiting** | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
+
+1. To review the steps and other information for a specific run, under **Runs history**, select that run. If the list shows many runs, and you can't find the entry that you want, try filtering the list.
+
+ > [!TIP]
+ >
+ > If the run status doesn't appear, try refreshing the overview pane by selecting **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+
+ ![Screenshot shows Consumption workflow run selected.](./media/monitor-logic-apps/select-specific-logic-app-run-consumption.png)
+
+ The **Logic app run** pane shows each step in the selected run, each step's run status, and the time taken for each step to run, for example:
+
+ ![Screenshot shows each action in the selected workflow run.](./media/monitor-logic-apps/logic-app-run-pane-consumption.png)
+
+ To view this information in list form, on the **Logic app run** toolbar, select **Run Details**.
+
+ ![Screenshot shows toolbar named Logic app run with the selected option Run Details.](./media/monitor-logic-apps/toolbar-select-run-details.png)
+
+ The **Run Details** pane lists each step, its status, and other information.
+
+ ![Screenshot showing the run details for each step in the workflow.](./media/monitor-logic-apps/review-logic-app-run-details.png)
+
+ For example, you can get the run's **Correlation ID** property, which you might need when you use the [REST API for Logic Apps](/rest/api/logic).
+
+1. To get more information about a specific step, select either option:
+
+ * In the **Logic app run** pane, select the step so that the shape expands. You can now view information such as inputs, outputs, and any errors that happened in that step.
+
+ For example, suppose you had an action that failed, and you wanted to review which inputs might have caused that step to fail. By expanding the shape, you can view the inputs, outputs, and error for that step:
+
+ ![Screenshot showing the "Logic app run" pane with the expanded shape for an example failed step.](./media/monitor-logic-apps/specific-step-inputs-outputs-errors.png)
+
+ * In the **Logic app run details** pane, select the step that you want.
+
+ ![Screenshot showing the "Logic app run details" pane with the example failed step selected.](./media/monitor-logic-apps/select-failed-step.png)
+
+ > [!NOTE]
+ >
+ > All runtime details and events are encrypted within Azure Logic Apps and
+ > are decrypted only when a user requests to view that data. You can
+ > [hide inputs and outputs in run history](logic-apps-securing-a-logic-app.md#obfuscate)
+ > or control user access to this information by using
+ > [Azure role-based access control (Azure RBAC)](../role-based-access-control/overview.md).
+
+### [Standard](#tab/standard)
+
+You can view run history only for stateful workflows, not stateless workflows. To enable run history for a stateless workflow, see [Enable run history for stateless workflows](create-single-tenant-workflows-azure-portal.md#enable-run-history-stateless).
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. On your workflow menu, select **Overview**. On the **Overview** page, select **Run History**.
+
+ Under **Run History**, all the past, current, and any waiting runs appear. If the trigger fires for multiple events or items at the same time, an entry appears for each item with the same date and time.
+
+ ![Screenshot shows Standard workflow and Overview page with selected option for Run History.](./media/monitor-logic-apps/overview-logic-app-runs-history-standard.png)
+
+ The following table lists the possible final statuses that each workflow run can have, as shown in the portal:
+
+ | Run status | Icon | Description |
+ |||-|
+ | **Aborted** | ![Aborted icon][aborted-icon] | The run stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | ![Canceled icon][canceled-icon] | The run was triggered and started, but received a cancellation request. |
+ | **Failed** | ![Failed icon][failed-icon] | At least one action in the run failed. No subsequent actions in the workflow were set up to handle the failure. |
+ | **Running** | ![Running icon][running-icon] | The run was triggered and is in progress. However, this status can also appear for a run that's throttled due to [action limits](logic-apps-limits-and-config.md) or the [current pricing plan](https://azure.microsoft.com/pricing/details/logic-apps/). <br><br>**Tip**: If you set up [diagnostics logging](monitor-workflows-collect-diagnostic-data.md), you can get information about any throttle events that happen. |
+ | **Skipped** | ![Skipped icon][skipped-icon] | The trigger condition was checked but wasn't met, so the run never started. |
+ | **Succeeded** | ![Succeeded icon][succeeded-icon] | The run succeeded. If any action failed, a subsequent action in the workflow handled that failure. |
+ | **Timed out** | ![Timed-out icon][timed-out-icon] | The run timed out because the current duration exceeded the run duration limit, which is controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits). A run's duration is calculated by using the run's start time and run duration limit at that start time. <br><br>**Note**: If the run's duration also exceeds the current *run history retention limit*, which is also controlled by the [**Run history retention in days** setting](logic-apps-limits-and-config.md#run-duration-retention-limits), the run is cleared from the run history by a daily cleanup job. Whether the run times out or completes, the retention period is always calculated by using the run's start time and *current* retention limit. So, if you reduce the duration limit for an in-flight run, the run times out. However, the run either stays or is cleared from the run history based on whether the run's duration exceeded the retention limit. |
+ | **Waiting** | ![Waiting icon][waiting-icon] | The run hasn't started or is paused, for example, due to an earlier workflow instance that's still running. |
+
+1. On the **Run History** tab, select the run that you want to review.
+
+ The run details page opens and shows the status for each step in the run.
+
+ > [!TIP]
+ >
+ > If the run status doesn't appear, on the **Overview** page toolbar, select **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+
+ If the list shows many runs, and you can't find the entry that you want, try filtering the list.
+
+ ![Screenshot shows selected Standard workflow run.](./media/monitor-logic-apps/select-specific-logic-app-run-standard.png)
+
+ The workflow run pane shows each step in the selected run, each step's run status, and the time taken for each step to run, for example:
+
+ ![Screenshot shows each action in selected Standard workflow run.](./media/monitor-logic-apps/logic-app-run-pane-standard.png)
+
+ The following table lists the possible statuses that each workflow action can have, as shown in the portal:
+
+ | Action status | Icon | Description |
+ |||-|
+ | **Aborted** | ![Aborted icon][aborted-icon] | The action stopped or didn't finish due to external problems, for example, a system outage or lapsed Azure subscription. |
+ | **Cancelled** | ![Canceled icon][canceled-icon] | The action was running but received a cancel request. |
+ | **Failed** | ![Failed icon][failed-icon] | The action failed. |
+ | **Running** | ![Running icon][running-icon] | The action is currently running. |
+ | **Skipped** | ![Skipped icon][skipped-icon] | The action was skipped because its **runAfter** conditions weren't met, for example, a preceding action failed. Each action has a `runAfter` object where you can set up conditions that must be met before the current action can run. |
+ | **Succeeded** | ![Succeeded icon][succeeded-icon] | The action succeeded. |
+ | **Succeeded with retries** | ![Succeeded-with-retries-icon][succeeded-with-retries-icon] | The action succeeded but only after a single or multiple retries. To review the retry history, in the run history details view, select that action so that you can view the inputs and outputs. |
+ | **Timed out** | ![Timed-out icon][timed-out-icon] | The action stopped due to the timeout limit specified by that action's settings. |
+ | **Waiting** | ![Waiting icon][waiting-icon] | Applies to a webhook action that's waiting for an inbound request from a caller. |
+
+ [aborted-icon]: ./media/monitor-logic-apps/aborted.png
+ [canceled-icon]: ./media/monitor-logic-apps/cancelled.png
+ [failed-icon]: ./media/monitor-logic-apps/failed.png
+ [running-icon]: ./media/monitor-logic-apps/running.png
+ [skipped-icon]: ./media/monitor-logic-apps/skipped.png
+ [succeeded-icon]: ./media/monitor-logic-apps/succeeded.png
+ [succeeded-with-retries-icon]: ./media/monitor-logic-apps/succeeded-with-retries.png
+ [timed-out-icon]: ./media/monitor-logic-apps/timed-out.png
+ [waiting-icon]: ./media/monitor-logic-apps/waiting.png
+
+1. After all the steps in the run appear, select each step to review more information such as inputs, outputs, and any errors that happened in that step.
+
+ For example, suppose you had an action that failed, and you wanted to review which inputs might have caused that step to fail.
+
+ ![Screenshot shows Standard workflow with failed step inputs.](./media/monitor-logic-apps/failed-action-inputs-standard.png)
+
+ The following screenshot shows the outputs from the failed step.
+
+ ![Screenshot shows Standard workflow with failed step outputs.](./media/monitor-logic-apps/failed-action-outputs-standard.png)
+
+ > [!NOTE]
+ >
+ > All runtime details and events are encrypted within Azure Logic Apps and
+ > are decrypted only when a user requests to view that data. You can
+ > [hide inputs and outputs in run history](logic-apps-securing-a-logic-app.md#obfuscate).
+---
+<a name="resubmit-workflow-run"></a>
+
+## Rerun a workflow with same inputs
+
+You can rerun a previously finished workflow with the same inputs that the workflow originally used, in the following ways:
+
+- Rerun the entire workflow.
+
+- Rerun the workflow starting at a specific action. The resubmitted action and all subsequent actions run as usual.
+
+Completing this task creates and adds a new workflow run to your workflow's run history.
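
Beyond the portal steps that follow, you can also resubmit an entire run programmatically. The following minimal Python sketch uses the `azure-mgmt-logic` package's resubmit operation and assumes that the trigger history entry shares its identifier with the run it started; all names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.logic import LogicManagementClient

client = LogicManagementClient(DefaultAzureCredential(), "<subscription-id>")

# A run is resubmitted through the trigger history entry that started it.
client.workflow_trigger_histories.resubmit(
    "<resource-group-name>", "<logic-app-name>", "<trigger-name>", "<run-name>"
)
```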
+
+### Limitations and considerations
+
+- By default, only Consumption workflows and Standard stateful workflows, which record and store run history, are supported. To use these capabilities with a stateless Standard workflow, enable stateful mode. For more information, see [Enable run history for stateless workflows](create-single-tenant-workflows-azure-portal.md#enable-run-history-for-stateless-workflows) and [Enable stateful mode for stateless connectors](../connectors/enable-stateful-affinity-built-in-connectors.md).
+
+- The resubmitted run executes the same workflow version as the original run, even if you updated the workflow definition.
+
+- You can rerun only actions from sequential workflows. Workflows with parallel paths are currently not supported.
+
+- The workflow must have a completed state, such as Succeeded, Failed, or Cancelled.
+
+- The workflow must have 40 or fewer actions for you to rerun from a specific action.
+
+- If your workflow includes create or delete operations, resubmitting a run might create duplicate data or try to delete data that no longer exists, resulting in an error.
+
+- These capabilities are currently unavailable in Visual Studio Code and the Azure CLI.
+
+### [Consumption](#tab/consumption)
+
+#### Rerun the entire workflow
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. On your logic app menu, select **Overview**. On the **Overview** page, select **Runs history**.
+
+ Under **Runs history**, all the past, current, and any waiting runs appear. If the trigger fires for multiple events or items at the same time, an entry appears for each item with the same date and time.
+
+1. On the **Runs history** pane, select the run that you want to resubmit.
+
+1. On the **Logic app run** toolbar, select **Resubmit**, and then select **Yes**.
+
+ The **Runs history** pane now shows the resubmitted run.
+
+ > [!TIP]
+ >
+ > If the resubmitted run doesn't appear, on the **Runs history** pane toolbar, select **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+
+1. To review the inputs and outputs for the resubmitted workflow run, on the **Runs history** tab, select that run.
+
+### Rerun from a specific action (preview)
+
+> [!NOTE]
+>
+> This capability is in preview. For legal terms that apply to Azure features that
+> are in beta, preview, or otherwise not yet released into general availability, see
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of this capability might change before general availability (GA).
+
+The resubmit capability is available for all actions except in non-sequential and complex concurrency scenarios, subject to the following limitations:
+
+| Actions | Resubmit availability and limitations |
+|||
+| **Condition** action and actions in the **True** and **False** paths | - Yes for **Condition** action <br>- No for actions in the **True** and **False** paths |
+| **For each** action plus all actions inside the loop and after the loop | No for all actions |
+| **Switch** action and all actions in the **Default** path and **Case** paths | - Yes for **Switch** action <br>- No for actions in the **Default** path and **Case** paths |
+| **Until** action plus all actions inside the loop and after the loop | No for all actions |
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource.
+
+1. On the logic app resource menu, select **Overview**. On the **Overview** page, select **Runs history**, which shows the run history for the workflow.
+
+1. On the **Runs history** tab, select the run that you want to resubmit.
+
+ The run details page opens and shows the status for each step in the run.
+
+1. On the run details page, find the action from which you want to resubmit the workflow run, open the shortcut menu, and select **Submit from this action**.
+
+ The run details page refreshes and shows the new run. All the operations that precede the resubmitted action show a lighter-colored status icon, representing reused inputs and outputs. The resubmitted action and subsequent actions show the usual status icon colors. For more information, see [Review workflow run history](#review-runs-history).
+
+ > [!TIP]
+ >
+ > If the run hasn't fully finished, on the run details page toolbar, select **Refresh**.
+
+### [Standard](#tab/standard)
+
+You can rerun only stateful workflows, not stateless workflows. To enable run history for a stateless workflow, see [Enable run history for stateless workflows](create-single-tenant-workflows-azure-portal.md#enable-run-history-stateless).
+
+#### Rerun the entire workflow
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow in the designer.
+
+1. On the workflow menu, select **Overview**. On the **Overview** page, select **Run History**, which shows the run history for the current workflow.
+
+1. On the **Run History** tab, select the run that you want to resubmit.
+
+1. On the run history toolbar, select **Resubmit**.
+
+1. Return to the **Overview** page and the **Run History** tab, which now shows the resubmitted run.
+
+ > [!TIP]
+ >
+ > If the resubmitted run doesn't appear, on the **Overview** page toolbar, select **Refresh**.
+ > No run happens for a trigger that's skipped due to unmet criteria or finding no data.
+
+1. To review the inputs and outputs from the resubmitted workflow run, on the **Run History** tab, select that run.
+
+### Rerun from a specific action (preview)
+
+> [!NOTE]
+>
+> This capability is in preview. For legal terms that apply to Azure features that
+> are in beta, preview, or otherwise not yet released into general availability, see
+> [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/). Some aspects of this capability might change before general availability (GA).
+
+The resubmit capability is available for all actions except in non-sequential and complex concurrency scenarios, subject to the following limitations:
+
+| Actions | Resubmit availability and limitations |
+|||
+| **Condition** action and actions in the **True** and **False** paths | - Yes for **Condition** action <br>- No for actions in the **True** and **False** paths |
+| **For each** action plus all actions inside the loop and after the loop | No for all actions |
+| **Switch** action and all actions in the **Default** path and **Case** paths | - Yes for **Switch** action <br>- No for actions in the **Default** path and **Case** paths |
+| **Until** action plus all actions inside the loop and after the loop | No for all actions |
+
+1. In the [Azure portal](https://portal.azure.com), open your logic app resource and workflow.
+
+1. On the workflow menu, select **Overview**. On the **Overview** page, select **Run History**, which shows the run history for the current workflow.
+
+1. On the **Run History** tab, select the run that you want to resubmit.
+
+ The run details page opens and shows the status for each step in the run.
+
+1. On the run details page, find the action from which you want to resubmit the workflow run, open the shortcut menu, and select **Submit from this action**.
+
+ The run details page refreshes and shows the new run. All the operations that precede the resubmitted action show a lighter-colored status icon, representing reused inputs and outputs. The resubmitted action and subsequent actions show the usual status icon colors. For more information, see [Review workflow run history](#review-runs-history).
+
+ > [!TIP]
+ >
+ > If the run hasn't fully finished, on the run details page toolbar, select **Refresh**.
+---
+<a name="add-azure-alerts"></a>
+
+## Set up monitoring alerts
+
+To get alerts based on specific metrics or exceeded thresholds for your logic app, set up [alerts in Azure Monitor](../azure-monitor/alerts/alerts-overview.md). For more information, review [Metrics in Azure](../azure-monitor/data-platform.md).
+
+To set up alerts without using [Azure Monitor](../azure-monitor/logs/log-query-overview.md), follow these steps, which apply to both Consumption and Standard logic app resources:
+
+1. On your logic app menu, under **Monitoring**, select **Alerts**. On the toolbar, select **Create** > **Alert rule**.
+
+1. On the **Create an alert rule** page, from the **Signal name** list, select the signal for which you want to get an alert.
+
+ > [!NOTE]
+ >
+ > Available alert signals differ between Consumption and Standard logic apps. For example,
+ > Consumption logic apps have many trigger-related signals, such as **Triggers Completed**
+ > and **Triggers Failed**, while Standard workflows have the **Workflow Triggers Completed Count**
+ > and **Workflow Triggers Failure Rate** signals.
+
+ For example, to send an alert when a trigger fails in a Consumption workflow, follow these steps:
+
+ 1. From the **Signal name** list, select the **Triggers Failed** signal.
+
+ 1. Under **Alert logic**, set up your condition, for example:
+
+ | Property | Example value |
+ |-||
+ | **Threshold** | **Static** |
+ | **Aggregation type** | **Count** |
+ | **Operator** | **Greater than or equal to** |
+ | **Unit** | **Count** |
+ | **Threshold value** | **1** |
+
+ The **Preview** section now shows the condition that you set up, for example:
+
+ **Whenever the count Triggers Failed is greater than or equal to 1**
+
+ 1. Under **When to evaluate**, set up the schedule for checking the condition:
+
+ | Property | Example value |
+ |-||
+ | **Check every** | **1 minute** |
+ | **Lookback period** | **5 minutes** |
+
+ The finished condition looks similar to the following example, and the **Create an alert rule** page now shows the cost for running that alert:
+
+ ![Screenshot shows Consumption logic app and alert rule condition.](./media/monitor-logic-apps/set-up-condition-for-alert.png)
+
+1. When you're ready, select **Review + Create**.
+
+For general information, see [Create an alert rule from a specific resource - Azure Monitor](../azure-monitor/alerts/alerts-create-new-alert-rule.md#create-or-edit-an-alert-rule-in-the-azure-portal).
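
If you script your monitoring setup, the following minimal Python sketch creates a comparable alert rule with the `azure-mgmt-monitor` package. It assumes the Consumption workflow metric named **TriggersFailed** and uses placeholder names throughout:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group-name>"
LOGIC_APP_ID = (
    f"/subscriptions/{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    "/providers/Microsoft.Logic/workflows/<logic-app-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)

alert = MetricAlertResource(
    location="global",
    description="Alert when any trigger fails",
    severity=3,
    enabled=True,
    scopes=[LOGIC_APP_ID],
    evaluation_frequency="PT1M",   # check every 1 minute
    window_size="PT5M",            # 5-minute lookback period
    criteria=MetricAlertSingleResourceMultipleMetricCriteria(
        all_of=[
            MetricCriteria(
                name="TriggersFailedCriterion",
                metric_name="TriggersFailed",
                operator="GreaterThanOrEqual",
                threshold=1,
                time_aggregation="Count",
            )
        ]
    ),
)

client.metric_alerts.create_or_update(RESOURCE_GROUP, "<alert-rule-name>", alert)
```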
+
+## Next steps
+
+* [Monitor logic apps with Azure Monitor](monitor-workflows-collect-diagnostic-data.md)
machine-learning Concept Endpoint Serverless Availability https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-endpoint-serverless-availability.md
Certain models in the model catalog can be deployed as a serverless API with pay
## Region availability
+Pay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available (see "offer availability region" in the table in the next section). If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable (see "Hub/Project Region" columns in the following tables).
+
-> [!NOTE]
-> Models offered through the Azure Marketplace are available for purchase only on [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions), with exception of Cohere family of models, which is also available in Japan.
## Alternatives to region availability
machine-learning Concept Model Catalog https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/concept-model-catalog.md
Azure AI Studio enables users to make use of Vector Indexes and Retrieval Augmen
### Regional availability of offers and models
-Pay-as-you-go deployment is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available (see "offer availability region" in the table in the next section). If the offer is available in the relevant region, the user then must have a Workspace in the Azure region where the model is available for deployment or fine-tuning, as applicable (see "Workspace region" columns in the table below).
-
-Model | Offer availability region | Workspace Region for Deployment | Workspace Region for Finetuning
|--|--|--
-Llama-3-70B-Instruct <br> Llama-3-8B-Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, West US, East US, West US 3 | Not available
-Llama-2-7b <br> Llama-2-13b <br> Llama-2-70b | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, East US, West US 3, West US, North Central US, South Central US | West US 3
-Llama-2-7b-chat <br> Llama-2-13b-chat <br> Llama-2-70b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, East US, West US 3, West US, North Central US, South Central US | Not available
-Mistral Small | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Mistral Large (2402) <br> Mistral Large (2407) | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Mistral Nemo | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Brazil <br> Hong Kong <br> Israel| East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Japan | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Cohere-rerank-3-english <br> Cohere-rerank-3-multilingual | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions)| East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-TimeGEN-1 | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) <br> Mexico <br> Israel| East US, East US 2, North Central US, South Central US, Sweden Central, West US, West US 3 | Not available
-jais-30b-chat | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central, North Central US, South Central US, East US, West US 3, West US | Not available
-Phi-3-mini-4k-instruct <br> Phi-3-mini-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
-Phi-3-small-8k-instruct <br> Phi-3-small-128k-Instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
-Phi-3-medium-4k-instruct, Phi-3-medium-128k-instruct | [Microsoft Managed Countries](/partner-center/marketplace/tax-details-marketplace#microsoft-managed-countriesregions) | East US 2, Sweden Central | Not available
+Pay-as-you-go billing is available only to users whose Azure subscription belongs to a billing account in a country where the model provider has made the offer available. If the offer is available in the relevant region, the user then must have a Hub/Project in the Azure region where the model is available for deployment or fine-tuning, as applicable. See [Region availability for models in serverless API endpoints](concept-endpoint-serverless-availability.md) for detailed information.
+
+### Content safety for models deployed via MaaS
machine-learning How To Secure Workspace Vnet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-secure-workspace-vnet.md
Azure Container Registry can be configured to use a private endpoint. Use the fo
1. Configure the ACR for the workspace to [Allow access by trusted services](../container-registry/allow-access-trusted-services.md).
-1. If you are using [serverless compute](how-to-use-serverless-compute.md) (recommended) with your workspace, Azure Machine Learning will try to use the serverless compute to build the image. To configure the workspace to create the compute in your Azure Virtual Network, follow the guidance in the [Secure training environment](how-to-secure-training-vnet.md) article.
+1. By default, Azure Machine Learning will try to use a [serverless compute](how-to-use-serverless-compute.md) to build the image. This works only when the workspace-dependent resources such as Storage Account or Container Registry are not under any network restriction (private endpoints). If your workspace-dependent resources are network restricted, use an image-build-compute instead.
-1. If you are __not__ using serverless compute, create an Azure Machine Learning compute cluster. This cluster is used to build Docker images when ACR is behind a virtual network. For more information, see [Create a compute cluster](how-to-create-attach-compute-cluster.md). Use one of the following methods to configure the workspace to build Docker images using the compute cluster.
+1. To set up an image-build compute, create an Azure Machine Learning CPU SKU [compute cluster](how-to-create-attach-compute-cluster.md) in the same VNet as your workspace-dependent resources. This cluster can then be set as the default image-build compute and will be used to build every image in your workspace from that point onwards. Use one of the following methods to configure the workspace to build Docker images using the compute cluster.
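   For example, a minimal sketch of one such method using the Python SDK v2 (`azure-ai-ml`); the cluster name, subscription, and resource names are placeholders:

   ```python
   # A sketch, assuming Azure ML SDK v2 and an existing CPU cluster named
   # "image-builder" in the same VNet as the workspace-dependent resources.
   from azure.ai.ml import MLClient
   from azure.identity import DefaultAzureCredential

   ml_client = MLClient(
       DefaultAzureCredential(),
       subscription_id="<subscription-id>",
       resource_group_name="<resource-group>",
       workspace_name="<workspace-name>",
   )

   # Set the cluster as the workspace's default image-build compute.
   ws = ml_client.workspaces.get("<workspace-name>")
   ws.image_build_compute = "image-builder"
   ml_client.workspaces.begin_update(ws).result()
   ```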
> [!IMPORTANT]
> The following limitations apply when using a compute cluster for image builds:
machine-learning How To Use Secrets In Runs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-secrets-in-runs.md
Before following the steps in this article, make sure you have the following pre
> [!TIP]
> You don't have to use the same key vault as the workspace.
-* An Azure Machine Learning compute cluster configured to use a [managed identity](how-to-create-attach-compute-cluster.md?tabs=azure-studio#set-up-managed-identity). The cluster can be configured for either a system-assigned or user-assigned managed identity.
+* (Optional) An Azure Machine Learning compute cluster configured to use a [managed identity](how-to-create-attach-compute-cluster.md?tabs=azure-studio#set-up-managed-identity). The cluster can be configured for either a system-assigned or user-assigned managed identity.
-* Grant the managed identity for the compute cluster access to the secrets stored in key vault. The method used to grant access depends on how your key vault is configured:
+* If your job will run on a compute cluster, grant the managed identity for the compute cluster access to the secrets stored in key vault. Or, if the job will run on serverless compute, grant the managed identity specified for the job access to the secrets. The method used to grant access depends on how your key vault is configured:
* [Azure role-based access control (Azure RBAC)](/azure/key-vault/general/rbac-guide): When configured for Azure RBAC, add the managed identity to the __Key Vault Secrets User__ role on your key vault.
* [Azure Key Vault access policy](/azure/key-vault/general/assign-access-policy): When configured to use access policies, add a new policy that grants the __get__ operation for secrets and assign it to the managed identity.
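Once access is granted, a job can read a secret at runtime. A minimal sketch, assuming the `azure-identity` and `azure-keyvault-secrets` packages are available in the job environment; the vault URL and secret name are placeholders:

```python
# Runs inside the job, using the compute's managed identity to read a secret.
from azure.identity import ManagedIdentityCredential
from azure.keyvault.secrets import SecretClient

credential = ManagedIdentityCredential()  # pass client_id="..." for a user-assigned identity
client = SecretClient(
    vault_url="https://<your-vault>.vault.azure.net",
    credential=credential,
)
api_key = client.get_secret("my-api-key").value  # hypothetical secret name
```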
machine-learning How To Use Serverless Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-use-serverless-compute.md
When you [view your usage and quota in the Azure portal](how-to-manage-quotas.md
## Identity support and credential pass through
-* **User credential pass through** : Serverless compute fully supports user credential pass through. The user token of the user who is submitting the job is used for storage access. These credentials are from your Microsoft Entra ID.
+* **User credential pass through** : Serverless compute fully supports user credential pass through. The user token of the user who is submitting the job is used for storage access. These credentials are from your Microsoft Entra ID.
# [Python SDK](#tab/python)
When you [view your usage and quota in the Azure portal](how-to-manage-quotas.md
-* **User-assigned managed identity** : When you have a workspace configured with [user-assigned managed identity](how-to-identity-based-service-authentication.md#workspace), you can use that identity with the serverless job for storage access.
+* **User-assigned managed identity** : When you have a workspace configured with [user-assigned managed identity](how-to-identity-based-service-authentication.md#workspace), you can use that identity with the serverless job for storage access. To access secrets, see [Use authentication credential secrets in Azure Machine Learning jobs](how-to-use-secrets-in-runs.md).
# [Python SDK](#tab/python)
machine-learning Tutorial Create Secure Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/tutorial-create-secure-workspace.md
Title: Create a secure workspace with a managed virtual network
description: Create an Azure Machine Learning workspace and required Azure services inside a managed virtual network. -+ Previously updated : 08/11/2023 Last updated : 08/09/2024 monikerRange: 'azureml-api-2 || azureml-api-1'
Use the following steps to create an Azure Virtual Machine to use as a jump box.
Azure Bastion enables you to connect to the VM desktop through your browser.
-1. In the Azure portal, select the VM you created earlier. From the __Operations__ section of the page, select __Bastion__ and then __Deploy Bastion__.
+1. In the Azure portal, select the VM you created earlier. From the __Connect__ section of the page, select __Bastion__ and then __Deploy Bastion__.
:::image type="content" source="./media/tutorial-create-secure-workspace/virtual-machine-deploy-bastion.png" alt-text="Screenshot of the deploy Bastion option.":::
-1. Once the Bastion service has been deployed, you're presented with a connection page. Leave this dialog for now.
+1. Once the Bastion service is deployed, you arrive at a connection dialog. Leave this dialog for now.
## Create a workspace
Azure Bastion enables you to connect to the VM desktop through your browser.
1. From the __Create private endpoint__ form, enter a unique value in the __Name__ field. Select the __Virtual network__ created earlier with the VM, and select the default __Subnet__. Leave the rest of the fields at the default values. Select __OK__ to save the endpoint.
- :::image type="content" source="./media/tutorial-create-secure-workspace/private-endpoint-workspace.png" alt-text="Screenshot of the create private endpoint form.":::
+ :::image type="content" source="./media/tutorial-create-secure-workspace/private-endpoint-workspace.png" alt-text="Screenshot of the form to create a private endpoint.":::
1. Select __Review + create__. Verify that the information is correct, and then select __Create__.
-1. Once the workspace has been created, select __Go to resource__.
+1. Once the workspace is created, select __Go to resource__.
## Connect to the VM desktop
Azure Bastion enables you to connect to the VM desktop through your browser.
## Connect to studio
-At this point, the workspace has been created __but the managed virtual network has not__. The managed virtual network is _configured_ when you create the workspace, but it isn't created until you create the first compute resource or manually provision it.
+At this point, the workspace is created __but the managed virtual network is not__. The managed virtual network is _configured_ when you create the workspace. To _create_ the managed virtual network, create a compute resource or manually provision the network.
Use the following steps to create a compute instance.
Use the following steps to create a compute instance.
### Enable studio access to storage
-Since the Azure Machine Learning studio partially runs in the web browser on the client, the client needs to be able to directly access the default storage account for the workspace to perform data operations. To enable this, use the following steps:
+Since the Azure Machine Learning studio partially runs in the web browser on the client, the client needs to be able to directly access the default storage account for the workspace to perform data operations. To enable direct access, use the following steps:
1. From the [Azure portal](https://portal.azure.com), select the jump box VM you created earlier. From the __Overview__ section, copy the __Public IP address__.
1. From the [Azure portal](https://portal.azure.com), select the workspace you created earlier. From the __Overview__ section, select the link for the __Storage__ entry.
To delete all resources created in this tutorial, use the following steps:
## Next steps
-Now that you've created a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
+Now that you have a secure workspace and can access studio, learn how to [deploy a model to an online endpoint with network isolation](how-to-secure-online-endpoint.md).
For more information on the managed virtual network, see [Secure your workspace with a managed virtual network](how-to-managed-network.md).
machine-learning How To Manage Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/how-to-manage-workspace.md
You can create a workspace [directly in Azure Machine Learning studio](../quicks
) ```
-* **[Sovereign cloud](../reference-machine-learning-cloud-parity.md)**. You'll need extra code to authenticate to Azure if you're working in a sovereign cloud.
+* **[Sovereign cloud](../reference-machine-learning-cloud-parity.md)**. You need extra code to authenticate to Azure if you're working in a sovereign cloud.
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
You can create a workspace [directly in Azure Machine Learning studio](../quicks
) ```
-* **Use existing Azure resources**. You can also create a workspace that uses existing Azure resources with the Azure resource ID format. Find the specific Azure resource IDs in the Azure portal or with the SDK. This example assumes that the resource group, storage account, key vault, App Insights, and container registry already exist.
-
- [!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
-
- ```python
- import os
- from azureml.core import Workspace
- from azureml.core.authentication import ServicePrincipalAuthentication
-
- service_principal_password = os.environ.get("AZUREML_PASSWORD")
-
- service_principal_auth = ServicePrincipalAuthentication(
- tenant_id="<tenant-id>",
- username="<application-id>",
- password=service_principal_password)
-
- ws = Workspace.create(name='myworkspace',
- auth=service_principal_auth,
- subscription_id='<azure-subscription-id>',
- resource_group='myresourcegroup',
- create_resource_group=False,
- location='eastus2',
- friendly_name='My workspace',
- storage_account='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.storage/storageaccounts/mystorageaccount',
- key_vault='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.keyvault/vaults/mykeyvault',
- app_insights='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.insights/components/myappinsights',
- container_registry='subscriptions/<azure-subscription-id>/resourcegroups/myresourcegroup/providers/microsoft.containerregistry/registries/mycontainerregistry',
- exist_ok=False)
- ```
For more information, see [Workspace SDK reference](/python/api/azureml-core/azureml.core.workspace.workspace).
-If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
+If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](how-to-setup-authentication.md), and the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
### Networking
from azureml.core import Workspace
### Download a configuration file
-If you'll be using a [compute instance](../quickstart-create-resources.md) in your workspace to run your code, skip this step. The compute instance will create and store a copy of this file for you.
+If you are using a [compute instance](../quickstart-create-resources.md) in your workspace to run your code, skip this step. The compute instance will create and store a copy of this file for you.
If you plan to use code on your local environment that references this workspace (`ws`), write the configuration file:
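A minimal sketch with the v1 SDK (`azureml-core`), using the default path and file name:

```python
# Writes the workspace details (subscription, resource group, workspace name)
# to .azureml/config.json in the current directory, so later code can
# reconnect with Workspace.from_config().
ws.write_config()
```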
Place the file into the directory structure with your Python scripts or Jupyter
## Connect to a workspace
-In your Python code, you create a workspace object to connect to your workspace. This code will read the contents of the configuration file to find your workspace. You'll get a prompt to sign in if you aren't already authenticated.
+In your Python code, you create a workspace object to connect to your workspace. This code reads the contents of the configuration file to find your workspace. You get a prompt to sign in if you aren't already authenticated.
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
ws = Workspace.from_config()
ws = Workspace.from_config(auth=interactive_auth) ```
-* **[Sovereign cloud](../reference-machine-learning-cloud-parity.md)**. You'll need extra code to authenticate to Azure if you're working in a sovereign cloud.
+* **[Sovereign cloud](../reference-machine-learning-cloud-parity.md)**. You need extra code to authenticate to Azure if you're working in a sovereign cloud.
[!INCLUDE [sdk v1](../includes/machine-learning-sdk-v1.md)]
ws = Workspace.from_config()
ws = Workspace.from_config(auth=interactive_auth) ```
-If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](../how-to-setup-authentication.md), as well as the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
+If you have problems in accessing your subscription, see [Set up authentication for Azure Machine Learning resources and workflows](../how-to-setup-authentication.md), and the [Authentication in Azure Machine Learning](https://aka.ms/aml-notebook-auth) notebook.
## Find a workspace
managed-grafana Grafana Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/grafana-settings.md
+
+ Title: Learn about Grafana settings
+
+description: Learn about Grafana settings in Azure Managed Grafana, including Viewers can Edit and External Enabled.
++++ Last updated : 08/09/2024
+#customer intent: In this document, learn about the custom Grafana options available in the Grafana settings tab, in Azure Managed Grafana.
+++
+# Grafana settings
+
+This article introduces the Grafana settings available in Azure Managed Grafana. These settings let Azure Managed Grafana customers customize their Grafana instances by turning the following Grafana options on or off.
+
+These settings are located in Azure Managed Grafana's **Settings** > **Configuration** menu, in the **Grafana Settings (Preview)** tab.
++
+They are also referenced in Grafana's documentation, under [Grafana configuration](https://grafana.com/docs/grafana/latest/setup-grafana/configure-grafana/).
+
+## Viewers Can Edit
+
+The **Viewers Can Edit** setting allows users with the Grafana Viewer role to edit dashboards. This feature is designed to enable Grafana Viewers to run tests and interact with dashboards without making permanent changes. While they can edit dashboards, they can't save these edits.
+
+This option also gives Grafana Viewers access to the **Explore** menu in the Grafana UI, where they can perform interactive queries and analyze data within Grafana. However, it's important to note that any changes made by Viewers won't be saved permanently unless they have the appropriate Editor permissions.
+
+To enable or disable this option, open an Azure Managed Grafana instance in the Azure portal and go to **Settings** > **Configuration** > **Grafana Settings (Preview)** > **Viewers can edit**. This option is disabled by default.
+
+## External Enabled
+
+The **External Enabled** setting controls the public sharing of snapshots. This option is enabled by default, allowing users to publish snapshots of their dashboards.
+
+With this option enabled, users can publish a snapshot of a dashboard to an external URL by opening a dashboard, selecting **Share** > **Snapshot**, and then **Publish to snapshots.raintank.io**.
+
+You can disable the External Enabled option to restrict the public sharing of snapshots. To do this, open an Azure Managed Grafana instance in the Azure portal and go to **Settings** > **Configuration** > **Grafana Settings (Preview)** and toggle off the **External Enabled** setting.
+
+## Related content
+
+- [Grafana UI](grafana-app-ui.md)
+- [Manage plugins](how-to-manage-plugins.md)
managed-grafana How To Share Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-grafana/how-to-share-dashboard.md
Previously updated : 03/01/2023 Last updated : 08/09/2024

# Share a Grafana dashboard or panel
The **Snapshot** tab lets you share an interactive dashboard or panel publicly.
> - Snapshots published on snapshots.raintank.io can be viewed by anyone who has the link.
> - Users must have a Grafana Viewer permission to view snapshots shared locally.
+> [!TIP]
+> To disable public sharing of snapshots, open your Azure Managed Grafana instance within the Azure portal, then go to **Settings** > **Configuration**, open the **Grafana Settings (Preview)** tab, and turn off the **External Enabled** option.
+ ### Create a library panel
-The **Library panel** tab lets you create a library panel that can be reused in other Grafana dashboards. Do this in a single step by selecting The **Create library panel** at the bottom of the tab. Optionally update the panel library name and select another folder to save it in.
+The **Library panel** tab lets you create a library panel that can be reused in other Grafana dashboards. Do this in a single step by selecting **Create library panel** at the bottom of the tab. Optionally, update the library panel name and select another folder to save it in.
Once you've created a library panel, reuse it in other dashboards of the same Grafana instance by going to **Dashboards > New dashboard > Add panel from panel library**.
managed-instance-apache-cassandra Create Cluster Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/managed-instance-apache-cassandra/create-cluster-portal.md
For external data centers, such as those hosted on-premises, they can be include
1. For non-production environments, you can pause (deallocate) resources in the cluster to avoid being charged for them (you'll continue to be charged for storage). First change the cluster type to `NonProduction`, then `deallocate`.
+> [!TIP]
+> Set the cluster type to `NonProduction` only to save development costs. These clusters can come with smaller SKUs and should NOT be used to run production workloads.
+ > [!WARNING]
-> Do not execute any schema or write operations during de-allocation - this can lead to data loss and in rare cases schema corruption requiring manual intervention from the support team.
+> - Clusters with the type set to `NonProduction` don't have SLA guarantees applied to them.
+> - Do not execute any schema or write operations during de-allocation - this can lead to data loss and in rare cases schema corruption requiring manual intervention from the support team.
:::image type="content" source="./media/create-cluster-portal/pause-cluster.png" alt-text="Screenshot of pausing a cluster." lightbox="./media/create-cluster-portal/pause-cluster.png" border="true":::
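A minimal sketch of pausing a cluster from Python, assuming the `azure-mgmt-cosmosdb` package (which hosts the Managed Instance for Apache Cassandra operations) and that the cluster type is already set to `NonProduction`; all names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cosmosdb import CosmosDBManagementClient

client = CosmosDBManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Deallocate the cluster's compute to stop compute charges; storage is still billed.
client.cassandra_clusters.begin_deallocate("<resource-group>", "<cluster-name>").result()

# Later, bring the cluster back online.
client.cassandra_clusters.begin_start("<resource-group>", "<cluster-name>").result()
```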
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
description: Describes the policy around MySQL major and minor versions in Azure
Previously updated : 12/01/2023 Last updated : 08/09/2024 -+
+ - fasttrack-edit
# Azure Database for MySQL version support policy

[!INCLUDE [Azure-database-for-mysql-single-server-deprecation](~/reusable-content/ce-skilling/azure/includes/mysql/includes/azure-database-for-mysql-single-server-deprecation.md)]
-This page describes the Azure Database for MySQL versioning policy and applies to Azure Database for MySQL - Single Server and Azure Database for MySQL - Flexible Server deployment modes.
- ## Supported MySQL versions
-Azure Database for MySQL was developed from [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports the community's current major versions, namely MySQL 5.7, and 8.0. MySQL uses the X.Y.Z. naming scheme where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
+Azure Database for MySQL was developed from the [MySQL Community Edition](https://www.mysql.com/products/community/), using the InnoDB storage engine. The service supports the community's current major versions, namely MySQL 5.7 and 8.0. MySQL uses the X.Y.Z. naming scheme where X is the major version, Y is the minor version, and Z is the bug fix release. For more information about the scheme, see the [MySQL documentation](https://dev.mysql.com/doc/refman/5.7/en/which-version.html).
Azure Database for MySQL currently supports the following major and minor versions of MySQL:
-| Version | [Single Server](single-server/overview.md)<br />Current minor version | [Flexible Server](flexible-server/overview.md)<br />Current minor version |
-| : | : | : |
-| MySQL Version 5.7 | [5.7.32](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-29.html) | [5.7.44](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-44.html) |
-| MySQL Version 8.0 | [8.0.15](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-15.html) | [8.0.36](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-36.html) |
-
-> [!NOTE]
-> In the Single Server deployment option, a gateway redirects the connections to server instances. After the connection is established, the MySQL client displays the version of MySQL set in the gateway, not the actual version running on your MySQL server instance. To determine the version of your MySQL server instance, use the `SELECT VERSION();` command at the MySQL prompt. If your application has a requirement to connect to a specific major version, say v5.7 or v8.0, you can do so by changing the port in your server connection string as explained in our documentation [here.](concepts-supported-versions.md#connect-to-a-gateway-node-that-is-running-a-specific-mysql-version)
+| Version | [Flexible Server](flexible-server/overview.md)<br />Current minor version |
+| :--- | :--- |
+| MySQL Version 5.7 | [5.7.44](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-44.html) |
+| MySQL Version 8.0 | [8.0.37](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-37.html) |
Read the version support policy for retired versions in the [version support policy documentation](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql).

## Major version support
-Each major version of MySQL is supported by Azure Database for MySQL from the date Azure begins supporting the version until the version is retired by the MySQL community, as provided in the [versioning policy](https://www.mysql.com/support/eol-notice.html).
+Azure Database for MySQL supports each major version of MySQL from the date Azure begins supporting it until the MySQL community retires it, as provided in the [versioning policy](https://www.mysql.com/support/eol-notice.html).
## Minor version support
-Azure Database for MySQL automatically performs minor version upgrades to the Azure-preferred MySQL version as part of periodic maintenance.
+Azure Database for MySQL automatically performs minor version upgrades to the Azure-preferred version as part of periodic maintenance.
## Major version retirement policy
The retirement details for MySQL major versions are listed in the following tabl
| Version | What's New | Azure support start date | Azure support end date | Community Retirement date |
| --- | --- | --- | --- | --- |
-| [MySQL 5.7](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-31.html) | March 20, 2018 |September 2025 |October 2023|
-| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html) | December 11, 2019 | NA |April 2026|
+| [MySQL 5.7](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/5.7/en/news-5-7-31.html) | March 20, 2018 | September 2025 | October 2023 |
+| [MySQL 8](https://mysqlserverteam.com/whats-new-in-mysql-8-0-generally-available/) | [Features](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-21.html) | December 11, 2019 | NA | April 2026 |
-## What will happen to Azure Database for MySQL service after MySQL community version is retired in October 2023?
+## What happens to the Azure Database for MySQL service after the MySQL community version is retired in October 2023?
-In line with Oracle's announcement regarding the end-of-life (EOL) of [MySQL Community Version v5.7 in __October 2023__](https://www.oracle.com/us/support/library/lsp-tech-chart-069290.pdf) (Page 23), we at Azure are actively preparing for this important transition. This development specifically impacts customers who are currently utilizing Version 5.7 of Azure Database for MySQL - Single Server and Flexible Server.
+In line with Oracle's announcement regarding the end-of-life of [MySQL Community Version v5.7 in __October 2023__](https://www.oracle.com/us/support/library/lsp-tech-chart-069290.pdf) (Page 23), we at Azure are actively preparing for this critical transition. This development specifically impacts customers utilizing Version 5.7 of Azure Database for MySQL - Single Server and Flexible Server.
-In response to the customer's requests, Microsoft decided to prolong the support for Azure Database for MySQL beyond __October 2023__. During the extended support period, which lasts until __September 2025__, Microsoft prioritizes the availability, reliability, and security of the service. While there are no specific guarantees regarding minor version upgrades, we implement essential modifications to ensure that the service remains accessible, dependable, and protected. Our plan includes:
+In response to customer requests, Microsoft decided to prolong the support for Azure Database for MySQL beyond __October 2023__. During the extended support period, which lasts until __September 2025__, Microsoft prioritizes the service's availability, reliability, and security. While there are no guarantees regarding minor version upgrades, we implement essential modifications to ensure the service remains accessible, dependable, and protected. Our plan includes:
- Extended support for v5.7 on Azure Database for MySQL - Flexible Server until __September 2025__, offering ample time for customers to plan and execute their upgrades to MySQL v8.0.
-- Extended support for v5.7 on Azure Database for MySQL- Single Servers until they're retired on __September 2024__. This extended support provides Azure Database for MySQL -Single Server customers ample time to migrate to Azure Database for MySQL - Flexible Server version 5.7 and then later upgrade to 8.0.
+- Extended support for v5.7 on Azure Database for MySQL- Single Servers until they're retired on __September 2024__. This extended support provides Azure Database for MySQL -Single Server customers ample time to migrate to Azure Database for MySQL - Flexible Server version 5.7 and later upgrade to 8.0.
-Before we end our support of Azure Database for MySQL 5.7, there are several important timelines that you should pay attention.
+Before we end our support of Azure Database for MySQL 5.7, you should pay attention to several important timelines.
__Azure MySQL 5.7 Deprecation Timelines__
-|Timelines| Azure MySQL 5.7 Flexible end at |Azure MySQL 5.7 Single end at|
-||||
-|Creation of new servers using the Azure portal.| To Be Decided| Already ended as part of [Single Server deprecation](single-server/whats-happening-to-mysql-single-server.md)|
-|Creation of new servers using the Command Line Interface (CLI). | To Be Decided| March 19, 2024|
-|Creation of replica servers for existing servers. | September 2025| September 2024|
-|Creation of servers using restore workflow for the existing servers| September 2025|September 2024|
-|Creation of new servers for migrating from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server.| NA| September 2024|
-|Creation of new servers for migrating from Azure Database for MariaDB to Azure Database for MySQL - Flexible Server.| September 2025| NA|
-|Extended support for Azure Database for MySQL v5.7| September 2025| September 2024|
+| Timelines | Azure MySQL 5.7 Flexible end at |
+| --- | --- |
+| Creation of new servers using the Azure portal. | To Be Decided |
+| Creation of new servers using the Command Line Interface (CLI). | To Be Decided |
+| Creation of replica servers for existing servers. | September 2025 |
+| Creation of servers using restore workflow for the existing servers | September 2025 |
+| Creation of new servers for migrating from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server. | NA |
+| Creation of new servers for migrating from Azure Database for MariaDB to Azure Database for MySQL - Flexible Server. | September 2025 |
+| Extended support for Azure Database for MySQL v5.7 | September 2025 |
> [!NOTE]
-> We initially planned to stop the creation of new Azure Database for MySQL version 5.7 instances via CLI and Portal after April 2024. However, after further review and customer feedback, we have decided to delay this action. The specific date for discontinuing the creation of new MySQL 5.7 instances is currently under review and remains 'To Be Decided'. This change reflects our commitment to accommodating customer needs and providing flexibility during the transition. Don't hesitate to let us know if you have any concerns about the Azure Database For MySQL Flexible Server extended support for MySQL 5.7 by email us at [Ask Azure DB For MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com), we value your feedback and encourage ongoing communication as we navigate these changes.
+> We initially planned to stop the creation of new Azure Database for MySQL version 5.7 instances via CLI and Portal after April 2024. However, after further review and customer feedback, we have decided to delay this action. The date for discontinuing the creation of new MySQL 5.7 instances is currently under review and remains 'To Be Decided'. This change reflects our commitment to accommodating customer needs and providing flexibility during the transition. Don't hesitate to let us know if you have any concerns about the Azure Database For MySQL Flexible Server extended support for MySQL 5.7 by emailing us at [Ask Azure DB For MySQL](mailto:AskAzureDBforMySQL@service.microsoft.com); we value your feedback and encourage ongoing communication as we navigate these changes.
### FAQs
-__Q: What is the process for upgrading Azure database for MySQL - Flexible server from version v5.7 to v8.0?__
+__Q: What is the process for upgrading the Azure database for MySQL - Flexible server from version v5.7 to v8.0?__
-A: Starting May 2023, Azure Database for MySQL - Flexible Server enables you to carry out an in-place upgrade from MySQL v5.7 to v8.0 utilizing the Major Version Upgrade (MVU) feature. For more detailed information, please consult the [Major version upgrade](flexible-server/how-to-upgrade.md) document.
+A: Starting May 2023, Azure Database for MySQL - Flexible Server enables you to carry out an in-place upgrade from MySQL v5.7 to v8.0 utilizing the Major Version Upgrade (MVU) feature. Consult the [Major version upgrade](flexible-server/how-to-upgrade.md) document for more detailed information.
-__Q: I'm currently using Azure database for MySQL - Single Sever version 5.7, how should I plan my upgrade?__
+__Q: I'm currently using the Azure Database for MySQL - Single Server version 5.7; how should I plan my upgrade?__
-A: Azure Database for MySQL - Single Server doesn't offer built-in support for major version upgrade from v5.7 to v8.0. As Azure Database for MySQL - Single Server is on deprecation path, there are no investments planned to support major version upgrade from v5.7 to v8.0. The recommended path to upgrade from v5.7 of Azure Database for MySQL - Single Server to v8.0 is to first [migrate your v5.7 Azure Database for MySQL - Single Server to v5.7 of Azure Database for MySQL - Flexible Server](single-server/whats-happening-to-mysql-single-server.md#migrate-from-single-server-to-flexible-server). After the migration is completed and server is stabilized on Flexible Server, you can proceed with performing a [major version upgrade](flexible-server/how-to-upgrade.md) on the migrated Azure Database for MySQL - Flexible Server from v5.7 to v8.0. The extended support for v5.7 on Flexible Server will allow you to run on v5.7 longer and plan your upgrade to v8.0 on Flexible Server at a later point in time after migration from Single Server.
+A: Azure Database for MySQL - Single Server doesn't offer built-in support for major version upgrades from v5.7 to v8.0. As Azure Database for MySQL - Single Server is on the deprecation path, no investments are planned to support major version upgrades from v5.7 to v8.0. The recommended path to upgrade from v5.7 of Azure Database for MySQL - Single Server to v8.0 is to first [migrate your v5.7 Azure Database for MySQL - Single Server to v5.7 of Azure Database for MySQL - Flexible Server](single-server/whats-happening-to-mysql-single-server.md#migrate-from-single-server-to-flexible-server). After the migration is completed and the server is stabilized on Flexible Server, you can proceed with performing a [major version upgrade](flexible-server/how-to-upgrade.md) on the migrated Azure Database for MySQL - Flexible Server from v5.7 to v8.0. The extended support for v5.7 on Flexible Server will allow you to run on v5.7 longer and plan your upgrade to v8.0 on Flexible Server later after migration from Single Server.
__Q: Are there any expected downtime or performance impacts during the upgrade process?__
-A: Yes, it's expected that there will be some downtime during the upgrade process. The specific duration varies depending on factors such as the size and complexity of the database. We advise conducting a test upgrade on a nonproduction environment to assess the expected downtime and evaluate the potential performance impact. If you wish to minimize downtime for your applications during the upgrade, you can explore the option of [perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replica](flexible-server/how-to-upgrade.md#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas).
+A: Yes, it's expected that there will be some downtime during the upgrade process. The specific duration varies depending on factors such as the size and complexity of the database. We advise conducting a test upgrade in a nonproduction environment to assess the expected downtime and evaluate the potential performance impact. If you want to minimize downtime for your applications during the upgrade, you can explore the option to [perform minimal downtime major version upgrade from MySQL 5.7 to MySQL 8.0 using read replicas](flexible-server/how-to-upgrade.md#perform-minimal-downtime-major-version-upgrade-from-mysql-57-to-mysql-80-using-read-replicas).
__Q: Can I roll back to MySQL v5.7 after upgrading to v8.0?__
-A: While it's generally not recommended to downgrade from MySQL v8.0 to v5.7, as the latter is nearing its End of Life status, we acknowledge that there may be specific scenarios where this flexibility becomes necessary. To ensure a smooth upgrade process and alleviate any potential concerns, it's strongly advised adhering to best practices by performing a comprehensive [on-demand backup](flexible-server/how-to-trigger-on-demand-backup.md) before proceeding with the upgrade to MySQL v8.0. This backup serves as a precautionary measure, allowing you to [restore your database](flexible-server/how-to-restore-server-portal.md) to its previous version on to another new Azure Database for MySQL -Flexible server in the event of any unexpected issues or complications with MySQL v8.0.
+A: While it's not recommended to downgrade from MySQL v8.0 to v5.7, as the latter is nearing its End of Life status, we acknowledge that there might be specific scenarios where this flexibility becomes necessary. To ensure a smooth upgrade process and alleviate any potential concerns, it's advised to adhere to best practices by performing a comprehensive [on-demand backup](flexible-server/how-to-trigger-on-demand-backup.md) before proceeding with the upgrade to MySQL v8.0. This backup serves as a precautionary measure, allowing you to [restore your database](flexible-server/how-to-restore-server-portal.md) to its previous version onto another new Azure Database for MySQL - Flexible Server in the event of any unexpected issues or complications with MySQL v8.0.
__Q: What are the main advantages of upgrading to MySQL v8.0?__
-A: MySQL v8.0 comes with a host of improvements, including more efficient data dictionary, enhanced security, and other features like common table expressions and window functions. Details please refer to [MySQL 8.0 release notes](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-32.html)
+A: MySQL v8.0 comes with a host of improvements, including a more efficient data dictionary, enhanced security, and other features like common table expressions and window functions. For details, refer to the [MySQL 8.0 release notes](https://dev.mysql.com/doc/relnotes/mysql/8.0/en/news-8-0-32.html).
__Q: Are there any compatibility issues to be aware of when upgrading to MySQL v8.0?__
-A: Some compatibility issues may arise due to changes in MySQL v8.0. It's important to test your applications with MySQL v8.0 before upgrading the production database. Check [MySQL's official documentation](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) for a detailed list of compatibility issues.
+A: Changes in MySQL v8.0 might cause some compatibility issues. It's important to test your applications with MySQL v8.0 before upgrading the production database. Check [MySQL's official documentation](https://dev.mysql.com/doc/refman/8.0/en/upgrading-from-previous-series.html) for a detailed list of compatibility issues.
__Q: What support is available if I encounter issues during the upgrade process?__
A: If you have questions, get answers from community experts in [Microsoft Q&A](
__Q: What will happen to my data during the upgrade?__
-A: While your data will remain unaffected during the upgrade process, it's highly advisable to create a backup of your data before proceeding with the upgrade. This precautionary measure helps mitigate the risk of potential data loss in the event of unforeseen complications.
+A: While your data will remain unaffected during the upgrade process, it's highly advisable to create a backup before proceeding with the upgrade. This precautionary measure helps mitigate the risk of potential data loss due to any unforeseen complications.
__Q: What will happen to the 5.7 servers after September 2025?__

A: Refer to our [retired MySQL version support policy](concepts-version-policy.md#retired-mysql-engine-versions-not-supported-in-azure-database-for-mysql) to learn what happens after Azure Database for MySQL 5.7 reaches end of support.
-__Q: I have a Azure Database for MariaDB or Azure database for MySQL -Single server, how can I create the server in 5.7 post April 2024 for migrating to Azure database for MySQL - flexible server?__
+__Q: I have an Azure Database for MariaDB or an Azure Database for MySQL - Single Server; how can I create the server in 5.7 post April 2024 for migrating to Azure Database for MySQL - Flexible Server?__
-A: If there's MariaDB\Single server in your subscription, this subscription is still permitted to create Azure Database for MySQL ΓÇô Flexible Server v5.7 to migrate to Azure Database for MySQL ΓÇô Flexible Server.
+A: If there's a MariaDB server in your subscription, the subscription is still permitted to create an Azure Database for MySQL - Flexible Server v5.7 instance to migrate to Azure Database for MySQL - Flexible Server.
## Retired MySQL engine versions not supported in Azure Database for MySQL

After the retirement date for each MySQL database version, if you continue running the retired version, note the following restrictions:

-- As the community won't release any further bug fixes or security fixes, Azure Database for MySQL won't patch the retired database engine for any bugs, or security issues or otherwise take security measures regarding the retired database engine. However, Azure continues performing periodic maintenance and patching for the host, OS, containers, and other service-related components.
-- If any support issue you may experience relates to the MySQL database, we may be unable to support you. In such cases, you have to upgrade your database for us to provide you with any support.
+- As the community won't release any further bug fixes or security fixes, Azure Database for MySQL won't patch the retired database engine for any bugs or security issues or otherwise take security measures regarding it. However, Azure continues performing periodic maintenance and patching for the host, OS, containers, and other service-related components.
+- If any support issue you might experience relates to the MySQL database, we might be unable to assist you. In such cases, you must upgrade your database for us to provide you with any support.
- You won't be able to create new database servers for the retired version. However, you can perform point-in-time recoveries and create read replicas for your existing servers.
-- New service capabilities developed by Azure Database for MySQL may only be available to supported database server versions.
+- New service capabilities developed by Azure Database for MySQL might only be available to supported database server versions.
- Uptime S.L.A.s apply solely to Azure Database for MySQL service-related issues and not to any downtime caused by database engine-related bugs.
-- In the extreme event of a serious threat to the service caused by the MySQL database engine vulnerability identified in, the retired database version, Azure may choose to stop the compute node of your database server from securing the service first. You're asked to upgrade the server before bringing the server online. During the upgrade process, your data is always protected using automatic backups performed on the service, which can be used to restore to the older version if desired.
+- In the extreme event of a serious threat to the service caused by a MySQL database engine vulnerability identified in the retired database version, Azure might choose to stop the compute node of your database server to secure the service first. You're asked to upgrade the server before bringing it online. During the upgrade process, your data is always protected using automatic backups performed on the service, which can be used to restore to the older version if desired.
-## Next steps
+## Next step
-- See MySQL [dump and restore](single-server/concepts-migrate-dump-restore.md) to perform upgrades.
+> [!div class="nextstepaction"]
+> [dump and restore](single-server/concepts-migrate-dump-restore.md)
mysql Concepts Server Parameters https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-server-parameters.md
The [Physical Memory Size](./concepts-service-tiers-storage.md#physical-memory-s
MySQL stores the InnoDB table in different tablespaces based on the configuration you provided during the table creation. The [system tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-system-tablespace.html) is the storage area for the InnoDB data dictionary. A [file-per-table tablespace](https://dev.mysql.com/doc/refman/5.7/en/innodb-file-per-table-tablespaces.html) contains data and indexes for a single InnoDB table, and is stored in the file system in its own data file. This behavior is controlled by the `innodb_file_per_table` server parameter. Setting `innodb_file_per_table` to `OFF` causes InnoDB to create tables in the system tablespace. Otherwise, InnoDB creates tables in file-per-table tablespaces.
-Azure Database for MySQL flexible server supports at largest, **4 TB**, in a single data file. If your database size is larger than 4 TB, you should create the table in [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If you have a single table size larger than 4 TB, you should use the partition table.
+Azure Database for MySQL flexible server supports at largest, **8 TB**, in a single data file. If your database size is larger than 8 TB, you should create the table in [innodb_file_per_table](https://dev.mysql.com/doc/refman/5.7/en/innodb-parameters.html#sysvar_innodb_file_per_table) tablespace. If you have a single table size larger than 8 TB, you should use the partition table.
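To see which tablespace layout a server uses and whether any table is approaching the single-file limit, you can query the server directly. A minimal sketch, assuming the `mysql-connector-python` package; connection details are placeholders:

```python
import mysql.connector

# Connection details are placeholders for your flexible server.
conn = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
)
cur = conn.cursor()

# ('innodb_file_per_table', 'ON') means tables get their own tablespace files.
cur.execute("SHOW VARIABLES LIKE 'innodb_file_per_table'")
print(cur.fetchone())

# List the largest tables (data + indexes) to spot any nearing the limit.
cur.execute(
    "SELECT table_schema, table_name, "
    "ROUND((data_length + index_length) / POW(1024, 3), 1) AS size_gb "
    "FROM information_schema.tables "
    "ORDER BY (data_length + index_length) DESC LIMIT 10"
)
for schema, table, size_gb in cur.fetchall():
    print(f"{schema}.{table}: {size_gb} GB")
conn.close()
```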
### innodb_log_file_size
mysql April 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/april-2024.md
description: Learn about the release notes for Azure Database for MySQL Flexible
Previously updated : 06/18/2024 Last updated : 08/09/2024
-# Azure Database For MySQL Flexible Server April 2024 Maintenance
+# Azure Database For MySQL - Flexible Server April 2024 maintenance
We're pleased to announce the April 2024 maintenance for Azure Database for MySQL Flexible Server. This maintenance incorporates several new features and improvements, along with known issue fixes, a minor version upgrade, and security patches.
-> [!NOTE]
+> [!NOTE]
> We regret to inform our users that after a thorough assessment of our current maintenance processes, we have observed an unusually high failure rate across the board. Consequently, we have made the difficult decision to cancel the minor version upgrade maintenance scheduled for April. The rescheduling of the next minor version upgrade maintenance remains undetermined at this time. We commit to providing at least one month's notice prior to the rescheduled maintenance to ensure all users are adequately prepared.
->
-> Please notes that if your maintenance has already been completed, whether it was rescheduled to an earlier date or carried out as initially scheduled, and concluded successfully, your services are not affected by this cancellation. Your maintenance is considered successful and will not be impacted by the current round of cancellations.
+>
+> Note that if your maintenance has already been completed, whether it was rescheduled to an earlier date or carried out as initially scheduled, and concluded successfully, your services are not affected by this cancellation. Your maintenance is considered successful and will not be affected by the current round of cancellations.
## Engine version changes
+
All existing servers upgrade to the 8.0.36 engine version. To check your engine version, run the `SELECT VERSION();` command at the MySQL prompt.

## Features
-### [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-databases-introduction)
+
+### [Microsoft Defender for Cloud](/azure/defender-for-cloud/defender-for-databases-introduction)
+ - Introducing Defender for Cloud support to simplify security management with threat protection from anomalous database activities in Azure Database for MySQL flexible server instances.
-
+
## Improvement
+
- Expose old_alter_table for 8.0.x.
-## Known Issues Fix
+## Known issues fixes
+
- Fixed the issue where the `GTID RESET` operation's retry interval was excessively long.
- Fixed the issue where data-in replication HA failover got stuck because of a corrupt system table.
-- Fixed the issue that in point-in-time restore that database or table starts with special keywords may be ignored
+- Fixed the issue where, in a point-in-time restore, a database or table whose name starts with special keywords might be ignored.
- Fixed the issue where, if there's a replication failure, the system now ignores the replication latency metric instead of displaying a '0' latency value.
- Fixed the issue where, under certain circumstances, MySQL RP doesn't correctly get notified of a "private DNS zone move operation". This issue causes the server to show an incorrect ARM resource ID for the associated private DNS zone resource.
mysql August 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/august-2024.md
+
+ Title: Release notes for Azure Database for MySQL - Flexible Server - August 2024
+description: Learn about the release notes for Azure Database for MySQL Flexible Server August 2024.
+++ Last updated : 08/09/2024+++++
+# Azure Database For MySQL - Flexible Server August 2024 maintenance
+
+We're pleased to announce the August 2024 maintenance of the Azure Database for MySQL Flexible Server. This maintenance updates all existing servers on engine version 8.0.34 and later to the 8.0.37 engine version, along with several security improvements and known issue fixes.
+
+## Engine version changes
+
+Existing servers on engine version 8.0.34 or later upgrade to the 8.0.37 engine version.
+To check your engine version, run the `SELECT VERSION();` command at the MySQL prompt.
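If you'd rather check from a script, a minimal sketch, assuming the `mysql-connector-python` package; connection details are placeholders:

```python
import mysql.connector

conn = mysql.connector.connect(
    host="<server-name>.mysql.database.azure.com",
    user="<admin-user>",
    password="<password>",
)
cur = conn.cursor()
cur.execute("SELECT VERSION();")
print(cur.fetchone()[0])  # for example, '8.0.37' after this maintenance
conn.close()
```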
+
+> [!NOTE]
+> Percona has identified a [critical bug](https://www.percona.com/blog/do-not-upgrade-to-any-version-of-mysql-after-8-0-37/?utm_campaign=2024-blog-q3&utm_content=300046226&utm_medium=social&utm_source=linkedin&hss_channel=lcp-421929) in MySQL versions 8.0.38, 8.4.1, and 9.0.0 that causes the MySQL daemon to crash upon restart if over 10,000 tables exist. Azure MySQL will not upgrade to the buggy MySQL versions 8.0.38, 8.4.1, and 9.0.0 in the August maintenance. Instead, we will skip these versions and upgrade directly to a future MySQL engine version that has resolved this issue. Microsoft Azure MySQL remains committed to providing customers with the most secure and stable PaaS database service.
+
+## Features
+
+No new features are being introduced in this maintenance update.
+
+## Improvement
+
+- Many security improvements have been made to the service during this maintenance.
+
+## Known issues fixes
+
+- Fixed the issue where, for some servers migrated from Single Server to Flexible Server, executing a table partition operation led to table corruption.
+- Fixed the issue where, for some servers with audit/slow logs enabled, generating a large number of logs might cause missing server metrics, and the start operation might get stuck if the server is in a stopped state.
mysql February 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/february-2024.md
description: Learn about the release notes for Azure Database for MySQL Flexible
Previously updated : 06/18/2024 Last updated : 08/09/2024
-# Azure Database For MySQL Flexible Server February 2024 Maintenance
+# Azure Database For MySQL - Flexible Server February 2024 maintenance
We're pleased to announce the February 2024 maintenance for Azure Database for MySQL Flexible Server. This maintenance mainly focuses on known issue fixes, underlying OS upgrades, and vulnerability patching.
-> [!NOTE]
-> During the preliminary stages of our February-March maintenance period, we identified a regression issue that necessitated a reevaluation of our scheduled maintenance activities. Consequently, all maintenance sessions originally planned for the period from March 2nd, 13:00 UTC, to March 14th, 00:00 UTC, have been canceled. We are currently in the process of rescheduling these maintenance activities. Affected customers will be promptly notified of the new maintenance timetable. We apologize for any inconvenience this may cause and thank you for your understanding and continued support.
+> [!NOTE]
+> During the preliminary stages of our February-March maintenance period, we identified a regression issue that necessitated a reevaluation of our scheduled maintenance activities. Consequently, all maintenance sessions originally planned for the period from March 2nd, 13:00 UTC, to March 14th, 00:00 UTC, have been canceled. We are currently in the process of rescheduling these maintenance activities. Affected customers will be promptly notified of the new maintenance timetable. We apologize for any inconvenience this might cause and thank you for your understanding and continued support.
## Engine version changes
+
+- All existing 5.7.42 engine version servers will upgrade to the 5.7.44 engine version.
+- All existing 8.0.34 engine version servers will upgrade to the 8.0.35 engine version.
To check your engine version, run the `SELECT VERSION();` command at the MySQL prompt.

## Features
+
There will be no new features in this maintenance update.

## Improvement
+
There will be no new improvements in this maintenance update.
-## Known Issues Fix
+## Known issues fixes
+
- Fix HA standby replication deadlock issue caused by slave_preserve_commit_order.
- Fix promotion stuck issue when the source server is unavailable or the source region is down. Improve customer experience on replica promotion to better support disaster recovery.
- Fix the default value of character_set_server & collation_server.
mysql January 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/january-2024.md
description: Learn about the release notes for Azure Database for MySQL Flexible
Previously updated : 06/18/2024 Last updated : 08/09/2024
-# Azure Database For MySQL Flexible Server January 2024 Maintenance
+# Azure Database For MySQL - Flexible Server January 2024 maintenance
We are pleased to announce the January 2024 maintenance for Azure Database for MySQL Flexible Server. This maintenance incorporates several new features and resolves known issues for enhanced performance and reliability.
-> [!NOTE]
-> Between 2024/1/12 04:00 UTC and 2024/1/15 07:00 UTC, we strategically paused Azure MySQL maintenance to proactively address a detected issue that could lead to maintenance interruptions. We're happy to report that maintenance operations are fully restored. For those impacted, you're welcome to utilize our flexible maintenance feature to conveniently reschedule your maintenance times as needed.
+> [!NOTE]
+> Between 2024/1/12 04:00 UTC and 2024/1/15 07:00 UTC, we strategically paused Azure MySQL maintenance to proactively address a detected issue that could lead to maintenance interruptions. We're happy to report that maintenance operations are fully restored. For those affected, you're welcome to utilize our flexible maintenance feature to conveniently reschedule your maintenance times as needed.
## Engine version changes
+
There will be no engine version changes in this maintenance update.

## Features
+
### [Accelerated Logs V2](../concepts-accelerated-logs.md)
+
- Introducing a new type of disk designed to offer superior performance in storing binary logs and redo logs.

## Improvement

### [Audit Log Improvement](../concepts-audit-logs.md)
+
- In alignment with our users' expectations for the audit log, we have introduced wildcard support for audit log usernames and added connection status for connection logs.
-## Known Issues Fix
+## Known issues fixes
+
+### Support Data-in Replication in Major Version Upgrade
+
+- During an upgrade from 5.7 to 8.0, data-in replication encounters issues due to a known bug in the MySQL community. With this January 2024 maintenance, we have addressed this concern, enabling data-in replication support for servers upgraded from version 5.7.
+
+### Server Operations Blockage After Moving Subscription or Resource Group
+
+- Several server operations were hindered after the transfer of a subscription or resource group owing to incomplete server information updates. This issue has been resolved in this January 2024 maintenance, ensuring unhindered movement of subscriptions and resource groups.
mysql June 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/june-2024.md
description: Learn about the release notes for Azure Database for MySQL Flexible
Previously updated : 06/18/2024 Last updated : 08/09/2024
-# Azure Database For MySQL Flexible Server June 2024 Maintenance
+# Azure Database For MySQL - Flexible Server June 2024 maintenance
-We're pleased to announce the June 2024 maintenance for Azure Database for MySQL Flexible Server. In this maintenance update, we're addressing some availability issues that have been impacting a subset of our servers. While most servers remain unaffected, a small portion experiences maintenance activities to enhance their performance and stability. We appreciate your understanding and patience as we work to improve our service.
+We're pleased to announce the June 2024 maintenance for Azure Database for MySQL Flexible Server. In this maintenance update, we're addressing some availability issues that have been affecting a subset of our servers. While most servers remain unaffected, a small portion experiences maintenance activities to enhance their performance and stability. We appreciate your understanding and patience as we work to improve our service.
## Engine version changes

No engine version upgrade in this maintenance.

To check your engine version, run the `SELECT VERSION();` command at the MySQL prompt.

## Features

No new features are being introduced in this maintenance update.

## Improvement

- Allow customers to truncate performance_schema tables by invoking a predefined stored procedure.
- Improve the startup time for servers with a large number of tablespaces.
-
-## Known Issues Fix
-- Fixed the issue that MySQL engine may not receive the shutdown signal during scaling and maintenance, which may lead to long recovery time.
-- Fixed the issue that if server with originalPrimaryName is deleted due to HA failover->disable HA action, earlier ATP update operation failed.
+
+## Known issues fixes
+
+- Fixed the issue that MySQL engine might not receive the shutdown signal during scaling and maintenance, which might lead to long recovery time.
+- Fixed the issue that if server with originalPrimaryName is deleted due to HA failover->disable HA action, earlier ATP update operation failed.
- Fixed the issue where unhealthy servers or Burstable servers without credits returned Internal Server Error; they now throw ServerNotSucceeded.
mysql May 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/may-2024.md
description: Learn about the release notes for Azure Database for MySQL Flexible
Previously updated : 06/18/2024 Last updated : 08/09/2024
-# Azure Database For MySQL Flexible Server May 2024 Maintenance
+# Azure Database For MySQL - Flexible Server May 2024 maintenance
We're pleased to announce the May 2024 maintenance for Azure Database for MySQL Flexible Server. This maintenance incorporates several new features and improvements, along with known issue fixes and security patches.

## Engine version changes

No engine version upgrade in this maintenance.

To check your engine version, run the `SELECT VERSION();` command at the MySQL prompt.

## Features

### [General availability of Accelerated Logs in Azure Database for MySQL Flexible Server](../concepts-accelerated-logs.md)
-- This feature is available within the Business-Critical service tier, which significantly enhances the performance of Azure Database for MySQL Flexible Server instances. It offers a dynamic solution designed for high throughput needs, reducing latency with no additional cost.
+
+- This feature is available within the Business-Critical service tier, which significantly enhances the performance of Azure Database for MySQL Flexible Server instances. It offers a dynamic solution designed for high throughput needs, reducing latency with no additional cost.
## Improvement

- Improved server restart logic: server restart has a timeout of 2 hours for non-Burstable servers and a 4-hour timeout for Burstable servers. After the server restart workflow times out, it rolls back and sets the server state to Succeeded.
- Improved the data-in replication procedures to show the real error message and exit safely when an exception happens.
-- Improved the read replica creation workflow to precheck the VNet setting.
-
-## Known Issues Fix
+- Improved the read replica creation workflow to precheck the virtual network setting.
+
+## Known issues fixes
+
+- Fixed the issue that the server parameters max_connections and table_open_cache can't be configured correctly.
+- Fixed the issue where executing `CREATE AADUSER IF NOT EXISTS 'myuser' IDENTIFIED BY 'CLIENT_ID'` when the user already exists incorrectly set the binlog record, affecting replica and high availability functionalities.
-
mysql Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/whats-new.md
This article summarizes new releases and features in the Azure Database for MySQ
## August 2024 +
+- **Azure Database for MySQL Flexible Server now supports up to 8 TB in a single data file!**
+
+ Azure Database for MySQL now supports single InnoDB data files up to **8 TB** in size, enabling users to store larger datasets within a single file. This enhancement reduces the need for data partitioning and streamlines database management, making it easier to handle substantial volumes of data using the InnoDB storage engine. [Learn more.](./concepts-server-parameters.md#innodb_file_per_table)
+
+- **Major version upgrade support for Burstable compute tier**
+
+  Azure Database for MySQL now offers major version upgrades for Burstable SKU compute tiers. This support automatically upgrades the compute tier to General Purpose SKU before performing the upgrade, ensuring sufficient resources. Customers can choose to revert back to Burstable SKU after the upgrade. Additional costs may apply. [Learn more](how-to-upgrade.md#perform-a-planned-major-version-upgrade-from-mysql-57-to-mysql-80-using-the-azure-portal-for-burstable-sku-servers)
nat-gateway Nat Gateway Design https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/nat-gateway/nat-gateway-design.md
The following examples demonstrate coexistence of a load balancer or instance-le
| Resource | Traffic flow direction | Connectivity method used |
| --- | --- | --- |
-| VM (Subnet B) | Inbound </br> Outbound | NA </br> NAT gateway |
-| Virtual machine scale set (Subnet B) | Inbound </br> Outbound | NA </br> NAT gateway |
-| VMs (Subnet A) | Inbound </br> Outbound | Instance-level public IP </br> NAT gateway |
+| VM (Subnet 1) | Inbound </br> Outbound | Instance-level public IP </br> NAT gateway |
+| Virtual machine scale set (Subnet 1) | Inbound </br> Outbound | NA </br> NAT gateway |
+| VMs (Subnet 2) | Inbound </br> Outbound | NA </br> NAT gateway |
-The virtual machine uses the NAT gateway for outbound and return traffic. Inbound originated traffic passes through the instance level public IP directly associated with the virtual machine in subnet A. The virtual machine scale set from subnet B and VMs from subnet B can only egress and receive response traffic through the NAT gateway. No inbound originated traffic can be received.
+The virtual machine uses the NAT gateway for outbound and return traffic. Inbound originated traffic passes through the instance level public IP directly associated with the virtual machine in subnet 1. The virtual machine scale set from subnet 1 and VMs from subnet 2 can only egress and receive response traffic through the NAT gateway. No inbound originated traffic can be received.
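To make the table above concrete, the following is a minimal Terraform sketch of attaching a NAT gateway to a subnet so that all outbound traffic from that subnet egresses through it. The resource group, virtual network, and subnet names are illustrative assumptions, not part of the guidance above.

```terraform
# Sketch: a NAT gateway with a static public IP, associated with "subnet2".
# The referenced resource group and subnet are assumed to exist elsewhere.
resource "azurerm_public_ip" "natgw" {
  name                = "natgw-pip"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  allocation_method   = "Static"
  sku                 = "Standard"
}

resource "azurerm_nat_gateway" "example" {
  name                = "example-natgw"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  sku_name            = "Standard"
}

resource "azurerm_nat_gateway_public_ip_association" "example" {
  nat_gateway_id       = azurerm_nat_gateway.example.id
  public_ip_address_id = azurerm_public_ip.natgw.id
}

# Outbound-only subnet (subnet 2 in the table): no inbound path, NAT gateway egress.
resource "azurerm_subnet_nat_gateway_association" "subnet2" {
  subnet_id      = azurerm_subnet.subnet2.id # assumed existing subnet
  nat_gateway_id = azurerm_nat_gateway.example.id
}
```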
### A NAT gateway and VM with a standard public load balancer
The virtual machine uses the NAT gateway for outbound and return traffic. Inboun
| Resource | Traffic flow direction | Connectivity method used |
| --- | --- | --- |
-| VMs in backend pool | Inbound </br> Outbound | Load balancer </br> NAT gateway |
-| VM and virtual machine scale set (Subnet B) | Inbound </br> Outbound | NA </br> NAT gateway |
+|VM and virtual machine scale set (Subnet 1) | Inbound </br> Outbound | Load balancer </br> NAT gateway |
+|VMs (Subnet 2) | Inbound </br> Outbound | NA </br> NAT gateway |
-NAT Gateway supersedes any outbound configuration from a load-balancing rule or outbound rules on the load balancer. VM instances in the backend pool use the NAT gateway to send outbound traffic and receive return traffic. Inbound originated traffic passes through the load balancer for all VM instances within the load balancerΓÇÖs backend pool. VM and the virtual machine scale set from subnet B can only egress and receive response traffic through the NAT gateway. No inbound originated traffic can be received.
+NAT Gateway supersedes any outbound configuration from a load-balancing rule or outbound rules on the load balancer. VM instances in the backend pool use the NAT gateway to send outbound traffic and receive return traffic. Inbound originated traffic passes through the load balancer for all VM instances (Subnet 1) within the load balancer's backend pool. VMs from subnet 2 can only egress and receive response traffic through the NAT gateway. No inbound originated traffic can be received.
### A NAT gateway and VM with an instance-level public IP and a standard public load balancer
NAT Gateway supersedes any outbound configuration from a load-balancing rule or
| Resource | Traffic flow direction | Connectivity method used |
| --- | --- | --- |
-| VM (Subnet A) | Inbound </br> Outbound | Instance-level public IP </br> NAT gateway |
-| Virtual machine scale set | Inbound </br> Outbound | NA </br> NAT gateway |
-| VM (Subnet B) | Inbound </br> Outbound | NA </br> NAT gateway |
+| VM (Subnet 1) | Inbound </br> Outbound | Instance-level public IP </br> NAT gateway |
+| Virtual machine scale set (Subnet 1) | Inbound </br> Outbound | Load balancer </br> NAT gateway |
+| VMs (Subnet 2) | Inbound </br> Outbound | NA </br> NAT gateway |
-The NAT gateway supersedes any outbound configuration from a load-balancing rule or outbound rules on a load balancer and instance level public IPs on a virtual machine. All virtual machines in subnets A and B use the NAT gateway exclusively for outbound and return traffic. Instance level public IPs take precedence over load balancer. The VM in subnet A uses the instance level public IP for inbound originating traffic.
+The NAT gateway supersedes any outbound configuration from a load-balancing rule or outbound rules on a load balancer and instance-level public IPs on a virtual machine. All virtual machines in subnets 1 and 2 use the NAT gateway exclusively for outbound and return traffic. Instance-level public IPs take precedence over the load balancer. The VM in subnet 1 uses the instance-level public IP for inbound originating traffic. Virtual machine scale sets don't have instance-level public IPs.
## Monitor outbound network traffic with NSG flow logs
operator-nexus Concepts Nexus Kubernetes Cluster https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-nexus-kubernetes-cluster.md
Title: "Azure Operator Nexus: Nexus Kubernetes Cluster Service" description: Introduction to Nexus Kubernetes Cluster Service.--++ Last updated 06/28/2023
operator-nexus Concepts Resource Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/concepts-resource-types.md
Title: Azure Operator Nexus resource types description: Operator Nexus platform and tenant resource types--++ Last updated 03/06/2023
operator-nexus Reference Supported Software Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/reference-supported-software-versions.md
Title: Supported software versions in Azure Operator Nexus
description: Learn about supported software versions in Azure Operator Nexus. Last updated 07/18/2024--++
operator-nexus Release Notes 2404.2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-nexus/release-notes-2404.2.md
Title: Azure Operator Nexus Release Notes 2404.2
description: Release notes for Operator Nexus 2404.2 release. Last updated 05/06/2024--++
operator-service-manager Best Practices Onboard Deploy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operator-service-manager/best-practices-onboard-deploy.md
Title: Best practices for Azure Operator Service Manager description: Understand best practices for Azure Operator Service Manager to onboard and deploy a network function (NF).-- Previously updated : 09/11/2023++ Last updated : 08/09/2024
Delete publisher resources in the following order to make sure no orphaned resou
- Artifact Store - Publisher
-## Next steps
+## Considerations if your NF runs cert-manager
+
+With release 1.0.2728-50 and later, AOSM uses cert-manager to store and rotate certificates. As part of this change, AOSM deploys a cert-manager operator, and associated CRDs, in the azurehybridnetwork namespace. Because cert-manager operators, even when deployed in separate namespaces, watch across all namespaces, only one cert-manager can effectively run on the cluster.
+
+Any user trying to install cert-manager on the cluster, as part of a workload deployment, will get a deployment failure with an error that the CRD "exists and cannot be imported into the current release." To avoid this error, the recommendation is to skip installing cert-manager and instead take a dependency on the cert-manager operator and CRDs already installed by AOSM.
+
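+For example, if your workload's Helm chart bundles cert-manager as an optional subchart, one hedged sketch of skipping it at deployment time looks like the following. This is illustrative only: the chart name, repository, and the `cert-manager.enabled` values key are hypothetical and depend on how your chart is authored.
+
+```terraform
+# Hypothetical sketch: deploy a workload chart with its bundled cert-manager
+# disabled, taking a dependency on the AOSM-managed operator and CRDs instead.
+# The chart coordinates and values key are assumptions - check your own chart.
+resource "helm_release" "workload" {
+  name       = "my-network-function"
+  repository = "https://charts.example.com"
+  chart      = "my-network-function"
+  namespace  = "workload"
+
+  set {
+    name  = "cert-manager.enabled"
+    value = "false"
+  }
+}
+```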
+### Other Configuration Changes to Consider
+
+In addition to disabling the NfApp associated with the old user cert-manager, we have found other changes may be needed:
+1. If any other NfApps have DependsOn references to the old user cert-manager NfApp, these references need to be removed.
+2. If any other NfApps reference the old user cert-manager namespace value, this value needs to be changed to the new azurehybridnetwork namespace value.
+
+### Cert-Manager Version Compatibility & Management
+
+For the cert-manager operator, our current deployed version is 1.14.5. Users should test for compatibility with this version. Future cert-manager operator upgrades will be supported via the NFO extension upgrade process.
+
+For the CRD resources, our current deployed version is 1.14.5. Users should test for compatibility with this version. Since management of a common cluster CRD is typically handled by a cluster administrator, we're working to enable CRD resource upgrades via the standard Nexus add-on process.
-
-- [Quickstart: Complete the prerequisites to deploy a Containerized Network Function in Azure Operator Service Manager](quickstart-containerized-network-function-prerequisites.md)
-- [Quickstart: Complete the prerequisites to deploy a Virtualized Network Function in Azure Operator Service Manager](quickstart-virtualized-network-function-prerequisites.md)
oracle Faq Oracle Database Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/oracle/oracle-db/faq-oracle-database-azure.md
# Oracle Database@Azure FAQs
-This article answers frequently asked questions (FAQs) about the Oracle Database@Azure partnership with Microsoft.
+This article answers frequently asked questions (FAQs) about the Oracle Database@Azure offering.
## General In this section, we cover general questions about Oracle Database@Azure.
Oracle Database@Azure is enabled by hosting OCI's infrastructure in Azure and
- Oracle Database@Azure: Oracle Database@Azure (Oracle Database Service for Azure) is hosted on OCI's infrastructure in Azure datacenters, enabling you to host your mission-critical Oracle databases closer to your application tier hosted in Azure. Azure virtual network integration with subnet delegation enables private IPs from the customer virtual network to serve as database endpoints. This solution is an Oracle-managed and supported service in Azure.
-- Oracle on Azure VMs: You can also deploy and self-manage your Oracle workloads on Azure VMs. Specifically, workloads that don't require features like RAC, Smart Scan or Exadata, performance are best suited for this operation.
+- Oracle on Azure VMs: You can also deploy and self-manage your Oracle workloads on Azure VMs. Specifically, workloads that don't require features like RAC, Smart Scan or Exadata performance are best suited for this operation.
- OCI Interconnect: OCI Interconnect is used to connect your Oracle deployments in OCI with applications and services in Azure over OCI FastConnect and Azure ExpressRoute. This typically suits workloads/solutions that can work with the higher latency envelope and have dependencies on services, features, and functionalities running in both clouds.
Oracle versions supported on Oracle Cloud Infrastructure (OCI) are supported on
### Do you have any documented benchmark latency-wise between Azure resources and Oracle Database@Azure?
-Latency between Azure resources and Oracle Database@Azure is within the Azure regional latency envelope as the Exadata infrastructure is within the Azure Data Centers. Latency can be further fine-tuned dependent on Co-Location within Availability Zones.
+Latency between Azure resources and Oracle Database@Azure is within the Azure regional latency envelope as the Exadata infrastructure is within the Azure Data Centers. Latency can be further fine-tuned dependent on colocation within availability zones. For more information, see [What are availability zones?](/azure/reliability/availability-zones-overview?tabs=azure-cli).
### Does Oracle Database@Azure support deploying Base Database (BD), or do I need to migrate to Autonomous Database service? No, Base Database isn't currently supported with Oracle Database@Azure. You can deploy single instance self-managed databases on Azure VMs or if you need Oracle managed databases with RAC, we recommend Autonomous Databases via Oracle Database@Azure. For more information, see [Autonomous Database | Oracle](https://www.oracle.com/cloud/azure/oracle-database-at-azure/) and [Provision Oracle Autonomous Databases | Microsoft Learn](/training/modules/migrate-oracle-workload-azure-odaa/).
-### For the Oracle Database@Azure service, will the automated DBCS DR use Azure backbone or the OCI backbone?
+### For the Oracle Database@Azure service, will the automated DR use Azure backbone or the OCI backbone?
BCDR is enabled using the OCI-managed offering (Backup and Data Guard) and uses the Azure-OCI backbone.

### How many database servers can be deployed in each rack of Oracle Database@Azure? Is there flexibility in terms of being able to scale up and down as needed from both the consumption and licensing perspective?
-Oracle Database@Azure currently runs on X9M hardware and provides a configuration of a minimum of two database servers and three Storage servers. This constitutes a quarter rack configuration. This configuration can be increased to a limit of 32 database servers and 64 Storage servers. You can scale up and down as needed within the Exadata system depending on your SKU. For more information about configurations, see [Oracle Exadata Database Service on Dedicated Infrastructure Description](https://docs.oracle.com/en-us/iaas/exadatacloud/exacs/exa-service-desc.html#ECSCM-GUID-EC1A62C6-DDA1-4F39-B28C-E5091A205DD3). For more specifics, see [Oracle Exadata Cloud Infrastructure X9M Data Sheet](https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/exadata-cloud-infrastructure-x9m-ds.pdf).
+Oracle Database@Azure currently runs on X9M hardware and provides a configuration of a minimum of two database servers and three Storage servers. This constitutes a quarter rack configuration. This configuration can be increased to a limit of 32 database servers and 64 Storage servers. You can scale up and down as needed within the Exadata system depending on your SKU. For more information about configurations, see [Oracle Exadata Database Service on Dedicated Infrastructure Description](https://docs.oracle.com/iaas/exadatacloud/exacs/exa-service-desc.html#ECSCM-GUID-EC1A62C6-DDA1-4F39-B28C-E5091A205DD3). For more specifics, see [Oracle Exadata Cloud Infrastructure X9M Data Sheet](https://www.oracle.com/a/ocom/docs/engineered-systems/exadata/exadata-cloud-infrastructure-x9m-ds.pdf).
### What Oracle applications are supported to run on Azure?
Various Oracle applications are authorized and supported to be run on Azure. For
### What are the available Service Level Agreements (SLAs)?
-For detailed Service Level Agreements, refer to the Oracle PaaS and IaaS Public Cloud Services Pillar Document.
+For detailed Service Level Agreements, refer to the Oracle PaaS and IaaS Public Cloud Services [Pillar Document](https://www.oracle.com/contracts/docs/paas_iaas_pub_cld_srvs_pillar_4021422.pdf?download=false).
## Billing and Commerce

In this section, we cover questions related to billing and commerce for Oracle Database@Azure.

### How much will Oracle Database@Azure cost?
-Oracle Database@Azure is at parity with the Exadata Cloud costs in OCI. For list prices, refer to OCIΓÇÖs Cloud Cost Estimator (oracle.com). For your specific costs tailored to your needs, work with your Oracle sales team.
+Oracle Database@Azure is at parity with the Exadata Cloud costs in OCI. For list prices, refer to [OCIΓÇÖs Cloud Cost Estimator](https://www.oracle.com/cloud/costestimator.html). For your specific costs tailored to your needs, work with your Oracle sales team.
### Is Oracle Database@Azure eligible for MACC (Microsoft Azure Commit to Consume)?
Yes, the Oracle Database@Azure offering is Azure benefits eligible and hence eli
You can Bring Your Own License (BYOL) or provision license-included Oracle databases with Oracle Database@Azure.
-### Can we utilize multi-tenancy billing ID across different regions?
-
-The billing account ID used to target the private offer to a specific customer doesn't constrain where the service can be deployed.
-
### Can I procure Oracle Database@Azure even if the service isn't available in my region?

You can purchase Oracle Database@Azure anytime as it's generally available in multiple regions. However, you can only deploy the service in the region of your choice once it's live.
Ingress and Egress for managed services is via Azure OCI backbone and doesn't in
In this section, we'll cover questions related to onboarding, provisioning, and migration to Oracle Database@Azure.

### To set up Oracle Database@Azure, what would be the role assignments needed for the Azure user?
-You can find the list of role assignments here.
+See [Groups and roles for Oracle Database@Azure](/azure/oracle/oracle-db/oracle-database-groups-roles) for the list of role assignments.
### Can you describe the authentication/authorization standards supported by Oracle Database@Azure?
Oracle Database@Azure is based on SAML and OpenID standards. OCI Oracle Identity
### Where can I find best practices to plan and deploy Oracle Database@Azure?
-Refer to our landing zone architecture documentation to plan and deploy your oracle workloads with Oracle Database@Azure here.
+To plan and deploy your Oracle workloads with Oracle Database@Azure, refer to the [landing zone architecture documentation](/azure/cloud-adoption-framework/scenarios/oracle-iaas/?wt.mc_id=knwlserapi_inproduct_azportal#landing-zone-architecture-for-oracle-databaseazure).
### Does Azure have any tools to assist with understanding Oracle database sizing, license usage and TCO for both Oracle Database@Azure and Oracle IaaS?
For Oracle Database on Azure VMs, we currently have the Oracle Migration Assista
### What tools can be used for database migration? Could you help share other details about licensing and charges for these tools?
-There are multiple tools available from Oracle: ZDM, Data Guard, Data pump, GoldenGate, and more. For more information, contact your Oracle representative for commercials.
+There are multiple tools available from Oracle: ZDM, Data Guard, Data pump, GoldenGate, and more. For more information, see [Migrate Oracle workloads to Azure](/azure/cloud-adoption-framework/scenarios/oracle-iaas/oracle-migration-planning?wt.mc_id=knwlserapi_inproduct_azportal#migrate-oracle-workloads-to-azure). Contact your Oracle representative for commercials.
### When using Oracle GoldenGate for migration, do I need to purchase a GoldenGate license?
Oracle will manage and host the data on Oracle Cloud Infrastructure hosted in Az
If you enable backup to Azure, that data resides in the respective Azure storage: Azure NetApp Files or Blob storage.
-We ensure compliance with both companiesΓÇÖ data privacy and compliance policies through physical isolation of systems within Azure datacenters and access enforced assignment policies. For more information on compliance, refer to [Overview - Oracle Database@Azure | Microsoft Learn](database-overview.md) or Oracle compliance website.
+We ensure compliance with both companies' data privacy and compliance policies through physical isolation of systems within Azure datacenters and access-enforced assignment policies. For more information on compliance, refer to [Overview - Oracle Database@Azure | Microsoft Learn](database-overview.md) or the [Oracle compliance website](https://docs.oracle.com/iaas/Content/multicloud/compliance.htm).
### How is data security managed? Is the data encrypted in transit and at rest?
postgresql Concepts Read Replicas Virtual Endpoints https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-read-replicas-virtual-endpoints.md
This section explains how to use Virtual Endpoints in Azure Database for Postgre
## Related content

-- [create virtual endpoints](how-to-read-replicas-portal.md#create-virtual-endpoints).
+- [Create virtual endpoints](how-to-read-replicas-portal.md#create-virtual-endpoints).
- [Read replicas - overview](concepts-read-replicas.md)
- [Geo-replication](concepts-read-replicas-geo.md)
- [Promote read replicas](concepts-read-replicas-promote.md)
- [Create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md)
- [Cross-region replication with virtual network](concepts-networking.md#replication-across-azure-regions-and-virtual-networks-with-private-networking)
+- [Create virtual endpoints for read replicas with Terraform](how-to-read-replicas-virtual-endpoints-terraform.md)
postgresql Generative Ai Semantic Search https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/generative-ai-semantic-search.md
[!INCLUDE [applies-to-postgresql-flexible-server](~/reusable-content/ce-skilling/azure/includes/postgresql/includes/applies-to-postgresql-flexible-server.md)]
-This hands-on tutorial shows you how to build a semantic search application using Azure Database for PostgreSQL flexible server and Azure OpenAI service. Semantic search does searches based on semantics; standard lexical search does searches based on keywords provided in a query. For example, your recipe dataset might not contain labels like gluten-free, vegan, dairy-free, fruit-free or dessert but these characteristics can be deduced from the ingredients. The idea is to issue such semantic queries and get relevant search results.
+This hands-on tutorial shows you how to build a semantic search application using Azure Database for PostgreSQL flexible server and Azure OpenAI Service. Semantic search does searches based on semantics; standard lexical search does searches based on keywords provided in a query. For example, your recipe dataset might not contain labels like gluten-free, vegan, dairy-free, fruit-free or dessert but these characteristics can be deduced from the ingredients. The idea is to issue such semantic queries and get relevant search results.
Building semantic search capability on your data using GenAI and Flexible Server involves the following steps:

> [!div class="checklist"]
Building semantic search capability on your data using GenAI and Flexible Server
1. Grant access to Azure OpenAI in the desired subscription.
1. Grant permissions to [create Azure OpenAI resources and to deploy models](../../ai-services/openai/how-to/role-based-access-control.md).
-[Create and deploy an Azure OpenAI service resource and a model](../../ai-services/openai/how-to/create-resource.md), deploy the embeddings model [text-embedding-ada-002](../../ai-services/openai/concepts/models.md#embeddings-models). Copy the deployment name as it is needed to create embeddings.
+[Create and deploy an Azure OpenAI Service resource and a model](../../ai-services/openai/how-to/create-resource.md), and deploy the embeddings model [text-embedding-ada-002](../../ai-services/openai/concepts/models.md#embeddings-models). Copy the deployment name as it's needed to create embeddings.
## Enable the `azure_ai` and `pgvector` extensions
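If you manage the server with Terraform, the extensions can also be allowlisted through the `azure.extensions` server parameter. The following is a minimal sketch, assuming the azurerm provider and an existing server resource named `example`; the allowlist values shown are assumptions to verify against the extension names your server version exposes. You still run `CREATE EXTENSION azure_ai;` and `CREATE EXTENSION vector;` in the database afterward.

```terraform
# Sketch: allowlist the azure_ai and pgvector extensions on an existing
# Azure Database for PostgreSQL flexible server (resource name is assumed).
resource "azurerm_postgresql_flexible_server_configuration" "extensions" {
  name      = "azure.extensions"
  server_id = azurerm_postgresql_flexible_server.example.id
  value     = "AZURE_AI,VECTOR"
}
```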
postgresql How To Read Replicas Virtual Endpoints Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/how-to-read-replicas-virtual-endpoints-terraform.md
+
+ Title: Create virtual endpoints for read replicas with Terraform
+description: This article describes the virtual endpoints for read replica feature using Terraform for Azure Database for PostgreSQL - Flexible Server.
+++ Last updated : 08/08/2024 +++
+ai.usage: ai-assisted
++
+# Create virtual endpoints for read replicas with Terraform
+
+Using Terraform, you can create and manage virtual endpoints for read replicas in Azure Database for PostgreSQL - Flexible Server. Terraform is an open-source infrastructure-as-code tool that allows you to define and provision infrastructure using a high-level configuration language.
+
+## Prerequisites
+
+Before you begin, ensure you have the following:
+
+- An Azure account with the necessary permissions.
+- Terraform installed on your local machine. You can download it from the [official Terraform website](https://www.terraform.io/downloads.html).
+- Azure CLI installed and authenticated. Instructions are in the [Azure CLI documentation](/cli/azure/install-azure-cli).
+
+Ensure you have a basic understanding of Terraform syntax and Azure resource provisioning.
+
+
+## Configuring virtual endpoints
+
+Follow these steps to create virtual endpoints for read replicas in Azure Database for PostgreSQL - Flexible Server:
+
+### Initialize the Terraform configuration
+
+ Create a `main.tf` file and define the Azure provider.
+
+ ```terraform
+ provider "azurerm" {
+ features {}
+ }
+
+ resource "azurerm_resource_group" "example" {
+ name = "example-resources"
+ location = "East US"
+ }
+ ```
+
+### Create the primary Azure Database for PostgreSQL
+
+Define the primary PostgreSQL server resource.
+
+```terraform
+resource "azurerm_postgresql_flexible_server" "primary" {
+ name = "primary-server"
+ resource_group_name = azurerm_resource_group.example.name
+ location = azurerm_resource_group.example.location
+ version = "12"
+ administrator_login = "adminuser"
+  administrator_password = "password" # placeholder: use a variable or secret in real deployments
+ sku_name = "Standard_D4s_v3"
+
+ storage_mb = 32768
+ backup_retention_days = 7
+  geo_redundant_backup_enabled = false
+ high_availability {
+ mode = "ZoneRedundant"
+ }
+}
+```
+
+### Create read replicas
+
+Define the read replicas for the primary server.
+
+```terraform
+resource "azurerm_postgresql_flexible_server" "replica" {
+  name                = "replica-server"
+  resource_group_name = azurerm_resource_group.example.name
+  location            = azurerm_resource_group.example.location
+
+  # Read replicas are created from the primary by using the Replica create mode.
+  create_mode      = "Replica"
+  source_server_id = azurerm_postgresql_flexible_server.primary.id
+}
+```
+
+### Configure virtual endpoints
+
+Define the necessary resources to configure virtual endpoints.
+
+```terraform
+resource "azurerm_private_endpoint" "example" {
+ name = "example-private-endpoint"
+ location = azurerm_resource_group.example.location
+ resource_group_name = azurerm_resource_group.example.name
+  subnet_id           = azurerm_subnet.example.id # assumes an existing subnet resource defined elsewhere
+
+ private_service_connection {
+ name = "example-privateserviceconnection"
+ private_connection_resource_id = azurerm_postgresql_flexible_server.primary.id
+ is_manual_connection = false
+ subresource_names = ["postgresqlServer"]
+ }
+}
+```
+
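+In recent versions of the azurerm provider, a dedicated resource for read replica virtual endpoints is also available. The following is a minimal sketch, assuming your provider version includes `azurerm_postgresql_flexible_server_virtual_endpoint`:
+
+```terraform
+# Sketch: pair the primary with its replica behind a read-write virtual endpoint.
+# Assumes a recent azurerm provider version that ships this resource.
+resource "azurerm_postgresql_flexible_server_virtual_endpoint" "example" {
+  name              = "example-virtual-endpoint"
+  source_server_id  = azurerm_postgresql_flexible_server.primary.id
+  replica_server_id = azurerm_postgresql_flexible_server.replica.id
+  type              = "ReadWrite"
+}
+```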
+### Apply the configuration
+
+Initialize Terraform and apply the configuration.
+
+```bash
+terraform init
+terraform apply
+```
+
+Confirm the apply action when prompted. Terraform provisions the resources and configures the virtual endpoints as specified.
++
+For more information about virtual endpoints, see [Create virtual endpoints](how-to-read-replicas-portal.md#create-virtual-endpoints).
+
+## Related content
+
+- [Create virtual endpoints](how-to-read-replicas-portal.md#create-virtual-endpoints)
+- [Read replicas - overview](concepts-read-replicas.md)
+- [Geo-replication](concepts-read-replicas-geo.md)
+- [Promote read replicas](concepts-read-replicas-promote.md)
+- [Create and manage read replicas in the Azure portal](how-to-read-replicas-portal.md)
+- [Cross-region replication with virtual network](concepts-networking.md#replication-across-azure-regions-and-virtual-networks-with-private-networking)
+- [Terraform Azure provider documentation](https://registry.terraform.io/providers/hashicorp/azurerm/latest/docs)
sap Hana Vm Operations Netapp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/hana-vm-operations-netapp.md
Azure NetApp Files provides native NFS shares that can be used for **/hana/share
When considering Azure NetApp Files for the SAP Netweaver and SAP HANA, be aware of the following important considerations:

-- The minimum capacity pool is 4 TiB
-- The minimum volume size is 100 GiB
-- ANF-based NFS shares and the virtual machines that mount those shares must be in the same Azure Virtual Network or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region
-- The selected virtual network must have a subnet, delegated to Azure NetApp Files. **For SAP workload, it is highly recommended to configure a /25 range for the subnet delegated to ANF.**
+- For volume and capacity pool limits, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md).
+- Azure NetApp Files-based NFS shares and the virtual machines that mount those shares must be in the same Azure Virtual Network or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region.
+- The selected virtual network must have a subnet, delegated to Azure NetApp Files. **For SAP workload, it is highly recommended to configure a /25 range for the subnet delegated to Azure NetApp Files.**
- It's important to have the virtual machines deployed in sufficient proximity to the Azure NetApp storage for lower latency as, for example, demanded by SAP HANA for redo log writes.
- Azure NetApp Files meanwhile has functionality to deploy NFS volumes into specific Azure Availability Zones. Such zonal proximity is going to be sufficient in the majority of cases to achieve a latency of less than 1 millisecond. The functionality is in public preview and described in the article [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). This functionality doesn't require any interactive process with Microsoft to achieve proximity between your VM and the NFS volumes you allocate.
- To achieve the most optimal proximity, the functionality of [Application Volume Groups](../../azure-netapp-files/application-volume-group-introduction.md) is available. This functionality isn't only looking for the most optimal proximity, but for the most optimal placement of the NFS volumes, so that HANA data and redo log volumes are handled by different controllers. The disadvantage is that this method needs some interactive process with Microsoft to pin your VMs.
-- Make sure the latency from the database server to the ANF volume is measured and below 1 millisecond
+- Make sure the latency from the database server to the Azure NetApp Files volume is measured and below 1 millisecond
- The throughput of an Azure NetApp volume is a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). When sizing the HANA Azure NetApp volumes, make sure the resulting throughput meets the HANA system requirements. Alternatively, consider using a [manual QoS capacity pool](../../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type), where volume capacity and throughput can be configured and scaled independently (SAP HANA specific examples are in [this document](../../azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md#manual-qos-type)); a Terraform sketch of this manual QoS option follows this list.
- Try to “consolidate” volumes to achieve more performance in a larger volume: for example, use one volume for /sapmnt, /usr/sap/trans, … if possible.
- Azure NetApp Files offers [export policy](../../azure-netapp-files/azure-netapp-files-configure-export-policy.md): you can control the allowed clients, the access type (Read&Write, Read Only, etc.).
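As referenced in the list above, here's a minimal Terraform sketch of a manual QoS capacity pool where a volume's capacity and throughput are set independently. The NetApp account, delegated subnet, and the sizes shown are illustrative assumptions, not sizing guidance.

```terraform
# Sketch: manual QoS pool; volume quota and throughput configured independently.
# The account, subnet, and sizes are assumptions - size against your HANA KPIs.
resource "azurerm_netapp_pool" "sap" {
  name                = "sap-pool"
  account_name        = azurerm_netapp_account.example.name
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  service_level       = "Premium"
  size_in_tb          = 8
  qos_type            = "Manual"
}

resource "azurerm_netapp_volume" "hana_log" {
  name                = "hana-log"
  location            = azurerm_resource_group.example.location
  resource_group_name = azurerm_resource_group.example.name
  account_name        = azurerm_netapp_account.example.name
  pool_name           = azurerm_netapp_pool.sap.name
  volume_path         = "hana-log"
  service_level       = "Premium"
  subnet_id           = azurerm_subnet.anf.id # subnet delegated to Microsoft.NetApp/volumes
  protocols           = ["NFSv4.1"]
  storage_quota_in_gb = 512
  throughput_in_mibps = 250
}
```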
When considering Azure NetApp Files for the SAP Netweaver and SAP HANA, be aware
> If there's a mismatch between User ID for <b>sid</b>adm and the Group ID for `sapsys` between the virtual machine and the Azure NetApp configuration, the permissions for files on Azure NetApp volumes, mounted to the VM, would be displayed as `nobody`. Make sure to specify the correct User ID for <b>sid</b>adm and the Group ID for `sapsys`, when [on-boarding a new system](https://forms.office.com/Pages/ResponsePage.aspx?id=v4j5cvGGr0GRqy180BHbRxjSlHBUxkJBjmARn57skvdUQlJaV0ZBOE1PUkhOVk40WjZZQVJXRzI2RC4u) to Azure NetApp Files. ## NCONNECT mount option
-Nconnect is a mount option for NFS volumes hosted on ANF that allows the NFS client to open multiple sessions against a single NFS volume. Using nconnect with a value of larger than 1 also triggers the NFS client to use more than one RPC session on the client side (in the guest OS) to handle the traffic between the guest OS and the mounted NFS volumes. The usage of multiple sessions handling traffic of one NFS volume, but also the usage of multiple RPC sessions can address performance and throughput scenarios like:
+Nconnect is a mount option for NFS volumes hosted on Azure NetApp Files that allows the NFS client to open multiple sessions against a single NFS volume. Using nconnect with a value larger than 1 also triggers the NFS client to use more than one RPC session on the client side (in the guest OS) to handle the traffic between the guest OS and the mounted NFS volumes. The usage of multiple sessions handling traffic of one NFS volume, and also the usage of multiple RPC sessions, can address performance and throughput scenarios like:
-- Mounting of multiple ANF hosted NFS volumes with different [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md#supported-service-levels) in one VM-- The maximum write throughput for a volume and a single Linux session is between 1.2 and 1.4 GB/s. Having multiple sessions against one ANF hosted NFS volume can increase the throughput
+- Mounting multiple Azure NetApp Files-hosted NFS volumes with different [service levels](../../azure-netapp-files/azure-netapp-files-service-levels.md#supported-service-levels) in one VM
+- The maximum write throughput for a volume and a single Linux session is between 1.2 and 1.4 GB/s. Having multiple sessions against one Azure NetApp Files-hosted NFS volume can increase the throughput
For Linux OS releases that support nconnect as a mount option and some important configuration considerations of nconnect, especially with different NFS server endpoints, read the document [Linux NFS mount options best practices for Azure NetApp Files](../../azure-netapp-files/performance-linux-mount-options.md).
As you design the infrastructure for SAP in Azure you should be aware of some mi
| Data Volume Write | 250 MB/sec | 4 TB | 2 TB |
| Data Volume Read | 400 MB/sec | 6.3 TB | 3.2 TB |
-Since all three KPIs are demanded, the **/han#manual-qos-type))
+Since all three KPIs are demanded, the **/han#manual-qos-type))
-For HANA systems, which aren't requiring high bandwidth, the ANF volume throughput can be lowered by either a smaller volume size or, using manual QoS, by adjusting the throughput directly. And in case a HANA system requires more throughput the volume could be adapted by resizing the capacity online. No KPIs are defined for backup volumes. However the backup volume throughput is essential for a well performing environment. Log ΓÇô and Data volume performance must be designed to the customer expectations.
+For HANA systems that don't require high bandwidth, the Azure NetApp Files volume throughput can be lowered by either a smaller volume size or, using manual QoS, by adjusting the throughput directly. And in case a HANA system requires more throughput, the volume could be adapted by resizing the capacity online. No KPIs are defined for backup volumes. However, the backup volume throughput is essential for a well performing environment. Log and data volume performance must be designed to the customer expectations.
> [!IMPORTANT]
-> Independent of the capacity you deploy on a single NFS volume, the throughput, is expected to plateau in the range of 1.2-1.4 GB/sec bandwidth utilized by a consumer in a single session. This has to do with the underlying architecture of the ANF offer and related Linux session limits around NFS. The performance and throughput numbers as documented in the article [Performance benchmark test results for Azure NetApp Files](../../azure-netapp-files/performance-benchmarks-linux.md) were conducted against one shared NFS volume with multiple client VMs and as a result with multiple sessions. That scenario is different to the scenario we measure in SAP. Where we measure throughput from a single VM against an NFS volume. Hosted on ANF.
+> Independent of the capacity you deploy on a single NFS volume, the throughput is expected to plateau in the range of 1.2-1.4 GB/sec bandwidth utilized by a consumer in a single session. This has to do with the underlying architecture of the Azure NetApp Files offer and related Linux session limits around NFS. The performance and throughput numbers as documented in the article [Performance benchmark test results for Azure NetApp Files](../../azure-netapp-files/performance-benchmarks-linux.md) were conducted against one shared NFS volume with multiple client VMs and as a result with multiple sessions. That scenario is different to the scenario we measure in SAP where we measure throughput from a single VM against an NFS volume hosted on Azure NetApp Files.
To meet the SAP minimum throughput requirements for data and log, and according to the guidelines for **/hana/shared**, the recommended sizes would look like:
To meet the SAP minimum throughput requirements for data and log, and according
For all volumes, NFS v4.1 is highly recommended. Review carefully the [considerations for sizing **/hana/shared**](#considerations-for-the-hana-shared-file-system), as an appropriately sized **/hana/shared** volume contributes to the system's stability.
-The sizes for the backup volumes are estimations. Exact requirements need to be defined based on workload and operation processes. For backups, you could consolidate many volumes for different SAP HANA instances to one (or two) larger volumes, which could have a lower service level of ANF.
+The sizes for the backup volumes are estimations. Exact requirements need to be defined based on workload and operation processes. For backups, you could consolidate many volumes for different SAP HANA instances to one (or two) larger volumes, which could have a lower service level of Azure NetApp Files.
> [!NOTE] > The Azure NetApp Files, sizing recommendations stated in this document are targeting the minimum requirements SAP expresses towards their infrastructure providers. In real customer deployments and workload scenarios, that may not be enough. Use these recommendations as a starting point and adapt, based on the requirements of your specific workload.
-Therefore you could consider to deploy similar throughput for the ANF volumes as listed for Ultra disk storage already. Also consider the sizes for the sizes listed for the volumes for the different VM SKUs as done in the Ultra disk tables already.
+Therefore you could consider deploying similar throughput for the Azure NetApp Files volumes as listed for Ultra disk storage already. Also consider the sizes listed for the volumes for the different VM SKUs as done in the Ultra disk tables already.
> [!TIP] > You can re-size Azure NetApp Files volumes dynamically, without the need to `unmount` the volumes, stop the virtual machines or stop SAP HANA. That allows flexibility to meet your application both expected and unforeseen throughput demands.
-Documentation on how to deploy an SAP HANA scale-out configuration with standby node using ANF based NFS v4.1 volumes is published in [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md).
+Documentation on how to deploy an SAP HANA scale-out configuration with standby node using Azure NetApp Files based NFS v4.1 volumes is published in [SAP HANA scale-out with standby node on Azure VMs with Azure NetApp Files on SUSE Linux Enterprise Server](./sap-hana-scale-out-standby-netapp-files-suse.md).
## Linux Kernel Settings
-To successfully deploy SAP HANA on ANF, Linux kernel settings need to be implemented according to SAP note [3024346](https://launchpad.support.sap.com/#/notes/3024346).
+To successfully deploy SAP HANA on Azure NetApp Files, Linux kernel settings need to be implemented according to SAP note [3024346](https://launchpad.support.sap.com/#/notes/3024346).
For systems using High Availability (HA) with Pacemaker and Azure Load Balancer, the following settings need to be implemented in the file /etc/sysctl.d/91-NetApp-HANA.conf
net.ipv4.tcp_sack = 1
To get a zonal proximity of your NFS volumes and VMs, you can follow the instructions as described in [Manage availability zone volume placement for Azure NetApp Files](../../azure-netapp-files/manage-availability-zone-volume-placement.md). With this method, the VMs and the NFS volumes are going to be in the same Azure Availability Zone. In most of the Azure regions, this type of proximity should be sufficient to achieve less than 1 millisecond latency for the smaller redo log writes for SAP HANA. This method doesn't require any interactive work with Microsoft to place and pin VMs into a specific datacenter. As a result, you're flexible to change VM sizes and families within all the VM types and families offered in the Availability Zone you deployed. So, you can react flexibly to changing conditions or move faster to more cost-efficient VM sizes or families. We recommend this method for non-production systems and production systems that can work with redo log latencies that are closer to 1 millisecond. **The functionality is currently in public preview.**

## Deployment through Azure NetApp Files application volume group for SAP HANA (AVG)
-To deploy ANF volumes with proximity to your VM, a new functionality called Azure NetApp Files application volume group for SAP HANA (AVG) got developed. There's a series of articles that document the functionality. Best is to start with the article [Understand Azure NetApp Files application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md). As you read the articles, it becomes clear that the usage of AVGs involves the usage of Azure proximity placement groups as well. Proximity placement groups are used by the new functionality to tie into with the volumes that are getting created. To ensure that over the lifetime of the HANA system, the VMs aren't going to be moved away from the ANF volumes, we recommend using a combination of Avset/ PPG for each of the zones you deploy into.
+To deploy Azure NetApp Files volumes with proximity to your VM, a new functionality called Azure NetApp Files application volume group for SAP HANA (AVG) was developed. There's a series of articles that document the functionality. It's best to start with the article [Understand Azure NetApp Files application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md). As you read the articles, it becomes clear that the usage of AVGs involves the usage of Azure proximity placement groups as well. Proximity placement groups are used by the new functionality to tie in with the volumes that are getting created. To ensure that over the lifetime of the HANA system, the VMs aren't going to be moved away from the Azure NetApp Files volumes, we recommend using a combination of AvSet/PPG for each of the zones you deploy into.
The order of deployment would look like:

- Using the [form](https://aka.ms/HANAPINNING), you need to request a pinning of the empty AvSet to compute hardware to ensure that VMs aren't going to move
The order of deployment would look like:
The proximity placement group configuration to use AVGs in an optimal way would look like:
-![ANF application volume group and ppg architecture](media/hana-vm-operations-netapp/avg-ppg-architecture.png)
+![Diagram of Azure NetApp Files application volume group and proximity placement group architecture.](media/hana-vm-operations-netapp/avg-ppg-architecture.png)
-The diagram shows that you're going to use an Azure proximity placement group for the DBMS layer. So, that it can get used together with AVGs. It's best to just include only the VMs that run the HANA instances in the proximity placement group. The proximity placement group is necessary, even if only one VM with a single HANA instance is used, for the AVG to identify the closest proximity of the ANF hardware. And to allocate the NFS volume on ANF as close as possible to the VM(s) that are using the NFS volumes.
+The diagram shows that you're going to use an Azure proximity placement group for the DBMS layer, so that it can get used together with AVGs. It's best to include only the VMs that run the HANA instances in the proximity placement group. The proximity placement group is necessary, even if only one VM with a single HANA instance is used, for the AVG to identify the closest proximity of the Azure NetApp Files hardware, and to allocate the NFS volume on Azure NetApp Files as close as possible to the VM(s) that are using the NFS volumes.
-This method generates the most optimal results as it relates to low latency. Not only by getting the NFS volumes and VMs as close together as possible. But considerations of placing the data and redo log volumes across different controllers on the NetApp backend are taken into account as well. Though, the disadvantage is that your VM deployment is pinned down to one datacenter. With that you're losing flexibilities in changing VM types and families. As a result, you should limit this method to the systems that absolutely require such low storage latency. For all other systems, you should attempt the deployment with a traditional zonal deployment of the VM and ANF. In most cases this is sufficient in terms of low latency. This also ensures a easy maintenance and administration of the VM and ANF.
+This method generates the most optimal results as it relates to low latency. Not only by getting the NFS volumes and VMs as close together as possible. But considerations of placing the data and redo log volumes across different controllers on the NetApp backend are taken into account as well. Though, the disadvantage is that your VM deployment is pinned down to one datacenter. With that you're losing flexibilities in changing VM types and families. As a result, you should limit this method to the systems that absolutely require such low storage latency. For all other systems, you should attempt the deployment with a traditional zonal deployment of the VM and Azure NetApp Files. In most cases this is sufficient in terms of low latency. This also ensures easy maintenance and administration of the VM and Azure NetApp Files.
## Availability

ANF system updates and upgrades are applied without impacting the customer environment. The defined [SLA is 99.99%](https://azure.microsoft.com/support/legal/sla/netapp/).
sap High Availability Guide Suse Netapp Files https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/high-availability-guide-suse-netapp-files.md
In this example, we used Azure NetApp Files for all SAP Netweaver file systems t
When considering Azure NetApp Files for the SAP Netweaver on SUSE High Availability architecture, be aware of the following important considerations:
-* The minimum capacity pool is 4 TiB. The capacity pool size can be increased in 1-TiB increments.
-* The minimum volume is 100 GiB
+* For volume and capacity pool limits, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md).
* Azure NetApp Files and all virtual machines, where Azure NetApp Files volumes will be mounted, must be in the same Azure Virtual Network or in [peered virtual networks](../../virtual-network/virtual-network-peering-overview.md) in the same region. Azure NetApp Files access over VNET peering in the same region is supported now. Azure NetApp Files access over global peering is not yet supported.
* The selected virtual network must have a subnet, delegated to Azure NetApp Files.
* The throughput and performance characteristics of an Azure NetApp Files volume are a function of the volume quota and service level, as documented in [Service level for Azure NetApp Files](../../azure-netapp-files/azure-netapp-files-service-levels.md). While sizing the SAP Azure NetApp volumes, make sure that the resulting throughput meets the application requirements.
sap Planning Guide Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/planning-guide-storage.md
Remark about the units used throughout this article. The public cloud vendors mo
## Microsoft Azure Storage resiliency
-Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, Premium SSD v2, and Ultra disk keeps the base VHD (with OS) and VM attached data disks or VHDs (Virtual Hard Disk) in three copies on three different storage nodes. Failing over to another replica and seeding of a new replica if there's a storage node failure, is transparent. As a result of this redundancy, it's **NOT** required to use any kind of storage redundancy layer across multiple Azure disks. This fact is called Local Redundant Storage (LRS). LRS is default for these types of storage in Azure. [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) provides sufficient redundancy to achieve the same SLAs (Serive Level Agreements) as other native Azure storage.
+Microsoft Azure storage of Standard HDD, Standard SSD, Azure premium storage, Premium SSD v2, and Ultra disk keeps the base VHD (with OS) and VM attached data disks or VHDs (Virtual Hard Disk) in three copies on three different storage nodes. Failing over to another replica and seeding of a new replica if there's a storage node failure, is transparent. As a result of this redundancy, it's **NOT** required to use any kind of storage redundancy layer across multiple Azure disks. This fact is called Local Redundant Storage (LRS). LRS is default for these types of storage in Azure. [Azure NetApp Files](https://azure.microsoft.com/services/netapp/) provides sufficient redundancy to achieve the same SLAs (Service Level Agreements) as other native Azure storage.
There are several more redundancy methods, which are all described in the article [Azure Storage replication](../../storage/common/storage-redundancy.md?toc=%2fazure%2fstorage%2fqueues%2ftoc.json) that applies to some of the different storage types Azure has to offer.
The capability matrix for SAP workload looks like:
| Backup storage | Suitable | For short term storage of backups |
| Shares/shared disk | Not available | Needs Azure Premium Files or third party |
| Resiliency | LRS | No GRS or ZRS available for disks |
-| Latency | Low-to medium | - |
+| Latency | Low to medium | - |
| IOPS SLA | Yes | - |
| IOPS linear to capacity | Semi-linear in brackets | [Managed Disk pricing](https://azure.microsoft.com/pricing/details/managed-disks/) |
| Maximum IOPS per disk | 20,000 [dependent on disk size](https://azure.microsoft.com/pricing/details/managed-disks/) | Also consider [VM limits](../../virtual-machines/sizes.md) |
For information about service levels, see [Service levels for Azure NetApp Files
For optimal results, use [Application volume group for SAP HANA](../../azure-netapp-files/application-volume-group-introduction.md) to deploy the volumes. Application volume group places volumes in optimal locations in the Azure infrastructure, using affinity and anti-affinity rules to reduce contention and to allow for the best throughput and lowest latency.

> [!NOTE]
-> Capacity pools are a basic provisioning unit for Azure NetApp Files. Capacity pools are offered beginning at 1 TiB in size; you can expand a capacity pool in 1-TiB increments. Capacity pools are the parent unit for volumes; the smallest volume size is 100 GiB. For pricing, see [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/)
+> Capacity pools are a basic provisioning unit for Azure NetApp Files. Capacity pools are offered beginning at 1 TiB in size; you can expand a capacity pool in 1-TiB increments. Capacity pools are the parent unit for volumes. For sizing information, see [Azure NetApp Files resource limits](../../azure-netapp-files/azure-netapp-files-resource-limits.md). For pricing, see [Azure NetApp Files Pricing](https://azure.microsoft.com/pricing/details/netapp/).
Azure NetApp Files is supported for several SAP workload scenarios:
search Search Data Sources Gallery https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-data-sources-gallery.md
Secure enterprise search connector for reliably indexing content from Symantec E
By [Accenture](https://www.accenture.com)
-The Twitter connector crawls content from any twitter account. It performs full and incremental crawls, supports authentication using Twitter user, consumer key and consumer secret key.
+The Twitter connector crawls content from any X account. It performs full and incremental crawls, and supports authentication using an X user, consumer key, and consumer secret key.
[More details](https://contentanalytics.digital.accenture.com/display/aspire40/Twitter+Connector)
security Encryption Models https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/security/fundamentals/encryption-models.md
The Azure services that support each encryption model:
| Data Lake Storage Gen2 | Yes | Yes, including Managed HSM | Yes |
| Avere vFXT | Yes | - | - |
| Azure Cache for Redis | Yes | Yes\*\*\*, including Managed HSM | - |
-| Azure NetApp Files | Yes | Yes | Yes |
+| Azure NetApp Files | Yes | Yes, including Managed HSM | Yes |
| Archive Storage | Yes | Yes | - |
| StorSimple | Yes | Yes | Yes |
| Azure Backup | Yes | Yes, including Managed HSM | Yes |
| Data Box | Yes | - | Yes |
-| Data Box Edge | Yes | Yes | - |
+| Azure Stack Edge | Yes | Yes | - |
| **Other** | | | |
| Azure Data Manager for Energy | Yes | Yes | Yes |
The Azure services that support each encryption model:
## Related content
-- [encryption is used in Azure](encryption-overview.md)
-- [double encryption](double-encryption.md)
+- [How encryption is used in Azure](encryption-overview.md)
+- [Double encryption](double-encryption.md)
service-fabric How To Managed Cluster Large Virtual Machine Scale Sets https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-large-virtual-machine-scale-sets.md
Previously updated : 07/11/2022 Last updated : 08/09/2024 # Service Fabric managed cluster node type scaling
-Each node type in a Service Fabric managed cluster is backed by a virtual machine scale set. To allow managed cluster node types to create [large virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md) a property `multiplePlacementGroups` has been added to node type definition. By default, managed cluster node types set this property to false to keep fault and upgrade domains consistent within a placement group, but this setting limits a node type from scaling beyond 100 VMs. To help decide whether your application can make effective use of large scale sets, see [this list of requirements](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md#checklist-for-using-large-scale-sets).
+A virtual machine scale set backs each node type in a Service Fabric managed cluster. To allow managed cluster node types to create [large virtual machine scale sets](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md), a property `multiplePlacementGroups` has been added to the node type definition. By default, managed cluster node types set this property to false to keep fault and upgrade domains consistent within a placement group, but this setting limits a node type from scaling beyond 100 VMs. To help decide whether your application can make effective use of large scale sets, see [this list of requirements and limitations](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md#checklist-for-using-large-scale-sets).
-Since the Azure Service Fabric managed cluster resource provider orchestrates scaling and uses managed disks for data, we are able to support large scale sets for both stateful and stateless secondary node types.
+Since the Azure Service Fabric managed cluster resource provider orchestrates scaling and uses managed disks for data, we're able to support large scale sets for both stateful and stateless secondary node types.
> [!NOTE]
> This property cannot be modified after a node type is deployed.
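For illustration only, a hedged sketch of the relevant node type fragment (property names follow the managed cluster node type schema; all values are hypothetical) might look like:

```json
"properties": {
  "multiplePlacementGroups": true,
  "isPrimary": false,
  "vmSize": "Standard_D2s_v3",
  "vmInstanceCount": 300
}
```

Because the property is fixed at creation, set it when you first define the node type.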
service-fabric How To Managed Cluster Stateless Node Type https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-fabric/how-to-managed-cluster-stateless-node-type.md
Previously updated : 07/11/2022 Last updated : 08/09/2024 # Deploy a Service Fabric managed cluster with stateless node types
Service Fabric node types come with an inherent assumption that at some point of
* Primary node types can't be configured to be stateless. * Stateless node types require an API version of **2021-05-01** or later.
-* This will automatically set the **multipleplacementgroup** property to **true** which you can [learn more about here](how-to-managed-cluster-large-virtual-machine-scale-sets.md).
-* This enables support for up to 1000 nodes for the given node type.
+* This will automatically set the **multiplePlacementGroups** property to **true**, which you can [learn more about here](how-to-managed-cluster-large-virtual-machine-scale-sets.md). The underlying [virtual machine scale set requirements and limitations](../virtual-machine-scale-sets/virtual-machine-scale-sets-placement-groups.md#checklist-for-using-large-scale-sets) for enabling this property apply to Service Fabric managed clusters.
+* This enables support for up to 1,000 nodes for the given node type.
* Stateless node types can utilize a VM SKU temporary disk. ## Enabling stateless node types in a Service Fabric managed cluster
-To set one or more node types as stateless in a node type resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type, which is not stateless in the cluster.
+To set one or more node types as stateless in a node type resource, set the **isStateless** property to **true**. When deploying a Service Fabric cluster with stateless node types, the cluster must include at least one primary node type that isn't stateless.
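As a rough sketch (not a complete template; values are illustrative), the relevant fragment of a secondary node type resource could look like:

```json
"properties": {
  "isStateless": true,
  "isPrimary": false,
  "vmSize": "Standard_D2s_v3",
  "vmInstanceCount": 5
}
```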
Sample templates are available: [Service Fabric Stateless Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates)
Sample templates are available: [Service Fabric Stateless Node types template](h
## Enabling stateless node types using Spot VMs in a Service Fabric managed cluster (Preview)
-[Azure Spot Virtual Machines on scale sets](../virtual-machine-scale-sets/use-spot.md) enables users to take advantage of unused compute capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure will evict these Azure Spot Virtual Machine instances. Therefore, Spot VM node types are great for workloads that can handle interruptions and don't need to be completed within a specific time frame. Recommended workloads include development, testing, batch processing jobs, big data, or other large-scale stateless scenarios.
+[Azure Spot Virtual Machines on scale sets](../virtual-machine-scale-sets/use-spot.md) enables users to take advantage of unused compute capacity at a significant cost savings. At any point in time when Azure needs the capacity back, the Azure infrastructure evicts these Azure Spot Virtual Machine instances. Therefore, Spot VM node types are great for workloads that can handle interruptions and don't need to be completed within a specific time frame. Recommended workloads include development, testing, batch processing jobs, big data, or other large-scale stateless scenarios.
-To set one or more stateless node types to use Spot VM, set both **isStateless** and **IsSpotVM** properties to true. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type, which is not stateless in the cluster. Stateless node types configured to use Spot VMs have Eviction Policy set to 'Delete' by default. Customers can configure the 'evictionPolicy' to be 'Delete' or 'Deallocate' but this can only be defined at the time of nodetype creation.
+To set one or more stateless node types to use Spot VMs, set both the **isStateless** and **isSpotVM** properties to true. When deploying a Service Fabric cluster with stateless node types, it's required to have at least one primary node type that isn't stateless in the cluster. Stateless node types configured to use Spot VMs have the eviction policy set to 'Delete' by default. You can configure the **evictionPolicy** to be 'Delete' or 'Deallocate', but this can only be defined at the time of node type creation.
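A hedged sketch of the relevant node type properties (illustrative values; 'Deallocate' shown here to override the 'Delete' default) might be:

```json
"properties": {
  "isStateless": true,
  "isSpotVM": true,
  "evictionPolicy": "Deallocate",
  "isPrimary": false,
  "vmSize": "Standard_D2s_v3",
  "vmInstanceCount": 5
}
```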
Sample templates are available: [Service Fabric Spot Node types template](https://github.com/Azure-Samples/service-fabric-cluster-templates/tree/master/SF-Managed-Standard-SKU-2-NT-Spot)
Sample templates are available: [Service Fabric Spot Node types template](https:
## Enabling Spot VMs with Try & Restore This configuration enables the platform to automatically try to restore the evicted Spot VMs. Refer to the virtual machine scale set doc for [details](../virtual-machine-scale-sets/use-spot.md#try--restore).
-This configuration can only be enabled on new Spot nodetypes by specifying the **spotRestoreTimeout**, which is an ISO 8601 time duration having a value between 30 & 2880 mins. The platform will try to restore the VMs for this duration, after eviction.
+This configuration can only be enabled on new Spot node types by specifying the **spotRestoreTimeout**, which is an ISO 8601 time duration with a value between 30 and 2880 minutes. The platform tries to restore the VMs for this duration after eviction.
```json {
This configuration can only be enabled on new Spot nodetypes by specifying the *
``` ## Configure stateless node types for zone resiliency
-To configure a Stateless node type for zone resiliency you must [configure managed cluster zone spanning](how-to-managed-cluster-availability-zones.md) at the cluster level.
+To configure a Stateless node type for zone resiliency, you must [configure managed cluster zone spanning](how-to-managed-cluster-availability-zones.md) at the cluster level.
> [!NOTE]
> The zonal resiliency property must be set at the cluster level, and this property can't be changed in place.
To configure a Stateless node type for zone resiliency you must [configure manag
## Temporary disk support

Stateless node types can be configured to use a temporary disk as the data disk instead of a Managed Disk. Using a temporary disk can reduce costs for stateless workloads. To configure a stateless node type to use the temporary disk, set the **useTempDataDisk** property to **true**.
-* Temporary disk size must be 32GB or more. The size of the temporary disk depends on the VM size.
-* The temporary disk is not encrypted by server side encryption unless you enable encryption at host.
+* Temporary disk size must be 32 GB or more. The size of the temporary disk depends on the VM size.
+* The temporary disk isn't encrypted by server side encryption unless you enable encryption at host.
* The Service Fabric managed cluster resource apiVersion should be **2022-01-01** or later. ```json
spring-apps How To Enterprise Application Configuration Service https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-enterprise-application-configuration-service.md
This article shows you how to use Application Configuration Service for VMware Tanzu with the Azure Spring Apps Enterprise plan.
-[Application Configuration Service for VMware Tanzu](https://docs.vmware.com/en/Application-Configuration-Service-for-VMware-Tanzu/2.0/acs/GUID-overview.html) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native `ConfigMap` resources that are populated from properties defined in one or more Git repositories.
+[Application Configuration Service for VMware Tanzu](https://docs.vmware.com/en/Application-Configuration-Service-for-VMware-Tanzu/2.3/acs/GUID-overview.html) is one of the commercial VMware Tanzu components. It enables the management of Kubernetes-native `ConfigMap` resources that are populated from properties defined in one or more Git repositories.
With Application Configuration Service, you have a central place to manage external properties for applications across all environments. To understand the differences from Spring Cloud Config Server in the Basic and Standard plans, see the [Use Application Configuration Service for external configuration](./how-to-migrate-standard-tier-to-enterprise-tier.md#use-application-configuration-service-for-external-configuration) section of [Migrate an Azure Spring Apps Basic or Standard plan instance to the Enterprise plan](./how-to-migrate-standard-tier-to-enterprise-tier.md).
spring-apps How To Prepare App Deployment https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/spring-apps/enterprise/how-to-prepare-app-deployment.md
public class GatewayApplication {
#### [Enterprise plan](#tab/enterprise-plan)
-To enable distributed configuration in the Enterprise plan, use [Application Configuration Service for VMware Tanzu](https://docs.vmware.com/en/Application-Configuration-Service-for-VMware-Tanzu/2.0/acs/GUID-overview.html), which is one of the proprietary VMware Tanzu components. Application Configuration Service for Tanzu is Kubernetes-native, and different from Spring Cloud Config Server. Application Configuration Service for Tanzu enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories.
+To enable distributed configuration in the Enterprise plan, use [Application Configuration Service for VMware Tanzu](https://docs.vmware.com/en/Application-Configuration-Service-for-VMware-Tanzu/2.3/acs/GUID-overview.html), which is one of the proprietary VMware Tanzu components. Application Configuration Service for Tanzu is Kubernetes-native, and different from Spring Cloud Config Server. Application Configuration Service for Tanzu enables the management of Kubernetes-native ConfigMap resources that are populated from properties defined in one or more Git repositories.
In the Enterprise plan, there's no Spring Cloud Config Server, but you can use Application Configuration Service for Tanzu to manage centralized configurations. For more information, see [Use Application Configuration Service for Tanzu](how-to-enterprise-application-configuration-service.md).
static-web-apps Authentication Custom https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/authentication-custom.md
Next, use the following sample to configure the provider in the [configuration f
For more information on how to configure Google as an authentication provider, see the [App Service Authentication/Authorization documentation](../app-service/configure-authentication-provider-google.md).
-# [X (Twitter)](#tab/twitter)
+# [X](#tab/x)
To create the registration, begin by creating the following [application settings](application-settings.yml):

| Setting Name | Value |
| -- | -- |
-| `X_CONSUMER_KEY` | The X (Twitter) consumer key. |
-| `X_CONSUMER_SECRET_APP_SETTING_NAME` | The name of the application setting that holds the X (Twitter) consumer secret. |
+| `X_CONSUMER_KEY` | The X consumer key. |
+| `X_CONSUMER_SECRET_APP_SETTING_NAME` | The name of the application setting that holds the X consumer secret. |
Next, use the following sample to configure the provider in the [configuration file](configuration.md).
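A minimal sketch of that provider section, assuming the `consumerKeySettingName` and `consumerSecretSettingName` registration properties of the custom-authentication schema and the setting names from the table above, might look like:

```json
{
  "auth": {
    "identityProviders": {
      "twitter": {
        "registration": {
          "consumerKeySettingName": "X_CONSUMER_KEY",
          "consumerSecretSettingName": "X_CONSUMER_SECRET_APP_SETTING_NAME"
        }
      }
    }
  }
}
```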
Next, use the following sample to configure the provider in the [configuration f
} ```
-For more information on how to configure Twitter as an authentication provider, see the [App Service Authentication/Authorization documentation](../app-service/configure-authentication-provider-twitter.md).
+For more information on how to configure X as an authentication provider, see the [App Service Authentication/Authorization documentation](../app-service/configure-authentication-provider-twitter.md).
# [OpenID Connect](#tab/openid-connect)
Invitations are specific to individual authorization-providers, so consider the
| - | - |
| Microsoft Entra ID | email address |
| GitHub | username |
-| Twitter | username |
+| X | username |
Use the following steps to create an invitation.
Use the following steps to create an invitation.
3. Select **Invite**. 4. Select an _Authorization provider_ from the list of options. 5. Add either the username or email address of the recipient in the _Invitee details_ box.
- - For GitHub and Twitter, enter the username. For all others, enter the recipient's email address.
+ - For GitHub and X, enter the username. For all others, enter the recipient's email address.
6. Select the domain of your static site from the _Domain_ drop-down menu. - The domain you select is the domain that appears in the invitation. If you have a custom domain associated with your site, choose the custom domain. 7. Add a comma-separated list of role names in the _Role_ box.
static-web-apps Configuration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/static-web-apps/configuration.md
By default, when the `trailingSlash` configuration is omitted, Static Web Apps a
"rewrite": "/.auth/login/github" }, {
- "route": "/.auth/login/twitter",
+ "route": "/.auth/login/x",
"statusCode": 404 }, {
Based on the above configuration, review the following scenarios.
| _/api/admin_ | `GET` requests from authenticated users in the _registeredusers_ role are sent to the API. Authenticated users not in the _registeredusers_ role and unauthenticated users are served a `401` error.<br/><br/>`POST`, `PUT`, `PATCH`, and `DELETE` requests from authenticated users in the _administrator_ role are sent to the API. Authenticated users not in the _administrator_ role and unauthenticated users are served a `401` error. |
| _/customers/contoso_ | Authenticated users who belong to either the _administrator_ or _customers_contoso_ roles are served the _/customers/contoso/index.html_ file. Authenticated users not in the _administrator_ or _customers_contoso_ roles are served a `403` error<sup>1</sup>. Unauthenticated users are redirected to _/login_. |
| _/login_ | Unauthenticated users are challenged to authenticate with GitHub. |
-| _/.auth/login/twitter_ | Since the route rule disables Twitter (X) authorization, a `404` error is returned. This error then falls back to serving _/https://docsupdatetracker.net/index.html_ with a `200` status code. |
+| _/.auth/login/x_ | Since the route rule disables X authorization, a `404` error is returned. This error then falls back to serving _/index.html_ with a `200` status code. |
| _/logout_ | Users are logged out of any authentication provider. |
| _/calendar/2021/01_ | The browser is served the _/calendar.html_ file. |
| _/specials_ | The browser is permanently redirected to _/deals_. |
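Pulling these rules together, a hypothetical excerpt of the `routes` array consistent with the scenarios above (paths and status codes are illustrative) might be:

```json
{
  "routes": [
    { "route": "/login", "rewrite": "/.auth/login/github" },
    { "route": "/.auth/login/x", "statusCode": 404 },
    { "route": "/logout", "redirect": "/.auth/logout" },
    { "route": "/specials", "redirect": "/deals", "statusCode": 301 }
  ]
}
```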
storage Nfs Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/nfs-comparison.md
For more general comparisons, see [this article](storage-introduction.md) to com
|Available protocols |NFSv3<br></br>REST<br></br>Data Lake Storage Gen2 |SMB<br><br>NFSv4.1<br></br> (No interoperability between either protocol) |NFSv3 and NFSv4.1<br></br>SMB<br></br>Dual protocol (SMB and NFSv3, SMB and NFSv4.1) |
|Key features | Integrated with HPC cache for low latency workloads. <br></br> Integrated management, including lifecycle, immutable blobs, data failover, and metadata index. | Zonally redundant for high availability. <br></br> Consistent single-digit millisecond latency. <br></br>Predictable performance and cost that scales with capacity. |Extremely low latency (as low as sub-ms).<br></br>Rich ONTAP management capabilities such as snapshots, backup, cross-region replication, and cross-zone replication.<br></br>Consistent hybrid cloud experience. |
|Performance (Per volume) |Up to 20,000 IOPS, up to 15 GiB/s throughput. |Up to 100,000 IOPS, up to 10 GiB/s throughput. |Up to 460,000 IOPS, up to 4.5 GiB/s throughput per regular volume, up to 10 GiB/s throughput per large volume. |
-|Scale | Up to 5 PiB for a single volume. <br></br> Up to 190.7 TiB for a single blob.<br></br>No minimum capacity requirements. |Up to 100 TiB for a single file share.<br></br>Up to 4 TiB for a single file.<br></br>100 GiB min capacity. |Up to 100 TiB for a single regular volume, up to 500 TiB for a large volume.<br></br>Up to 16 TiB for a single file.<br></br>Consistent hybrid cloud experience. |
+|Scale | Up to 5 PiB for a single volume. <br></br> Up to 190.7 TiB for a single blob.<br></br>No minimum capacity requirements. |Up to 100 TiB for a single file share.<br></br>Up to 4 TiB for a single file.<br></br>50 GiB min capacity. |Up to 100 TiB for a single regular volume, up to 2 PiB for a large volume.<br></br>Up to 16 TiB for a single file.<br></br>Consistent hybrid cloud experience. |
|Pricing |[Azure Blob Storage pricing](https://azure.microsoft.com/pricing/details/storage/blobs/) |[Azure Files pricing](https://azure.microsoft.com/pricing/details/storage/files/) |[Azure NetApp Files pricing](https://azure.microsoft.com/pricing/details/netapp/) |

## Next steps
storage Storage Files Netapp Comparison https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/files/storage-files-netapp-comparison.md
Most workloads that require cloud file storage work well on either Azure Files o
| Category | Azure Files | Azure NetApp Files |
| -- | -- | -- |
-| Minimum Share/Volume Size | Premium<br><ul><li>100 GiB</li></ul><br>Standard<br><ul><li>No minimum (SMB only - NFS requires Premium shares).</li></ul> | All tiers<br><ul><li>100 GiB (Minimum capacity pool size: 1 TiB)</li></ul> |
-| Maximum Share/Volume Size | 100 TiB | All tiers<br><ul><li>Up to 100 TiB (regular volume)</li><li>50 TiB - 500 TiB (large volume)</li><li>1000 TiB capacity pool size limit</li></ul><br>Up to 12.5 PiB per Azure NetApp account |
+| Minimum Share/Volume Size | Premium<br><ul><li>100 GiB</li></ul><br>Standard<br><ul><li>No minimum (SMB only - NFS requires Premium shares).</li></ul> | All tiers<br><ul><li>50 GiB (Minimum capacity pool size: 1 TiB)</li></ul> |
+| Maximum Share/Volume Size | 100 TiB | All tiers<br><ul><li>Up to 100 TiB (regular volume)</li><li>50 TiB - 2 PiB (large volume)</li><li>1000 TiB capacity pool size limit</li></ul><br>Up to 12.5 PiB per Azure NetApp account |
| Maximum Share/Volume IOPS | Premium<br><ul><li>Up to 100k</li></ul><br>Standard<br><ul><li>Up to 20k</li></ul> | Ultra and Premium<br><ul><li>Up to 450k</li></ul><br>Standard<br><ul><li>Up to 320k</li></ul> |
| Maximum Share/Volume Throughput | Premium<br><ul><li>Up to 10 GiB/s</li></ul><br>Standard<br><ul><li>Up to [storage account limits](./storage-files-scale-targets.md#storage-account-scale-targets).</li></ul> | Ultra<br><ul><li>4.5 GiB/s (regular volume)</li><li>10 GiB/s (large volume)</li></ul><br>Premium<br><ul><li>Up to 4.5 GiB/s (regular volume)</li><li>Up to 6.4 GiB/s (large volume)</li></ul><br>Standard<br><ul><li>Up to 1.6 GiB/s (regular and large volume)</li></ul> |
| Maximum File Size | 4 TiB | 16 TiB |
stream-analytics Capture Event Hub Data Delta Lake https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-delta-lake.md
Verify that the parquet files with Delta lake format are generated in the Azure
:::image type="content" source="./media/capture-event-hub-data-delta-lake/verify-captured-data.png" alt-text="Screenshot showing the generated Parquet files in the Azure Data Lake Storage (ADLS) container." lightbox="./media/capture-event-hub-data-delta-lake/verify-captured-data.png" ::: + ## Next steps Now you know how to use the Stream Analytics no code editor to create a job that captures Event Hubs data to Azure Data Lake Storage Gen2 in Delta lake format. Next, you can learn more about Azure Stream Analytics and how to monitor the job that you created.
stream-analytics Capture Event Hub Data Parquet https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/capture-event-hub-data-parquet.md
Use the following steps to configure a Stream Analytics job to capture data in A
:::image type="content" source="./media/capture-event-hub-data-parquet/edit-fields.png" alt-text="Screenshot showing sample data under Data Preview." lightbox="./media/capture-event-hub-data-parquet/edit-fields.png" ::: 1. Select the **Azure Data Lake Storage Gen2** tile to edit the configuration. 1. On the **Azure Data Lake Storage Gen2** configuration page, follow these steps:
- 1. Select the subscription, storage account name and container from the drop-down menu.
+ 1. Select the subscription, storage account name, and container from the drop-down menu.
1. Once the subscription is selected, the authentication method and storage account key should be automatically filled in. 1. Select **Parquet** for **Serialization** format.
Use the following steps to configure a Stream Analytics job to capture data in A
1. On the Event Hubs instance page for your event hub, select **Generate data**, select **Stocks data** for dataset, and then select **Send** to send some sample data to the event hub. 1. Verify that the Parquet files are generated in the Azure Data Lake Storage container.
- :::image type="content" source="./media/capture-event-hub-data-parquet/verify-captured-data.png" alt-text="Screenshot showing the generated Parquet files in the ADLS container." lightbox="./media/capture-event-hub-data-parquet/verify-captured-data.png" :::
+ :::image type="content" source="./media/capture-event-hub-data-parquet/verify-captured-data.png" alt-text="Screenshot showing the generated Parquet files in the Azure Data Lake Storage container." lightbox="./media/capture-event-hub-data-parquet/verify-captured-data.png" :::
1. Select **Process data** on the left menu. Switch to the **Stream Analytics jobs** tab. Select **Open metrics** to monitor it. :::image type="content" source="./media/capture-event-hub-data-parquet/open-metrics-link.png" alt-text="Screenshot showing Open Metrics link selected." lightbox="./media/capture-event-hub-data-parquet/open-metrics-link.png" :::
Use the following steps to configure a Stream Analytics job to capture data in A
:::image type="content" source="./media/capture-event-hub-data-parquet/job-metrics.png" alt-text="Screenshot showing metrics of the Stream Analytics job." lightbox="./media/capture-event-hub-data-parquet/job-metrics.png" ::: ++ ## Next steps Now you know how to use the Stream Analytics no code editor to create a job that captures Event Hubs data to Azure Data Lake Storage Gen2 in Parquet format. Next, you can learn more about Azure Stream Analytics and how to monitor the job that you created. + * [Introduction to Azure Stream Analytics](stream-analytics-introduction.md) * [Monitor Stream Analytics job with Azure portal](stream-analytics-monitoring.md)
stream-analytics Filter Ingest Data Lake Storage Gen2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-data-lake-storage-gen2.md
This article describes how you can use the no code editor to easily create a Str
{"RecordType":"MO","SystemIdentity":"d0","FileNum":"559","SwitchNum":"US","CallingNum":"456757102","CallingIMSI":"466920401237309","CalledNum":"345617823","CalledIMSI":"466923000886460","DateS":"20220524","TimeType":1,"CallPeriod":696,"ServiceType":"V","Transfer":1,"OutgoingTrunk":"419","MSRN":"886932429155","callrecTime":"2022-05-25T02:07:22Z","EventProcessedUtcTime":"2022-05-25T02:07:50.5478116Z","PartitionId":0,"EventEnqueuedUtcTime":"2022-05-25T02:07:21.9190000Z", "TimeS":null,"CallingCellID":null,"CalledCellID":null,"IncomingTrunk":null,"CalledNum2":null,"FCIFlag":null} ``` ## Next steps
stream-analytics Filter Ingest Synapse Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/filter-ingest-synapse-sql.md
Use the following steps to develop a Stream Analytics job to filter and ingest r
:::image type="content" source="./media/filter-ingest-synapse-sql/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/filter-ingest-synapse-sql/no-code-list-jobs.png" ::: + ## Next steps Learn more about Azure Stream Analytics and how to monitor the job you've created.
stream-analytics No Code Build Power Bi Dashboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-build-power-bi-dashboard.md
Now, you have the Azure Stream Analytics job running and the data is continuousl
5. Then, you can adjust its size and get the continuously updated dashboard as shown in the following example. :::image type="content" source="./media/no-code-build-power-bi-dashboard/pbi-dashboard-report.png" alt-text="Screenshot of the pbi dashboard report." lightbox="./media/no-code-build-power-bi-dashboard/pbi-dashboard-report.png" ::: ## Next steps
stream-analytics No Code Enrich Event Hub Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-enrich-event-hub-data.md
This article describes how you can use the no code editor to easily create a Str
:::image type="content" source="./media/no-code-enrich-event-hub-data/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/no-code-enrich-event-hub-data/no-code-list-jobs.png" ::: + ## Next steps Learn more about Azure Stream Analytics and how to monitor the job you've created.
stream-analytics No Code Filter Ingest Data Explorer https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-filter-ingest-data-explorer.md
This article describes how you can use the no code editor to easily create a Str
1. In the [Azure portal](https://portal.azure.com), locate and select the Azure Event Hubs instance. 1. Select **Features** > **Process Data** and then select **Start** on the **Filter and store data to Azure Data Explorer** card.
- :::image type="content" source="./media/no-code-filter-ingest-data-explorer/event-hub-process-data-templates.png" alt-text="Screenshot showing the Filter and ingest to ADLS Gen2 card where you select Start." lightbox="./media/no-code-filter-ingest-data-explorer/event-hub-process-data-templates.png" :::
+ :::image type="content" source="./media/no-code-filter-ingest-data-explorer/event-hub-process-data-templates.png" alt-text="Screenshot showing the Filter and ingest to Azure Data Lake Storage Gen2 card where you select Start." lightbox="./media/no-code-filter-ingest-data-explorer/event-hub-process-data-templates.png" :::
1. Enter a name for the Stream Analytics job, then select **Create**.
This article describes how you can use the no code editor to easily create a Str
:::image type="content" source="./media/no-code-filter-ingest-data-explorer/no-code-list-jobs.png" alt-text="Screenshot of the Stream Analytics jobs tab where you view the running jobs status." lightbox="./media/no-code-filter-ingest-data-explorer/no-code-list-jobs.png" ::: + ## Next steps Learn more about Azure Stream Analytics and how to monitor the job you've created.
stream-analytics No Code Materialize Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-materialize-cosmos-db.md
To start the job, you must specify:
- **Output data error handling** allows you to specify the behavior you want when a job's output to your destination fails due to data errors. By default, your job retries until the write operation succeeds. You can also choose to drop output events. 9. After you select **Start**, the job starts running within two minutes. View the job under the **Process Data** section in the Stream Analytics jobs tab. You can explore job metrics and stop and restart it as needed. + ## Next steps
-Now you know how to use the Stream Analytics no code editor to develop a job that reads from Event Hubs and calculates aggregates such as counts, averages and writes it to your Azure Cosmos DB resource.
+Now you know how to use the Stream Analytics no code editor to develop a job that reads from Event Hubs, calculates aggregates such as counts and averages, and writes them to your Azure Cosmos DB resource.
* [Introduction to Azure Stream Analytics](stream-analytics-introduction.md) * [Monitor Stream Analytics job with Azure portal](./stream-analytics-monitoring.md)
stream-analytics No Code Transform Filter Ingest Sql https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/no-code-transform-filter-ingest-sql.md
In this section, you create an Azure Stream Analytics job using the no-code edit
:::image type="content" source="./media/no-code-transform-filter-ingest-sql/sql-output.png" alt-text="Screenshot that shows contents of the stocks table in the database." lightbox="./media/no-code-transform-filter-ingest-sql/sql-output.png"::: + ## Next steps Learn more about Azure Stream Analytics and how to monitor the job you've created.
stream-analytics Stream Analytics Twitter Sentiment Analysis Trends https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/stream-analytics/stream-analytics-twitter-sentiment-analysis-trends.md
Last updated 10/03/2022
# Social media analysis with Azure Stream Analytics
-This article teaches you how to build a social media sentiment analysis solution by bringing real-time Twitter events into Azure Event Hubs and then analyzing them using Stream Analytics. You write an Azure Stream Analytics query to analyze the data and store results for later use or create a [Power BI](https://powerbi.com/) dashboard to provide insights in real-time.
+This article teaches you how to build a social media sentiment analysis solution by bringing real-time X events into Azure Event Hubs and then analyzing them using Stream Analytics. You write an Azure Stream Analytics query to analyze the data and store results for later use or create a [Power BI](https://powerbi.com/) dashboard to provide insights in real-time.
Social media analytics tools help organizations understand trending topics. Trending topics are subjects and attitudes that have a high volume of posts on social media. Sentiment analysis, which is also called *opinion mining*, uses social media analytics tools to determine attitudes toward a product or idea.
-Real-time Twitter trend analysis is a great example of an analytics tool because the hashtag subscription model enables you to listen to specific keywords (hashtags) and develop sentiment analysis of the feed.
+Real-time X trend analysis is a great example of an analytics tool because the hashtag subscription model enables you to listen to specific keywords (hashtags) and develop sentiment analysis of the feed.
## Scenario: Social media sentiment analysis in real time
-A company that has a news media website is interested in gaining an advantage over its competitors by featuring site content that's immediately relevant to its readers. The company uses social media analysis on topics that are relevant to readers by doing real-time sentiment analysis of Twitter data.
+A company that has a news media website is interested in gaining an advantage over its competitors by featuring site content that's immediately relevant to its readers. The company uses social media analysis on topics that are relevant to readers by doing real-time sentiment analysis of X data.
-To identify trending topics in real time on Twitter, the company needs real-time analytics about the tweet volume and sentiment for key topics.
+To identify trending topics in real time on X, the company needs real-time analytics about the tweet volume and sentiment for key topics.
## Prerequisites
-In this how-to guide, you use a client application that connects to Twitter and looks for tweets that have certain hashtags (which you can set). The following list gives you prerequisites for running the application and analyzing the tweets using Azure Streaming Analytics.
+In this how-to guide, you use a client application that connects to X and looks for tweets that have certain hashtags (which you can set). The following list gives you prerequisites for running the application and analyzing the tweets using Azure Stream Analytics.
* If you don't have an Azure subscription, create a [free account](https://azure.microsoft.com/free/).
-* A [Twitter](https://twitter.com) account.
+* An [X](https://x.com) account.
-* The TwitterClientCore application, which reads the Twitter feed. To get this application, download [TwitterClientCore](https://github.com/Azure/azure-stream-analytics/tree/master/DataGenerators/TwitterClientCore).
+* The TwitterClientCore application, which reads the X feed. To get this application, download [TwitterClientCore](https://github.com/Azure/azure-stream-analytics/tree/master/DataGenerators/TwitterClientCore).
* Install the [.NET Core CLI](/dotnet/core/tools/?tabs=netcore2x) version 2.1.0.
The sample application generates events and pushes them to an event hub. Azure E
### Create an Event Hubs namespace and event hub
-Follow instructions from [Quickstart: Create an event hub using Azure portal](../event-hubs/event-hubs-create.md) to create an Event Hubs namespace and an event hub named **socialtwitter-eh**. You can use a different name. If you do, make a note of it, because you need the name later. You don't need to set any other options for the event hub.
+Follow instructions from [Quickstart: Create an event hub using Azure portal](../event-hubs/event-hubs-create.md) to create an Event Hubs namespace and an event hub named **socialx-eh**. You can use a different name. If you do, make a note of it, because you need the name later. You don't need to set any other options for the event hub.
### Grant access to the event hub
Before a process can send data to an event hub, the event hub needs a policy tha
>[!NOTE] >There is a **Shared access policies** option for both the namespace and the event hub. Make sure you're working in the context of your event hub, not the namespace.
-3. On the **Shared access policies** page, select **+ Add** on the commandbar. Then enter *socialtwitter-access* for the **Policy name** and check the **Manage** checkbox.
+3. On the **Shared access policies** page, select **+ Add** on the command bar. Then enter *socialx-access* for the **Policy name** and check the **Manage** checkbox.
4. Select **Create**.
Before a process can send data to an event hub, the event hub needs a policy tha
The connection string looks like this: ```
- Endpoint=sb://EVENTHUBS-NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=socialtwitter-access;SharedAccessKey=XXXXXXXXXXXXXXX;EntityPath=socialtwitter-eh
+ Endpoint=sb://EVENTHUBS-NAMESPACE.servicebus.windows.net/;SharedAccessKeyName=socialx-access;SharedAccessKey=XXXXXXXXXXXXXXX;EntityPath=socialx-eh
``` Notice that the connection string contains multiple key-value pairs, separated with semicolons: `Endpoint`, `SharedAccessKeyName`, `SharedAccessKey`, and `EntityPath`.
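Broken out as key-value pairs (values taken from the sample string above), the connection string parses to:

```json
{
  "Endpoint": "sb://EVENTHUBS-NAMESPACE.servicebus.windows.net/",
  "SharedAccessKeyName": "socialx-access",
  "SharedAccessKey": "XXXXXXXXXXXXXXX",
  "EntityPath": "socialx-eh"
}
```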
Before a process can send data to an event hub, the event hub needs a policy tha
> [!NOTE] > For security, parts of the connection string in the example have been removed.
-## Configure and start the Twitter client application
+## Configure and start the X client application
-The client application gets tweet events directly from Twitter. In order to do so, it needs permission to call the Twitter Streaming APIs. To configure that permission, you create an application in Twitter, which generates unique credentials (such as an OAuth token). You can then configure the client application to use these credentials when it makes API calls.
+The client application gets tweet events directly from X. In order to do so, it needs permission to call the Twitter Streaming APIs. To configure that permission, you create an application in X, which generates unique credentials (such as an OAuth token). You can then configure the client application to use these credentials when it makes API calls.
-### Create a Twitter application
-If you don't already have a Twitter application that you can use for this how-to guide, you can create one. You must already have a Twitter account.
+### Create an X application
+If you don't already have an X application that you can use for this how-to guide, you can create one. You must already have an X account.
> [!NOTE]
-> The exact process in Twitter for creating an application and getting the keys, secrets, and token might change. If these instructions don't match what you see on the Twitter site, refer to the Twitter developer documentation.
+> The exact process in X for creating an application and getting the keys, secrets, and token might change. If these instructions don't match what you see on the X site, refer to the X developer documentation.
-1. From a web browser, go to [Twitter For Developers](https://developer.twitter.com/en/apps), create a developer account, and select **Create an app**. You might see a message saying that you need to apply for a Twitter developer account. Feel free to do so, and after your application has been approved, you should see a confirmation email. It could take several days to be approved for a developer account.
+1. From a web browser, go to [X Developers](https://developer.x.com/en/apps), create a developer account, and select **Create an app**. You might see a message saying that you need to apply for an X developer account. Feel free to do so, and after your application has been approved, you should see a confirmation email. It could take several days to be approved for a developer account.
- ![Screenshot shows the Create an app button.](./media/stream-analytics-twitter-sentiment-analysis-trends/provide-twitter-app-details.png "Twitter application details")
+ ![Screenshot shows the Create an app button.](./media/stream-analytics-twitter-sentiment-analysis-trends/provide-twitter-app-details.png "X application details")
2. In the **Create an application** page, provide the details for the new app, and then select **Create your Twitter application**.
- ![Screenshot shows the App details pane where you can enter values for your app.](./media/stream-analytics-twitter-sentiment-analysis-trends/provide-twitter-app-details-create.png "Twitter application details")
+ ![Screenshot shows the App details pane where you can enter values for your app.](./media/stream-analytics-twitter-sentiment-analysis-trends/provide-twitter-app-details-create.png "X application details")
3. In the application page, select the **Keys and Tokens** tab and copy the values for **Consumer API Key** and **Consumer API Secret Key**. Also, select **Create** under **Access Token and Access Token Secret** to generate the access tokens. Copy the values for **Access Token** and **Access Token Secret**.
- Save the values that you retrieved for the Twitter application. You need the values later.
+ Save the values that you retrieved for the X application. You need the values later.
> [!NOTE]
-> The keys and secrets for the Twitter application provide access to your Twitter account. Treat this information as sensitive, the same as you do your Twitter password. For example, don't embed this information in an application that you give to others.
+> The keys and secrets for the X application provide access to your X account. Treat this information as sensitive, the same as you do your X password. For example, don't embed this information in an application that you give to others.
### Configure the client application
-We've created a client application that connects to Twitter data using [Twitter Streaming APIs](https://dev.twitter.com/streaming/overview) to collect tweet events about a specific set of topics.
+We've created a client application that connects to X data using [Twitter Streaming APIs](https://dev.twitter.com/streaming/overview) to collect tweet events about a specific set of topics.
Before the application runs, it requires certain information from you, like the Twitter keys and the event hub connection string.
Before the application runs, it requires certain information from you, like the
## Create a Stream Analytics job
-Now that tweet events are streaming in real time from Twitter, you can set up a Stream Analytics job to analyze these events in real time.
+Now that tweet events are streaming in real time from X, you can set up a Stream Analytics job to analyze these events in real time.
1. In the Azure portal, navigate to your resource group and select **+ Add**. Then search for **Stream Analytics job** and select **Create**.
-2. Name the job `socialtwitter-sa-job` and specify a subscription, resource group, and location.
+2. Name the job `socialx-sa-job` and specify a subscription, resource group, and location.
It's a good idea to place the job and the event hub in the same region for best performance and so that you don't pay to transfer data between regions.
Now that tweet events are streaming in real time from Twitter, you can set up a
| -- | -- | -- |
|Input alias| *TwitterStream* | Enter an alias for the input. |
|Subscription | \<Your subscription\> | Select the Azure subscription that you want to use. |
- |Event Hubs namespace | *asa-twitter-eventhub* |
- |Event hub name | *socialtwitter-eh* | Choose *Use existing*. Then select the event hub you created.|
+ |Event Hubs namespace | *asa-x-eventhub* |
+ |Event hub name | *socialx-eh* | Choose *Use existing*. Then select the event hub you created.|
|Event compression type| Gzip | The data compression type.|

Leave the remaining default values and select **Save**.

## Specify the job query
-Stream Analytics supports a simple, declarative query model that describes transformations. To learn more about the language, see the [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference). This how-to guide helps you author and test several queries over Twitter data.
+Stream Analytics supports a simple, declarative query model that describes transformations. To learn more about the language, see the [Azure Stream Analytics Query Language Reference](/stream-analytics-query/stream-analytics-query-language-reference). This how-to guide helps you author and test several queries over X data.
To compare the number of mentions among topics, you can use a [Tumbling window](/stream-analytics-query/tumbling-window-azure-stream-analytics) to get the count of mentions by topic every five seconds.
In this how-to guide, you write the aggregated tweet events from the job query t
* **Output alias**: Use the name `TwitterStream-Output`. * **Import options**: Select **Select storage from your subscriptions**. * **Storage account**. Select your storage account.
- * **Container**. Select **Create new** and enter `socialtwitter`.
+ * **Container**. Select **Create new** and enter `socialx`.
4. Select **Save**.
virtual-desktop Authentication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/authentication.md
To access Microsoft Entra resources with Windows Hello for Business or security
#### In-session smart card authentication
-To use a smart card in your session, make sure you've installed the smart card drivers on the session host and enabled [smart card redirection](configure-device-redirections.md#smart-card-redirection). Review the [client comparison chart](compare-remote-desktop-clients.md?pivots=azure-virtual-desktop#in-session-authentication) to make sure your client supports smart card redirection.
+To use a smart card in your session, make sure you've installed the smart card drivers on the session host and enabled [smart card redirection](redirection-configure-smart-cards.md). Review the comparison charts for [Windows App](/windows-app/compare-platforms-features?pivots=azure-virtual-desktop#device-redirection) and the [Remote Desktop app](compare-remote-desktop-clients.md?pivots=azure-virtual-desktop#device-redirection) to make sure you can use smart card redirection.
## Next steps
virtual-desktop Client Device Redirection Intune https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/client-device-redirection-intune.md
Last updated 05/29/2024
> [!TIP] > This article contains information for multiple products that use the Remote Desktop Protocol (RDP) to provide remote access to Windows desktops and applications.
-Redirection of resources and peripherals from a user's local device to a remote session from Azure Virtual Desktop or Windows 365 using the Remote Desktop Protocol (RDP), such as the clipboard, camera, and audio, is normally governed by central configuration of a host pool and its session hosts. Client device redirection is configured for Windows App and the Remote Desktop app using a combination of Microsoft Intune app configuration policies, app protection policies, and Microsoft Entra Conditional Access on a user's local device.
+Redirection of resources and peripherals from a user's local device to a remote session from Azure Virtual Desktop or Windows 365 over the Remote Desktop Protocol (RDP), such as the clipboard, camera, and audio, is normally governed by central configuration of a host pool and its session hosts. Client device redirection is configured for Windows App and the Remote Desktop app using a combination of Microsoft Intune app configuration policies, app protection policies, and Microsoft Entra Conditional Access on a user's local device.
These features enable you to achieve the following scenarios:
virtual-desktop Clipboard Transfer Direction Data Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/clipboard-transfer-direction-data-types.md
You apply settings to your session hosts. It doesn't depend on a specific Remote
To configure the clipboard transfer direction, you need:
-- Host pool RDP properties must allow [clipboard redirection](configure-device-redirections.md#clipboard-redirection), otherwise it will be completely blocked.
+- Host pool RDP properties must allow [clipboard redirection](redirection-configure-clipboard.md); otherwise, it's completely blocked.
- Depending on the method you use to configure the clipboard transfer direction:
virtual-desktop Compare Remote Desktop Clients https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/compare-remote-desktop-clients.md
The following table provides a description for each of the multimedia features:
The following sections detail the redirection support available on each platform.
+> [!TIP]
+> Redirection of some peripheral and resource types needs to be enabled by an administrator before they can be used in a remote session. For more information, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md), where you can also find links in the [Related content](redirection-remote-desktop-protocol.md#related-content) section to articles that explain how to configure redirection for specific peripheral and resource types.
+ ### Device redirection The following table shows which local devices you can redirect to a remote session on each platform:
The following table shows which other features you can redirect:
::: zone pivot="azure-virtual-desktop" | Feature | Windows<br />(MSI) | Windows<br />(AVD Store) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser | |--|:-:|:-:|:-:|:-:|:-:|:-:|:-:|
-| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
| Clipboard - unidirectional | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| Third-party virtual channel plugins | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | | Time zone | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | | WebAuthn | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
The following table shows which other features you can redirect:
::: zone pivot="windows-365,dev-box"

| Feature | Windows<br />(MSI) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
|--|:-:|:-:|:-:|:-:|:-:|
-| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
| Clipboard - unidirectional | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| Third-party virtual channel plugins | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
| Time zone | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| WebAuthn | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
The following table shows which other features you can redirect:
::: zone pivot="remote-desktop-services,remote-pc"

| Feature | Windows<br />(MSTSC) | Windows<br />(RD Store) | macOS | iOS/<br />iPadOS | Android/<br />Chrome OS | Web browser |
|--|:-:|:-:|:-:|:-:|:-:|:-:|
-| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Clipboard - bidirectional | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup1; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; |
| Clipboard - unidirectional | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
-| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup2; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
+| Location | <sup>&#160;&#160;&#8201;</sup><sub>:::image type="icon" source="media/yes.svg" border="false":::</sub>&sup3; | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| Third-party virtual channel plugins | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
| Time zone | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> |
| WebAuthn | <sub>:::image type="icon" source="media/yes.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> | <sub>:::image type="icon" source="media/no.svg" border="false":::</sub> |
The following table shows which other features you can redirect:
::: zone-end

1. Text and images only.
-1. From a local device running Windows 11 only.
1. Text only.
+1. From a local device running Windows 11 only.
+ The following table provides a description of each of the other features you can redirect:
virtual-desktop Configure Device Redirections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/configure-device-redirections.md
- Title: Configure device redirection - Azure
-description: How to configure device redirection for Azure Virtual Desktop.
-
- Previously updated : 01/08/2024
-
-
-# Configure device redirection
-
-Configuring device redirection for your Azure Virtual Desktop environment allows you to use printers, USB devices, microphones, and other peripheral devices in the remote session. Some device redirections require changes to both Remote Desktop Protocol (RDP) properties and Group Policy settings.
-
-## Supported device redirection
-
-Each client supports different kinds of device redirections. Check out [Compare the clients](compare-remote-desktop-clients.md) for the full list of supported device redirections for each client.
-
->[!IMPORTANT]
->You can only enable redirections with binary settings that apply both to and from the remote machine.
-
-## Customizing RDP properties for a host pool
-
-To learn more about customizing RDP properties for a host pool using PowerShell or the Azure portal, check out [RDP properties](customize-rdp-properties.md). For the full list of supported RDP properties, see [Supported RDP file settings](rdp-properties.md).
-
-## Setup device redirection
-
-You can use the following RDP properties and Group Policy settings to configure device redirection.
-
-### Audio input (microphone) redirection
-
-Set the following RDP property to configure audio input redirection:
-
-- `audiocapturemode:i:1` enables audio input redirection.
-- `audiocapturemode:i:0` disables audio input redirection.
-
-### Audio output (speaker) redirection
-
-Set the following RDP property to configure audio output redirection:
-
-- `audiomode:i:0` enables audio output redirection.
-- `audiomode:i:1` or `audiomode:i:2` disable audio output redirection.
-
-### Camera redirection
-
-Set the following RDP property to configure camera redirection:
-
-- `camerastoredirect:s:*` redirects all cameras.
-- `camerastoredirect:s:` disables camera redirection.
-
->[!NOTE]
->Even if the `camerastoredirect:s:` property is disabled, local cameras may be redirected through the `devicestoredirect:s:` property. To fully disable camera redirection, set `camerastoredirect:s:` and either set `devicestoredirect:s:` or define a subset of plug and play devices that doesn't include any camera.
-
-You can also redirect specific cameras using a semicolon-delimited list of KSCATEGORY_VIDEO_CAMERA interfaces, such as `camerastoredirect:s:\\?\usb#vid_0bda&pid_58b0&mi`.
-
-### Clipboard redirection
-
-Set the following RDP property to configure clipboard redirection:
-
-- `redirectclipboard:i:1` enables clipboard redirection.
-- `redirectclipboard:i:0` disables clipboard redirection.
-
-### COM port redirection
-
-Set the following RDP property to configure COM port redirection:
-
-- `redirectcomports:i:1` enables COM port redirection.
-- `redirectcomports:i:0` disables COM port redirection.
-
-### USB redirection
-
->[!IMPORTANT]
->To redirect a mass storage USB device connected to your local computer to a remote session host that uses a supported operating system for Azure Virtual Desktop, you'll need to configure the **Drive/storage redirection** RDP property. Enabling the **USB redirection** RDP property by itself won't work. For more information, see [Local drive redirection](#local-drive-redirection).
-
-To configure the property, open the Azure portal and set the following RDP property to enable USB device redirection:
-
-- `usbdevicestoredirect:s:*` enables USB device redirection for all supported devices on the client.
-- `usbdevicestoredirect:s:` disables USB device redirection.
-
-In order to use USB redirection, you'll need to enable Plug and Play device redirection on your session host first. To enable Plug and Play:
-
-1. Decide whether you want to configure Group Policy centrally from your domain or locally for each session host:
-
- - To configure it from an Active Directory (AD) Domain, open the Group Policy Management Console (GPMC) and create or edit a policy that targets your session hosts.
- - To configure it locally, open the Local Group Policy Editor on the session host.
-
-1. Go to **Computer Configuration** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and resource redirection**.
-1. Select **Do not allow supported Plug and Play device redirection** and set it to **Disabled**.
-1. Restart your VM.
-
-After that, to enable USB redirection:
-
-1. For client devices, apply the following Group Policy setting. You can apply this policy centrally for devices joined to an Active Directory domain or [managed by Intune](/mem/intune/configuration/administrative-templates-windows), or locally on the device using the Local Group Policy editor:
-
- **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client** > **RemoteFX USB Device Redirection**.
-
-1. Select **Allow RDP redirection of other supported RemoteFX USB devices from this computer**.
-1. Select the **Enabled** option, and then select the **Administrators and Users in RemoteFX USB Redirection Access Rights** box.
-1. Select **OK**.
-1. Open an elevated Command Prompt and run the following command:
-
- ```cmd
- gpupdate /force
- ```
-
-1. Restart the local device.
-
->[!NOTE]
->If the USB device you're looking for isn't appearing, check out our troubleshooting article at [Some USB devices are not available through RemoteFX USB redirection](/troubleshoot/windows-client/remote/usb-devices-unavailable-remotefx-usb-redirection).
-
-Next, make sure the USB device you're trying to connect to is compatible with Azure Virtual Desktop. To check compatibility:
-
-1. Connect the USB device to your local machine.
-1. Run **mstsc.exe** to open the Remote Desktop client.
-
- >[!NOTE]
 >Although you can use mstsc.exe to confirm the device supports redirection, you can't use the program to connect to Azure Virtual Desktop.
-
-1. Select **Show Options**.
-1. Select the **Local Resources** tab.
-1. Under **Local devices and resources**, select **More**.
-1. If your device is compatible, it should appear under **Other supported RemoteFX USB devices**. You can only use USB redirection on USB devices that appear in this list.
-
-### Plug and play device redirection
-
-Set the following RDP property to configure plug and play device redirection:
-
-- `devicestoredirect:s:*` enables redirection of all plug and play devices.
-- `devicestoredirect:s:` disables redirection of plug and play devices.
-
-You can also select specific plug and play devices using a semicolon-delimited list, such as `devicestoredirect:s:root\*PNP0F08`.
-
-### Local drive redirection
-
-Set the following RDP property to configure local drive redirection:
-
-- `drivestoredirect:s:*` enables redirection of all disk drives.
-- `drivestoredirect:s:` disables local drive redirection.
-
-You can also select specific drives using a semicolon-delimited list, such as `drivestoredirect:s:C:;E:;`.
-
-To enable web client file transfer, set `drivestoredirect:s:*`. If you set any other value for this RDP property, web client file transfer will be disabled.
-
-### Location redirection
-
-Set the following RDP property to configure location redirection:
-
-- `redirectlocation:i:1` enables location redirection.
-- `redirectlocation:i:0` disables location redirection.
-
-When enabled, the location of the local device is sent to the session host and set as its location. Location redirection lets applications like Maps or Printer Search use your physical location. When you disable location redirection, these applications will use the location of the session host instead.
-
-### Printer redirection
-
-Set the following RDP property to configure printer redirection:
-
-- `redirectprinters:i:1` enables printer redirection.
-- `redirectprinters:i:0` disables printer redirection.
-
-### Smart card redirection
-
-Set the following RDP property to configure smart card redirection:
-
-- `redirectsmartcards:i:1` enables smart card redirection.
-- `redirectsmartcards:i:0` disables smart card redirection.
-
-### WebAuthn redirection
-
-Set the following RDP property to configure WebAuthn redirection:
-
-- `redirectwebauthn:i:1` enables WebAuthn redirection.
-- `redirectwebauthn:i:0` disables WebAuthn redirection.
-
-When enabled, WebAuthn requests from the session are sent to the local PC to be completed using the local Windows Hello for Business or security devices like FIDO keys. For more information, see [In-session passwordless authentication](authentication.md#in-session-passwordless-authentication).
-
-## Disable redirection on the local device
-
-If you're connecting from personal resources to corporate ones using the Windows Desktop clients, you can disable drive, printer, and clipboard redirection on your local device for security purposes by overriding the configuration from your administrator.
-
-### Disable drive redirection
-
-To disable drive redirection:
-
-1. Open the **Registry Editor (regedit)**.
-
-1. Go to the following registry key and create or set the value:
-
- - **Key**: `HKLM\Software\Microsoft\Terminal Server Client`
- - **Type**: `REG_DWORD`
- - **Value name**: `DisableDriveRedirection`
- - **Value data**: `1`
-
-### Disable printer redirection
-
-To disable printer redirection:
-
-1. Open the **Registry Editor (regedit)**.
-
-1. Go to the following registry key and create or set the value:
-
- - **Key**: `HKLM\Software\Microsoft\Terminal Server Client`
- - **Type**: `REG_DWORD`
- - **Value name**: `DisablePrinterRedirection`
- - **Value data**: `1`
-
-### Disable clipboard redirection
-
-To disable clipboard redirection:
-
-1. Open the **Registry Editor (regedit)**.
-
-1. Go to the following registry key and create or set the value:
-
- - **Key**: `HKLM\Software\Microsoft\Terminal Server Client`
- - **Type**: `REG_DWORD`
- - **Value name**: `DisableClipboardRedirection`
- - **Value data**: `1`
-
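If you'd rather script these three overrides than edit the registry by hand, the following PowerShell sketch sets the same values described above. Run it from an elevated prompt on the local device; this is an equivalent of the manual steps, not a separate supported tool:

```powershell
# Sketch: disable drive, printer, and clipboard redirection on this local
# device by setting the three registry values described in this section.
$key = 'HKLM:\Software\Microsoft\Terminal Server Client'
if (-not (Test-Path $key)) { New-Item -Path $key -Force | Out-Null }

foreach ($name in 'DisableDriveRedirection',
                  'DisablePrinterRedirection',
                  'DisableClipboardRedirection') {
    New-ItemProperty -Path $key -Name $name -PropertyType DWord -Value 1 -Force | Out-Null
}
```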
-## Next steps
-
-- For more information about how to configure RDP settings, see [Customize RDP properties](customize-rdp-properties.md).
-- For a list of RDP settings you can change, see [Supported RDP properties for Azure Virtual Desktop](rdp-properties.md).
virtual-desktop Rdp Properties https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/rdp-properties.md
Title: Supported RDP properties with Azure Virtual Desktop - Azure Virtual Desktop
-description: Learn about the supported RDP properties you can use with Azure Virtual Desktop.
+ Title: Supported RDP properties
+description: Learn about the supported RDP properties you can set to customize the behavior of a remote session, such as for device redirection, display settings, session behavior, and more.
+
- Previously updated : 11/15/2022
+ Last updated : 08/07/2024
-# Supported RDP properties with Azure Virtual Desktop
+# Supported RDP properties
-Organizations can configure Remote Desktop Protocol (RDP) properties centrally in Azure Virtual Desktop to determine how a connection to Azure Virtual Desktop should behave. There are a wide range of RDP properties that can be set, such as for device redirection, display settings, session behavior, and more. For more information, see [Customize RDP properties for a host pool](customize-rdp-properties.md).
+
+The Remote Desktop Protocol (RDP) has a number of properties you can set to customize the behavior of a remote session, such as for device redirection, display settings, session behavior, and more.
+
+The following sections list each available RDP property with its syntax, description, supported values, default value, and the services and products you can use it with.
+
+How you use these RDP properties depends on the service or product you're using:
+
+| Product | Configuration point |
+|--|--|
+| Azure Virtual Desktop | Host pool RDP properties. To learn more, see [Customize RDP properties for a host pool](customize-rdp-properties.md). |
+| Remote Desktop Services | Session collection RDP properties |
+| Remote PC connections | The `.rdp` file you use to connect to a remote PC. |
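For Azure Virtual Desktop, one way to set these properties is through the host pool's custom RDP property string. The following is a minimal sketch using the Az.DesktopVirtualization PowerShell module; the resource group and host pool names are placeholders, not values from this article:

```powershell
# Sketch: set the custom RDP property string on a host pool.
# 'rg-avd' and 'hp-avd' are placeholder names for your own resources.
Update-AzWvdHostPool -ResourceGroupName 'rg-avd' -Name 'hp-avd' `
    -CustomRdpProperty 'audiomode:i:0;redirectclipboard:i:1;'
```

Note that the string passed to `-CustomRdpProperty` replaces the host pool's existing custom RDP properties, so include every property you want to keep.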
> [!NOTE]
-> Supported RDP properties differ when using Azure Virtual Desktop compared to Remote Desktop Services. Use the following tables to understand each setting and whether it applies when connecting to Azure Virtual Desktop, Remote Desktop Services, or both.
+> For each RDP property, replace `<value>` with an allowed value for that property.
+
+## Connections
+
+Here are the RDP properties that you can use to configure connections.
+
+### `alternate full address`
+
+- **Syntax**: `alternate full address:s:<value>`
+- **Description**: Specifies an alternate name or IP address of the remote computer.
+- **Supported values**:
+ - A valid hostname, IPv4 address, or IPv6 address.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `alternate shell`
+
+- **Syntax**: `alternate shell:s:<value>`
+- **Description**: Specifies a program to be started automatically in a remote session as the shell instead of explorer.
+- **Supported values**:
+ - A valid path to an executable file, such as `C:\Program Files\MyApp\myapp.exe`.
+- **Default value**: None.
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `authentication level`
+
+- **Syntax**: `authentication level:i:<value>`
+- **Description**: Defines the server authentication level settings.
+- **Supported values**:
+ - `0`: If server authentication fails, connect to the computer without warning.
+ - `1`: If server authentication fails, don't establish a connection.
+ - `2`: If server authentication fails, show a warning, and choose to connect or refuse the connection.
+ - `3`: No authentication requirement specified.
+- **Default value**: `3`
+- **Applies to**:
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `disableconnectionsharing`
+
+- **Syntax**: `disableconnectionsharing:i:<value>`
+- **Description**: Determines whether the client reconnects to an existing disconnected session or initiates a new connection when a connection is launched.
+- **Supported values**:
+ - `0`: Reconnect to any existing session.
+ - `1`: Initiate new connection.
+- **Default value**: `0`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `domain`
+
+- **Syntax**: `domain:s:<value>`
+- **Description**: Specifies the name of the Active Directory domain in which the user account that will be used to sign in to the remote computer is located.
+- **Supported values**:
+ - A valid domain name, such as `CONTOSO`.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `enablecredsspsupport`
+
+- **Syntax**: `enablecredsspsupport:i:<value>`
+- **Description**: Determines whether the client will use the Credential Security Support Provider (CredSSP) for authentication if it's available.
+- **Supported values**:
+ - `0`: RDP won't use CredSSP, even if the operating system supports CredSSP.
+ - `1`: RDP will use CredSSP if the operating system supports CredSSP.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `enablerdsaadauth`
+
+- **Syntax**: `enablerdsaadauth:i:<value>`
+- **Description**: Determines whether the client will use Microsoft Entra ID to authenticate to the remote PC. When used with Azure Virtual Desktop, this provides a single sign-on experience. This property replaces the property [`targetisaadjoined`](#targetisaadjoined).
+- **Supported values**:
+ - `0`: Connections won't use Microsoft Entra authentication, even if the remote PC supports it.
+ - `1`: Connections will use Microsoft Entra authentication if the remote PC supports it.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `full address`
+
+- **Syntax**: `full address:s:<value>`
+- **Description**: Specifies the hostname or IP address of the remote computer that you want to connect to. This is the only mandatory property in a `.rdp` file.
+- **Supported values**:
+ - A valid hostname, IPv4 address, or IPv6 address.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+ - Remote PC connections
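As a sketch of how this looks in practice for a Remote PC connection, the following PowerShell creates a minimal `.rdp` file and opens it with the inbox client; `pc1.contoso.com` is a placeholder host name:

```powershell
# Sketch: write a minimal .rdp file (only 'full address' is mandatory)
# and open it. pc1.contoso.com is a placeholder host name.
@'
full address:s:pc1.contoso.com
screen mode id:i:2
'@ | Set-Content -Path "$env:TEMP\pc1.rdp"

mstsc.exe "$env:TEMP\pc1.rdp"
```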
+
+### `gatewaycredentialssource`
+
+- **Syntax**: `gatewaycredentialssource:i:<value>`
+- **Description**: Specifies the authentication method used for Remote Desktop gateway connections.
+- **Supported values**:
+ - `0`: Ask for password (NTLM).
+ - `1`: Use smart card.
+ - `2`: Use the credentials for the currently signed in user.
+ - `3`: Prompt the user for their credentials and use basic authentication.
+ - `4`: Allow user to select later.
+ - `5`: Use cookie-based authentication.
+- **Default value**: `0`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `gatewayhostname`
+
+- **Syntax**: `gatewayhostname:s:<value>`
+- **Description**: Specifies the host name of a Remote Desktop gateway.
+- **Supported values**:
+ - A valid hostname, IPv4 address, or IPv6 address.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+
+### `gatewayprofileusagemethod`
+
+- **Syntax**: `gatewayprofileusagemethod:i:<value>`
+- **Description**: Specifies whether to use the default Remote Desktop gateway settings.
+- **Supported values**:
+ - `0`: Use the default profile mode, as specified by the administrator.
+ - `1`: Use explicit settings, as specified by the user.
+- **Default value**: `0`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `gatewayusagemethod`
+
+- **Syntax**: `gatewayusagemethod:i:<value>`
+- **Description**: Specifies whether to use a Remote Desktop gateway for the connection.
+- **Supported values**:
+ - `0`: Don't use a Remote Desktop gateway.
+ - `1`: Always use a Remote Desktop gateway.
+ - `2`: Use a Remote Desktop gateway if a direct connection can't be made to the RD Session Host.
+ - `3`: Use the default Remote Desktop gateway settings.
+ - `4`: Don't use a Remote Desktop gateway, bypass gateway for local addresses.<br />Values `0` and `4` are effectively equivalent, but `4` additionally enables the option to bypass local addresses.
+- **Default value**: `0`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `kdcproxyname`
+
+- **Syntax**: `kdcproxyname:s:<value>`
+- **Description**: Specifies the fully qualified domain name of a KDC proxy.
+- **Supported values**:
+ - A valid path to a KDC proxy server, such as `kdc.contoso.com`.
+- **Default value**: None.
+- **Applies to**:
+ - Azure Virtual Desktop. For more information, see [Configure a Kerberos Key Distribution Center proxy](key-distribution-center-proxy.md).
+
+### `promptcredentialonce`
+
+- **Syntax**: `promptcredentialonce:i:<value>`
+- **Description**: Determines whether a user's credentials are saved and used for both the Remote Desktop gateway and the remote computer.
+- **Supported values**:
+ - `0`: Remote session doesn't use the same credentials.
+ - `1`: Remote session does use the same credentials.
+- **Default value**: `1`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `targetisaadjoined`
+
+- **Syntax**: `targetisaadjoined:i:<value>`
+- **Description**: Allows connections to Microsoft Entra joined session hosts using a username and password. This property is only applicable to non-Windows clients and local Windows devices that aren't joined to Microsoft Entra. It is being replaced by the property [`enablerdsaadauth`](#enablerdsaadauth).
+- **Supported values**:
+ - `0`: Connections to Microsoft Entra joined session hosts will succeed for Windows devices that [meet the requirements](/azure/virtual-desktop/deploy-azure-ad-joined-vm#connect-using-the-windows-desktop-client), but other connections will fail.
+ - `1`: Connections to Microsoft Entra joined hosts will succeed but are restricted to entering user name and password credentials when connecting to session hosts.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop. For more information, see [Microsoft Entra joined session hosts in Azure Virtual Desktop](azure-ad-joined-session-hosts.md#connect-using-legacy-authentication-protocols).
+
+### `username`
+
+- **Syntax**: `username:s:<value>`
+- **Description**: Specifies the name of the user account that will be used to sign in to the remote computer.
+- **Supported values**:
+ - Any valid username.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+
+## Session behavior
+
+Here are the RDP properties that you can use to configure session behavior.
+
+### `autoreconnection enabled`
+
+- **Syntax**: `autoreconnection enabled:i:<value>`
+- **Description**: Determines whether the local device will automatically try to reconnect to the remote computer if the connection is dropped, such as when there's a network connectivity interruption.
+- **Supported values**:
+ - `0`: The local device doesn't automatically try to reconnect.
+ - `1`: The local device automatically tries to reconnect.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `bandwidthautodetect`
+
+- **Syntax**: `bandwidthautodetect:i:<value>`
+- **Description**: Determines whether or not to use automatic network bandwidth detection.
+- **Supported values**:
+ - `0`: Don't use automatic network bandwidth detection.
+ - `1`: Use automatic network bandwidth detection.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `compression`
+
+- **Syntax**: `compression:i:<value>`
+- **Description**: Determines whether bulk compression is enabled when transmitting data to the local device.
+- **Supported values**:
+ - `0`: Disable bulk compression.
+ - `1`: Enable RDP bulk compression.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `networkautodetect`
+
+- **Syntax**: `networkautodetect:i:<value>`
+- **Description**: Determines whether automatic network type detection is enabled.
+- **Supported values**:
+ - `0`: Disable automatic network type detection.
+ - `1`: Enable automatic network type detection.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `videoplaybackmode`
+
+- **Syntax**: `videoplaybackmode:i:<value>`
+- **Description**: Determines whether the connection will use RDP-efficient multimedia streaming for video playback.
+- **Supported values**:
+ - `0`: Don't use RDP-efficient multimedia streaming for video playback.
+ - `1`: Use RDP-efficient multimedia streaming for video playback when possible.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+## Device redirection
+
+Here are the RDP properties that you can use to configure device redirection. To learn more, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
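Because these properties are often set together, a common pattern is to append them to a host pool's existing custom RDP property string rather than overwrite it. A sketch, assuming placeholder resource names and that the existing string ends with a semicolon:

```powershell
# Sketch: append device redirection properties to the existing custom
# RDP property string of a host pool. Resource names are placeholders.
$hostPool = Get-AzWvdHostPool -ResourceGroupName 'rg-avd' -Name 'hp-avd'

Update-AzWvdHostPool -ResourceGroupName 'rg-avd' -Name 'hp-avd' `
    -CustomRdpProperty ($hostPool.CustomRdpProperty + 'camerastoredirect:s:*;redirectprinters:i:0;')
```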
+### `audiocapturemode`
+
+- **Syntax**: `audiocapturemode:i:<value>`
+- **Description**: Indicates whether audio input redirection is enabled.
+- **Supported values**:
+ - `0`: Disable audio capture from a local device.
+ - `1`: Enable audio capture from a local device and redirect it to a remote session.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure audio and video redirection over the Remote Desktop Protocol](redirection-configure-audio-video.md#configure-audio-capture-redirection).
+
+### `audiomode`
+
+- **Syntax**: `audiomode:i:<value>`
+- **Description**: Determines whether the local or remote machine plays audio.
+- **Supported values**:
+ - `0`: Play sounds on the local device.
+ - `1`: Play sounds in a remote session.
+ - `2`: Don't play sounds.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure audio and video redirection over the Remote Desktop Protocol](redirection-configure-audio-video.md#configure-audio-output-redirection).
+
+### `camerastoredirect`
+
+- **Syntax**: `camerastoredirect:s:<value>`
+- **Description**: Configures which cameras to redirect. This setting uses a semicolon-delimited list of `KSCATEGORY_VIDEO_CAMERA` interfaces of cameras enabled for redirection.
+- **Supported values**:
+ - `*`: Redirect all cameras.
+ - `\\?\usb#vid_0bda&pid_58b0&mi`: Specifies a list of cameras by device instance path, such as this example.
+ - `-`: Exclude a specific camera by prepending `-` to its symbolic link string.
+- **Default value**: None.
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure camera, webcam, and video capture redirection over the Remote Desktop Protocol](redirection-configure-camera-webcam-video-capture.md).
+
+### `devicestoredirect`
+
+- **Syntax**: `devicestoredirect:s:<value>`
+- **Description**: Determines which peripherals that use the Media Transfer Protocol (MTP) or Picture Transfer Protocol (PTP), such as a digital camera, are redirected from a local Windows device to a remote session.
+- **Supported values**:
+ - `*`: Redirect all supported devices, including ones that are connected later.
+ - `\\?\usb#vid_0bda&pid_58b0&mi`: Specifies a list of MTP or PTP peripherals by device instance path, such as this example.
+ - `DynamicDevices`: Redirect all supported devices that are connected later.
+- **Default value**: `*`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure Media Transfer Protocol and Picture Transfer Protocol redirection on Windows over the Remote Desktop Protocol](redirection-configure-plug-play-mtp-ptp.md).
+
+### `drivestoredirect`
+
+- **Syntax**: `drivestoredirect:s:<value>`
+- **Description**: Determines which fixed, removable, and network drives on the local device will be redirected and available in a remote session.
+- **Supported values**:
+ - *Empty*: Don't redirect any drives.
+ - `*`: Redirect all drives, including drives that are connected later.
+ - `DynamicDrives`: Redirect any drives that are connected later.
+ - `drivestoredirect:s:C:;E:;`: Redirect the specified drive letters for one or more drives, such as this example.
+- **Default value**: `*`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure fixed, removable, and network drive redirection over the Remote Desktop Protocol](redirection-configure-drives-storage.md).
+
+### `encode redirected video capture`
+
+- **Syntax**: `encode redirected video capture:i:<value>`
+- **Description**: Enables or disables encoding of redirected video.
+- **Supported values**:
+ - `0`: Disable encoding of redirected video.
+ - `1`: Enable encoding of redirected video.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure camera, webcam, and video capture redirection over the Remote Desktop Protocol](redirection-configure-camera-webcam-video-capture.md).
+
+### `keyboardhook`
+
+- **Syntax**: `keyboardhook:i:<value>`
+- **Description**: Determines whether Windows key combinations (<kbd>Windows</kbd>, <kbd>Alt</kbd>+<kbd>Tab</kbd>) are applied to a remote session.
+- **Supported values**:
+ - `0`: Windows key combinations are applied on the local device.
+ - `1`: (Desktop sessions only) Windows key combinations are applied on the remote computer when in focus.
+ - `2`: (Desktop sessions only) Windows key combinations are applied on the remote computer in full screen mode only.
+ - `3`: (RemoteApp sessions only) Windows key combinations are applied on the RemoteApp when in focus. We recommend you use this value only when publishing the Remote Desktop Connection app (`mstsc.exe`) from the host pool on Azure Virtual Desktop. This value is only supported when using the [Windows client](/azure/virtual-desktop/users/connect-windows).
+- **Default value**: `2`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `redirectclipboard`
+
+- **Syntax**: `redirectclipboard:i:<value>`
+- **Description**: Determines whether to redirect the clipboard.
+- **Supported values**:
+ - `0`: Clipboard on local device isn't available in remote session.
+ - `1`: Clipboard on local device is available in remote session.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure clipboard redirection over the Remote Desktop Protocol](redirection-configure-clipboard.md).
+
+### `redirectcomports`
+
+- **Syntax**: `redirectcomports:i:<value>`
+- **Description**: Determines whether serial or COM ports on the local device are redirected to a remote session.
+- **Supported values**:
+ - `0`: Serial or COM ports on the local device aren't available in a remote session.
+ - `1`: Serial or COM ports on the local device are available in a remote session.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure serial or COM port redirection over the Remote Desktop Protocol](redirection-configure-serial-com-ports.md).
+
+### `redirected video capture encoding quality`
+
+- **Syntax**: `redirected video capture encoding quality:i:<value>`
+- **Description**: Controls the quality of encoded video.
+- **Supported values**:
+ - `0`: High compression video. Quality may suffer when there's a lot of motion.
+ - `1`: Medium compression.
+ - `2`: Low compression video with high picture quality.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure camera, webcam, and video capture redirection over the Remote Desktop Protocol](redirection-configure-camera-webcam-video-capture.md).
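For example, when picture quality matters more than bandwidth for a redirected camera, you might pair this property with [`encode redirected video capture`](#encode-redirected-video-capture). A sketch that appends both to the placeholder `.rdp` file from the earlier example:

```powershell
# Sketch: redirect all cameras and prefer low-compression, high-quality
# encoding for the redirected video capture stream.
Add-Content -Path "$env:TEMP\pc1.rdp" -Value @'
camerastoredirect:s:*
encode redirected video capture:i:1
redirected video capture encoding quality:i:2
'@
```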
+
+### `redirectlocation`
+
+- **Syntax**: `redirectlocation:i:<value>`
+- **Description**: Determines whether the location of the local device is redirected to a remote session.
+- **Supported values**:
+ - `0`: A remote session uses the location of the remote computer or virtual machine.
+ - `1`: A remote session uses the location of the local device.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure location redirection over the Remote Desktop Protocol](redirection-configure-location.md).
+
+### `redirectprinters`
+
+- **Syntax**: `redirectprinters:i:<value>`
+- **Description**: Determines whether printers available on the local device are redirected to a remote session.
+- **Supported values**:
+ - `0`: The printers on the local device aren't redirected to a remote session.
+ - `1`: The printers on the local device are redirected to a remote session.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure printer redirection over the Remote Desktop Protocol](redirection-configure-printers.md).
+
+### `redirectsmartcards`
+
+- **Syntax**: `redirectsmartcards:i:<value>`
+- **Description**: Determines whether smart card devices on the local device will be redirected and available in a remote session.
+- **Supported values**:
+ - `0`: Smart cards on the local device aren't redirected to a remote session.
+ - `1`: Smart cards on the local device are redirected to a remote session.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure smart card redirection over the Remote Desktop Protocol](redirection-configure-smart-cards.md).
+
+### `redirectwebauthn`
+
+- **Syntax**: `redirectwebauthn:i:<value>`
+- **Description**: Determines whether WebAuthn requests from a remote session are redirected to the local device, allowing the use of local authenticators (such as Windows Hello for Business and security keys).
+- **Supported values**:
+ - `0`: WebAuthn requests from a remote session aren't sent to the local device for authentication and must be completed in the remote session.
+ - `1`: WebAuthn requests from a remote session are sent to the local device for authentication.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure WebAuthn redirection over the Remote Desktop Protocol](redirection-configure-webauthn.md).
+
+### `usbdevicestoredirect`
+
+- **Syntax**: `usbdevicestoredirect:s:<value>`
+- **Description**: Determines which supported USB devices on the client computer are redirected using opaque low-level redirection to a remote session.
+- **Supported values**:
+ - `*`: Redirect all USB devices that aren't already redirected by high-level redirection.
+ - `{Device Setup Class GUID}`: Redirect all devices that are members of the specified device setup class.
+ - `USBInstanceID`: Redirect a specific USB device identified by the instance ID.
+- **Default value**: `*`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+To learn how to use this property, see [Configure USB redirection on Windows over the Remote Desktop Protocol](redirection-configure-usb.md).
+
+## Display settings
+
+Here are the RDP properties that you can use to configure display settings.
+
+### `desktop size id`
+
+- **Syntax**: `desktop size id:i:<value>`
+- **Description**: Specifies the dimensions of a remote session desktop from a set of predefined options. This setting is overridden if [`desktopheight`](#desktopheight) and [`desktopwidth`](#desktopwidth) are specified.
+- **Supported values**:
+ - `0`: 640×480
+ - `1`: 800×600
+ - `2`: 1024×768
+ - `3`: 1280×1024
+ - `4`: 1600×1200
+- **Default value**: None. Match the local device.
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `desktopheight`
+
+- **Syntax**: `desktopheight:i:<value>`
+- **Description**: Specifies the resolution height (in pixels) of a remote session.
+- **Supported values**:
+ - Numerical value between `200` and `8192`.
+- **Default value**: None. Match the local device.
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `desktopwidth`
+
+- **Syntax**: `desktopwidth:i:<value>`
+- **Description**: Specifies the resolution width (in pixels) of a remote session.
+- **Supported values**:
+ - Numerical value between `200` and `8192`.
+- **Default value**: None. Match the local device.
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `dynamic resolution`
+
+- **Syntax**: `dynamic resolution:i:<value>`
+- **Description**: Determines whether the resolution of a remote session is automatically updated when the local window is resized.
+- **Supported values**:
+ - `0`: Session resolution remains static during the session.
+ - `1`: Session resolution updates as the local window resizes.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `maximizetocurrentdisplays`
+
+- **Syntax**: `maximizetocurrentdisplays:i:<value>`
+- **Description**: Determines which displays a remote session uses for full screen when maximizing. Requires [`use multimon`](#use-multimon) set to `1`. Only available on Windows App for Windows and the Remote Desktop app for Windows.
+- **Supported values**:
+ - `0`: Session is full screen on the displays initially selected when maximizing.
+ - `1`: Session dynamically goes full screen on the displays the session window spans when maximizing.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `screen mode id`
+
+- **Syntax**: `screen mode id:i:<value>`
+- **Description**: Determines whether a remote session window appears full screen when you launch the connection.
+- **Supported values**:
+ - `1`: A remote session appears in a window.
+ - `2`: A remote session appears full screen.
+- **Default value**: `2`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `selectedmonitors`
+
+- **Syntax**: `selectedmonitors:s:<value>`
+- **Description**: Specifies which local displays to use in a remote session. The selected displays must be contiguous. Requires [`use multimon`](#use-multimon) set to `1`. Only available on Windows App for Windows, the Remote Desktop app for Windows, and the inbox Remote Desktop Connection app on Windows.
+- **Supported values**:
+ - A comma separated list of machine-specific display IDs. You can retrieve available IDs by running `mstsc.exe /l` from the command line. The first ID listed is set as the primary display in a remote session.
+- **Default value**: None. All displays are used.
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
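As a sketch, you can list the display IDs on the local device and then pin a session to two contiguous displays; the IDs `0,1` below are illustrative, not guaranteed for your device:

```powershell
# Sketch: list the machine-specific display IDs, then use two of them
# (the first ID listed becomes the primary display in the session).
mstsc.exe /l

Add-Content -Path "$env:TEMP\pc1.rdp" -Value @'
use multimon:i:1
selectedmonitors:s:0,1
'@
```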
+
+### `singlemoninwindowedmode`
+
+- **Syntax**: `singlemoninwindowedmode:i:<value>`
+- **Description**: Determines whether a multi-display remote session automatically switches to a single display when exiting full screen. Requires [`use multimon`](#use-multimon) set to `1`. Only available on Windows App for Windows and the Remote Desktop app for Windows.
+- **Supported values**:
+ - `0`: A remote session retains all displays when exiting full screen.
+ - `1`: A remote session switches to a single display when exiting full screen.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `smart sizing`
+
+- **Syntax**: `smart sizing:i:<value>`
+- **Description**: Determines whether the local device scales the content of the remote session to fit the window size.
+- **Supported values**:
+ - `0`: The local window content doesn't scale when resized.
+ - `1`: The local window content does scale when resized.
+- **Default value**: `0`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+### `use multimon`
+
+- **Syntax**: `use multimon:i:<value>`
+- **Description**: Determines whether the remote session will use one or multiple displays from the local device.
+- **Supported values**:
+ - `0`: A remote session uses a single display.
+ - `1`: A remote session uses multiple displays.
+- **Default value**: `1`
+- **Applies to**:
+ - Azure Virtual Desktop
+ - Remote Desktop Services
+ - Remote PC connections
+
+## RemoteApp
+
+Here are the RDP properties that you can use to configure RemoteApp behavior for Remote Desktop Services.
+
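As a sketch of how these properties fit together, a Remote Desktop Services `.rdp` file that launches Excel as a RemoteApp might contain lines like the following; the alias and file path are placeholders:

```powershell
# Sketch: RemoteApp properties for an RDS .rdp file. The alias 'EXCEL'
# and the document path are placeholder values.
Add-Content -Path "$env:TEMP\remoteapp.rdp" -Value @'
remoteapplicationmode:i:1
remoteapplicationprogram:s:EXCEL
remoteapplicationname:s:Microsoft Excel
remoteapplicationfile:s:C:\Reports\Q1.xlsx
'@
```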
+### `remoteapplicationcmdline`
+
+- **Syntax**: `remoteapplicationcmdline:s:<value>`
+- **Description**: Optional command line parameters for the RemoteApp.
+- **Supported values**:
+ - Valid command-line parameters for the application.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+
+### `remoteapplicationexpandcmdline`
+
+- **Syntax**: `remoteapplicationexpandcmdline:i:<value>`
+- **Description**: Determines whether environment variables contained in the RemoteApp command line parameters should be expanded locally or remotely.
+- **Supported values**:
+ - `0`: Environment variables should be expanded to the values of the local device.
+ - `1`: Environment variables should be expanded to the values of the remote session.
+- **Default value**: `1`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `remoteapplicationexpandworkingdir`
+
+- **Syntax**: `remoteapplicationexpandworkingdir:i:<value>`
+- **Description**: Determines whether environment variables contained in the RemoteApp working directory parameter should be expanded locally or remotely.
+- **Supported values**:
+ - `0`: Environment variables should be expanded to the values of the local device.
+ - `1`: Environment variables should be expanded to the values of the remote session.<br />The RemoteApp working directory is specified through the shell working directory parameter.
+- **Default value**: `1`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `remoteapplicationfile`
+
+- **Syntax**: `remoteapplicationfile:s:<value>`
+- **Description**: Specifies a file to be opened in the remote session by the RemoteApp. For local files to be opened, you must also enable [drive redirection](#drivestoredirect) for the source drive.
+- **Supported values**:
+ - A valid file path in the remote session.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+
+### `remoteapplicationicon`
+
+- **Syntax**: `remoteapplicationicon:s:<value>`
+- **Description**: Specifies the icon file to be displayed in Windows App or the Remote Desktop app while launching a RemoteApp. If no file name is specified, the client will use the standard Remote Desktop icon. Only `.ico` files are supported.
+- **Supported values**:
+ - A valid file path to an `.ico` file.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+
+### `remoteapplicationmode`
+
+- **Syntax**: `remoteapplicationmode:i:<value>`
+- **Description**: Determines whether a connection is started as a RemoteApp session.
+- **Supported values**:
+ - `0`: Don't launch a RemoteApp session.
+ - `1`: Launch a RemoteApp session.
+- **Default value**: `1`
+- **Applies to**:
+ - Remote Desktop Services
+
+### `remoteapplicationname`
+
+- **Syntax**: `remoteapplicationname:s:<value>`
+- **Description**: Specifies the name of the RemoteApp in Windows App or the Remote Desktop app while starting the RemoteApp.
+- **Supported values**:
+ - A valid application display name, for example `Microsoft Excel`.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
+
+### `remoteapplicationprogram`
+
+- **Syntax**: `remoteapplicationprogram:s:<value>`
+- **Description**: Specifies the alias or executable name of the RemoteApp.
+- **Supported values**:
+ - A valid application name or alias, for example `EXCEL`.
+- **Default value**: None.
+- **Applies to**:
+ - Remote Desktop Services
virtual-desktop Redirection Configure Audio Video https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-audio-video.md
+
+ Title: Configure audio and video redirection over the Remote Desktop Protocol
+description: Learn how to redirect audio peripherals, such as microphones and speakers, between a local device and a remote session over the Remote Desktop Protocol. Applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
+
+ Last updated : 04/24/2024
+
+
+# Configure audio and video redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of audio peripherals, such as microphones and speakers, between a local device and a remote session over the Remote Desktop Protocol (RDP).
+
+For Azure Virtual Desktop, we recommend you enable audio and video redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for audio and video peripherals. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the session host, host pool RDP properties, or local device.
+>
+> - [Microsoft Teams](teams-on-avd.md) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the Cloud PC or local device.
+>
+> - [Microsoft Teams](/windows-365/enterprise/teams-on-cloud-pc) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the dev box or local device.
+>
+> - [Microsoft Teams](/windows-365/enterprise/teams-on-cloud-pc) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+## Prerequisites
+
+Before you can configure audio and video redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool as a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- An audio device you can use to test the redirection configuration.
+
+- To configure Microsoft Intune, you need:
+
+ - Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Configure audio output redirection
+
+Audio output redirection controls where audio signals from a remote session are played. Whether a remote session can play audio, and where, is governed by both the session host configuration and a host pool RDP property, subject to a priority order.
+
+Session host configuration, set using Microsoft Intune or Group Policy, controls whether audio and video playback redirection is enabled and limits the audio playback quality. A host pool RDP property controls whether to play audio, and the audio output location, over the Remote Desktop Protocol.
+
+The default configuration is:
+
+- **Windows operating system**: Audio and video playback redirection isn't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: Play sounds on the local computer.
+- **Resultant default behavior**: Audio is redirected to the local computer.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable audio and video playback redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Audio output redirection controls where audio signals from the remote session are played. Configuration of a Cloud PC governs the ability to play audio from a remote session and controls the audio playback quality. You can configure audio output redirection using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Audio and video playback redirection isn't blocked.
+- **Windows 365**: Audio and video playback redirection is enabled.
+- **Resultant default behavior**: Audio is redirected to the local computer.
++
+Audio output redirection controls where audio signals from the remote session are played. Configuration of a dev box governs the ability to play audio from a remote session and controls the audio playback quality. You can configure audio output redirection using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Audio and video playback redirection isn't blocked.
+- **Microsoft Dev Box**: Audio and video playback redirection is enabled.
+- **Resultant default behavior**: Audio is redirected to the local computer.
++
+### Configure the audio output location using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *audio output location* controls whether audio from the remote session is played in the remote session, redirected to the local device, or disabled. The corresponding RDP property is `audiomode:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
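+If you manage host pools with Azure PowerShell rather than the portal, you can set the same RDP property from a script. The following is a minimal sketch, assuming the Az.DesktopVirtualization module is installed, you're signed in with `Connect-AzAccount`, and `rg-avd` and `hp-avd` are placeholders for your resource group and host pool names. Note that `-CustomRdpProperty` replaces the host pool's entire custom RDP property string, so include any other properties you want to keep.
+
+```powershell
+# Read the current custom RDP properties first so you don't overwrite other settings.
+$hostPool = Get-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-avd"
+$hostPool.CustomRdpProperty
+
+# audiomode:i:0 plays sounds on the local computer (the default behavior).
+# audiomode:i:1 plays sounds on the remote computer; audiomode:i:2 doesn't play sounds.
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-avd" `
+    -CustomRdpProperty "audiomode:i:0;"
+```
+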
+To configure the audio output location using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Audio output location**, select the drop-down list, then select one of the following options:
+
+ - **Play sounds on the local computer** (*default*)
+ - **Play sounds on the remote computer**
+ - **Do not play sounds**
+ - **Not configured**
+
+1. Select **Save**.
+
+1. To test the configuration, connect to a remote session and play audio. Verify that you can hear audio as expected. Make sure you're not using Microsoft Teams or a web page that's redirected with [multimedia redirection](multimedia-redirection-intro.md) for this test.
+
+### Configure audio and video playback redirection, and limit audio playback quality using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable audio and video playback redirection, and limit audio playback quality using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Allow audio and video playback redirection**, and optionally **Limit audio playback quality**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Allow audio and video playback redirection**, depending on your requirements:
+
+ - To allow audio and video playback redirection, toggle the switch to **Enabled**, then select **OK**.
+
+ - To disable audio and video playback redirection, toggle the switch to **Disabled**, then select **OK**.
+
+1. If you selected **Limit audio playback quality**, select the audio quality from the drop-down list.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session and play audio. Verify that you can hear audio as expected. Make sure you're not using Microsoft Teams or a web page that's redirected with [multimedia redirection](multimedia-redirection-intro.md) for this test.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable audio and video playback redirection, and limit audio playback quality using Group Policy:
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Allow audio and video playback redirection** to open it.
+
+ - To allow audio and video playback redirection, select **Enabled** or **Not configured**, then select **OK**.
+
+ - To disable audio and video playback redirection, select **Disabled**, then select **OK**.
+
+1. If you want to limit audio playback quality, double-click the policy setting **Limit audio playback quality** to open it.
+
+1. Select **Enabled**, then select the audio quality from the drop-down list, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session and play audio. Verify that you can hear audio as expected. Make sure you're not using Microsoft Teams or a web page that's redirected with [multimedia redirection](multimedia-redirection-intro.md) for this test.
+++
+## Configure audio capture redirection
+
+Audio recording redirection controls whether peripherals such as microphones are accessible in the remote session. The configuration of a session host and an RDP property on the host pool together govern the ability to record audio from a local device in a remote session, subject to a priority order.
+
+Session host configuration controls whether audio recording redirection is enabled and is set using Microsoft Intune or Group Policy. A host pool RDP property controls whether microphones are redirected over the Remote Desktop Protocol.
+
+The default configuration is:
+
+- **Windows operating system**: Audio recording redirection isn't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: Not configured.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable audio recording redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Audio recording redirection controls whether peripherals such as microphones are accessible in the remote session. Configuration of a Cloud PC governs the ability to record audio from a local device in a remote session. You can configure audio recording redirection using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Audio recording redirection isn't blocked.
+- **Windows 365**: Audio recording redirection is enabled.
++
+Audio recording redirection controls whether peripherals such as microphones are accessible in the remote session. Configuration of a dev box governs the ability to record audio from a local device in a remote session. You can configure audio recording redirection using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Audio recording redirection isn't blocked.
+- **Microsoft Dev Box**: Audio recording redirection is enabled.
++
+### Configure audio input redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *microphone redirection* controls whether to redirect audio input from a local device to an audio application in a remote session. The corresponding RDP property is `audiocapturemode:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
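+As with the audio output location, you can set this RDP property with Azure PowerShell. The following sketch uses the same placeholder names and assumptions as the earlier audio output example, and again replaces the host pool's entire custom RDP property string.
+
+```powershell
+# audiocapturemode:i:1 enables audio capture from the local device and redirects it
+# to an audio application in the remote session; audiocapturemode:i:0 disables capture.
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-avd" `
+    -CustomRdpProperty "audiocapturemode:i:1;"
+```
+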
+To configure audio input redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Microphone redirection**, select the drop-down list, then select one of the following options:
+
+ - **Disable audio capture from the local device**
+ - **Enable audio capture from the local device and redirection to an audio application in the remote session**
+ - **Not configured** (*default*)
+
+1. Select **Save**.
+
+1. To test the configuration, connect to a remote session and verify that the audio input redirection is as expected, such as recording audio from a microphone in an application in the remote session.
+
+### Configure audio input redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable audio input redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Allow audio recording redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Allow audio recording redirection** to **Enabled** or **Disabled**, depending on your requirements. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session and verify that the audio input redirection is as expected, such as recording audio from a microphone in an application in the remote session.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable audio input redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Allow audio recording redirection** to open it.
+
+ - To allow audio recording redirection, select **Enabled** or **Not configured**, then select **OK**.
+
+ - To disable audio recording redirection, select **Disabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session and verify that the audio input redirection is as expected, such as recording audio from a microphone in an application in the remote session.
+
+### Optional: Disable audio input redirection on a local device
+
+You can disable audio input redirection on a local device to prevent audio input from being redirected from a local device to a remote session. This method is useful if you want to enable audio input redirection for most users, but disable it for specific devices.
+
+For iOS/iPadOS and Android devices, you can disable audio input redirection using Intune. For more information, see [Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune](client-device-redirection-intune.md).
+++
+## Related content
+
virtual-desktop Redirection Configure Camera Webcam Video Capture https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-camera-webcam-video-capture.md
+
+ Title: Configure camera, webcam, and video capture redirection over the Remote Desktop Protocol
+description: Learn how to redirect camera, webcam, and video capture peripherals, and also video encoding and quality, from a local device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
++ Last updated : 04/24/2024++
+# Configure camera, webcam, and video capture redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of cameras, webcams, and video capture peripherals, and also video encoding and quality, from a local device to a remote session over the Remote Desktop Protocol (RDP).
+
+For Azure Virtual Desktop, we recommend you enable camera, webcam, and video capture redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for camera, webcam, and video capture peripherals. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the session host, host pool RDP properties, or local device.
+>
+> - [Microsoft Teams](teams-on-avd.md) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the Cloud PC or local device.
+>
+> - [Microsoft Teams](/windows-365/enterprise/teams-on-cloud-pc) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the dev box or local device.
+>
+> - [Microsoft Teams](/windows-365/enterprise/teams-on-cloud-pc) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+## Prerequisites
+
+Before you can configure camera, webcam, and video capture redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool, at a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- A camera, webcam, or video capture device you can use to test the redirection configuration.
+
+- To configure Microsoft Intune, you need:
+
+  - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Configure camera, webcam, and video capture
+
+The configuration of a session host and an RDP property on the host pool together govern the ability to use cameras, webcams, and video capture peripherals in a remote session, subject to a priority order. The session host configuration controls whether cameras, webcams, and video capture peripherals can be redirected to a remote session, and is set using Microsoft Intune or Group Policy. A host pool RDP property controls whether cameras, webcams, and video capture peripherals can be redirected to a remote session over the Remote Desktop Protocol, and whether to redirect all applicable devices or only those specified by Vendor ID (VID) and Product ID (PID).
+
+The default configuration is:
+
+- **Windows operating system**: Camera, webcam, and video capture peripheral redirection is allowed.
+- **Azure Virtual Desktop host pool RDP properties**: Not configured.
+- **Resultant default behavior**: Camera, webcam, and video capture peripherals are redirected to the remote session.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable camera, webcam, and video capture peripheral redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
+
+Configuration of a Cloud PC governs the ability to use cameras, webcams, and video capture peripherals in a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Camera, webcam, and video capture peripheral redirection isn't blocked.
+- **Windows 365**: Camera, webcam, and video capture peripheral redirection is enabled.
+- **Resultant default behavior**: Camera, webcam, and video capture peripherals are redirected to the remote session.
++
+Configuration of a dev box governs the ability to use cameras, webcams, and video capture peripherals in a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Camera, webcam, and video capture peripheral redirection isn't blocked.
+- **Microsoft Dev Box**: Camera, webcam, and video capture peripheral redirection is enabled.
+- **Resultant default behavior**: Camera, webcam, and video capture peripherals are redirected to the remote session.
++
+### Configure camera, webcam and video capture redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *camera redirection* controls whether cameras, webcams, and video capture peripherals are redirected from a local device to a remote session, and optionally which devices. The corresponding RDP property is `camerastoredirect:s:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
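+You can also set this RDP property with Azure PowerShell, as in the following sketch. It assumes the Az.DesktopVirtualization module and placeholder resource group and host pool names, and replaces the host pool's entire custom RDP property string.
+
+```powershell
+# camerastoredirect:s:* redirects all cameras, webcams, and video capture peripherals.
+# To redirect only specific devices, replace * with a semicolon-delimited, escaped list
+# of KSCATEGORY_VIDEO_CAMERA interfaces, as described in the portal steps that follow.
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-avd" `
+    -CustomRdpProperty "camerastoredirect:s:*;"
+```
+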
+To configure camera, webcam and video capture redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Camera redirection**, select the drop-down list, then select one of the following options:
+
+ - **Don't redirect any cameras**
+ - **Redirect cameras**
+ - **Manually enter list of cameras**
+ - **Not configured** (*default*)
+
+   1. If you select **Manually enter list of cameras**, enter the Vendor ID (VID) and Product ID (PID) of the cameras you want to redirect as a semicolon-delimited list of `KSCATEGORY_VIDEO_CAMERA` interfaces. The characters `\`, `:`, and `;` must be escaped with a backslash character `\`, and the value can't end with a backslash. For example, the value `\?\usb#vid_0bda&pid_58b0&mi` needs to be entered as `\\?\\usb#vid_0bda&pid_58b0&mi`. You can find the VID and PID in the *device instance path* in Device Manager on the local device. For more information, see [Device instance path](redirection-remote-desktop-protocol.md#controlling-opaque-low-level-usb-redirection).
+
+1. Select **Save**.
+
+1. To test the configuration, connect to a remote session with a camera, webcam, or video capture peripheral and use it with a supported application for the peripheral, such as Microsoft Teams.
+
+### Configure video capture redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable video capture redirection, which includes cameras and webcams, using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Do not allow video capture redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow video capture redirection** to **Enabled** or **Disabled**, depending on your requirements:
+
+ - To allow video capture redirection, toggle the switch to **Disabled**, then select **OK**.
+
+ - To disable video capture redirection, toggle the switch to **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session with a camera, webcam, or video capture peripheral and use it with a supported application for the peripheral. Don't use Microsoft Teams to test as it uses its own redirection optimizations that are independent of the Remote Desktop Protocol.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable video capture redirection, which includes cameras and webcams, using Group Policy:
+
+1. Open the **Group Policy Management** console on device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow video capture redirection** to open it.
+
+ - To allow video capture redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable video capture redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session with a camera, webcam, or video capture peripheral and use it with a supported application for the peripheral. Don't use Microsoft Teams to test as it uses its own redirection optimizations that are independent of the Remote Desktop Protocol.
+
+### Optional: Disable camera redirection on a local device
+
+You can disable camera redirection on a local device to prevent a camera from being redirected from a local device to a remote session. This method is useful if you want to enable camera redirection for most users, but disable it for specific devices.
+
+For iOS/iPadOS and Android devices, you can disable camera redirection using Intune. For more information, see [Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune](client-device-redirection-intune.md).
+++
+## Configure video encoding redirection
+
+Video encoding redirection controls whether to encode video in the remote session or redirect it to the local device, and is configured with a host pool RDP property. The corresponding RDP property is `encode redirected video capture:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
+> [!TIP]
+> Redirect video encoding is different from [multimedia redirection](multimedia-redirection-intro.md), which redirects video playback and calls to your local device for faster processing and rendering.
+
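+You can set this RDP property, together with the encoded video quality property covered in the next section, using Azure PowerShell. The following is a minimal sketch with placeholder resource group and host pool names; the quality values are inferred from the order of the portal options, so verify them against [Supported RDP properties](rdp-properties.md#device-redirection) before relying on them.
+
+```powershell
+# encode redirected video capture:i:1 enables encoding of redirected video; use 0 to disable it.
+# redirected video capture encoding quality:i:<value> sets the compression level;
+# 0, 1, and 2 appear to map to high, medium, and low compression respectively.
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-avd" `
+    -CustomRdpProperty "encode redirected video capture:i:1;redirected video capture encoding quality:i:0;"
+```
+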
+To configure redirect video encoding:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Redirect video encoding**, select the drop-down list, then select one of the following options:
+
+ - **Disable encoding of redirected video**
+ - **Enable encoding of redirected video**
+ - **Not configured** (*default*)
+
+1. Select **Save**.
++
+## Configure encoded video quality
+
+Encoded video quality controls the compression level of encoded video, with a choice of high, medium, or low compression, and is configured with a host pool RDP property. You also need to [redirect video encoding](#configure-video-encoding-redirection) to the local device. The corresponding RDP property is `redirected video capture encoding quality:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
+To configure encoded video quality:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Encoded video quality**, select the drop-down list, then select one of the following options:
+
+ - **High compression video. Quality may suffer when there is a lot of motion**
+ - **Medium compression**
+ - **Low compression video with high picture quality**
+ - **Not configured** (*default*)
+
+1. Select **Save**.
++
+## Related content
+
virtual-desktop Redirection Configure Clipboard https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-clipboard.md
+
+ Title: Configure clipboard redirection over the Remote Desktop Protocol
+description: Learn how to redirect the clipboard between a local device and a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
++ Last updated : 04/29/2024++
+# Configure clipboard redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of the clipboard between a local device and a remote session over the Remote Desktop Protocol (RDP).
+
+For Azure Virtual Desktop, we recommend you enable clipboard redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties. Additionally, in Windows Insider Preview, you can configure whether users can use the clipboard from session host to client, or client to session host, and the types of data that can be copied. For more information, see [Configure the clipboard transfer direction and types of data that can be copied](clipboard-transfer-direction-data-types.md).
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy. Additionally, in Windows Insider Preview, you can configure whether users can use the clipboard from Cloud PC to client, or client to Cloud PC, and the types of data that can be copied. For more information, see [Configure the clipboard transfer direction and types of data that can be copied](clipboard-transfer-direction-data-types.md).
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy. Additionally, in Windows Insider Preview, you can configure whether users can use the clipboard from dev box to client, or client to dev box, and the types of data that can be copied. For more information, see [Configure the clipboard transfer direction and types of data that can be copied](clipboard-transfer-direction-data-types.md).
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for the clipboard. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+## Prerequisites
+
+Before you can configure clipboard redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool, at a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- To configure Microsoft Intune, you need:
+
+  - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Configure clipboard redirection
+
+Configuration of a session host using Microsoft Intune or Group Policy, or setting an RDP property on a host pool, governs the ability to redirect the clipboard between the remote session and the local device, subject to a priority order.
+
+The default configuration is:
+
+- **Windows operating system**: Clipboard redirection isn't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: The clipboard is available between the remote session and the local device.
+- **Resultant default behavior**: The clipboard is redirected in both directions between the remote session and the local device.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable clipboard redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Configuration of a Cloud PC governs the ability to redirect the clipboard between the remote session and the local device, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Clipboard redirection isn't blocked.
+- **Windows 365**: Clipboard redirection is enabled.
+- **Resultant default behavior**: The clipboard is redirected in both directions between the remote session and the local device.
++
+Configuration of a dev box governs the ability to redirect the clipboard between the remote session and the local device, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Clipboard redirection isn't blocked.
+- **Microsoft Dev Box**: Clipboard redirection is enabled.
+- **Resultant default behavior**: The clipboard is redirected in both directions between the remote session and the local device.
++
+### Configure clipboard redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *clipboard redirection* controls whether to redirect the clipboard between the remote session and the local device. The corresponding RDP property is `redirectclipboard:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
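+To set this RDP property with Azure PowerShell instead, you can use a sketch like the following, again with placeholder resource group and host pool names, and with the caveat that `-CustomRdpProperty` replaces the host pool's entire custom RDP property string.
+
+```powershell
+# redirectclipboard:i:1 makes the clipboard on the local computer available in the
+# remote session (the default); redirectclipboard:i:0 makes it unavailable.
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-avd" `
+    -CustomRdpProperty "redirectclipboard:i:1;"
+```
+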
+To configure clipboard redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Clipboard redirection**, select the drop-down list, then select one of the following options:
+
+ - **Clipboard on local computer isn't available in remote session**
+ - **Clipboard on local computer is available in remote session** (*default*)
+ - **Not configured**
+
+1. Select **Save**.
+
+1. To test the configuration, connect to a remote session and copy and paste some text between the local device and remote session. Verify that the text is as expected.
+
+### Configure clipboard redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To enable or disable clipboard redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Do not allow Clipboard redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow Clipboard redirection** to **Enabled** or **Disabled**, depending on your requirements:
+
+ - To allow clipboard redirection, toggle the switch to **Disabled**, then select **OK**.
+
+ - To disable clipboard redirection, toggle the switch to **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session and copy and paste some text between the local device and remote session. Verify that the text is as expected.
+
+# [Group Policy](#tab/group-policy)
+
+To enable or disable clipboard redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow Clipboard redirection** to open it.
+
+ - To enable clipboard redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable clipboard redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+
+1. To test the configuration, connect to a remote session and copy and paste some text between the local device and remote session. Verify that the text is as expected.
+++
+> [!IMPORTANT]
+> If you disable [drive redirection](redirection-configure-drives-storage.md) using Intune or Group Policy, it also prevents files from being transferred between the local device and the remote session using the clipboard. Other content, such as text or images, isn't affected.
+
+### Optional: Disable clipboard redirection on a local device
+
+You can disable clipboard redirection on a local device to prevent the clipboard from being redirected between the local device and a remote session. This method is useful if you want to enable clipboard redirection for most users, but disable it for specific devices.
+
+On a local Windows device, you can disable clipboard redirection by configuring the following registry key and value:
+
+- **Key**: `HKEY_LOCAL_MACHINE\Software\Microsoft\Terminal Server Client`
+- **Type**: `REG_DWORD`
+- **Value name**: `DisableClipboardRedirection`
+- **Value data**: `1`
+
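+For example, you can set this value from an elevated PowerShell session on the local device. This is a minimal sketch using the key and value from the list above:
+
+```powershell
+# Create the key if it doesn't already exist, then set the value that disables
+# clipboard redirection for this local device.
+$key = "HKLM:\Software\Microsoft\Terminal Server Client"
+New-Item -Path $key -Force | Out-Null
+Set-ItemProperty -Path $key -Name "DisableClipboardRedirection" -Value 1 -Type DWord
+```
+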
+For iOS/iPadOS and Android devices, you can disable clipboard redirection using Intune. For more information, see [Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune](client-device-redirection-intune.md).
+
+## Related content
+
virtual-desktop Redirection Configure Drives Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-drives-storage.md
+
+ Title: Configure fixed, removable, and network drive redirection over the Remote Desktop Protocol
+description: Learn how to redirect fixed, removable, and network storage drives from a local device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
++ Last updated : 04/29/2024++
+# Configure fixed, removable, and network drive redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of fixed, removable, and network drives from a local device to a remote session over the Remote Desktop Protocol (RDP).
+
+For Azure Virtual Desktop, we recommend you enable drive redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for drives and storage. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+## Prerequisites
+
+Before you can configure drive redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool, at a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- Each drive you want to redirect must have a drive letter assigned on the local device.
+
+- If you want to test drive redirection with a removable drive, you need a removable drive connected to the local device.
+
+- To configure Microsoft Intune, you need:
+
+  - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Configure drive redirection
+
+Configuration of a session host using Microsoft Intune or Group Policy, or setting an RDP property on a host pool, governs the ability to redirect drives from a local device to a remote session, subject to a priority order.
+
+The default configuration is:
+
+- **Windows operating system**: Drive and storage redirection isn't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: All drives are redirected from the local device to a remote session, including ones that are connected later.
+- **Resultant default behavior**: All drives are redirected from the local device to a remote session, including ones that are connected later.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable drive and storage redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Configuration of a Cloud PC governs the ability to redirect drives from a local device to a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Drive redirection isn't blocked.
+- **Windows 365**: All drives are redirected from the local device to a remote session, including ones that are connected later.
+- **Resultant default behavior**: All drives are redirected from the local device to a remote session, including ones that are connected later.
++
+Configuration of a dev box governs the ability to redirect drives from a local device to a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Drive and storage redirection isn't blocked.
+- **Microsoft Dev Box**: All drives are redirected from the local device to a remote session, including ones that are connected later.
+- **Resultant default behavior**: All drives are redirected from the local device to a remote session, including ones that are connected later.
++
+### Configure drive redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *drive/storage redirection* controls whether to redirect drives from a local device to a remote session. The corresponding RDP property is `drivestoredirect:s:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
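+You can also set this RDP property with Azure PowerShell, as in this sketch with placeholder resource group and host pool names; remember that `-CustomRdpProperty` replaces the host pool's entire custom RDP property string.
+
+```powershell
+# drivestoredirect:s:* redirects all drives, including drives connected later.
+# To redirect specific drives, use the escaped list format described in the portal
+# steps that follow, for example "drivestoredirect:s:C\:\\\;D\:\\\;" for C: and D:.
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-avd" `
+    -CustomRdpProperty "drivestoredirect:s:*;"
+```
+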
+To configure drive redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Drive/storage redirection**, select the drop-down list, then select one of the following options:
+
+ - **Don't redirect any drives**
+ - **Redirect all disk drives, including ones that are connected later** (*default*)
+ - **Dynamic drives: redirect any drives that are connected later**
+ - **Manually enter drives and labels**
+ - **Not configured**
+
+1. If you select **Manually enter drives and labels**, an extra box appears. Enter the drive letter for each fixed, removable, and network drive you want to redirect, with each drive letter followed by a semicolon. For Azure Virtual Desktop, the characters `\`, `:`, and `;` must be escaped using a backslash character. For example, to redirect drives `C:\` and `D:\` from the local device, enter `C\:\\\;D\:\\\;`.
+
+1. Select **Save**.
+
+1. To test the configuration, make sure the drives you configured to redirect are connected to the local device, then connect to a remote session. Verify that the drives you redirected are available in **File Explorer** or **Disk Management** in the remote session. If you selected **Redirect all disk drives, including ones that are connected later** or **Dynamic drives: redirect any drives that are connected later**, you can connect more drives to the local device after you connect to the remote session and verify they're redirected too.
+
+### Configure drive redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To enable or disable drive redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Do not allow drive redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow drive redirection** to **Enabled** or **Disabled**, depending on your requirements:
+
+ - To allow drive redirection, toggle the switch to **Disabled**, then select **OK**.
+
+ - To disable drive redirection, toggle the switch to **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To enable or disable drive redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow drive redirection** to open it.
+
+ - To enable drive redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable drive redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+++
+> [!IMPORTANT]
+> - Network drives that are disconnected aren't redirected. Once the network drives are reconnected, they're not automatically redirected during the remote session. You need to disconnect and reconnect to the remote session to redirect the network drives.
+>
+> - If you disable drive redirection using Intune or Group Policy, it also prevents files from being transferred between the local device and the remote session using the clipboard. Other content, such as text or images, isn't affected.
+
+## Test drive redirection
+
+To test drive redirection:
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports drive redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. Check the redirected drives available in the remote session. Here are some ways to check:
+
+   1. Open **File Explorer** in the remote session from the Start menu. Select **This PC**, then check that the redirected drives appear in the list. When you redirect drives from a local Windows device, it looks similar to the following image:
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-drives.png" alt-text="A screenshot showing the available drives in the remote session." lightbox="media/redirection-remote-desktop-protocol/redirection-drives.png":::
+
+ 1. Open a PowerShell prompt in the remote session and run the following command:
+
+ ```powershell
+   # Each drive redirected over the Remote Desktop Protocol is registered under
+   # HKEY_CLASSES_ROOT\CLSID with the description
+   # "Drive or folder redirected using Remote Desktop", so collect those keys.
+   $CLSIDs = @()
+   foreach($registryKey in (Get-ChildItem "Registry::HKEY_CLASSES_ROOT\CLSID" -Recurse)){
+       If (($registryKey.GetValueNames() | ForEach-Object {$registryKey.GetValue($_)}) -eq "Drive or folder redirected using Remote Desktop") {
+           $CLSIDs += $registryKey
+       }
+   }
+
+   # The (default) value of each matching key holds the display name of the redirected drive.
+   $drives = @()
+   foreach ($CLSID in $CLSIDs.PSPath) {
+       $drives += (Get-ItemProperty $CLSID)."(default)"
+   }
+
+   Write-Output "These are the local drives redirected to the remote session:`n"
+   $drives
+ ```
+
+ The output is similar to the following output when you redirect drives from a local Windows device:
+
+ ```output
+ These are the local drives redirected to the remote session:
+
+ C on DESKTOP
+ S on DESKTOP
+ ```
+
+### Optional: Disable drive redirection on a local device
+
+You can disable drive redirection on a local device to prevent drives from being redirected to a remote session. This method is useful if you want to enable drive redirection for most users, but disable it for specific devices.
+
+On a local Windows device, you can disable drive redirection by configuring the following registry key and value:
+
+- **Key**: `HKEY_LOCAL_MACHINE\Software\Microsoft\Terminal Server Client`
+- **Type**: `REG_DWORD`
+- **Value name**: `DisableDriveRedirection`
+- **Value data**: `1`
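+
+As a minimal sketch, you can set this value from an elevated PowerShell prompt on the local device; the commands assume the key might not exist yet:
+
+```powershell
+# Create the key if it doesn't exist, then set the value (requires administrator rights)
+$path = "HKLM:\Software\Microsoft\Terminal Server Client"
+New-Item -Path $path -Force | Out-Null
+Set-ItemProperty -Path $path -Name "DisableDriveRedirection" -Value 1 -Type DWord
+```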
+
+For iOS/iPadOS and Android devices, you can disable drive redirection using Intune. For more information, see [Configure client device redirection settings for Windows App and the Remote Desktop app using Microsoft Intune](client-device-redirection-intune.md).
+
+## Related content
+
virtual-desktop Redirection Configure Location https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-location.md
+
+ Title: Configure location redirection over the Remote Desktop Protocol
+description: Learn how to redirect location information from a local device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
+Last updated: 07/09/2024
+# Configure location redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of location information from a local device to a remote session over the Remote Desktop Protocol (RDP). A user's location can be important for some applications, such as mapping and regional services in browsers. Without location redirection, the location reported in a remote session is near the datacenter the user connects to.
+
+For Azure Virtual Desktop, location redirection must be configured at the following points. If any of these components aren't configured correctly, location redirection won't work as expected. You can use Microsoft Intune or Group Policy to configure your session hosts and the local device.
+
+- Session host
+- Host pool RDP property
+- Local device
++
+For Windows 365, location services must be configured on the Cloud PC and the local device. If either of these components isn't configured correctly, location redirection won't work as expected. You can use Microsoft Intune or Group Policy to configure your Cloud PC and the local device. Windows 365 allows location redirection.
+
+For Microsoft Dev Box, location services must be configured on the dev box and the local device. If either of these components isn't configured correctly, location redirection won't work as expected. You can use Microsoft Intune or Group Policy to configure your dev box and the local device. Microsoft Dev Box allows location redirection.
+
+> [!IMPORTANT]
+> Redirected longitude and latitude information is accurate to 1 meter. Horizontal accuracy is currently set at 10 kilometers, so applications that use the horizontal accuracy value might report that a precise location can't be determined.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for location information. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+## Prerequisites
+
+Before you can configure location redirection, you need:
+
+- An existing host pool with session hosts running Windows 11 Enterprise or Windows 11 Enterprise multi-session version 22H2 or later.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool, at a minimum.
+
+- An existing Cloud PC running Windows 11 Enterprise version 22H2 or later.
+
+- An existing dev box running Windows 11 Enterprise, version 22H2 or later.
+
+- To configure Microsoft Intune, you need:
+
+ - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Session host configuration
+
+To configure a session host for location redirection, you need to enable and configure location services. You can do this using Microsoft Intune or Group Policy.
+
+> [!IMPORTANT]
+> If you use a multi-session edition of Windows, when you enable location services on a session host, it's enabled for all users. You can specify which apps can access location information on a per-user basis, depending on your requirements.
++
+## Cloud PC configuration
+
+To configure a Cloud PC for location redirection, you need to enable and configure location services. You can do this using Microsoft Intune or Group Policy.
+
+## Dev box configuration
+
+To configure a dev box for location redirection, you need to enable and configure location services. You can do this using Microsoft Intune or Group Policy.
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To enable location services using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, select **System**. Check the box for **Allow Location**, then close the settings picker.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune-system.png" alt-text="A screenshot showing the Allow Location setting in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune-system.png":::
+
+1. Expand the **System** category, then from the drop-down menu select **Force Location On. All Location Privacy settings are toggled on and grayed out. Users cannot change the settings and all consent permissions will be automatically suppressed**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+1. You need to enable the location setting **Allow location override** for the location to be updated in the remote session. You can do this by configuring a registry value, which is set per user. Users can still change this setting in Windows location settings.
+
+ You can do this by creating a PowerShell script and using it as a [custom script remediation in Intune](/mem/intune/fundamentals/remediations). When you create the custom script remediation, you must set **Run this script using the logged-on credentials** to **Yes**.
+
+ ```powershell
+ try {
+     # Enable the per-user "Allow location override" setting for the signed-in user
+     New-ItemProperty -Path "HKCU:\Software\Microsoft\Windows\CurrentVersion\CPSS\Store\UserLocationOverridePrivacySetting" -Name Value -PropertyType DWORD -Value 1 -Force
+     exit 0
+ }
+ catch {
+     $errMsg = $_.Exception.Message
+     Write-Error $errMsg
+     exit 1
+ }
+ ```
+
+1. Once you have made the changes, location services in the Windows Settings app should look similar to the following image:
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-location-intune.png" alt-text="A screenshot showing the location settings in the Windows Settings app." lightbox="media/redirection-remote-desktop-protocol/redirection-location-intune.png":::
+
+# [Group Policy](#tab/group-policy)
+
+To enable location services without Intune, you can use Group Policy to configure registry values. You can also configure location redirection using Group Policy. Configuring location services this way doesn't prevent users from changing the settings.
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Configure Group Policy Preferences to set the following registry values. To learn how to use Group Policy Preferences, see [Configure a Registry Item](/previous-versions/windows/it-pro/windows-server-2008-r2-and-2008/cc753092(v=ws.10)). You can specify which apps and services can use location services, based on your requirements.
+
+ 1. Enable **Location services** (this value needs to be set per device):
+
+ - **Key**: `HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\location`
+ - **Type**: `REG_SZ` (String value)
+ - **Value name**: `Value`
+ - **Value data**: `Allow`
+
+ 1. Enable **Allow location override** (this value needs to be set per user):
+
+ - **Key**: `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\CPSS\Store\UserLocationOverridePrivacySetting`
+ - **Type**: `REG_DWORD` (DWORD value)
+ - **Value name**: `Value`
+ - **Value data**: `1`
+
+ 1. Enable **Let apps access your location** (this value needs to be set per user):
+
+ - **Key**: `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\location`
+ - **Type**: `REG_SZ` (String value)
+ - **Value name**: `Value`
+ - **Value data**: `Allow`
+
+ 1. Enable **Let desktop apps access your location**, such as Microsoft Edge (this value needs to be set per user):
+
+ - **Key**: `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\location\NonPackaged`
+ - **Type**: `REG_SZ` (String value)
+ - **Value name**: `Value`
+ - **Value data**: `Allow`
+
+ 1. Enable individual Microsoft Store, MSIX, or Appx apps (this value needs to be set per user). Replace `<Package Family Name>` with the package family name of the app, for example `Microsoft.BingWeather_8wekyb3d8bbwe`. You can get a list of apps and their package family name using the [Get-AppxPackage](/powershell/module/appx/get-appxpackage) PowerShell cmdlet.
+
+ - **Key**: `HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\location\<Package Family Name>`
+ - **Type**: `REG_SZ` (String value)
+ - **Value name**: `Value`
+ - **Value data**: `Allow`
+
+1. Make sure that location redirection isn't blocked. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow location redirection** to open it. To allow location redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
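+
+To spot-check the resulting values, here's a small sketch you can run in a PowerShell prompt in the remote session; the Weather app package name is only an example:
+
+```powershell
+# Check the per-device and per-user location consent values set previously
+Get-ItemProperty "HKLM:\SOFTWARE\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\location" -Name Value
+Get-ItemProperty "HKCU:\Software\Microsoft\Windows\CurrentVersion\CapabilityAccessManager\ConsentStore\location" -Name Value
+
+# Look up the package family name for a per-app entry, for example the Weather app
+Get-AppxPackage -Name Microsoft.BingWeather | Select-Object -ExpandProperty PackageFamilyName
+```
+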
+---
+### Host pool configuration
+
+The Azure Virtual Desktop host pool setting *Location service redirection* controls whether to redirect location information from the local device to the remote session. The corresponding RDP property is `redirectlocation:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
+To configure location redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Location service redirection**, select the drop-down list, then select **Enable location sharing from the local device and redirection to apps in the remote session**.
+
+1. Select **Save**.
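+
+If you manage host pools with Azure PowerShell instead of the portal, here's a minimal sketch; the resource group and host pool names are placeholders. Note that `-CustomRdpProperty` replaces the host pool's entire custom RDP property string, so include any other properties you rely on in the same value.
+
+```powershell
+# Requires the Az.DesktopVirtualization module; resource names are placeholders
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-example" `
+    -CustomRdpProperty "redirectlocation:i:1"
+```
+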
++
+## Local device configuration
+
+You need to use a supported app and platform to connect to a remote session, and enable location services on the local device. How you achieve this depends on your requirements, the platform you're using, and whether the device is managed or unmanaged.
+
+To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+On Windows, you can enable location services in the Windows Settings app. For more information, see [Windows location service and privacy](https://support.microsoft.com/windows/windows-location-service-and-privacy-3a8eee0a-5b0b-dc07-eede-2a5ca1c49088). The steps in this article to enable location services in a remote session using Intune and Group Policy can also be applied to local Windows devices.
+
+To enable location services on other platforms, refer to the relevant manufacturer's documentation.
+
+## Test location redirection
+
+Once you configure your session hosts, host pool RDP property, and local devices, you can test location redirection.
+
+Once you configure your Cloud PCs and local devices, you can test location redirection.
+
+Once you configure your dev boxes and local devices, you can test location redirection.
+
+To test location redirection:
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports location redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. Check that the user's location information is available in the remote session. Here are some ways to check:
+
+ 1. Open a web browser and go to a website that uses location information, such as [Bing Maps](https://www.bing.com/maps). In Bing Maps, select the **Locate me** button. The website should show the user's location as the location of the local device.
+
+ 1. Open a PowerShell prompt in the remote session and run the following commands to get the latitude and longitude values. You can also run these commands on a local Windows device to check that the values are consistent.
+
+ ```powershell
+ Add-Type -AssemblyName System.Device
+ $GeoCoordinateWatcher = New-Object System.Device.Location.GeoCoordinateWatcher
+ $GeoCoordinateWatcher.Start()
+
+ Start-Sleep -Milliseconds 500
+
+ If ($GeoCoordinateWatcher.Permission -eq "Granted") {
+ While ($GeoCoordinateWatcher.Status -ne "Ready") {
+ Start-Sleep -Milliseconds 500
+ }
+ $GeoCoordinateWatcher.Position.Location | FL Latitude, Longitude
+ } else {
+ Write-Output "Desktop apps aren't allowed to access your location. Please enable access."
+ }
+ ```
+
+ The output is similar to the following output:
+
+ ```output
+ Latitude : 47.64354
+ Longitude : -122.13082
+ ```
+
+## Related content
+
virtual-desktop Redirection Configure Plug Play Mtp Ptp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-plug-play-mtp-ptp.md
+
+ Title: Configure plug and play MTP and PTP redirection over the Remote Desktop Protocol
+description: Learn how to redirect MTP and PTP plug and play peripherals from a local device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and Remote PC connections.
+
+zone_pivot_groups: rdp-products-features
+Last updated: 07/03/2024
+# Configure Media Transfer Protocol and Picture Transfer Protocol redirection on Windows over the Remote Desktop Protocol
++
+You can configure the redirection behavior of peripherals that use the Media Transfer Protocol (MTP) or Picture Transfer Protocol (PTP), such as a digital camera, from a local device to a remote session over the Remote Desktop Protocol (RDP).
+
+For Azure Virtual Desktop, we recommend you enable MTP and PTP redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy. Once enabled, Windows 365 redirects all supported MTP and PTP peripherals.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy. Once enabled, Microsoft Dev Box redirects all supported MTP and PTP peripherals.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for MTP and PTP peripherals. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+## MTP and PTP redirection vs USB redirection
+
+Most MTP and PTP peripherals connect to a computer over USB. RDP supports redirecting MTP and PTP peripherals using either native MTP and PTP redirection or opaque low-level USB device redirection, independently of each other. Behavior depends on the peripheral and its supported features.
+
+Both redirection methods redirect the device to the remote session, where it's listed under **Portable Devices** in **Device Manager**. The device class is `WPD` and the device class GUID is `{eec5ad98-8080-425f-922a-dabf3de3f69a}`. You can find a list of the device classes at [System-Defined Device Setup Classes Available to Vendors](/windows-hardware/drivers/install/system-defined-device-setup-classes-available-to-vendors).
+
+Devices are redirected differently depending on the redirection method used. MTP and PTP redirection uses high-level redirection; the peripheral is available locally and in the remote session concurrently, and requires the relevant driver installed locally. Opaque low-level USB redirection transports the raw communication of a peripheral, so requires the relevant driver installed in the remote session. You should use high-level redirection methods where possible. For more information, see [Redirection methods](redirection-remote-desktop-protocol.md#redirection-methods-and-classifications).
+
+The following example shows the difference when redirecting an Apple iPhone using the two methods. Both methods achieve the same result where pictures can be imported from the iPhone to the remote session.
+
+- Using MTP and PTP redirection, the iPhone is listed as **Digital Still Camera** to applications and under **Portable Devices** in **Device Manager**:
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/remote-session-device-manager-portable-devices-digital-still-camera.png" alt-text="A screenshot showing portable devices in Device Manager using MTP and PTP redirection." lightbox="media/redirection-remote-desktop-protocol/remote-session-device-manager-portable-devices-digital-still-camera.png":::
+
+- Using USB redirection, the iPhone is listed as **Apple iPhone** to applications and under **Portable Devices** in **Device Manager**:
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/remote-session-device-manager-portable-devices-usb.png" alt-text="A screenshot showing portable devices in Device Manager using USB redirection." lightbox="media/redirection-remote-desktop-protocol/remote-session-device-manager-portable-devices-usb.png":::
+
+The rest of this article covers MTP and PTP redirection. To learn how to configure USB redirection, see [Configure USB redirection on Windows over the Remote Desktop Protocol](redirection-configure-usb.md).
+
+## Prerequisites
+
+Before you can configure MTP and PTP redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool, at a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- A device that supports MTP or PTP you can use to test the redirection configuration connected to a local device.
+
+- To configure Microsoft Intune, you need:
+
+ - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## MTP and PTP redirection
+
+Configuration of a session host using Microsoft Intune or Group Policy, or an RDP property set on a host pool, governs the ability to redirect MTP and PTP peripherals between the remote session and the local device. This configuration is subject to a priority order.
+
+The default configuration is:
+
+- **Windows operating system**: MTP and PTP redirection isn't allowed.
+- **Azure Virtual Desktop host pool RDP properties**: MTP and PTP devices are redirected from the local device to the remote session.
+- **Resultant default behavior**: MTP and PTP peripherals aren't redirected.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable MTP and PTP redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled. You can also specify that only individual MTP and PTP peripherals you approve are redirected.
++
+Configuration of a Cloud PC governs the ability to redirect MTP and PTP peripherals between the remote session and the local device, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: MTP and PTP redirection isn't allowed.
+- **Windows 365**: MTP and PTP redirection is enabled.
+- **Resultant default behavior**: MTP and PTP peripherals are redirected.
++
+Configuration of a dev box governs the ability to redirect the MTP and PTP peripherals between the remote session and the local device, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: MTP and PTP redirection isn't allowed.
+- **Microsoft Dev Box**: MTP and PTP redirection is enabled.
+- **Resultant default behavior**: MTP and PTP peripherals are redirected.
++
+### Configure MTP and PTP redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *MTP and PTP device redirection* controls whether to redirect MTP and PTP peripherals between the remote session and the local device. The corresponding RDP property is `devicestoredirect:s:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
+To configure MTP and PTP redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **MTP and PTP device redirection**, select the drop-down list, then select one of the following options:
+
+ - **Don't redirect any devices**
+ - **Redirect portable media players based on the Media Transfer Protocol (MTP) and digital cameras based on the Picture Transfer Protocol (PTP)** (*default*)
+ - **Not configured**
+
+1. Select **Save**.
+
+> [!TIP]
+> If you enable redirection using host pool RDP properties, you need to check that redirection isn't blocked by a Microsoft Intune or Group Policy setting.
+
+### Optional: Retrieve specific MTP and PTP device instance IDs and add them to the RDP property
+
+By default, the host pool RDP property redirects all supported MTP and PTP peripherals, but you can also enter specific device instance IDs in the host pool properties so that only the peripherals you approve are redirected. To retrieve the device instance IDs of the USB devices you want to redirect from a local device:
+
+1. On the local device, connect any devices you want to redirect.
+
+1. Open a PowerShell prompt and run the following command:
+
+ ```powershell
+ Get-PnPdevice | Where-Object {$_.Class -eq "WPD" -and $_.Status -eq "OK"} | FT -AutoSize
+ ```
+
+ The output is similar to the following output. Make a note of the **InstanceId** value for each device you want to redirect.
+
+ ```output
+ Status Class FriendlyName InstanceId
+ ------ ----- ------------ ----------
+ OK     WPD   Apple iPhone USB\VID_05AC&PID_12A8&MI_00\B&1A733E8B&0&0000
+ ```
+
+1. In the Azure portal, return to the host pool RDP properties configuration, and select **Advanced**.
+
+1. In the text box, find the relevant RDP property, which by default is `devicestoredirect:s:*`, then add the instance IDs you want to redirect, as shown in the following example. Separate each device instance ID with a semi-colon (`;`).
+
+ ```uri
+ devicestoredirect:s:USB\VID_05AC&PID_12A8&MI_00\B&1A733E8B&0&0000
+ ```
+
+1. Select **Save**.
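+
+As a sketch, you can also set this advanced property with Azure PowerShell; the resource names and instance ID are examples, and `-CustomRdpProperty` replaces the host pool's entire custom RDP property string:
+
+```powershell
+# Redirect only an approved peripheral; separate multiple instance IDs with a semi-colon (;)
+Update-AzWvdHostPool -ResourceGroupName "rg-avd" -Name "hp-example" `
+    -CustomRdpProperty 'devicestoredirect:s:USB\VID_05AC&PID_12A8&MI_00\B&1A733E8B&0&0000'
+```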
+
+> [!TIP]
+> The following behavior is expected when you specify an instance ID:
+>
+> - If you refresh the Azure portal, the value you entered changes to lowercase and each backslash character in the instance ID is escaped by another backslash character.
+>
+> - When you navigate to the **Device redirection** tab, the value for **MTP and PTP device redirection** is blank.
++
+### Configure MTP and PTP redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable MTP and PTP redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Do not allow supported Plug and Play device redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow supported Plug and Play device redirection**, depending on your requirements:
+
+ - To allow MTP and PTP redirection, toggle the switch to **Disabled**, then select **OK**.
+
+ - To disable MTP and PTP redirection, toggle the switch to **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+> [!NOTE]
+> When you configure the Intune policy setting **Do not allow supported Plug and Play device redirection**, it also affects USB redirection.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable MTP and PTP redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow supported Plug and Play device redirection** to open it.
+
+ - To allow MTP and PTP redirection, select **Disabled**, then select **OK**.
+
+ - To disable MTP and PTP redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+
+> [!NOTE]
+> When you configure the Group Policy setting **Do not allow supported Plug and Play device redirection**, it also affects USB redirection.
+---
+## Test MTP and PTP redirection
+
+To test MTP and PTP redirection:
+
+1. Make sure a device that supports MTP or PTP is connected to the local device.
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports MTP and PTP redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. Check that the MTP or PTP device is available in the remote session. Here are some ways to check:
+
+ 1. Open the **Photos** app (from Microsoft) in the remote session from the Start menu. Select **Import** and check that the redirected device appears in the list of connected devices.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-photos-app-digital-still-camera.png" alt-text="A screenshot showing the redirected device in the Photos app import list." lightbox="media/redirection-remote-desktop-protocol/redirection-photos-app-digital-still-camera.png":::
+
+ 1. Open a PowerShell prompt in the remote session and run the following command:
+
+ ```powershell
+ Get-PnPdevice | ? Class -eq "WPD" | FT -AutoSize
+ ```
+
+ The output is similar to the following output:
+
+ ```output
+ Status Class FriendlyName         InstanceId
+ ------ ----- ------------         ----------
+ OK     WPD   Digital Still Camera TSBUS\UMB\2&FD4482C&0&TSDEVICE#0002.0003
+ ```
+
+ You can verify whether the device is redirected using MTP and PTP redirection or USB redirection from the **InstanceId** value (a small PowerShell sketch follows these steps):
+
+ - For MTP and PTP redirection, the **InstanceId** value begins with `TSBUS`.
+
+ - For USB redirection, the **InstanceId** value begins with `USB`.
+
+1. Import a picture from the redirected device to verify it's functioning correctly.
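+
+Building on the **InstanceId** prefixes above, here's a small sketch that labels each redirected portable device with its redirection method:
+
+```powershell
+# Label each working portable device with its redirection method, based on the InstanceId prefix
+Get-PnpDevice | Where-Object { $_.Class -eq "WPD" -and $_.Status -eq "OK" } | ForEach-Object {
+    $method = switch -Wildcard ($_.InstanceId) {
+        "TSBUS*" { "MTP and PTP redirection" }
+        "USB*"   { "USB redirection" }
+        default  { "Unknown redirection method" }
+    }
+    "{0}: {1}" -f $_.FriendlyName, $method
+}
+```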
+
+## Related content
+
virtual-desktop Redirection Configure Printers https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-printers.md
+
+ Title: Configure printer redirection over the Remote Desktop Protocol
+description: Learn how to redirect printers from a local device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
+Last updated: 07/02/2024
+# Configure printer redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of printers from a local device to a remote session over the Remote Desktop Protocol (RDP). Printer redirection supports locally attached and network printers. When you enable printer redirection, all printers available on the local device are redirected; you can't select specific printers to redirect. The default printer on the local device is automatically set as the default printer in the remote session.
+
+Printer redirection uses high-level redirection and doesn't require drivers to be installed on session hosts. The **Remote Desktop Easy Print** driver is used automatically on session hosts. The driver for the printer must be installed on the local device for redirection to work correctly.
+
+For Azure Virtual Desktop, we recommend you enable printer redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+Printer redirection uses high-level redirection and doesn't require drivers to be installed on a Cloud PC. The **Remote Desktop Easy Print** driver is used automatically on a Cloud PC. The driver for the printer must be installed on the local device for redirection to work correctly.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy.
+
+Printer redirection uses high-level redirection and doesn't require drivers to be installed on a dev box. The **Remote Desktop Easy Print** driver is used automatically on a dev box. The driver for the printer must be installed on the local device for redirection to work correctly.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for printers. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+> [!TIP]
+> Azure Universal Print is an alternative solution to redirecting printers from a local device to a remote session. For more information, see [Discover Universal Print](/universal-print/discover-universal-print) and to learn about using it with Azure Virtual Desktop, see [Printing on Azure Virtual Desktop using Universal Print](/universal-print/fundamentals/universal-print-avd).
+
+## Prerequisites
+
+Before you can configure printer redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool, at a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- A printer available on the local device. You need to make sure the printer driver is installed correctly on the local device. No driver is needed in the remote session, as redirected printers use the **Remote Desktop Easy Print** driver.
+
+- To configure Microsoft Intune, you need:
+
+ - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Printer redirection
+
+Configuration of a session host using Microsoft Intune or Group Policy, or an RDP property set on a host pool, governs the ability to redirect printers from a local device to a remote session. This configuration is subject to a priority order.
+
+The default configuration is:
+
+- **Windows operating system**: Printer redirection isn't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: All printers are redirected from the local device to a remote session and the default printer on the local device is the default printer in the remote session.
+- **Resultant default behavior**: All printers are redirected from the local device to a remote session and the default printer on the local device is the default printer in the remote session.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable printer redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Configuration of a Cloud PC governs the ability to redirect printers from a local device to a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Printer redirection isn't blocked.
+- **Windows 365**: All printers are redirected from the local device to a remote session and the default printer on the local device is the default printer in the remote session.
+- **Resultant default behavior**: All printers are redirected from the local device to a remote session and the default printer on the local device is the default printer in the remote session.
++
+Configuration of a dev box governs the ability to redirect printers from a local device to a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Printer redirection isn't blocked.
+- **Microsoft Dev Box**: All printers are redirected from the local device to a remote session and the default printer on the local device is the default printer in the remote session.
+- **Resultant default behavior**: All printers are redirected from the local device to a remote session and the default printer on the local device is the default printer in the remote session.
++
+### Configure printer redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *Printer redirection* controls whether to redirect printers from a local device to a remote session. The corresponding RDP property is `redirectprinters:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
+To configure printer redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Printer redirection**, select the drop-down list, then select one of the following options:
+
+ - **The printers on the local computer are not available in remote session**
+ - **The printers on the local computer are available in remote session** (*default*)
+ - **Not configured**
+
+1. Select **Save**.
+
+### Configure printer redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable printer redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Printer Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-printers-intune.png" alt-text="A screenshot showing the printer redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-printers-intune.png":::
+
+1. Check the box for **Do not allow client printer redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow client printer redirection** to **Enabled** or **Disabled**, depending on your requirements:
+
+ - To allow printer redirection, toggle the switch to **Disabled**, then select **OK**.
+
+ - To disable printer redirection, toggle the switch to **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable printer redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on a device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Printer Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-printers-group-policy.png" alt-text="A screenshot showing the printer redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-printers-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow client printer redirection** to open it.
+
+ - To allow printer redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable printer redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+---
+## Test printer redirection
+
+Printer redirection uses high-level redirection; the printer is available locally and in the remote session concurrently, and requires the relevant driver installed locally. The driver for the printer doesn't need to be installed in the remote session as redirected printers use the **Remote Desktop Easy Print** driver.
+
+To test printer redirection:
+
+1. Make sure a functioning printer is available on the local device.
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports printer redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. Check that the printers are available in the remote session. Here are some ways to check:
+
+ 1. Open **Printers & scanners** in the remote session from the Start menu. Check that the redirected printers appear in the list of printers. Redirected printers are identified by the printer name being appended with **(redirected *n*)**, where *n* is the user's session ID. The session ID is appended to make sure redirected printers are unique to the user's session.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-printers.png" alt-text="A screenshot showing the available printers and scanners in the remote session." lightbox="media/redirection-remote-desktop-protocol/redirection-printers.png":::
+
+ 1. Open a PowerShell prompt in the remote session and run the following command:
+
+ ```powershell
+ Get-Printer | ? DriverName -eq "Remote Desktop Easy Print" | Sort-Object | FT -AutoSize
+ ```
+
+ The output is similar to the following output:
+
+ ```output
+ Name                                         ComputerName Type  DriverName                PortName Shared Published DeviceType
+ ----                                         ------------ ----  ----------                -------- ------ --------- ----------
+ HP Color LaserJet MFP M281fdw (redirected 2)              Local Remote Desktop Easy Print TS001    False  False     Print
+ Microsoft Print to PDF (redirected 2)                     Local Remote Desktop Easy Print TS002    False  False     Print
+ OneNote (Desktop) (redirected 2)                          Local Remote Desktop Easy Print TS003    False  False     Print
+ ```
+
+1. Open an application and print a test page to verify the printer is functioning correctly.
+
+### Optional: Disable printer redirection on a local Windows device
+
+You can disable printer redirection on a local Windows device to prevent printers from being redirected to a remote session. This method is useful if you want to enable printer redirection for most users, but disable it for specific Windows devices.
+
+1. As an Administrator on a local Windows device, open the **Registry Editor** app from the Start menu, or run `regedit.exe` from the command line.
+
+1. Configure the following registry key and value. You don't need to restart the local device for the settings to take effect.
+
+ - **Key**: `HKEY_LOCAL_MACHINE\Software\Microsoft\Terminal Server Client`
+ - **Type**: `REG_DWORD`
+ - **Value name**: `DisablePrinterRedirection`
+ - **Value data**: `1`
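+
+As a minimal sketch, instead of using Registry Editor, you can set this value from an elevated PowerShell prompt on the local device; the commands assume the key might not exist yet:
+
+```powershell
+# Create the key if it doesn't exist, then set the value; no restart is needed
+$path = "HKLM:\Software\Microsoft\Terminal Server Client"
+New-Item -Path $path -Force | Out-Null
+Set-ItemProperty -Path $path -Name "DisablePrinterRedirection" -Value 1 -Type DWord
+```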
+
+## Related content
+
virtual-desktop Redirection Configure Serial Com Ports https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-serial-com-ports.md
+
+ Title: Configure serial or COM port redirection over the Remote Desktop Protocol
+description: Learn how to redirect serial or COM ports from a local device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
+Last updated: 04/29/2024
+# Configure serial or COM port redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of serial or COM ports between a local device and a remote session over the Remote Desktop Protocol (RDP).
+
+For Azure Virtual Desktop, we recommend you enable serial or COM port redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior of serial or COM ports. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+## Prerequisites
+
+Before you can configure serial or COM port redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool, at a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- A serial or COM port on a local device and a peripheral that connects to the port. Serial or COM port redirection uses opaque low-level redirection, so drivers need to be installed in the remote session for the peripheral to function correctly.
+
+- To configure Microsoft Intune, you need:
+
+ - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Serial or COM port redirection
+
+Configuration of a session host using Microsoft Intune or Group Policy, or an RDP property set on a host pool, governs the ability to redirect serial or COM ports from the local device to the remote session. This configuration is subject to a priority order.
+
+The default configuration is:
+
+- **Windows operating system**: Serial or COM port redirection isn't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: Serial or COM ports are redirected from the local device to the remote session.
+- **Resultant default behavior**: Serial or COM ports are redirected from the local device to the remote session.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable serial or COM port redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Configuration of a Cloud PC governs the ability to redirect the serial or COM ports from the local device to the remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Serial or COM port redirection isn't blocked.
+- **Windows 365**: Serial or COM ports are redirected from the local device to the remote session.
+- **Resultant default behavior**: Serial or COM ports are redirected from the local device to the remote session.
++
+Configuration of a dev box governs the ability to redirect serial or COM ports from the local device to the remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Serial or COM port redirection isn't blocked.
+- **Microsoft Dev Box**: Serial or COM ports are redirected from the local device to the remote session.
+- **Resultant default behavior**: Serial or COM ports are redirected from the local device to the remote session.
++
+### Configure serial or COM port redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *COM ports redirection* controls whether to redirect the serial or COM ports between the remote session and the local device. The corresponding RDP property is `redirectcomports:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
+To configure serial or COM port redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **COM ports redirection**, select the drop-down list, then select one of the following options:
+
+ - **COM ports on the local computer are not available in the remote session**
+ - **COM ports on the local computer are available in the remote session** (*default*)
+ - **Not configured**
+
+1. Select **Save**.
++
+### Configure serial or COM port redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable serial or COM port redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Do not allow COM port redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow COM port redirection** to **Enabled** or **Disabled**, depending on your requirements:
+
+ - To allow serial or COM port redirection, toggle the switch to **Disabled**, then select **OK**.
+
+ - To disable serial or COM port redirection, toggle the switch to **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable serial or COM port redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow COM port redirection** to open it.
+
+ - To allow serial or COM port redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable serial or COM port redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+++
+## Test serial or COM port redirection
+
+When using serial or COM port redirection, consider the following behavior:
+
+- Drivers for redirected peripherals connected to a serial or COM port need to be installed in the remote session using the same process as the local device. Ensure that Windows Update is enabled in the remote session, or that drivers are available for the peripheral.
+
+- Opaque low-level redirection is designed for LAN connections; with higher latency, some peripherals connected to a serial or COM port might not function properly, or the user experience might not be suitable.
+
+- Peripherals connected to a serial or COM port aren't available locally while the port is redirected to the remote session.
+
+- Peripherals connected to a serial or COM port can only be used in one remote session at a time.
+
+- Serial or COM port redirection is only available from a local Windows device.
+
+To test serial or COM port redirection from a local Windows device:
+
+1. Plug in the supported peripherals you want to use in a remote session to a serial or COM port.
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports serial or COM port redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. Check that the device is functioning correctly in the remote session. Because serial or COM ports are redirected using opaque low-level redirection, the correct driver needs to be installed in the remote session; install it manually if it isn't installed automatically.
+
+    Here are some ways to check that the redirected peripherals are available in the remote session, depending on the permissions you have in the remote session:
+
+    1. Open **Device Manager** in the remote session from the start menu, or run `devmgmt.msc` from the command line. Check that the redirected peripherals appear in the expected device category and don't show any errors.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/remote-session-device-manager.png" alt-text="A screenshot showing device manager in a remote session.":::
+
+    1. Open a Command Prompt or PowerShell prompt on both the local device and in the remote session, then run the following command in both locations. This command shows the serial or COM ports available locally and enables you to verify that they're available in the remote session.
+
+ ```cmd
+ chgport
+ ```
+
+ The output is similar to the following output:
+
+ - On the local device:
+
+ ```output
+ COM3 = \Device\Serial0
+ COM4 = \Device\Serial1
+ ```
+
+ - In the remote session:
+
+ ```output
+ COM3 = \Device\RdpDrPort\;COM3:2\tsclient\COM3
+ COM4 = \Device\RdpDrPort\;COM4:2\tsclient\COM4
+ ```
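+
+    Alternatively, you can list the available COM port names with PowerShell in both locations and compare the results. This is a quick sketch that relies on the .NET `SerialPort` class, which is available by default in Windows PowerShell:
+
+    ```powershell
+    # List the COM port names visible to this session (run on the local device and in the remote session)
+    [System.IO.Ports.SerialPort]::GetPortNames()
+    ```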
+
+1. Once the peripherals are redirected and functioning correctly, you can use them as you would on a local device.
+
+## Related content
+
virtual-desktop Redirection Configure Smart Cards https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-smart-cards.md
+
+ Title: Configure smart card device redirection over the Remote Desktop Protocol
+description: Learn how to redirect smart card devices from a local device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
++ Last updated : 07/05/2024++
+# Configure smart card redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of smart card devices from a local device to a remote session over the Remote Desktop Protocol (RDP).
+
+For Azure Virtual Desktop, we recommend you enable smart card redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for smart card devices. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+## Prerequisites
+
+Before you can configure smart card redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool as a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- A smart card device available on your local device.
+
+- To configure Microsoft Intune, you need:
+
+   - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Smart card redirection
+
+Configuration of a session host using Microsoft Intune or Group Policy, or setting an RDP property on a host pool, governs the ability to redirect smart card devices from a local device to a remote session, which is subject to a priority order.
+
+The default configuration is:
+
+- **Windows operating system**: Smart card redirection isn't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: Smart card devices are redirected from the local device to the remote session.
+- **Resultant default behavior**: Smart card devices are redirected from the local device to the remote session.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable smart card redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Configuration of a Cloud PC governs the ability to redirect smart card devices from a local device to a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Smart card redirection isn't blocked.
+- **Windows 365**: Smart card redirection is enabled.
+- **Resultant default behavior**: Smart card devices are redirected from the local device to the remote session.
++
+Configuration of a dev box governs the ability to redirect smart card devices from a local device to a remote session, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: Smart card redirection isn't blocked.
+- **Microsoft Dev Box**: Smart card redirection is enabled.
+- **Resultant default behavior**: Smart card devices are redirected from the local device to the remote session.
++
+### Configure smart card device redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *smart card redirection* controls whether to redirect smart card devices from a local device to a remote session. The corresponding RDP property is `redirectsmartcards:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
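+
+For example, the property as it appears on the **Advanced** tab of the host pool RDP properties looks like this, where a value of `1` redirects smart card devices and `0` doesn't:
+
+```uri
+redirectsmartcards:i:1
+```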
+
+To configure smart card redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **Smart card redirection**, select the drop-down list, then select one of the following options:
+
+ - **The smart card device on the local computer is not available in remote session**
+ - **The smart card device on the local computer is available in remote session** (*default*)
+ - **Not configured**
+
+1. Select **Save**.
+
+1. To test the configuration, connect to a remote session, then use an application or website that requires your smart card. Verify that the smart card is available and works as expected.
+
+### Configure smart card device redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable smart card device redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Do not allow smart card device redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow smart card device redirection** to **Enabled** or **Disabled**, depending on your requirements:
+
+ - To allow smart card device redirection, toggle the switch to **Disabled**, then select **OK**.
+
+ - To disable smart card device redirection, toggle the switch to **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable smart card device redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow smart card device redirection** to open it.
+
+ - To allow smart card device redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable smart card device redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+++
+## Test smart card redirection
+
+To test smart card redirection:
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports smart card redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. Check that your smart cards are available in the remote session. Run the following command in the remote session from a Command Prompt or PowerShell prompt.
+
+ ```cmd
+ certutil -scinfo
+ ```
+
+ If smart card redirection is working, the output starts similar to the following output:
+
+ ```output
+ The Microsoft Smart Card Resource Manager is running.
+ Current reader/card status:
+ Readers: 2
+ 0: Windows Hello for Business 1
+ 1: Yubico YubiKey OTP+FIDO+CCID 0
+ Reader: Windows Hello for Business 1
+ Status: SCARD_STATE_PRESENT | SCARD_STATE_INUSE
+ Status: The card is being shared by a process.
+ Card: Identity Device (Microsoft Generic Profile)
+ ATR:
+ aa bb cc dd ee ff 00 11 22 33 44 55 66 77 88 99 ;.........AB12..
+ ab .
+
+ Reader: Yubico YubiKey OTP+FIDO+CCID 0
+ Status: SCARD_STATE_PRESENT | SCARD_STATE_UNPOWERED
+ Status: The card is available for use.
+ Card: Identity Device (NIST SP 800-73 [PIV])
+ ATR:
+ aa bb cc dd ee ff 00 11 22 33 44 55 66 77 88 99 ;.........34yz..
+ ab .
+
+ [continued...]
+ ```
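+
+    You can also confirm that the Smart Card resource manager service is running before troubleshooting further. Here's a quick PowerShell sketch; `SCardSvr` is the service name of the Windows Smart Card service:
+
+    ```powershell
+    # Check that the Smart Card resource manager (SCardSvr) is running in the remote session
+    Get-Service -Name SCardSvr | Select-Object Status, Name, DisplayName
+    ```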
+
+1. Open and use an application or website that requires your smart card. Verify that the smart card is available and works as expected.
+
+## Related content
+
virtual-desktop Redirection Configure Usb https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-usb.md
+
+ Title: Configure USB redirection on Windows over the Remote Desktop Protocol
+description: Learn how to redirect USB peripherals from a local Windows device to a remote session over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
++ Last updated : 08/08/2024++
+# Configure USB redirection on Windows over the Remote Desktop Protocol
++
+You can configure the redirection of certain USB peripherals between a local Windows device and a remote session over the Remote Desktop Protocol (RDP).
+
+> [!IMPORTANT]
+> This article covers USB devices that use opaque low-level redirection only. USB devices that use high-level redirection are covered by the article for the specific device type. You should use high-level redirection methods where possible.
+>
+> For a list of which device type uses which redirection method, see [Supported resources and peripherals](redirection-remote-desktop-protocol.md#supported-resources-and-peripherals). Peripherals redirected using opaque low-level redirection require drivers installed in the remote session.
+
+ For Azure Virtual Desktop, USB redirection must be configured at the following points. If any of these components aren't configured correctly, USB redirection won't work as expected. You can use Microsoft Intune or Group Policy to configure your session hosts and the local device.
+
+ - Session host
+ - Host pool RDP property
+ - Local device
+
+By default, the host pool RDP property redirects all supported USB peripherals, but you can also specify individual USB peripherals to redirect or exclude from redirection, and redirect an entire device setup class, such as multimedia peripherals. Take care when configuring redirection settings as the most restrictive setting is the resultant behavior.
+
+Some USB peripherals might have functions that use opaque low-level USB redirection or high-level redirection. By default, these peripherals are redirected using high-level redirection. You can also force these peripherals to use opaque low-level USB redirection by following the steps in this article.
+
+ For Windows 365, USB redirection must be configured on the Cloud PC and the local device. If either of these components isn't configured correctly, USB redirection won't work as expected. You can use Microsoft Intune or Group Policy to configure your Cloud PC and the local device. Once configured, Windows 365 redirects all supported USB peripherals.
+
+ For Microsoft Dev Box, USB redirection must be configured on the dev box and the local device. If either of these components isn't configured correctly, USB redirection won't work as expected. You can use Microsoft Intune or Group Policy to configure your dev box and the local device. Once configured, Microsoft Dev Box redirects all supported USB peripherals.
+
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the session host, host pool RDP properties, or local device.
+>
+> - [Microsoft Teams](teams-on-avd.md) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the Cloud PC or local device.
+>
+> - [Microsoft Teams](/windows-365/enterprise/teams-on-cloud-pc) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+> [!TIP]
+> If you use the following features in a remote session, they have their own optimizations that are independent from the redirection configuration on the dev box or local device.
+>
+> - [Microsoft Teams](/windows-365/enterprise/teams-on-cloud-pc) for camera, microphone, and audio redirection.
+> - [Multimedia redirection](multimedia-redirection-intro.md) for audio, video and call redirection.
++
+## Prerequisites
+
+Before you can configure USB redirection using opaque low-level redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool as a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- A USB device you can use to test the redirection configuration.
+
+- To configure Microsoft Intune, you need:
+
+   - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## Session host configuration
+
+To configure a session host for USB redirection using opaque low-level redirection, you need to enable Plug and Play redirection. You can do this using Microsoft Intune or Group Policy.
+
+## Cloud PC configuration
+
+To configure a Cloud PC for USB redirection using opaque low-level redirection, you need to enable Plug and Play redirection. You can do this using Microsoft Intune or Group Policy.
+
+## Dev box configuration
+
+To configure a dev box for USB redirection using opaque low-level redirection, you need to enable Plug and Play redirection. You can do this using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: USB redirection isn't allowed.
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To enable Plug and Play redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune.png":::
+
+1. Check the box for **Do not allow supported Plug and Play device redirection**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Do not allow supported Plug and Play device redirection** to **Disabled**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To enable Plug and Play redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow supported Plug and Play device redirection** to open it. Select **Disabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
+
+> [!IMPORTANT]
+> With opaque low-level USB redirection, drivers for redirected USB peripherals are installed in the remote session by using the same process that a physical Windows computer uses when a device is plugged in. Ensure that Windows Update is enabled in the remote session, or that drivers are available for the USB peripheral being redirected.
+++
+## Local Windows device configuration
+
+To configure a local Windows device for USB redirection using opaque low-level redirection, you need to allow RDP redirection of other supported USB peripherals for users and administrators. You can do this using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: other supported USB peripherals aren't available for RDP redirection for any user account.
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow RDP redirection of other supported USB peripherals using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Settings catalog** profile type.
+
+1. In the settings picker, browse to **Administrative templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client** > **RemoteFX USB Device Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-remotefx-usb-device-redirection-intune.png" alt-text="A screenshot showing the client USB device redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-remotefx-usb-device-redirection-intune.png":::
+
+1. Check the box for **Allow RDP redirection of other supported RemoteFX USB devices from this computer**, then close the settings picker.
+
+1. Expand the **Administrative templates** category, then toggle the switch for **Allow RDP redirection of other supported RemoteFX USB devices from this computer** to **Enabled**.
+
+1. For the drop-down list for **RemoteFX USB Redirection Access Rights (Device)**, select **Administrators and Users**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the local Windows devices, restart them for USB redirection to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To allow RDP redirection of other supported USB peripherals using Group Policy:
+
+1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Connection Client** > **RemoteFX USB Device Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-remotefx-usb-device-redirection-group-policy.png" alt-text="A screenshot showing the client USB device redirection options in Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-remotefx-usb-device-redirection-group-policy.png":::
+
+1. Double-click the policy setting **Allow RDP redirection of other supported RemoteFX USB devices from this computer** to open it. Select **Enabled**.
+
+1. For the drop-down list for **RemoteFX USB Redirection Access Rights**, select **Administrators and Users**, then select **OK**.
+
+1. Ensure the policy is applied to the local Windows devices, then restart them for USB redirection to take effect.
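+
+To verify the policy reached a local device, you can check the registry value behind the policy setting. This is a sketch under the assumption that the policy is backed by the `fUsbRedirectionEnableMode` value, where `2` corresponds to **Administrators and Users**:
+
+```powershell
+# Read the assumed RemoteFX USB redirection policy value on the local device
+Get-ItemProperty -Path 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services\Client' |
+    Select-Object fUsbRedirectionEnableMode
+```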
+++
+### Optional: Retrieve specific USB device instance IDs to use with opaque low-level redirection
+
+### Optional: Discover available devices to redirect using opaque low-level redirection
+
+For Azure Virtual Desktop, you can enter specific device instance IDs in the host pool properties so that only the peripherals you approve are redirected. To retrieve the device instance IDs of the USB devices on a local device that you want to redirect:
+
+Windows 365 redirects all supported peripherals for opaque low-level redirection connected to a local device. To discover which devices are available to redirect:
+
+Microsoft Dev Box redirects all supported peripherals for opaque low-level redirection connected to a local device. To discover which devices are available to redirect:
+
+1. On the local device, connect any devices you want to redirect.
+
+1. Open the Remote Desktop Connection app from the start menu, or run `mstsc.exe` from the command line.
+
+1. Select **Show Options**, then select the **Local Resources** tab.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/remote-desktop-connection-local-resources.png" alt-text="A screenshot showing the Local Resources tab of the Remote Desktop Connection app.":::
+
+1. In the section **Local devices and resources**, select **More...**.
+
+1. From the list of devices and resources, check the box for **Other supported RemoteFX USB devices**. This option only appears if you enable the setting **Allow RDP redirection of other supported RemoteFX USB devices from this computer** covered in the section [Local Windows device configuration](#local-windows-device-configuration). You can select the **+** (plus) icon to expand the list and see which devices are available to be redirected using opaque low-level redirection.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/remote-desktop-connection-usb-low-level-devices.png" alt-text="A screenshot showing an example of available USB peripherals to redirect using opaque low-level redirection.":::
+
+1. Select **OK** to close **Local devices and resources**.
+
+1. Select the **General** tab, then select **Save As...** and save the `.rdp` file.
+
+1. Open a PowerShell prompt on the local device.
+
+1. Run the following commands to match each supported USB device name with the USB instance ID. Replace the `<RDP file path>` placeholder with the path to the `.rdp` file you saved previously.
+
+ ```powershell
+ $rdpFile = "<RDP file path>"
+
+ $testPath = Test-Path $rdpFile
+ If ($testPath) {
+
+ # Function used for recursively getting all child devices of a parent device
+ Function Lookup-Device-Children {
+ [CmdletBinding()]
+ Param(
+ [Parameter(Mandatory, ValueFromPipeline)]
+ [ValidateNotNullOrEmpty()]
+ [object]
+ $ChildDeviceIds
+ )
+
+ foreach ($childDeviceId in $childDeviceIds) {
+ $pnpDeviceProperties = Get-PnpDeviceProperty -InstanceId $childDeviceId
+
+ [string]$childDevice = ($pnpDeviceProperties | ? KeyName -eq DEVPKEY_NAME).Data
+ Write-Output " $childDevice"
+
+ If ($pnpDeviceProperties.KeyName -contains "DEVPKEY_Device_Children") {
+ $pnpChildDeviceIds = ($pnpDeviceProperties | ? KeyName -eq DEVPKEY_Device_Children).Data
+ Lookup-Device-Children -ChildDeviceIds $pnpChildDeviceIds
+ }
+ }
+ }
+
+ # Get a list of the supported devices from the .rdp file and store them in an array
+ [string]$usb = Get-Content -Path $rdpFile | Select-String USB
+ $devices = @($usb.Replace("usbdevicestoredirect:s:","").Replace("-","").Split(";"))
+
+ # Get the devices
+ foreach ($device in $devices) {
+ $pnpDeviceProperties = Get-PnpDeviceProperty -InstanceId $device
+
+ [string]$parentDevice = ($pnpDeviceProperties | ? KeyName -eq DEVPKEY_NAME).Data
+ Write-Output "`n-`n`nParent device name: $parentDevice`nUSB device ID: $device`n"
+
+ If ($pnpDeviceProperties.KeyName -contains "DEVPKEY_Device_Children") {
+ $pnpChildDeviceIds = ($pnpDeviceProperties | ? KeyName -eq DEVPKEY_Device_Children).Data
+ Write-Output "This parent device has the following child devices:"
+ Lookup-Device-Children -ChildDeviceIds $pnpChildDeviceIds
+ }
+ }
+
+ } else {
+ Write-Output "Error: file doesn't exist. Please check the file path and try again."
+ }
+ ```
+
+ The output is similar to the following output:
+
+ ```output
+ -
+
+ Parent device name: USB Composite Device
+ USB device ID: USB\VID_0ECB&PID_1F58\9&2E5F6FA0&0&1
+
+ This parent device has the following child devices:
+ AKG C44-USB Microphone
+ Headphones (AKG C44-USB Microphone)
+ Microphone (AKG C44-USB Microphone)
+ USB Input Device
+ HID-compliant consumer control device
+ HID-compliant consumer control device
+
+ -
+
+ Parent device name: USB Composite Device
+ USB device ID: USB\VID_262A&PID_180A\6&22E6BE6&0&1
+
+ This parent device has the following child devices:
+ USB Input Device
+ HID-compliant consumer control device
+ Klipsch R-41PM
+ Speakers (Klipsch R-41PM)
+
+ -
+
+ Parent device name: USB-to-Serial Comm Port (COM30)
+ USB device ID: USB\VID_012A&PID_0123\A&3A944CE5&0&2
+
+ -
+
+ Parent device name: USB Composite Device
+ USB device ID: USB\VID_046D&PID_0893\88A44075
+
+ This parent device has the following child devices:
+ Logitech StreamCam
+ Logitech StreamCam
+ Microphone (Logitech StreamCam)
+ Logitech StreamCam WinUSB
+ USB Input Device
+ HID-compliant vendor-defined device
+ ```
+
+1. Make a note of the device instance ID of any of the parent devices you want to use for redirection. Only the parent device instance ID is applicable for USB redirection.
++
+### Optional: Discover peripherals matching a device setup class
++
+For Azure Virtual Desktop, you can enter a device class GUID in the host pool properties so that only the devices that match that device class are redirected. To retrieve a list of the devices that match a specific device class GUID on a local device:
+
+1. On the local device, open a PowerShell prompt.
+
+1. Run the following command, replacing `<device class GUID>` with the device class GUID you want to search for and list the matching devices. For a list of device class GUID values, see [System-Defined Device Setup Classes Available to Vendors](/windows-hardware/drivers/install/system-defined-device-setup-classes-available-to-vendors).
+
+ ```powershell
+ $deviceClassGuid = "<device class GUID>"
+ Get-PnpDevice | Where-Object {$_.ClassGuid -like "*$deviceClassGuid*" -and $_.InstanceId -like "USB\*" -and $_.Present -like "True"} | FT -AutoSize
+ ```
+
+ For example, using the device class GUID `4d36e96c-e325-11ce-bfc1-08002be10318` for multimedia devices, the output is similar to the following output:
+
+ ```output
+ Status Class FriendlyName InstanceId
+    ------ ----- ------------------------- ---------------------------------------------
+ OK MEDIA USB Advanced Audio Device USB\VID_0D8C&PID_0147&MI_00\B&35486F89&0&0000
+ OK MEDIA AKG C44-USB Microphone USB\VID_0ECB&PID_1F58&MI_00\A&250837E1&0&0000
+ OK MEDIA Logitech StreamCam USB\VID_046D&PID_0893&MI_02\6&4886529&0&0002
+ OK MEDIA Klipsch R-41PM USB\VID_262A&PID_180A&MI_01\7&3598D0A0&0&0001
+ ```
++
+## Host pool configuration
+
+The Azure Virtual Desktop host pool setting *USB device redirection* determines which supported USB devices connected to the local device are available in the remote session. The corresponding RDP property is `usbdevicestoredirect:s:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
+
+To configure USB redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **USB device redirection**, select the drop-down list, then select one of the following options:
+
+ - **Redirect all USB devices that are not already redirected by another high-level redirection** (*default*)
+ - **Redirect all devices that are members of the specified device setup class or devices defined by specific instance ID**
+
+1. If you select **Redirect all devices that are members of the specified device setup class or devices defined by specific instance ID**, an extra box appears. Enter the device setup class or specific device instance path for the devices you want to redirect, separated by a semicolon. For more information, see [Controlling opaque low-level USB redirection](redirection-remote-desktop-protocol.md#controlling-opaque-low-level-usb-redirection). To get the values for supported devices, see [Optional: Retrieve specific device instance IDs](#optional-retrieve-specific-usb-device-instance-ids-to-use-with-opaque-low-level-redirection), and for device class GUIDs, see [Optional: Discover peripherals matching a device setup class](#optional-discover-peripherals-matching-a-device-setup-class). For Azure Virtual Desktop, the characters `\`, `:`, and `;` must be escaped using a backslash character.
+
+ Here are some examples:
+
+   - To redirect a specific peripheral based on its whole device instance path (that is, only when it's connected to a particular USB port), enter the device instance path using double backslash characters, such as `USB\\VID_045E&PID_0779\\5&21F6DCD1&0&5`. For multiple devices, separate them with a semicolon, such as `USB\\VID_045E&PID_0779\\5&21F6DCD1&0&5;USB\\VID_0ECB&PID_1F58\\9&2E5F6FA0&0&1`.
+
+ - To redirect all peripherals that are members of a specific device setup class (that is, all supported multimedia devices), enter the device class GUID, including braces. For example, to redirect all multimedia devices, enter `{4d36e96c-e325-11ce-bfc1-08002be10318}`. For multiple device class IDs, separate them with a semicolon, such as `{4d36e96c-e325-11ce-bfc1-08002be10318};{6bdd1fc6-810f-11d0-bec7-08002be2092f}`.
+
+ > [!TIP]
+ > You can create advanced configurations by combining device instance paths and device class GUIDs, and you enter the configuration on the **Advanced** tab of **RDP Properties**. For more examples, see [usbdevicestoredirect RDP property](#usbdevicestoredirect-rdp-property).
+
+1. Select **Save**. You can now test the USB redirection configuration.
++
+## Test USB redirection
+
+Once you configure your session hosts, host pool RDP property, and local devices, you can test USB redirection. Consider the following behavior:
+
+Once you configure your Cloud PCs and local devices, you can test USB redirection. Consider the following behavior:
+
+Once you configure your dev boxes and local devices, you can test USB redirection. Consider the following behavior:
+
+- Drivers for redirected USB peripherals are installed in the remote session using the same process as the local device. Ensure that Windows Update is enabled in the remote session, or that drivers are available for the peripheral.
+
+- Opaque low-level USB redirection is designed for LAN connections (< 20 ms latency); with higher latency, some USB peripherals might not function properly, or the user experience might not be suitable.
+
+- USB peripherals aren't available locally while they're redirected to the remote session.
+
+- USB peripherals can only be used in one remote session at a time.
+
+- USB redirection is only available from a local Windows device.
+
+To test USB redirection:
+
+1. Plug in the supported USB peripherals you want to use in a remote session.
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports USB redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. Check that the peripherals are connected to the remote session. With the display in full screen, on the status bar select the icon to select devices to use. This icon only shows when USB redirection is correctly configured.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/windows-app-status-bar-device-redirection.png" alt-text="A screenshot showing the status bar of Windows App with a red box around the select devices to use icon.":::
+
+1. Check the box for each USB peripheral you want to redirect to the remote session, and uncheck the box for those peripherals you don't want to redirect. Some devices might appear in this list as **Remote Desktop Generic USB Device** once redirected.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/windows-app-connected-local-devices-resources.png" alt-text="A screenshot showing the local devices and resources dialog box of Windows App when connected to a remote session.":::
+
+1. Check that the device is functioning correctly in the remote session. The correct driver needs to be installed in the remote session. Here are some ways to check that the USB peripherals are available, depending on the permissions you have in the remote session:
+
+    1. Open Device Manager in the remote session from the start menu, or run `devmgmt.msc` from the command line. Check that the redirected peripherals appear in the expected device category and don't show any errors.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/remote-session-device-manager.png" alt-text="A screenshot showing device manager in a remote session.":::
+
+ 1. Open a PowerShell prompt in the remote session and run the following command:
+
+ ```powershell
+ Get-PnPDevice | Where-Object {$_.InstanceId -like "*TSUSB*" -and $_.Present -eq "true"} | FT -AutoSize
+ ```
+
+ The output is similar to the following output. Check the status column for any entries that show **Error**. If there are any entries with an error, troubleshoot the device according to the manufacturer's instructions.
+
+ ```output
+ Status Class FriendlyName InstanceId
+    ------ ----- ------------------------------ ------------------------------------------------------------
+ OK USB USB Composite Device USB\VID_0D8C&PID_0147&REV_0109\3&2DCEE32&0&TSUSB-SESSION4...
+ OK Ports USB-to-Serial Comm Port (COM6) USB\VID_012A&PID_0123&REV_0202\3&2DCEE32&0&TSUSB-SESSION4...
+ ```
+
+1. Once the peripherals are redirected and functioning correctly, you can use them as you would on a local device.
+
+## usbdevicestoredirect RDP property
+
+The `usbdevicestoredirect` RDP property specifies which USB devices are redirected to the remote session. Its syntax, `usbdevicestoredirect:s:<value>`, provides flexibility when redirecting USB peripherals using opaque low-level redirection. Valid values for the property are shown in the following table. Values can be used on their own, or combined with each other when separated with a semicolon, subject to a processing order. For more information, see [Controlling opaque low-level USB redirection](redirection-remote-desktop-protocol.md#controlling-opaque-low-level-usb-redirection).
+
+| Processing order | Value | Description |
+|:--:|:--:|--|
+| N/A | *No value specified* | Don't redirect any supported USB peripherals using opaque low-level redirection. |
+| 1 | `*` | Redirect all peripherals that aren't using high-level redirection. |
+| 2 | `{<DeviceClassGUID>}` | Redirect all peripherals that are members of the specified device setup class. |
+| 3 | `<USBInstanceID>` | Redirect a USB peripheral specified by the given device instance path. |
+| 4 | `<-USBInstanceID>` | Don't redirect a peripheral specified by the given device instance path. |
+
+When constructed as a string in the correct processing order, the syntax is:
+
+```uri
+usbdevicestoredirect:s:*;{<DeviceClassGUID>};<USBInstanceID>;<-USBInstanceID>
+```
+
+Here are some examples of using the `usbdevicestoredirect` RDP property:
+
+- To redirect all supported USB peripherals that high-level redirection doesn't redirect, use:
+
+ ```uri
+ usbdevicestoredirect:s:*
+ ```
+
+- To redirect all supported USB peripherals with a device class GUID of `{6bdd1fc6-810f-11d0-bec7-08002be2092f}`, use:
+
+ ```uri
+ usbdevicestoredirect:s:{6bdd1fc6-810f-11d0-bec7-08002be2092f}
+ ```
+
+- To redirect all supported USB peripherals that high-level redirection doesn't redirect and USB peripherals with device class GUIDs of `{6bdd1fc6-810f-11d0-bec7-08002be2092f}` and `{4d36e96c-e325-11ce-bfc1-08002be10318}`, use:
+
+ ```uri
+ usbdevicestoredirect:s:*;{6bdd1fc6-810f-11d0-bec7-08002be2092f};{4d36e96c-e325-11ce-bfc1-08002be10318}
+ ```
+
+- To redirect supported USB peripherals with instance IDs `USB\VID_095D&PID_9208\5&23639F31&0&2` and `USB\VID_045E&PID_076F\5&14D1A39&0&7`, use:
+
+ ```uri
+ usbdevicestoredirect:s:USB\VID_095D&PID_9208\5&23639F31&0&2;USB\VID_045E&PID_076F\5&14D1A39&0&7
+ ```
+
+- To redirect all supported USB peripherals that high-level redirection doesn't redirect, except for a device with an instance ID of `USB\VID_045E&PID_076F\5&14D1A39&0&7`, use:
+
+ ```uri
+ usbdevicestoredirect:s:*;-USB\VID_045E&PID_076F\5&14D1A39&0&7
+ ```
+
+- Use the following syntax to achieve the following scenario:
+ - Redirect all supported USB peripherals that high-level redirection doesn't redirect.
+ - Redirect all supported USB peripherals with a device setup class GUID of `{6bdd1fc6-810f-11d0-bec7-08002be2092f}`.
+  - Redirect a supported USB peripheral with instance ID `USB\VID_095D&PID_9208\5&23639F31&0&2`.
+ - Don't redirect a supported USB peripheral with an instance ID of `USB\VID_045E&PID_076F\5&14D1A39&0&7`.
+
+ ```uri
+ usbdevicestoredirect:s:*;{6bdd1fc6-810f-11d0-bec7-08002be2092f};USB\VID_095D&PID_9208\5&23639F31&0&2;-USB\VID_045E&PID_076F\5&14D1A39&0&7
+ ```
+
+> [!TIP]
+> For Azure Virtual Desktop, the characters `\`, `:`, and `;` must be escaped using a backslash character. This includes any device instance paths, such as `USB\\VID_045E&PID_0779\\5&21F6DCD1&0&5`. The escaping doesn't affect the redirection behavior.
+
+## Related content
+
virtual-desktop Redirection Configure Webauthn https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-configure-webauthn.md
+
+ Title: Configure WebAuthn redirection over the Remote Desktop Protocol
+description: Learn how to redirect WebAuthn requests from a remote session to a local device over the Remote Desktop Protocol. It applies to Azure Virtual Desktop, Windows 365, and Microsoft Dev Box.
+
+zone_pivot_groups: rdp-products-features
++ Last updated : 06/25/2024++
+# Configure WebAuthn redirection over the Remote Desktop Protocol
++
+You can configure the redirection behavior of WebAuthn requests from a remote session to a local device over the Remote Desktop Protocol (RDP). WebAuthn redirection enables [in-session passwordless authentication](authentication.md#in-session-authentication) using Windows Hello for Business or security devices like FIDO keys.
+
+For Azure Virtual Desktop, we recommend you enable WebAuthn redirection on your session hosts using Microsoft Intune or Group Policy, then control redirection using the host pool RDP properties.
+
+For Windows 365, you can configure your Cloud PCs using Microsoft Intune or Group Policy.
+
+For Microsoft Dev Box, you can configure your dev boxes using Microsoft Intune or Group Policy.
+
+This article provides information about the supported redirection methods and how to configure the redirection behavior for WebAuthn requests. To learn more about how redirection works, see [Redirection over the Remote Desktop Protocol](redirection-remote-desktop-protocol.md).
+
+## Prerequisites
+
+Before you can configure WebAuthn redirection, you need:
+
+- An existing host pool with session hosts.
+
+- A Microsoft Entra ID account that is assigned the [Desktop Virtualization Host Pool Contributor](rbac.md#desktop-virtualization-host-pool-contributor) built-in role-based access control (RBAC) role on the host pool as a minimum.
+
+- An existing Cloud PC.
+
+- An existing dev box.
+
+- A local Windows device with Windows Hello for Business or a security device like a FIDO USB key already configured.
+
+- To configure Microsoft Intune, you need:
+
+   - A Microsoft Entra ID account that is assigned the [Policy and Profile manager](/mem/intune/fundamentals/role-based-access-control-reference#policy-and-profile-manager) built-in RBAC role.
+ - A group containing the devices you want to configure.
+
+- To configure Group Policy, you need:
+
+ - A domain account that has permission to create or edit Group Policy objects.
+ - A security group or organizational unit (OU) containing the devices you want to configure.
+
+- You need to connect to a remote session from a supported app and platform. To view redirection support in Windows App and the Remote Desktop app, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+## WebAuthn redirection
+
+Configuration of a session host using Microsoft Intune or Group Policy, or setting an RDP property on a host pool, governs the ability to redirect WebAuthn requests from a remote session to a local device, which is subject to a priority order.
+
+The default configuration is:
+
+- **Windows operating system**: WebAuthn requests aren't blocked.
+- **Azure Virtual Desktop host pool RDP properties**: WebAuthn requests in the remote session are redirected to the local computer.
+
+> [!IMPORTANT]
+> Take care when configuring redirection settings as the most restrictive setting is the resultant behavior. For example, if you disable WebAuthn redirection on a session host with Microsoft Intune or Group Policy, but enable it with the host pool RDP property, redirection is disabled.
++
+Configuration of a Cloud PC governs the ability to redirect WebAuthn requests between the remote session and the local device, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: WebAuthn requests aren't blocked. Windows 365 enables WebAuthn redirection.
++
+Configuration of a dev box governs the ability to redirect WebAuthn requests between the remote session and the local device, and is set using Microsoft Intune or Group Policy.
+
+The default configuration is:
+
+- **Windows operating system**: WebAuthn requests aren't blocked. Microsoft Dev Box enables WebAuthn redirection.
++
+### Configure WebAuthn redirection using host pool RDP properties
+
+The Azure Virtual Desktop host pool setting *WebAuthn redirection* controls whether to redirect WebAuthn requests between the remote session and the local device. The corresponding RDP property is `redirectwebauthn:i:<value>`. For more information, see [Supported RDP properties](rdp-properties.md#device-redirection).
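+
+For example, the corresponding entry on the **Advanced** tab of the host pool RDP properties looks like this, where a value of `1` redirects WebAuthn requests to the local device and `0` doesn't:
+
+```uri
+redirectwebauthn:i:1
+```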
+
+To configure WebAuthn redirection using host pool RDP properties:
+
+1. Sign in to the [Azure portal](https://portal.azure.com/).
+
+1. In the search bar, type *Azure Virtual Desktop* and select the matching service entry.
+
+1. Select **Host pools**, then select the host pool you want to configure.
+
+1. Select **RDP Properties**, then select **Device redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png" alt-text="A screenshot showing the host pool device redirection tab in the Azure portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-host-pool.png":::
+
+1. For **WebAuthn redirection**, select the drop-down list, then select one of the following options:
+
+ - **WebAuthn requests in the remote session are not redirected to the local computer**
+ - **WebAuthn requests in the remote session are redirected to the local computer** (*default*)
+ - **Not configured**
+
+1. Select **Save**.
+
+1. To test the configuration, follow the steps in [Test WebAuthn redirection](#test-webauthn-redirection).
++
+### Configure WebAuthn redirection using Microsoft Intune or Group Policy
+
+Select the relevant tab for your scenario.
+
+# [Microsoft Intune](#tab/intune)
+
+To allow or disable WebAuthn redirection using Microsoft Intune:
+
+1. Sign in to the [Microsoft Intune admin center](https://endpoint.microsoft.com/).
+
+1. [Create or edit a configuration profile](/mem/intune/configuration/administrative-templates-windows) for **Windows 10 and later** devices, with the **Templates** profile type, and the **Administrative templates** template.
+
+1. In the **Configuration settings** tab, browse to **Computer configuration** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**, then select **Do not allow WebAuthn redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-intune-template-webauthn.png" alt-text="A screenshot showing the device and resource redirection options in the Microsoft Intune portal." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-intune-template-webauthn.png":::
+
+1. Select **Do not allow WebAuthn redirection**. In the separate pane that opens:
+
+ - To allow WebAuthn redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable WebAuthn redirection, select **Enabled**, then select **OK**.
+
+1. Select **Next**.
+
+1. *Optional*: On the **Scope tags** tab, select a scope tag to filter the profile. For more information about scope tags, see [Use role-based access control (RBAC) and scope tags for distributed IT](/mem/intune/fundamentals/scope-tags).
+
+1. On the **Assignments** tab, select the group containing the computers providing a remote session you want to configure, then select **Next**.
+
+1. On the **Review + create** tab, review the settings, then select **Create**.
+
+1. Once the policy applies to the computers providing a remote session, restart them for the settings to take effect.
+
+# [Group Policy](#tab/group-policy)
+
+To allow or disable WebAuthn redirection using Group Policy:
+
+1. Open the **Group Policy Management** console on the device you use to manage the Active Directory domain.
+
+1. Create or edit a policy that targets the computers providing a remote session you want to configure.
+
+1. Navigate to **Computer Configuration** > **Policies** > **Administrative Templates** > **Windows Components** > **Remote Desktop Services** > **Remote Desktop Session Host** > **Device and Resource Redirection**.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png" alt-text="A screenshot showing the device and resource redirection options in the Group Policy editor." lightbox="media/redirection-remote-desktop-protocol/redirection-configuration-group-policy.png":::
+
+1. Double-click the policy setting **Do not allow WebAuthn redirection** to open it.
+
+ - To allow WebAuthn redirection, select **Disabled** or **Not configured**, then select **OK**.
+
+ - To disable WebAuthn redirection, select **Enabled**, then select **OK**.
+
+1. Ensure the policy is applied to the computers providing a remote session, then restart them for the settings to take effect.
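+
+To verify the resulting configuration on a session host, you can check the registry value that backs this policy setting. A minimal PowerShell sketch, assuming the setting is stored as the `fDisableWebAuthn` value under the Terminal Services policy key (an assumption worth confirming in your environment):
+
+```powershell
+# Check whether the "Do not allow WebAuthn redirection" policy is applied.
+# Assumption: the policy writes the fDisableWebAuthn DWORD under the
+# Terminal Services policy key.
+$key = 'HKLM:\SOFTWARE\Policies\Microsoft\Windows NT\Terminal Services'
+$value = Get-ItemProperty -Path $key -Name fDisableWebAuthn -ErrorAction SilentlyContinue
+if ($null -eq $value) {
+    'Not configured: WebAuthn redirection is allowed (default).'
+} elseif ($value.fDisableWebAuthn -eq 1) {
+    'Enabled: WebAuthn redirection is disabled.'
+} else {
+    'Disabled: WebAuthn redirection is allowed.'
+}
+```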
+
+---
+## Test WebAuthn redirection
+
+Once you enable WebAuthn redirection, test it with the following steps:
+
+1. If you're using a USB security key, make sure it's plugged in first.
+
+1. Connect to a remote session using Windows App or the Remote Desktop app on a platform that supports WebAuthn redirection. For more information, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection) and [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
+
+1. In the remote session, open a website that uses WebAuthn authentication in an **InPrivate window**, such as Windows App for web browsers at [https://windows.cloud.microsoft/](https://windows.cloud.microsoft/).
+
+1. Follow the sign-in process. When the authentication step prompts you to use Windows Hello for Business or the security key, you should see a Windows Security prompt to complete the authentication, as shown in the following image when using a Windows local device.
+
+ The Windows Security prompt is on the local device and overlays the remote session, indicating that WebAuthn redirection is working.
+
+ :::image type="content" source="media/redirection-remote-desktop-protocol/redirection-webauthn.png" alt-text="A screenshot showing a WebAuthn request from the remote session to the local device." lightbox="media/redirection-remote-desktop-protocol/redirection-webauthn.png":::
+
+## Related content
+
virtual-desktop Redirection Remote Desktop Protocol https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/redirection-remote-desktop-protocol.md
+
+ Title: Redirection over the Remote Desktop Protocol
+description: Learn about redirection over the Remote Desktop Protocol, which enables users to share resources between their local device and a remote session. It applies to Azure Virtual Desktop, Windows 365, Microsoft Dev Box, Remote Desktop Services, and Remote PC connections.
+
+Last updated: 08/06/2024
+
+# Redirection over the Remote Desktop Protocol
+
+Redirection enables users to share resources and peripherals, such as the clipboard, webcams, USB devices, printers, and more, between their local device (client-side) and a remote session (server-side) over the *Remote Desktop Protocol* (RDP). Redirection aims to provide a seamless remote experience, comparable to the experience of using their local device. This experience helps users be more productive and efficient when working remotely. As an administrator, you can configure redirection to balance your security requirements with the needs of your users.
+
+This article provides detailed information about redirection methods across different peripheral classes, redirection classifications, and the supported types of resources and peripherals you can redirect.
+
+## Redirection methods and classifications
+
+RDP uses two redirection methods to redirect resources and peripherals between the local device and a remote session:
+
+- **High-level redirection**: functions as an intelligent intermediary by intercepting and optimizing all communication for a specific class of peripherals or experience. High-level redirection ensures the best possible performance for remote scenarios, but also relies on peripheral driver and application support.
+
+- **Opaque low-level redirection**: transports the raw communication of a peripheral without any attempt to interpret, understand, throttle, or optimize it for remote scenarios.
+
+ Opaque low-level redirection is used for peripherals that connect via USB where a suitable high-level redirection solution doesn't exist, and for peripherals that have particular driver or software requirements in the remote session to work properly. USB redirection happens at the port and protocol level using [USB request blocks](/windows-hardware/drivers/usbcon/communicating-with-a-usb-device) (URBs). Opaque low-level redirection is also used for peripherals that connect via serial/COM ports.
+
+Within high-level redirection, there are four overarching techniques that are used, which are classified based on the direction of the redirection and the type of resource or peripheral being redirected. The four high-level redirection classifications are:
+
+- **Peripheral reflection**: reflects a specific class of peripheral connected to the local device into a remote session. This classification includes input devices, such as keyboard, mouse, touch, pen, and trackpad.
+
+- **Data sharing**: shares and transfers data between the local device and a remote session for the clipboard.
+
+- **State reflection**: reflects the local device state into a remote session, such as its battery status and location.
+
+- **Application splitting**: splits the functionality of an application across the local device and a remote session, such as Microsoft Teams.
+
+The redirection method used can vary based on the platform, such as Windows, macOS, iOS/iPadOS, or Android, and its available resources, peripherals, and capabilities. What redirection is available in a remote session also depends on the application used. For a comparison of the support for redirection using Windows App across different platforms, see [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection).
+
+> [!IMPORTANT]
+> You should use high-level redirection whenever possible, as it provides the best performance and user experience. Opaque low-level redirection is effectively a fallback scenario, so performance, reliability, and the supported feature set of such peripherals aren't guaranteed by default.
+>
+> Some peripherals can't be redirected, such as encrypted USB storage.
+
+### USB redirection comparison
+
+The following table compares redirecting a USB peripheral using opaque low-level USB redirection to redirecting the peripheral using high-level redirection with a supported peripheral class over RDP:
+
+| Opaque low-level USB redirection | High-level redirection |
+|--|--|
+| Requires the driver for the USB peripheral to be installed in the remote session. Doesn't require the driver to be installed on the local device. | Requires the driver for the peripheral to be installed on the local device. In most cases, it doesn't require the driver to be installed in the remote session. |
+| Uses a single redirection method for many peripheral classes. | Uses a specific redirection method for each peripheral class. |
+| Forwards URB to and from the USB peripheral over the RDP connection. | Exposes high-level peripheral functionality in a remote session by using an optimized protocol for the peripheral class. |
+| The USB peripheral can't be used on the local device while it's being used in a remote session. It can only be used in one remote session at a time. | The peripheral can be used simultaneously on the local device and in a remote session. |
+| Optimized for low-latency connections; performance varies based on the peripheral driver implementation. | Optimized for LAN and WAN connections and is aware of changes in conditions, such as bandwidth and latency. |
+
+### Controlling opaque low-level USB redirection
+
+Redirecting USB peripherals using opaque low-level USB redirection is controlled by the [RDP property](rdp-properties.md) `usbdevicestoredirect:s:<value>`, where *\<value\>* is the *device instance path* in the format `USB\<Vendor ID and Product ID>\<USB instance ID>`.
+
+For some products and services, such as Azure Virtual Desktop, you can control redirection behavior by setting the RDP property value as follows:
+
+- Some USB peripherals might have functions that use opaque low-level USB redirection or high-level redirection. By default, these peripherals are redirected using high-level redirection. You can use the RDP property to force these peripherals to use opaque low-level USB redirection. To use USB audio peripherals with opaque low-level USB redirection, the audio output location must be set to play sounds on the local computer.
+
+- Use [class GUIDs](/windows-hardware/drivers/install/system-defined-device-setup-classes-available-to-vendors) to redirect or not redirect an entire class of USB peripherals.
+
+- Using the wildcard `*` as the value redirects most peripherals that don't have high-level redirection mechanisms or drivers installed. Class GUIDs can be used to redirect additional peripherals that aren't matched automatically.
+
+Values can be used on their own, or combined with each other separated by semicolons, subject to a processing order. The following table lists the valid values and the processing order:
+
+| Processing order | Value | Description |
+|:--:|:--:|--|
+| N/A | *No value specified* | Don't redirect any supported USB peripherals using opaque low-level redirection. |
+| 1 | `*` | Redirect all peripherals that aren't using high-level redirection. |
+| 2 | `{<DeviceClassGUID>}` | Redirect all peripherals that are members of the specified device setup class. |
+| 3 | `<USBInstanceID>` | Redirect a USB peripheral specified by the given device instance path. |
+| 4 | `<-USBInstanceID>` | Don't redirect a peripheral specified by the given device instance path. |
+
+When constructed as a string in the correct processing order, the syntax is:
+
+```uri
+usbdevicestoredirect:s:*;{<DeviceClassGUID>};<USBInstanceID>;<-USBInstanceID>
+```
+
+The device instance path for USB devices is constructed in three sections in the format `USB\<Device ID>\<USB instance ID>`. You can find this value in Device Manager, or by using the [Get-PnpDevice PowerShell cmdlet](/powershell/module/pnpdevice/get-pnpdevice), as shown in the sketch after this list. The three sections in order are:
+
+1. [Bus driver](/windows-hardware/drivers/kernel/bus-drivers) name, in this case *USB*.
+1. [Device ID](/windows-hardware/drivers/install/device-ids), which contains the *Vendor ID* (VID) and *Product ID* (PID) of the USB peripheral.
+1. [Instance ID](/windows-hardware/drivers/install/instance-ids), which uniquely distinguishes a device from other devices of the same type on a computer.
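+
+For example, the following sketch lists connected USB peripherals with the device instance paths you can copy into the RDP property; filtering on `USB\*` is an assumption that you only want devices on the USB bus:
+
+```powershell
+# List connected USB peripherals and the device instance paths that can be
+# used as values for the usbdevicestoredirect RDP property.
+Get-PnpDevice -PresentOnly |
+    Where-Object { $_.InstanceId -like 'USB\*' } |
+    Select-Object Class, FriendlyName, InstanceId |
+    Sort-Object Class |
+    Format-Table -AutoSize
+```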
+
+When specifying USB peripherals to redirect over RDP, you can use the device instance path. When using the device instance path, the value is specific to the port on the local device to which it's connected. For example, a peripheral connected to the first USB port has the device instance path `USB\VID_045E&PID_0779\5&21F6DCD1&0&5`, but connecting the same peripheral to the second USB port has the device instance path `USB\VID_045E&PID_0779\5&21F6DCD1&0&6`. For USB peripherals, specifying the device instance path means the peripheral is only redirected when connected to the same port.
+
+Alternatively, you can redirect an entire [device setup class](/windows-hardware/drivers/install/system-defined-device-setup-classes-available-to-vendors) of USB peripherals by using its class GUID. When using a class GUID, all peripherals on the local device that have the corresponding class GUID are redirected, regardless of the port to which they're connected. For example, using the class GUID `{4d36e96c-e325-11ce-bfc1-08002be10318}` redirects all multimedia devices. A list of all the class GUIDs is available at [System-Defined Device Setup Classes Available to Vendors](/windows-hardware/drivers/install/system-defined-device-setup-classes-available-to-vendors).
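+
+For example, combining values in the processing order described earlier, the following hypothetical string redirects all multimedia-class peripherals except the specific peripheral from the earlier device instance path example:
+
+```uri
+usbdevicestoredirect:s:{4d36e96c-e325-11ce-bfc1-08002be10318};-USB\VID_045E&PID_0779\5&21F6DCD1&0&5
+```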
+
+For some examples of how to use the RDP property, see [usbdevicestoredirect RDP property](redirection-configure-usb.md#usbdevicestoredirect-rdp-property).
+
+## Supported resources and peripherals
+
+The following table lists each supported resource or peripheral class and the recommended redirection method to use for each:
+
+| Resource or peripheral class | Redirection method | Predominant data flow direction |
+|-|||
+| All-in-one printer/scanner | Opaque low-level redirection | Bidirectional |
+| Audio input - microphone (USB or integrated) | High-level - peripheral reflection | Local to remote |
+| Audio output - speaker (USB or integrated) | High-level - peripheral reflection | Remote to local |
+| Battery (automatic, not configurable) | High-level - state reflection | Local to remote |
+| Biometric reader (only within a session, not during logon) | Opaque low-level redirection | Bidirectional |
+| Camera/webcam (USB or integrated) | High-level - peripheral reflection | Local to remote |
+| CD/DVD drive (read-only) | High-level - peripheral reflection | Local to remote |
+| Clipboard | High-level - data sharing | Bidirectional |
+| Keyboard (USB or integrated) | High-level - peripheral reflection | Local to remote |
+| Local hard drive or USB removable storage | High-level - peripheral reflection | Bidirectional |
+| Location | High-level - state reflection | Local to remote |
+| Mouse (USB or integrated) | High-level - peripheral reflection | Local to remote |
+| MTP Media Player | High-level - peripheral reflection | Local to remote |
+| Multimedia redirection | High-level - application splitting | Bidirectional |
+| Pen (USB or integrated) | High-level - peripheral reflection | Local to remote |
+| Printer (locally attached or network) | High-level - peripheral reflection | Remote to local |
+| PTP camera | High-level - peripheral reflection | Local to remote |
+| Scanner | Opaque low-level redirection | Bidirectional |
+| Serial/COM port | Opaque low-level redirection | Bidirectional |
+| Smart card reader | High-level - peripheral reflection | Bidirectional |
+| Touch (USB or integrated) | High-level - peripheral reflection | Local to remote |
+| Trackpad (USB or integrated, excluding precision touch pad (PTP) gestures) | High-level - peripheral reflection | Local to remote |
+| USB to serial adapter | Opaque low-level redirection | Bidirectional |
+| VoIP Telephone/Headset | Opaque low-level redirection | Bidirectional |
+| WebAuthn | High-level - peripheral reflection | Bidirectional |
+
+> [!NOTE]
+> - The following peripheral classes are blocked from redirection:
+>
+> - USB network adapters.
+> - USB displays.
+>
+> - Scanner redirection doesn't include TWAIN support.
+>
+> - Battery redirection is only available for Azure Virtual Desktop and Windows 365. It's automatically available and not configurable.
+
+The following diagram shows the redirection methods used for each peripheral class:
+
+## Configuration priority order
+
+Which device classes are enabled for redirection and how redirections behave are configured by an administrator of a remote session. The behavior can be configured by Microsoft Intune or Group Policy (Active Directory or local) server-side, or specified in an `.rdp` file that is used to connect to a remote session. Azure Virtual Desktop and Remote Desktop Services also have a broker service where RDP properties can be specified instead.
+
+However, certain settings can be overridden on the local device where a more restrictive configuration is required. A more restrictive setting takes precedence wherever it's configured; for example, if an administrator configures the clipboard to be redirected by default for all remote sessions, but the local device is configured to disable clipboard redirection, the clipboard isn't available in the remote session. This provides flexibility in scenarios where a subset of users or devices require more restrictive settings than the default configuration.
+
+## Related content
+
+- [Configure audio and video redirection over the Remote Desktop Protocol](redirection-configure-audio-video.md).
+- [Configure camera, webcam, and video capture redirection over the Remote Desktop Protocol](redirection-configure-camera-webcam-video-capture.md).
+- [Configure clipboard redirection over the Remote Desktop Protocol](redirection-configure-clipboard.md).
+- [Configure fixed, removable, and network drive redirection over the Remote Desktop Protocol](redirection-configure-drives-storage.md).
+- [Configure location redirection over the Remote Desktop Protocol](redirection-configure-location.md).
+- [Configure Media Transfer Protocol and Picture Transfer Protocol redirection on Windows over the Remote Desktop Protocol](redirection-configure-plug-play-mtp-ptp.md).
+- [Configure printer redirection over the Remote Desktop Protocol](redirection-configure-printers.md).
+- [Configure serial or COM port redirection over the Remote Desktop Protocol](redirection-configure-serial-com-ports.md).
+- [Configure smart card redirection over the Remote Desktop Protocol](redirection-configure-smart-cards.md).
+- [Configure USB redirection on Windows over the Remote Desktop Protocol](redirection-configure-usb.md).
+- [Configure WebAuthn redirection over the Remote Desktop Protocol](redirection-configure-webauthn.md).
+- [Supported RDP properties](rdp-properties.md).
+- [Compare Windows App features across platforms and devices](/windows-app/compare-platforms-features#redirection).
+- [Compare Remote Desktop app features across platforms and devices](compare-remote-desktop-clients.md#redirection).
virtual-desktop Service Architecture Resilience https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/service-architecture-resilience.md
This high-level diagram shows the components and responsibilities:
When a user wants to access their desktops and apps in Azure Virtual Desktop, multiple components are involved in making that connection successful. There are two separate sequences:

1. Feed discovery. The feed is the list of desktops and apps that are available to the user.
-1. A connection using the Remote Desktop Protocol to a session host.
+1. A connection over the Remote Desktop Protocol to a session host.
### Feed discovery
virtual-desktop Troubleshoot Device Redirections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-device-redirections.md
Use this article to resolve issues with device redirections in Azure Virtual Des
If WebAuthn requests from the session aren't redirected to the local PC, check to make sure you've fulfilled the following requirements:

- Are you using supported operating systems for [in-session passwordless authentication](authentication.md#in-session-passwordless-authentication) on both the local PC and session host?
-- Have you enabled WebAuthn redirection as a [device redirection](configure-device-redirections.md#webauthn-redirection)?
+- Have you enabled WebAuthn redirection as a [device redirection](redirection-configure-webauthn.md)?
If you've answered "yes" to both of the earlier questions but still don't see the option to use Windows Hello for Business or security keys when accessing Microsoft Entra resources, make sure you've enabled the FIDO2 security key method for the user account in Microsoft Entra ID. To enable this method, follow the directions in [Enable FIDO2 security key method](../active-directory/authentication/howto-authentication-passwordless-security-key.md#enable-fido2-security-key-method).
virtual-desktop Connect Ios Ipados https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-ios-ipados.md
Once you've subscribed to a workspace, its content will update automatically reg
If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available. For more information, see [Test the beta client](client-features-ios-ipados.md#test-the-beta-client).
+> [!IMPORTANT]
+> The Remote Desktop app is changing to Windows App. To ensure you can validate the upcoming Windows App update before it's released into the store, the Windows App preview is now available in the [Remote Desktop Beta channels](client-features-ios-ipados.md#test-the-beta-client) where you can test the experience of updating from Remote Desktop to Windows App. To learn more about Windows App, see [Get started with Windows App to connect to devices and apps](/windows-app/get-started-connect-devices-desktops-apps).
+
## Next steps

To learn more about the features of the Remote Desktop client for iOS and iPadOS, check out [Use features of the Remote Desktop client for iOS and iPadOS when connecting to Azure Virtual Desktop](client-features-ios-ipados.md).
virtual-desktop Connect Macos https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/users/connect-macos.md
Once you've subscribed to a workspace, its content will update automatically eve
If you want to help us test new builds before they're released, you should download our beta client. Organizations can use the beta client to validate new versions for their users before they're generally available. For more information, see [Test the beta client](client-features-macos.md#test-the-beta-client).
+> [!IMPORTANT]
+> The Remote Desktop app is changing to Windows App. To ensure you can validate the upcoming Windows App update before it's released into the store, the Windows App preview is now available in the [Remote Desktop Beta channels](client-features-macos.md#test-the-beta-client) where you can test the experience of updating from Remote Desktop to Windows App. To learn more about Windows App, see [Get started with Windows App to connect to devices and apps](/windows-app/get-started-connect-devices-desktops-apps).
+
## Next steps

- To learn more about the features of the Remote Desktop client for macOS, check out [Use features of the Remote Desktop client for macOS when connecting to Azure Virtual Desktop](client-features-macos.md).
virtual-desktop Watermarking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/watermarking.md
You'll need the following things before you can use watermarking:
- Windows App for:
  - Windows
  - macOS
+ - iOS and iPadOS
  - Web browser
- [Azure Virtual Desktop Insights](azure-monitor.md) configured for your environment.
virtual-machine-scale-sets Azure Hybrid Benefit Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/azure-hybrid-benefit-linux.md
description: Learn how Azure Hybrid Benefit can apply to Virtual Machine Scale S
-+
virtual-machine-scale-sets Tutorial Autoscale Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/tutorial-autoscale-cli.md
description: Learn how to use the Azure CLI to automatically scale a Virtual Mac
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-overview.md
description: Learn about the different ways that you can automatically scale an
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Autoscale Portal https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-autoscale-portal.md
description: How to create autoscale rules for Virtual Machine Scale Sets in the
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Manage Fault Domains https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-manage-fault-domains.md
description: Learn how to choose the right number of FDs while creating a Virtua
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Mvss Guest Based Autoscale Linux https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-mvss-guest-based-autoscale-linux.md
description: Learn how to autoscale using guest metrics in a Linux Virtual Machi
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-troubleshoot.md
description: Troubleshoot autoscale with Virtual Machine Scale Sets. Understand
-+ Last updated 06/14/2024
virtual-machine-scale-sets Virtual Machine Scale Sets Use Availability Zones https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/virtual-machine-scale-sets-use-availability-zones.md
description: Learn how to create Azure Virtual Machine Scale Sets that use Avail
-+ Last updated 06/14/2024
virtual-machines Disks Deploy Premium V2 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-deploy-premium-v2.md
Title: Deploy a Premium SSD v2 managed disk
description: Learn how to deploy a Premium SSD v2 and about its regional availability.
Previously updated: 02/05/2024
Last updated: 08/09/2024
virtual-machines Nc Family https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/gpu-accelerated/nc-family.md
[!INCLUDE [nc-series-specs](./includes/nc-series-specs.md)]
-### NCads_-_H100_v5-series
+### NCads_H100_v5-series
-[View the full NCads_-_H100_v5-series page](./ncadsh100v5-series.md).
+[View the full NCads_H100_v5-series page](./ncadsh100v5-series.md).
++
+### NCCads_H100_v5-series
+
+[View the full NCCads_H100_v5-series page](./nccadsh100v5-series.md).
+
### NCv2-series
virtual-machines Nccadsh100v5 Series https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/gpu-accelerated/nccadsh100v5-series.md
+
+ Title: NCCads_H100_v5 size series
+description: Information on and specifications of the NCCads_H100_v5-series sizes
+
+Last updated: 08/06/2024
+
+# NCCads_H100_v5 sizes series
++
+## Host specifications
+
+## Feature support
+[Premium Storage](../../premium-storage-performance.md): Supported <br>[Premium Storage caching](../../premium-storage-performance.md): Supported <br>[Live Migration](../../maintenance-and-updates.md): Not Supported <br>[Memory Preserving Updates](../../maintenance-and-updates.md): Not Supported <br>[Generation 2 VMs](../../generation-2.md): Supported <br>[Generation 1 VMs](../../generation-2.md): Not Supported <br>[Accelerated Networking](../../../virtual-network/create-vm-accelerated-networking-cli.md): Not Supported <br>[Ephemeral OS Disk](../../ephemeral-os-disks.md): Supported <br>[Nested Virtualization](/virtualization/hyper-v-on-windows/user-guide/nested-virtualization): Not Supported <br>
+
+## Sizes in series
+
+### [Basics](#tab/sizebasic)
+
+vCPUs (Qty.) and Memory for each size
+
+| Size Name | vCPUs (Qty.) | Memory (GB) |
+| | | |
+| Standard_NCC40ads_H100_v5 | 40 | 320 |
+
+#### VM Basics resources
+- [Check vCPU quotas](../../../virtual-machines/quotas.md)
+
+### [Local storage](#tab/sizestoragelocal)
+
+Local (temp) storage info for each size
+
+| Size Name | Max Temp Storage Disks (Qty.) | Temp Disk Size (GiB) | Temp Disk Random Read (RR)<sup>1</sup> IOPS | Temp Disk Random Read (RR)<sup>1</sup> Speed (MBps) | Temp Disk Random Write (RW)<sup>1</sup> IOPS | Temp Disk Random Write (RW)<sup>1</sup> Speed (MBps) | Local-Special-Disk-Count | Local-Special-Disk-Size-GB | Local-Special-Disk-RR-IOPS | Local-Special-Disk-RR-MBps |
+| | | | | | | | | | | |
+| Standard_NCC40ads_H100_v5 | 1 | 800 | | | | | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Temp disk speed often differs between RR (Random Read) and RW (Random Write) operations. RR operations are typically faster than RW operations. The RW speed is usually slower than the RR speed on series where only the RR speed value is listed.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
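+
+As a quick check of the GiB-to-GB conversion described above, a minimal PowerShell sketch:
+
+```powershell
+# Convert a capacity in GiB (1024^3 bytes) to GB (1000^3 bytes).
+$gib = 1023
+$gb  = $gib * [math]::Pow(1024, 3) / [math]::Pow(1000, 3)
+'{0} GiB = {1:N1} GB' -f $gib, $gb   # Outputs: 1023 GiB = 1,098.4 GB
+```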
+
+### [Remote storage](#tab/sizestorageremote)
+
+Remote (uncached) storage info for each size
+
+| Size Name | Max Remote Storage Disks (Qty.) | Uncached Disk IOPS | Uncached Disk Speed (MBps) | Uncached Disk Burst<sup>1</sup> IOPS | Uncached Disk Burst<sup>1</sup> Speed (MBps) | Uncached Special<sup>2</sup> Disk IOPS | Uncached Special<sup>2</sup> Disk Speed (MBps) | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk IOPS | Uncached Burst<sup>1</sup> Special<sup>2</sup> Disk Speed (MBps) |
+| | | | | | | | | | |
+| Standard_NCC40ads_H100_v5 | 8 | 100000 | 3000 | | | | | | |
+
+#### Storage resources
+- [Introduction to Azure managed disks](../../../virtual-machines/managed-disks-overview.md)
+- [Azure managed disk types](../../../virtual-machines/disks-types.md)
+- [Share an Azure managed disk](../../../virtual-machines/disks-shared.md)
+
+#### Table definitions
+- <sup>1</sup>Some sizes support [bursting](../../disk-bursting.md) to temporarily increase disk performance. Burst speeds can be maintained for up to 30 minutes at a time.
+- <sup>2</sup>Special Storage refers to either [Ultra Disk](../../../virtual-machines/disks-enable-ultra-ssd.md) or [Premium SSD v2](../../../virtual-machines/disks-deploy-premium-v2.md) storage.
+- Storage capacity is shown in units of GiB or 1024^3 bytes. When you compare disks measured in GB (1000^3 bytes) to disks measured in GiB (1024^3) remember that capacity numbers given in GiB may appear smaller. For example, 1023 GiB = 1098.4 GB.
+- Disk throughput is measured in input/output operations per second (IOPS) and MBps where MBps = 10^6 bytes/sec.
+- Data disks can operate in cached or uncached modes. For cached data disk operation, the host cache mode is set to ReadOnly or ReadWrite. For uncached data disk operation, the host cache mode is set to None.
+- To learn how to get the best storage performance for your VMs, see [Virtual machine and disk performance](../../../virtual-machines/disks-performance.md).
++
+### [Network](#tab/sizenetwork)
+
+Network interface info for each size
+
+| Size Name | Max NICs (Qty.) | Max Bandwidth (Mbps) |
+| | | |
+| Standard_NCC40ads_H100_v5 | 2 | 40000 |
+
+#### Networking resources
+- [Virtual networks and virtual machines in Azure](../../../virtual-network/network-overview.md)
+- [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md)
+
+#### Table definitions
+- Expected network bandwidth is the maximum aggregated bandwidth allocated per VM type across all NICs, for all destinations. For more information, see [Virtual machine network bandwidth](../../../virtual-network/virtual-machine-network-throughput.md).
+- Upper limits aren't guaranteed. Limits offer guidance for selecting the right VM type for the intended application. Actual network performance will depend on several factors including network congestion, application loads, and network settings. For information on optimizing network throughput, see [Optimize network throughput for Azure virtual machines](../../../virtual-network/virtual-network-optimize-network-bandwidth.md).
+- To achieve the expected network performance on Linux or Windows, you may need to select a specific version or optimize your VM. For more information, see [Bandwidth/Throughput testing (NTTTCP)](../../../virtual-network/virtual-network-bandwidth-testing.md).
+
+### [Accelerators](#tab/sizeaccelerators)
+
+Accelerator (GPUs, FPGAs, etc.) info for each size
+
+| Size Name | Accelerators (Qty.) | Accelerator-Memory (GB) |
+| | | |
+| Standard_NCC40ads_H100_v5 | 1 | 94 |
+
+---
virtual-machines Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/sizes/overview.md
List of GPU optimized VM size families:
| Family | Workloads | Series List | |-|||
-| [NC-family](./gpu-accelerated/nc-family.md) | Compute-intensive <br> Graphics-intensive <br> Visualization | [NC-series](./gpu-accelerated/nc-family.md#nc-series-v1) <br> [NCads_H100_v5-series](./gpu-accelerated/nc-family.md#ncads_-_h100_v5-series) <br> [NCv2-series](./gpu-accelerated/nc-family.md#ncv2-series) <br> [NCv3-series](./gpu-accelerated/nc-family.md#ncv3-series) <br> [NCasT4_v3-series](./gpu-accelerated/nc-family.md#ncast4_v3-series) <br> [NC_A100_v4-series](./gpu-accelerated/nc-family.md#nc_a100_v4-series)|
+| [NC-family](./gpu-accelerated/nc-family.md) | Compute-intensive <br> Graphics-intensive <br> Visualization | [NC-series](./gpu-accelerated/nc-family.md#nc-series-v1) <br> [NCads_H100_v5-series](./gpu-accelerated/nc-family.md#ncads_h100_v5-series) <br> [NCCads_H100_v5-series](./gpu-accelerated/nc-family.md#nccads_h100_v5-series) <br> [NCv2-series](./gpu-accelerated/nc-family.md#ncv2-series) <br> [NCv3-series](./gpu-accelerated/nc-family.md#ncv3-series) <br> [NCasT4_v3-series](./gpu-accelerated/nc-family.md#ncast4_v3-series) <br> [NC_A100_v4-series](./gpu-accelerated/nc-family.md#nc_a100_v4-series)|
| [ND-family](./gpu-accelerated/nd-family.md) | Large memory compute-intensive <br> Large memory graphics-intensive <br> Large memory visualization | [ND_MI300X_v5-series](./gpu-accelerated/nd-family.md#nd_mi300x_v5-series) <br> [ND-H100-v5-series](./gpu-accelerated/nd-family.md#nd_h100_v5-series) <br> [NDm_A100_v4-series](./gpu-accelerated/nd-family.md#ndm_a100_v4-series) <br> [ND_A100_v4-series](./gpu-accelerated/nd-family.md#nd_a100_v4-series) | | [NG-family](./gpu-accelerated/ng-family.md) | Virtual Desktop (VDI) <br> Cloud gaming | [NGads V620-series](./gpu-accelerated/ng-family.md#ngads-v620-series) | | [NV-family](./gpu-accelerated/nv-family.md) | Virtual desktop (VDI) <br> Single-precision compute <br> Video encoding and rendering | [NV-series](./gpu-accelerated/nv-family.md#nv-series-v1) <br> [NVv3-series](./gpu-accelerated/nv-family.md#nvv3-series) <br> [NVv4-series](./gpu-accelerated/nv-family.md#nvv4-series) <br> [NVadsA10_v5-series](./gpu-accelerated/nv-family.md#nvads-a10-v5-series) <br> [Previous-gen NV-family](./previous-gen-sizes-list.md#gpu-accelerated-previous-gen-sizes) |
virtual-machines Oracle Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/workloads/oracle/oracle-migration.md
Title: Migrate Oracle workloads to Azure VMs
description: Learn how to migrate Oracle workloads to Azure VMs.
-+
This article shows how to move your Oracle workload from your on-premises enviro
## Discovery
-Migration begins with a detailed assessment of the Oracle product portfolio. The current infrastructure that supports Oracle database and apps, database versions, and types of applications that use Oracle database are: Oracle ([EBS](https://www.oracle.com/in/applications/ebusiness/), [Siebel](https://www.oracle.com/in/cx/siebel/), [People Soft](https://www.oracle.com/in/applications/peoplesoft/), [JDE](https://www.oracle.com/in/applications/jd-edwards-enterpriseone/), and others) and non-Microsoft partner offerings like [SAP](https://pages.community.sap.com/topics/oracle) or custom applications. The existing Oracle database can operate on servers, Oracle Real Application Clusters (RAC), or non-Microsoft partner RAC. For applications, we need to discover size of infrastructure that can be done easily by using Azure Migrate based discovery. For database, the approach is to get allowed with restrictions (AWR) reports on peak load to move on to design steps.
+Migration begins with a detailed assessment of the Oracle product portfolio: the current infrastructure that supports the Oracle database and apps, the database versions, and the types of applications that use the Oracle database, whether Oracle applications ([EBS](https://www.oracle.com/in/applications/ebusiness/), [Siebel](https://www.oracle.com/in/cx/siebel/), [PeopleSoft](https://www.oracle.com/in/applications/peoplesoft/), [JDE](https://www.oracle.com/in/applications/jd-edwards-enterpriseone/), and others) or non-Microsoft partner offerings like [SAP](https://pages.community.sap.com/topics/oracle) or custom applications. The existing Oracle database can operate on standalone servers, Oracle Real Application Clusters (RAC), or non-Microsoft partner RAC. For applications, you need to discover the size of the infrastructure, which can be done easily by using Azure Migrate based discovery. For the database, the approach is to capture Automatic Workload Repository (AWR) reports at peak load before moving on to the design steps.
## Design
vpn-gateway Ikev2 Openvpn From Sstp https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/ikev2-openvpn-from-sstp.md
A point-to-site (P2S) VPN gateway connection lets you create a secure connection
Point-to-site VPN can use one of the following protocols:
-* **OpenVPN&reg; Protocol**, an SSL/TLS based VPN protocol. An SSL VPN solution can penetrate firewalls, since most firewalls open TCP port 443 outbound, which SSL uses. OpenVPN can be used to connect from Android, iOS (versions 11.0 and above), Windows, Linux, and Mac devices (macOS versions 10.13 and above).
+* **OpenVPN&reg; Protocol**, an SSL/TLS based VPN protocol. An SSL VPN solution can pass through firewalls, since most firewalls open TCP port 443 outbound, which SSL uses. OpenVPN can be used to connect from Android, iOS (versions 11.0 and above), Windows, Linux, and Mac devices (macOS versions 12.x and above).
* **Secure Socket Tunneling Protocol (SSTP)**, a proprietary SSL-based VPN protocol. An SSL VPN solution can penetrate firewalls, since most firewalls open TCP port 443 outbound, which SSL uses. SSTP is only supported on Windows devices. Azure supports all versions of Windows that have SSTP (Windows 7 and later). **SSTP supports up to 128 concurrent connections only regardless of the gateway SKU**.