Service | Microsoft Docs article | Related commit history on GitHub | Change details |
---|---|---|---|
active-directory-b2c | Identity Provider Twitter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/identity-provider-twitter.md | Title: Set up sign-up and sign-in with a Twitter account + Title: Set up sign-up and sign-in with an X account -description: Provide sign-up and sign-in to customers with Twitter accounts in your applications using Azure Active Directory B2C. +description: Provide sign-up and sign-in to customers with X accounts in your applications using Azure Active Directory B2C. -#Customer Intent: As a developer setting up sign-up and sign-in with a Twitter account using Azure Active Directory B2C, I want to configure Twitter as an identity provider so that I can enable users to sign in with their Twitter accounts. +#Customer Intent: As a developer setting up sign-up and sign-in with an X account using Azure Active Directory B2C, I want to configure X as an identity provider so that I can enable users to sign in with their X accounts. -# Set up sign-up and sign-in with a Twitter account using Azure Active Directory B2C +# Set up sign-up and sign-in with an X account using Azure Active Directory B2C [!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)] ::: zone pivot="b2c-custom-policy" zone_pivot_groups: b2c-policy-type ## Create an application -To enable sign-in for users with a Twitter account in Azure AD B2C, you need to create a Twitter application. If you don't already have a Twitter account, you can sign up at [`https://twitter.com/signup`](https://twitter.com/signup). You also need to [Apply for a developer account](https://developer.twitter.com/). For more information, see [Apply for access](https://developer.twitter.com/en/apply-for-access). +To enable sign-in for users with an X account in Azure AD B2C, you need to create an X application. If you don't already have an X account, you can sign up at [`https://x.com/signup`](https://x.com/signup). You also need to [Apply for a developer account](https://developer.x.com/). For more information, see [Apply for access](https://developer.x.com/en/apply-for-access). ::: zone pivot="b2c-custom-policy" -1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials. +1. Sign in to the [X Developer Portal](https://developer.x.com/portal/projects-and-apps) with your X account credentials. 1. Select the **+ Create Project** button. 1. Under the **Project name** tab, enter a preferred name for your project, and then select the **Next** button. 1. Under the **Use case** tab, select your preferred use case, and then select **Next**. To enable sign-in for users with a Twitter account in Azure AD B2C, you need to 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant-name.b2clogin.com/your-tenant-name.onmicrosoft.com/your-policy-id/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace: - `your-tenant-name` with the name of your tenant. - `your-domain-name` with your custom domain.- - `your-policy-id` with the identifier of your user flow. For example, `b2c_1a_signup_signin_twitter`. + - `your-policy-id` with the identifier of your user flow. 
For example, `b2c_1a_signup_signin_x`. 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`. 1. (Optional) Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application. 1. (Optional) Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application. To enable sign-in for users with a Twitter account in Azure AD B2C, you need to ::: zone pivot="b2c-user-flow" -1. Sign in to the [Twitter Developer Portal](https://developer.twitter.com/portal/projects-and-apps) with your Twitter account credentials. +1. Sign in to the [X Developer Portal](https://developer.x.com/portal/projects-and-apps) with your X account credentials. 1. Select the **+ Create Project** button. 1. Under the **Project name** tab, enter a preferred name for your project, and then select the **Next** button. 1. Under the **Use case** tab, select your preferred use case, and then select **Next**. 1. Under the **Project description** tab, enter your project description, and then select the **Next** button. 1. Under the **App name** tab, enter a name for your app, such as *azureadb2c*, and then select the **Next** button.-1. Under **Keys & Tokens** tab, copy the value of **API Key** and **API Key Secret** for later. You use both of them to configure Twitter as an identity provider in your Azure AD B2C tenant. +1. Under the **Keys & Tokens** tab, copy the values of **API Key** and **API Key Secret** for later. You use both of them to configure X as an identity provider in your Azure AD B2C tenant. 1. Select **App settings** to open the app settings. 1. At the lower part of the page, under **User authentication settings**, select **Set up**. 1. Under **Type of app**, select your appropriate app type, such as *Web App*. To enable sign-in for users with a Twitter account in Azure AD B2C, you need to 1. For the **Callback URI/Redirect URL**, enter `https://your-tenant.b2clogin.com/your-tenant-name.onmicrosoft.com/your-user-flow-name/oauth1/authresp`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name/your-tenant-name.onmicrosoft.com/your-user-flow-Id/oauth1/authresp`. Use all lowercase letters when entering your tenant name and user flow ID even if they are defined with uppercase letters in Azure AD B2C. Replace: - `your-tenant-name` with the name of your tenant. - `your-domain-name` with your custom domain.- - `your-user-flow-name` with the identifier of your user flow. For example, `b2c_1_signup_signin_twitter`. + - `your-user-flow-name` with the identifier of your user flow. For example, `b2c_1_signup_signin_x`. 1. For the **Website URL**, enter `https://your-tenant.b2clogin.com`. Replace `your-tenant` with the name of your tenant. For example, `https://contosob2c.b2clogin.com`. If you use a [custom domain](custom-domain.md), enter `https://your-domain-name`. 1. Enter a URL for the **Terms of service**, for example `http://www.contoso.com/tos`. The policy URL is a page you maintain to provide terms and conditions for your application. 1. Enter a URL for the **Privacy policy**, for example `http://www.contoso.com/privacy`. The policy URL is a page you maintain to provide privacy information for your application. 
To enable sign-in for users with a Twitter account in Azure AD B2C, you need to ::: zone pivot="b2c-user-flow" -## Configure Twitter as an identity provider +## Configure X as an identity provider 1. Sign in to the [Azure portal](https://portal.azure.com/) as the global administrator of your Azure AD B2C tenant. 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu. 1. Choose **All services** in the top-left corner of the Azure portal, search for and select **Azure AD B2C**. 1. Select **Identity providers**, then select **Twitter**.-1. Enter a **Name**. For example, *Twitter*. -1. For the **Client ID**, enter the *API Key* of the Twitter application that you created earlier. +1. Enter a **Name**. For example, *X*. +1. For the **Client ID**, enter the *API Key* of the X application that you created earlier. 1. For the **Client secret**, enter the *API key secret* that you recorded. 1. Select **Save**. -## Add Twitter identity provider to a user flow +## Add X identity provider to a user flow -At this point, the Twitter identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the Twitter identity provider to a user flow: +At this point, the X identity provider has been set up, but it's not yet available in any of the sign-in pages. To add the X identity provider to a user flow: 1. In your Azure AD B2C tenant, select **User flows**.-1. Select the user flow that you want to add the Twitter identity provider. +1. Select the user flow to which you want to add the X identity provider. 1. Under **Social identity providers**, select **Twitter**. 1. Select **Save**. At this point, the Twitter identity provider has been set up, but it's not yet a 1. To test your policy, select **Run user flow**. 1. For **Application**, select the web application named *testapp1* that you previously registered. The **Reply URL** should show `https://jwt.ms`. 1. Select the **Run user flow** button.-1. From the sign-up or sign-in page, select **Twitter** to sign in with Twitter account. +1. From the sign-up or sign-in page, select **Twitter** to sign in with an X account. ::: zone-end At this point, the Twitter identity provider has been set up, but it's not yet a ## Create a policy key -You need to store the secret key that you previously recorded for Twitter app in your Azure AD B2C tenant. +You need to store the secret key that you previously recorded for the X app in your Azure AD B2C tenant. 1. Sign in to the [Azure portal](https://portal.azure.com/). 1. If you have access to multiple tenants, select the **Settings** icon in the top menu to switch to your Azure AD B2C tenant from the **Directories + subscriptions** menu. You need to store the secret key that you previously recorded for Twitter app in 1. On the left menu, under **Policies**, select **Identity Experience Framework**. 1. Select **Policy Keys** and then select **Add**. 1. For **Options**, choose `Manual`.-1. Enter a **Name** for the policy key. For example, `TwitterSecret`. The prefix `B2C_1A_` is added automatically to the name of your key. +1. Enter a **Name** for the policy key. For example, `XSecret`. The prefix `B2C_1A_` is added automatically to the name of your key. 1. For **Secret**, enter your *API key secret* value that you previously recorded. 1. For **Key usage**, select `Signature`. 1. Click **Create**. 
-## Configure Twitter as an identity provider +## Configure X as an identity provider -To enable users to sign in using a Twitter account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated. +To enable users to sign in using an X account, you need to define the account as a claims provider that Azure AD B2C can communicate with through an endpoint. The endpoint provides a set of claims that are used by Azure AD B2C to verify that a specific user has authenticated. -You can define a Twitter account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy. Refer to the custom policy starter pack that you downloaded in the Prerequisites of this article. +You can define an X account as a claims provider by adding it to the **ClaimsProviders** element in the extension file of your policy. Refer to the custom policy starter pack that you downloaded in the Prerequisites of this article. 1. Open *TrustFrameworkExtensions.xml*. 2. Find the **ClaimsProviders** element. If it does not exist, add it under the root element. You can define a Twitter account as a claims provider by adding it to the **Clai ```xml <ClaimsProvider>- <Domain>twitter.com</Domain> - <DisplayName>Twitter</DisplayName> + <Domain>x.com</Domain> + <DisplayName>X</DisplayName> <TechnicalProfiles> <TechnicalProfile Id="Twitter-OAuth1">- <DisplayName>Twitter</DisplayName> + <DisplayName>X</DisplayName> <Protocol Name="OAuth1" /> <Metadata> <Item Key="ProviderName">Twitter</Item> You can define a Twitter account as a claims provider by adding it to the **Clai <Item Key="request_token_endpoint">https://api.twitter.com/oauth/request_token</Item> <Item Key="ClaimsEndpoint">https://api.twitter.com/1.1/account/verify_credentials.json?include_email=true</Item> <Item Key="ClaimsResponseFormat">json</Item>- <Item Key="client_id">Your Twitter application API key</Item> + <Item Key="client_id">Your X application API key</Item> </Metadata> <CryptographicKeys> <Key Id="client_secret" StorageReferenceId="B2C_1A_TwitterSecret" /> You can define a Twitter account as a claims provider by adding it to the **Clai 1. Select your relying party policy, for example `B2C_1A_signup_signin`. 1. For **Application**, select a web application that you [previously registered](tutorial-register-applications.md). The **Reply URL** should show `https://jwt.ms`. 1. Select the **Run now** button.-1. From the sign-up or sign-in page, select **Twitter** to sign in with Twitter account. +1. From the sign-up or sign-in page, select **Twitter** to sign in with an X account. ::: zone-end If the sign-in process is successful, your browser is redirected to `https://jwt.ms`, which displays the contents of the token returned by Azure AD B2C. > [!TIP]-> If you're facing `unauthorized` error while testing this identity provider, make sure you use the correct Twitter API Key and API Key Secret, or try to apply for [elevated](https://developer.twitter.com/en/portal/products/elevated) access. Also, we recommend you've a look at [Twitter's projects structure](https://developer.twitter.com/en/docs/projects/overview), if you registered your app before the feature was available. 
+> If you're facing an `unauthorized` error while testing this identity provider, make sure you use the correct X API Key and API Key Secret, or try to apply for [elevated](https://developer.x.com/en/portal/products/elevated) access. Also, we recommend you have a look at [X's projects structure](https://developer.x.com/en/docs/projects/overview) if you registered your app before the feature was available. |
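To make the **Callback URI/Redirect URL** pattern above concrete, here's a minimal sketch that assembles it from hypothetical tenant and user-flow values; remember both must be lowercase:

```bash
# Hypothetical tenant name and user flow ID, lowercase as required above.
TENANT="contosob2c"
POLICY_ID="b2c_1a_signup_signin_x"

echo "https://${TENANT}.b2clogin.com/${TENANT}.onmicrosoft.com/${POLICY_ID}/oauth1/authresp"
# => https://contosob2c.b2clogin.com/contosob2c.onmicrosoft.com/b2c_1a_signup_signin_x/oauth1/authresp
```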
active-directory-b2c | Oauth1 Technical Profile | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/oauth1-technical-profile.md | -#Customer intent: As a developer implementing Azure Active Directory B2C custom policies, I want to define an OAuth1 technical profile, so that I can federate with an OAuth1 based identity provider like Twitter and allow users to sign in with their existing social or enterprise identities. +#Customer intent: As a developer implementing Azure Active Directory B2C custom policies, I want to define an OAuth1 technical profile, so that I can federate with an OAuth1-based identity provider like X and allow users to sign in with their existing social or enterprise identities. -Azure Active Directory B2C (Azure AD B2C) provides support for the [OAuth 1.0 protocol](https://tools.ietf.org/html/rfc5849) identity provider. This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With an OAuth1 technical profile, you can federate with an OAuth1 based identity provider, such as Twitter. Federating with the identity provider allows users to sign in with their existing social or enterprise identities. +Azure Active Directory B2C (Azure AD B2C) provides support for identity providers that use the [OAuth 1.0 protocol](https://tools.ietf.org/html/rfc5849). This article describes the specifics of a technical profile for interacting with a claims provider that supports this standardized protocol. With an OAuth1 technical profile, you can federate with an OAuth1-based identity provider, such as X. Federating with the identity provider allows users to sign in with their existing social or enterprise identities. ## Protocol The **Name** attribute of the **Protocol** element needs to be set to `OAuth1`. ```xml <TechnicalProfile Id="Twitter-OAUTH1">- <DisplayName>Twitter</DisplayName> + <DisplayName>X</DisplayName> <Protocol Name="OAuth1" /> ... ``` The **OutputClaims** element contains a list of claims returned by the OAuth1 id The **OutputClaimsTransformations** element may contain a collection of **OutputClaimsTransformation** elements that are used to modify the output claims or generate new ones. -The following example shows the claims returned by the Twitter identity provider: +The following example shows the claims returned by the X identity provider: - The **user_id** claim that is mapped to the **issuerUserId** claim. - The **screen_name** claim that is mapped to the **displayName** claim. When you configure the redirect URI of your identity provider, enter `https://{t Examples: -- [Add Twitter as an OAuth1 identity provider by using custom policies](identity-provider-twitter.md)+- [Add X as an OAuth1 identity provider by using custom policies](identity-provider-twitter.md) |
active-directory-b2c | Partner Keyless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/partner-keyless.md | Title: Tutorial to configure Keyless with Azure Active Directory B2C -description: Tutorial to configure Sift Keyless with Azure Active Directory B2C for passwordless authentication +description: Tutorial to configure Keyless with Azure Active Directory B2C for passwordless authentication Previously updated : 06/21/2024 Last updated : 08/09/2024 -Learn to configure Azure Active Directory B2C (Azure AD B2C) with the Sift Keyless passwordless solution. With Azure AD B2C as an identity provider (IdP), integrate Keyless with customer applications to provide passwordless authentication. The Keyless Zero-Knowledge Biometric (ZKB) is passwordless multifactor authentication that helps eliminate fraud, phishing, and credential reuse, while enhancing the customer experience and protecting privacy. +Learn to configure Azure Active Directory B2C (Azure AD B2C) with the Keyless passwordless solution. With Azure AD B2C as an identity provider (IdP), integrate Keyless with customer applications to provide passwordless authentication. The Keyless Zero-Knowledge Biometric (ZKB) is passwordless multifactor authentication that helps eliminate fraud, phishing, and credential reuse, while enhancing the customer experience and protecting privacy. Go to keyless.io to learn about: -* [Sift Keyless](https://keyless.io/) +* [Keyless](https://keyless.io/) * [How Keyless uses zero-knowledge proofs to protect your biometric data](https://keyless.io/blog/post/how-keyless-uses-zero-knowledge-proofs-to-protect-your-biometric-data) ## Prerequisites The Keyless integration includes the following components: * **Azure AD B2C** – authorization server that verifies user credentials. Also known as the IdP. * **Web and mobile applications** – mobile or web applications to protect with Keyless and Azure AD B2C-* **The Keyless Authenticator mobile app** – Sift mobile app for authentication to the Azure AD B2C enabled applications +* **The Keyless Authenticator mobile app** – mobile app for authentication to Azure AD B2C-enabled applications The following architecture diagram illustrates an implementation. |
active-directory-b2c | Userjourneys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/userjourneys.md | The **ClaimsProviderSelection** element contains the following attributes: ### Claims provider selection example -In the following orchestration step, the user can choose to sign in with Facebook, LinkedIn, Twitter, Google, or a local account. If the user selects one of the social identity providers, the second orchestration step executes with the selected claim exchange specified in the `TargetClaimsExchangeId` attribute. The second orchestration step redirects the user to the social identity provider to complete the sign-in process. If the user chooses to sign in with the local account, Azure AD B2C stays on the same orchestration step (the same sign-up page or sign-in page) and skips the second orchestration step. +In the following orchestration step, the user can choose to sign in with Facebook, LinkedIn, X, Google, or a local account. If the user selects one of the social identity providers, the second orchestration step executes with the selected claim exchange specified in the `TargetClaimsExchangeId` attribute. The second orchestration step redirects the user to the social identity provider to complete the sign-in process. If the user chooses to sign in with the local account, Azure AD B2C stays on the same orchestration step (the same sign-up page or sign-in page) and skips the second orchestration step. ```xml <OrchestrationStep Order="1" Type="CombinedSignInAndSignUp" ContentDefinitionReferenceId="api.signuporsignin"> |
ai-services | Choose Model Feature | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/choose-model-feature.md | The following decision charts highlight the features of each **Document Intellig | Document type | Data to extract | Your best solution | | --|--|-|+|**US Unified Tax**|You want to extract key information across all tax forms of W-2, 1040, 1099, 1098 from a single file without running any custom classification of your own.|[**US Unified tax model**](concept-tax-document.md)| |**US Tax W-2 tax**|You want to extract key information such as salary, wages, and taxes withheld.|[**US tax W-2 model**](concept-tax-document.md)| |**US Tax 1098**|You want to extract mortgage interest details such as principal, points, and tax.|[**US tax 1098 model**](concept-tax-document.md)| |**US Tax 1098-E**|You want to extract student loan interest details such as lender and interest amount.|[**US tax 1098-E model**](concept-tax-document.md)| |**US Tax 1098-T**|You want to extract qualified tuition details such as scholarship adjustments, student status, and lender information.|[**US tax 1098-T model**](concept-tax-document.md)| |**US Tax 1099 (Variations)**|You want to extract information from `1099` forms and their variations (A, B, C, CAP, DIV, G, H, INT, K, LS, LTC, MISC, NEC, OID, PATR, Q, QA, R, S, SA, SB).|[**US tax 1099 model**](concept-tax-document.md)| |**US Tax 1040 (Variations)**|You want to extract information from `1040` forms and their variations (Schedule 1, Schedule 2, Schedule 3, Schedule 8812, Schedule A, Schedule B, Schedule C, Schedule D, Schedule E, Schedule `EIC`, Schedule F, Schedule H, Schedule J, Schedule R, Schedule `SE`, Schedule Senior).|[**US tax 1040 model**](concept-tax-document.md)|+|**Bank Statement** |You want to extract key information from a US bank statement. | [**Bank Statement**](concept-bank-statement.md)| +|**Check** |You want to extract key information from a check document. | [**Bank Check**](concept-bank-check.md)| |**Contract** (legal agreement between parties).|You want to extract contract agreement details such as parties, dates, and intervals.|[**Contract model**](concept-contract.md)| |**Health insurance card** or health insurance ID.| You want to extract key information such as insurer, member ID, prescription coverage, and group number.|[**Health insurance card model**](./concept-health-insurance-card.md)|-|**Credit/Debit card** . |You want to extract key information bank cards such as card number and bank name. | [**Credit/Debit card model**](concept-credit-card.md)| -|**Marriage Certificate** . |You want to extract key information from marriage certificates. | [**Marriage certificate model**](concept-marriage-certificate.md)| -|**Invoice** or billing statement.|You want to extract key information such as customer name, billing address, and amount due.|[**Invoice model**](concept-invoice.md) +|**Credit/Debit card** |You want to extract key information from bank cards, such as card number and bank name. | [**Credit/Debit card model**](concept-credit-card.md)| +|**Marriage Certificate** |You want to extract key information from marriage certificates. | [**Marriage certificate model**](concept-marriage-certificate.md)| +|**Invoice** or billing statement|You want to extract key information such as customer name, billing address, and amount due.|[**Invoice model**](concept-invoice.md) |**Receipt**, voucher, or single-page hotel receipt. 
|You want to extract key information such as merchant name, transaction date, and transaction total.|[**Receipt model**](concept-receipt.md)|-|**Identity document (ID)** like a U.S. driver's license or international passport. |You want to extract key information such as first name, surname, date of birth, address, and signature. | [**Identity document (ID) model**](concept-id-document.md)| -|**US Mortgage 1003** . |You want to extract key information from the Uniform Residential loan application. | [**1003 form model**](concept-mortgage-documents.md)| -|**US Mortgage 1008** . |You want to extract key information from the Uniform Underwriting and Transmittal summary. | [**1008 form model**](concept-mortgage-documents.md)| -|**US Mortgage Closing Disclosure** . |You want to extract key information from a mortgage closing disclosure form. | [**Mortgage closing disclosure form model**](concept-mortgage-documents.md)| -|**Mixed-type document(s)** with structured, semi-structured, and/or unstructured elements. | You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| [**Custom model**](concept-custom.md)| +|**Identity document (ID)** like a U.S. driver's license or international passport |You want to extract key information such as first name, surname, date of birth, address, and signature. | [**Identity document (ID) model**](concept-id-document.md)| +|**Pay stub** |You want to extract key information from a pay stub document. | [**Pay stub model**](concept-pay-stub.md)| +|**US Mortgage 1003** |You want to extract key information from the Uniform Residential loan application. | [**1003 form model**](concept-mortgage-documents.md)| +|**US Mortgage 1004** |You want to extract key information from the Uniform Residential Appraisal Report (URAR). | [**1004 form model**](concept-mortgage-documents.md)| +|**US Mortgage 1005** |You want to extract key information from the Verification of employment form. | [**1005 form model**](concept-mortgage-documents.md)| +|**US Mortgage 1008** |You want to extract key information from the Uniform Underwriting and Transmittal summary. | [**1008 form model**](concept-mortgage-documents.md)| +|**US Mortgage Closing Disclosure** |You want to extract key information from a mortgage closing disclosure form. | [**Mortgage closing disclosure form model**](concept-mortgage-documents.md)| +|**Mixed-type document(s)** with structured, semi-structured, and/or unstructured elements | You want to extract key-value pairs, selection marks, tables, signature fields, and selected regions not extracted by prebuilt or general document models.| [**Custom model**](concept-custom.md)| >[!Tip] > The following decision charts highlight the features of each **Document Intellig | --|--|-| |**At least two different types of documents**. |Forms, letters, or documents | [**Custom classification model**](./concept-custom-classifier.md)| -- ## Next steps * [Learn how to process your own forms and documents](quickstarts/try-document-intelligence-studio.md) with the [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio) |
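Every prebuilt model in the decision chart above is called the same way: by passing its model ID to the `:analyze` route. A minimal sketch using curl, assuming a v4.0 preview resource; `<resource>`, `<key>`, and the document URL are hypothetical placeholders:

```bash
# Analyze a document with a prebuilt model (prebuilt-invoice shown here);
# swap the model ID for any model listed in the chart above.
curl -X POST \
  "https://<resource>.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-invoice:analyze?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/sample-invoice.pdf"}'
# A 202 response returns an Operation-Location header to poll for the result.
```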
ai-services | Concept Add On Capabilities | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-add-on-capabilities.md | Document Intelligence supports more sophisticated and modular analysis capabilit * [`languages`](#language-detection) +Starting with the `2024-07-31-preview` release, the Read model supports searchable PDF output: ++* [`Searchable PDF`](#searchable-pdf) ++ :::moniker-end :::moniker range="doc-intel-4.0.0" Document Intelligence supports more sophisticated and modular analysis capabilit > > * Add-on capabilities are currently not supported for Microsoft Office file types. -The following add-on capabilities are available for`2024-02-29-preview`, `2024-02-29-preview`, and later releases: +Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2023-10-31-preview` and later releases: * [`keyValuePairs`](#key-value-pairs) for lang_idx, lang in enumerate(result.languages): ::: moniker range="doc-intel-4.0.0" +## Searchable PDF ++The searchable PDF capability enables you to convert an analog PDF, such as scanned-image PDF files, to a PDF with embedded text. The embedded text enables deep text search within the PDF's extracted content by overlaying the detected text entities on top of the image files. ++ > [!IMPORTANT] + > + > * Currently, the searchable PDF capability is only supported by the Read OCR model `prebuilt-read`. When using this feature, please specify the `modelId` as `prebuilt-read`, as other model types will return an error for this preview version. + > * Searchable PDF is included with the 2024-07-31-preview `prebuilt-read` model with no usage cost for general PDF consumption. ++### Use searchable PDF ++To use searchable PDF, make a `POST` request using the `Analyze` operation and specify the output format as `pdf`: ++```bash ++POST /documentModels/prebuilt-read:analyze?output=pdf +{...} +202 +``` ++Once the `Analyze` operation is complete, make a `GET` request to retrieve the `Analyze` operation results. ++Upon successful completion, the PDF can be retrieved and downloaded as `application/pdf`. This operation allows direct downloading of the embedded-text form of the PDF instead of Base64-encoded JSON. ++```bash ++// Monitor the operation until completion. +GET /documentModels/prebuilt-read/analyzeResults/{resultId} +200 +{...} ++// Upon successful completion, retrieve the PDF as application/pdf. +GET /documentModels/prebuilt-read/analyzeResults/{resultId}/pdf +200 OK +Content-Type: application/pdf +``` ++ ## Key-value Pairs In earlier API versions, the prebuilt-document model extracted key-value pairs from forms and documents. With the addition of the `keyValuePairs` feature to prebuilt-layout, the layout model now produces the same results. |
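The searchable PDF flow above, sketched as concrete curl calls. This assumes a v4.0 preview endpoint; `<resource>`, `<key>`, and `<resultId>` are hypothetical placeholders:

```bash
# Request searchable PDF output from prebuilt-read.
curl -X POST \
  "https://<resource>.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-read:analyze?api-version=2024-07-31-preview&output=pdf" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -H "Content-Type: application/json" \
  -d '{"urlSource": "https://example.com/scanned.pdf"}'

# After the operation completes, download the embedded-text PDF directly.
curl "https://<resource>.cognitiveservices.azure.com/documentintelligence/documentModels/prebuilt-read/analyzeResults/<resultId>/pdf?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -o searchable.pdf
```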
ai-services | Concept Composed Models | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-composed-models.md | -> Model compose behavior is changing for api-version=2024-07-31-preview and later. The following behavior only applies to v3.1 and previous versions +> +> [The `model compose` operation behavior is changing from api-version=2024-07-31-preview](#benefits-of-the-new-model-compose-operation). The `model compose` operation v4.0 and later adds an explicitly trained classifier instead of an implicit classifier for analysis. For the previous composed model version, *see* Composed custom models v3.1. If you are currently using composed models, consider upgrading to the latest implementation. -**Composed models**. A composed model is created by taking a collection of custom models and assigning them to a single model built from your form types. When a document is submitted for analysis using a composed model, the service performs a classification to decide which custom model best represents the submitted document. +## What is a composed model? -With composed models, you can assign multiple custom models to a composed model called with a single model ID. It's useful when you train several models and want to group them to analyze similar form types. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction. +With composed models, you can group multiple custom models into a composed model called with a single model ID. For example, your composed model might include custom models trained to analyze your supply, equipment, and furniture purchase orders. Instead of manually trying to select the appropriate model, you can use a composed model to determine the appropriate custom model for each analysis and extraction. -* ```Custom form``` and ```Custom template``` models can be composed together into a single composed model. +Some scenarios require classifying the document first and then analyzing the document with the model best suited to extract its fields. Such scenarios can include ones where a user uploads a document but the document type isn't explicitly known. Another scenario can be when multiple documents are scanned together into a single file and the file is submitted for processing. Your application then needs to identify the component documents and select the best model for each document. -* With the model compose operation, you can assign up to 200 trained custom models to a single composed model. To analyze a document with a composed model, Document Intelligence first classifies the submitted form, chooses the best-matching assigned model, and returns results. +In previous versions, the `model compose` operation performed an implicit classification to decide which custom model best represents the submitted document. The `2024-07-31-preview` implementation of the `model compose` operation replaces the implicit classification from the earlier versions with an explicit classification step and adds conditional routing. -* For ```Custom template``` models, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms belong to one of several templates. 
+## Benefits of the new model compose operation -* For ```Custom neural``` models the best practice is to add all the different variations of a single document type into a single training dataset and train on custom neural model. Model compose is best suited for scenarios when you have documents of different types being submitted for analysis. +The new `model compose` operation requires you to train an explicit classifier and provides several benefits. -* The response includes a ```docType``` property to indicate which of the composed models was used to analyze the document. +* **Continual incremental improvement**. You can consistently improve the quality of the classifier by adding more samples and [incrementally improving classification](concept-incremental-classifier.md). This fine-tuning ensures your documents are always routed to the right model for extraction. +* **Complete control over routing**. By adding confidence-based routing, you provide a confidence threshold for the document type and the classification response. -With the introduction of [**custom classification models**](./concept-custom-classifier.md), you can choose to use a [**composed model**](./concept-composed-models.md) or [**classification model**](concept-custom-classifier.md) as an explicit step before analysis. For a deeper understanding of when to use a classification or composed model, _see_ [**Custom classification models**](concept-custom-classifier.md#compare-custom-classification-and-composed-models). +* **Ignore specific document types during the operation**. Earlier implementations of the `model compose` operation selected the best analysis model for extraction based on the confidence score, even if the highest confidence scores were relatively low. By providing a confidence threshold or explicitly not mapping a known document type from classification to an extraction model, you can ignore specific document types. -## Compose model limits +* **Analyze multiple instances of the same document type**. When paired with the `splitMode` option of the classifier, the `model compose` operation can detect multiple instances of the same document in a file and split the file to process each document independently. Using `splitMode` enables the processing of multiple instances of a document in a single request. ++* **Support for add-on features**. [Add-on features](concept-add-on-capabilities.md) like query fields or barcodes can also be specified as part of the analysis model parameters. ++* **Assigned custom model maximum expanded to 500**. The new implementation of the `model compose` operation allows you to assign up to 500 trained custom models to a single composed model. +++## How to use model compose ++* Start by collecting samples of all your needed documents, including samples with information that should be extracted or ignored. ++* Train a classifier by organizing the documents in folders where the folder names are the document types you intend to use in your composed model definition. ++* Finally, train an extraction model for each of the document types you intend to use. ++* Once your classification and extraction models are trained, use the Document Intelligence Studio, client libraries, or the [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true) to compose the classification and extraction models into a composed model. ++Use the `splitMode` parameter to control the file splitting behavior: ++* **None**. 
The entire file is treated as a single document. +* **perPage**. Each page in the file is treated as a separate document. +* **auto**. The file is automatically split into documents. ++## Billing and pricing ++Composed models are billed the same as individual custom models. The pricing is based on the number of pages analyzed by the downstream analysis model. Billing is based on the extraction price for the pages routed to an extraction model. With the addition of the explicit classification, charges are incurred for the classification of all pages in the input file. For more information, see the [Document Intelligence pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/form-recognizer/). ++ -> [!NOTE] -> With the addition of **_custom neural model_** , there are a few limits to the compatibility of models that can be composed together. +## Use model compose -* With the model compose operation, you can assign up to 200 models to a single model ID. If the number of models that I want to compose exceeds the upper limit of a composed model, you can use one of these alternatives: +* Start by creating a list of all the model IDs you want to compose into a single model. ++* Compose the models into a single model ID using the Studio, REST API, or client libraries. ++* Use the composed model ID to analyze documents. ++## Billing ++Composed models are billed the same as individual custom models. The pricing is based on the number of pages analyzed. Billing is based on the extraction price for the pages routed to an extraction model. For more information, see the [Document Intelligence pricing page](https://azure.microsoft.com/pricing/details/cognitive-services/form-recognizer/). ++* There's no change in pricing for analyzing a document by using an individual custom model or a composed custom model. ++## Composed models features ++* `Custom template` and `custom neural` models can be composed together into a single composed model across multiple API versions. ++* The response includes a `docType` property to indicate which of the composed models was used to analyze the document. ++* For `custom template` models, the composed model can be created using variations of a custom template or different form types. This operation is useful when incoming forms belong to one of several templates. ++* For `custom neural` models, the best practice is to add all the different variations of a single document type into a single training dataset and train a custom neural model. The `model compose` operation is best suited for scenarios when you have documents of different types being submitted for analysis. ## Compose model limits ++* With the `model compose` operation, you can assign up to 500 models to a single model ID. If the number of models that you want to compose exceeds the upper limit of a composed model, you can use one of these alternatives: * Classify the documents before calling the custom model. You can use the [Read model](concept-read.md) and build a classification based on the extracted text from the documents and certain phrases by using sources like code, regular expressions, or search. * If you want to extract the same fields from various structured, semi-structured, and unstructured documents, consider using the deep-learning [custom neural model](concept-custom-neural.md). Learn more about the [differences between the custom template model and the custom neural model](concept-custom.md#compare-model-features). 
-* Analyzing a document by using composed models is identical to analyzing a document by using a single model. The `Analyze Document` result returns a `docType` property that indicates which of the component models you selected for analyzing the document. There's no change in pricing for analyzing a document by using an individual custom model or a composed custom model. +* Analyzing a document by using composed models is identical to analyzing a document by using a single model. The `Analyze Document` result returns a `docType` property that indicates which of the component models you selected for analyzing the document. -* Model Compose is currently available only for custom models trained with labels. +* The `model compose` operation is currently available only for custom models trained with labels. ### Composed model compatibility -|Custom model type|Models trained with v2.1 and v2.0 | Custom template models v3.0 |Custom neural models 3.0|Custom Neural models v3.1| +|Custom model type|Models trained with v2.1 and v2.0 | Custom template and neural models v3.1 and v3.0 |Custom template and neural models v4.0 preview|Custom Generative models v4.0 preview| |--|--|--|--|--|-|**Models trained with version 2.1 and v2.0** |Supported|Supported|Not Supported|Not Supported| -|**Custom template models v3.0** |Supported|Supported|Not Supported|Not Supported| -|**Custom template models v3.1** |Not Supported|Not Supported|Not Supported|Not Supported| -|**Custom Neural models v3.0**|Not Supported|Not Supported|Supported|Supported| -|**Custom Neural models v3.1**|Not Supported|Not Supported|Supported|Supported| +|**Models trained with version 2.1 and v2.0** |Not Supported|Not Supported|Not Supported|Not Supported| +|**Custom template and neural models v3.0 and v3.1** |Not Supported|Supported|Supported|Not Supported| +|**Custom template and neural models v4.0 preview**|Not Supported|Supported|Supported|Not Supported| +|**Custom generative models v4.0 preview**|Not Supported|Not Supported|Not Supported|Not Supported| * To compose a model trained with a prior version of the API (v2.1 or earlier), train a model with the v3.0 API using the same labeled dataset. That addition ensures that the v2.1 model can be composed with other models. * Models composed using v2.1 of the API continue to be supported, requiring no updates. -* For custom models, the maximum number that can be composed is 200. 
- ::: moniker-end ## Development options :::moniker range="doc-intel-4.0.0" -Document Intelligence **v4.0:2023-02-29-preview** supports the following tools, applications, and libraries: +Document Intelligence **v4.0:2024-07-31-preview** supports the following tools, applications, and libraries: | Feature | Resources | |-|-|-|_**Custom model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)| -| _**Composed model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-v4.0%20(2024-07-31-preview)&preserve-view=true)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| +|***Custom model***| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-07-31-preview&preserve-view=true)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)| +| ***Composed model***| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/operation-groups?view=rest-aiservices-2024-02-29-preview&preserve-view=true)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| :::moniker-end Document Intelligence **v3.1:2023-07-31 (GA)** supports the following tools, app | Feature | Resources | |-|-|-|_**Custom model**_| • 
[Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| -| _**Composed model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| +|***Custom model***| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| +| ***Composed model***| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| :::moniker-end Document Intelligence **v3.0:2022-08-31 (GA)** supports the following tools, applications, and libraries: | Feature | Resources | |-|-|-|_**Custom model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>• [C# 
SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| -| _**Composed model**_| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| +|***Custom model***| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>• [C# SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [Java SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [JavaScript SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)</br>• [Python SDK](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.0.0&preserve-view=true)| +| ***Composed model***| • [Document Intelligence Studio](https://formrecognizer.appliedai.azure.com/studio/custommodel/projects)</br>• [REST API](/rest/api/aiservices/document-models/compose-model?view=rest-aiservices-v3.0%20(2022-08-31)&preserve-view=true&tabs=HTTP)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.formtrainingclient.startcreatecomposedmodel)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.training.formtrainingclient.begincreatecomposedmodel)</br>• [JavaScript SDK](/javascript/api/@azure/ai-form-recognizer/documentmodeladministrationclient?view=azure-node-latest#@azure-ai-form-recognizer-documentmodeladministrationclient-begincomposemodel&preserve-view=true)</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| + ::: moniker-end ::: moniker range="doc-intel-2.1.0" Document Intelligence v2.1 supports the following resources: | Feature | Resources | |-|-|-|_**Custom model**_| • [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net)</br>• [REST API](how-to-guides/compose-custom-models.md?view=doc-intel-2.1.0&tabs=rest&preserve-view=true)</br>• [Client library SDK](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</br>• [Document Intelligence Docker container](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)| -| _**Composed model**_ |• 
[Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net/)</br>• [REST API](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</br>• JavaScript SDK</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| +|***Custom model***| • [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net)</br>• [REST API](how-to-guides/compose-custom-models.md?view=doc-intel-2.1.0&tabs=rest&preserve-view=true)</br>• [Client library SDK](~/articles/ai-services/document-intelligence/how-to-guides/use-sdk-rest-api.md?view=doc-intel-2.1.0&preserve-view=true)</br>• [Document Intelligence Docker container](containers/install-run.md?tabs=custom#run-the-container-with-the-docker-compose-up-command)| +| ***Composed model*** |• [Document Intelligence labeling tool](https://fott-2-1.azurewebsites.net/)</br>• [REST API](/rest/api/aiservices/analyzer?view=rest-aiservices-v2.1&preserve-view=true)</br>• [C# SDK](/dotnet/api/azure.ai.formrecognizer.training.createcomposedmodeloperation?view=azure-dotnet&preserve-view=true)</br>• [Java SDK](/java/api/com.azure.ai.formrecognizer.models.createcomposedmodeloptions?view=azure-java-stable&preserve-view=true)</br>• JavaScript SDK</br>• [Python SDK](/python/api/azure-ai-formrecognizer/azure.ai.formrecognizer.formtrainingclient?view=azure-python#azure-ai-formrecognizer-formtrainingclient-begin-create-composed-model&preserve-view=true)| + ::: moniker-end ## Next steps Document Intelligence v2.1 supports the following resources: Learn to create and compose custom models: > [!div class="nextstepaction"]+> > [**Build a custom model**](how-to-guides/build-a-custom-model.md) > [**Compose custom models**](how-to-guides/compose-custom-models.md)-> |
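The `model compose` row above pairs an explicit classifier with per-document-type extraction models. A hedged sketch of the compose request follows: the `classifierId` and `docTypes` field names track the explicit-classification design described in the row, but treat the exact request shape as an assumption to verify against the `2024-07-31-preview` REST reference. All IDs are hypothetical:

```bash
# Compose a trained classifier with extraction models, routing each classified
# document type to its own extraction model. Field names are assumptions.
curl -X POST \
  "https://<resource>.cognitiveservices.azure.com/documentintelligence/documentModels:compose?api-version=2024-07-31-preview" \
  -H "Ocp-Apim-Subscription-Key: <key>" \
  -H "Content-Type: application/json" \
  -d '{
        "modelId": "composed-purchase-orders",
        "classifierId": "po-classifier",
        "docTypes": {
          "supplies":  { "modelId": "po-supplies" },
          "equipment": { "modelId": "po-equipment" }
        }
      }'
```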
ai-services | Concept Custom Neural | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-custom-neural.md | https://{endpoint}/formrecognizer/documentModels/{modelId}:copyTo?api-version=20 :::moniker range="doc-intel-4.0.0" ## Billing - -Starting with version `2024-07-31-preview` and later you can receive **10 hours** of free model training. Billing charges are calculated for model trainings that exceed 10 hours. You can choose to spend all of 10 free hours on a single build with a large set of data, or utilize it across multiple builds by adjusting the maximum duration value for the `build` operation by specifying `maxTrainingHours`: + +Starting with version `2024-07-31-preview`, you can train your custom neural model for longer than 30 minutes. Previous versions were capped at 30 minutes per training instance, with a total of 20 free training instances per month. With `2024-07-31-preview`, you receive **10 hours** of free model training and can train a model for as long as 10 hours. If you want to train a model for longer than 10 hours, billing charges are calculated for training that exceeds 10 hours. You can choose to spend all 10 free hours on a single build with a large set of data, or utilize them across multiple builds by adjusting the maximum duration value for the `build` operation by specifying `maxTrainingHours`, as shown below: ```bash POST /documentModels:build } ``` -Build time varies. Billing is calculated for the actual time spent (excluding time in queue), with a minimum of 30 minutes per training job. The elapsed time is converted to V100 equivalent training hours and reported as part of the model. +> [!NOTE] +> For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, paid training isn't enabled for custom neural models. For the two older versions, you get a maximum training duration of 30 minutes per model. If you would like to train more than 20 model instances, you can request an increase in the training limit. ++Each training hour is the amount of compute a single V100 GPU can perform in an hour. As each build takes a different amount of time, billing is calculated for the actual time spent (excluding time in queue), with a minimum of 30 minutes per training job. The elapsed time is converted to V100-equivalent training hours and reported as part of the model. ```bash This billing structure enables you to train larger data sets for longer duration :::moniker-end ++## Billing ++For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, you get a maximum training duration of 30 minutes per model and a maximum of 20 free trainings per month. If you would like to train more than 20 model instances, you can request an increase in the training limit. ++If you are interested in training models for longer than 30 minutes, we support **paid training** for our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents. ++++## Billing ++For Document Intelligence versions `v3.1 (2023-07-31)` and `v3.0 (2022-08-31)`, you get a maximum training duration of 30 minutes per model and a maximum of 20 free trainings per month. If you would like to train more than 20 model instances, you can request an increase in the training limit. 
++If you want to train models for longer than 30 minutes, we support **paid training** for our newest version, `v4.0 (2024-07-31)`. Using the latest version, you can train your model for a longer duration to process larger documents. ++ ## Next steps Learn to create and compose custom models: > [!div class="nextstepaction"] > [**Build a custom model**](how-to-guides/build-a-custom-model.md)-> [**Compose custom models**](how-to-guides/compose-custom-models.md) +> [**Compose custom models**](how-to-guides/compose-custom-models.md) |
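The truncated `build` request above can be sketched end to end. The following is a minimal, non-authoritative example against the v4.0 REST API; the endpoint, key, model ID, and blob container SAS URL are placeholders, and `maxTrainingHours` is the duration cap discussed in the billing section.

```python
# Minimal sketch of the documentModels:build request shown above, with
# maxTrainingHours capping billable training time. Endpoint, key, model
# ID, and the training-data SAS URL are placeholders.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
body = {
    "modelId": "my-neural-model",
    "buildMode": "neural",
    "azureBlobSource": {"containerUrl": "<sas-url-to-training-data>"},
    "maxTrainingHours": 10,  # e.g., spend the whole free quota on one build
}
response = requests.post(
    f"{endpoint}/documentintelligence/documentModels:build",
    params={"api-version": "2024-07-31-preview"},
    headers={"Ocp-Apim-Subscription-Key": "<your-key>"},
    json=body,
)
response.raise_for_status()
# The long-running operation URL is returned for polling.
print(response.headers.get("Operation-Location"))
```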
ai-services | Concept Model Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-model-overview.md | The following table shows the available models for each current preview and stab |Document analysis models|[Read](concept-read.md) | ✔️| ✔️| ✔️| n/a| |Document analysis models|[Layout](concept-layout.md) | ✔️| ✔️| ✔️| ✔️| |Document analysis models|[General document](concept-general-document.md) |moved to layout**| ✔️| ✔️| n/a|+|Prebuilt models|[Bank Check](concept-bank-check.md) | ✔️| n/a| n/a| n/a| +|Prebuilt models|[Bank Statement](concept-bank-statement.md) | ✔️| n/a| n/a| n/a| +|Prebuilt models|[Paystub](concept-pay-stub.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[Contract](concept-contract.md) | ✔️| ✔️| n/a| n/a| |Prebuilt models|[Health insurance card](concept-health-insurance-card.md)| ✔️| ✔️| ✔️| n/a| |Prebuilt models|[ID document](concept-id-document.md) | ✔️| ✔️| ✔️| ✔️| |Prebuilt models|[Invoice](concept-invoice.md) | ✔️| ✔️| ✔️| ✔️| |Prebuilt models|[Receipt](concept-receipt.md) | ✔️| ✔️| ✔️| ✔️|+|Prebuilt models|[US Unified Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US 1040 Tax*](concept-tax-document.md) | ✔️| ✔️| n/a| n/a| |Prebuilt models|[US 1098 Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US 1099 Tax*](concept-tax-document.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US W2 Tax](concept-tax-document.md) | ✔️| ✔️| ✔️| n/a| |Prebuilt models|[US Mortgage 1003 URLA](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a|+|Prebuilt models|[US Mortgage 1004 URAR](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a| +|Prebuilt models|[US Mortgage 1005](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US Mortgage 1008 Summary](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[US Mortgage closing disclosure](concept-mortgage-documents.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[Marriage certificate](concept-marriage-certificate.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[Credit card](concept-credit-card.md) | ✔️| n/a| n/a| n/a| |Prebuilt models|[Business card](concept-business-card.md) | deprecated|✔️|✔️|✔️ | |Custom classification model|[Custom classifier](concept-custom-classifier.md) | ✔️| ✔️| n/a| n/a|+|Custom Generative Model|[Custom Generative Model](concept-custom-generative.md) | ✔️| n/a| n/a| n/a| |Custom extraction model|[Custom neural](concept-custom-neural.md) | ✔️| ✔️| ✔️| n/a| |Custom extraction model|[Custom template](concept-custom-template.md) | ✔️| ✔️| ✔️| ✔️| |Custom extraction model|[Custom composed](concept-composed-models.md) | ✔️| ✔️| ✔️| ✔️| Latency is the amount of time it takes for an API server to handle and process a |Language detection|Free| ✔️| ✔️| n/a| n/a| |Key value pairs|Free| ✔️|n/a|n/a| n/a| |Query fields|Add-On*| ✔️|n/a|n/a| n/a|+|Searchable PDF|Add-On*| ✔️|n/a|n/a| n/a| ### Model analysis features Add-On* - Query fields are priced differently than the other add-on features. 
Se ::: moniker range=">=doc-intel-3.0.0" -| **Model** | **Description** | -| | | -|**Document analysis models**|| -| [Read OCR](#read-ocr) | Extract print and handwritten text including words, locations, and detected languages.| -| [Layout analysis](#layout-analysis) | Extract text and document layout elements like tables, selection marks, titles, section headings, and more.| -|**Prebuilt models**|| -| [Health insurance card](#health-insurance-card) | Automate healthcare processes by extracting insurer, member, prescription, group number, and other key information from US health insurance cards.| -| [US Tax document models](#us-tax-documents) | Process US tax forms to extract employee, employer, wage, and other information. | -| [US Mortgage document models](#us-mortgage-documents) | Process US mortgage forms to extract borrower loan and property information. | -| [Contract](#contract) | Extract agreement and party details.| -| [Invoice](#invoice) | Automate invoices. | -| [Receipt](#receipt) | Extract receipt data from receipts.| -| [Identity document (ID)](#identity-document-id) | Extract identity (ID) fields from US driver licenses and international passports. | -| [Business card](#business-card) | Scan business cards to extract key fields and data into your applications. | -|**Custom models**|| -| [Custom model (overview)](#custom-models) | Extract data from forms and documents specific to your business. Custom models are trained for your distinct data and use cases. | -| [Custom extraction models](#custom-extraction)| ● **Custom template models** use layout cues to extract values from documents and are suitable to extract fields from highly structured documents with defined visual templates.</br>● **Custom neural models** are trained on various document types to extract fields from structured, semi-structured, and unstructured documents.| -| [Custom classification model](#custom-classifier)| The **Custom classification model** can classify each page in an input file to identify the documents within and can also identify multiple documents or multiple instances of a single document within an input file. -| [Composed models](#composed-models) | Combine several custom models into a single model to automate processing of diverse document types with a single composed model. - ### Bounding box and polygon coordinates A bounding box (`polygon` in v3.0 and later versions) is an abstract rectangle that surrounds text elements in a document and serves as a reference point for object detection. For all models, except Business card model, Document Intelligence now supports a * [`languages`](concept-add-on-capabilities.md#language-detection) * [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) (2024-02-29-preview, 2023-10-31-preview) * [`queryFields`](concept-add-on-capabilities.md#query-fields) (2024-02-29-preview, 2023-10-31-preview) `Not available with the US.Tax models`+* [`searchablePDF`](concept-read.md#searchable-pdf) (2024-07-31-preview) `Only available for Read Model` ## Language support |
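The bounding box description above maps directly onto the analyze result. Here's a minimal sketch, assuming the `azure-ai-formrecognizer` Python SDK with placeholder endpoint, key, and input file:

```python
# Minimal sketch: reading word polygons (bounding boxes) from an analyze
# result. Endpoint, key, and input file are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.formrecognizer import DocumentAnalysisClient

client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<your-key>"),
)

with open("sample.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-read", document=f)
result = poller.result()

for page in result.pages:
    for word in page.words:
        # Each polygon is a sequence of points outlining the word.
        print(word.content, [(point.x, point.y) for point in word.polygon])
```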
ai-services | Concept Read | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/concept-read.md | The searchable PDF capability enables you to convert an analog PDF, such as scan > > * Currently, the searchable PDF capability is only supported by Read OCR model `prebuilt-read`. When using this feature, specify the `modelId` as `prebuilt-read`, as other model types return an error for this preview version. > * Searchable PDF is included with the 2024-07-31-preview `prebuilt-read` model with no additional cost for generating a searchable PDF output.+> * Searchable PDF currently only supports PDF files as input. Support for other file types, such as image files, will be available later. ### Use searchable PDF |
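To illustrate the searchable PDF flow described in this note, here's a minimal sketch against the `2024-07-31-preview` REST API. The `output=pdf` query parameter and the `/pdf` result endpoint follow the preview REST reference as best understood; endpoint, key, and file names are placeholders.

```python
# Minimal sketch of the searchable PDF flow: analyze with prebuilt-read,
# request PDF output, poll, then download the generated PDF. Endpoint,
# key, and file names are placeholders.
import time
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
headers = {"Ocp-Apim-Subscription-Key": "<your-key>"}

with open("scanned.pdf", "rb") as f:
    response = requests.post(
        f"{endpoint}/documentintelligence/documentModels/prebuilt-read:analyze",
        params={"api-version": "2024-07-31-preview", "output": "pdf"},
        headers={**headers, "Content-Type": "application/pdf"},
        data=f.read(),
    )
response.raise_for_status()
operation_url = response.headers["Operation-Location"]

# Poll until the analysis finishes.
while True:
    status = requests.get(operation_url, headers=headers).json()
    if status["status"] in ("succeeded", "failed"):
        break
    time.sleep(2)

# Fetch the searchable PDF for the completed result.
result_id = operation_url.split("/analyzeResults/")[1].split("?")[0]
pdf = requests.get(
    f"{endpoint}/documentintelligence/documentModels/prebuilt-read"
    f"/analyzeResults/{result_id}/pdf",
    params={"api-version": "2024-07-31-preview"},
    headers=headers,
)
with open("searchable.pdf", "wb") as out:
    out.write(pdf.content)
```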
ai-services | Language Support Prebuilt | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/language-support-prebuilt.md | Azure AI Document Intelligence models provide multilingual document processing s :::moniker-end ++## Bank Statement +++***Model ID: prebuilt-bankStatement*** ++| Language Locale code | Default | +|:-|:| +| English (United States) `en-US`| English (United States) `en-US`| ++ ## Contract :::moniker range="doc-intel-4.0.0 || doc-intel-3.1.0" Azure AI Document Intelligence models provide multilingual document processing s :::moniker-end ++## Check ++***Model ID: prebuilt-check*** ++| Language Locale code | Default | +|:-|:| +| English (United States) `en-US`| English (United States) `en-US`| ++ ## Health insurance card :::moniker range=">=doc-intel-3.0.0" Azure AI Document Intelligence models provide multilingual document processing s |English (`en`) | United States (`us`) :::moniker-end +## Mortgage +++***Model ID: prebuilt-mortgage*** ++ | Model ID | Language Locale code | Default | + |--|:-|:| + |**prebuilt-mortgage-1003**|English (United States)|English (United States) `en-US`| + |**prebuilt-mortgage-1004**|English (United States)|English (United States) `en-US`| + |**prebuilt-mortgage-1005**|English (United States)|English (United States) `en-US`| + |**prebuilt-mortgage-1008**|English (United States)|English (United States) `en-US`| + |**prebuilt-mortgage-closingDisclosure**|English (United States)|English (United States) `en-US`| ++++## Pay stub ++***Model ID: prebuilt-paystub*** ++| Language Locale code | Default | +|:-|:| +| English (United States) `en-US`| English (United States) `en-US`| ++ ## Receipt :::moniker range=">=doc-intel-3.0.0" Azure AI Document Intelligence models provide multilingual document processing s | Model ID | Language Locale code | Default | |--|:-|:| |**prebuilt-tax.us.w2**|English (United States)|English (United States) `en-US`|+ |**prebuilt-tax.us**|English (United States)|English (United States) `en-US`| + |**prebuilt-tax.us.1099Combo**|English (United States)|English (United States) `en-US`| |**prebuilt-tax.us.1098**|English (United States)|English (United States) `en-US`| |**prebuilt-tax.us.1098E**|English (United States)|English (United States) `en-US`| |**prebuilt-tax.us.1098T**|English (United States)|English (United States) `en-US`| |
ai-services | Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/overview.md | monikerRange: '<=doc-intel-4.0.0' Azure AI Document Intelligence is a cloud-based [Azure AI service](../../ai-services/index.yml) that enables you to build intelligent document processing solutions. Massive amounts of data, spanning a wide variety of data types, are stored in forms and documents. Document Intelligence enables you to effectively manage the velocity at which data is collected and processed and is key to improved operations, informed data-driven decisions, and enlightened innovation. </br></br> -| ✔️ [**Document analysis models**](#general-extraction-models) | ✔️ [**Prebuilt models**](#prebuilt-models) | ✔️ [**Custom models**](#custom-model-overview) | +## General extraction models -Document analysis models enable text extraction from forms and documents and return structured business-ready content ready for your organization's action, use, or development. +General extraction models enable text extraction from forms and documents and return structured business-ready content ready for your organization's action, use, or development. :::moniker range="doc-intel-4.0.0" :::row::: :::column:::- :::image type="icon" source="media/overview/icon-read.png" link="#read":::</br> - [**Read**](#read) | Extract printed </br>and handwritten text. + [**Read**](#read) | Extract printed and handwritten text. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br> - [**Layout**](#layout) | Extract text, tables, </br>and document structure. + [**Layout**](#layout) | Extract text, tables, and document structure. :::column-end::: :::row-end::: :::moniker-end Document analysis models enable text extraction from forms and documents and ret :::moniker range="<=doc-intel-3.1.0" :::row::: :::column:::- :::image type="icon" source="media/overview/icon-read.png" link="#read":::</br> [**Read**](#read) | Extract printed </br>and handwritten text. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-layout.png" link="#layout":::</br> [**Layout**](#layout) | Extract text, tables, </br>and document structure. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-general-document.png" link="#general-document-deprecated-in-2023-10-31-preview":::</br> [**General document**](#general-document-deprecated-in-2023-10-31-preview) | Extract text, </br>structure, and key-value pairs. :::column-end::: :::row-end::: Document analysis models enable text extraction from forms and documents and ret Prebuilt models enable you to add intelligent document processing to your apps and flows without having to train and build your own models. :::moniker range="doc-intel-4.0.0"+### Financial Services and Legal + :::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br> - [**Invoice**](#invoice) | Extract customer and vendor details. + [**Bank Statement**](#bank-statement) | Extract account information and details from bank statements. 
:::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br> - [**Receipt**](#receipt) | Extract sales transaction details. + [**Check**](#check) | Extract relevant information from checks. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br> - [**Identity**](#identity-id) | Extract verification details. + [**Contract**](#contract-model) | Extract agreement and party details. :::column-end::: :::row-end::: :::row:::- :::column span=""::: - :::image type="icon" source="media/overview/icon-check.png" link="#check":::</br> - [**Check**](#check) | Extract relevant information from checks. + :::column span=""::: + [**Credit card**](#credit-card-model) | Extract payment card information. + :::column-end::: + :::column span=""::: + [**Invoice**](#invoice) | Extract customer and vendor details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-pay-stub.png" link="#pay-stub":::</br> [**Pay Stub**](#pay-stub) | Extract pay stub details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-bank-statement.png" link="#bank-statement":::</br> - [**Bank Statement**](#bank-statement) | Extract account information and details from bank statements. + [**Receipt**](#receipt) | Extract sales transaction details. :::column-end::: :::row-end:::++### US Tax :::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br> - [**Health Insurance card**](#health-insurance-card) | Extract insurance coverage details. + [**Unified US tax**](#unified-us-tax-forms) | Extract from any supported US tax form. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br> - [**Contract**](#contract-model) | Extract agreement and party details. + [**US Tax W-2**](#us-tax-w-2-model) | Extract taxable compensation details. :::column-end:::- :::image type="icon" source="media/overview/icon-payment-card.png" link="#contract-model":::</br> - [**Credit/Debit card**](#credit-card-model) | Extract payment card information. + :::column span=""::: + [**US Tax 1098**](#us-tax-1098-and-variations-forms) | Extract `1098` variation details. :::column-end:::- :::column span=""::: - :::image type="icon" source="media/overview/icon-marriage-certificate.png" link="#contract-model":::</br> - [**Marriage certificate**](#marriage-certificate-model) | Extract certified marriage information. + :::column span=""::: + [**US Tax 1099**](#us-tax-1099-and-variations-forms) | Extract `1099` variation details. + :::column-end::: + :::column span=""::: + [**US Tax 1040**](#us-tax-1040-and-variations-forms) | Extract `1040` variation details. :::column-end::: :::row-end:::++### US Mortgage :::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-mortgage-1003.png" link="#us-mortgage-1003-form":::</br> [**US mortgage 1003**](#us-mortgage-1003-form) | Extract loan application details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-mortgage-1004.png" link="#us-mortgage-1004-form":::</br> [**US mortgage 1004**](#us-mortgage-1004-form) | Extract information from an appraisal. 
:::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-mortgage-1005.png" link="#us-mortgage-1005-form":::</br> [**US mortgage 1005**](#us-mortgage-1005-form) | Extract information from validation of employment. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-mortgage-1008.png" link="#us-mortgage-1008-form":::</br> [**US mortgage 1008**](#us-mortgage-1008-form) | Extract loan transmittal details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-mortgage-disclosure.png" link="#us-mortgage-disclosure-form":::</br> [**US mortgage disclosure**](#us-mortgage-disclosure-form) | Extract final closing loan terms. :::column-end::: :::row-end:::++### Personal Identification + :::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-model":::</br> - [**US Tax W-2**](#us-tax-w-2-model) | Extract taxable compensation details. - :::column-end::: - :::column span=""::: - :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form":::</br> - [**US Tax 1098**](#us-tax-1098-form) | Extract mortgage interest details. - :::column-end::: - :::column span=""::: - :::image type="icon" source="media/overview/icon-1098-e.png" link="#us-tax-1098-e-form":::</br> - [**US Tax 1098-E**](#us-tax-1098-e-form) | Extract student loan interest details. - :::column-end::: - :::column span=""::: - :::image type="icon" source="media/overview/icon-1098-t.png" link="#us-tax-1098-t-form":::</br> - [**US Tax 1098-T**](#us-tax-1098-t-form) | Extract qualified tuition details. + [**Health Insurance card**](#health-insurance-card) | Extract insurance coverage details. :::column-end:::- :::column span=""::: - :::image type="icon" source="media/overview/icon-1099.png" link="#us-tax-1098-t-form":::</br> - [**US Tax 1099**](#us-tax-1099-and-variations-forms) | Extract `1099` variation details. + :::column span=""::: + [**Identity**](#identity-id) | Extract verification details. :::column-end:::- :::column span=""::: - :::image type="icon" source="media/overview/icon-1040.png" link="#us-tax-1098-t-form":::</br> - [**US Tax 1040**](#us-tax-1040-form) | Extract `1040` variation details. + :::column span=""::: + [**Marriage certificate**](#marriage-certificate-model) | Extract certified marriage information. :::column-end::: :::row-end:::++++ :::moniker-end :::moniker range="<=doc-intel-3.1.0" :::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-invoice.png" link="#invoice":::</br> [**Invoice**](#invoice) | Extract customer </br>and vendor details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-receipt.png" link="#receipt":::</br> [**Receipt**](#receipt) | Extract sales </br>transaction details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-id-document.png" link="#identity-id":::</br> [**Identity**](#identity-id) | Extract identification </br>and verification details. :::column-end::: :::row-end::: :::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-insurance-card.png" link="#health-insurance-card":::</br> [**Health Insurance card**](#health-insurance-card) | Extract health insurance details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-business-card.png" link="#business-card":::</br> [**Business card**](#business-card) | Extract business contact details. 
:::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-contract.png" link="#contract-model":::</br> [**Contract**](#contract-model) | Extract agreement</br> and party details. :::column-end::: :::row-end::: :::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-w2.png" link="#us-tax-w-2-model":::</br> [**US Tax W-2**](#us-tax-w-2-model) | Extract taxable </br>compensation details. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-1098.png" link="#us-tax-1098-form":::</br> - [**US Tax 1098**](#us-tax-1098-form) | Extract mortgage interest details. - :::column-end::: - :::column span=""::: - :::image type="icon" source="media/overview/icon-1098-e.png" link="#us-tax-1098-e-form":::</br> - [**US Tax 1098-E**](#us-tax-1098-e-form) | Extract student loan interest details. - :::column-end::: - :::column span=""::: - :::image type="icon" source="media/overview/icon-1098-t.png" link="#us-tax-1098-t-form":::</br> - [**US Tax 1098-T**](#us-tax-1098-t-form) | Extract qualified tuition details. + [**US Tax 1098**](#us-tax-1098-and-variations-forms) | Extract `1098` variation details. :::column-end::: :::row-end::: :::moniker-end ## Custom models -* Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. -* Standalone custom models can be combined to create composed models. +Custom models are trained using your labeled datasets to extract distinct data from forms and documents, specific to your use cases. Standalone custom models can be combined to create composed models. - :::column::: - * **Extraction models**</br> - ✔️ Custom extraction models are trained to extract labeled fields from documents. - :::column-end::: +### Document field extraction models +✔️ Document field extraction models are trained to extract labeled fields from documents. :::row::: :::column:::- :::image type="icon" source="media/overview/icon-custom-generative.png" link="#custom-generative":::</br> - [**Custom generative**](#custom-generative) | Extract data from unstructured documents and structured documents with varying templates. + [**Custom generative**](#custom-generative-document-field-extraction) | Build a custom extraction model using generative AI for documents with unstructured format and varying templates. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-custom-neural.png" link="#custom-neural":::</br> [**Custom neural**](#custom-neural) | Extract data from mixed-type documents. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-custom-template.png" link="#custom-template":::</br> [**Custom template**](#custom-template) | Extract data from static layouts. :::column-end::: :::column span="":::- :::image type="icon" source="media/overview/icon-custom-composed.png" link="#custom-composed":::</br> [**Custom composed**](#custom-composed) | Extract data using a collection of models. :::column-end::: :::row-end::: - :::column::: - * **Classification model**</br> - ✔️ Custom classifiers identify document types before invoking an extraction model. - :::column-end::: +### Custom classification models +✔️ Custom classifiers identify document types before invoking an extraction model. 
:::row::: :::column span="":::- :::image type="icon" source="media/overview/icon-custom-classifier.png" link="#custom-classification-model":::</br> - [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) </br>before invoking an extraction model. + [**Custom classifier**](#custom-classification-model) | Identify designated document types (classes) before invoking an extraction model. :::column-end::: :::row-end::: Document Intelligence supports optional features that can be enabled and disable * [`ocr.barcode`](concept-add-on-capabilities.md#barcode-property-extraction) -Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for`2024-02-29-preview`, `2023-10-31-preview`, and later releases: + The `2024-07-31-preview` release introduces `read` model support for [searchable PDF](concept-read.md#searchable-pdf) output: ++* [`Searchable PDF`](concept-add-on-capabilities.md#searchable-pdf) ++Document Intelligence supports optional features that can be enabled and disabled depending on the document extraction scenario. The following add-on capabilities are available for `2023-10-31-preview` and later releases: * [`queryFields`](concept-add-on-capabilities.md#query-fields) +* [`keyValuePairs`](concept-add-on-capabilities.md#key-value-pairs) + ## Analysis features [!INCLUDE [model analysis features](includes/model-analysis-features.md)] You can use Document Intelligence to automate document processing in application |[**prebuilt-read**](concept-read.md)|● Extract **text** from documents.</br>● [Data extraction](concept-read.md#data-extraction)| ● Digitizing any document. </br>● Compliance and auditing.</br>● Processing handwritten notes before translation.|● [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/read)</br>● [**REST API**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api)</br>● [**C# SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-csharp)</br>● [**Python SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-python)</br>● [**Java SDK**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-java)</br>● [**JavaScript**](how-to-guides/use-sdk-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-javascript) | > [!div class="nextstepaction"]-> [Return to model types](#document-analysis-models) +> [Return to model types](#general-extraction-models) ### Layout You can use Document Intelligence to automate document processing in application |[**prebuilt-layout**](concept-layout.md) |● Extract **text and layout** information from documents.</br>● [Data extraction](concept-layout.md#data-extraction) |● Document indexing and retrieval by structure.</br>● Financial and medical report analysis. 
|● [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/layout)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#layout-model)| > [!div class="nextstepaction"]-> [Return to model types](#document-analysis-models) +> [Return to model types](#general-extraction-models) ::: moniker range="doc-intel-3.1.0 || doc-intel-3.0.0" You can use Document Intelligence to automate document processing in application |[**prebuilt-document**](concept-general-document.md)|● Extract **text, layout, and key-value pairs** from documents.</br>● [Data and field extraction](concept-general-document.md#data-extraction)|● Key-value pair extraction.</br>● Form processing.</br>● Survey data collection and analysis.|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/document)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-3.1.0&preserve-view=true)| > [!div class="nextstepaction"]-> [Return to model types](#document-analysis-models) +> [Return to model types](#general-extraction-models) :::moniker-end ### Invoice You can use Document Intelligence to automate document processing in application > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) -### US tax 1098 form +### US tax 1098 (and variations) forms :::image type="content" source="media/overview/analyze-1098.png" alt-text="Screenshot of US 1098 tax form analyzed in the Document Intelligence Studio."::: | Model ID | Description| Development options | |-|--|-|-|[**prebuilt-tax.us.1098**](concept-tax-document.md)|Extract mortgage interest information and details. </br>● [Data and field extraction](concept-tax-document.md#field-extraction-1098)|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| --> [!div class="nextstepaction"] -> [Return to model types](#prebuilt-models) --### US tax 1098-E form ---| Model ID | Description |Development options | -|-|--|-| -|[**prebuilt-tax.us.1098E**](concept-tax-document.md)|Extract student loan information and details. 
</br>● [Data and field extraction](concept-tax-document.md#field-extraction-1098)|● [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1098&formType=tax.us.1098E)</br>● </br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| +|[**prebuilt-tax.us.1098{`variation`}**](concept-tax-document.md)|● Extract key information from 1098-form variations.</br>● [Data and field extraction](concept-tax-document.md#field-extraction-1098)|● [**Document Intelligence Studio**](https://formrecognizer.appliedai.azure.com/studio/prebuilt?formType=tax.us.1098)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) -### US tax 1098-T form +### US tax 1099 (and variations) forms | Model ID |Description|Development options | |-|--|--|-|[**prebuilt-tax.us.1098T**](concept-tax-document.md)|Extract tuition information and details. 
</br>● [Data and field extraction](concept-tax-document.md#field-extraction-1098)|● [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1098&formType=tax.us.1098T)</br>● [**REST API**](/rest/api/aiservices/document-models/analyze-document?view=rest-aiservices-2023-07-31&preserve-view=true&tabs=HTTP)| +|[**prebuilt-tax.us.1099{`variation`}**](concept-tax-document.md)|● Extract information from 1099-form variations.</br>● [Data and field extraction](concept-tax-document.md#field-extraction-1099-nec)|● [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1099)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models) -### US tax 1099 (and variations) forms +### US tax 1040 (and variations) forms | Model ID |Description|Development options | |-|--|--|-|[**prebuilt-tax.us.1099{`variation`}**](concept-tax-document.md)|Extract information from 1099-form variations.|● </br>● [Data and field extraction](concept-tax-document.md#field-extraction-1099-nec) [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1099)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| +|[**prebuilt-tax.us.1040{`variation`}**](concept-tax-document.md)|● Extract information from 1040-form variations.</br>● [Data and field extraction](concept-tax-document.md#field-extraction-1040-tax-form)|● [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1040)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| -> [!div class="nextstepaction"] -> [Return to model 
types](#prebuilt-models) -### US tax 1040 form -+### Unified US tax forms | Model ID |Description|Development options | |-|--|--|-|**prebuilt-tax.us.1040**|Extract information from 1040-form variations.|● </br>● [Data and field extraction](concept-tax-document.md#field-extraction-1040-tax-form) [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1040)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| +|[**prebuilt-tax.us**](concept-tax-document.md)|● Extract information from any of the supported US tax forms.|● [**Document Intelligence Studio**](https://documentintelligence.ai.azure.com/studio/prebuilt?formCategory=tax.us.1040)</br>● [**REST API**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true&pivots=programming-language-rest-api#analyze-document-post-request)</br>● [**C# SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Python SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**Java SDK**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)</br>● [**JavaScript**](quickstarts/get-started-sdks-rest-api.md?view=doc-intel-4.0.0&preserve-view=true#prebuilt-model)| + ::: moniker range="<=doc-intel-3.1.0" ### Business card + :::image type="content" source="media/overview/analyze-business-card.png" alt-text="Screenshot of Business card model analysis using Document Intelligence Studio."::: | Model ID | Description |Automation use cases | Development options | |-|--|-|--| You can use Document Intelligence to automate document processing in application > [!div class="nextstepaction"] > [Return to model types](#prebuilt-models)+ ### Custom model overview You can use Document Intelligence to automate document processing in application > [!div class="nextstepaction"] > [Return to custom model types](#custom-models) -#### Custom generative +#### Custom generative (document field extraction) :::image type="content" source="media/overview/analyze-custom-generative.png" alt-text="Screenshot of Custom generative model analysis using Azure AI Studio."::: |
ai-services | Service Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/document-intelligence/service-limits.md | Document Intelligence billing is calculated monthly based on the model type and | **Max number of Neural models** | 100 | 500 | | Adjustable | No | No | ++## Custom model usage ++> [!div class="checklist"] +> +> * [**Custom template model**](concept-custom-template.md) +> * [**Custom neural model**](concept-custom-neural.md) +> * [**Custom generative model**](concept-custom-generative.md) +> * [**Composed classification models**](concept-custom-classifier.md) +> * [**Composed custom models**](concept-composed-models.md) ++|Quota|Free (F0) <sup>1</sup>|Standard (S0)| +|--|--|--| +| **Compose Model limit** | 5 | 500 (default value) | +| Adjustable | No | No | +| **Training dataset size * Neural and Generative** | 1 GB <sup>3</sup> | 1 GB (default value) | +| Adjustable | No | No | +| **Training dataset size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) | +| Adjustable | No | No | +| **Max number of pages (Training) * Template** | 500 | 500 (default value) | +| Adjustable | No | No | +| **Max number of pages (Training) * Neural and Generative** | 50,000 | 50,000 (default value) | +| Adjustable | No | No | +| **Custom neural model train** | 10 hours per month <sup>5</sup> | no limit (pay by the hour) | +| Adjustable | No |Yes <sup>3</sup>| +| **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) | +| Adjustable | No | No | +| **Max number of document types (classes) * Classifier** | 500 | 500 (default value) | +| Adjustable | No | No | +| **Training dataset size * Classifier** | 1 GB | 2 GB (default value) | +| Adjustable | No | No | +| **Min number of samples per class * Classifier** | 5 | 5 (default value) | +| Adjustable | No | No | ++++## Custom model usage ++> [!div class="checklist"] +> +> * [**Custom template model**](concept-custom-template.md) +> * [**Custom neural model**](concept-custom-neural.md) +> * [**Composed classification models**](concept-custom-classifier.md) +> * [**Composed custom models**](concept-composed-models.md) ++|Quota|Free (F0) <sup>1</sup>|Standard (S0)| +|--|--|--| +| **Compose Model limit** | 5 | 200 (default value) | +| Adjustable | No | No | +| **Training dataset size * Neural** | 1 GB <sup>3</sup> | 1 GB (default value) | +| Adjustable | No | No | +| **Training dataset size * Template** | 50 MB <sup>4</sup> | 50 MB (default value) | +| Adjustable | No | No | +| **Max number of pages (Training) * Template** | 500 | 500 (default value) | +| Adjustable | No | No | +| **Max number of pages (Training) * Neural** | 50,000 | 50,000 (default value) | +| Adjustable | No | No | +| **Custom neural model train** | 10 per month | 20 per month | +| Adjustable | No |Yes <sup>3</sup>| +| **Max number of pages (Training) * Classifier** | 10,000 | 10,000 (default value) | +| Adjustable | No | No | +| **Max number of document types (classes) * Classifier** | 500 | 500 (default value) | +| Adjustable | No | No | +| **Training dataset size * Classifier** | 1 GB | 1 GB (default value) | +| Adjustable | No | No | +| **Min number of samples per class * Classifier** | 5 | 5 (default value) | +| Adjustable | No | No | ++ ## Custom model usage Document Intelligence billing is calculated monthly based on the model type and ::: moniker range=">=doc-intel-2.1.0" > <sup>1</sup> For **Free (F0)** pricing tier see also monthly allowances at the [pricing 
page](https://azure.microsoft.com/pricing/details/form-recognizer/).</br>-> <sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice), and [adjustment instructions(#create-and-submit-support-request).</br> +> <sup>2</sup> See [best practices](#example-of-a-workload-pattern-best-practice) and [adjustment instructions](#create-and-submit-support-request).</br> > <sup>3</sup> Neural models training count is reset every calendar month. Open a support request to increase the monthly training limit. ::: moniker-end ::: moniker range=">=doc-intel-3.0.0" > <sup>4</sup> This limit applies to all documents found in your training dataset folder prior to any labeling-related updates. ::: moniker-end+> <sup>5</sup> This limit applies to `v4.0 (2024-07-31)` custom neural models only. Starting from `v4.0`, we support training larger documents for longer durations (up to 10 hours free, with charges for time beyond that). For more information, see the [custom neural model page](concept-custom-neural.md). ## Detailed description, Quota adjustment, and best practices |
ai-services | Use Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md | Along with Azure OpenAI Studio, APIs, and SDKs, you can use the available standa ## Important considerations -- Publishing creates an Azure App Service instance in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) that you select. When you're done with your app, you can delete it from the Azure portal.-- GPT-4 Turbo with Vision models are not supported.+- Publishing creates an Azure App Service instance in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) that you select. When finished with your app, you can delete it from the Azure portal. +- GPT-4 Turbo with Vision models aren't supported. - By default, the app is deployed with the Microsoft identity provider already configured. The identity provider restricts access to the app to members of your Azure tenant. To add or modify authentication: 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name that you specified during publishing. Select the web app, and then select **Authentication** on the left menu. Then select **Add identity provider**. Along with Azure OpenAI Studio, APIs, and SDKs, you can use the available standa 1. Select Microsoft as the identity provider. The default settings on this page restrict the app to your tenant only, so you don't need to change anything else here. Select **Add**. - Now users will be asked to sign in with their Microsoft Entra account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that the user is a member of your tenant. + Now users are asked to sign in with their Microsoft Entra account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that the user is a member of your tenant. ## Web app customization You can customize the app's front-end and back-end logic. The app provides sever When you're customizing the app, we recommend: -- Resetting the chat session (clear chat) if users change any settings. Notify the users that their chat history will be lost.--- Clearly communicating how each setting that you implement will affect the user experience.+- Clearly communicating how each setting that you implement affects the user experience. - Updating the app settings for each of your deployed apps to use new API keys after you rotate keys for your Azure OpenAI or Azure AI Search resource. After you turn on chat history, your users can show and hide it in the upper-rig ## Deleting your Cosmos DB instance -Deleting your web app does not delete your Cosmos DB instance automatically. To delete your Cosmos DB instance along with all stored chats, you need to go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option turned on in the studio, your users are notified of a connection error but can continue to use the web app without access to the chat history. +Deleting your web app doesn't delete your Cosmos DB instance automatically. 
To delete your Cosmos DB instance along with all stored chats, you need to go to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option selected on subsequent updates from the Azure OpenAI Studio, the application notifies the user of a connection error. However, the user can continue to use the web app without access to the chat history. ++## Enabling Microsoft Entra ID authentication between services ++To enable Microsoft Entra ID for intra-service authentication for your web app, follow these steps. ++### Enable managed identity on your Azure OpenAI resource and Azure App Service ++You can enable managed identity for the Azure OpenAI resource and the Azure App Service by navigating to "Identity" and turning on the system-assigned managed identity in the Azure portal for each resource. ++++> [!NOTE] +> If you're using an embedding model deployed to the same resource used for inference, you only need to enable managed identity on one Azure OpenAI resource. If you're using an embedding model deployed to a different resource from the one used for inference, you also need to enable managed identity on the Azure OpenAI resource used to deploy your embedding model. ++### Enable role-based access control (RBAC) on your Azure Search resource (optional) ++If you're using On Your Data with Azure Search, follow this step. ++To enable your Azure OpenAI resource to access your Azure Search resource, you need to enable role-based access control on your Azure Search resource. Learn more about [enabling RBAC roles](../../../search/search-security-enable-roles.md) for your resources. ++### Assign RBAC roles to enable intra-service communication ++The following table summarizes the RBAC role assignments needed for all Azure resources associated with your application. ++| Role | Assignee | Resource | +| -- | | - | +| `Search Index Data Reader` | Azure OpenAI (Inference) | Azure AI Search | +| `Search Service Contributor` | Azure OpenAI (Inference) | Azure AI Search | +| `Cognitive Services OpenAI User` | Web app | Azure OpenAI (Inference) | +| `Cognitive Services OpenAI User` | Azure OpenAI (Inference) | Azure OpenAI (Embeddings) | ++To assign these roles, follow [these instructions](../../../role-based-access-control/role-assignments-portal.yml) to create the needed role assignments. ++### App settings changes ++In the web app's application settings, navigate to "Environment Variables" and make the following changes: ++* Remove the environment variable `AZURE_OPENAI_KEY`, as it's no longer needed. +* If you're using On Your Data with Azure Search together with Microsoft Entra ID authentication between Azure OpenAI and Azure Search, also delete the `AZURE_SEARCH_KEY` environment variable for the data source access key. ++If you're using an embedding model deployed to the same resource as your inference model, no other settings changes are required. ++However, if you're using an embedding model deployed to a different resource, make the following extra changes to your app's environment variables: +* Set the `AZURE_OPENAI_EMBEDDING_ENDPOINT` variable to the full API path of the embedding API for the resource you're using for embeddings, for example, `https://<your embedding AOAI resource name>.openai.azure.com/openai/deployments/<your embedding deployment name>/embeddings` +* Delete the `AZURE_OPENAI_EMBEDDING_KEY` variable to use Microsoft Entra ID authentication. 
++After all of the environment variable changes are complete, restart the web app to begin using Microsoft Entra ID authentication between services. It can take a few minutes after restarting for settings changes to take effect. ## Related content |
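As an illustration of what the app does once `AZURE_OPENAI_KEY` is removed, here's a minimal sketch of keyless Microsoft Entra ID authentication using the `openai` and `azure-identity` Python packages. The endpoint, API version, and deployment name are placeholders; `DefaultAzureCredential` resolves to the web app's managed identity when running in Azure.

```python
# Minimal sketch: keyless (Microsoft Entra ID) authentication to Azure
# OpenAI, the pattern used once AZURE_OPENAI_KEY is removed. Endpoint,
# API version, and deployment name are placeholders.
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(
    DefaultAzureCredential(),  # resolves to the managed identity in Azure
    "https://cognitiveservices.azure.com/.default",
)

client = AzureOpenAI(
    azure_endpoint="https://<your-aoai-resource>.openai.azure.com",
    azure_ad_token_provider=token_provider,
    api_version="2024-02-15-preview",
)

response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```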
ai-studio | Rbac Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/concepts/rbac-ai-studio.md | The following table is an example of how to set up role-based access control for | | | | | IT admin | Owner of the hub | The IT admin can ensure the hub is set up to their enterprise standards. They can assign managers the Contributor role on the resource if they want to enable managers to make new hubs. Or they can assign managers the Azure AI Developer role on the resource to not allow for new hub creation. | | Managers | Contributor or Azure AI Developer on the hub | Managers can manage the hub, audit compute resources, audit connections, and create shared connections. |-| Team lead/Lead developer | Azure AI Developer on the hub | Lead developers can create projects for their team and create shared resources (ex: compute and connections) at the hub level. After project creation, project owners can invite other members. | +| Team lead/Lead developer | Azure AI Developer on the hub | Lead developers can create projects for their team and create shared resources (such as compute and connections) at the hub level. After project creation, project owners can invite other members. | | Team members/developers | Contributor or Azure AI Developer on the project | Developers can build and deploy AI models within a project and create assets that enable development such as computes and connections. | ## Access to resources created outside of the hub |
ai-studio | Deploy Models Cohere Command | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-cohere-command.md | The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: catch (RequestFailedException ex) { if (ex.ErrorCode == "content_filter") {- Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}"); + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); } else { |
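For context, the corrected `model_provider_name` attribute and the streaming helper from the diff above (which recurs in the model articles that follow) can be exercised together with the `azure-ai-inference` client. A minimal sketch with placeholder endpoint and key:

```python
# Minimal sketch: the corrected model_provider_name attribute and the
# streaming helper used with the azure-ai-inference client. Endpoint and
# key are placeholders.
from azure.core.credentials import AzureKeyCredential
from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import UserMessage

client = ChatCompletionsClient(
    endpoint="https://<your-endpoint>.inference.ai.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

model_info = client.get_model_info()
print("Model provider name:", model_info.model_provider_name)

# Stream a completion and print chunks as they arrive (no artificial delay).
result = client.complete(
    messages=[UserMessage(content="How many languages are in the world?")],
    stream=True,
)
for update in result:
    if update.choices:
        print(update.choices[0].delta.content or "", end="")
```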
ai-studio | Deploy Models Jais | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jais.md | JAIS 30b Chat is an autoregressive bilingual LLM for **Arabic** & **English**. ::: zone pivot="programming-language-python" +## Jais chat models + You can learn more about the models in their respective model card: The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: except HttpResponseError as ex: ::: zone pivot="programming-language-javascript" +## Jais chat models + You can learn more about the models in their respective model card: catch (error) { ::: zone pivot="programming-language-csharp" +## Jais chat models + You can learn more about the models in their respective model card: catch (RequestFailedException ex) { if (ex.ErrorCode == "content_filter") {- Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}"); + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); } else { catch (RequestFailedException ex) ::: zone pivot="programming-language-rest" +## Jais chat models + You can learn more about the models in their respective model card: |
ai-studio | Deploy Models Jamba | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-jamba.md | The Jamba-Instruct model is AI21's production-grade Mamba-based large language m ::: zone pivot="programming-language-python" +## Jamba-Instruct chat models + You can learn more about the models in their respective model card: The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: except HttpResponseError as ex: ::: zone pivot="programming-language-javascript" +## Jamba-Instruct chat models + You can learn more about the models in their respective model card: catch (error) { ::: zone pivot="programming-language-csharp" +## Jamba-Instruct chat models + You can learn more about the models in their respective model card: catch (RequestFailedException ex) { if (ex.ErrorCode == "content_filter") {- Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}"); + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); } else { catch (RequestFailedException ex) ::: zone pivot="programming-language-rest" +## Jamba-Instruct chat models + You can learn more about the models in their respective model card: |
ai-studio | Deploy Models Llama | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-llama.md | The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: catch (RequestFailedException ex) { if (ex.ErrorCode == "content_filter") {- Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}"); + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); } else { |
ai-studio | Deploy Models Mistral Nemo | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral-nemo.md | The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: catch (RequestFailedException ex) { if (ex.ErrorCode == "content_filter") {- Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}"); + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); } else { |
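The `print_stream` helper edited in these rows is consumed with a streamed completion request. A short usage sketch, assuming the `client` from the sketch after the Jais row above and the helper as defined in the row:

```python
from azure.ai.inference.models import SystemMessage, UserMessage

# Request a streamed completion; with the artificial delay removed from
# print_stream, tokens now print as fast as they arrive.
result = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many languages are in the world?"),
    ],
    stream=True,
)
print_stream(result)
```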
ai-studio | Deploy Models Mistral Open | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral-open.md | The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: |
ai-studio | Deploy Models Mistral | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-mistral.md | The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: catch (RequestFailedException ex) { if (ex.ErrorCode == "content_filter") {- Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}"); + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); } else { |
ai-studio | Deploy Models Phi 3 Vision | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-phi-3-vision.md | The Phi-3 family of small language models (SLMs) is a collection of instruction- ::: zone pivot="programming-language-python" +## Phi-3 chat models with vision + Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: Usage: ::: zone pivot="programming-language-javascript" +## Phi-3 chat models with vision + Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. Usage: ::: zone pivot="programming-language-csharp" +## Phi-3 chat models with vision + Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. Usage: ::: zone pivot="programming-language-rest" +## Phi-3 chat models with vision + Phi-3 Vision is a lightweight, state-of-the-art open multimodal model built upon datasets which include - synthetic data and filtered publicly available websites - with a focus on very high-quality, reasoning dense data both on text and vision. The model belongs to the Phi-3 model family, and the multimodal version comes with 128K context length (in tokens) it can support. 
The model underwent a rigorous enhancement process, incorporating both supervised fine-tuning and direct preference optimization to ensure precise instruction adherence and robust safety measures. |
ai-studio | Deploy Models Phi 3 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-phi-3.md | The response is as follows: ```python print("Model name:", model_info.model_name) print("Model type:", model_info.model_type)-print("Model provider name:", model_info.model_provider) +print("Model provider name:", model_info.model_provider_name) ``` ```console To visualize the output, define a helper function to print the stream. ```python def print_stream(result): """- Prints the chat completion with streaming. Some delay is added to simulate - a real-time conversation. + Prints the chat completion with streaming. """ import time for update in result: if update.choices: print(update.choices[0].delta.content, end="")- time.sleep(0.05) ``` You can visualize how streaming generates content: catch (RequestFailedException ex) { if (ex.ErrorCode == "content_filter") {- Console.WriteLine($"Your query has trigger Azure Content Safeaty: {ex.Message}"); + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); } else { |
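The corrected C# catch block in these rows has a Python counterpart. A hedged sketch of the same content-filter handling with `azure-ai-inference`; the environment variable names are assumptions, and the status-code check reflects the common pattern of content-filter violations surfacing as HTTP 400 responses:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential
from azure.core.exceptions import HttpResponseError

# Assumed environment variables; replace with your deployment values.
client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

try:
    response = client.complete(
        messages=[
            SystemMessage(content="You are an AI assistant that helps people find information."),
            UserMessage(
                content="Chopping tomatoes and cutting them into cubes or wedges "
                "are great ways to practice your knife skills."
            ),
        ]
    )
    print(response.choices[0].message.content)
except HttpResponseError as ex:
    if ex.status_code == 400:
        # Content-filter violations surface as HTTP 400 responses.
        print("Your query triggered Azure AI Content Safety:", ex.message)
    else:
        raise
```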
ai-studio | Deploy Models Serverless | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-serverless.md | This article uses a Meta Llama model deployment for illustration. However, you c ) ``` + # [Bicep](#tab/bicep) ++ Install the Azure CLI as described at [Azure CLI](/cli/azure/). ++ Configure the following environment variables according to your settings: ++ ```azurecli + RESOURCE_GROUP="serverless-models-dev" + LOCATION="eastus2" + ``` + # [ARM](#tab/arm) You can use any compatible web browser to [deploy ARM templates](../../azure-resource-manager/templates/deploy-portal.md) in the Microsoft Azure portal or use any of the deployment tools. This tutorial uses the [Azure CLI](/cli/azure/). The next section covers the steps for subscribing your project to a model offeri Serverless API endpoints can deploy both Microsoft and non-Microsoft offered models. For Microsoft models (such as Phi-3 models), you don't need to create an Azure Marketplace subscription and you can [deploy them to serverless API endpoints directly](#deploy-the-model-to-a-serverless-api-endpoint) to consume their predictions. For non-Microsoft models, you need to create the subscription first. If it's your first time deploying the model in the project, you have to subscribe your project for the particular model offering from the Azure Marketplace. Each project has its own subscription to the particular Azure Marketplace offering of the model, which allows you to control and monitor spending. +> [!TIP] +> Skip this step if you are deploying models from the Phi-3 family of models. Directly [deploy the model to a serverless API endpoint](#deploy-the-model-to-a-serverless-api-endpoint). + > [!NOTE] > Models offered through the Azure Marketplace are available for deployment to serverless API endpoints in specific regions. Check [Model and region availability for Serverless API deployments](deploy-models-serverless-availability.md) to verify which models and regions are available. If the one you need is not listed, you can deploy to a workspace in a supported region and then [consume serverless API endpoints from a different workspace](deploy-models-serverless-connect.md). Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod ).result() ``` + # [Bicep](#tab/bicep) ++ Use the following bicep configuration to create a model subscription: ++ __model-subscription.bicep__ + + ```bicep + param projectName string = 'my-project' + param modelId string = 'azureml://registries/azureml-meta/models/Meta-Llama-3-8B-Instruct' + + var modelName = substring(modelId, (lastIndexOf(modelId, '/') + 1)) + var subscriptionName = '${modelName}-subscription' + + resource projectName_subscription 'Microsoft.MachineLearningServices/workspaces/marketplaceSubscriptions@2024-04-01-preview' = if (!startsWith( + modelId, + 'azureml://registries/azureml/' + )) { + name: '${projectName}/${subscriptionName}' + properties: { + modelId: modelId + } + } + ``` ++ Then create the resource as follows: ++ ```azurecli + ``` + # [ARM](#tab/arm) Use the following template to create a model subscription: - __template.json__ + __model-subscription.json__ ```json { Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod } ``` + Use the Azure portal or the Azure CLI to create the deployment. ++ ```azurecli + ``` + 1. 
Once you subscribe the project for the particular Azure Marketplace offering, subsequent deployments of the same offering in the same project don't require subscribing again. 1. At any point, you can see the model offers to which your project is currently subscribed: Serverless API endpoints can deploy both Microsoft and non-Microsoft offered mod print(sub.as_dict()) ``` + # [Bicep](#tab/bicep) ++ You can use the resource management tools to query the resources. The following code uses Azure CLI: ++ ```azurecli + az resource list \ + --query "[?type=='Microsoft.SaaS']" + ``` + # [ARM](#tab/arm) You can use the resource management tools to query the resources. The following code uses Azure CLI: In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**. ).result() ``` + # [Bicep](#tab/bicep) ++ Use the following template to create an endpoint: ++ __serverless-endpoint.bicep__ ++ ```bicep + param projectName string = 'my-project' + param endpointName string = 'myserverless-text-1234ss' + param location string = resourceGroup().location + param modelId string = 'azureml://registries/azureml-meta/models/Meta-Llama-3-8B-Instruct' + + var modelName = substring(modelId, (lastIndexOf(modelId, '/') + 1)) + var subscriptionName = '${modelName}-subscription' + + resource projectName_endpoint 'Microsoft.MachineLearningServices/workspaces/serverlessEndpoints@2024-04-01-preview' = { + name: '${projectName}/${endpointName}' + location: location + sku: { + name: 'Consumption' + } + properties: { + modelSettings: { + modelId: modelId + } + } + dependsOn: [ + projectName_subscription + ] + } + + output endpointUri string = projectName_endpoint.properties.inferenceEndpoint.uri + ``` ++ Create the deployment as follows: ++ ```azurecli + ``` + # [ARM](#tab/arm) Use the following template to create an endpoint: In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**. ```azurecli az deployment group create \- --name model-subscription-deployment \ - --resource-group <resource-group> \ + --resource-group $RESOURCE_GROUP \ --template-file template.json ``` In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**. ).result() ``` + # [Bicep](#tab/bicep) ++ You can use the resource management tools to query the resources. The following code uses Azure CLI: ++ ```azurecli + az resource list \ + --query "[?type=='Microsoft.MachineLearningServices/workspaces/serverlessEndpoints']" + ``` + # [ARM](#tab/arm) You can use the resource management tools to query the resources. The following code uses Azure CLI: In this section, you create an endpoint with the name **meta-llama3-8b-qwerty**. print(endpoint_keys.secondary_key) ``` + # [Bicep](#tab/bicep) ++ Use REST APIs to query this information. + # [ARM](#tab/arm) Use REST APIs to query this information. Models deployed in Azure Machine Learning and Azure AI studio in Serverless API Read more about the [capabilities of this API](../reference/reference-model-inference-api.md#capabilities) and how [you can use it when building applications](../reference/reference-model-inference-api.md#getting-started). +## Network isolation ++Endpoints for models deployed as Serverless APIs follow the public network access (PNA) flag setting of the AI Studio Hub that has the project in which the deployment exists. To secure your MaaS endpoint, disable the PNA flag on your AI Studio Hub. You can secure inbound communication from a client to your endpoint by using a private endpoint for the hub. 
++To set the PNA flag for the Azure AI hub: ++1. Go to the [Azure portal](https://portal.azure.com). +2. Search for the Resource group to which the hub belongs, and select your Azure AI hub from the resources listed for this Resource group. +3. On the hub Overview page, use the left navigation pane to go to Settings > Networking. +4. Under the **Public access** tab, you can configure settings for the public network access flag. +5. Save your changes. Your changes might take up to five minutes to propagate. + ## Delete endpoints and subscriptions You can delete model subscriptions and endpoints. Deleting a model subscription makes any associated endpoint become *Unhealthy* and unusable. To delete the associated model subscription: client.marketplace_subscriptions.begin_delete(subscription_name).wait() ``` +# [Bicep](#tab/bicep) ++You can use the resource management tools to manage the resources. The following code uses Azure CLI: ++```azurecli +az resource delete --name <resource-name> +``` ++ # [ARM](#tab/arm) You can use the resource management tools to manage the resources. The following code uses Azure CLI: |
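The Bicep templates added in this row have a Python equivalent in the `azure-ai-ml` SDK, which the row's truncated `begin_create_or_update(...).result()` snippets appear to come from. A sketch under that assumption; the subscription ID is a placeholder, and the other values reuse the row's examples:

```python
from azure.ai.ml import MLClient
from azure.ai.ml.entities import MarketplaceSubscription, ServerlessEndpoint
from azure.identity import DefaultAzureCredential

client = MLClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
    resource_group_name="serverless-models-dev",
    workspace_name="my-project",
)

model_id = "azureml://registries/azureml-meta/models/Meta-Llama-3-8B-Instruct"

# Non-Microsoft offerings need an Azure Marketplace subscription first.
client.marketplace_subscriptions.begin_create_or_update(
    MarketplaceSubscription(
        model_id=model_id,
        name="Meta-Llama-3-8B-Instruct-subscription",
    )
).result()

# Then create the serverless endpoint itself, mirroring the Bicep resource above.
endpoint = client.serverless_endpoints.begin_create_or_update(
    ServerlessEndpoint(name="meta-llama3-8b-qwerty", model_id=model_id)
).result()
print(endpoint.scoring_uri)
```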
ai-studio | Sdk Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/develop/sdk-overview.md | Title: How to get started with Azure AI SDKs -description: This article provides instructions on how to get started with Azure AI SDKs. +description: This article provides an overview of available Azure AI SDKs. - build-2024 Previously updated : 5/21/2024- Last updated : 8/9/2024+ # Overview of the Azure AI SDKs - Microsoft offers a variety of packages that you can use for building generative AI applications in the cloud. In most applications, you need to use a combination of packages to manage and use various Azure services that provide AI functionality. We also offer integrations with open-source libraries like LangChain and mlflow for use with Azure. In this article we'll give an overview of the main services and SDKs you can use with Azure AI Studio. For building generative AI applications, we recommend using the following services and SDKs:- * [Azure Machine Learning](../../../machine-learning/overview-what-is-azure-machine-learning.md) for the hub and project infrastructure used in AI Studio to organize your work into projects, manage project artifacts (data, evaluation runs, traces), fine-tune & deploy models, and connect to external services and resources - * [Azure AI Services](../../../ai-services/what-are-ai-services.md) provides pre-built and customizable intelligent APIs and models, with support for Azure OpenAI, Search, Speech, Vision, and Language + * [Azure Machine Learning](../../../machine-learning/overview-what-is-azure-machine-learning.md) for the hub and project infrastructure used in AI Studio to organize your work into projects, manage project artifacts (data, evaluation runs, traces), fine-tune & deploy models, and connect to external services and resources. + * [Azure AI services](../../../ai-services/what-are-ai-services.md) provides pre-built and customizable intelligent APIs and models, with support for Azure OpenAI, Azure AI Search, Speech, Vision, and Language. * [Prompt flow](https://microsoft.github.io/promptflow/https://docsupdatetracker.net/index.html) for developer tools to streamline the end-to-end development cycle of LLM-based AI application, with support for inferencing, indexing, evaluation, deployment, and monitoring. For each of these, there are separate sets of management libraries and client libraries. ## Management libraries for creating and managing cloud resources -Azure [Management libraries](/azure/developer/python/sdk/azure-sdk-overview#create-and-manage-azure-resources-with-management-libraries) (also "control plane" or "management plane"), for creating and managing cloud resources that are used by your application. +Azure [management libraries](/azure/developer/python/sdk/azure-sdk-overview#create-and-manage-azure-resources-with-management-libraries) (also "control plane" or "management plane"), for creating and managing cloud resources that are used by your application. 
Azure Machine Learning * [Azure Machine Learning Python SDK (v2)](/python/api/overview/azure/ai-ml-readme) * [Azure Machine Learning CLI (v2)](/azure/machine-learning/how-to-configure-cli?view=azureml-api-2&tabs=public) * [Azure Machine Learning REST API](/rest/api/azureml) -Azure AI Services +Azure AI services * [Azure AI Services Python Management Library](/python/api/overview/azure/mgmt-cognitiveservices-readme?view=azure-python) * [Azure AI Search Python Management Library](/python/api/azure-mgmt-search/azure.mgmt.search?view=azure-python) * [Azure CLI commands for Azure AI Search](/azure/search/search-manage-azure-cli) Prompt flow Azure [Client libraries](/azure/developer/python/sdk/azure-sdk-overview#connect-to-and-use-azure-resources-with-client-libraries) (also called "data plane") for connecting to and using provisioned services from runtime application code. -Azure AI Services +Azure AI services * [Azure AI services SDKs](../../../ai-services/reference/sdk-package-resources.md?context=/azure/ai-studio/context/context) * [Azure AI services REST APIs](../../../ai-services/reference/rest-api-resources.md?context=/azure/ai-studio/context/context) |
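To illustrate the control-plane/data-plane split this row describes, here is a small sketch using one of the listed management libraries. It assumes the `azure-mgmt-cognitiveservices` package and a real subscription ID:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.cognitiveservices import CognitiveServicesManagementClient

# Control plane ("management plane"): enumerate Azure AI services accounts
# in a subscription, as opposed to calling a deployed model (data plane).
mgmt = CognitiveServicesManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)
for account in mgmt.accounts.list():
    print(account.name, account.kind, account.location)
```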
ai-studio | Flow Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/flow-deploy.md | If you enable this, tracing data and system metrics during inference time (such ## Grant permissions to the endpoint > [!IMPORTANT]-> Granting permissions (adding role assignment) is only enabled to the **Owner** of the specific Azure resources. You might need to ask your IT admin for help. +> Granting permissions (adding role assignment) is only enabled for the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help. > > It's recommended to grant roles to the **user-assigned** identity **before the deployment creation**. > It might take more than 15 minutes for the granted permission to take effect. -You can grant all permissions in Azure portal UI by following steps. +You can grant the required permissions in the Azure portal UI by following these steps. 1. Go to the Azure AI Studio project overview page in [Azure portal](https://ms.portal.azure.com/#home). |
ai-studio | Model Catalog Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/model-catalog-overview.md | Model | Managed compute | Serverless API (pay-as-you-go) --|--|-- Llama family models | Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat <br> Llama-3-8B-Instruct <br> Llama-3-70B-Instruct <br> Llama-3-8B <br> Llama-3-70B | Llama-3-70B-Instruct <br> Llama-3-8B-Instruct <br> Llama-2-7b <br> Llama-2-7b-chat <br> Llama-2-13b <br> Llama-2-13b-chat <br> Llama-2-70b <br> Llama-2-70b-chat Mistral family models | mistralai-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x22B-Instruct-v0-1 <br> mistral-community-Mixtral-8x22B-v0-1 <br> mistralai-Mixtral-8x7B-v01 <br> mistralai-Mistral-7B-Instruct-v0-2 <br> mistralai-Mistral-7B-v01 <br> mistralai-Mixtral-8x7B-Instruct-v01 <br> mistralai-Mistral-7B-Instruct-v01 | Mistral-large (2402) <br> Mistral-large (2407) <br> Mistral-small <br> Mistral-NeMo -Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual +Cohere family models | Not available | Cohere-command-r-plus <br> Cohere-command-r <br> Cohere-embed-v3-english <br> Cohere-embed-v3-multilingual <br> Cohere-rerank-v3-english <br> Cohere-rerank-v3-multilingual JAIS | Not available | jais-30b-chat Phi-3 family models | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct | Phi-3-mini-4k-Instruct <br> Phi-3-mini-128k-Instruct <br> Phi-3-small-8k-Instruct <br> Phi-3-small-128k-Instruct <br> Phi-3-medium-4k-instruct <br> Phi-3-medium-128k-instruct Nixtla | Not available | TimeGEN-1 |
ai-studio | Get Started Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/quickstarts/get-started-code.md | Title: Get started building a chat app using the prompt flow SDK -description: This article provides instructions on how to set up your development environment for Azure AI SDKs. +description: This article provides instructions on how to build a custom chat app in Python using the prompt flow SDK. Previously updated : 5/30/2024 Last updated : 8/6/2024 # Build a custom chat app in Python using the prompt flow SDK+ [!INCLUDE [Feature preview](~/reusable-content/ce-skilling/azure/includes/ai-studio/includes/feature-preview.md)] In this quickstart, we walk you through setting up your local development environment with the prompt flow SDK. We write a prompt, run it as part of your app code, trace the LLM calls being made, and run a basic evaluation on the outputs of the LLM. ## Prerequisites +> [!IMPORTANT] +> You must have the necessary permissions to add role assignments for storage accounts in your Azure subscription. Granting permissions (adding role assignment) is only allowed by the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help to [grant access to call Azure OpenAI Service using your identity](#grant-access-to-call-azure-openai-service-using-your-identity). + Before you can follow this quickstart, create the resources that you need for your application: - An [AI Studio hub](../how-to/create-azure-ai-resource.md) for connecting to external resources. - A [project](../how-to/create-projects.md) for organizing your project artifacts and sharing traces and evaluation runs. Before you can follow this quickstart, create the resources that you need for yo Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already. You can also create these resources by following the [SDK guide to create a hub and project](../how-to/develop/create-hub-project-sdk.md) article. -Also, you must have the necessary permissions to add role assignments for storage accounts in your Azure subscription. Granting permissions (adding role assignment) is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help to [grant access to call Azure OpenAI Service using your identity](#grant-access-to-call-azure-openai-service-using-your-identity). - ## Grant access to call Azure OpenAI Service using your identity To use security best practices, instead of API keys we use [Microsoft Entra ID](/entra/fundamentals/whatis) to authenticate with Azure OpenAI using your user identity. To grant yourself access to the Azure AI Services resource that you're using: 1. Continue through the wizard and select **Review + assign** to add the role assignment. -## Install the Azure CLI and login +## Install the Azure CLI and sign in -Now we install the Azure CLI and login from your local development environment, so that you can use your user credentials to call the Azure OpenAI service. +You install the Azure CLI and sign in from your local development environment, so that you can use your user credentials to call the Azure OpenAI service. 
In most cases you can install the Azure CLI from your terminal using the following command: # [Windows](#tab/windows) brew update && brew install azure-cli You can follow instructions [How to install the Azure CLI](/cli/azure/install-azure-cli) if these commands don't work for your particular operating system or setup. -After you install the Azure CLI, login using the ``az login`` command and sign-in using the browser: +After you install the Azure CLI, sign in using the ``az login`` command and sign-in using the browser: ``` az login ``` source .venv/bin/activate -Activating the Python environment means that when you run ```python``` or ```pip``` from the command line, you'll be using the Python interpreter contained in the ```.venv``` folder of your application. +Activating the Python environment means that when you run ```python``` or ```pip``` from the command line, you then use the Python interpreter contained in the ```.venv``` folder of your application. > [!NOTE] > You can use the ```deactivate``` command to exit the python virtual environment, and can later reactivate it when needed. Your AI services endpoint and deployment name are required to call the Azure Ope ## Create a basic chat prompt and app -First create a prompt template file, for this we'll use **Prompty** which is the prompt template format supported by prompt flow. +First create a **Prompty** file, which is the prompt template format supported by prompt flow. Create a ```chat.prompty``` file and copy the following code into it: For more information on how to use prompt flow evaluators, including how to make ## Next step > [!div class="nextstepaction"]-> [Augment the model with data for retrieval augmented generation (RAG)](../tutorials/copilot-sdk-build-rag.md) +> [Add data and use retrieval augmented generation (RAG) to build a copilot](../tutorials/copilot-sdk-build-rag.md) |
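Once the `chat.prompty` file described above exists, it can be executed directly with the prompt flow SDK. A minimal sketch; the input name `question` is an assumption and must match the inputs your template declares:

```python
from promptflow.core import Prompty

# Load the Prompty template and run it like a function.
flow = Prompty.load(source="chat.prompty")
result = flow(question="What can you tell me about your tents?")
print(result)
```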
ai-studio | Reference Model Inference Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-api.md | Models deployed to [managed inference](../concepts/deployments-overview.md): > [!div class="checklist"] > * [Meta Llama 3 instruct](../how-to/deploy-models-llama.md) family of models > * [Phi-3](../how-to/deploy-models-phi-3.md) family of models-> * Mixtral famility of models +> * [Mistral](../how-to/deploy-models-mistral-open.md) and [Mixtral](../how-to/deploy-models-mistral-open.md?tabs=mistral-8x7B-instruct) family of models. The API is compatible with Azure OpenAI model deployments. const client = new ModelClient( Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started. +# [C#](#tab/csharp) ++Install the Azure AI inference library with the following command: ++```dotnetcli +dotnet add package Azure.AI.Inference --prerelease +``` ++For endpoint with support for Microsoft Entra ID (formerly Azure Active Directory), install the `Azure.Identity` package: ++```dotnetcli +dotnet add package Azure.Identity +``` ++Import the following namespaces: ++```csharp +using Azure; +using Azure.Identity; +using Azure.AI.Inference; +``` ++Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions: ++```csharp +ChatCompletionsClient client = new ChatCompletionsClient( + new Uri(Environment.GetEnvironmentVariable("AZURE_INFERENCE_ENDPOINT")), + new AzureKeyCredential(Environment.GetEnvironmentVariable("AZURE_INFERENCE_CREDENTIAL")) +); +``` ++For endpoint with support for Microsoft Entra ID (formerly Azure Active Directory): ++```csharp +ChatCompletionsClient client = new ChatCompletionsClient( + new Uri(Environment.GetEnvironmentVariable("AZURE_INFERENCE_ENDPOINT")), + new DefaultAzureCredential(includeInteractiveCredentials: true) +); +``` ++Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get yourself started. + # [REST](#tab/rest) Use the reference section to explore the API design and which parameters are available. 
For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions: var response = await client.path("/chat/completions").post({ console.log(response.choices[0].message.content) ``` +# [C#](#tab/csharp) ++```csharp +requestOptions = new ChatCompletionsOptions() +{ + Messages = { + new ChatRequestSystemMessage("You are a helpful assistant."), + new ChatRequestUserMessage("How many languages are in the world?") + }, + AdditionalProperties = { { "logprobs", BinaryData.FromString("true") } }, +}; ++response = client.Complete(requestOptions, extraParams: ExtraParameters.PassThrough); +Console.WriteLine($"Response: {response.Value.Choices[0].Message.Content}"); +``` + # [REST](#tab/rest) __Request__ catch (error) { } ``` +# [C#](#tab/csharp) ++```csharp +try +{ + requestOptions = new ChatCompletionsOptions() + { + Messages = { + new ChatRequestSystemMessage("You are a helpful assistant"), + new ChatRequestUserMessage("How many languages are in the world?"), + }, + ResponseFormat = new ChatCompletionsResponseFormatJSON() + }; ++ response = client.Complete(requestOptions); + Console.WriteLine(response.Value.Choices[0].Message.Content); +} +catch (RequestFailedException ex) +{ + if (ex.Status == 422) + { + Console.WriteLine($"Looks like the model doesn't support a parameter: {ex.Message}"); + } + else + { + throw; + } +} +``` + # [REST](#tab/rest) __Request__ catch (error) { } ``` +# [C#](#tab/csharp) ++```csharp +try +{ + requestOptions = new ChatCompletionsOptions() + { + Messages = { + new ChatRequestSystemMessage("You are an AI assistant that helps people find information."), + new ChatRequestUserMessage( + "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills." + ), + }, + }; ++ response = client.Complete(requestOptions); + Console.WriteLine(response.Value.Choices[0].Message.Content); +} +catch (RequestFailedException ex) +{ + if (ex.ErrorCode == "content_filter") + { + Console.WriteLine($"Your query has triggered Azure Content Safety: {ex.Message}"); + } + else + { + throw; + } +} +``` + # [REST](#tab/rest) __Request__ The client library `@azure-rest/ai-inference` does inference, including chat com Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started. +# [C#](#tab/csharp) ++The client library `Azure.AI.Inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints). ++Explore our [samples](https://aka.ms/azsdk/azure-ai-inference/csharp/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/csharp/reference) to get yourself started. + # [REST](#tab/rest) Explore the reference section of the Azure AI model inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints). |
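The C# examples added in this row also have Python counterparts in the `azure-ai-inference` package. A sketch of the pass-through extra-parameters call, mirroring `ExtraParameters.PassThrough`; the environment variable names are assumptions:

```python
import os

from azure.ai.inference import ChatCompletionsClient
from azure.ai.inference.models import SystemMessage, UserMessage
from azure.core.credentials import AzureKeyCredential

client = ChatCompletionsClient(
    endpoint=os.environ["AZURE_INFERENCE_ENDPOINT"],  # assumed variable name
    credential=AzureKeyCredential(os.environ["AZURE_INFERENCE_CREDENTIAL"]),
)

# model_extras forwards provider-specific parameters such as logprobs,
# matching the "extra-parameters: pass-through" behavior in the C# tab.
response = client.complete(
    messages=[
        SystemMessage(content="You are a helpful assistant."),
        UserMessage(content="How many languages are in the world?"),
    ],
    model_extras={"logprobs": True},
)
print(response.choices[0].message.content)
```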
ai-studio | Reference Model Inference Info | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-info.md | The information about the deployed model. ### ModelType -The infernce task associated with the mode. +The inference task associated with the model. | Name | Type | Description | |
ai-studio | Copilot Sdk Build Rag | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/copilot-sdk-build-rag.md | description: Learn how to build a RAG-based copilot using the prompt flow SDK. Previously updated : 7/18/2024 Last updated : 8/6/2024 In this [Azure AI Studio](https://ai.azure.com) tutorial, you use the prompt flo This tutorial is part one of a two-part tutorial. > [!TIP]-> This tutorial is based on code in the sample repo for a [copilot application that implements RAG](https://github.com/Azure-Samples/rag-data-openai-python-promptflow). +> Be sure to set aside enough time to complete the prerequisites before starting this tutorial. If you're new to Azure AI Studio, you might need to spend additional time to get familiar with the platform. -This part one shows you how to enhance a basic chat application by adding retrieval augmented generation (RAG) to ground the responses in your custom data. +This part one shows you how to enhance a basic chat application by adding [retrieval augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) to ground the responses in your custom data. In this part one, you learn how to: In this part one, you learn how to: ## Prerequisites +> [!IMPORTANT] +> You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help with completing the [assign access](#configure-access-for-the-azure-ai-search-service) section. + - You need to complete the [Build a custom chat app in Python using the prompt flow SDK quickstart](../quickstarts/get-started-code.md) to set up your environment. > [!IMPORTANT] > This tutorial builds on the code and environment you set up in the quickstart. -- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Clone the repository or [download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine.--- You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help with completing the [assign access](#configure-access-for-the-azure-ai-search-service) section.+- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine. ## Application code structure AZURE_OPENAI_CONNECTION_NAME=<your AIServices or Azure OpenAI connection name> ## Deploy an embedding model -For the RAG capability, we need to be able to embed the search query to search the Azure AI Search index we create. 
+For the [retrieval augmented generation (RAG)](../concepts/retrieval-augmented-generation.md) capability, we need to be able to embed the search query to search the Azure AI Search index we create. 1. Deploy an Azure OpenAI embedding model. Follow the [deploy Azure OpenAI models guide](../how-to/deploy-models-openai.md) and deploy the **text-embedding-ada-002** model. Use the same **AIServices** or **Azure OpenAI** connection that you used [to deploy the chat model](../quickstarts/get-started-playground.md#deploy-a-chat-model). 2. Add embedding model environment variables in your *.env* file. For the *AZURE_OPENAI_EMBEDDING_DEPLOYMENT* value, enter the name of the embedding model that you deployed. For the RAG capability, we need to be able to embed the search query to search t AZURE_OPENAI_EMBEDDING_DEPLOYMENT=embedding_model_deployment_name ``` +For more information about the embedding model, see the [Azure OpenAI Service embeddings documentation](../../ai-services/openai/how-to/embeddings.md). + ## Create an Azure AI Search index The goal with this RAG-based application is to ground the model responses in your custom data. You use an Azure AI Search index that stores vectorized data from the embeddings model. The search index is used to retrieve relevant documents based on the user's question. The goal with this RAG-based application is to ground the model responses in you You need an Azure AI Search service and connection in order to create a search index. > [!NOTE]-> Creating an Azure AI Search service and subsequent search indexes has associated costs. You can see details about pricing and pricing tiers for the Azure AI Search service on the creation page, to confirm cost before creating the resource. +> Creating an [Azure AI Search service](../../search/index.yml) and subsequent search indexes has associated costs. You can see details about pricing and pricing tiers for the Azure AI Search service on the creation page, to confirm cost before creating the resource. ### Create an Azure AI Search service Otherwise, you can create an Azure AI Search service using the [Azure portal](ht ## [Azure CLI](#tab/cli) 1. Open a terminal on your local machine.-1. Type `az` and then enter to verify that the Azure CLI tool is installed. If it's installed, a help menu with `az` commands appears. If you get an error, make sure you followed the [steps for installing the Azure CLI in the quickstart](../quickstarts/get-started-code.md#install-the-azure-cli-and-login). +1. Type `az` and then enter to verify that the Azure CLI tool is installed. If it's installed, a help menu with `az` commands appears. If you get an error, make sure you followed the [steps for installing the Azure CLI in the quickstart](../quickstarts/get-started-code.md#install-the-azure-cli-and-sign-in). 1. Follow the steps to create an Azure AI Search service using the [`az search service create`](../../search/search-manage-azure-cli.md#create-or-delete-a-service) command. |
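The embedding deployment configured above is what the RAG flow uses to vectorize search queries. A hedged sketch of that call with the `openai` package and keyless auth, assuming the tutorial's environment variables are loaded; the `AZURE_OPENAI_ENDPOINT` name and API version are assumptions:

```python
import os

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Keyless auth via Microsoft Entra ID, matching the tutorial's approach.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
    azure_endpoint=os.environ["AZURE_OPENAI_ENDPOINT"],  # assumed variable name
    azure_ad_token_provider=token_provider,
    api_version="2024-02-01",  # assumed API version
)

# Embed a search query with the deployment named in the .env file.
result = client.embeddings.create(
    model=os.environ["AZURE_OPENAI_EMBEDDING_DEPLOYMENT"],
    input="What can you tell me about your hiking shoes?",
)
print(len(result.data[0].embedding))  # 1536 dimensions for text-embedding-ada-002
```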
ai-studio | Copilot Sdk Evaluate Deploy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/copilot-sdk-evaluate-deploy.md | description: Evaluate and deploy a RAG-based copilot with the prompt flow SDK. T Previously updated : 7/18/2024 Last updated : 8/6/2024 In this [Azure AI Studio](https://ai.azure.com) tutorial, you use the prompt flo This tutorial is part two of a two-part tutorial. -> [!TIP] -> This tutorial is based on code in the sample repo for a [copilot application that implements RAG](https://github.com/Azure-Samples/rag-data-openai-python-promptflow). - In this part two, you learn how to: > [!div class="checklist"] In this part two, you learn how to: - You must complete [part 1 of the tutorial series](copilot-sdk-build-rag.md) to build the copilot application. -- You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your IT admin for help with completing the [assign access](#assign-access-for-the-endpoint) section.+- You must have the necessary permissions to add role assignments in your Azure subscription. Granting permissions by role assignment is only allowed by the **Owner** of the specific Azure resources. You might need to ask your Azure subscription owner (who might be your IT admin) for help with endpoint access later in the tutorial. ## Evaluate the quality of copilot responses Now define an evaluation script that will: - Load the sample `.jsonl` dataset. - Generate a target function wrapper around our copilot logic. - Run the evaluation, which takes the target function, and merges the evaluation dataset with the responses from the copilot.-- Generate a set of GPT-assisted metrics (Relevance, Groundedness, and Coherence) to evaluate the quality of the copilot responses.+- Generate a set of GPT-assisted metrics (relevance, groundedness, and coherence) to evaluate the quality of the copilot responses. - Output the results locally, and logs the results to the cloud project. The script allows you to review the results locally, by outputting the results in the command line, and to a json file. We recommend you test your application in the Azure AI Studio. If you prefer to Note your endpoint name, which you need for the next steps. -### Assign access for the endpoint --While you wait for your application to deploy, you or your administrator can assign role-based access to the endpoint. These roles allow the application to run without keys in the deployed environment, just like it did locally. +### Endpoint access for Azure OpenAI resource -Previously, you provided your account with a specific role to be able to access the resource using Microsoft Entra ID authentication. Now, assign the endpoint that same role. +You might need to ask your Azure subscription owner (who might be your IT admin) for help with this section. -### Endpoint access for Azure OpenAI resource +While you wait for your application to deploy, you or your administrator can assign role-based access to the endpoint. These roles allow the application to run without keys in the deployed environment, just like it did locally. -You or your administrator needs to grant your endpoint the **Cognitive Services OpenAI User** role on the Azure AI Services resource that you're using. This role lets your endpoint call the Azure OpenAI service. 
+Previously, you provided your account with a specific role to be able to access the resource using Microsoft Entra ID authentication. Now, assign the endpoint that same **Cognitive Services OpenAI User** role. > [!NOTE] > These steps are similar to how you assigned a role for your user identity to use the Azure OpenAI Service in the [quickstart](../quickstarts/get-started-code.md). To grant yourself access to the Azure AI Services resource that you're using: ### Endpoint access for Azure AI Search resource +You might need to ask your Azure subscription owner (who might be your IT admin) for help with this section. + Similar to how you assigned the **Search Index Data Contributor** [role to your Azure AI Search service](./copilot-sdk-build-rag.md#configure-access-for-the-azure-ai-search-service), you need to assign the same role for your endpoint. 1. In Azure AI Studio, select **Settings** and navigate to the connected **Azure AI Search** service. To avoid incurring unnecessary Azure costs, you should delete the resources you ## Related content -> [!div class="nextstepaction"] -> [Learn more about prompt flow](../how-to/prompt-flow.md) +- [Learn more about prompt flow](../how-to/prompt-flow.md) +- For a sample copilot application that implements RAG, see [Azure-Samples/rag-data-openai-python-promptflow](https://github.com/Azure-Samples/rag-data-openai-python-promptflow) |
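The evaluation script outlined in this row (a target wrapper, dataset merge, and GPT-assisted relevance, groundedness, and coherence metrics) maps onto the `promptflow-evals` package. A sketch under that assumption; the dataset path, wrapper, and model configuration values are placeholders:

```python
from promptflow.core import AzureOpenAIModelConfiguration
from promptflow.evals.evaluate import evaluate
from promptflow.evals.evaluators import (
    CoherenceEvaluator,
    GroundednessEvaluator,
    RelevanceEvaluator,
)

def copilot_qna(question: str) -> dict:
    # Hypothetical wrapper around your copilot logic; replace with your chat function.
    return {"answer": "...", "context": "..."}

# Model configuration for the GPT-assisted evaluators; values are placeholders.
model_config = AzureOpenAIModelConfiguration(
    azure_endpoint="<your-azure-openai-endpoint>",
    azure_deployment="<your-chat-deployment>",
)

result = evaluate(
    data="eval_dataset.jsonl",  # the sample .jsonl dataset
    target=copilot_qna,
    evaluators={
        "relevance": RelevanceEvaluator(model_config),
        "groundedness": GroundednessEvaluator(model_config),
        "coherence": CoherenceEvaluator(model_config),
    },
)
print(result["metrics"])  # aggregate scores; per-row results are also returned
```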
ai-studio | Deploy Chat Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-chat-web-app.md | Your data source is used to help ground the model with specific data. Grounding The steps in this tutorial are: -1. Deploy and test a chat model without your data -1. Add your data -1. Test the model with your data -1. Deploy your web app -+1. Deploy and test a chat model without your data. +1. Add your data. +1. Test the model with your data. +1. Deploy your web app. ## Prerequisites - An Azure subscription - <a href="https://azure.microsoft.com/free/cognitive-services" target="_blank">Create one for free</a>. - An [AI Studio hub](../how-to/create-azure-ai-resource.md), [project](../how-to/create-projects.md), and [deployed Azure OpenAI](../how-to/deploy-models-openai.md) chat model. Complete the [AI Studio playground quickstart](../quickstarts/get-started-playground.md) to create these resources if you haven't already. -- An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data. +- An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product data. -- You need at least one file to upload that contains example data. To complete this tutorial, use the product information samples from the [Azure-Samples/aistudio-python-quickstart-sample repository on GitHub](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data). Specifically, the [product_info_11.md](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/blob/main/dat` on your local computer.+- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Specifically, the `product_info_11.md` file contains product information about the TrailWalker hiking shoes that's relevant for this tutorial example. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine. ## Add your data and try the chat model again Once you're satisfied with the experience in Azure AI Studio, you can deploy the ### Find your resource group in the Azure portal -In this tutorial, your web app is deployed to the same resource group as your AI Studio hub. Later you configure authentication for the web app in the Azure portal. +In this tutorial, your web app is deployed to the same resource group as your [AI Studio hub](../how-to/create-secure-ai-hub.md). Later you configure authentication for the web app in the Azure portal. Follow these steps to navigate from Azure AI Studio to your resource group in the Azure portal: Follow these steps to navigate from Azure AI Studio to your resource group in th :::image type="content" source="../media/tutorials/chat/resource-group-manage-page.png" alt-text="Screenshot of the resource group in the Azure AI Studio." lightbox="../media/tutorials/chat/resource-group-manage-page.png"::: -1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the hub. Keep this page open in a browser tab - you return to it later. +1. You should now be in the Azure portal, viewing the contents of the resource group where you deployed the hub. 
Keep this page open in a browser tab. You return to it later. ### Deploy the web app You're almost there! Now you can test the web app. *If the authentication settings haven't yet taken effect, close the browser tab for your web app and return to the chat playground in Azure AI Studio. Then wait a little longer and try again.* -1. In your web app, you can ask the same question as before ("How much are the TrailWalker hiking shoes"), and this time it uses information from your data to construct the response. You can expand the **references** button to see the data that was used. +1. In your web app, you can ask the same question as before ("How much are the TrailWalker hiking shoes"), and this time it uses information from your data to construct the response. You can expand the **reference** button to see the data that was used. :::image type="content" source="../media/tutorials/chat/chat-with-data-web-app.png" alt-text="Screenshot of the chat experience via the deployed web app." lightbox="../media/tutorials/chat/chat-with-data-web-app.png"::: Once you've enabled chat history, your users will be able to show and hide it in If you delete the Cosmos DB resource but keep the chat history option enabled on the studio, your users will be notified of a connection error, but can continue to use the web app without access to the chat history. -## Next steps +## Related content -- [Create a project in Azure AI Studio](../how-to/create-projects.md).-- Learn more about what you can do in the [Azure AI Studio](../what-is-ai-studio.md).+- [Build and deploy a question and answer copilot with prompt flow in Azure AI Studio.](./deploy-copilot-ai-studio.md). +- [Build your own copilot with the prompt flow SDK.](./copilot-sdk-build-rag.md). |
ai-studio | Deploy Copilot Ai Studio | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/deploy-copilot-ai-studio.md | The steps in this tutorial are: - An [Azure AI Search service connection](../how-to/connections-add.md#create-a-new-connection) to index the sample product and customer data. -- You need a local copy of product and customer data. The [Azure-Samples/aistudio-python-quickstart-sample repository on GitHub](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data) contains sample retail customer and product information that's relevant for this tutorial scenario. Clone the repository or copy the files from [1-customer-info](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data/1-customer-info) and [3-product-info](https://github.com/Azure-Samples/aistudio-python-quickstart-sample/tree/main/data/3-product-info). +- You need a local copy of product data. The [Azure-Samples/rag-data-openai-python-promptflow repository on GitHub](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/) contains sample retail product information that's relevant for this tutorial scenario. Specifically, the `product_info_11.md` file contains product information about the TrailWalker hiking shoes that's relevant for this tutorial example. [Download the example Contoso Trek retail product data in a ZIP file](https://github.com/Azure-Samples/rag-data-openai-python-promptflow/raw/main/tutorial/data.zip) to your local machine. ## Add your data and try the chat model again You can return to the prompt flow anytime by selecting **Prompt flow** from **To ## Customize prompt flow with multiple data sources -previously in the [AI Studio](https://ai.azure.com) chat playground, you [added your data](#add-your-data-and-try-the-chat-model-again) to create one search index that contained product data for the Contoso copilot. So far, users can only inquire about products with questions such as "How much do the TrailWalker hiking shoes cost?". But they can't get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" To enable this scenario, we add another index with customer information to the flow. +Previously in the [AI Studio](https://ai.azure.com) chat playground, you [added your data](#add-your-data-and-try-the-chat-model-again) to create one search index that contained product data for the Contoso copilot. So far, users can only inquire about products with questions such as "How much do the TrailWalker hiking shoes cost?". But they can't get answers to questions such as "How many TrailWalker hiking shoes did Daniel Wilson buy?" To enable this scenario, we add another index with customer information to the flow. ### Create the customer info index |
ai-studio | Screen Reader | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/tutorials/screen-reader.md | Once you have created or selected a project, go to the navigation landmark. Pres The prompt flow UI in Azure AI Studio is composed of the following main sections: the command toolbar, flow (includes list of the flow nodes), files, and graph view. The flow, files, and graph sections each have their own H2 headings that can be used for navigation. - ### Flow - This is the main working area where you can edit your flow, for example adding a new node, editing the prompt, selecting input data |
api-management | Api Management Policies | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-policies.md | More information about policies: | [Set usage quota by subscription](quota-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per subscription basis. | Yes | Yes | Yes | Yes | [Set usage quota by key](quota-by-key-policy.md) | Allows you to enforce a renewable or lifetime call volume and/or bandwidth quota, on a per key basis. | Yes | No | No | Yes | | [Limit concurrency](limit-concurrency-policy.md) | Prevents enclosed policies from executing by more than the specified number of requests at a time. | Yes | Yes | Yes | Yes |-| [Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md) | Prevents Azure OpenAI API usage spikes by limiting language model tokens per calculated key. | Yes | Yes | No | No | +| [Limit Azure OpenAI Service token usage](azure-openai-token-limit-policy.md) | Prevents Azure OpenAI API usage spikes by limiting large language model tokens per calculated key. | Yes | Yes | No | No | +| [Limit large language model API token usage](llm-token-limit-policy.md) | Prevents large language model (LLM) API usage spikes by limiting LLM tokens per calculated key. | Yes | Yes | No | No | ## Authentication and authorization More information about policies: | [Get value from cache](cache-lookup-value-policy.md) | Retrieves a cached item by key. | Yes | Yes | Yes | Yes | | [Store value in cache](cache-store-value-policy.md) | Stores an item in the cache by key. | Yes | Yes | Yes | Yes | | [Remove value from cache](cache-remove-value-policy.md) | Removes an item in the cache by key. | Yes | Yes | Yes | Yes |-| [Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md) | Performs cache lookup using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | +| [Get cached responses of Azure OpenAI API requests](azure-openai-semantic-cache-lookup-policy.md) | Performs lookup in Azure OpenAI API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | | [Store responses of Azure OpenAI API requests to cache](azure-openai-semantic-cache-store-policy.md) | Caches response according to the Azure OpenAI API cache configuration. | Yes | Yes | Yes | Yes |+| [Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md) | Performs lookup in large language model API cache using semantic search and returns a valid cached response when available. | Yes | Yes | Yes | Yes | +| [Store responses of large language model API requests to cache](llm-semantic-cache-store-policy.md) | Caches response according to the large language model API cache configuration. | Yes | Yes | Yes | Yes | More information about policies: ||||||--| | [Trace](trace-policy.md) | Adds custom traces into the [request tracing](./api-management-howto-api-inspector.md) output in the test console, Application Insights telemetries, and resource logs. | Yes | Yes<sup>1</sup> | Yes | Yes | | [Emit metrics](emit-metric-policy.md) | Sends custom metrics to Application Insights at execution. | Yes | Yes | Yes | Yes |-| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of language model tokens through Azure OpenAI service APIs. 
| Yes | Yes | No | No | +| [Emit Azure OpenAI token metrics](azure-openai-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model tokens through Azure OpenAI service APIs. | Yes | Yes | No | No | +| [Emit large language model API token metrics](llm-emit-token-metric-policy.md) | Sends metrics to Application Insights for consumption of large language model (LLM) tokens through LLM APIs. | Yes | Yes | No | No | <sup>1</sup> In the V2 gateway, the `trace` policy currently does not add tracing output in the test console. |
api-management | Azure Openai Enable Semantic Caching | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-enable-semantic-caching.md | +> [!NOTE] +> The configuration steps in this article enable semantic caching for Azure OpenAI APIs. These steps can be generalized to enable semantic caching for corresponding large language model (LLM) APIs available through the [Azure AI Model Inference API](../ai-studio/reference/reference-model-inference-api.md). + ## Prerequisites * One or more Azure OpenAI Service APIs must be added to your API Management instance. For more information, see [Add an Azure OpenAI Service API to Azure API Management](azure-openai-api-from-specification.md). with request body: When the request succeeds, the response includes a completion for the chat message. -## Create a backend for Embeddings API +## Create a backend for embeddings API -Configure a [backend](backends.md) resource for the Embeddings API deployment with the following settings: +Configure a [backend](backends.md) resource for the embeddings API deployment with the following settings: * **Name** - A name of your choice, such as `embeddings-backend`. You use this name to reference the backend in policies. * **Type** - Select **Custom URL**.-* **Runtime URL** - The URL of the Embeddings API deployment in the Azure OpenAI Service, similar to: +* **Runtime URL** - The URL of the embeddings API deployment in the Azure OpenAI Service, similar to: ``` https://my-aoai.openai.azure.com/openai/deployments/embeddings-deployment/embeddings ``` If the request is successful, the response includes a vector representation of t Configure the following policies to enable semantic caching for Azure OpenAI APIs in Azure API Management: * In the **Inbound processing** section for the API, add the [azure-openai-semantic-cache-lookup](azure-openai-semantic-cache-lookup-policy.md) policy. In the `embeddings-backend-id` attribute, specify the Embeddings API backend you created. + > [!NOTE] + > When enabling semantic caching for other large language model APIs, use the [llm-semantic-cache-lookup](llm-semantic-cache-lookup-policy.md) policy instead. + Example: ```xml Configure the following policies to enable semantic caching for Azure OpenAI API * In the **Outbound processing** section for the API, add the [azure-openai-semantic-cache-store](azure-openai-semantic-cache-store-policy.md) policy. + > [!NOTE] + > When enabling semantic caching for other large language model APIs, use the [llm-semantic-cache-store](llm-semantic-cache-store-policy.md) policy instead. + Example: ```xml |
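For the lookup/store pairing configured above, the following is a minimal sketch, assuming a backend named `embeddings-backend` and purely illustrative values for the similarity threshold and cache duration:

```xml
<policies>
    <inbound>
        <base />
        <!-- Return a semantically similar cached response, if one exists, before calling the backend -->
        <azure-openai-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-backend"
            embeddings-backend-auth="system-assigned" />
    </inbound>
    <outbound>
        <base />
        <!-- Cache the backend response for 60 seconds so that similar prompts can reuse it -->
        <azure-openai-semantic-cache-store duration="60" />
    </outbound>
</policies>
```

The `llm-semantic-cache-lookup` and `llm-semantic-cache-store` policies noted above take the same shape.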
api-management | Azure Openai Token Limit Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-token-limit-policy.md | In the following example, the token limit of 5000 per minute is keyed by the cal ## Related policies * [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas)+* [llm-token-limit](llm-token-limit-policy.md) policy * [azure-openai-emit-token-metric](azure-openai-emit-token-metric-policy.md) policy [!INCLUDE [api-management-policy-ref-next-steps](../../includes/api-management-policy-ref-next-steps.md)] |
api-management | Llm Emit Token Metric Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-emit-token-metric-policy.md | + + Title: Azure API Management policy reference - llm-emit-token-metric +description: Reference for the llm-emit-token-metric policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++ Last updated : 08/08/2024++++++# Emit metrics for consumption of large language model tokens +++The `llm-emit-token-metric` policy sends metrics to Application Insights about consumption of large language model (LLM) tokens through LLM APIs. Token count metrics include: Total Tokens, Prompt Tokens, and Completion Tokens. ++> [!NOTE] +> Currently, this policy is in preview. +++++## Prerequisites ++* One or more LLM APIs must be added to your API Management instance. +* Your API Management instance must be integrated with Application Insights. For more information, see [How to integrate Azure API Management with Azure Application Insights](./api-management-howto-app-insights.md#create-a-connection-using-the-azure-portal). +* Enable Application Insights logging for your LLM APIs. +* Enable custom metrics with dimensions in Application Insights. For more information, see [Emit custom metrics](api-management-howto-app-insights.md#emit-custom-metrics). ++## Policy statement ++```xml +<llm-emit-token-metric + namespace="metric namespace"> + <dimension name="dimension name" value="dimension value" /> + ...additional dimensions... +</llm-emit-token-metric> +``` ++## Attributes ++| Attribute | Description | Required | Default value | +| | -- | | -- | +| namespace | A string. Namespace of metric. Policy expressions aren't allowed. | No | API Management | +| value | Value of metric expressed as a double. Policy expressions are allowed. | No | 1 | +++## Elements ++| Element | Description | Required | +| -- | | -- | +| dimension | Add one or more of these elements for each dimension included in the metric. | Yes | ++### dimension attributes ++| Attribute | Description | Required | Default value | +| | -- | | -- | +| name | A string or policy expression. Name of dimension. | Yes | N/A | +| value | A string or policy expression. Value of dimension. Can only be omitted if `name` matches one of the default dimensions. If so, value is provided as per dimension name. | No | N/A | ++### Default dimension names that may be used without value ++* API ID +* Operation ID +* Product ID +* User ID +* Subscription ID +* Location +* Gateway ID ++## Usage ++- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound +- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation +- [**Gateways:**](api-management-gateways-overview.md) classic, v2, consumption, self-hosted, workspace ++### Usage notes ++* This policy can be used multiple times per policy definition. +* You can configure at most 10 custom dimensions for this policy. +* Where available, values in the usage section of the response from the LLM API are used to determine token metrics. +* Certain LLM endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, token metrics are estimated. ++## Example ++The following example sends LLM token count metrics to Application Insights along with User ID, Client IP, and API ID as dimensions. 
++```xml +<policies> + <inbound> + <llm-emit-token-metric + namespace="MyLLM"> + <dimension name="User ID" /> + <dimension name="Client IP" value="@(context.Request.IpAddress)" /> + <dimension name="API ID" /> + </llm-emit-token-metric> + </inbound> + <outbound> + </outbound> +</policies> +``` ++## Related policies ++* [Logging](api-management-policies.md#logging) +* [emit-metric](emit-metric-policy.md) policy +* [azure-openai-emit-token-metric](azure-openai-emit-token-metric-policy.md) policy +* [llm-token-limit](llm-token-limit-policy.md) policy + |
api-management | Llm Semantic Cache Lookup Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-semantic-cache-lookup-policy.md | + + Title: Azure API Management policy reference - llm-semantic-cache-lookup | Microsoft Docs +description: Reference for the llm-semantic-cache-lookup policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++++ - build-2024 + Last updated : 08/07/2024++++# Get cached responses of large language model API requests +++Use the `llm-semantic-cache-lookup` policy to perform cache lookup of responses to large language model (LLM) API requests from a configured external cache, based on vector proximity of the prompt to previous requests and a specified similarity score threshold. Response caching reduces bandwidth and processing requirements imposed on the backend LLM API and lowers latency perceived by API consumers. ++> [!NOTE] +> * This policy must have a corresponding [Cache responses to large language model API requests](llm-semantic-cache-store-policy.md) policy. +> * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for Azure OpenAI APIs in Azure API Management](azure-openai-enable-semantic-caching.md). +> * Currently, this policy is in preview. +++## Policy statement ++```xml +<llm-semantic-cache-lookup + score-threshold="similarity score threshold" + embeddings-backend-id ="backend entity ID for embeddings API" + embeddings-backend-auth ="system-assigned" + ignore-system-messages="true | false" + max-message-count="count" > + <vary-by>"expression to partition caching"</vary-by> +</llm-semantic-cache-lookup> +``` ++## Attributes ++| Attribute | Description | Required | Default | +| -- | | -- | - | +| score-threshold | Similarity score threshold used to determine whether to return a cached response to a prompt. Value is a decimal between 0.0 and 1.0. [Learn more](../azure-cache-for-redis/cache-tutorial-semantic-cache.md#change-the-similarity-threshold). | Yes | N/A | +| embeddings-backend-id | [Backend](backends.md) ID for OpenAI embeddings API call. | Yes | N/A | +| embeddings-backend-auth | Authentication used for Azure OpenAI embeddings API backend. | Yes. Must be set to `system-assigned`. | N/A | +| ignore-system-messages | Boolean. If set to `true`, removes system messages from a GPT chat completion prompt before assessing cache similarity. | No | false | +| max-message-count | If specified, number of remaining dialog messages after which caching is skipped. | No | N/A | + +## Elements ++|Name|Description|Required| +|-|--|--| +|vary-by| A custom expression determined at runtime whose value partitions caching. If multiple `vary-by` elements are added, values are concatenated to create a unique combination. | No | ++## Usage +++- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound +- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation +- [**Gateways:**](api-management-gateways-overview.md) v2 ++### Usage notes ++- This policy can only be used once in a policy section. +++## Examples ++### Example with corresponding llm-semantic-cache-store policy +++## Related policies ++* [Caching](api-management-policies.md#caching) +* [llm-semantic-cache-store](llm-semantic-cache-store-policy.md) + |
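As a minimal sketch of the lookup/store pairing that the example section above refers to, assuming a backend entity named `embeddings-backend`; the threshold, message settings, and duration are illustrative placeholders, not recommendations:

```xml
<policies>
    <inbound>
        <base />
        <!-- Serve a cached response when the prompt is semantically close to a previous one -->
        <llm-semantic-cache-lookup
            score-threshold="0.05"
            embeddings-backend-id="embeddings-backend"
            embeddings-backend-auth="system-assigned"
            ignore-system-messages="true"
            max-message-count="10" />
    </inbound>
    <outbound>
        <base />
        <!-- Required counterpart policy: store the response for reuse (time-to-live in seconds) -->
        <llm-semantic-cache-store duration="60" />
    </outbound>
</policies>
```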
api-management | Llm Semantic Cache Store Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-semantic-cache-store-policy.md | + + Title: Azure API Management policy reference - llm-semantic-cache-store +description: Reference for the llm-semantic-cache-store policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++++ Last updated : 08/08/2024++++# Cache responses to large language model API requests +++The `llm-semantic-cache-store` policy caches responses to chat completion API and completion API requests to a configured external cache. Response caching reduces bandwidth and processing requirements imposed on the backend LLM API and lowers latency perceived by API consumers. ++> [!NOTE] +> * This policy must have a corresponding [Get cached responses of large language model API requests](llm-semantic-cache-lookup-policy.md) policy. +> * For prerequisites and steps to enable semantic caching, see [Enable semantic caching for Azure OpenAI APIs in Azure API Management](azure-openai-enable-semantic-caching.md). +> * Currently, this policy is in preview. +++## Policy statement ++```xml +<llm-semantic-cache-store duration="seconds"/> +``` +++## Attributes ++| Attribute | Description | Required | Default | +| -- | | -- | - | +| duration | Time-to-live of the cached entries, specified in seconds. Policy expressions are allowed. | Yes | N/A | +++## Usage ++- [**Policy sections:**](./api-management-howto-policies.md#sections) outbound +- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, product, API, operation +- [**Gateways:**](api-management-gateways-overview.md) v2 ++### Usage notes ++- This policy can only be used once in a policy section. +- If the cache lookup fails, the API call that uses the cache-related operation doesn't raise an error, and the cache operation completes successfully. ++## Examples ++### Example with corresponding llm-semantic-cache-lookup policy +++## Related policies ++* [Caching](api-management-policies.md#caching) +* [llm-semantic-cache-lookup](llm-semantic-cache-lookup-policy.md) + |
api-management | Llm Token Limit Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/llm-token-limit-policy.md | + + Title: Azure API Management policy reference - llm-token-limit +description: Reference for the llm-token-limit policy available for use in Azure API Management. Provides policy usage, settings, and examples. +++++++ Last updated : 08/08/2024++++# Limit large language model API token usage +++The `llm-token-limit` policy prevents large language model (LLM) API usage spikes on a per key basis by limiting consumption of LLM tokens to a specified number per minute. When the limit is exceeded, the caller receives a `429 Too Many Requests` response status code. ++By relying on token usage metrics returned from the LLM endpoint, the policy can accurately monitor and enforce limits in real time. The policy also enables precalculation of prompt tokens by API Management, minimizing unnecessary requests to the LLM backend if the limit is already exceeded. ++> [!NOTE] +> Currently, this policy is in preview. ++++## Policy statement ++```xml +<llm-token-limit counter-key="key value" + tokens-per-minute="number" + estimate-prompt-tokens="true | false" + retry-after-header-name="custom header name, replaces default 'Retry-After'" + retry-after-variable-name="policy expression variable name" + remaining-tokens-header-name="header name" + remaining-tokens-variable-name="policy expression variable name" + tokens-consumed-header-name="header name" + tokens-consumed-variable-name="policy expression variable name" /> +``` +## Attributes ++| Attribute | Description | Required | Default | +| -- | -- | -- | - | +| counter-key | The key to use for the token limit policy. For each key value, a single counter is used for all scopes at which the policy is configured. Policy expressions are allowed.| Yes | N/A | +| tokens-per-minute | The maximum number of tokens consumed by prompt and completion per minute. | Yes | N/A | +| estimate-prompt-tokens | Boolean value that determines whether to estimate the number of tokens required for a prompt: <br> - `true`: estimate the number of tokens based on prompt schema in API; may reduce performance. <br> - `false`: don't estimate prompt tokens. <br><br>When set to `false`, the remaining tokens per `counter-key` are calculated using the actual token usage from the response of the model. This could result in prompts being sent to the model that exceed the token limit. In such a case, this is detected in the response, and all subsequent requests are blocked by the policy until the token limit frees up again. | Yes | N/A | +| retry-after-header-name | The name of a custom response header whose value is the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | `Retry-After` | +| retry-after-variable-name | The name of a variable that stores the recommended retry interval in seconds after the specified `tokens-per-minute` is exceeded. Policy expressions aren't allowed. | No | N/A | +| remaining-tokens-header-name | The name of a response header whose value after each policy execution is the number of remaining tokens allowed for the time interval. Policy expressions aren't allowed.| No | N/A | +| remaining-tokens-variable-name | The name of a variable that after each policy execution stores the number of remaining tokens allowed for the time interval. 
Policy expressions aren't allowed.| No | N/A | +| tokens-consumed-header-name | The name of a response header whose value is the number of tokens consumed by both prompt and completion. The header is added to the response only after the response is received from the backend. Policy expressions aren't allowed.| No | N/A | +| tokens-consumed-variable-name | The name of a variable initialized to the estimated number of tokens in the prompt in the `backend` section of the pipeline if `estimate-prompt-tokens` is `true`, and to zero otherwise. The variable is updated with the reported count upon receiving the response in the `outbound` section.| No | N/A | ++## Usage ++- [**Policy sections:**](./api-management-howto-policies.md#sections) inbound +- [**Policy scopes:**](./api-management-howto-policies.md#scopes) global, workspace, product, API, operation +- [**Gateways:**](api-management-gateways-overview.md) classic, v2, self-hosted, workspace ++### Usage notes ++* This policy can be used multiple times per policy definition. +* Where available, when `estimate-prompt-tokens` is set to `false`, values in the usage section of the response from the LLM API are used to determine token usage. +* Certain LLM endpoints support streaming of responses. When `stream` is set to `true` in the API request to enable streaming, prompt tokens are always estimated, regardless of the value of the `estimate-prompt-tokens` attribute. +* [!INCLUDE [api-management-rate-limit-key-scope](../../includes/api-management-rate-limit-key-scope.md)] ++## Example ++In the following example, the token limit of 5000 per minute is keyed by the caller IP address. The policy doesn't estimate the number of tokens required for a prompt. After each policy execution, the remaining tokens allowed for that caller IP address in the time period are stored in the variable `remainingTokens`. ++```xml +<policies> + <inbound> + <base /> + <llm-token-limit + counter-key="@(context.Request.IpAddress)" + tokens-per-minute="5000" + estimate-prompt-tokens="false" + remaining-tokens-variable-name="remainingTokens" /> + </inbound> + <outbound> + <base /> + </outbound> +</policies> +``` ++## Related policies ++* [Rate limiting and quotas](api-management-policies.md#rate-limiting-and-quotas) +* [azure-openai-token-limit](azure-openai-token-limit-policy.md) policy +* [llm-emit-token-metric](llm-emit-token-metric-policy.md) policy + |
app-service | Configure Authentication Api Version | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-api-version.md | The following steps will allow you to manually migrate the application to the V2 * Microsoft Entra: `clientSecret` * Google: `googleClientSecret` * Facebook: `facebookAppSecret`- * Twitter: `twitterConsumerSecret` + * X: `twitterConsumerSecret` * Microsoft Account: `microsoftAccountClientSecret` > [!IMPORTANT] The following steps will allow you to manually migrate the application to the V2 # For Web Apps, Google example az webapp config appsettings set -g <group_name> -n <site_name> --slot-settings GOOGLE_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step> - # For Azure Functions, Twitter example + # For Azure Functions, X example az functionapp config appsettings set -g <group_name> -n <site_name> --slot-settings TWITTER_PROVIDER_AUTHENTICATION_SECRET=<value_from_previous_step> ``` The following steps will allow you to manually migrate the application to the V2 * Microsoft Entra: `clientSecretSettingName` * Google: `googleClientSecretSettingName` * Facebook: `facebookAppSecretSettingName`- * Twitter: `twitterConsumerSecretSettingName` + * X: `twitterConsumerSecretSettingName` * Microsoft Account: `microsoftAccountClientSecretSettingName` An example file after this operation might look similar to the following, in this case only configured for Microsoft Entra ID: |
app-service | Configure Authentication Customize Sign In Out | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-customize-sign-in-out.md | This article shows you how to customize user sign-ins and sign-outs while using ## Use multiple sign-in providers -The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and Twitter). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows: +The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and X). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows: First, in the **Authentication / Authorization** page in the Azure portal, configure each of the identity providers you want to enable. In the sign-in page, or the navigation bar, or any other location of your app, a <a href="/.auth/login/aad">Log in with Microsoft Entra</a> <a href="/.auth/login/facebook">Log in with Facebook</a> <a href="/.auth/login/google">Log in with Google</a>-<a href="/.auth/login/twitter">Log in with Twitter</a> +<a href="/.auth/login/x">Log in with X</a> <a href="/.auth/login/apple">Log in with Apple</a> ``` |
app-service | Configure Authentication Oauth Tokens | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-oauth-tokens.md | From your server code, the provider-specific tokens are injected into the reques | Microsoft Entra | `X-MS-TOKEN-AAD-ID-TOKEN` <br/> `X-MS-TOKEN-AAD-ACCESS-TOKEN` <br/> `X-MS-TOKEN-AAD-EXPIRES-ON` <br/> `X-MS-TOKEN-AAD-REFRESH-TOKEN` | | Facebook Token | `X-MS-TOKEN-FACEBOOK-ACCESS-TOKEN` <br/> `X-MS-TOKEN-FACEBOOK-EXPIRES-ON` | | Google | `X-MS-TOKEN-GOOGLE-ID-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-ACCESS-TOKEN` <br/> `X-MS-TOKEN-GOOGLE-EXPIRES-ON` <br/> `X-MS-TOKEN-GOOGLE-REFRESH-TOKEN` |-| Twitter | `X-MS-TOKEN-TWITTER-ACCESS-TOKEN` <br/> `X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET` | +| X | `X-MS-TOKEN-TWITTER-ACCESS-TOKEN` <br/> `X-MS-TOKEN-TWITTER-ACCESS-TOKEN-SECRET` | ||| > [!NOTE] When your provider's access token (not the [session token](#extend-session-token - **Google**: Append an `access_type=offline` query string parameter to your `/.auth/login/google` API call. For more information, see [Google Refresh Tokens](https://developers.google.com/identity/protocols/OpenIDConnect#refresh-tokens). - **Facebook**: Doesn't provide refresh tokens. Long-lived tokens expire in 60 days (see [Facebook Expiration and Extension of Access Tokens](https://developers.facebook.com/docs/facebook-login/access-tokens/expiration-and-extension)).-- **Twitter**: Access tokens don't expire (see [Twitter OAuth FAQ](https://developer.twitter.com/en/docs/authentication/faq)).+- **X**: Access tokens don't expire (see [OAuth FAQ](https://developer.x.com/en/docs/authentication/faq)). - **Microsoft**: In [https://resources.azure.com](https://resources.azure.com), do the following steps: 1. At the top of the page, select **Read/Write**. 2. In the left browser, navigate to **subscriptions** > **_\<subscription\_name>_** > **resourceGroups** > **_\<resource\_group\_name>_** > **providers** > **Microsoft.Web** > **sites** > **_\<app\_name>_** > **config** > **authsettingsV2**. |
app-service | Configure Authentication Provider Twitter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-authentication-provider-twitter.md | Title: Configure Twitter authentication -description: Learn how to configure Twitter authentication as an identity provider for your App Service or Azure Functions app. + Title: Configure X authentication +description: Learn how to configure X authentication as an identity provider for your App Service or Azure Functions app. ms.assetid: c6dc91d7-30f6-448c-9f2d-8e91104cde73 Last updated 03/29/2021-# Configure your App Service or Azure Functions app to use Twitter login +# Configure your App Service or Azure Functions app to use X login [!INCLUDE [app-service-mobile-selector-authentication](../../includes/app-service-mobile-selector-authentication.md)] -This article shows how to configure Azure App Service or Azure Functions to use Twitter as an authentication provider. +This article shows how to configure Azure App Service or Azure Functions to use X as an authentication provider. -To complete the procedure in this article, you need a Twitter account that has a verified email address and phone number. To create a new Twitter account, go to [twitter.com]. +To complete the procedure in this article, you need an X account that has a verified email address and phone number. To create a new X account, go to [x.com]. -## <a name="register"> </a>Register your application with Twitter +## <a name="register"> </a>Register your application with X -1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your Twitter app. -1. Go to the [Twitter Developers] website, sign in with your Twitter account credentials, and select **Create an app**. -1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your App Service app and append the path `/.auth/login/twitter/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/twitter/callback`. +1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your X app. +1. Go to the [X Developers] website, sign in with your X account credentials, and select **Create an app**. +1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your App Service app and append the path `/.auth/login/x/callback`. For example, `https://contoso.azurewebsites.net/.auth/login/x/callback`. 1. At the bottom of the page, type at least 100 characters in **Tell us how this app will be used**, then select **Create**. Click **Create** again in the pop-up. The application details are displayed. 1. Select the **Keys and Access Tokens** tab. To complete the procedure in this article, you need a Twitter account that has a > [!IMPORTANT] > The API secret key is an important security credential. Do not share this secret with anyone or distribute it with your app. -## <a name="secrets"> </a>Add Twitter information to your application +## <a name="secrets"> </a>Add X information to your application 1. Sign in to the [Azure portal] and navigate to your app. 1. Select **Authentication** in the menu on the left. Click **Add identity provider**. To complete the procedure in this article, you need a Twitter account that has a 1. 
Click **Add**. -You're now ready to use Twitter for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration. +You're now ready to use X for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration. ## <a name="related-content"> </a>Next steps You're now ready to use Twitter for authentication in your app. The provider wil <!-- URLs. --> -[Twitter Developers]: https://go.microsoft.com/fwlink/p/?LinkId=268300 -[twitter.com]: https://go.microsoft.com/fwlink/p/?LinkID=268287 +[X Developers]: https://go.microsoft.com/fwlink/p/?LinkId=268300 +[x.com]: https://go.microsoft.com/fwlink/p/?LinkID=268287 [Azure portal]: https://portal.azure.com/ [xamarin]: ../app-services-mobile-app-xamarin-ios-get-started-users.md |
app-service | Configure Language Dotnetcore | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-dotnetcore.md | For more information, see [Configure ASP.NET Core to work with proxy servers and ::: zone pivot="platform-linux" +## Rewrite or redirect URLs ++To rewrite or redirect URLs, use the [URL rewriting middleware in ASP.NET Core](/aspnet/core/fundamentals/url-rewriting). + ## Open SSH session in browser [!INCLUDE [Open SSH session in browser](../../includes/app-service-web-ssh-connect-builtin-no-h.md)] |
app-service | Configure Language Java Deploy Run | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/configure-language-java-deploy-run.md | To deploy .ear files, [use FTP](deploy-ftp.md). Your .ear application is deploye Don't deploy your .war or .jar using FTP. The FTP tool is designed to upload startup scripts, dependencies, or other runtime files. It's not the optimal choice for deploying web apps. +## Rewrite or redirect URLs ++To rewrite or redirect URLs, use one of the available URL rewriters, such as [UrlRewriteFilter](http://tuckey.org/urlrewrite/) (see the sketch after this entry). +++Tomcat also provides a [rewrite valve](https://tomcat.apache.org/tomcat-10.1-doc/rewrite.html). ++++JBoss also provides a [rewrite valve](https://docs.jboss.org/jbossweb/7.0.x/rewrite.html). ++ ## Logging and debugging apps Performance reports, traffic visualizations, and health checkups are available for each app through the Azure portal. For more information, see [Azure App Service diagnostics overview](overview-diagnostics.md). |
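For the UrlRewriteFilter option mentioned in the entry above, rules are declared in an XML file, conventionally `WEB-INF/urlrewrite.xml`. The following is a sketch with hypothetical paths, not values from the article; the filter itself must also be registered in `web.xml`, as described in the UrlRewriteFilter documentation:

```xml
<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE urlrewrite PUBLIC "-//tuckey.org//DTD UrlRewrite 4.0//EN"
        "http://www.tuckey.org/res/dtds/urlrewrite4.0.dtd">
<urlrewrite>
    <!-- Permanent (301) redirect from a legacy path to its new location -->
    <rule>
        <from>^/legacy/(.*)$</from>
        <to type="permanent-redirect">/new/$1</to>
    </rule>
    <!-- Server-side forward: the URL shown in the browser doesn't change -->
    <rule>
        <from>^/products/([0-9]+)$</from>
        <to type="forward">/product-detail?id=$1</to>
    </rule>
</urlrewrite>
```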
app-service | Deploy Run Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/deploy-run-package.md | +> [!NOTE] +> Run from package is not supported for Python apps. When deploying a ZIP file of your Python code, you need to set a flag to enable Azure build automation. The build automation creates the Python virtual environment for your app and installs any required packages. See [build automation](quickstart-python.md?tabs=flask%2Cmac-linux%2Cazure-cli%2Czip-deploy%2Cdeploy-instructions-azportal%2Cterminal-bash%2Cdeploy-instructions-zip-azcli#enable-build-automation) for more details. + In [Azure App Service](overview.md), you can run your apps directly from a deployment ZIP package file. This article shows how to enable this functionality in your app. All other deployment methods in App Service have something in common: your files are deployed to *D:\home\site\wwwroot* in your app (or */home/site/wwwroot* for Linux apps). Since the same directory is used by your app at runtime, it's possible for deployment to fail because of file lock conflicts, and for the app to behave unpredictably because some of the files are not yet updated. |
app-service | Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md | If the step is in progress, you get a status of `Migrating`. After you get a sta az rest --method get --uri "${ASE_ID}/configurations/networking?api-version=2021-02-01" ``` -> [!NOTE] -> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change again once the [migration step](#8-migrate-to-app-service-environment-v3-and-check-status) is complete. This bug is being addressed and will be fixed as soon as possible. Open a support case to receive the correct IP address upfront or if you have any questions or concerns about this issue. -> - ### 4. Update dependent resources with new IPs By using the new IPs, update any of your resources or networking components to ensure that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates. Get the details of your new environment by running the following command or by g az appservice ase show --name $ASE_NAME --resource-group $ASE_RG ``` -> [!NOTE] -> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change once the [migration step](#8-migrate-to-app-service-environment-v3) is complete. Check your App Service Environment v3's IP addresses and make any needed updates if there have been changes since the IP generation step. Open a support case if you have any questions or concerns about this issue or need help with the confirming the new IPs. -> - ::: zone-end ::: zone pivot="experience-azp" If migration is supported for your App Service Environment, proceed to the next Under **Get new IP addresses**, confirm that you understand the implications and select the **Start** button. This step takes about 15 minutes to complete. You can't scale or make changes to your existing App Service Environment during this time. -> [!NOTE] -> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change again once the migration step is complete. This bug is being addressed and will be fixed as soon as possible. Open a support case to receive the correct IP address upfront or if you have any questions or concerns about this issue. -> - ### 3. Update dependent resources with new IPs When the previous step finishes, the IP addresses for your new App Service Environment v3 resource appear. Use the new IPs to update any resources and networking components so that your new environment functions as intended after migration is complete. It's your responsibility to make any necessary updates. At this time, detailed migration statuses are available only when you're using t When migration is complete, you have an App Service Environment v3 resource, and all of your apps are running in your new environment. You can confirm the environment's version by checking the **Configuration** page for your App Service Environment. -> [!NOTE] -> Due to a known bug, for ELB App Service Environment migrations, the inbound IP address may change once the migration step is complete. Check your App Service Environment v3's IP addresses and make any needed updates if there have been changes since the IP generation step. Open a support case if you have any questions or concerns about this issue or need help confirming the new IPs. 
-> - If your migration includes a custom domain suffix, the domain appeared in the **Essentials** section of the **Overview** page of the portal for App Service Environment v1/v2, but it no longer appears there in App Service Environment v3. Instead, for App Service Environment v3, go to the **Custom domain suffix** page to confirm that your custom domain suffix is configured correctly. You can also remove the configuration if you no longer need it or configure one if you didn't have one previously. :::image type="content" source="./media/migration/custom-domain-suffix-app-service-environment-v3.png" alt-text="Screenshot that shows the page for custom domain suffix configuration for App Service Environment v3."::: |
app-service | Side By Side Migrate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md | The platform creates your new App Service Environment v3 in a different subnet t - The subnet must be in the same virtual network, and therefore region, as your existing App Service Environment. - If your virtual network doesn't have an available subnet, you need to create one. You might need to increase the address space of your virtual network to create a new subnet. For more information, see [Create a virtual network](../../virtual-network/quick-create-portal.md).-- The subnet must be able to communicate with the subnet your existing App Service Environment is in. Ensure there aren't network security groups or other network configurations that would prevent communication between the subnets.+- The subnet must be able to communicate in both directions with the subnet your existing App Service Environment is in. Ensure there aren't network security groups or other network configurations that would prevent communication between the subnets. - The subnet must have a single delegation of `Microsoft.Web/hostingEnvironments`. - The subnet must have enough available IP addresses to support your new App Service Environment v3. The number of IP addresses needed depends on the number of instances you want to use for your new App Service Environment v3. For more information, see [App Service Environment v3 networking](networking.md#addresses). - The subnet must not have any locks applied to it. If there are locks, they must be removed before migration. The locks can be readded if needed once migration is complete. For more information on locks and lock inheritance, see [Lock your resources to protect your infrastructure](../../azure-resource-manager/management/lock-resources.md). You receive the new inbound IP address once migration is complete but before you ### Update dependent resources with new outbound IPs -The new outbound IPs are created and given to you before you start the actual migration. The new default outbound to the internet public addresses are given so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs before completing the migration. **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. Don't move on to the next step until you've made all required updates.** You might experience downtime during and after the migration step if you have dependencies on the outbound IPs and fail to make all necessary updates. This is because once the migration starts, even though traffic still goes to your App Service Environment v2 front ends, your underlying compute is your new App Service Environment v3 in the new subnet. +The new outbound IPs are created and given to you before you start the actual migration. The new default outbound-to-internet public addresses are provided so you can adjust any external firewalls, DNS routing, network security groups, and any other resources that rely on these IPs before completing the migration. **It's your responsibility to update any and all resources that will be impacted by the IP address change associated with the new App Service Environment v3. 
Don't move on to the next step until you've made all required updates.** You might experience downtime during and after the migration step if you have dependencies on the outbound IPs and fail to make all necessary updates. This is because once the migration starts, even though traffic still goes to your App Service Environment v2 front ends, your underlying compute is your new App Service Environment v3 in the new subnet. This step is also a good time to review the [inbound and outbound network](networking.md#ports-and-network-restrictions) dependency changes when moving to App Service Environment v3 including the port change for the Azure Load Balancer health probe, which now uses port 80. az rest --method get --uri "${ASE_ID}?api-version=2022-03-01" --query properties During this step, you get a status of `CompletingMigration`. When you get a status of `MigrationCompleted`, the traffic redirection step is done and your migration is complete. +## Common sources of issues when migrating using the side-by-side migration feature ++The following are examples of common sources of issues that customers encounter when migrating using the side-by-side migration feature. You should review these areas to ensure that you don't experience downtime or service outages during or after the migration process. ++- Azure Key Vault should allow traffic from the new outbound IPs/subnet. +- The two subnets should be able to communicate with each other in both directions. Customers typically allow traffic from the old to the new subnet, but forget to allow traffic from the new to the old subnet. +- App Gateway should be updated with the new IP addresses. +- DNS records should be updated with the new IP addresses. +- If you've hardcoded IP addresses in your applications, you need to update them with the new IP addresses. +- Route tables should be updated with any new routes. + ## Pricing There's no cost to migrate your App Service Environment. However, you're billed for both your App Service Environment v2 and your new App Service Environment v3 once you start the migration process. You stop being charged for your old App Service Environment v2 when you complete the final migration step where the old environment gets deleted. You should complete your validation as quickly as possible to prevent excess charges from accumulating. For more information about App Service Environment v3 pricing, see the [pricing details](overview.md#pricing). |
app-service | Version Comparison | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/version-comparison.md | App Service Environment has three versions. App Service Environment v3 is the la > [!IMPORTANT] > App Service Environment v1 and v2 [will be retired on 31 August 2024](https://azure.microsoft.com/updates/app-service-environment-version-1-and-version-2-will-be-retired-on-31-august-2024-4/). After that date, those versions will no longer be supported and any remaining App Service Environment v1 and v2s and the applications running on them will be deleted. --There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. +> +> There's a new version of App Service Environment that is easier to use and runs on more powerful infrastructure. To learn more about the new version, start with the [Introduction to the App Service Environment](overview.md). If you're currently using App Service Environment v1 or v2, please follow the steps in [this article](upgrade-to-asev3.md) to migrate to the new version. > > As of 29 January 2024, you can no longer create new App Service Environment v1 or v2 resources using any of the available methods including ARM/Bicep templates, Azure Portal, Azure CLI, or REST API. You must [migrate to App Service Environment v3](upgrade-to-asev3.md) before 31 August 2024 to prevent resource deletion and data loss. > |
app-service | Overview Authentication Authorization | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/overview-authentication-authorization.md | Implementing a secure solution for authentication (signing-in users) and authori - Azure App Service allows you to integrate a variety of auth capabilities into your web app or API without implementing them yourself. - It's built directly into the platform and doesn't require any particular language, SDK, security expertise, or even any code to utilize.-- You can integrate with multiple login providers. For example, Microsoft Entra, Facebook, Google, Twitter.+- You can integrate with multiple login providers. For example, Microsoft Entra, Facebook, Google, X. Your app might need to support more complex scenarios such as Visual Studio integration or incremental consent. There are several different authentication solutions available to support these scenarios. To learn more, read [Identity scenarios](identity-scenarios.md). App Service uses [federated identity](https://en.wikipedia.org/wiki/Federated_id | [Microsoft Entra](/entr) | | [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [App Service Facebook login](configure-authentication-provider-facebook.md) | | [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [App Service Google login](configure-authentication-provider-google.md) |-| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [App Service Twitter login](configure-authentication-provider-twitter.md) | +| [X](https://developer.x.com/en/docs/basics/authentication) | `/.auth/login/x` | [App Service X login](configure-authentication-provider-twitter.md) | | [GitHub](https://docs.github.com/en/developers/apps/building-oauth-apps/creating-an-oauth-app) | `/.auth/login/github` | [App Service GitHub login](configure-authentication-provider-github.md) | | [Sign in with Apple](https://developer.apple.com/sign-in-with-apple/) | `/.auth/login/apple` | [App Service Sign in With Apple login (Preview)](configure-authentication-provider-apple.md) | | Any [OpenID Connect](https://openid.net/connect/) provider | `/.auth/login/<providerName>` | [App Service OpenID Connect login](configure-authentication-provider-openid-connect.md) | |
app-service | Tutorial Connect Msi Key Vault Javascript | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/tutorial-connect-msi-key-vault-javascript.md | description: Learn how to secure connectivity to back-end Azure services that do ms.devlang: javascript # ms.devlang: javascript, azurecli Previously updated : 10/26/2021 Last updated : 08/02/2024 Clone the sample repository locally and deploy the sample application to App Ser # Clone and prepare sample application git clone https://github.com/Azure-Samples/app-service-language-detector.git cd app-service-language-detector/javascript-zip default.zip *.* +zip -r default.zip . # Save app name as variable for convenience appName=<app-name> az appservice plan create --resource-group $groupName --name $appName --sku FREE --location $region --is-linux-az webapp create --resource-group $groupName --plan $appName --name $appName --runtime "node|14-lts" +az webapp create --resource-group $groupName --plan $appName --name $appName --runtime "node:18-lts" az webapp config appsettings set --resource-group $groupName --name $appName --settings SCM_DO_BUILD_DURING_DEPLOYMENT=true-az webapp deployment source config-zip --resource-group $groupName --name $appName --src ./default.zip +az webapp deploy --resource-group $groupName --name $appName --src-path ./default.zip ``` The preceding commands: * Create a linux app service plan-* Create a web app for Node.js 14 LTS +* Create a web app for Node.js 18 LTS * Configure the web app to install the npm packages on deployment * Upload the zip file, and install the npm packages |
application-gateway | Application Gateway Diagnostics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-diagnostics.md | Application Gateway logs provide detailed information for events related to a re Logs are available for all resources of Application Gateway; however, to consume them, you must enable their collection in a storage location of your choice. Logging in Azure Application Gateway is enabled by the Azure Monitor service. We recommend using the Log Analytics workspace as you can readily use its predefined queries and set alerts based on specific log conditions. -## <a name="diagnostic-logging"></a>Types of Diagnostic logs +## <a name="firewall-log"></a><a name="diagnostic-logging"></a>Types of Resource logs -You can use different types of logs in Azure to manage and troubleshoot application gateways. You can learn more about these types below: +You can use different types of logs in Azure to manage and troubleshoot application gateways. -* **Activity log**: You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default, and you can view them in the Azure portal. -* **Access log**: You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property. -* **Performance log**: You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. -* **Firewall log**: You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds. +- [Activity log](monitor-application-gateway-reference.md#activity-log) +- [Application Gateway Access Log](monitor-application-gateway-reference.md#resource-logs) +- [Application Gateway Performance Log](monitor-application-gateway-reference.md#resource-logs) (available only for the v1 SKU) +- [Application Gateway Firewall Log](monitor-application-gateway-reference.md#resource-logs) > [!NOTE] > Logs are available only for resources deployed in the Azure Resource Manager deployment model. You can't use logs for resources in the classic deployment model. For a better understanding of the two models, see the [Understanding Resource Manager deployment and classic deployment](../azure-resource-manager/management/deployment-models.md) article. -## Storage locations +## Examples of optimizing access logs using Workspace Transformations -You have the following options to store the logs in your preferred location. 
--**Log Analytic workspace**: This option allows you to readily use the predefined queries, visualizations, and set alerts based on specific log conditions. The tables used by resource logs in log analytics workspace depend on what type of collection the resource is using: - -* **Azure diagnostics**: Data is written to the [Azure Diagnostics table](/azure/azure-monitor/reference/tables/azurediagnostics). Azure Diagnostics table is shared between multiple resource type, with each of them adding their own custom fields. When number of custom fields ingested to Azure Diagnostics table exceeds 500, new fields aren't added as top level but added to "AdditionalFields" field as dynamic key value pairs. --* **Resource-specific(recommended)**: Data is written to dedicated tables for each category of the resource. In resource specific mode, each log category selected in the diagnostic setting is assigned its own table within the chosen workspace. This has several benefits, including: - - Easier data manipulation in log queries - - Improved discoverability of schemas and their structures - - Enhanced performance in terms of ingestion latency and query times - - The ability to assign [Azure role-based access control rights to specific tables](../azure-monitor/logs/manage-access.md?tabs=portal#set-table-level-read-access) -- For Application Gateway, resource specific mode creates three tables: - * [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs) - * [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs) - * [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs) --> [!NOTE] -> The resource specific option is currently available in all **clouds**.<br> -> Existing users can continue using Azure Diagnostics, or can opt for dedicated tables by switching the toggle in Diagnostic settings to **Resource specific**, or to **Dedicated** in API destination. Dual mode isn't possible. The data in all the logs can either flow to Azure Diagnostics, or to dedicated tables. However, you can have multiple diagnostic settings where one data flow is to azure diagnostic and another is using resource specific at the same time. -- **Selecting the destination table in Log analytics :** All Azure services eventually use the resource-specific tables. As part of this transition, you can select Azure diagnostic or resource specific table in the diagnostic setting using a toggle button. The toggle is set to **Resource specific** by default and in this mode, logs for new selected categories are sent to dedicated tables in Log Analytics, while existing streams remain unchanged. See the following example. -- [![Screenshot of the resource ID for application gateway in the portal.](./media/application-gateway-diagnostics/resource-specific.png)](./media/application-gateway-diagnostics/resource-specific.png#lightbox) - -**Workspace Transformations:** Opting for the Resource specific option allows you to filter and modify your data before it’s ingested with [workspace transformations](../azure-monitor/essentials/data-collection-transformations-workspace.md). This provides granular control, allowing you to focus on the most relevant information from the logs there by reducing data costs and enhancing security. -For detailed instructions on setting up workspace transformations, please refer:[Tutorial: Add a workspace transformation to Azure Monitor Logs by using the Azure portal](../azure-monitor/logs/tutorial-workspace-transformations-portal.md). 
-- ### Examples of optimizing access logs using Workspace Transformations - **Example 1: Selective Projection of Columns**: Imagine you have application gateway access logs with 20 columns, but you’re interested in analyzing data from only 6 specific columns. By using workspace transformation, you can project these 6 columns into your workspace, effectively excluding the other 14 columns. Even though the original data from those excluded columns won’t be stored, empty placeholders for them still appear in the Logs blade. This approach optimizes storage and ensures that only relevant data is retained for analysis. > [!NOTE]- > Within the Logs blade, selecting the **Try New Log Analytics** option gives greater control over the columns displayed in your user interface. + > Within the Logs blade, selecting the **Try New Log Analytics** option gives greater control over the columns displayed in your user interface. **Example 2: Focusing on Specific Status Codes**: When analyzing access logs, instead of processing all log entries, you can write a query to retrieve only rows with specific HTTP status codes (such as 4xx and 5xx). Since most requests ideally fall under the 2xx and 3xx categories (representing successful responses), focusing on the problematic status codes narrows down the data set. This targeted approach allows you to extract the most relevant and actionable information, making it both beneficial and cost-effective. **Recommended transition strategy to move from Azure diagnostic to resource specific table:**-1. Assess current data retention: Determine the duration for which data is presently retained in the Azure diagnostics table (for example: assume the diagnostics table retains data for 15 days). -2. Establish resource-specific retention: Implement a new Diagnostic setting with resource specific table. -3. Parallel data collection: For a temporary period, collect data concurrently in both the Azure Diagnostics and the resource-specific settings. -4. Confirm data accuracy: Verify that data collection is accurate and consistent in both settings. -5. Remove Azure diagnostics setting: Remove the Azure Diagnostic setting to prevent duplicate data collection. ++1. Assess current data retention: Determine the duration for which data is presently retained in the Azure diagnostics table (for example: assume the diagnostics table retains data for 15 days). +2. Establish resource-specific retention: Implement a new Diagnostic setting with resource specific table. +3. Parallel data collection: For a temporary period, collect data concurrently in both the Azure Diagnostics and the resource-specific settings. +4. Confirm data accuracy: Verify that data collection is accurate and consistent in both settings. +5. Remove Azure diagnostics setting: Remove the Azure Diagnostic setting to prevent duplicate data collection. Other storage locations:+ - **Azure Storage account**: Storage accounts are best used for logs when logs are stored for a longer duration and reviewed when needed. - **Azure Event Hubs**: Event hubs are a great option for integrating with other security information and event management (SIEM) tools to get alerts on your resources. - **Azure Monitor partner integrations**. Activity logging is automatically enabled for every Resource Manager resource. Y :::image type="content" source="media/application-gateway-diagnostics/diagnostics1.png" alt-text="Screenshot of app gateway properties" lightbox="media/application-gateway-diagnostics/diagnostics1.png"::: - 3. 
Enable diagnostic logging by using the following PowerShell cmdlet: ```powershell Activity logging is automatically enabled for every Resource Manager resource. Y * Performance log * Firewall log -2. To start collecting data, select **Turn on diagnostics**. +1. To start collecting data, select **Turn on diagnostics**. ![Turning on diagnostics][1] -3. The **Diagnostics settings** page provides the settings for the diagnostic logs. In this example, Log Analytics stores the logs. You can also use event hubs and a storage account to save the diagnostic logs. +1. The **Diagnostics settings** page provides the settings for the diagnostic logs. In this example, Log Analytics stores the logs. You can also use event hubs and a storage account to save the diagnostic logs. ![Starting the configuration process][2] -5. Type a name for the settings, confirm the settings, and select **Save**. +1. Type a name for the settings, confirm the settings, and select **Save**. -## Activity log --Azure generates the activity log by default. The logs are preserved for 90 days in the Azure event logs store. Learn more about these logs by reading the [View events and activity log](../azure-monitor/essentials/activity-log.md) article. --## Access log --The access log is generated only if you've enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format as shown below. --### For Application Gateway and WAF v2 SKU --> [!NOTE] -> * For TLS/TCP proxy related information, visit [data reference](monitor-application-gateway-reference.md#tlstcp-proxy-logs). -> * Some columns from the shared AzureDiagnostics table are still being ported to the dedicated tables. Therefore, the columns with Mutual Authentication details are currently available only through the [AzureDiagnostics table](#storage-locations). -> * Access logs with clientIP value 127.0.0.1 originate from an internal security process running on the application gateway instances. You can safely ignore these log entries. ---|Value |Description | -||| -|instanceId | Application Gateway instance that served the request. | -|clientIP | IP of the immediate client of Application Gateway. If another proxy fronts your application gateway, this displays the IP of that fronting proxy. | -|httpMethod | HTTP method used by the request. | -|requestUri | URI of the received request. | -|UserAgent | User agent from the HTTP request header. | -|httpStatus | HTTP status code returned to the client from Application Gateway. | -|httpVersion | HTTP version of the request. | -|receivedBytes | Size of packet received, in bytes. | -|sentBytes| Size of packet sent, in bytes.| -|clientResponseTime| Time difference (in seconds) between the first byte and the last byte application gateway sent to the client. Helpful in gauging Application Gateway's processing time for responses or slow clients. | -|timeTaken| Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and its last-byte sent in the response to the client. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | -|WAFEvaluationTime| Length of time (in **seconds**) that it takes for the request to be processed by the WAF. 
| -|WAFMode| Value can be either Detection or Prevention | -|transactionId| Unique identifier to correlate the request received from the client | -|sslEnabled| Whether communication to the backend pools used TLS. Valid values are on and off.| -|sslCipher| Cipher suite being used for TLS communication (if TLS is enabled).| -|sslProtocol| SSL/TLS protocol being used (if TLS is enabled).| -|sslClientVerify | Shows the result of client certificate verification as SUCCESS or FAILED. Failed status will include error information.| -|sslClientCertificateFingerprint|The SHA1 thumbprint of the client certificate for an established TLS connection.| -|sslClientCertificateIssuerName|The issuer DN string of the client certificate for an established TLS connection.| -|serverRouted| The backend server that application gateway routes the request to.| -|serverStatus| HTTP status code of the backend server.| -|serverResponseLatency| Latency of the response (in **seconds**) from the backend server.| -|host| Address listed in the host header of the request. If rewritten using header rewrite, this field contains the updated host name| -|originalRequestUriWithArgs| This field contains the original request URL | -|upstreamSourcePort| The source port used by Application Gateway when initiating a connection to the backend target| -|originalHost| This field contains the original request host name| -|error_info|The reason for the 4xx and 5xx error. Displays an error code for a failed request. More details in [Error code information.](./application-gateway-diagnostics.md#error-code-information) | -|contentType|The type of content or data that is being processed or delivered by the application gateway ---```json -{ - "timeStamp": "2021-10-14T22:17:11+00:00", - "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", - "listenerName": "HTTP-Listener", - "ruleName": "Storage-Static-Rule", - "backendPoolName": "StaticStorageAccount", - "backendSettingName": "StorageStatic-HTTPS-Setting", - "operationName": "ApplicationGatewayAccess", - "category": "ApplicationGatewayAccessLog", - "properties": { - "instanceId": "appgw_2", - "clientIP": "185.42.129.24", - "clientPort": 45057, - "httpMethod": "GET", - "originalRequestUriWithArgs": "\/", - "requestUri": "\/", - "requestQuery": "", - "userAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/52.0.2743.116 Safari\/537.36", - "httpStatus": 200, - "httpVersion": "HTTP\/1.1", - "receivedBytes": 184, - "sentBytes": 466, - "clientResponseTime": 0, - "timeTaken": 0.034, - "WAFEvaluationTime": "0.000", - "WAFMode": "Detection", - "transactionId": "592d1649f75a8d480a3c4dc6a975309d", - "sslEnabled": "on", - "sslCipher": "ECDHE-RSA-AES256-GCM-SHA384", - "sslProtocol": "TLSv1.2", - "sslClientVerify": "NONE", - "sslClientCertificateFingerprint": "", - "sslClientCertificateIssuerName": "", - "serverRouted": "52.239.221.65:443", - "serverStatus": "200", - "serverResponseLatency": "0.028", - "upstreamSourcePort": "21564", - "originalHost": "20.110.30.194", - "host": "20.110.30.194", - "error_info":"ERRORINFO_NO_ERROR", - "contentType":"application/json" - } -} -``` --### For Application Gateway Standard and WAF SKU (v1) --|Value |Description | -||| -|instanceId | Application Gateway instance that served the request. | -|clientIP | Originating IP for the request. | -|clientPort | Originating port for the request. 
| -|httpMethod | HTTP method used by the request. | -|requestUri | URI of the received request. | -|RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. | -|UserAgent | User agent from the HTTP request header. | -|httpStatus | HTTP status code returned to the client from Application Gateway. | -|httpVersion | HTTP version of the request. | -|receivedBytes | Size of packet received, in bytes. | -|sentBytes| Size of packet sent, in bytes.| -|timeTaken| Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | -|sslEnabled| Whether communication to the backend pools used TLS/SSL. Valid values are on and off.| -|host| The hostname for which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that.| -|originalHost| The hostname for which the request was received by the Application Gateway from the client.| --```json -{ - "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", - "operationName": "ApplicationGatewayAccess", - "time": "2017-04-26T19:27:38Z", - "category": "ApplicationGatewayAccessLog", - "properties": { - "instanceId": "ApplicationGatewayRole_IN_0", - "clientIP": "191.96.249.97", - "clientPort": 46886, - "httpMethod": "GET", - "requestUri": "/phpmyadmin/scripts/setup.php", - "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404", - "userAgent": "-", - "httpStatus": 404, - "httpVersion": "HTTP/1.0", - "receivedBytes": 65, - "sentBytes": 553, - "timeTaken": 205, - "sslEnabled": "off", - "host": "www.contoso.com", - "originalHost": "www.contoso.com" - } -} -``` -### Error code Information -If the application gateway can't complete the request, it stores one of the following reason codes in the error_info field of the access log. ---|4XX Errors | (The 4xx error codes indicate that there was an issue with the client's request, and the Application Gateway can't fulfill it.) | -||| -| ERRORINFO_INVALID_METHOD| The client has sent a request which is non-RFC compliant. Possible reasons: client using HTTP method not supported by server, misspelled method, incompatible HTTP protocol version etc.| - | ERRORINFO_INVALID_REQUEST | The server can't fulfill the request because of incorrect syntax.| - | ERRORINFO_INVALID_VERSION| The application gateway received a request with an invalid or unsupported HTTP version.| - | ERRORINFO_INVALID_09_METHOD| The client sent request with HTTP Protocol version 0.9.| - | ERRORINFO_INVALID_HOST |The value provided in the "Host" header is either missing, improperly formatted, or doesn't match the expected host value. 
For example, when there's no Basic listener, and none of the hostnames of Multisite listeners match with the host.| - | ERRORINFO_INVALID_CONTENT_LENGTH | The length of the content specified by the client in the content-Length header doesn't match the actual length of the content in the request.| - | ERRORINFO_INVALID_METHOD_TRACE | The client sent HTTP TRACE method, which isn't supported by the application gateway.| - | ERRORINFO_CLIENT_CLOSED_REQUEST | The client closed the connection with the application gateway before the idle timeout period elapsed. Check whether the client timeout period is greater than the [idle timeout period](./application-gateway-faq.yml#what-are-the-settings-for-keep-alive-timeout-and-tcp-idle-timeout) for the application gateway.| - | ERRORINFO_REQUEST_URI_INVALID |Indicates issue with the Uniform Resource Identifier (URI) provided in the client's request. | - | ERRORINFO_HTTP_NO_HOST_HEADER | Client sent a request without Host header. | - | ERRORINFO_HTTP_TO_HTTPS_PORT |The client sent a plain HTTP request to an HTTPS port. | - | ERRORINFO_HTTPS_NO_CERT | Indicates client isn't sending a valid and properly configured TLS certificate during Mutual TLS authentication. | ---|5XX Errors | Description | -||| - | ERRORINFO_UPSTREAM_NO_LIVE | The application gateway is unable to find any active or reachable backend servers to handle incoming requests | - | ERRORINFO_UPSTREAM_CLOSED_CONNECTION | The backend server closed the connection unexpectedly or before the request was fully processed. This could happen due to backend server reaching its limits, crashing etc.| - | ERRORINFO_UPSTREAM_TIMED_OUT | The established TCP connection with the server was closed as the connection took longer than the configured timeout value. | --## Performance log --The performance log is generated only if you have enabled it on each Application Gateway instance, as detailed in the preceding steps. The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It's available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. The following data is logged: ---|Value |Description | -||| -|instanceId | Application Gateway instance for which performance data is being generated. For a multiple-instance application gateway, there's one row per instance. | -|healthyHostCount | Number of healthy hosts in the backend pool. | -|unHealthyHostCount | Number of unhealthy hosts in the backend pool. | -|requestCount | Number of requests served. | -|latency | Average latency (in milliseconds) of requests from the instance to the back end that serves the requests. 
| -|failedRequestCount| Number of failed requests.| -|throughput| Average throughput since the last log, measured in bytes per second.| --```json -{ - "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", - "operationName": "ApplicationGatewayPerformance", - "time": "2016-04-09T00:00:00Z", - "category": "ApplicationGatewayPerformanceLog", - "properties": - { - "instanceId":"ApplicationGatewayRole_IN_1", - "healthyHostCount":"4", - "unHealthyHostCount":"0", - "requestCount":"185", - "latency":"0", - "failedRequestCount":"0", - "throughput":"119427" - } -} -``` --> [!NOTE] -> Latency is calculated from the time when the first byte of the HTTP request is received to the time when the last byte of the HTTP response is sent. It's the sum of the Application Gateway processing time plus the network cost to the back end, plus the time that the back end takes to process the request. --## Firewall log --The firewall log is generated only if you have enabled it for each application gateway, as detailed in the preceding steps. This log also requires that the web application firewall is configured on an application gateway. The data is stored in the storage account that you specified when you enabled the logging. The following data is logged: ---|Value |Description | -||| -|instanceId | Application Gateway instance for which firewall data is being generated. For a multiple-instance application gateway, there's one row per instance. | -|clientIp | Originating IP for the request. | -|clientPort | Originating port for the request. | -|requestUri | URL of the received request. | -|ruleSetType | Rule set type. The available value is OWASP. | -|ruleSetVersion | Rule set version used. Available values are 2.2.9 and 3.0. | -|ruleId | Rule ID of the triggering event. | -|message | User-friendly message for the triggering event. More details are provided in the details section. | -|action | Action taken on the request. Available values are Blocked and Allowed (for custom rules), Matched (when a rule matches a part of the request), and Detected and Blocked (these are both for mandatory rules, depending on if the WAF is in detection or prevention mode). | -|site | Site for which the log was generated. Currently, only Global is listed because rules are global.| -|details | Details of the triggering event. | -|details.message | Description of the rule. | -|details.data | Specific data found in request that matched the rule. | -|details.file | Configuration file that contained the rule. | -|details.line | Line number in the configuration file that triggered the event. | -|hostname | Hostname or IP address of the Application Gateway. | -|transactionId | Unique ID for a given transaction which helps group multiple rule violations that occurred within the same request. | --```json -{ - "timeStamp": "2021-10-14T22:17:11+00:00", - "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", - "operationName": "ApplicationGatewayFirewall", - "category": "ApplicationGatewayFirewallLog", - "properties": { - "instanceId": "appgw_2", - "clientIp": "185.42.129.24", - "clientPort": "", - "requestUri": "\/", - "ruleSetType": "OWASP_CRS", - "ruleSetVersion": "3.0.0", - "ruleId": "920350", - "message": "Host header is a numeric IP address", - "action": "Matched", - "site": "Global", - "details": { - "message": "Warning. 
Pattern match \\\"^[\\\\d.:]+$\\\" at REQUEST_HEADERS:Host .... ", - "data": "20.110.30.194:80", - "file": "rules\/REQUEST-920-PROTOCOL-ENFORCEMENT.conf", - "line": "791" - }, - "hostname": "20.110.30.194:80", - "transactionId": "592d1649f75a8d480a3c4dc6a975309d", - "policyId": "default", - "policyScope": "Global", - "policyScopeName": "Global" - } -} -``` --## View and analyze the activity log --You can view and analyze activity log data by using any of the following methods: --* **Azure tools**: Retrieve information from the activity log through Azure PowerShell, the Azure CLI, the Azure REST API, or the Azure portal. Step-by-step instructions for each method are detailed in the [Activity operations with Resource Manager](../azure-monitor/essentials/activity-log.md) article. -* **Power BI**: If you don't already have a [Power BI](https://powerbi.microsoft.com/pricing) account, you can try it for free. By using the [Power BI template apps](/power-bi/service-template-apps-overview), you can analyze your data. +To view and analyze activity log data, see [Analyze monitoring data](monitor-application-gateway.md#azure-monitor-tools). ## View and analyze the access, performance, and firewall logs -[Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics) can collect the counter and event log files from your Blob storage account. It includes visualizations and powerful search capabilities to analyze your logs. +[Azure Monitor logs](/previous-versions/azure/azure-monitor/insights/azure-networking-analytics) can collect the counter and event log files from your Blob storage account. For more information, see [Analyze monitoring data](monitor-application-gateway.md#azure-monitor-tools). You can also connect to your storage account and retrieve the JSON log entries for access and performance logs. After you download the JSON files, you can convert them to CSV and view them in Excel, Power BI, or any other data-visualization tool. > [!TIP] > If you're familiar with Visual Studio and basic concepts of changing values for constants and variables in C#, you can use the [log converter tools](https://github.com/Azure-Samples/networking-dotnet-log-converter) available from GitHub.-> -> --### Analyzing Access logs through GoAccess --We have published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway Access Logs. GoAccess provides valuable HTTP traffic statistics such as Unique Visitors, Requested Files, Hosts, Operating Systems, Browsers, HTTP Status codes and more. For more details, please see the [Readme file in the Resource Manager template folder in GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/application-gateway-logviewer-goaccess). ## Next steps |
application-gateway | Application Gateway Metrics | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/application-gateway-metrics.md | +<a name="metrics-supported-by-application-gateway-v1-sku"></a> + ## Metrics supported by Application Gateway V2 SKU > [!NOTE] Application Gateway publishes data points to [Azure Monitor](../azure-monitor/ov ### Timing metrics -Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds. +Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds. :::image type="content" source="./media/application-gateway-metrics/application-gateway-metrics.png" alt-text="Diagram of timing metrics for the Application Gateway." border="false"::: > [!NOTE] >- > If there are more than one listener in the Application Gateway, then always filter by *Listener* dimension while comparing different latency metrics in order to get meaningful inference. -- **Backend connect time**-- *Aggregation type:Avg/Max* -- Time spent establishing a connection with the backend application. -- This includes the network latency as well as the time taken by the backend server's TCP stack to establish new connections. For TLS, it also includes the time spent on handshake. -- **Backend first byte response time**-- *Aggregation type:Avg/Max* -- Time interval between start of establishing a connection to backend server and receiving the first byte of the response header. -- This approximates the sum of *Backend connect time*, time taken by the request to reach the backend from Application Gateway, time taken by backend application to respond (the time the server took to generate content, potentially fetch database queries), and the time taken by first byte of the response to reach the Application Gateway from the backend. -- **Backend last byte response time**-- *Aggregation type:Avg/Max* -- Time interval between start of establishing a connection to backend server and receiving the last byte of the response body. -- This approximates the sum of *Backend first byte response time* and data transfer time (this number may vary greatly depending on the size of objects requested and the latency of the server network). -- **Application gateway total time**-- *Aggregation type:Avg/Max* -- This metric captures either the Average/Max time taken for a request to be received, processed and its response to be sent. -- This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. This includes the processing time taken by Application Gateway, the *Backend last byte response time*, and the time taken by Application Gateway to send all the response. -- **Client RTT**-- *Aggregation type:Avg/Max* -- This metric captures the Average/Max round trip time between clients and Application Gateway. +> If there is more than one listener in the Application Gateway, then always filter by *Listener* dimension while comparing different latency metrics in order to get meaningful inference. -These metrics can be used to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size. 
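To work with these timing metrics outside the portal, a hedged sketch follows using Az.Monitor's `Get-AzMetric`. The resource ID is a placeholder, and the metric names are assumptions based on the Azure Monitor metrics reference for Application Gateway; confirm them with `Get-AzMetricDefinition` before relying on them.

```powershell
# A hedged sketch: pull three v2 timing metrics for the last hour at
# 5-minute grain. The resource ID is a placeholder; verify metric names
# with Get-AzMetricDefinition -ResourceId $appGwId.
$appGwId = "/subscriptions/<sub-id>/resourceGroups/<rg>/providers/Microsoft.Network/applicationGateways/<appgw-name>"

Get-AzMetric -ResourceId $appGwId `
    -MetricName "BackendConnectTime", "BackendFirstByteResponseTime", "BackendLastByteResponseTime" `
    -StartTime (Get-Date).AddHours(-1) -EndTime (Get-Date) `
    -TimeGrain 00:05:00 -AggregationType Average |
    ForEach-Object {
        $_.Name.LocalizedValue                    # metric display name
        $_.Data | Format-Table TimeStamp, Average # per-interval averages
    }
```

Comparing *Backend connect time* against *Backend first byte response time* over the same window is one way to apply the inference pattern described in the following paragraphs.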
+You can use timing metrics to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size. For more information, see [Timing metrics](monitor-application-gateway-reference.md#timing-metrics-for-application-gateway-v2-sku). -For example, If there's a spike in *Backend first byte response time* trend but the *Backend connect time* trend is stable, then it can be inferred that the Application gateway to backend latency and the time taken to establish the connection is stable, and the spike is caused due to an increase in the response time of backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, then it can be deduced that either the network between Application Gateway and backend server or the backend server TCP stack has saturated. +For example, if there's a spike in the *Backend first byte response time* trend but the *Backend connect time* trend is stable, you can infer that the application gateway to backend latency and the time taken to establish the connection are stable. The spike is caused by an increase in the response time of the backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, you can deduce that either the network between Application Gateway and backend server or the backend server TCP stack has saturated. -If you notice a spike in *Backend last byte response time* but the *Backend first byte response time* is stable, you can deduce that the spike is because of a larger file being requested. +If you notice a spike in *Backend last byte response time* but the *Backend first byte response time* is stable, you can deduce that the spike is because of a larger file being requested. Similarly, if the *Application gateway total time* has a spike but the *Backend last byte response time* is stable, then it can either be a sign of performance bottleneck at the Application Gateway or a bottleneck in the network between client and Application Gateway. Additionally, if the *client RTT* also has a corresponding spike, then it indicates that the degradation is because of the network between client and Application Gateway. ### Application Gateway metrics -For Application Gateway, the following metrics are available: -- **Bytes received**-- Count of bytes received by the Application Gateway from the clients. (Reported based on the request "content size" only. It doesn't account for TLS negotiations overhead, TCP/IP packet headers, or retransmissions, and hence doesn't represent the complete bandwidth utilization.) -- **Bytes sent**-- Count of bytes sent by the Application Gateway to the clients. (Reported based on the response "content size" only. It doesn't account for TCP/IP packet headers or retransmissions, and hence doesn't represent the complete bandwidth utilization.) -- **Client TLS protocol**-- Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the dimension TLS Protocol. This metric includes requests served by the gateway, such as redirects. -- **Current capacity units**-- Count of capacity units consumed to load balance the traffic. 
There are three determinants to capacity unit - compute unit, persistent connections and throughput. Each capacity unit is composed of at most: 1 compute unit, or 2500 persistent connections, or 2.22-Mbps throughput. --- **Current compute units**-- Count of processor capacity consumed. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing. --- **Current connections**-- The total number of concurrent connections active from clients to the Application Gateway - -- **Estimated Billed Capacity units**-- With the v2 SKU, the pricing model is driven by consumption. Capacity units measure consumption-based cost that is charged in addition to the fixed cost. *Estimated Billed Capacity units* indicate the number of capacity units using which the billing is estimated. This is calculated as the greater value between *Current capacity units* (capacity units required to load balance the traffic) and *Fixed billable capacity units* (minimum capacity units kept provisioned). --- **Failed Requests**-- Number of requests that Application Gateway has served with 5xx server error codes. This includes the 5xx codes that are generated from the Application Gateway as well as the 5xx codes that are generated from the backend. The request count can be further filtered to show count per each/specific backend pool-http setting combination. - -- **Fixed Billable Capacity Units**-- The minimum number of capacity units kept provisioned as per the *Minimum scale units* setting (one instance translates to 10 capacity units) in the Application Gateway configuration. - -- The average number of new TCP connections per second established from clients to the Application Gateway and from the Application Gateway to the backend members. ---- **Response Status**-- HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories. --- **Throughput**-- Number of bytes per second the Application Gateway has served. (Reported based on the "content size" only. It doesn't account for TLS negotiations overhead, TCP/IP packet headers, or retransmissions, and hence doesn't represent the complete bandwidth utilization.) --- **Total Requests**-- Count of successful requests that Application Gateway has served by the backend pool targets. Pages served directly by the gateway, such as redirects, are not counted and should be found in the Client TLS protocol metric. Total requests count metric can be further filtered to show count per each/specific backend pool-http setting combination. --### Backend metrics --For Application Gateway, the following metrics are available: --- **Backend response status**-- Count of HTTP response status codes returned by the backends. This doesn't include any response codes generated by the Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories. --- **Healthy host count**-- The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool. --- **Unhealthy host count**-- The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool. 
- -- **Requests per minute per Healthy Host**-- The average number of requests received by each healthy member in a backend pool in a minute. You must specify the backend pool using the *BackendPool HttpSettings* dimension. - ### Web Application Firewall (WAF) metrics --For information on WAF Monitoring, see [WAF v2 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics) --## Metrics supported by Application Gateway V1 SKU --### Application Gateway metrics --For Application Gateway, the following metrics are available: -- **CPU Utilization**-- Displays the utilization of the CPUs allocated to the Application Gateway. Under normal conditions, CPU usage should not regularly exceed 90%, as this may cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. You can indirectly control or improve CPU utilization by modifying the configuration of the Application Gateway by increasing the instance count or by moving to a larger SKU size, or doing both. -- **Current connections**-- Count of current connections established with Application Gateway -- **Failed Requests**-- Number of requests that failed due to connection issues. This count includes requests that failed due to exceeding the "Request time-out" HTTP setting and requests that failed due to connection issues between Application gateway and backend. This count doesn't include failures due to no healthy backend being available. 4xx and 5xx responses from the backend are also not considered as part of this metric. -- **Response Status**-- HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories. -- **Throughput**-- Number of bytes per second the Application Gateway has served -- **Total Requests**-- Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination. -+For Application Gateway, there are several metrics available. For a list, see [Application Gateway metrics](monitor-application-gateway-reference.md#metrics-for-application-gateway-v2-sku). ### Backend metrics -For Application Gateway, the following metrics are available: -- **Healthy host count**-- The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool. -- **Unhealthy host count**-- The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool. +For Application Gateway, there are several backend metrics available. For a list, see [Backend metrics](monitor-application-gateway-reference.md#backend-metrics-for-application-gateway-v2-sku). ### Web Application Firewall (WAF) metrics -For information on WAF monitoring, see [WAF v1 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics) +For information on WAF monitoring, see [WAF v2 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics) and [WAF v1 Metrics](../../articles/web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics). 
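The healthy and unhealthy host counts described above can also be cross-checked on demand from PowerShell. A minimal sketch follows, assuming the Az.Network module and placeholder gateway and resource group names; the property names follow the backend health object the cmdlet returns, so adjust if your module version differs.

```powershell
# A minimal sketch: ask the gateway for its current backend health, which
# complements the healthy/unhealthy host count metrics. Names are placeholders.
$health = Get-AzApplicationGatewayBackendHealth `
    -Name "myAppGateway" -ResourceGroupName "myResourceGroupAG"

# Walk the pools and print each backend server with its probe verdict.
foreach ($pool in $health.BackendAddressPools) {
    foreach ($settings in $pool.BackendHttpSettingsCollection) {
        foreach ($server in $settings.Servers) {
            "{0} -> {1}" -f $server.Address, $server.Health
        }
    }
}
```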
## Metrics visualization Browse to an application gateway, under **Monitoring** select **Metrics**. To vi In the following image, you see an example with three metrics displayed for the last 30 minutes: To see a current list of metrics, see [Supported metrics with Azure Monitor](../azure-monitor/essentials/metrics-supported.md). |
application-gateway | Configure Alerts With Templates | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-alerts-with-templates.md | -## Configure alerts using ARM templates --You can use ARM templates to quickly configure important alerts for Application Gateway. Before you begin, consider the following details: --- Azure Monitor alert rules are charged based on the type and number of signals it monitors. See [Azure Monitor pricing](https://azure.microsoft.com/pricing/details/monitor/) before deploying for pricing information. Or you can see the estimated cost in the portal after deployment:- :::image type="content" source="media/configure-alerts-with-templates/alert-pricing.png" alt-text="Image showing application gateway pricing details"::: -- You need to create an Azure Monitor action group in advance and then use the Resource ID for as many alerts as you need. Azure Monitor alerts use this action group to notify users that an alert has been triggered. For more information, see [Create and manage action groups in the Azure portal](../azure-monitor/alerts/action-groups.md).->[!TIP] -> You can manually form the ResourceID for your Action Group by following these steps. -> 1. Select Azure Monitor in your Azure portal. -> 1. Open the Alerts page and select Action Groups. -> 1. Select the action group to view its details. -> 1. Use the Resource Group Name, Action Group Name and Subscription Info here to form the ResourceID for the action group as shown here: <br> -> `/subscriptions/<subscription-id-from-your-account>/resourcegroups/<resource-group-name>/providers/microsoft.insights/actiongroups/<action-group-name>` -- The templates for alerts described here are defined generically for settings like Severity, Aggregation Granularity, Frequency of Evaluation, Condition Type, and so on. You can modify the settings after deployment to meet your needs. See [detailed information about configuring a metric alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md) for more information.-- The templates for metric-based alerts use the **Dynamic threshold** value with [high sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#known-issues-with-dynamic-threshold-sensitivity). You can choose to adjust these settings based on your needs.--## ARM templates --The following ARM templates are available to configure Azure Monitor alerts for Application Gateway. +The templates for alerts described here are defined generically for settings like Severity, Aggregation Granularity, Frequency of Evaluation, Condition Type, and so on. You can modify the settings after deployment to meet your needs. See [detailed information about configuring a metric alert rule](../azure-monitor/alerts/alerts-create-new-alert-rule.md) for more information. -### Alert for Backend Response Status as 5xx +The templates for metric-based alerts use the **Dynamic threshold** value with [high sensitivity](../azure-monitor/alerts/alerts-dynamic-thresholds.md#known-issues-with-dynamic-threshold-sensitivity). You can choose to adjust these settings based on your needs. -[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-5xx%2Fazuredeploy.json) +The following ARM templates are available to configure Azure Monitor alerts for Application Gateway. 
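If you prefer scripting over the portal **Deploy to Azure** buttons, each template in the list that follows can also be deployed with a single PowerShell command. This is a hedged sketch: the template URI is the raw path behind the backend 5xx alert button below, the resource group name is a placeholder, and you're prompted for any required template parameters (such as the action group resource ID) at run time.

```powershell
# A hedged sketch: deploy the backend 5xx alert template from PowerShell.
# The URI is the raw template path behind the matching "Deploy to Azure"
# button; the resource group name is a placeholder.
New-AzResourceGroupDeployment `
    -ResourceGroupName "myResourceGroupAG" `
    -TemplateUri "https://raw.githubusercontent.com/Azure/azure-quickstart-templates/master/demos/ag-alert-backend-5xx/azuredeploy.json"
```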
For the procedure to use these templates, see [Create a new alert rule using an ARM template](../azure-monitor/alerts/alerts-create-rule-cli-powershell-arm.md#create-a-new-alert-rule-using-an-arm-template). -This notification is based on Metrics signal. +- Alert for Backend Response Status as 5xx -### Alert for average Unhealthy Host Count + [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-5xx%2Fazuredeploy.json) -[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-unhealthy-host%2Fazuredeploy.json) + This notification is based on the Metrics signal. -This notification is based on Metrics signal. +- Alert for average Unhealthy Host Count -### Alert for Backend Last Byte Response Time + [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-unhealthy-host%2Fazuredeploy.json) -This notification is based on Metrics signal. + This notification is based on the Metrics signal. -### Alert for Key Vault integration issues +- Alert for Backend Last Byte Response Time + [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-lastbyte-resp%2Fazuredeploy.json) -[![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-backend-lastbyte-resp%2Fazuredeploy.json) + This notification is based on the Metrics signal. -This notification is based on its Azure Advisor recommendation. +- Alert for Key Vault integration issues + [![Deploy to Azure](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure%2Fazure-quickstart-templates%2Fmaster%2Fdemos%2Fag-alert-keyvault-advisor%2Fazuredeploy.json) -## Next steps + This notification is based on an Azure Advisor recommendation. -<!-- Add additional links. You can change the wording of these and add more if useful. --> +## Related content - See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway. |
application-gateway | Configure Application Gateway With Private Frontend Ip | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/configure-application-gateway-with-private-frontend-ip.md | Title: Configure an internal load balancer (ILB) endpoint -description: This article provides information on how to configure Application Gateway Standard v2 with a private frontend IP address +description: This article provides information on how to configure Application Gateway Standard v1 with a private frontend IP address Previously updated : 02/07/2024 Last updated : 08/09/2024 # Configure an application gateway with an internal load balancer (ILB) endpoint -Azure Application Gateway Standard v2 can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*. +Azure Application Gateway Standard v1 can be configured with an Internet-facing VIP or with an internal endpoint that isn't exposed to the Internet. An internal endpoint uses a private IP address for the frontend, which is also known as an *internal load balancer (ILB) endpoint*. ++> [!NOTE] +> Application Gateway v1 is being retired. See the [v1 retirement announcement](/azure/application-gateway/v1-retirement).<br> +> To configure a v2 application gateway with a private frontend IP address, see [Private Application Gateway deployment](/azure/application-gateway/application-gateway-private-deployment). Configuring the gateway using a frontend private IP address is useful for internal line-of-business applications that aren't exposed to the Internet. It's also useful for services and tiers within a multi-tier application that are in a security boundary that isn't exposed to the Internet but: Configuring the gateway using a frontend private IP address is useful for intern - session stickiness - or Transport Layer Security (TLS) termination (previously known as Secure Sockets Layer (SSL)). -This article guides you through the steps to configure a Standard v2 Application Gateway with an ILB using the Azure portal. +This article guides you through the steps to configure a Standard v1 Application Gateway with an ILB using the Azure portal. [!INCLUDE [updated-for-az](~/reusable-content/ce-skilling/azure/includes/updated-for-az.md)] In this example, you create a new virtual network. You can create a virtual netw 2. Select **Networking** and then select **Application Gateway** in the Featured list. 3. Enter *myAppGateway* for the name of the application gateway and *myResourceGroupAG* for the new resource group. 4. For **Region**, select **Central US**.-5. For **Tier**, select **Standard**. 6. Under **Configure virtual network** select **Create new**, and then enter these values for the virtual network: - *myVNet* - for the name of the virtual network. - *10.0.0.0/16* - for the virtual network address space. In this example, you create a new virtual network. You can create a virtual netw 9. Select **Next:Backends**. 10. Select **Add a backend pool**. 11. For **Name**, type *appGatewayBackendPool*.-12. For **Add backend pool without targets**, select **Yes**. You'll add the targets later. +12. For **Add backend pool without targets**, select **Yes**. Targets are added later. 13. Select **Add**. 14. Select **Next:Configuration**. 15. Under **Routing rules**, select **Add a routing rule**. 
In this example, you create a new virtual network. You can create a virtual netw ## Add backend pool -The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multi-tenant backends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway. +The backend pool is used to route requests to the backend servers that serve the request. The backend can be composed of NICs, virtual machine scale sets, public IP addresses, internal IP addresses, fully qualified domain names (FQDN), and multitenant backends like Azure App Service. In this example, you use virtual machines as the target backend. You can either use existing virtual machines or create new ones. In this example, you create two virtual machines that Azure uses as backend servers for the application gateway. -To do this, you: +To do this: 1. Create two new virtual machines, *myVM* and *myVM2*, used as backend servers. 2. Install IIS on the virtual machines to verify that the application gateway was created successfully. |
application-gateway | Monitor Application Gateway Reference | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway-reference.md | Title: Monitoring Azure Application Gateway data reference -description: Important reference material needed when you monitor Application Gateway + Title: Monitoring data reference for Azure Application Gateway +description: This article contains important reference material you need when you monitor Azure Application Gateway. Last updated : 06/17/2024++ - - Previously updated : 05/17/2024 -<!-- VERSION 2.2 -Template for monitoring data reference article for Azure services. This article is support for the main "Monitoring [servicename]" article for the service. --> -# Monitoring Azure Application Gateway data reference +# Azure Application Gateway monitoring data reference -See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for details on collecting and analyzing monitoring data for Azure Application Gateway. -## Application Gateway v2 metrics +See [Monitor Azure Application Gateway](monitor-application-gateway.md) for details on the data you can collect for Application Gateway and how to use it. -Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-monitor/essentials/metrics-supported.md#microsoftnetworkapplicationgateways) -### Timing metrics -Application Gateway provides several built-in timing metrics related to the request and response, which are all measured in milliseconds. +### Supported metrics for Microsoft.Network/applicationGateways -> [!NOTE] -> -> If the Application Gateway has more than one listener, then always filter by the *Listener* dimension while comparing different latency metrics to get more meaningful inference. +The following table lists all the metrics available for the Microsoft.Network/applicationGateways resource type. More detailed descriptions for many metrics are included after this table. -| Metric | Unit | Description| -|:-|:--|:| -|**Backend connect time**|Milliseconds|Time spent establishing a connection with the backend application.<br><br>This includes the network latency and the time taken by the backend server's TCP stack to establish new connections. For TLS, it also includes the time spent on handshake.| -|**Backend first byte response time**|Milliseconds|Time interval between start of establishing a connection to backend server and receiving the first byte of the response header.<br><br>This approximates the sum of Backend connect time, time taken by the request to reach the backend from Application Gateway, time taken by backend application to respond (the time the server took to generate content, potentially fetch database queries), and the time taken by first byte of the response to reach the Application Gateway from the backend.| +For available Web Application Firewall (WAF) metrics, see [Application Gateway WAF v2 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics) and [Application Gateway WAF v1 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics). -|**Backend last byte response time**|Milliseconds|Time interval between start of establishing a connection to backend server and receiving the last byte of the response body.<br><br>This approximates the sum of backend first byte response time and data transfer time. 
This number may vary greatly depending on the size of objects requested and the latency of the server network.| -|**Application gateway total time**|Milliseconds|Average time that it takes for a request to be received, processed and its response to be sent.<br><br>This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. This includes the processing time taken by Application Gateway, the Backend last byte response time, the time taken by Application Gateway to send all the response, and the Client RTT.| -|**Client RTT**|Milliseconds|Average round-trip time between clients and Application Gateway.| -These metrics can be used to determine whether the observed slowdown is due to the client network, Application Gateway performance, the backend network and backend server TCP stack saturation, backend application performance, or large file size. +### Timing metrics for Application Gateway v2 SKU -For example, if there's a spike in *Backend first byte response time* trend but the *Backend connect time* trend is stable, then it can be inferred that the Application gateway to backend latency and the time taken to establish the connection is stable, and the spike is caused due to an increase in the response time of backend application. On the other hand, if the spike in *Backend first byte response time* is associated with a corresponding spike in *Backend connect time*, then it can be deduced that either the network between Application Gateway and backend server or the backend server TCP stack has saturated. +Application Gateway v2 SKU provides many built-in timing metrics related to the request and response, which are all measured in milliseconds. What follows is expanded descriptions of the timing metrics already listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways). -If you notice a spike in *Backend last byte response time* but the *Backend first byte response time* is stable, then it can be deduced that the spike is because of a larger file being requested. +- **Backend connect time**. This value includes the network latency and the time taken by the backend server's TCP stack to establish new connections. For TLS, it also includes the time spent on handshake. +- **Backend first byte response time**. 
This value approximates the sum of *Backend connect time*, time taken by the request to reach the backend from Application Gateway, time taken by the backend application to respond, which is the time the server takes to generate content and potentially fetch database queries, and the time taken by the first byte of the response to reach the Application Gateway from the backend. +- **Backend last byte response time**. This value approximates the sum of backend first byte response time and data transfer time. This number varies greatly depending on the size of objects requested and the latency of the server network. +- **Application gateway total time**. This interval is the time from when Application Gateway receives the first byte of the HTTP request to the time when the last response byte is sent to the client. +- **Client RTT**. Average round-trip time between clients and Application Gateway. -### Application Gateway metrics +### Metrics for Application Gateway v2 SKU -| Metric | Unit | Description| -|:-|:--|:| -|**Bytes received**|Bytes|Count of bytes received by the Application Gateway from the clients. (This metric accounts for only the Request content size observed by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions.)| -|**Bytes sent**|Bytes|Count of bytes sent by the Application Gateway to the clients. (This metric accounts for only the Response Content size served by the Application Gateway. It doesn't include data transfers such as TCP/IP packet headers or retransmissions.)| -|**Client TLS protocol**|Count|Count of TLS and non-TLS requests initiated by the client that established connection with the Application Gateway. To view TLS protocol distribution, filter by the TLS Protocol dimension.| -|**Current capacity units**|Count|Count of capacity units consumed to load balance the traffic. There are three determinants to capacity unit - compute unit, persistent connections, and throughput. Each capacity unit is composed of at most: one compute unit, or 2500 persistent connections, or 2.22-Mbps throughput.| -|**Current compute units**|Count|Count of processor capacity consumed. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing.| -|**Current connections**|Count|The total number of concurrent connections active from clients to the Application Gateway.| -|**Estimated Billed Capacity units**|Count|With the v2 SKU, the pricing model is driven by consumption. Capacity units measure consumption-based cost that is charged in addition to the fixed cost. *Estimated Billed Capacity units indicate the number of capacity units using which the billing is estimated. This is calculated as the greater value between *Current capacity units* (capacity units required to load balance the traffic) and *Fixed billable capacity units* (minimum capacity units kept provisioned).| -|**Failed Requests**|Count|Number of requests that Application Gateway has served with 5xx server error codes. This includes the 5xx codes that are generated from the Application Gateway and the 5xx codes that are generated from the backend. 
The request count can be further filtered to show count per each/specific backend pool-http setting combination.| -|**Fixed Billable Capacity Units**|Count|The minimum number of capacity units kept provisioned as per the *Minimum scale units* setting (one instance translates to 10 capacity units) in the Application Gateway configuration.| -|**New connections per second**|Count|The average number of new TCP connections per second established from clients to the Application Gateway and from the Application Gateway to the backend members.| -|**Response Status**|Status code|HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.| -|**Throughput**|Bytes/sec|Number of bytes per second the Application Gateway has served. (This metric accounts for only the Content size served by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions.)| -|**Total Requests**|Count|Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.| +For Application Gateway v2 SKU, the following metrics are available. What follows is expanded descriptions of the metrics already listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways). -### Backend metrics +- **Bytes received**. This metric accounts for only the Request content size observed by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions. +- **Bytes sent**. This metric accounts for only the Response Content size served by the Application Gateway. It doesn't include data transfers such as TCP/IP packet headers or retransmissions. +- **Client TLS protocol**. Count of TLS and non-TLS requests. +- **Current capacity units**. There are three determinants to capacity unit: compute unit, persistent connections, and throughput. Each capacity unit is composed of at most one compute unit, or 2500 persistent connections, or 2.22-Mbps throughput. +- **Current compute units**. Factors affecting compute unit are TLS connections/sec, URL Rewrite computations, and WAF rule processing. +- **Current connections**. The total number of concurrent connections active from clients to the Application Gateway. +- **Estimated Billed Capacity units**. With the v2 SKU, consumption drives the pricing model. Capacity units measure consumption-based cost that is charged in addition to the fixed cost. *Estimated Billed Capacity units* indicates the number of capacity units used to estimate billing. This amount is calculated as the greater value between *Current capacity units* (capacity units required to load balance the traffic) and *Fixed billable capacity units* (minimum capacity units kept provisioned). +- **Failed Requests**. This value includes the 5xx codes that are generated from the Application Gateway and the 5xx codes that are generated from the backend. The request count can be further filtered to show count per each/specific backend pool-http setting combination. +- **Fixed Billable Capacity Units**. The minimum number of capacity units kept provisioned as per the *Minimum scale units* setting in the Application Gateway configuration. One instance translates to 10 capacity units. +- **New connections per second**. 
+- **Response Status**. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
+- **Throughput**. This metric accounts for only the Content size served by the Application Gateway. It doesn't include data transfers such as TLS header negotiations, TCP/IP packet headers, or retransmissions.
+- **Total Requests**. Successful requests that Application Gateway served. The request count can be filtered to show count per each/specific backend pool-http setting combination.

-| Metric | Unit | Description|
-|:-|:--|:|
-|**Backend response status**|Count|Count of HTTP response status codes returned by the backends. This doesn't include any response codes generated by the Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.|
-|**Healthy host count**|Count|The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.|
-|**Unhealthy host count**|Count|The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool.|
-|**Requests per minute per Healthy Host**|Count|The average number of requests received by each healthy member in a backend pool in a minute. Specify the backend pool using the *BackendPool HttpSettings* dimension.|

+### Backend metrics for Application Gateway v2 SKU

-### Backend health API

+For Application Gateway v2 SKU, the following backend metrics are available. What follows is expanded descriptions of the backend metrics already listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).

-See [Application Gateways - Backend Health](/rest/api/application-gateway/application-gateways/backend-health?tabs=HTTP) for details of the API call to retrieve the backend health of an application gateway.

+- **Backend response status**. Count of HTTP response status codes returned by the backends, not including any response codes generated by the Application Gateway. The response status code distribution can be categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
+- **Healthy host count**. The number of hosts that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool.
+- **Unhealthy host count**. The number of hosts that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool. A query sketch for trending these host counts follows this section.
+- **Requests per minute per Healthy Host**. The average number of requests received by each healthy member in a backend pool in a minute. Specify the backend pool using the *BackendPool HttpSettings* dimension.

-Sample Request:
-``output
-POST
-https://management.azure.com/subscriptions/subid/resourceGroups/rg/providers/Microsoft.Network/
-applicationGateways/appgw/backendhealth?api-version=2021-08-01
-After
-``
--After sending this POST request, you should see an HTTP 202 Accepted response. In the response headers, find the Location header and send a new GET request using that URL.
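The healthy and unhealthy host counts can also be charted in Log Analytics. The following query is a sketch, not part of the original reference: it assumes the gateway's platform metrics are routed to the `AzureMetrics` table through a diagnostic setting, and that `UnhealthyHostCount` is the metric ID (confirm both against your workspace before using it).

```kusto
// Trend the average unhealthy host count per gateway in 5-minute buckets.
// Assumes AzureMetrics collection is enabled via a diagnostic setting.
AzureMetrics
| where ResourceProvider == "MICROSOFT.NETWORK" and MetricName == "UnhealthyHostCount"
| summarize AvgUnhealthyHosts = avg(Average) by bin(TimeGenerated, 5m), Resource
| order by TimeGenerated asc
```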
+### Metrics for Application Gateway v1 SKU

-``output
-GET
-https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/region-name/operationResults/GUID?api-version=2021-08-01
-``

+For Application Gateway v1 SKU, the following metrics are available. What follows is expanded descriptions of the metrics already listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).

-### Application Gateway TLS/TCP proxy monitoring

+- **CPU Utilization**. Displays the utilization of the CPUs allocated to the Application Gateway. Under normal conditions, CPU usage shouldn't regularly exceed 90%, because that situation might cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. You can indirectly control or improve CPU utilization by modifying the configuration of the Application Gateway by increasing the instance count or by moving to a larger SKU size, or doing both.

-#### TLS/TCP proxy metrics
--With layer 4 proxy feature now available with Application Gateway, there are some Common metrics (apply to both layer 7 as well as layer 4), and some layer 4 specific metrics. The following table describes all the metrics are the applicable for layer 4 usage.
--| Metric | Description | Type | Dimension |
-|:--|:|:-|:-|
-| Current Connections | The number of active connections: reading, writing, or waiting. The count of current connections established with Application Gateway. | Common metric | None |
-| New Connections per second | The average number of connections handled per second during that minute. | Common metric | None |
-| Throughput | The rate of data flow (inBytes+ outBytes) during that minute. | Common metric | None |
-| Healthy host count | The number of healthy backend hosts. | Common metric | BackendSettingsPool |
-| Unhealthy host | The number of unhealthy backend hosts. | Common metric | BackendSettingsPool |
-| ClientRTT | Average round trip time between clients and Application Gateway. | Common metric | Listener |
-| Backend Connect Time | Time spent establishing a connection with a backend server. | Common metric | Listener, BackendServer, BackendPool, BackendSetting |
-| Backend First Byte Response Time | Time interval between start of establishing a connection to backend server and receiving the first byte of data (approximating processing time of backend server). | Common metric | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
-| Backend Session Duration | The total time of a backend connection. The average time duration from the start of a new connection to its termination. | L4-specific | Listener, BackendServer, BackendPool, BackendHttpSetting`*` |
-| Connection Lifetime | The total time of a client connection to application gateway. The average time duration from the start of a new connection to its termination in milliseconds. | L4-specific | Listener |
--`*` BackendHttpSetting dimension includes both layer 7 and layer 4 backend settings.
--#### TLS/TCP proxy logs
--Application Gateway's Layer 4 proxy provides log data through access logs. These logs are only generated and published if they are configured in the diagnostic settings of your gateway. Also see: [Supported categories for Azure Monitor resource logs](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways).
--> [!NOTE]
-> The columns with Mutual Authentication details for a TLS listener are currently available only through the [AzureDiagnostics table](application-gateway-diagnostics.md#storage-locations).
--| Category | Resource log category |
-|:--|:-|
-| ResourceGroup | The resource group to which the application gateway resource belongs. |
-| SubscriptionId |The subscription ID of the application gateway resource. |
-| ResourceProvider |This will be MICROSOFT.NETWORK for application gateway. |
-| Resource |The name of the application gateway resource. |
-| ResourceType |This will be APPLICATIONGATEWAYS. |
-| ruleName |The name of the routing rule that served the connection request. |
-| instanceId |Application Gateway instance that served the request. |
-| clientIP |Originating IP for the request. |
-| receivedBytes |Data received from client to gateway, in bytes. |
-| sentBytes |Data sent from gateway to client, in bytes. |
-| listenerName |The name of the listener that established the frontend connection with client. |
-| backendSettingName |The name of the backend setting used for the backend connection. |
-| backendPoolName |The name of the backend pool from which a target server was selected to establish the backend connection. |
-| protocol |TCP (Irrespective of it being TCP or TLS, the protocol value will always be TCP). |
-| sessionTime |session duration, in seconds (this is for the client->appgw session) |
-| upstreamSentBytes |Data sent to backend server, in bytes. |
-| upstreamReceivedBytes |Data received from backend server, in bytes. |
-| upstreamSessionTime |session duration, in seconds (this is for the appgw->backend session) |
-| sslCipher |Cipher suite being used for TLS communication (for TLS protocol listeners). |
-| sslProtocol |SSL/TLS protocol being used (for TLS protocol listeners). |
-| serverRouted |The backend server IP and port number to which the traffic was routed. |
-| serverStatus |200 - session completed successfully. 400 - client data could not be parsed. 500 - internal server error. 502 - bad gateway. For example, when an upstream server could not be reached. 503 - service unavailable. For example, if access is limited by the number of connections. |
-| ResourceId |Application Gateway resource URI |
--### TLS/TCP proxy backend health
+- **Current connections**. Count of current connections established with Application Gateway.
-Application Gateway's layer 4 proxy provides the capability to monitor the health of individual members of the backend pools through the portal and REST API.
+- **Failed Requests**. Number of requests that failed due to connection issues. This count includes requests that failed due to exceeding the "Request time-out" HTTP setting and requests that failed due to connection issues between Application Gateway and the backend. This count doesn't include failures due to no healthy backend being available. 4xx and 5xx responses from the backend are also not considered as part of this metric.
-![Screenshot of backend health](./media/monitor-application-gateway-reference/backend-health.png)
+- **Response Status**. HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.
+- **Throughput**. Number of bytes per second the Application Gateway served.
+- **Total Requests**. Count of successful requests that Application Gateway has served.
The request count can be further filtered to show count per each/specific backend pool-http setting combination. -## Application Gateway v1 metrics +### Backend metrics for Application Gateway v1 SKU -### Application Gateway metrics +For Application Gateway v1 SKU, the following backend metrics are available. What follows is expanded descriptions of the backend metrics already listed in the previous [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways). -| Metric | Unit | Description| -|:-|:--|:| -|**CPU Utilization**|Percent|Displays the CPU usage allocated to the Application Gateway. Under normal conditions, CPU usage should not regularly exceed 90%, as this may cause latency in the websites hosted behind the Application Gateway and disrupt the client experience. You can indirectly control or improve CPU usage by modifying the configuration of the Application Gateway by increasing the instance count or by moving to a larger SKU size, or doing both.| -|**Current connections**|Count|Count of current connections established with Application Gateway.| -|**Failed Requests**|Count|Number of requests that failed because of connection issues. This count includes requests that failed due to exceeding the *Request time-out* HTTP setting and requests that failed due to connection issues between Application Gateway and the backend. This count doesn't include failures due to no healthy backend being available. 4xx and 5xx responses from the backend are also not considered as part of this metric.| -|**Response Status**|Status code|HTTP response status returned by Application Gateway. The response status code distribution can be further categorized to show responses in 2xx, 3xx, 4xx, and 5xx categories.| -|**Throughput**|Bytes/sec|Number of bytes per second the Application Gateway has served.| -|**Total Requests**|Count|Count of successful requests that Application Gateway has served. The request count can be further filtered to show count per each/specific backend pool-http setting combination.| -|**Web Application Firewall Blocked Requests Count**|Count|Number of requests blocked by WAF.| -|**Web Application Firewall Blocked Requests Distribution**|Count|Number of requests blocked by WAF filtered to show count per each/specific WAF rule group or WAF rule ID combination.| -|**Web Application Firewall Total Rule Distribution**|Count|Number of requests received per each specific WAF rule group or WAF rule ID combination.| +- **Healthy host count**. The number of backends that are determined healthy by the health probe. You can filter on a per backend pool basis to show the number of healthy hosts in a specific backend pool. +- **Unhealthy host count**. The number of backends that are determined unhealthy by the health probe. You can filter on a per backend pool basis to show the number of unhealthy hosts in a specific backend pool. -<!-- Keep this text as-is --> -For more information, see a list of [all platform metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). +### Backend health API +See [Application Gateways - Backend Health](/rest/api/application-gateway/application-gateways/backend-health?tabs=HTTP) for details of the API call to retrieve the backend health of an application gateway. +Sample Request: -## Metrics Dimensions +```http +POST +https://management.azure.com/subscriptions/subid/resourceGroups/rg/providers/Microsoft.Network/ +applicationGateways/appgw/backendhealth?api-version=2021-08-01 +``` -<!-- REQUIRED. 
Please keep headings in this order -->
-<!-- If you have metrics with dimensions, outline it here. If you have no dimensions, say so. Questions email azmondocs@microsoft.com -->
+After sending this POST request, you should see an HTTP 202 Accepted response. In the response headers, find the Location header and send a new GET request using that URL.
-For more information on what metrics dimensions are, see [Multi-dimensional metrics](../azure-monitor/essentials/data-platform-metrics.md#multi-dimensional-metrics).
+```http
+GET
+https://management.azure.com/subscriptions/subid/providers/Microsoft.Network/locations/region-name/operationResults/GUID?api-version=2021-08-01
+```
+### TLS/TCP proxy metrics
-<!-- See https://learn.microsoft.com/azure/storage/common/monitor-storage-reference#metrics-dimensions for an example. Part is copied below. -->
+Application Gateway supports TLS/TCP proxy monitoring. With the layer 4 proxy feature now available in Application Gateway, some common metrics apply to both layer 7 and layer 4, and some metrics are specific to layer 4. The following list summarizes the metrics applicable to layer 4 usage.
-Azure Application Gateway supports dimensions for some of the metrics in Azure Monitor. Each metric includes a description that explains the available dimensions specifically for that metric.
+- Current Connections
+- New Connections per second
+- Throughput
+- Healthy host count
+- Unhealthy host count
+- Client RTT
+- Backend Connect Time
+- Backend First Byte Response Time. `BackendHttpSetting` dimension includes both layer 7 and layer 4 backend settings.
+For more information, see previous descriptions and the [metrics table](#supported-metrics-for-microsoftnetworkapplicationgateways).
-## Resource logs
-<!-- REQUIRED. Please keep headings in this order -->
+These metrics apply to layer 4 only.
-This section lists the types of resource logs you can collect for Azure Application Gateway.
+- **Backend Session Duration**. The total time of a backend connection. The average time duration from the start of a new connection to its termination. `BackendHttpSetting` dimension includes both layer 7 and layer 4 backend settings.
+- **Connection Lifetime**. The total time of a client connection to application gateway. The average time duration from the start of a new connection to its termination in milliseconds.
-<!-- List all the resource log types you can have and what they are for -->
+### TLS/TCP proxy backend health
-For reference, see a list of [all resource logs category types supported in Azure Monitor](../azure-monitor/essentials/resource-logs-schema.md).
+Application Gateway's layer 4 proxy provides the capability to monitor the health of individual members of the backend pools through the portal and REST API.
+
+
+
+
+- Action
+- BackendHttpSetting
+- BackendPool
+- BackendServer
+- BackendSettingsPool
+- Category
+- CountryCode
+- CustomRuleID
+- HttpStatusGroup
+- Listener
+- Method
+- Mode
+- PolicyName
+- PolicyScope
+- RuleGroup
+- RuleGroupID
+- RuleId
+- RuleSetName
+- TlsProtocol

> [!NOTE]
NOTE: YOU MUST MANUALLY MAINTAIN THIS SECTION to make sure it stays in sync with the resource-log-categories link. You can group these sections however you want provided you include the proper links back to resource-log-categories article. >--<!-- Example format. Add extra information --> +> +> If the Application Gateway has more than one listener, then always filter by the *Listener* dimension while comparing different latency metrics to get more meaningful inference. -## Application Gateway -Resource Provider and Type: [Microsoft.Network/applicationGateways](../azure-monitor/essentials/resource-logs-categories.md#microsoftnetworkapplicationgateways) +### Supported resource log categories for Microsoft.Network/applicationGateways -| Category | Display Name | Information| -|:|:-|| -| **Activitylog** | Activity log | Activity log entries are collected by default. You can use [Azure activity logs](../azure-monitor/essentials/activity-log.md) (formerly known as operational logs and audit logs) to view all operations that are submitted to your Azure subscription, and their status. | -|**ApplicationGatewayAccessLog**|Access log| You can use this log to view Application Gateway access patterns and analyze important information. This includes the caller's IP address, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The Application Gateway instance is identified by the instanceId property.| -| **ApplicationGatewayPerformanceLog**|Performance log|You can use this log to view how Application Gateway instances are performing. This log captures performance information for each instance, including total requests served, throughput in bytes, total requests served, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds. The Performance log is available only for the v1 SKU. For the v2 SKU, use [Application Gateway v2 metrics](#application-gateway-v2-metrics) for performance data.| -|**ApplicationGatewayFirewallLog**|Firewall log|You can use this log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds.| +- **Access log**. You can use the Access log to view Application Gateway access patterns and analyze important information. This information includes the caller's IP, requested URL, response latency, return code, and bytes in and out. An access log is collected every 60 seconds. This log contains one record per instance of Application Gateway. The `instanceId` property identifies the Application Gateway instance. -## Azure Monitor Logs tables -<!-- REQUIRED. Please keep heading in this order --> +- **Firewall log**. You can use the Firewall log to view the requests that are logged through either detection or prevention mode of an application gateway that is configured with the web application firewall. Firewall logs are collected every 60 seconds. -This section refers to all of the Azure Monitor Logs Kusto tables relevant to Azure Application Gateway and available for query by Log Analytics. +- **Performance log**. You can use the Performance log to view how Application Gateway instances are performing. 
This log captures performance information for each instance, including total requests served, throughput in bytes, failed request count, and healthy and unhealthy backend instance count. A performance log is collected every 60 seconds.
+ > [!NOTE]
+ > The Performance log is available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data.
-<!-- OPTION 1 - Minimum - Link to relevant bookmarks in https://learn.microsoft.com/azure/azure-monitor/reference/tables/tables-resourcetype where your service tables are listed. These files are auto generated from the REST API. If this article is missing tables that you and the PM know are available, both of you contact azmondocs@microsoft.com. >+### Access log category
-<!-- Example format. There should be AT LEAST one Resource Provider/Resource Type here. -->
+The access log is generated only if you enable it on each Application Gateway instance, as detailed in [Enable logging](application-gateway-diagnostics.md#enable-logging-through-the-azure-portal). The data is stored in the storage account that you specified when you enabled the logging. Each access of Application Gateway is logged in JSON format as shown.
-|Resource Type | Notes |
-|-|--|
-| [Application Gateway](/azure/azure-monitor/reference/tables/tables-resourcetype#application-gateways) |Includes AzureActivity, AzureDiagnostics, and AzureMetrics |
> [!NOTE]
> For TLS/TCP proxy related information, visit [data reference](monitor-application-gateway-reference.md#tlstcp-proxy-logs).
+
+For Application Gateway and WAF v2 SKU:
+
+| Value | Description |
+|:|:|
+|instanceId | Application Gateway instance that served the request. |
+|clientIP | IP of the immediate client of Application Gateway. If another proxy fronts your application gateway, this value displays the IP of that fronting proxy. |
+|httpMethod | HTTP method used by the request. |
+|requestUri | URI of the received request. |
+|UserAgent | User agent from the HTTP request header. |
+|httpStatus | HTTP status code returned to the client from Application Gateway. |
+|httpVersion | HTTP version of the request. |
+|receivedBytes | Size of packet received, in bytes. |
+|sentBytes | Size of packet sent, in bytes. |
+|clientResponseTime | Time difference (in seconds) between the first byte and the last byte application gateway sent to the client. Helpful in gauging Application Gateway's processing time for responses or slow clients. |
+|timeTaken | Length of time (in **seconds**) that it takes for the first byte of a client request to be processed and its last byte sent in the response to the client. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. |
+|WAFEvaluationTime | Length of time (in **seconds**) that it takes for the request to be processed by the WAF. |
+|WAFMode | Value can be either Detection or Prevention. |
+|transactionId | Unique identifier to correlate the request received from the client. |
+|sslEnabled | Whether communication to the backend pools used TLS. Valid values are on and off. |
+|sslCipher | Cipher suite being used for TLS communication (if TLS is enabled). |
+|sslProtocol | SSL/TLS protocol being used (if TLS is enabled). |
+|sslClientVerify | Shows the result of client certificate verification as SUCCESS or FAILED.
Failed status will include error information.| +|sslClientCertificateFingerprint|The SHA1 thumbprint of the client certificate for an established TLS connection.| +|sslClientCertificateIssuerName|The issuer DN string of the client certificate for an established TLS connection.| +|serverRouted | The backend server that application gateway routes the request to. | +|serverStatus | HTTP status code of the backend server. | +|serverResponseLatency | Latency of the response (in **seconds**) from the backend server. | +|host | Address listed in the host header of the request. If rewritten using header rewrite, this field contains the updated host name. | +|originalRequestUriWithArgs | This field contains the original request URL. | +|requestUri | This field contains the URL after the rewrite operation on Application Gateway. | +|upstreamSourcePort | The source port used by Application Gateway when initiating a connection to the backend target. | +|originalHost | This field contains the original request host name. | +|error_info | The reason for the 4xx and 5xx error. Displays an error code for a failed request. More details in the error code tables in this article. | +|contentType | The type of content or data that is being processed or delivered by the application gateway. | ++```json +{ + "timeStamp": "2021-10-14T22:17:11+00:00", + "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", + "listenerName": "HTTP-Listener", + "ruleName": "Storage-Static-Rule", + "backendPoolName": "StaticStorageAccount", + "backendSettingName": "StorageStatic-HTTPS-Setting", + "operationName": "ApplicationGatewayAccess", + "category": "ApplicationGatewayAccessLog", + "properties": { + "instanceId": "appgw_2", + "clientIP": "185.42.129.24", + "clientPort": 45057, + "httpMethod": "GET", + "originalRequestUriWithArgs": "\/", + "requestUri": "\/", + "requestQuery": "", + "userAgent": "Mozilla\/5.0 (Windows NT 6.1; WOW64) AppleWebKit\/537.36 (KHTML, like Gecko) Chrome\/52.0.2743.116 Safari\/537.36", + "httpStatus": 200, + "httpVersion": "HTTP\/1.1", + "receivedBytes": 184, + "sentBytes": 466, + "clientResponseTime": 0, + "timeTaken": 0.034, + "WAFEvaluationTime": "0.000", + "WAFMode": "Detection", + "transactionId": "592d1649f75a8d480a3c4dc6a975309d", + "sslEnabled": "on", + "sslCipher": "ECDHE-RSA-AES256-GCM-SHA384", + "sslProtocol": "TLSv1.2", + "sslClientVerify": "NONE", + "sslClientCertificateFingerprint": "", + "sslClientCertificateIssuerName": "", + "serverRouted": "52.239.221.65:443", + "serverStatus": "200", + "serverResponseLatency": "0.028", + "upstreamSourcePort": "21564", + "originalHost": "20.110.30.194", + "host": "20.110.30.194", + "error_info":"ERRORINFO_NO_ERROR", + "contentType":"application/json" + } +} +``` +> [!NOTE] +> +> Access logs with clientIP value 127.0.0.1 originate from an internal security process running on the application gateway instances. You can safely ignore these log entries. ++For Application Gateway Standard and WAF SKU (v1): ++| Value | Description | +|:--|-| +| instanceId | Application Gateway instance that served the request. | +| clientIP | Originating IP for the request. | +| clientPort | Originating port for the request. | +| httpMethod | HTTP method used by the request. | +| requestUri | URI of the received request. 
| +| RequestQuery | **Server-Routed**: Backend pool instance that was sent the request.</br>**X-AzureApplicationGateway-LOG-ID**: Correlation ID used for the request. It can be used to troubleshoot traffic issues on the backend servers. </br>**SERVER-STATUS**: HTTP response code that Application Gateway received from the back end. | +| UserAgent | User agent from the HTTP request header. | +| httpStatus | HTTP status code returned to the client from Application Gateway. | +| httpVersion | HTTP version of the request. | +| receivedBytes | Size of packet received, in bytes. | +| sentBytes | Size of packet sent, in bytes. | +| timeTaken | Length of time (in milliseconds) that it takes for a request to be processed and its response to be sent. This value is calculated as the interval from the time when Application Gateway receives the first byte of an HTTP request to the time when the response send operation finishes. It's important to note that the Time-Taken field usually includes the time that the request and response packets are traveling over the network. | +| sslEnabled | Whether communication to the backend pools used TLS/SSL. Valid values are on and off. | +| host | The hostname for which the request has been sent to the backend server. If backend hostname is being overridden, this name reflects that. | +| originalHost | The hostname for which the request was received by the Application Gateway from the client. | ++```json +{ + "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/PEERINGTEST/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", + "operationName": "ApplicationGatewayAccess", + "time": "2017-04-26T19:27:38Z", + "category": "ApplicationGatewayAccessLog", + "properties": { + "instanceId": "ApplicationGatewayRole_IN_0", + "clientIP": "191.96.249.97", + "clientPort": 46886, + "httpMethod": "GET", + "requestUri": "/phpmyadmin/scripts/setup.php", + "requestQuery": "X-AzureApplicationGateway-CACHE-HIT=0&SERVER-ROUTED=10.4.0.4&X-AzureApplicationGateway-LOG-ID=874f1f0f-6807-41c9-b7bc-f3cfa74aa0b1&SERVER-STATUS=404", + "userAgent": "-", + "httpStatus": 404, + "httpVersion": "HTTP/1.0", + "receivedBytes": 65, + "sentBytes": 553, + "timeTaken": 205, + "sslEnabled": "off", + "host": "www.contoso.com", + "originalHost": "www.contoso.com" + } +} +``` ++If the application gateway can't complete the request, it stores one of the following reason codes in the error_info field of the access log. ++| 4XX Errors | The 4xx error codes indicate that there was an issue with the client's request, and the Application Gateway can't fulfill it. | +|:|:| +| ERRORINFO_INVALID_METHOD | The client sent a request that is non-RFC compliant. Possible reasons: client using HTTP method not supported by server, misspelled method, incompatible HTTP protocol version etc. | +| ERRORINFO_INVALID_REQUEST | The server can't fulfill the request because of incorrect syntax. | +| ERRORINFO_INVALID_VERSION | The application gateway received a request with an invalid or unsupported HTTP version. | +| ERRORINFO_INVALID_09_METHOD | The client sent request with HTTP Protocol version 0.9. | +| ERRORINFO_INVALID_HOST | The value provided in the "Host" header is either missing, improperly formatted, or doesn't match the expected host value. For example, when there's no Basic listener, and none of the hostnames of Multisite listeners match with the host. 
| +| ERRORINFO_INVALID_CONTENT_LENGTH | The length of the content specified by the client in the content-Length header doesn't match the actual length of the content in the request. | +| ERRORINFO_INVALID_METHOD_TRACE | The client sent HTTP TRACE method, which the application gateway doesn't support. | +| ERRORINFO_CLIENT_CLOSED_REQUEST | The client closed the connection with the application gateway before the idle timeout period elapsed. Check whether the client timeout period is greater than the [idle timeout period](./application-gateway-faq.yml#what-are-the-settings-for-keep-alive-timeout-and-tcp-idle-timeout) for the application gateway. | +| ERRORINFO_REQUEST_URI_INVALID | Indicates issue with the Uniform Resource Identifier (URI) provided in the client's request. | +| ERRORINFO_HTTP_NO_HOST_HEADER | Client sent a request without Host header. | +| ERRORINFO_HTTP_TO_HTTPS_PORT | The client sent a plain HTTP request to an HTTPS port. | +| ERRORINFO_HTTPS_NO_CERT | Indicates client isn't sending a valid and properly configured TLS certificate during Mutual TLS authentication. | ++| 5XX Errors | Description | +|:--|:| +| ERRORINFO_UPSTREAM_NO_LIVE | The application gateway is unable to find any active or reachable backend servers to handle incoming requests. | +| ERRORINFO_UPSTREAM_CLOSED_CONNECTION | The backend server closed the connection unexpectedly or before the request was fully processed. This condition could happen due to backend server reaching its limits, crashing etc. | +| ERRORINFO_UPSTREAM_TIMED_OUT | The established TCP connection with the server was closed as the connection took longer than the configured timeout value. | ++### Firewall log category ++The firewall log is generated only if you enable it for each application gateway, as detailed in [Enable logging](application-gateway-diagnostics.md#enable-logging-through-the-azure-portal). This log also requires that the web application firewall is configured on an application gateway. The data is stored in the storage account that you specified when you enabled the logging. The following data is logged: ++| Value | Description | +|: |:-| +| instanceId | Application Gateway instance for which firewall data is being generated. For a multiple-instance application gateway, there's one row per instance. | +| clientIp | Originating IP for the request. | +| clientPort | Originating port for the request. | +| requestUri | URL of the received request. | +| ruleSetType | Rule set type. The available value is OWASP. | +| ruleSetVersion | Rule set version used. Available values are 2.2.9 and 3.0. | +| ruleId | Rule ID of the triggering event. | +| message | User-friendly message for the triggering event. More details are provided in the details section. | +| action | Action taken on the request. Available values are Blocked and Allowed (for custom rules), Matched (when a rule matches a part of the request), and Detected and Blocked (these values are both for mandatory rules, depending on if the WAF is in detection or prevention mode). | +| site | Site for which the log was generated. Currently, only Global is listed because rules are global.| +| details | Details of the triggering event. | +| details.message | Description of the rule. | +| details.data | Specific data found in request that matched the rule. | +| details.file | Configuration file that contained the rule. | +| details.line | Line number in the configuration file that triggered the event. | +| hostname | Hostname or IP address of the Application Gateway. 
| +| transactionId | Unique ID for a given transaction, which helps group multiple rule violations that occurred within the same request. | ++```json +{ + "timeStamp": "2021-10-14T22:17:11+00:00", + "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", + "operationName": "ApplicationGatewayFirewall", + "category": "ApplicationGatewayFirewallLog", + "properties": { + "instanceId": "appgw_2", + "clientIp": "185.42.129.24", + "clientPort": "", + "requestUri": "\/", + "ruleSetType": "OWASP_CRS", + "ruleSetVersion": "3.0.0", + "ruleId": "920350", + "message": "Host header is a numeric IP address", + "action": "Matched", + "site": "Global", + "details": { + "message": "Warning. Pattern match \\\"^[\\\\d.:]+$\\\" at REQUEST_HEADERS:Host .... ", + "data": "20.110.30.194:80", + "file": "rules\/REQUEST-920-PROTOCOL-ENFORCEMENT.conf", + "line": "791" + }, + "hostname": "20.110.30.194:80", + "transactionId": "592d1649f75a8d480a3c4dc6a975309d", + "policyId": "default", + "policyScope": "Global", + "policyScopeName": "Global" + } +} +``` ++### Performance log category ++The performance log is generated only if you enable it on each Application Gateway instance, as detailed in [Enable logging](application-gateway-diagnostics.md#enable-logging-through-the-azure-portal). The data is stored in the storage account that you specified when you enabled the logging. The performance log data is generated in 1-minute intervals. It's available only for the v1 SKU. For the v2 SKU, use [Metrics](application-gateway-metrics.md) for performance data. The following data is logged: ++| Value | Description | +|:|:| +| instanceId | Application Gateway instance for which performance data is being generated. For a multiple-instance application gateway, there's one row per instance. | +| healthyHostCount | Number of healthy hosts in the backend pool. | +| unHealthyHostCount | Number of unhealthy hosts in the backend pool. | +| requestCount | Number of requests served. | +| latency | Average latency (in milliseconds) of requests from the instance to the back end that serves the requests. | +| failedRequestCount | Number of failed requests.| +| throughput | Average throughput since the last log, measured in bytes per second.| ++```json +{ + "resourceId": "/SUBSCRIPTIONS/{subscriptionId}/RESOURCEGROUPS/{resourceGroupName}/PROVIDERS/MICROSOFT.NETWORK/APPLICATIONGATEWAYS/{applicationGatewayName}", + "operationName": "ApplicationGatewayPerformance", + "time": "2016-04-09T00:00:00Z", + "category": "ApplicationGatewayPerformanceLog", + "properties": + { + "instanceId":"ApplicationGatewayRole_IN_1", + "healthyHostCount":"4", + "unHealthyHostCount":"0", + "requestCount":"185", + "latency":"0", + "failedRequestCount":"0", + "throughput":"119427" + } +} +``` -For a reference of all Azure Monitor Logs / Log Analytics tables, see the [Azure Monitor Log Table Reference](/azure/azure-monitor/reference/tables/tables-resourcetype). +> [!NOTE] +> Latency is calculated from the time when the first byte of the HTTP request is received to the time when the last byte of the HTTP response is sent. It's the sum of the Application Gateway processing time plus the network cost to the back end, plus the time that the back end takes to process the request. -### Diagnostics tables -<!-- REQUIRED. 
Please keep heading in this order -->
-<!-- If your service uses the AzureDiagnostics table in Azure Monitor Logs / Log Analytics, list what fields you use and what they are for. Azure Diagnostics is over 500 columns wide with all services using the fields that are consistent across Azure Monitor and then adding extra ones just for themselves. If it uses service specific diagnostic table, refers to that table. If it uses both, put both types of information in. Most services in the future have their own specific table. If you have questions, contact azmondocs@microsoft.com -->

+### Azure Monitor Logs and Log Analytics tables

Azure Application Gateway uses the [Azure Diagnostics](/azure/azure-monitor/reference/tables/azurediagnostics) table to store resource log information. The following columns are relevant.

-**Azure Diagnostics**
-
| Property | Description |
-|: |:|
-requestUri_s | The URI of the client request.|
-Message | Informational messages such as "SQL Injection Attack"|
-userAgent_s | User agent details of the client request|
-ruleName_s | Request routing rule that is used to serve this request|
-httpMethod_s | HTTP method of the client request|
-instanceId_s | The Appgw instance to which the client request is routed to for evaluation|
-httpVersion_s | HTTP version of the client request|
-clientIP_s | IP from which is request is made|
-host_s | Host header of the client request|
-requestQuery_s | Query string as part of the client request|
-sslEnabled_s | Does the client request have SSL enabled|
--
-## See Also
--<!-- replace below with the proper link to your main monitoring service article -->
-- See [Monitoring Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Azure Application Gateway.
-- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.
+|:-- |:|
+| requestUri_s | The URI of the client request.|
+| Message | Informational messages such as "SQL Injection Attack"|
+| userAgent_s | User agent details of the client request|
+| ruleName_s | Request routing rule that is used to serve this request|
+| httpMethod_s | HTTP method of the client request|
+| instanceId_s | The Appgw instance to which the client request is routed to for evaluation|
+| httpVersion_s | HTTP version of the client request|
+| clientIP_s | IP from which the request is made|
+| host_s | Host header of the client request|
+| requestQuery_s | Query string as part of the client request|
+| sslEnabled_s | Whether the client request has SSL enabled|

+### Application Gateway Microsoft.Network/applicationGateways

+- [AzureActivity](/azure/azure-monitor/reference/tables/azureactivity#columns)
+- [AzureMetrics](/azure/azure-monitor/reference/tables/azuremetrics#columns)
+- [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs#columns)
+- [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs#columns)
+- [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs#columns)
+- [AzureDiagnostics](/azure/azure-monitor/reference/tables/azurediagnostics#columns)

+### TLS/TCP proxy logs

+Application Gateway's Layer 4 proxy provides log data through access logs. These logs are only generated and published if they're configured in the diagnostic settings of your gateway. Also see: [Supported categories for Azure Monitor resource logs](/azure/azure-monitor/essentials/resource-logs-categories#microsoftnetworkapplicationgateways).
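The following query sketch (not part of the original article) shows how these layer 4 access log entries can be summarized from the AzureDiagnostics table. The suffixed column names such as `protocol_s`, `sessionTime_s`, and `backendPoolName_s` follow the usual AzureDiagnostics naming convention but are assumptions; verify them against your workspace schema.

```kusto
// Summarize layer 4 (TLS/TCP proxy) sessions per backend pool.
// Column names are assumptions; check your AzureDiagnostics schema first.
AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog" and protocol_s == "TCP"
| summarize Sessions = count(), AvgSessionSeconds = avg(todouble(sessionTime_s)) by backendPoolName_s
```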
++> [!NOTE] +> The columns with Mutual Authentication details for a TLS listener are currently available only through the [AzureDiagnostics table](/azure/azure-monitor/reference/tables/azurediagnostics). ++| Category | Resource log category | +|:--|:-| +| ResourceGroup | The resource group to which the application gateway resource belongs. | +| SubscriptionId | The subscription ID of the application gateway resource. | +| ResourceProvider | This value is MICROSOFT.NETWORK for application gateway. | +| Resource | The name of the application gateway resource. | +| ResourceType | This value is APPLICATIONGATEWAYS. | +| ruleName | The name of the routing rule that served the connection request. | +| instanceId | Application Gateway instance that served the request. | +| clientIP | Originating IP for the request. | +| receivedBytes | Data received from client to gateway, in bytes. | +| sentBytes | Data sent from gateway to client, in bytes. | +| listenerName | The name of the listener that established the frontend connection with client. | +| backendSettingName | The name of the backend setting used for the backend connection. | +| backendPoolName | The name of the backend pool from which a target server was selected to establish the backend connection. | +| protocol | TCP (Irrespective of it being TCP or TLS, the protocol value is always TCP). | +| sessionTime | Session duration, in seconds (this value is for the client->appgw session). | +| upstreamSentBytes | Data sent to backend server, in bytes. | +| upstreamReceivedBytes | Data received from backend server, in bytes. | +| upstreamSessionTime | Session duration, in seconds (this value is for the appgw->backend session). | +| sslCipher | Cipher suite being used for TLS communication (for TLS protocol listeners). | +| sslProtocol | SSL/TLS protocol being used (for TLS protocol listeners). | +| serverRouted | The backend server IP and port number to which the traffic was routed. | +| serverStatus | 200 - session completed successfully. 400 - client data couldn't be parsed. 500 - internal server error. 502 - bad gateway. For example, when an upstream server couldn't be reached. 503 - service unavailable. For example, if access is limited by the number of connections. | +| ResourceId | Application Gateway resource URI. | +++- [applicationGateways resource provider operations](/azure/role-based-access-control/resource-provider-operations#networking) ++You can use Azure activity logs to view all operations that are submitted to your Azure subscription, and their status. Activity log entries are collected by default. You can view them in the Azure portal. Azure activity logs were formerly known as *operational logs* and *audit logs*. ++Azure generates activity logs by default. The logs are preserved for 90 days in the Azure event logs store. Learn more about these logs by reading the [View events and activity log](../azure-monitor/essentials/activity-log.md) article. ++## Related content ++- See [Monitor Azure Application Gateway](monitor-application-gateway.md) for a description of monitoring Application Gateway. +- See [Monitor Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for details on monitoring Azure resources. |
application-gateway | Monitor Application Gateway | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/monitor-application-gateway.md | Title: Monitoring Azure Application Gateway -description: Start here to learn how to monitor Azure Application Gateway + Title: Monitor Azure Application Gateway +description: Start here to learn how to monitor Azure Application Gateway. Learn how to monitor resources for availability, performance, and operation. Last updated : 06/17/2024++ - Previously updated : 02/26/2024
-<!-- VERSION 2.2
-Template for the main monitoring article for Azure services.
-Keep the required sections and add/modify any content for any information specific to your service.
-This article should be in your TOC with the name *monitor-[Azure Application Gateway].md* and the TOC title "Monitor Azure Application Gateway".
-Put accompanying reference information into an article in the Reference section of your TOC with the name *monitor-[service-name]-reference.md* and the TOC title "Monitoring data".
-Keep the headings in this order. >
+# Monitor Azure Application Gateway
+
+
+
+Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources including Application Gateway, without requiring any configuration. For more information, see [Azure Monitor Network Insights](../network-watcher/network-insights-overview.md).
+
+For more information about the resource types for Application Gateway, see [Application Gateway monitoring data reference](monitor-application-gateway-reference.md).
+
-# Monitoring Azure Application Gateway
-<!-- REQUIRED. Please keep headings in this order -->
-<!-- Most services can use this section unchanged. Add to it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+For Application Gateway, resource-specific mode creates three tables:
-When you have critical applications and business processes relying on Azure resources, you want to monitor those resources for their availability, performance, and operation.
+- [AGWAccessLogs](/azure/azure-monitor/reference/tables/agwaccesslogs)
+- [AGWPerformanceLogs](/azure/azure-monitor/reference/tables/agwperformancelogs)
+- [AGWFirewallLogs](/azure/azure-monitor/reference/tables/agwfirewalllogs)
-This article describes the monitoring data generated by Azure Application Gateway. Azure Application Gateway uses [Azure Monitor](../azure-monitor/overview.md). If you're unfamiliar with the features of Azure Monitor common to all Azure services that use it, read [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md).
+> [!NOTE]
+> The resource specific option is currently available in all **public regions**.
+>
+> Existing users can continue using Azure Diagnostics, or can opt for dedicated tables by switching the toggle in Diagnostic settings to **Resource specific**, or to **Dedicated** in API destination. Dual mode isn't possible. The data in all the logs can either flow to Azure Diagnostics, or to dedicated tables. However, you can have multiple diagnostic settings where one data flow goes to Azure Diagnostics and another uses resource-specific tables at the same time.
+**Selecting the destination table in Log Analytics**: All Azure services eventually use the resource-specific tables. As part of this transition, you can select the Azure Diagnostics or resource-specific table in the diagnostic setting by using a toggle button. The toggle is set to **Resource specific** by default and in this mode, logs for newly selected categories are sent to dedicated tables in Log Analytics, while existing streams remain unchanged. See the following example.
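As an illustration (a sketch, not from the original article): once access logs flow to the dedicated tables, queries target `AGWAccessLogs` directly instead of filtering the shared `AzureDiagnostics` table. Column names such as `HttpStatus` are assumptions based on the table reference; confirm them in your workspace.

```kusto
// Response status distribution from the resource-specific access log table.
// Assumes the diagnostic setting routes access logs to AGWAccessLogs.
AGWAccessLogs
| where TimeGenerated > ago(1h)
| summarize Requests = count() by HttpStatus, bin(TimeGenerated, 5m)
```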
-<!-- Optional diagram showing monitoring for your service. If you need help creating one, contact robb@microsoft.com -->
-## Monitoring overview page in Azure portal
-<!-- OPTIONAL. Please keep headings in this order -->
-<!-- Most services can use this section unchanged. Edit it if there are any unique charges if your service has significant monitoring beyond Azure Monitor. -->
+**Workspace Transformations:** Opting for the Resource specific option allows you to filter and modify your data with [workspace transformations](../azure-monitor/essentials/data-collection-transformations-workspace.md) before it's ingested. This approach provides granular control, allowing you to focus on the most relevant information from the logs, thereby reducing data costs and enhancing security.
+
+For detailed instructions on setting up workspace transformations, see [Tutorial: Add a workspace transformation to Azure Monitor Logs by using the Azure portal](../azure-monitor/logs/tutorial-workspace-transformations-portal.md).
+
The **Overview** page in the Azure portal for each Application Gateway includes the following metrics:

- Avg Healthy Host Count By BackendPool HttpSettings
- Avg Unhealthy Host Count By BackendPool HttpSettings

-This list is just a subset of the metrics available for Application Gateway. For more information, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md).
--
-## Azure Monitor Network Insights
--Some services in Azure have a special focused prebuilt monitoring dashboard in the Azure portal that provides a starting point for monitoring your service. These special dashboards are called "insights".
--<!-- Give a quick outline of what your "insight page" provides and refer to another article that gives details -->
--Azure Monitor Network Insights provides a comprehensive view of health and metrics for all deployed network resources (including Application Gateway), without requiring any configuration. For more information, see [Azure Monitor Network Insights](../network-watcher/network-insights-overview.md).
--## Monitoring data
--<!-- REQUIRED. Please keep headings in this order -->
-Azure Application Gateway collects the same kinds of monitoring data as other Azure resources that are described in [Monitoring data from Azure resources](../azure-monitor/essentials/monitor-azure-resource.md#monitoring-data).
--See [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md) for detailed information on the metrics and logs metrics created by Azure Application Gateway.
--<!-- If your service has additional non-Azure Monitor monitoring data then outline and refer to that here. Also include that information in the data reference as appropriate. -->
--## Collection and routing
--<!-- REQUIRED. Please keep headings in this order -->
--Platform metrics and the Activity log are collected and stored automatically, but can be routed to other locations by using a diagnostic setting.
--Resource Logs aren't collected and stored until you create a diagnostic setting and route them to one or more locations.
--<!-- Include any additional information on collecting logs.
The number of things that diagnostics settings control is expanding --> --See [Create diagnostic setting to collect platform logs and metrics in Azure](../azure-monitor/essentials/diagnostic-settings.md) for the detailed process for creating a diagnostic setting using the Azure portal, CLI, or PowerShell. When you create a diagnostic setting, you specify which categories of logs to collect. The categories for Azure Application Gateway are listed in [Azure Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs). --<!-- OPTIONAL: Add specific examples of configuration for this service. For example, CLI and PowerShell commands for creating diagnostic setting. Ideally, customers should set up a policy to automatically turn on collection for services. Azure monitor has Resource Manager template examples you can point to. See https://learn.microsoft.com/azure/azure-monitor/samples/resource-manager-diagnostic-settings. Contact azmondocs@microsoft.com if you have questions. --> +For a list of available metrics for Azure Application Gateway, see [Application Gateway monitoring data reference](monitor-application-gateway-reference.md#metrics). -The metrics and logs you can collect are discussed in the following sections. +For available Web Application Firewall (WAF) metrics, see [Application Gateway WAF v2 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v2-metrics) and [Application Gateway WAF v1 metrics](../web-application-firewall/ag/application-gateway-waf-metrics.md#application-gateway-waf-v1-metrics). -## Analyzing metrics --<!-- REQUIRED. Please keep headings in this order -If you don't support metrics, say so. Some services might be only onboarded to logs --> --You can analyze metrics for Azure Application Gateway with metrics from other Azure services using metrics explorer by opening **Metrics** from the **Azure Monitor** menu. See [Analyze metrics with Azure Monitor metrics explorer](../azure-monitor/essentials/analyze-metrics.md) for details on using this tool. --<!-- Point to the list of metrics available in your monitor-service-reference article. --> -For a list of the platform metrics collected for Azure Application Gateway, see [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md). ---For reference, you can see a list of [all resource metrics supported in Azure Monitor](../azure-monitor/essentials/metrics-supported.md). --<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about how to use metrics explorer specifically for your service. Remember that the UI is subject to change quite often so you need to maintain these screenshots yourself if you add them in. --> --## Analyzing logs --<!-- REQUIRED. Please keep headings in this order -If you don't support resource logs, say so. Some services might be only onboarded to metrics and the activity log. --> Data in Azure Monitor Logs is stored in tables where each table has its own set of unique properties. -All resource logs in Azure Monitor have the same fields followed by service-specific fields. The common schema is outlined in [Common and service-specific schema for Azure Resource Logs](../azure-monitor/essentials/resource-logs-schema.md#top-level-common-schema). --The [Activity log](../azure-monitor/essentials/activity-log.md) is a platform login Azure that provides insight into subscription-level events. 
You can view it independently or route it to Azure Monitor Logs, where you can do much more complex queries using Log Analytics.

+See [Application Gateway monitoring data reference](monitor-application-gateway-reference.md#resource-logs) for:
-For a list of the types of resource logs collected for Azure Application Gateway, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md#resource-logs).
+- A list of the types of resource logs collected for Application Gateway.
+- A list of the tables used by Azure Monitor Logs and queryable by Log Analytics.
+- The available resource log categories, their associated Log Analytics tables, and the log schemas for Application Gateway.
-For a list of the tables used by Azure Monitor Logs and queryable by Log Analytics, see [Monitoring Azure Application Gateway data reference](monitor-application-gateway-reference.md#azure-monitor-logs-tables).
-<!-- Optional: Call out additional information to help your customers. For example, you can include additional information here about log usage or what logs are most important. Remember that the UI is subject to change quite often so you need to maintain these screenshots yourself if you add them in. -->
-### Sample Kusto queries
+### Analyzing Access logs through GoAccess
-<!-- REQUIRED if you support logs. Please keep headings in this order -->
-<!-- Add sample Log Analytics Kusto queries for your service. -->
+We published a Resource Manager template that installs and runs the popular [GoAccess](https://goaccess.io/) log analyzer for Application Gateway Access Logs. GoAccess provides valuable HTTP traffic statistics such as Unique Visitors, Requested Files, Hosts, Operating Systems, Browsers, HTTP Status codes, and more. For more information, see the [Readme file in the Resource Manager template folder in GitHub](https://github.com/Azure/azure-quickstart-templates/tree/master/demos/application-gateway-logviewer-goaccess).

> [!IMPORTANT]
> When you select **Logs** from the Application Gateway menu, Log Analytics is opened with the query scope set to the current Application Gateway. This means that log queries only include data from that resource. If you want to run a query that includes data from other Application Gateways or data from other Azure services, select **Logs** from the **Azure Monitor** menu. See [Log query scope and time range in Azure Monitor Log Analytics](/azure/azure-monitor/log-query/scope/) for details.

-<!-- REQUIRED: Include queries that are helpful for figuring out the health and state of your service. Ideally, use some of these queries in the alerts section. It's possible that some of your queries might be in the Log Analytics UI (sample or example queries). Check if so. -->
+The following examples show some useful queries for Application Gateway.
-You can use the following queries to help you monitor your Application Gateway resource.
-<!-- Put in a code section here. -->
-```Kusto
+```kusto
// Requests per hour
// Count of the incoming requests on the Application Gateway.
// To create an alert for this query, click '+ New alert rule'

AzureDiagnostics
| sort by AggregatedValue desc
```
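Another sample query, offered as a sketch rather than part of the original set: it surfaces 5xx responses by request URI over the last hour, assuming logs flow to `AzureDiagnostics` with the usual type-suffixed columns (`httpStatus_d`, `requestUri_s`); verify these names in your workspace.

```kusto
// 5xx responses in the last hour, grouped by request URI.
// Column names follow the AzureDiagnostics suffix convention; confirm them first.
AzureDiagnostics
| where ResourceType == "APPLICATIONGATEWAYS" and OperationName == "ApplicationGatewayAccess"
| where TimeGenerated > ago(1h) and httpStatus_d >= 500
| summarize FailedCount = count() by requestUri_s
| order by FailedCount desc
```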
Be prescriptive >--Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks -<!-- only include next line if applications run on your service and work with App Insights. --> -If you're creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) can offer additional types of alerts. -<!-- end --> +To configure alerts using ARM templates, see [Configure Azure Monitor alerts](configure-alerts-with-templates.md). -The following tables list common and recommended alert rules for Application Gateway. +### Application Gateway alert rules -<!-- Fill in the table with metric and log alerts that would be valuable for your service. Change the format as necessary to make it more readable --> +The following table lists some suggested alert rules for Application Gateway. These alerts are just examples. You can set alerts for any metric, log entry, or activity log entry listed in the [Application Gateway monitoring data reference](monitor-application-gateway-reference.md). -**Application Gateway v1** +Application Gateway v2 | Alert type | Condition | Description | |:|:|:|-|Metric|CPU utilization crosses 80%|Under normal conditions, CPU usage shouldn't regularly exceed 90%. This can cause latency in the websites hosted behind the Application Gateway and disrupt the client experience.| -|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This catches issues where the Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.| +|Metric|Compute Unit utilization crosses 75% of average usage|Compute unit is the measure of compute utilization of your Application Gateway. Check your average compute unit usage over the last month and set an alert if it crosses 75% of that average.| +|Metric|Capacity Unit utilization crosses 75% of peak usage|Capacity units represent overall gateway utilization in terms of throughput, compute, and connection count. Check your maximum capacity unit usage over the last month and set an alert if it crosses 75% of that peak.| +|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This alert catches issues where Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.| |Metric|Response status (4xx, 5xx) crosses threshold|When Application Gateway response status is 4xx or 5xx. There could be occasional 4xx or 5xx response seen due to transient issues. You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.|-|Metric|Failed requests crosses threshold|When failed requests metric crosses a threshold. You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.| -+|Metric|Failed requests crosses threshold|When the Failed requests metric crosses a threshold. 
You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.| +|Metric|Backend last byte response time crosses threshold|Indicates the time interval between the start of establishing a connection to the backend server and receiving the last byte of the response body. Create an alert if the backend response latency is more than a certain threshold from usual.| +|Metric|Application Gateway total time crosses threshold|This value is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. Create an alert if the backend response latency is more than a certain threshold from usual.| -**Application Gateway v2** +Application Gateway v1 | Alert type | Condition | Description | |:|:|:|-|Metric|Compute Unit utilization crosses 75% of average usage|Compute unit is the measure of compute utilization of your Application Gateway. Check your average compute unit usage in the last one month and set alert if it crosses 75% of it.| -|Metric|Capacity Unit utilization crosses 75% of peak usage|Capacity units represent overall gateway utilization in terms of throughput, compute, and connection count. Check your maximum capacity unit usage in the last one month and set alert if it crosses 75% of it.| -|Metric|Unhealthy host count crosses threshold|Indicates number of backend servers that application gateway is unable to probe successfully. This catches issues where Application gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.| +|Metric|CPU utilization crosses 80%|Under normal conditions, CPU usage shouldn't regularly exceed 90%. This situation can cause latency in the websites hosted behind the Application Gateway and disrupt the client experience.| +|Metric|Unhealthy host count crosses threshold|Indicates the number of backend servers that Application Gateway is unable to probe successfully. This alert catches issues where the Application Gateway instances are unable to connect to the backend. Alert if this number goes above 20% of backend capacity.| |Metric|Response status (4xx, 5xx) crosses threshold|When Application Gateway response status is 4xx or 5xx. There could be occasional 4xx or 5xx response seen due to transient issues. You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.|-|Metric|Failed requests crosses threshold|When Failed requests metric crosses threshold. You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.| -|Metric|Backend last byte response time crosses threshold|Indicates the time interval between start of establishing a connection to backend server and receiving the last byte of the response body. Create an alert if the backend response latency is more that certain threshold from usual.| -|Metric|Application Gateway total time crosses threshold|This is the interval from the time when Application Gateway receives the first byte of the HTTP request to the time when the last response byte has been sent to the client. Should create an alert if the backend response latency is more that certain threshold from usual.| +|Metric|Failed requests crosses threshold|When the failed requests metric crosses a threshold. 
You should observe the gateway in production to determine static threshold or use dynamic threshold for the alert.| ++Azure Monitor alerts proactively notify you when important conditions are found in your monitoring data. They allow you to identify and address issues in your system before your customers notice them. You can set alerts on [metrics](../azure-monitor/alerts/alerts-metric-overview.md), [logs](../azure-monitor/alerts/alerts-unified-log.md), and the [activity log](../azure-monitor/alerts/activity-log-alerts.md). Different types of alerts have benefits and drawbacks. -## Next steps +If you're creating or running an application that uses Application Gateway, [Azure Monitor Application Insights](../azure-monitor/app/app-insights-overview.md) can offer other types of alerts. -<!-- Add additional links. You can change the wording of these and add more if useful. --> -- See [Monitoring Application Gateway data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created by Application Gateway.+## Related content -- See [Monitoring Azure resources with Azure Monitor](../azure-monitor/essentials/monitor-azure-resource.md) for details on monitoring Azure resources.+- See [Application Gateway monitoring data reference](monitor-application-gateway-reference.md) for a reference of the metrics, logs, and other important values created for Application Gateway. +- See [Monitoring Azure resources with Azure Monitor](/azure/azure-monitor/essentials/monitor-azure-resource) for general details on monitoring Azure resources. |
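The alert rows above recommend observing production traffic before fixing a static threshold. A minimal Log Analytics sketch for that observation, assuming access logs are routed to the `AzureDiagnostics` table (field names as emitted by Application Gateway resource logs):

```kusto
// Hourly 5xx counts per gateway, to inform a static alert threshold.
AzureDiagnostics
| where Category == "ApplicationGatewayAccessLog"
| where httpStatus_d >= 500
| summarize FailedResponses = count() by bin(TimeGenerated, 1h), Resource
| sort by TimeGenerated desc
```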
application-gateway | Mutual Authentication Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/mutual-authentication-overview.md | Azure portal support is currently not available. -To verify OCSP revocation status has been evaluated for the client request, [access logs](./application-gateway-diagnostics.md#access-log) will contain a property called "sslClientVerify", with the status of the OCSP response. +To verify that OCSP revocation status has been evaluated for the client request, [access logs](monitor-application-gateway-reference.md#access-log-category) contain a property called "sslClientVerify", with the status of the OCSP response. It is critical that the OCSP responder is highly available and that network connectivity between Application Gateway and the responder is possible. If Application Gateway is unable to resolve the fully qualified domain name (FQDN) of the defined responder, or if network connectivity is blocked to/from the responder, certificate revocation status fails and Application Gateway returns a 400 HTTP response to the requesting client. |
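As a sketch only, an access log entry surfaces the property named above; the surrounding fields and the exact status string are illustrative placeholders, not the documented schema:

```json
{
  "operationName": "ApplicationGatewayAccess",
  "properties": {
    "httpStatus": 200,
    "sslClientVerify": "OCSP: good"
  }
}
```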
application-gateway | Rewrite Url Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/application-gateway/rewrite-url-portal.md | Observe the following fields in the access logs to verify whether the URL rewrite happened as expected: * **originalRequestUriWithArgs**: This field contains the original request URL. * **requestUri**: This field contains the URL after the rewrite operation on Application Gateway. -For more information on all the fields in the access logs, see [here](application-gateway-diagnostics.md#for-application-gateway-and-waf-v2-sku). +For more information on all the fields in the access logs, see [Access log](monitor-application-gateway-reference.md#access-log-category). ## Next steps |
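For illustration, both fields appear in the same access log entry, which makes the before/after comparison straightforward; the URL values below are hypothetical:

```json
{
  "originalRequestUriWithArgs": "/article.aspx?id=10&title=overview",
  "requestUri": "/article/10/overview"
}
```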
automation | Dsc Configuration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/quickstarts/dsc-configuration.md | description: This article helps you get started configuring an Azure VM with Des keywords: dsc, configuration, automation Previously updated : 04/12/2023 Last updated : 08/08/2024 -> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and planning accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md). +> Azure Automation DSC for Linux has retired. For more information, see the [announcement](https://azure.microsoft.com/updates/migrate-from-linux-dsc-extension-to-the-guest-configuration-feature-of-azure-policy-by-may-1-2025/#:~:text=The%20DSC%20extension%20for%20Linux%20machines%20in%20Azure%2C,no%20longer%20be%20supported%20after%2030%20September%202023.). > [!NOTE] > Before you enable Azure Automation DSC, we would like you to know that a newer version of DSC is now generally available, managed by a feature of Azure Policy named [Azure Machine Configuration](../../governance/machine-configuration/overview.md). The Azure Machine Configuration service combines features of DSC Extension, Azure Automation State Configuration, and the most commonly requested features from customer feedback. Azure Machine Configuration also includes hybrid machine support through [Arc-enabled servers](../../azure-arc/servers/overview.md). -By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows and Linux servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure Linux VM and deploying a LAMP stack using Azure Automation State Configuration. +By enabling Azure Automation State Configuration, you can manage and monitor the configurations of your Windows servers using Desired State Configuration (DSC). Configurations that drift from a desired configuration can be identified or auto-corrected. This quickstart steps through enabling an Azure VM and deploying a LAMP stack using Azure Automation State Configuration. ## Prerequisites To complete this quickstart, you need: * An Azure subscription. If you don't have an Azure subscription, [create a free account](https://azure.microsoft.com/free/).-* An Azure Resource Manager virtual machine running Red Hat Enterprise Linux, or Oracle Linux. For instructions on creating a VM, see [Create your first Linux virtual machine in the Azure portal](../../virtual-machines/linux/quick-create-portal.md) +* An Azure Resource Manager virtual machine. ## Sign in to Azure Sign in to the [Azure portal](https://portal.azure.com). There are many different methods to enable a machine for Automation State Config 5. Select the DSC settings appropriate for the virtual machine. If you have already prepared a configuration, you can specify it as `Node Configuration Name`. You can set the [configuration mode](/powershell/dsc/managing-nodes/metaConfig) to control the configuration behavior for the machine. 6. Click **OK**. While the DSC extension is deployed to the virtual machine, the status reported is `Connecting`. 
-![Enabling an Azure VM for DSC](./media/dsc-configuration/dsc-onboard-azure-vm.png) - ## Import modules Modules contain DSC resources and many can be found in the [PowerShell Gallery](https://www.powershellgallery.com). Any resources that are used in your configurations must be imported to the Automation account before compiling. For this quickstart, the module named **nx** is required. Modules contain DSC resources and many can be found in the [PowerShell Gallery]( 1. Click on the module to import. 1. Click **Import**. -![Importing a DSC Module](./media/dsc-configuration/dsc-import-module-nx.png) ## Import the configuration You can assign a compiled node configuration to a DSC node. Assignment applies t 1. In the left pane of the Automation account, select **State Configuration (DSC)** and then click the **Nodes** tab. 1. Select the node to which to assign a configuration. 1. Click **Assign Node Configuration**-1. Select the node configuration `LAMPServer.localhost` and click **OK**. State Configuration now assigns the compiled configuration to the node, and the node status changes to `Pending`. On the next periodic check, the node retrieves the configuration, applies it, and reports status. It can take up to 30 minutes for the node to retrieve the configuration, depending on the node settings. -1. To force an immediate check, you can run the following command locally on the Linux virtual machine: - `sudo /opt/microsoft/dsc/Scripts/PerformRequiredConfigurationChecks.py` +1. Select the node configuration `LAMPServer.localhost` and click **OK**. State Configuration now assigns the compiled configuration to the node, and the node status changes to `Pending`. On the next periodic check, the node retrieves the configuration, applies it, and reports status. ++It can take up to 30 minutes for the node to retrieve the configuration, depending on the node settings. -![Assigning a Node Configuration](./media/dsc-configuration/dsc-assign-node-configuration.png) ## View node status You can view the status of all State Configuration-managed nodes in your Automat ## Next steps -In this quickstart, you enabled an Azure Linux VM for State Configuration, created a configuration for a LAMP stack, and deployed the configuration to the VM. To learn how you can use Azure Automation State Configuration to enable continuous deployment, continue to the article: +In this quickstart, you enabled an Azure VM for State Configuration, created a configuration for a LAMP stack, and deployed the configuration to the VM. To learn how you can use Azure Automation State Configuration to enable continuous deployment, continue to the article: > [!div class="nextstepaction"] > [Set up continuous deployment with Chocolatey](../automation-dsc-cd-chocolatey.md) |
automation | Modules | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/shared-resources/modules.md | Title: Manage modules in Azure Automation description: This article tells how to use PowerShell modules to enable cmdlets in runbooks and DSC resources in DSC configurations. Previously updated : 07/17/2024 Last updated : 08/09/2024 TestModule 2.0.0 ``` -Within each of the version folders, copy your PowerShell .psm1, .psd1, or PowerShell module **.dll** files that make up a module into the respective version folder. Zip up the module folder so that Azure Automation can import it as a single .zip file. While Automation only shows the highest version of the module imported, if the module package contains side-by-side versions of the module, they are all available for use in your runbooks or DSC configurations. +Within each of the version folders, copy your PowerShell .psm1, .psd1, or PowerShell module **.dll** files that make up a module into the respective version folder. Zip up the module folder so that Azure Automation can import it as a single .zip file. While Automation only shows one of the versions of the module imported, if the module package contains side-by-side versions of the module, they are all available for use in your runbooks or DSC configurations. While Automation supports modules containing side-by-side versions within the same package, it does not support using multiple versions of a module across module package imports. For example, you import **module A**, which contains versions 1 and 2, into your Automation account. Later, you update **module A** to include versions 3 and 4; when you import it into your Automation account, only versions 3 and 4 are usable within any runbooks or DSC configurations. If you require all versions (1, 2, 3, and 4) to be available, the .zip file you import should contain versions 1, 2, 3, and 4. |
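A sketch of the side-by-side version layout described above, before zipping (file names are illustrative):

```
TestModule.zip
└── TestModule
    ├── 1.0.0
    │   ├── TestModule.psd1
    │   └── TestModule.psm1
    └── 2.0.0
        ├── TestModule.psd1
        └── TestModule.psm1
```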
azure-app-configuration | Rest Api Key Value | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-key-value.md | +zone_pivot_groups: appconfig-data-plane-api-version + # Key-values A key-value is a resource identified by a unique combination of `key` + `label`. `label` is optional. To explicitly reference a key-value without a label, use "\0" (URL encoded as ``%00``). See details for each operation. -This article applies to API version 1.0. - ## Operations - Get HTTP/1.1 200 OK ## List key-values Optional: ``key`` (If not specified, it implies any key.)+ Optional: ``label`` (If not specified, it implies any label.) + ```http GET /kv?label=*&api-version={api-version} HTTP/1.1 ``` HTTP/1.1 200 OK Content-Type: application/vnd.microsoft.appconfig.kvset+json; charset=utf-8 ``` -For additional options, see the "Filtering" section later in this article. ++Optional: ``tags`` (If not specified, it implies any tags.) ++```http +GET /kv?key=Test*&label=*&tags=tag1=value1&tags=tag2=value2&api-version={api-version} HTTP/1.1 +``` ++**Response:** ++```http +HTTP/1.1 200 OK +Content-Type: application/vnd.microsoft.appconfig.kvset+json; charset=utf-8 +``` ++For more options, see the "Filtering" section later in this article. +++## List key-values (conditionally) ++To improve client caching, use `If-Match` or `If-None-Match` request headers. The `etag` argument is part of the list key-values response body and header. +If both `If-Match` and `If-None-Match` are omitted, the operation is unconditional. ++The following request gets the key-values only if the current representation matches the specified `etag`: ++```http +GET /kv?key={key}&label={label}&api-version={api-version} HTTP/1.1 +If-Match: "4f6dd610dd5e4deebc7fbaef685fb903" +``` ++**Responses:** ++```http +HTTP/1.1 412 PreconditionFailed +``` ++or ++```http +HTTP/1.1 200 OK +``` ++The following request gets the key-values only if the current representation doesn't match the specified `etag`: ++```http +GET /kv?key={key}&label={label}&api-version={api-version} HTTP/1.1 +If-None-Match: "4f6dd610dd5e4deebc7fbaef685fb903" +``` ++**Responses:** ++```http +HTTP/1.1 304 NotModified +``` ++or ++```http +HTTP/1.1 200 OK +``` + ## Pagination Link: <{relative uri}>; rel="next" ## Filtering + A combination of `key` and `label` filtering is supported. Use the optional `key` and `label` query string parameters. Use the optional `key` and `label` query string parameters. GET /kv?key={key}&label={label}&api-version={api-version} ``` ++A combination of `key`, `label`, and `tags` filtering is supported. +Use the optional `key`, `label`, and `tags` query string parameters. +Multiple tag filters can be provided as query string parameters in the `tagName=tagValue` format. Tag filters must be an exact match. 
++```http +GET /kv?key={key}&label={label}&tags={tagFilter1}&tags={tagFilter2}&api-version={api-version} +``` +++ ### Supported filters |Key filter|Effect| GET /kv?key={key}&label={label}&api-version={api-version} |Label filter|Effect| |--|--| |`label` is omitted or `label=*`|Matches **any** label|-|`label=%00`|Matches KV without label| +|`label=%00`|Matches key-values with no label| |`label=prod`|Matches the label **prod**| |`label=prod*`|Matches labels that start with **prod**| |`label=prod,test`|Matches labels **prod** or **test** (limited to 5 CSV)| ++|Tags filter|Effect| +|--|--| +|`tags` is omitted or `tags=` |Matches **any** tag| +|`tags=group=app1`|Matches key-values that have a tag named `group` with value `app1`| +|`tags=group=app1&tags=env=prod`|Matches key-values that have a tag named `group` with value `app1` and a tag named `env` with value `prod` (limited to 5 tag filters)| +|`tags=tag1=%00`|Matches key-values that have a tag named `tag1` with value `null`| +|`tags=tag1=`|Matches key-values that have a tag named `tag1` with an empty value| ++ ***Reserved characters*** `*`, `\`, `,` If a reserved character is part of the value, then it must be escaped by using `\{Reserved Character}`. ***Filter validation*** -In the case of a filter validation error, the response is HTTP `400` with error details: +If filter validation fails, the response is HTTP `400` with error details: ```http HTTP/1.1 400 Bad Request ETag: "4f6dd610dd5e4deebc7fbaef685fb903" } ``` -If the item is locked, you'll receive the following response: +If the item is locked, the following response is returned: ```http HTTP/1.1 409 Conflict HTTP/1.1 204 No Content ## Delete key (conditionally) This is similar to the "Set key (conditionally)" section earlier in this article.+ |
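For orientation, a tag-filtered list request returns the usual key-value set shape; this abbreviated sketch assumes the standard key-value fields, with illustrative values:

```json
{
  "items": [
    {
      "etag": "4f6dd610dd5e4deebc7fbaef685fb903",
      "key": "Test.Setting",
      "label": null,
      "tags": { "tag1": "value1", "tag2": "value2" },
      "value": "example",
      "locked": false,
      "last_modified": "2024-08-08T00:00:00+00:00"
    }
  ]
}
```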
azure-app-configuration | Rest Api Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-keys.md | +zone_pivot_groups: appconfig-data-plane-api-version + # Keys -api-version: 1.0 - The following syntax represents a key resource: ```http Link: <relative uri>; rel="original" ] } ```+ |
azure-app-configuration | Rest Api Labels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-labels.md | +zone_pivot_groups: appconfig-data-plane-api-version + # Labels -api-version: 1.0 - The **Label** resource is defined as follows: ```json GET /labels?name={label-name}&api-version={api-version} ### Supported filters -|Key Filter|Effect| +|Label Filter|Effect| |--|--| |`name` is omitted or `name=*`|Matches **any** label| |`name=abc`|Matches a label named **abc**| Link: <{relative uri}>; rel="original" ] } ```+ |
azure-app-configuration | Rest Api Locks | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-locks.md | +zone_pivot_groups: appconfig-data-plane-api-version + # Locks -This API (version 1.0) provides lock and unlock semantics for the key-value resource. It supports the following operations: +This API provides lock and unlock semantics for the key-value resource. It supports the following operations: - Place lock - Remove lock The following request applies the operation only if the current key-value repres PUT|DELETE /kv/{key}?label={label}&api-version={api-version} HTTP/1.1 If-None-Match: "4f6dd610dd5e4deebc7fbaef685fb903" ```+ |
azure-app-configuration | Rest Api Revisions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-revisions.md | +zone_pivot_groups: appconfig-data-plane-api-version + # Key-value revisions For all operations, ``key`` is an optional parameter. If omitted, it implies any key. For all operations, ``label`` is an optional parameter. If omitted, it implies any label. -This article applies to API version 1.0. - ## Prerequisites [!INCLUDE [azure-app-configuration-create](../../includes/azure-app-configuration-rest-api-prereqs.md)] Content-Range: items 0-2/80 ## Filtering + A combination of `key` and `label` filtering is supported. Use the optional `key` and `label` query string parameters. Use the optional `key` and `label` query string parameters. GET /revisions?key={key}&label={label}&api-version={api-version} ``` ++A combination of `key`, `label`, and `tags` filtering is supported. +Use the optional `key`, `label`, and `tags` query string parameters. +Multiple tag filters can be provided as query string parameters in the `tagName=tagValue` format. Tag filters must be an exact match. ++```http +GET /revisions?key={key}&label={label}&tags={tagFilter1}&tags={tagFilter2}&api-version={api-version} +``` ++ ### Supported filters |Key filter|Effect| GET /revisions?key={key}&label={label}&api-version={api-version} |Label filter|Effect| |--|--|-|`label` is omitted or `label=`|Matches entry without label| +|`label` is omitted or `label=`|Matches key-values with no label| |`label=*`|Matches **any** label| |`label=prod`|Matches the label **prod**| |`label=prod*`|Matches labels that start with **prod**| GET /revisions?key={key}&label={label}&api-version={api-version} |`label=*prod*`|Matches labels that contain **prod**| |`label=prod,test`|Matches labels **prod** or **test** (limited to 5 CSV)| ++|Tags filter|Effect| +|--|--| +|`tags` is omitted or `tags=` |Matches **any** tag| +|`tags=group=app1`|Matches key-values that have a tag named `group` with value `app1`| +|`tags=group=app1&tags=env=prod`|Matches key-values that have a tag named `group` with value `app1` and a tag named `env` with value `prod` (limited to 5 tag filters)| +|`tags=tag1=%00`|Matches key-values that have a tag named `tag1` with value `null`| +|`tags=tag1=`|Matches key-values that have a tag named `tag1` with an empty value| ++ ### Reserved characters The reserved characters are: Link: <{relative uri}>; rel="original" ] } ```+ |
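The `Content-Range: items 0-2/80` header above implies paging in `items` units; assuming the service honors a matching `Range` request header, a paged revisions query looks like this sketch:

```http
GET /revisions?key={key}&label={label}&api-version={api-version} HTTP/1.1
Range: items=0-99
```

A partial result is then returned as `HTTP/1.1 206 Partial Content`, with a `Content-Range` header indicating the slice served.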
azure-app-configuration | Rest Api Snapshot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-app-configuration/rest-api-snapshot.md | +zone_pivot_groups: appconfig-data-plane-api-version + # Snapshot -A snapshot is a resource identified uniquely by its name. See details for each operation. ++Snapshot resource isn't available in API version 1.0. + -This article applies to API version 2022-11-01-preview. +A snapshot is a resource identified uniquely by its name. See details for each operation. ## Operations This article applies to API version 2022-11-01-preview. `SnapshotFilter` + ```json { "key": [string], This article applies to API version 2022-11-01-preview. } ``` ++```json +{ + "key": [string], + "label": [string], + "tags": [array<string>] +} +``` ++ ## Get snapshot Required: ``{name}``, ``{api-version}`` If-None-Match: "{etag}" HTTP/1.1 304 NotModified ``` -or +Or ```http HTTP/1.1 200 OK HTTP/1.1 200 OK Content-Type: application/vnd.microsoft.appconfig.snapshotset+json; charset=utf-8 ``` -For additional options, see the "Filtering" section later in this article. +For more options, see the "Filtering" section later in this article. ## Pagination GET /snapshots?name={name}&status={status}&api-version={api-version} `*`, `\`, `,` -If a reserved character is part of the value, then it must be escaped by using `\{Reserved Character}`. Non-reserved characters can also be escaped. +If a reserved character is part of the value, then it must be escaped by using `\{Reserved Character}`. Nonreserved characters can also be escaped. ***Filter validation*** -In the case of a filter validation error, the response is HTTP `400` with error details: +If filter validation fails, the response is HTTP `400` with error details: ```http HTTP/1.1 400 Bad Request GET /snapshot?$select=name,status&api-version={api-version} HTTP/1.1 **parameters** + | Property Name | Required | Default value | Validation | |-|-|-|-|-| name | yes | n/a | Length <br/> maximum: 256 | -| filters | yes | n/a | Count <br/> minimum: 1<br/> maximum: 3 | +| name | yes | n/a | Length <br/> Maximum: 256 | +| filters | yes | n/a | Count <br/> Minimum: 1<br/> Maximum: 3 | | filters[\<index\>].key | yes | n/a | |+| filters[\<index\>].label | no | null | Multi-match label filters (for example: "*", "comma,separated") aren't supported with 'key' composition type. | | tags | no | {} | |-| filters[\<index\>].label | no | null | Multi-match label filters (E.g.: "*", "comma,separated") aren't supported with 'key' composition type. | | composition_type | no | key | |-| retention_period | no | Standard tier <br/> 2592000 (30 days) <br/> Free tier <br/> 604800 (7 days) | Standard tier <br/> minimum: 3600 (1 hour) <br/> maximum: 7776000 (90 days) <br/> Free tier <br/> minimum: 3600 (1 hour) <br/> maximum: 604800 (7 days) | +| retention_period | no | Standard tier <br/> 2592000 (30 days) <br/> Free tier <br/> 604800 (seven days) | Standard tier <br/> Minimum: 3600 (one hour) <br/> Maximum: 7776000 (90 days) <br/> Free tier <br/> Minimum: 3600 (one hour) <br/> Maximum: 604800 (seven days) | ```http PUT /snapshot/{name}?api-version={api-version} HTTP/1.1 Operation-Location: {appConfigurationEndpoint}/operations?snapshot={name}&api-ve } ``` -The status of the newly created snapshot will be "provisioning". -Once the snapshot is fully provisioned, the status will update to "ready". 
++| Property Name | Required | Default value | Validation | +|-|-|-|-| +| name | yes | n/a | Length <br/> Maximum: 256 | +| filters | yes | n/a | Count <br/> Minimum: 1<br/> Maximum: 3 | +| filters[\<index\>].key | yes | n/a | | +| filters[\<index\>].label | no | null | Multi-match label filters (for example: "*", "comma,separated") aren't supported with 'key' composition type. | +| filters[\<index\>].tags | no | null | Count <br/> Minimum: 0<br/> Maximum: 5 | +| tags | no | {} | | +| composition_type | no | key | | +| retention_period | no | Standard tier <br/> 2592000 (30 days) <br/> Free tier <br/> 604800 (7 days) | Standard tier <br/> Minimum: 3600 (1 hour) <br/> Maximum: 7776000 (90 days) <br/> Free tier <br/> Minimum: 3600 (1 hour) <br/> Maximum: 604800 (7 days) | ++```http +PUT /snapshot/{name}?api-version={api-version} HTTP/1.1 +Content-Type: application/vnd.microsoft.appconfig.snapshot+json +``` ++```json +{ + "filters": [ // required + { + "key": "app1/*", // required + "label": "prod", // optional + "tags": ["group=g1", "default=true"] // optional + } + ], + "tags": { // optional + "tag1": "value1", + "tag2": "value2", + }, + "composition_type": "key", // optional + "retention_period": 2592000 // optional +} +``` ++**Responses:** ++```http +HTTP/1.1 201 Created +Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8 +Last-Modified: Tue, 05 Dec 2017 02:41:26 GMT +ETag: "4f6dd610dd5e4deebc7fbaef685fb903" +Operation-Location: {appConfigurationEndpoint}/operations?snapshot={name}&api-version={api-version} +``` ++```json +{ + "etag": "4f6dd610dd5e4deebc7fbaef685fb903", + "name": "{name}", + "status": "provisioning", + "filters": [ + { + "key": "app1/*", + "label": "prod", + "tags": ["group=g1", "default=true"] + } + ], + "composition_type": "key", + "created": "2023-03-20T21:00:03+00:00", + "size": 2000, + "items_count": 4, + "tags": { + "t1": "value1", + "t2": "value2" + }, + "retention_period": 2592000 +} +``` +++The status of the newly created snapshot is `provisioning`. +Once the snapshot is fully provisioned, the status updates to `ready`. Clients can poll the snapshot to wait for the snapshot to be ready before listing its associated key-values. To query additional information about the operation, reference the [polling snapshot creation](#polling-snapshot-creation) section. -If the snapshot already exists, you'll receive the following response: +If the snapshot already exists, the following response is returned: ```http HTTP/1.1 409 Conflict Content-Type: application/json; charset=utf-8 } ``` -If any error occurs during the provisioning of the snapshot, the `error` property will contain details describing the error. +If any error occurs during the provisioning of the snapshot, the `error` property contains details describing the error. ```json { If any error occurs during the provisioning of the snapshot, the `error` propert ## Archive (Patch) A snapshot in the `ready` state can be archived.-An archived snapshot will be assigned an expiration date, based off the retention period established at the time of its creation. +An archived snapshot is assigned an expiration date, based off the retention period established at the time of its creation. After the expiration date passes, the snapshot will be permanently deleted. At any time before the expiration date, the snapshot's items can still be listed. 
Content-Type: application/problem+json; charset="utf-8" ## Recover (Patch) A snapshot in the `archived` state can be recovered.-Once the snapshot is recovered the snapshot's expiration date is removed. +After the snapshot is recovered, the snapshot's expiration date is removed. Recovering a snapshot that is already `ready` doesn't affect the snapshot. Content-Type: application/vnd.microsoft.appconfig.snapshot+json; charset=utf-8 ... ``` -or +Or ```http HTTP/1.1 412 PreconditionFailed Use the optional `$select` query string parameter and provide a comma-separated ```http GET /kv?snapshot={name}&$select=key,value&api-version={api-version} HTTP/1.1 ```+ |
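While a snapshot's status is `provisioning`, a client polls the `Operation-Location` URL returned on creation; a minimal sketch of that poll (the response shape is assumed, not quoted from the article):

```http
GET /operations?snapshot={name}&api-version={api-version} HTTP/1.1
```

Repeat the poll until the reported status indicates the snapshot is `ready`, or until an error is returned.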
azure-arc | Extensions Release | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/extensions-release.md | Title: "Available extensions for Azure Arc-enabled Kubernetes clusters" Previously updated : 08/01/2024 Last updated : 08/08/2024 description: "See which extensions are currently available for Azure Arc-enabled Kubernetes clusters and view release notes." For more information, see [Tutorial: Deploy applications using GitOps with Flux The most recent version of the Flux v2 extension and the two previous versions (N-2) are supported. We generally recommend that you use the most recent version of the extension. > [!IMPORTANT]-> The [Flux v2.3.0 release](https://fluxcd.io/blog/2024/05/flux-v2.3.0/) includes API changes to the HelmRelease and HelmChart APIs, with deprecated fields removed. An upcoming minor version update of Microsoft's Flux extension will include these changes, consistent with the upstream OSS Flux project. +> The [Flux v2.3.0 release](https://fluxcd.io/blog/2024/05/flux-v2.3.0/) includes API changes to the HelmRelease and HelmChart APIs, with deprecated fields removed, and an updated version of the kustomize package. An upcoming minor version update of Microsoft's Flux extension will include these changes, consistent with the upstream OSS Flux project. > > The [HelmRelease](https://fluxcd.io/flux/components/helm/helmreleases/) kind will be promoted from `v2beta1` to `v2` (GA). The `v2` API is backwards compatible with `v2beta1`, with the exception of these deprecated fields, which will be removed: > The most recent version of the Flux v2 extension and the two previous versions ( > > The [HelmChart](https://fluxcd.io/flux/components/source/helmcharts/) kind will be promoted from `v1beta2` to `v1` (GA). The `v1` API is backwards compatible with `v1beta2`, with the exception of the `.spec.valuesFile` field, which will be replaced by `.spec.valuesFiles`. >-> To avoid issues due to breaking changes, we recommend updating your deployments by July 29, 2024, so that they stop using the fields that will be removed and use the replacement fields instead. These new fields are already available in the current version of the APIs. +> Use the new fields which are already available in the current version of the APIs, instead of the fields that will be removed. +> +> The kustomize package will be updated to v5.4.0, which contains the following breaking changes: +> +> - [Kustomization build fails when resources key is missing](https://github.com/kubernetes-sigs/kustomize/issues/5337) +> - [Components are now applied after generators and before transformers](https://github.com/kubernetes-sigs/kustomize/pull/5170) in [v5.1.0](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.1.0) +> - [Null yaml values are replaced by "null"](https://github.com/kubernetes-sigs/kustomize/pull/5519) in [v5.4.0](https://github.com/kubernetes-sigs/kustomize/releases/tag/kustomize%2Fv5.4.0) +> +> To avoid issues due to breaking changes, we recommend updating your manifests as soon as possible to ensure that your Flux configurations remain compliant with this release. + > [!NOTE] > When a new version of the `microsoft.flux` extension is released, it may take several days for the new version to become available in all regions. 
Flux version: [Release v2.3.0](https://github.com/fluxcd/flux2/releases/tag/v2.3 - kustomize-controller: v1.3.0 - helm-controller: v1.0.1 - notification-controller: v1.3.0-- image-automation-controller: v0.32.1-- image-reflector-controller: v0.38.0+- image-automation-controller: v0.38.0 +- image-reflector-controller: v0.32.0 Changes made for this version: |
azure-arc | System Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-arc/kubernetes/system-requirements.md | You must also have a [kubeconfig file](https://kubernetes.io/docs/concepts/confi The cluster must have at least one node with operating system and architecture type `linux/amd64` and/or `linux/arm64`. > [!IMPORTANT]-> Many Arc-enabled Kubernetes features and scenarios are supported on ARM64 nodes, such as [cluster connect](cluster-connect.md) and [viewing Kubernetes resources in the Azure portal](kubernetes-resource-view.md). However, if using Azure CLI to enable these scenarios, [Azure CLI must be installed](/cli/azure/install-azure-cli) and run from an AMD64 machine. -> +> Many Arc-enabled Kubernetes features and scenarios are supported on ARM64 nodes, such as [cluster connect](cluster-connect.md) and [viewing Kubernetes resources in the Azure portal](kubernetes-resource-view.md). However, if using Azure CLI to enable these scenarios, [Azure CLI must be installed](/cli/azure/install-azure-cli) and run from an AMD64 machine. Azure RBAC on Arc-enabled Kubernetes is currently not supported on ARM64 nodes. Please use [Kubernetes RBAC](identity-access-overview.md#kubernetes-rbac-authorization) for ARM64 nodes. +> > Currently, Azure Arc-enabled Kubernetes [cluster extensions](conceptual-extensions.md) aren't supported on ARM64-based clusters, except for [Flux (GitOps)](conceptual-gitops-flux2.md). To [install and use other cluster extensions](extensions.md), the cluster must have at least one node of operating system and architecture type `linux/amd64`.- ## Compute and memory requirements The Arc agents deployed on the cluster require: |
azure-functions | Functions App Settings | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-app-settings.md | When using app settings, you should be aware of the following considerations: + Changes to function app settings require your function app to be restarted. -+ In setting names, double-underscore (`__`) and semicolon (`:`) are considered reserved values. Double-underscores are interpreted as hierarchical delimiters on both Windows and Linux, and colons are interpreted in the same way only on Linux. For example, the setting `AzureFunctionsWebHost__hostid=somehost_123456` would be interpreted as the following JSON object: ++ In setting names, double-underscore (`__`) and colon (`:`) are considered reserved values. Double-underscores are interpreted as hierarchical delimiters on both Windows and Linux, and colons are interpreted in the same way only on Linux. For example, the setting `AzureFunctionsWebHost__hostid=somehost_123456` would be interpreted as the following JSON object: ```json "AzureFunctionsWebHost": { "hostid": "somehost_123456" } ``` |
azure-functions | Functions Twitter Email | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-twitter-email.md | -This tutorial shows you how to create a workflow to analyze Twitter activity. As tweets are evaluated, the workflow sends notifications when positive sentiments are detected. +This tutorial shows you how to create a workflow to analyze X activity. As tweets are evaluated, the workflow sends notifications when positive sentiments are detected. In this tutorial, you learn to: > [!div class="checklist"] > * Create an Azure AI services API Resource. > * Create a function that categorizes tweet sentiment.-> * Create a logic app that connects to Twitter. +> * Create a logic app that connects to X. > * Add sentiment detection to the logic app. > * Connect the logic app to the function. > * Send an email based on the response from the function. ## Prerequisites -* An active [Twitter](https://twitter.com/) account. +* An active [X](https://x.com/) account. * An [Outlook.com](https://outlook.com/) account (for sending notifications). > [!NOTE] With the Text Analytics resource created, you'll copy a few settings and set the > [!NOTE] > To test the function, select **Test/Run** from the top menu. On the _Input_ tab, enter a value of `0.9` in the _Body_ input box, and then select **Run**. Verify that a value of _Positive_ is returned in the _HTTP response content_ box in the _Output_ section. -Next, create a logic app that integrates with Azure Functions, Twitter, and the Azure AI services API. +Next, create a logic app that integrates with Azure Functions, X, and the Azure AI services API. ## Create a logic app Next, create a logic app that integrates with Azure Functions, Twitter, and the You can now use the Logic Apps Designer to add services and triggers to your application. -## Connect to Twitter +## Connect to X -Create a connection to Twitter so your app can poll for new tweets. +Create a connection to X so your app can poll for new tweets. -1. Search for **Twitter** in the top search box. +1. Search for **X** in the top search box. -1. Select the **Twitter** icon. +1. Select the **X** icon. 1. Select the **When a new tweet is posted** trigger. Create a connection to Twitter so your app can poll for new tweets. | Setting | Value | | - | - |- | Connection name | **MyTwitterConnection** | + | Connection name | **MyXConnection** | | Authentication Type | **Use default shared application** | 1. Select **Sign in**. -1. Follow the prompts in the pop-up window to complete signing in to Twitter. +1. Follow the prompts in the pop-up window to complete signing in to X. 1. Next, enter the following values in the _When a new tweet is posted_ box. | Setting | Value | | - | -- |- | Search text | **#my-twitter-tutorial** | - | How often do you want to check for items? | **1** in the textbox, and <br> **Hour** in the dropdown. You may enter different values but be sure to review the current [limitations](/connectors/twitterconnector/#limits) of the Twitter connector. | + | Search text | **#my-x-tutorial** | + | How often do you want to check for items? | **1** in the textbox, and <br> **Hour** in the dropdown. You may enter different values but be sure to review the current [limitations](/connectors/twitterconnector/#limits) of the X connector. | 1. Select the **Save** button on the toolbar to save your progress. The email box should now look like this screenshot. ## Run the workflow -1. 
From your Twitter account, tweet the following text: **I'm enjoying #my-twitter-tutorial**. +1. From your X account, tweet the following text: **I'm enjoying #my-x-tutorial**. 1. Return to the Logic Apps Designer and select the **Run** button. To clean up all the Azure services and accounts created during this tutorial, de 1. Select the **Delete** button. -Optionally, you may want to return to your Twitter account and delete any test tweets from your feed. +Optionally, you may want to return to your X account and delete any test tweets from your feed. ## Next steps |
azure-maps | About Azure Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/about-azure-maps.md | |
azure-maps | Add Bubble Layer Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-bubble-layer-map-ios.md | |
azure-maps | Add Controls Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-controls-map-ios.md | |
azure-maps | Add Heat Map Layer Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-heat-map-layer-ios.md | |
azure-maps | Add Image Layer Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-image-layer-map-ios.md | |
azure-maps | Add Line Layer Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-line-layer-map-ios.md | |
azure-maps | Add Polygon Extrusion Layer Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-extrusion-layer-map-ios.md | |
azure-maps | Add Polygon Layer Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-polygon-layer-map-ios.md | |
azure-maps | Add Symbol Layer Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-symbol-layer-ios.md | |
azure-maps | Add Tile Layer Map Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/add-tile-layer-map-ios.md | |
azure-maps | Android Map Add Line Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-add-line-layer.md | |
azure-maps | Android Map Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-map-events.md | |
azure-maps | Android Sdk Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/android-sdk-migration-guide.md | |
azure-maps | Authentication Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/authentication-best-practices.md | |
azure-maps | Azure Maps Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-authentication.md | |
azure-maps | Azure Maps Event Grid Integration | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-event-grid-integration.md | |
azure-maps | Azure Maps Qps Rate Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/azure-maps-qps-rate-limits.md | Title: Azure Maps QPS rate limits description: Azure Maps limitation on the number of Queries Per Second. Previously updated : 10/15/2021 Last updated : 8/8/2024 -+ # Azure Maps QPS rate limits |
azure-maps | Choose Map Style | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/choose-map-style.md | |
azure-maps | Clustering Point Data Android Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-android-sdk.md | |
azure-maps | Clustering Point Data Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-ios-sdk.md | |
azure-maps | Clustering Point Data Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/clustering-point-data-web-sdk.md | |
azure-maps | Consumption Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/consumption-model.md | |
azure-maps | Create Data Source Android Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-android-sdk.md | |
azure-maps | Create Data Source Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-ios-sdk.md | |
azure-maps | Create Data Source Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/create-data-source-web-sdk.md | |
azure-maps | Creator Facility Ontology | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-facility-ontology.md | |
azure-maps | Creator Geographic Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-geographic-scope.md | |
azure-maps | Creator Indoor Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-indoor-maps.md | |
azure-maps | Creator Long Running Operation V2 | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation-v2.md | |
azure-maps | Creator Long Running Operation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-long-running-operation.md | |
azure-maps | Creator Onboarding Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-onboarding-tool.md | |
azure-maps | Creator Qgis Plugin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/creator-qgis-plugin.md | |
azure-maps | Data Driven Style Expressions Android Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-android-sdk.md | |
azure-maps | Data Driven Style Expressions Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-ios-sdk.md | |
azure-maps | Data Driven Style Expressions Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/data-driven-style-expressions-web-sdk.md | |
azure-maps | Display Feature Information Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-android.md | |
azure-maps | Display Feature Information Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/display-feature-information-ios-sdk.md | |
azure-maps | Drawing Conversion Error Codes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-conversion-error-codes.md | |
azure-maps | Drawing Error Visualizer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-error-visualizer.md | |
azure-maps | Drawing Package Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-package-guide.md | |
azure-maps | Drawing Requirements | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-requirements.md | |
azure-maps | Drawing Tools Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-events.md | The following image shows a screenshot of the complete working sample that demon :::image type="content" source="./media/drawing-tools-events/drawing-tools-events.png" alt-text="Screenshot showing a map displaying data from a vector tile source."::: -<! -<br/> --> [!VIDEO https://codepen.io/azuremaps/embed/dyPMRWo?height=500&theme-id=default&default-tab=js,result&editable=true] ->- ## Examples Let's see some common scenarios that use the drawing tools events. For a complete working sample of how to use the drawing tools to draw polygon ar :::image type="content" source="./media/drawing-tools-events/select-data-in-drawn-polygon-area.png" alt-text="Screenshot showing a map displaying points within polygon areas."::: -<!- -<br/> --> [!VIDEO https://codepen.io/azuremaps/embed/XWJdeja?height=500&theme-id=default&default-tab=result] -->- ### Draw and search in polygon area This code searches for points of interest inside the area of a shape after the user finished drawing the shape. The `drawingcomplete` event is used to trigger the search logic. If the user draws a rectangle or polygon, a search inside geometry is performed. If a circle is drawn, the radius and center position are used to perform a point of interest search. The `drawingmodechanged` event is used to determine when the user switches to the drawing mode, and this event clears the drawing canvas. For a complete working sample of how to use the drawing tools to search for poin :::image type="content" source="./media/drawing-tools-events/draw-and-search-polygon-area.png" alt-text="Screenshot showing a map displaying the Draw and search in polygon area sample."::: -<!- -<br/> --> [!VIDEO https://codepen.io/azuremaps/embed/eYmZGNv?height=500&theme-id=default&default-tab=js,result&editable=true] -->- ### Create a measuring tool The following code shows how the drawing events can be used to create a measuring tool. The `drawingchanging` event is used to monitor the shape as it's being drawn. As the user moves the mouse, the dimensions of the shape are calculated. The `drawingcomplete` event is used to do a final calculation on the shape after it has been drawn. The `drawingmodechanged` event is used to determine when the user is switching into a drawing mode. Also, the `drawingmodechanged` event clears the drawing canvas and clears old measurement information. For a complete working sample of how to use the drawing tools to measure distanc :::image type="content" source="./media/drawing-tools-events/create-a-measuring-tool.png" alt-text="Screenshot showing a map displaying the measuring tool sample."::: -<!- -> [!VIDEO https://codepen.io/azuremaps/embed/RwNaZXe?height=500&theme-id=default&default-tab=js,result&editable=true] -->- ## Next steps Learn how to use other features of the drawing tools module: |
azure-maps | Drawing Tools Interactions Keyboard Shortcuts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/drawing-tools-interactions-keyboard-shortcuts.md | |
azure-maps | Elevation Data Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/elevation-data-services.md | Title: Create elevation data & services using open data titleSuffix: Microsoft Azure Maps description: A guide to help developers build elevation services and tiles using open data on the Microsoft Azure Cloud.-+ Last updated 3/17/2023 -+ # Create elevation data & services |
azure-maps | Extend Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/extend-geojson.md | |
azure-maps | Geocoding Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geocoding-coverage.md | Title: Geocoding coverage in Microsoft Azure Maps Search service description: See which regions Azure Maps Search covers. Geocoding categories include address points, house numbers, street level, city level, and points of interest.--++ Last updated 11/30/2021 -+ # Azure Maps geocoding coverage |
azure-maps | Geofence Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geofence-geojson.md | |
azure-maps | Geographic Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-coverage.md | |
azure-maps | Geographic Scope | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/geographic-scope.md | |
azure-maps | Glossary | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/glossary.md | The following list describes common words used with the Azure Maps services. ## G +<a name="geobias"></a> **Geobias**: A geospatial bias to improve the ranking of results. In some methods, this can be affected by setting the longitude and latitude parameters where available. In other cases, it is purely internal. + <a name="geocode"></a> **Geocode**: An address or location that has been converted into a coordinate that can be used to display that location on a map. <a name="geocoding"></a> **Geocoding**: Or _forward geocoding_, is the process of converting address or location data into coordinates. |
azure-maps | How To Add Shapes To Android Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-shapes-to-android-map.md | |
azure-maps | How To Add Symbol To Android Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-symbol-to-android-map.md | |
azure-maps | How To Add Tile Layer Android Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-add-tile-layer-android-map.md | |
azure-maps | How To Create Custom Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-custom-styles.md | |
azure-maps | How To Create Data Registries | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-data-registries.md | |
azure-maps | How To Create Template | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-create-template.md | |
azure-maps | How To Creator Wayfinding | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wayfinding.md | |
azure-maps | How To Creator Wfs | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-creator-wfs.md | |
azure-maps | How To Dataset Geojson | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dataset-geojson.md | |
azure-maps | How To Dev Guide Csharp Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-csharp-sdk.md | |
azure-maps | How To Dev Guide Java Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-java-sdk.md | |
azure-maps | How To Dev Guide Js Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-js-sdk.md | |
azure-maps | How To Dev Guide Py Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-dev-guide-py-sdk.md | |
azure-maps | How To Manage Account Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-account-keys.md | |
azure-maps | How To Manage Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-authentication.md | -custom.ms: subject-rbac-steps # Manage authentication in Azure Maps |
azure-maps | How To Manage Creator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-creator.md | |
azure-maps | How To Manage Pricing Tier | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-manage-pricing-tier.md | |
azure-maps | How To Render Custom Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-render-custom-data.md | |
azure-maps | How To Request Weather Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-request-weather-data.md | Title: Request real-time and forecasted weather data using Azure Maps Weather services description: Learn how to request real-time (current) and forecasted (minute, hourly, daily) weather data using Microsoft Azure Maps Weather services Previously updated : 10/28/2021 Last updated : 08/08/2024 This video provides examples for making REST calls to Azure Maps Weather services. * An [Azure Maps account] * A [subscription key] - >[!IMPORTANT] - >The [Get Minute Forecast API] requires a Gen1 (S1) or Gen2 pricing tier. +>[!IMPORTANT] +> +> In the URL examples in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. -This tutorial uses the [Postman] application, but you may choose a different API development environment. +This tutorial uses the [bruno] application, but you can choose a different API development environment. ## Request real-time weather data The [Get Current Conditions API] returns detailed weather conditions such as precipitation, temperature, and wind. In this example, you use the [Get Current Conditions API] to retrieve current weather conditions at coordinates located in Seattle, WA. -1. Open the Postman app. Select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. Open the bruno app, then select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http https://atlas.microsoft.com/weather/currentConditions/json?api-version=1.0&query=47.60357,-122.32945&subscription-key={Your-Azure-Maps-Subscription-key} ``` -3. Select the blue **Send** button. The response body contains current weather information. +1. Select the blue **Create** button. ++1. Select the run button. ++ :::image type="content" source="./media/weather-service/bruno-run.png" alt-text="A screenshot showing the Request real-time weather data URL with the run button highlighted in the bruno app."::: ++ The response body contains current weather information. 
```json {- "results": [ + "results": [ {- "dateTime": "2020-10-19T20:39:00+00:00", - "phrase": "Cloudy", - "iconCode": 7, - "hasPrecipitation": false, - "isDayTime": true, - "temperature": { - "value": 12.4, + "dateTime": "2024-08-08T09:22:00-07:00", + "phrase": "Sunny", + "iconCode": 1, + "hasPrecipitation": false, + "isDayTime": true, + "temperature": { + "value": 19.5, + "unit": "C", + "unitType": 17 + }, + "realFeelTemperature": { + "value": 23.7, + "unit": "C", + "unitType": 17 + }, + "realFeelTemperatureShade": { + "value": 19.4, + "unit": "C", + "unitType": 17 + }, + "relativeHumidity": 81, + "dewPoint": { + "value": 16.2, + "unit": "C", + "unitType": 17 + }, + "wind": { + "direction": { + "degrees": 0, + "localizedDescription": "N" + }, + "speed": { + "value": 2, + "unit": "km/h", + "unitType": 7 + } + }, + "windGust": { + "speed": { + "value": 3.8, + "unit": "km/h", + "unitType": 7 + } + }, + "uvIndex": 4, + "uvIndexPhrase": "Moderate", + "visibility": { + "value": 16.1, + "unit": "km", + "unitType": 6 + }, + "obstructionsToVisibility": "", + "cloudCover": 5, + "ceiling": { + "value": 12192, + "unit": "m", + "unitType": 5 + }, + "pressure": { + "value": 1015.9, + "unit": "mb", + "unitType": 14 + }, + "pressureTendency": { + "localizedDescription": "Steady", + "code": "S" + }, + "past24HourTemperatureDeparture": { + "value": 3, + "unit": "C", + "unitType": 17 + }, + "apparentTemperature": { + "value": 20, + "unit": "C", + "unitType": 17 + }, + "windChillTemperature": { + "value": 19.4, + "unit": "C", + "unitType": 17 + }, + "wetBulbTemperature": { + "value": 17.5, + "unit": "C", + "unitType": 17 + }, + "precipitationSummary": { + "pastHour": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "past3Hours": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "past6Hours": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "past9Hours": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "past12Hours": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "past18Hours": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "past24Hours": { + "value": 0, + "unit": "mm", + "unitType": 3 + } + }, + "temperatureSummary": { + "past6Hours": { + "minimum": { + "value": 16, "unit": "C", "unitType": 17- }, - "realFeelTemperature": { - "value": 13.7, + }, + "maximum": { + "value": 19.5, "unit": "C", "unitType": 17+ } },- "realFeelTemperatureShade": { - "value": 13.7, + "past12Hours": { + "minimum": { + "value": 16, "unit": "C", "unitType": 17- }, - "relativeHumidity": 87, - "dewPoint": { - "value": 10.3, + }, + "maximum": { + "value": 20.4, "unit": "C", "unitType": 17+ } },- "wind": { - "direction": { - "degrees": 23.0, - "localizedDescription": "NNE" - }, - "speed": { - "value": 4.5, - "unit": "km/h", - "unitType": 7 - } - }, - "windGust": { - "speed": { - "value": 9.0, - "unit": "km/h", - "unitType": 7 - } - }, - "uvIndex": 1, - "uvIndexPhrase": "Low", - "visibility": { - "value": 9.7, - "unit": "km", - "unitType": 6 - }, - "obstructionsToVisibility": "", - "cloudCover": 100, - "ceiling": { - "value": 1494.0, - "unit": "m", - "unitType": 5 - }, - "pressure": { - "value": 1021.2, - "unit": "mb", - "unitType": 14 - }, - "pressureTendency": { - "localizedDescription": "Steady", - "code": "S" - }, - "past24HourTemperatureDeparture": { - "value": -2.1, - "unit": "C", - "unitType": 17 - }, - "apparentTemperature": { - "value": 15.0, + "past24Hours": { + "minimum": { + "value": 16, "unit": "C", "unitType": 17- }, - "windChillTemperature": { - "value": 12.2, + }, + "maximum": { + 
"value": 26.4, "unit": "C", "unitType": 17- }, - "wetBulbTemperature": { - "value": 11.3, - "unit": "C", - "unitType": 17 - }, - "precipitationSummary": { - "pastHour": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "past3Hours": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "past6Hours": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "past9Hours": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "past12Hours": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "past18Hours": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "past24Hours": { - "value": 0.4, - "unit": "mm", - "unitType": 3 - } - }, - "temperatureSummary": { - "past6Hours": { - "minimum": { - "value": 12.2, - "unit": "C", - "unitType": 17 - }, - "maximum": { - "value": 14.0, - "unit": "C", - "unitType": 17 - } - }, - "past12Hours": { - "minimum": { - "value": 12.2, - "unit": "C", - "unitType": 17 - }, - "maximum": { - "value": 14.0, - "unit": "C", - "unitType": 17 - } - }, - "past24Hours": { - "minimum": { - "value": 12.2, - "unit": "C", - "unitType": 17 - }, - "maximum": { - "value": 15.6, - "unit": "C", - "unitType": 17 - } - } + } }+ } }- ] + ] } ``` ## Request severe weather alerts -Azure Maps [Get Severe Weather Alerts API] returns the severe weather alerts that are available worldwide from both official Government Meteorological Agencies and leading global to regional weather alert providers. The service returns details like alert type, category, level. The service also returns detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers. +Azure Maps [Get Severe Weather Alerts API] returns the severe weather alerts that are available worldwide from both official Government Meteorological Agencies and leading global to regional weather alert providers. The service returns details like alert type, category, level. The service also returns detailed descriptions about the active severe alerts for the requested location, such as hurricanes, thunderstorms, lightning, heat waves, or forest fires. As an example, logistics managers can visualize severe weather conditions on a map, along with business locations and planned routes, and coordinate further with drivers and local workers. In this example, you use the [Get Severe Weather Alerts API] to retrieve current weather conditions at coordinates located in Cheyenne, WY. ->[!NOTE] ->This example retrieves severe weather alerts at the time of this writing. It is likely that there are no longer any severe weather alerts at the requested location. To retrieve actual severe alert data when running this example, you'll need to retrieve data at a different coordinate location. +> [!NOTE] +> This example retrieves severe weather alerts at the time of this writing. It is likely that there are no longer any severe weather alerts at the requested location. To retrieve actual severe alert data when running this example, you'll need to retrieve data at a different coordinate location. -1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. In the bruno app, select **NEW REQUEST** to create the request. 
In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http https://atlas.microsoft.com/weather/severe/alerts/json?api-version=1.0&query=41.161079,-104.805450&subscription-key={Your-Azure-Maps-Subscription-key} ``` -3. Select the blue **Send** button. If there are no severe weather alerts, the response body contains an empty `results[]` array. If there are severe weather alerts, the response body contains something like the following JSON response: +1. Select the blue **Create** button. ++1. Select the run button. ++ :::image type="content" source="./media/weather-service/bruno-run-request-severe-weather-alerts.png" alt-text="A screenshot showing the Request severe weather alerts URL with the run button highlighted in the bruno app."::: ++ If there are no severe weather alerts, the response body contains an empty `results[]` array. If there are severe weather alerts, the response body contains something like the following JSON response: ```json { … "alertAreas": [ { "name": "Platte/Goshen/Central and Eastern Laramie",- "summary": "Red Flag Warning in effect until 7:00 PM MDT. Source: U.S. National Weather Service", + "summary": "Red Flag Warning in effect until 7:00 PM MDT. Source: U.S. National Weather Service", "startTime": "2020-10-05T15:00:00+00:00", "endTime": "2020-10-06T01:00:00+00:00", "latestStatus": { … ## Request daily weather forecast data -The [Get Daily Forecast API] returns detailed daily weather forecast such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request for five days by setting `duration=5`. +The [Get Daily Forecast API] returns detailed daily weather forecast such as temperature and wind. The request can specify how many days to return: 1, 5, 10, 15, 25, or 45 days for a given coordinate location. The response includes details such as temperature, wind, precipitation, air quality, and UV index. In this example, we request five days by setting `duration=5`. ->[!IMPORTANT] ->In the S0 pricing tier, you can request daily forecast for the next 1, 5, 10, and 15 days. In either Gen1 (S1) or Gen2 pricing tier, you can request daily forecast for the next 25 days, and 45 days. +> [!IMPORTANT] +> In the S0 pricing tier, you can request daily forecast for the next 1, 5, 10, and 15 days. In either Gen1 (S1) or Gen2 pricing tier, you can request daily forecast for the next 25 or 45 days. > > **Azure Maps Gen1 pricing tier retirement** > … In this example, you use the [Get Daily Forecast API] to retrieve the five-day weather forecast for coordinates located in Seattle, WA. -1. 
In the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http https://atlas.microsoft.com/weather/forecast/daily/json?api-version=1.0&query=47.60357,-122.32945&duration=5&subscription-key={Your-Azure-Maps-Subscription-key} ``` -3. Select the blue **Send** button. The response body contains the five-day weather forecast data. For the sake of brevity, the following JSON response shows the forecast for the first day. +1. Select the blue **Create** button. ++1. Select the run button. ++ :::image type="content" source="./media/weather-service/bruno-run-request-daily-weather-forecast-data.png" alt-text="A screenshot showing the Request daily weather forecast data URL with the run button highlighted in the bruno app."::: ++ The response body contains the five-day weather forecast data. For the sake of brevity, the following JSON response shows the forecast for the first day. ```json {- "summary": { - "startDate": "2020-10-18T17:00:00+00:00", - "endDate": "2020-10-19T23:00:00+00:00", - "severity": 2, - "phrase": "Snow, mixed with rain at times continuing through Monday evening and a storm total of 3-6 cm", - "category": "snow/rain" - }, - "forecasts": [ + "summary": { + "startDate": "2024-08-09T08:00:00-07:00", + "endDate": "2024-08-09T20:00:00-07:00", + "severity": 7, + "phrase": "Very warm tomorrow", + "category": "heat" + }, + "forecasts": [ {- "date": "2020-10-19T04:00:00+00:00", - "temperature": { - "minimum": { - "value": -1.1, - "unit": "C", - "unitType": 17 - }, - "maximum": { - "value": 1.3, - "unit": "C", - "unitType": 17 - } + "date": "2024-08-08T07:00:00-07:00", + "temperature": { + "minimum": { + "value": 16.2, + "unit": "C", + "unitType": 17 + }, + "maximum": { + "value": 28.9, + "unit": "C", + "unitType": 17 + } + }, + "realFeelTemperature": { + "minimum": { + "value": 16.3, + "unit": "C", + "unitType": 17 + }, + "maximum": { + "value": 29.8, + "unit": "C", + "unitType": 17 + } + }, + "realFeelTemperatureShade": { + "minimum": { + "value": 16.3, + "unit": "C", + "unitType": 17 + }, + "maximum": { + "value": 27.3, + "unit": "C", + "unitType": 17 + } + }, + "hoursOfSun": 12.9, + "degreeDaySummary": { + "heating": { + "value": 0, + "unit": "C", + "unitType": 17 + }, + "cooling": { + "value": 5, + "unit": "C", + "unitType": 17 + } + }, + "airAndPollen": [ + { + "name": "AirQuality", + "value": 56, + "category": "Moderate", + "categoryValue": 2, + "type": "Nitrogen Dioxide" + }, + { + "name": "Grass", + "value": 2, + "category": "Low", + "categoryValue": 1 + }, + { + "name": "Mold", + "value": 0, + "category": "Low", + "categoryValue": 1 + }, + { + "name": "Ragweed", + "value": 5, + "category": "Low", + "categoryValue": 1 + }, + { + "name": "Tree", + "value": 0, + "category": "Low", + "categoryValue": 1 + }, + { + "name": "UVIndex", + "value": 7, + "category": "High", + "categoryValue": 3 + } + ], + "day": { + "iconCode": 2, + "iconPhrase": "Mostly sunny", + "hasPrecipitation": false, + "shortPhrase": "Mostly sunny", + "longPhrase": "Mostly sunny; wildfire smoke will cause the sky to be hazy", + "precipitationProbability": 0, + "thunderstormProbability": 0, + 
"rainProbability": 0, + "snowProbability": 0, + "iceProbability": 0, + "wind": { + "direction": { + "degrees": 357, + "localizedDescription": "N" + }, + "speed": { + "value": 11.1, + "unit": "km/h", + "unitType": 7 + } },- "realFeelTemperature": { - "minimum": { - "value": -6.0, - "unit": "C", - "unitType": 17 - }, - "maximum": { - "value": 0.5, - "unit": "C", - "unitType": 17 - } + "windGust": { + "direction": { + "degrees": 354, + "localizedDescription": "N" + }, + "speed": { + "value": 29.6, + "unit": "km/h", + "unitType": 7 + } },- "realFeelTemperatureShade": { - "minimum": { - "value": -6.0, - "unit": "C", - "unitType": 17 - }, - "maximum": { - "value": 0.7, - "unit": "C", - "unitType": 17 - } + "totalLiquid": { + "value": 0, + "unit": "mm", + "unitType": 3 },- "hoursOfSun": 1.8, - "degreeDaySummary": { - "heating": { - "value": 18.0, - "unit": "C", - "unitType": 17 - }, - "cooling": { - "value": 0.0, - "unit": "C", - "unitType": 17 - } + "rain": { + "value": 0, + "unit": "mm", + "unitType": 3 },- "airAndPollen": [ - { - "name": "AirQuality", - "value": 23, - "category": "Good", - "categoryValue": 1, - "type": "Ozone" - }, - { - "name": "Grass", - "value": 0, - "category": "Low", - "categoryValue": 1 - }, - { - "name": "Mold", - "value": 0, - "category": "Low", - "categoryValue": 1 - }, - { - "name": "Ragweed", - "value": 0, - "category": "Low", - "categoryValue": 1 - }, - { - "name": "Tree", - "value": 0, - "category": "Low", - "categoryValue": 1 - }, - { - "name": "UVIndex", - "value": 0, - "category": "Low", - "categoryValue": 1 - } - ], - "day": { - "iconCode": 22, - "iconPhrase": "Snow", - "hasPrecipitation": true, - "precipitationType": "Mixed", - "precipitationIntensity": "Light", - "shortPhrase": "Chilly with snow, 2-4 cm", - "longPhrase": "Chilly with snow, accumulating an additional 2-4 cm", - "precipitationProbability": 90, - "thunderstormProbability": 0, - "rainProbability": 54, - "snowProbability": 85, - "iceProbability": 8, - "wind": { - "direction": { - "degrees": 36.0, - "localizedDescription": "NE" - }, - "speed": { - "value": 9.3, - "unit": "km/h", - "unitType": 7 - } - }, - "windGust": { - "direction": { - "degrees": 70.0, - "localizedDescription": "ENE" - }, - "speed": { - "value": 25.9, - "unit": "km/h", - "unitType": 7 - } - }, - "totalLiquid": { - "value": 4.3, - "unit": "mm", - "unitType": 3 - }, - "rain": { - "value": 0.5, - "unit": "mm", - "unitType": 3 - }, - "snow": { - "value": 2.72, - "unit": "cm", - "unitType": 4 - }, - "ice": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "hoursOfPrecipitation": 9.0, - "hoursOfRain": 1.0, - "hoursOfSnow": 9.0, - "hoursOfIce": 0.0, - "cloudCover": 96 - }, - "night": { - "iconCode": 29, - "iconPhrase": "Rain and snow", - "hasPrecipitation": true, - "precipitationType": "Mixed", - "precipitationIntensity": "Light", - "shortPhrase": "Showers of rain and snow", - "longPhrase": "A couple of showers of rain or snow this evening; otherwise, cloudy; storm total snowfall 1-3 cm", - "precipitationProbability": 65, - "thunderstormProbability": 0, - "rainProbability": 60, - "snowProbability": 54, - "iceProbability": 4, - "wind": { - "direction": { - "degrees": 16.0, - "localizedDescription": "NNE" - }, - "speed": { - "value": 16.7, - "unit": "km/h", - "unitType": 7 - } - }, - "windGust": { - "direction": { - "degrees": 1.0, - "localizedDescription": "N" - }, - "speed": { - "value": 35.2, - "unit": "km/h", - "unitType": 7 - } - }, - "totalLiquid": { - "value": 4.3, - "unit": "mm", - "unitType": 3 - }, - "rain": { - "value": 
3.0, - "unit": "mm", - "unitType": 3 - }, - "snow": { - "value": 0.79, - "unit": "cm", - "unitType": 4 - }, - "ice": { - "value": 0.0, - "unit": "mm", - "unitType": 3 - }, - "hoursOfPrecipitation": 4.0, - "hoursOfRain": 1.0, - "hoursOfSnow": 3.0, - "hoursOfIce": 0.0, - "cloudCover": 94 - }, - "sources": [ - "AccuWeather" - ] - },... - ] + "snow": { + "value": 0, + "unit": "cm", + "unitType": 4 + }, + "ice": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "hoursOfPrecipitation": 0, + "hoursOfRain": 0, + "hoursOfSnow": 0, + "hoursOfIce": 0, + "cloudCover": 10 + }, + "night": { + "iconCode": 35, + "iconPhrase": "Partly cloudy", + "hasPrecipitation": false, + "shortPhrase": "Partly cloudy", + "longPhrase": "Partly cloudy; wildfire smoke will cause the sky to be hazy", + "precipitationProbability": 1, + "thunderstormProbability": 0, + "rainProbability": 1, + "snowProbability": 0, + "iceProbability": 0, + "wind": { + "direction": { + "degrees": 7, + "localizedDescription": "N" + }, + "speed": { + "value": 9.3, + "unit": "km/h", + "unitType": 7 + } + }, + "windGust": { + "direction": { + "degrees": 3, + "localizedDescription": "N" + }, + "speed": { + "value": 20.4, + "unit": "km/h", + "unitType": 7 + } + }, + "totalLiquid": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "rain": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "snow": { + "value": 0, + "unit": "cm", + "unitType": 4 + }, + "ice": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "hoursOfPrecipitation": 0, + "hoursOfRain": 0, + "hoursOfSnow": 0, + "hoursOfIce": 0, + "cloudCover": 26 + }, + "sources": [ + "AccuWeather" + ] + } + ] } ``` The [Get Hourly Forecast API] returns detailed weather forecast by the hour for In this example, you use the [Get Hourly Forecast API] to retrieve the hourly weather forecast for the next 12 hours at coordinates located in Seattle, WA. -1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. In the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http https://atlas.microsoft.com/weather/forecast/hourly/json?api-version=1.0&query=47.60357,-122.32945&duration=12&subscription-key={Your-Azure-Maps-Subscription-key} ``` -3. Select the blue **Send** button. The response body contains weather forecast data for the next 12 hours. For the sake of brevity, the following JSON response shows the forecast for the first hour. +1. Select the blue **Create** button. ++1. Select the run button. ++ :::image type="content" source="./media/weather-service/bruno-run-request-hourly-weather-forecast-data.png" alt-text="A screenshot showing the Request hourly weather forecast data URL with the run button highlighted in the bruno app."::: ++ The response body contains weather forecast data for the next 12 hours. 
The following example JSON response only shows the first hour: ```json {- "forecasts": [ + "forecasts": [ {- "date": "2020-10-19T21:00:00+00:00", - "iconCode": 12, - "iconPhrase": "Showers", - "hasPrecipitation": true, - "precipitationType": "Rain", - "precipitationIntensity": "Light", - "isDaylight": true, - "temperature": { - "value": 14.7, - "unit": "C", - "unitType": 17 - }, - "realFeelTemperature": { - "value": 13.3, - "unit": "C", - "unitType": 17 - }, - "wetBulbTemperature": { - "value": 12.0, - "unit": "C", - "unitType": 17 - }, - "dewPoint": { - "value": 9.5, - "unit": "C", - "unitType": 17 - }, - "wind": { - "direction": { - "degrees": 242.0, - "localizedDescription": "WSW" - }, - "speed": { - "value": 9.3, - "unit": "km/h", - "unitType": 7 - } - }, - "windGust": { - "speed": { - "value": 14.8, - "unit": "km/h", - "unitType": 7 - } - }, - "relativeHumidity": 71, - "visibility": { - "value": 9.7, - "unit": "km", - "unitType": 6 - }, - "cloudCover": 100, - "ceiling": { - "value": 1128.0, - "unit": "m", - "unitType": 5 - }, - "uvIndex": 1, - "uvIndexPhrase": "Low", - "precipitationProbability": 51, - "rainProbability": 51, - "snowProbability": 0, - "iceProbability": 0, - "totalLiquid": { - "value": 0.3, - "unit": "mm", - "unitType": 3 - }, - "rain": { - "value": 0.3, - "unit": "mm", - "unitType": 3 - }, - "snow": { - "value": 0.0, - "unit": "cm", - "unitType": 4 - }, - "ice": { - "value": 0.0, - "unit": "mm", - "unitType": 3 + "date": "2024-08-07T15:00:00-07:00", + "iconCode": 2, + "iconPhrase": "Mostly sunny", + "hasPrecipitation": false, + "isDaylight": true, + "temperature": { + "value": 24.6, + "unit": "C", + "unitType": 17 + }, + "realFeelTemperature": { + "value": 26.4, + "unit": "C", + "unitType": 17 + }, + "wetBulbTemperature": { + "value": 18.1, + "unit": "C", + "unitType": 17 + }, + "dewPoint": { + "value": 14.5, + "unit": "C", + "unitType": 17 + }, + "wind": { + "direction": { + "degrees": 340, + "localizedDescription": "NNW" + }, + "speed": { + "value": 14.8, + "unit": "km/h", + "unitType": 7 + } + }, + "windGust": { + "speed": { + "value": 24.1, + "unit": "km/h", + "unitType": 7 }- }... - ] + }, + "relativeHumidity": 53, + "visibility": { + "value": 16.1, + "unit": "km", + "unitType": 6 + }, + "cloudCover": 11, + "ceiling": { + "value": 10211, + "unit": "m", + "unitType": 5 + }, + "uvIndex": 5, + "uvIndexPhrase": "Moderate", + "precipitationProbability": 0, + "rainProbability": 0, + "snowProbability": 0, + "iceProbability": 0, + "totalLiquid": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "rain": { + "value": 0, + "unit": "mm", + "unitType": 3 + }, + "snow": { + "value": 0, + "unit": "cm", + "unitType": 4 + }, + "ice": { + "value": 0, + "unit": "mm", + "unitType": 3 + } + } + ] } ``` In this example, you use the [Get Hourly Forecast API] to retrieve the hourly we In this example, you use the [Get Minute Forecast API] to retrieve the minute-by-minute weather forecast at coordinates located in Seattle, WA. The weather forecast is given for the next 120 minutes. Our query requests that the forecast is given at 15-minute intervals, but you can adjust the parameter to be either 1 or 5 minutes. -1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. In the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. 
Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http https://atlas.microsoft.com/weather/forecast/minute/json?api-version=1.0&query=47.60357,-122.32945&interval=15&subscription-key={Your-Azure-Maps-Subscription-key} ``` -3. Select the blue **Send** button. The response body contains weather forecast data for the next 120 minutes, in 15-minute intervals. +1. Select the blue **Create** button. ++1. Select the run button. ++ :::image type="content" source="./media/weather-service/bruno-run-request-minute-by-minute-weather-forecast-data.png" alt-text="A screenshot showing the Request minute-by-minute weather forecast data URL with the run button highlighted in the bruno app."::: ++ The response body contains weather forecast data for the next 120 minutes, in 15-minute intervals. ```json {- "summary": { + "summary": { "briefPhrase60": "No precipitation for at least 60 min", "shortPhrase": "No precip for 120 min", "briefPhrase": "No precipitation for at least 120 min", "longPhrase": "No precipitation for at least 120 min",- "iconCode": 7 - }, - "intervalSummaries": [ + "iconCode": 1 + }, + "intervalSummaries": [ {- "startMinute": 0, - "endMinute": 119, - "totalMinutes": 120, - "shortPhrase": "No precip for %MINUTE_VALUE min", - "briefPhrase": "No precipitation for at least %MINUTE_VALUE min", - "longPhrase": "No precipitation for at least %MINUTE_VALUE min", - "iconCode": 7 + "startMinute": 0, + "endMinute": 119, + "totalMinutes": 120, + "shortPhrase": "No precip for %MINUTE_VALUE min", + "briefPhrase": "No precipitation for at least %MINUTE_VALUE min", + "longPhrase": "No precipitation for at least %MINUTE_VALUE min", + "iconCode": 1 }- ], - "intervals": [ + ], + "intervals": [ {- "startTime": "2020-10-19T20:51:00+00:00", - "minute": 0, - "dbz": 0.0, - "shortPhrase": "No Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T05:58:00-07:00", + "minute": 0, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 7 }, {- "startTime": "2020-10-19T21:06:00+00:00", - "minute": 15, - "dbz": 0.0, - "shortPhrase": "No Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T06:13:00-07:00", + "minute": 15, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 3 }, {- "startTime": "2020-10-19T21:21:00+00:00", - "minute": 30, - "dbz": 0.0, - "shortPhrase": "No Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T06:28:00-07:00", + "minute": 30, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 2 }, {- "startTime": "2020-10-19T21:36:00+00:00", - "minute": 45, - "dbz": 0.0, - "shortPhrase": "No Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T06:43:00-07:00", + "minute": 45, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 2 }, {- "startTime": "2020-10-19T21:51:00+00:00", - "minute": 60, - "dbz": 0.0, - "shortPhrase": "No Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T06:58:00-07:00", + "minute": 60, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 1 }, {- "startTime": "2020-10-19T22:06:00+00:00", - "minute": 75, - "dbz": 0.0, - "shortPhrase": "No 
Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T07:13:00-07:00", + "minute": 75, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 1 }, {- "startTime": "2020-10-19T22:21:00+00:00", - "minute": 90, - "dbz": 0.0, - "shortPhrase": "No Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T07:28:00-07:00", + "minute": 90, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 0 }, {- "startTime": "2020-10-19T22:36:00+00:00", - "minute": 105, - "dbz": 0.0, - "shortPhrase": "No Precipitation", - "iconCode": 7, - "cloudCover": 100 + "startTime": "2024-08-08T07:43:00-07:00", + "minute": 105, + "dbz": 0, + "shortPhrase": "No Precipitation", + "iconCode": 1, + "cloudCover": 0 }- ] + ] } ``` In this example, you use the [Get Minute Forecast API] to retrieve the minute-by [Get Minute Forecast API]: /rest/api/maps/weather/getminuteforecast [Get Severe Weather Alerts API]: /rest/api/maps/weather/getsevereweatheralerts [Manage the pricing tier of your Azure Maps account]: how-to-manage-pricing-tier.md-[Postman]: https://www.postman.com/ +[bruno]: https://www.usebruno.com/ [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account [Weather service concepts]: weather-services-concepts.md [Weather services]: /rest/api/maps/weather |
azure-maps | How To Search For Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-search-for-address.md | Title: Search for a location using Azure Maps Search services description: Learn about the Azure Maps Search service. See how to use this set of APIs for geocoding, reverse geocoding, fuzzy searches, and reverse cross street searches. Previously updated : 10/28/2021 Last updated : 8/9/2024 # Search for a location using Azure Maps Search services This article demonstrates how to: * An [Azure Maps account] * A [subscription key] -This tutorial uses the [Postman] application, but you may choose a different API development environment. +>[!IMPORTANT] +> +> In the URL examples in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. ++This article uses the [bruno] application, but you can choose a different API development environment. ## Request latitude and longitude for an address (geocoding) The example in this section uses [Get Search Address] to convert an address into latitude and longitude coordinates. This process is also called *geocoding*. In addition to returning the coordinates, the response also returns detailed address properties such as street, postal code, municipality, and country/region information. ->[!TIP] ->If you have a set of addresses to geocode, you can use [Post Search Address Batch] to send a batch of queries in a single request. > [!TIP] > If you have a set of addresses to geocode, you can use [Post Search Address Batch] to send a batch of queries in a single request. -1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. Open the bruno app, then select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. In this request, we're searching for a specific address: `400 Broad St, Seattle, WA 98109`. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http https://atlas.microsoft.com/search/address/json?&subscription-key={Your-Azure-Maps-Subscription-key}&api-version=1.0&language=en-US&query=400 Broad St, Seattle, WA 98109 ``` -3. Select the blue **Send** button. The response body contains data for a single location. +1. Select the **Create** button. ++1. Select the run button. -4. Next, search an address that has more than one possible locations. In the **Params** section, change the `query` key to `400 Broad, Seattle`. Select the blue **Send** button. + This request searches for a specific address: `400 Broad St, Seattle, WA 98109`. Next, search an address that has more than one possible location. ++1. In the **Params** section, change the `query` key to `400 Broad, Seattle`, then select the run button. :::image type="content" source="./media/how-to-search-for-address/search-address.png" alt-text="Search for address"::: -5. Next, try setting the `query` key to `400 Broa`. +1. Next, try setting the `query` key to `400 Broa`, then select the run button. -6. Select the **Send** button. The response includes results from multiple countries/regions. 
To geobias results to the relevant area for your users, always add as many location details as possible to the request. + The response includes results from multiple countries/regions. To [geobias] results to the relevant area for your users, always add as many location details as possible to the request. ## Fuzzy Search -[Fuzzy Search] supports standard single line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a name of POI, POI category or name of brand. Furthermore, to improve the relevance of your search results, constrain the query results using a coordinate location and radius, or by defining a bounding box. +[Fuzzy Search] supports standard single-line and free-form searches. We recommend that you use the Azure Maps Search Fuzzy API when you don't know your user input type for a search request. The query input can be a full or partial address. It can also be a Point of Interest (POI) token, like a POI name, POI category, or brand name. Furthermore, to improve the relevance of your search results, constrain the query results using a coordinate location and radius, or by defining a bounding box. > [!TIP] > Most Search queries default to `maxFuzzyLevel=1` to improve performance and reduce unusual results. Adjust fuzziness levels by using the `maxFuzzyLevel` or `minFuzzyLevel` parameters. For more information on `maxFuzzyLevel` and a complete list of all optional parameters, see [Fuzzy Search URI Parameters]. The example in this section uses `Fuzzy Search` to search the entire world for *pizza*. > [!IMPORTANT] > To geobias results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search]. -1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. Open the bruno app, then select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http- https://atlas.microsoft.com/search/fuzzy/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza + https://atlas.microsoft.com/search/fuzzy/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=pizza ``` > [!NOTE] > The _json_ attribute in the URL path determines the response format. This article uses json for ease of use and readability. To find other supported response formats, see the `format` parameter definition in the [URI Parameter reference] documentation. -3. Select **Send** and review the response body. +1. Select the run button, then review the response body. - The ambiguous query string for "pizza" returned 10 [point of interest result] (POI) in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. 
The results are now varied for this query, and aren't tied to any reference location. + The ambiguous query string for "pizza" returned 10 [point of interest] (POI) results in both the "pizza" and "restaurant" categories. Each result includes details such as street address, latitude and longitude values, view port, and entry points for the location. The results are now varied for this query, and aren't tied to any reference location. - In the next step, you'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Search Coverage]. + In the next step, you'll use the `countrySet` parameter to specify only the countries/regions for which your application needs coverage. For a complete list of supported countries/regions, see [Azure Maps geocoding coverage]. -4. The default behavior is to search the entire world, potentially returning unnecessary results. Next, search for pizza only in the United States. Add the `countrySet` key to the **Params** section, and set its value to `US`. Setting the `countrySet` key to `US` bounds the results to the United States. +1. The default behavior is to search the entire world, potentially returning unnecessary results. Next, search for pizza only in the United States. Add the `countrySet` key to the **Params** section, and set its value to `US`. Setting the `countrySet` key to `US` bounds the results to the United States. :::image type="content" source="./media/how-to-search-for-address/search-fuzzy-country.png" alt-text="Search for pizza in the United States"::: The results are now bounded by the country code and the query returns pizza restaurants in the United States. -5. To get an even more targeted search, you can search over the scope of a lat/lon coordinate pair. The following example uses the lat/lon coordinates of the Seattle Space Needle. Since we only want to return results within a 400-meters radius, we add the `radius` parameter. Also, we add the `limit` parameter to limit the results to the five closest pizza places. +1. To get an even more targeted search, you can search over the scope of a lat/lon coordinate pair. The following example uses the lat/lon coordinates of the Seattle Space Needle. Since we only want to return results within a 400-meter radius, we add the `radius` parameter. Also, we add the `limit` parameter to limit the results to the five closest pizza places. In the **Params** section, add the following key/value pairs: | Key | Value | |--|--| | radius | 400 | | limit | 5 | -6. Select **Send**. The response includes results for pizza restaurants near the Seattle Space Needle. +1. Select the run button. The response includes results for pizza restaurants near the Seattle Space Needle. ## Search for a street address using Reverse Address Search [Get Search Address Reverse] translates coordinates into human-readable street addresses. This API is often used for applications that consume GPS feeds and want to discover addresses at specific coordinate points. > [!IMPORTANT]- > To geobias results to the relevant area for your users, always add as many location details as possible. For more information, see [Best Practices for Search]. 
> [!TIP] > If you have a set of coordinate locations to reverse geocode, you can use [Post Search Address Reverse Batch] to send a batch of queries in a single request. This example demonstrates making reverse searches using a few of the optional parameters that are available. For the full list of optional parameters, see [Reverse Search Parameters]. -1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. Open the bruno app, then select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL: +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http- https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700&number=1 + https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700 ``` -3. Select **Send**, and review the response body. You should see one query result. The response includes key address information about Safeco Field. +1. Select the run button, and review the response body. You should see one query result. The response includes key address information about Safeco Field. -4. Next, add the following key/value pairs to the **Params** section: +1. Next, add the following key/value pairs to the **Params** section: | Key | Value | Returns | |--|--|--| | number | 1 | The response can include the side of the street (Left/Right) and also an offset position for the number. | | returnSpeedLimit | true | Returns the speed limit at the address. | | returnRoadUse | true | Returns road use types at the address. For all possible road use types, see [Road Use Types]. | | returnMatchType | true | Returns the type of match. For all possible values, see [Reverse Address Search Results]. | :::image type="content" source="./media/how-to-search-for-address/search-reverse.png" alt-text="Search reverse."::: -5. Select **Send**, and review the response body. +1. Select the run button, and review the response body. -6. Next, we add the `entityType` key, and set its value to `Municipality`. The `entityType` key overrides the `returnMatchType` key in the previous step. `returnSpeedLimit` and `returnRoadUse` also need removed since you're requesting information about the municipality. For all possible entity types, see [Entity Types]. +1. Next, add the `entityType` key, and set its value to `Municipality`. The `entityType` key overrides the `returnMatchType` key in the previous step. `returnSpeedLimit` and `returnRoadUse` also need to be removed since you're requesting information about the municipality. For all possible entity types, see [Entity Types]. 
:::image type="content" source="./media/how-to-search-for-address/search-reverse-entity-type.png" alt-text="Search reverse entityType."::: -7. Select **Send**. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response doesn't include street address information. Also, the returned `geometryId` can be used to request boundary polygon through Azure Maps Get [Search Polygon API]. +1. Select the run button. Compare the results to the results returned in step 5. Because the requested entity type is now `municipality`, the response doesn't include street address information. Also, the returned `geometryId` can be used to request boundary polygon through Azure Maps Get [Search Polygon API]. > [!TIP] > For more information on these as well as other parameters, see [Reverse Search Parameters]. This example demonstrates making reverse searches using a few of the optional pa This example demonstrates how to search for a cross street based on the coordinates of an address. -1. In the Postman app, select **New** to create the request. In the **Create New** window, select **HTTP Request**. Enter a **Request name** for the request. +1. Open the bruno app, select **NEW REQUEST** to create the request. In the **NEW REQUEST** window, set **Type** to **HTTP**. Enter a **Name** for the request. -2. Select the **GET** HTTP method in the builder tab and enter the following URL. For this request, and other requests mentioned in this article, replace `{Your-Azure-Maps-Subscription-key}` with your Azure Maps subscription key. The request should look like the following URL: +1. Select the **GET** HTTP method in the **URL** drop-down list, then enter the following URL: ```http- https://atlas.microsoft.com/search/address/reverse/crossstreet/json?&api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700 + https://atlas.microsoft.com/search/address/reverse/crossstreet/json?api-version=1.0&subscription-key={Your-Azure-Maps-Subscription-key}&language=en-US&query=47.591180,-122.332700 ``` - :::image type="content" source="./media/how-to-search-for-address/search-address-cross.png" alt-text="Search cross street."::: - -3. Select **Send**, and review the response body. Notice that the response contains a `crossStreet` value of `South Atlantic Street`. +1. Select the run button, and review the response body. Notice that the response contains a `crossStreet` value of `South Atlantic Street`. 
## Next steps [Entity Types]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#entitytype [Fuzzy Search URI Parameters]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true#uri-parameters [Fuzzy Search]: /rest/api/maps/search/getsearchfuzzy?view=rest-maps-1.0&preserve-view=true +[geobias]: glossary.md#geobias [Get Search Address Reverse]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true [Get Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true -[point of interest result]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0&preserve-view=true#searchpoiresponse +[point of interest]: /rest/api/maps/search/getsearchpoi?view=rest-maps-1.0&preserve-view=true#searchpoiresponse [Post Search Address Batch]: /rest/api/maps/search/postsearchaddressbatch [Post Search Address Reverse Batch]: /rest/api/maps/search/postsearchaddressreversebatch?view=rest-maps-1.0&preserve-view=true -[Postman]: https://www.postman.com/ +[bruno]: https://www.usebruno.com/ [Reverse Address Search Results]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#searchaddressreverseresult [Reverse Address Search]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true [Reverse Search Parameters]: /rest/api/maps/search/getsearchaddressreverse?view=rest-maps-1.0&preserve-view=true#uri-parameters [Route]: /rest/api/maps/route [Search Address Reverse Cross Street]: /rest/api/maps/search/getsearchaddressreversecrossstreet?view=rest-maps-1.0&preserve-view=true [Search Address]: /rest/api/maps/search/getsearchaddress?view=rest-maps-1.0&preserve-view=true -[Search Coverage]: geocoding-coverage.md +[Azure Maps geocoding coverage]: geocoding-coverage.md [Search Polygon API]: /rest/api/maps/search/getsearchpolygon?view=rest-maps-1.0&preserve-view=true [Search]: /rest/api/maps/search?view=rest-maps-1.0&preserve-view=true [subscription key]: quick-demo-map-app.md#get-the-subscription-key-for-your-account |
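As with the weather row, the search walkthrough can be scripted outside bruno; this TypeScript sketch pairs the forward-geocoding and reverse-geocoding URLs used above. The `searchExamples` name is illustrative, and the key placeholder must be replaced before running.

```typescript
// Sketch of forward and reverse geocoding with the v1.0 Search URLs above.
// Address, coordinates, and response fields come from the article's examples.
const key = '{Your-Azure-Maps-Subscription-key}';

async function searchExamples(): Promise<void> {
  // Forward geocoding: address -> coordinates.
  const address = encodeURIComponent('400 Broad St, Seattle, WA 98109');
  const geo = await fetch(
    `https://atlas.microsoft.com/search/address/json?api-version=1.0&language=en-US&query=${address}&subscription-key=${key}`
  ).then((r) => r.json());
  const position = geo.results?.[0]?.position;
  console.log('Coordinates:', position?.lat, position?.lon);

  // Reverse geocoding: coordinates -> street address.
  const reverse = await fetch(
    `https://atlas.microsoft.com/search/address/reverse/json?api-version=1.0&language=en-US&query=47.591180,-122.332700&subscription-key=${key}`
  ).then((r) => r.json());
  console.log('Address:', reverse.addresses?.[0]?.address?.freeformAddress);
}

searchExamples();
```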
azure-maps | How To Secure Daemon App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-daemon-app.md | Title: How to secure a daemon application in Microsoft Azure Maps description: This article describes how to host daemon applications, such as background processes, timers, and jobs, in a trusted and secure environment in Microsoft Azure Maps. Last updated 10/28/2021 -custom.ms: subject-rbac-steps # Secure a daemon application |
azure-maps | How To Secure Device Code | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-device-code.md | |
azure-maps | How To Secure Sas App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-sas-app.md | Title: How to secure an Azure Maps application with a SAS token description: Create an Azure Maps account secured with SAS token authentication. Last updated 06/08/2022 |
azure-maps | How To Secure Spa App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-app.md | Title: How to secure a single-page web application with non-interactive sign-in description: How to configure a single-page web application with non-interactive Azure role-based access control (Azure RBAC) and Azure Maps Web SDK. Last updated 10/28/2021 |
azure-maps | How To Secure Spa Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-spa-users.md | Title: How to secure a single page application with user sign-in description: How to configure a single page application that supports Microsoft Entra single-sign-on with Azure Maps Web SDK. Last updated 06/12/2020 # Secure a single page application with user sign-in |
azure-maps | How To Secure Webapp Users | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-secure-webapp-users.md | Title: How to secure a web application with interactive single sign-in description: How to configure a web application that supports Microsoft Entra single sign-in with Azure Maps Web SDK using OpenID Connect protocol. Last updated 06/12/2020 # Secure a web application with user sign-in |
azure-maps | How To Show Attribution | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-attribution.md | |
azure-maps | How To Show Traffic Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-show-traffic-android.md | |
azure-maps | How To Use Android Map Control Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-android-map-control-library.md | |
azure-maps | How To Use Best Practices For Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-routing.md | Title: Best practices for Azure Maps Route service in Microsoft Azure Maps description: Learn how to route vehicles by using Route service from Microsoft Azure Maps. Last updated 10/28/2021 # Best practices for Azure Maps Route service |
azure-maps | How To Use Best Practices For Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-best-practices-for-search.md | Title: Best practices for Azure Maps Search service description: Learn how to apply the best practices when using the Search service from Microsoft Azure Maps. Last updated 10/28/2021 # Best practices for Azure Maps Search service |
azure-maps | How To Use Feedback Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-feedback-tool.md | |
azure-maps | How To Use Image Templates Web Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-image-templates-web-sdk.md | |
azure-maps | How To Use Indoor Module Ios | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module-ios.md | |
azure-maps | How To Use Indoor Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-indoor-module.md | |
azure-maps | How To Use Ios Map Control Library | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ios-map-control-library.md | |
azure-maps | How To Use Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-map-control.md | |
azure-maps | How To Use Npm Package | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-npm-package.md | |
azure-maps | How To Use Services Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-services-module.md | |
azure-maps | How To Use Spatial Io Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-spatial-io-module.md | -#Customer intent: As an Azure Maps web sdk user, I want to install and use the spatial io module so that I can integrate spatial data with the Azure Maps web sdk. + # How to use the Azure Maps Spatial IO module |
azure-maps | How To Use Ts Rest Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-use-ts-rest-sdk.md | |
azure-maps | How To View Api Usage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/how-to-view-api-usage.md | |
azure-maps | Interact Map Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/interact-map-ios-sdk.md | |
azure-maps | Ios Sdk Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/ios-sdk-migration-guide.md | |
azure-maps | Itinerary Optimization Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/itinerary-optimization-service.md | Title: Create multi-itinerary optimization service description: Learn how to use Azure Maps and NVIDIA cuOpt to build a multi-itinerary optimization service.-+ Last updated 05/20/2024 -+ # Create multi-itinerary optimization service |
azure-maps | Map Accessibility | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-accessibility.md | |
azure-maps | Map Add Bubble Layer Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer-android.md | |
azure-maps | Map Add Bubble Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-bubble-layer.md | |
azure-maps | Map Add Controls Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls-android.md | |
azure-maps | Map Add Controls | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-controls.md | |
azure-maps | Map Add Custom Html | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-custom-html.md | |
azure-maps | Map Add Drawing Toolbar | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-drawing-toolbar.md | |
azure-maps | Map Add Heat Map Layer Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer-android.md | |
azure-maps | Map Add Heat Map Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-heat-map-layer.md | |
azure-maps | Map Add Image Layer Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer-android.md | |
azure-maps | Map Add Image Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-image-layer.md | |
azure-maps | Map Add Line Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-line-layer.md | |
azure-maps | Map Add Pin | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-pin.md | |
azure-maps | Map Add Popup | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-popup.md | |
azure-maps | Map Add Shape | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-shape.md | |
azure-maps | Map Add Snap Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-snap-grid.md | |
azure-maps | Map Add Tile Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-add-tile-layer.md | |
azure-maps | Map Create | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-create.md | |
azure-maps | Map Events | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-events.md | |
azure-maps | Map Extruded Polygon Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon-android.md | |
azure-maps | Map Extruded Polygon | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-extruded-polygon.md | |
azure-maps | Map Get Information From Coordinate | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-information-from-coordinate.md | |
azure-maps | Map Get Shape Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-get-shape-data.md | |
azure-maps | Map Route | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-route.md | |
azure-maps | Map Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-search-location.md | |
azure-maps | Map Show Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/map-show-traffic.md | |
azure-maps | Migrate Bing Maps Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-bing-maps-overview.md | |
azure-maps | Migrate Calculate Route | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-calculate-route.md | Title: Migrate Bing Maps Calculate a Route API to Azure Maps Route Directions API description: Learn how to Migrate the Bing Maps Calculate a Route API to the Azure Maps Route Directions API.-+ Last updated 05/16/2024 -+ # Migrate Bing Maps Calculate a Route API |
azure-maps | Migrate Calculate Truck Route | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-calculate-truck-route.md | Title: Migrate Bing Maps Calculate a Truck Route API to Azure Maps Route Directions API description: Learn how to Migrate the Bing Maps Calculate a Truck Route API to the Azure Maps Route Directions API.-+ Last updated 05/16/2024 -+ # Migrate Bing Maps Calculate a Truck Route API |
azure-maps | Migrate Find Location Address | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-location-address.md | |
azure-maps | Migrate Find Location By Point | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-location-by-point.md | |
azure-maps | Migrate Find Location Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-location-query.md | |
azure-maps | Migrate Find Time Zone | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-find-time-zone.md | |
azure-maps | Migrate From Bing Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-bing-maps-web-app.md | |
azure-maps | Migrate From Google Maps Android App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-android-app.md | |
azure-maps | Migrate From Google Maps Web App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-app.md | |
azure-maps | Migrate From Google Maps Web Services | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps-web-services.md | |
azure-maps | Migrate From Google Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-from-google-maps.md | |
azure-maps | Migrate Geocode Dataflow | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-geocode-dataflow.md | Title: Migrate Bing Maps Geocode Dataflow API to Azure Maps Geocoding Batch and Reverse Geocoding Batch API description: Learn how to Migrate the Bing Maps Geocode Dataflow API to the Azure Maps Geocoding Batch and Reverse Geocoding Batch API.-+ Last updated 05/15/2024 -+ # Migrate Bing Maps Geocode Dataflow API |
azure-maps | Migrate Geodata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-geodata.md | |
azure-maps | Migrate Get Imagery Metadata | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-imagery-metadata.md | |
azure-maps | Migrate Get Static Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-static-map.md | |
azure-maps | Migrate Get Traffic Incidents | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-get-traffic-incidents.md | Title: Migrate Bing Maps Get Traffic Incidents API to Azure Maps Get Traffic Incident Detail API description: Learn how to Migrate the Bing Maps Get Traffic Incidents API to the Azure Maps Get Traffic Incident Detail API.-+ Last updated 04/15/2024 -+ |
azure-maps | Migrate Help Using Copilot | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-help-using-copilot.md | |
azure-maps | Migrate Sds Data Source Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/migrate-sds-data-source-management.md | Title: Migrate Bing Maps Data Source Management and Query API to Azure Maps API description: Learn how to Migrate the Bing Maps Data Source Management and Query API to the appropriate Azure Maps API.-+ Last updated 05/15/2024 -+ # Migrate Bing Maps Data Source Management and Query API |
azure-maps | Open Source Projects | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/open-source-projects.md | |
azure-maps | Power Bi Visual Add 3D Column Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-3d-column-layer.md | |
azure-maps | Power Bi Visual Add Bubble Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-bubble-layer.md | |
azure-maps | Power Bi Visual Add Heat Map Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-heat-map-layer.md | |
azure-maps | Power Bi Visual Add Pie Chart Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-pie-chart-layer.md | |
azure-maps | Power Bi Visual Add Reference Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-reference-layer.md | |
azure-maps | Power Bi Visual Add Tile Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-add-tile-layer.md | |
azure-maps | Power Bi Visual Cluster Bubbles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-cluster-bubbles.md | |
azure-maps | Power Bi Visual Conversion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-conversion.md | |
azure-maps | Power Bi Visual Data Residency | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-data-residency.md | |
azure-maps | Power Bi Visual Filled Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-filled-map.md | |
azure-maps | Power Bi Visual Geocode | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-geocode.md | |
azure-maps | Power Bi Visual Get Started | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-get-started.md | |
azure-maps | Power Bi Visual Manage Access | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-manage-access.md | |
azure-maps | Power Bi Visual On Object Interaction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-on-object-interaction.md | |
azure-maps | Power Bi Visual Show Real Time Traffic | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-show-real-time-traffic.md | |
azure-maps | Power Bi Visual Understanding Layers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/power-bi-visual-understanding-layers.md | |
azure-maps | Quick Android Map | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-android-map.md | |
azure-maps | Quick Demo Map App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-demo-map-app.md | |
azure-maps | Quick Ios App | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/quick-ios-app.md | |
azure-maps | Release Notes Drawing Tools Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-drawing-tools-module.md | |
azure-maps | Release Notes Indoor Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-indoor-module.md | |
azure-maps | Release Notes Map Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-map-control.md | |
azure-maps | Release Notes Spatial Module | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/release-notes-spatial-module.md | |
azure-maps | Render Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/render-coverage.md | |
azure-maps | Rest Api Azure Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-azure-maps.md | |
azure-maps | Rest Api Creator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-api-creator.md | |
azure-maps | Rest Sdk Developer Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/rest-sdk-developer-guide.md | |
azure-maps | Routing Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/routing-coverage.md | Title: Routing coverage description: Learn what level of coverage Azure Maps provides in various regions for routing, routing with traffic, and truck routing. -+ Last updated 10/21/2022 -+ zone_pivot_groups: azure-maps-coverage |
azure-maps | Set Android Map Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-android-map-styles.md | |
azure-maps | Set Drawing Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-drawing-options.md | |
azure-maps | Set Map Style Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/set-map-style-ios-sdk.md | |
azure-maps | Show Traffic Data Map Ios Sdk | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/show-traffic-data-map-ios-sdk.md | |
azure-maps | Spatial Io Add Ogc Map Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-ogc-map-layer.md | |
azure-maps | Spatial Io Add Simple Data Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-add-simple-data-layer.md | |
azure-maps | Spatial Io Connect Wfs Service | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-connect-wfs-service.md | |
azure-maps | Spatial Io Core Operations | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-core-operations.md | |
azure-maps | Spatial Io Read Write Spatial Data | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-read-write-spatial-data.md | |
azure-maps | Spatial Io Supported Data Format Details | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/spatial-io-supported-data-format-details.md | |
azure-maps | Supported Browsers | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-browsers.md | |
azure-maps | Supported Languages | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-languages.md | |
azure-maps | Supported Map Styles | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-map-styles.md | |
azure-maps | Supported Search Categories | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/supported-search-categories.md | |
azure-maps | Traffic Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/traffic-coverage.md | Title: Traffic coverage description: Learn about traffic coverage in Azure Maps. See whether information on traffic flow and incidents is available in various regions throughout the world.-+ Last updated 03/24/2022 -+ |
azure-maps | Tutorial Create Store Locator | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-create-store-locator.md | |
azure-maps | Tutorial Ev Routing | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-ev-routing.md | Title: 'Tutorial: Route electric vehicles by using Azure Notebooks (Python) with Microsoft Azure Maps' description: Tutorial on how to route electric vehicles by using Microsoft Azure Maps routing APIs and Azure Notebooks-+ Last updated 04/26/2021 |
azure-maps | Tutorial Iot Hub Maps | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-iot-hub-maps.md | Title: 'Tutorial: Implement IoT spatial analytics' description: Tutorial on how to Integrate IoT Hub with Microsoft Azure Maps service APIs-+ Last updated 09/14/2023 -+ |
azure-maps | Tutorial Load Geojson File Android | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-load-geojson-file-android.md | |
azure-maps | Tutorial Prioritized Routes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-prioritized-routes.md | |
azure-maps | Tutorial Route Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-route-location.md | |
azure-maps | Tutorial Search Location | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/tutorial-search-location.md | |
azure-maps | Understanding Azure Maps Transactions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/understanding-azure-maps-transactions.md | |
azure-maps | Weather Coverage | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-coverage.md | Title: Microsoft Azure Maps Weather services coverage description: Learn about Microsoft Azure Maps Weather services coverage-+ Last updated 11/08/2022 -+ |
azure-maps | Weather Service Tutorial | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-service-tutorial.md | Title: 'Tutorial: Join sensor data with weather forecast data by using Azure Notebooks(Python)' description: Tutorial on how to join sensor data with weather forecast data from Microsoft Azure Maps Weather services using Azure Notebooks(Python).-+ Last updated 10/28/2021 -+ |
azure-maps | Weather Services Concepts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/weather-services-concepts.md | Title: Weather services concepts in Microsoft Azure Maps description: Learn about the concepts that apply to Microsoft Azure Maps Weather services.-+ Last updated 09/10/2020 -+ # Weather services in Azure Maps |
azure-maps | Web Sdk Best Practices | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-best-practices.md | |
azure-maps | Web Sdk Migration Guide | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/web-sdk-migration-guide.md | |
azure-maps | Webgl Custom Layer | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/webgl-custom-layer.md | |
azure-maps | Zoom Levels And Tile Grid | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-maps/zoom-levels-and-tile-grid.md | Title: Zoom levels and tile grid in Microsoft Azure Maps description: Learn how to set zoom levels in Azure Maps. See how to convert geographic coordinates into pixel coordinates, tile coordinates, and quadkeys. View code samples.--++ Last updated 07/14/2020 --+ # Zoom levels and tile grid |
azure-monitor | Azure Monitor Agent Mma Removal Tool | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-mma-removal-tool.md | You'll use the following script for agent removal. Open a file in your local dir # az login # az account set --subscription <subscription_id/subscription_name> # This script uses parallel processing, modify the $parallelThrottleLimit parameter to either increase or decrease the number of parallel processes-# PS> .\MMAUnistallUtilityScript.ps1 GetInventory -# The above command will generate a csv file with the details of Vm's and Vmss that has MMA extension installed. +# PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 GetInventory +# The above command will generate a csv file with the details of VMs, VMSS, and Arc servers that have the MMA/OMS extension installed. # The customer can modify the csv by adding/removing rows if needed-# Remove the MMA by running the script again as shown below: -# PS> .\MMAUnistallUtilityScript.ps1 UninstallMMAExtension +# Remove the MMA/OMS by running the script again as shown below: +# PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 UninstallExtension # This version of the script requires Powershell version >= 7 in order to improve performance via ForEach-Object -Parallel # https://docs.microsoft.com/en-us/powershell/scripting/whats-new/migrating-from-windows-powershell-51-to-powershell-7?view=powershell-7.1 if ($PSVersionTable.PSVersion.Major -lt 7) $parallelThrottleLimit = 16 -function GetVmsWithMMAExtensionInstalled +function GetArcServersWithLogAnalyticsAgentExtensionInstalled { + param ( + $fileName + ) + + $serverList = az connectedmachine list --query "[].{ResourceId:id, ResourceGroup:resourceGroup, ServerName:name}" | ConvertFrom-Json + if(!$serverList) + { + Write-Host "Cannot get the Arc server list" + return + } ++ $serversCount = $serverList.Length + $vmParallelThrottleLimit = $parallelThrottleLimit + if ($serversCount -lt $vmParallelThrottleLimit) + { + $serverParallelThrottleLimit = $serversCount + } ++ if($serversCount -eq 1) + { + $serverGroups += ,($serverList[0]) + } + else + { + # split the list into batches to do parallel processing + for ($i = 0; $i -lt $serversCount; $i += $vmParallelThrottleLimit) + { + $serverGroups += , ($serverList[$i..($i + $serverParallelThrottleLimit - 1)]) + } + } ++ Write-Host "Detected $serversCount Arc servers in this subscription." 
+ $hash = [hashtable]::Synchronized(@{}) + $hash.One = 1 ++ $serverGroups | Foreach-Object -ThrottleLimit $parallelThrottleLimit -Parallel { + $len = $using:serversCount + $hash = $using:hash + $_ | ForEach-Object { + $percent = 100 * $hash.One++ / $len + Write-Progress -Activity "Getting Arc server extensions Inventory" -PercentComplete $percent + $serverName = $_.ServerName + $resourceGroup = $_.ResourceGroup + $resourceId = $_.ResourceId + Write-Debug "Getting extensions for Arc server: $serverName" + $extensions = az connectedmachine extension list -g $resourceGroup --machine-name $serverName --query "[?contains(['MicrosoftMonitoringAgent', 'OmsAgentForLinux', 'AzureMonitorLinuxAgent', 'AzureMonitorWindowsAgent'], properties.type)].{type: properties.type, name: name}" | ConvertFrom-Json ++ if (!$extensions) { + return + } + $extensionMap = @{} + foreach ($ext in $extensions) { + $extensionMap[$ext.type] = $ext.name + } + if ($extensionMap.ContainsKey("MicrosoftMonitoringAgent")) { + $extensionName = $extensionMap["MicrosoftMonitoringAgent"] + } + elseif ($extensionMap.ContainsKey("OmsAgentForLinux")) { + $extensionName = $extensionMap["OmsAgentForLinux"] + } + if ($extensionName) { + $amaExtensionInstalled = "False" + if ($extensionMap.ContainsKey("AzureMonitorWindowsAgent") -or $extensionMap.ContainsKey("AzureMonitorLinuxAgent")) { + $amaExtensionInstalled = "True" + } + $csvObj = New-Object -TypeName PSObject -Property @{ + 'ResourceId' = $resourceId + 'Name' = $serverName + 'Resource_Group' = $resourceGroup + 'Resource_Type' = "ArcServer" + 'Install_Type' = "Extension" + 'Extension_Name' = $extensionName + 'AMA_Extension_Installed' = $amaExtensionInstalled + } + $csvObj | Export-Csv $using:fileName -Append -Force | Out-Null + } + # The az CLI sometimes cannot handle many requests at the same time, so the next request is delayed by 2 milliseconds + Start-Sleep -Milliseconds 2 + } + } +} ++function GetVmsWithLogAnalyticsAgentExtensionInstalled { param( $fileName ) - $vmList = az vm list --query "[].{ResourceGroup:resourceGroup, VmName:name}" | ConvertFrom-Json + $vmList = az vm list --query "[].{ResourceId:id, ResourceGroup:resourceGroup, VmName:name}" | ConvertFrom-Json if(!$vmList) { function GetVmsWithMMAExtensionInstalled } $vmsCount = $vmList.Length- $vmParallelThrottleLimit if ($vmsCount -lt $vmParallelThrottleLimit) { function GetVmsWithMMAExtensionInstalled } } - Write-Host "Detected $vmsCount Vm's running in this subscription." + Write-Host "Detected $vmsCount VMs in this subscription." 
$hash = [hashtable]::Synchronized(@{}) $hash.One = 1 function GetVmsWithMMAExtensionInstalled $hash = $using:hash $_ | ForEach-Object { $percent = 100 * $hash.One++ / $len- Write-Progress -Activity "Getting VM Inventory" -PercentComplete $percent + Write-Progress -Activity "Getting VM extensions Inventory" -PercentComplete $percent + $resourceId = $_.ResourceId $vmName = $_.VmName $resourceGroup = $_.ResourceGroup- $extensionName = az vm extension list -g $resourceGroup --vm-name $vmName --query "[?name == 'MicrosoftMonitoringAgent' || name == 'OmsAgentForLinux'].name" | ConvertFrom-Json - if ($extensionName) - { + Write-Debug "Getting extensions for VM: $vmName" + $extensions = az vm extension list -g $resourceGroup --vm-name $vmName --query "[?contains(['MicrosoftMonitoringAgent', 'OmsAgentForLinux', 'AzureMonitorLinuxAgent', 'AzureMonitorWindowsAgent'], typePropertiesType)].{type: typePropertiesType, name: name}" | ConvertFrom-Json + + if (!$extensions) { + return + } + $extensionMap = @{} + foreach ($ext in $extensions) { + $extensionMap[$ext.type] = $ext.name + } + if ($extensionMap.ContainsKey("MicrosoftMonitoringAgent")) { + $extensionName = $extensionMap["MicrosoftMonitoringAgent"] + } + elseif ($extensionMap.ContainsKey("OmsAgentForLinux")) { + $extensionName = $extensionMap["OmsAgentForLinux"] + } + if ($extensionName) { + $amaExtensionInstalled = "False" + if ($extensionMap.ContainsKey("AzureMonitorWindowsAgent") -or $extensionMap.ContainsKey("AzureMonitorLinuxAgent")) { + $amaExtensionInstalled = "True" + } $csvObj = New-Object -TypeName PSObject -Property @{- 'Name' = $vmName - 'Resource_Group' = $resourceGroup - 'Resource_Type' = "VM" - 'Install_Type' = "Extension" - 'Extension_Name' = $extensionName + 'ResourceId' = $resourceId + 'Name' = $vmName + 'Resource_Group' = $resourceGroup + 'Resource_Type' = "VM" + 'Install_Type' = "Extension" + 'Extension_Name' = $extensionName + 'AMA_Extension_Installed' = $amaExtensionInstalled } $csvObj | Export-Csv $using:fileName -Append -Force | Out-Null }+ # The az CLI sometimes cannot handle many requests at the same time, so the next request is delayed by 2 milliseconds + Start-Sleep -Milliseconds 2 } } } -function GetVmssWithMMAExtensionInstalled +function GetVmssWithLogAnalyticsAgentExtensionInstalled { param( $fileName ) # get the vmss list which are successfully provisioned- $vmssList = az vmss list --query "[?provisioningState=='Succeeded'].{ResourceGroup:resourceGroup, VmssName:name}" | ConvertFrom-Json + $vmssList = az vmss list --query "[?provisioningState=='Succeeded'].{ResourceId:id, ResourceGroup:resourceGroup, VmssName:name}" | ConvertFrom-Json $vmssCount = $vmssList.Length- Write-Host "Detected $vmssCount Vmss running in this subscription." + Write-Host "Detected $vmssCount VMSS in this subscription." 
$hash = [hashtable]::Synchronized(@{}) $hash.One = 1 function GetVmssWithMMAExtensionInstalled $len = $using:vmssCount $hash = $using:hash $percent = 100 * $hash.One++ / $len- Write-Progress -Activity "Getting VMSS Inventory" -PercentComplete $percent + Write-Progress -Activity "Getting VMSS extensions Inventory" -PercentComplete $percent + $resourceId = $_.ResourceId $vmssName = $_.VmssName $resourceGroup = $_.ResourceGroup-- $extensionName = az vmss extension list -g $resourceGroup --vmss-name $vmssName --query "[?name == 'MicrosoftMonitoringAgent' || name == 'OmsAgentForLinux'].name" | ConvertFrom-Json - if ($extensionName) - { + Write-Debug "Getting extensions for VMSS: $vmssName" + $extensions = az vmss extension list -g $resourceGroup --vmss-name $vmssName --query "[?contains(['MicrosoftMonitoringAgent', 'OmsAgentForLinux', 'AzureMonitorLinuxAgent', 'AzureMonitorWindowsAgent'], typePropertiesType)].{type: typePropertiesType, name: name}" | ConvertFrom-Json + + if (!$extensions) { + return + } + $extensionMap = @{} + foreach ($ext in $extensions) { + $extensionMap[$ext.type] = $ext.name + } + if ($extensionMap.ContainsKey("MicrosoftMonitoringAgent")) { + $extensionName = $extensionMap["MicrosoftMonitoringAgent"] + } + elseif ($extensionMap.ContainsKey("OmsAgentForLinux")) { + $extensionName = $extensionMap["OmsAgentForLinux"] + } + if ($extensionName) { + $amaExtensionInstalled = "False" + if ($extensionMap.ContainsKey("AzureMonitorWindowsAgent") -or $extensionMap.ContainsKey("AzureMonitorLinuxAgent")) { + $amaExtensionInstalled = "True" + } $csvObj = New-Object -TypeName PSObject -Property @{- 'Name' = $vmssName - 'Resource_Group' = $resourceGroup - 'Resource_Type' = "VMSS" - 'Install_Type' = "Extension" - 'Extension_Name' = $extensionName + 'ResourceId' = $resourceId + 'Name' = $vmssName + 'Resource_Group' = $resourceGroup + 'Resource_Type' = "VMSS" + 'Install_Type' = "Extension" + 'Extension_Name' = $extensionName + 'AMA_Extension_Installed' = $amaExtensionInstalled } $csvObj | Export-Csv $using:fileName -Append -Force | Out-Null- } + } + # The az CLI sometimes cannot handle many requests at the same time, so the next request is delayed by 2 milliseconds + Start-Sleep -Milliseconds 2 } } function GetInventory { param(- $fileName = "MMAInventory.csv" + $fileName = "LogAnalyticsAgentExtensionInventory.csv" ) # create a new file New-Item -Name $fileName -ItemType File -Force Start-Transcript -Path $logFileName -Append- GetVmsWithMMAExtensionInstalled $fileName - GetVmssWithMMAExtensionInstalled $fileName + GetVmsWithLogAnalyticsAgentExtensionInstalled $fileName + GetVmssWithLogAnalyticsAgentExtensionInstalled $fileName + GetArcServersWithLogAnalyticsAgentExtensionInstalled $fileName Stop-Transcript } -function UninstallMMAExtension +function UninstallExtension { param(- $fileName = "MMAInventory.csv" + $fileName = "LogAnalyticsAgentExtensionInventory.csv" ) Start-Transcript -Path $logFileName -Append Import-Csv $fileName | ForEach-Object -ThrottleLimit $parallelThrottleLimit -Parallel { if ($_.Install_Type -eq "Extension") {+ $extensionName = $_.Extension_Name + $resourceName = $_.Name + Write-Debug "Uninstalling extension: $extensionName from $resourceName" if ($_.Resource_Type -eq "VMSS") { # if the extension is installed with a custom name, provide the name using the flag: --extension-instance-name <extension name>- az vmss extension delete --name $_.Extension_Name --vmss-name $_.Name --resource-group $_.Resource_Group --output none --no-wait + az vmss extension delete --name $extensionName 
--vmss-name $resourceName --resource-group $_.Resource_Group --output none --no-wait }- else + elseif($_.Resource_Type -eq "VM") { # if the extension is installed with a custom name, provide the name using the flag: --extension-instance-name <extension name>- az vm extension delete --name $_.Extension_Name --vm-name $_.Name --resource-group $_.Resource_Group --output none --no-wait + az vm extension delete --name $extensionName --vm-name $resourceName --resource-group $_.Resource_Group --output none --no-wait + } + elseif($_.Resource_Type -eq "ArcServer") + { + az connectedmachine extension delete --name $extensionName --machine-name $resourceName --resource-group $_.Resource_Group --no-wait --output none --yes }+ # The az CLI sometimes cannot handle many requests at the same time, so the next delete request is delayed by 2 milliseconds + Start-Sleep -Milliseconds 2 } } Stop-Transcript } -$logFileName = "MMAUninstallUtilityScriptLog.log" +$logFileName = "LogAnalyticsAgentUninstallUtilityScriptLog.log" switch ($args.Count) { 0 { Write-Host "The arguments provided are incorrect."- Write-Host "To get the Inventory: Run the script as: PS> .\MMAUnistallUtilityScript.ps1 GetInventory" - Write-Host "To uninstall MMA from Inventory: Run the script as: PS> .\MMAUnistallUtilityScript.ps1 UninstallMMAExtension" + Write-Host "To get the Inventory: Run the script as: PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 GetInventory" + Write-Host "To uninstall MMA/OMS from Inventory: Run the script as: PS> .\LogAnalyticsAgentUninstallUtilityScript.ps1 UninstallExtension" } 1 { if (-Not (Test-Path $logFileName)) { You'll collect a list of all legacy agents, both MMA and OMS, on all VMs, VMSSs ``` The script reports the total VMs, VMSSs, or Arc-enabled servers seen in the subscription. It takes several minutes to run. You see a progress bar in the console window. Once complete, you'll see a CSV file called LogAnalyticsAgentExtensionInventory.csv in the local directory with the following format. -|Resource_Group | Resource_Type | Name | Install_Type |Extension_Name | -|||||| -| Linux-AMA-E2E | VM | Linux-ama-e2e-debian9 | Extension | OmsAgentForLinux | -|AMA-ADMIN | VM | test2012-r2-da | Extension | MicrosoftMonitorAgent | +| Resource_ID | Name | Resource_Group | Resource_Type | Install_Type | Extension_Name | AMA_Extension_Installed | +|||||||| +| 012cb5cf-e1a8-49ee-a484-d40673167c9c | Linux-ama-e2e-debian9 | Linux-AMA-E2E | VM | Extension | OmsAgentForLinux | True | +| 8acae35a-454f-4869-bf4f-658189d98516 | test2012-r2-da | AMA-ADMIN | VM | Extension | MicrosoftMonitoringAgent | False | ## Step 4 Uninstall inventory This script iterates through the list of VMs, Virtual Machine Scale Sets, and Arc-enabled servers and uninstalls the legacy agent. If the VM, Virtual Machine Scale Set, or Arc-enabled server isn't running, you won't be able to remove the agent. |
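The uninstall utility above is invoked in two passes, as described in the script's own header comments. As a quick reference, a typical end-to-end run looks like the following sketch; it assumes PowerShell 7 or later with the Azure CLI installed, and the subscription ID is a placeholder.

```powershell
# Sign in and select the subscription that contains the machines (mirrors the script's header comments).
az login
az account set --subscription "<subscription-id>"

# Pass 1: build the inventory. This writes LogAnalyticsAgentExtensionInventory.csv,
# covering VMs, VMSS, and Arc-enabled servers that have the MMA/OMS extension installed.
.\LogAnalyticsAgentUninstallUtilityScript.ps1 GetInventory

# Pass 2: review the CSV, delete any rows you want to keep, then remove the listed extensions.
.\LogAnalyticsAgentUninstallUtilityScript.ps1 UninstallExtension
```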
azure-monitor | Azure Monitor Agent Supported Operating Systems | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-supported-operating-systems.md | This article lists the operating systems supported by [Azure Monitor Agent](./az | Red Hat Enterprise Linux Server 6.7+ | | | | Rocky Linux 9 | ✓ | ✓ | | Rocky Linux 8 | ✓ | ✓ |-| SUSE Linux Enterprise Server 15 SP4 | ✓<sup>2</sup> | ✓ | +| SUSE Linux Enterprise Server 15 SP5 | ✓<sup>2</sup> | ✓ | +| SUSE Linux Enterprise Server 15 SP4 | ✓<sup>2</sup> | ✓ | | SUSE Linux Enterprise Server 15 SP3 | ✓ | ✓ | | SUSE Linux Enterprise Server 15 SP2 | ✓ | ✓ | | SUSE Linux Enterprise Server 15 SP1 | ✓ | ✓ | |
azure-monitor | Container Insights Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/container-insights-private-link.md | - Title: Enable private link with Container insights -description: Learn how to enable private link on an Azure Kubernetes Service (AKS) cluster. - Previously updated : 06/05/2024-----# Enable private link with Container insights -This article describes how to configure Container insights to use Azure Private Link for your AKS cluster. --## Prerequisites -- Create an Azure Monitor Private Link Scope (AMPLS) following the guidance in [Configure your private link](../logs/private-link-configure.md).-- Configure network isolation on your Log Analytics workspace to disable ingestion for the public networks. Isolate log queries if you want them to be restricted to Private network.--## Cluster using managed identity authentication --### [CLI](#tab/cli) --### Prerequisites -- Azure CLI version 2.63.0 or higher.-- AKS-preview CLI extension version MUST be 7.0.0b4 or higher if there is an AKS-preview CLI extension installed.---### Existing AKS Cluster --**Use default Log Analytics workspace** --```azurecli -az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>" -``` --Example: --```azurecli -az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription /resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource" -``` --**Use existing Log Analytics workspace** --```azurecli -az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>" -``` --Example: --```azurecli -az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription /resourceGroups/ my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource" -``` --### New AKS cluster --```azurecli -az aks create --resource-group rgName --name clusterName --enable-addons monitoring --workspace-resource-id "workspaceResourceId" --ampls-resource-id "azure-monitor-private-link-scope-resource-id" -``` --Example: --```azurecli -az aks create --resource-group "my-resource-group" --name "my-cluster" --enable-addons monitoring --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription /resourceGroups/ my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource" -``` ---### [ARM](#tab/arm) --The following sections provide links to the template and parameter files for enabling private link with Container insights on an AKS and Arc-enabled clusters. --Edit the values in the parameter file and deploy the template using any valid method for deploying ARM templates. 
Retrieve the **resource ID** of the resources from the **JSON** View of their **Overview** page. -- Based on your requirements, you can configure other parameters such `streams`, `enableContainerLogV2`, `enableSyslog`, `syslogLevels`, `syslogFacilities`, `dataCollectionInterval`, `namespaceFilteringModeForDataCollection` and `namespacesForDataCollection`. --### Prerequisites -- The template must be deployed in the same resource group as the cluster.--### AKS cluster --**Template file:** https://aka.ms/aks-enable-monitoring-msi-onboarding-template-file<br> -**Parameter file:** https://aka.ms/aks-enable-monitoring-msi-onboarding-template-parameter-file ---| Parameter | Description | -|:|:| -| `aksResourceId`| Resource ID of the cluster. | -| `aksResourceLocation` | Azure Region of the cluster. | -| `workspaceResourceId`| Resource ID of the Log Analytics workspace. | -| `workspaceRegion` | Region of the Log Analytics workspace. | -| `resourceTagValues` | Tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be MSCI-\<clusterName\>-\<clusterRegion\>, and this resource created in an AKS clusters resource group. For first time onboarding, you can set arbitrary tag values. | -| `useAzureMonitorPrivateLinkScope` | Boolean flag to indicate whether Azure Monitor link scope is used or not. | -| `azureMonitorPrivateLinkScopeResourceId` | Resource ID of the Azure Monitor Private link scope. This only used if `useAzureMonitorPrivateLinkScope` is set to **true**. | --### Arc-enabled Kubernetes cluster --**Template file:** https://aka.ms/arc-k8s-azmon-extension-msi-arm-template<br> -**Parameter file:** https://aka.ms/arc-k8s-azmon-extension-msi-arm-template-params --| Parameter | Description | -|:|:| -| `clusterResourceId` | Resource ID of the cluster. | -| `clusterRegion` | Azure Region of the cluster. | -| `workspaceResourceId` | Resource ID of the Log Analytics workspace. | -| `workspaceRegion` | Region of the Log Analytics workspace. | -| `workspaceDomain` | Domain of the Log Analytics workspace:<br>`opinsights.azure.com` for Azure public cloud<br>`opinsights.azure.us` for Azure US Government<br>`opinsights.azure.cn` for Azure China Cloud | -| `resourceTagValues` | Tag values specified for the existing Container insights extension data collection rule (DCR) of the cluster and the name of the DCR. The name will be MSCI-\<clusterName\>-\<clusterRegion\>, and this resource created in an AKS clusters resource group. For first time onboarding, you can set arbitrary tag values. | -| `useAzureMonitorPrivateLinkScope` | Boolean flag to indicate whether Azure Monitor link scope is used or not. | -| `azureMonitorPrivateLinkScopeResourceId` | Resource ID of the Azure Monitor Private link scope. This is only used if `useAzureMonitorPrivateLinkScope` is set to **true**. | ----## Cluster using legacy authentication -Use the following procedures to enable network isolation by connecting your cluster to the Log Analytics workspace using [Azure Private Link](../logs/private-link-security.md) if your cluster is not using managed identity authentication. This requires a [private AKS cluster](/azure/aks/private-clusters). --1. Create a private AKS cluster following the guidance in [Create a private Azure Kubernetes Service cluster](/azure/aks/private-clusters). --2. Disable public Ingestion on your Log Analytics workspace. -- Use the following command to disable public ingestion on an existing workspace. 
-- ```cli - az monitor log-analytics workspace update --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled - ``` -- Use the following command to create a new workspace with public ingestion disabled. -- ```cli - az monitor log-analytics workspace create --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled - ``` --3. Configure private link by following the instructions at [Configure your private link](../logs/private-link-configure.md). Set ingestion access to public and then set to private after the private endpoint is created but before monitoring is enabled. The private link resource region must be same as AKS cluster region. --4. Enable monitoring for the AKS cluster. -- ```cli - az aks enable-addons -a monitoring --resource-group <AKSClusterResourceGorup> --name <AKSClusterName> --workspace-resource-id <workspace-resource-id> - ``` ----## Next steps --* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md). -* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights. |
azure-monitor | Kubernetes Monitoring Enable | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-enable.md | Use one of the following methods to enable scraping of Prometheus metrics from y > If you have a single Azure Monitor Resource that is private-linked, then Prometheus enablement won't work if the AKS cluster and Azure Monitor Workspace are in different regions. > The configuration needed for the Prometheus add-on isn't available cross region because of the private link constraint. > To resolve this, create a new DCE in the AKS cluster location and a new DCRA (association) in the same AKS cluster region. Associate the new DCE with the AKS cluster and name the new association (DCRA) as configurationAccessEndpoint.-> For full instructions on how to configure the DCEs associated with your Azure Monitor workspace to use a Private Link for data ingestion, see [Use a private link for Managed Prometheus data ingestion](../essentials/private-link-data-ingestion.md). +> For full instructions on how to configure the DCEs associated with your Azure Monitor workspace to use a Private Link for data ingestion, see [Enable private link for Kubernetes monitoring in Azure Monitor](./kubernetes-monitoring-private-link.md). ### [CLI](#tab/cli) |
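The note above describes creating a region-local DCE and an association (DCRA) named `configurationAccessEndpoint` for private-linked, cross-region setups. As a rough sketch of those two steps with the Azure CLI, where every name and resource ID is a placeholder and the `--endpoint-id` parameter assumes a recent version of the `az monitor data-collection` command group:

```powershell
# Create a DCE in the same region as the AKS cluster (all names and IDs are placeholders).
az monitor data-collection endpoint create --name "dce-aks-region" --resource-group "my-resource-group" --location "<aks-cluster-region>" --public-network-access "Enabled"

# Associate the new DCE with the AKS cluster. The association must be named configurationAccessEndpoint.
az monitor data-collection rule association create --name "configurationAccessEndpoint" --resource "<aks-cluster-resource-id>" --endpoint-id "<new-dce-resource-id>"
```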
azure-monitor | Kubernetes Monitoring Private Link | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/containers/kubernetes-monitoring-private-link.md | + + Title: Enable private link with Container insights +description: Learn how to enable private link on an Azure Kubernetes Service (AKS) cluster. + Last updated : 06/05/2024+++++# Enable private link for Kubernetes monitoring in Azure Monitor +[Azure Private Link](../../private-link/private-link-overview.md) enables you to access Azure platform as a service (PaaS) resources from your virtual network by using private endpoints. An [Azure Monitor Private Link Scope (AMPLS)](../logs/private-link-security.md) connects a private endpoint to a set of Azure Monitor resources to define the boundaries of your monitoring network. This article describes how to configure Container insights and Managed Prometheus to use private link for data ingestion from your Azure Kubernetes Service (AKS) cluster. +++> [!NOTE] +> - See [Connect to a data source privately](../../../articles/managed-grafan) for details on how to configure private link to query data from your Azure Monitor workspace using Grafana. +> - See [Use private endpoints for Managed Prometheus and Azure Monitor workspace](../essentials/azure-monitor-workspace-private-endpoint.md) for details on how to configure private link to query data from your Azure Monitor workspace using workbooks. +++## Prerequisites +This article describes how to connect your cluster to an existing Azure Monitor Private Link Scope (AMPLS). Create an AMPLS following the guidance in [Configure your private link](../logs/private-link-configure.md). ++## Managed Prometheus (Azure Monitor workspace) +Data for Managed Prometheus is stored in an [Azure Monitor workspace](../essentials/azure-monitor-workspace-overview.md), so you must make this workspace accessible over a private link. ++### Configure DCEs +Private links for data ingestion for Managed Prometheus are configured on the Data Collection Endpoints (DCE) of the Azure Monitor workspace that stores the data. To identify the DCEs associated with your Azure Monitor workspace, select **Data Collection Endpoints** from your Azure Monitor workspace in the Azure portal. +++If your AKS cluster isn't in the same region as your Azure Monitor workspace, then you need to [create another DCE](../essentials/data-collection-endpoint-overview.md#create-a-data-collection-endpoint) in the same region as the AKS cluster. In this case, open the data collection rule (DCR) created when you enabled Managed Prometheus. This DCR will be named **MSProm-\<clusterName\>-\<clusterRegion\>**. The cluster will be listed on the **Resources** page. On the **Data collection endpoint** dropdown, select the DCE in the same region as the AKS cluster. ++++## Ingestion from a private AKS cluster +By default, a private AKS cluster can send data to Managed Prometheus and your Azure Monitor workspace over the public network using a public Data Collection Endpoint. ++If you choose to use an Azure Firewall to limit the egress from your cluster, you can implement one of the following: ++- Open a path to the public ingestion endpoint. Update the routing table with the following two endpoints: + - `*.handler.control.monitor.azure.com` + - `*.ingest.monitor.azure.com` +- Enable the Azure Firewall to access the Azure Monitor Private Link scope and DCE that's used for data ingestion. 
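Connecting a DCE to an AMPLS, which the **Configure DCEs** section above walks through in the portal, can also be scripted. A minimal sketch, assuming an existing AMPLS and the `az monitor private-link-scope` command group, with all resource names as placeholders:

```powershell
# Add the workspace's ingestion DCE to an existing Azure Monitor Private Link Scope (AMPLS).
# All resource names below are placeholders.
az monitor private-link-scope scoped-resource create --resource-group "my-resource-group" --scope-name "my-ampls-resource" --name "my-dce-connection" --linked-resource "<dce-resource-id>"
```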
++## Private link ingestion for remote write +Use the following steps to set up remote write for a Kubernetes cluster over a private link virtual network and an Azure Monitor Private Link scope. ++1. Create your Azure virtual network. +1. Configure the on-premises cluster to connect to an Azure VNET using a VPN gateway or ExpressRoutes with private-peering. +1. Create an Azure Monitor Private Link scope. +1. Connect the Azure Monitor Private Link scope to a private endpoint in the virtual network used by the on-premises cluster. This private endpoint is used to access your DCEs. +1. From your Azure Monitor workspace in the portal, select **Data Collection Endpoints** from the Azure Monitor workspace menu. +1. You'll have at least one DCE, which has the same name as your workspace. Click on the DCE to open its details. +1. Select the **Network Isolation** page for the DCE. +1. Click **Add** and select your Azure Monitor Private Link scope. It takes a few minutes for the settings to propagate. Once completed, data from your private AKS cluster is ingested into your Azure Monitor workspace over the private link. +++## Container insights (Log Analytics workspace) +Data for Container insights is stored in a [Log Analytics workspace](../logs/log-analytics-workspace-overview.md), so you must make this workspace accessible over a private link. ++> [!NOTE] +> This section describes how to enable private link for Container insights using CLI. For details on using an ARM template, see [Enable Container insights](./kubernetes-monitoring-enable.md?tabs=arm#enable-container-insights) and note the parameters `useAzureMonitorPrivateLinkScope` and `azureMonitorPrivateLinkScopeResourceId`. ++### Cluster using managed identity authentication +++### Existing AKS Cluster ++**Use default Log Analytics workspace** ++```azurecli +az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>" +``` ++Example: ++```azurecli +az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource" +``` ++**Use existing Log Analytics workspace** ++```azurecli +az aks enable-addons --addon monitoring --name <cluster-name> --resource-group <cluster-resource-group-name> --workspace-resource-id <workspace-resource-id> --ampls-resource-id "<azure-monitor-private-link-scope-resource-id>" +``` ++Example: ++```azurecli +az aks enable-addons --addon monitoring --name "my-cluster" --resource-group "my-resource-group" --workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource" +``` ++### New AKS cluster ++```azurecli +az aks create --resource-group rgName --name clusterName --enable-addons monitoring --workspace-resource-id "workspaceResourceId" --ampls-resource-id "azure-monitor-private-link-scope-resource-id" +``` ++Example: ++```azurecli +az aks create --resource-group "my-resource-group" --name "my-cluster" --enable-addons monitoring 
--workspace-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/Microsoft.OperationalInsights/workspaces/my-workspace" --ampls-resource-id "/subscriptions/my-subscription/resourceGroups/my-resource-group/providers/microsoft.insights/privatelinkscopes/my-ampls-resource" +``` +++## Cluster using legacy authentication +Use the following procedures to enable network isolation by connecting your cluster to the Log Analytics workspace using [Azure Private Link](../logs/private-link-security.md) if your cluster is not using managed identity authentication. This requires a [private AKS cluster](/azure/aks/private-clusters). ++1. Create a private AKS cluster following the guidance in [Create a private Azure Kubernetes Service cluster](/azure/aks/private-clusters). ++2. Disable public ingestion on your Log Analytics workspace. ++ Use the following command to disable public ingestion on an existing workspace. ++ ```cli + az monitor log-analytics workspace update --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled + ``` ++ Use the following command to create a new workspace with public ingestion disabled. ++ ```cli + az monitor log-analytics workspace create --resource-group <azureLogAnalyticsWorkspaceResourceGroup> --workspace-name <azureLogAnalyticsWorkspaceName> --ingestion-access Disabled + ``` ++3. Configure private link by following the instructions at [Configure your private link](../logs/private-link-configure.md). Set ingestion access to public and then set to private after the private endpoint is created but before monitoring is enabled. The private link resource region must be the same as the AKS cluster region. ++4. Enable monitoring for the AKS cluster. ++ ```cli + az aks enable-addons -a monitoring --resource-group <AKSClusterResourceGroup> --name <AKSClusterName> --workspace-resource-id <workspace-resource-id> --enable-msi-auth-for-monitoring false + ``` ++++## Next steps ++* If you experience issues while you attempt to onboard the solution, review the [Troubleshooting guide](container-insights-troubleshoot.md). +* With monitoring enabled to collect health and resource utilization of your AKS cluster and workloads running on them, learn [how to use](container-insights-analyze.md) Container insights. |
azure-monitor | Private Link Data Ingestion | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/private-link-data-ingestion.md | - Title: Use a private link for Managed Prometheus data ingestion -description: Overview of private link for secure data ingestion to Azure Monitor workspace from virtual networks. ---- Previously updated : 06/08/2024---# Private Link for data ingestion for Managed Prometheus and Azure Monitor workspace --Private links for data ingestion for Managed Prometheus are configured on the Data Collection Endpoints (DCE) of the workspace that stores the data. --This article shows you how to configure the DCEs associated with your Azure Monitor workspace to use a Private Link for data ingestion. --To define your Azure Monitor Private Link scope (AMPLS), see [Azure Monitor private link documentation](../logs/private-link-configure.md), then associate your DCEs with the AMPLS. --Find the DCEs associated with your Azure Monitor workspace. --1. Open the Azure Monitor workspaces menu in the Azure portal -2. Select your workspace -3. Select Data Collection Endpoints from the workspace menu ---The page displays all of the DCEs that are associated with the Azure Monitor workspace and that enable data ingestion into the workspace. Select the DCE you want to configure with Private Link and then follow the steps to [create an Azure Monitor private link scope](../logs/private-link-configure.md) to complete the process. --Once this is done, navigate to the DCR resource which was created during managed prometheus enablement from the Azure portal and choose 'Resources' under Configuration menu. -In the Data collection endpoint dropdown, pick a DCE in the same region as the AKS cluster. If the Azure Monitor Workspace is in the same region as the AKS cluster, you can reuse the DCE created during managed prometheus enablement. If not, create a DCE in the same region as the AKS cluster and pick that in the dropdown. ---> [!NOTE] -> Please refer to [Connect to a data source privately](../../../articles/managed-grafan) for details on how to configure private link for querying data from your Azure Monitor workspace using Grafana. -> -> Please refer to [use private endpoints for queries](azure-monitor-workspace-private-endpoint.md) for details on how to configure private link for querying data from your Azure Monitor workspace using workbooks (non-grafana). --## Private link ingestion from a private AKS cluster --A private Azure Kubernetes Service cluster can by default, send data to Managed Prometheus and your Azure Monitor workspace over the public network, and to the public Data Collection Endpoint. --If you choose to use an Azure Firewall to limit the egress from your cluster, you can implement one of the following: --+ Open a path to the public ingestion endpoint. Update the routing table with the following two endpoints: - - *.handler.control.monitor.azure.com - - *.ingest.monitor.azure.com -+ Enable the Azure Firewall to access the Azure Monitor Private Link scope and Data Collection Endpoint that's used for data ingestion --## Private link ingestion for remote write --The following steps show how to set up remote write for a Kubernetes cluster over a private link VNET and an Azure Monitor Private Link scope. --The following are the steps for setting up remote write for a Kubernetes cluster over a private link VNET and an Azure Monitor Private Link scope. --We start with your on-premises Kubernetes cluster. --1. Create your Azure virtual network. 
-1. Configure the on-premises cluster to connect to an Azure VNET using a VPN gateway or ExpressRoute with private peering. -1. Create an Azure Monitor Private Link scope. -1. Connect the Azure Monitor Private Link scope to a private endpoint in the virtual network used by the on-premises cluster. This private endpoint is used to access your Data Collection Endpoint(s). -1. Navigate to your Azure Monitor workspace in the portal. As part of creating your Azure Monitor workspace, a system Data Collection Endpoint is created that you can use to ingest data via remote write. -1. Choose **Data Collection Endpoints** from the Azure Monitor workspace menu. -1. By default, the system Data Collection Endpoint has the same name as your Azure Monitor workspace. Select this Data Collection Endpoint. -1. The Data Collection Endpoint's **Network Isolation** page displays. From this page, select **Add** and choose the Azure Monitor Private Link scope you created. It takes a few minutes for the settings to propagate. Once completed, data from your Kubernetes cluster is ingested into your Azure Monitor workspace over the private link. ---## Verify that data is being ingested --To verify data is being ingested, try one of the following methods: --- Open the Workbooks page from your Azure Monitor workspace and select the **Prometheus Explorer** tile. For more information on Azure Monitor workspace Workbooks, see [Workbooks overview](./prometheus-workbooks.md).-- -## Next steps --- [Managed Grafana network settings](https://aka.ms/ags/mpe)-- [Azure Private Endpoint DNS configuration](../../private-link/private-endpoint-dns.md)-- [Verify remote write is working correctly](./prometheus-remote-write.md#verify-remote-write-is-working-correctly) |
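
For readers automating this setup, a minimal Azure CLI sketch of creating a private link scope and associating a Data Collection Endpoint might look like the following; the resource names and the DCE resource ID are placeholders, so substitute your own values.

```bash
# Create an Azure Monitor Private Link scope (AMPLS); names are hypothetical.
az monitor private-link-scope create \
  --name my-ampls \
  --resource-group my-rg

# Associate an existing Data Collection Endpoint with the AMPLS.
# Replace --linked-resource with your DCE's full resource ID.
az monitor private-link-scope scoped-resource create \
  --name my-dce-link \
  --resource-group my-rg \
  --scope-name my-ampls \
  --linked-resource "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Insights/dataCollectionEndpoints/my-dce"
```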
azure-netapp-files | Azure Netapp Files Cost Model | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md | For cost model specific to cross-region replication, see [Cost model for cross-r Azure NetApp Files is billed on provisioned storage capacity, which is allocated by creating capacity pools. Capacity pools are billed monthly based on a set cost per allocated GiB per hour. Capacity pool allocation is measured hourly. -Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details. +Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 50 GiB to a maximum of 100 TiB for regular volumes and up to 1 PiB for [large volumes](azure-netapp-files-understand-storage-hierarchy.md#large-volumes). Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details. ### Pricing examples |
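
As a rough illustration of the hourly billing model described above, the sketch below estimates a monthly charge for a 1-TiB capacity pool; the unit price is a made-up placeholder, so substitute the actual rate from the Azure NetApp Files pricing page for your region and service level.

```bash
POOL_GIB=1024                 # 1-TiB capacity pool expressed in GiB
HOURS=730                     # approximate hours in a month
PRICE_PER_GIB_HOUR=0.000202   # hypothetical rate; check the pricing page

awk -v g="$POOL_GIB" -v h="$HOURS" -v p="$PRICE_PER_GIB_HOUR" \
  'BEGIN { printf "Estimated monthly cost: $%.2f\n", g * h * p }'
```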
azure-netapp-files | Azure Netapp Files Create Volumes Smb | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes-smb.md | This article shows you how to create an SMB3 volume. For NFS volumes, see [Creat * You must have already set up a capacity pool. See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md). * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).+* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)] * The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it: 1. Register the feature: |
azure-netapp-files | Azure Netapp Files Create Volumes | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-create-volumes.md | This article shows you how to create an NFS volume. For SMB volumes, see [Create * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md). +* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)] + ## Considerations * Deciding which NFS version to use |
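
Assuming the 50-GiB volume preview feature is registered on the subscription, a minimal Azure CLI sketch for creating a small NFS volume could look like this; all names, the VNet/subnet, and the service level are placeholders.

```bash
# --usage-threshold is the volume quota in GiB; 50 GiB is the preview minimum.
az netappfiles volume create \
  --resource-group my-rg \
  --account-name my-anf-account \
  --pool-name my-pool \
  --name my-small-volume \
  --location westus2 \
  --service-level Premium \
  --usage-threshold 50 \
  --file-path my-small-volume \
  --vnet my-vnet \
  --subnet my-delegated-subnet \
  --protocol-types NFSv3
```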
azure-netapp-files | Azure Netapp Files Introduction | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-introduction.md | Azure NetApp Files is designed to provide high-performance file storage for ente | In-Azure bare-metal flash performance | Fast and reliable all-flash performance with submillisecond latency. | Run performance-intensive workloads in the cloud with on-premises infrastructure-level performance. | Multi-protocol support | Supports multiple protocols, including NFSv3, NFSv4.1, SMB 3.0, SMB 3.1.1, and simultaneous dual-protocol. | Seamlessly integrate with existing infrastructure and workflows without compatibility issues or complex configurations. | | Three flexible performance tiers (Standard, Premium, Ultra) | Three performance tiers with dynamic service-level change capability based on workload needs, including cool access for cold data. | Choose the right performance level for workloads and dynamically adjust performance without overspending on resources.-| Small-to-large volumes | Easily resize file volumes from 100 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost. +| Small-to-large volumes | Easily resize file volumes from 50 GiB up to 100 TiB without downtime. | Scale storage as business needs grow without over-provisioning, avoiding upfront cost. | 1-TiB minimum capacity pool size | 1-TiB capacity pool is a reduced-size storage pool compared to the initial 4-TiB minimum. | Save money by starting with a smaller storage footprint and lower entry point, without sacrificing performance or availability. Scale storage based on growth without high upfront costs. | 2,048-TiB maximum capacity pool | 2048-TiB capacity pool is an increased storage pool compared to the initial 500-TiB maximum. | Reduce waste by creating larger, pooled capacity and performance budget, and share and distribute across volumes. | 50-1,024 TiB large volumes | Store large volumes of data up to 1,024 TiB in a single volume. | Manage large datasets and high-performance workloads with ease. |
azure-netapp-files | Azure Netapp Files Resource Limits | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-resource-limits.md | The following table describes resource limits for Azure NetApp Files: | Number of IPs in a virtual network (including immediately peered VNets) accessing volumes in an Azure NetApp Files hosting VNet | <ul><li>**Basic**: 1000</li><li>**Standard**: [Same standard limits as VMs](../azure-resource-manager/management/azure-subscription-service-limits.md#azure-resource-manager-virtual-networking-limits)</li></ul> | No | | Minimum size of a single capacity pool | 1 TiB* | No | | Maximum size of a single capacity pool | 2,048 TiB | No |-| Minimum size of a single regular volume | 100 GiB | No | +| Minimum size of a single regular volume | 50 GiB | No | | Maximum size of a single regular volume | 100 TiB | No | | Minimum size of a single [large volume](large-volumes-requirements-considerations.md) | 50 TiB | No | | Large volume size increase | 30% of lowest provisioned size | Yes | |
azure-netapp-files | Azure Netapp Files Service Levels | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-service-levels.md | The following diagram shows throughput limit examples of volumes in an auto QoS * In Example 1, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 2 TiB of quota will be assigned a throughput limit of 128 MiB/s (2 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption. -* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota will be assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption. +* In Example 2, a volume from an auto QoS capacity pool with the Premium storage tier that is assigned 100 GiB of quota is assigned a throughput limit of 6.25 MiB/s (0.09765625 TiB * 64 MiB/s). This scenario applies regardless of the capacity pool size or the actual volume consumption. ### Throughput limit examples of volumes in a manual QoS capacity pool |
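
To make the Example 2 arithmetic concrete, the one-liner below reproduces the 6.25-MiB/s figure: 100 GiB is 100/1024 TiB, and the auto QoS Premium tier grants 64 MiB/s per TiB of quota.

```bash
awk 'BEGIN { printf "Throughput limit: %.2f MiB/s\n", (100 / 1024) * 64 }'
# Prints: Throughput limit: 6.25 MiB/s
```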
azure-netapp-files | Azure Netapp Files Understand Storage Hierarchy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-understand-storage-hierarchy.md | -Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources. +Before creating a volume in Azure NetApp Files, you must purchase and set up a pool for provisioned capacity. To set up a capacity pool, you must have a NetApp account. Understanding the storage hierarchy helps you set up and manage your Azure NetApp Files resources. > [!IMPORTANT] > Azure NetApp Files currently doesn't support resource migration between subscriptions. ## <a name="conceptual_diagram_of_storage_hierarchy"></a>Conceptual diagram of storage hierarchy -The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes. +The following example shows the relationships of the Azure subscription, NetApp accounts, capacity pools, and volumes. :::image type="content" source="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png" alt-text="Conceptual diagram of storage hierarchy." lightbox="./media/azure-netapp-files-understand-storage-hierarchy/azure-netapp-files-storage-hierarchy.png"::: When you use a manual QoS capacity pool with, for example, an SAP HANA system, a - A volume's capacity consumption counts against its pool's provisioned capacity. - A volume's throughput consumption counts against its pool's available throughput. See [Manual QoS type](#manual-qos-type). - Each volume belongs to only one pool, but a pool can contain multiple volumes. -- Volumes contain a capacity of between 100 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB.+- Volumes contain a capacity of between 50 GiB and 100 TiB. You can create a [large volume](#large-volumes) with a size of between 50 TiB and 1 PiB. ## Large volumes -Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 100 GiB and 102,400 GiB. +Azure NetApp Files allows you to create large volumes up to 1 PiB in size. Large volumes begin at a capacity of 50 TiB and scale up to 1 PiB. Regular Azure NetApp Files volumes are offered between 50 GiB and 102,400 GiB. For more information, see [Requirements and considerations for large volumes](large-volumes-requirements-considerations.md). |
azure-netapp-files | Backup Restore New Volume | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-restore-new-volume.md | Restoring a backup creates a new volume with the same protocol type. This articl > [!IMPORTANT] > Running multiple concurrent volume restores using Azure NetApp Files backup may increase the time it takes for each individual, in-progress restore to complete. As such, if time is a factor, you should prioritize and sequence the most important volume restores, and wait until the restores are complete before starting another, lower-priority volume restore. -See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup. +See [Requirements and considerations for Azure NetApp Files backup](backup-requirements-considerations.md) for more considerations about using Azure NetApp Files backup. See [Resource limits for Azure NetApp Files](azure-netapp-files-resource-limits.md) for information about minimums and maximums. ## Steps See [Requirements and considerations for Azure NetApp Files backup](backup-requi However, if you restore a volume from the backup list at the NetApp account level, you need to specify the Protocol field. The Protocol field must match the protocol of the original volume. Otherwise, the restore operation fails with the following error: `Protocol Type value mismatch between input and source volume of backupId <backup-id of the selected backup>. Supported protocol type : <Protocol Type of the source volume>` - * The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered (minimum 100 GiB). Once the restore is complete, the volume can be resized depending on the size used. + * The **Quota** value must be **at least 20% greater** than the size of the backup from which the restore is triggered. Once the restore is complete, the volume can be resized depending on the size used. * The **Capacity pool** that the backup is restored into must have sufficient unused capacity to host the new restored volume. Otherwise, the restore operation fails. |
azure-netapp-files | Configure Application Volume Group Sap Hana Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-group-sap-hana-api.md | In a create request, use the following URI format: The request body consists of the _outer_ parameters, the group properties, and an array of volumes to be created, each with their individual outer parameters and volume properties. -The following table describes the request body parameters and group level properties required to create a SAP HANA application volume group. +The following table describes the request body parameters and group level properties required to create an SAP HANA application volume group. | URI parameter | Description | Restrictions for SAP HANA | | - | -- | -- | The following table describes the request body parameters and group level proper | `applicationIdentifier` | Application specific identifier string, following application naming rules | The SAP System ID, which should follow the aforementioned naming rules, for example `SH9` | | `volumes` | Array of volumes to be created (see the next table for volume-granular details) | Volume count depends upon host configuration: <ul><li>Single-host (3-5 volumes) <br /> **Required**: _data_, _log_ and _shared_ <br /> **Optional**: _data-backup_, _log-backup_ </li><li> Multiple-host (two volumes) <br /> **Required**: _data_ and _log_ </li></ul> | -This table describes the request body parameters and volume properties for creating a volume in a SAP HANA application volume group. +This table describes the request body parameters and volume properties for creating a volume in an SAP HANA application volume group. | Volume-level request parameter | Description | Restrictions for SAP HANA | | - | -- | -- | This table describes the request body parameters and volume properties for creat | **Volume properties** | **Description** | **SAP HANA Value Restrictions** | | `creationToken` | Export path name, typically same as the volume name. | None. Example: `SH9-data-mnt00001` | | `throughputMibps` | QoS throughput | This must be between 1 MiBps and 4500 MiBps. You should set throughput based on volume type. | -| `usageThreshhold` | Size of the volume in bytes. This must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. | +| `usageThreshold` | Size of the volume in bytes. This must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | None. You should set volume size depending on the volume type. | | `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for SAP HANA. Only the following rule values can be modified for SAP HANA, the rest _must_ have their default values: <ul><li>`unixReadOnly`: should be false</li><li>`unixReadWrite`: should be true</li><li>`allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions.</li><li>`hasRootAccess`: must be true to install SAP.</li><li>`chownMode`: Specify `chown` mode.</li><li>`nfsv41`: true for data, log, and shared volumes, optionally true for data backup and log backup volumes</li><li>`nfsv3`: optionally true for data backup and log backup volumes</li></ul> All other rule values _must_ be left defaulted. 
| | `volumeSpecName` | Specifies the type of volume for the application volume group being created | SAP HANA volumes must have a value that is one of the following: <ul><li>"data"</li><li>"log"</li><li>"shared"</li><li>"data-backup"</li><li>"log-backup"</li></ul> | | `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. | <ul><li>The "data", "log", and "shared" volumes must each have a PPG specified, preferably a common PPG.</li><li>A PPG must be specified for the "data-backup" and "log-backup" volumes, but it will be ignored during placement.</li></ul> | In the following examples, selected placeholders are specified. You should repla SAP HANA volume groups for the following examples can be created using a sample shell script that calls the API using curl: -1. Extract the subscription ID. This automates the extraction of the subscription ID and generate the authorization token: +1. Extract the subscription ID. This automates the extraction of the subscription ID and generates the authorization token: ```bash subId=$(az account list | jq ".[] | select (.name == \"Pay-As-You-Go\") | .id" -r) echo "Subscription ID: $subId" |
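
Because `usageThreshold` is expressed in bytes, it's easy to miscompute volume sizes; this small shell check (an illustration, not part of the API samples above) converts GiB to the byte values the request body expects.

```bash
# Convert a GiB size to the byte value expected by usageThreshold.
gib=50
echo $(( gib * 1024 * 1024 * 1024 ))   # 53687091200 bytes for 50 GiB

gib=100
echo $(( gib * 1024 * 1024 * 1024 ))   # 107374182400 bytes, matching the 100-GiB example
```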
azure-netapp-files | Configure Application Volume Oracle Api | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-application-volume-oracle-api.md | The following tables describe the request body parameters and volume properties |||| | `creationToken` | Export path name, typically same as the volume name. | `<sid>-ora-data1` | | `throughputMibps` | QoS throughput | You should set throughput based on volume type between 1 MiBps and 4500 MiBps. |-| `usageThreshhold` | Size of the volume in bytes. This value must be in the 100 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. | +| `usageThreshold` | Size of the volume in bytes. This value must be in the 50 GiB to 100-TiB range. For instance, 100 GiB = 107374182400 bytes. | You should set volume size in bytes. | | `exportPolicyRule` | Volume export policy rule | At least one export policy rule must be specified for Oracle. Only the following rule values can be modified for Oracle. The rest *must* have their default values: <br><br> - `unixReadOnly`: should be false. <br><br> - `unixReadWrite`: should be true. <br><br> - `allowedClients`: specify allowed clients. Use `0.0.0.0/0` for no restrictions. <br><br> - `hasRootAccess`: must be true to use root user for installation. <br><br> - `chownMode`: Specify `chown` mode. <br><br> - Set `nfsv41` or `nfsv3` to true. It's recommended to use the same protocol version for all volumes. <br> <br> All other rule values _must_ be left defaulted. | | `volumeSpecName` | Specifies the type of volume for the application volume group being created | Oracle volumes must have a value that is one of the following: <br><br> - `ora-data1` <br> - `ora-data2` <br> - `ora-data3` <br> - `ora-data4` <br> - `ora-data5` <br> - `ora-data6` <br> - `ora-data7` <br> - `ora-data8` <br> - `ora-log` <br> - `ora-log-mirror` <br> - `ora-binary` <br> - `ora-backup` <br> | | `proximityPlacementGroup` | Resource ID of the Proximity Placement Group (PPG) for proper placement of the volume. This parameter is optional. If the region has zones available, use of zones always takes priority. | The `data`, `log`, `mirror-log`, `ora-binary`, and `backup` volumes must each have a PPG specified, preferably a common PPG. | |
azure-netapp-files | Configure Customer Managed Keys Hardware | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys-hardware.md | + + Title: Configure customer-managed keys with managed Hardware Security Module for Azure NetApp Files volume encryption +description: Learn how to encrypt data in Azure NetApp Files with customer-managed keys using the Hardware Security Module ++documentationcenter: '' +++editor: '' ++ms.assetid: +++ na ++ Last updated : 08/08/2024+++# Configure customer-managed keys with managed Hardware Security Module for Azure NetApp Files volume encryption ++Azure NetApp Files volume encryption with customer-managed keys with the managed Hardware Security Module (HSM) is an extension to the [customer-managed keys for Azure NetApp Files volume encryption feature](configure-customer-managed-keys.md). Customer-managed keys with HSM allows you to store your encryption keys in a more secure FIPS 140-2 Level 3 HSM instead of the FIPS 140-2 Level 1 or Level 2 service used by Azure Key Vault (AKV). ++## Requirements ++* Customer-managed keys with managed HSM is supported using the 2022.11 or later API version. +* Customer-managed keys with managed HSM is only supported for Azure NetApp Files accounts that don't have existing encryption. +* Before creating a volume using customer-managed keys with managed HSM, you must have: + * created an [Azure Key Vault](/azure/key-vault/general/overview), containing at least one key. + * The key vault must have soft delete and purge protection enabled. + * The key must be type RSA. + * created a VNet with a subnet delegated to Microsoft.NetApp/volumes. + * a user- or system-assigned identity for your Azure NetApp Files account. + * [provisioned and activated a managed HSM](/azure/key-vault/managed-hsm/quick-create-cli). ++## Supported regions ++* Australia East +* Brazil South +* Canada Central +* Central US +* East Asia +* East US +* East US 2 +* France Central +* Japan East +* Korea Central +* North Central US +* North Europe +* Norway East +* Norway West +* South Africa North +* South Central US +* Southeast Asia +* Sweden Central +* Switzerland North +* UAE Central +* UAE North +* UK South +* West US +* West US 2 +* West US 3 ++## Register the feature ++This feature is currently in preview. You need to register the feature before using it for the first time. After registration, the feature is enabled and works in the background. No UI control is required. ++1. Register the feature: ++ ```azurepowershell-interactive + Register-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFManagedHsmEncryption + ``` ++2. Check the status of the feature registration: ++ > [!NOTE] + > The **RegistrationState** may be in the `Registering` state for up to 60 minutes before changing to `Registered`. Wait until the status is **Registered** before continuing. ++ ```azurepowershell-interactive + Get-AzProviderFeature -ProviderNamespace Microsoft.NetApp -FeatureName ANFManagedHsmEncryption + ``` +You can also use [Azure CLI commands](/cli/azure/feature) `az feature register` and `az feature show` to register the feature and display the registration status. ++## Configure customer-managed keys with managed HSM for system-assigned identity ++When you configure customer-managed keys with a system-assigned identity, Azure configures the NetApp account automatically by adding a system-assigned identity. 
The access policy is created on your Azure Key Vault with key permissions of Get, Encrypt, and Decrypt. ++### Requirements ++To use a system-assigned identity, the Azure Key Vault must be configured to use Vault access policy as its permission model. Otherwise, you must use a user-assigned identity. ++### Steps ++1. In the Azure portal, navigate to Azure NetApp Files then select **Encryption**. +1. In the **Encryption** menu, provide the following values: + * For **Encryption key source**, select **Customer Managed Key**. + * For **Key URI**, select **Enter Key URI** then provide the URI for the managed HSM. + * Select the NetApp **Subscription**. + * For **Identity type**, select **System-assigned**. ++ :::image type="content" source="./media/configure-customer-managed-keys/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="./media/configure-customer-managed-keys//key-enter-uri.png"::: ++1. Select **Save**. ++## Configure customer-managed keys with managed HSM for user-assigned identity ++1. In the Azure portal, navigate to Azure NetApp Files then select **Encryption**. +1. In the **Encryption** menu, provide the following values: + * For **Encryption key source**, select **Customer Managed Key**. + * For **Key URI**, select **Enter Key URI** then provide the URI for the managed HSM. + * Select the NetApp **Subscription**. + * For **Identity type**, select **User-assigned**. +1. When you select **User-assigned**, a context pane opens to select the identity. + * If your Azure Key Vault is configured to use a Vault access policy, Azure configures the NetApp account automatically and adds the user-assigned identity to your NetApp account. The access policy is created on your Azure Key Vault with key permissions of Get, Encrypt, and Decrypt. + * If your Azure Key Vault is configured to use Azure role-based access control (RBAC), ensure the selected user-assigned identity has a role assignment on the key vault with permissions for data actions: + * "Microsoft.KeyVault/vaults/keys/read" + * "Microsoft.KeyVault/vaults/keys/encrypt/action" + * "Microsoft.KeyVault/vaults/keys/decrypt/action" + The user-assigned identity you select is added to your NetApp account. Due to RBAC being customizable, the Azure portal doesn't configure access to the key vault. For more information, see [Using Azure RBAC secret, key, and certificate permissions with Key Vault](/azure/key-vault/general/rbac-guide#using-azure-rbac-secret-key-and-certificate-permissions-with-key-vault) ++ :::image type="content" source="./media/configure-customer-managed-keys/encryption-user-assigned.png" alt-text="Screenshot of user-assigned submenu." lightbox="./media/configure-customer-managed-keys/encryption-user-assigned.png"::: ++1. Select **Save**. ++## Next steps ++* [Configure customer-managed keys](configure-customer-managed-keys.md) +* [Security FAQs](faq-security.md) |
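
Since the article notes that the feature can also be registered with Azure CLI, the equivalent commands would be along these lines; the `--query` expression is an assumption for convenience.

```bash
# Register the preview feature with Azure CLI.
az feature register --namespace Microsoft.NetApp --name ANFManagedHsmEncryption

# Check the registration state; wait until it reports "Registered".
az feature show --namespace Microsoft.NetApp --name ANFManagedHsmEncryption \
  --query properties.state
```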
azure-netapp-files | Configure Customer Managed Keys | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/configure-customer-managed-keys.md | For more information about Azure Key Vault and Azure Private Endpoint, refer to: :::image type="content" source="./media/configure-customer-managed-keys/key-enter-uri.png" alt-text="Screenshot of the encryption menu showing key URI field." lightbox="./media/configure-customer-managed-keys/key-enter-uri.png"::: 1. Select the identity type that you want to use for authentication to the Azure Key Vault. If your Azure Key Vault is configured to use Vault access policy as its permission model, both options are available. Otherwise, only the user-assigned option is available.- * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically with the following process: A system-assigned identity is added to your NetApp account. An access policy is to be created on your Azure Key Vault with key permissions Get, Encrypt, Decrypt. + * If you choose **System-assigned**, select the **Save** button. The Azure portal configures the NetApp account automatically by adding a system-assigned identity to your NetApp account. An access policy is also created on your Azure Key Vault with key permissions Get, Encrypt, Decrypt. :::image type="content" source="./media/configure-customer-managed-keys/encryption-system-assigned.png" alt-text="Screenshot of the encryption menu with system-assigned options." lightbox="./media/configure-customer-managed-keys/encryption-system-assigned.png"::: This section lists error messages and possible resolutions when Azure NetApp Fil ## Next steps * [Azure NetApp Files API](https://github.com/Azure/azure-rest-api-specs/tree/master/specification/netapp/resource-manager/Microsoft.NetApp/stable/2019-11-01)+* [Configure customer-managed keys with managed Hardware Security Module](configure-customer-managed-keys-hardware.md) |
azure-netapp-files | Create Volumes Dual Protocol | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/create-volumes-dual-protocol.md | To create NFS volumes, see [Create an NFS volume](azure-netapp-files-create-volu See [Create a capacity pool](azure-netapp-files-set-up-capacity-pool.md). * A subnet must be delegated to Azure NetApp Files. See [Delegate a subnet to Azure NetApp Files](azure-netapp-files-delegate-subnet.md).+* [!INCLUDE [50 GiB volume preview](./includes/50-gib-volume.md)] * The [non-browsable shares](#non-browsable-share) and [access-based enumeration](#access-based-enumeration) features are currently in preview. You must register each feature before you can use it: 1. Register the feature: |
azure-netapp-files | Faq Security | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/faq-security.md | NFSv3 protocol doesn't provide support for encryption, so this data-in-flight ca ## Can the storage be encrypted at rest? -All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard. Learn [how encryption keys managed](#how-are-encryption-keys-managed). +All Azure NetApp Files volumes are encrypted using the FIPS 140-2 standard. Learn [how encryption keys are managed](#how-are-encryption-keys-managed). ## Is Azure NetApp Files cross-region and cross-zone replication traffic encrypted? Alternatively, [customer-managed keys for Azure NetApp Files volume encryption]( Azure NetApp Files supports the ability to move existing volumes using platform-managed keys to customer-managed keys. Once you complete the transition, you cannot revert back to platform-managed keys. For additional information, see [Transition an Azure NetApp Files volume to customer-managed keys](configure-customer-managed-keys.md#transition). -Also, customer-managed keys using Azure Dedicated HSM is supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access [with the Azure NetApp Files feedback form](https://aka.ms/ANFFeedback). As capacity becomes available, requests will be approved. +<!-- Also, customer-managed keys using Azure Dedicated HSM is supported on a controlled basis. Support is currently available in the East US, South Central US, West US 2, and US Gov Virginia regions. You can request access [with the Azure NetApp Files feedback form](https://aka.ms/ANFFeedback). As capacity becomes available, requests will be approved. --> ## Can I configure the NFS export policy rules to control access to the Azure NetApp Files service mount target? |
azure-netapp-files | Whats New | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/whats-new.md | -## August 2024 +## August 2024 ++* [Volume encryption with customer-managed keys with managed Hardware Security Module (HSM)](configure-customer-managed-keys-hardware.md) (Preview) ++ Volume encryption with customer-managed keys with managed HSM extends the [customer-managed keys](configure-customer-managed-keys.md), enabling you to store your keys in a more secure FIPS 140-2 Level 3 HSM service instead of the FIPS 140-2 Level 1 or 2 encryption offered with Azure Key Vault. ++* [Volume enhancement: Azure NetApp Files now supports 50 GiB minimum volume sizes](azure-netapp-files-resource-limits.md) (preview) ++ You can now create an Azure NetApp Files volume as small as 50 GiB--a reduction from the initial minimum size of 100 GiB. 50 GiB volumes save costs for workloads that require volumes smaller than 100 GiB, allowing you to appropriately size storage volumes. * [Azure NetApp Files double encryption at rest](double-encryption-at-rest.md) is now generally available (GA). |
backup | Backup Support Matrix Iaas | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md | Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 06/20/2024 Last updated : 08/09/2024 -+ Backup and restore of deduplicated VMs or disks | Azure Backup doesn't support d Adding a disk to a protected VM | Supported. Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up.-[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported. <br><br> You can exclude shared disk with Enhanced policy and backup the other supported disks in the VM. +[Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported. <br><br> - You can exclude the shared disk with Enhanced policy and back up the other supported disks in the VM. <br><br> - You can use S2D to create a shared disk or standalone volumes by combining capacities from disks in different VMs. Azure Backup doesn't support backup of a shared volume (between VMs for a database cluster or cluster configuration) created using S2D. <a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](../virtual-machines/disks-types.md#ultra-disk-limitations). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Ultra disks. <a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](../virtual-machines/disks-types.md#regional-availability). <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks and GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks. [Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. |
batch | Account Key Rotation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/account-key-rotation.md | + + Title: Rotate Batch account keys +description: Learn how to rotate Batch account shared key credentials. + Last updated : 08/09/2024++# Batch account shared key credential rotation ++Batch accounts can be authenticated in one of two ways: via shared key or via Microsoft Entra ID. Batch accounts +with shared key authentication enabled have two keys associated with them to allow for key rotation scenarios. ++> [!TIP] +> It's highly recommended to avoid using shared key authentication with Batch accounts. The preferred authentication +> mechanism is through Microsoft Entra ID. You can disable shared key authentication during account creation or you +> can update allowed [Authentication Modes](/rest/api/batchmanagement/batch-account/create#authenticationmode) for an +> active account. ++## Batch shared key rotation procedure ++Azure Batch accounts have two shared keys, `primary` and `secondary`. It's important not to regenerate both +keys at the same time; instead, regenerate them one at a time to avoid potential downtime. ++> [!WARNING] +> Once a key has been regenerated, it is no longer valid and the prior key cannot be recovered for use. Ensure +> that your application update process follows the recommended key rotation procedure to prevent losing access +> to your Batch account. ++The typical key rotation procedure is as follows: ++1. Normalize your application code to use either the primary or secondary key. If you're using both keys in your +application simultaneously, then any rotation procedure leads to authentication errors. The following steps assume +that you're using the `primary` key in your application. +1. Regenerate the `secondary` key. +1. Update your application code to utilize the newly regenerated `secondary` key. Deploy these changes and +ensure that everything is working as expected. +1. Regenerate the `primary` key. +1. Optionally update your application code to use the `primary` key and deploy. This step isn't strictly +necessary as long as you're tracking which key is used in your application and deployed. ++### Rotation in Azure portal ++First, sign in to the [Azure portal](https://portal.azure.com). Then, navigate to the **Keys** blade of your +Batch account under **Settings**. Then select either `Regenerate primary` or `Regenerate secondary` to create a new key. ++ :::image type="content" source="media/account-key-rotation/batch-account-key-rotation.png" alt-text="Screenshot showing key rotation."::: ++## See also ++- Learn more about [Batch accounts](accounts.md). +- Learn how to authenticate with [Batch Service APIs](batch-aad-auth.md) +or [Batch Management APIs](batch-aad-auth-management.md) with Microsoft Entra ID. |
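
For teams scripting the rotation procedure above, a minimal Azure CLI sketch follows; the account and resource group names are placeholders.

```bash
# Step 2: regenerate the secondary key while the application still uses the primary key.
az batch account keys renew \
  --name mybatchaccount \
  --resource-group my-rg \
  --key-name secondary

# After the application has been switched to the new secondary key,
# step 4: regenerate the primary key.
az batch account keys renew \
  --name mybatchaccount \
  --resource-group my-rg \
  --key-name primary
```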
batch | Batch Aad Auth Management | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/batch/batch-aad-auth-management.md | Last updated 04/27/2017 -# Authenticate Batch Management solutions with Active Directory +# Authenticate Batch Management solutions with Microsoft Entra ID -Applications that call the Azure Batch Management service authenticate with [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) (Microsoft Entra ID). Microsoft Entra ID is Microsoft's multi-tenant cloud based directory and identity management service. Azure itself uses Microsoft Entra ID for the authentication of its customers, service administrators, and organizational users. +Applications that call the Azure Batch Management service authenticate with [Microsoft Authentication Library](../active-directory/develop/msal-overview.md) (Microsoft Entra ID). Microsoft Entra ID is Microsoft's multitenant cloud based directory and identity management service. Azure itself uses Microsoft Entra ID for the authentication of its customers, service administrators, and organizational users. The Batch Management .NET library exposes types for working with Batch accounts, account keys, applications, and application packages. The Batch Management .NET library is an Azure resource provider client, and is used together with [Azure Resource Manager](../azure-resource-manager/management/overview.md) to manage these resources programmatically. Microsoft Entra ID is required to authenticate requests made through any Azure resource provider client, including the Batch Management .NET library, and through Azure Resource Manager. |
certification | Validate Device Edge Secured Core | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/certification/validate-device-edge-secured-core.md | + + Title: Validate device is Edge Secured-core enabled +description: Instructions to validate device is Edge Secured-core enabled +++ Last updated : 08/06/2024 ++++# Validate your Edge Secured-core certified devices +To check whether your device is Edge Secured-core enabled: +1. Go to Windows Icon > Security Settings > Device Security. The "Secured-core PC" status is available at the top of the screen. If the status is missing, reach out to the device builder for assistance. ++2. Go to "Core isolation" to ensure that "Memory integrity" is on. ++3. Go to "Security processor" to ensure that the Trusted Platform Module "Specification version" is 2.0. ++4. Go to "Data encryption" to ensure that "Device encryption" is on. + |
communication-services | Call Automation | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/call-automation/call-automation.md | The following list presents the set of features that are currently available in | | Place new outbound call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ | | | Redirect* (forward) a call to one or more endpoints | ✔️ | ✔️ | ✔️ | ✔️ | | | Reject an incoming call | ✔️ | ✔️ | ✔️ | ✔️ |-| | Connect to an ongoing call or Room | ✔️ | ✔️ | ✔️ | ✔️ | +| | Connect to an ongoing call or Room (in preview) | ✔️ | ✔️ | ✔️ | ✔️ | | Mid-call scenarios | Add one or more endpoints to an existing call | ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel adding an endpoint to an existing call | ✔️ | ✔️ | ✔️ | ✔️ | | | Play Audio from an audio file | ✔️ | ✔️ | ✔️ | ✔️ | The following list presents the set of features that are currently available in | | Send DTMF | ✔️ | ✔️ | ✔️ | ✔️ | | | Mute participant | ✔️ | ✔️ | ✔️ | ✔️ | | | Remove one or more endpoints from an existing call| ✔️ | ✔️ | ✔️ | ✔️ |-| | Blind Transfer* a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ | -| | Blind Transfer* a participant from group call to another endpoint| ✔️ | ✔️ | ✔️ | ✔️ | +| | Blind Transfer a 1:1 call to another endpoint | ✔️ | ✔️ | ✔️ | ✔️ | +| | Blind Transfer a participant from group call to another endpoint| ✔️ | ✔️ | ✔️ | ✔️ | | | Hang up a call (remove the call leg) | ✔️ | ✔️ | ✔️ | ✔️ | | | Terminate a call (remove all participants and end call)| ✔️ | ✔️ | ✔️ | ✔️ | | | Cancel media operations | ✔️ | ✔️ | ✔️ | ✔️ | The following list presents the set of features that are currently available in | | List all participants in a call | ✔️ | ✔️ | ✔️ | ✔️ | | Call Recording | Start/pause/resume/stop recording | ✔️ | ✔️ | ✔️ | ✔️ | -*Transfer or redirect of a VoIP call to a phone number is currently not supported. +*Redirect of a VoIP call to a phone number is not supported. ## Architecture Using the IncomingCall event from Event Grid, a call can be redirected to one or **Create Call** Create Call action can be used to place outbound calls to phone numbers and to other communication users. Use cases include your application placing outbound calls to proactively inform users about an outage or notify about an order update. -**Connect Call** +**Connect Call** (in preview) Connect Call action can be used to connect to an ongoing call and take call actions on it. You can also use this action to connect and manage a Rooms call programmatically, like performing PSTN dial outs for Room using your service. ### Mid-call actions |
communication-services | Actions For Call Control | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/how-tos/call-automation/actions-for-call-control.md | The response provides you with CallConnection object that you can use to take fu 2. `ParticipantsUpdated` event that contains the latest list of participants in the call. ![Sequence diagram for placing an outbound call.](media/make-call-flow.png) -## Connect to a call +## Connect to a call (in preview) + Connect action enables your service to establish a connection with an ongoing call and take actions on it. This is useful for managing a Rooms call, or when client applications started a 1:1 or group call that Call Automation isn't part of. Connection is established using the CallLocator property and can be of types: ServerCallLocator, GroupCallLocator, and RoomCallLocator. These IDs can be found when the call is originally established or a Room is created, and are also published as part of the [CallStarted](./../../../event-grid/communication-services-voice-video-events.md#microsoftcommunicationcallstarted) event. To connect to any 1:1 or group call, use the ServerCallLocator. If you started a call using GroupCallId, you can also use the GroupCallLocator. |
communication-services | Manage Rooms Call | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/rooms/manage-rooms-call.md | + + Title: Quickstart - Manage a room call ++description: In this quickstart, you learn how to manage a room call using Calling SDKs and Call Automation SDKs +++++ Last updated : 07/10/2024++++++# Quickstart: Manage a room call ++## Introduction +During an Azure Communication Services (ACS) room call, you can manage the call using Calling SDKs, Call Automation SDKs, or both. In a room call, you can control in-call actions using both the roles assigned to participants and properties configured in the room. The participant's roles control capabilities permitted per participant, while room properties apply to the room call as a whole. ++## Calling SDKs +Calling SDK is a client-side calling library enabling participants in a room call to perform several in-call operations, such as screen share, turn on/off video, mute/unmute, and so on. For the full list of capabilities, see [Calling SDK Overview](../../concepts/voice-video-calling/calling-sdk-features.md#detailed-capabilities). ++You control the capabilities based on roles assigned to participants in the call. For example, only the presenter can screen share. For participant roles and permissions, see [Rooms concepts](../../concepts/rooms/room-concept.md#predefined-participant-roles-and-permissions). ++## Call Automation SDKs +Call Automation SDK is a server-side library enabling administrators to manage an ongoing room call in a central and controlled environment. Unlike Calling SDK, Call Automation SDK operations are role agnostic. Therefore, a call administrator can perform several in-call operations on behalf of the room call participants. ++The following lists describe common in-call actions available in a room call. ++### Connect to a room call +Call Automation must connect to an existing room call before performing any in-call operations. The `CallConnected` or `ConnectFailed` events are raised using callback mechanisms to indicate whether a connect operation succeeded or failed, respectively. ++### [csharp](#tab/csharp) ++```csharp +Uri callbackUri = new Uri("https://<myendpoint>/Events"); //the callback endpoint where you want to receive subsequent events +CallLocator roomCallLocator = new RoomCallLocator("<RoomId>"); +ConnectCallResult response = await client.ConnectAsync(roomCallLocator, callbackUri); +``` ++### [Java](#tab/java) ++```java +String callbackUri = "https://<myendpoint>/Events"; //the callback endpoint where you want to receive subsequent events +CallLocator roomCallLocator = new RoomCallLocator("<RoomId>"); +ConnectCallResult response = client.connectCall(roomCallLocator, callbackUri).block(); +``` ++### [JavaScript](#tab/javascript) ++```javascript +const roomCallLocator = { kind: "roomCallLocator", id: "<RoomId>" }; +const callbackUri = "https://<myendpoint>/Events"; // the callback endpoint where you want to receive subsequent events +const response = await client.connectCall(roomCallLocator, callbackUri); +``` ++### [Python](#tab/python) ++```python +callback_uri = "https://<myendpoint>/Events" # the callback endpoint where you want to receive subsequent events +room_call_locator = RoomCallLocator("<room_id>") +call_connection_properties = client.connect_call(call_locator=room_call_locator, callback_url=callback_uri) +``` +-- ++Once successfully connected to a room call, a `CallConnected` event is sent to your callback URI. 
You can use `callConnectionId` to retrieve a call connection on the room call as needed. The following sample code snippets use the `callConnectionId` to demonstrate this function. +++### Add PSTN Participant +Using Call Automation, you can dial out to a PSTN number and add the participant into a room call. You must, however, set up a room to enable the PSTN dial-out option (`EnabledPSTNDialout` set to `true`), and the Azure Communication Services resource must have a valid phone number provisioned. ++For more information, see [Rooms quickstart](../../quickstarts//rooms/get-started-rooms.md?tabs=windows&pivots=platform-azcli#enable-pstn-dial-out-capability-for-a-room). +++### [csharp](#tab/csharp) ++```csharp +var callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS-provisioned phone number for the caller +var callThisPerson = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); // The target phone number to dial out to +AddParticipantResult response = await client.GetCallConnection(callConnectionId).AddParticipantAsync(callThisPerson); +``` ++### [Java](#tab/java) ++```java +PhoneNumberIdentifier callerIdNumber = new PhoneNumberIdentifier("+16044561234"); // This is the ACS-provisioned phone number for the caller +CallInvite callInvite = new CallInvite(new PhoneNumberIdentifier("+16041234567"), callerIdNumber); // The phone number participant to dial out to +AddParticipantOptions addParticipantOptions = new AddParticipantOptions(callInvite); +Response<AddParticipantResult> addParticipantResultResponse = client.getCallConnectionAsync(callConnectionId) + .addParticipantWithResponse(addParticipantOptions).block(); +``` ++### [JavaScript](#tab/javascript) ++```javascript +const callInvite = { + targetParticipant: { phoneNumber: "+18008008800" }, // The phone number participant to dial out to + sourceCallIdNumber: { phoneNumber: "+18888888888" } // This is the ACS-provisioned phone number for the caller +}; +const response = await client.getCallConnection(callConnectionId).addParticipant(callInvite); +``` ++### [Python](#tab/python) ++```python +caller_id_number = PhoneNumberIdentifier( + "+18888888888" +) # This is the ACS-provisioned phone number for the caller +target = PhoneNumberIdentifier("+18008008800") # The phone number participant to dial out to ++call_connection_client = call_automation_client.get_call_connection( + "call_connection_id" +) +result = call_connection_client.add_participant( + target, + operation_context="Your context", + operation_callback_url="<url_endpoint>" +) +``` +-- ++### Remove PSTN Participant ++### [csharp](#tab/csharp) ++```csharp ++var removeThisUser = new PhoneNumberIdentifier("+16044561234"); ++// Remove a participant from the call with optional parameters +var removeParticipantOptions = new RemoveParticipantOptions(removeThisUser) +{ + OperationContext = "operationContext", + OperationCallbackUri = new Uri("uri_endpoint") // Sending event to a non-default endpoint +}; ++RemoveParticipantResult result = await client.GetCallConnection(callConnectionId).RemoveParticipantAsync(removeParticipantOptions); +``` ++### [Java](#tab/java) ++```java +CommunicationIdentifier removeThisUser = new PhoneNumberIdentifier("+16044561234"); +RemoveParticipantOptions removeParticipantOptions = new RemoveParticipantOptions(removeThisUser) + .setOperationContext("<operation_context>") + .setOperationCallbackUrl("<url_endpoint>"); +Response<RemoveParticipantResult> removeParticipantResultResponse = 
client.getCallConnectionAsync(callConnectionId) + .removeParticipantWithResponse(removeParticipantOptions); +``` ++### [JavaScript](#tab/javascript) ++```javascript +const removeThisUser = { phoneNumber: "+16044561234" }; +const removeParticipantResult = await client.getCallConnection(callConnectionId).removeParticipant(removeThisUser); +``` ++### [Python](#tab/python) ++```python +remove_this_user = PhoneNumberIdentifier("+16044561234") +call_connection_client = call_automation_client.get_call_connection( + "call_connection_id" +) +result = call_connection_client.remove_participant(remove_this_user, operation_context="Your context", operation_callback_url="<url_endpoint>") +``` +-- ++### Send DTMF +Send a list of DTMF tones to an external participant. ++### [csharp](#tab/csharp) +```csharp +var tones = new DtmfTone[] { DtmfTone.One, DtmfTone.Two, DtmfTone.Three, DtmfTone.Pound }; +var sendDtmfTonesOptions = new SendDtmfTonesOptions(tones, new PhoneNumberIdentifier(calleePhonenumber)) +{ + OperationContext = "dtmfs-to-ivr" +}; ++var sendDtmfAsyncResult = await callAutomationClient.GetCallConnection(callConnectionId).GetCallMedia().SendDtmfTonesAsync(sendDtmfTonesOptions); ++``` +### [Java](#tab/java) +```java +List<DtmfTone> tones = Arrays.asList(DtmfTone.ONE, DtmfTone.TWO, DtmfTone.THREE, DtmfTone.POUND); +SendDtmfTonesOptions options = new SendDtmfTonesOptions(tones, new PhoneNumberIdentifier(c2Target)); +options.setOperationContext("dtmfs-to-ivr"); +client.getCallConnectionAsync(callConnectionId) + .getCallMediaAsync() + .sendDtmfTonesWithResponse(options) + .block(); +``` +### [JavaScript](#tab/javascript) +```javascript +const tones = [DtmfTone.One, DtmfTone.Two, DtmfTone.Three]; +const sendDtmfTonesOptions: SendDtmfTonesOptions = { + operationContext: "dtmfs-to-ivr" +}; +const result: SendDtmfTonesResult = await client.getCallConnection(callConnectionId) + .getCallMedia() + .sendDtmfTones(tones, { + phoneNumber: c2Target + }, sendDtmfTonesOptions); +console.log("sendDtmfTones, result=%s", result); +``` +### [Python](#tab/python) +```python +tones = [DtmfTone.ONE, DtmfTone.TWO, DtmfTone.THREE] +call_connection_client = call_automation_client.get_call_connection( + "call_connection_id" +) ++result = call_connection_client.send_dtmf_tones( + tones = tones, + target_participant = PhoneNumberIdentifier(c2_target), + operation_context = "dtmfs-to-ivr") +``` +-- ++### Call Recording +Azure Communication Services rooms support recording capabilities including `start`, `stop`, `pause`, `resume`, and so on, provided by Call Automation. See the following code snippets to start/stop/pause/resume a recording in a room call. For a complete list of actions, see [Call Automation recording](../../concepts/voice-video-calling/call-recording.md#get-full-control-over-your-recordings-with-our-call-recording-apis). ++### [csharp](#tab/csharp) +```csharp +// Start recording +StartRecordingOptions recordingOptions = new StartRecordingOptions(new ServerCallLocator("<ServerCallId>")) +{ + RecordingContent = RecordingContent.Audio, + RecordingChannel = RecordingChannel.Unmixed, + RecordingFormat = RecordingFormat.Wav, + RecordingStateCallbackUri = new Uri("<CallbackUri>"), + RecordingStorage = RecordingStorage.CreateAzureBlobContainerRecordingStorage(new Uri("<YOUR_STORAGE_CONTAINER_URL>")) +}; +Response<RecordingStateResult> response = await callAutomationClient.GetCallRecording() +.StartAsync(recordingOptions); ++// Pause recording using recordingId received in response of start recording. 
+var pauseRecording = await callAutomationClient.GetCallRecording().PauseAsync(recordingId); ++// Resume recording using recordingId received in response of start recording. +var resumeRecording = await callAutomationClient.GetCallRecording().ResumeAsync(recordingId); ++// Stop recording using recordingId received in response of start recording. +var stopRecording = await callAutomationClient.GetCallRecording().StopAsync(recordingId); ++``` +### [Java](#tab/java) +```java +// Start recording +StartRecordingOptions recordingOptions = new StartRecordingOptions(new ServerCallLocator("<serverCallId>")) + .setRecordingChannel(RecordingChannel.UNMIXED) + .setRecordingFormat(RecordingFormat.WAV) + .setRecordingContent(RecordingContent.AUDIO) + .setRecordingStateCallbackUrl("<recordingStateCallbackUrl>"); ++Response<RecordingStateResult> response = callAutomationClient.getCallRecording() +.startWithResponse(recordingOptions, null); ++// Pause recording using recordingId received in response of start recording +Response<Void> response = callAutomationClient.getCallRecording() + .pauseWithResponse(recordingId, null); ++// Resume recording using recordingId received in response of start recording +Response<Void> response = callAutomationClient.getCallRecording() + .resumeWithResponse(recordingId, null); ++// Stop recording using recordingId received in response of start recording +Response<Void> response = callAutomationClient.getCallRecording() + .stopWithResponse(recordingId, null); ++``` +### [JavaScript](#tab/javascript) +```javascript +// Start recording +var locator: CallLocator = { id: "<ServerCallId>", kind: "serverCallLocator" }; ++var options: StartRecordingOptions = +{ + callLocator: locator, + recordingContent: "audio", + recordingChannel:"unmixed", + recordingFormat: "wav", + recordingStateCallbackEndpointUrl: "<CallbackUri>" +}; +var response = await callAutomationClient.getCallRecording().start(options); ++// Pause recording using recordingId received in response of start recording +var pauseRecording = await callAutomationClient.getCallRecording().pause(recordingId); ++// Resume recording using recordingId received in response of start recording. +var resumeRecording = await callAutomationClient.getCallRecording().resume(recordingId); ++// Stop recording using recordingId received in response of start recording +var stopRecording = await callAutomationClient.getCallRecording().stop(recordingId); ++``` +### [Python](#tab/python) +```python +# Start recording +response = call_automation_client.start_recording(call_locator=ServerCallLocator(server_call_id), + recording_content_type = RecordingContent.AUDIO, + recording_channel_type = RecordingChannel.UNMIXED, + recording_format_type = RecordingFormat.WAV, + recording_state_callback_url = "<CallbackUri>") ++# Pause recording using recording_id received in response of start recording +pause_recording = call_automation_client.pause_recording(recording_id = recording_id) ++# Resume recording using recording_id received in response of start recording +resume_recording = call_automation_client.resume_recording(recording_id = recording_id) ++# Stop recording using recording_id received in response of start recording +stop_recording = call_automation_client.stop_recording(recording_id = recording_id) +``` +-- ++### Terminate a Call +You can use the Call Automation SDK Hang Up action to terminate a call. When the Hang Up action completes, the SDK publishes a `CallDisconnected` event. 
++### [csharp](#tab/csharp) ++```csharp +_ = await client.GetCallConnection(callConnectionId).HangUpAsync(forEveryone: true); +``` ++### [Java](#tab/java) ++```java +Response<Void> response = client.getCallConnectionAsync(callConnectionId).hangUpWithResponse(true).block(); +``` ++### [JavaScript](#tab/javascript) ++```javascript +await callConnection.hangUp(true); +``` ++### [Python](#tab/python) ++```python +call_connection_client = call_automation_client.get_call_connection( + "call_connection_id" +) ++call_connection_client.hang_up(is_for_everyone=True) +``` +-- ++## Other Actions +The following in-call actions are also supported in a room call. +1. Add participant (ACS identifier) +1. Remove participant (ACS identifier) +1. Cancel add participant (ACS identifier and PSTN number) +1. Hang up call +1. Get participant (ACS identifier and PSTN number) +1. Get multiple participants (ACS identifier and PSTN number) +1. Get latest info about a call +1. Play both audio files and text +1. Play to all participants (both audio files and text) +1. Recognize both DTMF and speech +1. Recognize continuous DTMF ++For more information, see [call actions](../../how-tos/call-automation/actions-for-call-control.md?branch=pr-en-us-280574&tabs=csharp) and [media actions](../../how-tos/call-automation/control-mid-call-media-actions.md?branch=pr-en-us-280574&tabs=csharp). A brief sketch of the **Get participant** action follows this entry. ++## Next steps ++In this section, you learned how to: +> [!div class="checklist"] +> - Join a room call from your application +> - Add in-call actions into a room call using calling SDKs +> - Add in-call actions into a room call using Call Automation SDKs ++You may also want to: + - Learn about [Rooms concept](../../concepts/rooms/room-concept.md) + - Learn about [Calling SDKs features](../../concepts/voice-video-calling/calling-sdk-features.md) + - Learn about [Call Automation concepts](../../concepts/call-automation/call-automation.md) |
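Relating to the **Get participant** action listed above, the following is a minimal, hedged C# sketch rather than the article's own sample. It assumes the `callAutomationClient` and `callConnectionId` objects from the earlier snippets, a placeholder ACS raw ID, and the `GetParticipantAsync(CommunicationIdentifier)` overload found in recent Call Automation SDK versions; verify the signature against the SDK version you use.

```csharp
// Fetch details for a single participant in the room call.
// "<acs-user-raw-id>" is a placeholder (for example, "8:acs:...") you must replace.
Response<CallParticipant> participantResponse = await callAutomationClient
    .GetCallConnection(callConnectionId)
    .GetParticipantAsync(new CommunicationUserIdentifier("<acs-user-raw-id>"));

Console.WriteLine($"Participant: {participantResponse.Value.Identifier.RawId}");
```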
communication-services | Get Started With Closed Captions | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/quickstarts/voice-video-calling/get-started-with-closed-captions.md | |
confidential-computing | Confidential Vm Overview | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/confidential-vm-overview.md | Title: About Azure confidential VMs description: Learn about Azure confidential virtual machines. These series are for tenants with high security and confidentiality requirements. + - - ignite-2023 Azure confidential VMs offer strong security and confidentiality for tenants. Th - Secure key release with cryptographic binding between the platform's successful attestation and the VM's encryption keys. - Dedicated virtual [Trusted Platform Module (TPM)](/windows/security/information-protection/tpm/trusted-platform-module-overview) instance for attestation and protection of keys and secrets in the virtual machine. - Secure boot capability similar to [Trusted launch for Azure VMs](../virtual-machines/trusted-launch.md)-- Ultra disk capability is supported on confidential VMs ## Confidential OS disk encryption Confidential VMs support the following VM sizes: - General Purpose with local disk: DCadsv5-series, DCedsv5-series - Memory Optimized without local disk: ECasv5-series, ECesv5-series - Memory Optimized with local disk: ECadsv5-series, ECedsv5-series+- NVIDIA H100 Tensor Core GPU powered NCCadsH100v5-series ### OS support Confidential VMs support the following OS options: Confidential VMs *don't support*: - Microsoft Azure Virtual Machine Scale Sets with Confidential OS disk encryption enabled - Limited Azure Compute Gallery support - Shared disks+- Ultra disks - Accelerated Networking - Live migration - Screenshots under boot diagnostics |
confidential-computing | Gpu Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/gpu-options.md | + + Title: Azure Confidential GPU options +description: Learn about Azure Confidential VMs with confidential GPU. ++++++ Last updated : 07/16/2024+++# Azure Confidential GPU options ++Azure confidential GPUs are based on AMD 4th Gen EPYC processors with SEV-SNP technology and NVIDIA H100 Tensor Core GPUs. In this VM SKU, the Trusted Execution Environment (TEE) spans the confidential VM on the CPU and the attached GPU, enabling secure offload of data, models, and computation to the GPU. + 
## Sizes ++We offer the following VM sizes: ++| Size Family | TEE | Description | +| | | -- | +| [**NCCadsH100v5-series**](../virtual-machines/sizes/gpu-accelerated/nccadsh100v5-series.md) | AMD SEV-SNP and NVIDIA H100 Tensor Core GPUs | CVM with Confidential GPU. | +++## Azure CLI ++You can use the [Azure CLI](/cli/azure/install-azure-cli) with your confidential GPU VMs. ++To see a list of confidential VM sizes, run the following command. Set the `vm_series` variable to the series you want to use. The output shows information about available regions and availability zones. ++```azurecli-interactive +vm_series='NCC' +az vm list-skus \ + --size ncc \ + --query "[?family=='standard${vm_series}Family'].{name:name,locations:locationInfo[0].location,AZ_a:locationInfo[0].zones[0],AZ_b:locationInfo[0].zones[1],AZ_c:locationInfo[0].zones[2]}" \ + --all \ + --output table +``` ++For a more detailed list, run the following command instead: ++```azurecli-interactive +vm_series='NCC' +az vm list-skus \ + --size ncc \ + --query "[?family=='standard${vm_series}Family']" +``` ++## Deployment considerations ++Consider the following settings and choices before deploying confidential GPU VMs. ++### Azure subscription ++To deploy a confidential GPU VM instance, consider a [pay-as-you-go subscription](/azure/virtual-machines/linux/azure-hybrid-benefit-linux) or other purchase option. If you're using an [Azure free account](https://azure.microsoft.com/free/), the quota doesn't allow the appropriate number of Azure compute cores. ++You might need to increase the cores quota in your Azure subscription from the default value. Default limits vary depending on your subscription category. Your subscription might also limit the number of cores you can deploy in certain VM size families, including the confidential VM sizes. ++To request a quota increase, [open an online customer support request](../azure-portal/supportability/per-vm-quota-requests.md). ++If you have large-scale capacity needs, contact Azure Support. Azure quotas are credit limits, not capacity guarantees. You only incur charges for cores that you use. ++### Pricing ++For pricing options, see the [Linux Virtual Machines Pricing](https://azure.microsoft.com/pricing/details/virtual-machines/linux/). ++### Regional availability ++For availability information, see which [VM products are available by Azure region](https://azure.microsoft.com/global-infrastructure/services/?products=virtual-machines). ++### Resizing ++Confidential GPU VMs run on specialized hardware, and resizing is currently not supported. ++### Guest OS support ++OS images for confidential VMs have to meet certain security and compatibility requirements. Qualified images support secure mounting, attestation, optional [confidential OS disk encryption](confidential-vm-overview.md#confidential-os-disk-encryption), and isolation from the underlying cloud infrastructure. 
These images include: ++- Ubuntu 22.04 LTS ++For more information about supported and unsupported VM scenarios, see [support for generation 2 VMs on Azure](../virtual-machines/generation-2.md). ++### High availability and disaster recovery ++You're responsible for creating high availability and disaster recovery solutions for your confidential GPU VMs. Planning for these scenarios helps you minimize and avoid prolonged downtime. ++## Next steps ++> [!div class="nextstepaction"] +> [Deploy a confidential GPU VM from the Azure portal](quick-create-confidential-vm-portal.md) ++For more information, see our [Confidential VM FAQ](confidential-vm-faq.yml). |
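Beyond the portal deployment linked in this entry, a confidential GPU VM can in principle also be created with the Azure CLI. The following is only a sketch under assumptions: the size name, the Ubuntu confidential VM image URN, and the security flags are taken from related confidential VM docs and should be verified against current CLI output (for example, `az vm image list` and `az vm list-skus`) before use.

```azurecli-interactive
# Hedged sketch: create a confidential GPU VM (names and image URN are assumptions).
az vm create \
  --resource-group myResourceGroup \
  --name myConfGpuVM \
  --size Standard_NCC40ads_H100_v5 \
  --image Canonical:0001-com-ubuntu-confidential-vm-jammy:22_04-lts-cvm:latest \
  --security-type ConfidentialVM \
  --os-disk-security-encryption-type VMGuestStateOnly \
  --enable-secure-boot true \
  --enable-vtpm true \
  --admin-username azureuser \
  --generate-ssh-keys
```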
confidential-computing | Quick Create Confidential Vm Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/quick-create-confidential-vm-portal.md | Title: Create an Azure confidential VM in the Azure portal description: Learn how to quickly create a confidential virtual machine (confidential VM) in the Azure portal using Azure Marketplace images. - Last updated 12/01/2023 To create a confidential VM in the Azure portal using an Azure Marketplace image h. Toggle [Generation 2](../virtual-machines/generation-2.md) images. Confidential VMs only run on Generation 2 images. To verify this, under **Image**, select **Configure VM generation**. In the pane **Configure VM generation**, for **VM generation**, select **Generation 2**. Then, select **Apply**. + > [!NOTE] + > For the NCCadsH100v5-series, only the **Ubuntu Server 22.04 LTS (Confidential VM)** image is currently supported. + i. For **Size**, select a VM size. For more information, see [supported confidential VM families](virtual-machine-options.md). |
confidential-computing | Virtual Machine Options | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/confidential-computing/virtual-machine-options.md | We offer the following VM sizes: | **DCedsv5-series** | Intel TDX | General purpose CVM with local temporary disk. | | **ECesv5-series** | Intel TDX | Memory-optimized CVM with remote storage. No local temporary disk. | | **ECedsv5-series** | Intel TDX | Memory-optimized CVM with local temporary disk. |+| **NCCadsH100v5-series** | AMD SEV-SNP and NVIDIA H100 Tensor Core GPUs | CVM with Confidential GPU. | > [!NOTE] > Memory-optimized confidential VMs offer double the ratio of memory per vCPU count. |
connectors | Connectors Google Data Security Privacy Policy | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/connectors/connectors-google-data-security-privacy-policy.md | Here are some examples that use the Gmail connector with built-in triggers and a ![Non-compliant logic app - Example 2](./media/connectors-google-data-security-privacy-policy/not-compliant-logic-app-2.png) -* This workflow uses the Gmail connector with the Twitter connector: +* This workflow uses the Gmail connector with the X connector: ![Non-compliant logic app - Example 3](./media/connectors-google-data-security-privacy-policy/not-compliant-logic-app-3.png) |
container-apps | Authentication Twitter | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication-twitter.md | Title: Enable authentication and authorization in Azure Container Apps with Twitter -description: Learn to use the built-in Twitter authentication provider in Azure Container Apps. + Title: Enable authentication and authorization in Azure Container Apps with X +description: Learn to use the built-in X authentication provider in Azure Container Apps. Last updated 04/20/2022 -# Enable authentication and authorization in Azure Container Apps with Twitter +# Enable authentication and authorization in Azure Container Apps with X -This article shows how to configure Azure Container Apps to use Twitter as an authentication provider. +This article shows how to configure Azure Container Apps to use X as an authentication provider. -To complete the procedure in this article, you need a Twitter account that has a verified email address and phone number. To create a new Twitter account, go to [twitter.com]. +To complete the procedure in this article, you need an X account that has a verified email address and phone number. To create a new X account, go to [x.com](https://x.com). -## <a name="twitter-register"> </a>Register your application with Twitter +## <a name="twitter-register"> </a>Register your application with X -1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your Twitter app. -1. Go to the [Twitter Developers] website, sign in with your Twitter account credentials, and select **Create an app**. -1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your container app and append the path `/.auth/login/twitter/callback`. For example, `https://<hostname>.azurecontainerapps.io/.auth/login/twitter/callback`. +1. Sign in to the [Azure portal] and go to your application. Copy your **URL**. You'll use it to configure your X app. +1. Go to the [X Developers] website, sign in with your X account credentials, and select **Create an app**. +1. Enter the **App name** and the **Application description** for your new app. Paste your application's **URL** into the **Website URL** field. In the **Callback URLs** section, enter the HTTPS URL of your container app and append the path `/.auth/login/x/callback`. For example, `https://<hostname>.azurecontainerapps.io/.auth/login/x/callback`. 1. At the bottom of the page, type at least 100 characters in **Tell us how this app will be used**, then select **Create**. Select **Create** again in the pop-up. The application details are displayed. 1. Select the **Keys and Access Tokens** tab. To complete the procedure in this article, you need a Twitter account that has a > [!IMPORTANT] > The API secret key is an important security credential. Do not share this secret with anyone or distribute it with your app. -## <a name="twitter-secrets"> </a>Add Twitter information to your application +## <a name="twitter-secrets"> </a>Add X information to your application 1. Sign in to the [Azure portal] and navigate to your app. 1. Select **Authentication** in the menu on the left. Select **Add identity provider**. To complete the procedure in this article, you need a Twitter account that has a 1. Select **Add**. -You're now ready to use Twitter for authentication in your app. 
The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration. +You're now ready to use X for authentication in your app. The provider will be listed on the **Authentication** screen. From there, you can edit or delete this provider configuration. ## Working with authenticated users Use the following guides for details on working with authenticated users. <!-- URLs. --> [Azure portal]: https://portal.azure.com/+[X Developers]: https://go.microsoft.com/fwlink/p/?LinkId=268300 + |
container-apps | Authentication | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/authentication.md | For details surrounding authentication and authorization, refer to the following * [Facebook](authentication-facebook.md) * [GitHub](authentication-github.md) * [Google](authentication-google.yml)-* [Twitter](authentication-twitter.md) +* [X](authentication-twitter.md) * [Custom OpenID Connect](authentication-openid.md) ## Why use the built-in authentication? The benefits include: * Azure Container Apps provides access to various built-in authentication providers. * The built-in auth features don't require any particular language, SDK, security expertise, or even any code that you have to write.-* You can integrate with multiple providers including Microsoft Entra ID, Facebook, Google, and Twitter. +* You can integrate with multiple providers including Microsoft Entra ID, Facebook, Google, and X. ## Identity providers Container Apps uses [federated identity](https://en.wikipedia.org/wiki/Federated | [Facebook](https://developers.facebook.com/docs/facebook-login) | `/.auth/login/facebook` | [Facebook](authentication-facebook.md) | | [GitHub](https://docs.github.com/en/developers/apps/building-oauth-apps/authorizing-oauth-apps) | `/.auth/login/github` | [GitHub](authentication-github.md) | | [Google](https://developers.google.com/identity/choose-auth) | `/.auth/login/google` | [Google](authentication-google.yml) |-| [Twitter](https://developer.twitter.com/en/docs/basics/authentication) | `/.auth/login/twitter` | [Twitter](authentication-twitter.md) | +| [X](https://developer.x.com/en/docs/basics/authentication) | `/.auth/login/x` | [X](authentication-twitter.md) | | Any [OpenID Connect](https://openid.net/connect/) provider | `/.auth/login/<providerName>` | [OpenID Connect](authentication-openid.md) | When you use one of these providers, the sign-in endpoint is available for user authentication and authentication token validation from the provider. You can provide your users with any number of these provider options. Container Apps Authentication provides built-in endpoints for sign in and sign o ### Use multiple sign-in providers -The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and Twitter). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows: +The portal configuration doesn't offer a turn-key way to present multiple sign-in providers to your users (such as both Facebook and X). However, it isn't difficult to add the functionality to your app. The steps are outlined as follows: First, in the **Authentication / Authorization** page in the Azure portal, configure each of the identity providers you want to enable. In the sign-in page, or the navigation bar, or any other location of your app, a <a href="/.auth/login/aad">Log in with the Microsoft Identity Platform</a> <a href="/.auth/login/facebook">Log in with Facebook</a> <a href="/.auth/login/google">Log in with Google</a>-<a href="/.auth/login/twitter">Log in with Twitter</a> +<a href="/.auth/login/x">Log in with X</a> ``` When the user selects one of the links, the UI for the respective provider is displayed to the user. Refer to the following articles for details on securing your container app. 
* [Facebook](authentication-facebook.md) * [GitHub](authentication-github.md) * [Google](authentication-google.yml)-* [Twitter](authentication-twitter.md) +* [X](authentication-twitter.md) * [Custom OpenID Connect](authentication-openid.md) |
container-instances | Container Instances Using Azure Container Registry | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-instances/container-instances-using-azure-container-registry.md | -* The [Azure Container Registry](../container-registry/container-registry-vnet.md) must have [Public Access set to 'All Networks'](../container-registry/container-registry-access-selected-networks.md). To use an Azure container registry with Public Access set to 'Select Networks' or 'None', visit [ACI's article for using Managed-Identity based authentication with ACR](../container-registry/container-registry-authentication-managed-identity.md). +* Windows containers don't support system-assigned managed identity-authenticated image pulls with ACR, only user-assigned. ## Configure registry authentication |
cosmos-db | How To Setup Customer Managed Keys Existing Accounts | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-setup-customer-managed-keys-existing-accounts.md | Enabling CMK on an existing account is an asynchronous operation that kicks off The Cosmos DB account can continue to be used and data can continue to be written without waiting for the asynchronous operation to succeed. CLI command for enabling CMK waits for the completion of encryption of data. +To allow an existing Cosmos DB account to use CMK, a scan needs to be done to ensure that the account doesn't have "Large IDs". A "Large ID" is a document id that exceeds 990 characters in length. This scan is mandatory for the CMK migration, and it is done by Microsoft automatically. During this process, you may see an error: ++ERROR: (InternalServerError) Unexpected error on document scan for CMK Migration. Please retry the operation. ++This error occurs when the scan process uses more RUs than are provisioned on the collection, resulting in a 429 error. A solution is to temporarily increase the provisioned RUs significantly. Alternatively, you can use the provided console application [hosted here](https://github.com/AzureCosmosDB/Cosmos-DB-Non-CMK-to-CMK-Migration-Scanner) to scan your collections. For a rough self-check, see the query sketch after this entry. ++> [!NOTE] +> If you wish to disable server-side validation for this during migration, contact support. This is advisable only if you are sure that there are no Large IDs. If a Large ID is encountered during encryption, the process stops until the Large ID document has been addressed. + If you have further questions, reach out to Microsoft Support. ## FAQs Enabling CMK kicks off a background, asynchronous process to encrypt all the dat It's suggested to increase the RUs before you trigger CMK. Once CMK is triggered, some control plane operations are blocked until the encryption is complete. This block may prevent you from increasing the RUs after CMK is triggered. +To allow an existing Cosmos DB account to use CMK, a mandatory Large ID scan is done automatically by Microsoft to address one of the known limitations listed earlier. This process also consumes additional RUs, so it's a good idea to increase the RUs significantly to avoid 429 errors. + **Is there a way to reverse the encryption or disable encryption after triggering CMK?** Once the data encryption process using CMK is triggered, it can't be reverted. |
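To complement the automatic Large ID scan described in this entry, here's a minimal self-check you could run before triggering CMK. This is only a sketch using the standard Azure Cosmos DB for NoSQL query language; like the automatic scan, it consumes RUs across partitions, so run it when you have throughput headroom.

```sql
-- Count documents whose "id" exceeds the 990-character Large ID threshold.
SELECT VALUE COUNT(1)
FROM c
WHERE LENGTH(c.id) > 990
```

A result of zero suggests the scan shouldn't find Large IDs; a nonzero count means those documents need to be addressed before migration.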
cosmos-db | Quickstart Portal | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/quickstart-portal.md | Create a MongoDB cluster by using Azure Cosmos DB for MongoDB vCore. 1. On the **New** page, search for and select **Azure Cosmos DB**. -1. On the **Which API best suits your workload?** page, select the **Create** option within the **Azure Cosmos DB for MongoDB** section. For more information, see [API for MongoDB and it's various models](../choose-model.md). +1. On the **Which API best suits your workload?** page, select the **Create** option within the **Azure Cosmos DB for MongoDB** section. :::image type="content" source="media/quickstart-portal/select-api-option.png" lightbox="media/quickstart-portal/select-api-option.png" alt-text="Screenshot of the select API option page for Azure Cosmos DB."::: 1. On the **Which type of resource?** page, select the **Create** option within the **vCore cluster** section. For more information, see [API for MongoDB vCore overview](introduction.md). - :::image type="content" source="media/quickstart-portal/select-resource-type.png" alt-text="Screenshot of the select resource type option page for Azure Cosmos DB for MongoDB."::: - 1. On the **Create Azure Cosmos DB for MongoDB cluster** page, select the **Configure** option within the **Cluster tier** section. - :::image type="content" source="media/quickstart-portal/select-cluster-option.png" alt-text="Screenshot of the configure cluster option for a new Azure Cosmos DB for MongoDB cluster."::: + :::image type="content" source="media/quickstart-portal/select-cluster-option.png" alt-text="Screenshot of the 'configure cluster' option for a new Azure Cosmos DB for MongoDB cluster."::: 1. On the **Scale** page, leave the options set to their default values: Create a MongoDB cluster by using Azure Cosmos DB for MongoDB vCore. | **Cluster tier** | M30 Tier, 2 vCores, 8-GiB RAM | | **Storage per shard** | 128 GiB | -1. Unselect **High availability** option. In the high availability (HA) acknowledgment section, select **I understand**. Finally, select **Save** to persist your changes to the cluster tier. -- :::image type="content" source="media/quickstart-portal/configure-scale.png" alt-text="Screenshot of cluster tier and scale options for a cluster."::: - - You can always turn HA on after cluster creation for another layer of protection from failures. +1. Select the **High availability** option if this cluster will be used for production workloads. If not, in the high availability (HA) acknowledgment section, select **I understand**. Finally, select **Save** to persist your changes to the cluster tier. 1. Back on the cluster page, enter the following information: Create a MongoDB cluster by using Azure Cosmos DB for MongoDB vCore. | Resource group | Resource group name | Select a resource group, or select **Create new**, then enter a unique name for the new resource group. | | Cluster name | A unique name | Enter a name to identify your Azure Cosmos DB for MongoDB cluster. The name is used as part of a fully qualified domain name (FQDN) with a suffix of *mongocluster.cosmos.azure.com*, so the name must be globally unique. The name can only contain lowercase letters, numbers, and the hyphen (-) character. The name must also be between 3 and 40 characters in length. | | Location | The region closest to your users | Select a geographic location to host your Azure Cosmos DB for MongoDB cluster. 
Use the location that is closest to your users to give them the fastest access to the data. |- | MongoDB version | Version of MongoDB to run in your cluster | This value is set to a default of the latest available MongoDB version. | + | MongoDB version | Version of MongoDB to run in your cluster | This controls the MongoDB version your application uses. | | Admin username | Provide a username to access the cluster | This user is created on the cluster as a user administrator. | | Password | Use a unique password to pair with the username | Password must be at least eight characters and at most 128 characters. | When you're done with your Azure Cosmos DB for MongoDB vCore cluster, you can delete 1. On the resource group page, select **Delete resource group**. - :::image type="content" source="media/quickstart-portal/select-delete-resource-group-option.png" alt-text="Screenshot of the delete resource group option in the menu for a specific resource group."::: + :::image type="content" source="media/quickstart-portal/select-delete-resource-group-option.png" alt-text="Screenshot of the 'delete resource group' option in the menu for a specific resource group."::: 1. In the deletion confirmation dialog, enter the name of the resource group to confirm that you intend to delete it. Finally, select **Delete** to permanently delete the resource group. |
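After the cluster from this entry is provisioned, applications connect with a standard MongoDB connection string built on the `mongocluster.cosmos.azure.com` FQDN mentioned above. The following `mongosh` sketch is an assumption based on the typical vCore connection-string shape; copy the exact string from the cluster's **Connection strings** page in the portal rather than relying on these options.

```bash
# Hypothetical connection-string shape for a vCore cluster; verify in the portal.
mongosh "mongodb+srv://<admin-username>:<password>@<cluster-name>.mongocluster.cosmos.azure.com/?tls=true&authMechanism=SCRAM-SHA-256&retrywrites=false&maxIdleTimeMS=120000"
```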
cosmos-db | How To Javascript Vector Index Query | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/how-to-javascript-vector-index-query.md | + + Title: Index and query vector data in JavaScript ++description: Add vector data to Azure Cosmos DB for NoSQL and then query the data efficiently in your JavaScript application +++++ Last updated : 08/08/2024++++# Index and query vectors in Azure Cosmos DB for NoSQL in JavaScript +++The Azure Cosmos DB for NoSQL vector search feature is in preview. Before you use this feature, you must first register for the preview. This article covers the following steps: ++1. Registering for the preview of Vector Search in Azure Cosmos DB for NoSQL ++1. Setting up the Azure Cosmos DB container for vector search ++1. Authoring vector embedding policy ++1. Adding vector indexes to the container indexing policy ++1. Creating a container with vector indexes and vector embedding policy ++1. Performing a vector search on the stored data ++This guide walks through the process of creating vector data, indexing the data, and then querying the data in a container. ++## Prerequisites ++- An existing Azure Cosmos DB for NoSQL account. + - If you don't have an Azure subscription, [Try Azure Cosmos DB for NoSQL free](https://cosmos.azure.com/try/). + - If you have an existing Azure subscription, [create a new Azure Cosmos DB for NoSQL account](how-to-create-account.md). +- Latest version of the Azure Cosmos DB [JavaScript](sdk-nodejs.md) SDK (Version 4.1.0 or later) ++## Register for the preview ++Vector search for Azure Cosmos DB for NoSQL requires preview feature registration. Follow these steps to register: ++1. Navigate to your Azure Cosmos DB for NoSQL resource page. ++1. Select the "Features" pane under the "Settings" menu item. ++1. Select "Vector Search in Azure Cosmos DB for NoSQL." ++1. Read the description of the feature to confirm you want to enroll in the preview. ++1. Select "Enable" to enroll in the preview. ++ > [!NOTE] + > The registration request will be auto-approved; however, it may take several minutes to take effect. ++## Understand the steps involved in vector search ++The following steps assume that you know how to [set up a Cosmos DB NoSQL account and create a database](quickstart-portal.md). The vector search feature is currently only supported on new containers, not existing containers. You need to create a new container and then specify the container-level vector embedding policy and the vector indexing policy at the time of creation. ++Let's take an example of creating a database for an internet-based bookstore where you store the Title, Author, ISBN, and Description for each book. We also define two properties to contain vector embeddings. The first is the "contentVector" property, which contains [text embeddings](../../ai-services/openai/concepts/models.md#embeddings ) generated from the text content of the book (for example, concatenating the "title", "author", "isbn", and "description" properties before creating the embedding). The second is "coverImageVector," which is generated from [images of the book's cover](../../ai-services/computer-vision/concept-image-retrieval.md). ++1. Create and store vector embeddings for the fields on which you want to perform vector search (a minimal sketch follows this list). +2. Specify the vector embedding paths in the vector embedding policy. +3. Include any desired vector indexes in the indexing policy for the container.
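For step 1 in the list above, storing precomputed embeddings is an ordinary item write with the JavaScript SDK. Here's a minimal sketch; it assumes an existing `container` client (created as shown later in this article) and placeholder vector values that would really come from your embedding model.

```javascript
// Insert a book item whose vector properties were computed elsewhere
// (for example, by a text- and image-embedding model).
const book = {
  id: "book-1",
  title: "book-title",
  author: "book-author",
  isbn: "book-isbn",
  description: "book-description",
  contentVector: [2, -1, 4, 3, 5, -2, 5, -7, 3, 1],
  coverImageVector: [0.33, -0.52, 0.45, -0.67, 0.89, -0.34, 0.86, -0.78],
};

const { resource: createdBook } = await container.items.create(book);
console.log(`Created item with id: ${createdBook.id}`);
```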
++For subsequent sections of this article, we consider this structure for the items stored in our container: ++```json +{ +"title": "book-title", +"author": "book-author", +"isbn": "book-isbn", +"description": "book-description", +"contentVector": [2, -1, 4, 3, 5, -2, 5, -7, 3, 1], +"coverImageVector": [0.33, -0.52, 0.45, -0.67, 0.89, -0.34, 0.86, -0.78] +} +``` ++## Create a vector embedding policy for your container ++Next, you need to define a container vector policy. This policy provides information that is used to inform the Azure Cosmos DB query engine how to handle vector properties in the VectorDistance system function. This policy also informs the vector indexing policy of necessary information, should you choose to specify one. ++The following information is included in the container vector policy: ++| Field | Description | +| | | +| **`path`** | The property path that contains vectors | +| **`datatype`** | The type of the elements of the vector (default `Float32`) | +| **`dimensions`** | The length of each vector in the path (default `1536`) | +| **`distanceFunction`** | The metric used to compute distance/similarity (default `Cosine`) | ++For our example with book details, the vector policy can look like the example JSON: ++```javascript +const vectorEmbeddingPolicy: VectorEmbeddingPolicy = { + vectorEmbeddings: [ + { + path: "/coverImageVector", + dataType: "float32", + dimensions: 8, + distanceFunction: "dotproduct", + }, + { + path: "/contentVector", + dataType: "float32", + dimensions: 10, + distanceFunction: "cosine", + }, + ], + }; +``` ++## Create a vector index in the indexing policy ++Once the vector embedding paths are decided, vector indexes need to be added to the indexing policy. You must apply the vector policy at the time of container creation, and it can't be modified later. For this example, the indexing policy would look like this: ++```javascript +const indexingPolicy: IndexingPolicy = { + vectorIndexes: [ + { path: "/coverImageVector", type: "quantizedFlat" }, + { path: "/contentVector", type: "diskANN" }, + ], + includedPaths: [ + { + path: "/*", + }, + ], + excludedPaths: [ + { + path: "/coverImageVector/*", + }, + { + path: "/contentVector/*", + }, + ] +}; +``` ++Now create your container as usual. ++```javascript +const containerName = "vector embedding container"; + // create container + const { resource: containerdef } = await database.containers.createIfNotExists({ + id: containerName, + vectorEmbeddingPolicy: vectorEmbeddingPolicy, + indexingPolicy: indexingPolicy, + }); +``` ++> [!IMPORTANT] +> Currently vector search in Azure Cosmos DB for NoSQL is supported on new containers only. You need to set both the container vector policy and any vector indexing policy at the time of container creation, as they can't be modified later. Both policies will be modifiable in a future improvement to the preview feature. ++## Run a vector similarity search query ++Once you create a container with the desired vector policy and insert vector data into the container, you can conduct a vector search using the [Vector Distance](query/vectordistance.md) system function in a query. Suppose you want to search for books about food recipes by looking at the description. You first need to get the embeddings for your query text. In this case, you might want to generate embeddings for the query text "food recipe." 
Once you have the embedding for your search query, you can use it in the VectorDistance function in the vector search query and get all the items that are similar to your query as shown here: ++```sql +SELECT c.title, VectorDistance(c.contentVector, [1,2,3,4,5,6,7,8,9,10]) AS SimilarityScore +FROM c +ORDER BY VectorDistance(c.contentVector, [1,2,3,4,5,6,7,8,9,10]) +``` ++This query retrieves the book titles along with similarity scores with respect to your query. Here's an example in JavaScript: ++```javascript +const { resources } = await container.items + .query({ + query: "SELECT c.title, VectorDistance(c.contentVector, @embedding) AS SimilarityScore FROM c ORDER BY VectorDistance(c.contentVector, @embedding)", + parameters: [{ name: "@embedding", value: [1,2,3,4,5,6,7,8,9,10] }] + }) + .fetchAll(); +for (const item of resources) { + console.log(`${item.title}: ${item.SimilarityScore}`); +} +``` ++## Related content ++- [VectorDistance system function](query/vectordistance.md) +- [Vector indexing](../index-policy.md) +- [Set up Azure Cosmos DB for NoSQL for vector search](../vector-search.md). |
cosmos-db | Vector Search | https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/vector-search.md | Vector indexing and search in Azure Cosmos DB for NoSQL has |