Updates from: 07/04/2024 01:08:14
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Javascript And Page Layout https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/javascript-and-page-layout.md
zone_pivot_groups: b2c-policy-type
[!INCLUDE [active-directory-b2c-choose-user-flow-or-custom-policy](../../includes/active-directory-b2c-choose-user-flow-or-custom-policy.md)]
-With Azure Active Directory B2C (Azure AD B2C) [HTML templates](customize-ui-with-html.md), you can craft your users' identity experiences. Your HTML templates can contain only certain HTML tags and attributes. Basic HTML tags, such as &lt;b&gt;, &lt;i&gt;, &lt;u&gt;, &lt;h1&gt;, and &lt;hr&gt; are allowed. More advanced tags such as &lt;script&gt;, and &lt;iframe&gt; are removed for security reasons but the `<script>` tag should be added in the `<head>` tag.
+With Azure Active Directory B2C (Azure AD B2C) [HTML templates](customize-ui-with-html.md), you can craft your users' identity experiences. Your HTML templates can contain only certain HTML tags and attributes. Basic HTML tags, such as &lt;b&gt;, &lt;i&gt;, &lt;u&gt;, &lt;h1&gt;, and &lt;hr&gt; are allowed. More advanced tags such as &lt;script&gt; and &lt;iframe&gt; are removed for security reasons, but the `<script>` tag should be added in the `<head>` tag. Starting with selfasserted page layout version 2.1.21, unifiedssp version 2.1.10, and multifactor version 1.2.10, B2C doesn't support adding scripts in the `<body>` tag, because doing so can pose a cross-site scripting (XSS) risk. Migrating existing scripts from `<body>` to `<head>` may sometimes require rewriting them with mutation observers so that they keep working.
You can add the `<script>` tag inside the `<head>` tag in two ways:
active-directory-b2c Self Asserted Technical Profile https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/self-asserted-technical-profile.md
You can also call a REST API technical profile with your business logic, overwri
| IncludeClaimResolvingInClaimsHandling  | No | For input and output claims, specifies whether [claims resolution](claim-resolver-overview.md) is included in the technical profile. Possible values: `true`, or `false` (default). If you want to use a claims resolver in the technical profile, set this to `true`. | |setting.forgotPasswordLinkOverride <sup>4</sup>| No | A password reset claims exchange to be executed. For more information, see [Self-service password reset](add-password-reset-policy.md). | | setting.enableCaptchaChallenge | No | Specifies whether CAPTCHA challenge code should be displayed. Possible values: `true` , or `false` (default). For this setting to work, the [CAPTCHA display control]() must be referenced in the [display claims](#display-claims) of the self-asserted technical profile. CAPTCHA feature is in **public preview**.|
+| setting.showHeading | No | Specifies whether the **User Details** heading element should be visible. Possible values: `true` (default) or `false`.|
Notes:
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
If the policy above doesn't meet your need, please consider other options, for e
## Token usage estimation for Azure OpenAI On Your Data
-Azure OpenAI On Your Data Retrieval Augmented Generation (RAG) service that leverages both a search service (such as Azure AI Search) and generation (Azure OpenAI models) to let users get answers for their questions based on provided data.
+Azure OpenAI On Your Data is a Retrieval Augmented Generation (RAG) service that uses both a search service (such as Azure AI Search) and generation (Azure OpenAI models) to let users get answers to their questions based on the provided data.
As part of this RAG pipeline, there are three steps at a high-level:
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
recommendations: false
# Securely use Azure OpenAI On Your Data
+> [!NOTE]
+> As of June 2024, the application form for the Microsoft managed private endpoint to Azure AI Search is no longer needed.
+>
+> The managed private endpoint will be deleted from the Microsoft managed virtual network in July 2025. If you have already provisioned a managed private endpoint through the application process before June 2024, migrate to the [Azure AI Search trusted service](#enable-trusted-service-1) as early as possible to avoid service disruption.
+ Use this article to learn how to use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks, and private endpoints. This article is only applicable when using [Azure OpenAI On Your Data with text](/azure/ai-services/openai/concepts/use-your-data). It does not apply to [Azure OpenAI On Your Data with images](/azure/ai-services/openai/concepts/use-your-image-data).
When you use Azure OpenAI On Your Data to ingest data from Azure blob storage, l
* Downloading URLs to your blob storage is not illustrated in this diagram. After web pages are downloaded from the internet and uploaded to blob storage, steps 3 onward are the same. * Two indexers, two indexes, two data sources and a [custom skill](/azure/search/cognitive-search-custom-skill-interface) are created in the Azure AI Search resource. * The chunks container is created in the blob storage.
-* If the ingestion is triggered by a [scheduled refresh](../concepts/use-your-data.md#schedule-automatic-index-refreshes), the ingestion process starts from step 7.
+* If the schedule triggers the ingestion, the ingestion process starts from step 7.
* Azure OpenAI's `preprocessing-jobs` API implements the [Azure AI Search customer skill web API protocol](/azure/search/cognitive-search-custom-skill-web-api), and processes the documents in a queue. * Azure OpenAI: 1. Internally uses the first indexer created earlier to crack the documents.
- 1. Uses a heuristic-based algorithm to perform chunking, honoring table layouts and other formatting elements in the chunk boundary to ensure the best chunking quality.
- 1. If you choose to enable vector search, Azure OpenAI uses the selected embedding deployment to vectorize the chunks internally.
+ 1. Uses a heuristic-based algorithm to perform chunking. It honors table layouts and other formatting elements in the chunk boundary to ensure the best chunking quality.
+ 1. If you choose to enable vector search, Azure OpenAI uses the selected embedding setting to vectorize the chunks.
* When all the data that the service is monitoring are processed, Azure OpenAI triggers the second indexer. * The indexer stores the processed data into an Azure AI Search service.
For the managed identities used in service calls, only system assigned managed i
:::image type="content" source="../media/use-your-data/inference-architecture.png" alt-text="A diagram showing the process of using the inference API." lightbox="../media/use-your-data/inference-architecture.png":::
-When you send API calls to chat with an Azure OpenAI model on your data, the service needs to retrieve the index fields during inference to perform fields mapping automatically if the fields mapping isn't explicitly set in the request. Therefore the service requires the Azure OpenAI identity to have the `Search Service Contributor` role for the search service even during inference.
+When you send API calls to chat with an Azure OpenAI model on your data, the service needs to retrieve the index fields during inference to perform field mapping. Therefore, the service requires the Azure OpenAI identity to have the `Search Service Contributor` role on the search service, even during inference.
-If an embedding deployment is provided in the inference request, the rewritten query will be vectorized by Azure OpenAI, and both query and vector are sent Azure AI Search for vector search.
+If an embedding dependency is provided in the inference request, Azure OpenAI vectorizes the rewritten query, and both the query and the vector are sent to Azure AI Search for vector search.
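As a hedged illustration of such a request (a sketch only: the resource, deployment, and index names are placeholders, and the body schema and API version should be checked against the On Your Data REST reference):

```bash
# Hedged sketch: chat completions call on your data with an embedding deployment for vector search.
# Resource, deployment, index names, and the API version are placeholders.
AZURE_AD_TOKEN=$(az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken --output tsv)

curl -X POST "https://my-openai.openai.azure.com/openai/deployments/my-gpt-deployment/chat/completions?api-version=2024-02-01" \
  -H "Authorization: Bearer $AZURE_AD_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is in my data?"}],
    "data_sources": [{
      "type": "azure_search",
      "parameters": {
        "endpoint": "https://my-search-service.search.windows.net",
        "index_name": "my-index",
        "query_type": "vector",
        "authentication": {"type": "system_assigned_managed_identity"},
        "embedding_dependency": {
          "type": "deployment_name",
          "deployment_name": "my-embedding-deployment"
        }
      }
    }]
  }'
```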
## Document-level access control

> [!NOTE]
> Document-level access control is supported for Azure AI Search only.
-Azure OpenAI On Your Data lets you restrict the documents that can be used in responses for different users with Azure AI Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document level access, the search results returned from Azure AI Search and used to generate a response will be trimmed based on user Microsoft Entra group membership. You can only enable document-level access on existing Azure AI Search indexes. To enable document-level access:
+Azure OpenAI On Your Data lets you restrict the documents that can be used in responses for different users with Azure AI Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document-level access, Azure AI Search trims the search results based on the user's Microsoft Entra group membership specified in the filter. You can only enable document-level access on existing Azure AI Search indexes. To enable document-level access:
-1. Follow the steps in the [Azure AI Search documentation](/azure/search/search-security-trimming-for-azure-search-with-aad) to register your application and create users and groups.
-1. [Index your documents with their permitted groups](/azure/search/search-security-trimming-for-azure-search-with-aad#index-document-with-their-permitted-groups). Be sure that your new [security fields](/azure/search/search-security-trimming-for-azure-search#create-security-field) have the schema below:
+1. To register your application and create users and groups, follow the steps in the [Azure AI Search documentation](/azure/search/search-security-trimming-for-azure-search-with-aad).
+1. [Index your documents with their permitted groups](/azure/search/search-security-trimming-for-azure-search-with-aad#index-document-with-their-permitted-groups). Be sure that your new [security fields](/azure/search/search-security-trimming-for-azure-search#create-security-field) have the schema:
```json {"name": "group_ids", "type": "Collection(Edm.String)", "filterable": true }
Azure OpenAI On Your Data lets you restrict the documents that can be used in re
`group_ids` is the default field name. If you use a different field name like `my_group_ids`, you can map the field in [index field mapping](../concepts/use-your-data.md#index-field-mapping).
-1. Make sure each sensitive document in the index has the value set correctly on this security field to indicate the permitted groups of the document.
-1. In [Azure OpenAI Studio](https://oai.azure.com/portal), add your data source. in the [index field mapping](../concepts/use-your-data.md#index-field-mapping) section, you can map zero or one value to the **permitted groups** field, as long as the schema is compatible. If the **Permitted groups** field isn't mapped, document level access won't be enabled.
+1. Make sure each sensitive document in the index has this security field value set to the permitted groups of the document.
+1. In [Azure OpenAI Studio](https://oai.azure.com/portal), add your data source. In the [index field mapping](../concepts/use-your-data.md#index-field-mapping) section, you can map zero or one value to the **permitted groups** field, as long as the schema is compatible. If the **permitted groups** field isn't mapped, document-level access is disabled.
**Azure OpenAI Studio**
-Once the Azure AI Search index is connected, your responses in the studio will have document access based on the Microsoft Entra permissions of the logged in user.
+Once the Azure AI Search index is connected, your responses in the studio have document access based on the Microsoft Entra permissions of the logged in user.
**Web app**
When using the API, pass the `filter` parameter in each API request. For example
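A hedged sketch of passing such a filter inside the `azure_search` data source parameters (group IDs and resource names are placeholders; the filter syntax follows the Azure AI Search security trimming pattern for the default `group_ids` field):

```bash
# Hedged sketch: trim retrieval to the signed-in user's groups via a security filter.
# Group IDs, resource, deployment, and index names are placeholders.
AZURE_AD_TOKEN=$(az account get-access-token --resource https://cognitiveservices.azure.com --query accessToken --output tsv)

curl -X POST "https://my-openai.openai.azure.com/openai/deployments/my-gpt-deployment/chat/completions?api-version=2024-02-01" \
  -H "Authorization: Bearer $AZURE_AD_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{
    "messages": [{"role": "user", "content": "What is in my data?"}],
    "data_sources": [{
      "type": "azure_search",
      "parameters": {
        "endpoint": "https://my-search-service.search.windows.net",
        "index_name": "my-index",
        "authentication": {"type": "system_assigned_managed_identity"},
        "filter": "group_ids/any(g:search.in(g, '\''group-id-1, group-id-2'\''))"
      }
    }]
  }'
```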
## Resource configuration
-Use the following sections to configure your resources for optimal secure usage. Even if you plan to only secure part of your resources, you still need to follow all the steps below.
+Use the following sections to configure your resources for optimal secure usage. Even if you plan to only secure part of your resources, you still need to follow all the steps.
This article describes network settings related to disabling public network for Azure OpenAI resources, Azure AI search resources, and storage accounts. Using selected networks with IP rules is not supported, because the services' IP addresses are dynamic.
-> [!TIP]
-> You can use the bash script available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/blob/main/scripts/validate-oyd-vnet.sh) to validate your setup, and determine if all of the requirements listed here are being met.
- ## Create resource group Create a resource group, so you can organize all the relevant resources. The resources in the resource group include but are not limited to:
Create a resource group, so you can organize all the relevant resources. The res
The virtual network has three subnets, as sketched in the CLI example after the following list.
-1. The first subnet is used for the private IPs of the three private endpoints.
-1. The second subnet is created automatically when you create the virtual network gateway.
+1. The first subnet is used for the virtual network gateway.
+1. The second subnet is used for the private endpoints for the three key services.
1. The third subnet is empty, and used for Web App outbound virtual network integration. :::image type="content" source="../media/use-your-data/virtual-network.png" alt-text="A diagram showing the virtual network architecture." lightbox="../media/use-your-data/virtual-network.png":::
-Note the Microsoft managed virtual network is created by Microsoft, and you cannot see it. The Microsoft managed virtual network is used by Azure OpenAI to securely access your Azure AI Search.
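A hedged CLI sketch of this subnet layout (names and address ranges are placeholders; a virtual network gateway requires the subnet to be named `GatewaySubnet`):

```bash
# Hedged sketch: create the virtual network and the three subnets described in the list above.
# Names and address ranges are placeholders.
az network vnet create \
  --resource-group my-rg \
  --name my-vnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name GatewaySubnet \
  --subnet-prefixes 10.0.0.0/24

# Second subnet: private endpoints for Azure OpenAI, Azure AI Search, and the storage account.
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name private-endpoints \
  --address-prefixes 10.0.1.0/24

# Third subnet: left empty and delegated for Web App outbound virtual network integration.
az network vnet subnet create \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --name webapp-integration \
  --address-prefixes 10.0.2.0/24 \
  --delegations Microsoft.Web/serverFarms
```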
## Configure Azure OpenAI

### Enable custom subdomain
-If you created the Azure OpenAI via Azure portal, the [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains) should have been created already. The custom subdomain is required for Microsoft Entra ID based authentication, and private DNS zone.
+The [custom subdomain](/azure/ai-services/cognitive-services-custom-subdomains) is required for Microsoft Entra ID based authentication and for the private DNS zone. If you create the Azure OpenAI resource by using an ARM template, you must specify the custom subdomain explicitly.
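A hedged CLI sketch of creating an Azure OpenAI resource with an explicit custom subdomain (resource names, location, and SKU are placeholders):

```bash
# Hedged sketch: create an Azure OpenAI resource and set its custom subdomain explicitly.
az cognitiveservices account create \
  --name my-openai \
  --resource-group my-rg \
  --kind OpenAI \
  --sku S0 \
  --location eastus \
  --custom-domain my-openai
```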
### Enable managed identity
To allow access to your Azure OpenAI service from your client machines, like usi
## Configure Azure AI Search
-You can use basic pricing tier and higher for the configuration below. It's not necessary, but if you use the S2 pricing tier you will see [additional options](#create-shared-private-link) available for selection.
+You can use the Basic pricing tier or higher for the search resource. It isn't required, but if you use the S2 pricing tier, [advanced options](#create-shared-private-link) are available.
### Enable managed identity
To allow your other resources to recognize the Azure AI Search using Microsoft E
:::image type="content" source="../media/use-your-data/outbound-managed-identity-ai-search.png" alt-text="A screenshot showing the managed identity setting for Azure AI Search in the Azure portal." lightbox="../media/use-your-data/outbound-managed-identity-ai-search.png"::: ### Enable role-based access control
-As Azure OpenAI uses managed identity to access Azure AI Search, you need to enable role-based access control in your Azure AI Search. To do it on Azure portal, select **Both** in the **Keys** tab in the Azure portal.
+Because Azure OpenAI uses a managed identity to access Azure AI Search, you need to enable role-based access control on your Azure AI Search resource. To do so in the Azure portal, select **Both** or **Role-based access control** on the **Keys** tab.
:::image type="content" source="../media/use-your-data/managed-identity-ai-search.png" alt-text="A screenshot showing the managed identity option for Azure AI search in the Azure portal." lightbox="../media/use-your-data/managed-identity-ai-search.png":::
-To enable role-based access control via the REST API, set `authOptions` as `aadOrApiKey`. For more information, see the [Azure AI Search RBAC article](/azure/search/search-security-rbac?tabs=config-svc-rest%2Croles-portal%2Ctest-portal%2Ccustom-role-portal%2Cdisable-keys-portal#configure-role-based-access-for-data-plane).
-
-```json
-"disableLocalAuth": false,
-"authOptions": {
- "aadOrApiKey": {
- "aadAuthFailureMode": "http401WithBearerChallenge"
- }
-}
-```
+For more information, see the [Azure AI Search RBAC article](/azure/search/search-security-enable-roles).
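If you prefer the CLI, a hedged sketch of the same change (service and resource group names are placeholders; confirm the parameters against the current `az search service update` reference):

```bash
# Hedged sketch: allow both API keys and Microsoft Entra roles on the search service.
az search service update \
  --name my-search-service \
  --resource-group my-rg \
  --auth-options aadOrApiKey \
  --aad-auth-failure-mode http401WithBearerChallenge
```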
### Disable public network access
You can disable public network access of your Azure AI Search resource in the Az
To allow access to your Azure AI Search resource from your client machines, like using Azure OpenAI Studio, you need to create [private endpoint connections](/azure/search/service-create-private-endpoint) that connect to your Azure AI Search resource.
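A hedged CLI sketch of such a private endpoint (virtual network, subnet, and resource names are placeholders; `searchService` is the private-link group ID for Azure AI Search):

```bash
# Hedged sketch: create a private endpoint for the search service in your own virtual network.
SEARCH_ID=$(az search service show --name my-search-service --resource-group my-rg --query id --output tsv)

az network private-endpoint create \
  --name my-search-pe \
  --resource-group my-rg \
  --vnet-name my-vnet \
  --subnet private-endpoints \
  --private-connection-resource-id $SEARCH_ID \
  --group-id searchService \
  --connection-name my-search-pe-connection
```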
-> [!NOTE]
-> To allow access to your Azure AI Search resource from Azure OpenAI resource, you need to submit an [application form](https://aka.ms/applyacsvpnaoaioyd). The application will be reviewed in 5 business days and you will be contacted via email about the results. If you are eligible, we will provision the private endpoint in Microsoft managed virtual network, and send a private endpoint connection request to your search service, and you will need to approve the request.
+### Enable trusted service
+
+You can enable the trusted service setting for your search resource from the Azure portal.
+
+Go to your search resource's **Networking** tab. With public network access set to **Disabled**, select **Allow Azure services on the trusted services list to access this search service**.
++
+You can also use the REST API to enable trusted service. This example uses the Azure CLI and the `jq` tool.
-The private endpoint resource is provisioned in a Microsoft managed tenant, while the linked resource is in your tenant. You can't access the private endpoint resource by just clicking the **private endpoint** link (in blue font) in the **Private access** tab of the **Networking page**. Instead, click elsewhere on the row, then the **Approve** button above should be clickable.
+```bash
+rid=/subscriptions/<YOUR-SUBSCRIPTION-ID>/resourceGroups/<YOUR-RESOURCE-GROUP>/providers/Microsoft.Search/searchServices/<YOUR-RESOURCE-NAME>
+apiVersion=2024-03-01-Preview
+#store the resource properties in a file
+az rest --uri "https://management.azure.com$rid?api-version=$apiVersion" > search.json
-Learn more about the [manual approval workflow](/azure/private-link/private-endpoint-overview#access-to-a-private-link-resource-using-approval-workflow).
+#replace bypass with AzureServices using jq
+jq '.properties.networkRuleSet.bypass = "AzureServices"' search.json > search_updated.json
+#apply the updated properties to the resource
+az rest --uri "https://management.azure.com$rid?api-version=$apiVersion" \
+ --method PUT \
+ --body @search_updated.json
+
+```
### Create shared private link
This section is only applicable for S2 pricing tier search resource, because it
To create shared private link from your search resource connecting to your Azure OpenAI resource, see the [search documentation](/azure/search/search-indexer-howto-access-private). Select **Resource type** as `Microsoft.CognitiveServices/accounts` and **Group ID** as `openai_account`.
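A hedged CLI sketch of the same operation (names are placeholders; confirm the parameters against the current `az search shared-private-link-resource create` reference):

```bash
# Hedged sketch: create a shared private link from the S2 search service to the Azure OpenAI resource.
AOAI_ID=$(az cognitiveservices account show --name my-openai --resource-group my-rg --query id --output tsv)

az search shared-private-link-resource create \
  --name openai-shared-link \
  --service-name my-search-service \
  --resource-group my-rg \
  --group-id openai_account \
  --resource-id $AOAI_ID \
  --request-message "Shared private link from Azure AI Search to Azure OpenAI"
```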
-With shared private link, [step eight](#data-ingestion-architecture) of the data ingestion architecture diagram is changed from **bypass trusted service** to **private endpoint**.
+With the shared private link, [step 8](#data-ingestion-architecture) of the data ingestion architecture diagram is changed from **bypass trusted service** to **shared private link**.
:::image type="content" source="../media/use-your-data/ingestion-architecture-s2.png" alt-text="A diagram showing the process of ingesting data with an S2 search resource." lightbox="../media/use-your-data/ingestion-architecture-s2.png":::
-The Azure AI Search shared private link you created is also in a Microsoft managed virtual network, not your virtual network. The difference compared to the other managed private endpoint created [earlier](#disable-public-network-access-1) is that the managed private endpoint `[1]` from Azure OpenAI to Azure Search is provisioned through the [form application](#disable-public-network-access-1), while the managed private endpoint `[2]` from Azure Search to Azure OpenAI is provisioned via Azure portal or REST API of Azure Search.
## Configure Storage Account

### Enable trusted service
-To allow access to your Storage Account from Azure OpenAI and Azure AI Search, while the Storage Account has no public network access, you need to set up Storage Account to bypass your Azure OpenAI and Azure AI Search as [trusted services based on managed identity](/azure/storage/common/storage-network-security?tabs=azure-portal#trusted-access-based-on-a-managed-identity).
+To allow access to your Storage Account from Azure OpenAI and Azure AI Search, you need to set up Storage Account to bypass your Azure OpenAI and Azure AI Search as [trusted services based on managed identity](/azure/storage/common/storage-network-security?tabs=azure-portal#trusted-access-based-on-a-managed-identity).
In the Azure portal, navigate to your storage account networking tab, choose "Selected networks", and then select **Allow Azure services on the trusted services list to access this storage account** and click Save.
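A hedged CLI equivalent of that portal step (storage account and resource group names are placeholders):

```bash
# Hedged sketch: restrict the storage account to selected networks and let trusted Azure services bypass the rule.
az storage account update \
  --name mystorageaccount \
  --resource-group my-rg \
  --default-action Deny \
  --bypass AzureServices
```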
ai-services Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/Overview/overview.md
keywords: "qna maker, low code chat bots, multi-turn conversations"
# What is QnA Maker?

> [!NOTE]
-> [Azure Open AI On Your Data](../../openai/concepts/use-your-data.md) utilizes large language models (LLMs) to produce similar results to QnA Maker. If you wish to migrate your QnA Maker project to Azure Open AI On Your Data, please check out our [guide](../How-To/migrate-to-openai.md).
+> [Azure OpenAI On Your Data](../../openai/concepts/use-your-data.md) utilizes large language models (LLMs) to produce similar results to QnA Maker. If you wish to migrate your QnA Maker project to Azure OpenAI On Your Data, please check out our [guide](../How-To/migrate-to-openai.md).
[!INCLUDE [Custom question answering](../includes/new-version.md)]
ai-studio Deploy Models Timegen 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/deploy-models-timegen-1.md
You can deploy TimeGEN-1 as a serverless API with pay-as-you-go billing. Nixtla
- An [Azure AI Studio project](../how-to/create-projects.md). - Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure AI Studio. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, visit [Role-based access control in Azure AI Studio](../concepts/rbac-ai-studio.md).
+### Pricing information
+
+#### Estimate the number of tokens needed
+
+Before you create a deployment, it's useful to estimate the number of tokens that you plan to use and be billed for.
+One token corresponds to one data point in your input dataset or output dataset.
+
+Suppose you have the following input time series dataset:
+
+| Unique_id | Timestamp | Target Variable | Exogenous Variable 1 | Exogenous Variable 2 |
+|:--|:-:|--:|--:|--:|
+| BE | 2016-10-22 00:00:00 | 70.00 | 49593.0 | 57253.0 |
+| BE | 2016-10-22 01:00:00 | 37.10 | 46073.0 | 51887.0 |
+
+To determine the number of tokens, multiply the number of rows (in this example, two) by the number of columns used for forecasting, not counting the unique_id and timestamp columns (in this example, three), to get a total of six tokens.
+
+Given the following output dataset:
+
+| Unique_id | Timestamp | Forecasted Target Variable |
+|:--|:-:|--:|
+| BE | 2016-10-22 02:00:00 | 46.57 |
+| BE | 2016-10-22 03:00:00 | 48.57 |
+
+You can also determine the number of tokens by counting the number of data points returned after data forecasting. In this example, the number of tokens is two.
+
+#### Estimate the pricing
+
+There are four pricing meters, as described in the following table:
+
+| Pricing Meter | Description |
+|--|--|
+| paygo-inference-input-tokens | Costs associated with the tokens used as input for inference when *finetune_steps* = 0 |
+| paygo-inference-output-tokens | Costs associated with the tokens used as output for inference when *finetune_steps* = 0 |
+| paygo-finetuned-model-inference-input-tokens | Costs associated with the tokens used as input for inference when *finetune_steps* > 0 |
+| paygo-finetuned-model-inference-output-tokens | Costs associated with the tokens used as output for inference when *finetune_steps* > 0 |
### Create a new deployment

These steps demonstrate the deployment of TimeGEN-1. To create a deployment:
ai-studio Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-api.md
Content-Type: application/json
The Azure AI Model Inference API specifies a set of modalities and parameters that models can subscribe to. However, some models may have further capabilities that the ones the API indicates. On those cases, the API allows the developer to pass them as extra parameters in the payload.
-By setting a header `extra-parameters: allow`, the API will attempt to pass any unknown parameter directly to the underlying model. If the model can handle that parameter, the request completes.
+By setting a header `extra-parameters: pass-through`, the API will attempt to pass any unknown parameter directly to the underlying model. If the model can handle that parameter, the request completes.
The following example shows a request passing the parameter `safe_prompt` supported by Mistral-Large, which isn't specified in the Azure AI Model Inference API:
var messages = [
]; var response = await client.path("/chat/completions").post({
+ "extra-parameters": "pass-through",
body: { messages: messages, safe_mode: true
__Request__
POST /chat/completions?api-version=2024-04-01-preview Authorization: Bearer <bearer-token> Content-Type: application/json
-extra-parameters: allow
+extra-parameters: pass-through
``` ```JSON
extra-parameters: allow
> [!TIP]
-> Alternatively, you can set `extra-parameters: drop` to drop any unknown parameter in the request. Use this capability in case you happen to be sending requests with extra parameters that you know the model won't support but you want the request to completes anyway. A typical example of this is indicating `seed` parameter.
+> The default value for `extra-parameters` is `error`, which returns an error if an extra parameter is indicated in the payload. Alternatively, you can set `extra-parameters: ignore` to drop any unknown parameter in the request. Use this capability when you're sending requests with extra parameters that you know the model doesn't support but you want the request to complete anyway. A typical example is indicating the `seed` parameter.
### Models with disparate set of capabilities
__Response__
## Getting started

The Azure AI Model Inference API is currently supported in certain models deployed as [Serverless API endpoints](../how-to/deploy-models-serverless.md) and Managed Online Endpoints. Deploy any of the [supported models](#availability) and use the exact same code to consume their predictions.
+# [Python](#tab/python)
+
+The client library `azure-ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+
+Explore our [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference) to get yourself started.
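For example, a quick way to get the package (assuming a standard Python environment with pip; the package name is taken from the paragraph above):

```bash
pip install azure-ai-inference
```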
+
+# [JavaScript](#tab/javascript)
+
+The client library `@azure-rest/ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+
+Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.
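For example, a quick way to get the package (assuming Node.js with npm; the package name is taken from the paragraph above):

```bash
npm install @azure-rest/ai-inference
```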
+
+# [REST](#tab/rest)
+
+Explore the reference section of the Azure AI Model Inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Studio and Azure Machine Learning Studio. The API supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
+
+* [Get info](reference-model-inference-info.md): Returns the information about the model deployed under the endpoint.
+* [Text embeddings](reference-model-inference-embeddings.md): Creates an embedding vector representing the input text.
+* [Text completions](reference-model-inference-completions.md): Creates a completion for the provided prompt and parameters.
+* [Chat completions](reference-model-inference-chat-completions.md): Creates a model response for the given chat conversation.
+* [Image embeddings](reference-model-inference-images-embeddings.md): Creates an embedding vector representing the input text and image.
++
ai-studio Reference Model Inference Chat Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-chat-completions.md
POST /chat/completions?api-version=2024-04-01-preview
| Name | Required | Type | Description | | | | | |
-| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `allow` makes the API to pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `drop` makes the API to drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters may make a model to error out if not supported. Using `error` makes the API to reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. | ## Request Body
ai-studio Reference Model Inference Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-completions.md
POST /completions?api-version=2024-04-01-preview
| | | | | | | api-version | query | True | string | The version of the API in the format "YYYY-MM-DD" or "YYYY-MM-DD-preview". |
+## Request Header
++
+| Name | Required | Type | Description |
+| | | | |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. |
+ ## Request Body
The object type, which is always "list".
| detail | [Detail](#detail) | | | error | string | The error description. | | message | string | The error message. |
-| status | integer | The HTTP status code. |
+| status | integer | The HTTP status code. |
ai-studio Reference Model Inference Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-embeddings.md
POST /embeddings?api-version=2024-04-01-preview
| Name | Required | Type | Description | | | | | |
-| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `allow` makes the API to pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `drop` makes the API to drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters may make a model to error out if not supported. Using `error` makes the API to reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. | ## Request Body
ai-studio Reference Model Inference Images Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/reference/reference-model-inference-images-embeddings.md
POST /images/embeddings?api-version=2024-04-01-preview
| Name | Required | Type | Description | | | | | |
-| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `allow` makes the API to pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `drop` makes the API to drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters may make a model to error out if not supported. Using `error` makes the API to reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. | ## Request Body
aks Create Node Pools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/create-node-pools.md
This article shows you how to create one or more node pools in an AKS cluster.
The following limitations apply when you create AKS clusters that support multiple node pools: * See [Quotas, virtual machine size restrictions, and region availability in Azure Kubernetes Service (AKS)](quotas-skus-regions.md).
-* You can delete system node pools if you have another system node pool to take its place in the AKS cluster. Otherwise, you cannot delete the system node pool.
+* You can delete the system node pool if you have another system node pool to take its place in the AKS cluster. Otherwise, you cannot delete the system node pool.
* System pools must contain at least one node, and user node pools may contain zero or more nodes. * The AKS cluster must use the Standard SKU load balancer to use multiple node pools. This feature isn't supported with Basic SKU load balancers. * The AKS cluster must use Virtual Machine Scale Sets for the nodes.
The following limitations apply when you create AKS clusters that support multip
--name $CLUSTER_NAME \ --vm-set-type VirtualMachineScaleSets \ --node-count 2 \
- --generate-ssh-keys \
+ --location $LOCATION \
--load-balancer-sku standard \ --generate-ssh-keys ```
aks Istio Deploy Ingress https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-deploy-ingress.md
NAME TYPE CLUSTER-IP EXTERNAL-IP
aks-istio-ingressgateway-external LoadBalancer 10.0.10.249 <EXTERNAL_IP> 15021:30705/TCP,80:32444/TCP,443:31728/TCP 4m21s ```
+> [!NOTE]
+> Customizations to the IP address on the internal and external gateways aren't supported yet. Any IP address customizations on the ingress are reverted by the Istio add-on.
+> Support for these customizations is planned for the Gateway API implementation of the Istio add-on in the future.
+ Applications aren't accessible from outside the cluster by default after enabling the ingress gateway. To make an application accessible, map the sample deployment's ingress to the Istio ingress gateway using the following manifest: ```bash
aks Istio Plugin Ca https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/istio-plugin-ca.md
The add-on requires Azure CLI version 2.57.0 or later installed. You can run `az
az keyvault secret set --vault-name $AKV_NAME --name root-cert --file <path-to-folder/root-cert.pem> az keyvault secret set --vault-name $AKV_NAME --name ca-cert --file <path-to-folder/ca-cert.pem> az keyvault secret set --vault-name $AKV_NAME --name ca-key --file <path-to-folder/ca-key.pem>
- az keyvault secret set --vault-name $AKV_NAME --name cert-chain --file <path/cert-chain.pem>
+ az keyvault secret set --vault-name $AKV_NAME --name cert-chain --file <path-to-folder/cert-chain.pem>
``` 1. Enable [Azure Key Vault provider for Secret Store CSI Driver for your cluster][akv-addon]:
aks Workload Identity Cross Tenant https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/aks/workload-identity-cross-tenant.md
+
+ Title: Configure cross-tenant workload identity on Azure Kubernetes Service (AKS)
+description: Learn how to configure cross-tenant workload identity on Azure Kubernetes Service (AKS).
+Last updated : 07/03/2024
+# Configure cross-tenant workload identity on Azure Kubernetes Service (AKS)
+
+In this article, you learn how to configure cross-tenant workload identity on Azure Kubernetes Service (AKS). Cross-tenant workload identity allows you to access resources in another tenant from your AKS cluster. In this example, you create an Azure Service Bus in one tenant and send messages to it from a workload running in an AKS cluster in another tenant.
+
+For more information on workload identity, see the [Workload identity overview][workload-identity-overview].
+
+## Prerequisites
+
+* ***Two Azure subscriptions***, each in a separate tenant. In this article, we refer to these as *Tenant A* and *Tenant B*.
+* Azure CLI installed on your local machine. If you don't have the Azure CLI installed, see [Install the Azure CLI][install-azure-cli].
+* Bash shell environment. This article uses Bash shell syntax.
+* You need to have the following subscription details:
+
+ * *Tenant A* tenant ID
+ * *Tenant A* subscription ID
+ * *Tenant B* tenant ID
+ * *Tenant B* subscription ID
+
+> [!IMPORTANT]
+> Make sure you stay within the same terminal window for the duration of this article to retain the environment variables you set. If you close the terminal window, you need to set the environment variables again.
+
+## Configure resources in Tenant A
+
+In *Tenant A*, you create an AKS cluster with workload identity and OIDC issuer enabled. You use this cluster to deploy an application that attempts to access resources in *Tenant B*.
+
+### Log in to Tenant A
+
+1. Log in to your *Tenant A* subscription using the [`az login`][az-login-interactively] command.
+
+ ```azurecli-interactive
+ # Set environment variable
+ TENANT_A_ID=<tenant-id>
+
+ az login --tenant $TENANT_A_ID
+ ```
+
+1. Ensure you're working with the correct subscription in *Tenant A* using the [`az account set`][az-account-set] command.
+
+ ```azurecli-interactive
+ # Set environment variable
+ TENANT_A_SUBSCRIPTION_ID=<subscription-id>
+
+ # Log in to your Tenant A subscription
+ az account set --subscription $TENANT_A_SUBSCRIPTION_ID
+ ```
+
+### Create resources in Tenant A
+
+1. Create a resource group in *Tenant A* to host the AKS cluster using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ # Set environment variables
+ RESOURCE_GROUP=<resource-group-name>
+ LOCATION=<location>
+
+ # Create a resource group
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ ```
+
+1. Create an AKS cluster in *Tenant A* with workload identity and OIDC issuer enabled using the [`az aks create`][az-aks-create] command.
+
+ ```azurecli-interactive
+ # Set environment variable
+ CLUSTER_NAME=<cluster-name>
+
+ # Create an AKS cluster
+ az aks create \
+ --resource-group $RESOURCE_GROUP \
+ --name $CLUSTER_NAME \
+ --enable-oidc-issuer \
+ --enable-workload-identity \
+ --generate-ssh-keys
+ ```
+
+### Get OIDC issuer URL from AKS cluster
+
+* Get the OIDC issuer URL from the cluster in *Tenant A* using the [`az aks show`][az-aks-show] command.
+
+ ```azurecli-interactive
+ OIDC_ISSUER_URL=$(az aks show --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME --query "oidcIssuerProfile.issuerUrl" --output tsv)
+ ```
+
+## Configure resources in Tenant B
+
+In *Tenant B*, you create an Azure Service Bus, a managed identity and assign it permissions to read and write messages to the service bus, and establish the trust between the managed identity and the AKS cluster in *Tenant A*.
+
+### Log in to Tenant B
+
+1. Log out of your *Tenant A* subscription using the [`az logout`][az-logout] command.
+
+ ```azurecli-interactive
+ az logout
+ ```
+
+1. Log in to your *Tenant B* subscription using the [`az login`][az-login-interactively] command.
+
+ ```azurecli-interactive
+ # Set environment variable
+ TENANT_B_ID=<tenant-id>
+
+ az login --tenant $TENANT_B_ID
+ ```
+
+1. Ensure you're working with the correct subscription in *Tenant B* using the [`az account set`][az-account-set] command.
+
+ ```azurecli-interactive
+ # Set environment variable
+ TENANT_B_SUBSCRIPTION_ID=<subscription-id>
+
+ # Log in to your Tenant B subscription
+ az account set --subscription $TENANT_B_SUBSCRIPTION_ID
+ ```
+
+### Create resources in Tenant B
+
+1. Create a resource group in *Tenant B* to host the managed identity using the [`az group create`][az-group-create] command.
+
+ ```azurecli-interactive
+ # Set environment variables
+ RESOURCE_GROUP=<resource-group-name>
+ LOCATION=<location>
+
+ # Create a resource group
+ az group create --name $RESOURCE_GROUP --location $LOCATION
+ ```
+
+1. Create a service bus and queue in *Tenant B* using the [`az servicebus namespace create`][az-servicebus-namespace-create] and [`az servicebus queue create`][az-servicebus-queue-create] commands.
+
+ ```azurecli-interactive
+ # Set environment variable
+ SERVICEBUS_NAME=sb-crosstenantdemo-$RANDOM
+
+ # Create a new service bus namespace and return the service bus hostname
+ SERVICEBUS_HOSTNAME=$(az servicebus namespace create \
+ --name $SERVICEBUS_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --disable-local-auth \
+ --query serviceBusEndpoint \
+ --output tsv | sed -e 's/https:\/\///' -e 's/:443\///')
+
+ # Create a new queue in the service bus namespace
+ az servicebus queue create \
+ --name myqueue \
+ --namespace $SERVICEBUS_NAME \
+ --resource-group $RESOURCE_GROUP
+ ```
+
+1. Create a user-assigned managed identity in *Tenant B* using the [`az identity create`][az-identity-create] command.
+
+ ```azurecli-interactive
+ # Set environment variable
+ IDENTITY_NAME=${SERVICEBUS_NAME}-identity
+
+ # Create a user-assigned managed identity
+ az identity create --resource-group $RESOURCE_GROUP --name $IDENTITY_NAME
+ ```
+
+### Get resource IDs and assign permissions in Tenant B
+
+1. Get the principal ID of the managed identity in *Tenant B* using the [`az identity show`][az-identity-show] command.
+
+ ```azurecli-interactive
+ # Get the user-assigned managed identity principalId
+ PRINCIPAL_ID=$(az identity show \
+ --resource-group $RESOURCE_GROUP \
+ --name $IDENTITY_NAME \
+ --query principalId \
+ --output tsv)
+ ```
+
+1. Get the client ID of the managed identity in *Tenant B* using the [`az identity show`][az-identity-show] command.
+
+ ```azurecli-interactive
+ CLIENT_ID=$(az identity show \
+ --resource-group $RESOURCE_GROUP \
+ --name $IDENTITY_NAME \
+ --query clientId \
+ --output tsv)
+ ```
+
+1. Get the resource ID of the service bus namespace in *Tenant B* using the [`az servicebus namespace show`][az-servicebus-namespace-show] command.
+
+ ```azurecli-interactive
+ SERVICEBUS_ID=$(az servicebus namespace show \
+ --name $SERVICEBUS_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --query id \
+ --output tsv)
+ ```
+
+1. Assign the managed identity in *Tenant B* permissions to read and write service bus messages using the [`az role assignment create`][az-role-assignment-create] command.
+
+ ```azurecli-interactive
+ az role assignment create \
+ --role "Azure Service Bus Data Owner" \
+ --assignee-object-id $PRINCIPAL_ID \
+ --assignee-principal-type ServicePrincipal \
+ --scope $SERVICEBUS_ID
+ ```
+
+## Establish trust between AKS cluster and managed identity
+
+In this section, you create the federated identity credential needed to establish trust between the AKS cluster in *Tenant A* and the managed identity in *Tenant B*. You use the OIDC issuer URL from the AKS cluster in *Tenant A* and the name of the managed identity in *Tenant B*.
+
+* Create a federated identity credential using the [`az identity federated-credential create`][az-identity-federated-credential-create] command.
+
+ ```azurecli-interactive
+ az identity federated-credential create \
+ --name $IDENTITY_NAME-$RANDOM \
+ --identity-name $IDENTITY_NAME \
+ --resource-group $RESOURCE_GROUP \
+ --issuer $OIDC_ISSUER_URL \
+ --subject system:serviceaccount:default:myserviceaccount
+ ```
+
+`--subject system:serviceaccount:default:myserviceaccount` is the name of the Kubernetes service account that you create in *Tenant A* later in the article. When your application pod makes authentication requests, this value is sent to Microsoft Entra ID as the `subject` in the authorization request. Microsoft Entra ID determines eligibility based on whether this value matches what you set when you created the federated identity credential, so it's important to ensure the value matches.
+
+## Deploy application to send messages to Azure Service Bus queue
+
+In this section, you deploy an application to your AKS cluster in *Tenant A* that sends messages to the Azure Service Bus queue in *Tenant B*.
+
+### Log in to Tenant A and get AKS credentials
+
+1. Log out of your *Tenant B* subscription using the [`az logout`][az-logout] command.
+
+ ```azurecli-interactive
+ az logout
+ ```
+
+1. Log in to your *Tenant A* subscription using the [`az login`][az-login-interactively] command.
+
+ ```azurecli-interactive
+ az login --tenant $TENANT_A_ID
+ ```
+
+1. Ensure you're working with the correct subscription in *Tenant A* using the [`az account set`][az-account-set] command.
+
+ ```azurecli-interactive
+ az account set --subscription $TENANT_A_SUBSCRIPTION_ID
+ ```
+
+1. Get the credentials for the AKS cluster in *Tenant A* using the [`az aks get-credentials`][az-aks-get-credentials] command.
+
+ ```azurecli-interactive
+ az aks get-credentials --resource-group $RESOURCE_GROUP --name $CLUSTER_NAME
+ ```
+
+### Create Kubernetes resources to send messages to Azure Service Bus queue
+
+1. Create a new Kubernetes ServiceAccount in the `default` namespace and pass in the client ID of your managed identity in *Tenant B* to the `kubectl apply` command. The client ID is used to authenticate the app in *Tenant A* to the Azure Service Bus in *Tenant B*.
+
+ ```azurecli-interactive
+ kubectl apply -f - <<EOF
+ apiVersion: v1
+ kind: ServiceAccount
+ metadata:
+ annotations:
+ azure.workload.identity/client-id: $CLIENT_ID
+ name: myserviceaccount
+ EOF
+ ```
+
+1. Create a new Kubernetes Job in the `default` namespace to send 100 messages to your Azure Service Bus queue. The Pod template is configured to use workload identity and the service account you created in the previous step. Also note that the `AZURE_TENANT_ID` environment variable is set to the tenant ID of *Tenant B*. This is required as workload identity defaults to the tenant of the AKS cluster, so you need to explicitly set the tenant ID of *Tenant B*.
+
+ ```azurecli-interactive
+ kubectl apply -f - <<EOF
+ apiVersion: batch/v1
+ kind: Job
+ metadata:
+ name: myproducer
+ spec:
+ template:
+ metadata:
+ labels:
+ azure.workload.identity/use: "true"
+ spec:
+ serviceAccountName: myserviceaccount
+ containers:
+ - image: ghcr.io/azure-samples/aks-app-samples/servicebusdemo:latest
+ name: myproducer
+ resources: {}
+ env:
+ - name: OPERATION_MODE
+ value: "producer"
+ - name: MESSAGE_COUNT
+ value: "100"
+ - name: AZURE_SERVICEBUS_QUEUE_NAME
+ value: myqueue
+ - name: AZURE_SERVICEBUS_HOSTNAME
+ value: $SERVICEBUS_HOSTNAME
+ - name: AZURE_TENANT_ID
+ value: $TENANT_B_ID
+ restartPolicy: Never
+ EOF
+ ```
+
+## Verify the deployment
+
+1. Verify that the pod is correctly configured to interact with the Azure Service Bus queue in *Tenant B* by checking the status of the pod using the `kubectl describe pod` command.
+
+ ```azurecli-interactive
+ # Get the dynamically generated pod name
+ POD_NAME=$(kubectl get po --selector job-name=myproducer -o jsonpath='{.items[0].metadata.name}')
+
+ # Verify the tenant ID environment variable is set for Tenant B
+ kubectl describe pod $POD_NAME | grep AZURE_TENANT_ID
+ ```
+
+1. Check the logs of the pod to see if the application was able to send messages across tenants using the `kubectl logs` command.
+
+ ```azurecli-interactive
+ kubectl logs $POD_NAME
+ ```
+
+ Your output should look similar to the following example output:
+
+ ```output
+ ...
+ Adding message to batch: Hello World!
+ Adding message to batch: Hello World!
+ Adding message to batch: Hello World!
+ Sent 100 messages
+ ```
+
+> [!NOTE]
+> As an extra verification step, you can go to the [Azure portal][azure-portal] and navigate to the Azure Service Bus queue in *Tenant B* to view the messages that were sent in the Service Bus Explorer.
+
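If you prefer the CLI, a hedged sketch of the same check (run it while signed in to *Tenant B*; the resource group placeholder refers to the *Tenant B* resource group, and the `countDetails.activeMessageCount` query path is an assumption about the command output):

```azurecli-interactive
# Hedged sketch: check the number of active messages in the queue created in Tenant B.
az servicebus queue show \
  --resource-group <tenant-b-resource-group> \
  --namespace-name $SERVICEBUS_NAME \
  --name myqueue \
  --query countDetails.activeMessageCount \
  --output tsv
```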
+## Clean up resources
+
+After you verify that the deployment is successful, you can clean up the resources to avoid incurring Azure costs.
+
+### Delete resources in Tenant A
+
+1. Log in to your *Tenant A* subscription using the [`az login`][az-login-interactively] command.
+
+ ```azurecli-interactive
+ az login --tenant $TENANT_A_ID
+ ```
+
+1. Ensure you're working with the correct subscription in *Tenant A* using the [`az account set`][az-account-set] command.
+
+ ```azurecli-interactive
+ az account set --subscription $TENANT_A_SUBSCRIPTION_ID
+ ```
+
+1. Delete the Azure resource group and all resources in it using the [`az group delete`][az-group-delete] command.
+
+ ```azurecli-interactive
+ az group delete --name $RESOURCE_GROUP --yes --no-wait
+ ```
+
+### Delete resources in Tenant B
+
+1. Log in to your *Tenant B* subscription using the [`az login`][az-login-interactively] command.
+
+ ```azurecli-interactive
+ az login --tenant $TENANT_B_ID
+ ```
+
+1. Ensure you're working with the correct subscription in *Tenant B* using the [`az account set`][az-account-set] command.
+
+ ```azurecli-interactive
+ az account set --subscription $TENANT_B_SUBSCRIPTION_ID
+ ```
+
+1. Delete the Azure resource group and all resources in it using the [`az group delete`][az-group-delete] command.
+
+ ```azurecli-interactive
+ az group delete --name $RESOURCE_GROUP --yes --no-wait
+ ```
+
+## Next steps
+
+In this article, you learned how to configure cross-tenant workload identity on Azure Kubernetes Service (AKS). To learn more about workload identity, see the following articles:
+
+* [Workload identity overview][workload-identity-overview]
+* [Configure workload identity on Azure Kubernetes Service (AKS)][configure-workload-identity]
+
+<!-- LINKS -->
+[workload-identity-overview]: ./workload-identity-overview.md
+[configure-workload-identity]: ./workload-identity-deploy-cluster.md
+[install-azure-cli]: /cli/azure/install-azure-cli
+[az-login-interactively]: /cli/azure/authenticate-azure-cli-interactively
+[az-logout]: /cli/azure/authenticate-azure-cli-interactively#logout
+[az-account-set]: /cli/azure/account#az_account_set
+[az-group-create]: /cli/azure/group#az_group_create
+[az-aks-create]: /cli/azure/aks#az_aks_create
+[az-aks-show]: /cli/azure/aks#az_aks_show
+[az-identity-create]: /cli/azure/identity#az_identity_create
+[az-identity-show]: /cli/azure/identity#az_identity_show
+[az-role-assignment-create]: /cli/azure/role/assignment#az_role_assignment_create
+[az-identity-federated-credential-create]: /cli/azure/identity/federated-credential#az-identity-federated-credential-create
+[az-aks-get-credentials]: /cli/azure/aks#az_aks_get_credentials
+[az-servicebus-namespace-create]: /cli/azure/servicebus/namespace#az-servicebus-namespace-create
+[az-servicebus-namespace-show]: /cli/azure/servicebus/namespace#az-servicebus-namespace-show
+[az-servicebus-queue-create]: /cli/azure/servicebus/queue#az-servicebus-queue-create
+[az-group-delete]: /cli/azure/group#az_group_delete
+[azure-portal]: https://portal.azure.com
+
api-center Enable Api Analysis Linting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/enable-api-analysis-linting.md
Title: Perform API linting and analysis - Azure API Center
description: Configure linting of API definitions in your API center to analyze compliance of APIs with the organization's API style guide. Previously updated : 04/22/2024 Last updated : 06/29/2024
Now that the managed identity is enabled, assign it the Azure API Center Complia
```azurecli #! /bin/bash
- apicID=$(az apic service show --name <apic-name> --resource-group <resource-group-name> \
+ apicID=$(az apic show --name <apic-name> --resource-group <resource-group-name> \
--query "id" --output tsv) ``` ```azurecli # PowerShell syntax
- $apicID=$(az apic service show --name <apic-name> --resource-group <resource-group-name> `
+ $apicID=$(az apic show --name <apic-name> --resource-group <resource-group-name> `
--query "id" --output tsv) ```
Now create an event subscription in your API center to trigger the function app
```azurecli #! /bin/bash
- apicID=$(az apic service show --name <apic-name> --resource-group <resource-group-name> \
+ apicID=$(az apic show --name <apic-name> --resource-group <resource-group-name> \
--query "id" --output tsv) ``` ```azurecli # PowerShell syntax
- $apicID=$(az apic service show --name <apic-name> --resource-group <resource-group-name> `
+ $apicID=$(az apic show --name <apic-name> --resource-group <resource-group-name> `
--query "id" --output tsv) ``` 1. Get the resource ID of the function in the function app. In this example, the function name is *apicenter-analyzer*. Substitute `<function-app-name>` and `<resource-group-name>` with your function app name and resource group name.
api-center Import Api Management Apis https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/import-api-management-apis.md
description: Add APIs to your Azure API center inventory from your API Managemen
Previously updated : 04/30/2024 Last updated : 06/28/2024 # Customer intent: As an API program manager, I want to add APIs that are managed in my Azure API Management instance to my API center.
This article shows two options for using the Azure CLI to add APIs to your API c
* Run [az apic api register](/cli/azure/apic/api#az-apic-api-register) to register a new API in your API center. * Run [az apic api definition import-specification](/cli/azure/apic/api/definition#az-apic-api-definition-import-specification) to import the API definition to an existing API.
-* **Option 2** - Import APIs directly from API Management to your API center using the [az apic service import-from-apim](/cli/azure/apic/service#az-apic-service-import-from-apim) command.
+* **Option 2** - Import APIs directly from API Management to your API center using the [az apic import-from-apim](/cli/azure/apic#az-apic-import-from-apim) command.
After importing API definitions or APIs from API Management, you can add metadata and documentation in your API center to help stakeholders discover, understand, and consume the API.
-> [!VIDEO https://www.youtube.com/embed/SuGkhuBUV5k]
- ## Prerequisites * An API center in your Azure subscription. If you haven't created one, see [Quickstart: Create your API center](set-up-api-center.md).
First, export an API from your API Management instance to an API definition usin
### Export API to a local API definition file
-The following example command exports the API with identifier *my-api* in the *myAPIManagement* instance of API. The API is exported in OpenApiJson format to a local OpenAPI definition file named *specificationFile.json*.
+The following example command exports the API with identifier *my-api* in the *myAPIManagement* instance of API Management. The API is exported in OpenApiJson format to a local OpenAPI definition file at the path you specify.
```azurecli #! /bin/bash
You can register a new API in your API center from the exported definition by us
The following example registers an API in the *myAPICenter* API center from a local OpenAPI definition file named *definitionFile.json*. ```azurecli
-az apic api register --resource-group myResourceGroup --service myAPICenter --api-location "/path/to/definitionFile.json"
+az apic api register --resource-group myResourceGroup --service-name myAPICenter --api-location "/path/to/definitionFile.json"
``` ### Import API definition to an existing API in your API center
This example assumes you have an API named *my-api* and an associated API versio
```azurecli #! /bin/bash az apic api definition import-specification \
- --resource-group myResourceGroup --service myAPICenter \
+ --resource-group myResourceGroup --service-name myAPICenter \
--api-id my-api --version-id v1-0-0 \ --definition-id openapi --format "link" --value '$link' \ --specification '{"name":"openapi","version":"3.0.2"}'
az apic api definition import-specification \
```azurecli # PowerShell syntax az apic api definition import-specification `
- --resource-group myResourceGroup --service myAPICenter `
+ --resource-group myResourceGroup --service-name myAPICenter `
--api-id my-api --version-id v1-0-0 ` --definition-id openapi --format "link" --value '$link' ` --specification '{"name":"openapi","version":"3.0.2"}'
az apic api definition import-specification `
## Option 2: Import APIs directly from your API Management instance
-The following are steps to import APIs from your API Management instance to your API center using the [az apic service import-from-apim](/cli/azure/apic/service#az-apic-service-import-from-apim) command. This command is useful when you want to import multiple APIs from API Management to your API center, but you can also use it to import a single API.
+The following steps import APIs from your API Management instance to your API center using the [az apic import-from-apim](/cli/azure/apic#az-apic-import-from-apim) command. This command is useful when you want to import multiple APIs from API Management to your API center, but you can also use it to import a single API.
-When you add APIs from an API Management instance to your API center using `az apic service import-from-apim`, the following happens automatically:
+When you add APIs from an API Management instance to your API center using `az apic import-from-apim`, the following happens automatically:
* Each API's [versions](key-concepts.md#api-version), [definitions](key-concepts.md#api-definition), and [deployment](key-concepts.md#deployment) information are copied to your API center. * The API receives a system-generated API name in your API center. It retains its display name (title) from API Management.
The following examples show how to configure a system-assigned managed identity
#### [Azure CLI](#tab/cli)
-Set the system-assigned identity in your API center using the following [az apic service update](/cli/azure/apic/service#az-apic-service-update) command. Substitute the names of your API center and resource group:
+Set the system-assigned identity in your API center using the following [az apic update](/cli/azure/apic#az-apic-update) command. Substitute the names of your API center and resource group:
```azurecli
-az apic service update --name <api-center-name> --resource-group <resource-group-name> --identity '{"type": "SystemAssigned"}'
+az apic update --name <api-center-name> --resource-group <resource-group-name> --identity '{"type": "SystemAssigned"}'
```
To allow import of APIs, assign your API center's managed identity the **API Man
#### [Azure CLI](#tab/cli)
-1. Get the principal ID of the identity. For a system-assigned identity, use the [az apic service show](/cli/azure/apic/service#az-apic-service-show) command.
+1. Get the principal ID of the identity. For a system-assigned identity, use the [az apic show](/cli/azure/apic#az-apic-show) command.
```azurecli #! /bin/bash
- apicObjID=$(az apic service show --name <api-center-name> \
+ apicObjID=$(az apic show --name <api-center-name> \
--resource-group <resource-group-name> \ --query "identity.principalId" --output tsv) ``` ```azurecli # PowerShell syntax
- $apicObjID=$(az apic service show --name <api-center-name> `
+ $apicObjID=$(az apic show --name <api-center-name> `
--resource-group <resource-group-name> ` --query "identity.principalId" --output tsv) ```
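The article then grants this identity read access to the API Management instance. A hedged sketch of that role assignment, assuming `$apicObjID` from the previous step and the built-in **API Management Service Reader Role**:

```azurecli
#! /bin/bash
# Get the resource ID of the API Management instance to use as the assignment scope
apimID=$(az apim show --name <apim-name> --resource-group <apim-resource-group-name> \
    --query "id" --output tsv)

az role assignment create --role "API Management Service Reader Role" \
    --assignee-object-id $apicObjID --assignee-principal-type ServicePrincipal \
    --scope $apimID
```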
To allow import of APIs, assign your API center's managed identity the **API Man
### Import APIs from API Management
-Use the [az apic service import-from-apim](/cli/azure/apic/service#az-apic-service-import-from-apim) command to import one or more APIs from your API Management instance to your API center.
+Use the [az apic import-from-apim](/cli/azure/apic#az-apic-import-from-apim) command to import one or more APIs from your API Management instance to your API center.
> [!NOTE] > * This command depends on a managed identity configured in your API center that has read permissions to the API Management instance. If you haven't added or configured a managed identity, see [Add a managed identity in your API center](#add-a-managed-identity-in-your-api-center) earlier in this article.
Use the [az apic service import-from-apim](/cli/azure/apic/service#az-apic-servi
#### Import all APIs from an API Management instance
-Use a wildcard (`*`) to specify all APIs from the API Management instance.
-
-1. Get the resource ID of your API Management instance using the [az apim show](/cli/azure/apim#az-apim-show) command.
+In the following command, substitute the names of your API center, your API center's resource group, your API Management instance, and your instance's resource group. Use `*` to specify all APIs from the API Management instance.
- ```azurecli
- #! /bin/bash
- apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
- ```
-
- ```azurecli
- # PowerShell syntax
- $apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
- ```
-
-1. Use the `az apic service import-from-apim` command to import the APIs. Substitute the names of your API center and resource group, and use `*` to specify all APIs from the API Management instance.
+```azurecli
+#! /bin/bash
+az apic import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> \
+ --apim-name <api-management-name> --apim-resource-group <api-management-resource-group-name> \
+ --apim-apis '*'
+```
- ```azurecli
- az apic service import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> --source-resource-ids $apimID/apis/*
- ```
+```azurecli
+# PowerShell syntax
+az apic import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> `
+ --apim-name <api-management-name> --apim-resource-group <api-management-resource-group-name> `
+ --apim-apis '*'
+```
- > [!NOTE]
- > If your API Management instance has a large number of APIs, import to your API center might take some time.
+> [!NOTE]
+> If your API Management instance has a large number of APIs, import to your API center might take some time.
#### Import a specific API from an API Management instance Specify an API to import using its name from the API Management instance.
-1. Get the resource ID of your API Management instance using the [az apim show](/cli/azure/apim#az-apim-show) command.
+In the following command, substitute the names of your API center, your API center's resource group, your API Management instance, and your instance's resource group. Pass an API name such as `petstore-api` using the `--apim-apis` parameter.
- ```azurecli
- #! /bin/bash
- apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
- ```
+```azurecli
+#! /bin/bash
+az apic import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> \
+ --apim-name <api-management-name> --apim-resource-group <api-management-resource-group-name> \
+ --apim-apis 'petstore-api'
+```
- ```azurecli
- # PowerShell syntax
- $apimID=$(az apim show --name <apim-name> --resource-group <resource-group-name> --query id --output tsv)
- ```
-
-1. Use the `az apic service import-from-apim` command to import the API. Substitute the names of your API center and resource group, and specify an API name from the API Management instance.
- ```azurecli
- az apic service import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> --source-resource-ids $apimID/apis/<api-name>
- ```
-
- > [!NOTE]
- > Specify `<api-name>` using the API resource name in the API Management instance, not the display name. Example: `petstore-api` instead of `Petstore API`.
+```azurecli
+# PowerShell syntax
+az apic import-from-apim --service-name <api-center-name> --resource-group <resource-group-name> `
+ --apim-name <api-management-name> --apim-resource-group <api-management-resource-group-name> `
+ --apim-apis 'petstore-api'
+```
+
+> [!NOTE]
+> Specify an API name using the API resource name in the API Management instance, not the display name. Example: `petstore-api` instead of `Petstore API`.
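If you're not sure of an API's resource name, one hedged way to list the names in your API Management instance is the `az apim api list` command; the `--query` projection and output fields shown here are assumptions to illustrate the idea:

```azurecli
az apim api list --service-name <api-management-name> \
    --resource-group <api-management-resource-group-name> \
    --query "[].{name:name, displayName:displayName}" --output table
```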
After importing APIs from API Management, you can view and manage the imported APIs in your API center.
api-center Manage Apis Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/manage-apis-azure-cli.md
Previously updated : 04/30/2024
+ms.date: 06/28/2024
# Customer intent: As an API program manager, I want to automate processes to register and update APIs in my Azure API center.
This article shows how to use [`az apic api`](/cli/azure/apic/api) commands in the Azure CLI to add and configure APIs in your [API center](overview.md) inventory. Use commands in the Azure CLI to script operations to manage your API inventory and other aspects of your API center.
-> [!VIDEO https://www.youtube.com/embed/Dvar8Dg25s0]
- ## Prerequisites * An API center in your Azure subscription. If you haven't created one already, see [Quickstart: Create your API center](set-up-api-center.md).
The following example creates an API named *Petstore API* in the *myResourceGrou
```azurecli-interactive az apic api create --resource-group myResourceGroup \
- --service myAPICenter --api-id petstore-api \
+ --service-name myAPICenter --api-id petstore-api \
--title "Petstore API" --type "rest" ```
The following example creates an API version named *v1-0-0* for the *petstore-ap
```azurecli-interactive az apic api version create --resource-group myResourceGroup \
- --service myAPICenter --api-id petstore-api \
+ --service-name myAPICenter --api-id petstore-api \
--version-id v1-0-0 --title "v1-0-0" --lifecycle-stage "testing" ```
The following example uses the [az apic api definition create](/cli/azure/apic/a
```azurecli-interactive az apic api definition create --resource-group myResourceGroup \
- --service myAPICenter --api-id petstore-api \
+ --service-name myAPICenter --api-id petstore-api \
--version-id v1-0-0 --definition-id openapi --title "OpenAPI" ```
The following example imports an OpenAPI specification file from a publicly acce
```azurecli-interactive az apic api definition import-specification \
- --resource-group myResourceGroup --service myAPICenter \
+ --resource-group myResourceGroup --service-name myAPICenter \
--api-id petstore-api --version-id v1-0-0 \ --definition-id openapi --format "link" \ --value 'https://petstore3.swagger.io/api/v3/openapi.json' \
The following example exports the specification file from the *openapi* definiti
```azurecli-interactive az apic api definition export-specification \
- --resource-group myResourceGroup --service myAPICenter \
+ --resource-group myResourceGroup --service-name myAPICenter \
--api-id petstore-api --version-id v1-0-0 \ --definition-id openapi --file-name "/Path/to/specificationFile.json" ```
The following example registers an API in the *myAPICenter* API center from a lo
```azurecli-interactive az apic api register --resource-group myResourceGroup \
- --service myAPICenter --api-location "/Path/to/specificationFile.json"
+ --service-name myAPICenter --api-location "/Path/to/specificationFile.json"
``` * The command sets the API properties such as name and type from values in the definition file.
Use the [az apic api delete](/cli/azure/apic/api#az_apic_api_delete) command to
```azurecli-interactive az apic api delete \
- --resource-group myResoureGroup --service myAPICenter \
+ --resource-group myResourceGroup --service-name myAPICenter \
--api-id petstore-api ```
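To confirm the result of register, update, or delete operations, a hedged sketch that lists the remaining APIs in the inventory, assuming the `az apic api list` command and the `--service-name` parameter used elsewhere in this article:

```azurecli-interactive
az apic api list --resource-group myResourceGroup \
    --service-name myAPICenter --output table
```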
api-center Set Up Api Center Azure Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-center/set-up-api-center-azure-cli.md
Previously updated : 04/19/2024
+ms.date: 06/27/2024
az group create --name MyGroup --location eastus
## Create an API center
-Create an API center using the [`az apic service create`](/cli/azure/apic/service#az-apic-service-create) command.
+Create an API center using the [`az apic create`](/cli/azure/apic#az-apic-create) command.
The following example creates an API center called *MyApiCenter* in the *MyGroup* resource group. In this example, the API center is deployed in the *West Europe* location. Substitute an API center name of your choice and enter one of the [available locations](overview.md#available-regions) for your API center. ```azurecli-interactive
-az apic service create --name MyApiCenter --resource-group MyGroup --location westeurope
+az apic create --name MyApiCenter --resource-group MyGroup --location westeurope
``` Output from the command looks similar to the following. By default, the API center is created in the Free plan.
Output from the command looks similar to the following. By default, the API cent
"location": "westeurope", "name": "myapicenter", "resourceGroup": "mygroup",
+ "sku": {
+ "name": "Free"
+ },
"systemData": {
- "createdAt": "2024-04-22T21:40:35.2541624Z",
- "lastModifiedAt": "2024-04-22T21:40:35.2541624Z"
+ "createdAt": "2024-06-22T21:40:35.2541624Z",
+ "lastModifiedAt": "2024-06-22T21:40:35.2541624Z"
}, "tags": {}, "type": "Microsoft.ApiCenter/services"
api-management Api Management Authenticate Authorize Azure Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/api-management-authenticate-authorize-azure-openai.md
Last updated 02/20/2024 + # Authenticate and authorize access to Azure OpenAI APIs using Azure API Management
api-management Azure Openai Api From Specification https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-api-from-specification.md
Last updated 05/10/2024+
api-management Azure Openai Emit Token Metric Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-emit-token-metric-policy.md
Last updated 05/10/2024 + - build-2024
api-management Azure Openai Enable Semantic Caching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-enable-semantic-caching.md
Last updated 05/13/2024 + # Enable semantic caching for Azure OpenAI APIs in Azure API Management
api-management Azure Openai Semantic Cache Lookup Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-semantic-cache-lookup-policy.md
+ - build-2024
api-management Azure Openai Semantic Cache Store Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-semantic-cache-store-policy.md
+ - build-2024
api-management Azure Openai Token Limit Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/api-management/azure-openai-token-limit-policy.md
+ - build-2024
app-service Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/migrate.md
description: Learn how to migrate your App Service Environment to App Service En
Previously updated : 6/12/2024 Last updated : 7/3/2024 zone_pivot_groups: app-service-cli-portal
If your App Service Environment doesn't pass the validation checks or you try to
|Migration to ASEv3 is not allowed for this ASE. |You can't migrate using the migration feature. |Migrate using one of the [manual migration options](migration-alternatives.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. | |`<ZoneRedundant><DedicatedHosts><ASEv3/ASE>` is not available in this location. |This error appears if you're trying to migrate an App Service Environment in a region that doesn't support one of your requested features. |Migrate using one of the [manual migration options](migration-alternatives.md) if you want to migrate immediately. Otherwise, wait for the migration feature to support this App Service Environment configuration. |
-|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. |
+|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. Upgrades take 8-12 hours or longer depending on the size (number of instances/cores) of the App Service Environment. |Wait until the upgrade finishes and then migrate. |
|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. | |Migrate is not available for this subscription.|Support needs to be engaged for migrating this App Service Environment.|Open a support case to engage support to resolve your issue.| |Your InteralLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. The InternalLoadBalancingMode must be manually changed by the Microsoft team. |Open a support case to engage support to resolve your issue. Request an update to the InternalLoadBalancingMode to allow migration. |
app-service Side By Side Migrate https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/environment/side-by-side-migrate.md
If your App Service Environment doesn't pass the validation checks or you try to
|Migrate cannot be called if IP SSL is enabled on any of the sites. |App Service Environments that have sites with IP SSL enabled can't be migrated using the side-by-side migration feature. |Remove the IP SSL from all of your apps in the App Service Environment to enable the migration feature. | |Cannot migrate within the same subnet. |The error appears if you specify the same subnet that your current environment is in for placement of your App Service Environment v3. |You must specify a different subnet for your App Service Environment v3. If you need to use the same subnet, migrate using the [in-place migration feature](migrate.md). | |Subscription has too many App Service Environments. Please remove some before trying to create more.|The App Service Environment [quota for your subscription](../../azure-resource-manager/management/azure-subscription-service-limits.md#app-service-limits) is met. |Remove unneeded environments or contact support to review your options. |
-|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. In some cases, an upgrade is initiated when visiting the migration page if your App Service Environment isn't on the current build. |Wait until the upgrade finishes and then migrate. |
+|Migrate cannot be called on this ASE until the active upgrade has finished. |App Service Environments can't be migrated during platform upgrades. You can set your [upgrade preference](how-to-upgrade-preference.md) from the Azure portal. Upgrades take 8-12 hours or longer depending on the size (number of instances/cores) of the App Service Environment. |Wait until the upgrade finishes and then migrate. |
|App Service Environment management operation in progress. |Your App Service Environment is undergoing a management operation. These operations can include activities such as deployments or upgrades. Migration is blocked until these operations are complete. |You can migrate once these operations are complete. | |Your InteralLoadBalancingMode is not currently supported.|App Service Environments that have InternalLoadBalancingMode set to certain values can't be migrated using the migration feature at this time. The Microsoft team must manually change the InternalLoadBalancingMode. |Open a support case to engage support to resolve your issue. Request an update to the InternalLoadBalancingMode. | |Full migration cannot be called before IP addresses are generated. |This error appears if you attempt to migrate before finishing the premigration steps. |Ensure you complete all premigration steps before you attempt to migrate. See the [step-by-step guide for migrating](#use-the-side-by-side-migration-feature). |
app-service Manage Custom Dns Buy Domain https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/manage-custom-dns-buy-domain.md
For pricing information on App Service domains, visit the [App Service Pricing p
| Setting | Description | | -- | -- | | **Auto renewal** | Your App Service domain is registered to you at one-year increments. Enable auto renewal so that your domain registration doesn't expire and that you retain ownership of the domain. Your Azure subscription is automatically charged the yearly domain registration fee at the time of renewal. If you leave it disabled, you must [renew it manually](#renew-the-domain). |
- | **Privacy protection** | Enabled by default. Privacy protection hides your domain registration contact information from the WHOIS database. Privacy protection is already included in the yearly domain registration fee. To opt out, select **Disable**. |
+ | **Privacy protection** | Enabled by default. Privacy protection hides your domain registration contact information from the WHOIS database and is already included in the yearly domain registration fee. To opt out, select **Disable**. Privacy protection is not supported in the following top-level domains (TLDs): co.uk, in, org.uk, co.in, and nl. |
1. Select **Next: Tags** and set the tags you want for your App Service domain. Tagging isn't required for using App Service domains, but is a [feature in Azure that helps you manage your resources](../azure-resource-manager/management/tag-resources.md).
app-service Samples Terraform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/samples-terraform.md
Title: terraform samples
-description: Find terraform samples for some of the common App Service scenarios. Learn how to automate your App Service deployment or management tasks.
+ Title: Terraform samples
+description: Find Terraform samples for some of the common App Service scenarios. Learn how to automate your App Service deployment or management tasks.
tags: azure-service-management ms.assetid: 1e5ecfa8-4ab1-47d3-ab23-97abf723516d Previously updated : 11/18/2022 Last updated : 06/25/2024 # Terraform samples for Azure App Service
-The following table includes links to terraform scripts.
+The following table includes links to Terraform scripts.
| Script | Description | |-|-| |**Create app**|| | [Create two apps and connect securely with Private Endpoint and VNet integration](./scripts/terraform-secure-backend-frontend.md)| Creates two App Service apps and connect apps together with Private Endpoint and VNet integration. | | [Provision App Service and use slot swap to deploy](/azure/developer/terraform/provision-infrastructure-using-azure-deployment-slots)| Provision App Service infrastructure with Azure deployment slots. |
+| [Create an Azure Windows web app with a backup](./scripts/terraform-backup.md)| Create an Azure Windows web app with a backup schedule. |
| | |
app-service Terraform Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/app-service/scripts/terraform-backup.md
+
+ Title: 'Quickstart: Create an Azure Windows web app with a backup using Terraform'
+description: In this quickstart, you create an Azure Windows web app with a backup schedule and a .NET application stack.
+ Last updated : 07/02/2024++++
+customer intent: As a Terraform user, I want to see how to create an Azure Windows web app with a backup schedule and a .NET application stack.
++
+# Quickstart: Create an Azure Windows web app with a backup using Terraform
+
+In Azure App Service, you can make on-demand custom backups or configure scheduled custom backups. In this quickstart, you use Terraform to create an Azure Windows web app with a backup schedule and a .NET application stack. For more information about App Service backups and restores, see [Back up and restore your app in Azure App Service](/azure/app-service/manage-backup?tabs=portal).
++
+> [!div class="checklist"]
+> * Create an Azure storage account and container with a randomly generated name.
+> * Create an Azure App Service plan with a randomly generated name.
+> * Generate a Shared Access Signature (SAS) for the storage account.
+> * Create an Azure Windows web app with a randomly generated name.
+> * Configure a backup schedule for the web app.
+> * Specify the application stack for the web app.
+> * Output the names of key resources created with the Terraform script.
+> * Output the default hostname of the Windows web app.
+
+## Prerequisites
+
+- Create an Azure account with an active subscription. You can [create an account for free](https://azure.microsoft.com/free/?WT.mc_id=A261C142F).
+
+- [Install and configure Terraform](/azure/developer/terraform/quickstart-configure)
+
+## Implement the Terraform code
+
+> [!NOTE]
+> The sample code for this article is located in the [Azure Terraform GitHub repo](https://github.com/Azure/terraform/tree/master/quickstart/101-app-service-backup). You can view the log file containing the [test results from current and previous versions of Terraform](https://github.com/Azure/terraform/tree/master/quickstart/101-app-service-backup/TestRecord.md).
+>
+> See more [articles and sample code showing how to use Terraform to manage Azure resources](/azure/terraform)
+
+1. Create a directory in which to test and run the sample Terraform code and make it the current directory.
+
+1. Create a file named `providers.tf` and insert the following code.
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-app-service-backup/providers.tf":::
+
+1. Create a file named `main.tf` and insert the following code.
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-app-service-backup/main.tf":::
+
+1. Create a file named `variables.tf` and insert the following code.
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-app-service-backup/variables.tf":::
+
+1. Create a file named `outputs.tf` and insert the following code.
+ :::code language="Terraform" source="~/terraform_samples/quickstart/101-app-service-backup/outputs.tf":::
+
+## Initialize Terraform
++
+## Create a Terraform execution plan
++
+## Apply a Terraform execution plan
++
+## Verify the results
+
+Run [az webapp show](/cli/azure/webapp#az-webapp-show) to view the Azure Windows web app.
+
+```azurecli
+az webapp show --name <web_app_name> --resource-group <resource_group_name>
+```
+
+Replace `<web_app_name>` with the name of your Azure Windows web app and `<resource_group_name>` with the name of your resource group.
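Optionally, to confirm the backup schedule that Terraform configured, a hedged check using the `az webapp config backup show` command (assuming it's available in your CLI version and using the same placeholder names):

```azurecli
az webapp config backup show --webapp-name <web_app_name> --resource-group <resource_group_name>
```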
+
+## Clean up resources
++
+## Troubleshoot Terraform on Azure
+
+[Troubleshoot common problems when using Terraform on Azure](/azure/developer/terraform/troubleshoot).
+
+## Next steps
+
+> [!div class="nextstepaction"]
+> [See more articles about App Service](/azure/app-service)
azure-cache-for-redis Cache Best Practices Performance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-best-practices-performance.md
description: Learn how to test the performance of Azure Cache for Redis.
Previously updated : 06/19/2023 Last updated : 07/01/2024
Fortunately, several tools exist to make benchmarking Redis easier. Two of the m
## How to use the redis-benchmark utility
-1. Install open source Redis server to a client VM you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.
+1. Install open source Redis server to a client virtual machine (VM) you can use for testing. The redis-benchmark utility is built into the open source Redis distribution. Follow the [Redis documentation](https://redis.io/docs/latest/operate/oss_and_stack/install/install-redis/) for instructions on how to install the open source image.
1. The client VM used for testing should be _in the same region_ as your Azure Cache for Redis instance.
Fortunately, several tools exist to make benchmarking Redis easier. Two of the m
- On the Premium tier, scaling out, clustering, is typically recommended before scaling up. Clustering allows Redis server to use more vCPUs by sharding data. Throughput should increase roughly linearly when adding shards in this case.
+- On _C0_ and _C1_ Standard caches, while internal Defender scanning is running on the VMs, you might see short spikes in server load not caused by an increase in cache requests. You see higher latency for requests while internal Defender scans are run on these tiers a couple of times a day. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving internal Defender scanning and Redis requests. You can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.
+
+ The increased cache size on the higher tiers helps address any latency concerns. Also, at the _C2_ level, you have support for as many as 2,000 client connections.
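A hedged example of scaling a Standard cache to the _C2_ size with the Azure CLI, assuming the `az redis update` command's `--sku` and `--vm-size` parameters; substitute your own cache and resource group names:

```azurecli
az redis update --name <cache-name> --resource-group <resource-group-name> \
    --sku Standard --vm-size c2
```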
+ ## Redis-benchmark examples **Pre-test setup**:
redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
## Example performance benchmark data
-The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions may be lower.
+The following tables show the maximum throughput values that were observed while testing various sizes of Standard, Premium, Enterprise, and Enterprise Flash caches. We used `redis-benchmark` from an IaaS Azure VM against the Azure Cache for Redis endpoint. The throughput numbers are only for GET commands. Typically, SET commands have a lower throughput. These numbers are optimized for throughput. Real-world throughput under acceptable latency conditions might be lower.
The following configuration was used to benchmark throughput for the Basic, Standard, and Premium tiers:
redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
#### Enterprise Cluster Policy
-| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| `GET` requests per second without SSL (1-kB value size) | `GET` requests per second with SSL (1-kB value size) |
|::| | :|:| :| :| | E10 | 12 GB | 4 | 4,000 | 300,000 | 207,000 | | E20 | 25 GB | 4 | 4,000 | 680,000 | 480,000 |
redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
#### OSS Cluster Policy
-| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| GET requests per second without SSL (1-kB value size) | GET requests per second with SSL (1-kB value size) |
+| Instance | Size | vCPUs | Expected network bandwidth (Mbps)| `GET` requests per second without SSL (1-kB value size) | `GET` requests per second with SSL (1-kB value size) |
|::| | :|:| :| :| | E10 | 12 GB | 4 | 4,000 | 1,400,000 | 1,000,000 | | E20 | 25 GB | 4 | 4,000 | 1,200,000 | 900,000 |
redis-benchmark -h yourcache.region.redisenterprise.cache.azure.net -p 10000 -a
In addition to scaling up by moving to larger cache size, you can boost performance by [scaling out](cache-how-to-scale.md#how-to-scale-up-and-outenterprise-and-enterprise-flash-tiers). In the Enterprise tiers, scaling out is called increasing the _capacity_ of the cache instance. A cache instance by default has capacity of two--meaning a primary and replica node. An Enterprise cache instance with a capacity of four indicates that the instance was scaled out by a factor of two. Scaling out provides access to more memory and vCPUs. Details on how many vCPUs are used by the core Redis process at each cache size and capacity can be found at the [Enterprise tiers best practices page](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization). Scaling out is most effective when using the OSS cluster policy.
-The following tables show the GET requests per second at different capacities, using SSL and a 1-kB value size.
+The following tables show the `GET` requests per second at different capacities, using SSL and a 1-kB value size.
#### Scaling out - Enterprise cluster policy
azure-cache-for-redis Cache How To Scale https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-cache-for-redis/cache-how-to-scale.md
Previously updated : 01/05/2024 Last updated : 07/01/2024 ms.devlang: csharp
You can monitor the following metrics to determine if you need to scale.
- **Redis Server Load** - High Redis server load means that the server is unable to keep pace with requests from all the clients. Because a Redis server is a single threaded process, it's typically more helpful to _scale out_ rather than _scale up_. Scaling out by enabling clustering helps distribute overhead functions across multiple Redis processes. Scaling out also helps distribute TLS encryption/decryption and connection/disconnection, speeding up cache instances using TLS. - Scaling up can still be helpful in reducing server load because background tasks can take advantage of the more vCPUs and free up the thread for the main Redis server process.
- - The Enterprise and Enterprise Flash tiers use Redis Enterprise rather than open source Redis. One of the advantages of these tiers is that the Redis server process can take advantage of multiple vCPUs. Because of that, both scaling up and scaling out in these tiers can be helpful in reducing server load. For more information, see [Best Practices for the Enterprise and Enterprise Flash tiers of Azure Cache for Redis](cache-best-practices-enterprise-tiers.md).
+ - The Enterprise and Enterprise Flash tiers use Redis Enterprise rather than open source Redis. One of the advantages of these tiers is that the Redis server process can take advantage of multiple vCPUs. With multiple vCPUs, both scaling up and scaling out in these tiers can be helpful in reducing server load. For more information, see [Best Practices for the Enterprise and Enterprise Flash tiers of Azure Cache for Redis](cache-best-practices-enterprise-tiers.md).
- **Memory Usage** - High memory usage indicates that your data size is too large for the current cache size. Consider scaling to a cache size with larger memory. Either _scaling up_ or _scaling out_ is effective here. - **Client connections** - Each cache size has a limit to the number of client connections it can support. If your client connections are close to the limit for the cache size, consider _scaling up_ to a larger tier. _Scaling out_ doesn't increase the number of supported client connections. - For more information on connection limits by cache size, see [Azure Cache for Redis Pricing](https://azure.microsoft.com/pricing/details/cache/). - **Network Bandwidth**
- - If the Redis server exceeds the available bandwidth, clients requests could time out because the server can't push data to the client fast enough. Check "Cache Read" and "Cache Write" metrics to see how much server-side bandwidth is being used. If your Redis server is exceeding available network bandwidth, you should consider scaling out or scaling up to a larger cache size with higher network bandwidth.
+ - If the Redis server exceeds the available bandwidth, client requests could time out because the server can't push data to the client fast enough. To see how much server-side bandwidth is being used, check "Cache Read" and "Cache Write" metrics. If your Redis server is exceeding available network bandwidth, you should consider scaling out or scaling up to a larger cache size with higher network bandwidth.
- For Enterprise tier caches using the _Enterprise cluster policy_, scaling out doesn't increase network bandwidth. - For more information on network available bandwidth by cache size, see [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
+- **Internal Defender Scans**
+ - On _C0_ and _C1_ Standard caches, while internal Defender scanning is running on the VMs, you might see short spikes in server load not caused by an increase in cache requests. You see higher latency for requests while internal Defender scans are run on these tiers a couple of times a day. Caches on the _C0_ and _C1_ tiers only have a single core to multitask, dividing the work of serving internal Defender scanning and Redis requests. You can reduce the effect by scaling to a higher tier offering with multiple CPU cores, such as _C2_.
+ - The increased cache size on the higher tiers helps address any latency concerns. Also, at the _C2_ level, you have support for as many as 2,000 client connections.
For more information on determining the cache pricing tier to use, see [Choosing the right tier](cache-overview.md#choosing-the-right-tier) and [Azure Cache for Redis planning FAQs](./cache-planning-faq.yml).
You can scale out/in with the following restrictions:
- _Scale out_ is only supported on the **Premium**, **Enterprise**, and **Enterprise Flash** tiers. - _Scale in_ is only supported on the **Premium** tier. - On the **Premium** tier, clustering must be enabled first before scaling in or out.-- On the **Premium** tier, there is GA support for scale out up to 10 shards. Support for up to 30 shards is in preview. (For caches with two replicas, the shard limit is 20. With three replicas, shard limit is 15.)
+- On the **Premium** tier, support for scale out up to 10 shards is generally available. Support for up to 30 shards is in preview. (For caches with two replicas, the shard limit is 20. With three replicas, shard limit is 15.)
- Only the **Enterprise** and **Enterprise Flash** tiers can scale up and scale out simultaneously. ## How to scale - Basic, Standard, and Premium tiers
No, your cache name and keys are unchanged during a scaling operation.
### How does scaling work? -- When you scale a **Basic** cache to a different size, it's shut down, and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
+- When you scale a **Basic** cache to a different size, the cache is shut down, and a new cache is provisioned using the new size. During this time, the cache is unavailable and all data in the cache is lost.
- When you scale a **Basic** cache to a **Standard** cache, a replica cache is provisioned and the data is copied from the primary cache to the replica cache. The cache remains available during the scaling process. - When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a different size, one of the replicas is shut down and reprovisioned to the new size and the data transferred over, and then the other replica does a failover before it's reprovisioned, similar to the process that occurs during a failure of one of the cache nodes. - When you scale out a clustered cache, new shards are provisioned and added to the Redis server cluster. Data is then resharded across all shards.
No, your cache name and keys are unchanged during a scaling operation.
- When you scale a **Basic** cache to a new size, all data is lost and the cache is unavailable during the scaling operation. - When you scale a **Basic** cache to a **Standard** cache, the data in the cache is typically preserved.-- When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when it's scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
+- When you scale a **Standard**, **Premium**, **Enterprise**, or **Enterprise Flash** cache to a larger size, all data is typically preserved. When you scale a Standard or Premium cache to a smaller size, data can be lost if the data size exceeds the new smaller size when the cache is scaled down. If data is lost when scaling down, keys are evicted using the [allkeys-lru](https://redis.io/topics/lru-cache) eviction policy.
### Can I use all the features of Premium tier after scaling?
-No, some features can only be set when you create a cache in Premium tier, and are not available after scaling.
+No, some features can only be set when you create a cache in Premium tier, and aren't available after scaling.
-These features cannot be added after you create the Premium cache:
+These features can't be added after you create the Premium cache:
-- VNet injection
+- Virtual network injection
- Adding zone redundancy - Using multiple replicas per primary
With [active geo-replication](cache-how-to-active-geo-replication.md) configured
- You can't scale from a **Premium** cache to an **Enterprise** or **Enterprise Flash** cache. - You can't scale from a larger size down to the **C0 (250 MB)** size.
-If a scaling operation fails, the service tries to revert the operation, and the cache will revert to the original size.
+If a scaling operation fails, the service tries to revert the operation, and the cache reverts to the original size.
### How long does scaling take?
Scaling time depends on a few factors. Here are some factors that can affect how
- Amount of data: Larger amounts of data take a longer time to be replicated - High write requests: Higher number of writes mean more data replicates across nodes or shards-- High server load: Higher server load means Redis server is busy and has limited CPU cycles to complete data redistribution
+- High server load: Higher server load means the Redis server is busy and limited CPU cycles are available to complete data redistribution
Generally, when you scale a cache with no data, it takes approximately 20 minutes. For clustered caches, scaling takes approximately 20 minutes per shard with minimal data.
In the Azure portal, you can see the scaling operation in progress. When scaling
### Do I need to make any changes to my client application to use clustering? -- When clustering is enabled, only database 0 is available. If your client application uses multiple databases and it tries to read or write to a database other than 0, the following exception is thrown: `Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET >` `StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6`
+- When clustering is enabled, only database 0 is available. If your client application uses multiple databases, and it tries to read or write to a database other than zero, the following exception is thrown: `Unhandled Exception: StackExchange.Redis.RedisConnectionException: ProtocolFailure on GET >` `StackExchange.Redis.RedisCommandException: Multiple databases are not supported on this server; cannot switch to database: 6`
For more information, see [Redis Cluster Specification - Implemented subset](https://redis.io/topics/cluster-spec#implemented-subset). - If you're using [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/), you must use 1.0.481 or later. You connect to the cache using the same [endpoints, ports, and keys](cache-configure.md#properties) that you use when connecting to a cache where clustering is disabled. The only difference is that all reads and writes must be done to database 0.
- Other clients may have different requirements. See [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering)
+ Other clients might have different requirements. See [Do all Redis clients support clustering?](#do-all-redis-clients-support-clustering)
- If your application uses multiple key operations batched into a single command, all keys must be located in the same shard. To locate keys in the same shard, see [How are keys distributed in a cluster?](#how-are-keys-distributed-in-a-cluster) - If you're using Redis ASP.NET Session State provider, you must use 2.0.1 or higher. See [Can I use clustering with the Redis ASP.NET Session State and Output Caching providers?](#can-i-use-clustering-with-the-redis-aspnet-session-state-and-output-caching-providers)
The largest cache size you can have is 4.5 TB. This result is a clustered F1500
Many clients libraries support Redis clustering but not all. Check the documentation for the library you're using to verify you're using a library and version that support clustering. StackExchange.Redis is one library that does support clustering, in its newer versions. For more information on other clients, see the [Playing with the cluster](https://redis.io/topics/cluster-tutorial#playing-with-the-cluster) section of the [Redis cluster tutorial](https://redis.io/topics/cluster-tutorial).
-The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and also defines new error responses such as 'MOVED' na 'CROSSSLOTS'. When you attempt to use a client library that doesn't support clustering, with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or just break your application, if you're doing cross-slot multi-key requests.
+The Redis clustering protocol requires each client to connect to each shard directly in clustering mode, and it also defines new error responses such as `MOVED` and `CROSSSLOT`. When you attempt to use a client library that doesn't support clustering with a cluster mode cache, the result can be many [MOVED redirection exceptions](https://redis.io/topics/cluster-spec#moved-redirection), or your application can break if you're doing cross-slot multi-key requests.
> [!NOTE] > If you're using StackExchange.Redis as your client, verify that you are using the latest version of [StackExchange.Redis](https://www.nuget.org/packages/StackExchange.Redis/) 1.0.481 or later for clustering to work correctly. For more information on any issues with move exceptions, see [move exceptions](#im-getting-move-exceptions-when-using-stackexchangeredis-and-clustering-what-should-i-do).
You need to use the `-p` switch to specify the correct port to connect to. Use t
### Can I configure clustering for a previously created cache?
-Yes. First, ensure that your cache is in the Premium tier by scaling it up. Next, you can see the cluster configuration options, including an option to enable cluster. Change the cluster size after the cache is created, or after you have enabled clustering for the first time.
+Yes. First, ensure that your cache is in the Premium tier by scaling it up. Next, you can see the cluster configuration options, including an option to enable clustering. Change the cluster size after the cache is created, or after you enable clustering for the first time.
>[!IMPORTANT] >You can't undo enabling clustering. And a cache with clustering enabled and only one shard behaves _differently_ than a cache of the same size with _no_ clustering.
If you're using StackExchange.Redis and receive `MOVE` exceptions when using clu
### What is the difference between OSS Clustering and Enterprise Clustering on Enterprise tier caches?
-OSS Cluster Mode is the same as clustering on the Premium tier and follows the open source clustering specification. Enterprise Cluster Mode can be less performant, but uses Redis Enterprise clustering, which doesn't require any client changes to use. For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise).
+OSS Cluster Mode is the same as clustering on the Premium tier and follows the open source clustering specification. Enterprise Cluster Mode can be less performant, but uses Redis Enterprise clustering, which doesn't require any client changes to use. For more information, see [Clustering on Enterprise](cache-best-practices-enterprise-tiers.md#clustering-on-enterprise).
### How many shards do Enterprise tier caches use?
-Unlike Basic, Standard, and Premium tier caches, Enterprise and Enterprise Flash caches can take advantage of multiple shards on a single node. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization).
+Unlike Basic, Standard, and Premium tier caches, Enterprise and Enterprise Flash caches can take advantage of multiple shards on a single node. For more information, see [Sharding and CPU utilization](cache-best-practices-enterprise-tiers.md#sharding-and-cpu-utilization).
## Next steps
azure-functions Functions How To Use Azure Function App Settings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-functions/functions-how-to-use-azure-function-app-settings.md
Title: Configure function app settings in Azure Functions description: Learn how to configure function app settings in Azure Functions.- Previously updated : 06/15/2024++ Last updated : 07/02/2024
+ms.assetid: 81eb04f8-9a27-45bb-bf24-9ab6c30d205c
-# Manage your function app
+# Manage your function app
-In Azure Functions, a function app provides the execution context for your individual functions. Function app behaviors apply to all functions hosted by a given function app. All functions in a function app must be of the same [language](supported-languages.md).
+In Azure Functions, a function app provides the execution context for your individual functions. Function app behaviors apply to all functions hosted by a given function app. All functions in a function app must be of the same [language](supported-languages.md).
-Individual functions in a function app are deployed together and are scaled together. All functions in the same function app share resources, per instance, as the function app scales.
+Individual functions in a function app are deployed together and are scaled together. All functions in the same function app share resources, per instance, as the function app scales.
Connection strings, environment variables, and other application settings are defined separately for each function app. Any data that must be shared between function apps should be stored externally in a persisted store.
Connection strings, environment variables, and other application settings are de
[!INCLUDE [Don't mix development environments](../../includes/functions-mixed-dev-environments.md)]
-1. To begin, sign in to the [Azure portal] using your Azure account. In the search bar at the top of the portal, enter the name of your function app and select it from the list.
+To view the app settings in your function app, follow these steps:
-2. Under **Settings** in the left pane, select **Configuration**.
+1. Sign in to the [Azure portal] using your Azure account. Search for your function app and select it.
- :::image type="content" source="./media/functions-how-to-use-azure-function-app-settings/azure-function-app-main.png" alt-text="Function app overview in the Azure portal":::
+2. In the left pane of your function app, expand **Settings**, select **Environment variables**, and then select the **App settings** tab.
-You can navigate to everything you need to manage your function app from the overview page, in particular the **[Application settings](#settings)** and **[Platform features](#platform-features)**.
+ :::image type="content" source="./media/functions-how-to-use-azure-function-app-settings/azure-function-app-main.png" alt-text="Screen shot that how to select the App settings page in a function app." lightbox="./media/functions-how-to-use-azure-function-app-settings/azure-function-app-main.png":::
## <a name="settings"></a>Work with application settings
-You can create any number of application settings required by your function code. There are also predefined application settings used by Functions. To learn more, see the [App settings reference for Azure Functions](functions-app-settings.md).
+In addition to the predefined app settings used by Azure Functions, you can create any number of app settings, as required by your function code. For more information, see [App settings reference for Azure Functions](functions-app-settings.md).
+
+These settings are stored encrypted. For more information, see [App settings security](security-concepts.md#application-settings).
-These settings are stored encrypted. To learn more, see [Application settings security](security-concepts.md#application-settings).
+You can manage app settings from the [Azure portal](functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings), and by using the [Azure CLI](functions-how-to-use-azure-function-app-settings.md?tabs=azurecli#settings) and [Azure PowerShell](functions-how-to-use-azure-function-app-settings.md?tabs=powershell#settings). You can also manage app settings from [Visual Studio Code](functions-develop-vs-code.md#application-settings-in-azure) and from [Visual Studio](functions-develop-vs.md#function-app-settings).
-Application settings can be managed from the [Azure portal](functions-how-to-use-azure-function-app-settings.md?tabs=portal#settings) and by using the [Azure CLI](functions-how-to-use-azure-function-app-settings.md?tabs=azurecli#settings) and [Azure PowerShell](functions-how-to-use-azure-function-app-settings.md?tabs=powershell#settings). You can also manage application settings from [Visual Studio Code](functions-develop-vs-code.md#application-settings-in-azure) and from [Visual Studio](functions-develop-vs.md#function-app-settings).
+### [Azure portal](#tab/azure-portal)
-### [Portal](#tab/portal)
+To view your app settings, see [Get started in the Azure portal](#get-started-in-the-azure-portal).
-To find the application settings, see [Get started in the Azure portal](#get-started-in-the-azure-portal).
+The **App settings** tab maintains settings that are used by your function app:
-The **Application settings** tab maintains settings that are used by your function app. You must select **Show values** to see the values in the portal.
-To add a setting in the portal, select **New application setting** and add the new key-value pair.
+1. To see the values of the app settings, select **Show values**.
-![Function app settings in the Azure portal.](./media/functions-how-to-use-azure-function-app-settings/azure-function-app-settings-tab.png)
+1. To add a setting, select **+ Add**, and then enter the **Name** and **Value** of the new key-value pair.
+
+ :::image type="content" source="./media/functions-how-to-use-azure-function-app-settings/azure-function-app-settings-tab.png" alt-text="Screenshot that shows the App settings page in a function app." lightbox="./media/functions-how-to-use-azure-function-app-settings/azure-function-app-settings-tab.png":::
### [Azure CLI](#tab/azure-cli)
-The [`az functionapp config appsettings list`](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-list) command returns the existing application settings, as in the following example:
+The [`az functionapp config appsettings list`](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-list) command returns the existing application settings, for example:
```azurecli-interactive az functionapp config appsettings list --name <FUNCTION_APP_NAME> \
az functionapp config appsettings list --name <FUNCTION_APP_NAME> \
The [`az functionapp config appsettings set`](/cli/azure/functionapp/config/appsettings#az-functionapp-config-appsettings-set) command adds or updates an application setting. The following example creates a setting with a key named `CUSTOM_FUNCTION_APP_SETTING` and a value of `12345`: - ```azurecli-interactive az functionapp config appsettings set --name <FUNCTION_APP_NAME> \ --resource-group <RESOURCE_GROUP_NAME> \
az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
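Putting the two commands together, a minimal sketch (app and resource group names are placeholders) adds one setting and then lists the settings to confirm the change:

```azurecli-interactive
# Sketch with placeholder names: add or update a single app setting, then list settings to confirm.
az functionapp config appsettings set --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP_NAME> \
    --settings CUSTOM_FUNCTION_APP_SETTING=12345

az functionapp config appsettings list --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP_NAME> --output table
```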
### [Azure PowerShell](#tab/azure-powershell)
-The [`Get-AzFunctionAppSetting`](/powershell/module/az.functions/get-azfunctionappsetting) cmdlet returns the existing application settings, as in the following example:
+The [`Get-AzFunctionAppSetting`](/powershell/module/az.functions/get-azfunctionappsetting) cmdlet returns the existing application settings, for example:
```azurepowershell-interactive Get-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOURCE_GROUP_NAME>
Update-AzFunctionAppSetting -Name <FUNCTION_APP_NAME> -ResourceGroupName <RESOUR
[!INCLUDE [functions-environment-variables](../../includes/functions-environment-variables.md)]
-When you develop a function app locally, you must maintain local copies of these values in the local.settings.json project file. To learn more, see [Local settings file](functions-develop-local.md#local-settings-file).
+When you develop a function app locally, you must maintain local copies of these values in the *local.settings.json* project file. For more information, see [Local settings file](functions-develop-local.md#local-settings-file).
## FTPS deployment settings
-Azure Functions supports deploying project code to your function app by using FTPS. Because this deployment method requires you to [sync triggers](functions-deployment-technologies.md#trigger-syncing), this method isn't recommended. To securely transfer project files, always use FTPS and not FTP.
+Azure Functions supports deploying project code to your function app by using FTPS. Because this deployment method requires you to [sync triggers](functions-deployment-technologies.md#trigger-syncing), it isn't recommended. To securely transfer project files, always use FTPS and not FTP.
-You can get the credentials required for FTPS deployment using one of these methods:
+To get the credentials required for FTPS deployment, use one of these methods:
-### [Portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
-You can get the FTPS publishing credentials in the Azure portal by downloading the publishing profile for your function app.
+You can get the FTPS publishing credentials in the Azure portal by downloading the publishing profile for your function app.
> [!IMPORTANT]
-> The publishing profile contains important security credentials. You should always secure the downloaded file on your local computer.
+> The publishing profile contains important security credentials. Always secure the downloaded file on your local computer.
[!INCLUDE [functions-download-publish-profile](../../includes/functions-download-publish-profile.md)]
-3. In the file, locate the `publishProfile` element with the attribute `publishMethod="FTP"`. In this element, the `publishUrl`, `userName`, and `userPWD` attributes contain the target URL and credentials for FTPS publishing.
+3. In the file, locate the `publishProfile` element with the attribute `publishMethod="FTP"`. In this element, the `publishUrl`, `userName`, and `userPWD` attributes contain the target URL and credentials for FTPS publishing.
### [Azure CLI](#tab/azure-cli)
In this example, replace `<APP_NAME>` with your function app name and `<GROUP_NA
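If you'd rather script the lookup, one possible sketch (assuming the `az functionapp deployment list-publishing-profiles` command in a current Azure CLI, with placeholder names) filters the output down to the FTP entry:

```azurecli-interactive
# Sketch: return only the FTP publishing profile, which includes publishUrl, userName, and userPWD.
az functionapp deployment list-publishing-profiles --name <APP_NAME> \
    --resource-group <GROUP_NAME> \
    --query "[?publishMethod=='FTP']"
```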
## Hosting plan type
-When you create a function app, you also create a hosting plan in which the app runs. A plan can have one or more function apps. The functionality, scaling, and pricing of your functions depend on the type of plan. To learn more, see [Azure Functions hosting options](functions-scale.md).
+When you create a function app, you also create a hosting plan in which the app runs. A plan can have one or more function apps. The functionality, scaling, and pricing of your functions depend on the type of plan. For more information, see [Azure Functions hosting options](functions-scale.md).
-You can determine the type of plan being used by your function app from the Azure portal, or by using the Azure CLI or Azure PowerShell APIs.
+You can determine the type of plan being used by your function app from the Azure portal, or by using the Azure CLI or Azure PowerShell APIs.
The following values indicate the plan type:
-| Plan type | Portal | Azure CLI/PowerShell |
+| Plan type | Azure portal | Azure CLI/PowerShell |
| --- | --- | --- |
| [Consumption](consumption-plan.md) | **Consumption** | `Dynamic` |
| [Premium](functions-premium-plan.md) | **ElasticPremium** | `ElasticPremium` |
| [Dedicated (App Service)](dedicated-plan.md) | Various | Various |
-### [Portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
-To determine the type of plan used by your function app, see **App Service plan** in the **Overview** tab for the function app in the [Azure portal](https://portal.azure.com). To see the pricing tier, select the name of the **App Service Plan**, and then select **Properties** from the left pane.
+1. To determine the type of plan used by your function app, see **App Service Plan** on the **Overview** page of the function app in the [Azure portal](https://portal.azure.com).
-![View scaling plan in the portal](./media/functions-scale/function-app-overview-portal.png)
+ ![Screenshot that shows the App Service Plan link on the Overview page of a function app.](./media/functions-scale/function-app-overview-portal.png)
+
+1. To see the pricing tier, select the name of the **App Service Plan**, and then select **Settings > Properties** from the left pane.
### [Azure CLI](#tab/azure-cli)
az appservice plan list --query "[?id=='$appServicePlanId'].sku.tier" --output t
```
-In the previous example replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` with the resource group and function app names, respective.
+In the previous example, replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` with the resource group and function app names, respectively.
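Put together, the two-step lookup reads as follows in a minimal sketch (resource names are placeholders):

```azurecli-interactive
# Sketch: get the plan resource ID for the function app, then look up that plan's SKU tier.
appServicePlanId=$(az functionapp show --name <FUNCTION_APP_NAME> \
    --resource-group <RESOURCE_GROUP> --query appServicePlanId --output tsv)

az appservice plan list --query "[?id=='$appServicePlanId'].sku.tier" --output tsv
```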
### [Azure PowerShell](#tab/azure-powershell)
$ResourceGroup = '<RESOURCE_GROUP>'
$PlanID = (Get-AzFunctionApp -ResourceGroupName $ResourceGroup -Name $FunctionApp).AppServicePlan (Get-AzFunctionAppPlan -Name $PlanID -ResourceGroupName $ResourceGroup).SkuTier ```
-In the previous example replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` with the resource group and function app names, respective.
+
+In the previous example, replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` with the resource group and function app names, respectively.
In the previous example replace `<RESOURCE_GROUP>` and `<FUNCTION_APP_NAME>` wit
You can migrate a function app between a Consumption plan and a Premium plan on Windows. When migrating between plans, keep in mind the following considerations:
-+ Direct migration to a Dedicated (App Service) plan isn't currently supported.
-+ Migration isn't supported on Linux.
++ Direct migration to a Dedicated (App Service) plan isn't supported.
++ Migration isn't supported on Linux.
+ The source plan and the target plan must be in the same resource group and geographical region. For more information, see [Move an app to another App Service plan](../app-service/app-service-plan-manage.md#move-an-app-to-another-app-service-plan).
+ The specific CLI commands depend on the direction of the migration.
+ Downtime in your function executions occurs as the function app is migrated between plans.
-+ State and other app-specific content is maintained, since the same Azure Files share is used by the app both before and after migration.
++ State and other app-specific content is maintained, because the same Azure Files share is used by the app both before and after migration.

You can migrate your plan using these tools:
-### [Portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
You can use the [Azure portal](https://portal.azure.com) to switch to a different plan.
You can use Azure PowerShell commands to manually create a new plan, switch your
Choose the direction of the migration for your app on Windows.
-### [Consumption-to-Premium](#tab/to-premium/portal)
+### [Consumption-to-Premium](#tab/to-premium/azure-portal)
1. In the Azure portal, navigate to your Consumption plan app and choose **Change App Service plan** under **App Service plan**.
Choose the direction of the migration for your app on Windows.
For more information, see [Move an app to another App Service plan](../app-service/app-service-plan-manage.md#move-an-app-to-another-app-service-plan).
-### [Premium-to-Consumption](#tab/to-consumption/portal)
+### [Premium-to-Consumption](#tab/to-consumption/azure-portal)
1. In the Azure portal, navigate to your Premium plan app and choose **Change App Service plan** under **App Service plan**.
For more information, see [Move an app to another App Service plan](../app-servi
Use the following procedure to migrate from a Consumption plan to a Premium plan on Windows:
-1. Run the [az functionapp create](/cli/azure/functionapp/plan#az-functionapp-plan-create) command as follows to create a new App Service plan (Elastic Premium) in the same region and resource group as your existing function app:
+1. Run the [az functionapp create](/cli/azure/functionapp/plan#az-functionapp-plan-create) command as follows to create a new App Service plan (Elastic Premium) in the same region and resource group as your existing function app:
```azurecli-interactive az functionapp plan create --name <NEW_PREMIUM_PLAN_NAME> --resource-group <MY_RESOURCE_GROUP> --location <REGION> --sku EP1
Use the following procedure to migrate from a Consumption plan to a Premium plan
az functionapp update --name <MY_APP_NAME> --resource-group <MY_RESOURCE_GROUP> --plan <NEW_PREMIUM_PLAN> ```
-1. When you no longer need the Consumption plan originally used by the app, delete your original plan after confirming you have successfully migrated to the new one. Run the [az functionapp plan list](/cli/azure/functionapp/plan#az-functionapp-plan-list) command as follows to get a list of all Consumption plans in your resource group:
+1. When you no longer need the Consumption plan originally used by the app, delete your original plan after confirming you've successfully migrated to the new one. Run the [az functionapp plan list](/cli/azure/functionapp/plan#az-functionapp-plan-list) command as follows to get a list of all Consumption plans in your resource group:
```azurecli-interactive az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='Y'].{PlanName:name,Sites:numberOfSites}" -o table
Use the following procedure to migrate from a Premium plan to a Consumption plan
```azurecli-interactive az functionapp create --resource-group <MY_RESOURCE_GROUP> --name <NEW_CONSUMPTION_APP_NAME> --consumption-plan-location <REGION> --runtime <LANGUAGE_RUNTIME> --functions-version 4 --storage-account <STORAGE_NAME> ```
-
+ 1. Run the [az functionapp show](/cli/azure/functionapp#az-functionapp-show) command as follows to get the name of the Consumption plan created with the new function app: ```azurecli-interactive az functionapp show --resource-group <MY_RESOURCE_GROUP> --name <NEW_CONSUMPTION_APP_NAME> --query "{appServicePlanId}" -o tsv ```
- The Consumption plan name is the final segment of the fully-qualified resource ID that's returned.
-
+
+ The Consumption plan name is the final segment of the fully qualified resource ID that is returned.
+ 1. Run the [az functionapp update](/cli/azure/functionapp#az-functionapp-update) command as follows to migrate the existing function app to the new Consumption plan: ```azurecli-interactive
Use the following procedure to migrate from a Premium plan to a Consumption plan
az functionapp delete --name <NEW_CONSUMPTION_APP_NAME> --resource-group <MY_RESOURCE_GROUP> ```
-1. When you no longer need the Premium plan originally used by the app, delete your original plan after confirming you have successfully migrated to the new one. Until the Premium plan is deleted, you continue to be charged for it. Run the [az functionapp plan list](/cli/azure/functionapp/plan#az-functionapp-plan-list) command as follows to get a list of all Premium plans in your resource group:
+1. When you no longer need the Premium plan originally used by the app, delete your original plan after confirming you've successfully migrated to the new one. Until the Premium plan is deleted, you continue to be charged for it. Run the [az functionapp plan list](/cli/azure/functionapp/plan#az-functionapp-plan-list) command as follows to get a list of all Premium plans in your resource group:
```azurecli-interactive az functionapp plan list --resource-group <MY_RESOURCE_GROUP> --query "[?sku.family=='EP'].{PlanName:name,Sites:numberOfSites}" -o table
Use the following procedure to migrate from a Premium plan to a Consumption plan
```azurecli-interactive az functionapp plan delete --name <PREMIUM_PLAN> --resource-group <MY_RESOURCE_GROUP> ```+ ### [Consumption-to-Premium](#tab/to-premium/azure-powershell) Use the following procedure to migrate from a Consumption plan to a Premium plan on Windows:
-1. Run the [New-AzFunctionAppPlan](/powershell/module/az.functions/new-azfunctionappplan) command as follows to create a new App Service plan (Elastic Premium) in the same region and resource group as your existing function app:
+1. Run the [New-AzFunctionAppPlan](/powershell/module/az.functions/new-azfunctionappplan) command as follows to create a new App Service plan (Elastic Premium) in the same region and resource group as your existing function app:
```powershell-interactive New-AzFunctionAppPlan -Name <NEW_PREMIUM_PLAN_NAME> -ResourceGroupName <MY_RESOURCE_GROUP> -Location <REGION> -Sku EP1 -WorkerType Windows
Use the following procedure to migrate from a Consumption plan to a Premium plan
Use the following procedure to migrate from a Premium plan to a Consumption plan on Windows:
-1. Run the [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) command as follows to create a new function app (Consumption) in the same region and resource group as your existing function app. This command also creates a new Consumption plan in which the function app runs:
+1. Run the [New-AzFunctionApp](/powershell/module/az.functions/new-azfunctionapp) command as follows to create a new function app (Consumption) in the same region and resource group as your existing function app. This command also creates a new Consumption plan in which the function app runs:
```powershell-interactive New-AzFunctionApp -Name <NEW_CONSUMPTION_APP_NAME> -StorageAccountName <STORAGE_NAME> -Location <REGION> -ResourceGroupName <MY_RESOURCE_GROUP> -Runtime <LANGUAGE_RUNTIME> -RuntimeVersion <LANGUAGE_VERSION> -FunctionsVersion 4 -OSType Windows ```+ 1. Run the [Get-AzFunctionApp](/powershell/module/az.functions/get-azfunctionapp) command as follows to get the name of the Consumption plan created with the new function app: ```powershell-interactive Get-AzFunctionApp -ResourceGroupName <MY_RESOURCE_GROUP> -Name <NEW_CONSUMPTION_APP_NAME> | Select-Object -Property AppServicePlan | Format-List ```
-
+ 1. Run the [Update-AzFunctionApp](/powershell/module/az.functions/update-azfunctionapp) command as follows to migrate the existing function app to the new Consumption plan: ```powershell-interactive
Use the following procedure to migrate from a Premium plan to a Consumption plan
## Get your function access keys
-HTTP triggered functions can generally be called by using a URL in the format: `https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>`. When the authorization to your function is set a value other than `anonymous`, you must also provide an access key in your request. The access key can either be provided in the URL using the `?code=` query string or in the request header. To learn more, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). There are several ways to get your access keys.
+HTTP triggered functions can generally be called by using a URL in the format: `https://<APP_NAME>.azurewebsites.net/api/<FUNCTION_NAME>`. When the authorization level of your function is set to a value other than `anonymous`, you must also provide an access key in your request. The access key can either be provided in the URL using the `?code=` query string or in the request header. For more information, see [Function access keys](functions-bindings-http-webhook-trigger.md#authorization-keys). There are several ways to get your access keys.
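As an illustration, a hypothetical HTTP-triggered function named `HttpExample` in an app named `contoso-func` could be called with the key in the `?code=` query string or in the `x-functions-key` header (the names and key are placeholders):

```bash
# Key passed in the query string (app name, function name, and key are placeholders).
curl "https://contoso-func.azurewebsites.net/api/HttpExample?code=<ACCESS_KEY>"

# Equivalent call with the key passed in the x-functions-key request header.
curl -H "x-functions-key: <ACCESS_KEY>" "https://contoso-func.azurewebsites.net/api/HttpExample"
```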
-### [Portal](#tab/portal)
+### [Azure portal](#tab/azure-portal)
1. Sign in to the Azure portal, then search for and select **Function App**.
1. Select the function you want to verify.
-1. In the left navigation under **Functions**, select **App keys**.
+1. In the left pane, expand **Functions**, and then select **App keys**.
- This returns the host keys, which can be used to access any function in the app. It also returns the system key, which gives anyone administrator-level access to the all function app APIs.
+ The **App keys** page appears. This page displays the host keys, which can be used to access any function in the app. It also displays the system key, which grants administrator-level access to all function app APIs.
-You can also practice least privilege by using the key just for the specific function key by selecting **Function keys** under **Developer** in your HTTP triggered function.
+You can also practice least privilege by using the key for a specific function. To do so, select **Function keys** under **Developer** in your HTTP-triggered function.
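From the command line, a sketch using `az functionapp function keys list` (assuming that command is available in your Azure CLI version; names are placeholders) returns the keys scoped to a single function:

```azurecli-interactive
# Sketch: list the keys for one function rather than the host keys for the whole app.
az functionapp function keys list --name <APP_NAME> \
    --resource-group <RESOURCE_GROUP_NAME> \
    --function-name <FUNCTION_NAME>
```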
### [Azure CLI](#tab/azure-cli)
In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your
### [Azure PowerShell](#tab/azure-powershell)
-Run the following script, the output of which is the [default (host) key](functions-bindings-http-webhook-trigger.md#authorization-scopes-function-level) that can be used to access any HTTP triggered function in the function app.
+Run the following script, the output of which is the [default (host) key](functions-bindings-http-webhook-trigger.md#authorization-scopes-function-level) that can be used to access any HTTP triggered function in the function app.
```powershell-interactive $subName = '<SUBSCRIPTION_ID>'
$path = "/subscriptions/$subName/resourceGroups/$rGroup/providers/Microsoft.Web/
((Invoke-AzRestMethod -Path $path -Method POST).Content | ConvertFrom-JSON).functionKeys.default ```
-In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your subscription and your function app name, respective.
+In this script, replace `<SUBSCRIPTION_ID>` and `<APP_NAME>` with the ID of your subscription and your function app name, respectively.
## Development limitations in the Azure portal
-You must consider these limitations when developing your functions in the [Azure portal](https://portal.azure.com):
+Consider these limitations when you develop your functions in the [Azure portal](https://portal.azure.com):
-+ In-portal editing is currently only supported for functions that were created or last modified in the portal.
-+ In-portal editing is only supported for JavaScript, PowerShell, Python, and C# Script functions.
-+ In-portal editing isn't currently supported in the preview release of the [Flex Consumption plan](flex-consumption-plan.md#considerations).
-+ When you deploy code to a function app from outside the portal, you can no longer edit any of the code for that function app in the portal. In this case, just continue using [local development](functions-develop-local.md).
++ In-portal editing is supported only for functions that were created or last modified in the Azure portal.
++ In-portal editing is supported only for JavaScript, PowerShell, Python, and C# Script functions.
++ In-portal editing isn't supported in the preview release of the [Flex Consumption plan](flex-consumption-plan.md#considerations).
++ When you deploy code to a function app from outside the Azure portal, you can no longer edit any of the code for that function app in the portal. In this case, just continue using [local development](functions-develop-local.md).
+ For compiled C# functions and Java functions, you can create the function app and related resources in the portal. However, you must create the functions code project locally and then publish it to Azure.
-When possible, you should develop your functions locally and publish your code project to a function app in Azure. For more information, see [Code and test Azure Functions locally](functions-develop-local.md).
+When possible, develop your functions locally and publish your code project to a function app in Azure. For more information, see [Code and test Azure Functions locally](functions-develop-local.md).
## Manually install extensions
-C# class library functions can include the NuGet packages for [binding extensions](functions-bindings-register.md) directly in the class library project. For other non-.NET languages and C# script, you should [use extension bundles](functions-bindings-register.md#extension-bundles). If you must manually install extensions, you can do so by [using Azure Functions Core Tools](./functions-core-tools-reference.md#func-extensions-install) locally. If you can't use extension bundles and are only able to work in the portal, you need to use [Advanced Tools (Kudu)](#kudu) to manually create the extensions.csproj file directly in the site. Make sure to first remove the `extensionBundle` element from the host.json file.
+C# class library functions can include the NuGet packages for [binding extensions](functions-bindings-register.md) directly in the class library project. For other non-.NET languages and C# script, you should [use extension bundles](functions-bindings-register.md#extension-bundles). If you must manually install extensions, you can do so by [using Azure Functions Core Tools](./functions-core-tools-reference.md#func-extensions-install) locally. If you can't use extension bundles and are only able to work in the portal, you need to use [Advanced Tools (Kudu)](#kudu) to manually create the extensions.csproj file directly in the site. Make sure to first remove the `extensionBundle` element from the *host.json* file.
-This same process works for any other file you need to add to your app.
+This same process works for any other file you need to add to your app.
> [!IMPORTANT]
-> When possible, you shouldn't edit files directly in your function app in Azure. We recommend [downloading your app files locally](deployment-zip-push.md#download-your-function-app-files), using [Core Tools to install extensions](./functions-core-tools-reference.md#func-extensions-install) and other packages, validating your changes, and then [republishing your app using Core Tools](functions-run-local.md#publish) or one of the other [supported deployment methods](functions-deployment-technologies.md#deployment-methods).
+> When possible, don't edit files directly in your function app in Azure. We recommend [downloading your app files locally](deployment-zip-push.md#download-your-function-app-files), using [Core Tools to install extensions](./functions-core-tools-reference.md#func-extensions-install) and other packages, validating your changes, and then [republishing your app using Core Tools](functions-run-local.md#publish) or one of the other [supported deployment methods](functions-deployment-technologies.md#deployment-methods).
-The Functions editor built into the Azure portal lets you update your function code and configuration files directly in the portal.
+The Functions editor built into the Azure portal lets you update your function code and configuration files directly in the portal:
+
+1. Select your function app, then under **Functions**, select **Functions**.
-1. Select your function app, then under **Functions** select **Functions**.
1. Choose your function and select **Code + test** under **Developer**.
-1. Choose your file to edit and select **Save** when you're done.
-Files in the root of the app, such as function.proj or extensions.csproj need to be created and edited by using the [Advanced Tools (Kudu)](#kudu).
+1. Choose your file to edit and select **Save** when you finish.
+
+Files in the root of the app, such as *function.proj* or *extensions.csproj*, need to be created and edited by using the [Advanced Tools (Kudu)](#kudu):
-1. Select your function app, then under **Development tools** select **Advanced tools** > **Go**.
-1. If prompted, sign-in to the SCM site with your Azure credentials.
+1. Select your function app, expand **Development tools**, and then select **Advanced tools** > **Go**.
+1. If prompted, sign in to the Source Control Manager (SCM) site with your Azure credentials.
1. From the **Debug console** menu, choose **CMD**.
1. Navigate to `.\site\wwwroot`, select the plus (**+**) button at the top, and select **New file**.
-1. Name the file, such as `extensions.csproj` and press Enter.
-1. Select the edit button next to the new file, add or update code in the file, and select **Save**.
-1. For a project file like extensions.csproj, run the following command to rebuild the extensions project:
+1. Give the file a name, such as `extensions.csproj`, and then press Enter.
+1. Select the edit button next to the new file, add or update code in the file, and then select **Save**.
+1. For a project file like *extensions.csproj*, run the following command to rebuild the extensions project:
```bash dotnet build extensions.csproj
Files in the root of the app, such as function.proj or extensions.csproj need to
## Platform features
-Function apps run in, and are maintained by, the Azure App Service platform. As such, your function apps have access to most of the features of Azure's core web hosting platform. When working in the [Azure portal](https://portal.azure.com), the left pane is where you access the many features of the App Service platform that you can use in your function apps.
+Function apps run in the Azure App Service platform, which maintains them. As such, your function apps have access to most of the features of Azure's core web hosting platform. When you use the [Azure portal](https://portal.azure.com), the left pane is where you access the many features of the App Service platform that you can use in your function apps.
-The following matrix indicates portal feature support by hosting plan and operating system:
+The following matrix indicates Azure portal feature support by hosting plan and operating system:
-| Feature | Consumption plan | Premium plan | Dedicated plan |
+| Feature | Consumption plan | Premium plan | Dedicated plan |
| --- | --- | --- | --- |
| [Advanced tools (Kudu)](#kudu) | Windows: ✔ <br/>Linux: **X** | ✔ | ✔ |
| [App Service editor](#editor) | Windows: ✔ <br/>Linux: **X** | Windows: ✔ <br/>Linux: **X** | Windows: ✔ <br/>Linux: **X** |
For more information about how to work with App Service settings, see [Configure
### <a name="editor"></a>App Service editor
-![The App Service editor](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-appservice-editor.png)
+The App Service editor is an advanced in-portal editor that you can use to modify JSON configuration files and code files alike. Choosing this option launches a separate browser tab with a basic editor. This editor enables you to integrate with the Git repository, run and debug code, and modify function app settings. This editor provides an enhanced development environment for your functions compared with the built-in function editor.
-The App Service editor is an advanced in-portal editor that you can use to modify JSON configuration files and code files alike. Choosing this option launches a separate browser tab with a basic editor. This enables you to integrate with the Git repository, run and debug code, and modify function app settings. This editor provides an enhanced development environment for your functions compared with the built-in function editor.
+![Screenshot that shows the App Service editor.](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-appservice-editor.png)
-We recommend that you consider developing your functions on your local computer. When you develop locally and publish to Azure, your project files are read-only in the portal. To learn more, see [Code and test Azure Functions locally](functions-develop-local.md).
+We recommend that you consider developing your functions on your local computer. When you develop locally and publish to Azure, your project files are read-only in the Azure portal. For more information, see [Code and test Azure Functions locally](functions-develop-local.md).
### <a name="console"></a>Console
-![Function app console](./media/functions-how-to-use-azure-function-app-settings/configure-function-console.png)
+The in-portal console is an ideal developer tool when you prefer to interact with your function app from the command line. Common commands include directory and file creation and navigation, as well as executing batch files and scripts.
-The in-portal console is an ideal developer tool when you prefer to interact with your function app from the command line. Common commands include directory and file creation and navigation, as well as executing batch files and scripts.
+![Screenshot that shows the function app console.](./media/functions-how-to-use-azure-function-app-settings/configure-function-console.png)
When developing locally, we recommend using the [Azure Functions Core Tools](functions-run-local.md) and the [Azure CLI].

### <a name="kudu"></a>Advanced tools (Kudu)
-![Configure Kudu](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-kudu.png)
-
-The advanced tools for App Service (also known as Kudu) provide access to advanced administrative features of your function app. From Kudu, you manage system information, app settings, environment variables, site extensions, HTTP headers, and server variables. You can also launch **Kudu** by browsing to the SCM endpoint for your function app, like `https://<myfunctionapp>.scm.azurewebsites.net/`
+The advanced tools for App Service (also known as Kudu) provide access to advanced administrative features of your function app. From Kudu, you manage system information, app settings, environment variables, site extensions, HTTP headers, and server variables. You can also launch **Kudu** by browsing to the SCM endpoint for your function app, for example: `https://<myfunctionapp>.scm.azurewebsites.net/`.
+![Screenshot that shows the advanced tools for App Service (Kudu).](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-kudu.png)
### <a name="deployment"></a>Deployment Center
When you use a source control solution to develop and maintain your functions co
To prevent malicious code execution on the client, modern browsers block requests from web applications to resources running in a separate domain. [Cross-origin resource sharing (CORS)](https://developer.mozilla.org/docs/Web/HTTP/CORS) lets an `Access-Control-Allow-Origin` header declare which origins are allowed to call endpoints on your function app.
-#### [Portal](#tab/portal)
+#### [Azure portal](#tab/azure-portal)
-When you configure the **Allowed origins** list for your function app, the `Access-Control-Allow-Origin` header is automatically added to all responses from HTTP endpoints in your function app.
+When you configure the **Allowed origins** list for your function app, the `Access-Control-Allow-Origin` header is automatically added to all responses from HTTP endpoints in your function app.
-![Configure function app's CORS list](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-cors.png)
+![Screenshot that shows how to configure the CORS list for your function app.](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-cors.png)
-The wildcard (\*) is ignored if there's another domain entry.
+If there's another domain entry, the wildcard (\*) is ignored.
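If you prefer to script the allowed origins, a minimal Azure CLI sketch (placeholder names and origin) adds an origin and then shows the resulting CORS configuration:

```azurecli-interactive
# Sketch: add an allowed origin, then show the current CORS configuration.
az functionapp cors add --name <APP_NAME> \
    --resource-group <RESOURCE_GROUP_NAME> \
    --allowed-origins https://contoso.com

az functionapp cors show --name <APP_NAME> --resource-group <RESOURCE_GROUP_NAME>
```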
#### [Azure CLI](#tab/azure-cli)
You can't currently update CORS settings using Azure PowerShell.
### <a name="auth"></a>Authentication
-![Configure authentication for a function app](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-authentication.png)
-
-When functions use an HTTP trigger, you can require calls to first be authenticated. App Service supports Microsoft Entra authentication and sign-in with social providers, such as Facebook, Microsoft, and Twitter. For details on configuring specific authentication providers, see [Azure App Service authentication overview](../app-service/overview-authentication-authorization.md).
+When functions use an HTTP trigger, you can require calls to first be authenticated. App Service supports Microsoft Entra authentication and sign-in with social providers, such as Facebook, Microsoft, and Twitter. For information about configuring specific authentication providers, see [Azure App Service authentication overview](../app-service/overview-authentication-authorization.md).
+![Screenshot that shows how to configure authentication for a function app.](./media/functions-how-to-use-azure-function-app-settings/configure-function-app-authentication.png)
-## Next steps
+## Related content
-+ [Configure Azure App Service Settings](../app-service/configure-common.md)
++ [Configure an App Service app](../app-service/configure-common.md)
+ [Continuous deployment for Azure Functions](functions-continuous-deployment.md)

[Azure CLI]: /cli/azure/
azure-monitor Azure Monitor Agent Extension Versions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-extension-versions.md
This article describes the version details for the Azure Monitor Agent virtual m
We strongly recommended to always update to the latest version, or opt in to the [Automatic Extension Update](../../virtual-machines/automatic-extension-upgrade.md) feature.
-[//]: # "DON'T change the format (column schema, etc.) of the below table without consulting glinuxagent alias. The [Azure Monitor Linux Agent Troubleshooting Tool](https://github.com/Azure/azure-linux-extensions/blob/master/AzureMonitorAgent/ama_tst/AMA-Troubleshooting-Tool.md) parses the below table at runtime to determine the latest version of AMA; altering the format could degrade some of the functions of the tool."
+[//]: # "DON'T change the format (column schema, etc.) of the table without consulting glinuxagent alias. The [Azure Monitor Linux Agent Troubleshooting Tool](https://github.com/Azure/azure-linux-extensions/blob/master/AzureMonitorAgent/ama_tst/AMA-Troubleshooting-Tool.md) parses the table at runtime to determine the latest version of AMA; altering the format could degrade some of the functions of the tool."
## Version details

| Release Date | Release notes | Windows | Linux |
|:---|:---|:---|:---|
-| May 2024 |**Windows**<ul><li>Upgraded Fluent-bit version to 3.0.5. This resolves as security issue in fluent-bit (NVD - CVE-2024-4323 (nist.gov)</li><li>Disabled Fluent-bit logging that caused disk exhaustion issues for some customers. Example error is Fluentbit log with "[C:\projects\fluent-bit-2e87g\src\flb_scheduler.c:72 errno=0] No error" fills up the entire disk of the server.</li><li>Fixed AMA extension getting stuck in deletion state on some VMΓÇÖs that are using Arc. This is a reliability improvement.</li><li>Fixed AMA not using system proxy, this is a bug introduced in 1.26.0. This was caused by a new feature to use Arc agentΓÇÖs proxy settings. When the system proxy as set as None the proxy was broke in 1.26.</li><li>Fixed Windows Firewall Logs log file rollover issues</li></ul>**Linux**<ul><li>Comming Soon</ul></li>| 1.27.0 | |
-| April 2024 |**Windows**<ul><li>In preparation for the May 17 public preview of Firewall Logs the agent completed the addition of a profile filter for Domain, Public, and Private Logs. </li><li>AMA running on an Arc enabled server will default to using the Arc proxy setting if available.</li><li>The AMA VM extension proxy setting overrides the Arc defaults.</li><li>Bug fix in MSI installer: Symptom - If there are spaces in the fluent-bit config path, AMA was not recognizing the path properly. AMA now adds quotes to configuration path in fluent-bit.</li><li>Bug fix for Container Insights: Symptom - custom resource Id were not being honored.</li><li>Security issue fix: skip the deletion of files and directory whose path contains a redirection (via Junction point, Hard links, Mount point, OB Symlinks etc.).</li><li>Updating MetricExtension package to 2.2024.328.1744.</li></ul>**Linux**<ul><li>AMA 1.30 now available in Arc.</li><li>New distribution support Debian 12, RHEL CIS L2.</li><li>Fix for mdsd version 1.30.3 (in persistence mode) which converted positive integer float/double values (e.g. "3.0", "4.0") to type ulong which broke Azure stream analytics.</li></ul>| 1.26.0 | 1.31.1 |
-| March 2024 | **Known Issues** a change in 1.25.0 to the encoding of resource IDs in the request headers to the ingestion end point has disrupted SQL ATP. This is causing failures in alert notifications to the Microsoft Detection Center (MDC) and potentially affecting billing events. Symptom are not seeing expected alerts related to SQL security threats. 1.25.0 did not release to all data centers and it was not identified for auto update in any data center. Customers that did upgrade to 1.25.0 should role back to 1.24.0<br><br>**Windows**<ul><li>**Breaking Change from Public Preview to GA** Due to customer feedback, automatic parsing of JSON into column in your custom table in Log Analytic was added. You must take action to migrate your JSON DCR created prior to this release to prevent data loss. This is the last release of the JSON Log type in Public Preview an GA will be declared in a few weeks.</li><li>Fix AMA when resource ID contains non-ascii chars which is common when using some languages other than English. Errors would follow this pattern: … [HealthServiceCommon] [] [Error] … WinHttpAddRequestHeaders(x-ms-AzureResourceId: /subscriptions/{your subscription #} /resourceGroups/???????/providers/ … PostDataItems" failed with code 87(ERROR_INVALID_PARAMETER) </li></ul>**Linux**<ul><li>The AMA agent has been tested and thus supported on Debian 12 and RHEL9 CIS L2 distribution.</li></ul>| 1.25.0 | 1.31.0 |
-| February 2024 | **Known Issues**<ul><li>Occasional crash during startup in arm64 VMs. This is fixed in 1.30.3</li></uL>**Windows**<ul><li>Fix memory leak in Internet Information Service (IIS) log collection</li><li>Fix JSON parsing with Unicode characters for some ingestion endpoints</li><li>Allow Client installer to run on Azure Virtual Desktop (AVD) DevBox partner</li><li>Enable Transport Layer Security (TLS) 1.3 on supported Windows versions</li><li>Update MetricsExtension package to 2.2024.202.2043</li></ul>**Linux**<ul><li>Features<ul><li>Add EventTime to syslog for parity with OMS agent</li><li>Add more Common Event Format (CEF) format support</li><li>Add CPU quotas for Azure Monitor Agent (AMA)</li></ul><li>Fixes<ul><li>Handle truncation of large messages in syslog due to Transmission Control Protocol (TCP) framing issue</li><li>Set NO_PROXY for Instance Metadata Service (IMDS) endpoint in AMA Python wrapper</li><li>Fix a crash in syslog parsing</li><li>Add reasonable limits for metadata retries from IMDS</li><li>No longer reset /var/log/azure folder permissions</li></ul></ul> | 1.24.0 | 1.30.3<br>1.30.2 |
+| June 2024 |**Windows**<ul><li>Fix encoding issues with Resource Id field.</li><li>AMA: Support new ingestion endpoint for GovSG environment.</li><li>MA: Fixes a CPU uptick issue for certain Bond serialization scenarios.</li><li>Upgrade AzureSecurityPack version to 4.33.0.1.</li><li>Upgrade Metrics Extension version to 2.2024.517.533.</li><li>Upgrade Health Extension version to 2024.528.1.</li></ul>**Linux**<ul><li>Coming Soon</li></ul>| 1.28.0 | |
+| May 2024 |**Windows**<ul><li>Upgraded Fluent-bit version to 3.0.5. This fix resolves a security issue in fluent-bit (NVD - CVE-2024-4323 (nist.gov)).</li><li>Disabled Fluent-bit logging that caused disk exhaustion issues for some customers. An example error is a Fluent-bit log with "[C:\projects\fluent-bit-2e87g\src\flb_scheduler.c:72 errno=0] No error" that fills up the entire disk of the server.</li><li>Fixed AMA extension getting stuck in deletion state on some VMs that are using Arc. This fix improves reliability.</li><li>Fixed AMA not using the system proxy, a bug introduced in 1.26.0. The issue was caused by a new feature that uses the Arc agent's proxy settings. When the system proxy was set to None, the proxy was broken in 1.26.</li><li>Fixed Windows Firewall Logs log file rollover issues.</li></ul>| 1.27.0 | |
+| April 2024 |**Windows**<ul><li>In preparation for the May 17 public preview of Firewall Logs, the agent completed the addition of a profile filter for Domain, Public, and Private Logs.</li><li>AMA running on an Arc-enabled server will default to using the Arc proxy settings if available.</li><li>The AMA VM extension proxy settings override the Arc defaults.</li><li>Bug fix in MSI installer: Symptom - If there are spaces in the fluent-bit config path, AMA wasn't recognizing the path properly. AMA now adds quotes to the configuration path in fluent-bit.</li><li>Bug fix for Container Insights: Symptom - custom resource IDs weren't being honored.</li><li>Security issue fix: skip the deletion of files and directories whose path contains a redirection (via Junction point, Hard links, Mount point, OB Symlinks etc.).</li><li>Updating MetricExtension package to 2.2024.328.1744.</li></ul>**Linux**<ul><li>AMA 1.30 now available in Arc.</li><li>New distribution support: Debian 12, RHEL CIS L2.</li><li>Fix for mdsd version 1.30.3 in persistence mode, which converted positive integer float/double values (such as "3.0", "4.0") to type ulong, which broke Azure Stream Analytics.</li></ul>| 1.26.0 | 1.31.1 |
+| March 2024 | **Known issues:** A change in 1.25.0 to the encoding of resource IDs in the request headers to the ingestion endpoint has disrupted SQL ATP. This is causing failures in alert notifications to the Microsoft Detection Center (MDC) and potentially affecting billing events. The symptom is not seeing expected alerts related to SQL security threats. 1.25.0 didn't release to all data centers and it wasn't identified for auto update in any data center. Customers that did upgrade to 1.25.0 should roll back to 1.24.0.<br><br>**Windows**<ul><li>**Breaking Change from Public Preview to GA** Due to customer feedback, automatic parsing of JSON into columns in your custom table in Log Analytics was added. You must take action to migrate your JSON DCR created before this release to prevent data loss. This fix is the last before the release of the JSON Log type in Public Preview.</li><li>Fix AMA when the resource ID contains non-ASCII characters, which is common when using some languages other than English. Errors would follow this pattern: … [HealthServiceCommon] [] [Error] … WinHttpAddRequestHeaders(x-ms-AzureResourceId: /subscriptions/{your subscription #} /resourceGroups/???????/providers/ … PostDataItems" failed with code 87(ERROR_INVALID_PARAMETER) </li></ul>**Linux**<ul><li>The AMA agent now supports Debian 12 and RHEL9 CIS L2 distribution.</li></ul>| 1.25.0 | 1.31.0 |
+| February 2024 | **Known Issues**<ul><li>Occasional crash during startup in arm64 VMs. The fix is in 1.30.3</li></uL>**Windows**<ul><li>Fix memory leak in Internet Information Service (IIS) log collection</li><li>Fix JSON parsing with Unicode characters for some ingestion endpoints</li><li>Allow Client installer to run on Azure Virtual Desktop (AVD) DevBox partner</li><li>Enable Transport Layer Security (TLS) 1.3 on supported Windows versions</li><li>Update MetricsExtension package to 2.2024.202.2043</li></ul>**Linux**<ul><li>Features<ul><li>Add EventTime to syslog for parity with OMS agent</li><li>Add more Common Event Format (CEF) format support</li><li>Add CPU quotas for Azure Monitor Agent (AMA)</li></ul><li>Fixes<ul><li>Handle truncation of large messages in syslog due to Transmission Control Protocol (TCP) framing issue</li><li>Set NO_PROXY for Instance Metadata Service (IMDS) endpoint in AMA Python wrapper</li><li>Fix a crash in syslog parsing</li><li>Add reasonable limits for metadata retries from IMDS</li><li>No longer reset /var/log/azure folder permissions</li></ul></ul> | 1.24.0 | 1.30.3<br>1.30.2 |
| January 2024 |**Known Issues**<ul><li>1.29.5 doesn't install on Arc-enabled servers because the agent extension code size is beyond the deployment limit set by Arc. **This issue was fixed in 1.29.6**</li></ul>**Windows**<ul><li>Added support for Transport Layer Security (TLS) 1.3</li><li>Reverted a change to enable multiple IIS subscriptions to use same filter. Feature is redeployed once memory leak is fixed</li><li>Improved Event Trace for Windows (ETW) event throughput rate</li></ul>**Linux**<ul><li>Fix error messages logged, intended for mdsd.err, that instead went to mdsd.warn in 1.29.4 only. Likely error messages: "Exception while uploading to Gig-LA: ...", "Exception while uploading to ODS: ...", "Failed to upload to ODS: ..."</li><li>Reduced noise generated by AMAs' use of semanage when SELinux is enabled</li><li>Handle time parsing in syslog to handle Daylight Savings Time (DST) and leap day</li></ul> | 1.23.0 | 1.29.5, 1.29.6 | | December 2023 |**Known Issues**<ul><li>1.29.4 doesn't install on Arc-enabled servers because the agent extension code size is beyond the deployment limit set by Arc. Fix is coming in 1.29.6</li><li>Multiple IIS subscriptions cause a memory leak. feature reverted in 1.23.0</ul>**Windows** <ul><li>Prevent CPU spikes by not using bookmark when resetting an Event Log subscription</li><li>Added missing Fluent Bit executable to AMA client setup for Custom Log support</li><li>Updated to latest AzureCredentialsManagementService and DsmsCredentialsManagement package</li><li>Update ME to v2.2023.1027.1417</li></ul>**Linux**<ul><li>Support for TLS v1.3</li><li>Support for nopri in Syslog</li><li>Ability to set disk quota from Data Collection Rule (DCR) Agent Settings</li><li>Add ARM64 Ubuntu 22 support</li><li>**Fixes**<ul><li>SysLog</li><ul><li>Parse syslog Palo Alto CEF with multiple space characters following the hostname</li><li>Fix an issue with incorrectly parsing messages containing two '\n' chars in a row</li><li>Improved support for non-RFC compliant devices</li><li>Support Infoblox device messages containing both hostname and IP headers</li></ul><li>Fix AMA crash in Read Hat Enterprise Linux (RHEL) 7.2</li><li>Remove dependency on "which" command</li><li>Fix port conflicts due to AMA using 13000 </li><li>Reliability and Performance improvements</li></ul></li></ul>| 1.22.0 | 1.29.4| | October 2023| **Windows** <ul><li>Minimize CPU spikes when resetting an Event Log subscription</li><li>Enable multiple IIS subscriptions to use same filter</li><li>Clean up files and folders for inactive tenants in multitenant mode</li><li>AMA installer doesn't install unnecessary certs</li><li>AMA emits Telemetry table locally</li><li>Update Metric Extension to v2.2023.721.1630</li><li>Update AzureSecurityPack to v4.29.0.4</li><li>Update AzureWatson to v1.0.99</li></ul>**Linux**<ul><li> Add support for Process metrics counters for Log Analytics upload and Azure Monitor Metrics</li><li>Use rsyslog omfwd TCP for improved syslog reliability</li><li>Support Palo Alto CEF logs where hostname is followed by two spaces</li><li>Bug and reliability improvements</li></ul> |1.21.0|1.28.11|
azure-monitor Data Collection Text Log https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/data-collection-text-log.md
Many applications log information to text or JSON files instead of standard logging services such as Windows Event log or Syslog. This article explains how to collect log data from text and JSON files on monitored machines using [Azure Monitor Agent](azure-monitor-agent-overview.md) by creating a [data collection rule (DCR)](../essentials/data-collection-rule-overview.md). > [!Note]
-> The JSON ingestion is in Preview at this time.
+> The agent-based JSON custom file ingestion is in preview at this time. The UI experience in the portal isn't complete yet. For best results, follow the directions on the Resource Manager template tab.
## Prerequisites To complete this procedure, you need:
To complete this procedure, you need:
- Do create a new log file every day so that you can remove old files easily.
- Do clean up all log files in the monitored directory. Tracking many log files can drive up agent CPU and Memory usage. Wait for at least 2 days to allow ample time for all logs to be processed.
- Do Not overwrite an existing file with new records. You should only append new records to the end of the file. Overwriting will cause data loss.
- - Do Not rename a file to a new name and then open a new file with the same name. This could cause data loss.
- Do Not rename or copy large log files that match the file scan pattern into the monitored directory. If you must, don't exceed 50 MB per minute.
- Do Not rename a file that matches the file scan pattern to a new name that also matches the file scan pattern. This causes duplicate data to be ingested.
azure-monitor Alerts Processing Rules https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alerts-processing-rules.md
> [!NOTE] > Alert processing rules were previously known as 'action rules'. For backward compatibility, the Azure resource type of these rules is still **Microsoft.AlertsManagement/actionRules** .
-Alert processing rules allow you to apply processing on fired alerts. Alert processing rules are different from alert rules. Alert rules generate new alerts, while alert processing rules modify the fired alerts as they're being fired.
+Alert processing rules allow you to apply processing on fired alerts. Alert processing rules are different from alert rules. Alert rules generate new alerts that notify you when something happens, while alert processing rules modify the fired alerts as they're being fired to change the usual alert behavior.
You can use alert processing rules to add [action groups](./action-groups.md) to, or remove (suppress) action groups from, your fired alerts. You can apply alert processing rules to different resource scopes, from a single resource to an entire subscription, as long as the resources are within the same subscription as the alert processing rule. You can also use them to apply various filters or have the rule work on a predefined schedule.
azure-monitor Availability Azure Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/availability-azure-functions.md
Start by reviewing the graph on the **Availability** tab of your Application Ins
> [!NOTE] > Tests created with `TrackAvailability()` will appear with **CUSTOM** next to the test name.
+> Similar to standard web tests, we recommend a minimum of five test locations for TrackAvailability() to ensure you can distinguish problems in your website from network issues.
:::image type="content" source="media/availability-azure-functions/availability-custom.png" alt-text="Screenshot that shows the Availability tab with successful results." lightbox="media/availability-azure-functions/availability-custom.png":::
azure-monitor Opentelemetry Enable https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/app/opentelemetry-enable.md
Download the [applicationinsights-agent-3.5.3.jar](https://github.com/microsoft/
For Spring Boot native applications: * [Import the OpenTelemetry Bills of Materials (BOM)](https://opentelemetry.io/docs/zero-code/java/spring-boot-starter/getting-started/).
-* Add the [Spring Cloud Azure Starter Monitor](https://mvnrepository.com/artifact/com.azure.spring/cloud-starter-azure-monitor) dependency.
+* Add the [Spring Cloud Azure Starter Monitor](https://central.sonatype.com/artifact/com.azure.spring/spring-cloud-azure-starter-monitor) dependency.
* Follow [these instructions](/azure//developer/java/spring-framework/developer-guide-overview#configuring-spring-boot-3) for the Azure SDK JAR (Java Archive) files. For Quarkus native applications:
Azure Monitor OpenTelemetry sample applications are available for all supported
[!INCLUDE [azure-monitor-app-insights-opentelemetry-faqs](../includes/azure-monitor-app-insights-opentelemetry-faqs.md)]
azure-monitor Migrate To Azure Storage Lifecycle Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/migrate-to-azure-storage-lifecycle-policy.md
For logs sent to a Log Analytics workspace, retention is set for each table on t
> [!IMPORTANT]
> **Deprecation Timeline.**
> - March 31, 2023 – The Diagnostic Settings Storage Retention feature will no longer be available to configure new retention rules for log data. This includes using the portal, CLI, PowerShell, and ARM and Bicep templates. If you have configured retention settings, you'll still be able to see and change them in the portal.
> - March 31, 2024 – You will no longer be able to use the API (CLI, PowerShell, or templates) or the Azure portal to configure retention settings unless you're changing them to *0*. Existing retention rules will still be respected.
> - September 30, 2025 – All retention functionality for the Diagnostic Settings Storage Retention feature will be disabled across all environments.
azure-monitor Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/whats-new.md
This article lists significant changes to Azure Monitor documentation.
## [2024](#tab/2024)
+## June 2024
+
+|Subservice | Article | Description |
+||||
+|Agents|[Azure Monitor Agent Migration Helper workbook](agents/azure-monitor-agent-migration-helper-workbook.md)|Guidance for using the AMA migration workbook.|
+|Agents|[Migrate to Azure Monitor Agent from Log Analytics agent](agents/azure-monitor-agent-migration.md)|Refreshed guidance for migrating to Azure Monitor Agent.|
+|Alerts|[Action groups](alerts/action-groups.md)|Added list of supported roles to which action groups can send emails.|
+|Alerts|[Action groups](alerts/action-groups.md)|Updated PowerShell script for action groups using secure webhook.|
+|Alerts|[Create or edit a log search alert rule](alerts/alerts-create-log-alert-rule.md)|Added limitation of log search alert rules to indicate that log search alert rules don't support linked storage.|
+|Alerts|[Common alert schema](alerts/alerts-common-schema.md)|A link to Azure Monitor Investigator was added to the alerts common schema.|
+|App|[Live metrics: Monitor and diagnose with 1-second latency](app/live-stream.md)|Update Distro Feature Matrix|
+|Application-Insights|[Migrate to workspace-based Application Insights resources](app/convert-classic-resource.md)|Migration guidance for Classic to Workspace-based resources has been updated. Classic Application Insights resources are fully retired and Continuous Export is disabled.|
+|Application-Insights|[OpenTelemetry on Azure](app/opentelemetry.md)|Our OpenTelemetry on Azure offerings are fully documented here, as well as a link to our OpenTelemetry roadmap.|
+|Application-Insights|[Application Insights availability tests](app/availability-overview.md)|Availability Test TLS support is now fully documented.|
+|Application-Insights|[Enable Azure Monitor OpenTelemetry for .NET, Node.js, Python, and Java applications](app/opentelemetry-enable.md)|A tab for Azure Monitor Application Insights OpenTelemetry support of Java Native images is available.|
+|Application-Insights|[Live metrics: Monitor and diagnose with 1-second latency](app/live-stream.md)|We've updated our Live Metrics documentation so that it links out to both OpenTelemetry and the Classic API code.|
+|Application-Insights|[Configuration options: Azure Monitor Application Insights for Java](app/java-standalone-config.md)|For Java OpenTelemetry, we've documented how to locally disable ingestion sampling. (preview feature)|
+|Containers|[Enable private link with Container insights](containers/container-insights-private-link.md)|Added guidance for CLI.|
+|Containers|[Customize scraping of Prometheus metrics in Azure Monitor managed service for Prometheus](containers/prometheus-metrics-scrape-configuration.md)|Updated and refreshed.|
+|Containers|[Use Prometheus exporters for common workloads with Azure Managed Prometheus](containers/prometheus-exporters.md)|New article listing supported exporters.|
+|Essentials|[Send Prometheus metrics from virtual machines, scale sets, or Kubernetes clusters to an Azure Monitor workspace](essentials/prometheus-remote-write-virtual-machines.md)|Configure remote write for self-managed Prometheus on a Kubernetes cluster|
+|General|[Create a metric alert with dynamic thresholds](alerts/alerts-dynamic-thresholds.md)|Added possible values for alert User Response field.|
+|Logs|[Tutorial: Send data to Azure Monitor using Logs ingestion API (Resource Manager templates)](logs/tutorial-logs-ingestion-api.md)|Updated to use DCR endpoint instead of DCE.|
+|Logs|[Create and manage a dedicated cluster in Azure Monitor Logs](logs/logs-dedicated-clusters.md)|Added new process for configuring dedicated clusters in Azure portal.|
+|Logs|[Set a table's log data plan to Basic or Analytics](logs/basic-logs-configure.md)|The Basic Logs table plan now includes 30 days of interactive retention.|
+|Logs|[Aggregate data in a Log Analytics workspace by using summary rules (Preview)](logs/summary-rules.md)|Updated guidance for aggregating data by using summary rules.|
+|Visualizations|[Link actions](visualize/workbooks-link-actions.md)|Added clarification that the user must have permissions to all resources referenced in a workbook as well as to the workbook itself.<p>Updated process and screenshots for custom views in workbook link actions.|
++ ## May 2024 |Subservice | Article | Description | |||| |Agents|[Migrate to Azure Monitor Agent from Log Analytics agent](agents/azure-monitor-agent-migration.md)|Updated support policy for the legacy Log Analytics agent, which will be retired on August 31, 2024.|
-|Alerts|[Create a metric alert with dynamic thresholds](alerts/alerts-dynamic-thresholds.md)|Clarify lookback period|
-|Alerts|[Action groups](alerts/action-groups.md)|Updated the desciption of action groups to clarify that you can use automatic workflows for any scenario, not only to let users know that alert has been raised.|
+|Alerts|[Create a metric alert with dynamic thresholds](alerts/alerts-dynamic-thresholds.md)|Clarify look-back period|
+|Alerts|[Action groups](alerts/action-groups.md)|Updated the description of action groups to clarify that you can use automatic workflows for any scenario, not only to let users know that alert has been raised.|
|Alerts|[Supported resources for Azure Monitor metric alerts](alerts/alerts-metric-near-real-time.md)|Removed references to the deprecated Microsoft.Web/containerApps namespace, and replaced with Microsoft.app/containerApps namespace.| |Alerts|[Action groups](alerts/action-groups.md)|Updated action group ARM role group notification functionality.| |Alerts|[Create or edit a log search alert rule](alerts/alerts-create-log-alert-rule.md)|Updated article to indicate that log search alert rule queries support 'ago()' with timespan literals only.|
This article lists significant changes to Azure Monitor documentation.
|Profiler|[Troubleshoot Application Insights Profiler](profiler/profiler-troubleshooting.md)|Update Troubleshooting guide with prerequisite for latest ASP.NET Core runtime and explanation for limit on active profiling sessions.| - ## April 2024 |Subservice | Article | Description |
azure-netapp-files Azacsnap Cmd Ref Backup https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azacsnap-cmd-ref-backup.md
Previously updated : 05/15/2024 Last updated : 07/02/2024
The `-c backup` command takes the following arguments:
azacsnap -c backup --volume data --prefix hana_TEST --retention 9 --trim ```
+- `[--flush]` an option to request the operating system kernel to flush I/O buffers for volumes after the database is put into "*backup mode*". In prior versions, the "mountpoint" values indicated which volumes to flush; with AzAcSnap 10, the `--flush` option handles this instead. Therefore, this key/value pair ("mountpoint") can be removed from the configuration file.
+  - On Windows, volumes that are labeled "Windows" or "Recovery" and are formatted with NTFS aren't flushed. You can also add "noflush" to a volume's label to prevent it from being flushed.
+  - On Linux, all I/O is flushed by using the Linux `sync` command.
+
+ Running the following example on the same host running the database will:
+ 1. Put the database into "*backup mode*".
+ 2. Request an operating system kernel flush of I/O buffers for local volumes (see operating system specific details).
+ 3. Take a storage snapshot.
+ 4. Release the database from "*backup mode*".
+
+ ```bash
+ azacsnap -c backup --volume data --prefix hana_TEST --retention 9 --trim --flush
+ ```
+  - `[--ssl=]` an optional parameter that defines the encryption method used to communicate with SAP HANA, either `openssl` or `commoncrypto`. If defined, then the `azacsnap -c backup` command expects to find two files in the same directory; these files must be named after
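+
+    For example, a minimal sketch that combines `--ssl` with the backup command shown earlier (the volume, prefix, and retention values are illustrative):
+
+    ```bash
+    # Take a backup over an OpenSSL-encrypted connection to SAP HANA.
+    azacsnap -c backup --volume data --prefix hana_TEST --retention 9 --trim --ssl=openssl
+    ```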
azure-netapp-files Azure Netapp Files Cost Model https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/azure-netapp-files-cost-model.md
For cost model specific to cross-region replication, see [Cost model for cross-r
Azure NetApp Files is billed on provisioned storage capacity, which is allocated by creating capacity pools. Capacity pools are billed monthly based on a set cost per allocated GiB per hour. Capacity pool allocation is measured hourly.
-Capacity pools must be at least 2 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB. Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
+Capacity pools must be at least 1 TiB and can be increased or decreased in 1-TiB intervals. Capacity pools contain volumes that range in size from a minimum of 100 GiB to a maximum of 100 TiB. Volumes are assigned quotas that are subtracted from the capacity pool's provisioned size. For an active volume, capacity consumption against the quota is based on logical (effective) capacity, being active filesystem data or snapshot data. See [How Azure NetApp Files snapshots work](snapshots-introduction.md) for details.
### Pricing examples
azure-netapp-files Backup Configure Manual https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-netapp-files/backup-configure-manual.md
The following list summarizes manual backup behaviors:
* Unless you specify an existing snapshot to use for a backup, creating a manual backup automatically generates a snapshot on the volume. The snapshot is then transferred to Azure storage. The snapshot created on the volume will be retained until the next manual backup is created. During the subsequent manual backup operation, older snapshots are cleaned up. You can't delete the snapshot generated for the latest manual backup.
+>[!NOTE]
+>The option to disable backups is no longer available beginning with the 2023.09 API version. If your workflows require the disable function, you can still use an API version earlier than 2023.09 or the Azure CLI.
+ [!INCLUDE [Backup registration heading](includes/backup-registration.md)] + ## Requirements * Azure NetApp Files requires you to assign a backup vault before allowing backup creation on a volume. To configure a backup vault, see [Manage backup vaults](backup-vault-manage.md) for more information.
azure-portal Azure Portal Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/azure-portal-overview.md
Title: What is the Azure portal? description: The Azure portal is a graphical user interface that you can use to manage your Azure services. Learn how to navigate and find resources in the Azure portal.
-keywords: portal
Previously updated : 04/10/2024 Last updated : 07/02/2024
The Azure portal is a web-based, unified console that lets you create and manage
The Azure portal is designed for resiliency and continuous availability. It has a presence in every Azure datacenter. This configuration makes the Azure portal resilient to individual datacenter failures and helps avoid network slowdowns by being close to users. The Azure portal updates continuously, and it requires no downtime for maintenance activities. You can access the Azure portal with [any supported browser](azure-portal-supported-browsers-devices.md).
-In this topic, you learn about the different parts of the Azure portal.
+In this article, you learn about the different parts of the Azure portal.
-## Azure Home
+## Home
-By default, the first thing you see after you [sign in to the portal](https://portal.azure.com) is **Azure Home**. This page compiles resources that help you get the most from your Azure subscription. We include links to free online courses, documentation, core services, and useful sites for staying current and managing change for your organization. For quick and easy access to work in progress, we also show a list of your most recently visited resources.
+By default, the first thing you see after you [sign in to the portal](https://portal.azure.com) is **Home**. This page compiles resources that help you get the most from your Azure subscription. Select **Create a resource** to quickly create a new resource in the current subscription, or choose a service to start working in. For quick and easy access to work in progress, we show a list of your most recently visited resources. We also include links to free online courses, documentation, and other useful resources.
## Portal elements and controls
-The portal menu and page header are global elements that are always present in the Azure portal. These persistent features are the "shell" for the user interface associated with each individual service or feature. The header provides access to global controls. The working pane for a resource or service may also have a resource menu specific to that area.
+The [portal menu](#portal-menu) and page header are global elements that are always present in the Azure portal. These persistent features are the "shell" for the user interface associated with each individual service or feature. The header provides access to global controls.
-The figure below labels the basic elements of the Azure portal, each of which are described in the following table. In this example, the current focus is a virtual machine, but the same elements generally apply, no matter what type of resource or service you're working with.
+The working pane for a resource or service may also have a [service menu](#service-menu) with commands specific to that area.
+The illustration below labels the basic elements of the Azure portal, each of which is described in the following table. In this example, the current focus is a virtual machine (VM), but the same elements generally apply, no matter what type of resource or service you're working with.
-|Key|Description
+|Key|Description |
|::||
-|1|**Page header**. Appears at the top of every portal page and holds global elements.|
-|2|**Global search**. Use the search bar to quickly find a specific resource, a service, or documentation.|
-|3|**Global controls**. Like all global elements, these controls persist across the portal. Global controls include Cloud Shell, Notifications, Settings, Support + Troubleshooting, and Feedback.|
-|4|**Your account**. View information about your account, switch directories, sign out, or sign in with a different account.|
-|5|**Portal menu**. This global element can help you to navigate between services. Sometimes referred to as the sidebar. (Items 10 and 11 in this list appear in this menu.)|
-|6|**Resource menu**. Many services include a resource menu to help you manage the service. You may see this element referred to as the service menu, or sometimes as the left pane. The commands you see are contextual to the resource or service that you're using.|
-|7|**Command bar**. These controls are contextual to your current focus.|
-|8|**Working pane**. Displays details about the resource that is currently in focus.|
-|9|**Breadcrumb**. You can use the breadcrumb links to move back a level in your workflow.|
-|10|**+ Create a resource**. Master control to create a new resource in the current subscription, available in the Azure portal menu. You can also find this option on the **Home** page.|
-|11|**Favorites**. Your favorites list in the Azure portal menu. To learn how to customize this list, see [Add, remove, and sort favorites](../azure-portal/azure-portal-add-remove-sort-favorites.md).|
+|1|**[Portal menu](#portal-menu)**. This global element can help you to navigate between services. Here, the portal menu is in flyout mode, so it's hidden until you select the menu icon.|
+|2|**Breadcrumb**. Use the breadcrumb links to move back a level in your workflow.|
+|3|**Page header**. Appears at the top of every portal page and holds global elements.|
+|4|**Global search**. Use the search bar in the page header to quickly find a specific resource, a service, or documentation.|
+|5|**Copilot**. Provides quick access to [Microsoft Copilot in Azure (preview)](/azure/copilot/).|
+|6|**Global controls**. These controls for common tasks persist in the page header: Cloud Shell, Notifications, Settings, Support + Troubleshooting, and Feedback.|
+|7|**Your account**. View information about your account, switch directories, sign out, or sign in with a different account.|
+|8|**Command bar**. A group of controls that are contextual to your current focus.|
+|9|**[Service menu](#service-menu)**. A menu with commands that are contextual to the service or resource that you're working with. Sometimes referred to as the resource menu.|
+|10|**Working pane**. Displays details about the resource or service that's currently in focus.|
## Portal menu
-The Azure portal menu lets you quickly get to key functionality and resource types. You can [choose a default mode for the portal menu](set-preferences.md#set-menu-behavior): flyout or docked.
+The Azure portal menu lets you quickly get to key functionality and resource types. It's available from anywhere in the Azure portal.
++
+Useful commands in the portal menu include:
+
+- **Create a resource**. An easy way to get started creating a new resource in the current subscription.
+- **Favorites**. Your list of favorite Azure services. To learn how to customize this list, see [Add, remove, and sort favorites](../azure-portal/azure-portal-add-remove-sort-favorites.md).
+
+In your portal settings, you can [choose a default mode for the portal menu](set-preferences.md#portal-menu-behavior): flyout or docked.
When the portal menu is in flyout mode, it's hidden until you need it. Select the menu icon to open or close the menu. :::image type="content" source="media/azure-portal-overview/azure-portal-overview-portal-menu-flyout.png" alt-text="Screenshot of the Azure portal menu in flyout mode.":::
-If you choose docked mode for the portal menu, it will always be visible. You can collapse the menu to provide more working space.
+If you choose docked mode for the portal menu, it's always visible. You can select the arrows to manually collapse the menu if you want more working space.
:::image type="content" source="media/azure-portal-overview/azure-portal-overview-portal-menu-expandcollapse.png" alt-text="Screenshot of the Azure portal menu in docked mode.":::
-You can [customize the favorites list](azure-portal-add-remove-sort-favorites.md) that appears in the portal menu.
+## Service menu
+
+The service menu appears when you're working with an Azure service or resource. Commands in this menu are contextual to the service or resource that you're working with. You can use the search box at the top of the service menu to quickly find commands.
+
+By default, menu items appear collapsed within menu groups. If you prefer to have all menu items expanded by default, you can set **Service menu behavior** to **Expanded** in your [portal settings](set-preferences.md#service-menu-behavior).
+
+When you're working within a service, you can select any top-level menu item to expand it and see the available commands within that menu group. Select that top-level item again to collapse that menu group.
+
+To toggle all folders in a service menu between collapsed and expanded, select the expand/collapse icon near the search box at the top of the service menu.
++
+If you use certain service menu commands frequently, you may want to save them as favorites for that service. To do so, hover over the command and then select the star icon.
++
+When you save a command as a favorite, it appears in a **Favorites** folder near the top of the service menu.
++
+Your menu group selections are preserved by resource type and across sessions. For example, if you add a favorite command while working with a VM, that command will appear in your **Favorites** if you later work with a different VM. Specific menu groups will also appear collapsed or expanded based on your previous selections.
+
+> [!NOTE]
+> We're in the process of rolling out the new service menu experience to all customers. If you don't see these options in the service menu, check back soon. We'll remove this note once all customers are seeing the new experience.
## Dashboard
-Dashboards provide a focused view of the resources in your subscription that matter most to you. We've given you a default dashboard to get you started. You can customize this dashboard to bring the resources you use frequently into a single view.
+Dashboards provide a focused view of the resources in your subscription that matter most to you. We give you a default dashboard to get you started. You can customize this dashboard to bring resources you use frequently into a single view, or to display other information.
You can create other dashboards for your own use, or publish customized dashboards and share them with other users in your organization. For more information, see [Create and share dashboards in the Azure portal](../azure-portal/azure-portal-dashboards.md).
To view all available services, select **All services** from the sidebar.
> [!TIP] > Often, the quickest way to get to a resource, service, or documentation is to use *Search* in the global header.
+For more help getting started with Azure, explore the [Azure Quickstart Center](azure-portal-quickstart-center.md).
+ ## Next steps
-* Take the [Manage services with the Azure portal training module](/training/modules/tour-azure-portal/).
-* Stay connected on the go with the [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).
+- Take the [Manage services with the Azure portal training module](/training/modules/tour-azure-portal/).
+- Stay connected on the go with the [Azure mobile app](https://azure.microsoft.com/features/azure-portal/mobile-app/).
azure-portal Set Preferences https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-portal/set-preferences.md
Title: Manage Azure portal settings and preferences description: Change Azure portal settings such as default subscription/directory, timeouts, menu mode, contrast, theme, notifications, language/region and more. Previously updated : 06/06/2024 Last updated : 07/02/2024
To mark a directory as a favorite, select its star icon. Those directories will
To switch to a different directory, find the directory that you want to work in, then select the **Switch** button in its row. ### Subscription filters
To delete a filter, select the trash can icon in that filter's row. You can't de
The **Appearance + startup views** pane has two sections. The **Appearance** section lets you choose menu behavior, your color theme, and whether to use a high-contrast theme. The **Startup views** section lets you set options for what you see when you first sign in to the Azure portal.
-### Set menu behavior
+### Portal menu behavior
-The **Menu behavior** section lets you choose how the default Azure portal menu behaves.
+The **Menu behavior** section lets you choose how the [Azure portal menu](azure-portal-overview.md#portal-menu) appears.
- **Flyout**: The menu is hidden until you need it. You can select the menu icon in the upper left hand corner to open or close the menu. - **Docked**: The menu is always visible. You can collapse the menu to provide more working space.
+### Service menu behavior
+
+The **Service menu behavior** section lets you choose how items in [service menus](azure-portal-overview.md#service-menu) are displayed.
+
+- **Collapsed**: Groups of commands in service menus will appear collapsed. You can still manually select any top-level item to display the commands within that menu group.
+- **Expanded**: Groups of commands in service menus will appear expanded. You can still manually select any top-level item to collapse that menu group.
+
+> [!NOTE]
+> We're in the process of rolling out the **Service menu behavior** settings option to all customers. If you don't see this section, check back soon. We'll remove this note after all customers have this option in their portal settings.
+ ### Choose a theme or enable high contrast The theme that you choose affects the background and font colors that appear in the Azure portal. In the **Theme** section, you can select from one of four preset color themes. Select each thumbnail to find the theme that best suits you.
The inactivity timeout setting helps to protect resources from unauthorized acce
In the drop-down menu next to **Sign me out when inactive**, choose the duration after which your Azure portal session is signed out if you're idle. - Select **Apply** to save your changes. After that, if you're inactive during the portal session, the Azure portal signs you out after the duration you set.
-If your admin has enabled an inactivity timeout policy, you can still set your own, as long as it's shorter than the directory-level setting. To do so, select **Override the directory inactivity timeout policy**, then enter a time interval for the **Override value**.
+If your admin has enabled an inactivity timeout policy, you can still choose your own timeout duration, but it must be shorter than the directory-level setting. To do so, select **Override the directory inactivity timeout policy**, then enter a time interval for the **Override value**.
:::image type="content" source="media/set-preferences/azure-portal-settings-sign-out-inactive-user.png" alt-text="Screenshot showing the directory inactivity timeout override setting.":::
To change a previously selected timeout, any Global Administrator can follow the
Notifications are system messages related to your current session. They provide information such as showing your current credit balance, confirming your last action, or letting you know when resources you created become available. When pop-up notifications are turned on, the messages briefly display in the top corner of your screen.
-To enable or disable pop-up notifications, select or clear **Enable pop-up notifications**.
+To enable or disable pop-up notifications, select or clear **Show pop-up notifications**.
To read all notifications received during your current session, select the **Notifications** icon from the global header.
To read all notifications received during your current session, select the **Not
To view notifications from previous sessions, look for events in the Activity log. For more information, see [View the Activity log](../azure-monitor/essentials/activity-log-insights.md#view-the-activity-log).
+### Enable or disable teaching bubbles
+
+Teaching bubbles may appear in the portal when new features are released. These bubbles contain information to help you understand how new features work.
+
+To enable or disable teaching bubbles in the portal, select or clear **Show teaching bubbles**.
+ ## Next steps - Learn about [keyboard shortcuts in the Azure portal](azure-portal-keyboard-shortcuts.md).
azure-resource-manager Bicep Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/bicep/bicep-functions-string.md
Title: Bicep functions - string
description: Describes the functions to use in a Bicep file to work with strings. Previously updated : 01/31/2024 Last updated : 07/02/2024 # String functions for Bicep
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
### Return value
-**True** if the item is found; otherwise, **False**.
+`True` if the item is found; otherwise, `False`.
### Examples
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
### Return value
-Returns **True** if the value is empty; otherwise, **False**.
+Returns `True` if the value is empty; otherwise, `False`.
### Examples
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
### Return value
-**True** if the last character or characters of the string match the value; otherwise, **False**.
+`True` if the last character or characters of the string match the value; otherwise, `False`.
### Examples
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
### Return value
-**True** if the first character or characters of the string match the value; otherwise, **False**.
+`True` if the first character or characters of the string match the value; otherwise, `False`.
### Examples
Namespace: [sys](bicep-functions.md#namespaces-for-functions).
| baseUri |Yes |string |The base uri string. Take care to observe the behavior regarding the handling of the trailing slash ('/'), as described following this table. | | relativeUri |Yes |string |The relative uri string to add to the base uri string. |
-* If **baseUri** ends in a trailing slash, the result is simply
- **baseUri** followed by **relativeUri**.
+* If `baseUri` ends with a trailing slash, the result is simply `baseUri` followed by `relativeUri`. If `relativeUri` also begins with a leading slash, the trailing slash and the leading slash will be combined into one.
-* If **baseUri** does not end in a trailing slash one of two things
+* If `baseUri` does not end in a trailing slash, one of two things
happens.
- * If **baseUri** has no slashes at all (aside from the "//" near
- the front) the result is simply **baseUri** followed by **relativeUri**.
+ * If `baseUri` has no slashes at all (aside from the "//" near
+    the front), the result is simply `baseUri` followed by `relativeUri`.
- * If **baseUri** has some slashes, but doesn't end with a slash,
- everything from the last slash onward is removed from **baseUri**
- and the result is **baseUri** followed by **relativeUri**.
+ * If `baseUri` has some slashes, but doesn't end with a slash,
+ everything from the last slash onward is removed from `baseUri`
+ and the result is `baseUri` followed by `relativeUri`.
Here are some examples: ``` uri('http://contoso.org/firstpath', 'myscript.sh') -> http://contoso.org/myscript.sh uri('http://contoso.org/firstpath/', 'myscript.sh') -> http://contoso.org/firstpath/myscript.sh
+uri('http://contoso.org/firstpath/', '/myscript.sh') -> http://contoso.org/firstpath/myscript.sh
uri('http://contoso.org/firstpath/azuredeploy.json', 'myscript.sh') -> http://contoso.org/firstpath/myscript.sh uri('http://contoso.org/firstpath/azuredeploy.json/', 'myscript.sh') -> http://contoso.org/firstpath/azuredeploy.json/myscript.sh ```
-For complete details, the **baseUri** and **relativeUri** parameters are
+For complete details, the `baseUri` and `relativeUri` parameters are
resolved as specified in [RFC 3986, section 5](https://tools.ietf.org/html/rfc3986#section-5).
azure-resource-manager Template Functions String https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-resource-manager/templates/template-functions-string.md
Title: Template functions - string
description: Describes the functions to use in an Azure Resource Manager template (ARM template) to work with strings. Previously updated : 01/31/2024 Last updated : 07/02/2024 # String functions for ARM templates
In Bicep, use the [contains](../bicep/bicep-functions-string.md#contains) functi
### Return value
-**True** if the item is found; otherwise, **False**.
+`True` if the item is found; otherwise, `False`.
### Examples
In Bicep, use the [empty](../bicep/bicep-functions-string.md#empty) function.
### Return value
-Returns **True** if the value is empty; otherwise, **False**.
+Returns `True` if the value is empty; otherwise, `False`.
### Examples
In Bicep, use the [endsWith](../bicep/bicep-functions-string.md#endswith) functi
### Return value
-**True** if the last character or characters of the string match the value; otherwise, **False**.
+`True` if the last character or characters of the string match the value; otherwise, `False`.
### Examples
In Bicep, use the [startsWith](../bicep/bicep-functions-string.md#startswith) fu
### Return value
-**True** if the first character or characters of the string match the value; otherwise, **False**.
+`True` if the first character or characters of the string match the value; otherwise, `False`.
### Examples
In Bicep, use the [uri](../bicep/bicep-functions-string.md#uri) function.
| baseUri |Yes |string |The base uri string. Take care to observe the behavior about the handling of the trailing slash (`/`), as described following this table. | | relativeUri |Yes |string |The relative uri string to add to the base uri string. |
-* If **baseUri** ends in a trailing slash, the result is **baseUri** followed by **relativeUri**.
+* If `baseUri` ends with a trailing slash, the result is simply `baseUri` followed by `relativeUri`. If `relativeUri` also begins with a leading slash, the trailing slash and the leading slash will be combined into one.
-* If **baseUri** doesn't end in a trailing slash one of two things
+* If `baseUri` doesn't end in a trailing slash, one of two things
happens.
- * If **baseUri** has no slashes at all (aside from the `//` near
- the front) the result is **baseUri** followed by **relativeUri**.
+ * If `baseUri` has no slashes at all (aside from the `//` near
+    the front), the result is `baseUri` followed by `relativeUri`.
- * If **baseUri** has some slashes, but doesn't end with a slash,
- everything from the last slash onward is removed from **baseUri**
- and the result is **baseUri** followed by **relativeUri**.
+ * If `baseUri` has some slashes, but doesn't end with a slash,
+ everything from the last slash onward is removed from `baseUri`
+ and the result is `baseUri` followed by `relativeUri`.
Here are some examples: ``` uri('http://contoso.org/firstpath', 'myscript.sh') -> http://contoso.org/myscript.sh uri('http://contoso.org/firstpath/', 'myscript.sh') -> http://contoso.org/firstpath/myscript.sh
+uri('http://contoso.org/firstpath/', '/myscript.sh') -> http://contoso.org/firstpath/myscript.sh
uri('http://contoso.org/firstpath/azuredeploy.json', 'myscript.sh') -> http://contoso.org/firstpath/myscript.sh uri('http://contoso.org/firstpath/azuredeploy.json/', 'myscript.sh') -> http://contoso.org/firstpath/azuredeploy.json/myscript.sh ```
-For complete details, the **baseUri** and **relativeUri** parameters are
+For complete details, the `baseUri` and `relativeUri` parameters are
resolved as specified in [RFC 3986, section 5](https://tools.ietf.org/html/rfc3986#section-5).
communication-services Call Diagnostics https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communication-services/concepts/voice-video-calling/call-diagnostics.md
You can view detailed call logs for each participant within a call. Call informa
![Screenshot of the Call Diagnostics Call Timeline tab showing you the detailed events in a timeline view for the call you selected.](media/call-diagnostics-call-timeline-2.png)
-## Copilot for Call Diagnostics
+## Copilot in Azure for Call Diagnostics
-Artificial Intelligence can help app developers across every step of the development lifecycle: designing, building, and operating. Developers with [Microsoft Copilot in Azure (preview)](../../../copilot/overview.md) can use Copilot within Call Diagnostics to understand and resolve a variety of calling issues. For example, developers can ask Copilot questions, such as:
+Artificial Intelligence can help app developers across every step of the development lifecycle: designing, building, and operating. Developers with [Microsoft Copilot in Azure (preview)](../../../copilot/overview.md) can use Copilot in Azure within Call Diagnostics to understand and resolve a variety of calling issues. For example, developers can ask Copilot in Azure questions, such as:
- How do I run network diagnostics in Azure Communication Services VoIP calls? - How can I optimize my calls for poor network conditions? - What are the common causes of poor media streams in Azure Communication calls? - The video on my call didn't work, how do I fix the subcode 41048?
-![Screenshot of the Call Diagnostics Call Search showing recent calls for your Azure Communications Services Resource and the response from Copilot.](media/call-diagnostics-all-calls-copilot.png)
+![Screenshot of the Call Diagnostics Call Search showing recent calls for your Azure Communications Services Resource and the response from Copilot in Azure.](media/call-diagnostics-all-calls-copilot.png)
<!-- > [!NOTE] > You can explore information icons and links within Call Diagnostics to learn functionality, definitions, and helpful tips. -->
quality](https://learn.microsoft.com/azure/communication-services/concepts/voice
- **How do I use Copilot in Azure (preview) in Call Diagnostics?**
- - Your organization needs to manage access to [Microsoft Copilot in Azure (preview)](../../../copilot/overview.md). Once your organization has access to Copilot for Azure (preview), the Call Diagnostics interface will include the option to 'Diagnose with Copilot' in the Search, Overview, and Issues tabs.
- - Leverage Copilot for Call Diagnostics to improve call quality by detailing problems faced during Azure Communication Services calls. Giving Copilot detailed information from Call Diagnostics will help it enhance analysis, identify issues, and identify fixes. Be aware that this Copilot iteration lacks programmatic access to your call details.
+ - Your organization needs to manage access to [Microsoft Copilot in Azure (preview)](../../../copilot/overview.md). Once your organization has access to Copilot in Azure (preview), the Call Diagnostics interface will include the option to 'Diagnose with Copilot' in the Search, Overview, and Issues tabs.
+ - Leverage Copilot in Azure for Call Diagnostics to improve call quality by detailing problems faced during Azure Communication Services calls. Giving Copilot in Azure detailed information from Call Diagnostics will help it enhance analysis, identify issues, and identify fixes. Be aware that Copilot in Azure currently lacks programmatic access to your call details.
<!-- 1. If Teams participants join a call, how will they display in Call Diagnostics?
container-apps Ingress Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/ingress-overview.md
Azure Container Apps provides built-in authentication and authorization features
You can configure your app to support client certificates (mTLS) for authentication and traffic encryption. For more information, see [Configure client certificates](client-certificate-authorization.md).
-For details on how to use mTLS for environment level network encryption, see the [networking overview](./networking.md#mtls).
+For details on how to use peer-to-peer environment level network encryption, see the [networking overview](./networking.md#peer-to-peer-encryption).
## Traffic splitting
container-apps Networking https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/networking.md
You can fully secure your ingress and egress networking traffic workload profile
- Configure UDR to route all traffic through [Azure Firewall](./user-defined-routes.md).
-## <a name="mtls"></a> Environment level network encryption (preview)
+## <a name="peer-to-peer-encryption"></a> Peer-to-peer encryption in the Azure Container Apps environment
-Azure Container Apps supports environment level network encryption using mutual transport layer security (mTLS). When end-to-end encryption is required, mTLS encrypts data transmitted between applications within an environment.
+Azure Container Apps supports peer-to-peer TLS encryption within the environment. Enabling this feature encrypts all network traffic within the environment with a private certificate that is valid within the Azure Container Apps environment scope. These certificates are automatically managed by Azure Container Apps.
-Applications within a Container Apps environment are automatically authenticated. However, the Container Apps runtime doesn't support authorization for access control between applications using the built-in mTLS.
+> [!NOTE]
+> By default, peer-to-peer encryption is disabled. Enabling peer-to-peer encryption for your applications may increase response latency and reduce maximum throughput in high-load scenarios.
-When your apps are communicating with a client outside of the environment, two-way authentication with mTLS is supported. To learn more, see [configure client certificates](client-certificate-authorization.md).
+The following example shows an environment with peer-to-peer encryption enabled.
-> [!NOTE]
-> Enabling mTLS for your applications may increase response latency and reduce maximum throughput in high-load scenarios.
+<sup>1</sup> Inbound TLS traffic is terminated at the ingress proxy on the edge of the environment.
+
+<sup>2</sup> Traffic to and from the ingress proxy within the environment is TLS encrypted with a private certificate and decrypted by the receiver.
+
+<sup>3</sup> Calls made from app A to app B's FQDN are first sent to the edge ingress proxy, and are TLS encrypted.
+
+<sup>4</sup> Calls made from app A to app B using app B's app name are sent directly to app B and are TLS encrypted.
+
+Applications within a Container Apps environment are automatically authenticated. However, the Container Apps runtime doesn't support authorization for access control between applications using the built-in peer-to-peer encryption.
+
+When your apps are communicating with a client outside of the environment, two-way authentication with mTLS is supported. To learn more, see [configure client certificates](client-certificate-authorization.md).
# [Azure CLI](#tab/azure-cli)
-You can enable mTLS using the following commands.
+You can enable peer-to-peer encryption using the following commands.
On create:
az containerapp env create \
--name <environment-name> \ --resource-group <resource-group> \ --location <location> \
- --enable-mtls
+ --enable-peer-to-peer-encryption
``` For an existing container app:
For an existing container app:
az containerapp env update \ --name <environment-name> \ --resource-group <resource-group> \
- --enable-mtls
+ --enable-peer-to-peer-encryption
``` # [ARM template](#tab/arm-template)
You can enable peer-to-peer encryption in the ARM template for Container Apps environments using th
{ ... "properties": {
- "peerAuthentication":{
- "mtls": {
+ "peerTrafficConfiguration":{
+ "encryption": {
"enabled": "true|false" } }
container-apps Troubleshooting https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/troubleshooting.md
Your container app's ingress settings are enforced through a set of rules that c
| Is ingress enabled? | Verify the **Enabled** checkbox is checked. | | Do you want to allow external ingress? | Verify that **Ingress Traffic** is set to **Accepting traffic from anywhere**. If your container app doesn't listen for HTTP traffic, set **Ingress Traffic** to **Limited to Container Apps Environment**. | | Does your client use HTTP or TCP to access your container app? | Verify **Ingress type** is set to the correct protocol (**HTTP** or **TCP**). |
-| Does your client support mTLS? | Verify **Client certificate mode** is set to **Require** only if your client supports mTLS. For more information, see [Environment level network encryption.](./networking.md#mtls) |
+| Does your client support mTLS? | Verify **Client certificate mode** is set to **Require** only if your client supports mTLS. For more information, see [configure client certificate authentication.](./client-certificate-authorization.md) |
| Does your client use HTTP/1 or HTTP/2? | Verify **Transport** is set to the correct HTTP version (**HTTP/1** or **HTTP/2**). | | Is the target port set correctly? | Verify **Target port** is set to the same port your container app is listening on, or the same port exposed by your container app's Dockerfile. | | Is your client IP address denied? | If **IP Security Restrictions Mode** isn't set to **Allow all traffic**, verify your client doesn't have an IP address that is denied. |
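To review your container app's current ingress configuration from the command line while working through the checks above, here's a minimal sketch. It assumes the Azure CLI `containerapp` extension is installed; the app and resource group names are placeholders.

```bash
# Show the ingress configuration (enabled state, target port, transport, and related settings)
# for a container app.
az containerapp ingress show \
  --name <container-app-name> \
  --resource-group <resource-group>
```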
container-apps Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/container-apps/whats-new.md
This article lists significant updates and new features available in Azure Conta
| [Generally Available: Session affinity](./sticky-sessions.md) | Session affinity enables you to route all requests from a single client to the same Container Apps replica. This is useful for stateful workloads that require session affinity. | | [Generally Available: Azure Key Vault references for secrets](https://azure.microsoft.com/updates/generally-available-azure-key-vault-references-for-secrets-in-azure-container-apps/) | Azure Key Vault references enable you to source a container appΓÇÖs secrets from secrets stored in Azure Key Vault. Using the container app's managed identity, the platform automatically retrieves the secret values from Azure Key Vault and injects it into your application's secrets. | | [Public preview: additional TCP ports](./ingress-overview.md#additional-tcp-ports) | Azure Container Apps now support additional TCP ports, enabling applications to accept TCP connections on multiple ports. This feature is in preview. |
-| [Public preview: environment level mTLS encryption](./networking.md#mtls) | When end-to-end encryption is required, mTLS will encrypt data transmitted between applications within an environment. |
+| [Public preview: environment level peer-to-peer encryption](./networking.md#peer-to-peer-encryption) | When end-to-end encryption is required, peer-to-peer encryption will encrypt data transmitted between applications within an environment. |
| [Retirement: ACA preview API versions 2022-06-01-preview and 2022-11-01-preview](https://azure.microsoft.com/updates/retirement-azure-container-apps-preview-api-versions-20220601preview-and-20221101preview/) | Starting on November 16, 2023, Azure Container Apps control plane API versions 2022-06-01-preview and 2022-11-01-preview will be retired. Before that date, migrate to the latest stable API version (2023-05-01) or latest preview API version (2023-04-01-preview). | | [Dapr: Stable Configuration API](https://docs.dapr.io/developing-applications/building-blocks/configuration/) | Dapr's Configuration API is now stable and supported in Azure Container Apps. Learn how to do [Dapr integration with Azure Container Apps](./dapr-overview.md)|
copilot Get Monitoring Information https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/copilot/get-monitoring-information.md
Title: Get information about Azure Monitor metrics and logs using Microsoft Copilot in Azure description: Learn about scenarios where Microsoft Copilot in Azure can provide information about Azure Monitor metrics and logs. Previously updated : 05/28/2024 Last updated : 07/03/2024
Here are a few examples of the kinds of prompts you can use to get information a
- "Show me all alerts triggered during the last 24 hours" ## Answer questions about Azure Monitor Investigator (preview)
-Use Microsoft Copilot for Azure (preview) to ask questions about your resources and to run Azure Monitor Investigator. You can ask to run an investigation on a resource to learn what happened, possible causes and how to start to troubleshoot the issue.
+
+Use Microsoft Copilot in Azure (preview) to ask questions about your resources and to run Azure Monitor Investigator. You can ask to run an investigation on a resource to learn about what happened, possible causes, and ways to troubleshoot the issue.
### Sample prompts+ Here are a few examples of the kinds of prompts you can use to get information about Azure Monitor Investigator. Modify these prompts based on your real-life scenarios, or try additional prompts to get different kinds of information. - "Why is this resource not working properly?" - "Is there any anomaly in my AKS resource?" - "Run investigation on my resource" - "What is causing the issue in this resource?"-- "Had an alert in my HCI at 8 am this morning, run an anomaly investigation for me"
+- "Had an alert in my HCI at 8 am this morning, run an anomaly investigation for me"
- "Run anomaly detection at 10/27/2023, 8:48:53 PM" - ## Next steps - Explore [capabilities](capabilities.md) of Microsoft Copilot in Azure.
cosmos-db Ai Agents https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/ai-agents.md
Title: AI agent
-description: AI agent key concepts and implementation of AI agent memory system.
+description: Learn about key concepts for agents and step through the implementation of an AI agent memory system.
Last updated 06/26/2024
# AI agent
-AI agents are designed to perform specific tasks, answer questions, and automate processes for users. These agents vary widely in complexity, ranging from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can execute complex workflows autonomously. This article provides conceptual overviews and detailed implementation samples on AI agents.
+AI agents are designed to perform specific tasks, answer questions, and automate processes for users. These agents vary widely in complexity. They range from simple chatbots, to copilots, to advanced AI assistants in the form of digital or robotic systems that can run complex workflows autonomously.
-## What are AI Agents?
+This article provides conceptual overviews and detailed implementation samples for AI agents.
-Unlike standalone large language models (LLMs) or rule-based software/hardware systems, AI agent possesses the follow common features:
+## What are AI agents?
-- Planning. AI agent can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.-- Tool usage. Advanced AI agent can utilize various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. Tool usage is often done through function calling.-- Perception. AI agent can perceive and process information from their environment, including visual, auditory, and other sensory data, making them more interactive and context aware.-- Memory. AI agent possess the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). It stores these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
+Unlike standalone large language models (LLMs) or rule-based software/hardware systems, AI agents have these common features:
+
+- **Planning**: AI agents can plan and sequence actions to achieve specific goals. The integration of LLMs has revolutionized their planning capabilities.
+- **Tool usage**: Advanced AI agents can use various tools, such as code execution, search, and computation capabilities, to perform tasks effectively. AI agents often use tools through function calling.
+- **Perception**: AI agents can perceive and process information from their environment, to make them more interactive and context aware. This information includes visual, auditory, and other sensory data.
+- **Memory**: AI agents have the ability to remember past interactions (tool usage and perception) and behaviors (tool usage and planning). They store these experiences and even perform self-reflection to inform future actions. This memory component allows for continuity and improvement in agent performance over time.
> [!NOTE]
-> The usage of the term "memory" in the context of AI agent should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
+> The usage of the term *memory* in the context of AI agents is different from the concept of computer memory (like volatile, nonvolatile, and persistent memory).
### Copilots
-Copilots are a type of AI agent designed to work alongside users rather than operate independently. Unlike fully automated agents, copilots provide suggestions and recommendations to assist users in completing tasks. For instance, when a user is writing an email, a copilot might suggest phrases, sentences, or paragraphs. The user might also ask the copilot to find relevant information in other emails or files to support the suggestion (see [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation)). The user can accept, reject, or edit the suggested passages.
+Copilots are a type of AI agent. They work alongside users rather than operating independently. Unlike fully automated agents, copilots provide suggestions and recommendations to assist users in completing tasks.
+
+For instance, when a user is writing an email, a copilot might suggest phrases, sentences, or paragraphs. The user might also ask the copilot to find relevant information in other emails or files to support the suggestion (see [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation)). The user can accept, reject, or edit the suggested passages.
### Autonomous agents Autonomous agents can operate more independently. When you set up autonomous agents to assist with email composition, you could enable them to perform the following tasks: -- Consult existing emails, chats, files, and other internal and public information that are related to the subject matter-- Perform qualitative or quantitative analysis on the collected information, and draw conclusions that are relevant to the email-- Write the complete email based on the conclusions and incorporate supporting evidence-- Attach relevant files to the email-- Review the email to ensure that all the incorporated information is factually accurate, and that the assertions are valid-- Select the appropriate recipients for "To," "Cc," and/or "Bcc" and look up their email addresses-- Schedule an appropriate time to send the email-- Perform follow-ups if responses are expected but not received
+- Consult existing emails, chats, files, and other internal and public information that's related to the subject matter.
+- Perform qualitative or quantitative analysis on the collected information, and draw conclusions that are relevant to the email.
+- Write the complete email based on the conclusions and incorporate supporting evidence.
+- Attach relevant files to the email.
+- Review the email to ensure that all the incorporated information is factually accurate and that the assertions are valid.
+- Select the appropriate recipients for **To**, **Cc**, and **Bcc**, and look up their email addresses.
+- Schedule an appropriate time to send the email.
+- Perform follow-ups if responses are expected but not received.
-You may configure the agents to perform each of the above steps with or without human approval.
+You can configure the agents to perform each of the preceding tasks with or without human approval.
### Multi-agent systems
-Currently, the prevailing strategy for achieving performant autonomous agents is through multi-agent systems. In multi-agent systems, multiple autonomous agents, whether in digital or robotic form, interact or work together to achieve individual or collective goals. Agents in the system can operate independently and possess their own knowledge or information. Each agent may also have the capability to perceive its environment, make decisions, and execute actions based on its objectives.
+A popular strategy for achieving performant autonomous agents is the use of multi-agent systems. In multi-agent systems, multiple autonomous agents, whether in digital or robotic form, interact or work together to achieve individual or collective goals. Agents in the system can operate independently and possess their own knowledge or information. Each agent might also have the capability to perceive its environment, make decisions, and execute actions based on its objectives.
-Key characteristics of multi-agent systems:
+Multi-agent systems have these key characteristics:
-- Autonomous: Each agent functions independently, making its own decisions without direct human intervention or control by other agents.-- Interactive: Agents communicate and collaborate with each other to share information, negotiate, and coordinate their actions. This interaction can occur through various protocols and communication channels.-- Goal-oriented: Agents in a multi-agent system are designed to achieve specific goals, which can be aligned with individual objectives or a common objective shared among the agents.-- Distributed: Multi-agent systems operate in a distributed manner, with no single point of control. This distribution enhances the system's robustness, scalability, and resource efficiency.
+- **Autonomous**: Each agent functions independently. It makes its own decisions without direct human intervention or control by other agents.
+- **Interactive**: Agents communicate and collaborate with each other to share information, negotiate, and coordinate their actions. This interaction can occur through various protocols and communication channels.
+- **Goal-oriented**: Agents in a multi-agent system are designed to achieve specific goals, which can be aligned with individual objectives or a shared objective among the agents.
+- **Distributed**: Multi-agent systems operate in a distributed manner, with no single point of control. This distribution enhances the system's robustness, scalability, and resource efficiency.
A multi-agent system provides the following advantages over a copilot or a single instance of LLM inference:
-- Dynamic reasoning: Compared to chain-of-thought or tree-of-thought prompting, multi-agent systems allow for dynamic navigation through various reasoning paths.
-- Sophisticated abilities: Multi-agent systems can handle complex or large-scale problems by conducting thorough decision-making processes and distributing tasks among multiple agents.
-- Enhanced memory: Multi-agent systems with memory can overcome large language models' context windows, enabling better understanding and information retention.
+- **Dynamic reasoning**: Compared to chain-of-thought or tree-of-thought prompting, multi-agent systems allow for dynamic navigation through various reasoning paths.
+- **Sophisticated abilities**: Multi-agent systems can handle complex or large-scale problems by conducting thorough decision-making processes and distributing tasks among multiple agents.
+- **Enhanced memory**: Multi-agent systems with memory can overcome the context windows of LLMs to enable better understanding and information retention.
-## Implement AI agent
+## Implementation of AI agents
### Reasoning and planning
-Complex reasoning and planning are the hallmark of advanced autonomous agents. Popular autonomous agent frameworks incorporate one or more of the following methodologies for reasoning and planning:
+Complex reasoning and planning are the hallmark of advanced autonomous agents. Popular frameworks for autonomous agents incorporate one or more of the following methodologies (with links to arXiv archive pages) for reasoning and planning:
+
+- [Self-Ask](https://arxiv.org/abs/2210.03350)
+
+ Improve on chain of thought by having the model explicitly ask itself (and answer) follow-up questions before answering the initial question.
-[Self-ask](https://arxiv.org/abs/2210.03350)
-Improves on chain of thought by having the model explicitly asking itself (and answering) follow-up questions before answering the initial question.
+- [Reason and Act (ReAct)](https://arxiv.org/abs/2210.03629)
-[Reason and Act (ReAct)](https://arxiv.org/abs/2210.03629)
-Use LLMs to generate both reasoning traces and task-specific actions in an interleaved manner. Reasoning traces help the model induce, track, and update action plans as well as handle exceptions, while actions allow it to interface with external sources, such as knowledge bases or environments, to gather additional information.
+ Use LLMs to generate both reasoning traces and task-specific actions in an interleaved manner. Reasoning traces help the model induce, track, and update action plans, along with handling exceptions. Actions allow the model to connect with external sources, such as knowledge bases or environments, to gather additional information.
-[Plan and Solve](https://arxiv.org/abs/2305.04091)
-Devise a plan to divide the entire task into smaller subtasks, and then carry out the subtasks according to the plan. This mitigates the calculation errors, missing-step errors, and semantic misunderstanding errors that are often present in zero-shot chain-of-thought (CoT) prompting.
+- [Plan and Solve](https://arxiv.org/abs/2305.04091)
-[Reflection/Self-critique](https://arxiv.org/abs/2303.11366)
-Reflexion agents verbally reflect on task feedback signals, then maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials.
+ Devise a plan to divide the entire task into smaller subtasks, and then carry out the subtasks according to the plan. This approach mitigates the calculation errors, missing-step errors, and semantic misunderstanding errors that are often present in zero-shot chain-of-thought prompting.
+
+- [Reflect/Self-critique](https://arxiv.org/abs/2303.11366)
+
+ Use *reflexion* agents that verbally reflect on task feedback signals. These agents maintain their own reflective text in an episodic memory buffer to induce better decision-making in subsequent trials.
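
To make these methodologies concrete, here's a minimal, framework-free sketch of a ReAct-style loop. It's an illustration only: `call_llm` and the two stub tools are hypothetical placeholders rather than any framework's API, and the prompt format and parsing are simplified.

```python
# Minimal ReAct-style loop (illustrative sketch; call_llm and the stub tools are
# hypothetical placeholders, not part of any specific framework).
import re

def call_llm(prompt: str) -> str:
    """Placeholder for a chat-completion call to the model of your choice."""
    raise NotImplementedError

TOOLS = {
    "search": lambda query: f"(search results for '{query}')",  # stub tool
    "lookup": lambda term: f"(definition of '{term}')",         # stub tool
}

def react(question: str, max_steps: int = 5) -> str:
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # Ask the model for one interleaved reasoning/action step.
        reply = call_llm(
            transcript
            + "Respond with 'Thought: ...' followed by either "
              "'Action: <tool>[<input>]' or 'Final Answer: ...'."
        )
        transcript += reply + "\n"
        if "Final Answer:" in reply:
            return reply.split("Final Answer:", 1)[1].strip()
        action = re.search(r"Action:\s*(\w+)\[(.*)\]", reply)
        if action:
            tool_name, argument = action.group(1), action.group(2)
            observation = TOOLS.get(tool_name, lambda _: "unknown tool")(argument)
            # Feed the observation back so the next thought can build on it.
            transcript += f"Observation: {observation}\n"
    return "No final answer within the step budget."
```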
### Frameworks
-Various frameworks and tools can facilitate the development and deployment of AI agent.
+Various frameworks and tools can facilitate the development and deployment of AI agents.
-For tool usage and perception that do not require sophisticated planning and memory, some popular LLM orchestrator frameworks are LangChain, LlamaIndex, Prompt Flow, and Semantic Kernel.
+For tool usage and perception that don't require sophisticated planning and memory, some popular LLM orchestrator frameworks are LangChain, LlamaIndex, Prompt Flow, and Semantic Kernel.
-For advanced and autonomous planning and execution workflows, [AutoGen](https://microsoft.github.io/autogen/) propelled the multi-agent wave that began in late 2022. OpenAI's [Assistants API](https://platform.openai.com/docs/assistants/overview) allow their users to create agents natively within the GPT ecosystem. [LangChain Agents](https://python.langchain.com/v0.1/docs/modules/agents/) and [LlamaIndex Agents](https://docs.llamaindex.ai/en/stable/use_cases/agents/) also emerged around the same time.
+For advanced and autonomous planning and execution workflows, [AutoGen](https://microsoft.github.io/autogen/) propelled the multi-agent wave that began in late 2022. OpenAI's [Assistants API](https://platform.openai.com/docs/assistants/overview) allows its users to create agents natively within the GPT ecosystem. [LangChain Agents](https://python.langchain.com/v0.1/docs/modules/agents/) and [LlamaIndex Agents](https://docs.llamaindex.ai/en/stable/use_cases/agents/) also emerged around the same time.
> [!TIP]
-> See the implementation sample section at the end of this article for tutorial on building a simple multi-agent system using one of the popular frameworks and a unified agent memory system.
+> The [implementation sample](#implementation-sample) later in this article shows how to build a simple multi-agent system by using one of the popular frameworks and a unified agent memory system.
### AI agent memory system
-The prevalent practice for experimenting with AI-enhanced applications in 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management.
+The prevalent practice for experimenting with AI-enhanced applications from 2022 through 2024 has been using standalone database management systems for various data workflows or types. For example, you can use an in-memory database for caching, a relational database for operational data (including tracing/activity logs and LLM conversation history), and a [pure vector database](vector-database.md#integrated-vector-database-vs-pure-vector-database) for embedding management.
+
+However, this practice of using a complex web of standalone databases can hurt an AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agents is its own challenge.
-However, this practice of using a complex web of standalone databases can hurt AI agent's performance. Integrating all these disparate databases into a cohesive, interoperable, and resilient memory system for AI agent is a significant challenge in and of itself. Moreover, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems:
+Also, many of the frequently used database services are not optimal for the speed and scalability that AI agent systems need. These databases' individual weaknesses are exacerbated in multi-agent systems.
#### In-memory databases
-In-memory databases are excellent for speed but may struggle with the large-scale data persistence that AI agent requires.
+
+In-memory databases are excellent for speed but might struggle with the large-scale data persistence that AI agents need.
#### Relational databases
-Relational databases are not ideal for the varied modalities and fluid schemas of data handled by agents. Moreover, relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding.
+
+Relational databases are not ideal for the varied modalities and fluid schemas of data that agents handle. Relational databases require manual efforts and even downtime to manage provisioning, partitioning, and sharding.
#### Pure vector databases
-Pure vector databases tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer
-- no guarantee on reads & writes
-- limited ingestion throughput
-- low availability (below 99.9%, or annualized outage of almost 9 hours or more)
-- one consistency level (eventual)
-- resource-intensive in-memory vector index
-- limited options for multitenancy
-- limited security
-The next section dives deeper into what makes a robust AI agent memory system.
+Pure vector databases tend to be less effective for transactional operations, real-time updates, and distributed workloads. The popular pure vector databases nowadays typically offer:
+
+- No guarantee on reads and writes.
+- Limited ingestion throughput.
+- Low availability (below 99.9%, or an annualized outage of 9 hours or more).
+- One consistency level (eventual).
+- A resource-intensive in-memory vector index.
+- Limited options for multitenancy.
+- Limited security.
+
+## Characteristics of a robust AI agent memory system
+
+Just as efficient database management systems are critical to the performance of software applications, it's critical to provide LLM-powered agents with relevant and useful information to guide their inference. Robust memory systems enable organizing and storing various kinds of information that the agents can retrieve at inference time.
+
+Currently, LLM-powered applications often use [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation) that uses basic semantic search or vector search to retrieve passages or documents. [Vector search](vector-database.md#vector-search) can be useful for finding general information. But vector search might not capture the specific context, structure, or relationships that are relevant for a particular task or domain.
-## Memory can make or break agents
+For example, if the task is to write code, vector search might not be able to retrieve the syntax tree, file system layout, code summaries, or API signatures that are important for generating coherent and correct code. Similarly, if the task is to work with tabular data, vector search might not be able to retrieve the schema, the foreign keys, the stored procedures, or the reports that are useful for querying or analyzing the data.
-Just as efficient database management systems are critical to software applications' performances, it is critical to provide LLM-powered agents with relevant and useful information to guide their inference. Robust memory systems enable organizing and storing different kinds of information that the agents can retrieve at inference time.
+Weaving together a web of standalone in-memory, relational, and vector databases (as described [earlier](#ai-agent-memory-system)) is not an optimal solution for the varied data types. This approach might work for prototypical agent systems. However, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
-Currently, LLM-powered applications often use [retrieval-augmented generation](vector-database.md#retrieval-augmented-generation) that uses basic semantic search or vector search to retrieve passages or documents. [Vector search](vector-database.md#vector-search) can be useful for finding general information, but it may not capture the specific context, structure, or relationships that are relevant for a particular task or domain.
+A robust memory system should have the following characteristics.
-For example, if the task is to write code, vector search may not be able to retrieve the syntax tree, file system layout, code summaries, or API signatures that are important for generating coherent and correct code. Similarly, if the task is to work with tabular data, vector search may not be able to retrieve the schema, the foreign keys, the stored procedures, or the reports that are useful for querying or analyzing the data.
+### Multimodal
-Weaving together [a web of standalone in-memory, relational, and vector databases](#ai-agent-memory-system) is not an optimal solution for the varied data types, either. This approach may work for prototypical agent systems; however, it adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+AI agent memory systems should provide collections that store metadata, relationships, entities, summaries, or other types of information that can be useful for various tasks and domains. These collections can be based on the structure and format of the data, such as documents, tables, or code. Or they can be based on the content and meaning of the data, such as concepts, associations, or procedural steps.
-Therefore, a robust memory system should have the following characteristics:
+Memory systems aren't just critical to AI agents. They're also important for the humans who develop, maintain, and use these agents.
-#### Multi-modal (Part I)
+For example, humans might need to supervise agents' planning and execution workflows in near real time. While supervising, humans might interject with guidance or make in-line edits of agents' dialogues or monologues. Humans might also need to audit the reasoning and actions of agents to verify the validity of the final output.
-AI agent memory systems should provide different collections that store metadata, relationships, entities, summaries, or other types of information that can be useful for different tasks and domains. These collections can be based on the structure and format of the data, such as documents, tables, or code, or they can be based on the content and meaning of the data, such as concepts, associations, or procedural steps.
+Human/agent interactions are likely in natural or programming languages, whereas agents "think," "learn," and "remember" through embeddings. This difference poses another requirement on memory systems' consistency across data modalities.
-#### Operational
+### Operational
-Memory systems should provide different memory banks that store information that is relevant for the interaction with the user and the environment. Such information may include chat history, user preferences, sensory data, decisions made, facts learned, or other operational data that are updated with high frequency and at high volumes. These memory banks can help the agents remember short-term and long-term information, avoid repeating or contradicting themselves, and maintain task coherence. These requirements must hold true even if the agents perform a multitude of unrelated tasks in succession. In advanced cases, agents may also wargame numerous branch plans that diverge or converge at different points.
+Memory systems should provide memory banks that store information that's relevant for the interaction with the user and the environment. Such information might include chat history, user preferences, sensory data, decisions made, facts learned, or other operational data that's updated with high frequency and at high volumes.
-#### Sharable but also separable
+These memory banks can help the agents remember short-term and long-term information, avoid repeating or contradicting themselves, and maintain task coherence. These requirements must hold true even if the agents perform a multitude of unrelated tasks in succession. In advanced cases, agents might also test numerous branch plans that diverge or converge at different points.
-At the macro level, memory systems should enable multiple AI agents to collaborate on a problem or process different aspects of the problem by providing shared memory that is accessible to all the agents. Shared memory can facilitate the exchange of information and the coordination of actions among the agents. At the same time, the memory system must allow agents to preserve their own persona and characteristics, such as their unique collections of prompts and memories.
+### Sharable but also separable
-#### Multi-modal (Part II)
+At the macro level, memory systems should enable multiple AI agents to collaborate on a problem or process different aspects of the problem by providing shared memory that's accessible to all the agents. Shared memory can facilitate the exchange of information and the coordination of actions among the agents.
-Not only are memory systems critical to AI agents; they are also important for the humans who develop, maintain, and use these agents. For example, humans may need to supervise agents' planning and execution workflows in near real-time. While supervising, humans may interject with guidance or make in-line edits of agents' dialogues or monologues. Humans may also need to audit the reasoning and actions of agents to verify the validity of the final output. Human-agent interactions are likely in natural or programming languages, while agents "think," "learn," and "remember" through embeddings. This data modal difference poses another requirement on memory systems' consistency across data modalities.
+At the same time, the memory system must allow agents to preserve their own persona and characteristics, such as their unique collections of prompts and memories.
## Building a robust AI agent memory system
-The above characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together [a plethora of disparate in-memory, relational, and vector databases](#ai-agent-memory-system) may work for early-stage AI-enabled applications; however, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
+The preceding characteristics require AI agent memory systems to be highly scalable and swift. Painstakingly weaving together disparate in-memory, relational, and vector databases (as described [earlier](#ai-agent-memory-system)) might work for early-stage AI-enabled applications. However, this approach adds complexity and performance bottlenecks that can hamper the performance of advanced autonomous agents.
-In place of all the standalone databases, Azure Cosmos DB can serve as a unified solution for AI agent memory systems. Its robustness successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it is the world's first globally distributed [NoSQL](distributed-nosql.md), [relational](distributed-relational.md), and [vector database](vector-database.md) service that offers a serverless mode. AI agents built on top of Azure Cosmos DB enjoy speed, scale, and simplicity.
+In place of all the standalone databases, Azure Cosmos DB can serve as a unified solution for AI agent memory systems. Its robustness successfully [enabled OpenAI's ChatGPT service](https://www.youtube.com/watch?v=6IIUtEFKJec&t) to scale dynamically with high reliability and low maintenance. Powered by an atom-record-sequence engine, it's the world's first globally distributed [NoSQL](distributed-nosql.md), [relational](distributed-relational.md), and [vector database](vector-database.md) service that offers a serverless mode. AI agents built on top of Azure Cosmos DB offer speed, scale, and simplicity.
-#### Speed
+### Speed
-Azure Cosmos DB provides single-digit millisecond latency, making it highly suitable for processes requiring rapid data access and management, including caching (both traditional and [semantic caching](https://techcommunity.microsoft.com/t5/azure-architecture-blog/optimize-azure-openai-applications-with-semantic-caching/ba-p/4106867), transactions, and operational workloads. This low latency is crucial for AI agents that need to perform complex reasoning, make real-time decisions, and provide immediate responses. Moreover, its [use of state-of-the-art DiskANN algorithm](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) provides accurate and fast vector search with 95% less memory consumption.
+Azure Cosmos DB provides single-digit millisecond latency. This capability makes it suitable for processes that require rapid data access and management. These processes include caching (both traditional and [semantic caching](https://techcommunity.microsoft.com/t5/azure-architecture-blog/optimize-azure-openai-applications-with-semantic-caching/ba-p/4106867)), transactions, and operational workloads.
-#### Scale
+Low latency is crucial for AI agents that need to perform complex reasoning, make real-time decisions, and provide immediate responses. In addition, the service's [use of the DiskANN algorithm](nosql/vector-search.md#enroll-in-the-vector-search-preview-feature) provides accurate and fast vector search with minimal memory consumption.
-Engineered for global distribution and horizontal scalability, and offering support for multi-region I/O and multitenancy, this service ensures that memory systems can expand seamlessly and keep up with rapidly growing agents and associated data. Its SLA-backed 99.999% availability guarantee (less than 5 minutes of downtime per year, contrasting 9 hours or more for pure vector database services) provides a solid foundation for mission-critical workloads. At the same time, its various service models like [Reserved Capacity](reserved-capacity.md) or Serverless drastically lower financial costs.
+### Scale
-#### Simplicity
+Azure Cosmos DB is engineered for global distribution and horizontal scalability. It offers support for multiple-region I/O and multitenancy.
-This service simplifies data management and architecture by integrating multiple database functionalities into a single, cohesive platform.
+The service helps ensure that memory systems can expand seamlessly and keep up with rapidly growing agents and associated data. The [availability guarantee in its service-level agreement (SLA)](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services) translates to less than 5 minutes of downtime per year. Pure vector database services, by contrast, can have an annualized downtime of 9 hours or more. This availability provides a solid foundation for mission-critical workloads. At the same time, the various service models in Azure Cosmos DB, like [Reserved Capacity](reserved-capacity.md) or Serverless, can help reduce financial costs.
-Its integrated vector database capabilities can store, index, and query embeddings alongside the corresponding data in natural or programming languages, enabling greater data consistency, scale, and performance.
+### Simplicity
-Its flexibility easily supports the varied modalities and fluid schemas of the metadata, relationships, entities, summaries, chat history, user preferences, sensory data, decisions, facts learned, or other operational data involved in agent workflows. The database automatically indexes all data without requiring schema or index management, allowing AI agents to perform complex queries quickly and efficiently.
+Azure Cosmos DB can simplify data management and architecture by integrating multiple database functionalities into a single, cohesive platform.
-Lastly, its fully managed service eliminates the overhead of database administration, including tasks such as scaling, patching, and backups. Thus, developers can focus on building and optimizing AI agents without worrying about the underlying data infrastructure.
+Its integrated vector database capabilities can store, index, and query embeddings alongside the corresponding data in natural or programming languages. This capability enables greater data consistency, scale, and performance.
-#### Advanced features
+Its flexibility supports the varied modalities and fluid schemas of the metadata, relationships, entities, summaries, chat history, user preferences, sensory data, decisions, facts learned, or other operational data involved in agent workflows. The database automatically indexes all data without requiring schema or index management, which helps AI agents perform complex queries quickly and efficiently.
-Azure Cosmos DB incorporates advanced features such as change feed, which allows tracking and responding to changes in data in real-time. This capability is useful for AI agents that need to react to new information promptly.
+Azure Cosmos DB is fully managed, which eliminates the overhead of database administration tasks like scaling, patching, and backups. Without this overhead, developers can focus on building and optimizing AI agents without worrying about the underlying data infrastructure.
-Additionally, the built-in support for multi-master writes enables high availability and resilience, ensuring continuous operation of AI agents even in the face of regional failures.
+### Advanced features
-The five available [consistency levels](consistency-levels.md) (from strong to eventual) can also cater to various distributed workloads depending on the scenario requirements.
+Azure Cosmos DB incorporates advanced features such as change feed, which allows tracking and responding to changes in data in real time. This capability is useful for AI agents that need to react to new information promptly.
-> [!TIP]
-> You may choose from two Azure Cosmos DB APIs to build your AI agent memory system: Azure Cosmos DB for NoSQL, and vCore-based Azure Cosmos DB for MongoDB. The former provides 99.999% availability and [three vector search algorithms](nosql/vector-search.md): IVF, HNSW, and the state-of-the-art DiskANN. The latter provides 99.995% availability and [two vector search algorithms](mongodb/vcore/vector-search.md): IVF and HNSW.
+Additionally, the built-in support for multi-master writes enables high availability and resilience to help ensure continuous operation of AI agents, even after regional failures.
+
+The five available [consistency levels](consistency-levels.md) (from strong to eventual) can also cater to various distributed workloads, depending on the scenario requirements.
-> [!div class="nextstepaction"]
-> [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
+> [!TIP]
+> You can choose from two Azure Cosmos DB APIs to build your AI agent memory system:
+>
+> - Azure Cosmos DB for NoSQL, which offers a 99.999% availability guarantee and provides [three vector search algorithms](nosql/vector-search.md): IVF, HNSW, and DiskANN
+> - vCore-based Azure Cosmos DB for MongoDB, which offers a 99.995% availability guarantee and provides [two vector search algorithms](mongodb/vcore/vector-search.md): IVF and HNSW (DiskANN is upcoming)
+>
+> For information about the availability guarantees for these APIs, see the [service SLAs](https://www.microsoft.com/licensing/docs/view/Service-Level-Agreements-SLA-for-Online-Services).
## Implementation sample
-This section explores the implementation of an autonomous agent to process traveler inquiries and bookings in a CruiseLine travel application.
+This section explores the implementation of an autonomous agent to process traveler inquiries and bookings in a travel application for a cruise line.
-Chatbots have been a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language, traditionally requiring coded logic. This AI travel agent uses the LangChain Agent framework for agent planning, tool usage, and perception. Its [unified memory system](#memory-can-make-or-break-agents) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings, ensuring [speed, scale, and simplicity](#building-a-robust-ai-agent-memory-system). It operates within a Python FastAPI backend and support user interactions through a React JS user interface.
+Chatbots are a long-standing concept, but AI agents are advancing beyond basic human conversation to carry out tasks based on natural language. These tasks traditionally required coded logic. The AI travel agent in this implementation sample uses the LangChain Agent framework for agent planning, tool usage, and perception.
+
+The AI travel agent's [unified memory system](#characteristics-of-a-robust-ai-agent-memory-system) uses the [vector database](vector-database.md) and document store capabilities of Azure Cosmos DB to address traveler inquiries and facilitate trip bookings. Using Azure Cosmos DB for this purpose helps ensure speed, scale, and simplicity, as described [earlier](#building-a-robust-ai-agent-memory-system).
+
+The sample agent operates within a Python FastAPI back end. It supports user interactions through a React JavaScript user interface.
### Prerequisites
-- If you don't have an Azure subscription, you may [try Azure Cosmos DB free](try-free.md) for 30 days without creating an Azure account; no credit card is required, and no commitment follows when the trial period ends.
-- Set up account for OpenAI API or Azure OpenAI Service.
-- Create a vCore cluster in Azure Cosmos DB for MongoDB by following this [QuickStart](mongodb/vcore/quickstart-portal.md).
-- An IDE for Development, such as VS Code.
-- Python 3.11.4 installed on development environment.
+- An Azure subscription. If you don't have one, you can [try Azure Cosmos DB for free](try-free.md) for 30 days without creating an Azure account. The free trial doesn't require a credit card, and no commitment follows the trial period.
+- An account for the OpenAI API or Azure OpenAI Service.
+- A vCore cluster in Azure Cosmos DB for MongoDB. You can create one by following [this quickstart](mongodb/vcore/quickstart-portal.md).
+- An integrated development environment, such as Visual Studio Code.
+- Python 3.11.4 installed in the development environment.
### Download the project
-All of the code and sample datasets are available on [GitHub](https://github.com/jonathanscholtes/Travel-AI-Agent-React-FastAPI-and-Cosmos-DB-Vector-Store). In this repository, you can find the following folders:
+All of the code and sample datasets are available in [this GitHub repository](https://github.com/jonathanscholtes/Travel-AI-Agent-React-FastAPI-and-Cosmos-DB-Vector-Store). The repository includes these folders:
-- **loader**: This folder contains Python code for loading sample documents and vector embeddings in Azure Cosmos DB.
-- **api**: This folder contains Python FastAPI for Hosting Travel AI Agent.
-- **web**: The folder contains the Web Interface with React JS.
+- *loader*: This folder contains Python code for loading sample documents and vector embeddings in Azure Cosmos DB.
+- *api*: This folder contains the Python FastAPI project for hosting the AI travel agent.
+- *web*: This folder contains code for the React web interface.
### Load travel documents into Azure Cosmos DB
-The GitHub repository contains a Python project located in the **loader** directory intended for loading the sample travel documents into Azure Cosmos DB. This section sets up the project to load the documents.
+The GitHub repository contains a Python project in the *loader* directory. It's intended for loading the sample travel documents into Azure Cosmos DB.
+
+#### Set up the environment
-### Set up the environment for loader
+Set up your Python virtual environment in the *loader* directory by running the following command:
-Set up your Python virtual environment in the **loader** directory by running the following:
```python
python -m venv venv
```
-Activate your environment and install dependencies in the **loader** directory:
+Activate your environment and install dependencies in the *loader* directory:
+
```python
venv\Scripts\activate
python -m pip install -r requirements.txt
```
-Create a file, named **.env** in the **loader** directory, to store the following environment variables.
+Create a file named *.env* in the *loader* directory to store the following environment variables:
+ ```python
- OPENAI_API_KEY="**Your Open AI Key**"
- MONGO_CONNECTION_STRING="mongodb+srv:**your connection string from Azure Cosmos DB**"
+ OPENAI_API_KEY="<your OpenAI key>"
+ MONGO_CONNECTION_STRING="mongodb+srv:<your connection string from Azure Cosmos DB>"
```
-### Load documents and vectors
+#### Load documents and vectors
+
+The Python file *main.py* serves as the central entry point for loading data into Azure Cosmos DB. This code processes the sample travel data from the GitHub repository, including information about ships and destinations. The code also generates travel itinerary packages for each ship and destination, so that travelers can book them by using the AI agent. The CosmosDBLoader tool is responsible for creating collections, vector embeddings, and indexes in the Azure Cosmos DB instance.
-The Python file **main.py** serves as the central entry point for loading data into Azure Cosmos DB. This code processes the sample travel data from the GitHub repository, including information about ships and destinations. Additionally, it generates travel itinerary packages for each ship and destination, allowing travelers to book them using the AI agent. The CosmosDBLoader is responsible for creating collections, vector embeddings, and indexes in the Azure Cosmos DB instance.
+Here are the contents of *main.py*:
-*main.py*
```python
from cosmosdbloader import CosmosDBLoader
from itinerarybuilder import ItineraryBuilder
with open('documents/destinations.json') as file:
builder = ItineraryBuilder(ship_json['ships'],destinations_json['destinations'])
-# Create five itinerary pakages
+# Create five itinerary packages
itinerary = builder.build(5)

# Save itinerary packages to Cosmos DB
collection = cosmosdb_loader.load_vectors(ship_json['ships'],'ships')
collection.create_index([('name', 'text')])
```
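
For reference, here's a hedged sketch of how a vector index can be created on the vCore cluster by using `pymongo`. The database, collection, field name, and dimensions are illustrative and might differ from what the repository's CosmosDBLoader does.

```python
# Hedged sketch: creating an IVF vector index on a vCore-based Azure Cosmos DB
# for MongoDB collection. Database, collection, and field names are illustrative.
from pymongo import MongoClient

client = MongoClient("<your connection string from Azure Cosmos DB>")
database = client["travel"]

database.command({
    "createIndexes": "ships",
    "indexes": [
        {
            "name": "vectorSearchIndex",
            "key": {"contentVector": "cosmosSearch"},
            "cosmosSearchOptions": {
                "kind": "vector-ivf",   # HNSW is also available on vCore clusters
                "numLists": 1,
                "similarity": "COS",    # cosine similarity
                "dimensions": 1536,     # for example, 1536 for text-embedding-ada-002
            },
        }
    ],
})
```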
-Load the documents, vectors and create indexes by simply executing the following command from the loader directory:
+Load the documents, load the vectors, and create indexes by running the following command from the *loader* directory:
+
```python
python main.py
```
-Output:
+Here's the output of *main.py*:
```markdown
--build itinerary--
--load vectors ships--
```
-### Build travel AI agent with Python FastAPI
+### Build the AI travel agent by using Python FastAPI
+
+The AI travel agent is hosted in a back-end API through Python FastAPI, which facilitates integration with the front-end user interface. The API project processes agent requests by [grounding](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857) the LLM prompts against the data layer, specifically the vectors and documents in Azure Cosmos DB.
-The AI travel agent is hosted in a backend API using Python FastAPI, facilitating integration with the frontend user interface. The API project processes agent requests by [grounding](https://techcommunity.microsoft.com/t5/fasttrack-for-azure/grounding-llms/ba-p/3843857) the LLM prompts against the data layer, specifically the vectors and documents in Azure Cosmos DB. Furthermore, the agent makes use of various tools, particularly the Python functions provided at the API service layer. This article focuses on the code necessary for AI agents within the API code.
+The agent makes use of various tools, particularly the Python functions provided at the API service layer. This article focuses on the code necessary for AI agents within the API code.
The API project in the GitHub repository is structured as follows:
-- Model – data modeling components using Pydantic models.
-- Web – web layer components responsible for routing requests and managing communication.
-- Service – service layer components responsible for primary business logic and interaction with data layer; LangChain Agent and Agent Tools.
-- Data – data layer components responsible for interacting with Azure Cosmos DB for MongoDB documents storage and vector search.
+- *Data modeling components* use Pydantic models.
+- *Web layer components* are responsible for routing requests and managing communication.
+- *Service layer components* are responsible for primary business logic and interaction with the data layer, the LangChain Agent, and agent tools.
+- *Data layer components* are responsible for interacting with Azure Cosmos DB for MongoDB document storage and vector search.
### Set up the environment for the API
-Python version 3.11.4 was utilized for the development and testing of the API.
+We used Python version 3.11.4 for the development and testing of the API.
+
+Set up your Python virtual environment in the *api* directory:
-Set up your python virtual environment in the **api** directory.
```python
python -m venv venv
```
-Activate your environment and install dependencies using the requirements file in the **api** directory:
+Activate your environment and install dependencies by using the *requirements* file in the *api* directory:
+
```python
venv\Scripts\activate
python -m pip install -r requirements.txt
```
-Create a file, named **.env** in the **api** directory, to store your environment variables.
+Create a file named *.env* in the *api* directory to store your environment variables:
+ ```python
- OPENAI_API_KEY="**Your Open AI Key**"
- MONGO_CONNECTION_STRING="mongodb+srv:**your connection string from Azure Cosmos DB**"
+ OPENAI_API_KEY="<your OpenAI key>"
+ MONGO_CONNECTION_STRING="mongodb+srv:<your connection string from Azure Cosmos DB>"
```
-With the environment configured and variables set up, we are ready to initiate the FastAPI server. Run the following command from the **api** directory to initiate the server.
+Now that you've configured the environment and set up variables, run the following command from the *api* directory to initiate the server:
+
```python
python app.py
```
-The FastAPI server launches on the localhost loopback 127.0.0.1 port 8000 by default. You can access the Swagger documents using the following localhost address: http://127.0.0.1:8000/docs
+The FastAPI server starts on the localhost loopback 127.0.0.1 port 8000 by default. You can access the Swagger documents by using the following localhost address: `http://127.0.0.1:8000/docs`.
### Use a session for the AI agent memory
-It is imperative for the Travel Agent to have the capability to reference previously provided information within the ongoing conversation. This ability is commonly known as "memory" in the context of LLMs, which should not be confused with the concept of computer memory (like volatile, non-volatile, and persistent memory).
-To achieve this objective, we use the chat message history, which is securely stored in our Azure Cosmos DB instance. Each chat session will have its history stored using a session ID to ensure that only messages from the current conversation session are accessible. This necessity is the reason behind the existence of a 'Get Session' method in our API. It is a placeholder method for managing web sessions in order to illustrate the use of chat message history.
+It's imperative for the travel agent to be able to reference previously provided information within the ongoing conversation. This ability is commonly known as *memory* in the context of LLMs.
+
+To achieve this objective, use the chat message history that's stored in the Azure Cosmos DB instance. The history for each chat session is stored through a session ID to ensure that only messages from the current conversation session are accessible. This necessity is the reason behind the existence of a `Get Session` method in the API. It's a placeholder method for managing web sessions to illustrate the use of chat message history.
-Click Try It out for /session/.
+Select **Try it out** for `/session/`.
+
```python
{
}
```
-For the AI Agent, we only need to simulate a session. Thus, the stubbed-out method merely returns a generated session ID for tracking message history. In a practical implementation, this session would be stored in Azure Cosmos DB and potentially in React JS localStorage.
+For the AI agent, you only need to simulate a session. The stubbed-out method merely returns a generated session ID for tracking message history. In a practical implementation, this session would be stored in Azure Cosmos DB and potentially in React `localStorage`.
+
+Here are the contents of *web/session.py*:
-*web/session.py*
```python
@router.get("/")
def get_session():
```
### Start a conversation with the AI travel agent
-Let us utilize the obtained session ID from the previous step to initiate a new dialogue with our AI agent to validate its functionality. We shall conduct our test by submitting the following phrase: "I want to take a relaxing vacation."
+Use the session ID that you obtained from the previous step to start a new dialogue with the AI agent, so you can validate its functionality. Conduct the test by submitting the following phrase: "I want to take a relaxing vacation."
+
+Select **Try it out** for `/agent/agent_chat`.
-Click Try It out for /agent/agent_chat.
+
+Use this example parameter:
-Example parameter
```python { "input": "I want to take a relaxing vacation.",
Example parameter
} ```
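
If you prefer to call the endpoint outside Swagger, the following hedged sketch shows one way to do it with `requests`. The HTTP method and the `session_id` field are assumptions based on the description in this section; adjust them to match the repository's FastAPI routes.

```python
# Hedged sketch: calling the session and agent endpoints directly.
import requests

# Assumes the /session/ response contains a "session_id" field.
session_id = requests.get("http://127.0.0.1:8000/session/").json().get("session_id")

response = requests.post(
    "http://127.0.0.1:8000/agent/agent_chat",
    json={"input": "I want to take a relaxing vacation.", "session_id": session_id},
)
print(response.json())
```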
-The initial execution results in a recommendation for the Tranquil Breeze Cruise and the Fantasy Seas Adventure Cruise as they are anticipated to be the most 'relaxing' cruises available through the vector search. These documents have the highest score for ```similarity_search_with_score``` that is called in the data layer of our API, ```data.mongodb.travel.similarity_search()```.
-
-The similarity search scores are displayed as output from the API for debugging purposes.
+The initial execution results in a recommendation for the Tranquil Breeze Cruise and the Fantasy Seas Adventure Cruise, because the agent anticipates that they're the most relaxing cruises available through the vector search. These documents have the highest score for `similarity_search_with_score` called in the data layer of the API, `data.mongodb.travel.similarity_search()`.
-Output when calling ```data.mongodb.travel.similarity_search()```
+The similarity search scores appear as output from the API for debugging purposes. Here's the output after a call to `data.mongodb.travel.similarity_search()`:
```markdown
0.8394561085977978
```

> [!TIP]
-> If documents are not being returned for vector search modify the ```similarity_search_with_score``` limit or the score filter value as needed (```[doc for doc, score in docs if score >=.78]```). in ```data.mongodb.travel.similarity_search()```
+> If documents are not being returned for vector search, modify the `similarity_search_with_score` limit or the score filter value as needed (`[doc for doc, score in docs if score >=.78]`) in `data.mongodb.travel.similarity_search()`.
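
For context, the following hedged sketch shows what a data-layer function like `data.mongodb.travel.similarity_search()` can look like when it's built on the LangChain vector store for Azure Cosmos DB for MongoDB. The namespace, index name, and threshold are illustrative and might not match the repository.

```python
# Hedged sketch of a data-layer similarity search; namespace, index name, and
# score threshold are illustrative and might differ from the repository.
from langchain_community.vectorstores.azure_cosmos_db import AzureCosmosDBVectorSearch
from langchain_openai import OpenAIEmbeddings

def similarity_search(query: str, connection_string: str) -> list:
    vector_store = AzureCosmosDBVectorSearch.from_connection_string(
        connection_string=connection_string,
        namespace="travel.packages",   # "<database>.<collection>"
        embedding=OpenAIEmbeddings(),
        index_name="vectorSearchIndex",
    )
    docs = vector_store.similarity_search_with_score(query, k=5)
    # Keep only documents above the score filter mentioned in the tip.
    return [doc for doc, score in docs if score >= 0.78]
```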
+
+Calling `agent_chat` for the first time creates a new collection named `history` in Azure Cosmos DB to store the conversation by session. This call enables the agent to access the stored chat message history as needed. Subsequent executions of `agent_chat` with the same parameters produce varying results, because the agent draws on that memory.
-Calling the 'agent_chat' for the first time creates a new collection named 'history' in Azure Cosmos DB to store the conversation by session. This call enables the agent to access the stored chat message history as needed. Subsequent executions of 'agent_chat' with the same parameters produce varying results as it draws from memory.
+### Walk through the AI agent
-### Walkthrough of AI agent
+When you're integrating the AI agent into the API, the web layer components are responsible for initiating all requests. The web layer components are followed by the service layer components, and finally by the data components.
-When integrating the AI Agent into the API, the web search components are responsible for initiating all requests. This is followed by the search service, and finally the data components. In our specific case, we utilize MongoDB data search, which connects to Azure Cosmos DB. The layers facilitate the exchange of Model components, with the AI Agent and AI Agent Tool code residing in the service layer. This approach was implemented to enable the seamless interchangeability of data sources and to extend the capabilities of the AI Agent with additional, more intricate functionalities or 'tools'.
+In this specific case, you use a MongoDB data search that connects to Azure Cosmos DB. The layers facilitate the exchange of model components, with the AI agent and the AI agent tool code residing in the service layer. This approach enables the seamless interchangeability of data sources. It also extends the capabilities of the AI agent with additional, more intricate functionalities or tools.
#### Service layer
-The service layer forms the cornerstone of our core business logic. In this particular scenario, the service layer plays a crucial role as the repository for the LangChain agent code, facilitating the seamless integration of user prompts with Azure Cosmos DB data, conversation memory, and agent functions for our AI Agent.
+The service layer forms the cornerstone of core business logic. In this particular scenario, the service layer plays a crucial role as the repository for the LangChain Agent code. It facilitates the seamless integration of user prompts with Azure Cosmos DB data, conversation memory, and agent functions for the AI agent.
-The service layer employs a singleton pattern module for handling agent-related initializations in the **init.py** file.
+The service layer employs a singleton pattern module for handling agent-related initializations in the *init.py* file. Here are the contents of *service/init.py*:
-*service/init.py*
```python
from dotenv import load_dotenv
from os import environ
def LLM_init():
] )
- #Answer should be embedded in html tags. Only answer questions related to cruise travel, If you can not answer respond with \"I am here to assist with your travel questions.\".
+ #Answer should be embedded in HTML tags. Only answer questions related to cruise travel, If you can not answer respond with \"I am here to assist with your travel questions.\".
agent = create_openai_tools_agent(chat, tools, prompt)
def LLM_init():
LLM_init()
```
-The **init.py** file commences by initiating the loading of environment variables from a **.env** file utilizing the ```load_dotenv(override=False)``` method. Then, a global variable named ```agent_with_chat_history``` is instantiated for the agent, intended for use by our **TravelAgent.py**. The ```LLM_init()``` method is invoked during module initialization to configure our AI agent for conversation via the API web layer. The OpenAI Chat object is instantiated using the GPT-3.5 model, incorporating specific parameters such as model name and temperature. The chat object, tools list, and prompt template are combined to generate an ```AgentExecutor```, which operates as our AI Travel Agent. Lastly, the agent with history, ```agent_with_chat_history```, is established using ```RunnableWithMessageHistory``` with chat history (MongoDBChatMessageHistory), enabling it to maintain a complete conversation history via Azure Cosmos DB.
+The *init.py* file initiates the loading of environment variables from an *.env* file by using the `load_dotenv(override=False)` method. Then, a global variable named `agent_with_chat_history` is instantiated for the agent. This agent is intended for use by *TravelAgent.py*.
+
+The `LLM_init()` method is invoked during module initialization to configure the AI agent for conversation via the API web layer. The OpenAI `chat` object is instantiated through the GPT-3.5 model and incorporates specific parameters such as model name and temperature. The `chat` object, tools list, and prompt template are combined to generate `AgentExecutor`, which operates as the AI travel agent.
+
+The agent with history, `agent_with_chat_history`, is established through `RunnableWithMessageHistory` with chat history (`MongoDBChatMessageHistory`). This action enables it to maintain a complete conversation history via Azure Cosmos DB.
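
For orientation, here's a hedged, minimal sketch of that wiring. It assumes that `agent_executor` (the `AgentExecutor` built from the chat model, tools, and prompt) is already defined, and the database name is illustrative; the repository's *init.py* might differ in details.

```python
# Hedged sketch of the history wiring described above. agent_executor is assumed
# to be the AgentExecutor built earlier in init.py; names are illustrative.
from os import environ

from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_community.chat_message_histories import MongoDBChatMessageHistory

def get_session_history(session_id: str) -> MongoDBChatMessageHistory:
    # Each session's messages are stored in a history collection in Azure Cosmos DB.
    return MongoDBChatMessageHistory(
        connection_string=environ.get("MONGO_CONNECTION_STRING"),
        session_id=session_id,
        database_name="travel",       # illustrative database name
        collection_name="history",
    )

agent_with_chat_history = RunnableWithMessageHistory(
    agent_executor,                   # assumed to be defined earlier in init.py
    get_session_history,
    input_messages_key="input",
    history_messages_key="chat_history",
)
```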
#### Prompt
-The LLM prompt initially began with the simple statement "You are a helpful and friendly travel assistant for a cruise company." However, through testing, it was determined that more consistent results could be obtained by including the instruction "Answer travel questions to the best of your ability, providing only relevant information. To book a cruise, capturing the person's name is essential." The results are presented in HTML format to enhance the visual appeal within the web interface.
+The LLM prompt initially began with the simple statement "You are a helpful and friendly travel assistant for a cruise company." However, testing showed that you could obtain more consistent results by including the instruction "Answer travel questions to the best of your ability, providing only relevant information. To book a cruise, capturing the person's name is essential." The results appear in HTML format to enhance the visual appeal of the web interface.
#### Agent tools
-[Tools](#what-are-ai-agents) are interfaces that an agent can use to interact with the world, often done through function calling.
-When creating an agent, it is essential to furnish it with a set of tools that it can utilize. The ```@tool``` decorator offers the most straightforward approach to defining a custom tool. By default, the decorator uses the function name as the tool name, although this can be replaced by providing a string as the first argument. Moreover, the decorator will utilize the function's docstring as the tool's description, thus requiring the provision of a docstring.
+[Tools](#what-are-ai-agents) are interfaces that an agent can use to interact with the world, often through function calling.
+
+When you're creating an agent, you must furnish it with a set of tools that it can use. The `@tool` decorator offers the most straightforward approach to defining a custom tool.
+
+By default, the decorator uses the function name as the tool name, although you can override it by providing a string as the first argument. The decorator uses the function's docstring as the tool's description, so the function must include a docstring.
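
As a quick illustration of those defaults, here's a minimal, hypothetical tool (it isn't part of the repository):

```python
# Minimal @tool example. The function name ("ship_count") becomes the tool name,
# and the docstring becomes the tool description. The tool itself is hypothetical.
from langchain_core.tools import tool

@tool
def ship_count(destination: str) -> str:
    """Return how many ships currently sail to the given destination."""
    return f"3 ships currently sail to {destination}."  # placeholder data
```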
+
+Here are the contents of *service/TravelAgentTools.py*:
-*service/TravelAgentTools.py*
```python
from langchain_core.tools import tool
from langchain.docstore.document import Document
def book_cruise(package_name:str, passenger_name:str, room: str )-> str:
return "Cruise has been booked, ref number is 343242" ```
-In the **TravelAgentTools.py** file, three specific tools are defined. The first tool, ```vacation_lookup```, conducts a vector search against Azure Cosmos DB, using a ```similarity_search``` to retrieve relevant travel-related material. The second tool, ```itinerary_lookup```, retrieves cruise package details and schedules for a specified cruise ship. Lastly, ```book_cruise``` is responsible for booking a cruise package for a passenger. Specific instructions ("In order to book a cruise I need to know your name.") might be necessary to ensure the capture of the passenger's name and room number for booking the cruise package. This is in spite of including such instructions in the LLM prompt.
+The *TravelAgentTools.py* file defines three tools:
+
+- `vacation_lookup` conducts a vector search against Azure Cosmos DB. It uses `similarity_search` to retrieve relevant travel-related material.
+- `itinerary_lookup` retrieves cruise package details and schedules for a specified cruise ship.
+- `book_cruise` books a cruise package for a passenger.
+
+Specific instructions ("In order to book a cruise I need to know your name") might be necessary to ensure the capture of the passenger's name and room number for booking the cruise package, even though you included such instructions in the LLM prompt.
#### AI agent
-The fundamental concept underlying agents is to utilize a language model for selecting a sequence of actions to execute.
+The fundamental concept that underlies agents is to use a language model for selecting a sequence of actions to execute.
+
+Here are the contents of *service/TravelAgent.py*:
-*service/TravelAgent.py*
```python
from .init import agent_with_chat_history
from model.prompt import PromptResponse
def agent_chat(input:str, session_id:str)->str:
    return PromptResponse(text=results["output"], ResponseSeconds=(time.time() - start_time))
```
-The **TravelAgent.py** file is straightforward, as ```agent_with_chat_history```, and its dependencies (tools, prompt, and LLM) are initialized and configured in the **init.py** file. In this file, the agent is called using the input received from the user, along with the session ID for conversation memory. Afterwards, ```PromptResponse``` (model/prompt) is returned with the agent's output and response time.
+The *TravelAgent.py* file is straightforward, because `agent_with_chat_history` and its dependencies (tools, prompt, and LLM) are initialized and configured in the *init.py* file. This file calls the agent by using the input received from the user, along with the session ID for conversation memory. Afterward, `PromptResponse` (model/prompt) is returned with the agent's output and response time.
+
+## AI agent integration with the React user interface
-### Integrate AI agent with React JS user interface
+Now that the data is loaded and the AI agent is accessible through the API, you can complete the solution by establishing a web user interface (by using React) for the travel website. Using React helps illustrate the seamless integration of the AI agent into a travel site. This integration enhances the user experience with a conversational travel assistant for inquiries and bookings.
-With the successful loading of the data and accessibility of our AI Agent through our API, we can now complete the solution by establishing a web user interface using React JS for our travel website. By harnessing the capabilities of React JS, we can illustrate the seamless integration of our AI agent into a travel site, enhancing the user experience with a conversational travel assistant for inquiries and bookings.
+### Set up the environment for React
-#### Set up the environment for React JS
+Install Node.js and the dependencies before testing the React interface.
-Install Node.js and the dependencies before testing out the React interface.
+Run the following command from the *web* directory to perform a clean installation of project dependencies. The installation might take some time.
-Run the following command from the **web** directory to perform a clean install of project dependencies, this may take some time.
```javascript
npm ci
```
-Next, it is essential to create a file named **.env** within the **web** directory to facilitate the storage of environment variables. Then, you should include the following details in the newly created **.env** file.
+Next, create a file named *.env* within the *web* directory to store environment variables. Include the following details in the newly created *.env* file:
-REACT_APP_API_HOST=http://127.0.0.1:8000
+`REACT_APP_API_HOST=http://127.0.0.1:8000`
+
+Now, run the following command from the *web* directory to initiate the React web user interface:
-Now, we have the ability to execute the following command from the **web** directory to initiate the React web user interface.
```javascript
npm start
```
-Running the previous command launches the React JS web application.
+Running the previous command opens the React web application.
-#### Walkthrough of React JS Web interface
+### Walk through the React web interface
-The web project of the GitHub repository is a straightforward application to facilitate user interaction with our AI agent. The primary components required to converse with the agent are ```TravelAgent.js``` and ```ChatLayout.js```. The **Main.js** file serves as the central module or user landing page.
+The web project of the GitHub repository is a straightforward application to facilitate user interaction with the AI agent. The primary components required to converse with the agent are *TravelAgent.js* and *ChatLayout.js*. The *Main.js* file serves as the central module or user landing page.
#### Main
-The Main component serves as the central manager of the application, acting as the designated entry point for routing. Within the render function, it produces JSX code to delineate the main page layout. This layout encompasses placeholder elements for the application such as logos and links, a section housing the travel agent component (further details to come), and a footer containing a sample disclaimer regarding the application's nature.
+The main component serves as the central manager of the application. It acts as the designated entry point for routing. Within the render function, it produces JSX code to delineate the main page layout. This layout encompasses placeholder elements for the application, such as logos and links, a section that houses the travel agent component, and a footer that contains a sample disclaimer about the application's nature.
+
+Here are the contents of *main.js*:
-*main.js*
```javascript import React, { Component } from 'react' import { Stack, Link, Paper } from '@mui/material'
export default Main
#### Travel agent
-The Travel Agent component has a straightforward purpose – capturing user inputs and displaying responses. It plays a key role in managing the integration with the backend AI Agent, primarily by capturing sessions and forwarding user prompts to our FastAPI service. The resulting responses are stored in an array for display, facilitated by the Chat Layout component.
+The travel agent component has a straightforward purpose: capturing user inputs and displaying responses. It plays a key role in managing the integration with the back-end AI agent, primarily by capturing sessions and forwarding user prompts to the FastAPI service. The resulting responses are stored in an array for display, facilitated by the chat layout component.
+
+Here are the contents of *TripPlanning/TravelAgent.js*:
-*TripPlanning/TravelAgent.js*
```javascript import React, { useState, useEffect } from 'react' import { Button, Box, Link, Stack, TextField } from '@mui/material'
export default function TravelAgent() {
} ```
-Click on "Effortlessly plan your voyage" to launch the travel assistant.
+Select **Effortlessly plan your voyage** to open the travel assistant.
#### Chat layout
-The Chat Layout component, as indicated by its name, oversees the arrangement of the chat. It systematically processes the chat messages and implements the designated formatting specified in the message JSON object.
+The chat layout component oversees the arrangement of the chat. It systematically processes the chat messages and implements the formatting specified in the `message` JSON object.
+
+Here are the contents of *TripPlanning/ChatLayout.py*:
-*TripPlanning/ChatLayout.py*
```javascript import React from 'react' import { Box, Stack } from '@mui/material'
export default function ChatLayout(messages) {
} ```
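
As a rough, illustrative sketch only (not the repository's actual component), a chat layout along these lines styles each entry based on who sent it; the `role` and `content` field names are assumptions made for this example.

```javascript
// Illustrative sketch of a chat layout; field names (role, content) are assumed, not the repo's schema.
import React from 'react'
import { Box, Stack } from '@mui/material'

export default function ChatLayoutSketch({ messages = [] }) {
  return (
    <Stack spacing={1}>
      {messages.map((message, index) => {
        const isUser = message.role === 'user'
        return (
          <Box
            key={index}
            sx={{
              alignSelf: isUser ? 'flex-end' : 'flex-start', // user prompts on the right, agent replies on the left
              bgcolor: isUser ? '#cfe8ff' : '#d6f5d6',        // blue for the user, green for the agent
              borderRadius: 2,
              p: 1,
              maxWidth: '75%'
            }}
          >
            {message.content}
          </Box>
        )
      })}
    </Stack>
  )
}
```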
-User prompts are on the right side and colored blue, while the Travel AI Agent responses are on the left side and colored green. As you can see in the image below, the HTML formatted responses are accounted for in the conversation.
-
-When your AI agent is ready go to into production, you can use semantic caching to improve query performance by 80% and reduce LLM inference/API call costs. See this blog post for how to implement [semantic caching](https://stochasticcoder.com/2024/03/22/improve-llm-performance-using-semantic-cache-with-cosmos-db/).
+User prompts are on the right side and colored blue. Responses from the AI travel agent are on the left side and colored green. As the following image shows, the HTML-formatted responses are accounted for in the conversation.
-> [!NOTE]
-> If you would like to contribute to this article, feel free to click on the pencil button on the top right corner of the article. If you have any specific questions or comments on this article, you may reach out to cosmosdbgenai@microsoft.com
-### Next steps
+When your AI agent is ready to go into production, you can use semantic caching to improve query performance by 80% and to reduce LLM inference and API call costs. To implement semantic caching, see [this post on the Stochastic Coder blog](https://stochasticcoder.com/2024/03/22/improve-llm-performance-using-semantic-cache-with-cosmos-db/).
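
The linked post covers the details; purely as a hedged sketch of the idea (not that implementation), a semantic cache stores each prompt's embedding with its response and serves the cached answer when a new prompt is similar enough. The `embed` and `callAgent` functions below are placeholders for your own embedding model and agent call.

```javascript
// Conceptual sketch of semantic caching; embed() and callAgent() are placeholders, not real APIs.
const cache = [] // entries shaped like { embedding: number[], response: string }

function cosineSimilarity(a, b) {
  let dot = 0
  let normA = 0
  let normB = 0
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i]
    normA += a[i] * a[i]
    normB += b[i] * b[i]
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB))
}

async function answerWithCache(prompt, embed, callAgent, threshold = 0.95) {
  const embedding = await embed(prompt)

  // Serve a cached response if a semantically similar prompt was answered before.
  const hit = cache.find(entry => cosineSimilarity(entry.embedding, embedding) >= threshold)
  if (hit) return hit.response

  // Otherwise call the agent (LLM) and remember the result for future prompts.
  const response = await callAgent(prompt)
  cache.push({ embedding, response })
  return response
}
```

In production, the cache would typically live in the database alongside the vector index rather than in memory.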
-[30-day Free Trial without Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
-[90-day Free Trial and up to $6,000 in throughput credits with Azure AI Advantage](ai-advantage.md)
+## Related content
-> [!div class="nextstepaction"]
-> [Use the Azure Cosmos DB lifetime free tier](free-tier.md)
+- [30-day free trial without an Azure subscription](https://azure.microsoft.com/try/cosmosdb/)
+- [90-day free trial and up to $6,000 in throughput credits with Azure AI Advantage](ai-advantage.md)
+- [Azure Cosmos DB lifetime free tier](free-tier.md)
cosmos-db Distance Functions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/distance-functions.md
Last updated 07/01/2024
# What are distance functions?
-Distance functions are mathematical formulas used to measure the similarity or dissimilarity between vectors (see [vector search](vector-search-overview.md)). Common examples include Manhattan distance, Euclidean distance, cosine similarity, and dot product. These measurements are crucial for determining how closely related two pieces of data.
+Distance functions are mathematical formulas used to measure the similarity or dissimilarity between vectors (see [vector search](vector-search-overview.md)). Common examples include Manhattan distance, Euclidean distance, cosine similarity, and dot product. These measurements are crucial for determining how closely related two pieces of data are.
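
To make these measures concrete, here's a small, generic sketch over plain JavaScript arrays; it isn't tied to any particular vector store.

```javascript
// Common vector distance/similarity measures for equal-length numeric arrays.
function manhattanDistance(a, b) {
  return a.reduce((sum, value, i) => sum + Math.abs(value - b[i]), 0)
}

function euclideanDistance(a, b) {
  return Math.sqrt(a.reduce((sum, value, i) => sum + (value - b[i]) ** 2, 0))
}

function dotProduct(a, b) {
  return a.reduce((sum, value, i) => sum + value * b[i], 0)
}

function cosineSimilarity(a, b) {
  const magnitude = v => Math.sqrt(dotProduct(v, v))
  return dotProduct(a, b) / (magnitude(a) * magnitude(b))
}

// Example: comparing [1, 2, 3] and [2, 2, 2]
// manhattanDistance -> 2, euclideanDistance -> ~1.414, dotProduct -> 12, cosineSimilarity -> ~0.926
```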
## Manhattan distance
cosmos-db Knn Vs Ann https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/knn-vs-ann.md
Last updated 07/01/2024
# kNN vs ANN
-Two popular vector search algorithms are k-Nearest Neighbors (kNN) and Approximate Nearest Neighbors (ANN, not to be confused with Artificial Neural Network). kNN is precise but computationally intensive, making it less suitable for large datasets. ANN, on the other hand, offers a balance between accuracy and efficiency, making it better suited for large-scale applications.
+Two major categories of vector search algorithms are k-Nearest Neighbors (kNN) and Approximate Nearest Neighbors (ANN, not to be confused with Artificial Neural Network). kNN is precise but computationally intensive, making it less suitable for large datasets. ANN, on the other hand, offers a balance between accuracy and efficiency, making it better suited for large-scale applications.
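
A minimal brute-force sketch shows where the cost of exact kNN comes from: every query is compared against every stored vector, which is precise but scales linearly with the dataset, whereas ANN index structures trade a little accuracy to avoid that full scan.

```javascript
// Exact (brute-force) k-nearest neighbors: one distance computation per stored vector.
function knn(queryVector, dataset, k, distance) {
  return dataset
    .map(item => ({ ...item, score: distance(queryVector, item.vector) }))
    .sort((a, b) => a.score - b.score) // smaller distance = more similar
    .slice(0, k)
}

// Example usage with Euclidean distance.
const euclidean = (a, b) => Math.sqrt(a.reduce((sum, value, i) => sum + (value - b[i]) ** 2, 0))

const documents = [
  { id: 'doc1', vector: [0.10, 0.90] },
  { id: 'doc2', vector: [0.80, 0.20] },
  { id: 'doc3', vector: [0.15, 0.85] }
]

console.log(knn([0.12, 0.88], documents, 2, euclidean)) // doc1 and doc3 are the closest matches
```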
## How kNN works
cosmos-db Vector Search Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/gen-ai/vector-search-overview.md
Vector search is a method that helps you find similar items based on their data
This [interactive visualization](https://openai.com/index/introducing-text-and-code-embeddings/#_1Vr7cWWEATucFxVXbW465e) shows some examples of closeness and distance between vectors.
-Two popular types of vector search algorithms are [k-nearest neighbors (kNN) and approximate nearest neighbor (ANN)](knn-vs-ann.md). Some well-known vector search algorithms belonging to these categories include Inverted File (IVF), Hierarchical Navigable Small World (HNSW), and the state-of-the-art DiskANN.
+Two major types of vector search algorithms are k-nearest neighbors (kNN) and approximate nearest neighbor (ANN). Between [kNN and ANN](knn-vs-ann.md), the latter offers a balance between accuracy and efficiency, making it better suited for large-scale applications. Some well-known ANN algorithms include Inverted File (IVF), Hierarchical Navigable Small World (HNSW), and the state-of-the-art DiskANN.
Using an integrated vector search feature in a fully featured database ([as opposed to a pure vector database](../vector-database.md#integrated-vector-database-vs-pure-vector-database)) offers an efficient way to store, index, and search high-dimensional vector data directly alongside other application data. This approach removes the necessity of migrating your data to costlier alternative vector databases and provides a seamless integration of your AI-driven applications.
cosmos-db How To Move Regions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/how-to-move-regions.md
- Title: Move an Azure Cosmos DB account to another region
-description: Learn how to move an Azure Cosmos DB account to another region.
----- Previously updated : 03/15/2022----
-# Move an Azure Cosmos DB account to another region
-
-This article describes how to either:
--- Move a region where data is replicated in Azure Cosmos DB.-- Migrate account (Azure Resource Manager) metadata and data from one region to another.-
-## Move data from one region to another
-
-Azure Cosmos DB supports data replication natively, so moving data from one region to another is simple. You can accomplish it by using the Azure portal, Azure PowerShell, or the Azure CLI. It involves the following steps:
-
-1. Add a new region to the account.
-
- To add a new region to an Azure Cosmos DB account, see [Add/remove regions to an Azure Cosmos DB account](how-to-manage-database-account.yml#add-remove-regions-from-your-database-account).
-
-1. Perform a manual failover to the new region.
-
- When the region that's being removed is currently the write region for the account, you'll need to start a failover to the new region added in the previous step. This is a zero-downtime operation. If you're moving a read region in a multiple-region account, you can skip this step.
-
- To start a failover, see [Perform manual failover on an Azure Cosmos DB account](how-to-manage-database-account.yml#perform-manual-failover-on-an-azure-cosmos-db-account).
-
-1. Remove the original region.
-
- To remove a region from an Azure Cosmos DB account, see [Add/remove regions from your Azure Cosmos DB account](how-to-manage-database-account.yml#add-remove-regions-from-your-database-account).
-
-> [!NOTE]
-> If you perform a failover operation or add/remove a new region while an [asynchronous throughput scaling operation](scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused. It will resume automatically when the failover or add/remove region operation is complete.
-
-## Migrate Azure Cosmos DB account metadata
-
-Azure Cosmos DB does not natively support migrating account metadata from one region to another. To migrate both the account metadata and customer data from one region to another, you must create a new account in the desired region and then copy the data manually.
-
-> [!IMPORTANT]
-> It is not necessary to migrate the account metadata if the data is stored or moved to a different region. The region in which the account metadata resides has no impact on the performance, security or any other operational aspects of your Azure Cosmos DB account.
-
-A near-zero-downtime migration for the API for NoSQL requires the use of the [change feed](change-feed.md) or a tool that uses it.
-
-The following steps demonstrate how to migrate an Azure Cosmos DB account for the API for NoSQL and its data from one region to another:
-
-1. Create a new Azure Cosmos DB account in the desired region.
-
- To create a new account via the Azure portal, PowerShell, or the Azure CLI, see [Create an Azure Cosmos DB account](how-to-manage-database-account.yml#create-an-account).
-
-1. Create a new database and container.
-
- To create a new database and container, see [Create an Azure Cosmos DB container](how-to-create-container.md).
-
-1. Migrate data by using the Azure Cosmos DB Spark Connector live migration sample.
-
- To migrate data with near zero downtime, see [Live Migrate Azure Cosmos DB SQL API Containers data with Spark Connector](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration).
-
-1. Update the application connection string.
-
- With the Live Data Migration sample still running, update the connection information in the new deployment of your application. You can retrieve the endpoints and keys for your application from the Azure portal.
-
- :::image type="content" source="./media/secure-access-to-data/nosql-database-security-master-key-portal.png" alt-text="Access control in the Azure portal, demonstrating NoSQL database security.":::
-
-1. Redirect requests to the new application.
-
- After the new application is connected to Azure Cosmos DB, you can redirect client requests to your new deployment.
-
-1. Delete any resources that you no longer need.
-
- With requests now fully redirected to the new instance, you can delete the old Azure Cosmos DB account and stop the Live Data Migrator sample.
-
-## Next steps
-
-For more information and examples on how to manage the Azure Cosmos DB account as well as databases and containers, read the following articles:
-
-* [Manage an Azure Cosmos DB account](how-to-manage-database-account.yml)
-* [Change feed in Azure Cosmos DB](change-feed.md)
cosmos-db How To Setup Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/how-to-setup-rbac.md
MongoClient client = new MongoClient(uri);
mongosh --authenticationDatabase <YOUR_DB> --authenticationMechanism SCRAM-SHA-256 "mongodb://<YOUR_USERNAME>:<YOUR_PASSWORD>@<YOUR_HOST>:10255/?ssl=true&replicaSet=globaldb&retrywrites=false&maxIdleTimeMS=120000" ```
+## Authenticate using MongoDB Compass/Azure Data Studio
+```bash
+connectionString = "mongodb://" + "<YOUR_USER>" + ":" + "<YOUR_PASSWORD>" + "@" + "<YOUR_HOSTNAME>" + ":10255/" + "?ssl=true&retrywrites=false&replicaSet=globaldb&authmechanism=SCRAM-SHA-256&appname=@" + "<YOUR appName FROM CONNECTION STRING IN AZURE PORTAL>" + "@"
++"&authSource=" +"<YOUR_DATABASE>";
+```
+
## Azure CLI RBAC Commands

The RBAC management commands will only work with newer versions of the Azure CLI installed. See the Quickstart above on how to get started.
cosmos-db How To Migrate Native Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/how-to-migrate-native-tools.md
Migrate a collection from the source MongoDB instance to the target Azure Cosmos
```bash mongorestore \
- --db <database-name> \
- --collection <collection-name> \
- --ssl \
- --uri <target-connection-string> \
- <dump-directory>/<database-name>/<collection-name>.bson
+ --ssl \
+ --uri <target-connection-string> \
+ <dump-directory>/<database-name>/<collection-name>.bson
```-
+ > [!NOTE]
+ > You can also restore a specific collection or collections from the `<dump-directory>/` directory. For example, the following operation restores a single collection from its corresponding data files in the `<dump-directory>/` directory: `mongorestore --nsInclude=test.purchaseorders <dump-directory>/`
+
1. Monitor the terminal output from *mongorestore*. The output prints lines of text to the terminal with updates on the restore operation's status.
cosmos-db Limits https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/mongodb/vcore/limits.md
+
+ Title: Service Limits in Azure Cosmos DB for MongoDB vCore
+description: This document outlines the service limits for vCore-based Azure Cosmos DB for MongoDB.
+++++ Last updated : 06/27/2024++
+# Service Limits in Azure Cosmos DB for MongoDB vCore
+
+This document outlines the current hard and soft limits for Azure Cosmos DB for MongoDB vCore. Many of these limitations are temporary and will evolve over time as the service continues to improve. If any of these limits are an issue for your organization, please [reach out to our team](mailto:mongodb-feedback@microsoft.com) for assistance.
+
+## Query and Execution Limits
+
+### MongoDB Execution Limits
+- Maximum transaction lifetime: 30 seconds.
+- Cursor lifetime: 10 minutes. Note: A cursorNotFound error might occur if the cursor exceeds its lifetime.
+- Default query execution limit: 120 seconds. This can be overridden on a per-query basis using `maxTimeMS` in the respective MongoDB driver.
+#### Example:
+```javascript
+db.collection.find({ field: "value" }).maxTimeMS(5000)
+```
+
+### Maximum MongoDB Query Size
+- The maximum memory size for MongoDB queries depends on the tier. For example, for M80, the query memory size limit is approximately 150 MiB.
+- In sharded clusters, if a query pulls data across nodes, the limit on that data size is 1GB.
+
+## Indexing Limits
+
+### General Indexing Limits
+- Maximum number of compound index fields: 32.
+- Maximum size for `_id` field value: 2KB.
+- Maximum size for index path: 256B.
+- Maximum number of indexes per collection: 64 by default.
+ - Configurable up to 300 indexes per collection.
+- Sorting is done in memory and doesn't push down to the index.
+- Maximum level of nesting for embedded objects/arrays on index definitions: 6.
+- Background index builds are in preview. To enable, please [reach out to our team](mailto:mongodb-feedback@microsoft.com) for assistance.
+ - Only one index build can be in progress on a collection at a time.
+ - The number of simultaneous index builds on different collections is configurable (default: 2).
+ - Use the `currentOp` command to view the progress of long-running index builds (see the example after this list).
+ - Unique index builds are done in the foreground and block writes in the collection.
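
For example, a filter along these lines in mongosh surfaces in-progress `createIndexes` operations; treat the exact fields as a starting point, because the reported shape can vary by server version.

```javascript
// List active createIndexes operations to check index build progress.
db.currentOp({ "command.createIndexes": { $exists: true } })

// Or dump all in-progress operations and inspect the inprog array: db.currentOp(true)
```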
+
+### Wildcard Indexing Limits
+- For wildcard indexes, if the indexed field is an array of arrays, the entire embedded array is taken as a value instead of traversing its contents.
+
+### Geospatial Indexing Limits
+- No support for BigPolygons.
+- Composite indexes don't support geospatial indexes.
+- `$geoWithin` query doesn't support polygons with holes.
+- The `key` field is required in the `$geoNear` aggregation stage.
+- Indexes are recommended but not required for `$near`, `$nearSphere` query operators, and the `$geoNear` aggregation stage.
+
+### Text Index Limits
+- Only one text index can be defined on a collection.
+- Supports simple text searches only; advanced search capabilities like regular expression searches aren't supported.
+- `hint()` isn't supported in combination with a query using `$text` expression.
+- Sort operations can't use the ordering of the text index.
+- Tokenization for Chinese, Japanese, and Korean isn't supported yet.
+- Case-insensitive tokenization isn't supported yet.
+
+### Vector Search Limits
+- Indexing vectors up to 2,000 dimensions in size.
+- Indexing applies to only one vector per path.
+- Only one index can be created per vector path.
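
As a hedged illustration of working within these limits, an IVF vector index on a single path can be created along the following lines in mongosh; the `cosmosSearchOptions` values follow the vCore vector search documentation, but verify them against the current docs for your cluster tier.

```javascript
// One vector index on a single path, with a dimension count inside the 2,000 limit.
db.runCommand({
  createIndexes: "exampleCollection",
  indexes: [
    {
      name: "vectorSearchIndex",
      key: { contentVector: "cosmosSearch" },
      cosmosSearchOptions: {
        kind: "vector-ivf", // IVF index; HNSW is limited to higher cluster tiers
        numLists: 1,
        similarity: "COS",  // cosine similarity
        dimensions: 1536    // must not exceed 2,000
      }
    }
  ]
})
```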
+
+## Cluster and Shard Limits
+
+### Cluster Tier
+- Maximum: M200. Please [reach out to our team](mailto:mongodb-feedback@microsoft.com) for higher tiers.
+
+### Shards
+- Maximum: 6 (in preview). Please [reach out to our team](mailto:mongodb-feedback@microsoft.com) for additional shards.
+
+### Secondary Regions
+- Maximum: 1 additional secondary region. Please [reach out to our team](mailto:mongodb-feedback@microsoft.com) for additional regions.
+
+### Free Tier Limits
+The following limitations can be overridden by upgrading to a paid tier:
+- Maximum storage: 32GB.
+- Backup / Restore not supported (available in M25+)
+- High availability (HA) not supported (available in M30+)
+- HNSW vector indexes not supported (available in M40+)
+- Diagnostic logging not supported (available in M30+)
+- No service-level-agreement provided (requires HA to be enabled)
+- Free tier clusters are paused after 60 days of inactivity (no connections to the cluster).
+
+## Replication and HA Limits
+
+### Cross-Region Replication (Preview)
+- Supported only on single shard (node) vCore clusters.
+- The following configurations are the same on both primary and replica clusters and can't be changed on the replica cluster:
+ - Compute configuration
+ - Storage and shard count
+ - User accounts
+- HA isn't supported on replica clusters.
+- Cross-region replication isn't available on clusters with burstable compute or Free tier clusters.
+
+## Miscellaneous Limits
+
+### Portal Mongo Shell Usage
+- The Portal Mongo Shell can be used for 120 minutes within a 24-hour window.
+
+## Next steps
+
+- Get started by [creating a cluster](quickstart-portal.md).
+- Review options for [migrating from MongoDB to Azure Cosmos DB for MongoDB vCore](migration-options.md).
+++
cosmos-db Vectordistance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cosmos-db/nosql/query/vectordistance.md
VECTORDISTANCE(<vector_expr1>, <vector_expr2>, [<bool_expr>], [<obj_expr>])
| **`bool_expr`** | A boolean specifying how the computed value is used in an ORDER BY expression. If `true`, then brute force is used. A value of `false` leverages any index defined on the vector property, if it exists. Default value is `false`. |
| **`obj_expr`** | A JSON formatted object literal used to specify options for the vector distance calculation. Valid items include `distanceFunction` and `dataType`. |
| **`distanceFunction`** | The metric used to compute distance/similarity. |
-| **`dataType`** | The data type of the vectors. `float32`, `float16`, `int8`, `uint8` values. Default value is `float32`. |
+| **`dataType`** | The data type of the vectors. `float32`, `int8`, `uint8` values. Default value is `float32`. |
Supported metrics for `distanceFunction` are:
ORDER BY VectorDistance(c.vector1, c.vector2)
- This function requires enrollment in the [Azure Cosmos DB NoSQL Vector Search preview feature](../vector-search.md#enroll-in-the-vector-search-preview-feature). - This function benefits from a [vector index](../../index-policy.md#vector-indexes) - if `false` is given as the optional `bool_expr`, then the vector index defined on the path is used, if one exists. If no index is defined on the vector path, then this reverts to full scan and incurs higher RU charges and higher latency than if using a vector index. -- When `VectorDistance` is used in an `ORDER BY` clause, no direction can be specified for the `ORDER BY`, as the results are always sorted in order of most similar (first) to least similar (last) based on the similarity metric used. If a direction such as `ASC` or `DESC` is specified, an error occurs.
+- When `VectorDistance` is used in an `ORDER BY` clause, no direction needs to be specified for the `ORDER BY` as the results are always sorted in order of most similar (first) to least similar (last) based on the similarity metric used.
- The result is expressed as a similarity score. ## Related content
cost-management-billing Pay By Invoice https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/cost-management-billing/manage/pay-by-invoice.md
Previously updated : 05/06/2024 Last updated : 07/03/2024
When you switch to pay by wire transfer:
- Send the exact amount per the invoice.
- Pay the bill by the due date.
-Users with a Microsoft Customer Agreement must always [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to Azure support to enable pay by wire transfer.
+## Prerequisites
-Customers who have a Microsoft Online Services Program (pay-as-you-go) account can use the Azure portal to [request to pay by wire transfer](#request-to-pay-by-wire-transfer).
+Users with a Microsoft Customer Agreement must always [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to Azure support to enable pay by wire transfer. Any user with access to the Microsoft Customer Agreement billing profile can submit the request to pay by wire transfer.
+
+Currently, customers who have a Microsoft Online Services Program (pay-as-you-go) account must [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to Azure support to enable pay by wire transfer. Any user with access to the Microsoft Online Services Program (pay-as-you-go) billing account can submit the request to pay by wire transfer.
> [!IMPORTANT]
> * Pay by wire transfer is only available for customers using Azure on behalf of a company.
> * Pay all outstanding charges before switching to pay by wire transfer.
> * An outstanding invoice is paid by your default payment method. In order to have it paid by wire transfer, you must change your default payment method to wire transfer after you've been approved.
> * Currently, payment by wire transfer isn't supported for Global Azure in China.
-> * For Microsoft Online Services Program accounts, if you switch to pay by wire transfer, you can't switch back to paying by credit or debit card.
-> * Currently, only customers in the United States can get automatically approved to change their payment method to wire transfer. Support for other regions is being evaluated.
+> * If you switch to pay by wire transfer, you can't switch back to paying by credit or debit card, except for one-time payments.
> * As of September 30, 2023 Microsoft no longer accepts checks as a payment method.
-## Request to pay by wire transfer
-
-> [!NOTE]
-> - Currently only customers in the United States can get automatically approved to change their payment method to wire transfer and use the following procedure. Support for other regions is being evaluated. If you are not in the United States, you must [submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer) to change your payment method.
-> - After you're approved to pay by wire transfer, you can't switch back to credit card except for one-time payments.
-
-1. Sign in to the Azure portal.
- - If you have a pay-as-you-go subscription, navigate to **Subscriptions** and then select the one that you want to set up wire transfer for.
- - If you have a Microsoft Customer Agreement, navigate to **Cost Management + Billing** and then select **Billing profiles**. Select the billing profile that you want to set up wire transfer for.
-1. In the left menu, select **Payment methods**.
-1. On the Payment methods page, select **Pay by wire transfer**.
-1. On the **Pay by wire transfer** page, you see a message stating that you can request to use wire transfer instead of automatic payment using a credit or debit card. Select **Continue** to start the check.
-1. Depending on your approval status:
- - If you're automatically approved, the page shows a message stating that you're approved to pay by wire transfer. Enter your **Company name** and then select **Save**.
- - If the request couldn't be processed or if you're not approved, you need to follow the steps in the next section [Submit a request to set up pay by wire transfer](#submit-a-request-to-set-up-pay-by-wire-transfer).
-1. If you're approved, on the Payment methods page under **Other payment methods**, to the right of **Wire transfer**, select the ellipsis (**...**) symbol and then select **Make default**.
- You're all set to pay by wire transfer.
- ## Submit a request to set up pay by wire transfer Users in all regions can submit a request to pay by wire transfer through support. Currently, only customers in the United States can get automatically approved to change their payment method to wire transfer.
data-catalog Data Catalog Migration To Microsoft Purview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-catalog/data-catalog-migration-to-microsoft-purview.md
Microsoft launched a unified data governance service to help manage and govern your on-premises, multicloud, and software-as-a-service (SaaS) data. Microsoft Purview creates a map of your data landscape with automated data discovery, sensitive data classification, and end-to-end data lineage. Microsoft Purview data governance solutions enable data curators to manage and secure their data estate and empowers data consumers to find valuable, trustworthy data.
-The document shows you how to do the migration from Azure Data Catalog to Microsoft Purview.
+The document shows you how to migrate from an existing Azure Data Catalog to Microsoft Purview.
+
+>[!TIP]
+>If you're looking into data catalog services for the first time, use [Microsoft Purview](/purview/purview).
## Recommended approach
data-factory Connector Troubleshoot Oracle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/data-factory/connector-troubleshoot-oracle.md
Previously updated : 05/27/2024 Last updated : 07/02/2024
This article provides suggestions to troubleshoot common problems with the Oracl
- 3DES112 - DES
- - The following algorithms are deemed as secure by OpenSSL, and will be sent along to the server for OAS (Oracle Advanced Security) data integrity.
+ - The following algorithms are deemed as secure by OpenSSL, and will be sent along to the server for OAS (Oracle Advanced Security) data integrity.
- SHA256 - SHA384 - SHA512
+ >[!Note]
+ >The recommended data integrity algorithms SHA256, SHA384 and SHA512 are available for Oracle 19c or higher.
+
## Error code: UserErrorFailedToConnectOdbcSource There are three error messages associated with this error code. Check the cause and recommendation for each error message correspondingly.
energy-data-services How To Deploy Gcz https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/energy-data-services/how-to-deploy-gcz.md
+
+ Title: Deploy Geospatial Consumption Zone on top of Azure Data Manager for Energy
+description: Learn how to deploy Geospatial Consumption Zone on top of your Azure Data Manager for Energy instance.
+++++ Last updated : 05/11/2024
+zone_pivot_groups: energy-data-services-gcz-options
++
+# Deploy Geospatial Consumption Zone
+
+This guide shows you how to deploy the Geospatial Consumption Zone (GCZ) service integrated with Azure Data Manager for Energy (ADME).
+
+> [!IMPORTANT]
+> While the Geospatial Consumption Zone (GCZ) service is a graduated service in the OSDU Forum, it has limitations in terms of security and usage. We will deploy some additional services and policies to secure the environment, but encourage you to follow the service's development on the [OSDU Gitlab](https://community.opengroup.org/osdu/platform/consumption/geospatial/-/wikis/home).
+
+## Description
+
+The OSDU Geospatial Consumption Zone (GCZ) is a service that enables enhanced management and utilization of geospatial data. The GCZ streamlines the handling of location-based information. It abstracts away technical complexities, allowing software applications to access geospatial data without needing to deal with intricate details. By providing ready-to-use map services, the GCZ facilitates seamless integration with OSDU-enabled applications.
+
+## Create an App Registration in Microsoft Entra ID
+
+To deploy the GCZ, you need to create an App Registration in Microsoft Entra ID. The App Registration is used to authenticate the GCZ APIs with Azure Data Manager for Energy so that the service can generate the cache of geospatial data.
+
+1. See [Create an App Registration in Microsoft Entra ID](/azure/active-directory/develop/quickstart-register-app) for instructions on how to create an App Registration.
+1. Grant the App Registration permission to read the relevant data in Azure Data Manager for Energy. See [How to add members to an OSDU group](./how-to-manage-users.md#add-members-to-an-osdu-group-in-a-data-partition) for further instructions.
+
+## Setup
+
+There are two main deployment options for the GCZ service:
+- **Azure Kubernetes Service (AKS)**: Deploy the GCZ service on an AKS cluster. This deployment option is recommended for production environments. It requires more setup, configuration, and maintenance. It also has some limitations in the provided container images.
+- **Windows**: Deploy the GCZ service on a Windows machine. This deployment option is recommended for development and testing environments because it's easier to set up and configure, and it requires less maintenance.
+++++++
+## Publish GCZ APIs publicly (optional)
+
+If you want to expose the GCZ APIs publicly, you can use Azure API Management (APIM).
+Azure API Management lets you securely expose the GCZ service to the internet, because the GCZ service doesn't yet have authentication and authorization built in.
+Through APIM, you can add policies to secure, monitor, and manage the APIs.
+
+### Prerequisites
+
+- An Azure API Management instance. If you don't have an Azure API Management instance, see [Create an Azure API Management instance](/azure/api-management/get-started-create-service-instance).
+- The GCZ APIs are deployed and running.
+
+> [!IMPORTANT]
+> The Azure API Management instance must be injected into a virtual network that is routable to the AKS cluster so that it can communicate with the GCZ APIs.
+
+### Add the GCZ APIs to Azure API Management
+
+#### Download the GCZ OpenAPI specifications
+
+1. Download the two OpenAPI specifications to your local computer.
+ - [GCZ Provider](https://github.com/microsoft/adme-samples/blob/main/services/gcz/gcz-openapi-provider.yaml)
+ - [GCZ Transformer](https://github.com/microsoft/adme-samples/blob/main/services/gcz/gcz-openapi-transformer.yaml)
+1. Open each OpenAPI specification file in a text editor and replace the `servers` section with the corresponding IPs of the AKS GCZ Services' Load Balancer (External IP).
+
+ ```yaml
+ servers:
+ - url: "http://<GCZ-Service-External-IP>/ignite-provider"
+ ```
+++++++
+## Testing the GCZ service
+
+1. Download the API client collection from the [OSDU GitLab](https://community.opengroup.org/osdu/platform/consumption/geospatial/-/blob/master/docs/test-assets/postman/Geospatial%20Consumption%20Zone%20-%20Provider%20Postman%20Tests.postman_collection.json?ref_type=heads) and import it into your API client of choice (for example, Postman).
+1. Add the following environment variables to your API client:
+ - `PROVIDER_URL` - The URL to the GCZ Provider API.
+ - `AMBASSADOR_URL` - The URL to the GCZ Transformer API.
+ - `access_token` - A valid ADME access token.
+
+1. To verify that the GCZ is working as expected, run the API calls in the collection.
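
If you prefer to script a quick smoke test instead of using an API client, a minimal sketch like the following can call the provider endpoint; the URL path is a placeholder here, and the real paths come from the imported collection. It assumes Node.js 18 or later for the built-in `fetch`.

```javascript
// Placeholder smoke test; substitute a real path from the imported API collection.
async function smokeTest() {
  const providerUrl = process.env.PROVIDER_URL   // for example, http://<GCZ-Service-External-IP>/ignite-provider
  const accessToken = process.env.ACCESS_TOKEN   // a valid ADME access token

  const response = await fetch(providerUrl, {
    headers: { Authorization: `Bearer ${accessToken}` }
  })
  console.log(`GCZ provider responded with HTTP ${response.status}`)
}

smokeTest().catch(console.error)
```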
+
+## Next steps
+After you have a successful deployment of GCZ, you can:
+
+- Visualize your GCZ data using the GCZ WebApps from the [OSDU GitLab](https://community.opengroup.org/osdu/platform/consumption/geospatial/-/tree/master/docs/test-assets/webapps?ref_type=heads).
+
+> [!IMPORTANT]
+> The GCZ WebApps are currently in development and don't support authentication. We recommend deploying the WebApps in a private network and exposing them using Azure Application Gateway or Azure Front Door to enable authentication and authorization.
+
+You can also ingest data into your Azure Data Manager for Energy instance:
+
+- [Tutorial on CSV parser ingestion](tutorial-csv-ingestion.md).
+- [Tutorial on manifest ingestion](tutorial-manifest-ingestion.md).
+
+## References
+
+- For information about Geospatial Consumption Zone, see [OSDU GitLab](https://community.opengroup.org/osdu/platform/consumption/geospatial/).
governance Assignment Structure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/governance/policy/concepts/assignment-structure.md
Title: Details of the policy assignment structure description: Describes the policy assignment definition used by Azure Policy to relate policy definitions and parameters to resources for evaluation. Previously updated : 10/03/2022 Last updated : 07/03/2024 # Azure Policy assignment structure
_common_ properties used by Azure Policy. Each `metadata` property has a limit o
- `createdBy` (string): The GUID of the security principal that created the assignment. - `createdOn` (string): The Universal ISO 8601 DateTime format of the assignment creation time. - `parameterScopes` (object): A collection of key-value pairs where the key matches a
- [strongType](./definition-structure.md#strongtype) configured parameter name and the value defines
+ [strongType](./definition-structure-parameters.md#strongtype) configured parameter name and the value defines
the resource scope used in Portal to provide the list of available resources by matching _strongType_. Portal sets this value if the scope is different than the assignment scope. If set, an edit of the policy assignment in Portal automatically sets the scope for the parameter to this
non-compliance and is optional.
> [!IMPORTANT] > Custom messages for non-compliance are only supported on definitions or initiatives with
-> [Resource Manager modes](./definition-structure.md#resource-manager-modes) definitions.
+> [Resource Manager modes](./definition-structure-basics.md#resource-manager-modes) definitions.
```json "nonComplianceMessages": [
For policy assignments with effect set to **deployIfNotExist** or **modify**, it
## Next steps -- Learn about the [policy definition structure](./definition-structure.md).
+- Learn about the [policy definition structure](./definition-structure-basics.md).
- Understand how to [programmatically create policies](../how-to/programmatically-create.md). - Learn how to [get compliance data](../how-to/get-compliance-data.md). - Learn how to [remediate non-compliant resources](../how-to/remediate-resources.md).
iot-central Tutorial Connect Iot Edge Device https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-connect-iot-edge-device.md
- Title: Tutorial - Connect an IoT Edge device to your application
-description: This tutorial shows you how to register, provision, and connect an IoT Edge device to your IoT Central application.
-- Previously updated : 03/04/2024-----
-# Customer intent: As a solution developer, I want to learn how to connect an IoT Edge device to IoT Central and then configure views and forms so that I can interact with the device.
--
-# Tutorial: Connect an IoT Edge device to your Azure IoT Central application
-
-This tutorial shows you how to connect an IoT Edge device to your Azure IoT Central application. The IoT Edge device runs a module that sends temperature, pressure, and humidity telemetry to your application. You use a device template to enable views and forms that let you interact with the module on the IoT Edge device.
-
-In this tutorial, you learn how to:
-
-> [!div class="checklist"]
-> * Import an IoT Edge deployment manifest into your IoT Central application.
-> * Add an IoT Edge device that uses this deployment manifest to your application.
-> * Connect the IoT Edge device to your application.
-> * Monitor the IoT Edge runtime from your application.
-> * Add a device template with views and forms to your application.
-> * View the telemetry sent from the device in your application.
-
-## Prerequisites
-
-To complete the steps in this tutorial, you need:
--
-You also need to be able to upload configuration files to your IoT Central application from your local machine.
-
-## Import a deployment manifest
-
-A deployment manifest specifies the configuration of an IoT Edge device including the details of any custom modules the device should download and run. IoT Edge devices that connect to an IoT Central application download their deployment manifests from the application.
-
-To add a deployment manifest to IoT Central to use in this tutorial:
-
-1. Download and save the [EnvironmentalSensorManifest-1-4.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorManifest-1-4.json) deployment manifest to your local machine.
-
-1. In your IoT Central application, navigate to the **Edge manifests** page.
-
-1. Select **+ New**.
-
-1. On the **Customize** page, enter *Environmental Sensor* as the name and then upload the *EnvironmentalSensorManifest-1-4.json* file.
-
-1. After the manifest file is validated, select **Next**.
-
-1. The **Review and finish** page shows the modules defined in the manifest, including the **SimulatedTemperatureSensor** custom module. Select **Create**.
-
-The **Edge manifests** list now includes the **Environmental sensor** manifest:
--
-## Add an IoT Edge device
-
-Before the IoT Edge device can connect to your IoT Central application, you need to add it to the list of devices and get its credentials:
-
-1. In your IoT Central application, navigate to the **Devices** page.
-
-1. On the **Devices** page, make sure that **All devices** is selected. Then select **+ New**.
-
-1. On the **Create a new device** page:
- * Enter *Environmental sensor - 001* as the device name.
- * Enter *env-sens-001* as the device ID.
- * Make sure that the device template is **unassigned**.
- * Make sure that the device isn't simulated.
- * Set **Azure IoT Edge device** to **Yes**.
- * Select the **Environmental sensor** IoT Edge deployment manifest.
-
-1. Select **Create**.
-
-The list of devices on the **Devices** page now includes the **Environmental sensor - 001** device. The device status is **Registered**:
--
-Before you deploy the IoT Edge device, you need the:
-
-* **ID Scope** of your IoT Central application.
-* **Device ID** values for the IoT Edge device.
-* **Primary key** values for the IoT Edge device.
-
-To find these values, navigate to the **Environmental sensor - 001** device from the **Devices** page and select **Connect**. Make a note of these values before you continue.
-
-## Deploy the IoT Edge device
-
-In this tutorial, you deploy the IoT Edge runtime to a Linux virtual machine in Azure. To deploy and configure the virtual machine, select the following button:
-
-[![Deploy to Azure Button](https://aka.ms/deploytoazurebutton)](https://portal.azure.com/#create/Microsoft.Template/uri/https%3A%2F%2Fraw.githubusercontent.com%2FAzure-Samples%2Fiot-central-docs-samples%2Fmain%2Fedge-vm-deploy-1-4%2FedgeDeploy.json)
-
-On the **Custom deployment** page, use the following values to complete the form:
-
-| Setting | Value |
-| - | -- |
-| `Resource group` | Create a new resource group with a name such as *MyIoTEdgeDevice_rg*. |
-| `Region` | Select a region close to you. |
-| `Dns Label Prefix` | A unique DNS prefix for your virtual machine. |
-| `Admin Username` | *AzureUser* |
-| `Admin Password` | A password of your choice to access the virtual machine. |
-| `Scope Id` | The ID scope you made a note of previously. |
-| `Device Id` | The device ID you made a note of previously. |
-| `Device Key` | The device key you made a note of previously. |
-
-Select **Review + create** and then **Create**. Wait for the deployment to finish before you continue.
-
-## Manage the IoT Edge device
-
-To verify the deployment of the IoT Edge device was successful:
-
-1. In your IoT Central application, navigate to the **Devices** page. Check the status of the **Environmental sensor - 001** device is **Provisioned**. You might need to wait for a few minutes while the device connects.
-
-1. Navigate to the **Environmental sensor - 001** device.
-
-1. On the **Modules** page, check the status of the three modules is **Running**.
-
-On the **Modules** page, you can view status information about the modules and perform actions such as viewing their logs and restarting them.
-
-## View raw data
-
-On the **Raw data** page for the **Environmental sensor - 001** device, you can see the telemetry it's sending and the property values it's reporting.
-
-At the moment, the IoT Edge device doesn't have a device template assigned, so all the data from the device is **Unmodeled**. Without a device template, there are no views or dashboards to display custom device information in the IoT Central application. However, you can use data export to forward the data to other services for analysis or storage.
-
-## Add a device template
-
-A deployment manifest can include definitions of properties exposed by a module. For example, the configuration in the deployment manifest for the **SimulatedTemperatureSensor** module includes the following:
-
-```json
-"SimulatedTemperatureSensor": {
- "properties.desired": {
- "SendData": true,
- "SendInterval": 10
- }
-}
-```
-
-The following steps show you how to add a device template for an IoT Edge device and the module property definitions from the deployment manifest:
-
-1. In your IoT Central application, navigate to the **Device templates** page and select **+ New**.
-
-1. On the **Select type** page, select **Azure IoT Edge**, and then **Next: Customize**.
-
-1. On the **Customize** page, enter *Environmental sensor* as the device template name. Select **Next: Review**.
-
-1. On the **Review** page, select **Create**.
-
-1. On the **Create a model** page, select **Custom model**.
-
-1. On the **Environmental sensor** page, select **Modules**, then **Import modules from manifest**.
-
-1. In the **Import modules** dialog, select the **Environmental sensor** deployment manifest, then **Import**.
-
-The device template now includes a module called **SimulatedTemperatureSensor**, with an interface called **management**. This interface includes definitions of the **SendData** and **SendInterval** properties from the deployment manifest.
-
-A deployment manifest can only define module properties, not commands or telemetry. To add the telemetry definitions to the device template:
-
-1. Download and save the [EnvironmentalSensorTelemetry.json](https://raw.githubusercontent.com/Azure-Samples/iot-central-docs-samples/main/iotedge/EnvironmentalSensorTelemetry.json) interface definition to your local machine.
-
-1. Navigate to the **SimulatedTemperatureSensor** module in the **Environmental sensor** device template.
-
-1. Select **Add inherited interface** (you might need to select **...** to see this option). Select **Import interface**. Then import the *EnvironmentalSensorTelemetry.json* file you previously downloaded.
-
-The module now includes a **telemetry** interface that defines **machine**, **ambient**, and **timeCreated** telemetry types:
--
-To add a view that plots telemetry from the device:
-
-1. In the **Environmental sensor** device template, select **Views**.
-
-1. On the **Select to add a new view** page, select **Visualizing the device**.
-
-1. Enter *Environmental telemetry* as the view name.
-
-1. Select **Start with devices**. Then add the following telemetry types:
- * **ambient/temperature**
- * **humidity**
- * **machine/temperature**
- * **pressure**
-
-1. Select **Add tile**, then **Save**.
-
-1. To publish the template, select **Publish**.
-
-## View telemetry and control module
-
-To view the telemetry from your device, you need to attach the device to the device template:
-
-1. Navigate to the **Devices** page and select the **Environmental sensor - 001** device.
-
-1. Select **Migrate**.
-
-1. In the **Migrate** dialog, select the **Environmental sensor** device template, and select **Migrate**.
-
-1. Navigate to the **Environmental sensor - 001** device and select the **Environmental telemetry** view.
-
-1. The line chart plots the four telemetry values you selected for the view:
-
- :::image type="content" source="media/tutorial-connect-iot-edge-device/environmental-telemetry-view.png" alt-text="Screenshot that shows the telemetry line charts.":::
-
-1. The **Raw data** page now includes columns for the **ambient**, **machine**, and **timeCreated** telemetry values.
-
-To control the module by using the properties defined in the deployment manifest, navigate to the **Environmental sensor - 001** device and select the **Manage** view.
-
-IoT Central created this view automatically from the **manage** interface in the **SimulatedTemperatureSensor** module. The **Raw data** page now includes columns for the **SendData** and **SendInterval** properties.
-
-## Clean up resources
--
-To remove the virtual machine that's running Azure IoT Edge, navigate to the Azure portal and delete the resource group you created previously. If you used the recommended name, your resource group is called **MyIoTEdgeDevice_rg**.
-
-## Next steps
-
-If you'd prefer to continue through the set of IoT Central tutorials and learn more about building an IoT Central solution, see:
-
-> [!div class="nextstepaction"]
-> [Create a gateway device template](./tutorial-define-gateway-device-type.md)
iot-central Tutorial Use Device Groups https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-central/core/tutorial-use-device-groups.md
To learn more about analytics, see [How to use data explorer to analyze device d
## Clean up resources [!INCLUDE [iot-central-clean-up-resources](../../../includes/iot-central-clean-up-resources.md)]-
-## Next steps
-
-Now that you've learned how to use device groups in your Azure IoT Central application, here's the suggested next step:
-
-> [!div class="nextstepaction"]
-> [Connect an IoT Edge device to your Azure IoT Central application](tutorial-connect-iot-edge-device.md)
iot-edge Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-edge/support.md
Modules built as Linux containers can be deployed to either Linux or Windows dev
[IoT Edge for Linux on Windows](iot-edge-for-linux-on-windows.md) is the recommended way to run IoT Edge on Windows devices.
-| Operating System | AMD64 | ARM32v7 | ARM64 | End of support |
+| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support |
| - | -- | - | -- | -- |
| Debian 11 (Bullseye) | | ![Debian + ARM32v7](./media/support/green-check.png) | | [June 2026](https://wiki.debian.org/LTS) |
| Red Hat Enterprise Linux 9 | ![Red Hat Enterprise Linux 9 + AMD64](./media/support/green-check.png) | | | [May 2032](https://access.redhat.com/product-life-cycles?product=Red%20Hat%20Enterprise%20Linux,OpenShift%20Container%20Platform%204) |
Modules built as Linux containers can be deployed to either Linux or Windows dev
| Windows Server 2019/2022 | ![Windows Server 2019/2022 + AMD64](./medi#prerequisites) for supported Windows OS versions. | > [!NOTE]
-> When a *Tier 1* operating system reaches its end of support date, it's removed from the *Tier 1* supported platform list. If you take no action, IoT Edge devices running on the unsupported operating system continue to work but ongoing security patches and bug fixes in the host packages for the operating system won't be available after the end of support date. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform.
+> When a *Tier 1* operating system reaches its end of standard support date, it's removed from the *Tier 1* supported platform list. If you take no action, IoT Edge devices running on the unsupported operating system continue to work but ongoing security patches and bug fixes in the host packages for the operating system won't be available after the end of support date. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform.
#### Windows containers
The systems listed in the following table are considered compatible with Azure I
::: moniker range="=iotedge-1.4"
-| Operating System | AMD64 | ARM32v7 | ARM64 | End of support |
+| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support |
| - | -- | - | -- | -- |
| [CentOS-7](https://docs.centos.org/en-US/docs/) | ![CentOS + AMD64](./media/support/green-check.png) | ![CentOS + ARM32v7](./media/support/green-check.png) | ![CentOS + ARM64](./media/support/green-check.png) | [June 2024](https://www.redhat.com/en/topics/linux/centos-linux-eol) |
| [Debian 10 <sup>1</sup>](https://www.debian.org/releases/buster/) | ![Debian 10 + AMD64](./media/support/green-check.png) | ![Debian 10 + ARM32v7](./media/support/green-check.png) | ![Debian 10 + ARM64](./media/support/green-check.png) | [June 2024](https://wiki.debian.org/LTS) |
The systems listed in the following table are considered compatible with Azure I
::: moniker range=">=iotedge-1.5"
-| Operating System | AMD64 | ARM32v7 | ARM64 | End of support |
+| Operating System | AMD64 | ARM32v7 | ARM64 | End of OS provider standard support |
| - | -- | - | -- | -- |
| [Debian 11](https://www.debian.org/releases/bullseye/) | ![Debian 11 + AMD64](./media/support/green-check.png) | | ![Debian 11 + ARM64](./media/support/green-check.png) | [June 2026](https://wiki.debian.org/LTS) |
| [Mentor Embedded Linux Flex OS](https://www.mentor.com/embedded-software/linux/mel-flex-os/) | ![Mentor Embedded Linux Flex OS + AMD64](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM32v7](./media/support/green-check.png) | ![Mentor Embedded Linux Flex OS + ARM64](./media/support/green-check.png) | |
The systems listed in the following table are considered compatible with Azure I
::: moniker-end > [!NOTE]
-> When a *Tier 2* operating system reaches its end of support date, it's removed from the supported platform list. If you take no action, IoT Edge devices running on the unsupported operating system continue to work but ongoing security patches and bug fixes in the host packages for the operating system won't be available after the end of support date. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform.
+> When a *Tier 2* operating system reaches its end of standard support date, it's removed from the supported platform list. If you take no action, IoT Edge devices running on the unsupported operating system continue to work but ongoing security patches and bug fixes in the host packages for the operating system won't be available after the end of support date. To continue to receive support and security updates, we recommend that you update your host OS to a *Tier 1* supported platform.
## Releases
iot-hub C2d Messaging Ios https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/iot-hub/c2d-messaging-ios.md
- Title: Send cloud-to-device messages (iOS)-
-description: How to send cloud-to-device messages from a back-end app and receive them on a device app using the Azure IoT SDKs for iOS.
----
-ms.devland: swift
- Previously updated : 05/30/2023---
-# Send cloud-to-device messages with IoT Hub (iOS)
--
-Azure IoT Hub is a fully managed service that helps enable reliable and secure bi-directional communications between millions of devices and a solution back end.
-
-This article shows you how to:
-
-* Receive cloud-to-device (C2D) messages on a device
-
-At the end of this article, you run the following Swift iOS project:
-
-* **sample-device**: the sample app from the [Azure IoT Samples for IoS Platform repository](https://github.com/Azure-Samples/azure-iot-samples-ios), which connects to your IoT hub and receives cloud-to-device messages.
-
-> [!NOTE]
-> IoT Hub has SDK support for many device platforms and languages (including C, Java, Python, and JavaScript) through the [Azure IoT device SDKs](iot-hub-devguide-sdks.md).
-
-To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-## Prerequisites
-
-* An active IoT hub in Azure.
-
-* The code sample from the [Azure IoT Samples for IoS Platform repository](https://github.com/Azure-Samples/azure-iot-samples-ios).
-
-* The latest version of [XCode](https://developer.apple.com/xcode/), running the latest version of the iOS SDK. This article was tested with XCode 9.3 and iOS 11.3.
-
-* The latest version of [CocoaPods](https://guides.cocoapods.org/using/getting-started.html).
-
-* Make sure that port 8883 is open in your firewall. The device sample in this article uses MQTT protocol, which communicates over port 8883. This port may be blocked in some corporate and educational network environments. For more information and ways to work around this issue, see [Connecting to IoT Hub (MQTT)](../iot/iot-mqtt-connect-to-iot-hub.md#connecting-to-iot-hub).
-
-## Simulate an IoT device
-
-In this section, you simulate an iOS device running a Swift application to receive cloud-to-device messages from the IoT hub.
-
-### Install CocoaPods
-
-CocoaPods manages dependencies for iOS projects that use third-party libraries.
-
-In a terminal window, navigate to the folder containing the repository that you downloaded in the [prerequisites](#prerequisites). Then, navigate to the sample project:
-
-```sh
-cd quickstart/sample-device
-```
-
-Make sure that XCode is closed, then run the following command to install the CocoaPods that are declared in the **podfile** file:
-
-```sh
-pod install
-```
-
-Along with installing the pods required for your project, the installation command also created an XCode workspace file that is already configured to use the pods for dependencies.
-
-### Run the sample device application
-
-1. Retrieve the connection string for your device. You can copy this string from the [Azure portal](https://portal.azure.com) in the device details page, or retrieve it with the following CLI command:
-
- ```azurecli-interactive
- az iot hub device-identity connection-string show --hub-name {YourIoTHubName} --device-id {YourDeviceID} --output table
- ```
-
-2. Open the sample workspace in Xcode.
-
- ```sh
- open "MQTT Client Sample.xcworkspace"
- ```
-
-3. Expand the **MQTT Client Sample** project and then the folder of the same name.
-
-4. Open **ViewController.swift** for editing in Xcode.
-
-5. Search for the **connectionString** variable and update the value with the device connection string that you copied in the first step.
-
-6. Save your changes.
-
-7. Run the project in the device emulator with the **Build and run** button or the key combo **command + r**.
-
- ![Screenshot shows the Build and run button in the device emulator.](media/iot-hub-ios-swift-c2d/run-sample.png)
-
-## Send a cloud-to-device message
-
-You're now ready to receive cloud-to-device messages. Use the Azure portal to send a test cloud-to-device message to your simulated IoT device.
-
-1. In the **iOS App Sample** app running on the simulated IoT device, select **Start**. The application starts sending device-to-cloud messages, but also starts listening for cloud-to-device messages.
-
- ![View sample IoT device app](media/iot-hub-ios-swift-c2d/view-d2c.png)
-
-2. In the [Azure portal](https://portal.azure.com), navigate to your IoT hub.
-
-3. On the **Devices** page, select the device ID for your simulated IoT device.
-
-4. Select **Message to Device** to open the cloud-to-device message interface.
-
-5. Write a plaintext message in the **Message body** text box, then select **Send message**.
-
-6. Watch the app running on your simulated IoT device. It checks for messages from IoT Hub and prints the text from the most recent one on the screen. Your output should look like the following example:
-
- ![View cloud-to-device messages](media/iot-hub-ios-swift-c2d/view-c2d.png)
-
-## Next steps
-
-In this article, you learned how to send and receive cloud-to-device messages.
-
-* To learn more about cloud-to-device messages, see [Send cloud-to-device messages from an IoT hub](iot-hub-devguide-messages-c2d.md).
-
-* To learn more about IoT Hub message formats, see [Create and read IoT Hub messages](iot-hub-devguide-messages-construct.md).
key-vault Disaster Recovery Guidance https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/key-vault/general/disaster-recovery-guidance.md
Azure Key Vault features multiple layers of redundancy to make sure that your ke
The way that Key Vault replicates your data depends on the specific region that your vault is in.
-**For most Azure regions that are paired with another region**, the contents of your key vault are replicated both within the region and to the paired region. The paired region is usually at least 150 miles away, but within the same geography. This approach ensures high durability of your keys and secrets. For more information about Azure region pairs, see [Azure paired regions](../../reliability/cross-region-replication-azure.md). Two exceptions are the Brazil South region, which is paired to a region in another geography, and the West US 3 region. When you create key vaults in Brazil South or West US 3, they aren't replicated across regions.
+For most Azure regions that are paired with another region, the contents of your key vault are replicated both within the region and to the paired region. The paired region is usually at least 150 miles away, but within the same geography. This approach ensures high durability of your keys and secrets. For more information about Azure region pairs, see [Azure paired regions](../../reliability/cross-region-replication-azure.md). Two exceptions are the Brazil South region, which is paired to a region in another geography, and the West US 3 region. When you create key vaults in Brazil South or West US 3, they aren't replicated across regions.
-**For [Azure regions that don't have a pair](../../reliability/cross-region-replication-azure.md#regions-with-availability-zones-and-no-region-pair), as well as the Brazil South and West US 3 regions**, Azure Key Vault uses zone redundant storage (ZRS) to replicate your data three times within the region, across independent availability zones. For Azure Key Vault Premium, two of the three zones are used to replicate the hardware security module (HSM) keys. You can also use the [backup and restore](backup.md) feature to replicate the contents of your vault to another region of your choice.
## Failover within a region
machine-learning Azure Machine Learning Ci Image Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/azure-machine-learning-ci-image-release-notes.md
- Previously updated : 9/30/2022+ Last updated : 07/03/2024
Azure Machine Learning checks and validates any machine learning packages that m
Main updates provided with each image version are described in the below sections.
+## July 3, 2024
+
+Image version: `24.06.10`
+
+SDK version (azureml-core): `1.56.0`
+
+Issue fixed: Compute Instance 20.04 image build with SDK 1.56.0
+
+Major package and component versions:
+
+Python: `3.9`
+CUDA: `12.2`
+cuDNN: `9.1.1`
+NVIDIA driver: `535.171.04`
+PyTorch: `1.13.1`
+TensorFlow: `2.15.0`
+autokeras: `1.0.16`
+keras: `2.15.0`
+ray: `2.2.0`
+Docker: `24.0.9-1`
+ ## Feb 16, 2024 Version: `24.01.30`
Main environment specific updates:
- N/A ## June 30, 2023
-Version: `23.06.30`
+Version: `23.06.30`
Main changes:
machine-learning How To Attach Kubernetes To Workspace https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-attach-kubernetes-to-workspace.md
Otherwise, if a [user-assigned managed identity is specified in Azure Machine Le
|--|--|--| |Azure Relay|Azure Relay Owner|Only applicable for Arc-enabled Kubernetes cluster. Azure Relay isn't created for AKS cluster without Arc connected.| |Kubernetes - Azure Arc or Azure Kubernetes Service|Reader <br> Kubernetes Extension Contributor <br> Azure Kubernetes Service Cluster Admin |Applicable for both Arc-enabled Kubernetes cluster and AKS cluster.|
+|Azure Kubernetes Service|Contributor|Required only for AKS clusters that use the Trusted Access feature. The workspace uses user-assigned managed identity. See [AzureML access to AKS clusters with special configurations](https://github.com/Azure/AML-Kubernetes/blob/master/docs/azureml-aks-ta-support.md) for details.|
> [!TIP]
machine-learning How To Deploy Models Timegen 1 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-deploy-models-timegen-1.md
You can deploy TimeGEN-1 as a serverless API with pay-as-you-go billing. Nixtla
- Azure role-based access controls (Azure RBAC) are used to grant access to operations in Azure Machine Learning. To perform the steps in this article, your user account must be assigned the __Azure AI Developer role__ on the resource group. For more information on permissions, see [Manage access to an Azure Machine Learning workspace](how-to-assign-roles.md).
+### Pricing information
+
+#### Estimate the number of tokens needed
+
+Before you create a deployment, it's useful to estimate the number of tokens that you plan to use and be billed for.
+One token corresponds to one data point in your input dataset or output dataset.
+
+Suppose you have the following input time series dataset:
+
+| Unique_id | Timestamp | Target Variable | Exogenous Variable 1 | Exogenous Variable 2 |
+|:--:|:--:|:--:|:--:|:--:|
+| BE | 2016-10-22 00:00:00 | 70.00 | 49593.0 | 57253.0 |
+| BE | 2016-10-22 01:00:00 | 37.10 | 46073.0 | 51887.0 |
+
+To determine the number of tokens, multiply the number of rows (in this example, two) by the number of columns used for forecasting, not counting the unique_id and timestamp columns (in this example, three), to get a total of six tokens.
+
+Given the following output dataset:
+
+| Unique_id | Timestamp | Forecasted Target Variable |
+|:--:|:--:|:--:|
+| BE | 2016-10-22 02:00:00 | 46.57 |
+| BE | 2016-10-22 03:00:00 | 48.57 |
+
+You can also determine the number of tokens by counting the number of data points returned after data forecasting. In this example, the number of tokens is two.
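+
+As a minimal sketch of this arithmetic (using the example datasets above), the token counts can be computed as follows:
+
+```python
+# Estimate TimeGEN-1 token counts for the example input and output datasets above.
+input_rows = 2            # rows in the input time series
+forecasting_columns = 3   # target variable + exogenous variables; unique_id and timestamp are excluded
+input_tokens = input_rows * forecasting_columns   # 2 * 3 = 6 tokens
+
+output_rows = 2           # forecasted data points returned
+output_tokens = output_rows                       # one token per returned data point
+
+print(f"Input tokens: {input_tokens}, output tokens: {output_tokens}")
+```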
+
+#### Estimate the pricing
+
+There are four pricing meters, as described in the following table:
+
+| Pricing Meter | Description |
+|--|--|
+| paygo-inference-input-tokens | Costs associated with the tokens used as input for inference when *finetune_steps* = 0 |
+| paygo-inference-output-tokens | Costs associated with the tokens used as output for inference when *finetune_steps* = 0 |
+| paygo-finetuned-model-inference-input-tokens | Costs associated with the tokens used as input for inference when *finetune_steps* > 0 |
+| paygo-finetuned-model-inference-output-tokens | Costs associated with the tokens used as output for inference when *finetune_steps* > 0 |
### Create a new deployment
machine-learning How To Troubleshoot Kubernetes Compute https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/how-to-troubleshoot-kubernetes-compute.md
Below is a list of error types in **cluster scope** that you might encounter whe
* [ERROR: GenericClusterError](#error-genericclustererror) * [ERROR: ClusterNotReachable](#error-clusternotreachable) * [ERROR: ClusterNotFound](#error-clusternotfound)
+* [ERROR: ClusterServiceNotFound](#error-clusterservicenotfound)
+* [ERROR: ClusterUnauthorized](#error-clusterunauthorized)
#### ERROR: GenericClusterError
You can check the following items to troubleshoot the issue:
* First, check the cluster resource ID in the Azure portal to verify whether Kubernetes cluster resource still exists and is running normally. * If the cluster exists and is running, then you can try to detach and reattach the compute to the workspace. Pay attention to more notes on [reattach](#error-genericcomputeerror).
+#### ERROR: ClusterServiceNotFound
+
+The error message is as follows:
+
+````bash
+AzureML extension service not found in cluster.
+````
+
+This error occurs when the extension-owned ingress service doesn't have enough backend pods.
+
+You can:
+
+* Access the cluster and check the status of the service `azureml-ingress-nginx-controller` and its backend pod under the `azureml` namespace.
+* If the cluster doesn't have any running backend pods, check the reason by describing the pod. For example, if the pod doesn't have enough resources to run, you can delete some pods to free enough resources for the ingress pod.
+
+#### ERROR: ClusterUnauthorized
+
+The error message is as follows:
+
+````bash
+Request to Kubernetes cluster unauthorized.
+````
+
+This error occurs only in clusters that have Trusted Access (TA) enabled, and it means the access token expired during the deployment.
+
+You can try again after several minutes.
+ > [!TIP] > For more troubleshooting guidance on common errors when you create or update Kubernetes online endpoints and deployments, see [How to troubleshoot online endpoints](how-to-troubleshoot-online-endpoints.md).
machine-learning Reference Model Inference Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-api.md
Models deployed to [serverless API endpoints](how-to-deploy-models-serverless.md
> * [Mistral-Large](how-to-deploy-models-mistral.md) > * [Phi-3](how-to-deploy-models-phi-3.md) family of models
+Models deployed to [managed inference](concept-endpoints-online.md):
+
+> [!div class="checklist"]
+> * [Meta Llama 3 instruct](how-to-deploy-models-llama.md) family of models
+> * [Phi-3](how-to-deploy-models-phi-3.md) family of models
+> * Mixtral family of models
+ The API is compatible with Azure OpenAI model deployments. ## Capabilities
The API indicates how developers can consume predictions for the following modal
* [Chat completions](reference-model-inference-chat-completions.md): Creates a model response for the given chat conversation. * [Image embeddings](reference-model-inference-images-embeddings.md): Creates an embedding vector representing the input text and image.
+### Inference SDK support
+
+You can use streamlined inference clients in the language of your choice to consume predictions from models running the Azure AI model inference API.
+
+# [Python](#tab/python)
+
+Install the package `azure-ai-inference` using your package manager, like pip:
+
+```bash
+pip install azure-ai-inference
+```
+
+Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+
+```python
+import os
+from azure.ai.inference import ChatCompletionsClient
+from azure.core.credentials import AzureKeyCredential
+
+model = ChatCompletionsClient(
+ endpoint=os.environ["AZUREAI_ENDPOINT_URL"],
+ credential=AzureKeyCredential(os.environ["AZUREAI_ENDPOINT_KEY"]),
+)
+```
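+
+For example, assuming the client created above, a chat completions call might look like the following sketch; the message classes come from `azure.ai.inference.models`:
+
+```python
+from azure.ai.inference.models import SystemMessage, UserMessage
+
+# Send a simple conversation to the deployed model and print the first generated choice.
+response = model.complete(
+    messages=[
+        SystemMessage(content="You are a helpful assistant."),
+        UserMessage(content="How many languages are in the world?"),
+    ]
+)
+
+print(response.choices[0].message.content)
+```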
+
+Explore our [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference) to get yourself started.
+
+# [JavaScript](#tab/javascript)
+
+Install the package `@azure-rest/ai-inference` using npm:
+
+```bash
+npm install @azure-rest/ai-inference
+```
+
+Then, you can use the package to consume the model. The following example shows how to create a client to consume chat completions:
+
+```javascript
+import ModelClient from "@azure-rest/ai-inference";
+import { isUnexpected } from "@azure-rest/ai-inference";
+import { AzureKeyCredential } from "@azure/core-auth";
+
+const client = new ModelClient(
+ process.env.AZUREAI_ENDPOINT_URL,
+ new AzureKeyCredential(process.env.AZUREAI_ENDPOINT_KEY)
+);
+```
+
+Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.
+
+# [REST](#tab/rest)
+
+Use the reference section to explore the API design and which parameters are available. For example, the reference section for [Chat completions](reference-model-inference-chat-completions.md) details how to use the route `/chat/completions` to generate predictions based on chat-formatted instructions:
+
+__Request__
+
+```HTTP/1.1
+POST /chat/completions?api-version=2024-04-01-preview
+Authorization: Bearer <bearer-token>
+Content-Type: application/json
+```
+ ### Extensibility The Azure AI Model Inference API specifies a set of modalities and parameters that models can subscribe to. However, some models may have further capabilities that the ones the API indicates. On those cases, the API allows the developer to pass them as extra parameters in the payload.
-By setting a header `extra-parameters: allow`, the API will attempt to pass any unknown parameter directly to the underlying model. If the model can handle that parameter, the request completes.
+By setting a header `extra-parameters: pass-through`, the API will attempt to pass any unknown parameter directly to the underlying model. If the model can handle that parameter, the request completes.
The following example shows a request passing the parameter `safe_prompt` supported by Mistral-Large, which isn't specified in the Azure AI Model Inference API:
+# [Python](#tab/python)
+
+```python
+response = model.complete(
+ messages=[
+ SystemMessage(content="You are a helpful assistant."),
+ UserMessage(content="How many languages are in the world?"),
+ ],
+ model_extras={
+ "safe_mode": True
+ }
+)
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+var messages = [
+ { role: "system", content: "You are a helpful assistant" },
+ { role: "user", content: "How many languages are in the world?" },
+];
+
+var response = await client.path("/chat/completions").post({
+ "extra-parameters": "pass-through",
+ body: {
+ messages: messages,
+ safe_mode: true
+ }
+});
+```
+
+# [REST](#tab/rest)
+ __Request__ ```HTTP/1.1 POST /chat/completions?api-version=2024-04-01-preview Authorization: Bearer <bearer-token> Content-Type: application/json
-extra-parameters: allow
+extra-parameters: pass-through
``` ```JSON
extra-parameters: allow
} ``` ++ > [!TIP]
-> Alternatively, you can set `extra-parameters: drop` to drop any unknown parameter in the request. Use this capability in case you happen to be sending requests with extra parameters that you know the model won't support but you want the request to completes anyway. A typical example of this is indicating `seed` parameter.
+> The default value for `extra-parameters` is `error`, which returns an error if an extra parameter is indicated in the payload. Alternatively, you can set `extra-parameters: ignore` to drop any unknown parameter in the request. Use this capability when you happen to send requests with extra parameters that you know the model won't support but you want the request to complete anyway. A typical example is indicating the `seed` parameter.
### Models with disparate set of capabilities
The Azure AI Model Inference API indicates a general set of capabilities but eac
The following example shows the response for a chat completion request indicating the parameter `response_format` and asking for a reply in `JSON` format. In the example, since the model doesn't support such a capability, an error 422 is returned to the user.
+# [Python](#tab/python)
+
+```python
+from azure.ai.inference.models import ChatCompletionsResponseFormat
+from azure.core.exceptions import HttpResponseError
+import json
+
+try:
+ response = model.complete(
+ messages=[
+ SystemMessage(content="You are a helpful assistant."),
+ UserMessage(content="How many languages are in the world?"),
+ ],
+ response_format={ "type": ChatCompletionsResponseFormat.JSON_OBJECT }
+ )
+except HttpResponseError as ex:
+ if ex.status_code == 422:
+ response = json.loads(ex.response._content.decode('utf-8'))
+ if isinstance(response, dict) and "detail" in response:
+ for offending in response["detail"]:
+ param = ".".join(offending["loc"])
+ value = offending["input"]
+ print(
+ f"Looks like the model doesn't support the parameter '{param}' with value '{value}'"
+ )
+ else:
+ raise ex
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+try {
+ var messages = [
+ { role: "system", content: "You are a helpful assistant" },
+ { role: "user", content: "How many languages are in the world?" },
+ ];
+
+ var response = await client.path("/chat/completions").post({
+ body: {
+ messages: messages,
+ response_format: { type: "json_object" }
+ }
+ });
+}
+catch (error) {
+ if (error.status_code == 422) {
+ var response = JSON.parse(error.response._content)
+ if (response.detail) {
+ for (const offending of response.detail) {
+ var param = offending.loc.join(".")
+ var value = offending.input
+ console.log(`Looks like the model doesn't support the parameter '${param}' with value '${value}'`)
+ }
+ }
+ }
+ else
+ {
+ throw error
+ }
+}
+```
+
+# [REST](#tab/rest)
+ __Request__ ```HTTP/1.1
__Response__
"message": "One of the parameters contain invalid values." } ```+ > [!TIP] > You can inspect the property `details.loc` to understand the location of the offending parameter and `details.input` to see the value that was passed in the request. ## Content safety
-The Azure AI model inference API supports Azure AI Content Safety. When using deployments with Azure AI Content Safety on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
+The Azure AI model inference API supports [Azure AI Content Safety](https://aka.ms/azureaicontentsafety). When using deployments with Azure AI Content Safety on, inputs and outputs pass through an ensemble of classification models aimed at detecting and preventing the output of harmful content. The content filtering system detects and takes action on specific categories of potentially harmful content in both input prompts and output completions.
The following example shows the response for a chat completion request that has triggered content safety.
+# [Python](#tab/python)
+
+```python
+from azure.ai.inference.models import AssistantMessage, UserMessage, SystemMessage
+
+try:
+ response = model.complete(
+ messages=[
+ SystemMessage(content="You are an AI assistant that helps people find information."),
+ UserMessage(content="Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills."),
+ ]
+ )
+
+ print(response.choices[0].message.content)
+
+except HttpResponseError as ex:
+ if ex.status_code == 400:
+ response = json.loads(ex.response._content.decode('utf-8'))
+ if isinstance(response, dict) and "error" in response:
+ print(f"Your request triggered an {response['error']['code']} error:\n\t {response['error']['message']}")
+ else:
+ raise ex
+ else:
+ raise ex
+```
+
+# [JavaScript](#tab/javascript)
+
+```javascript
+try {
+ var messages = [
+ { role: "system", content: "You are an AI assistant that helps people find information." },
+ { role: "user", content: "Chopping tomatoes and cutting them into cubes or wedges are great ways to practice your knife skills." },
+ ]
+
+ var response = await client.path("/chat/completions").post({
+ body: {
+ messages: messages,
+ }
+ });
+
+ console.log(response.body.choices[0].message.content)
+}
+catch (error) {
+ if (error.status_code == 400) {
+ var response = JSON.parse(error.response._content)
+ if (response.error) {
+ console.log(`Your request triggered an ${response.error.code} error:\n\t ${response.error.message}`)
+ }
+ else
+ {
+ throw error
+ }
+ }
+}
+```
+
+# [REST](#tab/rest)
+ __Request__ ```HTTP/1.1
__Response__
"type": null } ```+ ## Getting started
-The Azure AI Model Inference API is currently supported in models deployed as [Serverless API endpoints](how-to-deploy-models-serverless.md). Deploy any of the [supported models](#availability) to a new [Serverless API endpoints](how-to-deploy-models-serverless.md) to get started. Then you can consume the API in the following ways:
-
-# [Studio](#tab/azure-studio)
-
-You can use the Azure AI Model Inference API to run evaluations or while building with *Prompt flow*. Create a [Serverless Model connection](how-to-connect-models-serverless.md) to a *Serverless API endpoint* and consume its predictions. The Azure AI Model Inference API is used under the hood.
+The Azure AI Model Inference API is currently supported in certain models deployed as [Serverless API endpoints](how-to-deploy-models-serverless.md) and Managed Online Endpoints. Deploy any of the [supported models](#availability) and use the exact same code to consume their predictions.
# [Python](#tab/python)
-Since the API is OpenAI-compatible, you can use any supported SDK that already supports Azure OpenAI. In the following example, we show how you can use LiteLLM with the common API:
+The client library `azure-ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
-```python
-import litellm
+Explore our [samples](https://github.com/Azure/azure-sdk-for-python/tree/main/sdk/ai/azure-ai-inference/samples) and read the [API reference documentation](https://aka.ms/azsdk/azure-ai-inference/python/reference) to get yourself started.
-client = litellm.LiteLLM(
- base_url="https://<endpoint-name>.<region>.inference.ai.azure.com",
- api_key="<key>",
-)
+# [JavaScript](#tab/javascript)
-response = client.chat.completions.create(
- messages=[
- {
- "content": "Who is the most renowned French painter?",
- "role": "user"
- }
- ],
- model="azureai",
- custom_llm_provider="custom_openai",
-)
+The client library `@azure-rest/ai-inference` does inference, including chat completions, for AI models deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
-print(response.choices[0].message.content)
-```
+Explore our [samples](https://github.com/Azure/azure-sdk-for-js/tree/main/sdk/ai/ai-inference-rest/samples) and read the [API reference documentation](https://aka.ms/AAp1kxa) to get yourself started.
# [REST](#tab/rest)
-Models deployed in Azure Machine Learning and Azure AI studio in Serverless API endpoints support the Azure AI Model Inference API. Each endpoint exposes the OpenAPI specification for the modalities the model support. Use the **Endpoint URI** and the **Key** to download the OpenAPI definition for the model. In the following example, we download it from a bash console. Replace `<TOKEN>` by the **Key** and `<ENDPOINT_URI>` for the **Endpoint URI**.
+Explore the reference section of the Azure AI model inference API to see parameters and options to consume models, including chat completions models, deployed by Azure AI Studio and Azure Machine Learning Studio. It supports Serverless API endpoints and Managed Compute endpoints (formerly known as Managed Online Endpoints).
-```bash
-wget -d --header="Authorization: Bearer <TOKEN>" <ENDPOINT_URI>/swagger.json
-```
-
-Use the **Endpoint URI** and the **Key** to submit requests. The following example sends a request to a Cohere embedding model:
-
-```HTTP/1.1
-POST /embeddings?api-version=2024-04-01-preview
-Authorization: Bearer <bearer-token>
-Content-Type: application/json
-```
-
-```JSON
-{
- "input": [
- "Explain the theory of strings"
- ],
- "input_type": "query",
- "encoding_format": "float",
- "dimensions": 1024
-}
-```
-
-__Response__
-
-```json
-{
- "id": "ab1c2d34-5678-9efg-hi01-0123456789ea",
- "object": "list",
- "data": [
- {
- "index": 0,
- "object": "embedding",
- "embedding": [
- 0.001912117,
- 0.048706055,
- -0.06359863,
- //...
- -0.00044369698
- ]
- }
- ],
- "model": "",
- "usage": {
- "prompt_tokens": 7,
- "completion_tokens": 0,
- "total_tokens": 7
- }
-}
-```
+* [Get info](reference-model-inference-info.md): Returns the information about the model deployed under the endpoint.
+* [Text embeddings](reference-model-inference-embeddings.md): Creates an embedding vector representing the input text.
+* [Text completions](reference-model-inference-completions.md): Creates a completion for the provided prompt and parameters.
+* [Chat completions](reference-model-inference-chat-completions.md): Creates a model response for the given chat conversation.
+* [Image embeddings](reference-model-inference-images-embeddings.md): Creates an embedding vector representing the input text and image.
+
machine-learning Reference Model Inference Chat Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-chat-completions.md
POST /chat/completions?api-version=2024-04-01-preview
| | | | | | | api-version | query | True | string | The version of the API in the format "YYYY-MM-DD" or "YYYY-MM-DD-preview". |
+## Request Header
++
+| Name | Required | Type | Description |
+| | | | |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. |
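+
+As a minimal sketch (assuming the `requests` package and the environment variables `AZUREAI_ENDPOINT_URL` and `AZUREAI_ENDPOINT_KEY`, with `my-deployment` as a hypothetical deployment name), the following Python snippet shows one way to send these headers with a chat completions request:
+
+```python
+import os
+import requests
+
+url = os.environ["AZUREAI_ENDPOINT_URL"] + "/chat/completions?api-version=2024-04-01-preview"
+
+headers = {
+    "Authorization": f"Bearer {os.environ['AZUREAI_ENDPOINT_KEY']}",
+    "Content-Type": "application/json",
+    # Pass any unknown parameters in the payload straight to the underlying model.
+    "extra-parameters": "pass-through",
+    # Only needed when the endpoint hosts multiple deployments.
+    "azureml-model-deployment": "my-deployment",
+}
+
+payload = {
+    "messages": [
+        {"role": "system", "content": "You are a helpful assistant."},
+        {"role": "user", "content": "How many languages are in the world?"},
+    ]
+}
+
+response = requests.post(url, headers=headers, json=payload)
+print(response.json())
+```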
+ ## Request Body | Name | Required | Type | Description |
POST /chat/completions?api-version=2024-04-01-preview
"stream": false, "temperature": 0, "top_p": 1,
- "response_format": "text"
+ "response_format": { "type": "text" }
} ```
Status code: 200
| [ChatCompletionFinishReason](#chatcompletionfinishreason) | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool. | | [ChatCompletionMessageToolCall](#chatcompletionmessagetoolcall) | | | [ChatCompletionObject](#chatcompletionobject) | The object type, which is always `chat.completion`. |
-| [ChatCompletionResponseFormat](#chatcompletionresponseformat) | |
+| [ChatCompletionResponseFormat](#chatcompletionresponseformat) | The response format for the model response. Setting to `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON. When using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length. |
+| [ChatCompletionResponseFormatType](#chatcompletionresponseformattype) | The response format type. |
| [ChatCompletionResponseMessage](#chatcompletionresponsemessage) | A chat completion message generated by the model. | | [ChatCompletionTool](#chatcompletiontool) | | | [ChatMessageRole](#chatmessagerole) | The role of the author of this message. |
Status code: 200
| [ContentFilterError](#contentfiltererror) | The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again. | | [CreateChatCompletionRequest](#createchatcompletionrequest) | | | [CreateChatCompletionResponse](#createchatcompletionresponse) | Represents a chat completion response returned by model, based on the provided input. |
-| [Detail](#detail) | |
+| [Detail](#detail) | Details for the [UnprocessableContentError](#unprocessablecontenterror) error. |
| [Function](#function) | The function that the model called. |
-| [FunctionObject](#functionobject) | |
+| [FunctionObject](#functionobject) | Definition of a function the model has access to. |
| [ImageDetail](#imagedetail) | Specifies the detail level of the image. |
-| [NotFoundError](#notfounderror) | |
+| [NotFoundError](#notfounderror) | The route is not valid for the deployed model. |
| [ToolType](#tooltype) | The type of the tool. Currently, only `function` is supported. |
-| [TooManyRequestsError](#toomanyrequestserror) | |
-| [UnauthorizedError](#unauthorizederror) | |
-| [UnprocessableContentError](#unprocessablecontenterror) | |
+| [TooManyRequestsError](#toomanyrequestserror) | You have hit your assigned rate limit and your requests need to be paced. |
+| [UnauthorizedError](#unauthorizederror) | Authentication is missing or invalid. |
+| [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content. The error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter. |
### ChatCompletionFinishReason
The object type, which is always `chat.completion`.
### ChatCompletionResponseFormat
+The response format for the model response. Setting to `json_object` enables JSON mode, which guarantees the message the model generates is valid JSON. When using JSON mode, you **must** also instruct the model to produce JSON yourself via a system or user message. Also note that the message content may be partially cut off if `finish_reason="length"`, which indicates the generation exceeded `max_tokens` or the conversation exceeded the max context length.
+
+| Name | Type | Description |
+| | | |
+| type | [ChatCompletionResponseFormatType](#chatcompletionresponseformattype) | The response format type. |
+
+### ChatCompletionResponseFormatType
+
+The response format type.
| Name | Type | Description | | | | |
A chat completion message generated by the model.
The role of the author of this message. - | Name | Type | Description | | | | | | assistant | string | |
The role of the author of this message.
A list of chat completion choices. Can be more than one if `n` is greater than 1. - | Name | Type | Description | | | | | | finish\_reason | [ChatCompletionFinishReason](#chatcompletionfinishreason) | The reason the model stopped generating tokens. This will be `stop` if the model hit a natural stop point or a provided stop sequence, `length` if the maximum number of tokens specified in the request was reached, `content_filter` if content was omitted due to a flag from our content filters, `tool_calls` if the model called a tool. |
The API call fails when the prompt triggers a content filter as configured. Modi
### CreateChatCompletionRequest - | Name | Type | Default Value | Description | | | | | | | frequency\_penalty | number | 0 | Helps prevent word repetitions by reducing the chance of a word being selected if it has already been used. The higher the frequency penalty, the less likely the model is to repeat the same words in its output. Return a 422 error if value or parameter is not supported by model. |
Specifies the detail level of the image.
Represents a chat completion response returned by model, based on the provided input. - | Name | Type | Description | | | | | | choices | [Choices](#choices)\[\] | A list of chat completion choices. Can be more than one if `n` is greater than 1. |
Represents a chat completion response returned by model, based on the provided i
### Detail
+Details for the [UnprocessableContentError](#unprocessablecontenterror) error.
| Name | Type | Description | | | | |
Represents a chat completion response returned by model, based on the provided i
The function that the model called. - | Name | Type | Description | | | | | | arguments | string | The arguments to call the function with, as generated by the model in JSON format. Note that the model does not always generate valid JSON, and may generate incorrect parameters not defined by your function schema. Validate the arguments in your code before calling your function. |
The function that the model called.
### FunctionObject
+Definition of a function the model has access to.
| Name | Type | Description | | | | |
The type of the tool. Currently, only `function` is supported.
### TooManyRequestsError + | Name | Type | Description | | | | | | error | string | The error description. |
The type of the tool. Currently, only `function` is supported.
### UnprocessableContentError
+The request contains unprocessable content. The error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter.
| Name | Type | Description | | | | |
The type of the tool. Currently, only `function` is supported.
| detail | [Detail](#detail) | | | error | string | The error description. | | message | string | The error message. |
-| status | integer | The HTTP status code. |
+| status | integer | The HTTP status code. |
machine-learning Reference Model Inference Completions https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-completions.md
POST /completions?api-version=2024-04-01-preview
| | | | | | | api-version | query | True | string | The version of the API in the format "YYYY-MM-DD" or "YYYY-MM-DD-preview". |
+## Request Header
++
+| Name | Required | Type | Description |
+| | | | |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. |
+ ## Request Body
The object type, which is always "list".
| detail | [Detail](#detail) | | | error | string | The error description. | | message | string | The error message. |
-| status | integer | The HTTP status code. |
+| status | integer | The HTTP status code. |
machine-learning Reference Model Inference Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-embeddings.md
POST /embeddings?api-version=2024-04-01-preview
| - | -- | -- | | -- | | `api-version` | query | True | string | The version of the API in the format "YYYY-MM-DD" or "YYYY-MM-DD-preview". |
+## Request Header
++
+| Name | Required | Type | Description |
+| | | | |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. |
+ ## Request Body | Name | Required | Type | Description |
Status code: 200
| Name | Description | | - | -- | | [ContentFilterError](#contentfiltererror) | The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again. |
-| [CreateEmbeddingRequest](#createembeddingrequest) | Request for creating embeddings |
-| [CreateEmbeddingResponse](#createembeddingresponse) | Response from an embeddings request |
-| [Detail](#detail) | Details of the errors |
+| [CreateEmbeddingRequest](#createembeddingrequest) | Request for creating embeddings. |
+| [CreateEmbeddingResponse](#createembeddingresponse) | Response from an embeddings request. |
+| [Detail](#detail) | Details of the errors. |
| [Embedding](#embedding) | Represents the embedding object generated. | | [EmbeddingEncodingFormat](#embeddingencodingformat) | The format to return the embeddings in. Either base64, float, int8, uint8, binary, or ubinary. Returns a 422 error if the model doesn't support the value or parameter. | | [EmbeddingInputType](#embeddinginputtype) | The type of the input. Either `text`, `query`, or `document`. Returns a 422 error if the model doesn't support the value or parameter. | | [EmbeddingObject](#embeddingobject) | The object type, which is always "embedding". | | [ListObject](#listobject) | The object type, which is always "list". |
-| [NotFoundError](#notfounderror) | |
-| [TooManyRequestsError](#toomanyrequestserror) | |
-| [UnauthorizedError](#unauthorizederror) | |
-| [UnprocessableContentError](#unprocessablecontenterror) | |
+| [NotFoundError](#notfounderror) | The route is not valid for the deployed model. |
+| [TooManyRequestsError](#toomanyrequestserror) | You have hit your assigned rate limit and your requests need to be paced. |
+| [UnauthorizedError](#unauthorizederror) | Authentication is missing or invalid. |
+| [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content. The error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter. |
| [Usage](#usage) | The usage information for the request. | ### ContentFilterError
+The API call fails when the prompt triggers a content filter as configured. Modify the prompt and try again.
+ | Name | Type | Description | | | | | | code | string | The error code. |
Status code: 200
### CreateEmbeddingRequest
+Request for creating embeddings.
+ | Name | Required | Type | Description | | | -- | | -- | | input | True | string[] | Input text to embed, encoded as a string or array of tokens. To embed multiple inputs in a single request, pass an array of strings or array of token arrays. |
Status code: 200
### CreateEmbeddingResponse
+Response from an embeddings request.
+ | Name | Type | Description | | | | | | data | [Embedding](#embedding)\[\] | The list of embeddings generated by the model. |
Status code: 200
### Detail
+Details for the [UnprocessableContentError](#unprocessablecontenterror) error.
| Name | Type | Description | | | | |
The object type, which is always "list".
### UnprocessableContentError
+The request contains unprocessable content. The error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter.
| Name | Type | Description | | | | |
The usage information for the request.
| Name | Type | Description | | -- | - | -- | | prompt\_tokens | integer | The number of tokens used by the prompt. |
-| total\_tokens | integer | The total number of tokens used by the request. |
+| total\_tokens | integer | The total number of tokens used by the request. |
machine-learning Reference Model Inference Images Embeddings https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/reference-model-inference-images-embeddings.md
POST /images/embeddings?api-version=2024-04-01-preview
| | | | | | | api-version | query | True | string | The version of the API in the format "YYYY-MM-DD" or "YYYY-MM-DD-preview". |
+## Request Header
++
+| Name | Required | Type | Description |
+| | | | |
+| extra-parameters | | string | The behavior of the API when extra parameters are indicated in the payload. Using `pass-through` makes the API pass the parameter to the underlying model. Use this value when you want to pass parameters that you know the underlying model can support. Using `ignore` makes the API drop any unsupported parameter. Use this value when you need to use the same payload across different models, but one of the extra parameters might make a model error out if it isn't supported. Using `error` makes the API reject any extra parameter in the payload. Only parameters specified in this API can be indicated, or a 400 error is returned. |
+| azureml-model-deployment | | string | Name of the deployment you want to route the request to. Supported for endpoints that support multiple deployments. |
+ ## Request Body
POST /images/embeddings?api-version=2024-04-01-preview
| 200 OK | [CreateEmbeddingResponse](#createembeddingresponse) | OK | | 401 Unauthorized | [UnauthorizedError](#unauthorizederror) | Access token is missing or invalid<br><br>Headers<br><br>x-ms-error-code: string | | 404 Not Found | [NotFoundError](#notfounderror) | Modality not supported by the model. Check the documentation of the model to see which routes are available.<br><br>Headers<br><br>x-ms-error-code: string |
-| 422 Unprocessable Entity | [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content<br><br>Headers<br><br>x-ms-error-code: string |
+| 422 Unprocessable Entity | [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content. The error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter.<br><br>Headers<br><br>x-ms-error-code: string |
| 429 Too Many Requests | [TooManyRequestsError](#toomanyrequestserror) | You have hit your assigned rate limit and your requests need to be paced.<br><br>Headers<br><br>x-ms-error-code: string | | Other Status Codes | [ContentFilterError](#contentfiltererror) | Bad request<br><br>Headers<br><br>x-ms-error-code: string |
Status code: 200
| [NotFoundError](#notfounderror) | | | [TooManyRequestsError](#toomanyrequestserror) | | | [UnauthorizedError](#unauthorizederror) | |
-| [UnprocessableContentError](#unprocessablecontenterror) | |
+| [UnprocessableContentError](#unprocessablecontenterror) | The request contains unprocessable content. The error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter. |
| [Usage](#usage) | The usage information for the request. | ### ContentFilterError
The object type, which is always "list".
### UnprocessableContentError
+The request contains unprocessable content. The error is returned when the payload is valid according to this specification, but some of the instructions in the payload aren't supported by the underlying model. Use the `details` section to understand the offending parameter.
| Name | Type | Description | | | | |
The usage information for the request.
| prompt_patches | integer | The number of image patches used by the image prompt. | | prompt_tokens | integer | The number of tokens used by the prompt. | | total_patches | integer | The total number of patches used by the request. |
-| total_tokens | integer | The total number of tokens used by the request. |
+| total_tokens | integer | The total number of tokens used by the request. |
mysql Concepts Version Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/concepts-version-policy.md
__Azure MySQL 5.7 Deprecation Timelines__
|Timelines| Azure MySQL 5.7 Flexible end at |Azure MySQL 5.7 Single end at| |||| |Creation of new servers using the Azure portal.| To Be Decided| Already ended as part of [Single Server deprecation](single-server/whats-happening-to-mysql-single-server.md)|
-|Creation of new servers using the Command Line Interface (CLI). | To Be Decided| September 2024|
+|Creation of new servers using the Command Line Interface (CLI). | To Be Decided| March 19, 2024|
|Creation of replica servers for existing servers. | September 2025| September 2024| |Creation of servers using restore workflow for the existing servers| September 2025|September 2024| |Creation of new servers for migrating from Azure Database for MySQL - Single Server to Azure Database for MySQL - Flexible Server.| NA| September 2024|
mysql Concepts Service Tiers Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-service-tiers-storage.md
Azure Database for MySQL flexible server offers pre-provisioned IOPS, allowing y
## Autoscale IOPS
-The cornerstone of Azure Database for MySQL flexible server is its ability to achieve the best performance for tier 1 workloads. This can be improved by enabling the server to automatically scale its database servers' performance (IO) seamlessly depending on the workload needs. This opt-in feature enables users to scale IOPS on demand without having to pre-provision a certain amount of IO per second. With the Autoscale IOPS featured enabled, you can now enjoy worry-free IO management in Azure Database for MySQL flexible server because the server scales IOPs up or down automatically depending on workload needs.
+The cornerstone of Azure Database for MySQL flexible server is its ability to achieve the best performance for tier 1 workloads. This can be improved by enabling the server to automatically scale its database servers' performance (IO) seamlessly depending on the workload needs. This opt-in feature enables users to scale IOPS on demand without having to pre-provision a certain amount of IO per second. With the Autoscale IOPS feature enabled, you can now enjoy worry-free IO management in Azure Database for MySQL flexible server because the server scales IOPS up or down automatically depending on workload needs. Autoscale IOPS automatically scales up to the 'Max Supported IOPS' for each service tier and compute size, as specified in the [service tiers documentation](#service-tiers-size-and-server-types). This ensures optimal performance without the need for manual scaling efforts.
With Autoscale IOPS, you pay only for the IO the server uses and no longer need to provision and pay for resources they aren't fully using, saving time and money. In addition, mission-critical Tier-1 applications can achieve consistent performance by making additional IO available to the workload anytime. Autoscale IOPS eliminates the administration required to provide the best performance at the least cost for Azure Database for MySQL flexible server customers.
mysql Concepts Storage Iops https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/concepts-storage-iops.md
Moreover, Additional IOPS with pre-provisioned refers to the flexibility of incr
## Autoscale IOPS
-Autoscale IOPS offer the flexibility to scale IOPS on demand, eliminating the need to pre-provision a specific amount of IO per second. By enabling Autoscale IOPS, your server will automatically adjust IOPS based on workload requirements. With the Autoscale IOPS featured enable, you can now enjoy worry free IO management in Azure Database for MySQL flexible server because the server scales IOPs up or down automatically depending on workload needs.
+Autoscale IOPS offer the flexibility to scale IOPS on demand, eliminating the need to pre-provision a specific amount of IO per second. By enabling Autoscale IOPS, your server will automatically adjust IOPS based on workload requirements. With the Autoscale IOPS feature enabled, you can now enjoy worry-free IO management in Azure Database for MySQL flexible server because the server scales IOPS up or down automatically depending on workload needs. For detailed information on the 'Max Supported IOPS' for each service tier and compute size, refer to the [service tiers documentation](./concepts-service-tiers-storage.md#service-tiers-size-and-server-types). Autoscale IOPS scales up to these limits to optimize your workload performance.
**Dynamic Scaling**: Autoscale IOPS dynamically adjust the IOPS limit of your database server based on the actual demand of your workload. This ensures optimal performance without manual intervention or configuration.
mysql Migrate Single Flexible In Place Auto Migration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/migrate/migrate-single-flexible-in-place-auto-migration.md
The compute tier and SKU for the target flexible server is provisioned based on
| | | :: | :: | | Basic | 1 | Burstable | Standard_B1s | | Basic | 2 | Burstable | Standard_B2s |
+| General Purpose | 2 | GeneralPurpose | Standard_D2ds_v4 |
| General Purpose | 4 | GeneralPurpose | Standard_D4ds_v4 | | General Purpose | 8 | GeneralPurpose | Standard_D8ds_v4 | | General Purpose | 16 | GeneralPurpose | Standard_D16ds_v4 |
Here's the info you need to know post in-place migration:
> Post-migration, do not restart the stopped Single Server instance as it might hamper your client's and application connectivity. - Copy the following properties from the source Single Server to target Flexible Server post in-place migration operation is completed successfully:
- - Monitoring page settings (Alerts, Metrics, and Diagnostic settings)
+ - Monitoring page settings (Alerts, Metrics, and Diagnostic settings) and Locks settings
- Any Terraform/CLI scripts you host to manage your Single Server instance should be updated with Flexible Server references. - For Single Server instance with Query store enabled, the server parameter 'slow_query_log' on target instance is set to ON to ensure feature parity when migrating to Flexible Server. Note, for certain workloads this could affect performance and if you observe any performance degradation, set this server parameter to 'OFF' on the Flexible Server instance. - For Single Server instance with Microsoft Defender for Cloud enabled, the enablement state is migrated. To achieve parity in Flexible Server post automigration for properties you can configure in Single Server, consider the details in the following table:
operational-excellence Overview Relocation https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/overview-relocation.md
The following tables provide links to each Azure service relocation document. Th
[Azure Batch](../batch/account-move.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ | [Azure Cache for Redis](../azure-cache-for-redis/cache-moving-resources.md?toc=/azure/operational-excellence/toc.json)| ✅ | ❌| ❌ | [Azure Container Registry](../container-registry/manual-regional-move.md)|✅ | ✅| ❌ |
-[Azure Cosmos DB](../cosmos-db/how-to-move-regions.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ |
+[Azure Cosmos DB](relocation-cosmos-db.md)|✅ | ✅| ❌ |
[Azure Database for MariaDB Server](../mariadb/howto-move-regions-portal.md?toc=/azure/operational-excellence/toc.json)|✅ | ✅| ❌ | [Azure Database for MySQL Server](../mysql/howto-move-regions-portal.md?toc=/azure/operational-excellence/toc.json)✅ | ✅| ❌ | [Azure Database for PostgreSQL](./relocation-postgresql-flexible-server.md)| ✅ | ✅| ❌ |
operational-excellence Relocation Cosmos Db https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/operational-excellence/relocation-cosmos-db.md
+
+ Title: Relocate an Azure Cosmos DB NoSQL account to another region
+description: Learn how to relocate an Azure Cosmos DB NoSQL account to another region.
+++++
+ - subject-relocation
Last updated : 06/11/2024++++
+# Relocate an Azure Cosmos DB NoSQL account to another region
++
+This article describes how to either:
+
+- Relocate a region where data is replicated in Azure Cosmos DB.
+- Migrate account (Azure Resource Manager) metadata and data from one region to another.
++
+## Prerequisites
+
+- An app registration must be created with delegated permission to the source and target resource group instance and "API permission" for "User.ReadBasic.All".
+
+- The selected Azure Cosmos DB API must remain the same from source to target. This article uses the API for NoSQL.
+
+- Account names must be limited to 44 characters, all lowercase.
+
+- When you add or remove locations for an Azure Cosmos DB account, you can't simultaneously modify other properties.
+
+- Identify all Cosmos DB dependent resources.
++
+## Downtime
+
+## Considerations for Service Endpoints
+
+The virtual network service endpoints for Azure Cosmos DB restrict access to a specified virtual network. The endpoints can also restrict access to a list of IPv4 (internet protocol version 4) address ranges. Any user connecting to the Azure Cosmos DB account from outside those sources is denied access. If service endpoints were configured in the source region for the Azure Cosmos DB account, the same needs to be done in the target region.
+
+For a successful re-creation of the Azure Cosmos DB account in the target region, the virtual network and subnet must be created beforehand. If the move of these two resources is carried out with the Azure Resource Mover tool, the service endpoints won't be configured automatically, so they need to be configured manually through the [Azure portal](/azure/key-vault/general/quick-create-portal), the [Azure CLI](/azure/key-vault/general/quick-create-cli), or [Azure PowerShell](/azure/key-vault/general/quick-create-powershell).
+++
+## Redeploy without data
+
+For cases where the Azure Cosmos DB instance needs to be relocated alone, without the configuration and customer data, the instance itself can be created by using the [Microsoft.DocumentDB databaseAccounts](/azure/templates/microsoft.documentdb/2021-04-15/databaseaccounts?tabs=json&pivots=deployment-language-arm-template) template.
+
+## Redeploy with data
+
+Azure Cosmos DB supports data replication natively, so moving data from one region to another is simple. You can accomplish it by using the Azure portal, Azure PowerShell, or the Azure CLI. It involves the following steps:
+
+1. Add a new region to the account.
+
+ To add a new region to an Azure Cosmos DB account, see [Add/remove regions to an Azure Cosmos DB account](../cosmos-db/how-to-manage-database-account.yml#add-remove-regions-from-your-database-account).
+
+1. Perform a manual failover to the new region.
+
+ When the region that's being removed is currently the write region for the account, you'll need to start a failover to the new region added in the previous step. This is a zero-downtime operation. If you're moving a read region in a multiple-region account, you can skip this step.
+
+ To start a failover, see [Perform manual failover on an Azure Cosmos DB account](../cosmos-db/how-to-manage-database-account.yml#perform-manual-failover-on-an-azure-cosmos-db-account).
+
+1. Remove the original region.
+
+ To remove a region from an Azure Cosmos DB account, see [Add/remove regions from your Azure Cosmos DB account](../cosmos-db/how-to-manage-database-account.yml#add-remove-regions-from-your-database-account).
+
+> [!NOTE]
+> If you perform a failover operation or add/remove a new region while an [asynchronous throughput scaling operation](../cosmos-db/scaling-provisioned-throughput-best-practices.md#background-on-scaling-rus) is in progress, the throughput scale-up operation will be paused. It will resume automatically when the failover or add/remove region operation is complete.
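+
+If you prefer to script the add-region and failover steps rather than use the Azure portal, the following hedged sketch shows equivalent Azure Resource Manager REST calls. The account name, region names, and other values in braces are placeholders for this example.
+
+```http
+### Step 1 - Add the target region to the account
+PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.DocumentDB/databaseAccounts/{account-name}?api-version=2021-04-15
+Authorization: Bearer {access-token}
+Content-Type: application/json
+
+{
+  "properties": {
+    "locations": [
+      { "locationName": "{current-region}", "failoverPriority": 0 },
+      { "locationName": "{target-region}", "failoverPriority": 1 }
+    ]
+  }
+}
+
+### Step 2 - Make the target region the write region (manual failover)
+POST https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.DocumentDB/databaseAccounts/{account-name}/failoverPriorityChange?api-version=2021-04-15
+Authorization: Bearer {access-token}
+Content-Type: application/json
+
+{
+  "failoverPolicies": [
+    { "locationName": "{target-region}", "failoverPriority": 0 },
+    { "locationName": "{current-region}", "failoverPriority": 1 }
+  ]
+}
+```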
+
+## Redeploy Azure Cosmos DB account metadata
+
+Azure Cosmos DB does not natively support migrating account metadata from one region to another. To migrate both the account metadata and customer data from one region to another, you must create a new account in the desired region and then copy the data manually.
+
+> [!IMPORTANT]
+> It is not necessary to migrate the account metadata if the data is stored or moved to a different region. The region in which the account metadata resides has no impact on the performance, security or any other operational aspects of your Azure Cosmos DB account.
+
+A near-zero-downtime migration for the API for NoSQL requires the use of the [change feed](../cosmos-db/change-feed.md) or a tool that uses it.
+
+The following steps demonstrate how to migrate an Azure Cosmos DB account for the API for NoSQL and its data from one region to another:
+
+1. Create a new Azure Cosmos DB account in the desired region.
+
+ To create a new account via the Azure portal, PowerShell, or the Azure CLI, see [Create an Azure Cosmos DB account](../cosmos-db/how-to-manage-database-account.yml#create-an-account).
+
+1. Create a new database and container.
+
+ To create a new database and container, see [Create an Azure Cosmos DB container](../cosmos-db/nosql/how-to-create-container.md).
+
+1. Migrate data by using the Azure Cosmos DB Spark Connector live migration sample.
+
+ To migrate data with near zero downtime, see [Live Migrate Azure Cosmos DB SQL API Containers data with Spark Connector](https://github.com/Azure/azure-sdk-for-java/tree/main/sdk/cosmos/azure-cosmos-spark_3_2-12/Samples/DatabricksLiveContainerMigration).
+
+1. Update the application connection string.
+
+ With the Live Data Migration sample still running, update the connection information in the new deployment of your application. You can retrieve the endpoints and keys for your application from the Azure portal.
+
+ :::image type="content" source="../cosmos-db/media/secure-access-to-data/nosql-database-security-master-key-portal.png" alt-text="Access control in the Azure portal, demonstrating NoSQL database security.":::
+
+1. Redirect requests to the new application.
+
+ After the new application is connected to Azure Cosmos DB, you can redirect client requests to your new deployment.
+
+1. Delete any resources that you no longer need.
+
+ With requests now fully redirected to the new instance, you can delete the old Azure Cosmos DB account and stop the Live Data Migrator sample.
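+
+For step 2 of this procedure, the database and container on the new account can also be created through the Azure Resource Manager REST API. The following sketch uses placeholder names in braces and an illustrative partition key path and throughput; match them to your source containers.
+
+```http
+### Create a database on the new account
+PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{target-resource-group}/providers/Microsoft.DocumentDB/databaseAccounts/{new-account-name}/sqlDatabases/{database-name}?api-version=2021-04-15
+Authorization: Bearer {access-token}
+Content-Type: application/json
+
+{
+  "properties": {
+    "resource": { "id": "{database-name}" },
+    "options": {}
+  }
+}
+
+### Create a container that mirrors the source container's partition key
+PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{target-resource-group}/providers/Microsoft.DocumentDB/databaseAccounts/{new-account-name}/sqlDatabases/{database-name}/containers/{container-name}?api-version=2021-04-15
+Authorization: Bearer {access-token}
+Content-Type: application/json
+
+{
+  "properties": {
+    "resource": {
+      "id": "{container-name}",
+      "partitionKey": { "paths": [ "/{partition-key-path}" ], "kind": "Hash" }
+    },
+    "options": { "throughput": 400 }
+  }
+}
+```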
+
+## Next steps
+
+For more information and examples on how to manage the Azure Cosmos DB account as well as databases and containers, read the following articles:
+
+* [Manage an Azure Cosmos DB account](../cosmos-db/how-to-manage-database-account.yml)
+* [Change feed in Azure Cosmos DB](../cosmos-db/change-feed.md)
partner-solutions Dynatrace Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-get-support.md
+
+ Title: Get support for Azure Native Dynatrace Service
+description: This article shows you how to get support when using Azure Native Dynatrace Service on the Azure Cloud.
+ Last updated : 07/03/2024+
+#customer intent: As an implementer of Dynatrace on the Azure Cloud, I want to file a support request so that I can get unblocked.
++
+# Get support for Azure Native Dynatrace Service
+
+In this article, you learn how to contact support when working with an Azure Native Dynatrace Service resource. Before contacting support, see [Fix common errors](dynatrace-troubleshoot.md).
+
+## How to contact support
+
+1. In the Azure portal, go to your Dynatrace resource.
+
+1. In the resource menu, under **Support + troubleshooting**, select **New Support Request**.
+
+1. Select the link to go to the [Dynatrace support website](https://support.dynatrace.com/) and raise a request.
++
+## Related content
+
+- Learn about [managing your instance](dynatrace-how-to-manage.md) of Dynatrace.
+- Get started with Azure Native Dynatrace Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/Dynatrace.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/dynatrace.dynatrace_portal_integration?tab=Overview)
partner-solutions Dynatrace Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/dynatrace/dynatrace-troubleshoot.md
Title: Troubleshooting Azure Native Dynatrace Service
description: This article provides information about troubleshooting Dynatrace for Azure Previously updated : 02/02/2023 Last updated : 06/20/2024 # Troubleshoot Azure Native Dynatrace Service
-In this article, you learn how to contact support when working with an Azure Native Dynatrace Service resource. Before contacting support, see [Fix common errors](#fix-common-errors).
-
-## Contact support
-
-To contact support about the Azure Native Dynatrace Service, select **New Support request** in the left pane. Select the link to the Dynatrace support website.
--
-## Fix common errors
- This document contains information about troubleshooting your solutions that use Dynatrace.
-### Marketplace purchase errors
+## Marketplace purchase errors
[!INCLUDE [marketplace-purchase-errors](../includes/marketplace-purchase-errors.md)]
-
+ If those options don't solve the problem, contact [Dynatrace support](https://support.dynatrace.com/).
-### Unable to create Dynatrace resource
+## Unable to create Dynatrace resource
- To set up the Azure Native Dynatrace Service, you must have **Owner** or **Contributor** access on the Azure subscription. Ensure you have the appropriate access before starting the setup. - Create fails because Last Name is empty. The issue happens when the user info in Microsoft Entra ID is incomplete and doesn't contain Last Name. Contact your Azure tenant's global administrator to rectify the issue and try again.
-### Logs not being emitted or limit reached issue
+## Logs not being emitted or limit reached issue
- Resource doesn't support sending logs. Only resource types with monitoring log categories can be configured to send logs. For more information, see [supported categories](../../azure-monitor/essentials/resource-logs-categories.md).
If those options don't solve the problem, contact [Dynatrace support](https://s
- Export of Metrics data isn't supported currently by the partner solutions under Azure Monitor diagnostic settings.
-### Single sign-on errors
+## Single sign-on errors
- **Single sign-on configuration indicates lack of permissions** - Occurs when the user that is trying to configure single sign-on doesn't have Manage users permissions for the Dynatrace account. For a description of how to configure this permission, see [here](https://www.dynatrace.com/support/help/shortlink/azure-native-integration#setup).
If those options don't solve the problem, contact [Dynatrace support](https://s
- **App not showing in Single sign-on settings page** - First, search for application ID. If no result is shown, check the SAML settings of the app. The grid only shows apps with correct SAML settings.
-### Metrics checkbox disabled
+## Metrics checkbox disabled
- To collect metrics, you must have owner permission on the subscription. If you're a contributor, refer to the contributor guide mentioned in [Configure metrics and logs](dynatrace-create.md#configure-metrics-and-logs).
-### Diagnostic settings are active even after disabling the Dynatrace resource or applying necessary tag rules
+## Diagnostic settings are active even after disabling the Dynatrace resource or applying necessary tag rules
If logs are being emitted and diagnostic settings remain active on monitored resources even after the Dynatrace resource is disabled or tag rules have been modified to exclude certain resources, it's likely that there's a delete lock applied to the resource(s) or the resource group containing the resource. This lock prevents the cleanup of the diagnostic settings, and hence, logs continue to be forwarded for those resources. To resolve this, remove the delete lock from the resource or the resource group. If the lock is removed after the Dynatrace resource is deleted, the diagnostic settings have to be cleaned up manually to stop log forwarding.
-### Free trial errors
+## Free trial errors
- **Unable to create another free trial resource on Azure** - During free trials, Dynatrace accounts can only have one environment. You can therefore create only one Dynatrace resource during the trial period.
partner-solutions New Relic Get Support https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-get-support.md
+
+ Title: Get support for Azure Native New Relic Service
+description: This article shows you how to get support when using Azure Native New Relic Service with the Azure Cloud.
++ Last updated : 07/03/2024
+#customer intent: As an implementer of New Relic on the Azure Cloud, I want to file a support request so that I can get unblocked.
++
+# Get support for Azure Native New Relic Service
+
+In this article, you learn how to contact support when working with an Azure Native New Relic Service resource. Before contacting support, see [Fix common errors](new-relic-troubleshoot.md).
+
+## How to contact support
+
+1. In the Azure portal, go to the resource.
+1. On the left pane, under **Support + troubleshooting**, select **New Support Request**.
+1. Select the link to go to the [New Relic support website](https://support.newrelic.com/) and raise a request.
++
+## Related content
+
+- [Manage Azure Native New Relic Service](new-relic-how-to-manage.md)
+- Get started with Azure Native New Relic Service on
+
+ > [!div class="nextstepaction"]
+ > [Azure portal](https://portal.azure.com/#view/HubsExtension/BrowseResource/resourceType/NewRelic.Observability%2Fmonitors)
+
+ > [!div class="nextstepaction"]
+ > [Azure Marketplace](https://azuremarketplace.microsoft.com/marketplace/apps/newrelicinc1635200720692.newrelic_liftr_payg?tab=Overview)
partner-solutions New Relic Troubleshoot https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/partner-solutions/new-relic/new-relic-troubleshoot.md
Last updated 01/16/2023
This article describes how to fix common problems when you're working with Azure Native New Relic Service resources.
-Try the troubleshooting information in this article first. If that doesn't work, contact New Relic support:
-
-1. In the Azure portal, go to the resource.
-1. On the left pane, under **Support + troubleshooting**, select **New Support Request**.
-1. Select the link to go to the [New Relic support website](https://support.newrelic.com/) and raise a request.
--
-## Fix common errors
-
-### Marketplace purchase errors
+## Marketplace purchase errors
[!INCLUDE [marketplace-purchase-errors](../includes/marketplace-purchase-errors.md)]
-### You can't create a New Relic resource
+## You can't create a New Relic resource
To set up Azure Native New Relic Service, you must have owner access on the Azure subscription. Ensure that you have the appropriate access before you start the setup. To find the New Relic offering on Azure and set up the service, you must first register the `NewRelic.Observability` resource provider in your Azure subscription. To register the resource provider by using the Azure portal, follow the guidance in [Azure resource providers and types](../../azure-resource-manager/management/resource-providers-and-types.md). To register the resource provider from a command line, enter `az provider register --namespace NewRelic.Observability --subscription <subscription-id>`.
-### Logs aren't being sent to New Relic
+## Logs aren't being sent to New Relic
Only resource types in [supported categories](../../azure-monitor/essentials/resource-logs-categories.md) send logs to New Relic through the integration. To check whether the resource is set up to send logs to New Relic, go to the [Azure diagnostic settings](/azure/azure-monitor/platform/diagnostic-settings) for that resource. Then, verify that there's a New Relic diagnostic setting.
-### You can't install or uninstall an extension on a virtual machine
+## You can't install or uninstall an extension on a virtual machine
Only virtual machines without the New Relic agent installed should be selected together to install the extension. Deselect any virtual machines that already have the New Relic agent installed, so that **Install Extension** is active. The **Agent Status** column shows the status **Running** or **Shutdown** for any virtual machines that already have the New Relic agent installed. Only virtual machines that currently have the New Relic agent installed should be selected together to uninstall the extension. Deselect any virtual machines that don't already have the New Relic agent installed, so that **Uninstall Extension** is active. The **Agent Status** column shows the status **Not Installed** for any virtual machines that don't already have the New Relic agent installed.
-### Resource monitoring stopped working
+## Resource monitoring stopped working
Resource monitoring in New Relic is enabled through the *ingest API key*, which you set up at the time of resource creation. Revoking the ingest API key from the New Relic portal disrupts monitoring of logs and metrics for all resources, including virtual machines and app services. You *shouldn't* revoke the ingest API key. If the API key is already revoked, contact New Relic support.
If your Azure subscription is suspended or deleted because of payment-related is
New Relic manages the APIs for creating and managing resources, and for the storage and processing of customer telemetry data. The New Relic APIs might be on or outside Azure. If your Azure subscription and resource are working correctly but the New Relic portal shows problems with monitoring data, contact New Relic support.
-### Diagnostic settings are active even after disabling the New Relic resource or applying necessary tag rules
+## Diagnostic settings are active even after disabling the New Relic resource or applying necessary tag rules
If logs are being emitted and diagnostic settings remain active on monitored resources even after the New Relic resource is disabled or tag rules have been modified to exclude certain resources, it's likely that there's a delete lock applied to the resource(s) or the resource group containing the resource. This lock prevents the cleanup of the diagnostic settings, and hence, logs continue to be forwarded for those resources. To resolve this, remove the delete lock from the resource or the resource group. If the lock is removed after the New Relic resource is deleted, the diagnostic settings have to be cleaned up manually to stop log forwarding.
sap Dbms Guide Ibm https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sap/workloads/dbms-guide-ibm.md
Db2 high availability disaster recovery (HADR) with pacemaker is supported. Both
#### Windows Cluster Server
-Microsoft Cluster Server (MSCS) isn't supported.
+Windows Server Failover Cluster (WSFC), also known as Microsoft Cluster Server (MSCS), isn't supported.
Db2 high availability disaster recovery (HADR) is supported. If the virtual machines of the HA configuration have working name resolution, the setup in Azure doesn't differ from any setup that is done on-premises. It isn't recommended to rely on IP resolution only.
search Resource Tools https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/resource-tools.md
Previously updated : 05/22/2024 Last updated : 07/02/2024 # Productivity tools - Azure AI Search
Productivity tools are built by engineers at Microsoft, but aren't part of the A
| Tool name | Description | Source code | |--| |-|
-| [Back up and Restore](https://github.com/liamc) | Download the retrievable fields of an index to your local device and then upload the index and its content to a new search service. | [https://github.com/liamca/azure-search-backup-restore](https://github.com/liamca/azure-search-backup-restore) |
+| [Back up and Restore](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/index-backup-restore) | Download the retrievable fields of an index to your local device and then upload the index and its content to a new search service. | [https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/index-backup-restore](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/index-backup-restore) |
| [Chat with your data solution accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator/blob/main/README.md) | Code and docs to create interactive search solution in production environments. | [https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator](https://github.com/Azure-Samples/chat-with-your-data-solution-accelerator) | | [Knowledge Mining Accelerator](https://github.com/Azure-Samples/azure-search-knowledge-mining/blob/main/README.md) | Code and docs to jump start a knowledge store using your data. | [https://github.com/Azure-Samples/azure-search-knowledge-mining](https://github.com/Azure-Samples/azure-search-knowledge-mining) | | [Performance testing solution](https://github.com/Azure-Samples/azure-search-performance-testing/blob/main/README.md) | This solution helps you load test Azure AI Search. It uses Apache JMeter as an open source load and performance testing tool and Terraform to dynamically provision and destroy the required infrastructure on Azure. | [https://github.com/Azure-Samples/azure-search-performance-testing](https://github.com/Azure-Samples/azure-search-performance-testing) |
search Search How To Create Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-create-search-index.md
Title: Create a search index
+ Title: Create an index
description: Create a search index using the Azure portal, REST APIs, or an Azure SDK.
-
- - ignite-2023
Previously updated : 09/25/2023 Last updated : 07/01/2024 # Create an index in Azure AI Search
-In Azure AI Search, query requests target the searchable text in a [**search index**](search-what-is-an-index.md).
-
-In this article, learn the steps for defining and publishing a search index. Creating an index establishes the physical data structures on your search service. Once the index definition exists, [**loading the index**](search-what-is-data-import.md) follows as a separate task.
+In this article, learn the steps for defining a schema for a [**search index**](search-what-is-an-index.md) and pushing it to a search service. Creating an index establishes the physical data structures on your search service. Once the index exists, [**load the index**](search-what-is-data-import.md) as a separate task.
## Prerequisites
-+ Write permissions. Permission can be granted through an [admin API key](search-security-api-keys.md) on the request. Alternatively, if you're using [role-based access control](search-security-rbac.md), send a request as a member of the Search Contributor role.
++ Write permissions as a [**Search Service Contributor**](search-security-rbac.md) or an [admin API key](search-security-api-keys.md) for key-based authentication.
-+ An understanding of the data you want to index. Creating an index is a schema definition exercise, so you should have a clear idea of which source fields you want to make searchable, retrievable, filterable, facetable, and sortable (see the [schema checklist](#schema-checklist) for guidance).
++ An understanding of the data you want to index. A search index is based on external content that you want to make searchable. Searchable content is stored as fields in an index. You should have a clear idea of which source fields you want to make searchable, retrievable, filterable, facetable, and sortable (see the [schema checklist](#schema-checklist) for guidance).
- You must also have a unique field in source data that can be used as the [document key (or ID)](#document-keys) in the index.
++ You must also have a unique field in source data that can be used as the [document key (or ID)](#document-keys) in the index.
-+ A stable index location. Moving an existing index to a different search service isn't supported out-of-the-box. Revisit application requirements and make sure that your existing search service, its capacity and location, are sufficient for your needs.
++ A stable index location. Moving an existing index to a different search service isn't supported out-of-the-box. Revisit application requirements and make sure that your existing search service (capacity and location) is sufficient for your needs.
-+ Finally, all service tiers have [index limits](search-limits-quotas-capacity.md#index-limits) on the number of objects that you can create. For example, if you're experimenting on the Free tier, you can only have three indexes at any given time. Within the index itself, there are limits on the number of complex fields and collections.
++ Finally, all service tiers have [index limits](search-limits-quotas-capacity.md#index-limits) on the number of objects that you can create. For example, if you're experimenting on the Free tier, you can only have three indexes at any given time. Within the index itself, there are [limits on vectors](search-limits-quotas-capacity.md#vector-index-size-limits) and [index limits](search-limits-quotas-capacity.md#index-limits) on the number of simple and complex fields. ## Document keys
-A search index has one required field: a document key. A document key is the unique identifier of a search document. In Azure AI Search, it must be a string, and it must originate from unique values in the data source that's providing the content to be indexed. A search service doesn't generate key values, but in some scenarios (such as the [Azure table indexer](search-howto-indexing-azure-tables.md)) it synthesizes existing values to create a unique key for the documents being indexed.
+A search index has two requirements: it must have a name and a document key.
+
+A document key is the unique identifier of a search document, and a search document is a collection of fields that completely describes something. For example, if you're indexing a [movies data set](https://www.kaggle.com/datasets/harshitshankhdhar/imdb-dataset-of-top-1000-movies-and-tv-shows), a search document contains the title, genre, and duration of a single movie.
+
+In Azure AI Search, a document key must be a string, and it must originate from unique values in the data source that's providing the content to be indexed. A search service doesn't generate key values, but in some scenarios (such as the [Azure table indexer](search-howto-indexing-azure-tables.md)) it synthesizes existing values to create a unique key for the documents being indexed.
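+
+As a hedged illustration of those two requirements, the following sketch defines a small index whose `movieId` field is the document key. The index name and fields are invented for this example, loosely following the movies example above.
+
+```http
+### A minimal index definition with a document key (illustrative names)
+POST https://[service name].search.windows.net/indexes?api-version=2023-11-01
+Content-Type: application/json
+api-key: [admin key]
+
+{
+  "name": "movies-index",
+  "fields": [
+    { "name": "movieId", "type": "Edm.String", "key": true, "filterable": true },
+    { "name": "title", "type": "Edm.String", "searchable": true },
+    { "name": "genre", "type": "Edm.String", "filterable": true, "facetable": true },
+    { "name": "durationMinutes", "type": "Edm.Int32", "filterable": true, "sortable": true }
+  ]
+}
+```
+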
During incremental indexing, where new and updated content is indexed, incoming documents with new keys are added, while incoming documents with existing keys are either merged or overwritten, depending on whether index fields are null or populated.
Use this checklist to assist the design decisions for your search index.
1. Review [naming conventions](/rest/api/searchservice/naming-rules) so that index and field names conform to the naming rules.
-1. Review [supported data types](/rest/api/searchservice/supported-data-types). The data type affects how the field is used. For example, numeric content is filterable but not full text searchable. The most common data type is `Edm.String` for searchable text, which is tokenized and queried using the full text search engine.
+1. Review [supported data types](/rest/api/searchservice/supported-data-types). The data type affects how the field is used. For example, numeric content is filterable but not full text searchable. The most common data type is `Edm.String` for searchable text, which is tokenized and queried using the full text search engine. The most common data type for a vector field is `Collection(Edm.Single)`, but you can use other types as well.
1. Identify a [document key](#document-keys). A document key is an index requirement. It's a single string field and it's populated from a source data field that contains unique values. For example, if you're indexing from Blob Storage, the metadata storage path is often used as the document key because it uniquely identifies each blob in the container.
-1. Identify the fields in your data source that contribute searchable content in the index. Searchable content includes short or long strings that are queried using the full text search engine. If the content is verbose (small phrases or bigger chunks), experiment with different analyzers to see how the text is tokenized.
+1. Identify the fields in your data source that contribute searchable content in the index.
+
+ Searchable nonvector content includes short or long strings that are queried using the full text search engine. If the content is verbose (small phrases or bigger chunks), experiment with different analyzers to see how the text is tokenized.
+
+ Searchable vector content can be images or text (in any language) that exists as a mathematical representation. You can use narrow data types or vector compression to make vector fields smaller.
[Field attribute assignments](search-what-is-an-index.md#index-attributes) determine both search behaviors and the physical representation of your index on the search service. Determining how fields should be specified is an iterative process for many customers. To speed up iterations, start with sample data so that you can drop and rebuild easily. 1. Identify which source fields can be used as filters. Numeric content and short text fields, particularly those with repeating values, are good choices. When working with filters, remember:
+ + Filters can be used in vector and nonvector queries, but the filter itself is applied to alphanumeric (nonvector) fields in your index.
+ + Filterable fields can optionally be used in faceted navigation. + Filterable fields are returned in arbitrary order, so consider making them sortable as well.
-1. Determine whether to use the default analyzer (`"analyzer": null`) or a different analyzer. [Analyzers](search-analyzers.md) are used to tokenize text fields during indexing and query execution.
+1. For vector fields, specify a vector search configuration and the algorithms used for creating navigation paths and filling the embedding space. For more information, see [Add vector fields](vector-search-how-to-create-index.md).
+
+ Vector fields have extra properties that nonvector fields don't have, such as which algorithms to use and vector compression.
+
+ Vector fields omit attributes that aren't useful on vector data, such as sorting, filtering, and faceting.
+
+1. For nonvector fields, determine whether to use the default analyzer (`"analyzer": null`) or a different analyzer. [Analyzers](search-analyzers.md) are used to tokenize text fields during indexing and query execution.
For multi-lingual strings, consider a [language analyzer](index-add-language-analyzers.md). For hyphenated strings or special characters, consider [specialized analyzers](index-add-custom-analyzers.md#built-in-analyzers). One example is [keyword](https://lucene.apache.org/core/6_6_1/analyzers-common/org/apache/lucene/analysis/core/KeywordAnalyzer.html) that treats the entire contents of a field as a single token. This behavior is useful for data like zip codes, IDs, and some product names. For more information, see [Partial term search and patterns with special characters](search-query-partial-matching.md). > [!NOTE]
-> Full text search is conducted over terms that are tokenized during indexing. If your queries fail to return the results you expect, [test for tokenization](/rest/api/searchservice/test-analyzer) to verify the string actually exists. You can try different analyzers on strings to see how tokens are produced for various analyzers.
+> Full text search is conducted over terms that are tokenized during indexing. If your queries fail to return the results you expect, [test for tokenization](/rest/api/searchservice/indexes/analyze) to verify the string you're searching for actually exists. You can try different analyzers on strings to see how tokens are produced for various analyzers.
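+
+As a sketch of that kind of test, the following Analyze Text request shows how a sample string might be tokenized by the `standard.lucene` analyzer. The index name and text are placeholders for this example.
+
+```http
+### Inspect how a string is tokenized by a given analyzer
+POST https://[service name].search.windows.net/indexes/hotels-sample-index/analyze?api-version=2023-11-01
+Content-Type: application/json
+api-key: [admin key]
+
+{
+  "text": "Ocean-view suite, 2-night minimum",
+  "analyzer": "standard.lucene"
+}
+```
+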
## Create an index
-When you're ready to create the index, use a search client that can send the request. You can use the Azure portal or REST APIs for early development and proof-of-concept testing.
+When you're ready to create the index, use a search client that can send the request. You can use the Azure portal or REST APIs for early development and proof-of-concept testing. Otherwise, it's common to use the Azure SDKs.
During development, plan on frequent rebuilds. Because physical structures are created in the service, [dropping and re-creating indexes](search-howto-reindex.md) is necessary for many modifications. You might consider working with a subset of your data to make rebuilds go faster.
Index design through the portal enforces requirements and schema rules for speci
1. Sign in to the [Azure portal](https://portal.azure.com).
+1. Check for space. Search services are subject to [maximum number of indexes](search-limits-quotas-capacity.md), varying by service tier. Make sure you have room for a second index.
+ 1. In the search service Overview page, choose either option for creating a search index: + **Add index**, an embedded editor for specifying an index schema
- + [**Import data wizard**](search-import-data-portal.md)
+ + [**Import wizards**](search-import-data-portal.md)
The wizard is an end-to-end workflow that creates an indexer, a data source, and a finished index. It also loads the data. If this is more than what you want, use **Add index** instead.
The following screenshot highlights where **Add index** and **Import data** appe
### [**REST**](#tab/index-rest)
-[**Create Index (REST API)**](/rest/api/searchservice/create-index) is used to create an index. You need a REST client to connect to your search service and send requests. See [Quickstart: Text search using REST](search-get-started-rest.md) to get started.
+[**Create Index (REST API)**](/rest/api/searchservice/indexes/create) is used to create an index. You need a REST client to connect to your search service and send requests. See [Quickstart: Full text search using REST](search-get-started-rest.md) or [Quickstart: Vector search using REST](search-get-started-vector.md) to get started.
The REST API provides defaults for field attribution. For example, all `Edm.String` fields are searchable by default. Attributes are shown in full below for illustrative purposes, but you can omit attribution in cases where the default values apply.
The following properties can be set for CORS:
## Allowed updates on existing indexes
-[**Create Index**](/rest/api/searchservice/create-index) creates the physical data structures (files and inverted indexes) on your search service. Once the index is created, your ability to effect changes using [**Update Index**](/rest/api/searchservice/update-index) is contingent upon whether your modifications invalidate those physical structures. Most field attributes can't be changed once the field is created in your index.
+[**Create Index**](/rest/api/searchservice/indexes/create) creates the physical data structures (files and inverted indexes) on your search service. Once the index is created, your ability to effect changes using [**Create or Update Index**](/rest/api/searchservice/indexes/create-or-update) is contingent upon whether your modifications invalidate those physical structures. Most field attributes can't be changed once the field is created in your index.
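+
+For example, adding a new field is one of the allowed updates. The following sketch resubmits the full definition of the hypothetical `movies-index` from earlier in this article with one extra field; existing documents get a null value for the new field until they're reindexed. Names are illustrative only.
+
+```http
+### Add a new field by resubmitting the complete index definition
+PUT https://[service name].search.windows.net/indexes/movies-index?api-version=2023-11-01
+Content-Type: application/json
+api-key: [admin key]
+
+{
+  "name": "movies-index",
+  "fields": [
+    { "name": "movieId", "type": "Edm.String", "key": true, "filterable": true },
+    { "name": "title", "type": "Edm.String", "searchable": true },
+    { "name": "genre", "type": "Edm.String", "filterable": true, "facetable": true },
+    { "name": "durationMinutes", "type": "Edm.Int32", "filterable": true, "sortable": true },
+    { "name": "releaseYear", "type": "Edm.Int32", "filterable": true, "sortable": true }
+  ]
+}
+```
+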
Alternatively, you can [create an index alias](search-how-to-alias.md) that serves as a stable reference in your application code. Instead of updating your code, you can update an index alias to point to newer index versions.
To minimize churn in the design process, the following table describes which ele
| Field names and types | No | | Field attributes (searchable, filterable, facetable, sortable) | No | | Field attribute (retrievable) | Yes |
+| Stored (applies to vectors) | No |
| [Analyzer](search-analyzers.md) | You can add and modify custom analyzers in the index. Regarding analyzer assignments on string fields, you can only modify `searchAnalyzer`. All other assignments and modifications require a rebuild. | | [Scoring profiles](index-add-scoring-profiles.md) | Yes | | [Suggesters](index-add-suggesters.md) | No |
To minimize churn in the design process, the following table describes which ele
Use the following links to become familiar with loading an index with data, or extending an index with a synonyms map. + [Data import overview](search-what-is-data-import.md)
-+ [Add, Update or Delete Documents (REST)](/rest/api/searchservice/addupdate-or-delete-documents)
++ [Add vector fields](vector-search-how-to-create-index.md)++ [Load documents](search-how-to-load-search-index.md)++ [Update an index](search-howto-reindex.md) + [Synonym maps](search-synonyms.md)
search Search How To Load Search Index https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-how-to-load-search-index.md
Title: Load a search index
+ Title: Load an index
description: Import and refresh data in a search index using the portal, REST APIs, or an Azure SDK. + -
- - ignite-2023
Previously updated : 01/17/2024 Last updated : 07/01/2024 # Load data into a search index in Azure AI Search
-This article explains how to import, refresh, and manage content in a predefined search index. In Azure AI Search, a [search index is created first](search-how-to-create-search-index.md), with [data import](search-what-is-data-import.md) following as a second step. The exception is Import Data wizard and indexer pipelines, which create and load an index in one workflow.
+This article explains how to import documents into a predefined search index. In Azure AI Search, a [search index is created first](search-how-to-create-search-index.md) with [data import](search-what-is-data-import.md) following as a second step. The exception is [Import wizards](search-import-data-portal.md) in the portal and indexer pipelines, which create and load an index in one workflow.
-A search service imports and indexes text and vectors in JSON, used in full text search, vector search, hybrid search, and knowledge mining scenarios. Text content is obtainable from alphanumeric fields in the external data source, metadata that's useful in search scenarios, or enriched content created by a [skillset](cognitive-search-working-with-skillsets.md) (skills can extract or infer textual descriptions from images and unstructured content). Vector content is vectorized using an [external embedding model](vector-search-how-to-generate-embeddings.md) or [integrated vectorization (preview)](vector-search-integrated-vectorization.md).
+## How data import works
-Once data is indexed, the physical data structures of the index are locked in. For guidance on what can and can't be changed, see [Drop and rebuild an index](search-howto-reindex.md).
+A search service accepts JSON documents that conform to the index schema. A search service imports and indexes plain text and vectors in JSON, used in full text search, vector search, hybrid search, and knowledge mining scenarios.
-Indexing isn't a background process. A search service will balance indexing and query workloads, but if [query latency is too high](search-performance-analysis.md#impact-of-indexing-on-queries), you can either [add capacity](search-capacity-planning.md#adjust-capacity) or identify periods of low query activity for loading an index.
++ Plain text content is obtainable from alphanumeric fields in the external data source, metadata that's useful in search scenarios, or enriched content created by a [skillset](cognitive-search-working-with-skillsets.md) (skills can extract or infer textual descriptions from images and unstructured content). +++ Vector content is vectorized using an [external embedding model](vector-search-how-to-generate-embeddings.md) or [integrated vectorization (preview)](vector-search-integrated-vectorization.md) using Azure AI Search features that integrate with applied AI.
-## Load documents
+You can prepare these documents yourself, but if content resides in a [supported data source](search-indexer-overview.md#supported-data-sources), running an [indexer](search-indexer-overview.md) or using an Import wizard can automate document retrieval, JSON serialization, and indexing.
-A search service accepts JSON documents that conform to the index schema.
+Once data is indexed, the physical data structures of the index are locked in. For guidance on what can and can't be changed, see [Update and rebuild an index](search-howto-reindex.md).
+
+Indexing isn't a background process. A search service will balance indexing and query workloads, but if [query latency is too high](search-performance-analysis.md#impact-of-indexing-on-queries), you can either [add capacity](search-capacity-planning.md#adjust-capacity) or identify periods of low query activity for loading an index.
-You can prepare these documents yourself, but if content resides in a [supported data source](search-indexer-overview.md#supported-data-sources), running an [indexer](search-indexer-overview.md) or the Import data wizard can automate document retrieval, JSON serialization, and indexing.
+For more information, see [Data import strategies](search-what-is-data-import.md).
-### [**Azure portal**](#tab/portal)
+## Use the Azure portal
-In the Azure portal, use the Import Data wizards to create and load indexes in a seamless workflow. If you want to load an existing index, choose an alternative approach.
+In the Azure portal, use the [import wizards](search-import-data-portal.md) to create and load indexes in a seamless workflow. If you want to load an existing index, choose an alternative approach.
-1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account.
+1. Sign in to the [Azure portal](https://portal.azure.com/) with your Azure account and [find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/).
-1. [Find your search service](https://portal.azure.com/#blade/HubsExtension/BrowseResourceBlade/resourceType/Microsoft.Storage%2storageAccounts/) and on the Overview page, select **Import data** or **Import and vectorize data** on the command bar to create and populate a search index. You can follow these links to review the workflow: [Quickstart: Create an Azure AI Search index](search-get-started-portal.md) and [Quickstart: Integrated vectorization (preview)](search-get-started-portal-import-vectors.md).
+1. On the Overview page, select **Import data** or **Import and vectorize data** on the command bar to create and populate a search index.
:::image type="content" source="medi.png" alt-text="Screenshot of the Import data command" border="true":::
-If indexers are already defined, you can [reset and run an indexer](search-howto-run-reset-indexers.md) from the Azure portal, which is useful if you're adding fields incrementally. Reset forces the indexer to start over, picking up all fields from all source documents.
+ You can follow these links to review the workflow: [Quickstart: Create an Azure AI Search index](search-get-started-portal.md) and [Quickstart: Integrated vectorization (preview)](search-get-started-portal-import-vectors.md).
-### [**REST**](#tab/import-rest)
+1. After the wizard is finished, use [Search Explorer](search-explorer.md) to check for results.
-[Documents - Index (REST)](/rest/api/searchservice/documents) is the means by which you can import data into a search index. The @search.action parameter determines whether documents are added in full, or partially in terms of new or replacement values for specific fields.
+> [!TIP]
+> The import wizards create and run indexers. If indexers are already defined, you can [reset and run an indexer](search-howto-run-reset-indexers.md) from the Azure portal, which is useful if you're adding fields incrementally. Reset forces the indexer to start over, picking up all fields from all source documents.
+
+## Use the REST APIs
+
+[Documents - Index](/rest/api/searchservice/documents) is the REST API for importing data into a search index. REST APIs are useful for initial proof-of-concept testing, where you can test indexing workflows without having to write much code. The `@search.action` parameter determines whether documents are added in full, or partially in terms of new or replacement values for specific fields.
[**Quickstart: Text search using REST**](search-get-started-rest.md) explains the steps. The following example is a modified version of the example. It's been trimmed for brevity and the first HotelId value has been altered to avoid overwriting an existing document.
-1. Formulate a POST call specifying the index name, the "docs/index" endpoint, and a request body that includes the @search.action parameter.
+1. Formulate a POST call specifying the index name, the "docs/index" endpoint, and a request body that includes the `@search.action` parameter.
```http POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2023-11-01
If indexers are already defined, you can [reset and run an indexer](search-howto
} ```
-1. [Look up the documents](/rest/api/searchservice/lookup-document) you just added as a validation step:
+1. Set the `@search.action` parameter to `upload` to create or overwrite a document. Set it to `merge` or `mergeOrUpload` if you're targeting updates to specific fields within the document. The previous example shows both actions.
+
+ | Action | Effect |
+ |--|--|
+ | merge | Updates a document that already exists, and fails a document that can't be found. Merge replaces existing values. For this reason, be sure to check for collection fields that contain multiple values, such as fields of type `Collection(Edm.String)`. For example, if a `tags` field starts with a value of `["budget"]` and you execute a merge with `["economy", "pool"]`, the final value of the `tags` field is `["economy", "pool"]`. It won't be `["budget", "economy", "pool"]`. |
+ | mergeOrUpload | Behaves like merge if the document exists, and upload if the document is new. This is the most common action for incremental updates. |
+ | upload | Similar to an "upsert" where the document is inserted if it's new, and updated or replaced if it exists. If the document is missing values that the index requires, the document field's value is set to null. |
+
+1. Send the request.
+
+1. [Look up the documents](/rest/api/searchservice/documents/get) you just added as a validation step:
```http GET https://[service name].search.windows.net/indexes/hotel-sample-index/docs/1111?api-version=2023-11-01
If indexers are already defined, you can [reset and run an indexer](search-howto
When the document key or ID is new, **null** becomes the value for any field that is unspecified in the document. For actions on an existing document, updated values replace the previous values. Any fields that weren't specified in a "merge" or "mergeOrUpload" are left intact in the search index.
-### [**.NET SDK (C#)**](#tab/importcsharp)
+## Use the Azure SDKs
+
+Programmability is provided in the following Azure SDKs.
+
+### [**.NET**](#tab/sdk-dotnet)
-Azure AI Search supports the following APIs for simple and bulk document uploads into an index:
+The Azure SDK for .NET provides the following APIs for simple and bulk document uploads into an index:
-+ [IndexDocumentsAsync (Azure SDK for .NET)](/dotnet/api/azure.search.documents.searchclient.indexdocumentsasync)
++ [IndexDocumentsAsync](/dotnet/api/azure.search.documents.searchclient.indexdocumentsasync) + [SearchIndexingBufferedSender](/dotnet/api/azure.search.documents.searchindexingbufferedsender-1) There are several samples that illustrate indexing in context of simple and large-scale indexing:
There are several samples that illustrate indexing in context of simple and larg
+ [**Tutorial: Index any data**](tutorial-optimize-indexing-push-api.md) couples batch indexing with testing strategies for determining an optimum size. -++ Be sure to check the [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples) repo for code examples showing how to index vector fields.
-## Delete orphan documents
+### [**Python**](#tab/sdk-python)
-Azure AI Search supports document-level operations so that you can look up, update, and delete a specific document in isolation. The following example shows how to delete a document. In a search service, documents are unrelated so deleting one will have no impact on the rest of the index.
+The Azure SDK for Python provides the following APIs for simple and bulk document uploads into an index:
-1. Identify which field is the document key. In the portal, you can view the fields of each index. Document keys are string fields and are denoted with a key icon to make them easier to spot.
++ [IndexDocumentsBatch](/python/api/azure-search-documents/azure.search.documents.indexdocumentsbatch)++ [SearchIndexingBufferedSender](/python/api/azure-search-documents/azure.search.documents.searchindexingbufferedsender)
-1. Check the values of the document key field: `search=*&$select=HotelId`. A simple string is straightforward, but if the index uses a base-64 encoded field, or if search documents were generated from a `parsingMode` setting, you might be working with values that you aren't familiar with.
+Code samples include:
-1. [Look up the document](/rest/api/searchservice/lookup-document) to verify the value of the document ID and to review its content before deleting it. Specify the key or document ID in the request. The following examples illustrate a simple string for the [Hotels sample index](search-get-started-portal.md) and a base-64 encoded string for the metadata_storage_path key of the [cog-search-demo index](cognitive-search-tutorial-blob.md).
++ [sample_crud_operations.py](https://github.com/Azure/azure-sdk-for-python/blob/main/sdk/search/azure-search-documents/samples/sample_crud_operations.py)
- ```http
- GET https://[service name].search.windows.net/indexes/hotel-sample-index/docs/1111?api-version=2023-11-01
- ```
++ Be sure to check the [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples) repo for code examples showing how to index vector fields.
- ```http
- GET https://[service name].search.windows.net/indexes/cog-search-demo/docs/aHR0cHM6Ly9oZWlkaWJsb2JzdG9yYWdlMi5ibG9iLmNvcmUud2luZG93cy5uZXQvY29nLXNlYXJjaC1kZW1vL2d1dGhyaWUuanBn0?api-version=2023-11-01
- ```
+### [**JavaScript**](#tab/sdk-javascript)
-1. [Delete the document](/rest/api/searchservice/addupdate-or-delete-documents) to remove it from the search index.
+The Azure SDK for JavaScript/TypeScript provides the following APIs for simple and bulk document uploads into an index:
- ```http
- POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2023-11-01
- Content-Type: application/json
- api-key: [admin key]
- {
- "value": [
- {
- "@search.action": "delete",
- "id": "1111"
- }
- ]
- }
- ```
++ [IndexDocumentsBatch](/javascript/api/%40azure/search-documents/indexdocumentsbatch)++ [SearchIndexingBufferedSender](/javascript/api/%40azure/search-documents/searchindexingbufferedsender)+
+Code samples include:
+++ See this quickstart for basic steps: [Quickstart: Full text search using the Azure SDKs](search-get-started-text.md?tabs=javascript)+++ Be sure to check the [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples) repo for code examples showing how to index vector fields.+
+### [**Java**](#tab/sdk-java)
+
+The Azure SDK for Java provides the following APIs for simple and bulk document uploads into an index:
+++ [indexactiontype enumerator](/java/api/com.azure.search.documents.models.indexactiontype)++ [SearchIndexingBufferedSender](/java/api/com.azure.search.documents.searchclientbuilder.searchindexingbufferedsenderbuilder)+
+Code samples include:
+++ [IndexContentManagementExample.java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/search/azure-search-documents/src/samples/java/com/azure/search/documents/IndexContentManagementExample.java)+++ Be sure to check the [azure-search-vector-samples](https://github.com/Azure/azure-search-vector-samples) repo for code examples showing how to index vector fields.++ ## See also
search Search Howto Reindex https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-howto-reindex.md
Title: Drop and rebuild an index
+ Title: Update or rebuild an index
-description: Re-index a search index to add or update the schema or delete obsolete documents using a full rebuild or partial indexing.
+description: Update or rebuild an index to update the schema or clean out obsolete documents. You can fully rebuild or do partial indexing.
+ -
- - ignite-2023
Previously updated : 01/11/2024 Last updated : 07/01/2024
-# Drop and rebuild an index in Azure AI Search
+# Update or rebuild an index in Azure AI Search
-This article explains how to drop and rebuild an Azure AI Search index. It explains the circumstances under which rebuilds are required, and provides recommendations for mitigating the effects of rebuilds on ongoing query requests. If you have to rebuild frequently, we recommend using [index aliases](search-how-to-alias.md) to make it easier to swap which index your application is pointing to.
+This article explains how to update an existing index in Azure AI Search with schema changes or content changes through incremental indexing. It explains the circumstances under which rebuilds are required, and provides recommendations for mitigating the effects of rebuilds on ongoing query requests.
During active development, it's common to drop and rebuild indexes when you're iterating over index design. Most developers work with a small representative sample of their data so that reindexing goes faster.
-## Modifications requiring a rebuild
+For schema changes on applications already in production, we recommend creating and testing a new index that runs side by side with an existing index. Use an [index alias](search-how-to-alias.md) to swap in the new index while avoiding changes to your application code.
-The following table lists the modifications that require an index drop and rebuild.
+## Update content
-| Action | Description |
-|--|-|
-| Delete a field | To physically remove all traces of a field, you have to rebuild the index. When an immediate rebuild isn't practical, you can modify application code to redirect access away from an obsolete field or use the [searchFields](search-query-create.md#example-of-a-full-text-query-request) and [select](search-query-odata-select.md) query parameters to choose which fields are searched and returned. Physically, the field definition and contents remain in the index until the next rebuild, when you apply a schema that omits the field in question. |
-| Change a field definition | Revisions to a field name, data type, or specific [index attributes](/rest/api/searchservice/create-index) (searchable, filterable, sortable, facetable) require a full rebuild. |
-| Assign an analyzer to a field | [Analyzers](search-analyzers.md) are defined in an index, assigned to fields, and then invoked during indexing to inform how tokens are created. You can add a new analyzer definition to an index at any time, but you can only *assign* an analyzer when the field is created. This is true for both the **analyzer** and **indexAnalyzer** properties. The **searchAnalyzer** property is an exception (you can assign this property to an existing field). |
-| Update or delete an analyzer definition in an index | You can't delete or change an existing analyzer configuration (analyzer, tokenizer, token filter, or char filter) in the index unless you rebuild the entire index. |
-| Add a field to a suggester | If a field already exists and you want to add it to a [Suggesters](index-add-suggesters.md) construct, rebuild the index. |
-| Switch tiers | In-place upgrades aren't supported. If you require more capacity, create a new service and rebuild your indexes from scratch. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure AI Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-utilities). This app will back up your index to a series of JSON files, and then recreate the index in a search service you specify.|
+Incremental indexing and synchronizing an index against changes in source data is fundamental to most search applications. This section explains the workflow for updating field contents in a search index.
+
+1. Use the same techniques for loading documents: [Documents - Index (REST)](/rest/api/searchservice/documents) or an equivalent API in the Azure SDKs. For more information about indexing techniques, see [Load documents](search-how-to-load-search-index.md).
+
+1. Set the `@search.action` parameter to determine the effect on existing documents:
+
+ | Action | Effect |
+ |--|--|
+ | delete | Removes the entire document from the index. If you want to remove an individual field, use merge instead, setting the field in question to null. Deleted documents and fields don't immediately free up space in the index. Every few minutes, a background process performs the physical deletion. Whether you use the portal or an API to return index statistics, you can expect a small delay before the deletion is reflected in the portal and through APIs. |
+ | merge | Updates a document that already exists, and fails a document that can't be found. Merge replaces existing values. For this reason, be sure to check for collection fields that contain multiple values, such as fields of type `Collection(Edm.String)`. For example, if a `tags` field starts with a value of `["budget"]` and you execute a merge with `["economy", "pool"]`, the final value of the `tags` field is `["economy", "pool"]`. It won't be `["budget", "economy", "pool"]`. |
+ | mergeOrUpload | Behaves like merge if the document exists, and upload if the document is new. This is the most common action for incremental updates. |
+ | upload | Similar to an "upsert" where the document is inserted if it's new, and updated or replaced if it exists. If the document is missing values that the index requires, the document field's value is set to null. |
+
+1. Post the update.
+
+Queries continue to run, but if you're updating or removing existing fields, you can expect mixed results and a higher incidence of throttling.
+
+## Tips for incremental indexing
+++ [Indexers automate incremental indexing](search-indexer-overview.md). If you can use an indexer, and if the data source supports change tracking, you can run the indexer on a recurring schedule to add, update, or overwrite searchable content so that it's synchronized to your external data.+++ If you're making index calls directly, use `mergeOrUpload` as the search action.+++ The payload must include the keys or identifiers of every document you want to add, update, or delete.+++ To update the contents of simple fields and subfields in complex types, list only the fields you want to change. For example, if you only need to update a description field, the payload should consist of the document key and the modified description. Omitting other fields retains their existing values.+++ To merge the inline changes into string collection, provide the entire value. Recall the `tags` field example from the previous section. New values overwrite the old values, and there's no merging at the field content level.+
+Here's a [REST API example](search-get-started-rest.md) demonstrating these tips:
+
+```rest
+### Get Secret Point Hotel by ID
+GET {{baseUrl}}/indexes/hotels-vector-quickstart/docs('1')?api-version=2023-11-01 HTTP/1.1
+ Content-Type: application/json
+ api-key: {{apiKey}}
+
+### Change the description and city for Secret Point Hotel
+POST {{baseUrl}}/indexes/hotels-vector-quickstart/docs/search.index?api-version=2023-11-01 HTTP/1.1
+ Content-Type: application/json
+ api-key: {{apiKey}}
+
+ {
+ "value": [
+ {
+ "@search.action": "mergeOrUpload",
+ "HotelId": "1",
+            "Description": "Change the description and city for Secret Point Hotel. Keep everything else.",
+ "Address": {
+ "City": "Miami"
+ }
+ }
+ ]
+ }
+
+### Retrieve the same document, confirm the overwrites and retention of all other values
+GET {{baseUrl}}/indexes/hotels-vector-quickstart/docs('1')?api-version=2023-11-01 HTTP/1.1
+ Content-Type: application/json
+ api-key: {{apiKey}}
+```
-## Modifications with no rebuild requirement
+## Change an index schema
-Many other modifications can be made without impacting existing physical structures. Specifically, the following changes don't require an index rebuild. For these changes, you can [update an existing index definition](/rest/api/searchservice/update-index) with your changes.
+The index schema defines the physical data structures created on the search service, so there aren't many schema changes that you can make without incurring a full rebuild. The following list enumerates the schema changes that can be introduced seamlessly into an existing index. Generally, the list includes new fields and functionality used during query execution.
+ Add a new field
-+ Set the **retrievable** attribute on an existing field
-+ Update **searchAnalyzer** on a field having an existing **indexAnalyzer**
++ Set the `retrievable` attribute on an existing field
++ Update `searchAnalyzer` on a field having an existing `indexAnalyzer`
+ Add a new analyzer definition in an index (which can be applied to new fields)
+ Add, update, or delete scoring profiles
+ Add, update, or delete CORS settings
+ Add, update, or delete synonymMaps
+ Add, update, or delete semantic configurations
-When you add a new field, existing indexed documents are given a null value for the new field. On a future data refresh, values from external source data replace the nulls added by Azure AI Search. For more information on updating index content, see [Add, Update or Delete Documents](/rest/api/searchservice/addupdate-or-delete-documents).
+The order of operations is:
-## How to rebuild an index
+1. [Get the index definition](/rest/api/searchservice/indexes/get).
-During development, the index schema changes frequently. You can plan for it by creating indexes that can be deleted, recreated, and reloaded quickly with a small representative data set.
+1. Revise the schema with updates from the previous list.
-For applications already in production, we recommend creating a new index that runs side by side an existing index to avoid query downtime. Your application code provides redirection to the new index.
+1. [Update index](/rest/api/searchservice/indexes/create-or-update).
-1. Check for space. Search services are subject to [maximum number of indexes](search-limits-quotas-capacity.md), varying by service tier. Make sure you have room for a second index.
+1. [Index documents](/rest/api/searchservice/documents).
-1. Determine whether a rebuild is required. If you're just adding fields, or changing some part of the index that is unrelated to fields, you might be able to simply [update the definition](/rest/api/searchservice/update-index) without deleting, recreating, and fully reloading it.
+When you update the index schema, existing documents in the index are given a null value for the new field. On the next index documents job, values from external source data replace the nulls added by Azure AI Search.
-1. [Get an index definition](/rest/api/searchservice/get-index) in case you need it for future reference.
+There should be no query disruptions during the updates, but query results will change as the updates take effect.
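Here's a minimal REST sketch of steps 1 and 3, using the same placeholder conventions as the other examples in this section (`[service name]`, `[admin key]`) and a hypothetical `LastRenovated` field. The PUT body must be the complete index definition returned by the GET, with the new field appended; the ellipsis below is a placeholder for the copied field definitions, not valid JSON. Step 2 is your offline edit of that definition, and step 4 reloads documents as shown in the earlier example.

```http
### Step 1 - Get the current index definition and copy the response body
GET https://[service name].search.windows.net/indexes/hotels-sample-index?api-version=2023-11-01
    api-key: [admin key]

### Step 3 - Send back the FULL definition, with the new field appended to "fields"
PUT https://[service name].search.windows.net/indexes/hotels-sample-index?api-version=2023-11-01
    Content-Type: application/json
    api-key: [admin key]

    {
        "name": "hotels-sample-index",
        "fields": [
            ...all existing field definitions copied from the GET response...,
            { "name": "LastRenovated", "type": "Edm.DateTimeOffset", "retrievable": true, "filterable": true, "sortable": true }
        ]
    }
```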
-1. [Drop the existing index](/rest/api/searchservice/delete-index), assuming you aren't running new and old indexes side by side.
+## Drop and rebuild an index
- Any queries targeting that index are immediately dropped. Remember that deleting an index is irreversible, destroying physical storage for the fields collection and other constructs. Pause to think about the implications before dropping it.
+Some modifications require an index drop and rebuild.
-1. [Create a revised index](/rest/api/searchservice/create-index), where the body of the request includes changed or modified field definitions.
+| Action | Description |
+|--|-|
+| Delete a field | To physically remove all traces of a field, you have to rebuild the index. When an immediate rebuild isn't practical, you can modify application code to redirect access away from an obsolete field or use the [searchFields](search-query-create.md#example-of-a-full-text-query-request) and [select](search-query-odata-select.md) query parameters to choose which fields are searched and returned. Physically, the field definition and contents remain in the index until the next rebuild, when you apply a schema that omits the field in question. |
+| Change a field definition | Revisions to a field name, data type, or specific [index attributes](/rest/api/searchservice/create-index) (searchable, filterable, sortable, facetable) require a full rebuild. |
+| Assign an analyzer to a field | [Analyzers](search-analyzers.md) are defined in an index, assigned to fields, and then invoked during indexing to inform how tokens are created. You can add a new analyzer definition to an index at any time, but you can only *assign* an analyzer when the field is created. This is true for both the **analyzer** and **indexAnalyzer** properties. The **searchAnalyzer** property is an exception (you can assign this property to an existing field). |
+| Update or delete an analyzer definition in an index | You can't delete or change an existing analyzer configuration (analyzer, tokenizer, token filter, or char filter) in the index unless you rebuild the entire index. |
+| Add a field to a suggester | If a field already exists and you want to add it to a [Suggesters](index-add-suggesters.md) construct, rebuild the index. |
+| Switch tiers | In-place upgrades aren't supported. If you require more capacity, create a new service and rebuild your indexes from scratch. To help automate this process, you can use the **index-backup-restore** sample code in this [Azure AI Search .NET sample repo](https://github.com/Azure-Samples/azure-search-dotnet-utilities). This app backs up your index to a series of JSON files, and then recreates the index in a search service you specify.|
+
+The order of operations is:
+
+1. [Get an index definition](/rest/api/searchservice/indexes/get) in case you need it for future reference.
+
+1. Consider using a backup and restore solution to preserve a copy of index content. There are solutions in [C#](https://github.com/liamc) and in [Python](https://github.com/Azure/azure-search-vector-samples/tree/main/demo-python/code/index-backup-restore). We recommend the Python version because it's more up to date.
+
+1. [Drop the existing index](/rest/api/searchservice/indexes/delete). Queries targeting the index are immediately dropped. Remember that deleting an index is irreversible, destroying physical storage for the fields collection and other constructs.
-1. [Load the index with documents](/rest/api/searchservice/addupdate-or-delete-documents) from an external source.
+1. [Post a revised index](/rest/api/searchservice/indexes/create), where the body of the request includes changed or modified field definitions.
-When you create the index, physical storage is allocated for each field in the index schema, with an inverted index created for each searchable field. Fields that aren't searchable can be used in filters or expressions, but don't have inverted indexes and aren't full-text or fuzzy searchable. On an index rebuild, these inverted indexes are deleted and recreated based on the index schema you provide.
+1. [Load the index with documents](/rest/api/searchservice/documents) from an external source.
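As a minimal sketch of that sequence, using the same REST placeholder conventions as the other examples in this article (the abbreviated index body stands in for your full revised definition and isn't valid JSON as written):

```http
### 1 - Drop the existing index (irreversible)
DELETE https://[service name].search.windows.net/indexes/hotels-sample-index?api-version=2023-11-01
    api-key: [admin key]

### 2 - Create the revised index; the body is your complete, updated index definition
POST https://[service name].search.windows.net/indexes?api-version=2023-11-01
    Content-Type: application/json
    api-key: [admin key]

    { "name": "hotels-sample-index", "fields": [ ...revised field definitions... ] }

### 3 - Reload documents from the external source
POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2023-11-01
    Content-Type: application/json
    api-key: [admin key]

    {
        "value": [
            { "@search.action": "upload", "HotelId": "1", "HotelName": "Secret Point Hotel" }
        ]
    }
```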
-When you load the index, each field's inverted index is populated with all of the unique, tokenized words from each document, with a map to corresponding document IDs. For example, when indexing a hotels data set, an inverted index created for a City field might contain terms for Seattle, Portland, and so forth. Documents that include Seattle or Portland in the City field would have their document ID listed alongside the term. On any [Add, Update or Delete](/rest/api/searchservice/addupdate-or-delete-documents) operation, the terms and document ID list are updated accordingly.
+When you create the index, physical storage is allocated for each field in the index schema, with an inverted index created for each searchable field and a vector index created for each vector field. Fields that aren't searchable can be used in filters or expressions, but don't have inverted indexes and aren't full-text or fuzzy searchable. On an index rebuild, these inverted indexes and vector indexes are deleted and recreated based on the index schema you provide.
## Balancing workloads
You can begin querying an index as soon as the first document is loaded. If you
You can use [Search Explorer](search-explorer.md) or a [REST client](search-get-started-rest.md) to check for updated content.
-If you added or renamed a field, use [$select](search-query-odata-select.md) to return that field: `search=*&$select=document-id,my-new-field,some-old-field&$count=true`
+If you added or renamed a field, use [$select](search-query-odata-select.md) to return that field: `search=*&$select=document-id,my-new-field,some-old-field&$count=true`.
+
+The Azure portal provides index size and vector index size. You can check these values after updating an index, but remember to expect a small delay as the service processes the change and to account for portal refresh rates, which can be a few minutes.
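If you prefer to script the check, here's a hedged sketch using the index statistics REST API with the same placeholders as the other examples. The response reports document count and storage size; newer API versions also surface vector index size. Expect the same small delay described above before new values appear.

```http
### Check index statistics after an update
GET https://[service name].search.windows.net/indexes/hotels-sample-index/stats?api-version=2023-11-01
    api-key: [admin key]
```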
+
+## Delete orphan documents
+
+Azure AI Search supports document-level operations so that you can look up, update, and delete a specific document in isolation. The following example shows how to delete a document.
+
+Deleting a document doesn't immediately free up space in the index. Every few minutes, a background process performs the physical deletion. Whether you use the portal or an API to return index statistics, you can expect a small delay before the deletion is reflected in the portal and API metrics.
+
+1. Identify which field is the document key. In the portal, you can view the fields of each index. Document keys are string fields and are denoted with a key icon to make them easier to spot.
+
+1. Check the values of the document key field: `search=*&$select=HotelId`. A simple string is straightforward, but if the index uses a base-64 encoded field, or if search documents were generated from a `parsingMode` setting, you might be working with values that you aren't familiar with.
+
+1. [Look up the document](/rest/api/searchservice/documents/get) to verify the value of the document ID and to review its content before deleting it. Specify the key or document ID in the request. The following examples illustrate a simple string for the [Hotels sample index](search-get-started-portal.md) and a base-64 encoded string for the metadata_storage_path key of the [cog-search-demo index](cognitive-search-tutorial-blob.md).
+
+ ```http
+    GET https://[service name].search.windows.net/indexes/hotels-sample-index/docs/1111?api-version=2023-11-01
+ ```
+
+ ```http
+ GET https://[service name].search.windows.net/indexes/cog-search-demo/docs/aHR0cHM6Ly9oZWlkaWJsb2JzdG9yYWdlMi5ibG9iLmNvcmUud2luZG93cy5uZXQvY29nLXNlYXJjaC1kZW1vL2d1dGhyaWUuanBn0?api-version=2023-11-01
+ ```
+
+1. [Delete the document](/rest/api/searchservice/documents) using a delete `@search.action` to remove it from the search index.
+
+ ```http
+ POST https://[service name].search.windows.net/indexes/hotels-sample-index/docs/index?api-version=2023-11-01
+ Content-Type: application/json
+ api-key: [admin key]
+ {
+ "value": [
+ {
+ "@search.action": "delete",
+            "HotelId": "1111"
+ }
+ ]
+ }
+ ```
## See also
search Search Security Enable Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-enable-roles.md
Roles for service administration (control plane) are built in and can't be enabl
## Prerequisites
-+ Owner, User Access Administrator, or a custom role with [Microsoft.Authorization/roleAssignments/write](/azure/templates/microsoft.authorization/roleassignments) permissions.
-
 + A search service in any region, on any tier, including free.
++ Owner, User Access Administrator, or a custom role with [Microsoft.Authorization/roleAssignments/write](/azure/templates/microsoft.authorization/roleassignments) permissions.
+
## Enable role-based access for data plane operations

Configure your search service to recognize an **authorization** header on data requests that provide an OAuth2 access token.
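For example, here's a minimal sketch of that configuration using the Management REST API, assuming placeholder subscription, resource group, and service names, and a management bearer token for a caller with rights to update the service. The `aadOrApiKey` option allows both API keys and Microsoft Entra tokens.

```http
PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Search/searchServices/{service-name}?api-version=2023-11-01
    Content-Type: application/json
    Authorization: Bearer {management-token}

    {
        "properties": {
            "authOptions": {
                "aadOrApiKey": {
                    "aadAuthFailureMode": "http401WithBearerChallenge"
                }
            }
        }
    }
```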
search Search Security Rbac https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/search-security-rbac.md
Role-based access is optional, but recommended. The alternative is [key-based au
## Prerequisites
-+ **Owner**, **User Access Administrator**, or a custom role with [Microsoft.Authorization/roleAssignments/write](/azure/templates/microsoft.authorization/roleassignments) permissions.
-
 + A search service in any region, on any tier, [enabled for role-based access](search-security-enable-roles.md).
++ Owner, User Access Administrator, or a custom role with [Microsoft.Authorization/roleAssignments/write](/azure/templates/microsoft.authorization/roleassignments) permissions.
+
<a name = "built-in-roles-used-in-search"></a>

## Built-in roles used in Azure AI Search
The following roles are built in. If these roles are insufficient, [create a cus
Combine these roles to get sufficient permissions for your use case.
-
> [!NOTE]
> If you disable Azure role-based access, built-in roles for the control plane (Owner, Contributor, Reader) continue to be available. Disabling role-based access removes just the data-related permissions associated with those roles. If data plane roles are disabled, Search Service Contributor is equivalent to control-plane Contributor.
This approach assumes Visual Studio Code with a REST client extension.
az account get-access-token --query accessToken --output tsv ```
-1. In a new text file in Visual Studio Code, paste in these variables:
+1. Paste these variables in a new text file in Visual Studio Code.
```http @baseUrl = PASTE-YOUR-SEARCH-SERVICE-URL-HERE
This approach assumes Visual Studio Code with a REST client extension.
@token = PASTE-YOUR-TOKEN-HERE ```
-1. Paste in and then send a request that uses the variables you've specified. For the "Search Index Data Reader" role, you can send a query. You can use any [supported API version](/rest/api/searchservice/search-service-api-versions).
+1. Paste and then send a request that uses the variables you've specified. For the "Search Index Data Reader" role, you can send a query. You can use any [supported API version](/rest/api/searchservice/search-service-api-versions).
```http POST https://{{baseUrl}}/indexes/{{index-name}}/docs/search?api-version=2023-11-01 HTTP/1.1
If you're already a Contributor or Owner of your search service, you can present
Get-AzAccessToken -ResourceUrl https://search.azure.com ```
-1. In a new text file in Visual Studio Code, paste in these variables:
+1. Paste these variables into a new text file in Visual Studio Code.
```http @baseUrl = PASTE-YOUR-SEARCH-SERVICE-URL-HERE
The PowerShell example shows the JSON syntax for creating a custom role that's a
1. See [Create or update Azure custom roles using the REST API](../role-based-access-control/custom-roles-rest.md) for steps.
-1. Clone or create a role, or use JSON to specify the custom role (see the PowerShell tab for JSON syntax).
+1. Copy or create a role, or use JSON to specify the custom role (see the PowerShell tab for JSON syntax).
### [**Azure CLI**](#tab/custom-role-cli)
The PowerShell example shows the JSON syntax for creating a custom role that's a
1. See [Create or update Azure custom roles using Azure CLI](../role-based-access-control/custom-roles-cli.md) for steps.
-1. Clone or create a role, or use JSON to specify the custom role (see the PowerShell tab for JSON syntax).
+1. See the PowerShell tab for JSON syntax.
search Service Configure Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/search/service-configure-firewall.md
Last updated 06/27/2024
# Configure network access and firewall rules for Azure AI Search
-By default, Azure AI Search is configured to allow connections over a public endpoint. Access to a search service *through* the public endpoint is protected by authentication and authorization protocols, but the endpoint itself is open to the internet at the network layer for data plane requests.
+This article explains how to restrict network access to a search service's public endpoint. To block *all* data plane access to the public endpoint, use [private endpoints](service-create-private-endpoint.md) and an Azure virtual network.
-If you aren't hosting a public web site, you might want to configure network access to automatically refuse requests unless they originate from an approved set of devices and cloud services. There are two mechanisms:
+This article assumes the Azure portal for configuring network access options. You can also use the [Management REST API](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or the [Azure CLI](/cli/azure/search).
-+ Inbound rules listing the IP addresses, ranges, or subnets from which requests are admitted
-+ Exceptions to network rules, where requests are admitted with no checks, as long as the request originates from a [trusted service](#grant-access-to-trusted-azure-services)
+## Prerequisites
-Network rules aren't required, but it's a security best practice to add them if you use Azure AI Search for surfacing private or internal corporate content.
++ A search service, any region, at the Basic tier or higher
-Network rules are scoped to data plane operations against the search service's public endpoint. Data plane operations include creating or querying indexes, and all other actions described by the [Search REST APIs](/rest/api/searchservice/). Control plane operations target service administration. Those operations specify resource provider endpoints, which are subject to the [network protections supported by Azure Resource Manager](/security/benchmark/azure/baselines/azure-resource-manager-security-baseline).
++ Owner or Contributor permissions
+
+## When to configure network access
-This article explains how to configure network access to a search service's public endpoint. To block *all* data plane access to the public endpoint, use [private endpoints](service-create-private-endpoint.md) and an Azure virtual network.
+By default, Azure AI Search is configured to allow connections over a public endpoint. Access to a search service *through* the public endpoint is protected by authentication and authorization protocols, but the endpoint itself is open to the internet at the network layer for data plane requests.
-This article assumes the Azure portal to explain network access options. You can also use the [Management REST API](/rest/api/searchmanagement/), [Azure PowerShell](/powershell/module/az.search), or the [Azure CLI](/cli/azure/search).
+If you aren't hosting a public web site, you might want to configure network access to automatically refuse requests unless they originate from an approved set of devices and cloud services.
-## Prerequisites
+There are two mechanisms for restricting access to the public endpoint:
-+ A search service, any region, at the Basic tier or higher
++ Inbound rules listing the IP addresses, ranges, or subnets from which requests are admitted
++ Exceptions to network rules, where requests are admitted with no checks, as long as the request originates from a [trusted service](#grant-access-to-trusted-azure-services)
-+ Owner or Contributor permissions
+Network rules aren't required, but it's a security best practice to add them if you use Azure AI Search for surfacing private or internal corporate content.
+
+Network rules are scoped to data plane operations against the search service's public endpoint. Data plane operations include creating or querying indexes, and all other actions described by the [Search REST APIs](/rest/api/searchservice/). Control plane operations target service administration. Those operations specify resource provider endpoints, which are subject to the [network protections supported by Azure Resource Manager](/security/benchmark/azure/baselines/azure-resource-manager-security-baseline).
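As an illustration of the first mechanism, here's a hedged sketch of setting inbound IP rules with the Management REST API, using placeholder names and documentation IP ranges; the same settings are available in the portal's **Networking** page and through Azure PowerShell or the Azure CLI.

```http
PATCH https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Search/searchServices/{service-name}?api-version=2023-11-01
    Content-Type: application/json
    Authorization: Bearer {management-token}

    {
        "properties": {
            "publicNetworkAccess": "enabled",
            "networkRuleSet": {
                "ipRules": [
                    { "value": "203.0.113.5" },
                    { "value": "198.51.100.0/24" }
                ]
            }
        }
    }
```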
## Limitations
There are a few drawbacks to locking down the public endpoint.
+ It takes time to fully identify IP ranges and set up firewalls, and if you're in early stages of proof-of-concept testing and investigation and using sample data, you might want to defer network access controls until you actually need them.
-+ Some workflows require access to a public endpoint. Specifically, the import wizards in the Azure portal, such as the [Import data wizard](search-get-started-portal.md) and [Import and vectorize data wizard](search-get-started-portal-import-vectors.md), connect to built-in (hosted) sample data and embedding models over the public endpoint. You can switch to code or script to complete the same tasks with firewall rules in place, but if you want to run the wizards, the public endpoint must be available. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).
++ Some workflows require access to a public endpoint. Specifically, the [**import wizards**](search-import-data-portal.md) in the Azure portal connect to built-in (hosted) sample data and embedding models over the public endpoint. You can switch to code or script to complete the same tasks with firewall rules in place, but if you want to run the wizards, the public endpoint must be available. For more information, see [Secure connections in the import wizards](search-import-data-portal.md#secure-connections).

<a id="configure-ip-policy"></a>
There are a few drawbacks to locking down the public endpoint.
:::image type="content" source="media/service-configure-firewall/azure-portal-firewall-all.png" alt-text="Screenshot showing how to configure the IP firewall in the Azure portal.":::
-1. Under **IP Firewall**, select **Add your client IP address** to create an inbound rule for the public IP address of your system. See [Allow access from the Azure portal IP address](#allow-access-from-the-azure-portal-ip-address) for details.
+1. Under **IP Firewall**, select **Add your client IP address** to create an inbound rule for the public IP address of your personal device. See [Allow access from the Azure portal IP address](#allow-access-from-the-azure-portal-ip-address) for details.
1. Add other client IP addresses for other devices and services that send requests to a search service.
sentinel Soc Optimization Access https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/sentinel/soc-optimization/soc-optimization-access.md
- usx-security Previously updated : 06/09/2024 Last updated : 07/01/2024 appliesto: - Microsoft Sentinel in the Microsoft Defender portal - Microsoft Sentinel in the Azure portal
Use SOC optimization recommendations to help you close coverage gaps against spe
[!INCLUDE [unified-soc-preview](../includes/unified-soc-preview.md)]
-Watch the following video for an overview and demo of SOC optimization in the Defender portal. If you just want a demo, jump to minute 8:14. <br>
+Watch the following video for an overview and demo of SOC optimization in the Defender portal. If you just want a demo, jump to minute 8:14. <br><br>
> [!VIDEO https://www.youtube.com/embed/b0rbPZwBuc0?si=DuYJQewK8IZz8T0Y]
In the Defender portal, SOC optimization recommendations are listed in the **You
Each optimization card includes the status, title, the date it was created, a high-level description, and the workspace it applies to.
+> [!NOTE]
+> SOC optimization recommendations are calculated every 24 hours.
+
### Filter optimizations

Filter the optimizations based on optimization type, or search for a specific optimization title using the search box on the side. Optimization types include:
service-bus-messaging Monitor Service Bus Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/monitor-service-bus-reference.md
The following two types of errors are classified as **user errors**:
|Memory size usage per namespace| No | Memory Usage | Percent | The percentage memory usage of the namespace. | Replica |

### Error metrics
+
| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
| - | -- | | | | |
|Server Errors| No | Count | Total | The number of requests not processed because of an error in the Service Bus service over a specified period. | Entity name<br/><br/>Operation Result |
|User Errors | No | Count | Total | The number of requests not processed because of user errors over a specified period. | Entity name<br/><br/>Operation Result|
+### Geo-Replication metrics
+
+| Metric Name | Exportable via diagnostic settings | Unit | Aggregation type | Description | Dimensions |
+| - | -- | | | | |
+|Replication Lag Duration| No | Seconds | Max | The offset in seconds between the latest action on the primary and the secondary regions. | |
+|Replication Lag Count | No | Count | Max | The offset in number of operations between the latest action on the primary and the secondary regions. | |
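To retrieve these values programmatically, one option is the Azure Monitor metrics REST API. The sketch below is assumption-laden: the namespace path is a placeholder, and `ReplicationLagDuration` is an assumed internal ID for the **Replication Lag Duration** metric shown above, so confirm the exact metric name in the metric definitions list before relying on it.

```http
GET https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.ServiceBus/namespaces/{namespace-name}/providers/microsoft.insights/metrics?metricnames=ReplicationLagDuration&aggregation=Maximum&interval=PT5M&api-version=2018-01-01
    Authorization: Bearer {management-token}
```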
## Metric dimensions

Azure Service Bus supports the following dimensions for metrics in Azure Monitor. Adding dimensions to your metrics is optional. If you don't add dimensions, metrics are specified at the namespace level.
service-bus-messaging Service Bus Geo Replication https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-bus-messaging/service-bus-geo-replication.md
This feature allows promoting any secondary region to primary, at any time. Prom
The Geo-replication feature can be used to implement different scenarios, as described here. ### Disaster recovery
-Data and metadata are continuously synchronized between the primary and secondary regions. If a region lags or is unavailable, it is possible to promote a secondary region as the primary. This promotion allows for the uninterrupted operation of workloads in the newly promoted region. Such a promotion may be necessitated by degradation of Service Bus or other services within your workload, particularly if you aim to run the various components together. Depending on the severity and impacted services, the promotion could either be planned or forced. In case of planned promotion in-flight messages are replicated before finalizing the promotion, while with forced promotion this is immediately executed.
+Data and metadata are continuously synchronized between the primary and secondary regions. If a region lags or is unavailable, it's possible to promote a secondary region as the primary. This promotion allows for the uninterrupted operation of workloads in the newly promoted region. Such a promotion may be necessitated by degradation of Service Bus or other services within your workload, particularly if you aim to run the various components together. Depending on the severity and impacted services, the promotion could either be planned or forced. In case of planned promotion, in-flight messages are replicated before finalizing the promotion, while with forced promotion the promotion is executed immediately.
### Region migration There are times when you want to migrate your Service Bus workloads to run in a different region. For example, when Azure adds a new region that is geographically closer to your location, users, or other services. Alternatively, you might want to migrate when the regions where most of your workloads run is shifted. The Geo-Replication feature also provides a good solution in these cases. In this case, you would set up Geo-Replication on your existing namespace with the desired new region as secondary region and wait for the synchronization to complete. At this point, you would start a planned promotion, allowing any in-flight messages to be replicated. Once the promotion is completed you can now optionally remove the old region, which is now the secondary region, and continue running your workloads in the desired region. ## Basic concepts
-The Geo-Replication feature implements metadata and data replication in a primary-secondary replication model. At a given time thereΓÇÖs a single primary region, which is serving both producers and consumers. The secondaries act as hot stand-by regions, meaning that it is not possible to interact with these secondary regions. However, they run in the same configuration as the primary region, allowing for fast promotion, and meaning they your workloads can immediately continue running after promotion has been completed. The Geo-Replication feature is available for the [Premium tier](service-bus-premium-messaging.md).
+The Geo-Replication feature implements metadata and data replication in a primary-secondary replication model. At a given time there's a single primary region, which is serving both producers and consumers. The secondaries act as hot stand-by regions, meaning that it isn't possible to interact with these secondary regions. However, they run in the same configuration as the primary region, allowing for fast promotion, and meaning that your workloads can immediately continue running after promotion has been completed. The Geo-Replication feature is available for the [Premium tier](service-bus-premium-messaging.md).
Some of the key aspects of Geo-Replication feature are: - Service Bus services perform fully managed replication of metadata, message data, and message state and property changes across regions adhering to the replication consistency configured at the namespace. - Single namespace hostname; Upon successful configuration of a Geo-Replication enabled namespace, users can use the namespace hostname in their client application. The hostname behaves agnostic of the configured primary and secondary regions, and always points to the primary region. - When a customer initiates a promotion, the hostname points to the region selected to be the new primary region. The old primary becomes a secondary region.-- It is not possible to read or write on the secondary regions.
+- It isn't possible to read or write on the secondary regions.
- Synchronous and asynchronous replication modes, further described [here](#replication-modes). - Customer-managed promotion from primary to secondary region, providing full ownership and visibility for outage resolution. Metrics are available, which can help to automate the promotion from customer side. - Secondary regions can be added or removed at the customer's discretion.
As such, it doesnΓÇÖt have the absolute guarantee that all regions have the data
The replication mode can be changed after configuring Geo-Replication. You can go from synchronous to asynchronous or from asynchronous to synchronous. If you go from asynchronous to synchronous, your secondary will be configured as synchronous after lag reaches zero. If you're running with a continual lag for whatever reason, then you may need to pause your publishers in order for lag to reach zero and your mode to be able to switch to synchronous. The reasons to have synchronous replication enabled, instead of asynchronous replication, are tied to the importance of the data, specific business needs, or compliance reasons, rather than availability of your application. > [!NOTE]
-> In case a secondary region lags or becomes unavailable, the application will no longer be able to replicate to this region and will start throttling once the replication lag is reached. To continue using the namespace in the primary location, the afflicted secondary region can be removed. If no more secondary regions are configured, the namespace will continue without Geo-Replication enabled. It is possible to add additional secondary regions at any time.
+> In case a secondary region lags or becomes unavailable, the application will no longer be able to replicate to this region and will start throttling once the replication lag is reached. To continue using the namespace in the primary location, the afflicted secondary region can be removed. If no more secondary regions are configured, the namespace will continue without Geo-Replication enabled. It's possible to add additional secondary regions at any time.
## Secondary region selection
The Geo-Replication feature enables customers to configure a secondary region to
## Setup
-The following section is an overview to set up the Geo-Replication feature on a new namespace.
+### Using Azure portal
+
+The following section is an overview to set up the Geo-Replication feature on a new namespace through the Azure portal.
> [!NOTE] > This experience might change during public preview. We'll update this document accordingly.
The following section is an overview to set up the Geo-Replication feature on a
1. Either check the **Synchronous replication** checkbox, or specify a value for the **Async Replication - Max Replication lag** value in seconds. :::image type="content" source="./media/service-bus-geo-replication/create-namespace-with-geo-replication.png" alt-text="Screenshot showing the Create Namespace experience with Geo-Replication enabled.":::
+### Using Bicep template
+
+To create a namespace with the Geo-Replication feature enabled, add the *geoDataReplication* properties section.
+
+```bicep
+param serviceBusName string
+param primaryLocation string
+param secondaryLocation string
+param maxReplicationLagInSeconds int
+
+resource sb 'Microsoft.ServiceBus/namespaces@2023-01-01-preview' = {
+ name: serviceBusName
+ location: primaryLocation
+ sku: {
+ name: 'Premium'
+ tier: 'Premium'
+ capacity: 1
+ }
+ properties: {
+ geoDataReplication: {
+ maxReplicationLagDurationInSeconds: maxReplicationLagInSeconds
+ locations: [
+ {
+ locationName: primaryLocation
+ roleType: 'Primary'
+ }
+ {
+ locationName: secondaryLocation
+ roleType: 'Secondary'
+ }
+ ]
+ }
+ }
+}
+```
+ ## Management Once you create a namespace with the Geo-Replication feature enabled, you can manage the feature from the **Replication (preview)** blade.
To remove a secondary region, click on the **...**-ellipsis next to the region,
### Promotion flow
-A promotion is triggered manually by the customer (either explicitly through a command, or through client owned business logic that triggers the command) and never by Azure. It gives the customer full ownership and visibility for outage resolution on Azure's backbone. In the portal, click on the **Promote** icon, and follow the instructions in the pop-up blade to delete the region.
-
-When choosing **Planned** promotion, the service waits to catch up the replication lag before initiating the promotion. On the other hand, when choosing **Forced** promotion, the service immediately initiates the promotion. The namespace will be placed in read-only mode from the time that a promotion is requested, until the time that the promotion has completed. It is possible to do a forced promotion at any time after a planned promotion has been initiated. This puts the user in control to expedite the promotion, when a planned failover takes longer than desired.
+A promotion is triggered manually by the customer (either explicitly through a command, or through client owned business logic that triggers the command) and never by Azure. It gives the customer full ownership and visibility for outage resolution on Azure's backbone. When choosing **Planned** promotion, the service waits to catch up the replication lag before initiating the promotion. On the other hand, when choosing **Forced** promotion, the service immediately initiates the promotion. The namespace is placed in read-only mode from the time that a promotion is requested until the time that the promotion has completed. It's possible to do a forced promotion at any time after a planned promotion has been initiated. This puts the user in control to expedite the promotion when a planned failover takes longer than desired.
> [!IMPORTANT] > When using **Forced** promotion, any data that has not been replicated may be lost. - After the promotion is initiated: 1. The hostname is updated to point to the secondary region, which can take up to a few minutes.
After the promotion is initiated:
> ping *your-namespace-fully-qualified-name* 1. Clients automatically reconnect to the secondary region. You can automate promotion either with monitoring systems, or with custom-built monitoring solutions. However, such automation takes extra planning and work, which is out of the scope of this article.
+### Using Azure portal
+
+In the portal, click on the **Promote** icon, and follow the instructions in the pop-up blade to promote the region.
++
+### Using Azure CLI
+
+Execute the Azure CLI command to initiate the promotion. The **Force** property is optional, and defaults to **false**.
+
+```azurecli
+az rest --method post --url https://management.azure.com/subscriptions/<subscriptionId>/resourceGroups/<resourceGroup>/providers/Microsoft.ServiceBus/namespaces/<namespaceName>/failover?api-version=2023-01-01-preview --body "{'properties': {'PrimaryLocation': '<newPrimaryLocation>', 'api-version':'2023-01-01-preview', 'Force':'false'}}"
+```
+ ### Monitoring data replication Users can monitor the progress of the replication job by monitoring the replication lag metric in Log Analytics. - Enable Metrics logs in your Service Bus namespace as described at [Monitor Azure Service Bus](/azure/service-bus-messaging/monitor-service-bus).
Note the following considerations to keep in mind with this release:
- Promoting a complex distributed infrastructure should be [rehearsed](/azure/architecture/reliability/disaster-recovery#disaster-recovery-plan) at least once. ## Pricing
-The Premium tier for Service Bus is priced per [Messaging Unit](service-bus-premium-messaging.md#how-many-messaging-units-are-needed). With the Geo-Replication feature, secondary regions run on the same number of MUs as the primary region, and the pricing is calculated over the total number of MUs. Additionally, there is a charge for based on the published bandwidth times the number of secondary regions. During the early public preview, this charge is waived.
+The Premium tier for Service Bus is priced per [Messaging Unit](service-bus-premium-messaging.md#how-many-messaging-units-are-needed). With the Geo-Replication feature, secondary regions run on the same number of MUs as the primary region, and the pricing is calculated over the total number of MUs. Additionally, there's a charge based on the published bandwidth times the number of secondary regions. During the early public preview, this charge is waived.
## Next steps
storage Lifecycle Management Policy Configure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/lifecycle-management-policy-configure.md
A lifecycle management policy is composed of one or more rules that define a set
- The number of days since the blob was last modified. - The number of days since the blob was last accessed. To use this condition in an action, you should first [optionally enable last access time tracking](#optionally-enable-access-time-tracking).
+> [!NOTE]
+> Any operation that modifies the blob, including an update of the blob's metadata or properties, changes the last-modified time of the blob.
+ When the selected condition is true, then the management policy performs the specified action. For example, if you have defined an action to move a blob from the hot tier to the cool tier if it hasn't been modified for 30 days, then the lifecycle management policy will move the blob 30 days after the last write operation to that blob. For a blob snapshot or version, the condition that is checked is the number of days since the snapshot or version was created.
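To ground that example, here's a minimal sketch of such a rule expressed as a lifecycle management policy and applied with the Management REST API. The subscription, resource group, and account names are placeholders, and the policy resource name must be `default`.

```http
PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Storage/storageAccounts/{account-name}/managementPolicies/default?api-version=2023-01-01
    Content-Type: application/json
    Authorization: Bearer {management-token}

    {
        "properties": {
            "policy": {
                "rules": [
                    {
                        "enabled": true,
                        "name": "tier-to-cool-after-30-days",
                        "type": "Lifecycle",
                        "definition": {
                            "filters": { "blobTypes": [ "blockBlob" ] },
                            "actions": {
                                "baseBlob": {
                                    "tierToCool": { "daysAfterModificationGreaterThan": 30 }
                                }
                            }
                        }
                    }
                ]
            }
        }
    }
```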
storage Secure File Transfer Protocol Known Issues https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/blobs/secure-file-transfer-protocol-known-issues.md
Previously updated : 06/06/2024 Last updated : 06/24/2024
To learn more, see [SFTP permission model](secure-file-transfer-protocol-support
- By default, the Content-MD5 property of blobs that are uploaded by using SFTP are set to null. Therefore, if you want the Content-MD5 property of those blobs to contain an MD5 hash, your client must calculate that value, and then set the Content-MD5 property of the blob before the uploading the blob. -- Maximum file upload size via the SFTP endpoint is 100 GB.
+- Maximum file upload size via the SFTP endpoint is 500 GB.
- Customer-managed account failover is supported at the preview level in select regions. For more information, see [Azure storage disaster recovery planning and failover](../common/storage-disaster-recovery-guidance.md#azure-data-lake-storage-gen2).
storage Storage Explorer Support Policy Lifecycle https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/storage/common/storage-explorer-support-policy-lifecycle.md
This table describes the release date and the end of support date for each relea
| Storage Explorer version | Release date | End of support date | |:-:|::|:-:|
+| v1.34.0 | May 24, 2024 | May 24, 2025 |
| v1.33.0 | March 1, 2024 | March 1, 2025 | | v1.32.1 | November 15, 2023 | November 1, 2024 | | v1.32.0 | November 1, 2023 | November 1, 2024 |
trusted-signing Tutorial Assign Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/trusted-signing/tutorial-assign-roles.md
The Trusted Signing Identity Verifier role is *required* to manage identity vali
```azurecli az role assignment create --assignee <objectId of user/service principle> --role "Trusted Signing Certificate Profile Signer"
- --scope "/subscriptions/<subscriptionId>/resourceGroups/<resource-group-name>/providers/Microsoft.CodeSigning/trustedSigningAccounts/<trustedsigning-account-name>/certificateProfiles/<profileName>"
+ --scope "/subscriptions/<subscriptionId>/resourceGroups/<resource-group-name>/providers/Microsoft.CodeSigning/codeSigningAccounts/<trustedsigning-account-name>/certificateProfiles/<profileName>"
``` ## Related content
virtual-desktop Multimedia Redirection Intro https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection-intro.md
Title: Understanding multimedia redirection on Azure Virtual Desktop - Azure
description: An overview of multimedia redirection on Azure Virtual Desktop. Previously updated : 04/09/2024 Last updated : 06/27/2024 # Understanding multimedia redirection for Azure Virtual Desktop
> Call redirection is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-Multimedia redirection redirects media content from Azure Virtual Desktop to your local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support this feature when using the Windows Desktop client.
+Multimedia redirection redirects media content from Azure Virtual Desktop and Windows 365 to your local machine for faster processing and rendering. Both Microsoft Edge and Google Chrome support this feature when using the Windows Desktop client.
Multimedia redirection has two key components:
The following websites work with call redirection:
- [Content Guru Storm App](https://www.contentguru.com/en-us/news/content-guru-announces-its-storm-ccaas-solution-is-now-compatible-with-microsoft-azure-virtual-desktop/) - [Twilio Flex](https://www.twilio.com/en-us/blog/public-beta-flex-microsoft-azure-virtual-desktop#join-the-flex-for-azure-virtual-desktop-public-beta)
-Microsoft Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365 when using the native Teams app. However, if you use Teams live events with a browser that supports Teams live events and multimedia redirection, multimedia redirection is a workaround that provides smoother Teams live events playback on Azure Virtual Desktop. Multimedia redirection supports Enterprise Content Delivery Network (ECDN) for Teams live events.
+Microsoft Teams live events aren't media-optimized for Azure Virtual Desktop and Windows 365 when using the native Teams app. However, if you use Teams live events with a browser that supports Teams live events and multimedia redirection, multimedia redirection is a workaround that provides smoother Teams live events playback on Azure Virtual Desktop and Windows 365. Multimedia redirection supports Enterprise Content Delivery Network (ECDN) for Teams live events.
### Check if multimedia redirection is active
virtual-desktop Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/multimedia-redirection.md
Title: Use multimedia redirection on Azure Virtual Desktop - Azure
description: How to use multimedia redirection on Azure Virtual Desktop. Previously updated : 07/18/2023 Last updated : 06/27/2024 # Use multimedia redirection on Azure Virtual Desktop
> Multimedia redirection call redirection is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-This article will show you how to use multimedia redirection for Azure Virtual Desktop with Microsoft Edge or Google Chrome browsers. For more information about how multimedia redirection works, see [Understanding multimedia redirection for Azure Virtual Desktop](multimedia-redirection-intro.md).
+This article will show you how to use multimedia redirection for Azure Virtual Desktop and Windows 365 with Microsoft Edge or Google Chrome browsers. For more information about how multimedia redirection works, see [Understanding multimedia redirection for Azure Virtual Desktop](multimedia-redirection-intro.md).
## Prerequisites
-Before you can use multimedia redirection on Azure Virtual Desktop, you'll need the following things:
+Before you can use multimedia redirection on Azure Virtual Desktop and Windows 365, you'll need the following things:
-- An Azure Virtual Desktop deployment.-- Microsoft Edge or Google Chrome installed on your session hosts.
+- An Azure Virtual Desktop or Windows 365 deployment.
+- Microsoft Edge or Google Chrome installed on your session hosts or Cloud PCs.
- Windows Desktop client: - To use video playback redirection, you must install [Windows Desktop client, version 1.2.3916 or later](/windows-server/remote/remote-desktop-services/clients/windowsdesktop-whatsnew). This feature is only compatible with version 1.2.3916 or later of the Windows Desktop client.
virtual-desktop Troubleshoot Multimedia Redirection https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-desktop/troubleshoot-multimedia-redirection.md
Title: Troubleshoot Multimedia redirection on Azure Virtual Desktop - Azure
description: Known issues and troubleshooting instructions for multimedia redirection for Azure Virtual Desktop. Previously updated : 07/18/2023 Last updated : 06/27/2024 # Troubleshoot multimedia redirection for Azure Virtual Desktop
> Call redirection is currently in PREVIEW. > See the [Supplemental Terms of Use for Microsoft Azure Previews](https://azure.microsoft.com/support/legal/preview-supplemental-terms/) for legal terms that apply to Azure features that are in beta, preview, or otherwise not yet released into general availability.
-This article describes known issues and troubleshooting instructions for multimedia redirection for Azure Virtual Desktop.
+This article describes known issues and troubleshooting instructions for multimedia redirection for Azure Virtual Desktop and Windows 365.
## Known issues and limitations The following issues are ones we're already aware of, so you won't need to report them: -- In the first browser tab a user opens, the extension pop-up might show a message that says, "The extension is not loaded", or a message that says video playback or call redirection isn't supported while redirection is working correctly in the tab. You can resolve this issue by opening a second tab.
+- In the first browser tab a user opens, the extension pop-up might show a message that says "The extension is not loaded" or a message that says video playback or call redirection isn't supported while redirection is working correctly in the tab. You can resolve this issue by opening a second tab.
-- Multimedia redirection only works on the [Windows Desktop client](users/connect-windows.md). Any other clients, such as the macOS, iOS, Android or Web client, don't support multimedia redirection.
+- Multimedia redirection only works on the [Windows Desktop client](users/connect-windows.md). Any other clients, such as the macOS, iOS, Android, or Web client, don't support multimedia redirection.
- Multimedia redirection won't work as expected if the session hosts in your deployment are blocking cmd.exe.
The following issues are ones we're already aware of, so you won't need to repor
### The MSI installer doesn't work -- There's a small chance that the MSI installer won't be able to install the extension during internal testing. If you run into this issue, you'll need to install the multimedia redirection extension from the Microsoft Edge Store or Google Chrome Store.
+- There's a small chance that the MSI installer won't be able to install the extension during internal testing. If you run into this issue, you need to install the multimedia redirection extension from the Microsoft Edge Store or Google Chrome Store.
- [Multimedia redirection browser extension (Microsoft Edge)](https://microsoftedge.microsoft.com/addons/detail/wvd-multimedia-redirectio/joeclbldhdmoijbaagobkhlpfjglcihd) - [Multimedia browser extension (Google Chrome)](https://chrome.google.com/webstore/detail/wvd-multimedia-redirectio/lfmemoeeciijgkjkgbgikoonlkabmlno) -- Installing the extension on host machines with the MSI installer will either prompt users to accept the extension the first time they open the browser or display a warning or error message. If users deny this prompt, it can cause the extension to not load. To avoid this issue, install the extensions by [editing the group policy](multimedia-redirection.md#install-the-browser-extension-using-group-policy).
+- Installing the extension on host machines with the MSI installer will either prompt users to accept the extension the first time they open the browser or display a warning or error message. If users deny this prompt, it can cause the extension to not load. To avoid this issue, install the extensions by [editing the group policy](multimedia-redirection.md#install-the-browser-extension-using-group-policy).
-- Sometimes the host and client version number disappears from the extension status message, which prevents the extension from loading on websites that support it. If you've installed the extension correctly, this issue is because your host machine doesn't have the latest C++ Redistributable installed. To fix this issue, install the [latest supported Visual C++ Redistributable downloads](/cpp/windows/latest-supported-vc-redist).
+- Sometimes the host and client version number disappears from the extension status message, which prevents the extension from loading on websites that support it. If you installed the extension correctly, this issue is because your host machine doesn't have the latest C++ Redistributable installed. To fix this issue, install the [latest supported Visual C++ Redistributable downloads](/cpp/windows/latest-supported-vc-redist).
### Known issues for video playback redirection - Video playback redirection doesn't currently support protected content, so videos that use protected content, such as from Pluralsight and Netflix, won't work. -- When you resize the video window, the window's size will adjust faster than the video itself. You'll also see this issue when minimizing and maximizing the window.
+- When you resize the video window, the window's size adjusts faster than the video itself. You'll also see this issue when minimizing and maximizing the window.
-- If you access a video site, sometimes the video will remain in a loading or buffering state but never actually start playing. We're aware of this issue and are currently investigating it. For now, you can make videos load again by signing out of Azure Virtual Desktop and restarting your session.
+- If you access a video site, sometimes the video remains in a loading or buffering state but never actually starts playing. We're aware of this issue and are currently investigating it. For now, you can make videos load again by signing out of Azure Virtual Desktop and restarting your session.
### Known issues for call redirection
virtual-machine-scale-sets Standby Pools Faq https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machine-scale-sets/standby-pools-faq.md
Get answers to frequently asked questions about standby pools for Virtual Machine Scale Sets in Azure. ### What are standby pools for Virtual Machine Scale Sets?
-Azure standby pools is a feature for Virtual Machine Scale Sets with Flexible Orchestration that enables faster scaling out of resources by creating a pool of pre-provisioned virtual machines ready to service your workload.
+Azure standby pools for Virtual Machine Scale Sets with Flexible Orchestration enables faster scaling out of resources by creating a pool of pre-provisioned virtual machines ready to service your workload.
### When should I use standby pools for Virtual Machine Scale Sets? Using a standby pool with your Virtual Machine Scale Set can help improve scale-out performance by completing various pre and post provisioning steps in the pool before the instances are placed into the scale set. ### What are the benefits of using Azure standby pools for Virtual Machine Scale Sets?
-Standby pools is a powerful feature for accelerating your time to scale-out and reducing the management needed for provisioning virtual machine resources and getting them ready to service your workload. If your applications are latency sensitive or have long initialization steps, standby pools can help with reducing that time and managing the steps to make your virtual machines ready on your behalf.
+Standby pools are a powerful feature for accelerating your time to scale out and reducing the management needed for provisioning virtual machine resources and getting them ready to service your workload. If your applications are latency sensitive or have long initialization steps, standby pools can help with reducing that time and managing the steps to make your virtual machines ready on your behalf.
### Can I use standby pools on Virtual Machine Scale Sets with Uniform Orchestration?
-Standby pools is only supported on Virtual Machine Scale Sets with Flexible Orchestration.
+Standby pools are only supported on Virtual Machine Scale Sets with Flexible Orchestration.
+
+### Does using a standby pool guarantee capacity?
+Using a standby pool with deallocated instances doesn't guarantee capacity. When you start a deallocated virtual machine, there must be enough capacity in the region where your instances are deployed to start the machines. If you're using running virtual machines in your pool, those virtual machines are already allocated and consuming compute capacity. When a virtual machine moves from the standby pool to the Virtual Machine Scale Set, it doesn't release the compute resources and doesn't require any additional allocation of resources.
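As a rough sketch of where that choice shows up, the following request creates a standby pool attached to a scale set with deallocated instances. The resource names, API version, and property names reflect the standby pool management API as understood here and should be verified against the current reference before use.

```http
PUT https://management.azure.com/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.StandbyPool/standbyVirtualMachinePools/{pool-name}?api-version=2024-03-01
    Content-Type: application/json
    Authorization: Bearer {management-token}

    {
        "location": "eastus",
        "properties": {
            "elasticityProfile": { "maxReadyCapacity": 10 },
            "virtualMachineState": "Deallocated",
            "attachedVirtualMachineScaleSetId": "/subscriptions/{subscription-id}/resourceGroups/{resource-group}/providers/Microsoft.Compute/virtualMachineScaleSets/{scale-set-name}"
        }
    }
```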
+
+### How long can my standby pool name be?
+A standby pool name can be between 3 and 24 characters. For more information, see [Resource naming restrictions for Azure resources](..//azure-resource-manager/management/resource-name-rules.md).
### Can I use standby pools for Virtual Machine Scale Sets if I'm already using Azure autoscale?
Attaching a standby pool to a Virtual Machine Scale Set with Azure autoscale enabled isn't supported.
virtual-machines Automatic Vm Guest Patching https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/automatic-vm-guest-patching.md
Previously updated : 10/20/2021 Last updated : 07/03/2024
# Automatic Guest Patching for Azure Virtual Machines and Scale Sets
-> [!CAUTION]
-> This article references CentOS, a Linux distribution that is End Of Life (EOL) status. Please consider your use and plan accordingly. For more information, see the [CentOS End Of Life guidance](~/articles/virtual-machines/workloads/centos/centos-end-of-life.md).
- **Applies to:** :heavy_check_mark: Linux VMs :heavy_check_mark: Windows VMs :heavy_check_mark: Flexible scale sets Enabling automatic guest patching for your Azure Virtual Machines (VMs) and Scale Sets (VMSS) helps ease update management by safely and automatically patching virtual machines to maintain security compliance, while limiting the blast radius of VMs.
Automatic VM guest patching has the following characteristics:
## How does automatic VM guest patching work?
-If automatic VM guest patching is enabled on a VM, then the available *Critical* and *Security* patches are downloaded and applied automatically on the VM. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as required.
+If automatic VM guest patching is enabled on a VM, then the available *Critical* and *Security* patches are downloaded and applied automatically on the VM. This process kicks off automatically every month when new patches are released. Patch assessment and installation are automatic, and the process includes rebooting the VM as configured. The `rebootSetting` parameter on the VM model takes precedence over settings in another system, such as [Maintenance Configuration](https://learn.microsoft.com/azure/virtual-machines/maintenance-configurations#guest).
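
For context, here's a hedged sketch of where `rebootSetting` sits in the VM model for a Linux VM. The property names follow the Compute API as best recalled here, and the values are illustrative rather than recommended:

```json
"osProfile": {
  "linuxConfiguration": {
    "patchSettings": {
      "patchMode": "AutomaticByPlatform",
      "assessmentMode": "AutomaticByPlatform",
      "automaticByPlatformSettings": {
        "rebootSetting": "IfRequired"
      }
    }
  }
}
```

Whatever value is set here is the one the platform honors, per the precedence note above.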
The VM is assessed periodically every few days and multiple times within any 30-day period to determine the applicable patches for that VM. The patches can be installed any day on the VM during off-peak hours for the VM. This automatic assessment ensures that any missing patches are discovered at the earliest possible opportunity.
Patches are installed within 30 days of the monthly patch releases, following av
Definition updates and other patches not classified as *Critical* or *Security* won't be installed through automatic VM guest patching. To install patches with other patch classifications or schedule patch installation within your own custom maintenance window, you can use [Update Management](./windows/tutorial-config-management.md#manage-windows-updates).
-For IaaS VMs, customers can choose to configure VMs to enable automatic VM guest patching. This will limit the blast radius of VMs getting the updated patch and do an orchestrated update of the VMs. The service also provides [health monitoring](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md) to detect issues any issues with the update.
+Enabling Automatic Guest Patching on single-instance VMs or Virtual Machine Scale Sets in Flexible orchestration mode allows the Azure platform to update your fleet in phases. Phased deployment follows Azure's [Safe Deployment Practices](https://azure.microsoft.com/blog/advancing-safe-deployment-practices/) and reduces the impact radius if any issues are identified with the latest update. [Health monitoring](../virtual-machine-scale-sets/virtual-machine-scale-sets-health-extension.md) is recommended for single-instance VMs and required for Virtual Machine Scale Sets in Flexible orchestration mode to detect any issues with the update.
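
As an illustration of that health monitoring requirement, the Application Health extension is commonly used. The fragment below assumes the Linux variant probing an HTTP endpoint at `/health`; the name, version, and settings shown are examples, not a prescribed configuration:

```json
{
  "name": "HealthExtension",
  "properties": {
    "publisher": "Microsoft.ManagedServices",
    "type": "ApplicationHealthLinux",
    "typeHandlerVersion": "1.0",
    "autoUpgradeMinorVersion": true,
    "settings": {
      "protocol": "http",
      "port": 80,
      "requestPath": "/health"
    }
  }
}
```

For Windows instances the extension type would be `ApplicationHealthWindows`; the probe endpoint is whatever health path your application exposes.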
### Availability-first Updates
-The patch installation process is orchestrated globally by Azure for all VMs that have automatic VM guest patching enabled. This orchestration follows availability-first principles across different levels of availability provided by Azure.
+Azure orchestrates the patch installation process across all public and private clouds for VMs that have Automatic Guest Patching enabled. The orchestration follows availability-first principles across different levels of availability provided by Azure.
For a group of virtual machines undergoing an update, the Azure platform will orchestrate updates:
Narrowing the scope of VMs that are patched across regions, within a region, or
The patch installation date for a given VM may vary month-to-month, as a specific VM may be picked up in a different batch between monthly patching cycles.
### Which patches are installed?
-The patches installed depend on the rollout stage for the VM. Every month, a new global rollout is started where all security and critical patches assessed for an individual VM are installed for that VM. The rollout is orchestrated across all Azure regions in batches (described in the availability-first patching section above).
+The patches installed depend on the rollout stage for the VM. Every month, a new global rollout is started where all security and critical patches assessed for an individual VM are installed for that VM. The rollout is orchestrated across all Azure regions in batches.
The exact set of patches to be installed varies based on the VM configuration, including OS type and assessment timing. It's possible for two identical VMs in different regions to get different patches installed if more or fewer patches are available when the patch orchestration reaches different regions at different times. Similarly, but less frequently, VMs within the same region but assessed at different times (due to different Availability Zone or Availability Set batches) might get different patches.
When automatic VM guest patching is enabled for a VM, a VM extension of type `Mi
It can take more than three hours to enable automatic VM guest updates on a VM, as the enablement is completed during the VM's off-peak hours. The extension is also installed and updated during off-peak hours for the VM. If the VM's off-peak hours end before enablement can be completed, the enablement process will resume during the next available off-peak time.
-Please note that the platform will make periodic patching configuration calls to ensure alignment when model changes are detected on IaaS VMs or VMSS Flexible orchestration. Certain model changes such as, but not limited to, updating assessment mode, patch mode, and extension update may trigger a patching configuration call.
+The platform makes periodic patching configuration calls to ensure alignment when model changes are detected on IaaS VMs or scale sets in Flexible orchestration. Certain model changes, such as (but not limited to) updating the assessment mode, updating the patch mode, or updating an extension, may trigger a patching configuration call.
Automatic updates are disabled in most scenarios, and patch installation is done through the extension going forward. The following conditions apply.
- If a Windows VM previously had Automatic Windows Update turned on through the AutomaticByOS patch mode, then Automatic Windows Update is turned off for the VM when the extension is installed.
Example request body for Linux:
```json
{
  "maximumDuration": "PT1H",
- "rebootSetting": "IfRequired",
+ "Setting": "IfRequired",
"linuxParameters": { "classificationsToInclude": [ "Critical",
vpn-gateway Vpn Gateway About Vpn Devices https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/vpn-gateway/vpn-gateway-about-vpn-devices.md
In the following tables:
MSS clamping is done bidirectionally on the Azure VPN Gateway. The following table lists the packet size under different scenarios.

| **Packet Flow** | **IPv4** | **IPv6** |
+| --- | --- | --- |
| Over Internet | 1340 bytes | 1360 bytes |
| Over Express Route Gateway | 1250 bytes | 1250 bytes |