Updates from: 02/20/2024 02:08:30
Service Microsoft Docs article Related commit history on GitHub Change details
active-directory-b2c Custom Policies Series Validate User Input https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/custom-policies-series-validate-user-input.md
Azure Active Directory B2C (Azure AD B2C) custom policy not only allows you to m
## Step 1 - Validate user input by limiting user input options
-If you know all the possible values that a user can enter for a given input, you can provide a finite set of values that a user must select from. You can use *DropdownSinglSelect*, *CheckboxMultiSelect*, and *RadioSingleSelect* [UserInputType](claimsschema.md#userinputtype) for this purpose. In this article, you'll use a *RadioSingleSelect* input type:
+If you know all the possible values that a user can enter for a given input, you can provide a finite set of values that a user must select from. You can use *DropdownSingleSelect*, *CheckboxMultiSelect*, and *RadioSingleSelect* [UserInputType](claimsschema.md#userinputtype) for this purpose. In this article, you'll use a *RadioSingleSelect* input type:
1. In VS Code, open the file `ContosoCustomPolicy.XML`.
active-directory-b2c Session Behavior https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/active-directory-b2c/session-behavior.md
With single sign-on, users sign in once with a single account and get access to
When the user initially signs in to an application, Azure AD B2C persists a cookie-based session. Upon subsequent authentication requests, Azure AD B2C reads and validates the cookie-based session, and issues an access token without prompting the user to sign in again. If the cookie-based session expires or becomes invalid, the user is prompted to sign-in again.
+>[!NOTE]
+>If the user uses a browser that blocks third-party cookies, there are limitations with SSO due to limited access to the cookie-based session. The most user-visible impact is that more interactions are required for sign-in. Additionally, front-channel sign-out doesn't immediately clear the authentication state from federated applications. See our recommendations for [how to handle third-party cookie blocking in browsers](/entra/identity-platform/reference-third-party-cookies-spas).
+ ## Prerequisites [!INCLUDE [active-directory-b2c-customization-prerequisites](../../includes/active-directory-b2c-customization-prerequisites.md)]
ai-services Azure Openai Integration https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/language-service/question-answering/how-to/azure-openai-integration.md
Previously updated : 12/19/2023 Last updated : 02/09/2024

# Connect Custom Question Answering with Azure OpenAI on your data
At the same time, customers often require a custom answer authoring experience t
:::image type="content" source="../media/question-answering/chat-playground.png" alt-text="A screenshot of the playground page of the Azure OpenAI Studio with sections highlighted." lightbox="../media/question-answering/chat-playground.png":::
-You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../..//openai/concepts/use-your-data.md#using-the-web-app) to chat with the model over the web.
+You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../../openai/how-to/use-web-app.md) to chat with the model over the web.
## Next steps

* [Using Azure OpenAI on your data](../../../openai/concepts/use-your-data.md)
ai-services Use Your Data https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/concepts/use-your-data.md
Last updated 01/09/2023 recommendations: false+
-# Azure OpenAI on your data (preview)
+# Azure OpenAI On Your Data
-Azure OpenAI on your data enables you to run supported chat models such as GPT-35-Turbo and GPT-4 on your data without needing to train or fine-tune models. Running models on your data enables you to chat on top of, and analyze your data with greater accuracy and speed. By doing so, you can unlock valuable insights that can help you make better business decisions, identify trends and patterns, and optimize your operations. One of the key benefits of Azure OpenAI on your data is its ability to tailor the content of conversational AI.
+Use this article to learn about Azure OpenAI On Your Data, which makes it easier for developers to connect, ingest and ground their enterprise data to create personalized copilots (preview) rapidly. It enhances user comprehension, expedites task completion, improves operational efficiency, and aids decision-making.
-Because the model has access to, and can reference specific sources to support its responses, answers are not only based on its pretrained knowledge but also on the latest information available in the designated data source. This grounding data also helps the model avoid generating responses based on outdated or incorrect information.
+## What is Azure OpenAI On Your Data
-## What is Azure OpenAI on your data
-
-Azure OpenAI on your data works with OpenAI's powerful GPT-35-Turbo and GPT-4 language models, enabling them to provide responses based on your data. You can access Azure OpenAI on your data using a REST API or the web-based interface in the [Azure OpenAI Studio](https://oai.azure.com/) to create a solution that connects to your data to enable an enhanced chat experience.
-
-One of the key features of Azure OpenAI on your data is its ability to retrieve and utilize data in a way that enhances the model's output. Azure OpenAI on your data, together with Azure AI Search, determines what data to retrieve from the designated data source based on the user input and provided conversation history. This data is then augmented and resubmitted as a prompt to the OpenAI model, with retrieved information being appended to the original prompt. Although retrieved data is being appended to the prompt, the resulting input is still processed by the model like any other prompt. Once the data has been retrieved and the prompt has been submitted to the model, the model uses this information to provide a completion. See the [Data, privacy, and security for Azure OpenAI Service](/legal/cognitive-services/openai/data-privacy?context=/azure/ai-services/openai/context/context) article for more information.
+Azure OpenAI On Your Data enables you to run advanced AI models such as GPT-35-Turbo and GPT-4 on your own enterprise data without needing to train or fine-tune models. You can chat on top of and analyze your data with greater accuracy. You can specify sources to support the responses based on the latest information available in your designated data sources. You can access Azure OpenAI On Your Data using a REST API, via the SDK or the web-based interface in the [Azure OpenAI Studio](https://oai.azure.com/). You can also create a web app that connects to your data to enable an enhanced chat solution or deploy it directly as a copilot in the Copilot Studio (preview).
## Get started

To get started, [connect your data source](../use-your-data-quickstart.md) using Azure OpenAI Studio and start asking questions and chatting on your data.

> [!NOTE]
-> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) deployed in a [supported region](#azure-openai-on-your-data-regional-availability) with either the gpt-35-turbo or the gpt-4 models.
+> To get started, you need to already have been approved for [Azure OpenAI access](../overview.md#how-do-i-get-access-to-azure-openai) and have an [Azure OpenAI Service resource](../how-to/create-resource.md) deployed in a [supported region](#regional-availability-and-model-support) with either the gpt-35-turbo or the gpt-4 models.
+
+## Azure Role-based access controls (Azure RBAC) for adding data sources
+
+To use Azure OpenAI On Your Data fully, you need to assign one or more Azure RBAC roles. See [Use Azure OpenAI On Your Data securely](../how-to/use-your-data-securely.md#role-assignments) for more information.
## Data formats and file types
-Azure OpenAI on your data supports the following filetypes:
+Azure OpenAI On Your Data supports the following file types:
* `.txt`
* `.md`
* `.html`
-* Microsoft Word files
-* Microsoft PowerPoint files
-* PDF
+* `.docx`
+* `.pptx`
+* `.pdf`
-There is an [upload limit](../quotas-limits.md), and there are some caveats about document structure and how it might affect the quality of responses from the model:
-
-* The model provides the best citation titles from markdown (`.md`) files.
-
-* If a document is a PDF file, the text contents are extracted as a preprocessing step (unless you're connecting your own Azure AI Search index). If your document contains images, graphs, or other visual content, the model's response quality depends on the quality of the text that can be extracted from them.
+There's an [upload limit](../quotas-limits.md), and there are some caveats about document structure and how it might affect the quality of responses from the model:
* If you're converting data from an unsupported format into a supported format, make sure the conversion:
    * Doesn't lead to significant data loss.
    * Doesn't add unexpected noise to your data.
- This will affect the quality of the model response.
-
-## Ingesting your data
-
-There are several different sources of data that you can use. The following sources will be connected to Azure AI Search:
-* Blobs in an Azure storage container that you provide
-* Local files uploaded using the Azure OpenAI Studio
-
-You can additionally ingest your data from an existing Azure AI Search service, or use Azure Cosmos DB for MongoDB vCore.
-
-# [Azure AI Search](#tab/ai-search)
-
-> [!TIP]
-> For documents and datasets with long text, you should use the available [data preparation script](https://go.microsoft.com/fwlink/?linkid=2244395). The script chunks data so that your response with the service will be more accurate. This script also supports scanned PDF files and images.
-
-Once data is ingested, an [Azure AI Search](/azure/search/search-what-is-azure-search) index in your search resource gets created to integrate the information with Azure OpenAI models.
+ This affects the quality of the model response.
-**Data ingestion from Azure storage containers**
+* If your files have special formatting, such as tables and columns, or bullet points, prepare your data with the data preparation script available on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts#optional-crack-pdfs-to-text).
-1. Ingestion assets are created in Azure AI Search resource and Azure storage account. Currently these assets are: indexers, indexes, data sources, a [custom skill](/azure/search/cognitive-search-custom-skill-interface) in the search resource, and a container (later called the chunks container) in the Azure storage account. You can specify the input Azure storage container using the [Azure OpenAI studio](https://oai.azure.com/), or the [ingestion API](../reference.md#start-an-ingestion-job).
+* For documents and datasets with long text, you should use the available [data preparation script](https://github.com/microsoft/sample-app-aoai-chatGPT/tree/main/scripts#data-preparation). The script chunks data so that the model's responses are more accurate. This script also supports scanned PDF files and images.
-2. Data is read from the input container, contents are opened and chunked into small chunks with a maximum of 1,024 tokens each. If vector search is enabled, the service calculates the vector representing the embeddings on each chunk. The output of this step (called the "preprocessed" or "chunked" data) is stored in the chunks container created in the previous step.
-
-3. The preprocessed data is loaded from the chunks container, and indexed in the Azure AI Search index.
--
-**Data ingestion from local files**
-
-Using Azure OpenAI Studio, you can upload files from your machine. The service then stores the files to an Azure storage container and performs ingestion from the container.
-
-**Data ingestion from URLs**
-
-Using Azure OpenAI Studio, you can paste URLs and the service will store the webpage content, using it when generating responses from the model.
+## Supported data sources
-### Troubleshooting failed ingestion jobs
+You need to connect to a data source to upload your data. When you want to use your data to chat with an Azure OpenAI model, your data is chunked in a search index so that relevant data can be found based on user queries. For some data sources such as uploading files from your local machine (preview) or data contained in a blob storage account (preview), Azure AI Search is used.
-To troubleshoot a failed job, always look out for errors or warnings specified either in the API response or Azure OpenAI studio. Here are some of the common errors and warnings:
--
-**Quota Limitations Issues**
-
-*An index with the name X in service Y could not be created. Index quota has been exceeded for this service. You must either delete unused indexes first, add a delay between index creation requests, or upgrade the service for higher limits.*
+When you choose the following data sources, your data is ingested into an Azure AI Search index.
-*Standard indexer quota of X has been exceeded for this service. You currently have X standard indexers. You must either delete unused indexers first, change the indexer 'executionMode', or upgrade the service for higher limits.*
-
-Resolution:
-
-Upgrade to a higher pricing tier or delete unused assets.
-
-**Preprocessing Timeout Issues**
-
-*Could not execute skill because the Web API request failed*
-
-*Could not execute skill because Web API skill response is invalid.*
-
-Resolution:
+|Data source | Description |
+|||
+| [Azure AI Search](/azure/search/search-what-is-azure-search) | Use an existing Azure AI Search index with Azure OpenAI On Your Data. |
+|Upload files (preview) | Upload files from your local machine to be stored in an Azure Blob Storage account, and ingested into Azure AI Search. |
+|URL/Web address (preview) | Web content from the URLs is stored in Azure Blob Storage. |
+|Azure Blob Storage (preview) | Upload files from Azure Blob Storage to be ingested into an Azure AI Search index. |
-Break down the input documents into smaller documents and try again.
+# [Azure AI Search](#tab/ai-search)
-**Permissions Issues**
+You might want to consider using an Azure AI Search index when you either want to:
+* Customize the index creation process.
+* Reuse an index created before by ingesting data from other data sources.
-*This request is not authorized to perform this operation.*
+> [!NOTE]
+> To use an existing index, it must have at least one searchable field.
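
To make the requirement concrete, here's a minimal sketch of an Azure AI Search index definition with a key field and a searchable content field. The index and field names (`my-docs-index`, `content`, `title`) are illustrative only, not required values:

```json
{
  "name": "my-docs-index",
  "fields": [
    { "name": "id", "type": "Edm.String", "key": true, "filterable": true },
    { "name": "content", "type": "Edm.String", "searchable": true, "retrievable": true },
    { "name": "title", "type": "Edm.String", "searchable": true, "retrievable": true }
  ]
}
```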
-Resolution:
+### Search types
-This means the storage account is not accessible with the given credentials. In this case, please review the storage account credentials passed to the API and ensure the storage account is not hidden behind a private endpoint (if a private endpoint is not configured for this resource).
+Azure OpenAI On Your Data provides the following search types you can use when you add your data source.
+* [Keyword search](/azure/search/search-lucene-query-architecture)
-### Search options
+* [Semantic search](/azure/search/semantic-search-overview)
+* [Vector search](/azure/search/vector-search-overview) using Ada [embedding](./understand-embeddings.md) models, available in [selected regions](models.md#embeddings-models)
-Azure OpenAI on your data provides several search options you can use when you add your data source, leveraging the following types of search.
+ To enable vector search, you need an existing embedding model deployed in your Azure OpenAI resource. Select your embedding deployment when connecting your data, then select one of the vector search types under **Data management**. If you're using Azure AI Search as a data source, make sure you have a vector column in the index.
-* [Keyword search](/azure/search/search-lucene-query-architecture)
+If you're using your own index, you can customize the [field mapping](#index-field-mapping) when you add your data source to define the fields that will get mapped when answering questions. To customize field mapping, select **Use custom field mapping** on the **Data Source** page when adding your data source.
-* [Semantic search](/azure/search/semantic-search-overview)
-* [Vector search](/azure/search/vector-search-overview) using Ada [embedding](./understand-embeddings.md) models, available in [select regions](models.md#embeddings-models).
- To enable vector search, you will need a `text-embedding-ada-002` deployment in your Azure OpenAI resource. Select your embedding deployment when connecting your data, then select one of the vector search types under **Data management**.
> [!IMPORTANT]
-> * [Semantic search](/azure/search/semantic-search-overview#availability-and-pricing) and [vector search](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) are subject to additional pricing. You need to choose **Basic or higher SKU** to enable semantic search or vector search. See [pricing tier difference](/azure/search/search-sku-tier) and [service limits](/azure/search/search-limits-quotas-capacity) for more information.
-> * To help improve the quality of the information retrieval and model response, we recommend enabling [semantic search](/azure/search/semantic-search-overview) for the following languages: English, French, Spanish, Portuguese, Italian, Germany, Chinese(Zh), Japanese, Korean, Russian, and Arabic.
+> * [Semantic search](/azure/search/semantic-search-overview#availability-and-pricing) is subject to additional pricing. You need to choose a **Basic or higher SKU** to enable semantic search or vector search. See [pricing tier difference](/azure/search/search-sku-tier) and [service limits](/azure/search/search-limits-quotas-capacity) for more information.
+> * To help improve the quality of the information retrieval and model response, we recommend enabling [semantic search](/azure/search/semantic-search-overview) for the following data source languages: English, French, Spanish, Portuguese, Italian, German, Chinese (Zh), Japanese, Korean, Russian, and Arabic.
| Search option | Retrieval type | Additional pricing? |Benefits|
|---|---|---|---|
| *keyword* | Keyword search | No additional pricing. |Performs fast and flexible query parsing and matching over searchable fields, using terms or phrases in any supported language, with or without operators.|
| *semantic* | Semantic search | Additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Improves the precision and relevance of search results by using a reranker (with AI models) to understand the semantic meaning of query terms and documents returned by the initial search ranker|
-| *vector* | Vector search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. |Enables you to find documents that are similar to a given query input based on the vector embeddings of the content. |
+| *vector* | Vector search | No additional pricing |Enables you to find documents that are similar to a given query input based on the vector embeddings of the content. |
| *hybrid (vector + keyword)* | A hybrid of vector search and keyword search | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model. |Performs similarity search over vector fields using vector embeddings, while also supporting flexible query parsing and full text search over alphanumeric fields using term queries.|
-| *hybrid (vector + keyword) + semantic* | A hybrid of vector search, semantic, and keyword search for retrieval. | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model, and additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Leverages vector embeddings, language understanding and flexible query parsing to create rich search experiences and generative AI apps that can handle complex and diverse information retrieval scenarios. |
+| *hybrid (vector + keyword) + semantic* | A hybrid of vector search, semantic search, and keyword search. | [Additional pricing](https://azure.microsoft.com/pricing/details/cognitive-services/openai-service/) on your Azure OpenAI account from calling the embedding model, and additional pricing for [semantic search](/azure/search/semantic-search-overview#availability-and-pricing) usage. |Uses vector embeddings, language understanding, and flexible query parsing to create rich search experiences and generative AI apps that can handle complex and diverse information retrieval scenarios. |
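
In the API, the chosen search option is expressed on the Azure AI Search data source. The sketch below mirrors the request shape of the existing example in this article; the `queryType` and `embeddingDeploymentName` parameter names are assumptions based on the preview extensions API and might differ between API versions, so check the API reference before relying on them:

```json
{
  "dataSources": [
    {
      "type": "AzureCognitiveSearch",
      "parameters": {
        "endpoint": "'$SearchEndpoint'",
        "key": "'$SearchKey'",
        "indexName": "'$SearchIndex'",
        "queryType": "vectorSemanticHybrid",
        "embeddingDeploymentName": "'$AdaDeployment'"
      }
    }
  ]
}
```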
+
+### Intelligent search
+
+Azure OpenAI On Your Data has intelligent search enabled for your data. If both semantic search and keyword search are available, semantic search is enabled by default. If you also have embedding models, intelligent search defaults to hybrid + semantic search.
+
+### Document-level access control
+
+> [!NOTE]
+> Document-level access control is supported when you select Azure AI Search as your data source.
+
+Azure OpenAI On Your Data lets you restrict the documents that can be used in responses for different users with Azure AI Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document-level access, the search results returned from Azure AI Search and used to generate a response are trimmed based on the user's Microsoft Entra group membership. You can only enable document-level access on existing Azure AI Search indexes. See [Use Azure OpenAI On Your Data securely](../how-to/use-your-data-securely.md#document-level-access-control) for more information.
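
As background, the trimming relies on a filterable security field in the index. The default field schema looks like this (`group_ids` is the default name; a different field name can be mapped in [index field mapping](#index-field-mapping)):

```json
{ "name": "group_ids", "type": "Collection(Edm.String)", "filterable": true }
```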
-The optimal search option can vary depending on your dataset and use-case. You might need to experiment with multiple options to determine which works best for your use-case.
### Index field mapping
If you're using your own index, you will be prompted in the Azure OpenAI Studio
In this example, the fields mapped to **Content data** and **Title** provide information to the model to answer questions. **Title** is also used to title citation text. The field mapped to **File name** generates the citation names in the response.
-Mapping these fields correctly helps ensure the model has better response and citation quality.
-
-### Using the model
-
-After ingesting your data, you can start chatting with the model on your data using the chat playground in Azure OpenAI studio, or the following methods:
-* [Web app](#using-the-web-app)
-* [REST API](../reference.md#azure-ai-search)
-* [C#](https://github.com/Azure/azure-sdk-for-net/blob/main/sdk/openai/Azure.AI.OpenAI/tests/Samples/AzureOnYourData.cs)
-* [Java](https://github.com/Azure/azure-sdk-for-java/blob/main/sdk/openai/azure-ai-openai/src/samples/java/com/azure/ai/openai/ChatCompletionsWithYourData.java)
-* [JavaScript](https://github.com/Azure/azure-sdk-for-js/blob/main/sdk/openai/openai/samples/v1-beta/javascript/bringYourOwnData.js)
-* [PowerShell](../use-your-data-quickstart.md?tabs=command-line%2Cpowershell&pivots=programming-language-powershell#example-powershell-commands)
-* [Python](https://github.com/openai/openai-cookbook/blob/main/examples/azure/chat_with_your_own_data.ipynb)
+Mapping these fields correctly helps ensure the model has better response and citation quality. You can additionally configure this [in the API](../reference.md#completions-extensions) using the `fieldsMapping` parameter.
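
As a rough sketch, a custom field mapping passed through the API might look like the following. The property names (`contentFields`, `titleField`, `urlField`, `filepathField`, `vectorFields`) are assumptions based on the preview extensions API; verify them against the API reference for your API version:

```json
{
  "dataSources": [
    {
      "type": "AzureCognitiveSearch",
      "parameters": {
        "endpoint": "'$SearchEndpoint'",
        "key": "'$SearchKey'",
        "indexName": "'$SearchIndex'",
        "fieldsMapping": {
          "contentFields": ["content"],
          "titleField": "title",
          "urlField": "url",
          "filepathField": "filepath",
          "vectorFields": ["contentVector"]
        }
      }
    }
  ]
}
```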
# [Azure Cosmos DB for MongoDB vCore](#tab/mongo-db)
After ingesting your data, you can start chatting with the model on your data us
### Data preparation
-Use the script [provided on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/blob/feature/2023-9/scripts/cosmos_mongo_vcore_data_preparation.py) to prepare your data.
+Use the script provided on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT/blob/feature/2023-9/scripts/cosmos_mongo_vcore_data_preparation.py) to prepare your data.
-### Add your data source in Azure OpenAI Studio
+<!--### Add your data source in Azure OpenAI Studio
To add Azure Cosmos DB for MongoDB vCore as a data source, you will need an existing Azure Cosmos DB for MongoDB vCore index containing your data, and a deployed Azure OpenAI Ada embeddings model that will be used for vector search.
To add Azure Cosmos DB for MongoDB vCore as a data source, you will need an exis
1. **Select Database**. In the dropdown menus, select the database name, database collection, and index name that you want to use as your data source. Select the embedding model deployment you would like to use for vector search on this data source, and acknowledge that you will incur charges for using vector search. Then select **Next**. :::image type="content" source="../media/use-your-data/select-mongo-database.png" alt-text="A screenshot showing the screen for adding Mongo DB settings in Azure OpenAI Studio." lightbox="../media/use-your-data/select-mongo-database.png":::
+-->
-1. Enter the database data fields to properly map your data for retrieval.
-
- * Content data (required): The provided field(s) will be used to ground the model on your data. For multiple fields, separate the values with commas, with no spaces.
- * File name/title/URL: Used to display more information when a document is referenced in the chat.
- * Vector fields (required): Select the field in your database that contains the vectors.
-
- :::image type="content" source="../media/use-your-data/mongo-index-mapping.png" alt-text="A screenshot showing the index field mapping options for Mongo DB." lightbox="../media/use-your-data/mongo-index-mapping.png":::
-
-### Using the model
-
-After ingesting your data, you can start chatting with the model on your data using the chat playground in Azure OpenAI studio, or the following methods:
-* [Web app](#using-the-web-app)
-* [REST API](../reference.md#azure-cosmos-db-for-mongodb-vcore)
-
-# [URL/web address](#tab/url-web)
-
-Currently, you can add your data from a URL/web address. Your data from a URL/web address needs to have the following characteristics to be properly ingested:
-
-* A public website, such as [Using your data with Azure OpenAI Service - Azure OpenAI | Microsoft Learn](/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search). Note that you cannot add a URL/Web address with access control, such as with password.
-
-* An HTTPS website.
-
-* The size of content in each URL is smaller than 5MB.
-
-* The website can be downloaded as one of the [supported file types](#data-formats-and-file-types).
+### Index field mapping
-You can use URL as a data source in both the Azure OpenAI Studio and the [ingestion API](../reference.md#start-an-ingestion-job). To use URL/web address as a data source, you need to have an Azure AI Search resource and an Azure Blob Storage resource. When using the Ingestion API you need to create a container first. Only one layer of nested links is supported. Only up to 20 links, on the web page will be fetched.
+When you add your Azure Cosmos DB for MongoDB vCore data source, you can specify data fields to properly map your data for retrieval.
+* Content data (required): One or more provided fields that will be used to ground the model on your data. For multiple fields, separate the values with commas, with no spaces.
+* File name/title/URL: Used to display more information when a document is referenced in the chat.
+* Vector fields (required): Select the field in your database that contains the vectors.
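
Purely as an illustration of how these three mappings line up, a hypothetical mapping for a collection might look like the sketch below. The field names (`chunk_text`, `title`, `contentVector`) and the property names are placeholders, not a documented schema:

```json
{
  "fieldsMapping": {
    "contentFields": ["chunk_text"],
    "titleField": "title",
    "vectorFields": ["contentVector"]
  }
}
```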
-Once you have added the URL/web address for data ingestion, the web pages from your URL are fetched and saved to your Azure Blob Storage account with a container name: `webpage-<index name>`. Each URL will be saved into a different container within the account. Then the files are indexed into an Azure AI Search index, which is used for retrieval when you're chatting with the model.
-
-When you want to reuse the same URL/web address, you can select [Azure AI Search](/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search) as your data source and select the index you created with your URL previously. Then you can use the already indexed files instead of having the system crawl your URL again. You can use the Azure AI Search index directly and delete the storage container to free up your storage space.
--
+# [Azure Blob Storage (preview)](#tab/blob-storage)
-## Runtime parameters
-
-You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). You do not need to re-ingest your data when you update these parameters.
--
-|Parameter name | Description |
-|||
-| **Limit responses to your data** | This flag configures the chatbot's approach to handling queries unrelated to the data source or when search documents are insufficient for a complete answer. When this setting is disabled, the model supplements its responses with its own knowledge in addition to your documents. When this setting is enabled, the model attempts to only rely on your documents for responses. This is the `inScope` parameter in the API. |
-|**Top K Documents** | This parameter is an integer that can be set to 3, 5, 10, or 20, and controls the number of document chunks provided to the large language model for formulating the final response. By default, this is set to 5. The search process can be noisy and sometimes, due to chunking, relevant information may be spread across multiple chunks in the search index. Selecting a top-K number, like 5, ensures that the model can extract relevant information, despite the inherent limitations of search and chunking. However, increasing the number too high can potentially distract the model. Additionally, the maximum number of documents that can be effectively used depends on the version of the model, as each has a different context size and capacity for handling documents. If you find that responses are missing important context, try increasing this parameter. Conversely, if you think the model is providing irrelevant information alongside useful data, consider decreasing it. This is the `topNDocuments` parameter in the API. |
-| **Strictness** | Determines the system's aggressiveness in filtering search documents based on their similarity scores. The system queries Azure Search or other document stores, then decides which documents to provide to large language models like ChatGPT. Filtering out irrelevant documents can significantly enhance the performance of the end-to-end chatbot. Some documents are excluded from the top-K results if they have low similarity scores before forwarding them to the model. This is controlled by an integer value ranging from 1 to 5. Setting this value to 1 means that the system will minimally filter documents based on search similarity to the user query. Conversely, a setting of 5 indicates that the system will aggressively filter out documents, applying a very high similarity threshold. If you find that the chatbot omits relevant information, lower the filter's strictness (set the value closer to 1) to include more documents. Conversely, if irrelevant documents distract the responses, increase the threshold (set the value closer to 5). This is the `strictness` parameter in the API. |
--
-## Document-level access control
-
-> [!NOTE]
-> Document-level access control is supported for Azure AI search only.
-
-Azure OpenAI on your data lets you restrict the documents that can be used in responses for different users with Azure AI Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document level access, the search results returned from Azure AI Search and used to generate a response will be trimmed based on user Microsoft Entra group membership. You can only enable document-level access on existing Azure AI Search indexes. To enable document-level access:
-
-1. Follow the steps in the [Azure AI Search documentation](/azure/search/search-security-trimming-for-azure-search-with-aad) to register your application and create users and groups.
-1. [Index your documents with their permitted groups](/azure/search/search-security-trimming-for-azure-search-with-aad#index-document-with-their-permitted-groups). Be sure that your new [security fields](/azure/search/search-security-trimming-for-azure-search#create-security-field) have the schema below:
-
- ```json
- {"name": "group_ids", "type": "Collection(Edm.String)", "filterable": true }
- ```
-
- `group_ids` is the default field name. If you use a different field name like `my_group_ids`, you can map the field in [index field mapping](#index-field-mapping).
-
-1. Make sure each sensitive document in the index has the value set correctly on this security field to indicate the permitted groups of the document.
-1. In [Azure OpenAI Studio](https://oai.azure.com/portal), add your data source. in the [index field mapping](#index-field-mapping) section, you can map zero or one value to the **permitted groups** field, as long as the schema is compatible. If the **Permitted groups** field isn't mapped, document level access won't be enabled.
-
-**Azure OpenAI Studio**
-
-Once the Azure AI Search index is connected, your responses in the studio will have document access based on the Microsoft Entra permissions of the logged in user.
-
-**Web app**
-
-If you are using a published [web app](#using-the-web-app), you need to redeploy it to upgrade to the latest version. The latest version of the web app includes the ability to retrieve the groups of the logged in user's Microsoft Entra account, cache it, and include the group IDs in each API request.
-
-**API**
-
-When using the API, pass the `filter` parameter in each API request. For example:
-
-```json
-{
- "messages": [
- {
- "role": "user",
- "content": "who is my manager?"
- }
- ],
- "dataSources": [
- {
- "type": "AzureCognitiveSearch",
- "parameters": {
- "endpoint": "'$SearchEndpoint'",
- "key": "'$SearchKey'",
- "indexName": "'$SearchIndex'",
- "filter": "my_group_ids/any(g:search.in(g, 'group_id1, group_id2'))"
- }
- }
- ]
-}
-```
-* `my_group_ids` is the field name that you selected for **Permitted groups** during [fields mapping](#index-field-mapping).
-* `group_id1, group_id2` are groups attributed to the logged in user. The client application can retrieve and cache users' groups.
+You might want to use Azure Blob Storage as a data source if you want to connect to an existing Azure Blob Storage account and use files stored in your containers.
## Schedule automatic index refreshes

> [!NOTE]
-> * Automatic index refreshing is supported for Azure Blob storage only.
+> * Automatic index refreshing is supported for Azure Blob Storage only.
> * If a document is deleted from input blob container, the corresponding chunk index records won't be removed by the scheduled refresh.
-To keep your Azure AI Search index up-to-date with your latest data, you can schedule a refresh for it that runs automatically rather than manually updating it every time your data is updated. Automatic index refresh is only available when you choose **blob storage** as the data source. To enable an automatic index refresh:
+To keep your Azure AI Search index up-to-date with your latest data, you can schedule an automatic index refresh rather than manually updating it every time your data is updated. Automatic index refresh is only available when you choose **Azure Blob Storage** as the data source. To enable an automatic index refresh:
1. [Add a data source](../quickstart.md) using Azure OpenAI studio.
1. Under **Select or add data source** select **Indexer schedule** and choose the refresh cadence you would like to apply.
To modify the schedule, you can use the [Azure portal](https://portal.azure.com/
1. Select **Save**.
-## Recommended settings
+# [Upload files (preview)](#tab/file-upload)
-Use the following sections to help you configure Azure OpenAI on your data for optimal results.
+Using Azure OpenAI Studio, you can upload files from your machine to try Azure OpenAI On Your Data, and optionally create a new Azure Blob Storage account and Azure AI Search resource. The service then stores the files to an Azure storage container and performs ingestion from the container. You can use the [quickstart](../use-your-data-quickstart.md) article to learn how to use this data source option.
-### System message
-Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality, what it should and shouldn't answer, and how to format responses. There are token limits that apply to the system message, used with every API call, and counted against the overall token limit. The system message will be truncated if it exceeds the token limits listed in the [token estimation](#token-usage-estimation-for-azure-openai-on-your-data) section.
-For example, if you're creating a chatbot where the data consists of transcriptions of quarterly financial earnings calls, you might use the following system message:
+# [URL/Web address (preview)](#tab/web-pages)
-*"You are a financial chatbot useful for answering questions from financial reports. You are given excerpts from the earnings call. Please answer the questions by parsing through all dialogue."*
+You can paste URLs and the service will store the webpage content, using it when generating responses from the model. The content in the URLs/web addresses that you use needs to have the following characteristics to be properly ingested:
-This system message can help improve the quality of the response by specifying the domain (in this case finance) and mentioning that the data consists of call transcriptions. It helps set the necessary context for the model to respond appropriately.
+* A public website, such as [Using your data with Azure OpenAI Service - Azure OpenAI | Microsoft Learn](/azure/ai-services/openai/concepts/use-your-data?tabs=ai-search). You can't add a URL/Web address with access control, such as ones with a password.
+* An HTTPS website.
+* The size of content in each URL is smaller than 5 MB.
+* The website can be downloaded as one of the [supported file types](#data-formats-and-file-types).
+* Only one layer of nested links is supported. Only up to 20 links on the web page will be fetched.
-> [!NOTE]
-> The system message is used to modify how GPT assistant responds to a user question based on retrieved documentation. It does not affect the retrieval process. If you'd like to provide instructions for the retrieval process, it is better to include them in the questions.
-> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors such as objectivity, and avoiding controversial statements. Unexpected behavior might occur if the system message contradicts with these behaviors.
+<!--:::image type="content" source="../media/use-your-data/url.png" alt-text="A screenshot of the Azure OpenAI use your data url/webpage studio configuration page." lightbox="../media/use-your-data/url.png":::-->
-### Maximum response
+Once you have added the URL/web address for data ingestion, the web pages from your URL are fetched and saved to Azure Blob Storage with a container name: `webpage-<index name>`. Each URL will be saved into a different container within the account. Then the files are indexed into an Azure AI Search index, which is used for retrieval when you're chatting with the model.
-Set a limit on the number of tokens per model response. The upper limit for Azure OpenAI on Your Data is 1500. This is equivalent to setting the `max_tokens` parameter in the API.
+
-### Limit responses to your data
+### How data is ingested into Azure AI Search
-This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model might more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
+Data is ingested into Azure AI Search using the following process:
+1. Ingestion assets are created in Azure AI Search resource and Azure storage account. Currently these assets are: indexers, indexes, data sources, a [custom skill](/azure/search/cognitive-search-custom-skill-interface) in the search resource, and a container (later called the chunks container) in the Azure storage account. You can specify the input Azure storage container using the [Azure OpenAI studio](https://oai.azure.com/), or the [ingestion API (preview)](../reference.md#start-an-ingestion-job-preview).
+2. Data is read from the input container, contents are opened and chunked into small chunks with a maximum of 1,024 tokens each. If vector search is enabled, the service calculates the vector representing the embeddings on each chunk. The output of this step (called the "preprocessed" or "chunked" data) is stored in the chunks container created in the previous step.
-### Interacting with the model
+3. The preprocessed data is loaded from the chunks container, and indexed in the Azure AI Search index.
-Use the following practices for best results when chatting with the model.
+## Deploy to a copilot (preview) or web app
-**Conversation history**
+After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI studio.
-* Before starting a new conversation (or asking a question that is not related to the previous ones), clear the chat history.
-* Getting different responses for the same question between the first conversational turn and subsequent turns can be expected because the conversation history changes the current state of the model. If you receive incorrect answers, report it as a quality bug.
-**Model response**
+This gives you the option of deploying a standalone web app for you and your users to interact with chat models using a graphical user interface. See [Use the Azure OpenAI web app](../how-to/use-web-app.md) for more information.
-* If you are not satisfied with the model response for a specific question, try either making the question more specific or more generic to see how the model responds, and reframe your question accordingly.
+You can also deploy to a copilot in [Copilot Studio](/microsoft-copilot-studio/fundamentals-what-is-copilot-studio) (preview) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various channels such as: Microsoft Teams, websites, Dynamics 365, and other [Azure Bot Service channels](/microsoft-copilot-studio/publication-connect-bot-to-azure-bot-service-channels). The tenant used in the Azure OpenAI service and Copilot Studio (preview) should be the same. For more information, see [Use a connection to Azure OpenAI On Your Data](/microsoft-copilot-studio/nlu-generative-answers-azure-openai).
-* [Chain-of-thought prompting](advanced-prompt-engineering.md?pivots=programming-language-chat-completions#chain-of-thought-prompting) has been shown to be effective in getting the model to produce desired outputs for complex questions/tasks.
+> [!NOTE]
+> Deploying to a copilot in Copilot Studio (preview) is only available in US regions.
-**Question length**
+## Use Azure OpenAI On Your Data securely
-Avoid asking long questions and break them down into multiple questions if possible. The GPT models have limits on the number of tokens they can accept. Token limits are counted toward: the user question, the system message, the retrieved search documents (chunks), internal prompts, the conversation history (if any), and the response. If the question exceeds the token limit, it will be truncated.
+You can use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks, and private endpoints. You can also restrict the documents that can be used in responses for different users with Azure AI Search security filters. See [Securely use Azure OpenAI On Your Data](../how-to/use-your-data-securely.md).
-**Multi-lingual support**
+## Best practices
-* Currently, keyword search and semantic search in Azure OpenAI on your data supports queries are in the same language as the data in the index. For example, if your data is in Japanese, then input queries also need to be in Japanese. For cross-lingual document retrieval, we recommend building the index with [Vector search](/azure/search/vector-search-overview) enabled.
+Use the following sections to learn how to improve the quality of responses given by the model.
-* To help improve the quality of the information retrieval and model response, we recommend enabling [semantic search](/azure/search/semantic-search-overview) for the following languages: English, French, Spanish, Portuguese, Italian, Germany, Chinese(Zh), Japanese, Korean, Russian, Arabic
+### Runtime parameters
-* We recommend using a system message to inform the model that your data is in another language. For example:
+You can modify the following additional settings in the **Data parameters** section in Azure OpenAI Studio and [the API](../reference.md#completions-extensions). You don't need to reingest your data when you update these parameters.
-* *"**You are an AI assistant designed to help users extract information from retrieved Japanese documents. Please scrutinize the Japanese documents carefully before formulating a response. The user's query will be in Japanese, and you must response also in Japanese."*
-* If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI.
+|Parameter name | Description |
+|||
+| **Limit responses to your data** | This flag configures the chatbot's approach to handling queries unrelated to the data source or when search documents are insufficient for a complete answer. When this setting is disabled, the model supplements its responses with its own knowledge in addition to your documents. When this setting is enabled, the model attempts to only rely on your documents for responses. This is the `inScope` parameter in the API. |
+|**Retrieved documents** | This parameter is an integer that can be set to 3, 5, 10, or 20, and controls the number of document chunks provided to the large language model for formulating the final response. By default, this is set to 5. The search process can be noisy and sometimes, due to chunking, relevant information might be spread across multiple chunks in the search index. Selecting a top-K number, like 5, ensures that the model can extract relevant information, despite the inherent limitations of search and chunking. However, increasing the number too high can potentially distract the model. Additionally, the maximum number of documents that can be effectively used depends on the version of the model, as each has a different context size and capacity for handling documents. If you find that responses are missing important context, try increasing this parameter. This is the `topNDocuments` parameter in the API. |
+| **Strictness** | Determines the system's aggressiveness in filtering search documents based on their similarity scores. The system queries Azure Search or other document stores, then decides which documents to provide to large language models like ChatGPT. Filtering out irrelevant documents can significantly enhance the performance of the end-to-end chatbot. Some documents are excluded from the top-K results if they have low similarity scores before forwarding them to the model. This is controlled by an integer value ranging from 1 to 5. Setting this value to 1 means that the system will minimally filter documents based on search similarity to the user query. Conversely, a setting of 5 indicates that the system will aggressively filter out documents, applying a very high similarity threshold. If you find that the chatbot omits relevant information, lower the filter's strictness (set the value closer to 1) to include more documents. Conversely, if irrelevant documents distract the responses, increase the threshold (set the value closer to 5). This is the `strictness` parameter in the API. |
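
In the API, these settings correspond to parameters on the data source. A minimal sketch, following the request shape of the Azure AI Search example earlier in this article and using only the parameter names called out in the table above:

```json
{
  "dataSources": [
    {
      "type": "AzureCognitiveSearch",
      "parameters": {
        "endpoint": "'$SearchEndpoint'",
        "key": "'$SearchKey'",
        "indexName": "'$SearchIndex'",
        "inScope": true,
        "topNDocuments": 5,
        "strictness": 3
      }
    }
  ]
}
```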
-### Deploying the model
+### System message
-After you connect Azure OpenAI to your data, you can deploy it using the **Deploy to** button in Azure OpenAI studio.
+You can define a system message to steer the model's reply when using Azure OpenAI On Your Data. This message allows you to customize your replies on top of the retrieval augmented generation (RAG) pattern that Azure OpenAI On Your Data uses. The system message is used in addition to an internal base prompt to provide the experience. To support this, we truncate the system message after a specific [number of tokens](#token-usage-estimation-for-azure-openai-on-your-data) to ensure the model can answer questions using your data. If you are defining extra behavior on top of the default experience, ensure that your system prompt is detailed and explains the exact expected customization.
+Once you add your dataset, you can use the **System message** section in the Azure OpenAI Studio, or the `roleInformation` [parameter in the API](../reference.md#completions-extensions).
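
For example, a sketch of passing a system message through `roleInformation` (the request shape mirrors the Azure AI Search example earlier in this article, and the role text is the support-bot example used below):

```json
{
  "dataSources": [
    {
      "type": "AzureCognitiveSearch",
      "parameters": {
        "endpoint": "'$SearchEndpoint'",
        "key": "'$SearchKey'",
        "indexName": "'$SearchIndex'",
        "roleInformation": "You are an expert incident support assistant that helps users solve new issues."
      }
    }
  ]
}
```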
-#### Using Power Virtual Agents
-You can deploy your model to [Power Virtual Agents](/power-virtual-agents/fundamentals-what-is-power-virtual-agents) directly from Azure OpenAI studio, enabling you to bring conversational experiences to various Microsoft Teams, Websites, Power Platform solutions, Dynamics 365, and other [Azure Bot Service channels](/power-virtual-agents/publication-connect-bot-to-azure-bot-service-channels). Power Virtual Agents acts as a conversational and generative AI platform, making the process of creating, publishing and deploying a bot to any number of channels simple and accessible.
+#### Potential usage patterns
-While Power Virtual Agents has features that leverage Azure OpenAI such as [generative answers](/power-virtual-agents/nlu-boost-conversations), deploying a model grounded on your data lets you create a chatbot that will respond using your data, and connect it to the Power Platform. The tenant used in the Azure OpenAI service and Power Platform should be the same. For more information, see [Use a connection to Azure OpenAI on your data](/power-virtual-agents/nlu-generative-answers-azure-openai).
+**Define a role**
-> [!VIDEO https://www.microsoft.com/videoplayer/embed/RW18YwQ]
+You can define a role that you want your assistant to play. For example, if you are building a support bot, you can add *"You are an expert incident support assistant that helps users solve new issues."*.
-> [!NOTE]
-> Deploying to Power Virtual Agents from Azure OpenAI is only available to US regions.
-> Power Virtual Agents supports Azure AI Search indexes with keyword or semantic search only. Other data sources and advanced features might not be supported.
+**Define the type of data being retrieved**
-#### Using the web app
+You can also describe the nature of the data you're providing to the assistant.
+* Define the topic or scope of your dataset, like "financial report", "academic paper", or "incident report". For example, for technical support you might add *"You answer queries using information from similar incidents in the retrieved documents."*.
+* If your data has certain characteristics, you can add these details to the system message. For example, if your documents are in Japanese, you can add *"You retrieve Japanese documents and you should read them carefully in Japanese and answer in Japanese."*.
+* If your documents include structured data like tables from a financial report, you can also add this fact into the system prompt. For example, if your data has tables, you might add *"You are given data in form of tables pertaining to financial results and you should read the table line by line to perform calculations to answer user questions."*.
-You can also use the available standalone web app to interact with your model using a graphical user interface, which you can deploy using either Azure OpenAI studio or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).
+**Define the output style**
-![A screenshot of the web app interface.](../media/use-your-data/web-app.png)
+You can also change the model's output by defining a system message. For example, if you want to ensure that the assistant answers are in French, you can add a prompt like *"You are an AI assistant that helps users who understand French find information. The user questions can be in English or French. Please read the retrieved documents carefully and answer them in French. Please translate the knowledge from documents to French to ensure all answers are in French."*.
-##### Web app customization
+**Reaffirm critical behavior**
-You can also customize the app's frontend and backend logic. For example, you could change the icon that appears in the center of the app by updating `/frontend/src/assets/Azure.svg` and then redeploying the app [using the Azure CLI](https://github.com/microsoft/sample-app-aoai-chatGPT#deploy-with-the-azure-cli). See the source code for the web app, and more information [on GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT).
+Azure OpenAI On Your Data works by sending instructions to a large language model in the form of prompts to answer user queries using your data. If there is a certain behavior that is critical to the application, you can repeat the behavior in the system message to increase its accuracy. For example, to guide the model to only answer from documents, you can add "*Please answer using retrieved documents only, and without using your knowledge. Please generate citations to retrieved documents for every claim in your answer. If the user question cannot be answered using retrieved documents, please explain the reasoning behind why documents are relevant to user queries. In any case, do not answer using your own knowledge."*.
-When customizing the app, we recommend:
+**Prompt Engineering tricks**
-- Resetting the chat session (clear chat) if the user changes any settings. Notify the user that their chat history will be lost.
+There are many tricks in prompt engineering that you can try to improve the output. One example is chain-of-thought prompting where you can add *"Let's think step by step about information in retrieved documents to answer user queries. Extract relevant knowledge to user queries from documents step by step and form an answer bottom up from the extracted information from relevant documents."*.
-- Clearly communicating the effect on the user experience that each setting you implement will have.
+> [!NOTE]
+> The system message is used to modify how the GPT assistant responds to a user question based on retrieved documentation. It does not affect the retrieval process. If you'd like to provide instructions for the retrieval process, it is better to include them in the questions.
+> The system message is only guidance. The model might not adhere to every instruction specified because it has been primed with certain behaviors such as objectivity, and avoiding controversial statements. Unexpected behavior might occur if the system message contradicts these behaviors.
-- When you rotate API keys for your Azure OpenAI or Azure AI Search resource, be sure to update the app settings for each of your deployed apps to use the new keys.
-- Pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes and improvements.
-##### Important considerations
+### Maximum response
-- Publishing creates an Azure App Service in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
-- By default, the app will only be accessible to you. To add authentication (for example, restrict access to the app to members of your Azure tenant):
+Set a limit on the number of tokens per model response. The upper limit for Azure OpenAI On Your Data is 1500. This is equivalent to setting the `max_tokens` parameter in the API.
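For illustration, the following minimal sketch calls the extensions chat completions REST API with `max_tokens` set. The resource, deployment, search service, index name, keys, and API version are placeholder assumptions that you would replace with your own values.

```python
import requests

# Placeholder values -- replace with your own resource, deployment, search service, and keys.
endpoint = "https://YOUR_RESOURCE_NAME.openai.azure.com"
deployment = "YOUR_DEPLOYMENT_NAME"
url = (f"{endpoint}/openai/deployments/{deployment}/extensions/chat/completions"
       "?api-version=2023-06-01-preview")  # use a currently supported preview API version

body = {
    "messages": [{"role": "user", "content": "Summarize the onboarding policy."}],
    "dataSources": [{
        "type": "AzureCognitiveSearch",
        "parameters": {
            "endpoint": "https://YOUR_SEARCH_RESOURCE.search.windows.net",
            "key": "YOUR_SEARCH_ADMIN_KEY",
            "indexName": "YOUR_INDEX_NAME"
        }
    }],
    "max_tokens": 800  # cap the response length; Azure OpenAI On Your Data enforces an upper limit of 1,500
}

response = requests.post(url, headers={"api-key": "YOUR_AZURE_OPENAI_KEY"}, json=body)
print(response.json())
```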
- 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name you specified during publishing. Select the web app, and go to the **Authentication** tab on the left navigation menu. Then select **Add an identity provider**.
-
- :::image type="content" source="../media/quickstarts/web-app-authentication.png" alt-text="Screenshot of the authentication page in the Azure portal." lightbox="../media/quickstarts/web-app-authentication.png":::
+### Limit responses to your data
- 1. Select Microsoft as the identity provider. The default settings on this page will restrict the app to your tenant only, so you don't need to change anything else here. Then select **Add**
-
- Now users will be asked to sign in with their Microsoft Entra account to be able to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any other way other than verifying they are a member of your tenant.
+This option encourages the model to respond using your data only, and is selected by default. If you unselect this option, the model might more readily apply its internal knowledge to respond. Determine the correct selection based on your use case and scenario.
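In the REST API, this setting corresponds to the `inScope` flag on the data source parameters. Here's a minimal sketch of a single `dataSources` entry; the endpoint, key, and index name are placeholders.

```python
# Sketch of a single dataSources entry; endpoint, key, and index name are placeholders.
data_source = {
    "type": "AzureCognitiveSearch",
    "parameters": {
        "endpoint": "https://YOUR_SEARCH_RESOURCE.search.windows.net",
        "key": "YOUR_SEARCH_ADMIN_KEY",
        "indexName": "YOUR_INDEX_NAME",
        # True (the default) limits responses to the grounding data content;
        # False lets the model draw more readily on its internal knowledge.
        "inScope": True,
    },
}
```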
-### Chat history
-You can enable chat history for your users of the web app. If you enable the feature, your users will have access to their individual previous queries and responses.
-To enable chat history, deploy or redeploy your model as a web app using [Azure OpenAI Studio](https://oai.azure.com/portal)
+### Interacting with the model
+Use the following practices for best results when chatting with the model.
-> [!IMPORTANT]
-> Enabling chat history will create a [Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and incur [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage used.
+**Conversation history**
-Once you've enabled chat history, your users will be able to show and hide it in the top right corner of the app. When the history is shown, they can rename, or delete conversations. As they're logged into the app, conversations will be automatically ordered from newest to oldest, and named based on the first query in the conversation.
+* Before starting a new conversation (or asking a question that isn't related to the previous ones), clear the chat history.
+* Getting different responses for the same question between the first conversational turn and subsequent turns can be expected because the conversation history changes the current state of the model. If you receive incorrect answers, report them as a quality bug.
+**Model response**
-#### Deleting your Cosmos DB instance
+* If you aren't satisfied with the model response for a specific question, try either making the question more specific or more generic to see how the model responds, and reframe your question accordingly.
-Deleting your web app does not delete your Cosmos DB instance automatically. To delete your Cosmos DB instance, along with all stored chats, you need to navigate to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option enabled on the studio, your users will be notified of a connection error, but can continue to use the web app without access to the chat history.
+* [Chain-of-thought prompting](advanced-prompt-engineering.md?pivots=programming-language-chat-completions#chain-of-thought-prompting) has been shown to be effective in getting the model to produce desired outputs for complex questions/tasks.
-### Using the API
+**Question length**
-After you upload your data through Azure OpenAI studio, you can make a call against Azure OpenAI models through APIs. Consider setting the following parameters even if they are optional for using the API.
+Avoid asking long questions, and break them down into multiple questions if possible. The GPT models have limits on the number of tokens they can accept. The token count includes the user question, the system message, the retrieved search documents (chunks), internal prompts, the conversation history (if any), and the response. If the question exceeds the token limit, it will be truncated.
+**Multi-lingual support**
+* Currently, keyword search and semantic search in Azure OpenAI On Your Data support only queries that are in the same language as the data in the index. For example, if your data is in Japanese, then input queries also need to be in Japanese. For cross-lingual document retrieval, we recommend building the index with [vector search](/azure/search/vector-search-overview) enabled.
+* To help improve the quality of the information retrieval and model response, we recommend enabling [semantic search](/azure/search/semantic-search-overview) for the following languages: English, French, Spanish, Portuguese, Italian, German, Chinese (Zh), Japanese, Korean, Russian, and Arabic.
+* We recommend using a system message to inform the model that your data is in another language. For example:
-|Parameter |Recommendation |
-|||
-|`fieldsMapping` | Explicitly set the title and content fields of your index. This impacts the search retrieval quality of Azure AI Search, which impacts the overall response and citation quality. |
-|`roleInformation` | Corresponds to the "System Message" in the Azure OpenAI Studio. See the [System message](#system-message) section above for recommendations. |
+* *"**You are an AI assistant designed to help users extract information from retrieved Japanese documents. Please scrutinize the Japanese documents carefully before formulating a response. The user's query will be in Japanese, and you must response also in Japanese."*
+
+* If you have documents in multiple languages, we recommend building a new index for each language and connecting them separately to Azure OpenAI.
#### Streaming data
When you chat with a model, providing a history of the chat will help the model
} ```
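As an illustration only, the following minimal sketch streams a response by setting `stream` to `true` and reading the server-sent events line by line. The resource names, keys, and API version are placeholders, and the exact shape of each streamed chunk can vary by API version.

```python
import json
import requests

# Placeholder values -- replace with your own resource, deployment, search service, and keys.
url = ("https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/"
       "YOUR_DEPLOYMENT_NAME/extensions/chat/completions?api-version=2023-06-01-preview")

body = {
    "messages": [{"role": "user", "content": "What does the handbook say about travel?"}],
    "dataSources": [{
        "type": "AzureCognitiveSearch",
        "parameters": {
            "endpoint": "https://YOUR_SEARCH_RESOURCE.search.windows.net",
            "key": "YOUR_SEARCH_ADMIN_KEY",
            "indexName": "YOUR_INDEX_NAME"
        }
    }],
    "stream": True
}

with requests.post(url, headers={"api-key": "YOUR_AZURE_OPENAI_KEY"}, json=body, stream=True) as resp:
    for line in resp.iter_lines():
        if not line:
            continue
        text = line.decode("utf-8")
        if text.startswith("data: "):        # server-sent events prefix each payload with "data: "
            text = text[len("data: "):]
        if "[DONE]" in text:                 # the final message signals the end of the stream
            break
        chunk = json.loads(text)
        for choice in chunk.get("choices", []):
            for message in choice.get("messages", []):
                content = message.get("delta", {}).get("content")
                if content:
                    print(content, end="", flush=True)
```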
-## Token usage estimation for Azure OpenAI on your data
+## Token usage estimation for Azure OpenAI On Your Data
-| Model | Total tokens available | Max tokens for system message | Max tokens for model response |
-|-||||
-| ChatGPT Turbo (0301) 8k | 8000 | 400 | 1500 |
-| ChatGPT Turbo 16k | 16000 | 1000 | 3200 |
-| GPT-4 (8k) | 8000 | 400 | 1500 |
-| GPT-4 32k | 32000 | 2000 | 6400 |
+| Model | Max tokens for system message | Max tokens for model response |
+|--|--|--|
+| GPT-35-0301 | 400 | 1500 |
+| GPT-35-0613-16K | 1000 | 3200 |
+| GPT-4-0613-8K | 400 | 1500 |
+| GPT-4-0613-32K | 2000 | 6400 |
The preceding table shows the maximum number of tokens that can be used for the [system message](#system-message) and the model response for each model type. Additionally, the following also consume tokens:
-* The meta prompt (MP): if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is 4036 tokens. Otherwise (for example if `inScope=False`) the maximum is 3444 tokens. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt as well as the query rewriting prompts for retrieval.
-* User question and history: Variable but capped at 2000 tokens.
+* The meta prompt (MP): if you limit responses from the model to the grounding data content (`inScope=True` in the API), the maximum number of tokens is 4,036 tokens. Otherwise (for example if `inScope=False`) the maximum is 3,444 tokens. This number is variable depending on the token length of the user question and conversation history. This estimate includes the base prompt and the query rewriting prompts for retrieval.
+* User question and history: Variable but capped at 2,000 tokens.
* Retrieved documents (chunks): The number of tokens used by the retrieved document chunks depends on multiple factors. The upper bound for this is the number of retrieved document chunks multiplied by the chunk size. It will, however, be truncated based on the available tokens for the specific model being used after counting the rest of the fields. 20% of the available tokens are reserved for the model response. The remaining 80% of available tokens include the meta prompt, the user question and conversation history, and the system message. The remaining token budget is used by the retrieved document chunks.
class TokenEstimator(object):
token_output = TokenEstimator.estimate_tokens(input_text) ```
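As a rough, back-of-the-envelope illustration of the budgeting rules above, the following sketch estimates how many tokens remain for retrieved document chunks. The context window size and the token counts passed in the example call are placeholder assumptions.

```python
def estimate_chunk_budget(context_window: int,
                          meta_prompt_tokens: int,
                          question_and_history_tokens: int,
                          system_message_tokens: int) -> int:
    """Rough estimate of the tokens left for retrieved document chunks."""
    response_reserve = int(context_window * 0.20)   # 20% of available tokens reserved for the model response
    available = context_window - response_reserve   # the remaining 80% covers everything else
    used = meta_prompt_tokens + question_and_history_tokens + system_message_tokens
    return max(available - used, 0)

# Example with placeholder numbers: an 8,000-token context window, the in-scope
# meta prompt upper bound (4,036), a 1,000-token question/history, and a 400-token system message.
print(estimate_chunk_budget(8000, 4036, 1000, 400))  # roughly 964 tokens left for chunks
```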
-## Azure OpenAI on your data regional availability
+## Troubleshooting
+
+### Failed ingestion jobs
+
+To troubleshoot a failed job, always check for errors or warnings in the API response or in Azure OpenAI Studio. Here are some of the common errors and warnings:
++
+**Quota Limitations Issues**
+
+*An index with the name X in service Y could not be created. Index quota has been exceeded for this service. You must either delete unused indexes first, add a delay between index creation requests, or upgrade the service for higher limits.*
+
+*Standard indexer quota of X has been exceeded for this service. You currently have X standard indexers. You must either delete unused indexers first, change the indexer 'executionMode', or upgrade the service for higher limits.*
+
+Resolution:
+
+Upgrade to a higher pricing tier or delete unused assets.
-You can use Azure OpenAI on your data with an Azure OpenAI resource in the following regions:
+**Preprocessing Timeout Issues**
+
+*Could not execute skill because the Web API request failed*
+
+*Could not execute skill because Web API skill response is invalid*
+
+Resolution:
+
+Break down the input documents into smaller documents and try again.
+
+**Permissions Issues**
+
+*This request isn't authorized to perform this operation*
+
+Resolution:
+
+This error means the storage account isn't accessible with the given credentials. Review the storage account credentials passed to the API, and ensure that the storage account isn't hidden behind a private endpoint (if a private endpoint isn't configured for this resource).
+
+## Regional availability and model support
+
+You can use Azure OpenAI On Your Data with an Azure OpenAI resource in the following regions:
* Australia East * Brazil South * Canada East
You can use Azure OpenAI on your data with an Azure OpenAI resource in the follo
* West Europe * West US
-If your Azure OpenAI resource is in another region, you won't be able to use Azure OpenAI on your data.
+### Supported models
+
+* `gpt-4` (0314)
+* `gpt-4` (0613)
+* `gpt-4-32k` (0314)
+* `gpt-4-32k` (0613)
+* `gpt-4` (1106-preview)
+* `gpt-35-turbo-16k` (0613)
+* `gpt-35-turbo` (1106)
+
+If your Azure OpenAI resource is in another region, you won't be able to use Azure OpenAI On Your Data.
## Next steps * [Get started using your data with Azure OpenAI](../use-your-data-quickstart.md)
-* [Securely use Azure OpenAI on your data](../how-to/use-your-data-securely.md)
+* [Securely use Azure OpenAI On Your Data](../how-to/use-your-data-securely.md)
* [Introduction to prompt engineering](./prompt-engineering.md)
ai-services Use Web App https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-web-app.md
+
+ Title: 'Using the Azure OpenAI web app'
+
+description: Use this article to learn about using the available web app to chat with Azure OpenAI models.
+++++ Last updated : 02/09/2024
+recommendations: false
+++
+# Use the Azure OpenAI web app
+
+Along with Azure OpenAI Studio, APIs, and SDKs, you can also use the available standalone web app to interact with Azure OpenAI models through a graphical user interface. You can deploy the web app by using either Azure OpenAI Studio or a [manual deployment](https://github.com/microsoft/sample-app-aoai-chatGPT).
+
+![A screenshot of the web app interface.](../media/use-your-data/web-app.png)
+
+## Important considerations
+
+- Publishing creates an Azure App Service in your subscription. It might incur costs depending on the [pricing plan](https://azure.microsoft.com/pricing/details/app-service/windows/) you select. When you're done with your app, you can delete it from the Azure portal.
+- By default, the app will be deployed with the Microsoft identity provider already configured, restricting access to the app to members of your Azure tenant. To add or modify authentication:
+
+ 1. Go to the [Azure portal](https://portal.azure.com/#home) and search for the app name you specified during publishing. Select the web app, and go to the **Authentication** tab on the left navigation menu. Then select **Add an identity provider**.
+
+ :::image type="content" source="../media/quickstarts/web-app-authentication.png" alt-text="Screenshot of the authentication page in the Azure portal." lightbox="../media/quickstarts/web-app-authentication.png":::
+
+ 1. Select Microsoft as the identity provider. The default settings on this page will restrict the app to your tenant only, so you don't need to change anything else here. Then select **Add**.
+
+ Now users will be asked to sign in with their Microsoft Entra ID account to access your app. You can follow a similar process to add another identity provider if you prefer. The app doesn't use the user's sign-in information in any way other than verifying that they're a member of your tenant.
+
+## Web app customization
+
+You can customize the app's frontend and backend logic. For example, you could change the icon that appears in the center of the app by updating `/frontend/src/assets/Contoso.svg` and then redeploying the app [using the Azure CLI](https://github.com/microsoft/sample-app-aoai-chatGPT#deploy-with-the-azure-cli). See the source code for the web app, and more information on [GitHub](https://github.com/microsoft/sample-app-aoai-chatGPT).
+
+When customizing the app, we recommend:
+
+- Resetting the chat session (clear chat) if the user changes any settings. Notify the user that their chat history will be lost.
+
+- Clearly communicating how each setting you implement will affect the user experience.
+
+- When you rotate API keys for your Azure OpenAI or Azure AI Search resource, be sure to update the app settings for each of your deployed apps to use the new keys.
+
+### Updating the web app
+
+We recommend pulling changes from the `main` branch for the web app's source code frequently to ensure you have the latest bug fixes, API version, and improvements.
+
+> [!NOTE]
+> After February 1, 2024, the web app requires the app startup command to be set to `python3 -m gunicorn app:app`. When updating an app that was published prior to February 1, 2024, you need to manually add the startup command from the **App Service Configuration** page.
+
+## Chat history
+
+You can enable chat history for your users of the web app. When you enable the feature, your users will have access to their individual previous queries and responses.
+
+To enable chat history, deploy or redeploy your model as a web app using [Azure OpenAI Studio](https://oai.azure.com/portal).
++
+> [!IMPORTANT]
+> Enabling chat history will create a [Cosmos DB](/azure/cosmos-db/introduction) instance in your resource group, and incur [additional charges](https://azure.microsoft.com/pricing/details/cosmos-db/autoscale-provisioned/) for the storage used.
+
+Once you've enabled chat history, your users can show and hide it in the top right corner of the app. When the history is shown, they can rename or delete conversations. Because they're logged into the app, conversations are automatically ordered from newest to oldest, and named based on the first query in the conversation.
++
+## Deleting your Cosmos DB instance
+
+Deleting your web app does not delete your Cosmos DB instance automatically. To delete your Cosmos DB instance, along with all stored chats, you need to navigate to the associated resource in the [Azure portal](https://portal.azure.com) and delete it. If you delete the Cosmos DB resource but keep the chat history option enabled on the studio, your users will be notified of a connection error, but can continue to use the web app without access to the chat history.
+
+## Next steps
+* [Prompt engineering](../concepts/prompt-engineering.md)
+* [Azure OpenAI On Your Data](../concepts/use-your-data.md)
ai-services Use Your Data Securely https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/how-to/use-your-data-securely.md
Last updated 02/13/2024
recommendations: false
-# Securely use Azure OpenAI on your data
+# Securely use Azure OpenAI On Your Data
-Use this article to learn how to use Azure OpenAI on your data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks and private endpoints.
+Use this article to learn how to use Azure OpenAI On Your Data securely by protecting data and resources with Microsoft Entra ID role-based access control, virtual networks and private endpoints.
-This article is only applicable when using [Azure OpenAI on your data with text](/azure/ai-services/openai/concepts/use-your-data). It does not apply to [Azure OpenAI on your data with images](/azure/ai-services/openai/concepts/use-your-image-data).
+This article is only applicable when using [Azure OpenAI On Your Data with text](/azure/ai-services/openai/concepts/use-your-data). It does not apply to [Azure OpenAI On Your Data with images](/azure/ai-services/openai/concepts/use-your-image-data).
## Data ingestion architecture
-When you use Azure OpenAI on your data to ingest data from Azure blob storage, local files or URLs into Azure AI Search, the following process is used to process the data.
+When you use Azure OpenAI On Your Data to ingest data from Azure blob storage, local files or URLs into Azure AI Search, the following process is used to process the data.
:::image type="content" source="../media/use-your-data/ingestion-architecture.png" alt-text="A diagram showing the process of ingesting data." lightbox="../media/use-your-data/ingestion-architecture.png":::
When you send API calls to chat with an Azure OpenAI model on your data, the ser
If an embedding deployment is provided in the inference request, the rewritten query is vectorized by Azure OpenAI, and both the query and the vector are sent to Azure AI Search for vector search.
+## Document-level access control
+
+> [!NOTE]
+> Document-level access control is supported for Azure AI search only.
+
+Azure OpenAI On Your Data lets you restrict the documents that can be used in responses for different users with Azure AI Search [security filters](/azure/search/search-security-trimming-for-azure-search-with-aad). When you enable document-level access, the search results returned from Azure AI Search and used to generate a response are trimmed based on the user's Microsoft Entra group membership. You can only enable document-level access on existing Azure AI Search indexes. To enable document-level access:
+
+1. Follow the steps in the [Azure AI Search documentation](/azure/search/search-security-trimming-for-azure-search-with-aad) to register your application and create users and groups.
+1. [Index your documents with their permitted groups](/azure/search/search-security-trimming-for-azure-search-with-aad#index-document-with-their-permitted-groups). Be sure that your new [security fields](/azure/search/search-security-trimming-for-azure-search#create-security-field) have the schema below:
+
+ ```json
+ {"name": "group_ids", "type": "Collection(Edm.String)", "filterable": true }
+ ```
+
+ `group_ids` is the default field name. If you use a different field name like `my_group_ids`, you can map the field in [index field mapping](../concepts/use-your-data.md#index-field-mapping).
+
+1. Make sure each sensitive document in the index has the value set correctly on this security field to indicate the permitted groups of the document (a sketch follows this list).
+1. In [Azure OpenAI Studio](https://oai.azure.com/portal), add your data source. In the [index field mapping](../concepts/use-your-data.md#index-field-mapping) section, you can map zero or one value to the **Permitted groups** field, as long as the schema is compatible. If the **Permitted groups** field isn't mapped, document-level access won't be enabled.
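As an illustration of steps 2 and 3 above, the following sketch uploads a document with its permitted groups to an existing index through the Azure AI Search REST API. The endpoint, admin key, index name, and document fields are placeholder assumptions.

```python
import requests

# Placeholder values -- replace with your own search service, index, and admin key.
search_endpoint = "https://YOUR_SEARCH_RESOURCE.search.windows.net"
index_name = "YOUR_INDEX_NAME"
url = f"{search_endpoint}/indexes/{index_name}/docs/index?api-version=2023-11-01"

payload = {
    "value": [
        {
            "@search.action": "mergeOrUpload",
            "id": "doc-1",                                   # the key field of your index (assumed name)
            "content": "Sensitive document text goes here.",  # assumed content field name
            "group_ids": ["group_id1", "group_id2"]           # Microsoft Entra group IDs permitted to see this document
        }
    ]
}

response = requests.post(url, headers={"api-key": "YOUR_SEARCH_ADMIN_KEY"}, json=payload)
response.raise_for_status()
```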
+
+**Azure OpenAI Studio**
+
+Once the Azure AI Search index is connected, your responses in the studio will have document access based on the Microsoft Entra permissions of the logged in user.
+
+**Web app**
+
+If you are using a published [web app](./use-web-app.md), you need to redeploy it to upgrade to the latest version. The latest version of the web app can retrieve the groups of the logged-in user's Microsoft Entra account, cache them, and include the group IDs in each API request.
+
+**API**
+
+When using the API, pass the `filter` parameter in each API request. For example:
+
+```json
+{
+ "messages": [
+ {
+ "role": "user",
+ "content": "who is my manager?"
+ }
+ ],
+ "dataSources": [
+ {
+ "type": "AzureCognitiveSearch",
+ "parameters": {
+ "endpoint": "'$SearchEndpoint'",
+ "key": "'$SearchKey'",
+ "indexName": "'$SearchIndex'",
+ "filter": "my_group_ids/any(g:search.in(g, 'group_id1, group_id2'))"
+ }
+ }
+ ]
+}
+```
+* `my_group_ids` is the field name that you selected for **Permitted groups** during [fields mapping](../concepts/use-your-data.md#index-field-mapping).
+* `group_id1, group_id2` are groups attributed to the logged-in user. The client application can retrieve and cache users' groups, as shown in the sketch below.
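As an illustration, a client application could retrieve the signed-in user's group IDs from Microsoft Graph and build the filter string. The sketch below is assumption-laden: it presumes you already have a Microsoft Graph access token for the user and reuses the `my_group_ids` field name from the example above.

```python
import requests

def build_group_filter(graph_access_token: str, field_name: str = "my_group_ids") -> str:
    """Retrieve the signed-in user's group IDs from Microsoft Graph and build a security filter string."""
    response = requests.get(
        "https://graph.microsoft.com/v1.0/me/memberOf?$select=id",
        headers={"Authorization": f"Bearer {graph_access_token}"},
    )
    response.raise_for_status()
    group_ids = [item["id"] for item in response.json().get("value", [])]
    # Cache group_ids in your application so you don't call Microsoft Graph on every request.
    joined = ", ".join(group_ids)
    return f"{field_name}/any(g:search.in(g, '{joined}'))"
```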
+ ## Resources configuration
ai-services Reference https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/reference.md
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| Parameter | Type | Required? | Default | Description | |--|--|--|--|--|
-| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt(s) to generate completions for, encoded as a string, or array of strings. Note that ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model generates as if from the beginning of a new document. |
+| ```prompt``` | string or array | Optional | ```<\|endoftext\|>``` | The prompt or prompts to generate completions for, encoded as a string, or array of strings. ```<\|endoftext\|>``` is the document separator that the model sees during training, so if a prompt isn't specified the model generates as if from the beginning of a new document. |
| ```max_tokens``` | integer | Optional | 16 | The maximum number of tokens to generate in the completion. The token count of your prompt plus max_tokens can't exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). | | ```temperature``` | number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values mean the model takes more risks. Try 0.9 for more creative applications, and 0 (`argmax sampling`) for ones with a well-defined answer. We generally recommend altering this or top_p but not both. | | ```top_p``` | number | Optional | 1 | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both. |
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | | ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
**Supported versions**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | | ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD or YYYY-MM-DD-preview format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD or YYYY-MM-DD-preview format. |
**Supported versions**
The request body consists of a series of messages. The model will generate a res
| `content` | string or array | Yes | N/A | The content of the message. It must be a string, unless in a Vision-enabled scenario. If it's part of the `user` message, using the GPT-4 Turbo with Vision model, with the latest API version, then `content` must be an array of structures, where each item represents either text or an image: <ul><li> `text`: input text is represented as a structure with the following properties: </li> <ul> <li> `type` = "text" </li> <li> `text` = the input text </li> </ul> <li> `images`: an input image is represented as a structure with the following properties: </li><ul> <li> `type` = "image_url" </li> <li> `image_url` = a structure with the following properties: </li> <ul> <li> `url` = the image URL </li> <li>(optional) `detail` = `high`, `low`, or `auto` </li> </ul> </ul> </ul>| | `contentPart` | object | No | N/A | Part of a user's multi-modal message. It can be either text type or image type. If text, it will be a text string. If image, it will be a `contentPartImage` object. | | `contentPartImage` | object | No | N/A | Represents a user-uploaded image. It has a `url` property, which is either a URL of the image or the base 64 encoded image data. It also has a `detail` property which can be `auto`, `low`, or `high`.|
-| `enhancements` | object | No | N/A | Represents the Vision enhancement features requested for the chat. It has a `grounding` and `ocr` property, which each have a boolean `enabled` property. Use these to request the OCR service and/or the object detection/grounding service.|
-| `dataSources` | object | No | N/A | Represents additional resource data. Computer Vision resource data is needed for Vision enhancement. It has a `type` property which should be `"AzureComputerVision"` and a `parameters` property which has an `endpoint` and `key` property. These strings should be set to the endpoint URL and access key of your Computer Vision resource.|
+| `enhancements` | object | No | N/A | Represents the Vision enhancement features requested for the chat. It has `grounding` and `ocr` properties, each of which has a boolean `enabled` property. Use these properties to request the OCR service and/or the object detection/grounding service.|
+| `dataSources` | object | No | N/A | Represents additional resource data. Computer Vision resource data is needed for Vision enhancement. It has a `type` property, which should be `"AzureComputerVision"` and a `parameters` property, which has an `endpoint` and `key` property. These strings should be set to the endpoint URL and access key of your Computer Vision resource.|
#### Example request
Output formatting adjusted for ease of reading, actual output is a single block
In the example response, `finish_reason` equals `stop`. If `finish_reason` equals `content_filter` consult our [content filtering guide](./concepts/content-filter.md) to understand why this is occurring. > [!IMPORTANT]
-> The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. The replacement for `functions` is the `tools` parameter. The replacement for `function_call` is the `tool_choice` parameter. Parallel function calling which was introduced as part of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) is only supported with `gpt-35-turbo` (1106) and `gpt-4` (1106-preview) also known as GPT-4 Turbo Preview.
+> The `functions` and `function_call` parameters have been deprecated with the release of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) version of the API. The replacement for `functions` is the `tools` parameter. The replacement for `function_call` is the `tool_choice` parameter. Parallel function calling which was introduced as part of the [`2023-12-01-preview`](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-12-01-preview/inference.json) is only supported with `gpt-35-turbo` (1106) and `gpt-4` (1106-preview) also known as GPT-4 Turbo Preview.
| Parameter | Type | Required? | Default | Description | |--|--|--|--|--| | ```messages``` | array | Required | | The collection of context messages associated with this chat completions request. Typical usage begins with a [chat message](#chatmessage) for the System role that provides instructions for the behavior of the assistant, followed by alternating messages between the User and Assistant roles.| | ```temperature```| number | Optional | 1 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
-| ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. |
+| ```n``` | integer | Optional | 1 | How many chat completion choices to generate for each input message. |
| ```stream``` | boolean | Optional | false | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message." | | ```stop``` | string or array | Optional | null | Up to 4 sequences where the API will stop generating further tokens.| | ```max_tokens``` | integer | Optional | inf | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens).|
The definition of a caller-specified function that chat completions can invoke i
## Completions extensions
-Extensions for chat completions, for example Azure OpenAI on your data.
+Extensions for chat completions, for example Azure OpenAI On Your Data.
**Use chat completions extensions**
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
|--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | | ```deployment-id``` | string | Required | The name of your model deployment. You're required to first deploy a model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
**Supported versions** - `2023-06-01-preview` (retiring April 2, 2024) [Swagger spec](https://github.com/Azure/azure-rest-api-specs/blob/main/specification/cognitiveservices/data-plane/AzureOpenAI/inference/preview/2023-06-01-preview/inference.json)
POST {your-resource-name}/openai/deployments/{deployment-id}/extensions/chat/com
#### Example request
-You can make requests using [Azure AI Search](./concepts/use-your-data.md?tabs=ai-search#ingesting-your-data), [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db#ingesting-your-data), [Azure Machine Learning](/azure/machine-learning/overview-what-is-azure-machine-learning), [Pinecone](https://www.pinecone.io/), and [Elasticsearch](https://www.elastic.co/).
+You can make requests using Azure AI Search, Azure Cosmos DB for MongoDB vCore, Pinecone, and Elasticsearch. For more information, see [Azure OpenAI On Your Data](./concepts/use-your-data.md#supported-data-sources).
##### Azure AI Search
curl -i -X POST YOUR_RESOURCE_NAME/openai/deployments/YOUR_DEPLOYMENT_NAME/exten
| Parameters | Type | Required? | Default | Description | |--|--|--|--|--| | `messages` | array | Required | null | The messages to generate chat completions for, in the chat format. |
-| `dataSources` | array | Required | | The data sources to be used for the Azure OpenAI on your data feature. |
+| `dataSources` | array | Required | | The data sources to be used for the Azure OpenAI On Your Data feature. |
| `temperature` | number | Optional | 0 | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. | | `top_p` | number | Optional | 1 |An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with `top_p` probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or temperature but not both.| | `stream` | boolean | Optional | false | If set, partial message deltas are sent, like in ChatGPT. Tokens are sent as data-only server-sent events as they become available, with the stream terminated by a message `"messages": [{"delta": {"content": "[DONE]"}, "index": 2, "end_turn": true}]` | | `stop` | string or array | Optional | null | Up to two sequences where the API will stop generating further tokens. |
-| `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. |
+| `max_tokens` | integer | Optional | 1000 | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return is `4096 - prompt_tokens`. |
The following parameters can be used inside of the `parameters` field inside of `dataSources`. | Parameters | Type | Required? | Default | Description | |--|--|--|--|--|
-| `type` | string | Required | null | The data source to be used for the Azure OpenAI on your data feature. For Azure AI Search the value is `AzureCognitiveSearch`. For Azure Cosmos DB for MongoDB vCore, the value is `AzureCosmosDB`. For Elasticsearch the value is `Elasticsearch`. For Azure Machine Learning, the value is `AzureMLIndex`. For Pinecone, the value is `Pinecone`. |
+| `type` | string | Required | null | The data source to be used for the Azure OpenAI On Your Data feature. For Azure AI Search the value is `AzureCognitiveSearch`. For Azure Cosmos DB for MongoDB vCore, the value is `AzureCosmosDB`. For Elasticsearch the value is `Elasticsearch`. For Azure Machine Learning, the value is `AzureMLIndex`. For Pinecone, the value is `Pinecone`. |
| `indexName` | string | Required | null | The search index to be used. |
-| `inScope` | boolean | Optional | true | If set, this value limits responses specific to the grounding data content. |
-| `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. |
-| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. |
+| `inScope` | boolean | Optional | true | If set, this value limits responses specific to the grounding data content. |
+| `topNDocuments` | number | Optional | 5 | Specifies the number of top-scoring documents from your data index used to generate responses. You might want to increase the value when you have short documents or want to provide more context. This is the *retrieved documents* parameter in Azure OpenAI studio. |
+| `semanticConfiguration` | string | Optional | null | The semantic search configuration. Only required when `queryType` is set to `semantic` or `vectorSemanticHybrid`. |
| `roleInformation` | string | Optional | null | Gives the model instructions about how it should behave and the context it should reference when generating a response. Corresponds to the "System Message" in Azure OpenAI Studio. See [Using your data](./concepts/use-your-data.md#system-message) for more information. There's a 100-token limit, which counts towards the overall token limit.| | `filter` | string | Optional | null | The filter pattern used for [restricting access to sensitive documents](./concepts/use-your-data.md#document-level-access-control)
-| `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. |
-| `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-options) outside of private networks and private endpoints. |
-| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-options). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI on your data use an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.|
+| `embeddingEndpoint` | string | Optional | null | The endpoint URL for an Ada embedding model deployment, generally of the format `https://YOUR_RESOURCE_NAME.openai.azure.com/openai/deployments/YOUR_DEPLOYMENT_NAME/embeddings?api-version=2023-05-15`. Use with the `embeddingKey` parameter for [vector search](./concepts/use-your-data.md#search-types) outside of private networks and private endpoints. |
+| `embeddingKey` | string | Optional | null | The API key for an Ada embedding model deployment. Use with `embeddingEndpoint` for [vector search](./concepts/use-your-data.md#search-types) outside of private networks and private endpoints. |
+| `embeddingDeploymentName` | string | Optional | null | The Ada embedding model deployment name within the same Azure OpenAI resource. Used instead of `embeddingEndpoint` and `embeddingKey` for [vector search](./concepts/use-your-data.md#search-types). Should only be used when both the `embeddingEndpoint` and `embeddingKey` parameters are defined. When this parameter is provided, Azure OpenAI On Your Data uses an internal call to evaluate the Ada embedding model, rather than calling the Azure OpenAI endpoint. This enables you to use vector search in private networks and private endpoints. Billing remains the same whether this parameter is defined or not. Available in regions where embedding models are [available](./concepts/models.md#embeddings-models) starting in API versions `2023-06-01-preview` and later.|
| `strictness` | number | Optional | 3 | Sets the threshold to categorize documents as relevant to your queries. Raising the value means a higher threshold for relevance and filters out more less-relevant documents for responses. Setting this value too high might cause the model to fail to generate responses due to limited available documents. |
The following parameters are used for Azure AI Search.
| `endpoint` | string | Required | null | Azure AI Search only. The data source endpoint. | | `key` | string | Required | null | Azure AI Search only. One of the Azure AI Search admin keys for your service. | | `queryType` | string | Optional | simple | Indicates which query option is used for Azure AI Search. Available types: `simple`, `semantic`, `vector`, `vectorSimpleHybrid`, `vectorSemanticHybrid`. |
-| `fieldsMapping` | dictionary | Optional for Azure AI Search. | null | defines which [fields](./concepts/use-your-data.md?tabs=ai-search#index-field-mapping) you want to map when you add your data source. |
+| `fieldsMapping` | dictionary | Optional for Azure AI Search. | null | defines which [fields](./concepts/use-your-data.md?tabs=ai-search#index-field-mapping) you want to map when you add your data source. |
The following parameters are used inside of the `authentication` field, which enables you to use Azure OpenAI [without public network access](./how-to/use-your-data-securely.md).
The following parameters are used inside of the `fieldsMapping` field.
| `urlField` | string | Optional | null | The field in your index that contains the original URL of each document. | | `filepathField` | string | Optional | null | The field in your index that contains the original file name of each document. | | `contentFields` | dictionary | Optional | null | The fields in your index that contain the main text content of each document. |
-| `contentFieldsSeparator` | string | Optional | null | The separator for the content fields. Use `\n` by default. |
+| `contentFieldsSeparator` | string | Optional | null | The separator for the content fields. Use `\n` by default. |
```json "fieldsMapping": {
The following parameters are used for Azure Cosmos DB for MongoDB vCore.
| `containerName` | string | Required | null | Azure Cosmos DB for MongoDB vCore only. The Azure Cosmos Mongo vCore container name in the database. | | `type` (found inside of`embeddingDependencyType`) | string | Required | null | Indicates the embedding model dependency. | | `deploymentName` (found inside of`embeddingDependencyType`) | string | Required | null | The embedding model deployment name. |
-| `fieldsMapping` | dictionary | Required for Azure Cosmos DB for MongoDB vCore. | null | Index data column mapping. When you use Azure Cosmos DB for MongoDB vCore, the value `vectorFields` is required, which indicates the fields that store vectors. |
+| `fieldsMapping` | dictionary | Required for Azure Cosmos DB for MongoDB vCore. | null | Index data column mapping. When you use Azure Cosmos DB for MongoDB vCore, the value `vectorFields` is required, which indicates the fields that store vectors. |
The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
The following parameters are used inside of the `fieldsMapping` field.
| `urlField` | string | Optional | null | The field in your index that contains the original URL of each document. | | `filepathField` | string | Optional | null | The field in your index that contains the original file name of each document. | | `contentFields` | dictionary | Optional | null | The fields in your index that contain the main text content of each document. |
-| `contentFieldsSeparator` | string | Optional | null | The separator for the content fields. Use `\n` by default. |
+| `contentFieldsSeparator` | string | Optional | null | The separator for the content fields. Use `\n` by default. |
| `vectorFields` | dictionary | Optional | null | The names of fields that represent vector data | ```json
The following parameters are used for Pinecone.
| `filepathField` (found inside of `fieldsMapping`) | string | Required | null | The name of the index field to use as a file path. | | `contentFields` (found inside of `fieldsMapping`) | string | Required | null | The name of the index fields that should be treated as content. | | `vectorFields` | dictionary | Optional | null | The names of fields that represent vector data |
-| `contentFieldsSeparator` (found inside of `fieldsMapping`) | string | Required | null | The separator for your content fields. Use `\n` by default. |
+| `contentFieldsSeparator` (found inside of `fieldsMapping`) | string | Required | null | The separator for your content fields. Use `\n` by default. |
The following parameters are used inside of the optional `embeddingDependency` parameter, which contains details of a vectorization source that is based on an internal embeddings model deployment name in the same Azure OpenAI resource.
The following parameters are used inside of the optional `embeddingDependency` p
}, ```
-### Start an ingestion job
+### Start an ingestion job (preview)
> [!TIP] > The `JOB_NAME` you choose will be used as the index name. Be aware of the [constraints](/rest/api/searchservice/create-index#uri-parameters) for the *index name*.
curl -i -X PUT https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-
| `chunkSize` | int | Optional |1024 |This number defines the maximum number of tokens in each chunk produced by the ingestion flow. |
-### List ingestion jobs
+### List ingestion jobs (preview)
```console curl -i -X GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-your-data/ingestion-jobs?api-version=2023-10-01-preview \
curl -i -X GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-
} ```
-### Get the status of an ingestion job
+### Get the status of an ingestion job (preview)
```console curl -i -X GET https://YOUR_RESOURCE_NAME.openai.azure.com/openai/extensions/on-your-data/ingestion-jobs/YOUR_JOB_NAME?api-version=2023-10-01-preview \
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. | | ```deployment-id``` | string | Required | The name of your DALL-E 3 model deployment such as *MyDalle3*. You're required to first deploy a DALL-E 3 model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
**Supported versions**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
| `n` | integer | Optional | 1 | The number of images to generate. Only `n=1` is supported for DALL-E 3. | | `size` | string | Optional | `1024x1024` | The size of the generated images. Must be one of `1792x1024`, `1024x1024`, or `1024x1792`. | | `quality` | string | Optional | `standard` | The quality of the generated images. Must be `hd` or `standard`. |
-| `response_format` | string | Optional | `url` | The format in which the generated images are returned Must be `url` (a URL pointing to the image) or `b64_json` (the base 64 byte code in JSON format). |
+| `response_format` | string | Optional | `url` | The format in which the generated images are returned. Must be `url` (a URL pointing to the image) or `b64_json` (the base64-encoded image data in JSON format). |
| `style` | string | Optional | `vivid` | The style of the generated images. Must be `natural` or `vivid` (for hyper-realistic / dramatic images). |
POST https://{your-resource-name}.openai.azure.com/openai/images/generations:sub
| Parameter | Type | Required? | Description | |--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI Resource. |
-| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This follows the YYYY-MM-DD format. |
**Supported versions**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | | ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
**Supported versions**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | | ```deployment-id``` | string | Required | The name of your Whisper model deployment such as *MyWhisperDeployment*. You're required to first deploy a Whisper model before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
**Supported versions**
POST https://{your-resource-name}.openai.azure.com/openai/deployments/{deploymen
|--|--|--|--| | ```your-resource-name``` | string | Required | The name of your Azure OpenAI resource. | | ```deployment-id``` | string | Required | The name of your text to speech model deployment such as *MyTextToSpeechDeployment*. You're required to first deploy a text to speech model (such as `tts-1` or `tts-1-hd`) before you can make calls. |
-| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
+| ```api-version``` | string | Required |The API version to use for this operation. This value follows the YYYY-MM-DD format. |
**Supported versions**
The speech is returned as an audio file from the previous request.
## Management APIs
-Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
+Azure OpenAI is deployed as a part of the Azure AI services. All Azure AI services rely on the same set of management APIs for creation, update, and delete operations. The management APIs are also used for deploying models within an OpenAI resource.
[**Management APIs reference documentation**](/rest/api/cognitiveservices/)
ai-services Use Your Data Quickstart https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/use-your-data-quickstart.md
In this quickstart you can use your own data with Azure OpenAI models. Using Azu
Azure OpenAI requires registration and is currently only available to approved enterprise customers and partners. [See Limited access to Azure OpenAI Service](/legal/cognitive-services/openai/limited-access?context=/azure/ai-services/openai/context/context) for more information. You can apply for access to Azure OpenAI by completing the form at <a href="https://aka.ms/oai/access" target="_blank">https://aka.ms/oai/access</a>. Open an issue on this repo to contact us if you have an issue. -- An Azure OpenAI resource in a [supported region](./concepts/use-your-data.md#azure-openai-on-your-data-regional-availability) with a chat model deployed (for example, GPT-3 or GPT-4). For more information about model deployment, see the [resource deployment guide](./how-to/create-resource.md).-
- - Your chat model can use version `gpt-35-turbo (0301)`, `gpt-35-turbo-16k`, `gpt-4`, and `gpt-4-32k`. You can view or change your model version in [Azure OpenAI Studio](./how-to/working-with-models.md#model-updates).
+- An Azure OpenAI resource deployed in a [supported region](./concepts/use-your-data.md#regional-availability-and-model-support) with a [supported model](./concepts/use-your-data.md#supported-models).
- Be sure that you are assigned at least the [Cognitive Services Contributor](./how-to/role-based-access-control.md#cognitive-services-contributor) role for the Azure OpenAI resource.
ai-services Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/openai/whats-new.md
- ignite-2023 - references_regions Previously updated : 2/6/2024 Last updated : 02/15/2024 recommendations: false
Azure OpenAI Service now supports text to speech APIs with OpenAI's voices. Get
- [Fine-tuning & function calling](./how-to/fine-tuning-functions.md) - [`gpt-35-turbo 1106` support](./concepts/models.md#fine-tuning-models)
-### New regional support for Azure OpenAI on your data
+### New regional support for Azure OpenAI On Your Data
-You can now use Azure OpenAI on your data in the following Azure region:
+You can now use Azure OpenAI On Your Data in the following Azure region:
* South Africa North
+### Azure OpenAI On Your Data general availability
+
+- [Azure OpenAI On Your Data](./concepts/use-your-data.md) is now generally available.
+ ## December 2023
-### Azure OpenAI on your data
+### Azure OpenAI On Your Data
-- Full VPN and private endpoint support for Azure OpenAI on your data, including security support for: storage accounts, Azure OpenAI resources, and Azure AI Search service resources. -- New article for using [Azure OpenAI on your data securely](./how-to/use-your-data-securely.md) by protecting data with virtual networks and private endpoints.
+- Full VPN and private endpoint support for Azure OpenAI On Your Data, including security support for: storage accounts, Azure OpenAI resources, and Azure AI Search service resources.
+- New article for using [Azure OpenAI On Your Data securely](./how-to/use-your-data-securely.md) by protecting data with virtual networks and private endpoints.
### GPT-4 Turbo with Vision now available
GPT-4 Turbo with Vision on Azure OpenAI service is now in public preview. GPT-4
## November 2023
-### New data source support in Azure OpenAI on your data
+### New data source support in Azure OpenAI On Your Data
-- You can now use [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md?tabs=mongo-db.md#ingesting-your-data) as well as URLs/web addresses as data sources to ingest your data and chat with a supported Azure OpenAI model.
+- You can now use [Azure Cosmos DB for MongoDB vCore](./concepts/use-your-data.md#supported-data-sources) as well as URLs/web addresses as data sources to ingest your data and chat with a supported Azure OpenAI model.
### GPT-4 Turbo Preview & GPT-3.5-Turbo-1106 released
Try out DALL-E 3 by following a [quickstart](./dall-e-quickstart.md).
- [Tutorial: fine-tuning GPT-3.5-Turbo](./tutorials/fine-tune.md)
-### Azure OpenAI on your data
+### Azure OpenAI On Your Data
- New [custom parameters](./concepts/use-your-data.md#runtime-parameters) for determining the number of retrieved documents and strictness. - The strictness setting sets the threshold to categorize documents as relevant to your queries.
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whispe
### Azure OpenAI on your own data (preview) updates -- You can now deploy Azure OpenAI on your data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).-- Azure OpenAI on your data now supports private endpoints.
+- You can now deploy Azure OpenAI On Your Data to [Power Virtual Agents](/azure/ai-services/openai/concepts/use-your-data#deploying-the-model).
+- Azure OpenAI On Your Data now supports private endpoints.
- Ability to [filter access to sensitive documents](./concepts/use-your-data.md#document-level-access-control). - [Automatically refresh your index on a schedule](./concepts/use-your-data.md#schedule-automatic-index-refreshes).-- [Vector search and semantic search options](./concepts/use-your-data.md#search-options). -- [View your chat history in the deployed web app](./concepts/use-your-data.md#chat-history)
+- [Vector search and semantic search options](./concepts/use-your-data.md#search-types).
+- [View your chat history in the deployed web app](./how-to/use-web-app.md#chat-history)
## July 2023
Azure OpenAI Service now supports speech to text APIs powered by OpenAI's Whispe
### Use Azure OpenAI on your own data (preview) -- [Azure OpenAI on your data](./concepts/use-your-data.md) is now available in preview, enabling you to chat with OpenAI models such as GPT-35-Turbo and GPT-4 and receive responses based on your data.
+- [Azure OpenAI On Your Data](./concepts/use-your-data.md) is now available in preview, enabling you to chat with OpenAI models such as GPT-35-Turbo and GPT-4 and receive responses based on your data.
### New versions of gpt-35-turbo and gpt-4 models
ai-services Migrate To Openai https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-services/qnamaker/How-To/migrate-to-openai.md
Previously updated : 01/19/2024 Last updated : 02/09/2024 # Migrate QnA Maker to Azure OpenAI on your data
QnA Maker was designed to be a cloud-based Natural Language Processing (NLP) ser
:::image type="content" source="../media/openai/chat-playground-after-deployment.png" alt-text="A screenshot of the playground page of the Azure OpenAI Studio with sections highlighted." lightbox="../media/openai/chat-playground-after-deployment.png":::
-You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../openai/concepts/use-your-data.md#using-the-web-app) to chat with the model over the web.
+You can now start exploring Azure OpenAI capabilities with a no-code approach through the chat playground. It's simply a text box where you can submit a prompt to generate a completion. From this page, you can quickly iterate and experiment with the capabilities. You can also launch a [web app](../../openai/how-to/use-web-app.md) to chat with the model over the web.
## Next steps * [Using Azure OpenAI on your data](../../openai/concepts/use-your-data.md)
ai-studio Quota https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/ai-studio/how-to/quota.md
Use quotas to manage compute target allocation between multiple Azure AI hub r
By default, all Azure AI hub resources share the same quota as the subscription-level quota for VM families. However, you can set a maximum quota for individual VM families for more granular cost control and governance on Azure AI hub resources in a subscription. Quotas for individual VM families let you share capacity and avoid resource contention issues.
-In Azure AI Studio, select **Manage** from the top menu. Select **Quota** to view your quota at the subscription level in a region for both Azure Machine Learning virtual machine families and for your Azure Open AI resources.
+In Azure AI Studio, select **Manage** from the top menu. Select **Quota** to view your quota at the subscription level in a region for both Azure Machine Learning virtual machine families and for your Azure OpenAI resources.
:::image type="content" source="../media/cost-management/quota-manage.png" alt-text="Screenshot of the page to view and request quota for virtual machines and Azure OpenAI models." lightbox="../media/cost-management/quota-manage.png":::
automation Python Packages https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/automation/python-packages.md
On a Windows 64-bit machine with [Python2.7](https://www.python.org/downloads/re
C:\Python27\Scripts\pip2.7.exe download -d <output dir> <package name> ```
-Once the packages are downloaded, you can import them into your automation account.
+Once the packages are downloaded, you can import them into your Automation account.
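For example, the command might look like the following, where the package name (`requests`) and download folder are illustrative placeholders only, not values from this article:

```powershell
# Download a package and its dependencies into a local folder for later import.
# The package name and folder path are placeholders; substitute your own values.
C:\Python27\Scripts\pip2.7.exe download -d C:\PackageDownloads requests
```

You can then import the downloaded files into the Automation account, for example through the Python packages area of the Automation account in the Azure portal.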
### Runbook
azure-monitor Agents Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/agents-overview.md
Using Azure Monitor agent, you get immediate benefits as shown below:
## Consolidating legacy agents
+>[!IMPORTANT]
+>The Log Analytics agent is on a **deprecation path** and won't be supported after **August 31, 2024**. Any new data centers brought online after January 1, 2024 won't support the Log Analytics agent. If you use the Log Analytics agent to ingest data into Azure Monitor, [migrate to the new Azure Monitor agent](./azure-monitor-agent-migration.md) before that date.
+ Deploy Azure Monitor Agent on all new virtual machines, scale sets, and on-premises servers to collect data for [supported services and features](./azure-monitor-agent-migration.md#migrate-additional-services-and-features). If you have machines already deployed with legacy Log Analytics agents, we recommend you [migrate to Azure Monitor Agent](./azure-monitor-agent-migration.md) as soon as possible. The legacy Log Analytics agent will not be supported after August 2024.
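As a hedged sketch (one of several supported methods; the resource group, VM name, and location are placeholders), you can install the Azure Monitor agent on an existing Windows VM with the Azure PowerShell VM extension cmdlet:

```azurepowershell
# Install the Azure Monitor agent extension on an existing Windows VM.
# The resource group, VM name, and location are placeholders.
Set-AzVMExtension -ResourceGroupName "my-rg" `
  -VMName "my-vm" `
  -Name "AzureMonitorWindowsAgent" `
  -Publisher "Microsoft.Azure.Monitor" `
  -ExtensionType "AzureMonitorWindowsAgent" `
  -TypeHandlerVersion "1.0" `
  -Location "eastus" `
  -EnableAutomaticUpgrade $true
```

The agent only collects data after you associate a data collection rule (DCR) with the machine, so plan your DCRs as part of the migration.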
azure-monitor Azure Monitor Agent Send Data To Event Hubs And Storage https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/agents/azure-monitor-agent-send-data-to-event-hubs-and-storage.md
Use custom template deployment to create the DCR association and AMA deployment.
"settings": { "authentication": { "managedIdentity": {
- "identifier-type": "mi_res_id",
"identifier-name": "mi_res_id", "identifier-value": "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('identityName'))]" }
Use custom template deployment to create the DCR association and AMA deployment.
"settings": { "authentication": { "managedIdentity": {
- "identifier-type": "mi_res_id",
"identifier-name": "mi_res_id", "identifier-value": "[resourceID('Microsoft.ManagedIdentity/userAssignedIdentities/',parameters('identityName'))]" }
azure-monitor Alert Options https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/alerts/alert-options.md
+
+ Title: Create alert rules for an Azure resource
+description: Describes the different options for creating alert rules in Azure Monitor and where you can find more information about each option.
+++ Last updated : 02/16/2024++++
+# Create alert rules for Azure resources
+Alerts in Azure Monitor proactively notify you of critical conditions based on collected telemetry and can attempt to take corrective action. Alerts are generated from alert rules, which define the criteria for the alert and the action to take when the alert triggers. No alert rules are created by default in Azure Monitor. In addition to creating your own, you have multiple options for getting started quickly with alert rules based on predefined conditions, and different options are available for different Azure services. This article describes the different options for creating alert rules in Azure Monitor and where you can find more information about each option.
++
+## Recommended alerts
+Recommended alerts is an Azure Monitor feature, available for some services, that lets you quickly create a set of alert rules for a particular resource. Select **Set up recommendations** from the **Alerts** tab for the resource in the Azure portal, and then select the rules you want to enable. This feature isn't available for all services, and you need to select the option for each resource you want to monitor. One strategy is to use the recommended alerts as guidance for the alert rules that should be created for a particular type of resource. You can then use [Azure Policy](#azure-policy) to automatically create these same alert rules for all resources of that type.
++
+## Azure Monitor Baseline Alerts (AMBA)
+AMBA is a central repository of alert definitions, based on product group and field experience, that helps customers and partners improve their observability experience through the adoption of Azure Monitor. It's organized by resource type so you can quickly identify alert definitions that fit your requirements. AMBA uses Azure Monitor alerts to help you detect and address problems with monitored resources in your infrastructure consistently and at scale. AMBA includes definitions for both metric and log alerts for resources including:
+
+- Service Health
+- Compute resources
+- Networking resources
+- Many more
+
+AMBA also includes example snippets of alert definitions that you can use directly in ARM or Bicep deployments, in addition to policy definitions. Read more about Azure landing zone monitoring at [Monitor Azure platform landing zone components](/azure/cloud-adoption-framework/ready/landing-zone/design-area/management-monitor#azure-landing-zone-monitoring-guidance).
+
+AMBA has patterns that group alerts from different resource types to address specific scenarios. The Azure landing zone (ALZ) pattern, which is also suitable for customers not aligned to ALZ, collates platform alerts into a solution that you can deploy at scale. Other patterns, including SAP and Azure Virtual Desktop, are under development and are intended to minimize the friction of adopting observability in your environment.
+
+See [Azure Monitor Baseline Alerts](https://aka.ms/amba) for details.
+
+## Manual alert rules
+You can manually create alert rules for any of your Azure resources by using the appropriate metric values or log queries as a signal. You must create and maintain each alert rule for each resource individually, so you'll probably want to use one of the other options when they apply and manually create alert rules only for special cases. Multiple Azure services have documentation articles that describe the recommended telemetry to collect and the alert rules that are recommended for that service. These articles are typically found in the **Monitor** section of the service's documentation. For example, see [Monitor Azure virtual machines](../../virtual-machines/monitor-vm.md) and [Monitor Azure Kubernetes Service (AKS)](../../aks/monitor-aks.md).
+
+See [Choosing the right type of alert rule](./alerts-types.md) for more information about the different types of alert rules and articles such as [Create or edit a metric alert rule](./alerts-create-metric-alert-rule.md) and [Create or edit a log alert rule](./alerts-create-log-alert-rule.md) for detailed guidance on manually creating alert rules.
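As a minimal sketch of the manual approach, the following Azure PowerShell commands create a CPU metric alert rule for a single VM; the names, resource IDs, threshold, and action group are placeholders, and you'd adjust the signal and condition to whatever you're monitoring:

```azurepowershell
# Define the condition: average CPU above 90 percent.
$criteria = New-AzMetricAlertRuleV2Criteria -MetricName "Percentage CPU" `
  -TimeAggregation Average -Operator GreaterThan -Threshold 90

# Create the metric alert rule for one VM (IDs and names are placeholders).
Add-AzMetricAlertRuleV2 -Name "high-cpu-alert" `
  -ResourceGroupName "my-rg" `
  -TargetResourceId "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/Microsoft.Compute/virtualMachines/my-vm" `
  -Condition $criteria `
  -WindowSize (New-TimeSpan -Minutes 5) `
  -Frequency (New-TimeSpan -Minutes 1) `
  -Severity 3 `
  -ActionGroupId "/subscriptions/<sub-id>/resourceGroups/my-rg/providers/microsoft.insights/actionGroups/my-action-group"
```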
+
+## Azure Policy
+Using [Azure Policy](../../governance/policy/overview.md), you can automatically create alert rules for all resources of a particular type instead of manually creating rules for each individual resource. You still must define the alerting condition, but the alert rules for each resource will automatically be created for you, for both existing resources and any new ones that you create.
+
+See [Resource Manager template samples for metric alert rules in Azure Monitor](./resource-manager-alerts-metric.md) and [Resource Manager template samples for log alert rules in Azure Monitor](./resource-manager-alerts-log.md) for ARM templates that can be used in policy definitions.
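As a hedged sketch of applying this at scale, the following assigns a policy definition that deploys alert rules; the definition name and scope are placeholders (not built-in names), and a `deployIfNotExists` policy also needs a managed identity for remediation, which is omitted here:

```azurepowershell
# Assign a policy (custom or built-in) that deploys alert rules for every resource
# of a given type in the subscription. The definition name and scope are placeholders.
$definition = Get-AzPolicyDefinition -Name "deploy-vm-cpu-alert"   # hypothetical definition name
New-AzPolicyAssignment -Name "vm-cpu-alerts" -PolicyDefinition $definition -Scope "/subscriptions/<sub-id>"
```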
+
+## Next steps
+
+- [Read more about alerts in Azure Monitor](./alerts-overview.md)
azure-monitor Tutorial Resource Logs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/azure-monitor/essentials/tutorial-resource-logs.md
Browse through the available queries. Identify one to run and select **Run**. Th
:::image type="content" source="media/tutorial-resource-logs/query-results.png" lightbox="media/tutorial-resource-logs/query-results.png"alt-text="Screenshot that shows the results of a sample log query."::: ## Next steps
-Now that you're collecting resource logs, create a log search alert to be proactively notified when interesting data is identified in your log data.
+Once you're collecting monitoring data for your Azure resources, review the different options for creating alert rules so that you're proactively notified when Azure Monitor identifies interesting information.
> [!div class="nextstepaction"]
-> [Create a log search alert for an Azure resource](../alerts/tutorial-log-alert.md)
+> [Create alert rules for an Azure resource](../alerts/alert-options.md)
backup Backup Azure Vms Enhanced Policy https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-azure-vms-enhanced-policy.md
Title: Back up Azure VMs with Enhanced policy description: Learn how to configure Enhanced policy to back up VMs. Previously updated : 09/14/2023 Last updated : 02/19/2024
Azure Backup now supports _Enhanced policy_ that's needed to support new Azure o
>[!Important] >- [Default policy](./backup-during-vm-creation.md#create-a-vm-with-backup-configured) will not support protecting newer Azure offerings, such as [Trusted Launch VM](backup-support-matrix-iaas.md#tvm-backup), [Ultra SSD](backup-support-matrix-iaas.md#vm-storage-support), [Premium SSD v2](backup-support-matrix-iaas.md#vm-storage-support), [Shared disk](backup-support-matrix-iaas.md#vm-storage-support), and Confidential Azure VMs.
->- Enhanced policy now supports protecting both Ultra SSD (preview) and Premium SSD v2 (preview). To enroll your subscription for these features, fill these forms - [Ultra SSD protection](https://forms.office.com/r/1GLRnNCntU) and [Premium SSD v2 protection](https://forms.office.com/r/h56TpTc773).
+>- Enhanced policy now supports protecting both Ultra SSD and Premium SSD v2.
>- Backups for VMs having [data access authentication enabled disks](../virtual-machines/windows/download-vhd.md?tabs=azure-portal#secure-downloads-and-uploads-with-azure-ad) will fail. You must enable backup of Trusted Launch VM through enhanced policy only. Enhanced policy provides the following features:
backup Backup Support Matrix Iaas https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/backup/backup-support-matrix-iaas.md
Title: Support matrix for Azure VM backups description: Get a summary of support settings and limitations for backing up Azure VMs by using the Azure Backup service. Previously updated : 01/24/2024 Last updated : 02/19/2024
Adding a disk to a protected VM | Supported.
Resizing a disk on a protected VM | Supported. Shared storage| Backing up VMs by using Cluster Shared Volumes (CSV) or Scale-Out File Server isn't supported. CSV writers are likely to fail during backup. On restore, disks that contain CSV volumes might not come up. [Shared disks](../virtual-machines/disks-shared-enable.md) | Not supported.
-<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> [Supported regions](../virtual-machines/disks-types.md#ultra-disk-limitations). <br><br> - The preview can be tested on any subscription and no enrollment is required. <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Ultra disks.
-<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). The support is currently in preview. <br><br> [Supported regions](../virtual-machines/disks-types.md#regional-availability). <br><br> - The preview can be tested on any subscription and no enrollment is required. <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks. <br><br> - GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks.
+<a name="ultra-disk-backup">Ultra disks</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](../virtual-machines/disks-types.md#ultra-disk-limitations). <br><br> - Configuration of Ultra disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Ultra disks. <br><br> - GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Ultra disks.
+<a name="premium-ssd-v2-backup">Premium SSD v2</a> | Supported with [Enhanced policy](backup-azure-vms-enhanced-policy.md). <br><br> [Supported regions](../virtual-machines/disks-types.md#regional-availability). <br><br> - Configuration of Premium SSD v2 disk protection is supported via Recovery Services vault and via virtual machine blade. <br><br> - Cross-region restore is currently not supported for machines using Premium v2 disks and GRS type vaults cannot be used for enabling backup. <br><br> - File-level restore is currently not supported for machines using Premium SSD v2 disks.
[Temporary disks](../virtual-machines/managed-disks-overview.md#temporary-disk) | Azure Backup doesn't back up temporary disks. NVMe/[ephemeral disks](../virtual-machines/ephemeral-os-disks.md) | Not supported. [Resilient File System (ReFS)](/windows-server/storage/refs/refs-overview) restore | Supported. Volume Shadow Copy Service (VSS) supports app-consistent backups on ReFS.
communications-gateway Connect Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/connect-operator-connect.md
Previously updated : 11/27/2023 Last updated : 02/16/2024 - template-how-to-pattern - has-azure-ad-ps-ref
You must [deploy Azure Communications Gateway](deploy.md).
You must have access to a user account with the Microsoft Entra Global Administrator role. You must allocate six "service verification" test numbers for each of Operator Connect and Teams Phone Mobile. These numbers are used by the Operator Connect and Teams Phone Mobile programs for continuous call testing.+ - If you selected the service you're setting up as part of deploying Azure Communications Gateway, you've allocated numbers for the service already. - Otherwise, choose the phone numbers now (in E.164 format and including the country code) and names to identify them. We recommend names of the form OC1 and OC2 (for Operator Connect) and TPM1 and TPM2 (for Teams Phone Mobile). You must also allocate at least one test number for each service for integration testing. If you want to set up Teams Phone Mobile and you didn't select it when you deployed Azure Communications Gateway, choose:+ - The number used in Teams Phone Mobile to access the Voicemail Interactive Voice Response (IVR) from native dialers.-- How you plan to route Teams Phone Mobile calls to Microsoft Phone System. Choose from:
- - Integrated MCP (MCP in Azure Communications Gateway).
- - On-premises MCP.
- - Another method to route calls.
+- The method for routing Teams Phone Mobile calls to Microsoft Phone System. Choose from:
+
+ - Integrated MCP (MCP in Azure Communications Gateway).
+ - On-premises MCP.
+ - Another method to route calls.
## Enable Operator Connect or Teams Phone Mobile support
If you want to set up Teams Phone Mobile and you didn't select it when you deplo
1. Sign in to the [Azure portal](https://azure.microsoft.com/). 1. In the search bar at the top of the page, search for your Communications Gateway resource and select it.
-1. In the side menu bar, find **Communications services** and select **Operator Connect** or **Teams Phone Mobile** (as appropriate) to open a page for the service.
-1. On the service's page, select **Operator Connect settings** or **Teams Phone Mobile settings**.
-1. Fill in the fields, selecting **Review + create** and **Create**.
+1. In the side menu bar, under **Communications services**, select **Operator Connect** or **Teams Phone Mobile** (as appropriate) to open a page for the service.
+1. Select **Operator Connect settings** or **Teams Phone Mobile settings**.
+1. Fill in the fields, then select **Review + create** and **Create**.
1. Select the **Overview** page for your resource. 1. Select **Add test lines** and add the service verification lines you chose in [Prerequisites](#prerequisites). Set the **Testing purpose** to **Automated**. > [!IMPORTANT]
To add the Project Synergy application:
```azurepowershell Get-Module -ListAvailable ```
- 1. If `AzureAD` doesn't appear in the output, install the module:
+ 1. If `AzureAD` doesn't appear in the output, install the module.
1. Close your current PowerShell window. 1. Open PowerShell as an admin. 1. Run the following command.
Do the following steps in the tenant that contains your Project Synergy applicat
```azurepowershell Get-Module -ListAvailable ```
- 1. If `AzureAD` doesn't appear in the output, install the module:
+ 1. If `AzureAD` doesn't appear in the output, install the module.
1. Close your current PowerShell window. 1. Open PowerShell as an admin. 1. Run the following command.
communications-gateway Get Started https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/get-started.md
Title: Getting started with Azure Communications Gateway
-description: Learn how to plan for and deploy Azure Communications Gateway
+description: Learn how to plan for and deploy Azure Communications Gateway.
Previously updated : 11/06/2023 Last updated : 02/16/2024 #CustomerIntent: As someone setting up Azure Communications Gateway, I want to understand the steps I need to carry out to have live traffic through my deployment.
Read the following articles to learn about Azure Communications Gateway.
- [Onboarding with Included Benefits for Azure Communications Gateway](onboarding.md), to learn about onboarding to Operator Connect or Teams Phone Mobile and the support we can provide. - [Connectivity for Azure Communications Gateway](connectivity.md) and [Reliability in Azure Communications Gateway](reliability-communications-gateway.md), to create a network design that includes Azure Communications Gateway. - [Overview of security for Azure Communications Gateway](security.md), to learn about how Azure Communications Gateway keeps customer data and your network secure.-- [Provisioning API for Azure Communications Gateway](provisioning-platform.md), to learn about when you might need or want to integrate with the Provisioning API.
+- [Provisioning API (preview) for Azure Communications Gateway](provisioning-platform.md), to learn about when you might need or want to integrate with the Provisioning API.
- [Plan and manage costs for Azure Communications Gateway](plan-and-manage-costs.md), to learn about costs for Azure Communications Gateway.-- [Azure Communications Gateway limits, quotas and restrictions](limits.md), to learn about the limits and quotas associated with the Azure Communications Gateway
+- [Azure Communications Gateway limits, quotas and restrictions](limits.md), to learn about the limits and quotas associated with the Azure Communications Gateway.
For Operator Connect and Teams Phone Mobile, also read: -- [Overview of interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md)
+- [Overview of interoperability of Azure Communications Gateway with Operator Connect and Teams Phone Mobile](interoperability-operator-connect.md).
- [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md). - [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calls-operator-connect.md).
Use the following procedures to deploy Azure Communications Gateway and connect
1. [Prepare to deploy Azure Communications Gateway](prepare-to-deploy.md) describes the steps you need to take before you can start creating your Azure Communications Gateway resource. You might need to refer to some of the articles listed in [Learn about and plan for Azure Communications Gateway](#learn-about-and-plan-for-azure-communications-gateway). 1. [Deploy Azure Communications Gateway](deploy.md) describes how to create your Azure Communications Gateway resource in the Azure portal and connect it to your networks.
-1. [Integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md) describes how to integrate with the Provisioning API. Integrating with the API is:
+1. [Integrate with Azure Communications Gateway's Provisioning API (preview)](integrate-with-provisioning-api.md) describes how to integrate with the Provisioning API. Integrating with the API is:
- Required for Microsoft Teams Direct Routing and Zoom Phone Cloud Peering.
- - Optional for Operator Connect: only required to add custom headers to messages entering your core network.
- - Not supported for Teams Phone Mobile.
+ - Recommended for Operator Connect and Teams Phone Mobile because it enables flow-through API-based provisioning of your customers both on Azure Communications Gateway and in the Operator Connect environment. This integration lets Azure Communications Gateway provide additional functionality, such as injecting custom SIP headers, while also fulfilling the requirement from the Operator Connect and Teams Phone Mobile programs that you use APIs to provision customers in the Operator Connect environment. For more information, see [Provisioning and Operator Connect APIs](interoperability-operator-connect.md#provisioning-and-operator-connect-apis).
## Integrate with your chosen communications services
Use the following procedures to integrate with Zoom Phone Cloud Peering.
## Next steps -- Learn about [your network and Azure Communications Gateway](role-in-network.md)
+- Learn about [your network and Azure Communications Gateway](role-in-network.md).
- Find out about [the newest enhancements to Azure Communications Gateway](whats-new.md).
communications-gateway Integrate With Provisioning Api https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/integrate-with-provisioning-api.md
Previously updated : 10/09/2023 Last updated : 02/16/2024
-# Integrate with Azure Communications Gateway's Provisioning API
+# Integrate with Azure Communications Gateway's Provisioning API (preview)
-This article explains when you need to integrate with Azure Communications Gateway's Provisioning API and provides a high-level overview of getting started. It's aimed at software developers working for telecommunications operators.
+This article explains when you need to integrate with Azure Communications Gateway's Provisioning API (preview) and provides a high-level overview of getting started. It's aimed at software developers working for telecommunications operators.
-The Provisioning API allows you to configure Azure Communications Gateway with the details of your customers and the numbers that you have assigned to them. It's a REST API.
+The Provisioning API allows you to configure Azure Communications Gateway with the details of your customers and the numbers that you have assigned to them. If you use the Provisioning API for *backend service sync*, you can also provision the Operator Connect and Teams Phone Mobile environments with the details of your enterprise customers and the numbers that you allocate to them. This flow-through provisioning allows you to meet the Operator Connect and Teams Phone Mobile requirement to use APIs to manage your customers and numbers after you launch your service.
+
+ The Provisioning API is a REST API.
Whether you need to integrate with the REST API depends on your chosen communications service. |Communications service |Provisioning API integration |Purpose | ||||
-|Microsoft Teams Direct Routing |Required |- Configure the subdomain associated with each Direct Routing customer<br>- Generate DNS records specific to each customer (as required by the Microsoft 365 environment)<br>- Indicate that numbers are enabled for Direct Routing.<br>- (Optional) Configure a custom header for messages to your network|
-|Operator Connect|Optional|(Optional) Configure a custom header for messages to your network|
-|Teams Phone Mobile|Not supported|N/A|
-|Zoom Phone Cloud Peering |Required |- Indicate that numbers are enabled for Zoom<br>- (Optional) Configure a custom header for messages to your network|
+|Microsoft Teams Direct Routing |Required |- Configuring the subdomain associated with each Direct Routing customer.<br>- Generating DNS records specific to each customer (as required by the Microsoft 365 environment).<br>- Indicating that numbers are enabled for Direct Routing.<br>- (Optional) Configuring a custom header for messages to your network.|
+|Operator Connect|Recommended|- (Recommended) Flow-through provisioning of Operator Connect customers through interoperation with Operator Connect APIs (using backend service sync). <br>- (Optional) Configuring a custom header for messages to your network. |
+|Teams Phone Mobile|Recommended|- (Recommended) Flow-through provisioning of Teams Phone Mobile customers through interoperation with Operator Connect APIs (using backend service sync). <br>- (Optional) Configuring a custom header for messages to your network. |
+|Zoom Phone Cloud Peering |Required |- Indicating that numbers are enabled for Zoom. <br>- (Optional) Configuring a custom header for messages to your network.|
+
+> [!TIP]
+> You can also use the Number Management Portal (preview) for Operator Connect and Teams Phone Mobile.
## Prerequisites
Use the *Key concepts* and *Examples* information in the [API Reference](/rest/a
- *Account* resources are descriptions of operator customers (typically, an enterprise), and per-customer settings for service provisioning. - *Number* resources belong to an account. They describe numbers, the services that the numbers make use of (for example, Microsoft Teams Direct Routing), and any extra per-number configuration.
+- *Request for Information (RFI)* resources are descriptions of operator customers (typically an enterprise) who have expressed interest in receiving service from the operator through Operator Connect and Teams Phone Mobile.
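The following sketch is purely illustrative of that hierarchy (accounts contain numbers); the endpoint, URL paths, body fields, and authentication shown are hypothetical placeholders rather than the real request shapes, which are defined in the [API Reference](/rest/api/voiceservices):

```powershell
# Illustrative only: paths, fields, and authentication are hypothetical placeholders.
$base    = "https://<your-provisioning-api-endpoint>"   # placeholder endpoint
$headers = @{ Authorization = "Bearer <token>" }        # obtain a real token as described in the API Reference

# Create or update an account for an enterprise customer, then a number that belongs to it.
Invoke-RestMethod -Method Put -Uri "$base/accounts/contoso" -Headers $headers `
  -ContentType "application/json" -Body (@{ serviceDetails = @{} } | ConvertTo-Json)

Invoke-RestMethod -Method Put -Uri "$base/accounts/contoso/numbers/+441632960000" -Headers $headers `
  -ContentType "application/json" -Body (@{ configuration = @{} } | ConvertTo-Json)
```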
[!INCLUDE [limits on the Provisioning API](includes/communications-gateway-provisioning-api-restrictions.md)]
communications-gateway Interoperability Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/interoperability-operator-connect.md
Title: Overview of Operator Connect and Teams Phone Mobile with Azure Communications Gateway
-description: Understand how Azure Communications Gateway fits into your fixed and mobile networks and into the Operator Connect and Teams Phone Mobile environments
+description: Understand how Azure Communications Gateway fits into your fixed and mobile networks and into the Operator Connect and Teams Phone Mobile environments.
Previously updated : 01/31/2024 Last updated : 02/16/2024
Azure Communications Gateway offers multiple media interworking options. For exa
For full details of the media interworking features available in Azure Communications Gateway, raise a support request.
-## Number Management Portal for provisioning with Operator Connect APIs
+## Provisioning and Operator Connect APIs
-Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment has been certified and launched, you must not use the Operator Connect portal for provisioning. You can use Azure Communications Gateway's Number Management Portal instead. This Azure portal feature enables you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
+Operator Connect and Teams Phone Mobile require API integration between your IT systems and Microsoft Teams for flow-through provisioning and automation. After your deployment is certified and launched, you must not use a portal for provisioning. Azure Communications Gateway's Provisioning API (preview) meets this requirement by allowing flow-through provisioning from your BSS clients to Azure Communications Gateway and the Operator Connect environments. Azure Communications Gateway also provides a Number Management Portal (preview), integrated into the Azure portal, for browser-based provisioning that you can use to get started while you complete API integration.
-For more information, see [Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md).
+For more information, see:
+
+- [Provisioning API (preview) for Azure Communications Gateway](provisioning-platform.md) and [Integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md).
+- [Manage an enterprise with Azure Communications Gateway's Number Management Portal (preview) for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md).
> [!TIP]
-> The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals.
+> These methods do not allow your enterprise customers to manage Teams Calling. For example, they do not provide self-service portals.
## Providing call duration data to Microsoft Teams
communications-gateway Manage Enterprise Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/manage-enterprise-operator-connect.md
Title: Use Azure Communications Gateway's Number Management Portal to manage an enterprise
-description: Learn how to add and remove enterprises and numbers for Operator Connect and Teams Phone Mobile with Azure Communication Gateway's Number Management Portal.
+ Title: Use Azure Communications Gateway's Number Management Portal (preview) to manage an enterprise
+description: Learn how to add and remove enterprises and numbers with Azure Communications Gateway's Number Management Portal.
Previously updated : 11/27/2023 Last updated : 02/16/2024
-# Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile
+# Manage an enterprise with Azure Communications Gateway's Number Management Portal (preview)
-Azure Communications Gateway's Number Management Portal enables you to manage enterprise customers and their numbers through the Azure portal.
+Azure Communications Gateway's Number Management Portal (preview) enables you to manage enterprise customers and their numbers through the Azure portal. Any changes made in this portal are automatically provisioned into the Operator Connect and Teams Phone Mobile environments. You can also use Azure Communications Gateway's Provisioning API (preview). For more information, see [Provisioning API (preview) for Azure Communications Gateway](provisioning-platform.md).
-The Operator Connect and Teams Phone Mobile programs don't allow you to use the Operator Connect portal for provisioning after you launch your service in the Teams Admin Center. The Number Management Portal is a simple alternative that you can use until you finish integrating with the Operator Connect APIs.
+> [!IMPORTANT]
+> The Operator Connect and Teams Phone Mobile programs require full API integration with your BSS before you launch in the Teams Admin Center. This integration can be directly with the Operator Connect API or through Azure Communications Gateway's Provisioning API (preview).
## Prerequisites
-Confirm that you have **Reader** access to the Azure Communications Gateway resource and appropriate permissions for the Project Synergy enterprise application:
+Confirm that you have **Reader** access to the Azure Communications Gateway resource and appropriate permissions for the AzureCommunicationsGateway enterprise application:
<!-- Must be kept in sync with provision-user-roles.md - steps for understanding and configuring -->
-* To view existing configuration: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Read**
-* To make changes to consents (which represent your relationships with enterprises) and numbers: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Write**
+* To view configuration: **ProvisioningAPI.ReadUser**.
+* To add or make changes to configuration: **ProvisioningAPI.ReadUser** and **ProvisioningAPI.WriteUser**.
+* To remove configuration: **ProvisioningAPI.ReadUser** and **ProvisioningAPI.DeleteUser**.
+* To view, add, make changes to, or remove configuration: **ProvisioningAPI.AdminUser**.
If you don't have these permissions, ask your administrator to set them up by following [Set up user roles for Azure Communications Gateway](provision-user-roles.md).
If you're uploading new numbers for an enterprise customer:
* You must complete any internal procedures for assigning numbers. * You must know the numbers you need to upload (as E.164 numbers). Each number must:
- * Contain only digits (0-9), with an optional `+` at the start.
- * Include the country code.
- * Be up to 19 characters long.
+ * Contain only digits (0-9), with an optional `+` at the start.
+ * Include the country code.
+    * Be up to 19 characters long (a quick format-check sketch follows these prerequisites).
* You must know the following information for each number. |Information for each number |Notes | |||
-|Calling profile |One of the `CommsGw` Calling Profiles we created for you.|
-|Intended usage | Individuals (calling users), applications or conference calls.|
+|Intended usage | Individuals (calling users), applications, or conference calls.|
|Capabilities |Which types of call to allow (for example, inbound calls or outbound calls).| |Civic address | A physical location for emergency calls. The enterprise must have configured this address in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.| |Location | A description of the location for emergency calls. The enterprise must have configured this location in the Teams Admin Center. Only required for individuals (calling users) and only if you don't allow the enterprise to update the address.|
If you're uploading new numbers for an enterprise customer:
|Country | The country for the number. Only required if you're uploading a North American Toll-Free number, otherwise optional.| |Ticket number (optional) |The ID of any ticket or other request that you want to associate with this number. Up to 64 characters. |
-If you're uploading multiple numbers, prepare a `.csv` file with the heading `Numbers` and one number per line (up to 10,000 numbers), as in the following example. You can use this file to upload multiple numbers at once with the same settings (for example, the same calling profile).
-
-```
-Numbers
-+441632960000
-+441632960001
-+441632960002
-+441632960003
-+441632960004
-```
-
+Each number is automatically assigned to the Operator Connect or Teams Phone Mobile calling profile associated with the Azure Communications Gateway that is being provisioned.
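As a quick, unofficial sanity check of the number format requirements listed earlier (optional leading `+`, digits only, up to 19 characters), you might use a sketch like the following before uploading; the sample number is an illustrative E.164 example:

```powershell
# Unofficial format check for a candidate number: optional leading '+', digits only,
# and no more than 19 characters in total.
$number = "+441632960000"
if ($number -match '^\+?[0-9]+$' -and $number.Length -le 19) {
    Write-Output "Format looks OK: $number"
} else {
    Write-Output "Check the format of: $number"
}
```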
## Go to your Communications Gateway resource
Numbers
1. In the search bar at the top of the page, search for your Communications Gateway resource. 1. Select your Communications Gateway resource.
-## Select an enterprise customer to manage
+## Manage your agreement with an enterprise customer
+
+When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a *consent*. The consent represents the relationship between you and the enterprise.
+
+The Number Management Portal displays a consent as a *Request for Information* and allows you to update its status. Finding the Request for Information for an enterprise is also the easiest way to manage that enterprise's numbers.
-When an enterprise customer uses the Teams Admin Center to request service, the Operator Connect APIs create a *consent*. This consent represents the relationship between you and the enterprise.
+1. From the overview page for your Communications Gateway resource, find the **Number Management (Preview)** section in the sidebar.
+1. Select **Requests for Information**.
+1. Find the enterprise that you want to manage. You can use the **Add filter** options to search for the enterprise.
+1. If you need to change the status of the relationship, select the enterprise **Tenant ID** then select **Update relationship status**. Use the drop-down to select the new status. For example, if you're agreeing to provide service to a customer, set the status to **Agreement signed**. If you set the status to **Consent declined** or **Contract terminated**, you must provide a reason.
-The Number Management Portal allows you to update the status of these consents. Finding the consent for an enterprise is also the easiest way to manage numbers for an enterprise.
+## Create an Account for the enterprise
-1. From the overview page for your Communications Gateway resource, find the **Number Management** section in the sidebar. Select **Consents**.
-1. Find the enterprise that you want to manage.
-1. If you need to change the status of the relationship, select **Update Relationship Status** from the menu for the enterprise. Set the new status. For example, if you're agreeing to provide service to a customer, set the status to **Agreement signed**. If you set the status to **Consent Declined** or **Contract Terminated**, you must provide a reason.
+You must create an *Account* for each enterprise that you manage with the Number Management Portal.
+
+1. From the overview page for your Communications Gateway resource, find the **Number Management (Preview)** section in the sidebar.
+1. Select **Accounts**.
+1. Select **Create account**.
+1. Fill in the enterprise **Account name**.
+1. Select the checkboxes for the services you want to enable for the enterprise.
+1. Fill in any additional information requested under the **Communications Services Settings** heading.
+1. Select **Create**.
## Manage numbers for the enterprise Uploading numbers for an enterprise allows IT administrators at the enterprise to allocate those numbers to their users.
-1. Go to the number management page for the enterprise.
- * If you followed [Select an enterprise customer to manage](#select-an-enterprise-customer-to-manage), select **Manage numbers** from the menu.
- * Otherwise, find the **Number Management** section in the sidebar and select **Numbers**. Search for the enterprise using the enterprise's Microsoft Entra tenant ID.
+1. In the sidebar, locate the **Number Management (Preview)** section and select **Accounts**. Select the enterprise **Account name**.
+1. Select **View numbers** to go to the number management page for the enterprise.
1. To upload new numbers for an enterprise: 1. Select **Upload numbers**.
- 1. Fill in the fields based on the information you determined in [Prerequisites](#prerequisites). These settings apply to all the numbers you upload in the **Telephone numbers** section.
- 1. In **Telephone numbers**, add the numbers:
- * If you created a `.csv` file with multiple numbers as described in [Prerequisites](#prerequisites), select **Upload CSV file** and upload the file when prompted.
- * Otherwise, select **Manual input** and add each number individually.
- 1. Select **Review + upload** and **Upload**. Uploading creates an order for uploading numbers over the Operator Connect API.
+ 1. Fill in the fields based on the information you determined in [Prerequisites](#prerequisites). These settings apply to all the numbers you upload in the **Add numbers** section.
+ 1. In **Add numbers** add each number individually.
+ 1. Select **Review and upload** and **Upload**. Uploading creates an order for uploading numbers over the Operator Connect API.
1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers are available to the enterprise. You might need to refresh more than once. 1. To remove numbers from an enterprise: 1. Select the numbers.
- 1. Select **Release numbers**.
+ 1. Select **Delete numbers**.
1. Wait 30 seconds, then refresh the order status. When the order status is **Complete**, the numbers have been removed. ## View civic addresses for an enterprise You can view civic addresses for an enterprise. The enterprise configures the details of each civic address, so you can't configure these details.
-1. Go to the civic address page for the enterprise.
- * If you followed [Select an enterprise customer to manage](#select-an-enterprise-customer-to-manage), select **Civic addresses** from the menu.
- * Otherwise, find the **Number Management** section in the sidebar and select **Civic addresses**. Search for the enterprise using the enterprise's Microsoft Entra tenant ID.
-1. View the civic addresses. You can see the address, the company name, the description and whether the address was validated when the enterprise configured the address.
-1. Optionally, select an individual address to view additional information provided by the enterprise (for example, the ELIN information).
+1. In the sidebar, locate the **Number Management (Preview)** section and select **Accounts**. Select the enterprise **Account name**.
+1. Select **Civic addresses** to view the **Unified civic addresses** page for the enterprise.
+1. You can see the address, the company name, the description, and whether the address was validated when the enterprise configured the address.
+1. Optionally, select an individual address to view additional information provided by the enterprise, for example the Emergency Location Identification Number (ELIN).
+
+## Configure a custom header for a number
+
+You can specify a custom SIP header value for an enterprise telephone number, which applies to all SIP messages sent and received by that number.
+
+1. In the sidebar, locate the **Number Management (Preview)** section and select **Numbers**.
+1. Select the **Phone number** checkbox then select **Manage number**.
+1. Specify a **Custom SIP header value**.
+1. Select **Review and upload** then **Upload**.
## Next steps
communications-gateway Overview https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/overview.md
Title: What is Azure Communications Gateway?
-description: Azure Communications Gateway allows telecoms operators to interoperate with Operator Connect, Teams Phone Mobile, Microsoft Teams Direct Routing and Zoom Phone.
+description: Azure Communications Gateway allows telecoms operators to interoperate with Operator Connect, Teams Phone Mobile, Microsoft Teams Direct Routing, and Zoom Phone.
Previously updated : 11/27/2023 Last updated : 02/16/2024
Azure Communications Gateway enables Microsoft Teams calling through the Operato
[!INCLUDE [communications-gateway-tsp-restriction](includes/communications-gateway-tsp-restriction.md)] Diagram that shows how Azure Communications Gateway connects to the Microsoft Phone System, Zoom Phone and to your fixed and mobile networks. Microsoft Teams clients connect to Microsoft Phone System. Zoom clients connect to Zoom Phone. Your fixed network connects to PSTN endpoints. Your mobile network connects to Teams Phone Mobile users. Azure Communications Gateway connects Microsoft Phone System, Zoom Phone and your fixed and mobile networks. :::image-end:::
-Azure Communications Gateway provides advanced SIP, RTP and HTTP interoperability functions (including SBC function certified by Microsoft Teams and Zoom) so that you can integrate with your chosen communications services quickly, reliably and in a secure manner.
+Azure Communications Gateway provides advanced SIP, RTP, and HTTP interoperability functions (including SBC function certified by Microsoft Teams and Zoom) so that you can integrate with your chosen communications services quickly, reliably, and securely.
As part of Microsoft Azure, the network elements in Azure Communications Gateway are fully managed and include an availability SLA. This full management simplifies network operations integration and accelerates the timeline for adding new network functions into production. ## Architecture
-Azure Communications Gateway acts as the edge of your network. This position allows it to interwork between your network and your chosen communications services and meet the requirements of your chosen programs.
+Azure Communications Gateway acts as the edge of your network. This position allows it to interwork between your network and your chosen communications services and to meet the requirements of your chosen programs.
To ensure availability, Azure Communications Gateway is deployed into two Azure Regions within a given Geography, as shown in the following diagram. It supports both active-active and primary-backup geographic redundancy models to fit with your network design.
Azure Communications Gateway supports the SIP and RTP requirements for certified
Azure Communications Gateway's voice features include: -- **Voice interworking** - Azure Communications Gateway can resolve interoperability issues between your network and communications services. Its position on the edge of your network reduces disruption to your networks, especially in complex scenarios like Teams Phone Mobile where Teams Phone System is the call control element. Azure Communications Gateway includes powerful interworking features, for example:
+- **Voice interworking** - Azure Communications Gateway can resolve interoperability issues between your network and communications services. Its position on the edge of your network reduces disruption to your networks, especially in complex scenarios like Teams Phone Mobile where Teams Phone System is the call control element. Azure Communications Gateway includes powerful interworking features, including:
+
+ - 100rel and early media interworking.
+ - Downstream call forking with codec changes.
+ - Custom SIP header and SDP manipulation.
+ - DTMF (Dual-Tone Multi-Frequency tones) interworking between inband tones, RFC2833 telephone-event signaling, and SIP INFO/NOTIFY signaling.
+ - Payload type interworking.
+ - Media transcoding.
+ - Ringback injection.
- - 100rel and early media inter-working
- - Downstream call forking with codec changes
- - Custom SIP header and SDP manipulation
- - DTMF (Dual-Tone Multi-Frequency tones) interworking between inband tones, RFC2833 telephone event and SIP INFO/NOTIFY signaling
- - Payload type interworking
- - Media transcoding
- - Ringback injection
- **Call control integration for Teams Phone Mobile** - Azure Communications Gateway includes an optional IMS Application Server called Mobile Control Point (MCP). MCP ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).-- **Optional direct peering to Emergency Routing Service Providers for Operator Connect and Teams Phone Mobile (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calls-operator-connect.md).
+- **Optional direct peering to Emergency Routing Service Providers for Operator Connect and Teams Phone Mobile (US only)** - If your network can't transmit Emergency location information in PIDF-LO (Presence Information Data Format Location Object) SIP bodies, Azure Communications Gateway can connect directly to your chosen Teams-certified Emergency Routing Service Provider (ERSP) instead. See [Emergency calling for Operator Connect and Teams Phone Mobile with Azure Communications Gateway](emergency-calls-operator-connect.md).
## Provisioning and API integration for Operator Connect and Teams Phone Mobile
-Launching Operator Connect or Teams Phone Mobile requires you to use the Operator Connect APIs to provision subscribers (instead of the Operator Connect Portal). Azure Communications Gateway offers a Number Management Portal integrated into the Azure portal. This portal uses the Operator Connect APIs, allowing you to pass the certification process and sell Operator Connect or Teams Phone Mobile services while you carry out a custom API integration project.
+Launching Operator Connect or Teams Phone Mobile requires you to use the Operator Connect APIs to provision subscribers (instead of the Operator Connect Portal). Azure Communications Gateway offers two methods of provisioning subscribers that allow you to meet this requirement:
+
+- A Number Management Portal (preview) integrated into the Azure portal, for browser-based provisioning.
+- A Provisioning API (preview) that allows flow-through provisioning from your BSS clients to Azure Communications Gateway and the Operator Connect environments.
+
+Both methods integrate with the Operator Connect APIs, allowing you to pass the certification process and sell Operator Connect or Teams Phone Mobile services.
-For more information, see [Number Management Portal for provisioning with Operator Connect APIs](interoperability-operator-connect.md#number-management-portal-for-provisioning-with-operator-connect-apis) and [Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md).
+For more information, see [Provisioning and Operator Connect APIs (preview)](interoperability-operator-connect.md#provisioning-and-operator-connect-apis).
> [!TIP]
-> The Number Management Portal does not allow your enterprise customers to manage Teams Calling. For example, it does not provide self-service portals.
+> These methods do not allow your enterprise customers to manage Teams Calling. For example, they do not provide self-service portals.
Azure Communications Gateway also automatically integrates with Operator Connect APIs to upload call duration data to Microsoft Teams. For more information, see [Providing call duration data to Microsoft Teams](interoperability-operator-connect.md#providing-call-duration-data-to-microsoft-teams).
Azure Communications Gateway also automatically integrates with Operator Connect
Microsoft Teams Direct Routing's multitenant model for carrier telecommunications operators requires inbound messages to Microsoft Teams to indicate the Microsoft tenant associated with your customers. Azure Communications Gateway automatically updates the SIP signaling to indicate the correct tenant, using information that you provision onto Azure Communications Gateway. This process removes the need for your core network to map between numbers and customer tenants. For more information, see [Identifying the customer tenant for Microsoft Phone System](interoperability-teams-direct-routing.md#identifying-the-customer-tenant-for-microsoft-phone-system).
-Microsoft Teams Direct Routing allows a customer admin to assign any phone number to a user, even if you haven't assigned that number to them. This lack of validation presents a risk of caller ID spoofing. Azure Communications Gateway automatically screens all Direct Routing calls originating from Microsoft Teams. This screening ensures that customers can only place calls from numbers that you have assigned to them. However, you can disable this screening on a per-customer basis if necessary. For more information, see [Support for caller ID screening](interoperability-teams-direct-routing.md#support-for-caller-id-screening).
+Microsoft Teams Direct Routing allows a customer admin to assign any phone number to a user, even if you don't assign that number to them. This lack of validation presents a risk of caller ID spoofing. Azure Communications Gateway automatically screens all Direct Routing calls originating from Microsoft Teams. This screening ensures that customers can only place calls from numbers that you assign to them. However, you can disable this screening on a per-customer basis if necessary. For more information, see [Support for caller ID screening](interoperability-teams-direct-routing.md#support-for-caller-id-screening).
## Next steps -- [Learn how to get started with Azure Communications Gateway](get-started.md)-- [Learn how Azure Communications Gateway fits into your network](role-in-network.md).-- [Learn about the latest Azure Communications Gateway features](whats-new.md)
+- Learn how to [get started with Azure Communications Gateway](get-started.md).
+- Learn about [how Azure Communications Gateway fits into your network](role-in-network.md).
+- Learn about [the latest Azure Communications Gateway features](whats-new.md).
communications-gateway Prepare For Live Traffic Operator Connect https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/prepare-for-live-traffic-operator-connect.md
Previously updated : 11/15/2023 Last updated : 02/16/2024 # Prepare for live traffic with Operator Connect, Teams Phone Mobile and Azure Communications Gateway
In this article, you learn about the steps that you and your onboarding team mus
- You must [deploy Azure Communications Gateway](deploy.md) using the Microsoft Azure portal and [connect it to Operator Connect or Teams Phone Mobile](connect-operator-connect.md). - You must know the test numbers to use for integration testing and for service verification (continuous call testing). These numbers can't be the same. You chose them as part of [deploying Azure Communications Gateway](deploy.md#prerequisites) or [connecting it to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#prerequisites).
- - Integration testing allows you to confirm that Azure Communications Gateway and Microsoft Phone System are interoperating correctly with your network.
- - Service verification is set up by the Operator Connect and Teams Phone Mobile programs. It ensures that your deployment is able to handle calls from Microsoft Phone System throughout the lifetime of your deployment.
+
+ - Integration testing allows you to confirm that Azure Communications Gateway and Microsoft Phone System are interoperating correctly with your network.
+ - Service verification is set up by the Operator Connect and Teams Phone Mobile programs. It ensures that your deployment is able to handle calls from Microsoft Phone System throughout the lifetime of your deployment.
+ - You must have a tenant you can use for integration testing (representing an enterprise customer), and some users in that tenant to whom you can assign the numbers for integration testing.
- - If you don't already have a suitable test tenant, you can use the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program), which provides E5 licenses.
- - The test users must be licensed for Teams Phone System and in Teams Only mode.
+
+ - If you don't already have a suitable test tenant, you can use the [Microsoft 365 Developer Program](https://developer.microsoft.com/microsoft-365/dev-program), which provides E5 licenses.
+ - The test users must be licensed for Teams Phone System and in Teams Only mode.
+ - You must have access to the following configuration portals. |Configuration portal |Required permissions |
In this article, you learn about the steps that you and your onboarding team mus
|[Operator Connect portal](https://operatorconnect.microsoft.com/) | `Admin` role or `PartnerSettings.Read` and `NumberManagement.Write` roles (configured on the Project Synergy enterprise application that you set up when [you connected to Operator Connect or Teams Phone Mobile](connect-operator-connect.md#add-the-project-synergy-application-to-your-azure-tenant))| |[Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant |User management|
+- If you plan to use Azure Communications Gateway's Provisioning API (preview) to upload numbers to the Operator Connect environment, you must be able to make requests using [a client integrated with the API](integrate-with-provisioning-api.md). You must also have access to the [API Reference](/rest/api/voiceservices).
+
+- If you plan to use Azure Communications Gateway's Number Management Portal (preview) to configure numbers for integration testing, you must have **Reader** access to the Azure Communications Gateway resource and **ProvisioningAPI.ReadUser** and **ProvisioningAPI.WriteUser** roles for the AzureCommunicationsGateway enterprise application.
+ [!INCLUDE [communications-gateway-oc-configuration-ownership](includes/communications-gateway-oc-configuration-ownership.md)] ## Methods
-In some parts of this article, the steps you must take depend on whether your deployment includes the Number Management Portal. This article provides instructions for both types of deployment. Choose the appropriate instructions.
+In some parts of this article, the steps you must take depend on whether you're using the Provisioning API (preview), the Number Management Portal (preview), or the Operator Connect portal and APIs. This article provides instructions for each option. Choose the appropriate instructions.
## Ask your onboarding team to register your test enterprise tenant
Your onboarding team must register the test enterprise tenant that you chose in
1. Find your company's "Operator ID" in your [operator configuration in the Operator Connect portal](https://operatorconnect.microsoft.com/operator/configuration). 1. Provide your onboarding contact with:+ - Your company's name. - Your company's Operator ID. - The ID of the tenant to use for testing.+ 1. Wait for your onboarding team to confirm that your test tenant has been registered. ## Set up your test tenant
Integration testing requires setting up your test tenant for Operator Connect or
> [!IMPORTANT] > Do not assign the service verification numbers to test users. Your onboarding team arranges configuration of your service verification numbers.
-1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `CommsGw`. We created this Calling Profile for you during the Azure Communications Gateway deployment process.
1. In your test tenant, request service from your company. 1. Sign in to the [Teams Admin Center](https://admin.teams.microsoft.com/) for your test tenant. 1. Select **Voice** > **Operators**. 1. Select your company in the list of operators, fill in the form and select **Add as my operator**. 1. In your test tenant, create some test users (if you don't already have suitable users). License the users for Teams Phone System and place them in Teams Only mode. 1. Configure emergency locations in your test tenant.
-1. Upload numbers in the Number Management Portal (if you chose to deploy it as part of Azure Communications Gateway) or the Operator Connect Operator Portal. Use the Calling Profile that you obtained from your onboarding team.
+1. Upload numbers for integration testing over the Provisioning API (preview), in the Number Management Portal (preview), or using the Operator Connect Operator Portal.
+
+ # [Provisioning API (preview)](#tab/provisioning-api)
+
+ The following steps summarize the requests you must make to the Provisioning API. For full details of the relevant API resources, see the [API Reference](/rest/api/voiceservices).
- # [Number Management Portal](#tab/number-management-portal)
+ 1. Find the _RFI_ (Request for information) resource for your test tenant and update the `status` property of its child _Customer Relationship_ resource to indicate the agreement has been signed.
+ 1. Create an _Account_ resource that represents the customer.
+ 1. Create a _Number_ resource as a child of the Account resource for each test number.
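
   For orientation, here's a minimal sketch of what these requests could look like with `az rest`. The endpoint, resource paths, API version, and request bodies shown are placeholders and assumptions, not the documented contract; take the real resource schema from the [API Reference](/rest/api/voiceservices).

   ```azurecli
   # Illustrative only: endpoint, paths, API version, and body files are placeholders.
   BASE_URL="https://<provisioning-api-endpoint>"   # assumption: your deployment's Provisioning API endpoint
   API_VERSION="<api-version>"                      # assumption: version listed in the API Reference

   # Mark the Customer Relationship child of the RFI for your test tenant as signed.
   az rest --method patch \
     --url "$BASE_URL/<rfi-resource-path>/customerRelationship?api-version=$API_VERSION" \
     --body '{"status": "<agreement-signed-value>"}'

   # Create an Account resource for the customer, then a Number resource per test number.
   az rest --method put \
     --url "$BASE_URL/accounts/<account-name>?api-version=$API_VERSION" \
     --body @account.json

   az rest --method put \
     --url "$BASE_URL/accounts/<account-name>/numbers/<test-number>?api-version=$API_VERSION" \
     --body @number.json
   ```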
- 1. Sign in to the [Azure portal](https://azure.microsoft.com/).
- 1. In the search bar at the top of the page, search for your Communications Gateway resource.
- 1. Select your Communications Gateway resource.
- 1. On the overview page, select **Consents** in the sidebar.
+ # [Number Management Portal (preview)](#tab/number-management-portal)
+
+ 1. From the overview page for your Communications Gateway resource, find the **Number Management (Preview)** section in the sidebar.
+ 1. Select **Requests for Information**.
1. Select your test tenant.
- 1. From the menu, select **Update Relationship Status**. Set the status to **Agreement signed**.
- 1. From the menu, select **Manage Numbers**.
- 1. Select **Upload numbers**.
- 1. Fill in the fields as required, and then select **Review + upload** and **Upload**.
+ 1. Select **Update relationship status**. Use the drop-down to set the status to **Agreement signed**.
+ 1. Select **Create account**. Fill in the fields as required and select **Create**.
+ 1. Select **View account**.
+ 1. Select **View numbers** and select **Upload numbers**.
+ 1. Fill in the fields as required, and then select **Review and upload** and **Upload**.
- # [Operator Portal](#tab/no-number-management-portal)
+ # [Operator Portal](#tab/no-flow-through)
+ 1. Ask your onboarding team for the name of the Calling Profile that you must use for these test numbers. The name typically has the suffix `CommsGw`. We created this Calling Profile for you during the Azure Communications Gateway deployment process.
1. Open the Operator Portal. 1. Select **Customer Consents**. 1. Select your test tenant.
Network integration includes identifying SIP interoperability requirements and c
You must test typical call flows for your network. We recommend that you follow the example test plan from your onboarding team. Your test plan should include call flow, failover, and connectivity testing. -- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team. Microsoft will make the changes for you.
+- If you decide that you need changes to Azure Communications Gateway, ask your onboarding team to make the changes for you.
- If you need changes to the configuration of devices in your core network, you must make those changes. ## Run a connectivity test and upload proof
Your staff can use a selection of key metrics to monitor Azure Communications Ga
## Verify API integration
-Your onboarding team must provide Microsoft with proof that you have integrated with the Microsoft Teams Operator Connect API for provisioning.
+Your onboarding team must provide Microsoft with proof that you have integrated with the Microsoft Teams Operator Connect APIs for provisioning. Choose the appropriate instructions for your deployment.
+
+# [Provisioning API (preview)](#tab/provisioning-api)
+
+Your onboarding team can obtain proof automatically. You don't need to do anything.
+
+# [Number Management Portal (preview)](#tab/number-management-portal)
+
+You can't use the Number Management Portal after you launch, because the Operator Connect and Teams Phone Mobile programs require full API integration. You can integrate with Azure Communications Gateway's [Provisioning API](provisioning-platform.md) or directly with the Operator Connect API.
-# [Number Management Portal](#tab/number-management-portal)
+If you integrate with the Provisioning API, your onboarding team can obtain proof automatically.
-If you have the Number Management Portal, your onboarding team can obtain proof automatically. You don't need to do anything.
+If you integrate with the Operator Connect API, provide your onboarding team with proof of successful Operator Connect API calls for:
+
+- Partner consent
+- TN Upload to Account
+- Unassign TN
+- Release TN
-# [Without the Number Management Portal](#tab/no-number-management-portal)
+# [Operator Connect APIs](#tab/no-flow-through)
-If you don't have the Number Management Portal, you must provide your onboarding team with proof of successful API calls for:
+Provide your onboarding team with proof of successful API calls for:
- Partner consent - TN Upload to Account
Your service can be launched on specific dates each month. Your onboarding team
- Wait for your launch date. - Learn about [getting support and requesting changes for Azure Communications Gateway](request-changes.md).-- Learn about [using the Number Management Portal to manage enterprises](manage-enterprise-operator-connect.md)
+- Learn about [using the Number Management Portal to manage enterprises](manage-enterprise-operator-connect.md).
- Learn about [monitoring Azure Communications Gateway](monitor-azure-communications-gateway.md).
communications-gateway Provision User Roles https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provision-user-roles.md
Title: Set up user roles for Azure Communications Gateway
-description: Learn how to configure the user roles required to deploy, manage and monitor your Azure Communications Gateway
+description: Learn how to configure the user roles required to deploy, manage, and monitor your Azure Communications Gateway.
Previously updated : 11/27/2023 Last updated : 02/16/2024 # Set up user roles for Azure Communications Gateway This article guides you through how to configure the permissions required for staff in your organization to: -- Deploy Azure Communications Gateway through the portal-- Raise customer support requests (support tickets)-- Monitor Azure Communications Gateway-- Use the Number Management Portal (for provisioning the Operator Connect or Teams Phone Mobile environments)
+- Deploy Azure Communications Gateway through the portal.
+- Raise customer support requests (support tickets).
+- Monitor Azure Communications Gateway.
+- Use the Number Management Portal (preview) for provisioning the Operator Connect or Teams Phone Mobile environments.
For permissions for the Provisioning API, see [Integrate with Azure Communications Gateway's Provisioning API](integrate-with-provisioning-api.md).
Your staff might need different user roles, depending on the tasks they need to
|Task | Minimum required user role or access | |||
-| Deploying Azure Communications Gateway or changing its configuration |**Contributor** access to the resource group|
-| Raising support requests |**Owner**, **Contributor** or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level|
-|Monitoring logs and metrics | **Reader** access to the Azure Communications Gateway resource|
-| Using the Number Management Portal for Operator Connect or Teams Phone Mobile | **Reader** access to the Azure Communications Gateway resource and appropriate roles for the Project Synergy enterprise application: <!-- Must be kept in sync with step below for configuring and with manage-enterprise-operator-connect.md --><br> - To view existing configuration: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Read**<br>- To configure your relationship to an enterprise (a _consent_) and numbers: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Write**|
+| Deploy Azure Communications Gateway or change its configuration. |**Contributor** access to the resource group.|
+| Raise support requests. |**Owner**, **Contributor**, or **Support Request Contributor** access to your subscription or a custom role with `Microsoft.Support/*` access at the subscription level. |
+| Monitor logs and metrics. | **Reader** access to the Azure Communications Gateway resource. |
+| Use the Number Management Portal (preview) | **Reader** access to the Azure Communications Gateway resource and appropriate roles for the AzureCommunicationsGateway enterprise application: <!-- Must be kept in sync with step below for configuring and with manage-enterprise-operator-connect.md --><br>- To view configuration: **ProvisioningAPI.ReadUser**.<br>- To add or make changes to configuration: **ProvisioningAPI.ReadUser** and **ProvisioningAPI.WriteUser**.<br>- To remove configuration: **ProvisioningAPI.ReadUser** and **ProvisioningAPI.DeleteUser**.<br>- To view, add, make changes to, or remove configuration: **ProvisioningAPI.AdminUser**. |
-> [!TIP]
-> To allow staff to manage consents in the Number Management Portal without managing numbers, assign the **NumberManagement.Read**, **TrunkManagement.Read** and **PartnerSettings.Write** roles.
## Configure user roles
You need to use the Azure portal to configure user roles.
- Know who needs access. - Know the appropriate user role or roles to assign them. - Are signed in with a user account with a role that can change role assignments for the subscription, such as **Owner** or **User Access Administrator**.
-1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user account that can change roles for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
+1. If you're managing access to the Number Management Portal, ensure that you're signed in with a user account that can change roles for enterprise applications. For example, you could be a Global Administrator, Cloud Application Administrator, or Application Administrator. For more information, see [Assign users and groups to an application](../active-directory/manage-apps/assign-user-or-group-access-portal.md).
### Assign a user role 1. Follow the steps in [Assign a user role using the Azure portal](../role-based-access-control/role-assignments-portal.md) to assign the permissions you determined in [Understand the user roles required for Azure Communications Gateway](#understand-the-user-roles-required-for-azure-communications-gateway).
-1. If you're managing access to the Number Management Portal, also follow [Assign users and groups to an application](/entra/identity/enterprise-apps/assign-user-or-group-access-portal?pivots=portal) to assign suitable roles for each user in the Project Synergy application.
+1. If you're managing access to the Number Management Portal, also follow [Assign users and groups to an application](/entra/identity/enterprise-apps/assign-user-or-group-access-portal?pivots=portal) to assign suitable roles for each user in the AzureCommunicationsGateway enterprise application.
+ <!-- Must be kept in sync with step 1 and with manage-enterprise-operator-connect.md -->
- * To view existing configuration: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Read**
- * To make changes to consents and numbers: **PartnerSettings.Read**, **TrunkManagement.Read**, and **NumberManagement.Write**
+ - To view configuration: **ProvisioningAPI.ReadUser**.
+ - To add or make changes to configuration: **ProvisioningAPI.ReadUser** and **ProvisioningAPI.WriteUser**.
+ - To remove configuration: **ProvisioningAPI.ReadUser** and **ProvisioningAPI.DeleteUser**.
+ - To view, add, make changes to, or remove configuration: **ProvisioningAPI.AdminUser**.
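
   As a minimal sketch, you could grant the **Reader** assignment on the Azure Communications Gateway resource with the Azure CLI; the user and resource names are placeholders, and the resource type string is an assumption. The ProvisioningAPI roles still need to be assigned on the AzureCommunicationsGateway enterprise application as described in the previous step.

   ```azurecli
   # Placeholder names; the resource type string is an assumption, so confirm it against your deployment.
   GATEWAY_ID=$(az resource list \
     --resource-type "Microsoft.VoiceServices/communicationsGateways" \
     --query "[?name=='<gateway-name>'].id" -o tsv)

   # Grant Reader on the Azure Communications Gateway resource.
   az role assignment create --assignee "<user@example.com>" --role "Reader" --scope "$GATEWAY_ID"
   ```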
## Next steps
communications-gateway Provisioning Platform https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/provisioning-platform.md
Previously updated : 11/17/2023 Last updated : 02/16/2024 #CustomerIntent: As someone learning about Azure Communications Gateway, I want to understand the Provisioning Platform, so that I know whether I need to integrate with it
-# Provisioning API for Azure Communications Gateway
+# Provisioning API (preview) for Azure Communications Gateway
-Azure Communications Gateway's Provisioning API allows you to configure Azure Communications Gateway with the details of your customers and the numbers that you have assigned to them.
+Azure Communications Gateway's Provisioning API (preview) allows you to configure Azure Communications Gateway with the details of your customers and the numbers that you assign to them.
You can use the Provisioning API to:-- Configure numbers for specific configuration services-- Add custom header configuration
+- Associate numbers with backend services.
+- Provision backend services with customer configuration (sometimes called _flow-through provisioning_).
+- Add custom header configuration.
-The following table shows whether these uses of the Provisioning API are required, optional or not supported for each communications service. The following sections in this article provide more detail about each use.
+The following table shows how you can use the Provisioning API for each communications service. The following sections in this article provide more detail about each use case.
-|Communications service | Configuring numbers for specific communications service | Custom header configuration |
-||||
-|Microsoft Teams Direct Routing |Required| Optional |
-|Operator Connect|Optional|Optional|
-|Teams Phone Mobile|Not supported|Not supported|
-|Zoom Phone Cloud Peering |Required | Optional |
+|Communications service | Associating numbers with communications service | Flow-through provisioning of communication service | Custom header configuration |
+|||||
+|Microsoft Teams Direct Routing | Required | Not supported | Optional |
+|Operator Connect | Automatically set up if you use flow-through provisioning or the Number Management Portal | Recommended | Optional |
+|Teams Phone Mobile | Automatically set up if you use flow-through provisioning or the Number Management Portal | Recommended | Optional |
+|Zoom Phone Cloud Peering | Required | Not supported | Optional |
-## Configuring numbers for specific communications services
+The flow-through provisioning for Operator Connect and Teams Phone Mobile interoperates with the Operator Connect APIs. It therefore allows you to meet the requirements for API-based provisioning from the Operator Connect and Teams Phone Mobile programs.
+
+> [!TIP]
+> For Operator Connect and Teams Phone Mobile, you can also get started with the Azure Communications Gateway's Number Management Portal, available in the Azure portal. For more information, see [Manage an enterprise with Azure Communications Gateway's Number Management Portal for Operator Connect and Teams Phone Mobile](manage-enterprise-operator-connect.md).
+
+## Associating numbers with specific communications services
For Microsoft Teams Direct Routing and Zoom Phone Cloud Peering, you must provision Azure Communications Gateway with the numbers that you want to assign to each of your customers and enable each number for the chosen communications service. This information allows Azure Communications Gateway to: - Route calls to the correct communications service.-- Update SIP messages for Microsoft Teams Direct Routing with the information that Microsoft Phone System requires to match calls to tenants. For more information about this process, see [Identifying the customer tenant for Microsoft Phone System](interoperability-teams-direct-routing.md#identifying-the-customer-tenant-for-microsoft-phone-system).
+- Update SIP messages for Microsoft Teams Direct Routing with the information that Microsoft Phone System requires to match calls to tenants. For more information, see [Identifying the customer tenant for Microsoft Phone System](interoperability-teams-direct-routing.md#identifying-the-customer-tenant-for-microsoft-phone-system).
+
+For Operator Connect or Teams Phone Mobile:
+- If you use the Provisioning API for flow-through provisioning or you use the Number Management Portal, resources on the Provisioning API associate the customer numbers with the relevant service.
+- Otherwise, Azure Communications Gateway defaults to Operator Connect for fixed-line calls and Teams Phone Mobile for mobile calls, and doesn't create resources on the Provisioning API.
+
+## Flow-through provisioning of communications services
+
+Flow-through provisioning is when you use Azure Communications Gateway to provision a communications service.
+
+For Operator Connect and Teams Phone Mobile, you can use the Provisioning API to provision the Operator Connect and Teams Phone Mobile environment with subscribers (your customers and the numbers you assign to them). This integration is equivalent to separate integration with the Operator Management and Telephone Number Management APIs provided by the Operator Connect environment. It meets the Operator Connect and Teams Phone Mobile requirement to use APIs to manage your customers and numbers after you launch your service.
-Enabling numbers for Operator Connect is optional; if you don't select a communications service for a number, Azure Communications Gateway defaults to Operator Connect for fixed-line calls.
+Azure Communications Gateway doesn't support flow-through provisioning for Microsoft Teams Direct Routing or Zoom Phone Cloud Peering.
## Custom headers
Azure Communications Gateway can add a custom header to messages sent to your co
To set up custom headers: -- Choose the name of the custom header when you [deploy Azure Communications Gateway](deploy.md). This header name is used for all custom headers.
+- Choose the name of the custom header when you [deploy Azure Communications Gateway](deploy.md), or afterwards by updating the Provisioning Platform configuration in the Azure portal. This header name is used for all custom headers.
- Use the Provisioning API to provision Azure Communications Gateway with numbers and the contents of the custom header for each number. Azure Communications Gateway then uses this information to add custom headers to a call as follows:
Azure Communications Gateway then uses this information to add custom headers to
- For calls from your network, the called number's configuration determines the header contents. - For calls to your network, the calling number's configuration determines the header contents.
-Azure Communications Gateway doesn't add a header if the number hasn't been provisioned, or the configuration for the number doesn't include contents for a custom header.
+Azure Communications Gateway doesn't add a header if the number isn't provisioned on Azure Communications Gateway, or the configuration for the number doesn't include contents for a custom header.
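
As a rough, non-authoritative illustration, provisioning a number together with custom header contents over the Provisioning API might look like the following `az rest` call. The resource path and the property carrying the header contents are assumptions; take the real schema from the [API Reference](/rest/api/voiceservices).

```azurecli
# Illustrative only: the path and the property name are assumptions, not the documented schema.
az rest --method put \
  --url "https://<provisioning-api-endpoint>/<number-resource-path>?api-version=<api-version>" \
  --body '{"customHeaderContents": "<value-to-send-in-the-custom-header>"}'
```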
The following diagram shows an Azure Communications Gateway deployment configured to add a `X-MS-Operator-Content` header to messages sent to the operator network from Operator Connect.
The following diagram shows an Azure Communications Gateway deployment configure
## Next steps -- [Learn about the technical requirements for integrating with the Provisioning API](integrate-with-provisioning-api.md)-- Browse the [API Reference for the Provisioning API](/rest/api/voiceservices)
+- [Learn about the technical requirements for integrating with the Provisioning API](integrate-with-provisioning-api.md).
+- Browse the [API Reference for the Provisioning API](/rest/api/voiceservices).
communications-gateway Role In Network https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/role-in-network.md
Previously updated : 11/06/2023 Last updated : 02/16/2024
Azure Communications Gateway also offers metrics for monitoring your deployment.
We expect your network to have two geographically redundant sites. You must provide networking connections between each site and:
-* The other site in your deployment, as cross-connects.
-* The Azure Regions in which you deploy Azure Communications Gateway.
+- The other site in your deployment, as cross-connects.
+- The Azure Regions in which you deploy Azure Communications Gateway.
Connectivity between your networks and Azure Communications Gateway must meet any relevant network connectivity specifications.
Azure Communications Gateway includes SIP trunks to your own network and can int
[!INCLUDE [communications-gateway-multitenant](includes/communications-gateway-multitenant.md)] To allow Azure Communications Gateway to identify the correct service for a call, you must configure the details of each number and its service(s) on Azure Communications Gateway. This is:-- Required for Microsoft Teams Direct Routing.+
+- Required for Microsoft Teams Direct Routing and Zoom Phone Cloud Peering.
- Not required for Operator Connect (because Azure Communications Gateway defaults to Operator Connect for fixed line calls) or Teams Phone Mobile.
-You can also configure Azure Communications Gateway to add a custom header to messages associated with a number. You can use this to indicate the service and/or the enterprise associated with a call. Custom headers are supported for all services except Teams Phone Mobile.
+You can also configure Azure Communications Gateway to add a custom header to messages associated with a number. You can use this feature to indicate the service and/or the enterprise associated with a call.
+
+For Microsoft Teams Direct Routing and Zoom Phone Cloud Peering, configuring numbers with services and custom headers requires Azure Communications Gateway's Provisioning API (preview). For more information, see [Provisioning API (preview) for Azure Communications Gateway](provisioning-platform.md). For Operator Connect or Teams Phone Mobile, you can use the Provisioning API or the [Number Management Portal (preview)](manage-enterprise-operator-connect.md).
-Configuring numbers with services and custom headers requires Azure Communications Gateway's Provisioning API. For more information, see [Provisioning API for Azure Communications Gateway](provisioning-platform.md).
+> [!NOTE]
+> Although integrating with the Provisioning API is optional for Operator Connect or Teams Phone Mobile, we strongly recommend it. Integrating with the Provisioning API enables flow-through API-based provisioning of your customers in the Operator Connect environment, in addition to provisioning on Azure Communications Gateway (for custom header configuration). This flow-through provisioning interoperates with the Operator Connect APIs, and allows you to meet the requirements for API-based provisioning from the Operator Connect and Teams Phone Mobile programs. For more information, see [Provisioning and Operator Connect APIs](interoperability-operator-connect.md#provisioning-and-operator-connect-apis).
You can arrange more interworking function as part of your initial network design or at any time by raising a support request for Azure Communications Gateway. For example, you might need extra interworking configuration for:
communications-gateway Whats New https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/communications-gateway/whats-new.md
Previously updated : 02/01/2024 Last updated : 02/16/2024 # What's new in Azure Communications Gateway?
This article covers new features and improvements for Azure Communications Gatew
## February 2024
+### Flow-through provisioning for Operator Connect and Teams Phone Mobile
+
+From February 2024, Azure Communications Gateway supports flow-through provisioning for Operator Connect and Teams Phone Mobile customers and numbers with Azure Communications Gateway's [Provisioning API (preview)](provisioning-platform.md). Flow-through provisioning on Azure Communications Gateway allows you to provision the Operator Connect environments and Azure Communications Gateway (for custom header configuration) using the same method. It meets the Operator Connect and Teams Phone Mobile requirement to use APIs to manage your customers and numbers after you launch your service.
+
+Provisioning Azure Communications Gateway and the Operator Connect and Teams Phone Mobile environment includes:
+
+- Managing the status of your enterprise customers in the Operator Connect and Teams Phone Mobile environment.
+- Provisioning numbers in the Operator Connect and Teams Phone Mobile environment.
+- Configuring Azure Communications Gateway to add custom headers.
+
+Before you launch your Operator Connect or Teams Phone Mobile service, you can also use the [Number Management Portal (preview)](manage-enterprise-operator-connect.md).
+
+### Custom headers for Teams Phone Mobile calls
+
+From February 2024, you can use the Provisioning API (preview) to set a custom header on Teams Phone Mobile calls. This enhancement extends the function introduced in [November 2023](#custom-header-on-messages-to-operator-networks) for configuring a custom header for Operator Connect, Microsoft Teams Direct Routing, and Zoom Phone Cloud Peering.
+ ### Connectivity metrics
-From February 2024, you can monitor the health of the connection between your network and Azure Communications Gateway with new metrics for responses to SIP INVITE and OPTIONS exchanges. You can view statistics for all INVITE and OPTIONS requests, or narrow your view down to individual regions, request types or response codes. For more information on the available metrics, see [Connectivity metrics](monitoring-azure-communications-gateway-data-reference.md#connectivity-metrics). For an overview of working with metrics, see [Analyzing, filtering and splitting metrics in Azure Monitor](monitor-azure-communications-gateway.md#analyzing-filtering-and-splitting-metrics-in-azure-monitor).
+From February 2024, you can monitor the health of the connection between your network and Azure Communications Gateway with new metrics for responses to SIP INVITE and OPTIONS exchanges. You can view statistics for all INVITE and OPTIONS requests, or narrow your view down to individual regions, request types, or response codes. For more information on the available metrics, see [Connectivity metrics](monitoring-azure-communications-gateway-data-reference.md#connectivity-metrics). For an overview of working with metrics, see [Analyzing, filtering and splitting metrics in Azure Monitor](monitor-azure-communications-gateway.md#analyzing-filtering-and-splitting-metrics-in-azure-monitor).
## November 2023
You must choose the name of the custom header when you [deploy Azure Communicati
You must then use the [Provisioning API](provisioning-platform.md) to configure each number with the contents of the custom header.
-Custom header configuration is available for all communications services except Teams Phone Mobile.
+In November 2023, custom header configuration is available for all communications services except Teams Phone Mobile.
## October 2023
From September 2023, you can use ExpressRoute Microsoft Peering to connect opera
From May 2023, you can deploy Mobile Control Point (MCP) as part of Azure Communications Gateway. MCP is an IMS Application Server that simplifies interworking with Microsoft Phone System for mobile calls. It ensures calls are only routed to the Microsoft Phone System when a user is eligible for Teams Phone Mobile services. This process minimizes the changes you need in your mobile network to route calls into Microsoft Teams. For more information, see [Mobile Control Point in Azure Communications Gateway for Teams Phone Mobile](mobile-control-point.md).
-You can add MCP when you deploy Azure Communications Gateway or by requesting changes to an existing deployment. For more information, see [Deploy Azure Communications Gateway](deploy.md) or [Get support or request changes to your Azure Communications Gateway](request-changes.md)
+You can add MCP when you deploy Azure Communications Gateway, or afterwards by requesting changes to an existing deployment. For more information, see [Deploy Azure Communications Gateway](deploy.md) or [Get support or request changes to your Azure Communications Gateway](request-changes.md).
### Authentication with managed identities for Operator Connect APIs
Azure Communications Gateway contains services that need to access the Operator
From May 2023, Azure Communications Gateway automatically provides a managed identity for this authentication. You must set up specific permissions for this managed identity and then add the Application ID of this managed identity to your Operator Connect or Teams Phone Mobile environment. For more information, see [Deploy Azure Communications Gateway](deploy.md).
-This new authentication model replaces an earlier model that required you to create an App registration and manage secrets for it. With the new model, you no longer need to create, manage or rotate secrets.
+This new authentication model replaces an earlier model that required you to create an App registration and manage secrets for it. With the new model, you no longer need to create, manage, or rotate secrets.
## Next steps
defender-for-cloud Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/defender-for-cloud/release-notes.md
If you're looking for items older than six months, you can find them in the [Arc
|Date | Update | |-|-|
+| February 18| [Open Container Initiative (OCI) image format specification support](#open-container-initiative-oci-image-format-specification-support) |
| February 13 | [AWS container vulnerability assessment powered by Trivy retired](#aws-container-vulnerability-assessment-powered-by-trivy-retired) | | February 8 | [Recommendations released for preview: four recommendations for Azure Stack HCI resource type](#recommendations-released-for-preview-four-recommendations-for-azure-stack-hci-resource-type) |
+### Open Container Initiative (OCI) image format specification support
+
+February 18, 2024
+
+The [Open Container Initiative (OCI)](https://github.com/opencontainers/image-spec/blob/main/spec.md) image format specification is now supported by vulnerability assessment, powered by Microsoft Defender Vulnerability Management for AWS, Azure & GCP clouds.
++ ### AWS container vulnerability assessment powered by Trivy retired February 13, 2024
devtest-labs Image Factory Create https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/devtest-labs/image-factory-create.md
The solution enables the speed of creating virtual machines from custom images w
<br/> ## High-level view of the solution
-The solution enables the speed of creating virtual machines from custom images while eliminating extra ongoing maintenance costs. With this solution, you can automatically create custom images and distribute them to other DevTest Labs. You use Azure DevOps (formerly Visual Studio Team Services) as the orchestration engine for automating the all the operations in the DevTest Labs.
+The solution enables the speed of creating virtual machines from custom images while eliminating extra ongoing maintenance costs. With this solution, you can automatically create custom images and distribute them to other DevTest Labs. You use Azure DevOps (formerly Visual Studio Team Services) as the orchestration engine for automating all the operations in the DevTest Labs.
![High-level view of the solution.](./media/create-image-factory/high-level-view-of-solution.png)
hdinsight-aks Secure Traffic By Firewall https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/hdinsight-aks/secure-traffic-by-firewall.md
description: Learn how to secure traffic using firewall on HDInsight on AKS usin
Previously updated : 08/3/2023 Last updated : 02/19/2024 # Use firewall to restrict outbound traffic using Azure CLI
FWROUTE_NAME_INTERNET="${PREFIX}-fwinternet"
``` > [!Important] > 1. If you add NSG in subnet `HDIAKS_SUBNET_NAME`, you need to add certain outbound and inbound rules manually. Follow [use NSG to restrict the traffic](./secure-traffic-by-nsg.md).
- > 1. Don't associate subnet `HDIAKS_SUBNET_NAME` with a route table because HDInsight on AKS creates cluster pool with default outbound type and can't create the cluster pool in a subnet already associated with a route table.
+ > 1. By default, no route table is associated with the subnet. If necessary, create a route table and associate it with the cluster pool.
## Create HDInsight on AKS cluster pool using Azure portal
FWROUTE_NAME_INTERNET="${PREFIX}-fwinternet"
:::image type="content" source="./media/secure-traffic-by-firewall/security-tab.png" alt-text="Diagram showing the security tab." border="true" lightbox="./media/secure-traffic-by-firewall/security-tab.png":::
- 1. When HDInsight on AKS cluster pool is created, you can find a route table in subnet `HDIAKS_SUBNET_NAME`.
+ 1. Create a route table.
- :::image type="content" source="./media/secure-traffic-by-firewall/route-table.png" alt-text="Diagram showing the route table." border="true" lightbox="./media/secure-traffic-by-firewall/route-table.png":::
+ Create a route table and associate it with the cluster pool. For more information, see [create a route table](../virtual-network/manage-route-table.md#create-a-route-table).
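
    As an example, a minimal Azure CLI sketch (reusing the variables defined earlier in this article, with `$ROUTE_TABLE_NAME` being any name you choose) could look like this:

    ```azurecli
    ROUTE_TABLE_NAME="${PREFIX}-rt"   # assumption: any route table name you choose

    # Create the route table, then associate it with the cluster pool subnet.
    az network route-table create --resource-group $RG --name $ROUTE_TABLE_NAME

    az network vnet subnet update \
      --resource-group $RG \
      --vnet-name $VNET_NAME \
      --name $HDIAKS_SUBNET_NAME \
      --route-table $ROUTE_TABLE_NAME
    ```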
### Get AKS cluster details created behind the cluster pool
FWROUTE_NAME_INTERNET="${PREFIX}-fwinternet"
### Create route in the route table to redirect the traffic to firewall
-1. Get the route table associated with HDInsight on AKS cluster pool.
-
- ```azurecli
- ROUTE_TABLE_ID=$(az network vnet subnet show --name $HDIAKS_SUBNET_NAME --vnet-name $VNET_NAME --resource-group $RG --query routeTable.id -o tsv)
-
- ROUTE_TABLE_NAME=$(az network route-table show --ids $ROUTE_TABLE_ID --query 'name' -o tsv)
- ```
-1. Create the route.
- ```azurecli
- az network route-table route create -g $AKS_MANAGED_RG --name $FWROUTE_NAME --route-table-name $ROUTE_TABLE_NAME --address-prefix 0.0.0.0/0 --next-hop-type VirtualAppliance --next-hop-ip-address $FWPRIVATE_IP
-
- az network route-table route create -g $AKS_MANAGED_RG --name $FWROUTE_NAME_INTERNET --route-table-name $ROUTE_TABLE_NAME --address-prefix $FWPUBLIC_IP/32 --next-hop-type Internet
- ```
+Create a route table and associate it with the HDInsight on AKS cluster pool. For more information, see [create route table commands](../virtual-network/manage-route-table.md#create-route-tablecommands).
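
For example, assuming the route table you created earlier is named `$ROUTE_TABLE_NAME` in resource group `$RG`, and reusing the firewall variables from the earlier steps, the routes could look like this:

```azurecli
# Default route: send all outbound traffic to the firewall's private IP.
az network route-table route create \
  --resource-group $RG \
  --route-table-name $ROUTE_TABLE_NAME \
  --name $FWROUTE_NAME \
  --address-prefix 0.0.0.0/0 \
  --next-hop-type VirtualAppliance \
  --next-hop-ip-address $FWPRIVATE_IP

# Keep traffic to the firewall's public IP going directly to the internet.
az network route-table route create \
  --resource-group $RG \
  --route-table-name $ROUTE_TABLE_NAME \
  --name $FWROUTE_NAME_INTERNET \
  --address-prefix $FWPUBLIC_IP/32 \
  --next-hop-type Internet
```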
+ ## Create cluster
-In the previous steps, we have routed the traffic to firewall.
+In the previous steps, we routed network traffic to the firewall.
The following steps provide details about the specific network and application rules needed by each cluster type. You can refer to the cluster creation pages for creating [Apache Flink](./flink/flink-create-cluster-portal.md), [Trino](./trino/trino-create-cluster.md), and [Apache Spark](./spark/hdinsight-on-aks-spark-overview.md) clusters based on your need.
The following steps provide details about the specific network and application r
az network route-table route create -g $AKS_MANAGED_RG --name clientip --route-table-name $ROUTE_TABLE_NAME --address-prefix {Client_IPs} --next-hop-type Internet ```
- If you can't reach the cluster and have configured NSG, follow [use NSG to restrict the traffic](./secure-traffic-by-nsg.md) to allow the traffic.
+ If you can't reach the cluster after configuring an NSG, follow [use NSG to restrict the traffic](./secure-traffic-by-nsg.md) to allow the traffic.
> [!TIP] > If you want to allow more traffic, you can configure it over the firewall.
machine-learning Concept Connections https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/prompt-flow/concept-connections.md
Connections in prompt flow play a crucial role in establishing connections to re
In the Azure Machine Learning workspace, connections can be configured to be shared across the entire workspace or limited to the creator. Secrets associated with connections are securely persisted in the corresponding Azure Key Vault, adhering to robust security and compliance standards.
-Prompt flow provides various prebuilt connections, including Azure Open AI, Open AI, and Azure Content Safety. These prebuilt connections enable seamless integration with these resources within the built-in tools. Additionally, users have the flexibility to create custom connection types using key-value pairs, empowering them to tailor the connections to their specific requirements, particularly in Python tools.
+Prompt flow provides various prebuilt connections, including Azure OpenAI, OpenAI, and Azure Content Safety. These prebuilt connections enable seamless integration with these resources within the built-in tools. Additionally, users have the flexibility to create custom connection types using key-value pairs, empowering them to tailor the connections to their specific requirements, particularly in Python tools.
| Connection type | Built-in tools | | | - |
-| [Azure Open AI](https://azure.microsoft.com/products/cognitive-services/openai-service) | LLM or Python |
-| [Open AI](https://openai.com/) | LLM or Python |
+| [Azure OpenAI](https://azure.microsoft.com/products/cognitive-services/openai-service) | LLM or Python |
+| [OpenAI](https://openai.com/) | LLM or Python |
| [Azure Content Safety](https://aka.ms/acs-doc) | Content Safety (Text) or Python | | [Azure AI Search](https://azure.microsoft.com/products/search) (formerly Cognitive Search) | Vector DB Lookup or Python | | [Serp](https://serpapi.com/) | Serp API or Python |
machine-learning Azure Machine Learning Release Notes https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/machine-learning/v1/azure-machine-learning-release-notes.md
Azure Machine Learning is now a resource provider for Event Grid, you can config
+ **Preview features** + We are releasing preview support for disk encryption of your local SSD in Azure Machine Learning Compute. Raise a technical support ticket to get your subscription allow listed to use this feature. + Public Preview of Azure Machine Learning Batch Inference. Azure Machine Learning Batch Inference targets large inference jobs that are not time-sensitive. Batch Inference provides cost-effective inference compute scaling, with unparalleled throughput for asynchronous applications. It is optimized for high-throughput, fire-and-forget inference over large collections of data.
- + [**azureml-contrib-dataset**](/python/api/azureml-contrib-dataset)
+ + **azureml-contrib-dataset**
+ Enabled functionalities for labeled dataset ```Python import azureml.core
mysql February 2024 https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/mysql/flexible-server/release-notes/february-2024.md
+
+ Title: Release notes for Azure Database for MySQL Flexible Server - February 2024
+description: Learn about the release notes for Azure Database for MySQL Flexible Server February 2024.
++ Last updated : 02/09/2024+++++
+# Azure Database for MySQL Flexible Server February 2024 Maintenance
+
+We're pleased to announce the February 2024 maintenance for Azure Database for MySQL Flexible Server. This maintenance mainly focuses on known issue fixes, underlying OS upgrades, and vulnerability patching.
+
+## Engine version changes
+There will be no engine version changes in this maintenance update.
+
+## Features
+There will be no new features in this maintenance update.
+
+## Improvement
+There will be no new improvements in this maintenance update.
+
+## Known issue fixes
+- Fix an HA standby replication deadlock issue caused by `slave_preserve_commit_order`.
+- Fix an issue where replica promotion could become stuck when the source server is unavailable or the source region is down. This fix improves the replica promotion experience to better support disaster recovery.
+- Fix the default values of `character_set_server` and `collation_server`.
+- Allow customers to start an InnoDB buffer pool dump.
orbital Register Spacecraft https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/orbital/register-spacecraft.md
Submit a spacecraft authorization request in order to schedule [contacts](concep
> A [Basic Support Plan](https://azure.microsoft.com/support/plans/) or higher is required to submit a spacecraft authorization request. > [!NOTE]
- > **Private spacecraft**: prior to submitting an authorization request, you must have an active spacecraft license for your satellite and work with Mircosot to add your satellite to our ground station licenses. Microsoft can provide technical information required to complete the federal regulator and ITU processes as needed. Learn more about [initiating ground station licensing](initiate-licensing.md).
+ > **Private spacecraft**: prior to submitting an authorization request, you must have an active spacecraft license for your satellite and work with Microsoft to add your satellite to our ground station licenses. Microsoft can provide technical information required to complete the federal regulator and ITU processes as needed. Learn more about [initiating ground station licensing](initiate-licensing.md).
> > **Public spacecraft**: licensing is not required for authorization. The Azure Orbital Ground Station service supports several public satellites including Aqua, Suomi NPP, JPSS-1/NOAA-20, and Terra.
postgresql Concepts Business Continuity https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/flexible-server/concepts-business-continuity.md
Below are some unplanned failure scenarios and the recovery process.
| - || - | | **Database server failure** | If the database server is down, Azure will attempt to restart the database server. If that fails, the database server will be restarted on another physical node. <br /> <br /> The recovery time (RTO) is dependent on various factors including the activity at the time of fault, such as large transaction, and the volume of recovery to be performed during the database server startup process. <br /> <br /> Applications using the PostgreSQL databases need to be built in a way that they detect and retry dropped connections and failed transactions. | If the database server failure is detected, the server is failed over to the standby server, thus reducing downtime. For more information, see [HA concepts page](./concepts-high-availability.md). RTO is expected to be 60-120s, with zero data loss. | | **Storage failure** | Applications don't see any impact for any storage-related issues such as a disk failure or a physical block corruption. As the data is stored in three copies, the copy of the data is served by the surviving storage. The corrupted data block is automatically repaired and a new copy of the data is automatically created. | For any rare and non-recoverable errors such as the entire storage is inaccessible, the Azure Database for PostgreSQL flexible server instance is failed over to the standby replica to reduce the downtime. For more information, see [HA concepts page](./concepts-high-availability.md). |
-| ** Logical/user errors** | To recover from user errors, such as accidentally dropped tables or incorrectly updated data, you have to perform a [point-in-time recovery](../concepts-backup.md) (PITR). While performing the restore operation, you specify the custom restore point, which is the time right before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. | These user errors aren't protected with high availability as all changes are replicated to the standby replica synchronously. You have to perform point-in-time restore to recover from such errors. |
-| ** Availability zone failure** | To recover from a zone-level failure, you can perform point-in-time restore using the backup and choosing a custom restore point with the latest time to restore the latest data. A new Azure Database for PostgreSQL flexible server instance is deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover. | Azure Database for PostgreSQL flexible server is automatically failed over to the standby server within 60-120s with zero data loss. For more information, see [HA concepts page](./concepts-high-availability.md). |
-| ** Region failure** | If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server will be provisioned and recovered to the last available data that was copied to this region. <br /> <br /> You can also use cross region read replicas. In the event of region failure you can perform disaster recovery operation by promoting your read replica to be a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure when the RPO can be close to the replication lag at the time of failure. | Same process. |
+| **Logical/user errors** | To recover from user errors, such as accidentally dropped tables or incorrectly updated data, you have to perform a [point-in-time recovery](../concepts-backup.md) (PITR). While performing the restore operation, you specify the custom restore point, which is the time right before the error occurred.<br> <br> If you want to restore only a subset of databases or specific tables rather than all databases in the database server, you can restore the database server in a new instance, export the table(s) via [pg_dump](https://www.postgresql.org/docs/current/app-pgdump.html), and then use [pg_restore](https://www.postgresql.org/docs/current/app-pgrestore.html) to restore those tables into your database. | These user errors aren't protected with high availability as all changes are replicated to the standby replica synchronously. You have to perform point-in-time restore to recover from such errors. |
+| **Availability zone failure** | To recover from a zone-level failure, you can perform point-in-time restore using the backup and choosing a custom restore point with the latest time to restore the latest data. A new Azure Database for PostgreSQL flexible server instance is deployed in another non-impacted zone. The time taken to restore depends on the previous backup and the volume of transaction logs to recover. | Azure Database for PostgreSQL flexible server is automatically failed over to the standby server within 60-120s with zero data loss. For more information, see [HA concepts page](./concepts-high-availability.md). |
+| **Region failure** | If your server is configured with geo-redundant backup, you can perform geo-restore in the paired region. A new server will be provisioned and recovered to the last available data that was copied to this region. <br /> <br /> You can also use cross region read replicas. In the event of region failure you can perform disaster recovery operation by promoting your read replica to be a standalone read-writeable server. RPO is expected to be up to 5 minutes (data loss possible) except in the case of severe regional failure when the RPO can be close to the replication lag at the time of failure. | Same process. |
### Configure your database after recovery from regional failure
postgresql How To Migrate Single To Flexible Cli https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/postgresql/migrate/how-to-migrate-single-to-flexible-cli.md
The structure of the JSON is:
{ "sourceServerPassword": "<password>", "targetServerPassword": "<password>"
- }
+ },
"sourceServerUserName": "<username>@<servername>", "targetServerUserName": "<username>" },
service-health Resource Health Alert Monitor Guide https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/service-health/resource-health-alert-monitor-guide.md
For information on how to configure resource health notification alerts by using
1. In the Azure [portal](https://portal.azure.com/), select **Service Health**. ![Service Health Selection](./media/resource-health-alert-monitor-guide/service-health-selection.png)
-1. In the **Resource Health** section, select **Service Health**.
+1. In the **Resource Health** section, select **Resource Health**.
1. Select **Add resource health alert**.
-1. The ** an alert rule wizard** opens to the **Conditions** tab, with the **Scope** tab already populated. Follow the steps for Resource Health alerts, starting from the **Conditions** tab, in the [new alert rule wizard](../azure-monitor/alerts/alerts-create-activity-log-alert-rule.md).
+1. The **Create an alert rule** wizard opens to the **Conditions** tab, with the **Scope** tab already populated. Follow the steps for Resource Health alerts, starting from the **Conditions** tab, in the [new alert rule wizard](../azure-monitor/alerts/alerts-create-activity-log-alert-rule.md).
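
If you prefer to script the alert instead of using the wizard, a minimal sketch with the Azure CLI could look like the following; the names, scope, and action group are placeholders, and you might need extra condition fields depending on the events you want to catch.

```azurecli
# Resource Health events surface in the activity log under the ResourceHealth category.
az monitor activity-log alert create \
  --name "<alert-rule-name>" \
  --resource-group "<resource-group>" \
  --scope "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/<resource-provider>/<resource-type>/<resource-name>" \
  --condition "category=ResourceHealth" \
  --action-group "<action-group-resource-id>"
```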
## Next steps
site-recovery Avs Tutorial Dr Drill Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-dr-drill-azure.md
Previously updated : 12/04/2023 Last updated : 02/19/2024
When you run a test failover, the following actions happen:
2. Failover processes the data, so that an Azure VM can be created. If you select the latest recovery point, a recovery point is created from the data. 3. An Azure VM is created from the data processed in the previous step.
-Run the test failover as follows:
+**Run the test failover as follows:**
1. In **Settings** > **Replicated Items**, select the VM, and then select **+Test Failover**. 2. Select the **Latest processed** recovery point for this tutorial. This step fails over the VM to the latest available point in time. The time stamp is shown.
If you want to connect to Azure VMs by using Remote Desktop Protocol (RDP) or Se
## Next step
-> [!div class="nextstepaction"]
-> [Learn more about running a failover](avs-tutorial-failover.md)
+- [Learn more about running a failover](avs-tutorial-failover.md).
site-recovery Avs Tutorial Failover https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-failover.md
Previously updated : 12/04/2023 Last updated : 02/19/2024
If you encounter any connectivity problems after failover, follow the [troublesh
After failover, reprotect the Azure VMs to the Azure VMware Solution private cloud. Then, after the VMs are reprotected and replicating to the Azure VMware Solution private cloud, fail back from Azure when you're ready.
-> [!div class="nextstepaction"]
-> [Reprotect Azure VMs](avs-tutorial-reprotect.md)
-> [!div class="nextstepaction"]
-> [Fail back from Azure](avs-tutorial-failback.md)
+- Learn how to [reprotect Azure VMs](avs-tutorial-reprotect.md).
+- Learn how to [fail back from Azure](avs-tutorial-failback.md).
+
site-recovery Avs Tutorial Prepare Avs https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-avs.md
Previously updated : 12/04/2023 Last updated : 02/19/2024
Make sure that the VMware vCenter server and VMs comply with requirements:
* Verify that the Azure VMware Solution VMs that you'll replicate to Azure comply with [Azure VM requirements](vmware-physical-azure-support-matrix.md#azure-vm-requirements).
* For Linux VMs, ensure that no two devices or mount points have the same names. These names must be unique and aren't case-sensitive. For example, you can't name two devices for the same VM as *device1* and *Device1*.
+
## Prepare to connect to Azure VMs after failover
After failover, you might want to connect to the Azure VMs from your Azure VMware Solution network.
+
### Connect to a Windows VM by using RDP
Before failover, enable Remote Desktop Protocol (RDP) on the Azure VMware Solution VM:
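The individual configuration steps are trimmed from this digest. One common way to enable RDP from an elevated PowerShell session inside the guest OS is sketched below; the registry value and firewall rule group are standard Windows settings and aren't taken from the article itself:

```powershell
# Allow inbound RDP connections on the guest OS (run in an elevated session).
Set-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Control\Terminal Server' `
    -Name 'fDenyTSConnections' -Value 0

# Open the built-in Remote Desktop rule group in Windows Firewall.
Enable-NetFirewallRule -DisplayGroup 'Remote Desktop'
```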
There should be no Windows updates pending on the VM when you trigger a failover
After failover, check **Boot diagnostics** to view a screenshot of the VM. If you can't connect, check that the VM is running and review [troubleshooting tips](https://social.technet.microsoft.com/wiki/contents/articles/31666.troubleshooting-remote-desktop-connection-after-failover-using-asr.aspx).
+
### Connect to Linux VMs by using SSH
On the Azure VMware Solution VM before failover:
After failover, allow incoming connections to the SSH port for the network secur
You can check **Boot diagnostics** to view a screenshot of the VM.
+
## Failback requirements
If you plan to fail back to your Azure VMware Solution cloud, there are several [prerequisites for failback](avs-tutorial-reprotect.md#before-you-begin). You can prepare these now, but you don't need to. You can prepare after you fail over to Azure.
-## Next steps
-> [!div class="nextstepaction"]
-> [Set up disaster recovery](avs-tutorial-replication.md)
+## Next steps
-If you're replicating multiple VMs:
+- Learn how to [set up disaster recovery](avs-tutorial-replication.md).
+- If you're replicating multiple VMs, perform [capacity planning](site-recovery-deployment-planner.md).
-> [!div class="nextstepaction"]
-> [Perform capacity planning](site-recovery-deployment-planner.md)
site-recovery Avs Tutorial Prepare Azure https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/site-recovery/avs-tutorial-prepare-azure.md
Previously updated : 12/04/2023 Last updated : 02/19/2024
The virtual network takes a few seconds to create. After it's created, it appear
Learn more about:
-> [!div class="nextstepaction"]
-> [Preparing your infrastructure](avs-tutorial-prepare-avs.md)
-
-> [!div class="nextstepaction"]
-> [Azure networks](../virtual-network/virtual-networks-overview.md)
-
-> [!div class="nextstepaction"]
-> [Managed disks](../virtual-machines/managed-disks-overview.md)
+- [Preparing your infrastructure](avs-tutorial-prepare-avs.md)
+- [Azure networks](../virtual-network/virtual-networks-overview.md)
+- [Managed disks](../virtual-machines/managed-disks-overview.md)
virtual-machines Disks Types https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/disks-types.md
Title: Select a disk type for Azure IaaS VMs - managed disks
description: Learn about the available Azure disk types for virtual machines, including ultra disks, Premium SSDs v2, Premium SSDs, standard SSDs, and Standard HDDs. Previously updated : 01/25/2024 Last updated : 02/07/2024
virtual-machines Network Watcher Windows https://github.com/MicrosoftDocs/azure-docs/commits/main/articles/virtual-machines/extensions/network-watcher-windows.md
Previously updated : 06/09/2023 Last updated : 02/19/2024 # Network Watcher Agent virtual machine extension for Windows
This article details the supported platforms and deployment options for the Netw
### Operating system
-The Network Watcher Agent extension for Windows can be configured for Windows Server 2008 R2, 2012, 2012 R2, 2016, 2019 and 2022 releases. Nano Server isn't supported at this time.
+The Network Watcher Agent extension for Windows can be configured for Windows Server 2012, 2012 R2, 2016, 2019 and 2022 releases. Currently, Nano Server isn't supported.
### Internet connectivity
-Some of the Network Watcher Agent functionality requires that the virtual machine is connected to the Internet. Without the ability to establish outgoing connections, the Network Watcher Agent won't be able to upload packet captures to your storage account. For more details, please see the [Network Watcher documentation](../../network-watcher/index.yml).
+Some of the Network Watcher Agent functionality requires that the virtual machine is connected to the internet. Without the ability to establish outgoing connections, the Network Watcher Agent can't upload packet captures to your storage account. For more information, see the [Network Watcher documentation](../../network-watcher/index.yml).
## Extension schema
Set-AzVMExtension `
-TypeHandlerVersion "1.4" ```
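For context, the deployment command excerpted above looks roughly like the following sketch when written out in full. The resource names are placeholders, and the parameter list is reconstructed from the fragment plus the publisher and type implied by the log path shown later in this entry, so treat it as an assumption rather than the article's verbatim example:

```powershell
# Deploy the Network Watcher Agent extension to an existing Windows VM.
# Resource names are placeholders; publisher and type are inferred from the
# Microsoft.Azure.NetworkWatcher.NetworkWatcherAgentWindows log path below.
Set-AzVMExtension `
    -ResourceGroupName "myResourceGroup" `
    -Location "westus2" `
    -VMName "myVM" `
    -Name "networkWatcherAgent" `
    -Publisher "Microsoft.Azure.NetworkWatcher" `
    -ExtensionType "NetworkWatcherAgentWindows" `
    -TypeHandlerVersion "1.4"
```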
-## Troubleshooting and support
-
-### Troubleshooting
+## Troubleshooting
You can retrieve data about the state of extension deployments from the Azure portal and PowerShell. To see the deployment state of extensions for a given VM, run the following command using the Azure PowerShell module:
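The command itself isn't included in this digest; a minimal sketch with placeholder resource and extension names would be:

```powershell
# Check the provisioning state of the Network Watcher Agent extension on a VM.
Get-AzVMExtension `
    -ResourceGroupName "myResourceGroup" `
    -VMName "myVM" `
    -Name "networkWatcherAgent"
```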
Extension execution output is logged to files found in the following directory:
C:\WindowsAzure\Logs\Plugins\Microsoft.Azure.NetworkWatcher.NetworkWatcherAgentWindows\ ```
-### Support
+## Related content
-If you need more help at any point in this article, you can refer to the [Network Watcher documentation](../../network-watcher/index.yml), or contact the Azure experts on the [MSDN Azure and Stack Overflow forums](https://azure.microsoft.com/support/forums/). Alternatively, you can file an Azure support incident. Go to the [Azure support site](https://azure.microsoft.com/support/options/) and select Get support. For information about using Azure Support, read the [Microsoft Azure support FAQ](https://azure.microsoft.com/support/faq/).
+- [Network Watcher documentation](../../network-watcher/index.yml).
+- [Microsoft Q&A - Network Watcher](/answers/topics/azure-network-watcher.html).